
Important Actions After Cloud Migration


Cloud migration follows the same principles as any other complex task. With methodical planning and appropriate preparation, it can be a smooth and painless experience. While “divide and conquer” can have negative connotations, it’s a popular technique in computer science and applies equally well to program management.

Assuming a migration was planned as such and all workloads are in the cloud now, does that mean the job is done? Absolutely not! There are several tasks that should immediately follow, some of which should become iterative processes going forward.

Closure

Immediately after migration, it could be all high-fives on a job well done. However, it’s crucial to clean up right away: shut down and remove any redundant systems, sever superfluous network connectivity, and switch the relevant applications on the cloud platform to production status.

These actions are extremely important from an operations standpoint. The cost of running those extra resources is one thing, but those systems are easily forgotten over time. While they exist, they only cause operational confusion and security issues. Action should be taken to do a controlled but prioritized decommissioning of such systems and connectivity.

Evaluation

Once closure has happened and the platform is stable, evaluate whether the migration was successful and achieved everything it set out to do.

The first point of reference is the business case formed during the project initiation phase, where all stakeholders agreed it made sense to migrate to a cloud platform. The other is the list of KPIs (key performance indicators) defined as part of the audit carried out just before the migration.

The latter is tangible proof of what was gained from the whole exercise. When defining the metrics, be careful that the measurements are “like for like” and that the target objects will still exist in their current form post-migration, so there’s no confusion. Evaluating and documenting before and after the migration is important, as it keeps the team honest about their goals and any decisions made. At the same time, it also makes success undeniable.
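As a minimal sketch of that like-for-like comparison, the snippet below computes the percentage change for KPIs measured in both snapshots and silently drops any metric missing from either side. The KPI names and values here are entirely hypothetical.

```python
def compare_kpis(before: dict, after: dict) -> dict:
    """Return the percentage change for every KPI present in both snapshots.

    KPIs missing from either side are excluded -- a reminder that
    measurements must be "like for like" and the measured objects must
    still exist in their current form post-migration.
    """
    common = before.keys() & after.keys()
    return {
        kpi: round((after[kpi] - before[kpi]) / before[kpi] * 100, 1)
        for kpi in common
    }

# Hypothetical KPI snapshots taken before and after the migration.
before = {"p95_latency_ms": 420.0, "monthly_infra_cost": 18000.0, "deploy_time_min": 45.0}
after = {"p95_latency_ms": 310.0, "monthly_infra_cost": 15300.0, "deploy_time_min": 12.0}

print(compare_kpis(before, after))
```

Keeping the comparison this mechanical makes the before/after documentation hard to argue with, which is exactly what keeps the team honest.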

Costs

Sizing of machines is one area where you should err on the side of caution and oversize. That, combined with the fact that in the early days applications tend to live in both environments, increases running costs in the short term. Once the dust has settled, it is time to focus on cost optimization. Most cloud platforms offer discount pricing for infrastructure and native tools to determine where such savings can be made. You can also try a free tool like this Azure Cost Calculator, which can provide a consolidated view of all your Azure account expenses to help you identify potential cost-saving opportunities.

This is a quick and easy win. Furthermore, it’s easy to identify machines that will be required permanently and are good candidates for discount pricing by committing a certain amount to the vendor. In some cases, further savings might be possible by standardizing on certain types or families of instances. A review should be done every year to determine how many resources could benefit from such discounts.
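The annual review could be sketched roughly as below: flag instances that run steadily enough to justify a commitment and estimate the saving. The fleet data, utilization threshold, and discount rate are all hypothetical assumptions; real figures would come from your platform’s billing exports and current reservation pricing.

```python
HOURS_PER_MONTH = 730
COMMIT_THRESHOLD = 0.90   # commit only if the machine runs >= 90% of the time (assumed)
DISCOUNT_RATE = 0.40      # assumed discount for a one-year commitment

def review(instances):
    """Return (name, projected annual saving) for good commitment candidates."""
    candidates = []
    for name, hourly_cost, avg_monthly_hours in instances:
        utilization = avg_monthly_hours / HOURS_PER_MONTH
        if utilization >= COMMIT_THRESHOLD:
            annual_on_demand = hourly_cost * avg_monthly_hours * 12
            candidates.append((name, round(annual_on_demand * DISCOUNT_RATE, 2)))
    return candidates

# Hypothetical fleet: (name, hourly on-demand cost, average hours run per month)
fleet = [
    ("web-01", 0.096, 730),    # always on: a commitment candidate
    ("batch-07", 0.192, 180),  # bursty: better left on-demand
]
print(review(fleet))
```

The point of the sketch is the shape of the decision, not the numbers: steady utilization justifies a commitment, while bursty workloads stay on-demand.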

Refactoring

Speaking of cost optimization, another way is to refactor applications so that they can benefit from on-demand resource provisioning, such as “function as a service” architectures, or even stateless applications where infrastructure can be deployed for the duration of the job and destroyed thereafter.
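The “deploy, run, destroy” pattern can be illustrated with a short sketch. The provisioning and teardown here are stand-ins for whatever your platform’s API or infrastructure-as-code tooling provides; the essential property is that the job is stateless, so the infrastructure only needs to exist while it runs.

```python
from contextlib import contextmanager

@contextmanager
def ephemeral_worker(instance_type: str):
    """Stand-in for real provisioning/teardown calls to a cloud API."""
    worker = {"type": instance_type, "running": True}  # pretend provisioning
    try:
        yield worker
    finally:
        worker["running"] = False  # pretend teardown: always runs, even on failure

def run_job(data):
    """Stateless job: output depends only on the input it is handed."""
    with ephemeral_worker("spot-small") as worker:
        return sorted(data)

print(run_job([3, 1, 2]))
```

Because the teardown sits in a `finally` block, the infrastructure is destroyed even if the job fails, which is what makes this pattern safe to run on cheap, short-lived capacity.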

Major cost efficiencies can be found using cloud-native technologies and methodologies. The best part is that due to the nature of the public cloud, such refactoring can be done in isolation and tested at full-scale in parallel.

Small and manageable improvements can be made over time to gain those efficiencies and that work should become part of an ongoing process to improve applications.

Security

In the coexistence phase during migration, security policies have to allow traffic and system access between applications in both environments. Those policies need to be reviewed immediately after the “legacy” side of an application is decommissioned.

There could be a tendency to wait until all legacy applications are decommissioned, but by doing so, you’d be introducing a security risk for the duration of the migration. While it can be a tedious process, any security breach will end up consuming even more time, so it’s best dealt with as soon as possible. The security review shouldn’t be limited to just the legacy workloads. Security for cloud platforms is very different from traditional platforms, and migration provides an opportunity to review the capabilities available and take advantage where possible.
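One mechanical part of that review can be sketched as follows: flag any firewall rule whose source range still overlaps the retired legacy networks. The rule shape and the CIDR ranges are hypothetical; a real version would pull rules from your platform’s network security API.

```python
import ipaddress

# Assumed address range of the decommissioned legacy environment.
LEGACY_NETWORKS = [ipaddress.ip_network("10.20.0.0/16")]

def stale_rules(rules):
    """Return names of rules whose source still overlaps a legacy network."""
    stale = []
    for rule in rules:
        source = ipaddress.ip_network(rule["source"])
        if any(source.overlaps(net) for net in LEGACY_NETWORKS):
            stale.append(rule["name"])
    return stale

# Hypothetical rule set left over from the coexistence phase.
rules = [
    {"name": "allow-legacy-db", "source": "10.20.5.0/24"},
    {"name": "allow-office-vpn", "source": "192.168.10.0/24"},
]
print(stale_rules(rules))
```

Running something like this on a schedule turns a tedious one-off audit into a repeatable check.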

Center of Excellence

The migration team goes through many highs and lows during this process. That not only improves bonding but also develops a lot of skill and knowledge.

That knowledge is priceless, and it would be a shame if that team disbanded completely and went back to their “day job.” Technical members of the team will likely continue with the new platform, but other members will also have precious knowledge of the whole journey and shouldn’t be ignored. It makes sense to preserve that knowledge and experience by keeping the team together as a “Center of Excellence” for the long term. The team should reserve time and meet on a regular basis for strategic discussions and decisions going forward.

Conclusion

Migration to the cloud is no mean feat, but once achieved, it opens up so many possibilities to morph the infrastructure and tools into something in line with modern-day architecture.

This list is by no means exhaustive but does give a good start. As cloud technologies and the skills to use them develop, the sky is the limit—no pun intended!


Ather is a solutions architect and works for Rackspace. His focus is on all things related to cloud, technology, storage, virtualization, and whatever comes in between. Having been in the industry for over 20 years, he feels ancient. If you feel so inclined, he can bore you with stories on how he used to manually park heads on a hard drive or bind protocols to network cards. Seriously though, he has designed, deployed, and managed many enterprise environments involving virtualization, storage, directory, and mail services. Ather started blogging over nine years ago so that he could share some of his knowledge with the community. He has been a vExpert for six years running and is also vExpert NSX/Cloud. He has been an official VMware blogger at VMworld EU and US for a few years too. He is one of the founding members of and contributors to Open HomeLab Wiki and co-hosts @OpenTechCast as well. Ather’s natural habitat is tech events like VMworld, Cloud (and other) Field Days, VMUGs, etc., and he thrives on meeting like-minded people and having a good old chat about technology. He’s friendly and not dangerous at all, so please do interact with him whenever you spot him in such surroundings.