
Ask the Experts: Validate, don’t just migrate


More organizations are moving to hybrid cloud because it’s a better fit for the business. Some have repatriated a portion of their company’s data from a public cloud, while others have been moving from pure data center architectures. Using more than one public cloud has become increasingly common, and many organizations are enabling cloud on-premises with hybrid cloud, too.

Migrating legacy applications to the cloud proves challenging for many organizations, simply because they’re not prepared for it. For example, do they understand the application’s dependencies? Can they move the legacy application in the first place? Have they adequately addressed the data and cybersecurity issues? Does the application really need to be migrated? How does one know the timing is right?

We asked three IT leaders for their opinions on these questions and more. Their quick takes? 

  • Bill Hineline, field CTO at observability platform provider Chronosphere and former director of observability and automation at United Airlines, said “why” is the crucial question for legacy application migration. Not surprisingly, he also said you need to be committed to refactoring, the process of modernizing and cleaning up old code.

  • Eric Helmer, executive vice president and global CTO at enterprise software support and services provider Rimini Street, a font of useful pointers, also emphasized the “why” question from a different perspective, urging IT leaders to think more critically about migration, as it may not be necessary or even possible.

  • David Vidoni, CIO at enterprise transformation platform provider Pega, stressed the risks of false assumptions and the importance of understanding operational metrics and cost drivers.


Below are their detailed responses (lightly edited for clarity).


Bill Hineline, Chronosphere: Refactoring is key

“You’re going to spend a lot of money on cloud, so you must understand that your critical application can’t just be lifted and shifted. You might be able to containerize it and move it up quickly for some early wins, but you’re not going to get the advantages and the performance that you want in the cloud if you don’t refactor your code.

“There’s this whole commitment, which is why I start with ‘why?’ That means getting back to basics — what’s the health of the application today, and if it’s not healthy, why isn’t it healthy? Then, once it’s healthy, you need to commit to refactoring and what that does to your architecture. Otherwise, you may have an application that doesn’t scale the way you want.


“Refactoring code is certainly a big undertaking. And if you start before you have good hygiene and governance, then you’re just setting yourself up for failure. Similarly, if you haven’t tagged your resources properly, you have no way to attribute spending to the project, and that becomes a cost problem.”
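Hineline’s tagging point lends itself to a concrete illustration: when every resource carries a project tag, attributing spend is a simple group-by, and untagged resources surface as an explicit line item instead of vanishing into the bill. The records, tag names and dollar figures below are hypothetical, not drawn from the interview:

```python
from collections import defaultdict

def attribute_spend(billing_records):
    """Group cloud spend by the 'project' tag; untagged spend is called out explicitly."""
    totals = defaultdict(float)
    for record in billing_records:
        project = record.get("tags", {}).get("project", "UNTAGGED")
        totals[project] += record["cost_usd"]
    return dict(totals)

# Hypothetical billing records, not real figures.
records = [
    {"resource": "vm-1", "cost_usd": 120.0, "tags": {"project": "loyalty-migration"}},
    {"resource": "db-1", "cost_usd": 300.0, "tags": {"project": "loyalty-migration"}},
    {"resource": "vm-2", "cost_usd": 80.0, "tags": {}},  # untagged: a cost problem
]

print(attribute_spend(records))
# {'loyalty-migration': 420.0, 'UNTAGGED': 80.0}
```

In practice this grouping is done by a cloud provider’s cost-allocation tooling, but only once tagging is enforced, which is Hineline’s point about starting with hygiene and governance.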

Lessons from United Airlines 

“I ran observability for United Airlines, which did a lot of cloud migrations. The MileagePlus [loyalty program], for example, lived on a mainframe, and we migrated it to the cloud. The code refactoring and all the work to get to that point took months to prepare.

“We cut over the entire system in one evening because we had good insights from observability, such as how things were performing and how that compared to the mainframe. We migrated so that we could scale better and do more agile development.

“I’m a big believer in keeping things agnostic. If you tie yourself to a single cloud provider or a single tool, that may be OK for now, but you’re going to make that inevitable move harder. If you don’t tie yourself to proprietary capabilities, you keep your options open.”


Eric Helmer, Rimini Street: Evaluate necessity first 

“Why would you want to do it in the first place? A lot of times it’s because you’re either getting out of a data center or the hardware is getting old, [but often], it’s unnecessary and can create security, integration or latency problems.


“If you do conclude [that migration is necessary], then you really must make sure the application is architected right. A lot of times, these workloads weren’t designed for the cloud world, so you must adapt them and deliberately architect them for a cloud workload.

“[To prepare a mission-critical application], it’s key to look at the appropriateness, operating system [and] licenses. Sometimes, there are licenses tied to CPUs or other things that might introduce issues for you as well, so regression, latency and performance testing will be mandatory.

“You must test to make sure you can bring the application over, and you should also have the installation binaries [the original installation and setup files]. Some people think you can just back up the application and restore it over there. Sometimes it works, and sometimes it doesn’t, so you’re going to have to do a fresh install. If you do have the application binaries, do you still have access? Can that be executed in a public cloud model? Step one is [identifying] an appropriate home for the application.”

Weigh risks and costs carefully 

“[IT leaders must also understand] the risks and costs associated with taking things into the cloud, and the pros and cons of that versus leaving it alone. Because old stuff, whether it was [procured] yesterday or five years ago, is inherently going to be vulnerable from a cybersecurity standpoint. Risk No. 2 is interoperability and compatibility, because old stuff doesn’t talk to new stuff. And the third one is supportability, because it’s hard to find old people to support old systems.

“But if I’m able to make the application completely secure, interoperable and compatible with anything, I can completely support it, and we agree that tech debt is turned back into tech, then the situation isn’t on fire, and we don’t have to make knee-jerk reactions to cloud. If we can’t address those [cyber, interoperability, compatibility and supportability] risks, or it’s time to move to a SaaS model or lift and shift to infrastructure as a service, then it’s time to make those decisions.

“My No. 1 factor when talking about ROI is the year or month [in which we achieve] payback. That really helps dictate decisions, at least financially.

“An example is a midsize client [with] multiple warehouses [that needed to increase efficiency]. The problem was that warehouse managers had to log into the ERP system to look up some stuff, log into the supplier system to order the right stuff, and log into the inventory system to do some other things. The traditional line of thinking was to consolidate it all into a single system. That’s an expensive proposition, but that’s what we had to do. 

“What we also did was put an interface on top of these three systems that would log into the systems on behalf of these inventory managers [and provide] complete intelligent workspaces with chatbots, so you can ask how many widgets you have in warehouse five as opposed to having somebody log into three different systems and reconcile things across a spreadsheet.”


David Vidoni, Pega: Beware of false assumptions

“[Thinking about legacy modernization is] a great opportunity to revisit all the things that you’re doing and do a sanity check on whether those are still the right things to be doing.

“It starts with understanding the performance profile of the systems and connections you have: where things are running, how they’re running, and, if there’s an issue, how you can transfer the workload somewhere else that’s available. If you’re running an application in a data center, it’s up all the time, so you’re not paying for processing by the minute or hour. In the cloud, you really need to understand the performance profile of the applications you’re running and do proper sizing, because if you overprovision resources, or have too many running, you might get some unpleasant surprises when your bill comes in next month.
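Vidoni’s sizing warning reduces to simple arithmetic: an always-on cloud instance accrues roughly 730 hours a month, so every overprovisioned instance shows up directly in the bill. A minimal sketch with a hypothetical hourly rate and instance counts:

```python
def monthly_cost(instances, hourly_rate_usd, hours_per_month=730):
    """Pay-per-hour cloud pricing: an always-on instance accrues ~730 hours a month."""
    return instances * hourly_rate_usd * hours_per_month

# Hypothetical rate: the workload needs 4 instances, but 10 were provisioned.
needed = monthly_cost(4, 0.20)        # 584.0
provisioned = monthly_cost(10, 0.20)  # 1460.0
print(f"wasted per month: ${provisioned - needed:.2f}")  # wasted per month: $876.00
```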

“Sometimes, people have the false sense that if it’s in cloud, then I’m all set. Everything is available, and everything is highly redundant. And it is, if you design [the application] with those things in mind. 

“Redundancy isn’t free. If the organization can tolerate a brief interruption, you can potentially go with a much lower cost option to support failover. If it’s something where you can afford near-zero downtime, that comes with an additional cost because you’re having to run both things at the same time. The two biggest mistakes that I’ve seen are not understanding the cost of what it takes to run your systems today and not having adequate controls to monitor and manage that.
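The redundancy trade-off Vidoni describes can likewise be framed as arithmetic: near-zero downtime means running a second full-size environment, while tolerating a brief interruption allows a cheaper standby. The multipliers below are illustrative assumptions, not figures from the interview:

```python
def redundancy_cost(primary_monthly_usd, strategy):
    """Rough monthly cost of common failover strategies (illustrative multipliers only)."""
    multipliers = {
        "active-active": 2.0,    # second full-size environment for near-zero downtime
        "warm-standby": 1.3,     # smaller standby, scaled up on failover
        "backup-restore": 1.05,  # storage only; longest recovery time
    }
    return primary_monthly_usd * multipliers[strategy]

for strategy in ("active-active", "warm-standby", "backup-restore"):
    print(strategy, round(redundancy_cost(10_000, strategy)))
```

The choice of multiplier is exactly the business question he raises: how long an interruption can the organization tolerate?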

“On the protection front, it really comes down to making sure that permissive settings are disabled, so you’re not left too open to the internet while getting things working during the move to the cloud. You need secure communications between your systems and the places you’re calling them from. You also need good monitoring in place, so that if you see any anomalies, the right teams can be alerted and take appropriate action.

“The main cost drivers are the workloads themselves. What are you running? What kind of hardware resource are you running them on? What features do you have available? There are different tiers of storage. Some cost more than others.”

Lessons from Pega’s migrations 

“Back in 2020, we moved our ERP implementation from a colocated data center to Google Cloud, and we were able to do that start to finish in 13 weeks. That included all the planning and the migration of 40-plus environments spanning development, testing and production.

“[First, we did] all the initial testing — multiple mock runs and making sure the environments were secured — to right-size the environments, so we weren’t spending any more than we needed to, month over month. Once we understood the performance profile, we locked in some further savings by reserving some capacity. 
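The “lock in savings by reserving capacity” step is the same kind of arithmetic: once the steady-state load is known, committed capacity trades flexibility for a discount. The 35% discount below is a hypothetical placeholder, not Google Cloud’s actual pricing:

```python
def reserved_savings(on_demand_monthly_usd, discount=0.35):
    """Monthly savings from committing to reserved capacity at a hypothetical discount."""
    return on_demand_monthly_usd * discount

# Hypothetical steady-state spend of $20,000/month on demand:
print(round(reserved_savings(20_000), 2))  # 7000.0
```

The prerequisite Pega describes matters: reserving before you understand the performance profile risks paying for committed capacity you never use.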

“We also moved from our data centers to AWS, and that [allowed] us to take advantage of the elastic capacity. We were also able to take advantage of some of the AI capabilities that were out in the cloud. It was just much easier to connect to those as well as other services, and we were able to do that very rapidly without having to procure software licenses or go through the traditional routes. My teams now have a lot of agility to turn these things on and add functionality to applications rapidly.

“If this is part of a broader play to understand what the rest of your migrations look like, [you should understand that] some clouds are better at doing some things compared to others. If you have workloads in AWS, Google and Azure, they have [data] egress charges. So, you need to understand where your systems are going and the systems [with which] they will be communicating. Whether they are all together in the same cloud, or across separate clouds, really matters.”
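Vidoni’s egress point can be quantified the same way: traffic that stays inside one cloud is often free or cheap, while cross-cloud transfers bill per gigabyte. The rate below is a placeholder for illustration, not any provider’s published price:

```python
def egress_cost_usd(gb_per_month, same_cloud):
    """Placeholder egress pricing: free within one cloud, per-GB across clouds."""
    cross_cloud_rate = 0.09  # hypothetical $/GB, not a real provider's rate
    return 0.0 if same_cloud else gb_per_month * cross_cloud_rate

# 5 TB/month flowing between two chatty systems:
print(egress_cost_usd(5000, same_cloud=True))             # 0.0
print(round(egress_cost_usd(5000, same_cloud=False), 2))  # 450.0
```

This is why co-locating systems that communicate heavily, or at least mapping their data flows before migrating, matters so much.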


