Is it possible to save money and run on a public cloud?

Based on the hype, you would think that enterprises are scurrying off cloud platforms like rats from a sinking ship. The reality is much more nuanced. According to Andover Intel research, only about 9% of enterprises have moved applications out of the cloud. They also found that less than 3% of enterprises see any reason for cloud repatriation other than cost, although more than half expressed some disappointment with higher-than-anticipated cloud costs.

Although cost is still the primary reason enterprises move applications off the cloud, they rarely move all of their applications and data sets, and they usually do so only after watching those workloads hemorrhage cash.

A self-inflicted wound

Ten years ago, everything was “cloud-first,” the economics were ignored, and finops was nonexistent. I always tell my clients that this was (mostly) not the fault of public cloud providers. In the early days of cloud computing, big providers promoted the migration of applications and data to the cloud without modification or modernization. The advice was to fix it when it got there, not before.

Guess what? Workloads were never fixed or modernized. These lift-and-shift applications and data consumed about three times the resources enterprises thought they would. This led to a disenchantment with public cloud providers, even though enterprises also bore some responsibility.

As we move into 2025, it’s no surprise that enterprises face real challenges trying to manage cloud costs. There are no perfect options. You can repatriate the applications and data back to on-premises systems, hoping the cheaper hardware will save you some dollars. Or you can leave them in place, do nothing, and hope the bosses will overlook the steady cash drain. There is another option, even though it is rarely considered: Optimize the existing applications and data sets, which can provide financial relief.  

Splitting the baby

Enterprises can optimize cloud usage and avoid cloud repatriation with careful planning and by exploring issues beyond cost. Warning: This path does not always work and can get you into deeper trouble. Still, it’s often the best approach for many workloads burning cash on public cloud providers.

Most businesses need a better strategy than cloud repatriation for problematic applications. These applications hid their inefficiencies while running on premises because we never saw a bill for resource utilization, including storage, network, compute, etc. Often, these applications did not undergo any architecture review when they were built. “It works, doesn’t it?” was the metric that determined success. I would call something that works but costs five times more to run in the cloud than on premises a failure, but most enterprises didn’t see it that way.

The compromise approach is to optimize in place. This means doing the bare minimum to get the applications and data sets into a state that minimizes resource consumption and cost when running on a public cloud provider.

Rethinking costs

High cloud costs usually stem from the wrong cloud services or tools, flawed application load estimates, and developers who designed applications without understanding where the cloud saves money. You can see this in the purposeful use of microservices as a base architecture. Microservices are a good choice for some applications but can burn about 70% more cloud resources. Changing the architecture to a simpler approach (such as monolithic) can be more cost-effective.

Tools often contribute to cost problems as well. In many cases, those charged with redeploying applications on public cloud providers don’t put much thought into the tools they use. One tool can cost three to five times more to use than another. Simply swapping out development, testing, and operations tools for services that provide better cost-effectiveness can reduce cash burn by 50% to 70%.

The key to winning this war is planning. You’ll need good architecture and engineering talent to find the right path. This is probably the biggest reason we haven’t gone down this road as often as we should. Enterprises can’t find the people needed to make these calls; it’s hard to find that level of skill.

Cloud providers can also be a source of help. Many have begun to use the “O word” (optimization) and understand that to keep their customers happy, they need to provide some optimization guidance. Although I would not call this a massive movement, I see it emerging as a focused approach to delivering better cost efficiency.

Steps you can take

To effectively manage application costs on public cloud providers, enterprises can follow these guidelines:

Select proper cloud services and tools. Carefully choose the cloud services and tools that match your application’s needs. Avoid advanced or costly features that may be unnecessary.

Use accurate load estimations. Precise load estimations help you avoid paying for scalability you don’t need. Dig through historical data and growth projections to ensure you’re not overprovisioning or underutilizing resources.

Be cost-aware when designing applications. Develop applications with a clear understanding of where and how the cloud delivers cost benefits. Align application architecture with cloud cost dynamics.

Understand utilization patterns. Determine the usage patterns of your applications. For example, if server utilization holds steady at around 70%, consider whether maintaining on-premises resources would be more economical. A back-of-the-envelope check is sketched below.
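To make that last check concrete, here is a minimal Python sketch of the kind of comparison I mean. Everything in it is assumed for illustration: the utilization samples, the hourly instance rate, the amortized on-premises figure, and the “steadiness” threshold are all placeholders, not real pricing or a standard metric. Swap in your own billing exports and capacity costs before drawing any conclusions.

```python
# Rough sketch: flag always-on, steady workloads where an on-premises
# estimate undercuts the cloud bill. All numbers below are hypothetical.

from statistics import mean, pstdev

HOURS_PER_MONTH = 730  # common approximation for an always-on instance


def monthly_cloud_cost(instance_rate_per_hour: float, instances: int = 1) -> float:
    """Estimate monthly cloud spend for an always-on workload."""
    return instance_rate_per_hour * instances * HOURS_PER_MONTH


def is_steady(hourly_utilization: list[float], cv_threshold: float = 0.15) -> bool:
    """Treat a workload as 'steady' if utilization barely varies.
    The coefficient-of-variation cutoff is an assumption, not a standard."""
    avg = mean(hourly_utilization)
    return avg > 0 and (pstdev(hourly_utilization) / avg) < cv_threshold


# Hypothetical sample: a week of hourly CPU utilization hovering near 70%.
samples = [0.68, 0.71, 0.70, 0.69, 0.72, 0.70, 0.71] * 24

cloud_monthly = monthly_cloud_cost(instance_rate_per_hour=0.34)  # assumed rate
onprem_monthly = 180.0  # assumed amortized hardware + power + ops per month

if is_steady(samples) and onprem_monthly < cloud_monthly:
    print(f"Steady ~{mean(samples):.0%} utilization: on-prem (${onprem_monthly:.0f}/mo) "
          f"may beat cloud (${cloud_monthly:.0f}/mo) for this workload.")
else:
    print("Variable or cheaper-in-cloud workload: elasticity likely favors staying put.")
```

This is a filter for which workloads deserve a closer look, not a full total-cost-of-ownership model; it ignores migration effort, staffing, and data-transfer costs entirely.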

Will this be easy? Nope. It’s going to take hard work—a lot of it. However, there’s (usually) gold at the end of the cloud optimization rainbow. I suggest you at least take a look.
