When Cloud Becomes “Too Expensive”: How Lift‑and‑Shift Blows Up the Budget
On paper, your cloud migration looked responsible: largely lift‑and‑shift, minimal changes to applications, clear dates to get off ageing infrastructure. Then the invoices started coming in, and within a few months it felt like you had burned through a year’s worth of budget without a clear story for the board. This is how “cloud is too expensive” is born—not from one bad decision, but from a series of reasonable shortcuts that add up to a cost crisis.
Executive summary
What’s happening: Many organizations under pressure to meet data‑center exit dates choose a fast lift‑and‑shift path and delay cloud cost governance until “after we’re migrated.”
Why it matters: Within the first 3–12 months, spend is often 20–30% higher than expected, confidence in the original business case erodes, and CIOs and CFOs are forced into reactive cost‑cutting conversations.
What you’ll get in this post: A clear picture of how the cloud cost crisis unfolds, a “forged in fire” story drawn from real experience, and a way to recognize early warning signs before they become career‑risking headlines.
How lift‑and‑shift quietly sets up a crisis
When boards say, “we need to be more digital” and “we can’t invest more in data centers,” the safest‑sounding plan is often a like‑for‑like move: same workloads, new platform, minimal change. Lift‑and‑shift feels low risk because you are not redesigning core systems while also trying to meet tight migration deadlines.
Yet financially, lift‑and‑shift works against you. Most on‑prem systems have been sized over years for peak load under a fixed‑capacity model, where oversizing is a sunk hardware cost that is soon forgotten; cloud bills you every hour for what you provision and what you forget to turn off, not just what you actually use.
When you move those same oversized VMs, databases, and storage volumes into cloud without rethinking their shape, you effectively import all your historic waste into a pay‑as‑you‑go world. At the same time, the team is focused on stability and timelines, not on reservations, right‑sizing, or clear tagging, so the early invoices look high and are very hard to explain.
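To make that concrete, here is a toy calculation of what one lifted‑as‑is, oversized VM costs compared with a right‑sized equivalent. All numbers are hypothetical, not a real rate card:

```python
# Toy illustration: how on-prem oversizing becomes a recurring cloud
# charge when a VM is moved as-is instead of being right-sized.
# All prices and sizings below are hypothetical.

HOURS_PER_MONTH = 730

# Assumed sizing: 16 vCPU provisioned for a workload that peaks at
# ~4 vCPU. On-prem, the extra capacity was a one-time hardware cost;
# in cloud, it is billed every hour.
lifted_vcpus = 16
needed_vcpus = 4
price_per_vcpu_hour = 0.05  # hypothetical on-demand rate

lifted_monthly = lifted_vcpus * price_per_vcpu_hour * HOURS_PER_MONTH
rightsized_monthly = needed_vcpus * price_per_vcpu_hour * HOURS_PER_MONTH

print(f"Lifted as-is:   ${lifted_monthly:,.0f}/month")        # $584/month
print(f"Right-sized:    ${rightsized_monthly:,.0f}/month")    # $146/month
print(f"Imported waste: ${lifted_monthly - rightsized_monthly:,.0f}/month per VM")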
This is not a moral failing; it is a predictable side‑effect of how most migration programs have been scoped and governed. The business case slide deck assumed some level of future optimization, but no one owned the work to make that optimization real during the rush to exit the data center.
“Forged in fire”: from approved business case to budget panic
In one large organization, the cloud decision started the way many do: an ageing data center, rising hardware and support costs, and growing concern that on‑prem capacity would hold back new digital services. The CIO and CFO agreed that moving to cloud was the right long‑term play, and a business case was approved with reasonable assumptions about efficiency and future savings.
To hit exit dates, the migration program chose a mostly lift‑and‑shift strategy. Applications would be moved with minimal change to reduce delivery risk; a small architecture group would “circle back” on optimization once everything was stable in cloud.
For the first few months, the narrative was positive. Critical systems were running, project milestones were being met, and the IT organization could tell the board that the move to cloud was well underway.
Then the finance team started to notice the run‑rate. Month‑on‑month cloud costs were significantly higher than expected, and early trend lines suggested the annual forecast would be overshot by a wide margin if nothing changed.
When the CIO and CFO dug in, three patterns emerged:
· Many workloads had simply been moved as‑is, with the same or larger VM sizes and storage tiers “to be safe,” rather than being right‑sized for cloud.
· Tagging was inconsistent, so 20–30% of spend could not easily be attributed to an application or business owner during early reviews (the sketch after this list shows one way to measure that gap).
· There was no agreed playbook for reservations, savings plans, or budget thresholds, so cost optimizations were ad‑hoc and reactive.
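The tagging gap, at least, is straightforward to measure. The sketch below is a minimal example against a generic billing export; the file name and column names (`cost`, `tag_application`, `service`) are assumptions and will differ by provider (an AWS Cost and Usage Report, for instance, uses its own column naming):

```python
import pandas as pd

# One month of billing data; column names are assumptions that will
# differ per provider's cost export.
df = pd.read_csv("billing_export.csv")

total = df["cost"].sum()

# Spend with no application tag is the money nobody can attribute.
untagged = df[df["tag_application"].isna() | (df["tag_application"] == "")]
untagged_cost = untagged["cost"].sum()

print(f"Total spend:  ${total:,.0f}")
print(f"Unattributed: ${untagged_cost:,.0f} ({untagged_cost / total:.0%})")

# The services driving the unattributed bucket are a natural work
# queue for the tagging clean-up.
print(untagged.groupby("service")["cost"].sum()
              .sort_values(ascending=False).head(10))
```

Running a check like this every month, and reporting the percentage, turns “tagging is inconsistent” from an anecdote into a trend the steering committee can own.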
By the time these issues were visible, the story inside the organization had already shifted. What began as “cloud will help us modernize and manage costs better” had become “cloud is more expensive than on‑prem and we don’t have a clear handle on it.”
The work that followed—standing up proper showback, improving tagging, implementing cloud cost management tools, and systematically lowering the cost run‑rate—was successful, but it had to be done under the pressure of a perceived crisis. In the next post, that recovery journey becomes the focus, but it is important to see clearly how avoidable the crisis phase itself was. In hindsight, of course. :)
Early warning signs you might be heading into the same trap
From the outside, cloud cost crises look sudden, but inside they leave clues months in advance. If you know what to watch for, you can act before the narrative hardens into “cloud is too expensive here.”
Common early warning signs include:
· Steering committees where schedule and cutover dates dominate the agenda, while cost, tagging, and ownership are deferred to “phase two.”
· Monthly cloud cost reviews that focus on explaining variance in total spend, rather than on unit costs by product, environment, or capability (see the sketch after this list for what a unit‑cost view can look like).
· A growing share of “unallocated” or “other” costs on internal dashboards, which everyone knows is real money but no one can confidently tie to a team or value stream.
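The difference between a total‑spend review and a unit‑cost review is easy to show. A minimal sketch, assuming the same kind of tagged billing export as above (`tag_product`, `tag_environment`, `usage_date`, and `cost` are assumed column names):

```python
import pandas as pd

df = pd.read_csv("billing_export.csv", parse_dates=["usage_date"])
df["month"] = df["usage_date"].dt.to_period("M")

# Monthly cost per product and environment, instead of one total.
unit_costs = df.pivot_table(
    index=["tag_product", "tag_environment"],
    columns="month",
    values="cost",
    aggfunc="sum",
)
print(unit_costs.round(0))

# Month-over-month change per product surfaces outliers long before
# they are visible in the total bill.
print(unit_costs.pct_change(axis="columns")
                .iloc[:, -1].sort_values(ascending=False).head(10))
```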
If you recognize these patterns, you are not alone; they show up in many organizations, across industries and sizes. The point is not to assign blame, but to use them as prompts to rebalance your migration effort before the invoices force your hand.
What leaders can do when costs already look bad
By the time the CFO is asking, “why is cloud so expensive?” it can feel like it is too late to change the story. In reality, this is often the moment where a clear, structured response can rebuild confidence.
The organizations that recover best tend to do three things quickly:
· Name the problem clearly: Instead of vague talk about “usage growth,” they acknowledge that the migration prioritized speed over cost governance and that a specific FinOps effort is now required.
· Establish basic visibility: They invest a focused burst of effort into tagging, showback, and a single view of costs so that conversations move from “too expensive” to “this is what we are spending on which products.”
· Create a short, sharp optimization plan: They identify 3–5 concrete levers with measurable impact—such as right‑sizing, reservation strategy, or storage tiering—and track value realized month by month (the sketch after this list shows one way to surface right‑sizing candidates).
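As a taste of the right‑sizing lever, the sketch below flags EC2 instances whose peak CPU stayed low for two weeks, using the standard boto3 EC2 and CloudWatch APIs. The 20% threshold and 14‑day lookback are assumptions to tune, and CPU alone is deliberately simplistic; memory, I/O, and burst patterns matter too:

```python
import boto3
from datetime import datetime, timedelta, timezone

ec2 = boto3.client("ec2")
cloudwatch = boto3.client("cloudwatch")

end = datetime.now(timezone.utc)
start = end - timedelta(days=14)

CPU_THRESHOLD = 20.0  # percent; an assumption to tune per workload

# Walk all running instances and flag those whose *peak* CPU stayed
# under the threshold for the whole lookback window.
paginator = ec2.get_paginator("describe_instances")
pages = paginator.paginate(
    Filters=[{"Name": "instance-state-name", "Values": ["running"]}]
)
for page in pages:
    for reservation in page["Reservations"]:
        for instance in reservation["Instances"]:
            stats = cloudwatch.get_metric_statistics(
                Namespace="AWS/EC2",
                MetricName="CPUUtilization",
                Dimensions=[{"Name": "InstanceId",
                             "Value": instance["InstanceId"]}],
                StartTime=start,
                EndTime=end,
                Period=3600,  # hourly datapoints
                Statistics=["Maximum"],
            )
            points = stats["Datapoints"]
            if points:
                peak = max(p["Maximum"] for p in points)
                if peak < CPU_THRESHOLD:
                    print(f"{instance['InstanceId']} "
                          f"({instance['InstanceType']}): peak CPU "
                          f"{peak:.1f}% over 14 days; right-sizing candidate")
```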
This is where a FinOps mindset shifts from theory to practice. In Part 3, the focus will be on the playbook: how to move from reactive crisis management to a repeatable way of keeping cloud costs close to on‑prem levels while still growing usage.
Three practical takeaways for today
If you are mid‑migration, ask your program team to show where cost ownership, tagging standards, and a basic reservation strategy are written into the plan—not just implied (a machine‑checkable tagging standard, like the sketch after these takeaways, is a good test of “written in”).
If you are already seeing higher‑than‑expected bills, shift at least one steering committee discussion from timelines to a frank review of cost visibility and governance gaps.
Treat “cloud is too expensive” as a signal that your operating model has not caught up yet, not as a verdict on cloud itself; this framing makes it easier to get alignment on investing in FinOps capabilities.
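On the first takeaway: “written into the plan” is most convincing when the tagging standard is concrete enough to check by machine. A minimal sketch of what that can look like; the required keys and allowed values are illustrative, not a recommendation:

```python
# The required keys and allowed values below are illustrative; agree
# on your own set and publish it alongside the migration plan.
REQUIRED_TAGS = {"application", "environment", "cost_center", "owner"}
ALLOWED_ENVIRONMENTS = {"prod", "staging", "dev", "test"}

def validate_tags(resource_id: str, tags: dict[str, str]) -> list[str]:
    """Return human-readable violations for one resource's tags."""
    violations = [f"missing required tag '{k}'"
                  for k in sorted(REQUIRED_TAGS - tags.keys())]
    env = tags.get("environment")
    if env is not None and env not in ALLOWED_ENVIRONMENTS:
        violations.append(f"environment '{env}' not in "
                          f"{sorted(ALLOWED_ENVIRONMENTS)}")
    return [f"{resource_id}: {v}" for v in violations]

# One compliant and one non-compliant resource as a usage example.
print(validate_tags("vm-001", {"application": "billing",
                               "environment": "prod",
                               "cost_center": "CC-1234",
                               "owner": "team-payments"}))  # prints []
print(validate_tags("vm-002", {"application": "billing",
                               "environment": "qa"}))
```

Wired into provisioning pipelines or run against a resource inventory, a check like this makes “we have a tagging standard” a verifiable statement rather than a slide.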
Ready to Optimize Your Cloud Costs?
Let's discuss how Finoptica can help you achieve significant savings
Book Free Assessment