hybrid cloud

Repatriation isn’t a step backwards – it’s part of a mature cloud strategy

By Gabriel Pucek, Technology Architect, SEC DATACOM.

Remember back in 2021, when a massive cargo ship got stuck trying to squeeze through the Suez Canal? One wrong angle, and all the other ships had to sail around Africa while companies couldn’t produce or sell anything because their goods didn’t arrive. Not because the ocean stopped working, but because the world depended on a single narrow passage.

When a cloud outage becomes more than a nuisance, it means we’ve allowed too much of our digital world to rely on too few paths: one mistake, and everyone has to sail the long way around. This realization, I think, is why we’re seeing a movement towards cloud repatriation right now.

The world has changed, and so have the conditions we build IT on.

For some, repatriation is simply a knee-jerk reaction to the global circumstances we’re seeing now.

Not me.

I see repatriation as an integral part of a proactive cloud strategy that recognizes the forces shaping global IT now and tomorrow: risk, workload behavior, and cost. It’s an answer to the question:

“How do we keep our ships moving, even when the Suez Canal is out of order?”

When does repatriation make sense?

These are some of the questions IT leaders should ask themselves when deciding whether to repatriate.

1. Are you worried about geopolitics, security, and confidentiality?

The past five years have pushed geopolitics straight into the datacenter. Questions of jurisdiction, national interests and exposure now sit on every CIO’s desk.

If your data lives in a public cloud under another nation’s laws, you have to assume that the nation’s intelligence agencies could access it. Repatriation is a pragmatic way to reduce that kind of exposure.

It’s not about abandoning the cloud, but about ensuring that the most sensitive workloads are under the legal and operational protection they require.

2. Can you keep IT costs predictable while running high-intensity workloads at scale?

All organizations need financial clarity. And when real-world compute or data transfer costs exceed what you consider defensible, it’s time to evaluate what stays and what comes home.

The best example I can think of is AI. Developers working with AI love building in the public cloud, but once they have to deliver the service at scale, the running costs stop making sense.

And realistically: who imagines not using more AI in the next five years? As adoption grows, heavy, constant workloads move toward specialized infrastructure where latency, cost and control can be managed.

Workloads go where they best support how people actually work and live – sometimes that’s the cloud, sometimes it’s closer to home.

I’ve seen my share of cases where moving the right CPU‑intensive or data‑heavy workloads home delivered hardware ROI in under a year.
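
As a rough illustration of that kind of calculation – with entirely hypothetical figures, not numbers from any specific case – a simple break-even sketch shows how quickly dedicated hardware can pay for itself when a steady, compute-heavy workload comes home:

```python
# Hypothetical break-even sketch: a steady, compute-heavy workload,
# comparing monthly public cloud spend against owned hardware.
# All figures are made-up placeholders - substitute your own numbers.

cloud_monthly_cost = 18_000    # compute + storage + data egress per month
hardware_capex = 120_000       # servers, accelerators, racks (one-time)
onprem_monthly_opex = 4_000    # power, cooling, space, support contracts

monthly_saving = cloud_monthly_cost - onprem_monthly_opex
breakeven_months = hardware_capex / monthly_saving

print(f"Break-even after ~{breakeven_months:.1f} months")  # ~8.6 months here
```

If the workload is bursty rather than constant, the same arithmetic often points the other way – which is exactly why placement has to be decided workload by workload.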

3. How much downtime can you actually tolerate?

There is no technology with 100 percent uptime, so a realistic tolerance for downtime should be a guiding principle in your strategy. If you’re in financial services, healthcare, utilities or any other line of business with workloads that simply can’t be down for more than minutes or hours without critical damage, you need a concrete exit plan: clean, recent, uncompromised copies of your data (perhaps even air‑gapped) and the ability to start up elsewhere within your tolerated window of time.

Once you know your number, it’s much easier to choose a plan for resilience. And repatriation (or a hybrid placement) will often be part of that plan, depending on your industry sector.
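
A minimal sketch of what “knowing your number” can look like in practice: translating an availability target into the downtime it actually permits. The targets below are illustrative examples, not recommendations – your tolerance depends on your business:

```python
# Convert an availability target into the downtime it allows per year/month.
# The targets listed are examples only.

HOURS_PER_YEAR = 365 * 24

for availability in (0.999, 0.9995, 0.9999):
    downtime_hours_year = HOURS_PER_YEAR * (1 - availability)
    downtime_minutes_month = downtime_hours_year * 60 / 12
    print(f"{availability:.4%} uptime -> "
          f"{downtime_hours_year:.1f} h/year, "
          f"{downtime_minutes_month:.0f} min/month allowed")
```

If the number that comes out is smaller than what your current setup – cloud or otherwise – can realistically deliver, that gap is where the repatriation or hybrid conversation belongs.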