Networking and Platform Aren’t Separate Systems: They Never Were
Network and platform teams are often divided by priorities, tools, and language, but they’re solving the same problem. When you bridge that divide, systems stop breaking quietly.

1. Scale Only Works When We Connect People
In large-scale systems, technical architecture is only half the battle. The other half is organizational architecture: how teams work together, how knowledge flows, and how decisions get made under pressure. And nowhere is this more apparent than in the long-standing divide between network engineering and platform engineering.
These teams often operate in different universes. One is deeply concerned with routing, telemetry, fault domains, packet loss, and path preference. The other is focused on containers, deployment velocity, service reliability, and observability. They speak different operational languages. They build different tools. They even look at different dashboards.
Yet neither team can succeed alone, especially not at hyperscale.
When a service degrades, it doesn’t matter whether the root cause is a routing flap, a misconfigured container policy, or an overwhelmed link. The user experiences only failure. The platform team gets paged, the network team sees nothing wrong, and the incident can last for hours. People begin guessing, and confidence in the infrastructure declines.
This isn't a tooling problem. It's a collaboration problem.
And if left unaddressed, it creates a kind of organizational friction that degrades not only performance but also trust.
At hyperscale, we learned this the hard way.
In the early years, our teams operated with a well-intentioned separation: network engineering managed the underlay, platform engineering handled the services, and a thin troubleshooting bridge existed between them. But as both systems and user expectations grew more complex, that bridge started to crack. Triage took too long, and root-cause analysis (RCA) missed the mark. Worst of all, engineers on both sides felt increasingly alone, each trying to fix a piece of a system they only partially understood.
So we changed course.
We made it a priority to engineer collaboration, not just connectivity.
We stopped thinking about NetEng and Platform as two distinct pillars and started treating them as two views into the same system. And we rebuilt the way they worked together through shared tools, shared SLOs, and shared culture.
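To make “shared SLOs” concrete, here is a minimal sketch of the idea, not the actual tooling described in this article: one objective evaluated over telemetry from both domains, so a single breach names both on-calls at once. Every metric name, threshold, and helper below (WindowStats, evaluate_shared_slo) is hypothetical and chosen purely for illustration.

```python
from dataclasses import dataclass

@dataclass
class WindowStats:
    """Telemetry for one evaluation window, merged from both teams' pipelines."""
    http_error_rate: float  # platform signal: fraction of requests returning 5xx
    p99_latency_ms: float   # platform signal: tail latency
    packet_loss: float      # network signal: fraction of probes lost on the service path
    route_flaps: int        # network signal: path changes observed this window

# One jointly owned objective: either side's signal can burn it.
SHARED_SLO = {
    "max_error_rate": 0.001,    # 99.9% request success
    "max_p99_ms": 250.0,
    "max_packet_loss": 0.0005,
    "max_route_flaps": 3,
}

def evaluate_shared_slo(w: WindowStats) -> list[str]:
    """Return every breached condition so the page carries both domains at once."""
    breaches = []
    if w.http_error_rate > SHARED_SLO["max_error_rate"]:
        breaches.append(f"platform: error rate {w.http_error_rate:.3%}")
    if w.p99_latency_ms > SHARED_SLO["max_p99_ms"]:
        breaches.append(f"platform: p99 latency {w.p99_latency_ms:.0f} ms")
    if w.packet_loss > SHARED_SLO["max_packet_loss"]:
        breaches.append(f"network: packet loss {w.packet_loss:.3%}")
    if w.route_flaps > SHARED_SLO["max_route_flaps"]:
        breaches.append(f"network: {w.route_flaps} route flaps")
    return breaches

if __name__ == "__main__":
    window = WindowStats(http_error_rate=0.004, p99_latency_ms=310.0,
                         packet_loss=0.002, route_flaps=5)
    for breach in evaluate_shared_slo(window):
        print("shared SLO breach ->", breach)
```

The design point is that neither team can be “green” while the joint objective burns: the page itself correlates the platform and network signals, which is exactly what the thin troubleshooting bridge never did.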
The results were transformational.
This article tells that story: not just what we built, but why we built it, and how any organization operating at scale can begin to close the gap between the people who move packets and the people who ship product.
Because in the end, your network doesn’t stop at the switch. It stretches all the way to the service.
And if your people aren’t as connected as your systems, you’re not building for resilience; you’re just hoping for it.
