The Network Hasn’t Changed… But Expectations Have
And the automation you’ve adopted might be making things worse, not better.
Every network still needs to be available, secure, fast, and invisible. But today's automation practices are increasing fragility, not reducing it. Here's why network usability, state consistency, and business alignment matter more than speed alone, and what you should do instead.
The Core Requirements Haven’t Changed
No matter how many layers we’ve added, no matter how the topology shifts: the foundational requirements of networking remain the same.
Whether you’re a telco, cloud provider, enterprise, SaaS platform, or even a government, your network must deliver:
High Availability: Downtime, in any form, is unacceptable. Multiple nines of uptime are table stakes.
Resilience: Networks must withstand multiple simultaneous failures (links, paths, control-plane anomalies) and self-heal with minimal impact.
Security by Design: Not bolted on after deployment, but baked in from day zero, from device hardening to trust models and key management.
Consistent Performance: Predictable behavior across traffic classes, with deterministic latency, jitter, and throughput.
Scalability and Interoperability: It must grow, evolve, and integrate with other environments without ripping and replacing.
Usability and Invisibility: Users don't interact with the network or think about it. They should only experience fast, secure, reliable applications. The network must remain invisible and stay out of the users' way.
So what changed?
The Pressure Curve Is Rising
While the requirements mentioned above haven’t changed, the intensity has skyrocketed.
Today’s environments are multi-cloud, edge-distributed, and globally available. SLAs are tighter, attack surfaces are larger, and expectations from users, regulators, and product teams are unforgiving.
The network is no longer just plumbing; it’s critical business logic. It must:
Adapt dynamically to workloads.
Serve as a policy enforcement point.
Deliver telemetry and observability at very short intervals.
Recover from failure before anyone notices.
And so… automation arrived as the savior.
But it didn’t always work.
When Network Automation Goes Wrong
The network engineering community, long skilled at mastering protocols, diagrams, and CLI, recognized the need to modernize.
So, we learned Python. We adopted YAML, JSON, XML, Jinja. We wrote scripts. We pushed configs via APIs instead of SSH. This shift was real. Necessary. Even urgent.
But as with all evolutions, we’ve hit growing pains.
Let’s examine the problems introduced when automation is treated as the end goal rather than a method to express intent and enforce correctness.
Problem 1: Script Proliferation and Silos
In many orgs, every engineer writes their own scripts for various purposes, including provisioning, failover testing, interface templates, ACL pushes…
Before long:
You have 10 different ways to configure a BGP peer.
There’s no code reuse, only copy-paste.
When someone leaves, their scripts die with them.
This is how spaghetti automation happens. It’s no better than doing things manually.
Problem 2: Inconsistent Network State
Automation doesn’t guarantee consistency. If anything, it amplifies inconsistency faster.
When engineers run different scripts with overlapping logic, configuration drift emerges:
Interfaces tagged inconsistently.
Routing policies behave differently across sites.
QoS parameters misaligned with service expectations.
Without a single source of truth and state validation, you’re just automating chaos.
Let me elaborate on this idea, since several posts in this newsletter will return to it:
Automation, by itself, is not a silver bullet for consistency. In fact, when implemented without discipline, it becomes a powerful amplifier of chaos. Consider a scenario where multiple engineers independently create automation scripts to manage similar parts of the network: VLAN provisioning, route redistribution, ACLs, or interface descriptions.
Without a unified model or governance, these scripts often have subtle differences in logic, formatting, and assumptions. While each may function correctly in isolation, the moment they overlap (say, by targeting the same devices or configuration sections), they introduce unpredictable behaviors and configuration drift.
The result is a fragmented network state that diverges from the original intent, often without clear visibility into what changed, when, or why.
This fragmentation is particularly dangerous because automation operates at speed. In a manual world, inconsistency spreads slowly, one CLI command at a time.
In an automated world, a single misaligned script can propagate incorrect configurations across hundreds of devices in seconds. The more automation you add without a shared source of truth, version control, or validation framework, the faster the entropy builds.
In this sense, automation is not inherently a solution. It is a force multiplier. Whether it multiplies good or harm depends entirely on the design discipline, operational maturity, and the presence of rigorous safeguards around its use.
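To make the drift problem concrete, here is a minimal sketch of what "checking live state against a source of truth" can look like. The device names, fields, and values below are hypothetical placeholders; in practice the intended state would come from a system of record such as NetBox, and the observed state from collected device configs or telemetry.

```python
# Minimal sketch: detecting configuration drift against a source of truth.
# All device data below is hypothetical, for illustration only.

intended = {  # source of truth: what each interface SHOULD look like
    ("edge1", "Gig0/1"): {"description": "UPLINK:core1:Gig0/7", "vlan": 110},
    ("edge1", "Gig0/2"): {"description": "CUST:acme:primary", "vlan": 220},
}

observed = {  # state scraped from the live network
    ("edge1", "Gig0/1"): {"description": "uplink to core1", "vlan": 110},
    ("edge1", "Gig0/2"): {"description": "CUST:acme:primary", "vlan": 221},
}

def find_drift(intended, observed):
    """Return (device, interface, field, want, have) tuples for mismatches."""
    drift = []
    for key, want in intended.items():
        have = observed.get(key, {})
        for field, want_val in want.items():
            if have.get(field) != want_val:
                drift.append((*key, field, want_val, have.get(field)))
    return drift

for device, intf, field, want, have in find_drift(intended, observed):
    print(f"{device} {intf}: {field} should be {want!r}, found {have!r}")
```

The important design choice is that remediation never starts from the live network: the model is authoritative, and the live network is only ever compared against it.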
Problem 3: No Testing, No Version Control, No Safety Nets
Would you ship production software that:
Has no unit tests?
Isn’t peer-reviewed?
Can’t be rolled back?
Isn’t integrated with CI/CD?
Of course not.
Yet, many network teams still treat automation as fire-and-forget scripting.
This creates fragile systems where:
A single typo can bring down a point of presence (POP).
Migrations lack confidence or traceability.
You can’t predict the network's actual behavior under change.
Let me elaborate on this.
In modern software engineering, no serious application reaches production without rigorous safeguards. Things like unit tests, peer reviews, version control, and CI/CD pipelines are non-negotiable. These practices exist not just for quality assurance, but to ensure predictability, traceability, and recoverability.
Yet, in many networking environments, automation is still treated as a set of quick scripts written and executed ad hoc, outside any structured development lifecycle. This is equivalent to deploying untested, untracked code directly into a production environment, a practice that would be unthinkable in software teams.
Without testing and rollback mechanisms, every execution becomes a gamble, and when something goes wrong, root cause analysis is slow, assumptions go unvalidated, and the blast radius can be massive.
This lack of discipline transforms automation from an enabler of velocity into a liability. A single untested script, perhaps with a typo in a BGP policy name or an ACL sequence, can bring down critical network segments, such as an entire POP or core site.
More critically, without version control, it's impossible to understand what changed, revert safely, or audit decisions after an outage. Network changes should be developed, reviewed, and validated just like production software.
Until automation is treated as software, with safety nets in place, network teams will continue to operate with a false sense of agility, unknowingly increasing risk rather than mitigating it.
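As one illustration of "treating automation as software", here is a hedged sketch of a config generator that refuses invalid input and could sit behind CI assertions. The template, ASN bounds, and naming rules are illustrative assumptions, not a prescription; a real pipeline would typically use Jinja2 templates and pytest, with a failing test blocking the merge.

```python
# Minimal sketch: unit-testing a generated config BEFORE it can be pushed.
# Template and validation rules are illustrative assumptions.

from string import Template

BGP_NEIGHBOR = Template(
    "router bgp ${local_asn}\n"
    " neighbor ${peer_ip} remote-as ${peer_asn}\n"
    " neighbor ${peer_ip} route-map ${policy} in\n"
)

def render_neighbor(local_asn: int, peer_ip: str, peer_asn: int, policy: str) -> str:
    """Render a BGP neighbor stanza, refusing obviously broken input."""
    if not (1 <= local_asn <= 4294967295 and 1 <= peer_asn <= 4294967295):
        raise ValueError("ASN out of range")
    if not policy or " " in policy:
        raise ValueError(f"invalid route-map name: {policy!r}")
    return BGP_NEIGHBOR.substitute(
        local_asn=local_asn, peer_ip=peer_ip, peer_asn=peer_asn, policy=policy
    )

# These checks would live in CI; a failing assertion blocks the deploy.
good = render_neighbor(65001, "192.0.2.1", 65002, "CUST-IN")
assert "route-map CUST-IN in" in good

try:
    render_neighbor(65001, "192.0.2.1", 65002, "")  # the classic empty-name typo
except ValueError as err:
    print("rejected before reaching production:", err)
```

The point is not this specific template; it is that the "typo in a BGP policy name" from the paragraph above gets caught by a test, not by an outage.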
Problem 4: Automation Isn’t Intent-Aware
Most current automation tools don’t understand why they’re doing what they’re doing.
They don’t validate business requirements. They don’t map logical intent to state.
They just… execute. And they execute things really fast.
And that’s a problem.
The future of networking is intent-based; let's review why.
The core issue with most current automation approaches is that they operate blindly. They don't understand the why behind the tasks they're executing. These scripts and workflows may be excellent at pushing configurations rapidly across devices, but they lack the intelligence to assess whether those configurations align with business objectives or intended outcomes.
For instance, a script might deploy a routing policy flawlessly across hundreds of routers. But if that policy unintentionally blackholes traffic from a high-priority customer, the automation has succeeded technically and failed strategically. This disconnect between execution and intention is where traditional automation begins to fall apart.
Networking, especially at scale, is no longer about just making changes. It's about guaranteeing outcomes. This is where intent-based networking (IBN) comes into play.
IBN doesn't just ask what configuration to apply, but why that configuration is necessary in the first place. It starts with high-level goals (e.g., "Customer X traffic must always follow this path unless congestion exceeds Y%") and translates them into enforceable, observable network states.
Without an intent-aware system, automation becomes fragile: it might maintain configurations, but it can’t assure behavior. And in environments where availability, security, and performance are non-negotiable, behavior is all that matters.
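The IBN example above can be expressed as a behavioral check rather than a config check. The telemetry structure, path names, and threshold below are hypothetical placeholders; the point is that the check asserts on observed behavior, not on whether a config line exists on a device.

```python
# Minimal sketch: an intent as a check on BEHAVIOR, not on configuration.
# Telemetry values and path names are hypothetical placeholders.

def check_intent(telemetry, preferred_path, congestion_threshold_pct=80):
    """Intent: traffic follows preferred_path unless it is congested."""
    congested = telemetry["congestion_pct"][preferred_path] > congestion_threshold_pct
    if congested:
        return True  # failing over away from a congested path is allowed
    return telemetry["active_path"] == preferred_path

telemetry = {
    "active_path": "path-backup",
    "congestion_pct": {"path-primary": 12, "path-backup": 5},
}

# The config push "succeeded", yet the intent is violated: the primary
# path is healthy, but traffic is sitting on the backup path.
print(check_intent(telemetry, "path-primary"))  # → False
```

A config-oriented check would have reported success here; only the behavioral check surfaces the strategic failure.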
The Trap of Script Proliferation: When Quick Wins Become Long-Term Liabilities
At first glance, scripting feels like progress. When a network engineer writes a Python or Bash script to automate a repetitive CLI task, it’s easy to celebrate the efficiency gain. That single script might save hours, avoiding tedious, error-prone manual steps.
But beneath that short-term win lies a dangerous pattern: the silent accumulation of brittle, ungoverned automation.
Over time, as teams scale and engineers come and go, these “quick scripts” multiply. Each one was written for a slightly different purpose, against a slightly different device model, OS version, or topology layout. Some scripts were copied and tweaked. Others were renamed and repurposed without proper testing.
Few were documented. Almost none were version-controlled. And most of them don’t talk to each other, or even follow the same logic for similar tasks.
The result is a chaotic landscape of fragmented tooling where:
One script sets the interface description. Another updates BGP policy. A third clears sessions. A fourth pushes VLANs.
No one knows which scripts are still safe to run. Or whether they're mutually exclusive. Or what state the network will be in afterward.
Engineers stop trusting automation. Or worse, they run scripts without understanding their full consequences, turning speed into silent risk.
This is not network automation. This is shadow IT at the infrastructure level.
True automation means building systems (not scripts) that are declarative, versioned, testable, observable, and repeatable.
It means designing workflows where the network intent is separated from the low-level execution, and changes go through controlled pipelines that simulate, validate, and reconcile before touching production.
Script proliferation bypasses all of this.
It creates a fragile foundation, where one engineer’s “time-saver” becomes another team’s untraceable disaster. It undermines institutional knowledge, sabotages onboarding, and breaks the promise of reliability that automation is supposed to deliver.
In short: scripts are tools. But proliferation is a symptom of missing architectural direction, lack of governance, and absence of system thinking.
If you want automation to scale, survive team turnover, and enable safe velocity, your goal isn’t “more scripts.”
It’s fewer systems that do more, safely.

The Real Goal: From “Faster” to “Smarter”
Many teams chase automation for speed. And while that’s great, speed without correctness is just a faster way to fail.
What we need instead is:
Business-Driven Network Intent
Capture what the network should be doing in human-readable terms.
E.g., "This DC must always reach that DC within 35ms over two diverse paths."
Source of Truth Integration
Networks must converge on a known, modeled state, not whatever happens to be manually applied.
Validation and Reconciliation
Systems must continuously check: Does the network match intent?
Safe Change Pipelines
Use CI/CD not just for pushing changes but for simulating, testing, and approving them.
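The four needs above can be sketched as a tiny staged pipeline: a change only reaches deployment after linting, simulation, and approval all pass, and a failure stops the rollout at a named stage. The stage logic and field names are deliberately toy-sized assumptions, standing in for real schema validation, digital-twin simulation, and review gates.

```python
# Minimal sketch of a safe change pipeline. Each stage is a placeholder
# for a real check (schema linting, simulation, human/policy approval).

def lint(change):
    return "vlan" in change  # stand-in for schema/format validation

def simulate(change):
    return change.get("vlan", 0) < 4095  # stand-in for modeling the outcome

def approve(change):
    return change.get("approved", False)  # stand-in for a review gate

PIPELINE = [("lint", lint), ("simulate", simulate), ("approve", approve)]

def run_pipeline(change):
    """Run every stage in order; stop and report on the first failure."""
    for name, stage in PIPELINE:
        if not stage(change):
            return f"blocked at {name}"
    return "deployed"

print(run_pipeline({"vlan": 110, "approved": True}))   # → deployed
print(run_pipeline({"vlan": 9999, "approved": True}))  # → blocked at simulate
```

The design choice worth copying is that "deployed" is the last outcome, never the default: every change has to earn its way through each stage.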
✅ Homework: Fixing Your Automation Culture
There's a significant gap between reading and understanding this article and putting its concepts into practice.
I recognize this challenge, so here is a foundational plan to help you shift your automation mindset from fragile scripting to intent-aware engineering. Let's explore it.
1. Inventory Your Automation
List all scripts, playbooks, templates.
Identify ownership, duplication, and gaps.
Build a central repository.
2. Introduce a Source of Truth
Start with a Git repo, NetBox, or IPAM that defines:
Topologies
Device roles
Standard configurations
Automate against that, and not against the live network.
3. Use a Declarative Model
Define desired state, not how to get there.
Adopt tools like Batfish, SuzieQ, or PyGNMI to validate state against expectations.
4. Build a Pipeline
Trigger automation via Git commits.
Add stages:
Linting
Syntax testing
Simulation
Approval
Deployment
5. Declare Your Intent
Document what your network should do, not just what config it should have.
Translate these into test cases and simulation baselines.
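As one way to "translate intent into test cases", here is a sketch that turns the earlier example intent ("this DC must always reach that DC within 35ms over two diverse paths") into an executable test. The measurement data is a hypothetical stand-in for real probe results, and the diversity rule (no shared transit node) is a simplifying assumption; a real check would also consider shared links, SRLGs, and fiber paths.

```python
# Minimal sketch: a documented intent expressed as an executable test case.
# The measured data is a hypothetical stand-in for probe/telemetry results.

measured = {
    ("dc-a", "dc-b"): {
        "latency_ms": 28.4,
        "paths": [["dc-a", "core1", "dc-b"], ["dc-a", "core2", "dc-b"]],
    }
}

def test_dc_a_to_dc_b_intent(data=measured):
    """Intent: DC-A reaches DC-B within 35 ms over two diverse paths."""
    link = data[("dc-a", "dc-b")]
    assert link["latency_ms"] <= 35, "latency intent violated"
    # Simplified diversity rule: the two paths share no transit node.
    transits = [set(p[1:-1]) for p in link["paths"]]
    assert len(link["paths"]) >= 2 and not (transits[0] & transits[1]), \
        "path diversity intent violated"

test_dc_a_to_dc_b_intent()
print("intent satisfied")
```

Run on a schedule or inside the change pipeline, a test like this turns "the network should do X" from documentation into something that fails loudly the moment it stops being true.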
You can argue that automation is about speed, and I won’t fully disagree.
But if speed breaks your business, what’s the point?
The future of network automation is about delivering correctly, at scale, under pressure, with confidence. And that starts with intent, validation, and design maturity.
So go ahead… change my mind 😄
If this resonated with you, subscribe to The Routing Intent newsletter and be the first to receive every deep-dive post. No fluff, no marketing, just real engineering insight.
See you in my next post!
Leonardo Furtado

