Two Worlds, One Disparity
Take a moment to look at the image above.
On the left, you've got the DevOps world: sleek offices, top-notch tooling, and deployment pipelines running on autopilot. Engineers are sipping coffee while robots handle production workloads. It's all about orchestration, abstraction, automation, and observability.
On the right? NetOps in the Stone Age. Engineers are barefoot, hunched over chisels and stone tablets, manually etching code into rock. They're surrounded by primitive huts and beasts of burden. It’s funny because it’s true.
This isn’t just a cartoon. It’s a brutally accurate depiction of a gap that has only grown wider in recent years.
While the layers above the network, like applications, services, compute workloads, containers, VMs, and databases, have undergone profound automation revolutions, the network itself has remained largely artisanal, operated by humans typing command after command into CLI terminals, hoping not to break something with a misplaced space or a forgotten “no”.
The Paradox of Progress
Here’s the paradox: all modern digital infrastructure depends entirely on the network, and yet the processes to build, maintain, and operate it are stuck in the past.
DevOps and platform engineering teams have:
Fully automated build-test-deploy cycles
Self-service portals for infrastructure provisioning
Auto-scaling workloads and zero-downtime rollouts
Observability stacks that light up anomalies before they become incidents
Meanwhile, in the networking world:
Engineers still log into individual devices to push changes
Network state is often stored in a dozen engineers’ heads
Incident response starts with “let me SSH into that router”
Configuration rollback? Maybe… if someone remembered to copy-paste the last working state
We’re not exaggerating. We’ve seen Fortune 100 enterprises, global carriers, major banks, and even tech companies still operating in this fragmented, manual, “CLI-first” mode. In many of them, automation means running a Python script from a laptop.
This Isn’t a Rant. It’s a Call to Evolve
This article isn’t meant to shame or mock network engineers, far from it. The truth is that the tooling hasn’t evolved fast enough, the culture hasn’t shifted boldly enough, and the training hasn’t caught up.
We’re here to raise awareness, highlight the consequences of this disparity, and chart a path forward. Because, at some point, the cost of not automating your network becomes greater than the cost of doing it wrong.
And the longer we delay, the harder the transition will be.
It’s time to stop romanticizing our CLI reflexes and start thinking about networks as software-driven systems, ones that must be declared, validated, versioned, tested, monitored, and evolved just like any other digital platform.
The rest of this piece will unpack what’s really holding NetOps back, why “scripts” aren’t enough, and what real network automation actually looks like beyond the buzzwords.
Let’s dig in.
The Automation Mirage: Why Script ≠ Automation
In countless organizations, you'll hear the same confident declaration:
“Oh, we already do automation. We have scripts.”
It sounds reassuring. Forward-thinking. Progressive.
But peel back the surface, and you often find this “automation” is a patchwork of brittle scripts stored on someone’s laptop, built to solve one-off problems with no reusability, testing, documentation, or versioning in sight.
Let’s be very clear: automation is not simply executing work faster.
True automation is about changing how work is done entirely.
The Script Trap
Here’s what scripting often looks like in real-world networking environments:
A Python script that logs into 15 routers and pushes a prefix-list
A Bash script that collects show commands and dumps the logs into a text file
A TCL script that upgrades IOS on a few devices via TFTP
A set of .expect scripts that automate Telnet/SSH interactions with prompts hard-coded in
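A minimal sketch of what one of those "quick win" scripts typically looks like (the device list and prefix-list are hypothetical, and the SSH push is stubbed out; in the wild it would go through a library like Netmiko):

```python
# A typical one-off script: hard-coded devices, no validation,
# no rollback, no logging -- it just pushes commands faster.

DEVICES = ["10.0.0.1", "10.0.0.2", "10.0.0.3"]  # hypothetical routers

PREFIX_LIST = [
    "ip prefix-list CUSTOMER-IN seq 10 permit 198.51.100.0/24",
    "ip prefix-list CUSTOMER-IN seq 20 deny 0.0.0.0/0 le 32",
]

def push_config(host, commands):
    # In a real script this would open an SSH session and send the
    # commands; stubbed here to show the control flow.
    print(f"--- {host} ---")
    for cmd in commands:
        print(cmd)
    return commands

for device in DEVICES:
    # No precondition checks, no error handling, no post-verification.
    push_config(device, PREFIX_LIST)
```

Notice everything that is missing: no check that the device is reachable, no diff against current state, no rollback if router two of three fails.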
Now ask:
Can a new engineer understand and safely run that script?
Does it validate preconditions before acting?
Does it have rollback logic or exception handling?
Can it scale to hundreds or thousands of devices across multiple vendors?
Does it encode intent or just automate the same repetitive muscle memory?
The answer is often a resounding “no.”
What we’re doing is not automation; it’s speeding up manual labor. It’s the digital equivalent of hammering faster instead of building a machine to do the job better.
Real-World Example:
Let’s take a typical enterprise scenario.
“We need to deploy a new branch site with a specific MPLS L3VPN config and QoS template.”
Here’s how scripting might be applied today:
Someone runs a script that renders a Jinja2 template with hardcoded variables
It generates a config
Another script logs in via SSH and pushes the config
Done.
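That render-and-push flow often amounts to little more than this (hostnames, addresses, and the template itself are illustrative):

```python
from jinja2 import Template

# Hard-coded variables -- the hallmark of the script-only approach.
SITE = {"hostname": "branch-042", "loopback": "10.255.0.42", "vrf": "CUST-A"}

TEMPLATE = Template(
    "hostname {{ hostname }}\n"
    "vrf definition {{ vrf }}\n"
    "interface Loopback0\n"
    " ip address {{ loopback }} 255.255.255.255\n"
)

config = TEMPLATE.render(**SITE)
print(config)
# The next script in the chain SSHes in and pastes `config` -- with no
# check that the VRF exists or that the loopback isn't already allocated.
```

Useful? Yes. But the intent lives in someone's head, the variables live in the script, and nothing verifies the outcome.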
Is it faster than copy-paste? Absolutely.
Is it automation? Only partially.
Now, imagine a proper software-centric approach:
The business team requests the new site via a self-service portal
Metadata is fed into a controller that uses a source of truth (NetBox, Git, IPAM)
A CI/CD pipeline generates device-agnostic intent
Templates render device-specific config
Pre-flight checks validate capabilities
Deployment is staged with rollback triggers
Post-deployment telemetry confirms state alignment
That's not just speed. That’s governance, predictability, and scale.
Speed vs Safety
Here’s the kicker: speed without structure is a liability.
When engineers write dozens of tiny scripts, each solving a narrow pain point in isolation, you eventually accumulate:
A sprawling, undocumented mess
Competing logic and side effects
No centralized observability
Fear of touching “legacy scripts” because “Bob wrote that, and Bob left in 2021”
What you gain in perceived velocity, you lose in reliability, auditability, and team agility.
Worse, most scripts don’t encode intent, they just replicate what the CLI would do, without validating whether the outcome makes sense.
Scripts Still Have Their Place
Let’s be fair: scripting is not inherently bad. In fact, it’s often a gateway into automation maturity. Engineers should learn scripting; it’s a valuable tool.
But it’s just that: a tool.
Scripts don’t define a strategy. They don’t solve systemic fragility. And they certainly don’t deliver the resilience, reliability, or scalability that networks at modern scale demand.
The Mirage Is Dangerous
When organizations believe they’re “doing automation” simply because they run scripts, they create a false sense of progress. It discourages investment in proper automation pipelines, tools, design principles, and cross-functional collaboration.
This is the automation mirage, the illusion that writing code equals solving problems.
It’s time to leave the mirage behind and move toward software-centric thinking, where automation is not about how fast you can send commands but about how accurately, predictably, and safely you can fulfill network intent.
The Reality of Traditional Infrastructure
The illusion of progress from scripting becomes even more concerning when you consider how far behind traditional network infrastructure really is.
Walk into a data center, a telecom backbone NOC, a bank’s networking war room, or a large retail infrastructure office, and you’ll often find the same picture:
Thousands of devices across dozens of vendors
Routing protocols patched with knobs upon knobs
Monitoring systems cobbled together with SNMP and syslog
Change control policies from the 1990s
Engineers who know more about IOS or Junos quirks than software best practices
It doesn’t matter if the company sells smartphones, trades stocks, delivers groceries, or processes hospital records; traditional networking infrastructures are essentially the same everywhere: complex, brittle, and largely untouched by modern software practices.
Why?
Because networking evolved under a completely different context than DevOps. For decades:
Networks were built by CLI specialists, not software developers
Reliability was defined by "if it ain’t broke, don’t touch it"
Configuration drift was solved by discipline, not automation
Vendor lock-in was accepted as a tradeoff for stability
Business and networking were separated by layers of abstraction and politics
Even today, in Fortune 100 environments, the network is often a black box nobody wants to touch unless it’s absolutely necessary. The culture of caution and fear is embedded because breaking the network breaks everything.
Ironically, while the world has moved toward microservices, API-first design, zero-trust security, and real-time observability, many network stacks are still defined by manual provisioning, bespoke protocols, and tribal knowledge.
That’s not an infrastructure strategy. That’s technical debt at planetary scale.
Still Glued to the CLI: The Manual Reality of NetOps
If you talk to most network engineers, they’ll tell you:
“We use automation. We have some scripts. But yeah, we mostly configure devices manually.”
This isn’t an isolated case. It’s the default.
Despite all the conferences, whitepapers, vendor promises, and GitHub repos, most networking work is still done through direct human interaction with devices via the CLI.
Here’s what it usually looks like:
A ticket comes in requesting a VLAN, a new route, or an ACL change
An engineer logs into a jump box, SSHs into a switch or router
They copy-paste a snippet of config from an internal wiki or a colleague’s email
They paste it into the terminal, press Enter, and hope for no errors
They write up the ticket, attach the terminal output, and mark it "done"
And if they’re feeling fancy? Maybe they run a quick Python or Ansible script locally that does some parts of it for them. But there’s no telemetry feedback. No validation. No orchestration.
This is manual labor disguised as technical work.
The DevOps Comparison
In contrast, here’s what a DevOps workflow might look like for deploying an app:
A developer pushes code to Git
A CI/CD pipeline runs automated tests and security scans
If successful, a container is built and deployed to production via Kubernetes
Observability tools track latency, error rates, and rollout health
If anomalies are detected, the deployment is rolled back automatically
That’s not just automation. That’s autonomy + intelligence + feedback loops.
Now ask yourself:
How many of these principles exist in your network?
Where’s the CI/CD for BGP policies?
Where’s the rollback for misconfigured OSPF timers?
Where’s the observability that tracks forwarding anomalies in real time?
In most places, it doesn’t exist because we’ve made the CLI a crutch.
Real Consequences
This isn’t just inefficient. It’s risky.
Manual changes are error-prone and untraceable
Onboarding new engineers becomes tribal and slow
Outages take longer to resolve because there’s no clear picture of intent vs state
Engineers spend their energy on repetitive tasks instead of systemic improvement
And perhaps worst of all, the network becomes a bottleneck.
While DevOps ships daily, NetOps still waits for the Tuesday maintenance window.
That lag is killing agility across entire organizations.
A Culture Shift Is Needed
It’s not that network engineers aren’t capable; quite the opposite. They are some of the most battle-tested, detail-oriented technologists in the business.
But the tooling, training, and mindset around NetOps have not evolved fast enough to support a software-driven, cloud-native world.
The problem isn’t just the CLI. The problem is how dependent we’ve become on it, and how little pressure there’s been to move beyond it.
To do that, we must recognize the CLI as a tool of last resort, not the foundation of operations.
What Real Network Automation Actually Looks Like
Let’s put the scripts and terminals aside for a moment.
What would it mean to truly automate a network in the same way DevOps has automated software infrastructure?
Not just executing tasks faster.
Not just reducing keystrokes.
But creating a system that is aware, adaptive, intent-driven, and safe at scale.
Let’s break that down.
From Procedures to Intent
Real automation begins when you no longer think in commands, but in desired outcomes.
You stop asking:
“How do I configure this OSPF neighbor?”
“What commands do I need to push this ACL?”
And you start asking:
“What should this segment of the network be doing?”
“What is the intended service behavior here?”
“How can I express that in a reusable, testable, and observable way?”
In other words, you define the "what" and let automation figure out the "how."
This is intent-based networking, and while it’s often misused in marketing buzz, the principle is sound:
Define your network in terms of business logic and technical outcomes, not low-level procedures.
The Components of Real Automation
Let’s outline the key pillars of what real automation in networking must include:
✅ Source of Truth (SoT)
A single, structured place to define topology, ownership, IP schemes, service metadata, and policy intent, e.g., NetBox, Nautobot, or a Git-backed YAML database.
✅ Declarative Configuration
Templates (Jinja2, Mako, etc.) that render configs based on input from the SoT, device-agnostic logic that outputs device-specific configurations.
✅ Validation and Testing
Pre-deployment checks, syntax validation, and compliance testing using tools like Batfish, pyATS, NAPALM, and custom logic to simulate outcomes before committing changes.
✅ Orchestration Pipelines
CI/CD workflows (e.g., GitHub Actions, Jenkins, GitLab CI) that coordinate linting, rendering, approval, deployment, and rollback with human gates where needed.
✅ Telemetry and Feedback Loops
Post-deployment verification from real-time data, SNMP, gNMI, flow logs, and streaming telemetry to ensure the observed state matches the intended state.
✅ Event-Driven Triggers
Not just one-time pushes but stateful automation that responds to events, like BGP session loss triggering traffic rerouting or bandwidth thresholds prompting scale-up actions.
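Tying the pillars above together, even a toy pipeline makes the shape clear: structured intent in, rendered config out, with validation gating the deploy. Every name and check below is illustrative; a real system would pull from NetBox, render with Jinja2, and validate with tools like Batfish or pyATS:

```python
# Toy intent pipeline: source of truth -> render -> validate -> dry-run deploy.

import ipaddress

SOURCE_OF_TRUTH = {
    "site": "branch-042",
    "wan_ip": "203.0.113.42/31",
    "qos_profile": "gold",
}

def render(intent):
    """Render device-agnostic intent into (pseudo) device config lines."""
    return [
        f"hostname {intent['site']}",
        f"interface wan0 ip {intent['wan_ip']}",
        f"qos apply {intent['qos_profile']}",
    ]

def validate(intent):
    """Pre-flight checks: fail fast before anything touches a device."""
    ipaddress.ip_interface(intent["wan_ip"])  # raises if malformed
    if intent["qos_profile"] not in {"gold", "silver", "bronze"}:
        raise ValueError(f"unknown QoS profile: {intent['qos_profile']}")

def deploy(config, dry_run=True):
    if dry_run:
        return {"status": "dry-run", "lines": len(config)}
    raise NotImplementedError("live deploys go through the orchestrator")

validate(SOURCE_OF_TRUTH)
result = deploy(render(SOURCE_OF_TRUTH))
print(result)
```

The point is not the forty lines of Python; it is that intent, rendering, validation, and deployment are separate, testable stages instead of one opaque script.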
A Real-World Flow
Imagine this real-world use case:
You’re deploying a new customer site into an MPLS-based enterprise WAN.
With real automation:
The sales team inputs service data into a portal
That feeds your Source of Truth, triggering a Git commit with new metadata
A CI pipeline generates new loopback IPs, VPN configs, and QoS policies
Pre-deployment validation confirms the topology supports the new PE
Changes are pushed to staging routers in dry-run mode
Upon approval, the pipeline deploys live changes with rollback enabled
Streaming telemetry validates that the customer prefix is reachable and properly tagged
No human logs into a device.
No one copy-pastes an ACL.
No one gets paged unless something breaks verification.
That is real automation, and it enables a fundamentally different operating model.
Why It Matters
Real automation doesn’t just reduce toil. It enables:
Faster service delivery (days to minutes)
Fewer outages (validate before deploy)
Better compliance (everything versioned, auditable)
Improved collaboration (network becomes part of the software delivery flow)
Scalability (you can operate 10x more infrastructure without 10x the headcount)
Most importantly, it aligns the network with the business. It allows the infrastructure to evolve at the pace of demand, not the pace of human labor.
Getting There
Is this easy? No.
Is it possible? Absolutely.
You’ll need:
A shift in mindset from CLI to system architecture
Time to refactor legacy infrastructure
Collaboration across teams (NetOps, DevOps, Security, Product)
A platform to test, iterate, and learn safely
But the payoff is enormous.
And you don’t need to do it all at once. Start with your most repetitive workflows. Build guardrails. Test in staging. Document learnings. And grow from there.
The path to real network automation isn’t paved with scripts but instead built with systems.
Let's examine what happens when you don’t follow that path and instead accumulate a mess of scripts, fragile code, and siloed logic.
The Bloat Problem: Scripts, Scripts Everywhere
Once a team realizes that the CLI alone is too slow and too brittle, the next step, often unintentionally, is the script explosion phase.
It starts with good intentions:
“Let’s write a quick script to automate this BGP peering config.”
“Here’s a Python function to pull MAC addresses from all switches.”
“I’ll wrap a loop around these CLI commands to push them to 30 devices.”
Now multiply that mindset across 5–10 engineers over several years across multiple projects, vendors, platforms, and operational pain points.
The result?
A graveyard of unmaintainable code.
The Unseen Cost of Scripting Bloat
When scripts multiply without structure or standards, the side effects are real and dangerous:
No Standardization
Each engineer writes in their own style, using different naming conventions, logic flows, and libraries.
Some scripts use Netmiko, others Paramiko, and others Pexpect. Some require Python 2.7 (still!), others 3.11 or 3.12.
No Central Repository
Scripts often live on local laptops, in Dropbox folders, in old GitLab repositories, or buried in email threads.
You only realize this when someone leaves and takes all their tooling knowledge with them.
No Permissions or Review
Anyone can write and run scripts directly on production systems. There’s no peer review, no security model, and often no logging.
A single mistake can cascade into a large-scale outage.
No Testing or Validation
Scripts are rarely tested outside of ad hoc runs in production. There’s no mock environment, no dry-run mode, no unit tests, and no rollback logic.
No Observability
If a script fails, there’s no structured output. You might get partial logs or a traceback, but there's no telemetry, alerting, or state confirmation.
This creates uncertainty and slows incident response.
Real-World Example: The Script Jungle
We once consulted for a large enterprise that had “fully embraced” automation.
They proudly showed us a private Git repository with over 900 individual scripts. No documentation. No tests. No integration points. Most were written by interns or engineers who had long since moved on.
Each script addressed one specific pain:
Create VLAN on specific hardware
Enable SNMP on certain IOS versions
Rotate BGP passwords on only Juniper PE routers in Region X
Run show commands across DMVPN tunnels
Some scripts were redundant. Others contradicted each other. Many no longer worked due to firmware changes or Python version mismatches.
Engineers were terrified to touch most of them.
Despite this “automation arsenal,” they continued to:
Perform changes manually
Avoid automation in critical paths
Suffer from configuration drift and inconsistent outcomes
This is not automation. This is unstructured, unmanaged scripting bloat.
When Scripts Become a Liability
Here’s the harsh truth:
Scripting can take you further into chaos, not out of it, if done without architecture.
Eventually, your scripts become so numerous and fragile that:
They can't be audited or reused
They break silently and cause outages
They make onboarding new engineers impossible
They block you from adopting better tools because you're afraid to let go
In short, scripting bloat becomes a technical and cultural anchor.
You move fast… until everything breaks. Then you move slowly… because you’re afraid to break it again.
How to Avoid the Bloat Trap
You don’t need to stop writing scripts entirely. You need to evolve how you approach them.
Build a Shared Foundation
Create a common base: connection handling, input validation, logging, dry-run capabilities, and error tracking.
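As a sketch, that common base can start as a small module every script imports (all names here are hypothetical):

```python
# A hypothetical shared base for team scripts: consistent logging, input
# validation, and a dry-run switch -- so individual scripts stop
# reinventing (or skipping) these pieces.

import logging

logging.basicConfig(level=logging.INFO, format="%(levelname)s %(message)s")
log = logging.getLogger("netops")

class TaskRunner:
    def __init__(self, dry_run=True):
        self.dry_run = dry_run
        self.sent = []  # record every command for auditability

    def run(self, host, commands):
        if not host or not commands:
            raise ValueError("host and commands are required")
        for cmd in commands:
            if self.dry_run:
                log.info("[dry-run] %s: %s", host, cmd)
            else:
                # A real implementation would push via SSH or NETCONF here.
                log.info("[live] %s: %s", host, cmd)
            self.sent.append((host, cmd))
        return self.sent

runner = TaskRunner(dry_run=True)
runner.run("core-sw-1", ["vlan 100", "name GUEST"])
```

Once every script goes through a base like this, dry-run behavior, logging format, and audit trails are uniform by construction rather than by discipline.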
Containerize and Standardize
Ensure every script runs in a known environment (e.g., Docker), with versioned dependencies and consistent output.
Refactor into Services or Pipelines
Group scripts into reusable components. Move toward declarative intent, templates, or APIs. Use orchestration tools (Ansible, Nornir, etc.) to manage logic layers.
Enforce Peer Review and Testing
Use Git workflows. Validate logic before production runs. Write basic tests. Even basic review prevents dangerous anti-patterns.
Add Observability
Every script should emit structured logs, handle exceptions cleanly, and confirm post-state.
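In practice, that can be as simple as emitting one structured JSON event per step and confirming the post-state before declaring success. A stdlib-only sketch, with the "device" modeled as a plain dict:

```python
import json
import time

def emit(event, **fields):
    """Emit one structured log line per step; trivial to ship to any log stack."""
    print(json.dumps({"ts": time.time(), "event": event, **fields}))

def apply_change(intended_vlans, device_state):
    emit("change.start", intended=sorted(intended_vlans))
    try:
        device_state["vlans"] |= intended_vlans   # the actual change
    except Exception as exc:
        emit("change.error", error=str(exc))
        raise
    # Confirm the post-state instead of assuming success.
    missing = intended_vlans - device_state["vlans"]
    emit("change.done", ok=not missing, missing=sorted(missing))
    return not missing

state = {"vlans": {10, 20}}
ok = apply_change({20, 30}, state)
```

Three structured events and one post-state check: not sophisticated, but already more observable than most of the 900-script jungle described above.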
Scripting Is the Start, Not the Strategy
Used thoughtfully, scripting is a bridge. It helps teams move faster, prototype ideas, and relieve toil.
But don’t mistake it for the endgame.
The real value of automation lies not in how many lines of code you’ve written, but in how effectively your network can deliver on business goals, safely, reliably, and at scale.
Let's explore next what it actually takes to build that kind of software-centric NetOps foundation, without over-engineering or over-promising.
The Path Forward: Software-Centric Thinking
By now, the message is clear: real network automation is not a faster terminal, and scripts are not a strategy.
So, how do we actually move forward?
The answer isn’t to demand every network engineer become a full-stack software developer. It’s not to replace every CLI with a complex controller. And it’s definitely not to start building custom SDN platforms from scratch.
Instead, the path forward is about embracing software-centric thinking, not as a buzzword but as a practical operating model for how networks are built, validated, and managed.
What It Really Means to Be Software-Centric
Being software-centric doesn’t mean doing everything in Python.
It means thinking like a systems designer:
Defining state instead of pushing commands
Encoding intent and validating outcomes
Using abstraction and reuse to reduce fragility
Building workflows that are observable, testable, and scalable
It’s not about becoming a “developer.” It’s about adopting proven software practices to reduce operational risk, improve consistency, and align infrastructure with business velocity.
This means:
Declaring what you want the network to do, not how to do it on each box
Validating that what you deployed is what you meant
Catching errors before they hit production
Rolling forward or back safely and consistently
Start with Your Pain Points
You don’t need to boil the ocean. In fact, you shouldn’t.
Start with:
The changes that happen most frequently
The areas that cause the most outages or errors
The operations that cost the most engineering time
Examples:
BGP neighbor provisioning
VLAN/VRF creation
IP address allocations and tracking
ACL or route-map updates
Backup and state validation
These are perfect candidates to:
Model in a source of truth
Automate with templated configuration
Validate with basic pre/post checks
Deploy through a CI/CD loop (even a simple one!)
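The pre/post-check idea can start very small: snapshot the relevant state before and after the change, then diff. The snapshot function below is a stand-in for whatever show output or API data you actually collect:

```python
def snapshot(device_state):
    """Stand-in for collecting real state (routes, neighbors, ARP, ...)."""
    return {"routes": set(device_state["routes"])}

def diff(before, after):
    return {
        "added": sorted(after["routes"] - before["routes"]),
        "removed": sorted(before["routes"] - after["routes"]),
    }

device = {"routes": {"10.0.0.0/8", "192.168.1.0/24"}}
pre = snapshot(device)

# ... the change happens here: add a route for the new VRF ...
device["routes"].add("172.16.5.0/24")

post = snapshot(device)
delta = diff(pre, post)
print(delta)
assert delta["removed"] == [], "change must not remove existing routes"
```

Even this trivial before/after diff catches the classic failure mode of a change silently withdrawing routes it was never supposed to touch.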
Even just automating one core service well can give you a major confidence boost and a template for others to follow.
Avoid the “Overengineering” Trap
One common failure mode is to assume that being “software-centric” means you have to build a controller, write custom frameworks, or design your own intent engines.
You don’t.
Use the ecosystem:
Tools like Ansible, Nornir, Salt, and Netmiko exist for a reason
GitOps-style flows work just as well for network config as for app deployments
Validation libraries like pyATS, Batfish, and NAPALM offer battle-tested logic
Open-source platforms like NetBox, Nautobot, and NSOT give you a strong source of truth
Don’t reinvent everything. Pick reliable building blocks and glue them together thoughtfully.
Remember: your goal isn’t complexity. It’s clarity, safety, and scale.
Embrace Incremental Wins
This evolution will take time. Don’t expect everyone on your team, or every stakeholder, to instantly adopt this mindset.
But you can:
Build proof-of-concept pipelines that demonstrate value
Document small wins (e.g., “This used to take 3 hours, now it takes 15 seconds”)
Refactor one script at a time to support structured input and dry-run mode
Lead brown-bag sessions and internal demos
Share results with leadership in business terms (fewer outages, faster delivery, lower ops cost)
In time, your internal culture begins to shift. Engineers begin to ask:
“Why would I run this manually if we can describe it in code and verify the outcome?”
That’s when real change starts to compound.
Think Bigger, But Execute Smaller
You don’t have to solve network automation for the entire company right away.
Start by solving it for yourself. Then, your team. Then, your department.
Over time, that system of thinking scales.
And one day, you look back and realize:
You're no longer stuck in a world of brittle scripts and one-off CLI heroics.
You're building a system that behaves and evolves like modern infrastructure should.
Let's explore now how that evolution connects directly to the business: when your network actually reflects the performance, reliability, and security expectations of the organization and how intent becomes your guidepost for scale and resilience.
Intent, State, and the End Game
So far, we’ve talked about the limitations of scripts, the dangers of unstructured automation, and the need for software-centric thinking.
But what does all of this actually build toward?
The true endgame isn’t simply “automated configs.” It’s not about writing more code, managing fewer devices manually, or deploying faster.
The real endgame is a network where intent and state are always in sync and aligned with your business outcomes.
This is the frontier of modern network operations.
Define the Intent → Enforce the State → Monitor the Outcome
To reach this level of operational maturity, you need to rethink the fundamental relationship between what the business wants and what the network is doing.
It starts with a pipeline that:
Captures Intent
“We need this site to have resilient Internet access.”
“This application must have low-latency east-west routing between DCs.”
“This VRF should never talk to that VRF.”
“All critical apps must run on paths with under 30ms jitter.”
Translates That Intent into Configuration
Templates and logic render device-specific configurations
Routing, ACLs, QoS, tunnels, telemetry, etc. are automatically generated
Enforces and Validates State
Control plane: is the route table what we expect?
Data plane: are packets forwarding as designed?
Security: are access controls matching policy?
Availability: are services up and reachable?
Monitors Divergence
Drift detection alerts you when configs or states deviate from the expected intent
Telemetry confirms if performance is degrading
State snapshots allow rollback and post-mortem forensics
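At its core, drift detection is a continuous comparison between intended and observed state. Intent and observation are shown here as plain dicts with made-up keys; a real system would feed them from the source of truth and from streaming telemetry:

```python
# Intended state per site, from the source of truth.
INTENT = {
    "branch-042": {"isps": 2, "pos_to_guest_wifi": "deny", "dia": True},
}

# Observed state, from telemetry -- one ISP link is down.
OBSERVED = {
    "branch-042": {"isps": 1, "pos_to_guest_wifi": "deny", "dia": True},
}

def detect_drift(intent, observed):
    """Return every (site, key, wanted, actual) tuple that diverges."""
    drift = []
    for site, wanted in intent.items():
        actual = observed.get(site, {})
        for key, value in wanted.items():
            if actual.get(key) != value:
                drift.append((site, key, value, actual.get(key)))
    return drift

for site, key, wanted, actual in detect_drift(INTENT, OBSERVED):
    print(f"DRIFT {site}: {key} wanted={wanted} actual={actual}")
```

Everything else in the loop, alerting, auto-remediation, rollback, hangs off this one comparison between what you declared and what the network is actually doing.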
This loop forms the core of intent-based networking, but more importantly, it aligns the network with business expectations and user experience.
Real-World Example: The Business-Aligned WAN
Imagine you operate a multinational retailer with 600 branches and a hybrid cloud backbone.
The intent for each branch is simple:
Use dual ISPs for resilience
Enforce zero trust between guest Wi-Fi and POS systems
Backhaul critical transactions to a regional hub
Use direct internet access for all non-sensitive workloads
Maintain under 100ms round-trip time to core SaaS apps
If you hard-code that logic into CLI configs? You’ll be chasing ghosts forever.
But if you encode that intent into structured policies, and the system:
Provisions each branch consistently
Validates forwarding paths dynamically
Alerts when latency spikes beyond SLA
Auto-corrects drifted routing policy
...then you’ve achieved real automation.
Now, the network is not just automated; it’s truthful, resilient, and aligned.
The End of Fragility
Most traditional networks are fragile because they rely on:
Assumed correctness (“We pushed the config, so it must be fine.”)
Human awareness (“We’ll know if something breaks.”)
Manual fixes (“We’ll just SSH and fix it.”)
That mindset doesn’t scale. It doesn’t support real-time systems, hybrid cloud, or always-on platforms.
When intent and state are decoupled, things break silently.
When they’re bound tightly, the network can self-heal, self-validate, and self-correct.
That’s the real goal. That’s the vision.
Why This Matters More Than Ever
Modern organizations are:
Deploying services faster than ever
Operating in multi-cloud and hybrid environments
Scaling globally while reducing operational headcount
Facing stricter SLAs, compliance requirements, and security demands
The network cannot be a bottleneck.
To avoid this, it must evolve from a set of interconnected boxes to a software-driven system of intent and state.
That’s not a future dream; it’s already happening at scale in hyperscalers, cloud providers, and forward-looking enterprises.
You don’t need to be a FAANG company to do this. But you do need to start thinking like one.
Stop Romanticizing Scripts
Let’s be honest: we’ve all done it.
You write a script that saves you hours of repetitive CLI work. It feels amazing. You automate device upgrades, deploy BGP neighbors, or back up configs across 100 routers in a few seconds, and you think:
“This is it. I’m doing automation.”
And you are, in a way. But only part of it.
What you’re really doing is speeding up a manual process. You’re accelerating toil. You’re giving the CLI a rocket engine without ever questioning whether that CLI should even exist in that context.
It’s not that scripts are bad. In fact, they’re often the first step toward progress. But the moment you start believing that a folder full of Python files is your automation strategy, you’re no longer solving the real problem; you’re just hiding it under the illusion of velocity.
Velocity ≠ Progress
You can ship bad changes faster.
You can break things at scale.
You can accumulate so much script-based debt that your team becomes afraid to touch the very tools they’ve built.
That’s not automation. That’s entropy disguised as efficiency.
True progress is not doing old things faster. It’s doing better things entirely, with purpose, architecture, and alignment to real-world outcomes.
We Need a Cultural Reset
Network engineering, as a discipline, must move beyond the pride of terminal mastery.
We have to stop celebrating the “hero engineer” who logs into devices during an outage and saves the day by pasting in a fix from memory.
We need to celebrate predictability over improvisation, design over duct tape, and intent alignment over quick hacks.
The best network is the one that just works reliably, silently, and invisibly.
The best automation is the one nobody notices because it prevents outages before they happen and ensures users never feel pain.
This Is Your Moment
The tools exist. The frameworks are mature. The problems are well-understood.
The only missing ingredient in many organizations is the willingness to change.
So, let’s make this personal.
Ask yourself:
Is my automation strategy helping us scale safely?
Do our tools reflect the network’s purpose, or just make config easier?
Could someone else use my scripts without fear?
What happens to our systems if I walk away tomorrow?
If the answers aren’t what you hoped, that’s not failure. That’s an opportunity.
What’s Next?
Start small. Think big.
Refactor a script. Introduce dry-run validation. Adopt a source of truth. Teach your team. Build your own POC pipeline.
Show that automation isn’t a buzzword. It’s a better way to build.
And when you hit friction, cultural, technical, or organizational, remember:
You’re not just pushing config.
You’re building infrastructure that supports ideas, businesses, and lives.
That deserves better than scripts in a folder.
It deserves intent, architecture, and purpose. It deserves the future.
Let me know how you feel about this rather long article! See you in the next high-signal edition!
Leonardo Furtado

