The Routing Intent by Leonardo Furtado
The Current State of Network Automation in Network Engineering
The Present and Future of Network Automation

The Reality Check: Networking Lagging Behind
Despite decades of technological advancements, network engineering remains one of the last frontiers of true infrastructure modernization. While servers, containers, and services have been abstracted and fully integrated into CI/CD workflows, networking still lags significantly behind.
In industries like finance and telecom, often considered pioneers of technology, the network remains largely bound to manual intervention and outdated paradigms. The command-line interface (CLI) is still the dominant tool, despite its known limitations around scale, consistency, and auditability.
Modern infrastructure goals such as self-healing, elasticity, observability, and policy-based intent are now commonplace in compute and storage. Yet networking frequently operates in isolation, as if immune to the demands of velocity and reliability. This technological divide not only slows down business transformation but also creates significant operational risk.
Even in 2025, many organizations remain entrenched in legacy network operations, failing to embrace the same modernization efforts that have transformed application development and infrastructure management.
Real-Life Examples of the Issues I'm Discussing
Let's revisit some of the industry examples I mentioned earlier to put things into context:
Let’s take financial institutions as an example. Banks with billions of dollars under management often run networks with manual ticket-based change processes, where engineers copy and paste pre-approved CLI snippets during off-hours maintenance windows. These workflows, unchanged for years, create bottlenecks where a seemingly minor change, such as updating BGP communities or ACL rules, can require days of review, staging, and risk approvals.
Worse yet, rollback plans are often improvised rather than automated, increasing downtime exposure. Ironically, in these highly regulated, risk-averse industries, the “manual-first” posture increases operational risk through inconsistency, human error, and slow time-to-resolution.
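For contrast, here is a minimal sketch of what a reviewable, reversible change of this kind could look like on a NAPALM-supported platform. The hostname, credentials, and the bgp_community_update.cfg change file are hypothetical placeholders, not a description of any particular bank’s tooling.

```python
# Minimal sketch: stage a pre-approved change as a candidate config, review the
# diff, and commit or roll back. Device details and filenames are hypothetical.
from napalm import get_network_driver

driver = get_network_driver("ios")
device = driver(hostname="core-rtr-01", username="svc_netops", password="***")

device.open()
try:
    # Stage the pre-approved snippet as a candidate (merge, not replace).
    device.load_merge_candidate(filename="bgp_community_update.cfg")

    # The diff is what reviewers should be approving, not a pasted CLI snippet.
    diff = device.compare_config()
    print(diff)

    if diff and input("Apply this change? [y/N] ").lower() == "y":
        try:
            device.commit_config()
        except Exception:
            # Rollback becomes a single, tested call rather than an improvised plan.
            device.rollback()
            raise
    else:
        device.discard_config()
finally:
    device.close()
```

The point is not the specific library: it is that the diff becomes the artifact under review and the rollback becomes a tested code path instead of something scribbled into a ticket at 2 a.m.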
In the telecommunications sector, where network scale and complexity are enormous, many providers still depend on proprietary hardware managed through vendor-specific GUIs or CLI sessions. Network configuration drift is common because there is no single source of truth.
Cross-vendor inconsistencies, ad hoc scripts written by individual engineers, and undocumented tribal knowledge result in fragile networks where failures are difficult to trace and nearly impossible to prevent ahead of time.
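A common first step out of this situation is a drift check: pull the running configuration from each device and compare it against whatever the organization treats as intended state. The sketch below is illustrative only and assumes NAPALM-supported devices; the inventory, credentials, and the intended/ directory of golden configs are assumptions for the example.

```python
# Minimal drift check: compare each device's running config against an
# "intended" config kept under version control. Inventory and paths are
# hypothetical placeholders.
import difflib
from pathlib import Path
from napalm import get_network_driver

INVENTORY = {
    "pe-router-01": {"driver": "iosxr", "host": "192.0.2.11"},
    "agg-switch-07": {"driver": "eos", "host": "192.0.2.57"},
}

for name, info in INVENTORY.items():
    driver = get_network_driver(info["driver"])
    device = driver(hostname=info["host"], username="svc_netops", password="***")
    device.open()
    running = device.get_config(retrieve="running")["running"]
    device.close()

    intended = Path(f"intended/{name}.cfg").read_text()
    diff = list(difflib.unified_diff(
        intended.splitlines(), running.splitlines(),
        fromfile=f"{name} (intended)", tofile=f"{name} (running)", lineterm="",
    ))
    if diff:
        print(f"DRIFT on {name}:")
        print("\n".join(diff))
    else:
        print(f"{name}: in sync")
```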
In one recent example, a regional telco suffered a major DNS outage when a misconfigured route policy triggered a ripple of prefix suppressions. The fix? A war room, six engineers in real-time terminal sessions, and a half-day of manual reconfiguration.
Even cloud-first startups and SaaS providers sometimes suffer from automation asymmetry. Their developers push code to production 50 times a day, but the network layer, be it their Kubernetes underlay, hybrid WAN architecture, or VPC peering, lags far behind. Engineers might use Terraform or Ansible for compute resources, but fall back to SSH and “show run” diffs for routers and firewalls.
One fast-growing SaaS company admitted that network device onboarding required 22 steps, 18 of which were still manual, including VLAN tagging, ACL staging, and IPAM updates. These delays directly impacted their ability to expand into new data centers or regions at the speed the business demanded.
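To make the contrast concrete, one of those manual steps, such as VLAN tagging on a newly onboarded switch, is usually only a short netmiko push. The sketch below is a hypothetical illustration; the hostname, VLAN number, and interface range are invented for the example, and credentials would normally come from a vault.

```python
# Sketch of automating a single onboarding step (VLAN tagging) with netmiko.
# Hostname, VLAN number, and interface range are hypothetical.
from netmiko import ConnectHandler

new_leaf = {
    "device_type": "cisco_ios",
    "host": "leaf-23.dc2.example.net",
    "username": "svc_netops",
    "password": "***",
}

vlan_config = [
    "vlan 120",
    " name app-tier",
    "interface range GigabitEthernet1/0/1 - 24",
    " switchport mode access",
    " switchport access vlan 120",
]

with ConnectHandler(**new_leaf) as conn:
    output = conn.send_config_set(vlan_config)  # push the change
    conn.save_config()                          # persist to startup config
    print(output)
```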
What Network Engineers Are Doing Today
In their effort to keep up with the growing complexity and scale of modern infrastructure, many network engineers have begun embracing automation. But in practice, this "automation" often looks more like scattered scripting efforts than a cohesive engineering transformation.
Python, Ansible, and Bash have become standard tools in the network engineer's toolbox; however, they are often used in isolation to automate individual tasks rather than as building blocks of end-to-end workflows. Let’s take a look at what’s typically happening in real-world environments.
Across the industry, it’s common to see automation used for routine tasks such as:
- Backing up device configurations using scheduled Python scripts or RANCID derivatives (see the sketch after this list)
- Performing bulk configuration changes using netmiko/napalm scripts, or templated Jinja2 playbooks with Ansible
- Monitoring SNMP OIDs or polling REST APIs to build rudimentary dashboards or alert systems
- Restarting stuck daemons or resetting BGP sessions via cron jobs or expect-driven login macros
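The first item above, scheduled configuration backups, illustrates how small these scripts tend to be. Here is a minimal sketch of that pattern; the device list and backup directory are assumptions, and in practice credentials would come from a secrets store rather than being hard-coded.

```python
# Minimal sketch of the scheduled-backup pattern: pull the running config from
# each device and write a dated copy to disk. Run from cron or a scheduler.
# The device list and backup path are hypothetical.
from datetime import date
from pathlib import Path
from netmiko import ConnectHandler

DEVICES = [
    {"device_type": "cisco_ios", "host": "edge-rtr-01", "username": "svc_netops", "password": "***"},
    {"device_type": "arista_eos", "host": "spine-01", "username": "svc_netops", "password": "***"},
]

BACKUP_DIR = Path("/var/backups/network") / date.today().isoformat()
BACKUP_DIR.mkdir(parents=True, exist_ok=True)

for dev in DEVICES:
    with ConnectHandler(**dev) as conn:
        running = conn.send_command("show running-config")
    (BACKUP_DIR / f"{dev['host']}.cfg").write_text(running)
    print(f"backed up {dev['host']}")
```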
These efforts are not meaningless. In fact, they serve as an important gateway to broader network automation maturity. They demonstrate initiative, problem ownership, and a recognition that the CLI-only model is no longer sustainable.
But they are almost always reactive and tactical, designed to relieve immediate pain, not to enable systemic progress.
There are some critical structural flaws in this piecemeal approach:
