(The image is from a LinkedIn post; I don't know who originally generated it.)

Jokes aside, this one is serious.

When the AI hype started, a lot of us reacted the same way: we laughed.

The answers were wrong in hilarious ways. The images looked like fever dreams. Hands with seven fingers, routers with random ports, nonsense configs with made-up commands. It felt like a toy that would fade after a few viral memes.

I almost never do this, but I did it then: I dismissed it. I laughed hard.

But then I did what any halfway responsible engineer eventually has to do: I stopped laughing, took a deep breath, and looked at it clinically.

I approached it the same way I’d evaluate a new routing protocol or a new ASIC architecture: What can it actually do? Where does it break? How fast is it improving? What would this mean for real systems, for actual work, for the way we provide for our families?

And that’s when things got uncomfortable.

Because once you get past the memes and start using these models for serious work, especially code and reasoning, you realize something:

They are already better and faster than the average developer at a huge chunk of tasks.
Not perfect. Not infallible. But fast, tireless, and getting better.

For network engineers, that matters.

1. From Memes to “Oh, This Is Serious.”

Early AI outputs were easy to mock. You’d ask for a network diagram and get some surreal painting of switches melting into clouds. You’d ask for a config and see fabricated commands. It felt more like a party trick than an engineering tool.

Then the models iterated.

Suddenly they could:

  • Generate Python scripts using Netmiko or Nornir that actually logged into devices and collected data (a minimal sketch of that kind of script follows this list).

  • Draft Ansible playbooks that were syntactically correct and reasonably structured.

  • Produce Junos, IOS-XR, NX-OS, and EOS snippets that were not just plausible but, in many cases, correct minus a few details.
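
To give a sense of what that looked like, here is roughly the kind of Netmiko script the models were suddenly able to produce on request. This is my own minimal sketch, not a transcript of any model’s output, and every device detail in it (platform, address, credentials) is a made-up placeholder:

from netmiko import ConnectHandler

# One entry per device; all values here are placeholders, not real gear.
devices = [
    {
        "device_type": "cisco_ios",   # assumed platform; change for your vendor
        "host": "192.0.2.10",         # documentation-range address
        "username": "admin",
        "password": "example-password",
    },
]

for device in devices:
    # Open an SSH session, use the prompt as a rough hostname,
    # collect interface state, then close the session.
    conn = ConnectHandler(**device)
    hostname = conn.find_prompt().strip("#>")
    output = conn.send_command("show ip interface brief")
    conn.disconnect()
    print(f"=== {hostname} ===")
    print(output)

Nothing clever is happening there, and that’s the point: it’s exactly the login-and-collect boilerplate that used to eat an hour and now takes a prompt.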

I went from “this is ridiculous” to “wait, that’s… alarmingly good” in a very short period.

So I started testing it systematically:

I’d give it slightly messy natural-language requirements and a few example configs, then see what happened. The results were uneven at first, but the direction of travel was obvious: towards competence.

While most of the internet was still sharing AI memes, I was quietly asking myself:

“If this thing can write 60–70% of the code I need, faster than I can, what exactly am I contributing?”

That’s not a comfortable question. But it’s one we all need to face.

2. What Changed: AI That Codes and Actually Thinks (a Bit)

There’s been “AI hype” before: expert systems, traditional machine learning, and endless “AI-powered” products. Why does this wave feel different?

These models are general-purpose and code-capable.

Traditional ML was narrow. You fed it a labeled dataset, and it learned to classify or predict something specific: traffic anomalies, churn risk, link failure probabilities. Powerful, but tightly scoped.

Modern large language models are different. They’ve been trained on:

  • Natural language from books, articles, forums, and documentation.

  • Huge amounts of publicly available code and config examples.

  • A wide variety of problem-solving patterns.

The result is something that can:

  • Write working Python, Bash, YAML, Terraform, and vendor configs.

  • Explain complex topics like BGP convergence, EVPN, and SR-MPLS in plain language.

  • Combine “here’s the theory” with “here’s the script” in a single conversation.
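
That last combination, explanation plus script in one answer, is worth making concrete. Ask why a BGP session is flapping and a typical response pairs a short walk through the session states with something like the sketch below, which scans “show bgp summary” output and flags neighbors that are not Established. The column layout it assumes is a simplification and varies by platform, so treat it as an illustration, not a drop-in tool:

import re

def parse_bgp_neighbors(summary_output):
    """Map neighbor address -> 'up' or the down-state name (e.g. 'Active')."""
    neighbors = {}
    for line in summary_output.splitlines():
        fields = line.split()
        # Neighbor rows start with an IPv4 address in the first column.
        if fields and re.match(r"^\d+\.\d+\.\d+\.\d+$", fields[0]):
            state = fields[-1]
            # On many platforms the last column is a prefix count (a number)
            # when the session is Established, and a state name when it is not.
            neighbors[fields[0]] = "up" if state.isdigit() else state
    return neighbors

# Hypothetical sample output, trimmed to the columns the parser cares about.
sample = (
    "Neighbor        V    AS MsgRcvd MsgSent TblVer InQ OutQ Up/Down State/PfxRcd\n"
    "192.0.2.1       4 65001   12345   12340    100   0    0 1d02h            250\n"
    "192.0.2.2       4 65002       0       0      1   0    0 never         Active\n"
)
for peer, state in parse_bgp_neighbors(sample).items():
    print(peer, state)

The script itself is trivial; pairing it with a correct explanation of why a neighbor sits in Active is the part that used to require a person who had read the RFC.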

In practice, that means they can automate both:

  • The repetitive, boilerplate tasks we always wanted to get rid of.

  • Some of the “higher-order” tasks that used to require decent judgment and experience.

If you’re an average coder, competent but not exceptional, these models now feel like a colleague who:

  • Works faster than you.

  • Never gets tired.

  • Doesn’t get bored with boring things.

And even if you’re very good, they can still drastically compress the time you spend on glue work, scaffolding, and tedious transformations.

That’s where the “60–70% of tasks” figure comes from. If you decompose your week into:

  • “Things that are novel and require real judgment” versus

  • “Things that are pattern application, boilerplate, translation, or synthesis,”

you’ll notice how much of your time is in that second bucket. That’s exactly the bucket AI is already attacking.
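
To make that second bucket concrete, here is the kind of translation work that fills it: turning a small data structure into vendor config with a template. The VLAN data, addressing, and IOS-style syntax below are made-up examples, and the whole thing is a sketch rather than a production tool:

from jinja2 import Template

# Made-up VLAN data; in practice this might come from a spreadsheet or IPAM export.
vlans = [
    {"id": 110, "name": "USERS", "ip": "10.10.110.1", "mask": "255.255.255.0"},
    {"id": 120, "name": "VOICE", "ip": "10.10.120.1", "mask": "255.255.255.0"},
]

# Hypothetical IOS-style stanza; adjust the syntax for your platform.
template = Template(
    "{% for vlan in vlans %}"
    "vlan {{ vlan.id }}\n"
    " name {{ vlan.name }}\n"
    "interface Vlan{{ vlan.id }}\n"
    " description {{ vlan.name }} gateway\n"
    " ip address {{ vlan.ip }} {{ vlan.mask }}\n"
    " no shutdown\n"
    "{% endfor %}"
)
print(template.render(vlans=vlans))

This is the work that feels productive because it produces pages of config, and it’s also exactly the kind of task a model can now draft in seconds from a one-paragraph prompt.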


Subscribe to our premium content to read the rest.

Become a paying subscriber to get access to this post and other subscriber-only content. No fluff. No marketing slides. Just real engineering, deep insights, and the career momentum you’ve been looking for.


A subscription gets you:

  • ✅ Exclusive career tools and job prep guidance
  • ✅ Unfiltered breakdowns of protocols, automation, and architecture
  • ✅ Real-world lab scenarios and how to solve them
  • ✅ Hands-on deep dives with annotated configs and diagrams
  • ✅ Priority AMA access — ask me anything
