I recall creating a survey for new newsletter subscribers, asking eight key questions about their content preferences and frequency. I also added an open-ended question for free feedback. One question stood out, inspiring me to craft a series of posts addressing this subscriber's inquiry:

Go deeper and more specific about what it actually looks like to build an intent-based network. What are the system components, and what do they do? Where do they run? How are they maintained? How do you introduce them into existing infrastructure? Even more specific: what do the repo layout, pipelines, and deployment look like for this system?

Ready to begin?

A few years ago, I watched a network team do something that looked perfectly reasonable… until it wasn’t.

They had a widespread incident triggered by a “simple” routing policy change. One engineer adjusted a BGP export filter for a single customer. Another engineer, somewhere else, fixed a related reachability issue by “just patching it on the box” because the customer was screaming and the maintenance window was closing. A third engineer later pushed an automation job that reapplied what Git said was true, overwriting the emergency patch, because Git was “the source of truth.”

The funny part of this mess was that everyone involved did what they believed was the right thing!

  • The config in Git was stale, but “approved.”

  • The CLI patch was correct, but “invisible.”

  • The automation was consistent, but “blind.”

So, the network oscillated between two realities: the reality people thought they were operating in, and the reality packets were actually living in.

If you’ve ever lived through that kind of drift, where your tooling is working, your engineers are competent, and your outcomes are still chaotic, you already understand why intent-based networking (IBN) exists.

But there's always a catch, isn't there? The problem is that most IBN explanations stop at philosophy.

This article is the opposite. It’s the “concrete” blueprint: what “intent” really is, what it is not, and the system components you must have if you want to claim you’re building an intent-based network, whether you’re operating a data center fabric, a service provider core, or a global edge.

The real problem IBN solves: “configuration” is not control

Traditional network automation is often built on the belief that:

If we can generate and push the right configuration, the network will behave as desired.

That belief holds right up until it doesn’t; usually at scale, under pressure, and when failure modes emerge because the network is not a set of text files.

A network is:

  • A distributed control plane with partial information and convergence behavior

  • A data plane with forwarding constraints, hardware pipelines, and failure modes

  • A constantly changing environment: optics, capacity, maintenance, bugs, policy, security events

  • Humans making time-sensitive decisions

  • Systems interacting (DDoS appliances, load balancers, firewalls, tunnels, cloud routing domains)

Configuration is merely one lever in that system, and it’s the least honest one, because it can be “correct” while the network is objectively broken.

IBN begins when you admit this:

The job isn’t to maintain configuration.
The job is to maintain outcomes and invariants.

And that, my friend, changes everything.

What “intent” is (and why it’s not a prettier config template)

Intent is a declaration of outcomes, constraints, and safety rules expressed in a form that’s:

  1. Versioned

  2. Validated

  3. Auditable

  4. Compilable

  5. Verifiable against reality

Intent lives above vendor syntax and below hand-wavy prose. It’s not “we should have reliable routing.” It’s closer to:

  • “Tenant Payments must not exchange traffic with tenant GuestWiFi.”

  • “Customer C may advertise only these prefixes; any more is a leak.”

  • “All internet edge routers must prefer transit A for region EU unless capacity crosses threshold X.”

  • “Every leaf must have at least N ECMP paths to every spine.”

  • “Every RR-client session must be established with these capabilities; drift is a Sev2.”

These are operational truths you actually care about. IBN also forces a second step that most “automation” never gets to:

You don’t just declare intent. You declare what evidence would prove it’s being met.

That’s the missing layer, because “desired state” without “proof of compliance” is just wishful thinking with better formatting.
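To make that concrete, here is one hypothetical shape for an intent record, sketched in Python. The field names (`outcome`, `constraints`, `evidence`) are my own illustration, not a standard model; the point is that the evidence travels with the declaration:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Intent:
    """Illustrative intent record: the outcome it declares, the guardrails
    the compiler must respect, and the evidence that would prove the
    intent is actually being met."""
    name: str
    outcome: str          # what must be true, in operational terms
    constraints: list     # guardrails for the compiler/planner
    evidence: list        # observable checks that prove compliance

# Hypothetical example in the spirit of the bullets above
leak_guard = Intent(
    name="customer-c-export-policy",
    outcome="Customer C advertises only its allocated prefixes",
    constraints=["max-prefix: 50", "export only to transit"],
    evidence=[
        "BMP: advertised prefixes are a subset of the allocation",
        "telemetry: export policy attached on all edge sessions",
    ],
)
```

Notice that without the `evidence` list, this would just be a nicer-looking desired state; with it, a validator has something to check against reality.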

The IBN mental model: three states and a loop

If you remember nothing else, remember this:

An intent-based network is a closed-loop system.

It has three realities:

  1. Intended state
    What you want to be true (outcomes + constraints + invariants).

  2. Compiled state
    What your system thinks it must do to make the intent true (device configs, policies, workflow steps).

  3. Observed state
    What the network is actually doing (control-plane facts, forwarding facts, counters, events).

And then it has the loop:

Observe → Compare → Decide → Act → Verify → Repeat

Everything else, including UI, “AI,” and dashboards, is pretty much secondary.

If your system cannot continuously compare observed reality to intent and respond, you do not have IBN. You have config generation, which is a different thing entirely.

The minimum viable IBN system: the seven boxes you can’t avoid

Teams name these things differently. Vendors package them differently. Some companies buy half of them and build the rest.

But every serious IBN implementation converges on the same functional components. I call them “the seven boxes” because once you see them, you’ll start recognizing them everywhere.

1) Intent API and schema registry

This is where intent becomes a product, not an idea.

If intent is free-form YAML, it will devolve into tribal knowledge. If it’s strongly typed and validated, it becomes a stable interface.

A proper intent layer includes:

  • Schema definitions (JSON Schema, Protobuf, OpenAPI, or strongly typed models in code)

  • Versioning (v1, v2 migrations, deprecation strategy)

  • Validation rules beyond shape (e.g., “prefixes must be within allocations,” “ASNs must be registered,” “no overlapping VRFs across tenants”)

  • Policy controls that say what’s allowed (guardrails)

This is the difference between “we store configs in Git” and “we operate a network as a software product.”

Anecdote from the trenches: the first time you enforce schema validation, people complain that you’re “slowing them down.” The second time it prevents a production leak (because a prefix-list would have matched more than intended), it becomes your most popular feature.
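The “prefixes must be within allocations” rule is a good example of validation beyond shape, because it needs semantics that a schema alone can’t express. A minimal sketch using Python’s standard `ipaddress` module (the function name and return shape are my own):

```python
import ipaddress

def validate_prefixes(declared, allocations):
    """Semantic validation beyond schema shape: every declared prefix must
    fall inside one of the customer's allocated blocks.
    Returns the list of out-of-allocation prefixes (empty means valid)."""
    allocated = [ipaddress.ip_network(a) for a in allocations]
    violations = []
    for p in declared:
        net = ipaddress.ip_network(p)
        # subnet_of() only compares same address families, so guard on version
        if not any(net.subnet_of(a) for a in allocated
                   if a.version == net.version):
            violations.append(p)
    return violations

# A declared prefix inside the allocation passes; a stray one is flagged
assert validate_prefixes(["10.0.1.0/24"], ["10.0.0.0/16"]) == []
assert validate_prefixes(["192.0.2.0/24"], ["10.0.0.0/16"]) == ["192.0.2.0/24"]
```

Rejecting the second case at merge time is exactly the kind of check that prevents a prefix-list from matching more than intended.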

2) Source of truth

No intent system survives contact with reality unless it can answer a simple question reliably:

What is this network made of?

Inventory, topology, roles, allocations, underlay addressing, BGP sessions, tenants, VRFs, link capacity, optics, device capabilities, OS versions, and operational constraints.

In practice, “source of truth” is rarely one database. It’s usually a federation. Some examples:

  • NetBox for inventory + IPAM

  • Internal DB for service definitions

  • Cloud APIs for VPC routing domains and attachments

  • A topology service that understands L2/L3 adjacency and link attributes

  • A capability matrix (this platform supports gNMI; that one is still SNMP/syslog)

IBN reads from this truth. Humans should not “just know” it.
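One consequence of federation is that ownership of each fact must be explicit, or two backends will silently disagree. A toy sketch of a federated lookup (the backend names and fields are hypothetical):

```python
def device_facts(hostname, backends):
    """Federated source-of-truth lookup: each backend contributes the
    fields it owns. A later backend may not silently overwrite an
    earlier one; a conflict is an error, not a merge."""
    facts, owners = {}, {}
    for name, lookup in backends:
        for key, value in lookup(hostname).items():
            if key in facts and facts[key] != value:
                raise ValueError(
                    f"{key} conflict: {owners[key]} says {facts[key]!r}, "
                    f"{name} says {value!r}")
            facts[key], owners[key] = value, name
    return facts

# Hypothetical backends: an inventory system and a capability matrix
backends = [
    ("netbox", lambda h: {"site": "ams1", "role": "leaf"}),
    ("capability-matrix", lambda h: {"gnmi": True}),
]
facts = device_facts("leaf-12", backends)
```

Failing loudly on conflicting answers is the design choice that keeps “truth” from quietly becoming “whichever system answered last.”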

3) Compiler / renderer

This is where intent becomes actionable.

The compiler takes:

  • intent declarations

  • source-of-truth facts

  • platform constraints

…and produces:

  • device-specific configuration artifacts (Junos policy, IOS-XR route-policy, EOS config, SR OS policy, etc.)

  • workflow steps (e.g., create ACL object, apply to interfaces, commit confirmed, verify)

  • and, crucially, expectations (what you should observe if everything is correct)

A compiler is more than a templating engine. It’s a translation layer from “what we mean” to “what the network must do,” including ordering and constraints.

And the moment you support more than one vendor, you realize why “intent” exists: it’s the only way to keep your higher-level logic stable while renderers vary.
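A toy sketch of that split: one intent, per-platform renderers, and, crucially, the expectations emitted alongside the config. The renderer bodies are placeholders and the expectation shapes are my own invention:

```python
def compile_export_policy(intent, platform):
    """Toy compiler: the same intent renders to different vendor syntax,
    but always ships with the expectations a validator will check."""
    renderers = {
        "junos": lambda i: f"policy-statement {i['name']} {{ ... }}",
        "iosxr": lambda i: f"route-policy {i['name']}\n  ...\nend-policy",
    }
    config = renderers[platform](intent)
    # Expectations: what observed state should show if the deploy worked
    expectations = [
        {"check": "policy-attached", "policy": intent["name"]},
        {"check": "advertised-subset-of", "prefixes": intent["prefixes"]},
    ]
    return config, expectations

config, expectations = compile_export_policy(
    {"name": "CUST-C-OUT", "prefixes": ["203.0.113.0/24"]}, "junos")
```

The higher-level logic (what must be attached, what may be advertised) stays identical across vendors; only the renderer dictionary grows.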

4) Planner / diff engine

This component answers questions that engineers ask implicitly, but automation often ignores:

  • What will change?

  • In what order?

  • What’s the blast radius?

  • What dependencies exist?

  • What’s the rollback plan?

  • Is this safe to execute right now?

A planner is where your operational maturity lives.

Example: You want to rotate BGP authentication or change the route policy on the edges. The planner knows:

  • do not flap all sessions at once

  • canary one pair first

  • verify convergence and route counts

  • proceed region by region

  • stop if telemetry shows instability

Without a planner, “automation” becomes a way to do unsafe things faster.
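The canary-then-batches behavior is easy to sketch. This toy planner (parameter names are mine) turns a device list into waves, with a verification gate after each one:

```python
def plan_rollout(devices, canary=2, batch=10):
    """Toy planner: a small canary wave first, then fixed-size batches,
    with a mandatory verify step between waves so telemetry instability
    can stop the rollout early."""
    waves = [devices[:canary]]
    rest = devices[canary:]
    waves += [rest[i:i + batch] for i in range(0, len(rest), batch)]
    return [{"wave": n, "targets": w, "verify_after": True}
            for n, w in enumerate(waves, 1) if w]

# 25 edges -> canary pair, then batches of 10, 10, and 3
plan = plan_rollout([f"edge{i}" for i in range(1, 26)])
```

Real planners also weigh blast radius, dependencies, and maintenance windows, but the shape is the same: ordered waves, each gated on evidence.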

5) Deployer / orchestrator

The deployer executes the plan.

This is where the real world bites: device locks, commit semantics, transient failures, partial success, and concurrency.

A deployer in an IBN system must be:

  • Idempotent (re-run safe)

  • Auditable (who changed what, when, why, from what PR)

  • Concurrency-aware (don’t blast 2,000 devices simultaneously unless you mean to)

  • Rollback-capable (commit confirmed, safe revert windows, or roll-forward strategies)

It must also speak the protocols you actually operate with:

  • gRPC/gNMI where possible

  • SSH where necessary

  • vendor APIs in some environments

  • transactional semantics (Junos commit confirmed, IOS-XR commit replace semantics, etc.)
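Idempotency and auditability are worth seeing in miniature. A sketch of a deploy step (the helper callables `get_running` and `push` are stand-ins for real transport code such as gNMI or SSH sessions):

```python
import hashlib

def apply_if_changed(device, rendered_config, get_running, push, audit_log):
    """Idempotent deploy step: push only when the rendered artifact differs
    from what the device is already running, and leave an audit record
    either way. Re-running it is always safe."""
    want = hashlib.sha256(rendered_config.encode()).hexdigest()
    have = hashlib.sha256(get_running(device).encode()).hexdigest()
    if want == have:
        audit_log.append({"device": device, "action": "noop", "hash": want})
        return False
    push(device, rendered_config)  # in practice: wrapped in commit-confirmed
    audit_log.append({"device": device, "action": "push", "hash": want})
    return True

# Stubbed device state for illustration
running = {"r1": "old config"}
log = []
apply_if_changed("r1", "new config",
                 lambda d: running[d],
                 lambda d, c: running.__setitem__(d, c),
                 log)
```

The second run against the same device is a recorded no-op, which is exactly the property that makes reconciliation loops safe to repeat.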

6) State collector & telemetry pipeline

This is where IBN stops being a philosophy and becomes a control system. You cannot verify intent with “show run” screenshots.

You need continuous, machine-readable reality:

  • Model-driven telemetry (gNMI subscriptions)

  • Syslog and structured event streams (failures, protocol state, flaps)

  • BMP if you care about BGP truth at scale (what routes are being received/advertised)

  • IPFIX/NetFlow for traffic evidence

  • SNMPv3 where legacy remains

  • Counters for ACL hits, drops, congestion signals

  • Config snapshots for drift detection (but never as the only truth)

The telemetry layer must deal with backpressure, ingestion lag, and retention. It’s not “monitoring.” It’s the feedback signal of your control system.
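To show what “feedback signal” means in practice, here is a tiny stream check over an event feed: flag an interface as unstable when it flaps too often inside a window. The class and thresholds are illustrative, not from any particular collector:

```python
from collections import deque

class FlapDetector:
    """Tiny streaming check: flag an interface as unstable when it changes
    state more than `limit` times within `window_s` seconds. Old events
    are pruned as time advances, so memory stays bounded per interface."""
    def __init__(self, window_s=60, limit=3):
        self.window_s, self.limit = window_s, limit
        self.events = {}  # interface -> deque of event timestamps

    def observe(self, interface, ts):
        q = self.events.setdefault(interface, deque())
        q.append(ts)
        while q and ts - q[0] > self.window_s:
            q.popleft()           # drop events outside the window
        return len(q) > self.limit

fd = FlapDetector(window_s=60, limit=3)
```

A signal like this is what lets a planner halt a rollout, or a validator raise a drift finding, without a human watching a dashboard.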

7) Validator & drift/reconciliation engine

This is the brain of IBN. It compares observed state to intent-derived expectations and then decides what to do.

And it must be opinionated, because “drift” isn’t a single category:

  • Benign drift: a counter mismatch, a harmless cosmetic config difference, a temporary link down.

  • Dangerous drift: route leakage, missing security policy, inconsistent RR views, unexpected exports, blackholing, loss of redundancy.

  • Intent conflict: two intents cannot both be satisfied; the system must refuse or escalate.

Reconciliation actions are similarly graded:

  • Alert only (humans decide)

  • Auto-remediate safe classes (reapply telemetry subscriptions, restore known-good policy)

  • Quarantine (remove a leaking peer, dampen announcements)

  • Escalate with a structured incident bundle (diff, evidence, blast radius, recommended plan)

This is where IBN gets its reputation. If this layer is weak, you end up with expensive dashboards. If it’s strong, you get fewer incidents.
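The grading above can be sketched as a triage function. The finding shapes and category names are hypothetical; the point is that the mapping from drift class to response is explicit and opinionated, not ad hoc:

```python
def reconcile_action(finding):
    """Triage sketch: drift isn't one category, so the response is graded.
    `finding` is a hypothetical dict produced by the validator."""
    if finding.get("intent_conflict"):
        return "escalate"                 # two intents cannot both hold
    kind = finding.get("kind")
    if kind == "route-leak":
        return "quarantine"               # pull the leaking advertisement
    if kind in {"missing-security-policy", "lost-redundancy"}:
        return "escalate"                 # dangerous: structured incident bundle
    if kind == "telemetry-subscription-missing":
        return "auto-remediate"           # safe class: reapply the subscription
    if kind in {"cosmetic-config-diff", "counter-mismatch"}:
        return "alert-only"               # benign: humans decide, no urgency
    return "alert-only"                   # unknown drift defaults to humans
```

Defaulting unknown drift to “alert only” rather than auto-remediation is the conservative choice: the system never acts on a class of drift no one has classified.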

Where the system runs (and why placement matters)

Let’s be specific, because my subscriber asked a fair question: where do these components run?

In real hyperscale/service-provider environments, you typically see a split:

Control and product layers in a platform environment

Most commonly, Kubernetes; sometimes VMs; rarely bare metal, unless requirements demand it.

  • Intent API, compiler, planner: stateless services behind authn/authz

  • Deployer workers: queue-driven jobs with strict audit trails

  • Validator: stream processors + scheduled batch verification

  • UI: optional but helpful for visibility

Telemetry collectors close to the network

This is frequently overlooked, and it matters.

Collectors near the network reduce latency and improve survivability during core incidents. They also support scaling patterns such as sharded gNMI collectors.

You may run:

  • Regional collectors per POP

  • BMP listeners near route reflectors

  • Syslog relays local to a region

  • Then forward normalized data to a central pipeline

A break-glass path that is visible by design

IBN does not remove humans. It removes invisible human impact.

You still allow CLI access because of reality. But you make it:

  • strongly authenticated

  • time-bound

  • logged

  • and detectable by drift reconciliation

If “break glass” changes can persist silently, your IBN system will eventually become fiction.

A realistic use case: Internet edge policy without midnight heroics

Let’s ground this in a scenario nearly everyone recognizes.

You run an internet edge. You have multiple transits, peering, customers, DDoS constraints, and region-based preferences. Someone fat-fingers a community, and suddenly you export what you didn’t mean to export.

In a config-centric world:

  • You scramble

  • You diff configs

  • You hunt down where the policy drift occurred

  • You patch the box

  • You hope automation doesn’t overwrite your fix

  • You write a postmortem that says “we need better validation”

In an intent-based world, you start earlier, and you end calmer.

Your intent says something like:

  • Customer C is allowed to advertise only these prefixes

  • Customer C’s routes must never be exported to peers (only to transit, or only to internal networks)

  • Customer routes must carry specific communities; missing communities are an error

  • The maximum accepted prefix count is X; exceeding is a leak signal

  • If a leak signal occurs: automatically shut down advertisement or move to quarantine policy

Now the system:

  1. Compiles expected export policy across vendors

  2. Deploys it via a safe plan (canary one edge pair)

  3. Uses BMP to verify what is actually being advertised

  4. Uses telemetry to confirm policy attachment and session health

  5. Detects divergence immediately if a box is manually patched

  6. Reconciles or escalates based on the severity
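The evidence check in steps 3–6 might look something like this sketch, run over BMP-observed routes. The route and intent shapes are my own simplification (exact-prefix matching rather than full longest-match logic):

```python
def check_export(observed_routes, intent):
    """Evidence check over BMP-observed routes for the edge intent above:
    each route must be in the allowed set, carry the required community,
    and the total count must stay under the leak threshold."""
    allowed = set(intent["allowed_prefixes"])
    findings = []
    for r in observed_routes:
        if r["prefix"] not in allowed:
            findings.append(("leak", r["prefix"]))
        if intent["required_community"] not in r["communities"]:
            findings.append(("missing-community", r["prefix"]))
    if len(observed_routes) > intent["max_prefixes"]:
        findings.append(("max-prefix-exceeded", len(observed_routes)))
    return {"compliant": not findings,
            "findings": findings,
            "action": "quarantine" if findings else None}

intent = {"allowed_prefixes": ["203.0.113.0/24"],
          "required_community": "65000:100",
          "max_prefixes": 10}
ok = check_export([{"prefix": "203.0.113.0/24",
                    "communities": ["65000:100"]}], intent)
```

Because the check runs against what the router is actually advertising, a manual CLI patch that changes exports shows up as a finding within one collection interval, not at the next audit.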

And importantly, the incident response becomes structured.

Instead of “something is wrong,” you get:

  • What intent is violated

  • What evidence shows a violation

  • Which devices are out of compliance

  • What the safe remediation plan is

That’s the shift I’ve been writing about in these posts: from heroics to systems.

What IBN is not (so you don’t build the wrong thing confidently)

It’s worth naming the traps, because teams repeatedly fall into them.

“We have config templates in Git, so we’re intent-based”

No. That’s configuration management.

If you cannot express outcomes and verify them continuously, you don’t have intent. You have version control.

“Our source of truth is Git”

Git is a record of what you think you deployed. It is not evidence of what the network is doing.

“We do diff of running config vs generated config”

Useful, but insufficient.

Many catastrophic failures occur while configs look “fine.” BGP can be up and still wrong. Forwarding can blackhole with correct configs. A route reflector can diverge while everything “looks consistent.”

IBN treats control-plane and data-plane evidence as first-class reality.

“IBN means zero CLI”

No. Mature IBN means CLI cannot create an invisible truth.

The simplest way to know if you’re building IBN

Here’s a litmus test you can use on your own stack:

If you make a change to intent and merge it, can the system answer these questions without humans?

  1. What will change? (per device, per domain, per risk level)

  2. Is it safe? (guardrails, blast radius constraints)

  3. Did it happen? (deployment evidence)

  4. Did it work? (control-plane + forwarding evidence)

  5. Is reality still compliant tomorrow? (continuous validation + drift handling)

If the answer is “yes,” you’re doing IBN.

If the answer is “we can push config,” you’re doing automation.

Both are valuable, but only one is intent-based.

What we’ll do next in this series

This article gave you the system and the minimum components.

The next pieces get even more concrete:

  • What intent models look like in practice (not toy YAML)

  • How “expectations” are defined and validated

  • What a real repo layout looks like

  • What the CI/CD and rollout pipelines look like

  • How to land this in brownfield networks without a big-bang rewrite

Because that’s where the real work and the real payoff live.

See you then!

Leonardo Furtado
