This is the second article in this [IBN] series. Make sure you check out the first and following parts!
If the first article of this series was the “you need a closed loop” argument, this second article is the part everyone asks for next, usually with a skeptical squint:
“Okay. Show me the intent.”
Not a PowerPoint abstraction, a vendor demo, or a `set interface xe-0/0/0 unit 0 family inet address…` wrapped in a Jinja template.
Show me what intent looks like when:
The network has multiple regions and POPs,
multiple vendors,
multiple teams,
multiple tenants,
multiple services,
a constant stream of change,
and a constant fear that your “model” will become an unmaintainable monster.
That’s what we’re doing here.
We’re going to build an intent model the way you build a software interface at scale: stable contracts, strict types, explicit invariants, and testable expectations.
And we’ll do it while keeping one promise:
Intent should read like a human declaration of outcomes and constraints, but behave like a machine-validated API.
The mistake most teams make: they model the device, not the network
When teams first “model intent,” they end up accidentally building a vendor-neutral CLI.
They create YAML like this:
```yaml
interfaces:
  - name: xe-0/0/0
    unit: 0
    ipv4: 10.0.0.1/31
```
It feels structured, right? It’s in Git, it has indentation… it must be modern. Right?
And it collapses, fast, because that's not intent; it's just configuration expressed in a different syntax.
It doesn’t answer the fundamental questions the business actually cares about:
What services exist?
What is allowed to talk to what?
What are the routing contracts?
What are the invariants that must never be violated?
What evidence proves those invariants hold?
Intent modeling starts by refusing to model “how to configure devices” and instead modeling “what the network is meant to provide.”
The easiest way to do that is to think in domain objects.
1) The domain objects you must model (the set you can’t escape)
IBN models fail when they either:
become too generic (“anything goes”)
or become too specific (“every knob gets a field”)
The sweet spot is a small number of domain objects that map to how networks are actually operated and reasoned about.
Tenants / VRFs / segments
At scale, “tenant” is the unit of ownership, isolation, and blast radius.
A tenant isn’t just a VRF name. It has:
Identity (who owns it, what environment it belongs to)
Segmentation (VRFs, sub-segments)
Routing domains (what it can import/export)
Security posture (allowed flows, protected services)
Observability requirements (what must be measured)
If your model doesn’t put tenants first, you’ll end up putting “meaning” into device tags and spreadsheets. That is how drift is born; avoid this approach if you want to adopt intent-based networking.
Reachability intents
These are deceptively simple and incredibly powerful.
“Prefix P must be reachable from region X” sounds trivial until you operationalize it:
Reachable via which path classes?
Under what failure conditions?
With what acceptable loss/latency?
With what route constraints (no transit through region Y)?
From which sources (internet, corp, partner, mgmt)?
Reachability intent is about stopping the argument over configs and starting to assert outcomes.
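To see how those questions become data rather than prose, here is a minimal sketch of a reachability intent as a typed record. The field names are mine and merely illustrative; they loosely mirror the YAML schema shown later in this article.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class ReachabilityIntent:
    """One 'prefix P must be reachable from region X' statement, operationalized.

    Illustrative field names only; not a real production schema.
    """
    name: str
    from_zone: str
    prefixes: tuple[str, ...]
    require_redundancy: bool = True
    max_convergence_seconds: int = 30
    forbidden_transit: tuple[str, ...] = ()  # "no transit through region Y"

intent = ReachabilityIntent(
    name="custc-apps-reachable-from-emea-corp",
    from_zone="corp-emea",
    prefixes=("10.84.0.0/16", "2001:db8:84::/48"),
    forbidden_transit=("region-y",),
)
print(intent.max_convergence_seconds)  # 30
```

Notice that every question from the list above has a field: a reachability intent you cannot express is a reachability argument you will have later.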
Routing policy intents
This is where real networks go to die if you don’t model it.
Routing policy isn’t “route-map 10 permit.” Routing policy is a contract:
Who is allowed to advertise what (prefix authorization)
What attributes must be attached (communities, LP, MED, AS-path prepends)
What must never happen (leaks to peers, re-origination, wrong next-hop)
What constitutes abnormal behavior (prefix count spikes, unexpected ASNs)
At hyperscale/service-provider scale, routing policy intent is often more important than IP addressing.
Path constraints
Not everyone needs SR policies on day one, but everyone needs constraints:
Avoid certain links (maintenance, cost, risk)
Prefer POPs or egress types (peering vs transit)
Respect regulatory boundaries (traffic must not exit the region)
Enforce failure-domain separation
The key is not to model “RSVP vs SR vs static.” The key is to model the constraint and let compilers map to mechanisms.
Security intents
Security intent must be expressed in business terms:
Who can initiate what
What must be blocked
What must be rate-limited or scrubbed
What exceptions exist and how they’re justified
And then, critically, security intent must tie to evidence:
ACL entries present
Counters behaving plausibly
Drops/spikes correlated to events
DDoS policies applied to the correct attachment points
Reliability intents
This is the “guardrails” layer for engineering quality:
“Tenant must have dual-homed attachments”
“Minimum ECMP fanout from leaf to spines”
“No single ToR may carry >X% of a critical service”
“RR clients must be multi-RR with distinct failure domains”
Reliability intent prevents your design from slowly degrading as exceptions accumulate.
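Guardrails like these are cheap to check mechanically. A hedged sketch, with made-up function names and toy inputs, of what two of the rules above look like as executable checks:

```python
def ecmp_violations(leaf_uplinks: dict[str, int], minimum: int) -> list[str]:
    """Leaves whose spine fanout is below the reliability intent's floor."""
    return sorted(leaf for leaf, n in leaf_uplinks.items() if n < minimum)

def overloaded_tors(share_by_tor: dict[str, float], max_share: float) -> list[str]:
    """ToRs carrying more than the allowed share of a critical service."""
    return sorted(tor for tor, share in share_by_tor.items() if share > max_share)

print(ecmp_violations({"leaf-01": 4, "leaf-02": 2, "leaf-03": 4}, minimum=4))
# ['leaf-02']
print(overloaded_tors({"tor-a": 0.35, "tor-b": 0.65}, max_share=0.5))
# ['tor-b']
```

Run these in CI against the source of truth and exceptions stop accumulating silently.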
Observability intents
This is the part most teams bolt on after incidents. Instead, it should be modeled up front.
Observability intent is not “we have dashboards.” It’s:
What signals are required (BGP session states, route counts, interface errors, drops, queue depth)
At what frequency
Where they should be collected
What thresholds define unacceptable drift
What evidence bundles should be attached automatically to incidents
If you don’t model observability intent, you end up with:
Expensive telemetry you don’t use
Missing telemetry exactly where you need it
And an incident response that depends on tribal knowledge
2) A concrete intent schema (realistic, not toy-level)
Let’s build a starter schema set that could credibly live in a production repo.
Two design principles before we write any YAML:
Principle A: Model the contract, not the mechanism
We’ll say “Customer C may export only these prefixes and only to these parties.”
We will not say “apply route-policy CUSTOMER-C-OUT on neighbor x.x.x.x.”
Principle B: Make invalid states unrepresentable
This is the software-engineering trick that changes everything.
If “a prefix list may be either IPv4 or IPv6” is encoded as a string, you will ship bugs.
If it’s a typed object validated by a schema, the bad PR never merges.
The YAML: Intent declaration for a tenant + routing contract + reachability + observability
```yaml
apiVersion: intent.networking.example/v1
kind: TenantIntent
metadata:
  name: customer-c
  owner: "customer-success-emea"
  environment: prod
  ticket: "CHG-2025-1209"
spec:
  tenant:
    id: "cust-c"
    description: "Enterprise customer C - MPLS L3VPN + Internet breakout"
    vrfs:
      - name: CUSTC-PROD
        rd: "65010:1203"
        import_rts: ["target:65010:1203"]
        export_rts: ["target:65010:1203"]
        segments:
          - name: apps
            cidrs:
              - "10.84.0.0/16"
              - "2001:db8:84::/48"
  routing:
    bgpContracts:
      - name: custc-ce-to-pe
        role: customer_edge
        allowedAfiSafis: ["ipv4-unicast", "ipv6-unicast"]
        peerAsn: 65123
        maxPrefixes:
          ipv4: 200
          ipv6: 100
        prefixAuthorization:
          allowedPrefixes:
            - "10.84.0.0/16"
            - "10.84.20.0/24"
            - "2001:db8:84::/48"
        attributes:
          requireCommunities:
            - "65010:1203"   # tenant marker
            - "65010:30010"  # "customer-learned"
          attachCommunities:
            - "65010:55555"  # internal handling tag
          localPreference: 120
        exportConstraints:
          allowTo:
            - "core"            # may be exported to core
          denyTo:
            - "internet-peers"  # must never leak to peers
            - "transit"         # (example) keep off transit unless explicitly allowed
        invariants:
          - type: "no-leak"
            description: "Customer C routes must not be exported to any public peer or transit"
          - type: "no-default-origination"
            description: "System must not originate default route into CE unless explicitly enabled"
  reachability:
    intents:
      - name: custc-apps-reachable-from-emea-corp
        from:
          zone: "corp-emea"
        to:
          prefixes:
            - "10.84.0.0/16"
            - "2001:db8:84::/48"
        constraints:
          requireRedundancy: true
          maxConvergenceSeconds: 30
  security:
    policies:
      - name: custc-inbound-protect
        attachTo:
          - deviceRole: "pe"
            interfaces:
              - match: "to-customer"
        ddosProfile: "standard-l3vpn"
        acl:
          allow:
            - proto: "tcp"
              dstPorts: [443, 8443]
              srcZones: ["corp-emea"]
              dstPrefixes: ["10.84.20.0/24"]
          deny:
            - proto: "any"
              srcZones: ["internet"]
              dstPrefixes: ["10.84.0.0/16"]
  observability:
    requiredSignals:
      - signal: "bgp.session.state"
        scope: { contract: "custc-ce-to-pe" }
        frequencySeconds: 10
        alerting:
          severity: "sev2"
          condition: "down_for > 60s"
      - signal: "bgp.route.count"
        scope: { contract: "custc-ce-to-pe", direction: "inbound" }
        frequencySeconds: 30
        alerting:
          severity: "sev2"
          condition: "count > maxPrefixes OR count spikes > 30% in 5m"
      - signal: "policy.export.violation"
        scope: { contract: "custc-ce-to-pe" }
        frequencySeconds: 30
        alerting:
          severity: "sev1"
          condition: "any_export_to(internet-peers, transit)"
```
This is not a “toy,” because it expresses:
Tenant identity and segmentation
BGP contract rules
Authorization (allowed prefixes)
Required communities and attribute policy
Explicit “deny export to peers/transit”
Reachability and redundancy constraints
Security attachment intent
Observability requirements tied to the contract and meaningful conditions
And it still doesn’t mention “Junos vs XR vs EOS.” That’s intentional!
Strict schema validation: making bad intent impossible to merge
You can implement validation in many ways. The core is the same: the shape and semantics are enforced before deployment.
A pragmatic and popular pattern:
YAML is the authoring format
Pydantic models define the schema and types
JSON Schema is generated from Pydantic and used in CI editors
Protobuf/OpenAPI are used if you need an API boundary across teams
Strong typing examples (what “stringly typed” breaks at scale)
`peerAsn` is not a string. It’s an integer with bounds.
CIDRs are not strings. They’re validated prefixes.
Communities are not arbitrary text. They’re validated communities (standard/large if you need).
AFI/SAFI values are enums.
Device roles are enums or controlled vocabulary.
Why this matters: the failure mode at scale isn’t that engineers don’t know networking. The failure mode is that a slight formatting ambiguity becomes a major deployment issue.
Strict typing prevents that ambiguity from ever existing.
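To make the list concrete, here is a stdlib-only sketch of two of those types: a controlled vocabulary for device roles and a validated standard BGP community (both 16-bit halves checked). The names `DeviceRole` and `parse_community` are mine:

```python
import re
from enum import Enum

class DeviceRole(str, Enum):
    """Controlled vocabulary: a typo like 'PE ' simply cannot be constructed."""
    PE = "pe"
    P = "p"
    RR = "rr"

# Standard BGP community: two 16-bit numbers joined by a colon.
_COMMUNITY = re.compile(r"^(\d{1,5}):(\d{1,5})$")

def parse_community(raw: str) -> tuple[int, int]:
    m = _COMMUNITY.match(raw)
    if not m:
        raise ValueError(f"not a community: {raw!r}")
    hi, lo = int(m.group(1)), int(m.group(2))
    if hi > 65535 or lo > 65535:
        raise ValueError(f"community halves must fit in 16 bits: {raw!r}")
    return hi, lo

print(parse_community("65010:1203"))  # (65010, 1203)
print(DeviceRole("pe").value)         # pe
```

A Pydantic model would wrap exactly these rules into field validators, so the bad value dies in CI, not in a maintenance window.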
3) Expectations are first-class artifacts (this is where IBN becomes real)
Now we get to the part that transforms “intent modeling” into “intent operations.”
Most teams stop after compiling the config.
IBN doesn’t stop there. It outputs expectations: a machine-readable statement of what the network should look like if the intent is being met.
Think of expectations as the “unit tests” of your network’s runtime behavior.
Why expectations must exist as artifacts
Because it forces you to answer:
How will we prove this intent is satisfied?
What signals matter?
What is normal vs abnormal?
What constitutes drift?
And because it unlocks automation that isn’t dumb:
If an expectation fails, you can trigger the right runbook or auto-remediate safe classes
You can attach evidence automatically to incidents
You can gate rollouts on observed compliance
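A minimal sketch of what "an expectation fails" looks like in code. Field names follow the `ContractExpectations` example in this article; the function itself, and its result shape, are my illustration, not a real validator API:

```python
def check_session_expectation(expected: dict, observed: dict) -> dict:
    """Evaluate one compiled BGP-session expectation against observed state.

    Returns a result record that can gate a rollout or be attached
    to an incident as evidence.
    """
    failures = []
    if observed.get("sessionState") != expected["mustBe"]["sessionState"]:
        failures.append(f"session is {observed.get('sessionState')!r}, "
                        f"expected {expected['mustBe']['sessionState']!r}")
    if observed.get("uptimeSeconds", 0) < expected["mustBe"]["minUptimeSeconds"]:
        failures.append("session has not been stable long enough")
    return {"expectation": expected.get("name", "bgp-session"),
            "compliant": not failures,
            "evidence": failures or ["all checks passed"]}

expected = {"name": "custc-ce-to-pe",
            "mustBe": {"sessionState": "established", "minUptimeSeconds": 300}}
print(check_session_expectation(
    expected, {"sessionState": "established", "uptimeSeconds": 4200}))
```

The important property is the return shape: every evaluation produces evidence, whether it passes or fails, so compliance is always something you can show, not just claim.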
Example: compiler outputs “expected operational state” for our BGP contract
The compiler might generate an expectations file like:
```yaml
apiVersion: expectations.networking.example/v1
kind: ContractExpectations
metadata:
  name: custc-ce-to-pe
  tenant: customer-c
spec:
  bgp:
    sessions:
      - deviceSelector: { role: "pe", region: "emea" }
        neighborIp: "192.0.2.10"
        peerAsn: 65123
        afiSafis: ["ipv4-unicast", "ipv6-unicast"]
        mustBe:
          sessionState: "established"
          minUptimeSeconds: 300
    routePolicy:
      inbound:
        maxPrefixes:
          ipv4: 200
          ipv6: 100
        mustIncludeCommunities: ["65010:1203", "65010:30010"]
      outbound:
        forbiddenTargets: ["internet-peers", "transit"]
        forbiddenActions:
          - "export_prefixes_from_tenant(customer-c)"
  forwarding:
    nextHopConstraints:
      - prefixes: ["10.84.0.0/16", "2001:db8:84::/48"]
        mustResolveVia: ["mpls-core"]
        mustNotResolveVia: ["public-internet-edge"]
  security:
    aclExpectations:
      - attachPointSelector: { role: "pe", interfaceClass: "to-customer" }
        rulesPresent: ["custc-inbound-protect"]
        counters:
          - rule: "deny.internet.to.10.84.0.0/16"
            expectedHitRate:
              minPerHour: 0
              maxPerHour: 5000
  telemetry:
    required:
      - signal: "bgp.session.state"
        freshnessSeconds: 15
      - signal: "bmp.advertised.routes"
        freshnessSeconds: 30
```
Notice what this does:
It encodes not “what to configure” but “what must be true.”
It includes both control-plane facts (BGP session established, AFI/SAFI) and policy facts (no export to peer/transit).
It even hints at forwarding constraints (next-hop resolution must not use public edge).
It ties to telemetry freshness (if you can’t observe it, you can’t claim compliance).
How do you validate these expectations in practice?
This is where your earlier protocol list matters in the real world:
gNMI / telemetry validates session state, interface health, policy attachment, counters, and route count metrics.
BMP validates what is being received/advertised; this is gold for leak detection and policy compliance.
Syslog catches churn and failure events; correlates with changes.
IPFIX/NetFlow can validate that traffic patterns match segmentation intent (“no traffic from internet to internal prefixes”).
Config snapshot diff can be used as a secondary signal, not the primary truth.
The validator doesn’t need to be omniscient. It needs to be consistent, scoped, and continuously improving.
A subtle but critical point: expectations are not always exact
At scale, you’ll often encode ranges and invariants, not exact values:
“Route count must be ≤ 200” is stable.
“Route count must equal 173” collapses the moment a legitimate prefix is added.
Likewise for ACL counters: you often encode “plausible bounds” rather than precise rates.
That’s how you avoid the “monitoring that cries wolf” problem, another common cause of model collapse.
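The difference between brittle and stable expectations fits in a few lines. A sketch (function names mine) of range invariants versus exact values:

```python
def route_count_compliant(observed: int, ceiling: int) -> bool:
    """Range invariant: 'count <= ceiling' survives legitimate growth."""
    return 0 <= observed <= ceiling

def counter_plausible(hits_per_hour: int, lo: int, hi: int) -> bool:
    """ACL counters get plausible bounds, not exact rates."""
    return lo <= hits_per_hour <= hi

# A legitimate new prefix moves the count from 173 to 174:
assert route_count_compliant(174, ceiling=200)  # range invariant still holds
assert 174 != 173                               # an exact-value check would page someone

print(counter_plausible(1200, lo=0, hi=5000))  # True
```

Exact values belong in invariants that really are exact (peer ASN, required communities); everything that legitimately drifts gets a range.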
4) Multi-vendor reality: intent stays stable, renderers change
Multi-vendor isn’t a complication to hide. It’s a forcing function that proves whether your model is actually intent or just an abstraction of one vendor.
Here’s the fundamental rule:
Your intent model must be vendor-agnostic and stable.
Your renderer modules are vendor-specific and replaceable.
When you do this right, your system evolves like a compiler toolchain:
Intent schema v1 stays mostly stable
Renderers iterate as platforms change
New vendors are “ports,” not rewrites
Capability differences are expressed as constraints, not schema fragmentation
How do you avoid “schema explosion”?
A classic failure mode is adding vendor knobs into the intent model:
```yaml
junos:
  policyOptions:
    ...
iosxr:
  routePolicy:
    ...
```
The moment you do that, you’ve abandoned intent and built a multi-vendor config repository.
Instead, you encode:
Capabilities in SoT or device-role metadata
Constraints in intent (“must support gNMI,” “must support large communities,” “must support SRv6”)
Renderer selection based on role/platform
Example: your SoT says:
`pe-emea-01` is Junos, and `pe-emea-02` is IOS-XR.
Both have role `pe`.
Both claim the capabilities `bmp-client`, `gnmi`, and `policy-attach`.
The intent stays the same. The renderer output differs. This separation is what makes IBN survivable.
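A sketch of that separation as a renderer registry: one stable intent object in, one vendor-specific renderer per platform. The platform names are real, but the registry, function names, and output strings are illustrative, not actual templates:

```python
from typing import Callable

# Renderer registry: intent stays stable, per-platform renderers are plug-ins.
RENDERERS: dict[str, Callable[[dict], str]] = {}

def renderer(platform: str):
    """Decorator that registers a render function for one platform."""
    def register(fn):
        RENDERERS[platform] = fn
        return fn
    return register

@renderer("junos")
def render_junos(contract: dict) -> str:
    return f"policy-statement {contract['name'].upper()}-OUT {{ ... }}"

@renderer("iosxr")
def render_iosxr(contract: dict) -> str:
    return f"route-policy {contract['name'].upper()}-OUT ... end-policy"

def render(contract: dict, platform: str) -> str:
    try:
        return RENDERERS[platform](contract)
    except KeyError:
        raise ValueError(f"no renderer for platform {platform!r}; "
                         "add a port, not a schema change")

contract = {"name": "custc-ce-to-pe"}
print(render(contract, "junos"))
print(render(contract, "iosxr"))
```

Adding a vendor means registering one new function; the intent model and every existing renderer stay untouched.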
Deliverable: the “starter schema set” that scales
If you want a minimal schema set that doesn’t collapse, start with these files (conceptually):
TenantIntent: identity, VRFs/segments, ownership, environment.
RoutingContractIntent: the BGP/IGP adjacency and policy contract: allowed prefixes, attributes, max-prefix, export constraints, and invariants.
ReachabilityIntent: from zone/region to prefixes/services, with redundancy and convergence constraints.
SecurityIntent: allowed flows, deny posture, DDoS profile, attachment selectors.
ReliabilityIntent: redundancy rules, minimum ECMP, failure-domain separation.
ObservabilityIntent: required signals, freshness, alert conditions, and evidence bundles.
Expectations (compiled output): ContractExpectations, ReachabilityExpectations, SecurityExpectations.
And one more that people forget:
Capability & Role Taxonomy
A controlled vocabulary (roles, zones, regions, attach points, interface classes).
This prevents “free-form strings” from silently fracturing your model.
The principle to tattoo on the repo README
Intent is stable; renderers change. Expectations are the proof.
If you adopt that as a design law, your model will remain coherent even as:
Vendors change
Topologies evolve
Teams reorganize
And the network grows by an order of magnitude
Because you aren’t modeling knobs. Instead, you’re modeling what you actually mean.
In the next article, we will take this exact schema approach and show a real repo layout: how to separate intent/, schemas/, sot/, compiler/, validator/, and simulations/, and how PR validation gates ensure invalid intent never ships.
See you then!
Leonardo Furtado

![[IBN] Modeling Intent: The Data Models That Don’t Collapse at Scale](https://media.beehiiv.com/cdn-cgi/image/fit=scale-down,quality=80,format=auto,onerror=redirect/uploads/asset/file/0a0106ad-a755-48d3-b155-c5616825a2ae/ibn2.jpeg)