A heads-up before you start: this isn't a typical post. It is a Python crash course delivered to your inbox, and it is long. Email services like Gmail may clip it, so I strongly recommend opening it in your browser instead.
Also, make sure to read the previous chapters before this one!
1. Why Poetry, and Why Now (for Network Engineers)
Network automation doesn’t move as one thing. It moves as a noisy convoy of tools and vendor SDKs, each on its own release cadence and with its own opinion about which Python and which transitive dependencies it will tolerate. Your everyday kit, like Ansible for change execution, Nornir for inventory-driven workflows, Scrapli and Netmiko for device sessions, ncclient for NETCONF/YANG operations, plus a sprinkling of vendor SDKs, rarely lines up on the same versions at the same time. That’s the normal state of play, not an exception. And yet most engineers still try to keep this convoy together with a hand-rolled requirements.txt and a silent hope that pip won’t surprise them during a change window.
Pip is an installer, not a contract. It will dutifully fetch “the latest compatible things” every time you run pip install -r requirements.txt, but “compatible” is evaluated on the fly from a sprawling dependency graph that changes underneath you. Last week’s “compatible” is not this week’s. A teammate running the same command on a jump host may end up with a slightly different set of transitive packages than you had in the lab. CI may resolve a different build than staging. Air-gapped recovery drills expose the worst of it: with no lockfile, you don’t actually know which exact wheel versions your automation depended on when it worked. You shipped intent with your code, but left the dependencies to probability.
This is where Poetry earns its keep, especially for network engineers who care about repeatable outcomes. Poetry turns dependency management from guesswork into policy. It gives you a single, auditable source of truth; pyproject.toml describes what you intend to use, and poetry.lock captures exactly what you did use when the system last resolved successfully. That lockfile is the difference between “works on my laptop” and “works everywhere we claim it does”: your lab VM, the shared bastion in production, and the CI runner that gates merges.
Poetry also understands that different parts of your network estate need different stacks, and it lets you express that cleanly. Your Junos/YANG work might live behind ncclient and lxml, while your Cisco playbooks rely on Scrapli/Netmiko, your test harness depends on pytest, pytest-asyncio, and rich for operator-friendly output, which you do not want on the production jump host. With Poetry, you don’t fork repos or maintain brittle comment blocks in a monolithic requirements.txt; you define dependency groups and optional extras that mirror reality. When you install for the lab, you include the lab groups. When you install on the bastion, you bring only the runtime set. When CI builds, it installs exactly what the lockfile says; no more, no less.
Critically, Poetry doesn’t replace the interpreter discipline you established with Pyenv; it sits above it. Pyenv pins which Python (like 3.8 for the production Ansible estate, 3.11 for your NETCONF pipelines, 3.12 for experimental Nornir tasks), while Poetry pins which packages for each of those worlds and records the precise resolution. That two-layer contract (interpreter + locked dependencies) is the operational equivalent of a well-designed underlay with a clear overlay: you trust it because it’s explicit, versioned, and reproducible.
Consider the lifecycle this enables. You set up a new repo for a maintenance window automation: pyenv local 3.9.x to match the approved runtime, poetry init to declare your intent, poetry add to pull in only what you need. Poetry resolves a coherent set of versions and freezes them in the lockfile. You push to Git; your teammate clones it on a different machine and runs poetry install, and they get the exact same environment, byte-for-byte. CI runs the test suite against the same lock; later, you update a single package and run poetry lock to re-resolve intentionally, review the changes in a merge request, and roll that change like any other network policy update. When the platform team asks for a software bill of materials or you practice an air-gapped restore, you can export the environment deterministically and even pre-cache the wheels. Nothing is left to “latest.”
In short, Poetry introduces the three traits network engineers insist on everywhere else: intent, isolation, and reproducibility. It gives automation the same rigor you apply to routing policies and change control. Pip alone will always be a moving target; requirements.txt will always age faster than you can babysit it.
Poetry is how you stop negotiating with transitive dependency drift and start operating your automation like a system, so the lab, the jump host, and CI/CD all see the same thing, every time. That is exactly what the sections below will walk through.
2. What Poetry Is (and Isn’t): The Mental Model
The easiest way to understand Poetry is to place it in a three-layer stack that mirrors how you already think about networks:
Pyenv chooses which Python interpreter runs a project (3.8 for production Ansible, 3.11 for NETCONF/YANG, 3.12 for lab work).
Poetry declares and enforces which dependencies that project uses, turns that declaration into a locked, reproducible environment, and, when you’re ready, packages and publishes your code as a proper Python package.
pipx installs CLI tools (e.g., ansible, scrapli-replay) in isolated, tool-specific virtualenvs so they don’t leak into your projects.
Think of Pyenv as the underlay image on the box, Poetry as the intent system for the overlay (dependency graph + packaging), and pipx as standalone appliances living beside your projects.
What Poetry is
1) A single source of truth for your automation stack.
Poetry stores your intent in pyproject.toml (what you want) and the exact resolved state in poetry.lock (what you actually got when it worked). Together, they are a contract between your laptop, the jump host, and CI/CD. If the lockfile is in Git, everyone—humans and robots—installs the same wheels and transitive dependencies, byte-for-byte.
2) A disciplined dependency manager.
poetry add ncclient doesn’t just “grab something compatible today.” It resolves a coherent set of versions, records them in the lockfile (including hashes), and enforces that state on every subsequent poetry install. Your Ansible change window won’t be surprised by an upstream minor release.
3) A project packager.
When your scripts mature into tools, Poetry builds wheels and source distributions (poetry build) and can publish them to internal indexes (poetry publish). That means your SD-WAN SDK, inventory client, or config parser can be consumed like any other first-class dependency across teams.
4) An environment orchestrator, on top of the interpreter you chose.
Poetry can create and manage a virtual environment per project, ideally in-project (e.g., ./.venv/) for portability. It respects whatever interpreter Pyenv puts on your PATH, or you can bind explicitly:
poetry env use "$(pyenv which python)"
Interpreter selection (Pyenv) and dependency resolution (Poetry) stay cleanly separated.
What Poetry isn’t
Not a Python version manager. It won’t install CPython 3.11 for you; that’s Pyenv’s job.
Not a replacement for pipx. Project deps live in Poetry; global CLIs belong in pipx.
Not an OS package manager. It won’t manage OpenSSL, libxml2, or system headers; install build prerequisites first, or Pyenv’s interpreter will compile without the features you need.
Not magic for broken vendor wheels. If a vendor only ships Linux x86_64 binaries and you’re on macOS ARM, you still need a plan (alternate index, source build, or constraints).
Poetry vs. pip + requirements.txt
pip is a great installer. It is not a policy engine. A hand-written requirements.txt expresses some intent (“I think we need these top-level packages”), but leaves transitive resolution to whatever the internet looks like today. Two engineers can run pip install -r requirements.txt a week apart and silently get different transitive versions. You can “freeze the moment” with pip freeze, but that file mixes top-level intent with every transitive pin and often includes machine-specific flotsam. It’s like dumping a router’s running config and calling it a design document.
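To see the “running config vs. design document” distinction concretely, here is a small illustrative sketch that splits a pip freeze dump into declared intent versus transitive packages; the package names, versions, and the INTENT set are invented for the example:

```python
# Illustrative: a pip freeze dump mixes what you asked for with what the
# resolver dragged in. Names and versions below are invented for the example.
INTENT = {"ncclient", "scrapli", "jinja2"}  # your hand-written top-level deps

FREEZE = """\
ncclient==0.6.15
scrapli==2024.7.30
jinja2==3.1.4
lxml==5.2.2
markupsafe==2.1.5
paramiko==3.4.0
"""

def split_freeze(freeze_text: str, intent: set[str]) -> tuple[dict, dict]:
    """Separate declared intent from transitive pins in a freeze dump."""
    top, transitive = {}, {}
    for line in freeze_text.splitlines():
        name, _, version = line.partition("==")
        (top if name in intent else transitive)[name] = version
    return top, transitive

top, extra = split_freeze(FREEZE, INTENT)
print(sorted(extra))  # ['lxml', 'markupsafe', 'paramiko']
```

Half the freeze output is transitive flotsam you never named; Poetry keeps that half in the lockfile, where it belongs, instead of in your hand-maintained intent file.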
Poetry separates concerns cleanly:
Intent lives in pyproject.toml (clear, human-maintained constraints).
State lives in poetry.lock (full, machine-resolved pins + hashes).
You review and version both, just like you keep both high-level policy and compiled device configs under change control.
Operational payoff:
On your laptop, on the shared bastion, and in CI, poetry install yields the same environment.
Updates are intentional: poetry update re-resolves and produces a diff you can review (what changed, why), rather than accidental drift because a transitive package released overnight.
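What that reviewable diff looks like can be sketched in a few lines: compare the name-to-version maps extracted from two revisions of the lockfile. The data below is made up for illustration:

```python
# Illustrative: compare old vs new locked versions (as you'd extract from
# two revisions of poetry.lock) into a reviewable diff. Data is made up.
def lock_diff(old: dict[str, str], new: dict[str, str]) -> dict[str, list[str]]:
    return {
        "added": sorted(set(new) - set(old)),
        "removed": sorted(set(old) - set(new)),
        "changed": sorted(
            f"{name}: {old[name]} -> {new[name]}"
            for name in set(old) & set(new)
            if old[name] != new[name]
        ),
    }

old = {"scrapli": "2024.1.30", "lxml": "5.1.0"}
new = {"scrapli": "2024.7.30", "lxml": "5.1.0", "ruamel-yaml": "0.18.6"}
print(lock_diff(old, new))
# {'added': ['ruamel-yaml'], 'removed': [], 'changed': ['scrapli: 2024.1.30 -> 2024.7.30']}
```

In practice the merge request diff of poetry.lock gives you this for free; the point is that every change is explicit and reviewed, never silent.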
Poetry vs. pip-tools
pip-tools (pip-compile / pip-sync) is a solid middle ground when you want to stay in the pip universe:
You author requirements.in (top-level deps).
pip-compile produces a fully pinned requirements.txt with hashes.
pip-sync enforces that environment.
Where Poetry and pip-tools differ:
| Capability | Poetry | pip-tools |
|---|---|---|
| Single config file with metadata (pyproject.toml) | ✅ | ❌ |
| Lockfile separate from intent | ✅ (poetry.lock) | ✅ (compiled requirements.txt) |
| Built-in packaging (build/publish wheels) | ✅ | ❌ (use build/twine) |
| Dependency groups (dev/test/docs/airgap) | ✅ | Partial (via multiple files) |
| Extras and entrypoints (first-class) | ✅ | Indirect |
| Native virtualenv management | ✅ | ❌ (use venv/virtualenv) |
| Unified UX (one poetry CLI) | ✅ | Split across pip/pip-tools |
If your team already standardized on pip-tools, you can stay there and still apply Pyenv for interpreters. But Poetry gives you a cohesive lifecycle: declare, lock, install, run, build, publish, under one tool, with strong semantics for groups and extras that map cleanly to network use cases (e.g., poetry install --with docs for a group, or poetry install -E junos for a vendor extra).
Why the “single source of truth” matters in NetDevOps
In networking, we don’t ship “vibes”; we ship intent that can be reconciled. Poetry applies the same rigor to dependencies:
pyproject.toml encodes the design of your automation stack (Python constraint, top-level libs, groups, extras, scripts, source priorities).
poetry.lock encodes the compiled state (exact versions + hashes) that was validated by tests and change reviews.
With both files versioned:
Labs, jump hosts, and CI/CD converge on the same package graph.
Rollbacks are trivial (revert the lockfile).
Audits and SBOMs are straightforward (the lockfile is the inventory).
Air-gapped or DR scenarios are feasible (poetry export + pre-fetched wheels).
This isn’t a ceremony. It’s the difference between “my Nornir pipeline acted weird last night” and “we know exactly which dependency set ran and why.” Pyenv nails the interpreter. Poetry nails the packages. Pipx keeps global CLIs from leaking. Together, you get the boring determinism your automation deserves, so you can spend your time solving network problems, not negotiating with dependency drift.
3. Anatomy of pyproject.toml for Network Projects
A great pyproject.toml reads like a design document for your automation stack: it declares the Python you run on, lists what you depend on (at the top level), captures exactly what was resolved (in the lockfile), and encodes the knobs you need for multi-vendor work (groups, extras, internal indexes, and entry points). Below is a realistic template tuned to a network engineer’s day-to-day, followed by the rationale behind each choice.
Mental model recap: Pyenv chooses the interpreter (e.g., 3.11), Poetry pins and locks dependencies per project, and pipx keeps global CLIs (e.g., ansible) isolated from your projects.
A somewhat realistic pyproject.toml for a multi-vendor neteng repo
[build-system]
requires = ["poetry-core>=1.9.0"]
build-backend = "poetry.core.masonry.api"
[tool.poetry]
name = "netops-automation"
version = "0.4.0"
description = "Multi-vendor network automation toolkit (Junos, IOS-XE, NX-OS) with NETCONF/SSH pipelines."
authors = ["Your Name <[email protected]>"]
readme = "README.md"
license = "MIT"
packages = [{ include = "pkg", from = "src" }] # src/ layout recommended
# Optional metadata that helps later (docs, repo, issues):
homepage = "https://example.com/netops"
repository = "https://github.com/yourorg/netops-automation"
keywords = ["networking", "automation", "netconf", "nornir", "scrapli", "ansible"]
[tool.poetry.dependencies]
python = "^3.11" # Project runs on 3.11.x (use Pyenv to enforce interpreter)
httpx = "^0.27" # Modern HTTP client: controllers/APIs, async-ready
pydantic = "^2.8" # Data models: device schemas, payload validation
rich = "^13.7" # Operator-friendly output, progress bars, tracebacks
# Core network libraries (top-level intent only; transitive pins go to poetry.lock):
ncclient = "^0.6.15" # NETCONF/YANG (Junos/IOS-XR/etc.)
scrapli = "^2024.7" # Fast, modern SSH for Cisco/Arista/Juniper (many platforms)
netmiko = "^4.4" # Broad vendor SSH coverage; complements scrapli
lxml = "^5.2" # XML parsing for NETCONF payloads
jinja2 = "^3.1" # Templates for device configs / payloads
# Example of consuming an INTERNAL package (from Artifactory/Nexus source below):
company-sdwan-sdk = { version = "^1.8", source = "corp-internal" }
# Packages referenced by extras must be declared as *optional* dependencies
# (still under [tool.poetry.dependencies]) so they resolve only when requested:
junos-eznc = { version = "^2.7", optional = true }

# Optional extras expose feature bundles for consumers of YOUR package
[tool.poetry.extras]
junos = ["junos-eznc", "ncclient", "lxml"]
iosxe = ["scrapli", "netmiko"]
nxos = ["scrapli"]
# Dependency groups let you toggle contexts (dev/test/docs/airgap)
[tool.poetry.group.dev.dependencies]
black = "^24.8"
ruff = "^0.6"
mypy = "^1.10"
pre-commit = "^3.7"
ipython = "^8.26"
[tool.poetry.group.test.dependencies]
pytest = "^8.3"
pytest-asyncio = "^0.23"
pytest-cov = "^5.0"
pytest-xdist = "^3.6"
hypothesis = "^6.112"
[tool.poetry.group.docs.dependencies]
mkdocs = "^1.6"
mkdocs-material = "^9.5"
mkdocstrings-python = "^1.10"
[tool.poetry.group.airgap.dependencies]
# Utilities to prepare offline environments (wheelhouse, exports)
build = "^1.2"
wheel = "^0.43"
# Note: `poetry export` comes from poetry-plugin-export, which is a plugin for
# Poetry itself, not a project dependency. Install it alongside Poetry, e.g.:
#   pipx inject poetry poetry-plugin-export   (or: poetry self add poetry-plugin-export)
# Expose a first-class CLI for operators (maps to pkg/cli.py -> main())
[tool.poetry.scripts]
netops = "pkg.cli:main"
netops-validate = "pkg.validate:main"
netops-deploy = "pkg.deploy:main"
# Private indexes: PyPI as default; corporate registry with explicit priority
[[tool.poetry.source]]
name = "pypi"
priority = "primary"  # modern Poetry; the built-in PyPI needs no url (older Poetry used `url = ...` plus the now-removed `default = true`)
[[tool.poetry.source]]
name = "corp-internal"
url = "https://artifactory.example.com/api/pypi/pypi/simple"
priority = "explicit" # only use if dependency explicitly sets `source = "corp-internal"`
Why each part matters (and how it maps to neteng realities)
1) python = "^3.11" — the interpreter contract
You’ve already pinned the interpreter with Pyenv, but declaring it here ensures:
Poetry won’t resolve an environment on an incompatible Python.
CI can enforce the same baseline by reading your constraint (or .python-version).
Teams know which major/minor you support (e.g., 3.11.x), keeping prod, lab, and jump hosts honest.
Treat this like your “global underlay version.” If the repo says ^3.11, that’s the fabric your overlay runs on.
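If you want that contract machine-checked, the caret semantics (^3.11 means >=3.11, <4.0) are easy to sketch for a preflight script. This simplified version handles only major.minor constraints, unlike Poetry's full resolver:

```python
# Simplified caret-constraint check: "^3.11" admits 3.11 up to (but not
# including) 4.0. Poetry's real resolver handles far more general specs.
import sys

def satisfies_caret(constraint: str, version: tuple[int, int]) -> bool:
    major, minor = (int(x) for x in constraint.lstrip("^").split(".")[:2])
    return version[0] == major and version >= (major, minor)

print(satisfies_caret("^3.11", (3, 11)))  # True
print(satisfies_caret("^3.11", (3, 8)))   # False: below the floor
print(satisfies_caret("^3.11", (4, 0)))   # False: caret stops at the next major
print(satisfies_caret("^3.11", sys.version_info[:2]))  # your interpreter
```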
2) Core libraries: httpx, pydantic, rich
These form the platform layer of your automation:
httpx talks to controllers (SD-WAN, telemetry, inventory) and supports async if you evolve there.
pydantic v2 gives you typed models, so payloads/device states don’t drift silently.
rich keeps operator UX readable in change windows (color, tables, progress).
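As a stdlib-only stand-in for the role pydantic plays here, a frozen dataclass with validation shows the shape of a typed device model. The field names and platform whitelist below are invented for the example:

```python
# Illustrative stand-in: typed device models with stdlib dataclasses,
# sketching the role pydantic v2 plays above (pydantic adds coercion,
# JSON schemas, and richer errors). Fields/platforms are invented.
from dataclasses import dataclass

ALLOWED_PLATFORMS = {"junos", "iosxe", "nxos"}

@dataclass(frozen=True)
class Device:
    hostname: str
    platform: str
    port: int = 830  # NETCONF default

    def __post_init__(self) -> None:
        if self.platform not in ALLOWED_PLATFORMS:
            raise ValueError(f"unsupported platform: {self.platform}")
        if not 1 <= self.port <= 65535:
            raise ValueError(f"invalid port: {self.port}")

d = Device(hostname="edge-rtr-01", platform="junos")
print(d)  # Device(hostname='edge-rtr-01', platform='junos', port=830)
```

The payoff is the same either way: a malformed inventory entry fails loudly at model-construction time, not halfway through a change window.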
3) Network libraries: ncclient, scrapli, netmiko, lxml, jinja2
ncclient for NETCONF/YANG edits/queries (Junos, IOS-XR, others).
scrapli for fast, modern SSH when APIs aren’t available.
netmiko complements coverage where needed and for teams already invested in it.
lxml for XML parsing (NETCONF).
jinja2 for config/payload templating (idempotent, testable).
Keep these as top-level intent. Poetry will write the resolved graph (versions + hashes) to poetry.lock. That file is your compiled overlay; reviewed, versioned, and identical across laptops, jump hosts, and CI.
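To show the templating idea from the list above without pulling in jinja2 itself, here is a sketch using the stdlib string.Template as a stand-in (jinja2 adds loops, filters, and inheritance); the interface stanza and values are invented:

```python
# Illustrative: config templating with stdlib string.Template, standing
# in for jinja2 (which adds loops, filters, inheritance). The interface
# stanza and values below are invented for the example.
from string import Template

IFACE_TEMPLATE = Template(
    "interface $name\n"
    " description $descr\n"
    " ip address $addr\n"
)

config = IFACE_TEMPLATE.substitute(
    name="GigabitEthernet0/0/1",
    descr="uplink to core",
    addr="192.0.2.1 255.255.255.252",
)
print(config)
```

Because rendering is a pure function of its inputs, the same template can be unit-tested in CI and reused idempotently across devices.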
4) Extras for vendor features
[tool.poetry.extras] lets consumers install feature bundles, e.g.:
pip install netops-automation[junos] → brings junos-eznc, ncclient, lxml.
pip install netops-automation[iosxe] → brings scrapli, netmiko.
This is perfect when your project is a library/CLI that other teams will consume; extras become modular capabilities akin to enabling a feature set per platform.
Note the optional = true flag on junos-eznc: it lives under [tool.poetry.dependencies] but is only installed when the junos extra is requested.
5) Dependency groups: dev, test, docs, airgap
Groups keep your environments lean and purposeful:
dev: linters/formatters/type-checkers you want in the editor, not on the prod jump host.
test: everything needed by CI for unit/integration tests (async testing, coverage, parallelism).
docs: mkdocs + mkdocstrings for internal documentation portals.
airgap: tooling to build wheels and export a requirements.txt from the lockfile (for bastions or AWX/Controller nodes that remain pip-native).
Usage is explicit:
# Dev box or CI:
poetry install --with dev,test
# Production bastion (runtime only):
poetry install --without dev,test,docs,airgap
# Build an offline cache/wheelhouse (runtime deps live in the implicit "main" group):
poetry export -f requirements.txt --output requirements.lock.txt --only main
pip download -r requirements.lock.txt -d ./wheelhouse
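Before the actual air-gapped window, it is worth verifying the wheelhouse is complete. Here is an illustrative helper; in practice you would feed it the lines of requirements.lock.txt and a directory listing of ./wheelhouse, but the data here is stubbed:

```python
# Illustrative: check that every pinned requirement has at least one
# artifact in the wheelhouse. Wheel filenames normalize dashes to
# underscores, so compare on normalized names. Inputs are stubbed.
def missing_from_wheelhouse(req_lines: list[str], files: list[str]) -> list[str]:
    norm = lambda s: s.lower().replace("-", "_")
    missing = []
    for line in req_lines:
        name = norm(line.split("==")[0].strip())
        if not any(norm(f).startswith(name + "_") for f in files):
            missing.append(line)
    return missing

reqs = ["ncclient==0.6.15", "pytest-cov==5.0.0"]
wheels = ["ncclient-0.6.15-py2.py3-none-any.whl"]
print(missing_from_wheelhouse(reqs, wheels))  # ['pytest-cov==5.0.0']
```

A non-empty result means the restore drill would fail; catch it while you still have internet access.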
6) Scripts: first-class CLIs for operators
Expose operator tasks as commands:
netops → top-level multi-command CLI (e.g., netops validate, netops deploy).
netops-validate → pre-flight checks (reachability, device facts, policy simulation).
netops-deploy → guarded deployment path (prechecks → apply → postchecks) with consistent logging/exit codes.
This replaces ad-hoc shell scripts and makes your automation discoverable. CI and runbooks can call these entry points directly.
7) Private sources with explicit priority
Many enterprises mirror PyPI or host internal packages (e.g., company-sdwan-sdk). Declaring sources:
Leaves PyPI as the default index.
Adds your Artifactory/Nexus as priority = "explicit": Poetry will not fall back to it unless a dependency explicitly names that source, which prevents accidental leakage to internal registries.
Authentication belongs outside pyproject.toml (use Poetry’s config or env vars like POETRY_HTTP_BASIC_CORP_INTERNAL_USERNAME/PASSWORD). Keep secrets out of Git.
The payoff: you can mix public and internal artifacts deterministically, and your builds won’t unpredictably switch sources.
Patterns to keep your file clean and maintainable
Pin intent, not transitive trivia. Let pyproject.toml declare what you mean; let poetry.lock capture what actually resolved (reviewed in PRs).
Name groups after operator contexts. dev/test/docs/airgap map naturally to your workflow stages.
Prefer in-project virtualenvs. poetry config virtualenvs.in-project true keeps .venv/ inside the repo so editors and teammates use the same interpreter path.
Commit .python-version. Even though Poetry declares python = "^3.11", the exact interpreter (e.g., 3.11.9) comes from Pyenv—commit that file so everyone runs the same minor build.
Document the bootstrap. A bootstrap.sh or make bootstrap that does: read .python-version → pyenv install -s → poetry env use → poetry install --with dev,test.
Why this structure scales in real networks
Your stacks don’t stand still. Vendors rev APIs, security patches drop mid-quarter, CI images update. A disciplined pyproject.toml gives you:
Determinism: lockfile + explicit sources stop “surprise Tuesdays.”
Modularity: extras/groups match vendor footprints and environment roles.
Operability: scripts standardize how humans and pipelines invoke automation.
Portability: same definitions on laptops, bastions, and CI, with an export path for pip-only hosts.
Treat the file as you treat a routing policy: intent is readable, compilation is reviewable, and both are versioned. That’s how automation grows from scripts to a platform you trust.
4. Installing Poetry and First-Time Setup (With Pyenv Integration)
The goal here is to make your Poetry setup boring and correct: repeatable on laptops, jump hosts, and CI. We’ll (1) install Poetry in a clean way, (2) force in-project virtualenvs so the environment travels with the repo, (3) bind Poetry to the exact interpreter Pyenv selected, and (4) ship a tiny bootstrap you can reuse across repos.
Think of this as bringing underlay discipline to your tooling: predictable, versioned, and easy to audit.
A. Prereqs: have the right Python ready (Pyenv)
Pick the interpreter you want this repo to run on and make it explicit:
# Example: project will run on Python 3.11.9
pyenv install -s 3.11.9
pyenv local 3.11.9 # writes .python-version in the repo
python -V # should report 3.11.9 via pyenv
Commit .python-version, so teammates and CI align to the same minor build.
B. Install Poetry (choose one clean method)
Recommended (isolate the tool): via pipx
# If you don't have pipx yet
python -m pip install --user pipx
python -m pipx ensurepath
# New shell may be required for PATH to refresh
# Install Poetry in its own venv, cleanly isolated
pipx install poetry
poetry --version
Alternative: official installer
curl -sSL https://install.python-poetry.org | python3 -
# Make sure ~/.local/bin is on your PATH
export PATH="$HOME/.local/bin:$PATH"
poetry --version
macOS users can also brew install poetry if Homebrew is your standard.
Windows engineers: prefer WSL and follow the Linux steps inside WSL; if native, pipx install poetry in PowerShell works too.
C. Make Poetry create in-project virtualenvs
By default Poetry puts virtualenvs in a central cache. For network teams, in-project (./.venv/) is more portable and IDE-friendly. You can enforce this per project (committed) or globally (per user):
Per-project (committed):
poetry config virtualenvs.in-project true --local
This writes poetry.toml to your repo (commit it).
Or globally (per user):
poetry config virtualenvs.in-project true
I also recommend telling Poetry to respect the interpreter you already activated with Pyenv:
# Optional but helpful when you frequently switch Pyenv versions
poetry config virtualenvs.prefer-active-python true
D. Bind Poetry to the Pyenv interpreter
Even with the config above, be explicit the first time you wire a project:
# Point Poetry at the exact interpreter Pyenv selected
poetry env use "$(pyenv which python)"
# Sanity check the environment Poetry will use
poetry env info
poetry run python -V # should match your .python-version
E. Initialize / install the project
New repo:
poetry init # guided; declare python constraint (e.g. ^3.11) and top-level deps
poetry install # creates .venv/ and installs your locked set once you have a lockfile
Existing repo you just cloned:
# Assuming pyproject.toml and poetry.lock are present
poetry env use "$(pyenv which python)"
poetry install --no-interaction
From now on, run tools and scripts through Poetry:
poetry run pytest
poetry run netops deploy
poetry shell # optional: spawn a subshell with the venv activated (on Poetry 2.x, first run: poetry self add poetry-plugin-shell)
F. A reusable, zero-friction bootstrap.sh
Drop this at the repo root, commit it, and tell teammates: “clone → run bootstrap → done.”
#!/usr/bin/env bash
set -euo pipefail
# 1) Ensure the exact interpreter exists (Pyenv)
if [[ -f ".python-version" ]]; then
PYVER="$(cat .python-version)"
pyenv install -s "$PYVER"
else
echo "No .python-version found. Create one with: pyenv local 3.11.9"
exit 1
fi
# 2) Force Poetry to use in-project venv and the Pyenv interpreter
poetry config virtualenvs.in-project true --local
poetry config virtualenvs.prefer-active-python true
# 3) Bind Poetry to Pyenv's python
poetry env use "$(pyenv which python)"
# 4) Install deps (add groups as appropriate for your context)
# dev/test on laptops & CI; runtime-only on jump hosts
if [[ "${INSTALL_PROFILE:-dev}" == "runtime" ]]; then
poetry install --without dev,test,docs,airgap --no-interaction
else
poetry install --with dev,test --no-interaction
fi
# 5) Confirm everything lines up
echo "Interpreter: $(poetry run python -V)"
echo "Venv path: $(poetry env info --path)"
echo "Done. Use 'poetry run <cmd>' or 'poetry shell'."
Usage:
# Developer machine
./bootstrap.sh
# Jump host (runtime-only)
INSTALL_PROFILE=runtime ./bootstrap.sh
G. Optional Makefile (nice ergonomics)
PY := $(shell poetry run which python)

.PHONY: bootstrap
bootstrap:
	./bootstrap.sh

.PHONY: test
test:
	poetry run pytest -q

.PHONY: lint
lint:
	poetry run ruff check .
	poetry run black --check .

.PHONY: run
run:
	poetry run netops deploy
H. Editor and CI hints (avoid “mixed activation”)
VS Code:
Command Palette → Python: Select Interpreter → choose .venv/bin/python.
Optionally add to the repo:
// .vscode/settings.json
{ "python.defaultInterpreterPath": ".venv/bin/python" }
CI (example: GitHub Actions): read the interpreter from .python-version and install via Poetry:
- uses: actions/setup-python@v4
  with:
    python-version-file: .python-version
- uses: snok/install-poetry@v1
- run: poetry install --with test --no-interaction
- run: poetry run pytest
I. Quick operator troubleshooting
Poetry picked the wrong Python:
poetry env use "$(pyenv which python)" → poetry env info.
If still wrong, remove and recreate: poetry env remove --all, then repeat.
No .venv/ created in the repo:
Ensure poetry config virtualenvs.in-project true (either --local or global), then poetry env use "$(pyenv which python)" and poetry install.
Cron/systemd jobs:
Don’t assume shell rc files run. Use absolute paths: /path/to/repo/.venv/bin/python script.py, or poetry run <cmd> with PATH set to include .venv/bin.
Get this right once, and it disappears: Poetry uses the Pyenv interpreter you intended, builds a repo-local environment, and every human and robot that touches the repo can recreate it deterministically. That’s the foundation you want before you layer on vendor SDKs, playbooks, and orchestration logic.
But we're not finished yet! There are still other important topics about Poetry to discuss, which will be covered in future editions of this Packets & Python series.
See you there!
Leonardo Furtado

