1. Introduction: Why Package Management Matters in NetDevOps
In fact, not only in NetDevOps but also (and especially) in anything that involves Python development!
In the traditional world of network engineering, your tools might have been limited to routers, switches, and command-line interfaces. But in modern infrastructure environments, especially those powered by NetDevOps, automation pipelines, and network-as-code practices, Python has become a foundational skill. Yet learning Python syntax or scripting device configs is only part of the equation. If you're not managing your packages properly, you're building automation on quicksand.
Imagine this: you’ve written a script that backs up BGP configurations using Netmiko and parses outputs with TextFSM. It works beautifully on your laptop. You push it to a Git repo, share it with your team, or even try to run it in a production automation container. Suddenly, the script throws cryptic errors:
ModuleNotFoundError: No module named 'netmiko'
TypeError: parse_cli_output() got an unexpected keyword argument
AttributeError: module 'yaml' has no attribute 'safe_load_all'
You start debugging, only to discover your teammates installed different versions of libraries or forgot to install some altogether. Worse yet, your own system silently upgraded a library that broke your code.
This chaos is exactly what Python package management is designed to prevent.
Why You Need It
Package management isn't just a "developer thing." It's a core engineering practice for anyone automating infrastructure. Here’s why it matters, especially in NetDevOps:
Predictability: You can reproduce the same environment on any machine.
Portability: Your scripts work the same on your laptop, in the CI pipeline, or in a container.
Stability: You avoid silent library upgrades that break your scripts.
Collaboration: Your team can run your code without asking, “What do I need to install?”
Security: You can audit exactly what packages you’re using and their versions.
In fact, if you’ve ever seen a file named requirements.txt in someone’s GitHub repo, that’s a signal they’re managing their packages. It’s the Python equivalent of a routing policy manifest: documenting what’s needed to make the system function properly.
So, in the next section, we’ll define what a Python package really is and how it fits into your daily life as a network engineer turned automation professional.
2. What Is a Python Package?
Before we explore package managers like pip and poetry, we need to answer a foundational question: What is a Python package, really?
At a surface level, you might say: “A package is a library someone else wrote, so I don’t have to.” And while that’s not wrong, it’s only the tip of the iceberg. To truly harness the power of packages in network engineering, you need to understand what’s under the hood.
A Python Package Is a Structured Collection of Code
In technical terms, a Python package is a directory containing Python modules, and it usually includes an __init__.py file to tell Python it’s a package. But in the modern context of package distribution (the kind you install with pip), the term “package” expands to include:
The codebase (modules and sub-packages)
Metadata (name, version, author, dependencies)
Installation logic (setup files, build configuration)
Optional scripts, CLI commands, or binaries
When someone publishes a package like netmiko, napalm, or pyntc, they’re packaging up all of this into a distributable unit. Think of it as a precompiled BGP policy that includes not just the route-maps, but also documentation, dependencies, and export/import logic.
Modules vs. Packages: A Quick Distinction
| Term | Description |
|---|---|
| Module | A single .py file containing Python code |
| Package | A directory of related modules, possibly nested, with an __init__.py file |
📦 napalm is a package. Inside it, you’ll find many modules: ios.py, junos.py, base.py, each one encapsulating logic for different platforms.
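To make the module-versus-package distinction concrete, here is a minimal sketch that builds a tiny package on disk and then imports one module from it. The package name netutils and its contents are made up purely for illustration:

```python
import importlib
import pathlib
import sys
import tempfile

# Build a tiny package on disk: a directory holding __init__.py plus one module.
root = pathlib.Path(tempfile.mkdtemp())
pkg = root / "netutils"
pkg.mkdir()
(pkg / "__init__.py").write_text("")  # marks the directory as a package
(pkg / "ping.py").write_text(
    "def is_alive(host):\n"
    "    # Placeholder logic; a real check would send ICMP or open a TCP socket\n"
    "    return bool(host)\n"
)

# Make the package importable, then import the module "ping" from it
sys.path.insert(0, str(root))
ping = importlib.import_module("netutils.ping")
print(ping.is_alive("192.0.2.1"))  # → True
```

The directory is the package; each .py file inside it is a module. That is all pip-installed packages are underneath, plus metadata.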

Network Engineering Analogy
You can think of a Python package like a preconfigured virtual router image:
The package name is like the router hostname.
The version is the firmware version.
The dependencies are the hardware or OS it expects.
The exposed functions and classes are the CLI or API you interact with.
When you pip install netmiko, you’re essentially deploying a software router into your Python environment, ready to establish SSH sessions, configure devices, and parse output. The comparison isn't perfect, but it can help clarify what a package means in this context.
Exploring a Package
Let’s inspect a package you probably use or will soon use: netmiko.
$ pip install netmiko
Now you can do this in a very basic Python script, such as the one below:
from netmiko import ConnectHandler

cisco_device = {
    "device_type": "cisco_ios",
    "ip": "192.0.2.1",
    "username": "admin",
    "password": "admin123",
}

net_connect = ConnectHandler(**cisco_device)
output = net_connect.send_command("show version")
print(output)
All of this functionality was imported from a package; you didn’t have to write a socket handler, an SSH abstraction layer, or a parser. Someone else did it, and Python packaging made it reusable.
Why You Should Care
Python packages make it possible for you, as a network engineer, to:
Use complex tools without reinventing the wheel
Collaborate in teams by standardizing dependencies
Scale your codebase without cluttering your project folder
Consume vendor-neutral libraries like NAPALM, Netmiko, PyEZ, and more
In NetDevOps, packages are the building blocks of your automation system, just like routers and protocols are the building blocks of a physical network.
3. The Problem: Dependency Hell, Version Drift, and Reproducibility
If you've ever tried to run someone else's Python script and been bombarded with errors like ModuleNotFoundError, or worse, unexpected bugs in a library you swear used to work, then you've already experienced the chaos that Python package management is designed to prevent.
Welcome to the dark side of programming: dependency hell.
Dependency Hell
Imagine this:
You clone a colleague’s repository that automates BGP peering validation across multi-vendor routers. You run the script, and immediately:
ImportError: cannot import name 'ConnectHandler' from 'netmiko'
Turns out, your version of netmiko is older than the one used to write the script.
You update netmiko. Now that works, but it breaks your other automation script that used the older version. You now face a choice:
Upgrade and risk breaking everything else, or
Downgrade and never run the new script
You’ve just entered dependency hell, where every decision risks collapsing your automation stack like a house of cards.
Over time, packages evolve:
APIs change
Behavior shifts
Bugs get introduced (and fixed)
If you install your Python packages ad hoc, like this:
pip install napalm
pip install netmiko
Then your environment is non-deterministic; you have no idea what version got installed unless you check manually. A script that worked today might break tomorrow, simply because a library was updated in the background.
In production environments, this is a nightmare. Imagine running an automation job that disables all unused interfaces, only for a new version of the library to misinterpret interface status. Ouch.
Reproducibility: Can You Rebuild Your Tools?
Infrastructure as Code (IaC) taught us that servers and networks must be reproducible; you should be able to rebuild an environment with the same configuration, every single time.
The same rule applies to your Python environment:
What version of each package?
Any platform-specific dependencies?
Any known conflicts?
Will it still work if I deploy it in Docker, CI/CD, or another machine?
If you can’t answer these confidently, your automation stack is brittle.
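One quick way to start answering those questions is to fingerprint your environment from Python itself, using only the standard library. This sketch prints the interpreter version and every installed distribution, sorted so two fingerprints can be diffed:

```python
import platform
from importlib import metadata

# Print an "environment fingerprint": interpreter version plus every
# installed distribution and its version, sorted for easy diffing.
print("python", platform.python_version())
for dist in sorted(metadata.distributions(),
                   key=lambda d: (d.metadata["Name"] or "").lower()):
    print(dist.metadata["Name"], dist.version)
```

Run it on two machines; if the output differs, so will your script's behavior.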
NetDevOps Scenarios Affected by This
| Scenario | What Breaks |
|---|---|
| You install napalm locally | It uses different dependencies from production and fails on Junos |
| You upgrade netmiko | It deprecates parameters used in your config backup script |
| A colleague shares a repo | You can’t reproduce their setup, and it fails with cryptic errors |
| You build a CI/CD pipeline | Your job randomly fails because packages install different versions daily |
This is version drift in action, and it’s a real operational risk.
Why Managing Packages Is Not Optional
Just as you’d never configure BGP without sanity checks, filters, or version control, you shouldn’t write or run Python automation scripts without controlling the software they're built on.
Enter package management tools: pip, virtualenv, pip-tools, poetry, pyenv, uv, and others. These exist not just to install software, but to:
Pin versions
Create isolated environments
Share dependency definitions with your team
Rebuild environments exactly as they were tested
4. The Solution: Package Managers and Virtual Environments
To tame the chaos of dependency hell, eliminate version drift, and ensure reproducibility across systems, Python gives us a set of tools, not just to install packages, but to do so safely, predictably, and repeatedly. Chief among these are package managers and virtual environments.
Together, they form the foundational layer of any modern Python workflow, especially in NetDevOps and infrastructure automation projects where stability, repeatability, and isolation are paramount.
What Is a Package Manager?
A package manager is a tool that helps you:
Install third-party libraries (like netmiko, napalm, jinja2, requests)
Resolve dependencies (if napalm needs requests, it installs that too)
Track versions (you can install a specific version, like netmiko==3.4.0)
Uninstall or upgrade packages as needed
The most common package manager in Python is pip, but there are others like pip-tools, pipx, poetry, and uv, which we’ll cover in future articles.
Without a package manager, you’d have to download packages manually, unzip them, and deal with all their dependencies yourself; a quite hard task for anything beyond trivial scripts.
What Is a Python Package?
So, again, a package is essentially a library of reusable code bundled in a structured way so Python can import and use it. For example:
napalm is a package for multi-vendor configuration management
netmiko is a package for SSH-based device communication
jinja2 helps with templating configs dynamically
pyyaml parses YAML, often used as source-of-truth in automation
All of these live on PyPI (the Python Package Index), think of it as the App Store of Python packages. And pip is your trusted downloader.
But here’s the problem: when you install a package globally on your system, it’s installed for all scripts, across all projects. That can quickly lead to version conflicts, especially if different projects need different versions of the same package.
What Is a Virtual Environment?
A virtual environment is an isolated Python workspace, a self-contained directory that has:
Its own copy of the Python interpreter
Its own site-packages folder (where libraries get installed)
Its own installed packages (independent of your system Python)
When you activate a virtual environment, it adjusts your shell’s PATH so that all pip install operations install packages only inside that environment, and not globally.
This is absolutely essential for project isolation and reproducibility.
Let’s say you have two automation projects:
| Project | Library Needed |
|---|---|
| BGP Linter | netmiko (an older release) |
| L2 Inventory Tool | netmiko (the latest release) |
Without virtual environments, you’d have to pick one version and hope it works for both. With virtual environments, each project can pin its own exact versions with zero conflicts.
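As a sketch, one-environment-per-project looks like the following. The directory names are arbitrary, and mktemp is used here only to keep the demo self-contained; in practice you would place the environments inside each project folder:

```shell
# Create one isolated environment per project (directory names are arbitrary)
base=$(mktemp -d)
python3 -m venv "$base/bgp-linter"
python3 -m venv "$base/l2-inventory"

# Each environment carries its own interpreter and its own pip, so each
# project can pin its own netmiko version without touching the other.
"$base/bgp-linter/bin/python" --version
"$base/l2-inventory/bin/pip" --version
```

Anything installed with one environment's pip lands only in that environment's site-packages.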
Example: Automating Device Status Checks
Let’s say you’re building a script that connects to 20 routers and verifies interface status using netmiko.
You start a new project directory:
mkdir net-checker
cd net-checker
You create a virtual environment:
python3 -m venv venv
source venv/bin/activate # On Windows, use `venv\Scripts\activate`
Now your shell is using the local virtual environment; you’ll see (venv) in your prompt.
You install your dependencies locally:
pip install netmiko rich
You write your script, test it, version control it, and capture your dependencies with pip freeze > requirements.txt.
Now, anyone on your team can clone your repo, run python3 -m venv venv && source venv/bin/activate && pip install -r requirements.txt, and reproduce the same behavior. No surprises. No drift.
Virtual Environments in CI/CD, Labs, and Production
When you're running Python automation in:
GitHub Actions
GitLab CI/CD pipelines
Jenkins pipelines
Docker containers
Lab VMs
Production validation systems
...virtual environments become a non-negotiable best practice. They ensure that your job or script always runs against the same exact dependencies it was written and tested with, which is vital for consistency, auditing, and uptime.
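In a CI pipeline, the same pattern appears as a job that rebuilds the environment from requirements.txt on every run. A rough sketch for GitHub Actions (the job name and script name are hypothetical):

```yaml
jobs:
  validate:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-python@v5
        with:
          python-version: "3.12"
      - name: Install pinned dependencies and run the job
        run: |
          python -m venv venv
          source venv/bin/activate
          pip install -r requirements.txt
          python validate_configs.py
```

Because the environment is recreated from the pinned file each time, every run executes against the same dependency versions.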
Even within a single laptop or VM, it’s smart to isolate:
One environment for dev
Another for testing
Another for production
A Quick Summary
Tool | Purpose |
|---|---|
| Installs, upgrades, and manages packages from PyPI |
| Creates isolated environments for project-specific Python + packages |
| A text file listing the exact packages and versions your project needs |
This is the trinity of Python dependency management, and it’s your first line of defense against future chaos.
Coming next, we’ll explore pip in depth and how to use it, when to pin versions, and how to build a reproducible automation environment the right way.
5. Meet Pip: The Default Python Package Installer
If you’ve ever run a Python automation script on a network lab or production toolchain, chances are you’ve already used pip. It’s the de facto standard for installing Python packages from PyPI, and it ships by default with most modern Python distributions.
But using pip effectively and safely means going beyond the basics. It’s not just about installing a package; it’s about managing versions, reproducibility, and consistency across teams and environments.
Let’s explore pip from first principles as if you’re deploying a NetDevOps tool in production, and your automation must work reliably across CI, lab, and field systems.
What Exactly Is Pip?
pip stands for “Pip Installs Packages”. It connects to the Python Package Index (PyPI), downloads packages (and their dependencies), and installs them in your current environment, whether that’s a virtualenv, pipx environment, or system-wide Python.
Behind the scenes, a package on PyPI is just a .whl (wheel) or .tar.gz (source archive) that includes:
Python code modules (.py files)
A metadata file (pyproject.toml or setup.py)
Dependency information
Optional compiled binaries (for speed or C extensions)
When you run:
pip install netmiko
pip:
Connects to https://pypi.org
Finds the latest release of netmiko
Downloads it (and all required dependencies: paramiko, scp, pyserial, etc.)
Installs everything into your environment’s site-packages folder
Makes it immediately importable in your scripts
Basic Usage: Installing and Managing Packages
Let’s walk through core pip operations, with examples grounded in network engineering use cases.
Install a Package
You’re building a script to SSH into routers and retrieve LLDP neighbors. You need netmiko (or paramiko, or another, but let's stick with the basics here):
pip install netmiko
You can also install multiple packages in a single command:
pip install netmiko napalm rich
Install a Specific Version
You’re testing an older script that requires netmiko 3.4.0.
pip install netmiko==3.4.0
You can also use version constraints:
pip install "netmiko>=3.3,<4.0"
This is critical in long-lived NetDevOps environments where APIs change, and you can’t afford surprises during deployment.
Save Requirements to a File
Once your environment is working, save it to a file:
pip freeze > requirements.txt
This captures exact package versions (including dependencies) like:
netmiko==4.2.0
paramiko==3.4.0
scp==0.14.5
Anyone with this file can recreate your environment:
pip install -r requirements.txt
This is a foundational practice for:
Team collaboration
CI/CD pipelines
Production rollouts
Disaster recovery
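The value of pinning can be demonstrated with a small standard-library sketch that compares the pins in a requirements file against what is actually installed. The file and its deliberately wrong pin are fabricated just for the demo:

```python
import tempfile
from importlib import metadata
from pathlib import Path

# Write a fake requirements file with a deliberately wrong pin so the
# drift branch fires; in real life you'd read your project's own file.
req = Path(tempfile.mkdtemp()) / "requirements.txt"
req.write_text("pip==0.0.1\n")

for line in req.read_text().splitlines():
    name, _, pinned = line.partition("==")
    try:
        installed = metadata.version(name)
    except metadata.PackageNotFoundError:
        print(f"{name}: NOT INSTALLED")
        continue
    if installed == pinned:
        print(f"{name}: OK ({installed})")
    else:
        print(f"{name}: DRIFT (pinned {pinned}, installed {installed})")
```

A check like this, run at the start of a job, turns silent version drift into a loud, immediate failure.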
Uninstall a Package
Need to clean up?
pip uninstall napalm
Or remove everything from a virtual environment:
pip freeze | xargs pip uninstall -y
This is especially useful when troubleshooting conflicts or repaving an environment from scratch.
Inspect What’s Installed
You can inspect your environment:
pip list
To see outdated packages:
pip list --outdated
Or show the details of a specific package, including its direct dependencies:
pip show netmiko
This is useful when you need to audit what’s running in a script used in prod.
Pip in NetDevOps: A Scenario
Let’s say you’re building a Topology Mapper that pulls data via netmiko, parses it with textfsm, and visualizes it with rich.
Here’s how you’d approach it with pip:
Create your project folder
Create a virtual environment: python -m venv venv
Activate it: source venv/bin/activate
Install tools: pip install netmiko textfsm rich
Write your script
Save requirements: pip freeze > requirements.txt
Commit everything to Git
Share with your team or run in CI:
python -m venv venv
source venv/bin/activate
pip install -r requirements.txt
python topology_mapper.py
Reproducibility: ✅
Isolation: ✅
Collaboration-ready: ✅
⚠️ Common Pitfalls with Pip
| Pitfall | What to Do Instead |
|---|---|
| Installing globally (sudo pip install) | Use virtual environments or pipx |
| Not pinning versions | Use requirements.txt with pinned versions |
| Mixing system packages (apt, yum) with pip | Keep Python dependencies in virtualenvs |
| Installing ad hoc and never recording it | Freeze dependencies and version control them |
It all starts here: knowing how to wield pip cleanly and confidently.
Pip Alternatives (Coming Soon)
In upcoming articles, we’ll cover more advanced and safer workflows using tools like:
pip-tools for dependency locking
pipx for running CLI tools in isolated environments
poetry for full project/package management
pyenv for managing multiple Python versions
uv, an all-in-one solution
twine for publishing your own tools
The list goes beyond the above… There are many topics you might want to explore further as you continue advancing in your Python development skills. Things like pipx, conda, twine, pdm, rye, and many others. We'll cover those when the time is right and the overall progress of this Packets & Python series makes sense to discuss them.
6. Limitations of pip and venv: Where the Defaults Fall Short
While pip and venv are the foundational tools in the Python ecosystem, and often good enough for beginners or small projects, they start to show real limitations as your work grows in complexity, size, and scale. Understanding where they fall short helps you recognize when it’s time to adopt more advanced tools like pip-tools, poetry, or pyenv.
1. No Native Support for Managing Python Versions (venv)
Perhaps the most significant gap is that venv does not manage Python versions. It will always create a virtual environment using the currently active Python interpreter on your system. This is problematic in real-world scenarios where:
Your system has multiple projects that need different Python versions (e.g., one on 3.10, another on 3.12).
You’re working in a team or production environment and need reproducibility across machines.
You want to test your code on different versions of Python easily.
Solution: Tools like pyenv or asdf, or even the all-in-one uv, are designed to manage multiple Python runtimes cleanly.
2. Environment Drift and Dependency Chaos (pip)
pip installs packages and their dependencies without pinning exact versions unless you explicitly lock them. This leads to a problem known as dependency drift:
On Monday, you installed a package that required requests>=2.0. You got version 2.31.
On Friday, your teammate installs the same package, but now requests is at version 2.32, and it subtly breaks your script.
You debug something that only fails in prod, but not locally, because of slightly different dependency trees.
Solution: Use pip-tools or poetry to lock all transitive dependencies and ensure deterministic builds.
3. Lack of Dependency Resolution Logic (pip)
Unlike modern dependency managers, pip's resolver (although improved) can still be less intelligent when dealing with complex dependency trees:
You may end up with conflicts between packages (e.g., one requires urllib3<2, another requires urllib3>=2).
There is no built-in solver that can help suggest workarounds or alternate resolutions.
It doesn’t tell you why a specific dependency was pulled in or what version was chosen.
Solution: Tools like poetry, uv, or pip-tools offer smarter resolution and better insight.
4. Manual Workflow Overhead
Using pip and venv alone often requires manual work and scripting to do things like:
Create virtual environments.
Activate/deactivate environments across shells and OSes.
Export dependency lists with pip freeze.
Separate dev vs prod dependencies (requirements.txt doesn't support this natively).
This leads to custom shell scripts and inconsistent setups across teams.
Solution: Tools like poetry provide opinionated workflows that reduce the need for duct tape and CLI boilerplate.
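A common workaround with plain pip is to keep two requirements files, one layered on the other via pip's -r include directive. The file names and pins below are illustrative, not prescriptive:

```text
# requirements.txt — production dependencies only
netmiko==4.2.0

# requirements-dev.txt — pulls in prod, then adds dev-only tools
-r requirements.txt
black==24.3.0
pytest==8.1.1
```

A dev machine runs pip install -r requirements-dev.txt to get everything, while CI and production install only requirements.txt. It works, but it is exactly the kind of hand-rolled convention that poetry formalizes.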
5. No Built-In Publishing or Metadata Handling
If you want to:
Package your script or module
Upload it to PyPI (internal or public)
Add proper versioning and metadata
pip won’t help you. You’ll need to write your own setup.py or pyproject.toml, use setuptools, twine, and manage your build logic manually.
Solution: poetry or flit provide one-command publishing, version bumping, and packaging via declarative files.
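For reference, the declarative side of that workflow is a pyproject.toml. A minimal, hypothetical example for a small automation tool might look like this:

```toml
[build-system]
requires = ["setuptools>=61"]
build-backend = "setuptools.build_meta"

[project]
name = "net-checker"            # hypothetical project name
version = "0.1.0"
description = "Interface status checks over SSH"
requires-python = ">=3.10"
dependencies = [
    "netmiko>=4.0,<5.0",
]
```

Tools like poetry and flit generate and maintain a file like this for you, so you rarely write it from scratch by hand.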
6. Doesn’t Isolate CLI Tools (pip)
Sometimes, you want to install a CLI utility (like httpie, ansible, or black) without polluting your project’s virtual environment or your system-wide Python installation.
pip doesn’t support isolated, ephemeral CLI installs.
You might end up with versioning conflicts between tool dependencies and project dependencies.
Solution: Use pipx, which installs Python-based CLIs in isolated environments and lets you run them globally.
The Takeaway for Network Engineers
As a network engineer venturing deeper into NetDevOps or automation pipelines, understanding these limitations is essential. Tools like pip and venv give you the building blocks, but they don’t scale well alone. You need version control, reproducibility, and automation in your tooling just as much as you need them in your configurations and topologies.
Don't just automate your networks. Automate your Python environments too.
7. Why Package Hygiene Is Engineering Hygiene
At first glance, Python package management might seem like a purely software engineering concern, one more layer of abstraction on top of “real” network automation work. But in reality, your ability to manage dependencies cleanly, reproducibly, and safely is one of the clearest signs of engineering maturity in NetDevOps workflows.
Let’s explore why.
Dependency Hygiene = Operational Hygiene
Imagine a script you wrote six months ago to back up device configs across a hundred edge routers. It was built on netmiko, parsed with textfsm, and logged data using rich.
It worked flawlessly.
Then one day, a junior teammate pulls that script down, runs pip install netmiko, and suddenly nothing works. You’re debugging strange issues like:
NetmikoTimeoutException now includes new arguments
Your textfsm parser fails due to an upstream schema change
Logging formats differ slightly, breaking your downstream parser
What happened?
You lost control of your environment.
This isn't a programming problem so much as a systems engineering discipline: ensuring that what you ship works exactly the same way for everyone, every time.
Treat Your Automation Like a Platform
Network engineers have always been deeply concerned with reliability, consistency, and repeatability, just in different forms: routing convergence, hardware standardization, MTBF, and failover design.
Python package management brings those same concerns to the automation layer.
A well-managed Python environment should:
Be self-contained (via virtualenv or pipx)
Be version-locked (via requirements.txt or lockfiles)
Be repeatable (same outcome, every time, in CI or prod)
Be documented (how to install, run, update)
If these characteristics sound familiar, it’s because they mirror the traits of a robust network architecture. You don’t just want it to “work”: you want it to stay working, even as the world around it changes.
NetDevOps Is More Than Scripting
Learning to use pip, virtualenv, requirements.txt, and, later, tools like pip-tools, pyenv, poetry, or uv is part of your journey from network scripter to network software engineer.
You’re no longer just automating one-off tasks but instead building reusable platforms, collaborating with teammates, integrating with CI/CD pipelines, packaging your tools for others, and maybe even publishing your own Python packages someday.
The difference between a brittle automation stack and a scalable, sharable, reliable platform? Often, it's how you manage your dependencies.
What’s Next?
In the next chapters of this series, we’ll build on this foundation by introducing:
requirements.txt and constraints files
Dependency locking with pip-tools
Executing isolated tools with pipx
Python version management with pyenv and structured project management with poetry
Fast and secure dependency resolution with uv
Publishing to PyPI with twine
Each tool plays a role in the Python engineering lifecycle, and you’ll learn when and how to apply them in NetDevOps workflows.
But for now, take a moment to internalize this:
Clean automation starts with clean environments.
Reproducibility isn’t optional: it’s table stakes.
And pip and venv, humble as they may seem, are the first vital steps toward lasting automation.
See you in the next chapter!
Leonardo Furtado