
Chapter 19: Building Command-Line Applications with Python
Python excels at creating command-line interface (CLI) applications thanks to its extensive standard library and clean syntax. CLI apps are ideal for scripting, DevOps tools, data wrangling utilities, and developer-facing tools.
In this chapter, we will cover:
- Benefits of CLI apps
- Argument parsing with `argparse` and `click`
- Structuring a CLI app
- Incorporating design patterns (e.g., Command, Strategy)
- Two example projects:
  - Task Manager CLI (with the Command Pattern)
  - File Processor CLI (with the Strategy Pattern)
19.1 Why Build CLI Apps in Python?
- Easy to build and share scripts
- Fast execution and low overhead
- Great for automation and tooling
- Portable across platforms
- Easier to test compared to GUIs
19.2 Parsing Arguments with argparse
Python's built-in `argparse` module is a powerful way to parse command-line arguments.
Example: Simple Calculator
```python
import argparse

parser = argparse.ArgumentParser(description="Simple CLI Calculator")
parser.add_argument("a", type=int, help="First number")
parser.add_argument("b", type=int, help="Second number")
parser.add_argument("--operation", choices=["add", "sub"], default="add")

args = parser.parse_args()

if args.operation == "add":
    print(args.a + args.b)
else:
    print(args.a - args.b)
```
Run it like:
```bash
python calc.py 5 3 --operation sub
```
19.3 A More Ergonomic CLI with Click
For more ergonomic CLI apps, use `click`, which supports nested commands, colorful help output, and input validation.
Example: Greet
```python
import click

@click.command()
@click.option('--name', prompt='Your name', help='The person to greet.')
def hello(name):
    click.echo(f'Hello, {name}!')

if __name__ == '__main__':
    hello()
```
Install with:
```bash
pip install click
```
19.4 Design Patterns in CLI Apps
Let’s explore Command and Strategy patterns through practical CLI apps.
19.5 Example Project: Task Manager CLI (Command Pattern)
We will create a CLI app to manage tasks using the Command Pattern, which encapsulates commands as objects.
Command Pattern Overview
- Encapsulates a request as an object
- Allows undo/redo, macro commands
- Decouples sender from receiver
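Before the project, a quick illustration of the undo capability. This is a minimal, hypothetical sketch (the `AppendCommand` and `history` names are ours, not part of the Task Manager below): each command captures enough state to reverse itself, and a stack of executed commands makes undo a `pop()` away.

```python
class Command:
    def execute(self):
        raise NotImplementedError

    def undo(self):
        raise NotImplementedError


class AppendCommand(Command):
    """Appends a value to a list; undo removes it again."""

    def __init__(self, items, value):
        self.items = items
        self.value = value

    def execute(self):
        self.items.append(self.value)

    def undo(self):
        self.items.remove(self.value)


items, history = [], []
cmd = AppendCommand(items, "task A")
cmd.execute()
history.append(cmd)       # remember executed commands
history.pop().undo()      # undo the most recent one
print(items)              # []
```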
Project Structure: Task Manager
```bash
task_manager/
│
├── main.py
├── commands/
│   ├── base.py
│   ├── add.py
│   └── list.py
└── tasks.json
```
task_manager/commands/base.py
```python
class Command:
    def execute(self):
        raise NotImplementedError("Execute must be implemented.")
```
task_manager/commands/add.py
```python
import json

from commands.base import Command


class AddTaskCommand(Command):
    def __init__(self, task, file='tasks.json'):
        self.task = task
        self.file = file

    def execute(self):
        try:
            with open(self.file, 'r') as f:
                data = json.load(f)
        except FileNotFoundError:
            data = []

        data.append({"task": self.task})
        with open(self.file, 'w') as f:
            json.dump(data, f, indent=2)
        print(f"Task added: {self.task}")
```
task_manager/commands/list.py
```python
import json

from commands.base import Command


class ListTasksCommand(Command):
    def __init__(self, file='tasks.json'):
        self.file = file

    def execute(self):
        try:
            with open(self.file, 'r') as f:
                data = json.load(f)
            for i, item in enumerate(data):
                print(f"{i + 1}. {item['task']}")
        except FileNotFoundError:
            print("No tasks found.")
```
task_manager/main.py
```python
import argparse

from commands.add import AddTaskCommand
from commands.list import ListTasksCommand

parser = argparse.ArgumentParser()
parser.add_argument("command", choices=["add", "list"])
parser.add_argument("--task", help="Task description (for add command)")

args = parser.parse_args()

if args.command == "add":
    if not args.task:
        parser.error("--task is required for the add command")
    cmd = AddTaskCommand(task=args.task)
elif args.command == "list":
    cmd = ListTasksCommand()

cmd.execute()
```
task_manager/tasks.json
```json
[]
```
Run it
```bash
python main.py add --task "Write chapter on CLI apps"
python main.py list
```
19.6 Example Project: File Processor (Strategy Pattern)
We will create a CLI that processes files using pluggable strategies.
Strategy Pattern Overview
- Defines a family of algorithms
- Makes them interchangeable at runtime
- Promotes Open/Closed Principle
Project Structure: File Processor
```bash
file_processor/
│
├── main.py
├── strategies/
│   ├── base.py
│   ├── uppercase.py
│   └── reverse.py
└── input.txt
```
file_processor/strategies/base.py
```python
class ProcessorStrategy:
    def process(self, data):
        raise NotImplementedError()
```
file_processor/strategies/uppercase.py
```python
from strategies.base import ProcessorStrategy


class UppercaseStrategy(ProcessorStrategy):
    def process(self, data):
        return data.upper()
```
file_processor/strategies/reverse.py
```python
from strategies.base import ProcessorStrategy


class ReverseStrategy(ProcessorStrategy):
    def process(self, data):
        return data[::-1]
```
file_processor/main.py
```python
import argparse

from strategies.reverse import ReverseStrategy
from strategies.uppercase import UppercaseStrategy

strategies = {
    "uppercase": UppercaseStrategy,
    "reverse": ReverseStrategy
}

parser = argparse.ArgumentParser()
parser.add_argument("input")
parser.add_argument("output")
parser.add_argument("--strategy", choices=strategies.keys(), required=True)

args = parser.parse_args()

with open(args.input, 'r') as f:
    data = f.read()

strategy = strategies[args.strategy]()
result = strategy.process(data)

with open(args.output, 'w') as f:
    f.write(result)

print(f"File processed using {args.strategy}.")
```
"""
@file: file_processor/input.txt
@showLineNumbers
"""
this is an all lower case text.
Run it
```bash
python main.py input.txt output.txt --strategy uppercase
```
This creates `output.txt` containing the text of `input.txt` transformed by the specified strategy.
19.7 Testing CLI Apps
Use `pytest` with `subprocess` to exercise the app end to end:
```python
import subprocess

def test_task_add_and_list():
    subprocess.run(["python", "main.py", "add", "--task", "Test CLI"], check=True)
    result = subprocess.run(["python", "main.py", "list"],
                            capture_output=True, text=True)
    assert "Test CLI" in result.stdout
```
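If your app is built on `click`, you can skip subprocesses entirely: `click` ships an in-process test helper, `click.testing.CliRunner`. A sketch against the `hello` command from section 19.3 (assuming it lives in a module named `greet`):

```python
from click.testing import CliRunner

from greet import hello  # hypothetical module holding the 19.3 example


def test_hello():
    runner = CliRunner()
    # Passing --name skips the interactive prompt.
    result = runner.invoke(hello, ["--name", "Ada"])
    assert result.exit_code == 0
    assert "Hello, Ada!" in result.output
```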
19.8 Packaging CLI Tools
So far, we’ve written Python scripts that can be run locally with `python script.py`. But if you want your command-line app to be installed and executed like real CLI tools (`git`, `pytest`, `black`), you’ll need to package and distribute it.
Packaging your CLI tool allows users to run it with a simple command, e.g.:
```bash
$ mycli init
$ mycli run --debug
```
instead of:
```bash
$ python path/to/mycli/main.py run --debug
```
Let’s go step by step.
Step 1: Project Layout
A clean project structure helps with packaging:
```bash
mycli/
├── pyproject.toml
├── src/
│   └── mycli/
│       ├── __init__.py
│       ├── cli.py        # CLI entry point
│       └── commands.py   # command implementations
└── tests/
    └── test_commands.py
```
Step 2: Define an Entry Point
Inside cli.py, write your CLI entry point function:
src/mycli/cli.py
```python
import sys


def main():
    if len(sys.argv) < 2:
        print("Usage: mycli [init|run|deploy]")
        return

    command = sys.argv[1]
    if command == "init":
        print("Initializing project...")
    elif command == "run":
        print("Running project...")
    elif command == "deploy":
        print("Deploying project...")
    else:
        print(f"Unknown command: {command}")
```
Step 3: Configure pyproject.toml
Modern Python packaging uses PEP 621 with pyproject.toml.
Here’s a minimal config to turn your CLI app into an installable tool:
pyproject.toml
```toml
[build-system]
requires = ["setuptools>=61.0"]
build-backend = "setuptools.build_meta"

[project]
name = "mycli"
version = "0.1.0"
description = "A simple CLI demo tool"
readme = "README.md"
authors = [{ name = "Your Name", email = "you@example.com" }]
license = { text = "MIT" }
dependencies = []

[project.scripts]
mycli = "mycli.cli:main"
```
The key part is:
```toml
[project.scripts]
mycli = "mycli.cli:main"
```
This tells Python packaging tools that when `mycli` is installed, a console script named `mycli` should be created that calls `mycli.cli.main`.
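Under the hood, the installer drops a small wrapper executable onto your `PATH`. Simplified, it behaves roughly like this (the script pip actually generates differs in details):

```python
# Approximate behavior of the generated `mycli` wrapper script.
import sys

from mycli.cli import main

if __name__ == "__main__":
    sys.exit(main())
```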
Step 4: Build and Install Locally
Use pip or build to test packaging:
```bash
$ pip install --upgrade build
$ python -m build
```
This will create `dist/mycli-0.1.0.tar.gz` and `dist/mycli-0.1.0-py3-none-any.whl`.
Install it locally:
```bash
$ pip install dist/mycli-0.1.0-py3-none-any.whl
```
Now run your tool directly:
```bash
$ mycli run
Running project...
```
Step 5: Distribute
You can publish your tool to PyPI so others can install it via `pip install mycli`.
```bash
$ pip install --upgrade twine
$ twine upload dist/*
```
Step 6: Extras (Best Practices)
- Use `argparse` or `click`: For real-world CLI apps, rely on libraries like `argparse` (in the stdlib) or `click` for parsing flags and options.
- Add a version command: Users expect `mycli --version` (see the sketch below).
- Logging over print: Replace `print` with the `logging` module for configurable verbosity.
- Testing CLI tools: Use `pytest` with `subprocess.run`, or `typer.testing.CliRunner` if you use Typer.
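The version expectation, for instance, is a one-liner with `argparse`'s built-in `version` action; a minimal sketch:

```python
import argparse

parser = argparse.ArgumentParser(prog="mycli")
# action="version" prints the string and exits immediately.
parser.add_argument("--version", action="version", version="%(prog)s 0.1.0")
parser.parse_args()
```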
Conclusion
You’ve learned:
- How to build CLI apps with `argparse` and `click`
- How to use design patterns like Command and Strategy
- How to write reusable, testable CLI utilities
- How to package your tools for reuse
Assignment — Workflow Runner MVP (CLI-first)
Goal
Build an MVP job runner plus job queue and scheduler you can run locally as three cooperating CLI processes. The runner executes containerized tasks (Python or JavaScript) and reports results. The scheduler reads a YAML workflow, builds a simple DAG (no fan-in/out), and enqueues runnable nodes. The queue is a lightweight broker (filesystem or SQLite). All three tools should be usable as standalone CLIs and as daemon-like loops.
Readers have already learned how to structure and package CLI tools and tests; apply those practices here.
Deliverables
You will produce three installable CLI apps (or one multi-command CLI):
- `wf-runner` — executes a single `job_desc` in a container and returns a `job_result`.
- `wf-queue` — provides two queues: `job_request` and `job_result`.
- `wf-scheduler` — parses a YAML workflow into a DAG (single-dependency nodes), enqueues ready jobs, and advances when dependencies complete.
Required CLI commands
Each tool should expose subcommands (`argparse` or `click` is fine; this chapter shows both).
wf-runner
- `init` — probe environment (Docker availability), initialize logs/dirs.
- `validate JOB_DESC.json` — schema check for a job description.
- `prepare JOB_DESC.json` — pre-pull the image, warm caches if needed.
- `run JOB_DESC.json` — run the job and print a JSON `job_result` to stdout.
- `status` — (optional) show the last N results from the log/store.
- `cleanup` — prune temp dirs/artifacts.
- `discover` — print available "plugins" (serializers/queue backends).
wf-queue
- `listen` — start a loop that pulls from `job_request`, launches `wf-runner run`, and pushes a `job_result`.
- `enqueue JOB_DESC.json` — push a job request manually.
- `drain` — (optional) show and clear pending queues.
- `stats` — basic queue metrics.
wf-scheduler
- `validate WORKFLOW.yaml`
- `plan WORKFLOW.yaml` — print a topological-order preview (single-dependency DAG).
- `start WORKFLOW.yaml` — enqueue the initial runnable nodes and keep advancing as results arrive.
Interfaces & Schemas
Job description (JOB_DESC.json)
```json
{
  "id": "job-123",
  "image": "python:3.10-slim",
  "cmd": ["python", "script.py"],
  "mounts": [{"host": "/abs/path/myproj", "container": "/work", "mode": "ro"}],
  "workdir": "/work",
  "env": {"FOO": "bar"},
  "timeout_sec": 60,
  "resources": {"cpus": 1, "memory": "256m"}
}
```
Job result (stdout of `wf-runner run` and the payload stored in `job_result`)
```json
{
  "id": "job-123",
  "status": "SUCCEEDED | FAILED | TIMEOUT",
  "exit_code": 0,
  "stdout": "…",
  "stderr": "…",
  "started_at": "2025-08-18T00:12:25Z",
  "ended_at": "2025-08-18T00:12:27Z",
  "runtime_ms": 2010,
  "artifacts": [{"path": "/work/out.json"}],
  "metadata": {"image": "python:3.10-slim"}
}
```
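The `validate` subcommand can start out as a plain required-keys check before you reach for a JSON Schema library. A minimal stdlib-only sketch (the key set and checks here are illustrative; extend them to match your final schema):

```python
import json
import sys

REQUIRED_KEYS = {"id", "image", "cmd"}  # illustrative minimal core


def validate_job_desc(path):
    """Return a list of problems; an empty list means the file passed."""
    with open(path) as f:
        desc = json.load(f)
    problems = [f"missing key: {key}" for key in REQUIRED_KEYS - desc.keys()]
    if "cmd" in desc and not isinstance(desc["cmd"], list):
        problems.append("cmd must be a list of strings")
    return problems


if __name__ == "__main__":
    issues = validate_job_desc(sys.argv[1])
    print("OK" if not issues else "\n".join(issues))
    sys.exit(1 if issues else 0)
```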
Workflow (WORKFLOW.yaml)
```yaml
workflow: example
nodes:
  fetch:
    job: jobs/fetch.json
    depends_on: null
  transform:
    job: jobs/transform.json
    depends_on: fetch
  save:
    job: jobs/save.json
    depends_on: transform
```
Constraint: single dependency per node (no fan-in/out).
Queue Contract
The Job Queue is a transparent message pipeline. It does not schedule jobs, track state, or retry failures. Its only responsibility is to accept job requests, allow workers to dequeue them, and accept results that schedulers (or other consumers) can read back.
Backends
Pick one lightweight local backend:
- Filesystem (default)
  - Two dirs: `./queue/job_request/` and `./queue/job_result/`.
  - Each message is a JSON file named `{id}.json`.
  - Use atomic writes (e.g., `os.rename`) to avoid partial reads (see the sketch below).
- SQLite (optional)
  - Tables: `job_request(id TEXT PK, payload TEXT, ts INTEGER)` and `job_result(id TEXT PK, payload TEXT, ts INTEGER)`.
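The atomic-write advice deserves a concrete sketch. Writing to a temporary name and renaming into place means a listener polling the directory never observes a half-written file (`enqueue_fs` is our name, not part of the spec):

```python
import json
import os


def enqueue_fs(queue_dir, message):
    """Atomically drop a message envelope into a filesystem queue dir."""
    os.makedirs(queue_dir, exist_ok=True)
    final_path = os.path.join(queue_dir, f"{message['id']}.json")
    tmp_path = final_path + ".tmp"
    with open(tmp_path, "w") as f:
        json.dump(message, f)
    # os.rename is atomic on POSIX when source and target share a
    # filesystem, so readers matching *.json never see a partial file.
    os.rename(tmp_path, final_path)


enqueue_fs("./queue/job_request", {"id": "job-123", "kind": "job_request",
                                   "payload": {}, "ts": 1755497545230})
```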
Message Envelope (same for request/result)
json{"id": "job-123","kind": "job_request | job_result","payload": {},"ts": 1755497545230}
Commands
- `wf-queue enqueue --queue job_request payload.json` → enqueue a job request
- `wf-queue dequeue --queue job_request` → dequeue the oldest job request
- `wf-queue enqueue --queue job_result result.json` → enqueue a job result
- `wf-queue dequeue --queue job_result` → dequeue the oldest job result
Daemon Mode
- `wf-queue listen` starts a simple loop that exposes enqueue/dequeue operations.
- Workers and schedulers interact by invoking enqueue/dequeue against this daemon (or directly against the backend).
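Daemon mode only needs a shutdown flag flipped by a signal handler; a minimal sketch, with `process_one_message` standing in for your dequeue-and-dispatch step:

```python
import signal
import time

shutting_down = False


def _handle_sigint(signum, frame):
    global shutting_down
    shutting_down = True  # finish the current iteration, then exit


def process_one_message():
    """Placeholder: dequeue one job_request and dispatch it."""


def listen(poll_interval=1.0):
    signal.signal(signal.SIGINT, _handle_sigint)
    while not shutting_down:
        process_one_message()
        time.sleep(poll_interval)
    print("Shut down cleanly.")
```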
Scheduler Algorithm (single-dependency DAG)
- Parse the YAML into nodes: `{name, job_path, depends_on}`.
- Maintain state maps: `state[name] ∈ {PENDING, RUNNING, DONE, FAILED}` and `dependency[name]`.
- On `start`:
  - Enqueue all nodes with `depends_on: null`.
  - Subscribe to `job_result`.
  - When a `job_result` for node X arrives:
    - Mark X as `DONE` or `FAILED`.
    - For each node Y whose `depends_on == X` and whose dependencies are all `DONE` (a single parent here), enqueue Y.
- Stop when all nodes are terminal.
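A hedged sketch of that advance logic (function and variable names are ours; `enqueue` stands in for pushing the node's job request onto `job_request`):

```python
def start(nodes, enqueue):
    """Enqueue every root node. `nodes` maps name -> depends_on."""
    state = {name: "PENDING" for name in nodes}
    for name, depends_on in nodes.items():
        if depends_on is None:
            state[name] = "RUNNING"
            enqueue(name)
    return state


def on_result(name, succeeded, nodes, state, enqueue):
    """Advance the DAG when a job_result for `name` arrives."""
    state[name] = "DONE" if succeeded else "FAILED"
    if not succeeded:
        return  # children of a failed node never become runnable
    for child, depends_on in nodes.items():
        # Single-dependency DAG: a child is runnable as soon as
        # its one parent is DONE.
        if depends_on == name and state[child] == "PENDING":
            state[child] = "RUNNING"
            enqueue(child)


# Example chain: fetch -> transform -> save
nodes = {"fetch": None, "transform": "fetch", "save": "transform"}
state = start(nodes, enqueue=print)             # enqueues "fetch"
on_result("fetch", True, nodes, state, print)   # enqueues "transform"
```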
Logging & Packaging
- Use `logging` (not bare prints) with configurable verbosity (`--verbose`, `--quiet`), as recommended earlier in this chapter.
- Package each CLI with a console-script entry point in `pyproject.toml`:

```toml
[project.scripts]
wf-runner = "wf_runner.cli:main"
wf-queue = "wf_queue.cli:main"
wf-scheduler = "wf_scheduler.cli:main"
```
Users can then install and invoke them like real tools (`wf-runner run …`).
Testing Guidance
- Unit: parsing/validation of jobs and workflows; queue I/O adapters; command builders.
- Integration (local):
  - Start `wf-queue listen` in a background process.
  - Run `wf-scheduler start WORKFLOW.yaml`.
  - Assert that `job_result` files/rows appear in the order fetch → transform → save.
- Use `pytest` and `subprocess.run` to drive the CLI binaries, as shown in the chapter.
- For packaging tests, keep the chapter's project layout and entry-point pattern.
Hints & Tips
- Arg parsing: keep subcommands simple (`argparse` is enough; `click`/Typer are optional).
- Safer subprocess: always set `text=True` and `capture_output=True`, and handle timeouts (see the sketch after this list).
- Determinism: seed any randomness and use temp dirs under `./.tmp/`.
- Idempotency: writing `{id}.json` files allows re-runs without duplication.
- Observability: log the selected strategy (sequential/none here; parallel is future work), queue pushes/pops, runner start/stop, and node state changes.
- Daemon mode: implement `--loop` (or a `listen` subcommand) that runs `while True: work(); sleep(…)` and supports clean shutdown on SIGINT.
- Local "multi-process" demo: open three terminals running `wf-queue listen`, `wf-scheduler start WORKFLOW.yaml`, and `tail -f logs/*.log`, or script it with tmux/make.
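The safer-subprocess hint boils down to a small wrapper; a sketch (`run_checked` is our name):

```python
import subprocess


def run_checked(cmd, timeout_sec=60):
    """Run a command with captured text output; convert hangs into errors."""
    try:
        return subprocess.run(cmd, capture_output=True, text=True,
                              timeout=timeout_sec)
    except subprocess.TimeoutExpired:
        print(f"timed out after {timeout_sec}s: {' '.join(cmd)}")
        raise  # let the caller record a TIMEOUT job_result


result = run_checked(["python", "--version"])
print(result.stdout.strip() or result.stderr.strip())
```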
Acceptance Criteria
- Installing the package(s) provides three console commands (or one multi-command tool).
- Running `wf-queue listen`, then `wf-scheduler start demo.yaml` (and optionally a manual `wf-queue enqueue jobs/*.json`), results in completed results appearing in `job_result`, with logs showing the pipeline's progress.
- `wf-runner run` returns a valid `job_result` JSON on stdout even on failure (non-zero `exit_code` plus `status: FAILED`).