
Chapter 16: Python Design Patterns
Design patterns are tried-and-tested solutions to common software design problems. They provide structure and best practices, allowing you to write cleaner, more scalable, and more maintainable code.
This chapter introduces core design patterns in Python, with real-world examples and use cases for each.
16.1 Why Design Patterns Matter
- Improve code reusability and readability
- Solve recurring problems in a structured way
- Help with team collaboration via shared terminology
- Ease the transition from design to implementation
16.2 Categories of Design Patterns
| Category | Purpose |
|---|---|
| Creational | Object creation logic |
| Structural | Relationships between objects |
| Behavioral | Communication between objects |
16.3 Creational Patterns
Creational design patterns deal with object-creation mechanisms, aiming to make a system independent of how its objects are created, composed, and represented. Instead of instantiating classes directly (with new in other languages, or ClassName() in Python), these patterns provide flexible ways to delegate the instantiation process. This helps manage complexity, especially when objects require intricate setup, need to be reused, or when the system should remain loosely coupled to specific classes.

Common examples include Singleton (ensures only one instance of a class exists), Factory Method (delegates instantiation to subclasses), Abstract Factory (creates families of related objects), Builder (constructs complex objects step by step), and Prototype (creates objects by cloning existing ones). They are commonly used in frameworks, dependency injection systems, UI toolkits, and applications that need configurable or extensible object-creation workflows. In practice, creational patterns improve flexibility, promote reusability, and simplify maintenance by separating object construction from object use.
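Prototype is the only pattern in that list without its own section below. A minimal sketch (the `DocumentTemplate` class here is hypothetical) typically leans on the standard `copy` module:

```python
import copy

class DocumentTemplate:
    """Prototype: new documents are produced by cloning a configured template."""
    def __init__(self, fonts=None, styles=None):
        self.fonts = fonts or ["Arial"]
        self.styles = styles or {"margin": 2.0}

    def clone(self) -> "DocumentTemplate":
        # Deep copy so clones don't share mutable state with the prototype
        return copy.deepcopy(self)

base = DocumentTemplate()
report = base.clone()
report.styles["margin"] = 1.5   # customizing the clone...
print(base.styles["margin"])    # ...leaves the prototype untouched: 2.0
```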
Singleton Pattern
Ensures a class has only one instance, and provides a global point of access to it.
```python
class Singleton:
    _instance = None

    def __new__(cls):
        if not cls._instance:
            cls._instance = super().__new__(cls)
        return cls._instance

a = Singleton()
b = Singleton()
print(a is b)  # True
```
Use case: Logging, configuration managers, database connections.
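For use cases like configuration managers and database connections, the instance is often created from multiple threads. A common variant (a sketch, not required by the basic pattern) guards creation with a lock:

```python
import threading

class ThreadSafeSingleton:
    _instance = None
    _lock = threading.Lock()

    def __new__(cls):
        # Double-checked locking: only synchronize on first creation
        if cls._instance is None:
            with cls._lock:
                if cls._instance is None:
                    cls._instance = super().__new__(cls)
        return cls._instance

a = ThreadSafeSingleton()
b = ThreadSafeSingleton()
print(a is b)  # True
```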
Factory Pattern
Creates objects without specifying the exact class. For instance, in a workflow management system you often have many task types (HTTP call, SQL query, Spark job, Docker task…). A factory method lets the scheduler/loader create the correct concrete class from a spec (YAML/JSON/UI) without hard‑coding class names everywhere.
```python
from abc import ABC, abstractmethod

class Task(ABC):
    @abstractmethod
    def run(self): ...

# Minimal concrete tasks so the factory's output is actually runnable
class HttpTask(Task):
    def __init__(self, **params): self.params = params
    def run(self): print("Running HTTP task", self.params)

class SqlTask(Task):
    def __init__(self, **params): self.params = params
    def run(self): print("Running SQL task", self.params)

class SparkTask(Task):
    def __init__(self, **params): self.params = params
    def run(self): print("Running Spark task", self.params)

class TaskFactory:
    _registry = {
        "http": HttpTask,
        "sql": SqlTask,
        "spark": SparkTask,
    }

    @classmethod
    def create(cls, spec: dict) -> Task:
        kind = spec["type"]
        return cls._registry[kind](**spec["params"])

task1 = TaskFactory.create({"type": "http", "params": {}})
task1.run()
task2 = TaskFactory.create({"type": "sql", "params": {}})
task2.run()
```
Abstract Factory Pattern
Abstract Factory provides an interface for creating families of related objects without specifying their concrete classes. It’s useful when your code must remain agnostic to the specific concrete types it instantiates, but you still need these objects to be compatible with each other.
Example: A Car Parts Factory
A car, like any complex machine, is made up of various parts. You need a consistent family of parts—engine, wheels, interior, infotainment, battery/ECU—where the pieces must be compatible with each other and with the specific model/trim/market. That coordination is exactly where Abstract Factory shines.
With Abstract Factory, you define an interface that creates an entire family of related parts (create_engine, create_wheels, create_infotainment, …). Each concrete factory represents a model/trim (e.g., Model3Factory, CorollaFactory) and guarantees that all produced parts belong to the same family and work together. The client (assembler) never hardcodes which concrete parts to use; it receives a factory and assembles a car from whatever parts that factory yields.
1) Define part interfaces
```python
from abc import ABC, abstractmethod
from dataclasses import dataclass

# Part interfaces
class Engine(ABC):
    @abstractmethod
    def spec(self) -> str: ...

class Wheels(ABC):
    @abstractmethod
    def spec(self) -> str: ...

class Infotainment(ABC):
    @abstractmethod
    def spec(self) -> str: ...
```
2) Concrete parts per model
```python
# Tesla Model 3 parts
class Model3Engine(Engine):
    def spec(self) -> str:
        return "Dual-motor electric, 258 kW, 75 kWh pack"

class Model3Wheels(Wheels):
    def spec(self) -> str:
        return "19-inch aero wheels, EV-rated tires"

class Model3Infotainment(Infotainment):
    def spec(self) -> str:
        return "17-inch center screen, Tesla OS"

# Toyota Corolla parts
class CorollaEngine(Engine):
    def spec(self) -> str:
        return "1.8L I4 hybrid, 103 kW combined"

class CorollaWheels(Wheels):
    def spec(self) -> str:
        return "16-inch alloy wheels, all-season tires"

class CorollaInfotainment(Infotainment):
    def spec(self) -> str:
        return "8-inch touchscreen, Toyota Audio Multimedia"
```
3) Abstract Factory for “families of parts”
```python
class PartsFactory(ABC):
    @abstractmethod
    def create_engine(self) -> Engine: ...

    @abstractmethod
    def create_wheels(self) -> Wheels: ...

    @abstractmethod
    def create_infotainment(self) -> Infotainment: ...
```
4) Concrete factories per car model (family)
```python
class Model3Factory(PartsFactory):
    def create_engine(self) -> Engine:
        return Model3Engine()
    def create_wheels(self) -> Wheels:
        return Model3Wheels()
    def create_infotainment(self) -> Infotainment:
        return Model3Infotainment()

class CorollaFactory(PartsFactory):
    def create_engine(self) -> Engine:
        return CorollaEngine()
    def create_wheels(self) -> Wheels:
        return CorollaWheels()
    def create_infotainment(self) -> Infotainment:
        return CorollaInfotainment()
```
5) The assembler (client) stays model-agnostic
```python
@dataclass
class Car:
    model: str
    engine: Engine
    wheels: Wheels
    infotainment: Infotainment

class CarAssembler:
    def __init__(self, factory: PartsFactory, model_name: str):
        self.factory = factory
        self.model_name = model_name

    def assemble(self) -> Car:
        engine = self.factory.create_engine()
        wheels = self.factory.create_wheels()
        infotainment = self.factory.create_infotainment()
        return Car(
            model=self.model_name,
            engine=engine,
            wheels=wheels,
            infotainment=infotainment,
        )

# Usage
car1 = CarAssembler(Model3Factory(), "Tesla Model 3").assemble()
car2 = CarAssembler(CorollaFactory(), "Toyota Corolla").assemble()

print(car1.model, "|", car1.engine.spec(), "|", car1.wheels.spec(), "|", car1.infotainment.spec())
print(car2.model, "|", car2.engine.spec(), "|", car2.wheels.spec(), "|", car2.infotainment.spec())
```
Benefits of the Abstract Factory in this scenario:
- Compatibility & Consistency: A Model 3’s wheels, battery pack, and infotainment head unit match by construction.
- Easy swaps: Change the whole family at once (e.g., export market vs domestic market) by swapping factories.
- Scaling: Add new models/variants by adding one new factory, not editing existing logic everywhere.
- Testing: Provide a `TestPartsFactory` or `MockPartsFactory` for deterministic builds in unit tests (see the sketch below).
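As a sketch of that testing idea (the stub classes below are hypothetical and reuse the `Engine`, `Wheels`, `Infotainment`, `PartsFactory`, and `CarAssembler` definitions above):

```python
class StubEngine(Engine):
    def spec(self) -> str:
        return "stub engine"

class StubWheels(Wheels):
    def spec(self) -> str:
        return "stub wheels"

class StubInfotainment(Infotainment):
    def spec(self) -> str:
        return "stub infotainment"

class TestPartsFactory(PartsFactory):
    """Deterministic factory for unit tests: every part is a cheap stub."""
    def create_engine(self) -> Engine:
        return StubEngine()
    def create_wheels(self) -> Wheels:
        return StubWheels()
    def create_infotainment(self) -> Infotainment:
        return StubInfotainment()

# The assembler under test never knows it received fakes
test_car = CarAssembler(TestPartsFactory(), "TestCar").assemble()
assert test_car.engine.spec() == "stub engine"
```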
A quick variant example: “Performance” trims
Add a new factory without touching existing ones:
```python
class Model3PerformanceWheels(Wheels):
    def spec(self) -> str:
        return "20-inch performance wheels, summer tires"

class Model3PerformanceFactory(Model3Factory):
    def create_wheels(self) -> Wheels:
        return Model3PerformanceWheels()  # engine & infotainment inherited
```
Swap it in:
```python
perf = CarAssembler(Model3PerformanceFactory(), "Tesla Model 3 Performance").assemble()
```
Everything stays compatible by construction.
Builder Pattern
When a “thing” has many optional parts, must be assembled in steps, or requires validation between parts, a Builder separates construction from representation. Instead of a single constructor with dozens of parameters (a “telescoping constructor”), the Builder exposes small, readable steps (often fluent), and a final build() that returns the finished product.
When to use Builder (vs Factory / Abstract Factory)
- Factory Method: picks which subclass to create based on input (focus: selection).
- Abstract Factory: creates families of related objects (engines, tires, dashboards) that should work well together (focus: cohesive product families).
- Builder: assembles one complex object step-by-step, often with optional parts, constraints, or order-sensitive assembly (focus: construction process).
You’ll often see Abstract Factory + Builder together: the abstract factory supplies compatible parts, while the builder assembles them into a finished car.
A simple Builder for cars
Let’s say a Car can have many optional features: engine, transmission, tires, infotainment, color, and safety package. Some combinations must be validated (e.g., Performance engine requires Sport tires).
```python
from dataclasses import dataclass
from typing import Optional

# Part value objects

@dataclass(frozen=True)
class Engine:
    name: str
    hp: int

@dataclass(frozen=True)
class Transmission:
    type: str  # "manual" | "automatic"

@dataclass(frozen=True)
class Tires:
    name: str
    rating: str  # "touring" | "sport"

@dataclass(frozen=True)
class Infotainment:
    screen_in: float
    supports_carplay: bool

# The product

@dataclass(frozen=True)
class Car:
    model: str
    engine: Engine
    transmission: Transmission
    tires: Tires
    color: str
    infotainment: Optional[Infotainment] = None
    safety_pkg: Optional[str] = None  # "standard" | "advanced" | None


class CarBuilder:
    def __init__(self, model: str):
        self._model = model
        self._engine: Optional[Engine] = None
        self._transmission: Optional[Transmission] = None
        self._tires: Optional[Tires] = None
        self._color: Optional[str] = None
        self._infotainment: Optional[Infotainment] = None
        self._safety_pkg: Optional[str] = None

    # Fluent steps
    def with_engine(self, name: str, hp: int):
        self._engine = Engine(name, hp)
        return self

    def with_transmission(self, type_: str):
        self._transmission = Transmission(type_)
        return self

    def with_tires(self, name: str, rating: str):
        self._tires = Tires(name, rating)
        return self

    def painted(self, color: str):
        self._color = color
        return self

    def with_infotainment(self, screen_in: float, supports_carplay: bool = True):
        self._infotainment = Infotainment(screen_in, supports_carplay)
        return self

    def with_safety(self, pkg: str):
        self._safety_pkg = pkg
        return self

    # Validation lives here
    def _validate(self):
        if not all([self._engine, self._transmission, self._tires, self._color]):
            raise ValueError("Engine, transmission, tires, and color are required")

        # Example cross-part constraints:
        if self._engine.hp >= 350 and self._tires.rating != "sport":
            raise ValueError("High-HP build requires sport tires")

        if self._transmission.type == "manual" and self._engine.name == "EV":
            raise ValueError("Manual transmission not available for EV")

    def build(self) -> Car:
        self._validate()
        return Car(
            model=self._model,
            engine=self._engine,              # type: ignore[arg-type]
            transmission=self._transmission,  # type: ignore[arg-type]
            tires=self._tires,                # type: ignore[arg-type]
            color=self._color,                # type: ignore[arg-type]
            infotainment=self._infotainment,
            safety_pkg=self._safety_pkg,
        )
```
Usage:
```python
sport_sedan = (
    CarBuilder("Falcon S")
    .with_engine("V6 Turbo", 380)
    .with_transmission("automatic")
    .with_tires("Eagle F1", "sport")
    .painted("Midnight Blue")
    .with_infotainment(12.0, True)
    .with_safety("advanced")
    .build()
)
```
Director (optional)
The Director is an optional companion to the Builder rather than a separate pattern: if you have repeated recipes (e.g., “base economy”, “performance pack”), a Director encodes the build steps so callers don’t have to repeat them.
```python
class CarDirector:
    def build_economy(self, model: str) -> Car:
        return (
            CarBuilder(model)
            .with_engine("I4", 150)
            .with_transmission("automatic")
            .with_tires("AllSeason", "touring")
            .painted("Silver")
            .with_safety("standard")
            .build()
        )

    def build_performance(self, model: str) -> Car:
        return (
            CarBuilder(model)
            .with_engine("V6 Turbo", 380)
            .with_transmission("automatic")
            .with_tires("Eagle F1", "sport")
            .painted("Red")
            .with_infotainment(12.0)
            .with_safety("advanced")
            .build()
        )
```
Builder + Abstract Factory: families of parts + assembly
Abstract Factory ensures compatible families of parts (e.g., Eco vs Performance). The Builder then assembles them into a car.
```python
from abc import ABC, abstractmethod

# ----- Abstract Factory for parts -----
class PartsFactory(ABC):
    @abstractmethod
    def create_engine(self) -> Engine: ...
    @abstractmethod
    def create_transmission(self) -> Transmission: ...
    @abstractmethod
    def create_tires(self) -> Tires: ...

class EcoPartsFactory(PartsFactory):
    def create_engine(self) -> Engine:
        return Engine("I4 Hybrid", 180)
    def create_transmission(self) -> Transmission:
        return Transmission("automatic")
    def create_tires(self) -> Tires:
        return Tires("EcoGrip", "touring")

class PerformancePartsFactory(PartsFactory):
    def create_engine(self) -> Engine:
        return Engine("V8 Supercharged", 520)
    def create_transmission(self) -> Transmission:
        return Transmission("automatic")
    def create_tires(self) -> Tires:
        return Tires("TrackMax", "sport")

# ----- Builder that can accept factory-provided parts -----
class FactoryAwareCarBuilder(CarBuilder):
    def with_parts_from(self, factory: PartsFactory):
        self._engine = factory.create_engine()
        self._transmission = factory.create_transmission()
        self._tires = factory.create_tires()
        return self

# Usage:
eco_car = (
    FactoryAwareCarBuilder("Falcon E")
    .with_parts_from(EcoPartsFactory())
    .painted("Pearl White")
    .with_safety("standard")
    .build()
)

track_car = (
    FactoryAwareCarBuilder("Falcon R")
    .with_parts_from(PerformancePartsFactory())
    .painted("Racing Yellow")
    .with_infotainment(10.0)
    .with_safety("advanced")
    .build()
)
```
Here the Abstract Factory guarantees consistent, compatible part families; the Builder controls assembly order/validation and optional features.
16.4 Structural Patterns
Structural patterns describe how classes and objects can be combined to form larger, more complex structures while keeping them flexible, reusable, and efficient. They favor composition over inheritance, which often leads to cleaner and more extensible designs.
Adapter Pattern
The Adapter Pattern allows incompatible interfaces to work together. Think of it as a “translator” between two systems.
Example: Notification
Imagine you’re building a system that needs to send notifications.
Your app expects a Notifier interface, but you have multiple third-party services with very different APIs (e.g., Slack, Email, SMS).
Instead of rewriting your app for each provider, you write adapters to normalize them to a common interface.
Step 1: Define a Common Interface
```python
class Notifier:
    def send(self, message: str):
        raise NotImplementedError
```
Step 2: Third-Party APIs (Incompatible Interfaces)
```python
# Pretend this is a library you can't change
class SlackAPI:
    def post_message(self, channel: str, text: str):
        print(f"[Slack] #{channel}: {text}")

class EmailAPI:
    def send_email(self, to: str, subject: str, body: str):
        print(f"[Email] To:{to} | {subject}: {body}")
```
Notice:
- Slack wants `(channel, text)`
- Email wants `(to, subject, body)`
- Neither matches `Notifier.send(message)`.
Step 3: Create Adapters
```python
class SlackAdapter(Notifier):
    def __init__(self, slack_api: SlackAPI, channel: str):
        self.slack_api = slack_api
        self.channel = channel

    def send(self, message: str):
        self.slack_api.post_message(self.channel, message)


class EmailAdapter(Notifier):
    def __init__(self, email_api: EmailAPI, recipient: str):
        self.email_api = email_api
        self.recipient = recipient

    def send(self, message: str):
        subject = "Notification"
        self.email_api.send_email(self.recipient, subject, message)
```
Step 4: Client Code (No Changes!)
```python
def notify_all(notifiers: list[Notifier], message: str):
    for notifier in notifiers:
        notifier.send(message)

# Usage
slack = SlackAdapter(SlackAPI(), channel="dev-team")
email = EmailAdapter(EmailAPI(), recipient="admin@example.com")

notifiers = [slack, email]
notify_all(notifiers, "Adapter Pattern makes integrations easy!")
```
Output
```bash
[Slack] #dev-team: Adapter Pattern makes integrations easy!
[Email] To:admin@example.com | Notification: Adapter Pattern makes integrations easy!
```
Why is this useful?
- Decouples your app from third-party APIs.
- You can swap providers without changing your core logic.
- Works in real-life systems: integrating payment gateways, APIs, cloud services, etc.
- Client code (`notify_all`) doesn’t care where the message goes — Slack, Email, SMS, or something new (an SMS adapter is sketched below).
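For instance, adding an SMS provider later only requires one more adapter; `SmsAPI` below is a hypothetical third-party client, not a real library:

```python
# Hypothetical third-party SMS client with yet another interface
class SmsAPI:
    def push(self, phone_number: str, body: str):
        print(f"[SMS] {phone_number}: {body}")

class SmsAdapter(Notifier):
    def __init__(self, sms_api: SmsAPI, phone_number: str):
        self.sms_api = sms_api
        self.phone_number = phone_number

    def send(self, message: str):
        self.sms_api.push(self.phone_number, message)

# notify_all() works unchanged with the new provider
notify_all([SmsAdapter(SmsAPI(), "+1-555-0100")], "New provider, same interface!")
```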
Decorator Pattern
The Decorator Pattern lets you dynamically add behavior to an object without modifying its class. Think of it like “wrapping” an object with extra functionality.
Example: File Reader
Imagine you’re building a file reader system. You want to support basic file reading, but also be able to:
- Encrypt/Decrypt the data
- Compress/Decompress the data
- Log whenever a file is accessed
Instead of stuffing all of that into one FileReader, you build decorators.
Step 1: Define a Common Interface
```python
class DataSource:
    def read(self) -> str:
        raise NotImplementedError
```
Step 2: Concrete Implementation
```python
class FileDataSource(DataSource):
    def __init__(self, filename: str):
        self.filename = filename

    def read(self) -> str:
        with open(self.filename, "r") as f:
            return f.read()
```
Step 3: Base Decorator
```python
class DataSourceDecorator(DataSource):
    def __init__(self, wrappee: DataSource):
        self.wrappee = wrappee

    def read(self) -> str:
        return self.wrappee.read()
```
This ensures all decorators behave like a DataSource.
Step 4: Concrete Decorators
```python
class EncryptionDecorator(DataSourceDecorator):
    def read(self) -> str:
        data = self.wrappee.read()
        return "".join(chr(ord(c) + 1) for c in data)  # simple shift "encryption"


class CompressionDecorator(DataSourceDecorator):
    def read(self) -> str:
        data = self.wrappee.read()
        return data.replace(" ", "")  # naive "compression"


class LoggingDecorator(DataSourceDecorator):
    def read(self) -> str:
        print(f"[LOG] Reading from {self.wrappee.__class__.__name__}")
        return self.wrappee.read()
```
Step 5: Client Code
```python
# Suppose "example.txt" contains: "hello world"
source = FileDataSource("example.txt")

# Add decorators dynamically: compress first, then encrypt, then log
decorated = LoggingDecorator(
    EncryptionDecorator(
        CompressionDecorator(source)
    )
)

print(decorated.read())
```
Output
```bash
[LOG] Reading from EncryptionDecorator
ifmmpxpsme   # "hello world" compressed, then shift-encrypted
```
Why is this useful?
- You can stack behaviors dynamically at runtime.
- Each decorator adds a feature (logging, compression, encryption) without touching `FileDataSource`.
- You can reconfigure: maybe just logging, maybe logging + compression, etc. (see the snippet below).
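As a small illustration of that reconfiguration (reusing the classes above and assuming `example.txt` exists):

```python
source = FileDataSource("example.txt")

# Just logging
logged_only = LoggingDecorator(source)

# Logging + compression, no encryption
logged_compressed = LoggingDecorator(CompressionDecorator(source))

print(logged_only.read())        # "[LOG] ..." then the raw contents
print(logged_compressed.read())  # "[LOG] ..." then the contents without spaces
```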
Composite Pattern
The Composite Pattern lets you treat individual objects (leaves) and groups of objects (composites) uniformly. You define a common interface so a client can call the same methods on a single item or a whole tree of items. It’s perfect for hierarchies like file systems, UI widgets, BOMs (bill of materials), and workflow step groups.
Example 1: Car Parts BOM (Cost & Weight Aggregation)
Goal: compute total cost and total weight of a car from nested assemblies (engine, chassis, wheels…), where each assembly can contain parts or other assemblies.
```python
from abc import ABC, abstractmethod
from typing import List

# ----- 1) Component -----
class CarComponent(ABC):
    @abstractmethod
    def total_cost(self) -> float: ...
    @abstractmethod
    def total_weight(self) -> float: ...
    @abstractmethod
    def describe(self, indent: int = 0) -> str: ...

# ----- 2) Leaf -----
class Part(CarComponent):
    def __init__(self, name: str, cost: float, weight: float):
        self.name = name
        self._cost = cost
        self._weight = weight

    def total_cost(self) -> float:
        return self._cost

    def total_weight(self) -> float:
        return self._weight

    def describe(self, indent: int = 0) -> str:
        pad = " " * indent
        return f"{pad}- Part: {self.name} | cost=${self._cost:.2f}, weight={self._weight:.1f}kg"

# ----- 3) Composite -----
class Assembly(CarComponent):
    def __init__(self, name: str):
        self.name = name
        self._children: List[CarComponent] = []

    def add(self, component: CarComponent) -> None:
        self._children.append(component)

    def remove(self, component: CarComponent) -> None:
        self._children.remove(component)

    def total_cost(self) -> float:
        return sum(c.total_cost() for c in self._children)

    def total_weight(self) -> float:
        return sum(c.total_weight() for c in self._children)

    def describe(self, indent: int = 0) -> str:
        pad = " " * indent
        lines = [f"{pad}+ Assembly: {self.name}"]
        for c in self._children:
            lines.append(c.describe(indent + 2))
        return "\n".join(lines)

# ---- Usage ---------------------------------------------------------
if __name__ == "__main__":
    # Leaves
    piston = Part("Piston", 40.0, 1.2)
    spark_plug = Part("Spark Plug", 8.0, 0.1)
    block = Part("Engine Block", 500.0, 90.0)
    wheel = Part("Wheel", 120.0, 12.0)

    # Sub-assemblies
    cylinder = Assembly("Cylinder")
    cylinder.add(piston)
    cylinder.add(spark_plug)

    engine = Assembly("Engine")
    engine.add(block)
    engine.add(cylinder)

    wheels = Assembly("Wheel Set")
    for _ in range(4):
        wheels.add(wheel)

    # Top-level assembly (car)
    car = Assembly("Car")
    car.add(engine)
    car.add(wheels)

    print(car.describe())
    print(f"\nTOTAL COST: ${car.total_cost():.2f}")
    print(f"TOTAL WEIGHT: {car.total_weight():.1f} kg")
```
Why this is useful
- You can nest as deep as needed.
- The client doesn’t care if it’s a `Part` or an `Assembly`; it calls `total_cost()` either way.
- Adding/removing parts doesn’t require changing the aggregation logic.
Example 2: Workflow Engine — Grouping Tasks
Goal: compose tasks into sequences (or even trees) and execute them with a single run() call. Each task returns a result; groups aggregate results.
```python
from abc import ABC, abstractmethod
from typing import Any, List

# 1) Component
class Task(ABC):
    @abstractmethod
    def run(self) -> Any: ...

# 2) Leaf Task
class PrintTask(Task):
    def __init__(self, message: str):
        self.message = message

    def run(self) -> str:
        # Side-effect could be logging, HTTP call, etc.
        output = f"[PrintTask] {self.message}"
        print(output)
        return output

# Another leaf
class AddTask(Task):
    def __init__(self, a: int, b: int):
        self.a, self.b = a, b

    def run(self) -> int:
        return self.a + self.b

# 3) Composite (sequence of tasks)
class TaskGroup(Task):
    def __init__(self, name: str):
        self.name = name
        self._children: List[Task] = []

    def add(self, task: Task) -> None:
        self._children.append(task)

    def remove(self, task: Task) -> None:
        self._children.remove(task)

    def run(self) -> List[Any]:
        results = []
        print(f"[TaskGroup] Starting: {self.name}")
        for t in self._children:
            results.append(t.run())
        print(f"[TaskGroup] Finished: {self.name}")
        return results

# ---- Usage ---------------------------------------------------------
if __name__ == "__main__":
    t1 = PrintTask("Validate input")
    t2 = AddTask(40, 2)
    t3 = PrintTask("Persist to DB")

    sub_pipeline = TaskGroup("Preprocess")
    sub_pipeline.add(t1)
    sub_pipeline.add(t2)

    pipeline = TaskGroup("Main Workflow")
    pipeline.add(sub_pipeline)
    pipeline.add(t3)

    all_results = pipeline.run()
    print("Results:", all_results)
```
Why this is useful
- A single interface (`Task.run`) for both simple tasks and grouped tasks.
- You can nest groups and plug them together to form complex workflows.
- Easy to extend: add parallel groups, conditional groups, retries, etc., without changing client code (a parallel-group sketch follows).
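As a sketch of that extensibility, a parallel group can implement the same `Task` interface; `ParallelTaskGroup` below is an illustrative addition, not part of the example above:

```python
from concurrent.futures import ThreadPoolExecutor

class ParallelTaskGroup(TaskGroup):
    """Runs child tasks concurrently but still looks like a single Task."""
    def run(self):
        print(f"[ParallelTaskGroup] Starting: {self.name}")
        with ThreadPoolExecutor() as pool:
            results = list(pool.map(lambda t: t.run(), self._children))
        print(f"[ParallelTaskGroup] Finished: {self.name}")
        return results

fanout = ParallelTaskGroup("Fan-out")
fanout.add(AddTask(1, 2))
fanout.add(AddTask(3, 4))
print(fanout.run())  # [3, 7]
```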
Proxy Pattern
The Proxy Pattern provides a surrogate or placeholder object that controls access to another object. Instead of calling the real object directly, clients interact with the proxy, which decides how and when to delegate requests.
This is especially useful when:
- You want to add access control (authorization, rate limiting).
- You want to lazy-load heavy resources (database connections, APIs).
- You want to add caching or logging without modifying the real object.
Example: Secured Task Execution
Imagine a workflow engine that executes tasks. Some tasks may require special authorization or restricted access (e.g., “Approve Payment”).
```python
# workflow/proxy_pattern.py
from abc import ABC, abstractmethod


class Task(ABC):
    """Abstract base class for workflow tasks."""

    @abstractmethod
    def execute(self, user: str) -> str:
        """Execute the task with the given user context."""
        raise NotImplementedError


class RealTask(Task):
    """The real implementation of a workflow task."""

    def __init__(self, name: str) -> None:
        self.name = name

    def execute(self, user: str) -> str:
        return f"Task '{self.name}' executed by {user}..."


class TaskProxy(Task):
    """Proxy for Task that enforces role-based access."""

    def __init__(self, real_task: RealTask, allowed_roles: list[str]) -> None:
        self._real_task = real_task
        self._allowed_roles = allowed_roles

    def execute(self, user: str, role: str | None = None) -> str:
        if role not in self._allowed_roles:
            return f"Access denied for {user} with role={role}!"
        return self._real_task.execute(user)


if __name__ == "__main__":
    approve_payment = RealTask("Approve Payment")
    proxy = TaskProxy(approve_payment, allowed_roles=["Manager", "Admin"])

    print(proxy.execute("Alice", role="Employee"))
    print(proxy.execute("Bob", role="Manager"))
```
Output:
```bash
Access denied for Alice with role=Employee!
Task 'Approve Payment' executed by Bob...
```
Why Proxy Works Well Here
- The client code (the workflow engine) doesn’t know whether it’s using a real task or a proxy.
- Security checks (roles) are separated from business logic.
- You can easily extend proxies for logging, caching, or monitoring without changing `RealTask` (a caching sketch follows).
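A caching proxy follows the same shape. The sketch below is illustrative and reuses `Task` and `RealTask` from the example; it memoizes results per user without touching `RealTask`:

```python
class CachingTaskProxy(Task):
    """Proxy that caches results of expensive task executions."""
    def __init__(self, real_task: RealTask) -> None:
        self._real_task = real_task
        self._cache: dict[str, str] = {}

    def execute(self, user: str) -> str:
        if user not in self._cache:
            # Delegate to the real task only the first time per user
            self._cache[user] = self._real_task.execute(user)
        return self._cache[user]

report_task = RealTask("Generate Report")
cached = CachingTaskProxy(report_task)
print(cached.execute("Alice"))  # computed by RealTask
print(cached.execute("Alice"))  # served from the cache
```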
16.5 Behavioral Patterns
Behavioral design patterns focus on how objects interact and communicate. They define responsibilities, control flow, and message passing between objects.
Whereas creational patterns deal with object creation and structural patterns deal with object composition, behavioral patterns ensure that work gets done in flexible and reusable ways.
Chain of Responsibility
The Chain of Responsibility (CoR) pattern is a behavioral design pattern that allows a request to be passed along a chain of handlers, where each handler decides whether to process it or pass it to the next handler.
- Problem it solves: Avoids hardcoding request handling logic into one giant method. Instead, responsibility is spread across independent handlers.
- When to use:
- Processing pipelines (e.g., car assembly steps).
- Event handling systems.
- Request validation / middleware (like in web servers).
- Workflow orchestration (different actions depending on context).
Example: Workflow Engine
In a Workflow Engine a request moves through multiple processors — authentication, validation, execution, logging. Each processor either handles or forwards the request.
```python
from abc import ABC, abstractmethod


class WorkflowHandler(ABC):
    def __init__(self, next_handler=None):
        self.next_handler = next_handler

    @abstractmethod
    def handle(self, request: dict) -> dict:
        pass


class AuthHandler(WorkflowHandler):
    def handle(self, request: dict) -> dict:
        if not request.get("authenticated", False):
            raise Exception("User not authenticated!")
        print("Authentication passed")
        return self.next_handler.handle(request) if self.next_handler else request


class ValidationHandler(WorkflowHandler):
    def handle(self, request: dict) -> dict:
        if "query" not in request:
            raise Exception("Invalid request: missing query!")
        print("Request validated")
        return self.next_handler.handle(request) if self.next_handler else request


class ExecutionHandler(WorkflowHandler):
    def handle(self, request: dict) -> dict:
        request["results"] = ["case-1", "case-2"]
        print("Workflow executed, results attached")
        return request


if __name__ == "__main__":
    chain = AuthHandler(ValidationHandler(ExecutionHandler()))

    request = {"authenticated": True, "query": "find cases"}
    response = chain.handle(request)
    print("Final Response:", response)
```
Output:
```bash
Authentication passed
Request validated
Workflow executed, results attached
Final Response: {'authenticated': True, 'query': 'find cases', 'results': ['case-1', 'case-2']}
```
All the handlers in the chain execute sequentially, each passing the request along; the final response is the transformed request.
Observer Pattern
The Observer Pattern is a behavioral design pattern where an object, called the Subject, maintains a list of dependents, called Observers, and automatically notifies them of state changes.
- Problem it solves: Keeps objects loosely coupled. The subject doesn’t need to know about the observers’ implementations, only that they expose an `update` method (or equivalent).
- When to use:
- GUI frameworks (update UI when model changes).
- Event-driven systems (pub/sub).
- Workflow orchestration engines (notify subscribers when a job’s state changes).
- Monitoring systems (alert observers on new events).
Example: Workflow Orchestration Notifications
Imagine a workflow engine where tasks execute, and multiple subsystems (UI, logging, monitoring) need updates whenever a task finishes. Instead of tightly coupling task execution with all those systems, we use the Observer Pattern.
```python
from abc import ABC, abstractmethod


# --- Subject (Publisher) ---
class WorkflowTask:
    def __init__(self, name: str):
        self.name = name
        self._observers = []

    def attach(self, observer: "Observer"):
        self._observers.append(observer)

    def detach(self, observer: "Observer"):
        self._observers.remove(observer)

    def notify(self, status: str):
        for observer in self._observers:
            observer.update(self.name, status)

    def run(self):
        print(f"Running task: {self.name}")
        # Simulate execution
        self.notify("started")
        self.notify("completed")


# --- Observer Interface ---
class Observer(ABC):
    @abstractmethod
    def update(self, task_name: str, status: str):
        pass


# --- Concrete Observers ---
class LoggerObserver(Observer):
    def update(self, task_name: str, status: str):
        print(f"[Logger] Task {task_name} -> {status}")


class UIObserver(Observer):
    def update(self, task_name: str, status: str):
        print(f"[UI] Updating dashboard: Task {task_name} is {status}")


class AlertObserver(Observer):
    def update(self, task_name: str, status: str):
        if status == "completed":
            print(f"[Alert] Task {task_name} finished successfully!")


# --- Usage Example ---
if __name__ == "__main__":
    task = WorkflowTask("Data Ingestion")

    # Attach observers
    task.attach(LoggerObserver())
    task.attach(UIObserver())
    task.attach(AlertObserver())

    # Run task
    task.run()
```
Output:
```bash
Running task: Data Ingestion
[Logger] Task Data Ingestion -> started
[UI] Updating dashboard: Task Data Ingestion is started
[Logger] Task Data Ingestion -> completed
[UI] Updating dashboard: Task Data Ingestion is completed
[Alert] Task Data Ingestion finished successfully!
```
Key Benefits:
- Loose coupling — Subject knows nothing about observers’ internal logic.
- Dynamic subscription — Observers can subscribe/unsubscribe at runtime (see the snippet below).
- Scalability — Multiple observers can react to the same event independently.
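The dynamic-subscription point is just `attach`/`detach` at runtime; a small snippet reusing the classes above:

```python
task = WorkflowTask("Nightly Export")
logger = LoggerObserver()

task.attach(logger)
task.run()           # logger receives "started" and "completed"

task.detach(logger)  # unsubscribe at runtime
task.run()           # the task still runs, but no observer notifications fire
```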
Strategy Pattern
The Strategy Pattern is a behavioral design pattern that defines a family of algorithms, encapsulates each one, and makes them interchangeable. The client code can choose which strategy to use at runtime, without changing the logic of the client itself.
- Problem it solves: Avoids hard-coding a specific algorithm into a class, allowing the algorithm to be swapped out dynamically.
- When to use:
- Choosing different scheduling strategies (e.g., parallel vs sequential).
- Switching between different pricing models (e.g., flat rate vs tiered).
- Selecting different sorting algorithms (e.g., quicksort vs mergesort).
Example: Workflow Engine (Parallel vs Sequential Execution)
Imagine a workflow orchestration system where tasks can be executed sequentially or in parallel. Instead of hardcoding execution logic into the workflow, we define a Strategy interface and multiple implementations.
```python
from abc import ABC, abstractmethod
import asyncio
import time


# --- Strategy Interface ---
class ExecutionStrategy(ABC):
    @abstractmethod
    def execute(self, tasks):
        pass


# --- Concrete Strategies ---
class SequentialExecution(ExecutionStrategy):
    def execute(self, tasks):
        print("Running tasks sequentially...")
        start = time.perf_counter()
        results = []
        for task in tasks:
            results.append(task())
        elapsed = time.perf_counter() - start
        return results, elapsed


class ParallelExecution(ExecutionStrategy):
    def execute(self, tasks):
        print("Running tasks in parallel with asyncio...")

        async def runner():
            start = time.perf_counter()
            coroutines = [asyncio.to_thread(task) for task in tasks]
            results = await asyncio.gather(*coroutines)
            elapsed = time.perf_counter() - start
            return results, elapsed

        return asyncio.run(runner())


# --- Context (Workflow Engine) ---
class WorkflowEngine:
    def __init__(self, strategy: ExecutionStrategy):
        self.strategy = strategy

    def set_strategy(self, strategy: ExecutionStrategy):
        self.strategy = strategy

    def run(self, tasks):
        return self.strategy.execute(tasks)


# --- Example Tasks ---
def task_a():
    print("Task A running...")
    time.sleep(1)
    return "Result A"

def task_b():
    print("Task B running...")
    time.sleep(2)
    return "Result B"

def task_c():
    print("Task C running...")
    time.sleep(1)
    return "Result C"


# --- Usage Example ---
if __name__ == "__main__":
    tasks = [task_a, task_b, task_c]

    engine = WorkflowEngine(SequentialExecution())
    seq_results, seq_time = engine.run(tasks)
    print(f"Sequential Results: {seq_results}, Time: {seq_time:.2f}s")

    # Reuse the same engine by changing the strategy
    engine.set_strategy(ParallelExecution())
    par_results, par_time = engine.run(tasks)
    print(f"Parallel Results: {par_results}, Time: {par_time:.2f}s")
```
Output:
```bash
Running tasks sequentially...
Task A running...
Task B running...
Task C running...
Sequential Results: ['Result A', 'Result B', 'Result C'], Time: 4.01s
Running tasks in parallel with asyncio...
Task A running...
Task B running...
Task C running...
Parallel Results: ['Result A', 'Result B', 'Result C'], Time: 2.01s
```
Key Benefits
- Interchangeable execution strategies (sequential vs parallel).
- Open/Closed Principle — new strategies can be added without modifying existing code (sketched below).
- Flexible workflows — engine can switch strategies at runtime.
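To illustrate the Open/Closed point, a new strategy plugs in without modifying `WorkflowEngine`. The thread-pool variant below is an illustrative sketch, not part of the example above:

```python
from concurrent.futures import ThreadPoolExecutor

class ThreadPoolExecution(ExecutionStrategy):
    """Another interchangeable strategy: run tasks in a thread pool."""
    def __init__(self, max_workers: int = 4):
        self.max_workers = max_workers

    def execute(self, tasks):
        print("Running tasks in a thread pool...")
        start = time.perf_counter()
        with ThreadPoolExecutor(max_workers=self.max_workers) as pool:
            results = list(pool.map(lambda task: task(), tasks))
        elapsed = time.perf_counter() - start
        return results, elapsed

engine = WorkflowEngine(ThreadPoolExecution())
results, elapsed = engine.run([task_a, task_b, task_c])
print(f"ThreadPool Results: {results}, Time: {elapsed:.2f}s")
```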
Command Pattern
The Command Pattern encapsulates a request as an object so you can queue, log, undo/redo, or defer operations without the caller needing to know how they’re performed.
- Problem it solves: Decouples what needs to be done (a request) from how/when/where it’s executed.
- When to use:
- You need undo/redo (editors, financial adjustments).
- You want to queue work (background runners, schedulers).
- You want to audit/log actions (compliance, ops).
- You want to script/macro sequences (batch actions).
Roles:
- Command: Interface with `execute()` (optionally `undo()`).
- ConcreteCommand: Implements the request.
- Receiver: The thing that actually does the work.
- Invoker: Triggers the command (can queue, schedule).
- Client: Builds the command and hands it to the invoker.
Example: Workflow Engine Commands (Run, Cancel, Retry) with Queue + Undo
Below is a minimal but useful example that would fit a workflow orchestration engine. We’ll support:
- RunTask, CancelTask, RetryTask
- A CommandBus (Invoker) that can execute immediately or enqueue
- A history stack for undo
- A MacroCommand for batching
```python
from abc import ABC, abstractmethod
from dataclasses import dataclass
from typing import Any, Dict, List, Optional
import queue
import threading
import time


# ----- Receiver ---------------------------------------------------------------
class TaskRunner:
    """Receiver: knows how to run/cancel/retry tasks."""
    def __init__(self):
        self.state: Dict[str, str] = {}  # task_id -> status (e.g., "PENDING", "RUNNING", "DONE", "CANCELLED", "FAILED")

    def run(self, task_id: str) -> str:
        self.state[task_id] = "RUNNING"
        # Simulate work
        time.sleep(0.1)
        self.state[task_id] = "DONE"
        return f"Task {task_id} completed"

    def cancel(self, task_id: str) -> str:
        if self.state.get(task_id) in {"PENDING", "RUNNING"}:
            self.state[task_id] = "CANCELLED"
            return f"Task {task_id} cancelled"
        return f"Task {task_id} not cancellable"

    def retry(self, task_id: str) -> str:
        # If failed, retry; otherwise noop for demo
        if self.state.get(task_id) == "FAILED":
            return self.run(task_id)
        return f"Task {task_id} not in FAILED state"


# ----- Command Interface ------------------------------------------------------
class Command(ABC):
    @abstractmethod
    def execute(self, runner: TaskRunner) -> Any: ...

    def undo(self, runner: TaskRunner) -> None:
        """Optional: not all commands need undo, but we support it when it makes sense."""
        pass


# ----- Concrete Commands ------------------------------------------------------
@dataclass
class RunTaskCommand(Command):
    task_id: str
    _prev_state: Optional[str] = None

    def execute(self, runner: TaskRunner) -> str:
        self._prev_state = runner.state.get(self.task_id)
        return runner.run(self.task_id)

    def undo(self, runner: TaskRunner) -> None:
        # For demo: restore previous state (a lightweight "memento")
        if self._prev_state is None:
            runner.state.pop(self.task_id, None)
        else:
            runner.state[self.task_id] = self._prev_state


@dataclass
class CancelTaskCommand(Command):
    task_id: str
    _prev_state: Optional[str] = None

    def execute(self, runner: TaskRunner) -> str:
        self._prev_state = runner.state.get(self.task_id)
        return runner.cancel(self.task_id)

    def undo(self, runner: TaskRunner) -> None:
        if self._prev_state is None:
            runner.state.pop(self.task_id, None)
        else:
            runner.state[self.task_id] = self._prev_state


@dataclass
class RetryTaskCommand(Command):
    task_id: str
    _prev_state: Optional[str] = None

    def execute(self, runner: TaskRunner) -> str:
        self._prev_state = runner.state.get(self.task_id)
        return runner.retry(self.task_id)

    def undo(self, runner: TaskRunner) -> None:
        if self._prev_state is None:
            runner.state.pop(self.task_id, None)
        else:
            runner.state[self.task_id] = self._prev_state


# ----- Macro (Composite) Command ---------------------------------------------
@dataclass
class MacroCommand(Command):
    commands: List[Command]

    def execute(self, runner: TaskRunner) -> List[Any]:
        results = []
        for cmd in self.commands:
            results.append(cmd.execute(runner))
        return results

    def undo(self, runner: TaskRunner) -> None:
        # Undo in reverse order
        for cmd in reversed(self.commands):
            cmd.undo(runner)


# ----- Invoker: CommandBus ----------------------------------------------------
class CommandBus:
    """Invoker: can execute commands now, or enqueue them for worker threads."""
    def __init__(self, runner: TaskRunner):
        self.runner = runner
        self.history: List[Command] = []
        self.q: "queue.Queue[Command]" = queue.Queue()
        self._stop = threading.Event()
        self._worker: Optional[threading.Thread] = None

    # Immediate execution (returns result)
    def dispatch(self, cmd: Command) -> Any:
        result = cmd.execute(self.runner)
        self.history.append(cmd)
        return result

    # Enqueue for background worker
    def enqueue(self, cmd: Command) -> None:
        self.q.put(cmd)

    def start_worker(self) -> None:
        if self._worker and self._worker.is_alive():
            return

        def worker():
            while not self._stop.is_set():
                try:
                    cmd = self.q.get(timeout=0.1)
                except queue.Empty:
                    continue
                cmd.execute(self.runner)
                self.history.append(cmd)
                self.q.task_done()

        self._worker = threading.Thread(target=worker, daemon=True)
        self._worker.start()

    def stop_worker(self) -> None:
        self._stop.set()
        if self._worker:
            self._worker.join(timeout=1)

    def undo_last(self) -> None:
        if not self.history:
            return
        cmd = self.history.pop()
        cmd.undo(self.runner)


# ----- Usage ------------------------------------------------------------------
if __name__ == "__main__":
    runner = TaskRunner()
    bus = CommandBus(runner)

    # Immediate execution
    print(bus.dispatch(RunTaskCommand("task-1")))     # -> Task task-1 completed
    print(bus.dispatch(CancelTaskCommand("task-1")))  # -> Task task-1 not cancellable (already DONE)

    # Undo the last action (cancel)
    bus.undo_last()
    print("State after undo:", runner.state["task-1"])  # Restored state

    # Batch/macro (e.g., mass operations)
    macro = MacroCommand([
        RunTaskCommand("job-100"),
        RunTaskCommand("job-101"),
        CancelTaskCommand("job-100"),
    ])
    print(bus.dispatch(macro))  # list of results

    # Queue + background worker
    bus.start_worker()
    for i in range(3):
        bus.enqueue(RunTaskCommand(f"job-{i}"))
    bus.q.join()
    bus.stop_worker()

    print("Final state:", runner.state)
```
Output:
```bash
Task task-1 completed
Task task-1 not cancellable
State after undo: DONE
['Task job-100 completed', 'Task job-101 completed', 'Task job-100 not cancellable']
Final state: {'task-1': 'DONE', 'job-100': 'DONE', 'job-101': 'DONE', 'job-0': 'DONE', 'job-1': 'DONE', 'job-2': 'DONE'}
```
Visitor Pattern
The Visitor Pattern is a behavioral design pattern that allows you to add new operations to a group of related objects without modifying their classes.
Instead of embedding multiple operations inside each class, you define a separate visitor object that "visits" elements of your object structure and performs actions on them.
This pattern is especially useful when:
- You have a complex hierarchy of objects (like AST nodes, file system objects, workflow steps).
- You want to separate algorithms from the object structure.
- You want to add new operations without altering existing classes.
Structure
- Element (interface/protocol): defines an `accept(visitor)` method that accepts a visitor.
- Concrete Elements: implement the `accept` method, passing themselves to the visitor.
- Visitor (interface/protocol): declares a set of visit methods, one for each element type.
- Concrete Visitor: implements the operations that should be applied to elements.
Example: An Expression Tree (AST) with Multiple Visitors
We’ll build a small arithmetic expression tree with nodes like Number, Var, Add, and Mul. Then we’ll write three visitors:
- Evaluator: computes the numeric result using a variable environment.
- PrettyPrinter: produces a human-readable string.
- NodeCounter: counts nodes for diagnostics.
1) Node hierarchy (the “elements”)
```python
from dataclasses import dataclass
from abc import ABC, abstractmethod
from typing import Any, Dict

# ----- Element interface ------------------------------------------------------
class Expr(ABC):
    @abstractmethod
    def accept(self, visitor: "Visitor") -> Any:
        ...

# ----- Concrete nodes ---------------------------------------------------------
@dataclass(frozen=True)
class Number(Expr):
    value: float
    def accept(self, visitor: "Visitor") -> Any:
        return visitor.visit_Number(self)

@dataclass(frozen=True)
class Var(Expr):
    name: str
    def accept(self, visitor: "Visitor") -> Any:
        return visitor.visit_Var(self)

@dataclass(frozen=True)
class Add(Expr):
    left: Expr
    right: Expr
    def accept(self, visitor: "Visitor") -> Any:
        return visitor.visit_Add(self)

@dataclass(frozen=True)
class Mul(Expr):
    left: Expr
    right: Expr
    def accept(self, visitor: "Visitor") -> Any:
        return visitor.visit_Mul(self)
```
Each node implements `accept(self, visitor)` and forwards control to a type-specific `visitor.visit_<ClassName>(self)` method. That’s the double dispatch: the runtime type of the node picks which visitor method to run.
2) Visitor base class with safe fallback
```python
class Visitor(ABC):
    """Base visitor with a safe fallback."""
    def generic_visit(self, node: Expr) -> Any:
        raise NotImplementedError(f"No visit method for {type(node).__name__}")

    # Optional: generic dispatcher if a node forgets to override accept()
    def visit(self, node: Expr) -> Any:
        meth_name = f"visit_{type(node).__name__}"
        meth = getattr(self, meth_name, self.generic_visit)
        return meth(node)
```
Our nodes call `visit_*` directly. The `visit()` helper is handy if you have nodes that don’t implement `accept()` (or for internal recursion).
3) Concrete visitors
Evaluator: compute value with variables
```python
class Evaluator(Visitor):
    def __init__(self, env: Dict[str, float] | None = None):
        self.env = env or {}

    def visit_Number(self, node: Number) -> float:
        return node.value

    def visit_Var(self, node: Var) -> float:
        if node.name not in self.env:
            raise NameError(f"Undefined variable: {node.name}")
        return self.env[node.name]

    def visit_Add(self, node: Add) -> float:
        return node.left.accept(self) + node.right.accept(self)

    def visit_Mul(self, node: Mul) -> float:
        return node.left.accept(self) * node.right.accept(self)
```
PrettyPrinter: generate a readable string
```python
class PrettyPrinter(Visitor):
    def visit_Number(self, node: Number) -> str:
        # Render integers nicely (no trailing .0); float() also tolerates int inputs
        v = int(node.value) if float(node.value).is_integer() else node.value
        return str(v)

    def visit_Var(self, node: Var) -> str:
        return node.name

    def visit_Add(self, node: Add) -> str:
        return f"({node.left.accept(self)} + {node.right.accept(self)})"

    def visit_Mul(self, node: Mul) -> str:
        # Multiplication binds tighter than addition, but we’re simple here
        return f"({node.left.accept(self)} * {node.right.accept(self)})"
```
NodeCounter: tally nodes (useful for diagnostics or cost models)
```python
class NodeCounter(Visitor):
    def __init__(self):
        self.counts: Dict[str, int] = {}

    def _bump(self, cls_name: str):
        self.counts[cls_name] = self.counts.get(cls_name, 0) + 1

    def visit_Number(self, node: Number) -> int:
        self._bump("Number")
        return 1

    def visit_Var(self, node: Var) -> int:
        self._bump("Var")
        return 1

    def visit_Add(self, node: Add) -> int:
        self._bump("Add")
        return 1 + node.left.accept(self) + node.right.accept(self)

    def visit_Mul(self, node: Mul) -> int:
        self._bump("Mul")
        return 1 + node.left.accept(self) + node.right.accept(self)
```
Now let’s use the visitors on a nested structure. First, build an expression:
```python
# Build a nested AST
ast = Mul(
    Add(Number(2), Var("x")),
    Add(Number(3), Number(4)),
)

# Pretty print
pp = PrettyPrinter()
print("Expr:", ast.accept(pp))
# -> Expr: ((2 + x) * (3 + 4))

# Evaluate with a variable environment
ev = Evaluator({"x": 10})
print("Value:", ast.accept(ev))
# -> Value: 84   (2 + 10 = 12, 3 + 4 = 7, 12 * 7 = 84)

# Count nodes
nc = NodeCounter()
total = ast.accept(nc)
print("Total nodes:", total, "| breakdown:", nc.counts)
```
Variations & Pythonic Notes
- Fallback dispatch: Our `Visitor.generic_visit()` and `Visitor.visit()` give you a safe default and a reflective dispatcher.
- `functools.singledispatch` alternative: You can implement visitor-like logic with `@singledispatch` functions on node types — handy when you don’t control the node classes, but you lose the explicit `accept()` double dispatch (see the sketch below).
- Immutability: The example uses `@dataclass(frozen=True)` for nodes — this makes ASTs safer to share and reason about.
- Graphs vs Trees: Visitor is simplest on trees. For DAGs, ensure you don’t revisit nodes accidentally (cache/memoize by node id) if that matters.
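A minimal sketch of the `singledispatch` alternative mentioned above, reusing the node classes (no `accept()` calls involved):

```python
from functools import singledispatch

@singledispatch
def evaluate(node, env):
    raise NotImplementedError(f"No evaluator for {type(node).__name__}")

@evaluate.register
def _(node: Number, env):
    return node.value

@evaluate.register
def _(node: Var, env):
    return env[node.name]

@evaluate.register
def _(node: Add, env):
    return evaluate(node.left, env) + evaluate(node.right, env)

@evaluate.register
def _(node: Mul, env):
    return evaluate(node.left, env) * evaluate(node.right, env)

print(evaluate(Mul(Number(6), Var("x")), {"x": 7}))  # 42
```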
16.6 Conclusion
In this chapter, you explored the three major families of design patterns — Creational, Structural, and Behavioral — and saw how they apply directly to real-world Python development.
- Creational patterns (Singleton, Factory, Abstract Factory, Builder) let you separate object creation from object usage, improving flexibility and testability.
- Structural patterns (Adapter, Decorator, Composite, Proxy) help you assemble larger systems from smaller components, while keeping code extensible and reusable.
- Behavioral patterns (Chain of Responsibility, Observer, Strategy, Command) define interaction rules between objects, giving you cleaner, more maintainable workflows.
Design patterns are not rigid “rules.” They are guides that help you recognize common problems and apply proven solutions. As you build larger Python systems—especially frameworks, workflow engines, or distributed systems—you’ll find yourself returning to these patterns again and again.
16.7 Chapter Assignment: Workflow Engine with Patterns
In this assignment you’ll extend your Docker runner into a mini workflow engine that demonstrates how multiple design patterns (Command, Strategy, Chain of Responsibility, Observer) fit together.
Requirements
- Tasks as Commands
  - Implement at least two tasks that wrap your existing Docker runners:
    - `RunPythonTask` (runs a `.py` script inside a Python container).
    - `RunJavaScriptTask` (runs a `.js` script inside a Node.js container).
  - Each task should implement a common `Task` interface with an `execute(context)` method.
- Execution Strategies (Strategy Pattern)
  - Implement two workflow execution strategies:
    - Sequential: tasks run one after the other.
    - Parallel (asyncio): tasks run concurrently.
  - Let the user choose the strategy when starting the workflow.
- Pipeline (Chain of Responsibility)
  - Before running tasks, all requests should pass through a pipeline of handlers:
    - AuthHandler (checks for an API key in the context).
    - ValidationHandler (ensures script paths exist).
    - LoggingHandler (logs before/after execution).
  - If any handler fails, stop the workflow.
- Observers
  - Implement observers such as:
    - `LoggerObserver`: prints events to the console.
    - `FileObserver`: writes task events to a log file.
  - Observers should be notified whenever a task starts or completes.
- Workflow Context
  - Store data (e.g. paths, environment vars, execution results) in a shared context dictionary.
  - Ensure one task’s output can be added to the context and used by the next.
Hints
- Start with a `Task` base class or ABC, then subclass it for Python/JS tasks (a starting skeleton is sketched below).
- Use `asyncio.gather()` for parallel execution.
- Chain of Responsibility can be implemented by linking handlers together: each calls `next.handle(context)` if successful.
- Observers are just listeners attached to the workflow engine. Call `observer.update(event)` whenever a task changes state.
- Use your existing Docker runner code for the core `execute()` logic of each task.
- Keep the workflow small (2–3 tasks) so you can run it end-to-end in under a minute.
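If you want a starting point, here is a minimal sketch of the task layer; the Docker call is a placeholder you would replace with your existing runner:

```python
from abc import ABC, abstractmethod

class Task(ABC):
    """Common interface for all workflow tasks."""
    @abstractmethod
    def execute(self, context: dict) -> dict:
        ...

class RunPythonTask(Task):
    def __init__(self, script_path: str):
        self.script_path = script_path

    def execute(self, context: dict) -> dict:
        # Placeholder: call your existing Docker runner here
        context["last_result"] = f"ran {self.script_path} in a Python container"
        return context
```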