Python Design Patterns

Chapter 16: Python Design Patterns

Design patterns are tried-and-tested solutions to common software design problems. They provide structure and best practices, helping you write cleaner, more scalable, and more maintainable code.

This chapter introduces core design patterns in Python, with real-world examples and use cases for each.

16.1 Why Design Patterns Matter

  • Improve code reusability and readability
  • Solve recurring problems in a structured way
  • Help with team collaboration via shared terminology
  • Ease the transition from design to implementation

16.2 Categories of Design Patterns

Category     Purpose
-----------  -------------------------------
Creational   Object creation logic
Structural   Relationships between objects
Behavioral   Communication between objects

16.3 Creational Patterns

Creational design patterns deal with object creation mechanisms, aiming to make a system independent of how its objects are created, composed, and represented. Instead of instantiating classes directly (ClassName() in Python), these patterns provide flexible ways to delegate the instantiation process. This helps manage complexity, especially when objects require intricate setup, need to be reused, or when the system should remain loosely coupled to specific classes.

Common examples include Singleton (ensures only one instance of a class exists), Factory Method (delegates instantiation to subclasses), Abstract Factory (creates families of related objects), Builder (constructs complex objects step by step), and Prototype (creates objects by cloning existing ones). These patterns appear in frameworks, dependency injection systems, UI toolkits, and applications that need configurable or extensible object creation workflows. In practice, creational patterns improve flexibility, promote reusability, and simplify maintenance by separating object construction from its use.
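Of these, Prototype is the only one not covered in its own section below, so here is a minimal sketch using the standard-library copy module (the ReportTemplate class is illustrative, not part of this chapter's running examples):

```python
import copy

class Prototype:
    """Base class whose subclasses can be cloned instead of rebuilt."""
    def clone(self):
        # deepcopy duplicates nested, mutable state as well
        return copy.deepcopy(self)

class ReportTemplate(Prototype):
    def __init__(self, title: str, sections: list[str]):
        self.title = title
        self.sections = sections

base = ReportTemplate("Quarterly Report", ["Summary", "Metrics"])
custom = base.clone()
custom.sections.append("Appendix")

print(base.sections)    # the original is unaffected by the clone's changes
print(custom.sections)
```

Cloning is cheaper and simpler than re-running an expensive constructor when most of the object's state stays the same between instances.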

Singleton Pattern

Ensures a class has only one instance, and provides a global point of access to it.

python
class Singleton:
    _instance = None

    def __new__(cls):
        if not cls._instance:
            cls._instance = super().__new__(cls)
        return cls._instance

a = Singleton()
b = Singleton()
print(a is b)  # True

Use case: Logging, configuration managers, database connections.
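One caveat: the `__new__` check above is not thread-safe, since two threads can race past the `if` before either assigns `_instance`. A minimal double-checked locking variant (class name illustrative) guards against that:

```python
import threading

class ThreadSafeSingleton:
    _instance = None
    _lock = threading.Lock()

    def __new__(cls):
        if cls._instance is None:          # fast path, no lock needed
            with cls._lock:
                if cls._instance is None:  # re-check inside the lock
                    cls._instance = super().__new__(cls)
        return cls._instance

a = ThreadSafeSingleton()
b = ThreadSafeSingleton()
print(a is b)  # True
```

In many Python codebases a module-level instance achieves the same effect more simply, because modules are only imported once per process.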

Factory Pattern

Creates objects without specifying the exact class. For instance, in a workflow management system you often have many task types (HTTP call, SQL query, Spark job, Docker task…). A factory method lets the scheduler/loader create the correct concrete class from a spec (YAML/JSON/UI) without hard‑coding class names everywhere.

python
from abc import ABC, abstractmethod

class Task(ABC):
    @abstractmethod
    def run(self): ...

# Minimal concrete tasks (real implementations would accept params)
class HttpTask(Task):
    def run(self):
        print("Running HTTP task")

class SqlTask(Task):
    def run(self):
        print("Running SQL task")

class SparkTask(Task):
    def run(self):
        print("Running Spark task")

class TaskFactory:
    _registry = {
        "http": HttpTask,
        "sql": SqlTask,
        "spark": SparkTask,
    }

    @classmethod
    def create(cls, spec: dict) -> Task:
        kind = spec["type"]
        return cls._registry[kind](**spec["params"])

task1 = TaskFactory.create({"type": "http", "params": {}})
task1.run()
task2 = TaskFactory.create({"type": "sql", "params": {}})
task2.run()

Abstract Factory Pattern

Abstract Factory provides an interface for creating families of related objects without specifying their concrete classes. It’s useful when your code must remain agnostic to the specific concrete types it instantiates, but you still need these objects to be compatible with each other.

Example: A Car Parts Factory

A car, like any complex machine, is made up of various parts. You need a consistent family of parts—engine, wheels, interior, infotainment, battery/ECU—where the pieces must be compatible with each other and with the specific model/trim/market. That coordination is exactly where Abstract Factory shines.

With Abstract Factory, you define an interface that creates an entire family of related parts (create_engine, create_wheels, create_infotainment, …). Each concrete factory represents a model/trim (e.g., Model3Factory, CorollaFactory) and guarantees that all produced parts belong to the same family and work together. The client (assembler) never hardcodes which concrete parts to use; it receives a factory and assembles a car from whatever parts that factory yields.

1) Define part interfaces

python
from abc import ABC, abstractmethod
from dataclasses import dataclass

# Part interfaces
class Engine(ABC):
    @abstractmethod
    def spec(self) -> str: ...

class Wheels(ABC):
    @abstractmethod
    def spec(self) -> str: ...

class Infotainment(ABC):
    @abstractmethod
    def spec(self) -> str: ...

2) Concrete parts per model

python
# Tesla Model 3 parts
class Model3Engine(Engine):
    def spec(self) -> str:
        return "Dual-motor electric, 258 kW, 75 kWh pack"

class Model3Wheels(Wheels):
    def spec(self) -> str:
        return "19-inch aero wheels, EV-rated tires"

class Model3Infotainment(Infotainment):
    def spec(self) -> str:
        return "17-inch center screen, Tesla OS"

# Toyota Corolla parts
class CorollaEngine(Engine):
    def spec(self) -> str:
        return "1.8L I4 hybrid, 103 kW combined"

class CorollaWheels(Wheels):
    def spec(self) -> str:
        return "16-inch alloy wheels, all-season tires"

class CorollaInfotainment(Infotainment):
    def spec(self) -> str:
        return "8-inch touchscreen, Toyota Audio Multimedia"

3) Abstract Factory for “families of parts”

python
class PartsFactory(ABC):
    @abstractmethod
    def create_engine(self) -> Engine: ...
    @abstractmethod
    def create_wheels(self) -> Wheels: ...
    @abstractmethod
    def create_infotainment(self) -> Infotainment: ...

4) Concrete factories per car model (family)

python
class Model3Factory(PartsFactory):
    def create_engine(self) -> Engine:
        return Model3Engine()
    def create_wheels(self) -> Wheels:
        return Model3Wheels()
    def create_infotainment(self) -> Infotainment:
        return Model3Infotainment()

class CorollaFactory(PartsFactory):
    def create_engine(self) -> Engine:
        return CorollaEngine()
    def create_wheels(self) -> Wheels:
        return CorollaWheels()
    def create_infotainment(self) -> Infotainment:
        return CorollaInfotainment()

5) The assembler (client) stays model-agnostic

python
@dataclass
class Car:
    model: str
    engine: Engine
    wheels: Wheels
    infotainment: Infotainment

class CarAssembler:
    def __init__(self, factory: PartsFactory, model_name: str):
        self.factory = factory
        self.model_name = model_name

    def assemble(self) -> Car:
        engine = self.factory.create_engine()
        wheels = self.factory.create_wheels()
        infotainment = self.factory.create_infotainment()
        return Car(
            model=self.model_name,
            engine=engine,
            wheels=wheels,
            infotainment=infotainment,
        )

# Usage
car1 = CarAssembler(Model3Factory(), "Tesla Model 3").assemble()
car2 = CarAssembler(CorollaFactory(), "Toyota Corolla").assemble()

print(car1.model, "|", car1.engine.spec(), "|", car1.wheels.spec(), "|", car1.infotainment.spec())
print(car2.model, "|", car2.engine.spec(), "|", car2.wheels.spec(), "|", car2.infotainment.spec())

Benefits of the Abstract Factory in this scenario:

  • Compatibility & Consistency: A Model 3’s wheels, battery pack, and infotainment head unit match by construction.
  • Easy swaps: Change the whole family at once (e.g., export market vs domestic market) by swapping factories.
  • Scaling: Add new models/variants by adding one new factory, not editing existing logic everywhere.
  • Testing: Provide a TestPartsFactory or MockPartsFactory for deterministic builds in unit tests.
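To make the testing point concrete, a deterministic mock family might look like the sketch below. The mock class names are illustrative, and the Engine/Wheels/PartsFactory interfaces are repeated (in condensed form) so the snippet runs on its own:

```python
from abc import ABC, abstractmethod

# Interfaces repeated from above so this snippet is self-contained
class Engine(ABC):
    @abstractmethod
    def spec(self) -> str: ...

class Wheels(ABC):
    @abstractmethod
    def spec(self) -> str: ...

class PartsFactory(ABC):
    @abstractmethod
    def create_engine(self) -> Engine: ...
    @abstractmethod
    def create_wheels(self) -> Wheels: ...

# Deterministic mock family for unit tests
class MockEngine(Engine):
    def spec(self) -> str:
        return "mock engine"

class MockWheels(Wheels):
    def spec(self) -> str:
        return "mock wheels"

class MockPartsFactory(PartsFactory):
    def create_engine(self) -> Engine:
        return MockEngine()
    def create_wheels(self) -> Wheels:
        return MockWheels()

factory = MockPartsFactory()
assert factory.create_engine().spec() == "mock engine"
```

Because the assembler only depends on the PartsFactory interface, swapping in the mock factory requires no changes to production code.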

A quick variant example: “Performance” trims

Add a new factory without touching existing ones:

python
class Model3PerformanceWheels(Wheels):
    def spec(self) -> str:
        return "20-inch performance wheels, summer tires"

class Model3PerformanceFactory(Model3Factory):
    def create_wheels(self) -> Wheels:
        return Model3PerformanceWheels()  # engine & infotainment inherited

Swap it in:

python
perf = CarAssembler(Model3PerformanceFactory(), "Tesla Model 3 Performance").assemble()

Everything stays compatible by construction.

Builder Pattern

When a “thing” has many optional parts, must be assembled in steps, or requires validation between parts, a Builder separates construction from representation. Instead of a single constructor with dozens of parameters (a “telescoping constructor”), the Builder exposes small, readable steps (often fluent), and a final build() that returns the finished product.

When to use Builder (vs Factory / Abstract Factory)

  • Factory Method: picks which subclass to create based on input (focus: selection).
  • Abstract Factory: creates families of related objects (engines, tires, dashboards) that should work well together (focus: cohesive product families).
  • Builder: assembles one complex object step-by-step, often with optional parts, constraints, or order-sensitive assembly (focus: construction process).

You’ll often see Abstract Factory + Builder together: the abstract factory supplies compatible parts, while the builder assembles them into a finished car.

A simple Builder for cars

Let’s say a Car can have many optional features: engine, transmission, tires, infotainment, color, and safety package. Some combinations must be validated (e.g., Performance engine requires Sport tires).

python
from dataclasses import dataclass
from typing import Optional

# Parts

@dataclass(frozen=True)
class Engine:
    name: str
    hp: int

@dataclass(frozen=True)
class Transmission:
    type: str  # "manual" | "automatic"

@dataclass(frozen=True)
class Tires:
    name: str
    rating: str  # "touring" | "sport"

@dataclass(frozen=True)
class Infotainment:
    screen_in: float
    supports_carplay: bool

# The product

@dataclass(frozen=True)
class Car:
    model: str
    engine: Engine
    transmission: Transmission
    tires: Tires
    color: str
    infotainment: Optional[Infotainment] = None
    safety_pkg: Optional[str] = None  # "standard" | "advanced" | None


class CarBuilder:
    def __init__(self, model: str):
        self._model = model
        self._engine: Optional[Engine] = None
        self._transmission: Optional[Transmission] = None
        self._tires: Optional[Tires] = None
        self._color: Optional[str] = None
        self._infotainment: Optional[Infotainment] = None
        self._safety_pkg: Optional[str] = None

    # Fluent steps
    def with_engine(self, name: str, hp: int):
        self._engine = Engine(name, hp)
        return self

    def with_transmission(self, type_: str):
        self._transmission = Transmission(type_)
        return self

    def with_tires(self, name: str, rating: str):
        self._tires = Tires(name, rating)
        return self

    def painted(self, color: str):
        self._color = color
        return self

    def with_infotainment(self, screen_in: float, supports_carplay: bool = True):
        self._infotainment = Infotainment(screen_in, supports_carplay)
        return self

    def with_safety(self, pkg: str):
        self._safety_pkg = pkg
        return self

    # Validation lives here
    def _validate(self):
        if not all([self._engine, self._transmission, self._tires, self._color]):
            raise ValueError("Engine, transmission, tires, and color are required")

        # Example cross-part constraints:
        if self._engine.hp >= 350 and self._tires.rating != "sport":
            raise ValueError("High-HP build requires sport tires")

        if self._transmission.type == "manual" and self._engine.name == "EV":
            raise ValueError("Manual transmission not available for EV")

    def build(self) -> Car:
        self._validate()
        return Car(
            model=self._model,
            engine=self._engine,  # type: ignore[arg-type]
            transmission=self._transmission,  # type: ignore[arg-type]
            tires=self._tires,  # type: ignore[arg-type]
            color=self._color,  # type: ignore[arg-type]
            infotainment=self._infotainment,
            safety_pkg=self._safety_pkg,
        )

Usage:

python
sport_sedan = (
    CarBuilder("Falcon S")
    .with_engine("V6 Turbo", 380)
    .with_transmission("automatic")
    .with_tires("Eagle F1", "sport")
    .painted("Midnight Blue")
    .with_infotainment(12.0, True)
    .with_safety("advanced")
    .build()
)

Director (optional)

The Director is not a separate design pattern; however, if you have repeated recipes (e.g., "base economy", "performance pack"), a Director encodes the build steps so callers don't have to repeat them.

python
class CarDirector:
    def build_economy(self, model: str) -> Car:
        return (
            CarBuilder(model)
            .with_engine("I4", 150)
            .with_transmission("automatic")
            .with_tires("AllSeason", "touring")
            .painted("Silver")
            .with_safety("standard")
            .build()
        )

    def build_performance(self, model: str) -> Car:
        return (
            CarBuilder(model)
            .with_engine("V6 Turbo", 380)
            .with_transmission("automatic")
            .with_tires("Eagle F1", "sport")
            .painted("Red")
            .with_infotainment(12.0)
            .with_safety("advanced")
            .build()
        )

Builder + Abstract Factory: families of parts + assembly

Abstract Factory ensures compatible families of parts (e.g., Eco vs Performance). The Builder then assembles them into a car.

python
from abc import ABC, abstractmethod

# ----- Abstract Factory for parts -----
class PartsFactory(ABC):
    @abstractmethod
    def create_engine(self) -> Engine: ...
    @abstractmethod
    def create_transmission(self) -> Transmission: ...
    @abstractmethod
    def create_tires(self) -> Tires: ...

class EcoPartsFactory(PartsFactory):
    def create_engine(self) -> Engine:
        return Engine("I4 Hybrid", 180)
    def create_transmission(self) -> Transmission:
        return Transmission("automatic")
    def create_tires(self) -> Tires:
        return Tires("EcoGrip", "touring")

class PerformancePartsFactory(PartsFactory):
    def create_engine(self) -> Engine:
        return Engine("V8 Supercharged", 520)
    def create_transmission(self) -> Transmission:
        return Transmission("automatic")
    def create_tires(self) -> Tires:
        return Tires("TrackMax", "sport")

# ----- Builder that can accept factory-provided parts -----
class FactoryAwareCarBuilder(CarBuilder):
    def with_parts_from(self, factory: PartsFactory):
        self._engine = factory.create_engine()
        self._transmission = factory.create_transmission()
        self._tires = factory.create_tires()
        return self

# Usage:
eco_car = (
    FactoryAwareCarBuilder("Falcon E")
    .with_parts_from(EcoPartsFactory())
    .painted("Pearl White")
    .with_safety("standard")
    .build()
)

track_car = (
    FactoryAwareCarBuilder("Falcon R")
    .with_parts_from(PerformancePartsFactory())
    .painted("Racing Yellow")
    .with_infotainment(10.0)
    .with_safety("advanced")
    .build()
)

Here the Abstract Factory guarantees consistent, compatible part families; the Builder controls assembly order/validation and optional features.

16.4 Structural Patterns

Structural patterns describe how classes and objects can be combined to form larger, more complex structures while keeping them flexible, reusable, and efficient. They help define composition over inheritance, which often leads to cleaner and more extensible designs.

Adapter Pattern

The Adapter Pattern allows incompatible interfaces to work together. Think of it as a “translator” between two systems.

Example: Notification

Imagine you’re building a system that needs to send notifications. Your app expects a Notifier interface, but you have multiple third-party services with very different APIs (e.g., Slack, Email, SMS).

Instead of rewriting your app for each provider, you write adapters to normalize them to a common interface.

Step 1: Define a Common Interface

python
class Notifier:
    def send(self, message: str):
        raise NotImplementedError

Step 2: Third-Party APIs (Incompatible Interfaces)

python
# Pretend this is a library you can't change
class SlackAPI:
    def post_message(self, channel: str, text: str):
        print(f"[Slack] #{channel}: {text}")

class EmailAPI:
    def send_email(self, to: str, subject: str, body: str):
        print(f"[Email] To:{to} | {subject}: {body}")

Notice:

  • Slack wants (channel, text)
  • Email wants (to, subject, body)
  • Neither matches Notifier.send(message).

Step 3: Create Adapters

python
class SlackAdapter(Notifier):
    def __init__(self, slack_api: SlackAPI, channel: str):
        self.slack_api = slack_api
        self.channel = channel

    def send(self, message: str):
        self.slack_api.post_message(self.channel, message)


class EmailAdapter(Notifier):
    def __init__(self, email_api: EmailAPI, recipient: str):
        self.email_api = email_api
        self.recipient = recipient

    def send(self, message: str):
        subject = "Notification"
        self.email_api.send_email(self.recipient, subject, message)

Step 4: Client Code (No Changes!)

python
def notify_all(notifiers: list[Notifier], message: str):
    for notifier in notifiers:
        notifier.send(message)

# Usage
slack = SlackAdapter(SlackAPI(), channel="dev-team")
email = EmailAdapter(EmailAPI(), recipient="admin@example.com")

notifiers = [slack, email]
notify_all(notifiers, "Adapter Pattern makes integrations easy!")

Output

bash
[Slack] #dev-team: Adapter Pattern makes integrations easy!
[Email] To:admin@example.com | Notification: Adapter Pattern makes integrations easy!

Why is this usable?

  • Decouples your app from third-party APIs.
  • You can swap providers without changing your core logic.
  • Works in real-life systems: integrating payment gateways, APIs, cloud services, etc.
  • Client code (notify_all) doesn’t care where the message goes — Slack, Email, SMS, or something new.
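Adding the SMS channel mentioned above takes only one more adapter. The SmsAPI client below is a hypothetical stand-in, not a real library, and Notifier is repeated so the snippet runs on its own:

```python
class Notifier:  # repeated from Step 1 so this snippet is self-contained
    def send(self, message: str):
        raise NotImplementedError

# Hypothetical third-party SMS client with yet another signature
class SmsAPI:
    def push(self, number: str, body: str) -> str:
        line = f"[SMS] {number}: {body}"
        print(line)
        return line

class SmsAdapter(Notifier):
    def __init__(self, sms_api: SmsAPI, number: str):
        self.sms_api = sms_api
        self.number = number

    def send(self, message: str):
        # Translate send(message) into the provider's push(number, body)
        return self.sms_api.push(self.number, message)

result = SmsAdapter(SmsAPI(), "+1-555-0100").send("Adapter Pattern makes integrations easy!")
```

The existing notify_all function accepts this adapter unchanged, because it only depends on the Notifier interface.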

Decorator Pattern

The Decorator Pattern lets you dynamically add behavior to an object without modifying its class. Think of it like “wrapping” an object with extra functionality.

Example: File Reader

Imagine you’re building a file reader system. You want to support basic file reading, but also be able to:

  • Encrypt/Decrypt the data
  • Compress/Decompress the data
  • Log whenever a file is accessed

Instead of stuffing all of that into one FileReader, you build decorators.

Step 1: Define a Common Interface

python
class DataSource:
    def read(self) -> str:
        raise NotImplementedError

Step 2: Concrete Implementation

python
class FileDataSource(DataSource):
    def __init__(self, filename: str):
        self.filename = filename

    def read(self) -> str:
        with open(self.filename, "r") as f:
            return f.read()

Step 3: Base Decorator

python
class DataSourceDecorator(DataSource):
    def __init__(self, wrappee: DataSource):
        self.wrappee = wrappee

    def read(self) -> str:
        return self.wrappee.read()

This ensures all decorators behave like a DataSource.

Step 4: Concrete Decorators

python
class EncryptionDecorator(DataSourceDecorator):
    def read(self) -> str:
        data = self.wrappee.read()
        return "".join(chr(ord(c) + 1) for c in data)  # simple shift encryption


class CompressionDecorator(DataSourceDecorator):
    def read(self) -> str:
        data = self.wrappee.read()
        return data.replace(" ", "")  # naive "compression"


class LoggingDecorator(DataSourceDecorator):
    def read(self) -> str:
        print(f"[LOG] Reading from {self.wrappee.__class__.__name__}")
        return self.wrappee.read()

Step 5: Client Code

python
# Suppose "example.txt" contains: "hello world"
source = FileDataSource("example.txt")

# Add decorators dynamically
decorated = LoggingDecorator(
    CompressionDecorator(
        EncryptionDecorator(source)
    )
)

print(decorated.read())

Output

bash
[LOG] Reading from CompressionDecorator
ifmmp!xpsme

Note that the logger names its immediate wrappee, which is CompressionDecorator here. Also, because encryption runs first, the space in "hello world" has already been shifted to "!" by the time the naive compression step looks for spaces to remove.

Why is this usable?

  • You can stack behaviors dynamically at runtime.
  • Each decorator adds a feature (logging, compression, encryption) without touching FileDataSource.
  • You can reconfigure: maybe just logging, maybe logging + compression, etc.
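As a sketch of that reconfiguration point, a logging-only stack simply wraps the source once. MemoryDataSource is an illustrative stand-in so the snippet needs no real file, and the classes are condensed versions of the ones above:

```python
class DataSource:  # repeated from Step 1 so this snippet is self-contained
    def read(self) -> str:
        raise NotImplementedError

class MemoryDataSource(DataSource):
    # Illustrative stand-in for FileDataSource, so no file is needed
    def __init__(self, data: str):
        self.data = data
    def read(self) -> str:
        return self.data

class LoggingDecorator(DataSource):
    # Condensed: base-decorator behavior plus logging in one class
    def __init__(self, wrappee: DataSource):
        self.wrappee = wrappee
    def read(self) -> str:
        print(f"[LOG] Reading from {self.wrappee.__class__.__name__}")
        return self.wrappee.read()

only_logging = LoggingDecorator(MemoryDataSource("hello world"))
print(only_logging.read())  # logs, then returns the data unmodified
```

Dropping or reordering wrappers is a one-line change at composition time, with no edits to any decorator class.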

Composite Pattern

The Composite Pattern lets you treat individual objects (leaves) and groups of objects (composites) uniformly. You define a common interface so a client can call the same methods on a single item or a whole tree of items. It’s perfect for hierarchies like file systems, UI widgets, BOMs (bill of materials), and workflow step groups.

Example 1: Car Parts BOM (Cost & Weight Aggregation)

Goal: compute total cost and total weight of a car from nested assemblies (engine, chassis, wheels…), where each assembly can contain parts or other assemblies.

python
from abc import ABC, abstractmethod
from typing import List

# ----- 1) Component -----
class CarComponent(ABC):
    @abstractmethod
    def total_cost(self) -> float: ...
    @abstractmethod
    def total_weight(self) -> float: ...
    @abstractmethod
    def describe(self, indent: int = 0) -> str: ...

# ----- 2) Leaf -----
class Part(CarComponent):
    def __init__(self, name: str, cost: float, weight: float):
        self.name = name
        self._cost = cost
        self._weight = weight

    def total_cost(self) -> float:
        return self._cost

    def total_weight(self) -> float:
        return self._weight

    def describe(self, indent: int = 0) -> str:
        pad = " " * indent
        return f"{pad}- Part: {self.name} | cost=${self._cost:.2f}, weight={self._weight:.1f}kg"

# ----- 3) Composite -----
class Assembly(CarComponent):
    def __init__(self, name: str):
        self.name = name
        self._children: List[CarComponent] = []

    def add(self, component: CarComponent) -> None:
        self._children.append(component)

    def remove(self, component: CarComponent) -> None:
        self._children.remove(component)

    def total_cost(self) -> float:
        return sum(c.total_cost() for c in self._children)

    def total_weight(self) -> float:
        return sum(c.total_weight() for c in self._children)

    def describe(self, indent: int = 0) -> str:
        pad = " " * indent
        lines = [f"{pad}+ Assembly: {self.name}"]
        for c in self._children:
            lines.append(c.describe(indent + 2))
        return "\n".join(lines)

# ---- Usage ---------------------------------------------------------
if __name__ == "__main__":
    # Leaves
    piston = Part("Piston", 40.0, 1.2)
    spark_plug = Part("Spark Plug", 8.0, 0.1)
    block = Part("Engine Block", 500.0, 90.0)
    wheel = Part("Wheel", 120.0, 12.0)

    # Sub-assemblies
    cylinder = Assembly("Cylinder")
    cylinder.add(piston)
    cylinder.add(spark_plug)

    engine = Assembly("Engine")
    engine.add(block)
    engine.add(cylinder)

    wheels = Assembly("Wheel Set")
    for _ in range(4):
        wheels.add(wheel)

    # Top-level assembly (car)
    car = Assembly("Car")
    car.add(engine)
    car.add(wheels)

    print(car.describe())
    print(f"\nTOTAL COST: ${car.total_cost():.2f}")
    print(f"TOTAL WEIGHT: {car.total_weight():.1f} kg")

Why this is useful

  • You can nest as deep as needed.
  • The client doesn’t care if it’s a Part or Assembly; it calls total_cost() either way.
  • Adding/removing parts doesn’t require changing the aggregation logic.

Example 2: Workflow Engine — Grouping Tasks

Goal: compose tasks into sequences (or even trees) and execute them with a single run() call. Each task returns a result; groups aggregate results.

python
from abc import ABC, abstractmethod
from typing import Any, List

# 1) Component
class Task(ABC):
    @abstractmethod
    def run(self) -> Any: ...

# 2) Leaf Task
class PrintTask(Task):
    def __init__(self, message: str):
        self.message = message

    def run(self) -> str:
        # Side-effect could be logging, HTTP call, etc.
        output = f"[PrintTask] {self.message}"
        print(output)
        return output

# Another leaf
class AddTask(Task):
    def __init__(self, a: int, b: int):
        self.a, self.b = a, b

    def run(self) -> int:
        return self.a + self.b

# 3) Composite (sequence of tasks)
class TaskGroup(Task):
    def __init__(self, name: str):
        self.name = name
        self._children: List[Task] = []

    def add(self, task: Task) -> None:
        self._children.append(task)

    def remove(self, task: Task) -> None:
        self._children.remove(task)

    def run(self) -> List[Any]:
        results = []
        print(f"[TaskGroup] Starting: {self.name}")
        for t in self._children:
            results.append(t.run())
        print(f"[TaskGroup] Finished: {self.name}")
        return results

# ---- Usage ---------------------------------------------------------
if __name__ == "__main__":
    t1 = PrintTask("Validate input")
    t2 = AddTask(40, 2)
    t3 = PrintTask("Persist to DB")

    sub_pipeline = TaskGroup("Preprocess")
    sub_pipeline.add(t1)
    sub_pipeline.add(t2)

    pipeline = TaskGroup("Main Workflow")
    pipeline.add(sub_pipeline)
    pipeline.add(t3)

    all_results = pipeline.run()
    print("Results:", all_results)

Why this is useful

  • A single interface (Task.run) for both simple tasks and grouped tasks.
  • You can nest groups and plug them together to form complex workflows.
  • Easy to extend: add parallel groups, conditional groups, retries, etc., without changing client code.
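As a sketch of that extension point, a retry wrapper can itself implement Task, so it slots into any group unchanged. RetryTask and FlakyTask below are illustrative additions, not part of the example above, and the Task interface is repeated so the snippet runs standalone:

```python
from abc import ABC, abstractmethod
from typing import Any

class Task(ABC):  # repeated from the example above
    @abstractmethod
    def run(self) -> Any: ...

class RetryTask(Task):
    """Illustrative extension: re-run a wrapped task a few times."""
    def __init__(self, task: Task, attempts: int = 3):
        self.task = task
        self.attempts = attempts

    def run(self) -> Any:
        last_error = None
        for _ in range(self.attempts):
            try:
                return self.task.run()
            except Exception as exc:  # real code would narrow this
                last_error = exc
        raise last_error

class FlakyTask(Task):
    """Fails twice, then succeeds (simulates a transient error)."""
    def __init__(self):
        self.calls = 0

    def run(self) -> int:
        self.calls += 1
        if self.calls < 3:
            raise RuntimeError("transient failure")
        return 42

result = RetryTask(FlakyTask(), attempts=3).run()
print(result)  # 42, after two transient failures
```

Because RetryTask is just another Task, a TaskGroup can hold it next to plain tasks without any change to the group's run loop.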

Proxy Pattern

The Proxy Pattern provides a surrogate or placeholder object that controls access to another object. Instead of calling the real object directly, clients interact with the proxy, which decides how and when to delegate requests.

This is especially useful when:

  • You want to add access control (authorization, rate limiting).
  • You want to lazy-load heavy resources (database connections, APIs).
  • You want to add caching or logging without modifying the real object.

Example: Secured Task Execution

Imagine a workflow engine that executes tasks. Some tasks may require special authorization or restricted access (e.g., “Approve Payment”).

workflow/proxy_pattern.py
from abc import ABC, abstractmethod


class Task(ABC):
    """Abstract base class for workflow tasks."""

    @abstractmethod
    def execute(self, user: str) -> str:
        """Execute the task with the given user context."""
        raise NotImplementedError


class RealTask(Task):
    """The real implementation of a workflow task."""

    def __init__(self, name: str) -> None:
        self.name = name

    def execute(self, user: str) -> str:
        return f"Task '{self.name}' executed by {user}..."


class TaskProxy(Task):
    """Proxy for Task that enforces role-based access."""

    def __init__(self, real_task: RealTask, allowed_roles: list[str]) -> None:
        self._real_task = real_task
        self._allowed_roles = allowed_roles

    def execute(self, user: str, role: str | None = None) -> str:
        if role not in self._allowed_roles:
            return f"Access denied for {user} with role={role}!"
        return self._real_task.execute(user)


if __name__ == "__main__":
    approve_payment = RealTask("Approve Payment")
    proxy = TaskProxy(approve_payment, allowed_roles=["Manager", "Admin"])

    print(proxy.execute("Alice", role="Employee"))
    print(proxy.execute("Bob", role="Manager"))

Output:

bash
Access denied for Alice with role=Employee!
Task 'Approve Payment' executed by Bob...

Why Proxy Works Well Here

  • The client code (workflow engine) doesn’t know whether it’s using a real task or a proxy.
  • Security checks (roles) are separated from business logic.
  • You can easily extend proxies for logging, caching, or monitoring without changing RealTask.
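For instance, a caching proxy can memoize results without touching the real task. The sketch below is illustrative (ExpensiveTask and CachingTaskProxy are not part of the example above), with a condensed Task interface so it runs standalone:

```python
from abc import ABC, abstractmethod

class Task(ABC):  # condensed interface, repeated so the snippet runs alone
    @abstractmethod
    def execute(self, user: str) -> str: ...

class ExpensiveTask(Task):
    def __init__(self):
        self.calls = 0

    def execute(self, user: str) -> str:
        self.calls += 1  # pretend this is a slow query
        return f"report for {user}"

class CachingTaskProxy(Task):
    def __init__(self, real_task: Task):
        self._real_task = real_task
        self._cache: dict[str, str] = {}

    def execute(self, user: str) -> str:
        # Delegate only on a cache miss
        if user not in self._cache:
            self._cache[user] = self._real_task.execute(user)
        return self._cache[user]

real = ExpensiveTask()
proxy = CachingTaskProxy(real)
proxy.execute("Alice")
proxy.execute("Alice")
print(real.calls)  # 1: the second call was served from the cache
```

A security proxy and a caching proxy can even be stacked, since each one implements the same Task interface it wraps.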

16.5 Behavioral Patterns

Behavioral design patterns focus on how objects interact and communicate. They define responsibilities, control flow, and message passing between objects.

Whereas creational patterns deal with object creation and structural patterns deal with object composition, behavioral patterns ensure that work gets done in flexible and reusable ways.

Chain of Responsibility

The Chain of Responsibility (CoR) pattern is a behavioral design pattern that allows a request to be passed along a chain of handlers, where each handler decides whether to process it or pass it to the next handler.

  • Problem it solves: Avoids hardcoding request handling logic into one giant method. Instead, responsibility is spread across independent handlers.
  • When to use:
    • Processing pipelines (e.g., car assembly steps).
    • Event handling systems.
    • Request validation / middleware (like in web servers).
    • Workflow orchestration (different actions depending on context).

Example: Workflow Engine

In a Workflow Engine a request moves through multiple processors — authentication, validation, execution, logging. Each processor either handles or forwards the request.

python
from abc import ABC, abstractmethod


class WorkflowHandler(ABC):
    def __init__(self, next_handler=None):
        self.next_handler = next_handler

    @abstractmethod
    def handle(self, request: dict) -> dict:
        pass


class AuthHandler(WorkflowHandler):
    def handle(self, request: dict) -> dict:
        if not request.get("authenticated", False):
            raise Exception("User not authenticated!")
        print("Authentication passed")
        return self.next_handler.handle(request) if self.next_handler else request


class ValidationHandler(WorkflowHandler):
    def handle(self, request: dict) -> dict:
        if "query" not in request:
            raise Exception("Invalid request: missing query!")
        print("Request validated")
        return self.next_handler.handle(request) if self.next_handler else request


class ExecutionHandler(WorkflowHandler):
    def handle(self, request: dict) -> dict:
        request["results"] = ["case-1", "case-2"]
        print("Workflow executed, results attached")
        return request


if __name__ == "__main__":
    chain = AuthHandler(ValidationHandler(ExecutionHandler()))

    request = {"authenticated": True, "query": "find cases"}
    response = chain.handle(request)
    print("Final Response:", response)

Output:

bash
Authentication passed
Request validated
Workflow executed, results attached
Final Response: {'authenticated': True, 'query': 'find cases', 'results': ['case-1', 'case-2']}

The handlers in the chain execute sequentially, each one either rejecting the request or passing it along; the final response is the transformed request.
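The failure path matters just as much: if an early handler rejects the request, the rest of the chain never runs. A condensed, standalone sketch of that short-circuit (using PermissionError rather than a bare Exception, purely for illustration):

```python
class AuthHandler:  # condensed version of the handler above
    def __init__(self, next_handler=None):
        self.next_handler = next_handler

    def handle(self, request: dict) -> dict:
        if not request.get("authenticated", False):
            raise PermissionError("User not authenticated!")
        return self.next_handler.handle(request) if self.next_handler else request

try:
    AuthHandler().handle({"query": "find cases"})  # no "authenticated" flag
except PermissionError as exc:
    print("Rejected:", exc)  # downstream handlers were never invoked
```

Handlers could also swallow the request and return an error response instead of raising; which style fits depends on how the surrounding workflow reports failures.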

Observer Pattern

The Observer Pattern is a behavioral design pattern where an object, called the Subject, maintains a list of dependents, called Observers, and automatically notifies them of state changes.

  • Problem it solves: Keeps objects loosely coupled. The subject doesn’t need to know about the observers’ implementation, only that they implement a notify method (or equivalent).
  • When to use:
    • GUI frameworks (update UI when model changes).
    • Event-driven systems (pub/sub).
    • Workflow orchestration engines (notify subscribers when a job’s state changes).
    • Monitoring systems (alert observers on new events).

Example: Workflow Orchestration Notifications

Imagine a workflow engine where tasks execute, and multiple subsystems (UI, logging, monitoring) need updates whenever a task finishes. Instead of tightly coupling task execution with all those systems, we use the Observer Pattern.

python
from abc import ABC, abstractmethod


# --- Subject (Publisher) ---
class WorkflowTask:
    def __init__(self, name: str):
        self.name = name
        self._observers = []

    def attach(self, observer: "Observer"):
        self._observers.append(observer)

    def detach(self, observer: "Observer"):
        self._observers.remove(observer)

    def notify(self, status: str):
        for observer in self._observers:
            observer.update(self.name, status)

    def run(self):
        print(f"Running task: {self.name}")
        # Simulate execution
        self.notify("started")
        self.notify("completed")


# --- Observer Interface ---
class Observer(ABC):
    @abstractmethod
    def update(self, task_name: str, status: str):
        pass


# --- Concrete Observers ---
class LoggerObserver(Observer):
    def update(self, task_name: str, status: str):
        print(f"[Logger] Task {task_name} -> {status}")


class UIObserver(Observer):
    def update(self, task_name: str, status: str):
        print(f"[UI] Updating dashboard: Task {task_name} is {status}")


class AlertObserver(Observer):
    def update(self, task_name: str, status: str):
        if status == "completed":
            print(f"[Alert] Task {task_name} finished successfully!")


# --- Usage Example ---
if __name__ == "__main__":
    task = WorkflowTask("Data Ingestion")

    # Attach observers
    task.attach(LoggerObserver())
    task.attach(UIObserver())
    task.attach(AlertObserver())

    # Run task
    task.run()

Output:

bash
Running task: Data Ingestion
[Logger] Task Data Ingestion -> started
[UI] Updating dashboard: Task Data Ingestion is started
[Logger] Task Data Ingestion -> completed
[UI] Updating dashboard: Task Data Ingestion is completed
[Alert] Task Data Ingestion finished successfully!

Key Benefits:

  • Loose coupling — Subject knows nothing about observers’ internal logic.
  • Dynamic subscription — Observers can subscribe/unsubscribe at runtime.
  • Scalability — Multiple observers can react to the same event independently.
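Because Python functions are first-class objects, the full ABC machinery is optional: observers can simply be callables registered with the subject. The sketch below (the `Signal` name and its `connect`/`emit` methods are illustrative, not from any library) shows this lighter-weight variant of the same idea.

```python
class Signal:
    """Minimal publish/subscribe helper: observers are plain callables."""
    def __init__(self):
        self._subscribers = []

    def connect(self, callback):
        self._subscribers.append(callback)

    def disconnect(self, callback):
        self._subscribers.remove(callback)

    def emit(self, *args, **kwargs):
        # Notify every subscriber with the event payload
        for callback in self._subscribers:
            callback(*args, **kwargs)


task_finished = Signal()
events = []
task_finished.connect(lambda name, status: events.append((name, status)))
task_finished.emit("Data Ingestion", "completed")
print(events)  # [('Data Ingestion', 'completed')]
```

This trades the explicit `Observer` interface for flexibility: any function, lambda, or bound method can subscribe without subclassing.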

Strategy Pattern

The Strategy Pattern is a behavioral design pattern that defines a family of algorithms, encapsulates each one, and makes them interchangeable. The client code can choose which strategy to use at runtime, without changing the logic of the client itself.

  • Problem it solves: Avoids hard-coding a specific algorithm into a class, allowing the algorithm to be swapped out dynamically.
  • When to use:
    • Choosing different scheduling strategies (e.g., parallel vs sequential).
    • Switching between different pricing models (e.g., flat rate vs tiered).
    • Selecting different sorting algorithms (e.g., quicksort vs mergesort).

Example: Workflow Engine (Parallel vs Sequential Execution)

Imagine a workflow orchestration system where tasks can be executed sequentially or in parallel. Instead of hardcoding execution logic into the workflow, we define a Strategy interface and multiple implementations.

python
from abc import ABC, abstractmethod
import asyncio
import time


# --- Strategy Interface ---
class ExecutionStrategy(ABC):
    @abstractmethod
    def execute(self, tasks):
        pass


# --- Concrete Strategies ---
class SequentialExecution(ExecutionStrategy):
    def execute(self, tasks):
        print("Running tasks sequentially...")
        start = time.perf_counter()
        results = []
        for task in tasks:
            results.append(task())
        elapsed = time.perf_counter() - start
        return results, elapsed


class ParallelExecution(ExecutionStrategy):
    def execute(self, tasks):
        print("Running tasks in parallel with asyncio...")

        async def runner():
            start = time.perf_counter()
            coroutines = [asyncio.to_thread(task) for task in tasks]
            results = await asyncio.gather(*coroutines)
            elapsed = time.perf_counter() - start
            return results, elapsed

        return asyncio.run(runner())


# --- Context (Workflow Engine) ---
class WorkflowEngine:
    def __init__(self, strategy: ExecutionStrategy):
        self.strategy = strategy

    def set_strategy(self, strategy: ExecutionStrategy):
        self.strategy = strategy

    def run(self, tasks):
        return self.strategy.execute(tasks)


# --- Example Tasks ---
def task_a():
    print("Task A running...")
    time.sleep(1)
    return "Result A"


def task_b():
    print("Task B running...")
    time.sleep(2)
    return "Result B"


def task_c():
    print("Task C running...")
    time.sleep(1)
    return "Result C"


# --- Usage Example ---
if __name__ == "__main__":
    tasks = [task_a, task_b, task_c]

    engine = WorkflowEngine(SequentialExecution())
    seq_results, seq_time = engine.run(tasks)
    print(f"Sequential Results: {seq_results}, Time: {seq_time:.2f}s")

    # Reuse the same engine by changing the strategy
    engine.set_strategy(ParallelExecution())
    par_results, par_time = engine.run(tasks)
    print(f"Parallel Results: {par_results}, Time: {par_time:.2f}s")

Output:

bash
Running tasks sequentially...
Task A running...
Task B running...
Task C running...
Sequential Results: ['Result A', 'Result B', 'Result C'], Time: 4.01s
Running tasks in parallel with asyncio...
Task A running...
Task B running...
Task C running...
Parallel Results: ['Result A', 'Result B', 'Result C'], Time: 2.01s

Key Benefits

  • Interchangeable execution strategies (sequential vs parallel).
  • Open/Closed Principle — new strategies can be added without modifying existing code.
  • Flexible workflows — engine can switch strategies at runtime.
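A Pythonic aside: since functions are first-class, a strategy does not have to be a class at all. The sketch below (function names are illustrative) shows the same pattern with strategies as plain callables, which is often enough for simple cases.

```python
# Strategies as plain functions: any callable taking a task list works.
def sequential(tasks):
    return [task() for task in tasks]


def reversed_order(tasks):
    # A second interchangeable strategy, just to show the swap
    return [task() for task in reversed(tasks)]


def run_workflow(tasks, strategy=sequential):
    # The "context" picks a strategy at call time
    return strategy(tasks)


tasks = [lambda: "A", lambda: "B"]
print(run_workflow(tasks))                  # ['A', 'B']
print(run_workflow(tasks, reversed_order))  # ['B', 'A']
```

Reach for the class-based form when strategies carry configuration or state; use bare functions when they don't.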

Command Pattern

The Command Pattern encapsulates a request as an object so you can queue, log, undo/redo, or defer operations without the caller needing to know how they’re performed.

  • Problem it solves: Decouples what needs to be done (a request) from how/when/where it’s executed.
  • When to use:
    • You need undo/redo (editors, financial adjustments).
    • You want to queue work (background runners, schedulers).
    • You want to audit/log actions (compliance, ops).
    • You want to script/macro sequences (batch actions).

Roles:

  • Command: Interface with execute() (optionally undo()).
  • ConcreteCommand: Implements the request.
  • Receiver: The thing that actually does the work.
  • Invoker: Triggers the command (can queue, schedule).
  • Client: Builds the command and hands it to the invoker.

Example: Workflow Engine Commands (Run, Cancel, Retry) with Queue + Undo

Below is a minimal but useful example that would fit a workflow orchestration engine. We’ll support:

  • RunTask, CancelTask, RetryTask
  • A CommandBus (Invoker) that can execute immediately or enqueue
  • A history stack for undo
  • A MacroCommand for batching
python
from abc import ABC, abstractmethod
from dataclasses import dataclass
from typing import Any, Dict, List, Optional
import queue
import threading
import time


# ----- Receiver ---------------------------------------------------------------
class TaskRunner:
    """Receiver: knows how to run/cancel/retry tasks."""
    def __init__(self):
        # task_id -> status ("PENDING", "RUNNING", "DONE", "CANCELLED", "FAILED")
        self.state: Dict[str, str] = {}

    def run(self, task_id: str) -> str:
        self.state[task_id] = "RUNNING"
        # Simulate work
        time.sleep(0.1)
        self.state[task_id] = "DONE"
        return f"Task {task_id} completed"

    def cancel(self, task_id: str) -> str:
        if self.state.get(task_id) in {"PENDING", "RUNNING"}:
            self.state[task_id] = "CANCELLED"
            return f"Task {task_id} cancelled"
        return f"Task {task_id} not cancellable"

    def retry(self, task_id: str) -> str:
        # If failed, retry; otherwise no-op for the demo
        if self.state.get(task_id) == "FAILED":
            return self.run(task_id)
        return f"Task {task_id} not in FAILED state"


# ----- Command Interface ------------------------------------------------------
class Command(ABC):
    @abstractmethod
    def execute(self, runner: TaskRunner) -> Any: ...

    def undo(self, runner: TaskRunner) -> None:
        """Optional: not all commands need undo, but we support it when it makes sense."""
        pass


# ----- Concrete Commands ------------------------------------------------------
@dataclass
class RunTaskCommand(Command):
    task_id: str
    _prev_state: Optional[str] = None

    def execute(self, runner: TaskRunner) -> str:
        self._prev_state = runner.state.get(self.task_id)
        return runner.run(self.task_id)

    def undo(self, runner: TaskRunner) -> None:
        # For demo: restore previous state (a lightweight "memento")
        if self._prev_state is None:
            runner.state.pop(self.task_id, None)
        else:
            runner.state[self.task_id] = self._prev_state


@dataclass
class CancelTaskCommand(Command):
    task_id: str
    _prev_state: Optional[str] = None

    def execute(self, runner: TaskRunner) -> str:
        self._prev_state = runner.state.get(self.task_id)
        return runner.cancel(self.task_id)

    def undo(self, runner: TaskRunner) -> None:
        if self._prev_state is None:
            runner.state.pop(self.task_id, None)
        else:
            runner.state[self.task_id] = self._prev_state


@dataclass
class RetryTaskCommand(Command):
    task_id: str
    _prev_state: Optional[str] = None

    def execute(self, runner: TaskRunner) -> str:
        self._prev_state = runner.state.get(self.task_id)
        return runner.retry(self.task_id)

    def undo(self, runner: TaskRunner) -> None:
        if self._prev_state is None:
            runner.state.pop(self.task_id, None)
        else:
            runner.state[self.task_id] = self._prev_state


# ----- Macro (Composite) Command ---------------------------------------------
@dataclass
class MacroCommand(Command):
    commands: List[Command]

    def execute(self, runner: TaskRunner) -> List[Any]:
        results = []
        for cmd in self.commands:
            results.append(cmd.execute(runner))
        return results

    def undo(self, runner: TaskRunner) -> None:
        # Undo in reverse order
        for cmd in reversed(self.commands):
            cmd.undo(runner)


# ----- Invoker: CommandBus ----------------------------------------------------
class CommandBus:
    """Invoker: can execute commands now, or enqueue them for worker threads."""
    def __init__(self, runner: TaskRunner):
        self.runner = runner
        self.history: List[Command] = []
        self.q: "queue.Queue[Command]" = queue.Queue()
        self._stop = threading.Event()
        self._worker: Optional[threading.Thread] = None

    # Immediate execution (returns result)
    def dispatch(self, cmd: Command) -> Any:
        result = cmd.execute(self.runner)
        self.history.append(cmd)
        return result

    # Enqueue for background worker
    def enqueue(self, cmd: Command) -> None:
        self.q.put(cmd)

    def start_worker(self) -> None:
        if self._worker and self._worker.is_alive():
            return

        def worker():
            while not self._stop.is_set():
                try:
                    cmd = self.q.get(timeout=0.1)
                except queue.Empty:
                    continue
                cmd.execute(self.runner)
                self.history.append(cmd)
                self.q.task_done()

        self._worker = threading.Thread(target=worker, daemon=True)
        self._worker.start()

    def stop_worker(self) -> None:
        self._stop.set()
        if self._worker:
            self._worker.join(timeout=1)

    def undo_last(self) -> None:
        if not self.history:
            return
        cmd = self.history.pop()
        cmd.undo(self.runner)


# ----- Usage ------------------------------------------------------------------
if __name__ == "__main__":
    runner = TaskRunner()
    bus = CommandBus(runner)

    # Immediate execution
    print(bus.dispatch(RunTaskCommand("task-1")))     # -> Task task-1 completed
    print(bus.dispatch(CancelTaskCommand("task-1")))  # -> Task task-1 not cancellable
                                                      #    (already DONE)

    # Undo the last action (the cancel attempt)
    bus.undo_last()
    print("State after undo:", runner.state["task-1"])  # Restored state

    # Batch/macro (e.g., mass operations)
    macro = MacroCommand([
        RunTaskCommand("job-100"),
        RunTaskCommand("job-101"),
        CancelTaskCommand("job-100"),
    ])
    print(bus.dispatch(macro))  # list of results

    # Queue + background worker
    bus.start_worker()
    for i in range(3):
        bus.enqueue(RunTaskCommand(f"job-{i}"))
    bus.q.join()
    bus.stop_worker()

    print("Final state:", runner.state)

Output:

bash
Task task-1 completed
Task task-1 not cancellable
State after undo: DONE
['Task job-100 completed', 'Task job-101 completed', 'Task job-100 not cancellable']
Final state: {'task-1': 'DONE', 'job-100': 'DONE', 'job-101': 'DONE', 'job-0': 'DONE', 'job-1': 'DONE', 'job-2': 'DONE'}
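A Pythonic shortcut worth knowing: when you don't need undo or introspection, a command can be just a zero-argument callable, for example one built with `functools.partial`. The sketch below (names are illustrative) shows an invoker that queues and replays such commands without any class hierarchy.

```python
from functools import partial
from collections import deque


def run_task(log, task_id):
    # The "receiver" work: here we just record what happened
    log.append(f"ran {task_id}")


log = []
# Each queued item is a fully bound, ready-to-call command object
pending = deque([
    partial(run_task, log, "task-1"),
    partial(run_task, log, "task-2"),
])

while pending:
    command = pending.popleft()
    command()  # the invoker calls; it knows nothing about the work inside

print(log)  # ['ran task-1', 'ran task-2']
```

The class-based form above earns its keep once you need undo, auditing, or serialization; for fire-and-forget queuing, partials are often enough.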

Visitor Pattern

The Visitor Pattern is a behavioral design pattern that allows you to add new operations to a group of related objects without modifying their classes.

Instead of embedding multiple operations inside each class, you define a separate visitor object that "visits" elements of your object structure and performs actions on them.

This pattern is especially useful when:

  • You have a complex hierarchy of objects (like AST nodes, file system objects, workflow steps).
  • You want to separate algorithms from the object structure.
  • You want to add new operations without altering existing classes.

Structure

  1. Element (interface/protocol) Defines an accept(visitor) method that accepts a visitor.
  2. Concrete Elements Implement the accept method, passing themselves to the visitor.
  3. Visitor (interface/protocol) Declares a set of visit methods for each element type.
  4. Concrete Visitor Implements operations that should be applied to elements.

Example: An Expression Tree (AST) with Multiple Visitors

We’ll build a small arithmetic expression tree with nodes like Number, Var, Add, and Mul. Then we’ll write three visitors:

  1. Evaluator: computes the numeric result using a variable environment.
  2. PrettyPrinter: produces a human-readable string.
  3. NodeCounter: counts nodes for diagnostics.

1) Node hierarchy (the “elements”)

python
from dataclasses import dataclass
from abc import ABC, abstractmethod
from typing import Any, Dict


# ----- Element interface ------------------------------------------------------
class Expr(ABC):
    @abstractmethod
    def accept(self, visitor: "Visitor") -> Any:
        ...


# ----- Concrete nodes ---------------------------------------------------------
@dataclass(frozen=True)
class Number(Expr):
    value: float

    def accept(self, visitor: "Visitor") -> Any:
        return visitor.visit_Number(self)


@dataclass(frozen=True)
class Var(Expr):
    name: str

    def accept(self, visitor: "Visitor") -> Any:
        return visitor.visit_Var(self)


@dataclass(frozen=True)
class Add(Expr):
    left: Expr
    right: Expr

    def accept(self, visitor: "Visitor") -> Any:
        return visitor.visit_Add(self)


@dataclass(frozen=True)
class Mul(Expr):
    left: Expr
    right: Expr

    def accept(self, visitor: "Visitor") -> Any:
        return visitor.visit_Mul(self)

Each node implements accept(self, visitor) and forwards control to a type-specific visitor.visit_<ClassName>(self) method. That is the double dispatch: the runtime type of the node selects which visitor method runs.

2) Visitor base class with safe fallback

python
class Visitor(ABC):
    """Base visitor with a safe fallback."""

    def generic_visit(self, node: Expr) -> Any:
        raise NotImplementedError(f"No visit method for {type(node).__name__}")

    # Optional: generic dispatcher if a node forgets to override accept()
    def visit(self, node: Expr) -> Any:
        meth_name = f"visit_{type(node).__name__}"
        meth = getattr(self, meth_name, self.generic_visit)
        return meth(node)

Our nodes call visit_* directly. The visit() helper is handy if a node doesn't implement accept() (or for internal recursion).

3) Concrete visitors

Evaluator: compute value with variables

python
class Evaluator(Visitor):
    def __init__(self, env: Dict[str, float] | None = None):
        self.env = env or {}

    def visit_Number(self, node: Number) -> float:
        return node.value

    def visit_Var(self, node: Var) -> float:
        if node.name not in self.env:
            raise NameError(f"Undefined variable: {node.name}")
        return self.env[node.name]

    def visit_Add(self, node: Add) -> float:
        return node.left.accept(self) + node.right.accept(self)

    def visit_Mul(self, node: Mul) -> float:
        return node.left.accept(self) * node.right.accept(self)

PrettyPrinter: generate a readable string

python
class PrettyPrinter(Visitor):
    def visit_Number(self, node: Number) -> str:
        # Render integers nicely (no trailing .0); float() guards against
        # callers passing a plain int for `value`
        v = int(node.value) if float(node.value).is_integer() else node.value
        return str(v)

    def visit_Var(self, node: Var) -> str:
        return node.name

    def visit_Add(self, node: Add) -> str:
        return f"({node.left.accept(self)} + {node.right.accept(self)})"

    def visit_Mul(self, node: Mul) -> str:
        # Multiplication binds tighter than addition, but we keep it simple
        # and parenthesize everything
        return f"({node.left.accept(self)} * {node.right.accept(self)})"

NodeCounter: tally nodes (useful for diagnostics or cost models)

python
class NodeCounter(Visitor):
    def __init__(self):
        self.counts: Dict[str, int] = {}

    def _bump(self, cls_name: str):
        self.counts[cls_name] = self.counts.get(cls_name, 0) + 1

    def visit_Number(self, node: Number) -> int:
        self._bump("Number")
        return 1

    def visit_Var(self, node: Var) -> int:
        self._bump("Var")
        return 1

    def visit_Add(self, node: Add) -> int:
        self._bump("Add")
        return 1 + node.left.accept(self) + node.right.accept(self)

    def visit_Mul(self, node: Mul) -> int:
        self._bump("Mul")
        return 1 + node.left.accept(self) + node.right.accept(self)

Using the visitors on a nested structure, let's build the expression:

(2 + x) × (3 + 4)

python
# Build a nested AST
ast = Mul(
    Add(Number(2), Var("x")),
    Add(Number(3), Number(4)),
)

# Pretty print
pp = PrettyPrinter()
print("Expr:", ast.accept(pp))
# -> Expr: ((2 + x) * (3 + 4))

# Evaluate with a variable environment
ev = Evaluator({"x": 10})
print("Value:", ast.accept(ev))
# -> Value: 84   (2 + 10 = 12; 3 + 4 = 7; 12 * 7 = 84)

# Count nodes
nc = NodeCounter()
total = ast.accept(nc)
print("Total nodes:", total, "| breakdown:", nc.counts)

Variations & Pythonic Notes

  • Fallback dispatch: Our Visitor.generic_visit() and Visitor.visit() give you a safe default and a reflective dispatcher.
  • functools.singledispatch alternative: You can implement visitor-like logic with @singledispatch functions on node types—handy when you don’t control the node classes, but you’ll lose the explicit accept() double dispatch.
  • Immutability: The example uses @dataclass(frozen=True) for nodes—this makes ASTs safer to share and reason about.
  • Graphs vs Trees: Visitor is simplest on trees. For DAGs, ensure you don’t revisit nodes accidentally (cache/memoize by node id) if that matters.
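The singledispatch alternative mentioned above can be sketched like this: one generic function dispatches on the node's runtime type, with no accept() method needed on the nodes. (The Number/Add shapes from the example are redefined here so the snippet is self-contained; only two node types are shown for brevity.)

```python
from dataclasses import dataclass
from functools import singledispatch


@dataclass(frozen=True)
class Number:
    value: float


@dataclass(frozen=True)
class Add:
    left: object
    right: object


@singledispatch
def evaluate(node):
    # Fallback, analogous to generic_visit()
    raise NotImplementedError(f"No handler for {type(node).__name__}")


@evaluate.register
def _(node: Number) -> float:
    return node.value


@evaluate.register
def _(node: Add) -> float:
    # Plain recursion replaces the accept()/visit_* round trip
    return evaluate(node.left) + evaluate(node.right)


print(evaluate(Add(Number(2), Add(Number(3), Number(4)))))  # -> 9
```

This works even when you cannot modify the node classes, at the cost of losing the explicit accept() hook and keeping all operations in free functions rather than visitor objects.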

16.6 Conclusion

In this chapter, you explored the three major families of design patterns — Creational, Structural, and Behavioral — and saw how they apply directly to real-world Python development.

  • Creational patterns (Singleton, Factory, Abstract Factory, Builder) let you separate object creation from object usage, improving flexibility and testability.
  • Structural patterns (Adapter, Decorator, Composite, Proxy) help you assemble larger systems from smaller components, while keeping code extensible and reusable.
  • Behavioral patterns (Chain of Responsibility, Observer, Strategy, Command) define interaction rules between objects, giving you cleaner, more maintainable workflows.

Design patterns are not rigid “rules.” They are guides that help you recognize common problems and apply proven solutions. As you build larger Python systems—especially frameworks, workflow engines, or distributed systems—you’ll find yourself returning to these patterns again and again.

16.7 Chapter Assignment: Workflow Engine with Patterns

In this assignment you’ll extend your Docker runner into a mini workflow engine that demonstrates how multiple design patterns (Command, Strategy, Chain of Responsibility, Observer) fit together.

Requirements

  1. Tasks as Commands
    • Implement at least two tasks that wrap your existing Docker runners:
      • RunPythonTask (runs a .py script inside a Python container).
      • RunJavaScriptTask (runs a .js script inside a Node.js container).
    • Each task should implement a common Task interface with an execute(context) method.
  2. Execution Strategies (Strategy Pattern)
    • Implement two workflow execution strategies:
      • Sequential: tasks run one after the other.
      • Parallel (asyncio): tasks run concurrently.
    • Let the user choose the strategy when starting the workflow.
  3. Pipeline (Chain of Responsibility)
    • Before running tasks, all requests should pass through a pipeline of handlers:
      • AuthHandler (checks for an API key in the context).
      • ValidationHandler (ensures script paths exist).
      • LoggingHandler (logs before/after execution).
    • If any handler fails, stop the workflow.
  4. Observers
    • Implement observers such as:
      • LoggerObserver: prints events to the console.
      • FileObserver: writes task events to a log file.
    • Observers should be notified whenever a task starts or completes.
  5. Workflow Context
    • Store data (e.g. paths, environment vars, execution results) in a shared context dictionary.
    • Ensure one task’s output can be added to the context and used by the next.

Hints

  • Start with a Task base class or ABC, then subclass it for Python/JS tasks.
  • Use asyncio.gather() for parallel execution.
  • Chain of Responsibility can be implemented by linking handlers together: each calls next.handle(context) if successful.
  • Observers are just listeners attached to the workflow engine. Call observer.update(event) whenever a task changes state.
  • Use your existing Docker runner code for the core execute() logic of each task.
  • Keep the workflow small (2–3 tasks) so you can run it end-to-end in under a minute.
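To make the Chain of Responsibility hint concrete, here is one possible skeleton for the handler linkage (the class and method names are illustrative, not a required API); the full assignment handlers would override check() with real auth/validation logic.

```python
class Handler:
    """Base handler: passes the context along the chain unless check() fails."""
    def __init__(self, nxt=None):
        self.nxt = nxt

    def handle(self, context) -> bool:
        if not self.check(context):
            print(f"{type(self).__name__} rejected the request")
            return False
        # Continue down the chain, or succeed if we are the last link
        return self.nxt.handle(context) if self.nxt else True

    def check(self, context) -> bool:
        return True  # override in subclasses


class AuthHandler(Handler):
    def check(self, context) -> bool:
        return "api_key" in context


class ValidationHandler(Handler):
    def check(self, context) -> bool:
        return bool(context.get("script_path"))


pipeline = AuthHandler(nxt=ValidationHandler())
print(pipeline.handle({"api_key": "k", "script_path": "job.py"}))  # True
print(pipeline.handle({"api_key": "k"}))                           # False
```

Your LoggingHandler would override handle() rather than check(), so it can log both before and after delegating to the next link.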
