Pytest Tutorial: Fixtures, Mocking, and Beyond

What is Pytest? Pytest is a Python testing framework that simplifies writing and running tests. It lets you create anything from simple unit tests to complex functional tests for your Python applications, offering features such as automatic test discovery, fixture management, and plugin support. In this pytest tutorial, we will explore the core concepts of Pytest and demonstrate how to use it effectively.

Among Python testing frameworks (such as unittest and Pytest), Pytest stands out as one of the most powerful and user-friendly. Whether you’re working on a small script or a complex application, Pytest offers flexibility, simplicity, and advanced features that cater to both beginners and experts. This PyTest Tutorial will give you a complete understanding of the Pytest testing framework.

This article provides a comprehensive walkthrough of PyTest: its key features, advanced capabilities, and real-world applications. We’ll cover everything from basic test cases to custom plugins, complete with examples and analogies for clarity.

Why Pytest?

There are many reasons to choose pytest as your Python testing framework. PyTest’s popularity stems from its combination of simplicity and power:

  • Simple Syntax: Tests are written as plain Python functions using assert statements (see the sketch after this list). Unlike testing frameworks that require boilerplate code, Pytest makes it easy to write clear and concise test cases.
  • Rich Plugin Architecture: Extend functionality with a vast array of plugins or write your own to customize Pytest according to your needs.
  • Automatic Test Discovery: Pytest automatically finds test files and functions, reducing manual effort in test execution.
  • Fixtures: Manage setup and teardown for tests efficiently, preventing redundant setup code.
  • Parameterization: Run tests with multiple sets of inputs without duplicating code, ensuring broader test coverage.
  • Detailed Reporting: Provides clear and customizable output for debugging failures effectively.
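
To illustrate the plain-assert style, here is a minimal, self-contained sketch (the function add_numbers and the file name are hypothetical, used only for this illustration):

Minimal PyTest Example
def add_numbers(a, b):
    return a + b

def test_add_numbers():
    # Plain assert statements; no TestCase subclass or assertEqual helpers needed
    assert add_numbers(2, 3) == 5
    assert add_numbers(-1, 1) == 0

Save this as test_example.py and run pytest; the file and the function are discovered automatically because their names start with test_.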

Setting Up PyTest

To install Pytest, run the following pip command in a terminal:

Install PyTest
pip install -U pytest

Verify the installation:

pytest --version

Pytest version
PS C:\enodeas\task_management> pytest --version
pytest 8.3.5

Task Management System (Project Overview for PyTest Tutorial)

To help you understand and demonstrate advanced Pytest features, we’ll test a Task Management System built in Python (source code included), designed to help users organize and track their tasks. It demonstrates several important software development concepts, including:

This section focuses on the main components and how they work together to provide the task management functionality.

  • Project Structure: The project is organized into distinct modules, each responsible for a specific aspect of the application. This modularity improves code readability, maintainability, and extensibility. For example, task.py handles task-related data, task_manager.py handles task operations, and ui.py handles user interaction.
  • Object-Oriented Programming (OOP): OOP principles are used to model the application’s entities. The Task class represents a task with its attributes (title, description, etc.), and the TaskManager class manages a collection of tasks and provides methods to perform actions on them.
  • Data Persistence: The application offers two ways to store task data:
    • File-based storage: Tasks can be saved to and loaded from a JSON file.
    • Database storage: Tasks can be stored in and retrieved from a SQLite database. The user can choose which storage mechanism to use.
  • Task Management Logic: The TaskManager class contains the core logic for managing tasks. It provides functionalities like adding new tasks, marking tasks as complete, listing tasks, and potentially other operations like deleting or updating tasks.
  • User Interface (UI): A simple command-line interface (CLI) allows users to interact with the task manager. The UI prompts the user for input and displays the results of their actions.
  • Utility Functions: Helper functions, such as those for validating task priority, are placed in a separate utils.py module to keep the code organized and reusable.

This section covers the aspects related to ensuring the quality of the code and setting up the project.

  • Testing: A comprehensive set of tests is included to verify the application’s functionality. The tests, written using the pytest framework, cover various aspects:
    • Unit Tests: Test individual components (classes, functions).
    • Integration Tests: Test how different parts of the application work together.
    • Mocking: Isolates tests by replacing external dependencies (like file or database access) with mock objects.
    • Fixtures: Set up reusable test data and objects.
    • Parameterization: Run the same test with different inputs.
    • Markers: Categorize tests for selective execution.
  • Configuration: Settings like file paths and database names are stored in a config.py file, making it easy to customize the application without modifying the core code.
  • Requirements: The requirements.txt file lists all the project’s dependencies, simplifying the process of setting up the project in a new environment. You can install all required libraries using pip install -r requirements.txt.

You need a proper directory structure to run the pytest scripts. In this Python project example we create a parent directory (manage_tasks) containing two subdirectories: one for the task management project (task_management) and another for all the test scripts (tests).

PyTest Project Directory Structure
manage_tasks/
├── task_management/
│   ├── __init__.py
│   ├── config.py
│   ├── database.py
│   ├── task.py
│   ├── task_manager.py
│   ├── storage.py  # file-based storage if needed
│   ├── ui.py       # example UI (CLI)
│   └── utils.py
├── tests/
│   ├── conftest.py
│   ├── test_task.py
│   ├── test_task_manager.py
│   ├── test_database.py
│   ├── test_storage.py  # for file-based storage
│   └── test_utils.py
└── requirements.txt

The goal is to rigorously test each component, ensuring robustness and correctness.

Task Manager Config file
# Configuration settings
TASK_FILE = "tasks.json"  # Where tasks are stored
DB_FILE = "db.sqlite"  # Path to database file
Python file task.py
class Task:
    def __init__(self, title, description, due_date=None, priority="medium", completed=False):
        self.title = title
        self.description = description
        self.due_date = due_date  # Could be a datetime object
        self.priority = priority
        self.completed = completed

    def __str__(self): #for printing task object in readable format
        return f"{self.title} - {self.description} (Priority: {self.priority}, Due: {self.due_date}, Completed: {self.completed})"

    def __repr__(self): # unambiguous representation, useful when printing lists of tasks and when debugging
        return f"Task(title='{self.title}', description='{self.description}', due_date='{self.due_date}', priority='{self.priority}', completed={self.completed})"
    
    def mark_complete(self):
        self.completed = True

    def to_dict(self): #for storing in json file
        return {
            "title": self.title,
            "description": self.description,
            "due_date": str(self.due_date), # Store date as string
            "priority": self.priority,
            "completed": self.completed,
        }

    @classmethod
    def from_dict(cls, data): #for reading from json file
        # Convert date string back to datetime object here if needed
        return cls(data["title"], data["description"], data.get("due_date"), data["priority"], data["completed"])
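
TaskManager.get_task_id_by_task (shown below) compares tasks with ==, and its comment assumes Task defines an __eq__ method, which the class above does not; without one, == falls back to identity comparison. A minimal value-based __eq__ you could add to Task might look like this (a sketch, not part of the original source):

Possible __eq__ for Task
    def __eq__(self, other):
        # Compare by value rather than identity; adjust the fields as needed
        if not isinstance(other, Task):
            return NotImplemented
        return (self.title, self.description, self.priority) == (other.title, other.description, other.priority)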
Python file task_manager.py
from task_management.task import Task
from task_management.storage import load_tasks, save_tasks  # Keep these for file-based storage if needed
from task_management.utils import is_valid_priority
from task_management.database import create_connection, create_table, add_task, get_task, list_tasks, complete_task  # Import database functions
from task_management.config import TASK_FILE, DB_FILE  # Add DB_FILE to config

class TaskManager:
    def __init__(self, use_database=True):  # Add a flag to choose storage
        self.use_database = use_database
        if self.use_database:
            self.conn = create_connection(DB_FILE)  # Establish DB connection
            if self.conn is None:
                raise Exception("Failed to connect to database.") # Handle connection failure
            create_table(self.conn)
            self.tasks = self._load_tasks_from_db()  # Load tasks from DB
        else:
            self.conn = None # No connection if not using database
            self.tasks = load_tasks()  # Load from file if not using database

    def __del__(self): # Close connection when TaskManager is destroyed
        if self.conn:
            self.conn.close()

    def _load_tasks_from_db(self):
        return list_tasks(self.conn)

    def add_task(self, title, description, due_date=None, priority="medium"):
        if not is_valid_priority(priority):
            raise ValueError("Invalid priority")
        new_task = Task(title, description, due_date, priority)
        if self.use_database:
            task_id = add_task(self.conn, new_task)
            new_task.id = task_id # Set the ID of the task object
            self.tasks.append(new_task)
        else:
            self.tasks.append(new_task)
            save_tasks(self.tasks) # Save to file if not using database

    def complete_task(self, task_id_or_index):  # Accepts a DB id (database mode) or a list index (file mode)
        print(f"Completing task: {task_id_or_index}")
        print(f"Task list: {self.tasks}")
        try:
            task_id = int(task_id_or_index)
        except (TypeError, ValueError):
            raise ValueError(f"Invalid task identifier: {task_id_or_index!r}")
        if self.use_database:
            complete_task(self.conn, task_id)
            task = get_task(self.conn, task_id) # Get updated task after completion
            if task:
                for i, t in enumerate(self.tasks):
                    if t.id == task_id:
                        self.tasks[i] = task # Update the in-memory list as well
                        break
        else:
            if 0 <= task_id < len(self.tasks): # Interpreted as an index in file mode
                self.tasks[task_id].mark_complete()
                save_tasks(self.tasks) # Save to file if not using database
            else:
                raise IndexError("Invalid task index")
    def get_task_id_by_task(self, task):
        if not self.use_database:
            raise ValueError("get_task_id_by_task is only valid for database-backed TaskManager")
        
        for t in self.tasks:
            if t == task:  # Assumes Task defines __eq__ (see the sketch above), or modify as needed
                return t.id
        return None  # Task not found

    def list_tasks(self):
        if not self.tasks:
            return "No tasks found."
        formatted_tasks = "\n".join(f"{index + 1}. {task}" for index, task in enumerate(self.tasks))
        return formatted_tasks
Python file database.py
import sqlite3
from task_management.task import Task  # Assuming Task class is in task.py

def create_connection(db_file):
    conn = None
    try:
        conn = sqlite3.connect(db_file)
        return conn
    except Exception as e:
        print(f"Error connecting to database: {e}")
        return None

def create_table(conn):
    try:
        cursor = conn.cursor()
        cursor.execute("""
            CREATE TABLE IF NOT EXISTS tasks (
                id INTEGER PRIMARY KEY AUTOINCREMENT,
                title TEXT NOT NULL,
                description TEXT,
                due_date TEXT,
                priority TEXT,
                completed INTEGER
            )
        """)
        conn.commit()
    except Exception as e:
        print(f"Error creating table: {e}")

def add_task(conn, task):
    try:
        cursor = conn.cursor()
        cursor.execute("""
            INSERT INTO tasks (title, description, due_date, priority, completed)
            VALUES (?, ?, ?, ?, ?)
        """, (task.title, task.description, task.due_date, task.priority, int(task.completed)))  # Store boolean as integer
        conn.commit()
        return cursor.lastrowid # Return the ID of the new task
    except Exception as e:
        print(f"Error adding task: {e}")
        return None

def get_task(conn, task_id):
    try:
        cursor = conn.cursor()
        cursor.execute("SELECT * FROM tasks WHERE id = ?", (task_id,))
        row = cursor.fetchone()
        if row:
            # Assuming Task.from_dict handles the conversion
            return Task.from_dict({
                "title": row[1], "description": row[2], "due_date": row[3],
                "priority": row[4], "completed": bool(row[5])
            })
        return None
    except Exception as e:
        print(f"Error getting task: {e}")
        return None


def list_tasks(conn):
    try:
        cursor = conn.cursor()
        cursor.execute("SELECT * FROM tasks")
        rows = cursor.fetchall()
        tasks = []
        for row in rows:
             tasks.append(Task.from_dict({
                "title": row[1], "description": row[2], "due_date": row[3],
                "priority": row[4], "completed": bool(row[5])
            }))
        return tasks
    except Exception as e:
        print(f"Error listing tasks: {e}")
        return None

def complete_task(conn, task_id):
    try:
        cursor = conn.cursor()
        print('Calling complete task')
        cursor.execute("UPDATE tasks SET completed = 1 WHERE id = ?", (task_id,))
        conn.commit()
    except Exception as e:
        print(f"Error completing task: {e}")
Python file storage.py
import json
from task_management.task import Task
from task_management.config import TASK_FILE

def load_tasks():  # If you still use file-based storage
    try:
        with open(TASK_FILE, "r") as f:
            data = json.load(f)
            return [Task.from_dict(task_data) for task_data in data]
    except FileNotFoundError:
        return []

def save_tasks(tasks): # If you still use file-based storage
    with open(TASK_FILE, "w") as f:
        task_dicts = [task.to_dict() for task in tasks]
        json.dump(task_dicts, f, indent=4)
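
The directory layout lists tests/test_storage.py for the file-based backend, but the tutorial does not show it. A possible version, sketched here with pytest’s built-in tmp_path and monkeypatch fixtures (the test names are assumptions), redirects TASK_FILE to a temporary file so no real data is touched:

Possible Python file test_storage.py
from task_management.task import Task
from task_management.storage import load_tasks, save_tasks

def test_save_and_load_roundtrip(tmp_path, monkeypatch):
    task_file = tmp_path / "tasks.json"
    # storage.py binds TASK_FILE at import time, so patch the name inside that module
    monkeypatch.setattr("task_management.storage.TASK_FILE", str(task_file))

    save_tasks([Task("Demo", "Roundtrip check")])
    tasks = load_tasks()

    assert len(tasks) == 1
    assert tasks[0].title == "Demo"

def test_load_tasks_missing_file(tmp_path, monkeypatch):
    monkeypatch.setattr("task_management.storage.TASK_FILE", str(tmp_path / "absent.json"))
    assert load_tasks() == []  # FileNotFoundError is handled by returning an empty list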
Python file ui.py (CLI)
from task_management.task_manager import TaskManager

def main():
    use_db = input("Use database (yes/no)? ").lower() == "yes"
    manager = TaskManager(use_database=use_db)

    while True:
        print("\nTask Management Menu:")
        print("1. Add Task")
        print("2. List Tasks")
        print("3. Complete Task")
        print("4. Exit")

        choice = input("Enter your choice: ")

        try:
            if choice == "1":
                title = input("Enter task title: ")
                description = input("Enter task description: ")
                # ... get due_date and priority
                manager.add_task(title, description) #, due_date, priority
                print("Task added.")
            elif choice == "2":
                print(manager.list_tasks())  # list_tasks() returns a formatted string (or "No tasks found.")
            elif choice == "3":
                task_id_or_index = input("Enter task ID or index to complete: ")
                manager.complete_task(task_id_or_index)
                print("Task completed.")
            elif choice == "4":
                break
            else:
                print("Invalid choice.")
        except Exception as e:
            print(f"An error occurred: {e}")

if __name__ == "__main__":
    main()
Python file utils.py
def is_valid_priority(priority):
    return priority.lower() in ["high", "medium", "low"]
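
tests/test_utils.py appears in the directory layout but is not shown either. Because is_valid_priority is a pure function, a parametrized test (a feature covered in detail later) is a natural fit; this is a sketch with assumed test names:

Possible Python file test_utils.py
import pytest
from task_management.utils import is_valid_priority

@pytest.mark.parametrize("priority, expected", [
    ("high", True),
    ("MEDIUM", True),   # the check is case-insensitive
    ("low", True),
    ("urgent", False),  # not an allowed value
])
def test_is_valid_priority(priority, expected):
    assert is_valid_priority(priority) == expected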
Python file task_reports.py
from task_management.task_manager import TaskManager

def generate_completed_report(manager):
    """Generates a report of completed tasks."""
    completed_tasks = [task for task in manager.tasks if task.completed]  # Iterate the task list; list_tasks() returns a formatted string

    if not completed_tasks:
        print("No completed tasks.")
        return

    print("Completed Tasks Report:")
    for task in completed_tasks:
        print(f"- {task}")  # Assuming Task._str_() is implemented

def generate_priority_report(manager, priority):
    """Generates a report of tasks with a specific priority."""
    priority_tasks = [task for task in manager.tasks if task.priority.lower() == priority.lower()]

    if not priority_tasks:
        print(f"No tasks with priority '{priority}'.")
        return

    print(f"Tasks with Priority '{priority}':")
    for task in priority_tasks:
        print(f"- {task}")

# Example usage (if reports.py is run as a script):
if __name__ == "__main__":
    manager = TaskManager()  # Create a TaskManager instance
    generate_completed_report(manager)
    generate_priority_report(manager, "high")
Python file test_database.py
import pytest
import sqlite3
from task_management.task import Task
from task_management.database import create_connection, create_table, add_task, get_task, list_tasks, complete_task
from unittest.mock import patch, MagicMock

@pytest.fixture
def test_db():
    conn = create_connection(":memory:")  # In-memory database for testing
    create_table(conn)
    yield conn
    conn.close()

def test_add_task_to_db(test_db):
    task = Task("Test Task", "Test Description", "2024-04-15", "high")
    task_id = add_task(test_db, task)
    assert task_id is not None  # Check if task was added

    retrieved_task = get_task(test_db, task_id)
    assert retrieved_task.title == "Test Task"
    assert retrieved_task.priority == "high"
    assert retrieved_task.completed == False

def test_get_task_from_db(test_db):
    task = Task("Another Task", "Another Desc")
    task_id = add_task(test_db, task)
    retrieved = get_task(test_db, task_id)
    assert retrieved.title == "Another Task"

def test_list_tasks_from_db(test_db):
    task1 = Task("Task 1", "Desc 1")
    task2 = Task("Task 2", "Desc 2")
    add_task(test_db, task1)
    add_task(test_db, task2)
    tasks = list_tasks(test_db)
    assert len(tasks) == 2
    assert tasks[0].title == "Task 1"

def test_complete_task_in_db(test_db):
    task = Task("To Complete", "Description")
    task_id = add_task(test_db, task)
    complete_task(test_db, task_id)
    retrieved = get_task(test_db, task_id)
    assert retrieved.completed == True

# Example of more advanced mocking - mocking the cursor
def test_add_task_mock_cursor():  # The in-memory fixture isn't needed here; everything is mocked
    task = Task("Mock Task", "Mock Desc")
    mock_cursor = MagicMock()
    mock_conn = MagicMock()
    mock_conn.cursor.return_value = mock_cursor

    with patch("task_management.database.create_connection", return_value=mock_conn):
        add_task(mock_conn, task)

        mock_conn.cursor.assert_called_once()
        mock_cursor.execute.assert_called_once()
        mock_conn.commit.assert_called_once()

# Example: Mocking the connection
def test_add_task_mock_connection():

    mock_conn = MagicMock()
    task = Task("Mock Task 2", "Mock Desc 2")

    with patch("task_management.database.create_connection", return_value=mock_conn):
        add_task(mock_conn, task)

        mock_conn.cursor.assert_called_once()
        mock_conn.commit.assert_called_once()
Python file test_task_manager.py
import pytest
from task_management.task_manager import TaskManager
from task_management.task import Task
from unittest.mock import patch, MagicMock

@pytest.fixture
def task_manager_db():
    manager = TaskManager(use_database=True)
    manager.tasks = [] # Initialize task list
    return manager

@pytest.fixture
def task_manager_file():
    manager = TaskManager(use_database=False)
    manager.tasks = [] # Initialize task list
    return manager

def test_add_task_db(task_manager_db):
    task_manager_db.add_task("Grocery Shopping", "Buy milk, eggs")
    assert len(task_manager_db.tasks) == 1
    assert task_manager_db.tasks[0].title == "Grocery Shopping"
    assert hasattr(task_manager_db.tasks[0], "id") # Check if task has an ID

def test_add_task_file(task_manager_file):
    task_manager_file.add_task("Grocery Shopping", "Buy milk, eggs")
    assert len(task_manager_file.tasks) == 1
    assert "Grocery Shopping" in str(task_manager_file.tasks[0])
    assert not hasattr(task_manager_file.tasks[0], "id") # File-mode tasks have no database ID

def test_complete_task_db(task_manager_db):
    task_manager_db.add_task("Laundry", "Wash and dry clothes")
    task_id = task_manager_db.tasks[0].id
    print("sample_tasks", task_id)
    task_manager_db.complete_task(task_id) # Complete using ID
    assert task_manager_db.tasks[0].completed == True

def test_complete_task_file(task_manager_file):
    task_manager_file.add_task("Laundry", "Wash and dry clothes")
    task_manager_file.complete_task(0) # Complete using index
    assert task_manager_file.tasks[0].completed == True

def test_complete_task_db_by_index(task_manager_db): # Look up the task's ID, then complete by ID
    task_manager_db.add_task("Laundry", "Wash and dry clothes")
    task = task_manager_db.tasks[0]
    task_id = task_manager_db.get_task_id_by_task(task)
    task_manager_db.complete_task(task_id) # Complete using the looked-up ID
    assert task_manager_db.tasks[0].completed == True

def test_list_tasks_db(task_manager_db):
    task_manager_db.add_task("Task 1", "Desc 1")
    task_manager_db.add_task("Task 2", "Desc 2")
    tasks = task_manager_db.list_tasks().splitlines()
    assert len(tasks) == 2
    assert "Task 1" in tasks[0]
def test_list_tasks_file(task_manager_file):
    task_manager_file.add_task("Task 1", "Desc 1")
    task_manager_file.add_task("Task 2", "Desc 2")
    tasks = task_manager_file.list_tasks().splitlines()
    assert len(tasks) == 2
    assert "Task 1" in tasks[0]
Python file conftest.py
from task_management.task_manager import TaskManager
from task_management.task import Task
from unittest.mock import patch, Mock
import pytest

# Fixture for TaskManager (used in multiple tests)
@pytest.fixture
def task_manager():
    manager = TaskManager()
    manager.tasks = []  # Start with an empty list
    return manager


# Fixture with a mock storage (for testing storage interactions)
@pytest.fixture
def mock_storage():
    with patch("storage.save_tasks") as mock_save:  # Patch the save_tasks function
        yield mock_save  # Provide the mock to the tests

# Fixture with some pre-defined tasks (for testing task listing, etc.)
@pytest.fixture
def sample_tasks():
    return [
        Task("Grocery Shopping", "Milk, eggs, bread"),
        Task("Laundry", "Wash and dry clothes", completed=True),
        Task("Pay Bills", "Electricity, internet"),
    ]

# Example of a hook to add options to pytest
def pytest_addoption(parser):
    parser.addoption("--runslow", action="store_true", default=False, help="run slow tests")

# Example of a hook to control test execution based on markers
def pytest_collection_modifyitems(config, items):
    if config.getoption("--runslow"):
        # --runslow given in cli: do not skip slow tests
        return
    skip_slow = pytest.mark.skip(reason="need --runslow option to run")
    for item in items:
        if "slow" in item.keywords:
            item.add_marker(skip_slow)

I hope you were able to configure the task management project for this PyTest Tutorial. All the files must be placed in the directories as described earlier. Let’s execute PyTest with the terminal command below.

Execute PyTest in Python
PS C:\enodeas\TaskManager> pytest -v -s        

======================= test session starts ======================================
platform win32 -- Python 3.10.9, pytest-8.3.5, pluggy-1.5.0 -- C:\ProgramData\anaconda3\python.exe
cachedir: .pytest_cache
rootdir: C:\enodeas\TaskManager
plugins: bdd-8.1.0, mock-3.14.0, anyio-3.5.0
collected 13 items

tests/test_database.py::test_add_task_to_db PASSED
tests/test_database.py::test_get_task_from_db PASSED
tests/test_database.py::test_list_tasks_from_db PASSED
tests/test_database.py::test_complete_task_in_db Calling complete task
PASSED
tests/test_database.py::test_add_task_mock_cursor PASSED
tests/test_database.py::test_add_task_mock_connection PASSED
tests/test_task_manager.py::test_add_task_db PASSED
tests/test_task_manager.py::test_add_task_file PASSED
tests/test_task_manager.py::test_complete_task_db sample_tasks 167
Completing task: 167
Task list: [Task(title='Laundry', description='Wash and dry clothes', due_date='None', priority='medium', completed=False)]
Calling complete task
PASSED
tests/test_task_manager.py::test_complete_task_file Completing task: 0
Task list: [Task(title='Laundry', description='Wash and dry clothes', due_date='None', priority='medium', completed=False)]
PASSED
tests/test_task_manager.py::test_complete_task_db_by_index Completing task: 168
Task list: [Task(title='Laundry', description='Wash and dry clothes', due_date='None', priority='medium', completed=False)]
Calling complete task
PASSED
tests/test_task_manager.py::test_list_tasks_db PASSED
tests/test_task_manager.py::test_list_tasks_file PASSED

============================= 13 passed in 0.14s ===============================================

Advanced Test Scripts for PyTest Tutorial

PyTest Fixtures

Fixtures in PyTest help manage test setup and teardown efficiently. A pytest fixture acts as a configuration function that pytest executes automatically for the tests that request it. In short, a fixture is a function that provides a defined baseline, or setup, for your tests.

Essentially, it’s a way to:

  • Set up resources: PyTest fixtures could involve things like creating database connections, initializing test data, or configuring application settings.
  • Provide test data: PyTest fixtures can return data that your tests need to work with.
  • Perform teardown: After a test has run, pytest fixtures can clean up any resources that were created during setup.

PyTest fixtures help eliminate redundant setup and teardown code, making your tests more concise and maintainable.  They promote code reusability, allowing you to share common setup logic across multiple tests. PyTest fixtures improve test isolation, ensuring that each test runs in a consistent and predictable environment.

  • You define a fixture function and decorate it with @pytest.fixture.
  • Test functions can then “request” the fixture by including it as an argument.  
  • pytest automatically executes the fixture function before running the test, and passes the fixture’s return value to the test.
Pytest Fixture Example
import pytest
from task_management.task_manager import TaskManager
@pytest.fixture
def task_manager():
    """Provides a fresh TaskManager instance for testing."""
    taskmgr = TaskManager(use_database=False)
    taskmgr.tasks = []
    return taskmgr

def test_task_creation(task_manager):
    """Tests if a new task is created correctly."""
    task_manager.add_task("Implement fixture example", "Validate the fixture in pytest")

    assert task_manager.tasks[0].title == "Implement fixture example"
    assert task_manager.tasks[0].completed == False
PS C:\enodeas\TaskManager> pytest -v -s validate.py
===================================================== test session starts =====================================================
platform win32 -- Python 3.10.9, pytest-8.3.5, pluggy-1.5.0 -- C:\ProgramData\anaconda3\python.exe
cachedir: .pytest_cache
rootdir: C:\enodeas\TaskManager
plugins: bdd-8.1.0, anyio-3.5.0
collected 1 item

validate.py::test_task_creation PASSED

=================================== 1 passed in 0.05s ===================================

PyTest Fixture Scopes

Pytest fixture scopes determine how often a fixture is set up and torn down during a test session. Pytest provides several scopes, each serving a different purpose:

function (default):

The pytest fixture is created and destroyed for each test function. This is the most common scope, ensuring each test has a fresh environment.

@pytest.fixture(scope='function')
def task_manager():

class:

The fixture is created once per test class and destroyed after the last test in the class. Useful for setting up resources that are shared across all tests within a class.

@pytest.fixture(scope="class")
def task_manager():

module:

The fixture is created once per test module and destroyed after the last test in the module. Suitable for setting up resources that are shared across all tests in a module.

@pytest.fixture(scope="module")
def task_manager():

package:

The fixture is created once per package of tests. This is a less commonly used scope, useful for setup shared across a package of test files.

@pytest.fixture(scope="package")
def task_manager():

session:

The fixture is created once per test session and destroyed after all tests have completed. This scope is ideal for resources used throughout the entire test run, such as database connections.

@pytest.fixture(scope="session")
def task_manager():
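
As a concrete illustration, here is a sketch of a session-scoped fixture with teardown, using yield so that everything after it runs once at the end of the session (the fixture name db_connection is an assumption):

Session-Scoped Fixture Example
import pytest
from task_management.database import create_connection, create_table

@pytest.fixture(scope="session")
def db_connection():
    """One shared in-memory database for the whole test session."""
    conn = create_connection(":memory:")
    create_table(conn)
    yield conn   # Everything before yield is setup...
    conn.close() # ...everything after yield is teardown.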

PyTest Parameterized Testing

Parameterized testing in pytest, via the @pytest.mark.parametrize marker, allows you to run a single test function multiple times with different sets of input data. This significantly reduces code duplication and makes your tests more comprehensive.

  • @pytest.mark.parametrize: This decorator is used to define the parameters for your test.
  • Parameters: You provide a list of tuples, where each tuple represents a set of inputs for a single test run.
  • Test Execution: pytest automatically generates multiple test runs, one for each set of parameters.
Example PyTest Mark Parameterized
import pytest
from task_management.task_manager import TaskManager
@pytest.fixture
def task_manager():
    """Provides a fresh TaskManager instance for testing."""
    taskmgr = TaskManager(use_database=False)
    taskmgr.tasks = []
    return taskmgr
@pytest.mark.parametrize("title, description, completed", [
    ("Task A", "Description A", False),
    ("Task B", "Description B", True),
    ("Task C", "Description C", False),
])
def test_add_multiple_tasks(task_manager, title, description, completed):
    """Tests adding multiple tasks with different statuses."""
    task_manager.add_task(title, description)
    if completed:
        task_manager.tasks[-1].mark_complete()

    assert task_manager.tasks[-1].title == title
    assert task_manager.tasks[-1].description == description
    assert task_manager.tasks[-1].completed == completed

PS C:\enodeas\TaskManager> pytest -v -s validate.py
===================================== test session starts ===================================
platform win32 -- Python 3.10.9, pytest-8.3.5, pluggy-1.5.0 -- C:\ProgramData\anaconda3\python.exe
cachedir: .pytest_cache
rootdir: C:\enodeas\TaskManager
plugins: bdd-8.1.0, anyio-3.5.0
collected 3 items

validate.py::test_add_multiple_tasks[Task A-Description A-False] PASSED
validate.py::test_add_multiple_tasks[Task B-Description B-True] PASSED
validate.py::test_add_multiple_tasks[Task C-Description C-False] PASSED

Mocking in PyTest

Mocking is a technique used to replace parts of your system under test with simulated objects. This is particularly useful for isolating your code and testing it independently of its dependencies.

  • Isolation:
    • Mocking allows you to test a specific unit of code without relying on external dependencies like databases, APIs, or file systems. This ensures that your tests are focused and predictable.
  • Controlling Behavior:
    • You can simulate the behavior of dependencies, specifying their return values, raising exceptions, or verifying that they were called correctly.
  • Speed and Reliability:
    • Mocking eliminates the need to interact with slow or unreliable external systems, making your tests faster and more consistent.
  • Mock Objects:
    • These are simulated objects that mimic the behavior of real objects. You can define how they respond to method calls and attribute access.
  • Patching:
    • Patching involves temporarily replacing a function, method, or class with a mock object during a test. This allows you to intercept calls to the original object and control its behavior.
  • Assertions:
    • Mocking often involves asserting that mock objects were called with the expected arguments or that they returned the expected values.
  • pytest-mock:
    • pytest-mock is a popular pytest plugin that provides a convenient way to use mocking in your tests. It offers a mocker fixture that simplifies the creation and use of mock objects.
  • unittest.mock:
    • Python’s standard library includes the unittest.mock module, which provides the core mocking functionality. pytest-mock builds upon this module.
  • Benefits of using pytest-mock:
    • It integrates seamlessly with pytest’s testing framework.
    • It simplifies the process of patching and creating mock objects.
    • It provides helpful assertion methods for verifying mock interactions.

The code snippet below shows how unittest.mock’s MagicMock, patch, and assertions can be used to test the task management project by mocking the database connection and other parts of the package.

Unittest Mock Example
import pytest
from unittest.mock import MagicMock, patch
from task_management.task_manager import TaskManager
from task_management.task import Task
from task_management.database import create_connection, add_task, get_task
from task_management.storage import load_tasks, save_tasks

# 1. Basic Mocking: Mocking a function that returns a fixed value
def test_list_tasks():
    task_manager = TaskManager(use_database=False)
    task_manager.list_tasks = MagicMock(return_value="Mocked Task List")
    assert task_manager.list_tasks() == "Mocked Task List"

# 2. Mocking a Database Connection
def test_db_connection_failure():
    with patch("task_management.task_manager.create_connection", side_effect=Exception("Failed to connect to database.")):
        with pytest.raises(Exception, match="Failed to connect to database."):
            TaskManager(use_database=True)  # The constructor raises before any task can be added

# 3. Mocking a Method: Mocking list_tasks
def test_mock_list_tasks():
    task_manager = TaskManager(use_database=True)
    with patch.object(task_manager, "list_tasks", return_value="Mocked Tasks"):
        assert task_manager.list_tasks() == "Mocked Tasks"

# 4. Mocking a Class: Mocking Task
def test_mock_task_class():
    mock_task = MagicMock(spec=Task)
    mock_task.title = "Mock Task"
    mock_task.description = "This is a mocked task"
    assert mock_task.title == "Mock Task"
    assert mock_task.description == "This is a mocked task"

# 5. Mocking External Dependencies
def test_mock_load_save_tasks():
    # Patch load_tasks where TaskManager looks it up (it was imported into task_manager's namespace)
    with patch("task_management.task_manager.load_tasks", return_value=[Task("Task1", "Desc1")]):
        task_manager = TaskManager(use_database=False)
        assert len(task_manager.tasks) == 1
        assert task_manager.tasks[0].title == "Task1"

    with patch("task_management.storage.save_tasks") as mock_save:
        task_manager.save_tasks = mock_save
        task_manager.save_tasks([])
        mock_save.assert_called_once_with([])
Unittest Mock Example Output
PS C:\enodeas\TaskManager> pytest -v -s validate.py
=============================================== test session starts ================================================
platform win32 -- Python 3.10.9, pytest-8.3.5, pluggy-1.5.0 -- C:\ProgramData\anaconda3\python.exe
cachedir: .pytest_cache
rootdir: C:\enodeas\TaskManager
plugins: bdd-8.1.0, mock-3.14.0, xdist-3.6.1, anyio-3.5.0
collected 5 items

validate.py::test_list_tasks PASSED
validate.py::test_db_connection_failure PASSED
validate.py::test_mock_list_tasks PASSED
validate.py::test_mock_task_class PASSED
validate.py::test_mock_load_save_tasks PASSED

================== 5 passed in 0.05s ==================

The mocker argument in a pytest test function comes from the pytest-mock plugin. When you include mocker as an argument, pytest automatically injects an instance of the MockerFixture class, which provides a set of methods that simplify creating and managing mock objects.

PyTest Mocker Python Example Script
import pytest
from task_management.task_manager import TaskManager
from task_management.task import Task

def test_db_connection_failure(mocker):
    """Mock database connection failure."""
    mocker.patch("task_management.task_manager.create_connection", side_effect=Exception("Failed to connect to database."))
    
    with pytest.raises(Exception, match="Failed to connect to database."):
        TaskManager(use_database=True)

def test_add_task(mocker):
    """Mock add_task on TaskManager instance."""
    manager = TaskManager(use_database=True)

    # Patch the instance method
    mock_add_task = mocker.patch.object(manager, "add_task", return_value=123)
    task = Task("Mock Task", "Testing", priority="high", id=123)
    manager.tasks.append(task)
    manager.add_task("Mock Task", "Testing", priority="high")
    mock_add_task.assert_called_once()  # Ensure add_task is called
    assert manager.tasks[-1].id == 123  # Validate task ID

def test_complete_task(mocker):
    """Mock complete_task to test task completion."""
    manager = TaskManager(use_database=True)
    
    # Patch complete_task on the manager instance
    mock_complete_task = mocker.patch.object(manager, "complete_task")

    mock_task = Task("Mock Task", "Description", priority="low", completed=True)
    mock_task.id = 124
    manager.tasks = [mock_task]

    manager.complete_task(124)  # Call the method

    mock_complete_task.assert_called_once_with(124)  # Ensure the mock was called
    assert manager.tasks[-1].completed == True    
Example PyTest Mocker Output
PS C:\enodeas\TaskManager> pytest -v -s validate.py 
===================== test session starts =====================
platform win32 -- Python 3.10.9, pytest-8.3.5, pluggy-1.5.0 -- C:\ProgramData\anaconda3\python.exe
cachedir: .pytest_cache
rootdir: C:\enodeas\TaskManager
plugins: bdd-8.1.0, mock-3.14.0, xdist-3.6.1, anyio-3.5.0
collected 3 items

validate.py::test_db_connection_failure PASSED
validate.py::test_add_task PASSED
validate.py::test_complete_task PASSED

========================== 3 passed in 0.16s ========================== 

Parallel Test Execution in PyTest

Running tests in parallel can significantly reduce the overall execution time of your test suite, especially for large projects. pytest provides several ways to achieve this, primarily through plugins.

  • Primary Tool: pytest-xdist is the most popular and recommended plugin for parallel test execution in pytest.
  • How pytest-xdist Works: pytest-xdist distributes tests across multiple CPUs or even multiple machines.
  • Installation of pytest-xdist: pip install pytest-xdist
  • Usage of pytest-xdist:
    • Use the -n option to specify the number of worker processes. For example: pytest -n auto #Automatically detect the number of cpus.
    • pytest -n 4 #Run tests using 4 worker processes.
    • You can also distribute tests across multiple machines using ssh:
      pytest -n 2 --dist loadfile --tx ssh=user@host1//python=python3 --tx ssh=user@host2//python=python3
  • Benefits of pytest-xdist:
    • Significant speed improvements for large test suites.
    • Easy to use.
    • Supports distribution across multiple machines.
  • Considerations:
    • Tests must be independent and not rely on shared mutable state, as this can lead to race conditions.
    • Database interactions and file system changes must be handled carefully.
PS C:\enodeas\TaskManager> pytest -n 4
========================== test session starts ========================
platform win32 -- Python 3.10.9, pytest-8.3.5, pluggy-1.5.0
rootdir: C:\enodeas\TaskManager
plugins: bdd-8.1.0, mock-3.14.0, xdist-3.6.1, anyio-3.5.0
4 workers [13 items]
............. [100%]
========================== 13 passed in 7.87s ===========================
pytest -n auto pytest-xdist
PS C:\enodeas\TaskManager> pytest -n auto
===================================================== test session starts =====================================================
platform win32 -- Python 3.10.9, pytest-8.3.5, pluggy-1.5.0
rootdir: C:\enodeas\TaskManager
plugins: bdd-8.1.0, mock-3.14.0, xdist-3.6.1, anyio-3.5.0
10 workers [13 items]     
.............                                                                                                            [100%]
=========================== 13 passed in 6.23s ===========================
  • Alternative: pytest-parallel is another option for parallel testing.
  • How pytest-parallel works: It parallelizes test execution using threads or processes.
  • Installation of pytest-parallel: pip install pytest-parallel
  • Usage of pytest-parallel:
    • Use the --workers option to specify the number of workers.
      pytest --workers auto # Automatically detect the number of workers
      pytest --workers 4 # Run pytest using 4 workers
  • Differences from xdist:
    • pytest-parallel primarily focuses on parallel execution within a single machine.
    • pytest-xdist is generally more robust and feature-rich.
  • Considerations:
    • Like pytest-xdist, test isolation is crucial.
    • Thread-based parallelism might be limited by the Global Interpreter Lock (GIL) in CPython for CPU-bound tasks. Process-based parallelism avoids this.

Best practices for parallel test execution:

  • Test Isolation: To ensure your tests are independent of each other, it’s crucial to avoid shared mutable state or dependencies that can cause conflicts. This practice also enhances test reliability and simplifies debugging (see the fixture sketch after this list).
  • Resource Management: Be mindful of resource usage, especially when dealing with databases, file systems, or network connections.
  • Test Ordering: Parallel execution can change the order in which tests are run. Avoid relying on specific test order.
  • Logging and Debugging: Parallel execution can make debugging more challenging. Use robust logging and debugging techniques.
  • Database and File System: When dealing with databases or file systems, use unique names or temporary resources to avoid conflicts.
  • Fixture Scopes: Carefully consider the scope of your pytest fixtures. Session-scoped fixtures can be problematic in parallel environments if they involve mutable state.
  • Plugin Compatibility: Ensure all pytest plugins that you are using are compatible with parallel execution.

Parallel execution is most beneficial for:

  • Large test suites that take a long time to run.
  • CPU-bound or I/O-bound tests that can benefit from parallel execution.
  • Continuous integration/continuous delivery (CI/CD) pipelines where speed is critical.
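
One way to keep database tests isolated under parallel execution is to give every test its own database file via pytest’s built-in tmp_path fixture, which is unique per test and therefore safe across xdist workers (a sketch; the fixture name isolated_db is an assumption):

Isolated Database Fixture for Parallel Runs
import pytest
from task_management.database import create_connection, create_table

@pytest.fixture
def isolated_db(tmp_path):
    """Each test gets its own database file, so parallel workers never collide."""
    conn = create_connection(str(tmp_path / "tasks.sqlite"))
    create_table(conn)
    yield conn
    conn.close()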

Why pytest-xdist is Preferred:

  • Active Maintenance: pytest-xdist is actively developed and maintained, ensuring compatibility with newer pytest versions.
  • Robustness: It’s generally more stable and reliable than pytest-parallel.
  • Features: pytest-xdist offers more advanced features, such as distributing tests across multiple machines.

PyTest Hooks

Hooks modify the behavior of test execution dynamically. For example, the pytest_runtest_setup hook below runs before each test:

PyTest Hook Example
# conftest.py
import pytest

def pytest_runtest_setup(item):
    print(f"Starting test: {item.name}")

Capturing Output with capsys

Pytest allows capturing and asserting console output with the built-in capsys fixture.

PyTest capsys Example
def test_output(capsys):
    """Tests if printed output is captured correctly."""
    print("Task Manager Initialized")
    captured = capsys.readouterr()
    assert "Task Manager Initialized" in captured.out

PyTest conftest.py

The conftest.py file (pytest conftest) is a crucial component for managing test configurations and sharing resources across multiple test files. Essentially, it serves as a local pytest plugin.

Key Functions and Purpose:

  • Sharing Fixtures:
    • One of the primary uses of pytest conftest(conftest.py) is to define fixtures that can be reused by multiple test functions within a directory or its subdirectories.
    • This eliminates the need to duplicate fixture code across various test files, promoting code reusability and maintainability.
  • Local Plugins:
    • pytest conftest files act as local plugins, allowing you to define hooks and fixtures that are specific to a particular directory and its test suite.
    • This enables you to customize pytest’s behavior based on the specific needs of different parts of your project.
  • Configuration:
    • It can be used to set up configurations that apply to the tests within its scope.
    • This can include setting up database connections, initializing test environments, and other setup tasks.
  • Hook Functions:
    • You can define pytest hook functions within pytest conftest to customize pytest’s behavior during the test execution process.
  • Directory Scoping:
    • pytest conftest(conftest.py) files have a directory-level scope. This means that fixtures and configurations defined in a conftest.py file are available to all test files within that directory and its subdirectories.
    • It is possible to have multiple conftest.py files in a project, each with its own scope. This allows for very granular control of test setup.

Key Concepts:

  • Fixtures:
    • Fixtures are functions that provide setup and teardown functionality for tests. They can return values that are used by test functions.
    • pytest conftest(conftest.py) is the ideal place to define fixtures that are used by multiple tests.
  • Hooks:
    • Hooks are functions that allow you to customize pytest’s behavior at various points during the test execution cycle.

How to Run Specific Tests in PyTest Tutorial

Pytest offers a variety of methods for running specific tests, providing flexibility and efficiency during development and debugging. This allows you to focus on particular areas of your codebase without executing the entire test suite.

To run all tests within a specific file, simply provide the filename as an argument to pytest. For instance, pytest test_task_manager.py will execute all tests defined in test_task_manager.py.

PyTest Run Specific Test by Filename
PS C:\enodeas\TaskManager\tests> pytest test_task_manager.py
======================== test session starts =======================
platform win32 -- Python 3.10.9, pytest-8.3.5, pluggy-1.5.0
rootdir: C:\enodeas\TaskManager\tests
plugins: bdd-8.1.0, mock-3.14.0, xdist-3.6.1, anyio-3.5.0
collected 7 items

test_task_manager.py .......                                                                                  [100%]

====================== 7 passed in 0.15s ========================= 

You can run a specific test function by providing its fully qualified name, including the module and class (if applicable). For example, pytest test_task_manager.py::test_add_task_db will run only the test_add_task_db function. If the test is inside a class, you would use pytest test_task_manager.py::TestTaskManager::test_complete_task_db.

Example PyTest Run Specific Test by Test Name
PS C:\enodeas\TaskManager\tests> pytest test_task_manager.py::test_add_task_db      
===================== test session starts =====================
platform win32 -- Python 3.10.9, pytest-8.3.5, pluggy-1.5.0
rootdir: C:\enodeas\TaskManager\tests
plugins: bdd-8.1.0, mock-3.14.0, xdist-3.6.1, anyio-3.5.0
collected 1 item

test_task_manager.py .                                                                                        [100%]

======================== 1 passed in 0.07s ========================

Pytest’s -k option allows you to run tests that match a given keyword expression. For example, pytest -k "add and db" will execute tests whose names contain both "add" and "db". This is helpful for running tests related to a specific feature or component.

PyTest Run Specific Test by Keyword Expression
PS C:\enodeas\TaskManager\tests> pytest -k "add and db"
========================= test session starts =========================
platform win32 -- Python 3.10.9, pytest-8.3.5, pluggy-1.5.0
rootdir: C:\enodeas\TaskManager\tests
plugins: bdd-8.1.0, mock-3.14.0, xdist-3.6.1, anyio-3.5.0
collected 13 items / 11 deselected / 2 selected

test_database.py .                                                                                            [ 50%]
test_task_manager.py .                                                                                        [100%]

===================== 2 passed, 11 deselected in 0.05s ===================== 

Markers are custom labels that you can apply to tests. The -m option allows you to run tests with a specific marker. For instance, if you have tests marked with @pytest.mark.database, you can run them using pytest -m database. This is useful for categorizing tests based on their functionality or environment.
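
As a sketch of how such a marker might be applied (the marker name matches the -m database run below; the registration lines are assumptions):

PyTest Marker Example
import pytest

# Mark database-related tests so they can be selected with: pytest -m database
@pytest.mark.database
def test_marker_demo():
    assert True

# Register custom markers in pytest.ini to silence "unknown mark" warnings:
# [pytest]
# markers =
#     database: tests that touch the SQLite database
#     slow: tests that take a long time to run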

PyTest Run Specific Test by Marker
PS C:\enodeas\TaskManager\tests> pytest -m database    
================== test session starts ===================
platform win32 -- Python 3.10.9, pytest-8.3.5, pluggy-1.5.0
rootdir: C:\enodeas\TaskManager\tests
plugins: bdd-8.1.0, mock-3.14.0, xdist-3.6.1, anyio-3.5.0
collected 13 items / 11 deselected / 2 selected

test_database.py ..                                                                                           [100%] 

=============== 2 passed, 11 deselected, 2 warnings in 0.08s =============== 

PyTest Run Specific Test by a Directory:

To run all tests within a specific directory and its subdirectories, simply provide the directory name as an argument to pytest. For example, pytest tests/ will execute all tests within the tests directory.

Run Specific Test by a Directory PyTest
PS C:\enodeas\TaskManager> pytest tests/     
======================= test session starts ======================
platform win32 -- Python 3.10.9, pytest-8.3.5, pluggy-1.5.0
rootdir: C:\enodeas\TaskManager
plugins: bdd-8.1.0, mock-3.14.0, xdist-3.6.1, anyio-3.5.0
collected 13 items

tests\test_database.py ......                                                                                 [ 46%]
tests\test_task_manager.py .......                                                                            [100%]

===================== 13 passed, 2 warnings in 0.24s ===================== 

PyTest Plugins for PyTest Tutorial

Pytest’s extensibility is one of its most powerful features, largely thanks to its robust plugin system. Plugins allow you to customize and enhance pytest’s behavior, adding functionality for everything from code coverage and test parallelization to integration with external tools and frameworks.

  • Hook Functions: Pytest defines a set of “hook” functions that are called at various points during the test lifecycle (e.g., test collection, test execution, reporting). Plugins can implement these hook functions to modify pytest’s behavior.  
  • Plugin Discovery: Pytest automatically discovers and loads plugins from various sources, including:
    • Built-in plugins: These provide core pytest functionality.  
    • External plugins: Installed via pip.
    • conftest.py files: Local plugins defined within your project.
  • Reporting Pytest Plugins:
    • pytest-cov: Measures code coverage.
    • pytest-html: Generates HTML test reports.
    • pytest-sugar: Improves the default pytest output.
  • Execution PyTest Plugins:
    • pytest-xdist: Distributes tests across multiple CPUs or machines for parallel execution.
    • pytest-ordering: Allows you to specify the order in which tests are run.
  • Integration Plugins:
    • pytest-django: Integrates pytest with Django.
    • pytest-flask: Integrates pytest with Flask.
    • pytest-selenium: Integrates pytest with Selenium for browser testing.
  • Assertion Rewriting:
    • Pytest’s built-in assertion rewriting already provides informative error messages for plain assert statements.
  • Mocking Plugins:
    • pytest-mock: provides a wrapper around the unittest.mock library.

Using a plugin typically involves:

  1. Installation: Install the plugin using pip: pip install pytest-plugin-name.
  2. Configuration: Many plugins can be configured through command-line options or pytest.ini files.
  3. Usage: Once installed, the plugin’s functionality is automatically integrated into pytest.

Benefits of the plugin ecosystem:

  • Extensibility: Plugins allow you to tailor pytest to your specific needs.
  • Reusability: Plugins can be shared and reused across projects.
  • Community Support: A large and active community contributes to a wide range of pytest plugins.
  • Improved Workflow: Plugins can streamline your testing workflow and provide valuable insights.
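
For example, to measure code coverage of the task_management package with pytest-cov (assuming the project layout shown earlier):

PyTest Coverage Example
pip install pytest-cov
pytest --cov=task_management tests/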

PyTest Skip: Controlling Test Execution

Pytest’s skip functionality provides a clean way to control which tests are executed based on specific conditions. This is invaluable for tests that are not applicable in certain environments, require specific dependencies, or are temporarily disabled.

The @pytest.mark.skip decorator unconditionally skips a test. You can provide a reason for the skip, which will be displayed in the test report.

@pytest.mark.skip
import pytest
@pytest.mark.skip(reason="This test is temporarily disabled.")
def test_skipped_function():
    assert 1 == 2

The @pytest.mark.skipif decorator conditionally skips a test based on a given condition. The condition is evaluated as a Python expression. This is useful for skipping tests that require specific dependencies or are platform-dependent.

@pytest.mark.skipif
import pytest
import sys

@pytest.mark.skipif(sys.version_info < (3, 7), reason="Required Python version 3.7 or higher.")
def test_python_version():
    assert True

You can also call pytest.skip(reason) directly within a test function to skip it dynamically. This is useful when the skip decision depends on runtime conditions.

pytest.skip Example
import pytest

def check_condition():
    return False
    
def test_conditional_skip():
    if not check_condition():
        pytest.skip("Skipping because check_condition is False.")
    assert True
Common use cases for skipping:

  • Platform-Specific Tests: Skip tests that are not applicable to the current operating system.
  • Dependency Management: Skip tests that require specific libraries or modules that are not installed (see the sketch after this list).
  • Feature Flags: Skip tests that are related to features that are not yet enabled.
  • Temporary Disabling: Skip tests that are known to be failing or are under development.

Benefits of skipping:

  • Improved Test Suite Management: Skip tests that are not relevant to the current environment or situation.
  • Clear Test Reports: Skipped tests are clearly marked in the test report, making them easy to identify.
  • Dynamic Test Execution: Skip tests based on runtime conditions.
  • Avoid False Failures: Prevents unnecessary test failures when tests are not applicable.
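
For the dependency-management case, pytest also ships a helper, pytest.importorskip, which skips the test at runtime when a module is missing (the module name below is just an example):

pytest.importorskip Example
import pytest

def test_optional_dependency():
    # Skips this test (rather than failing it) if requests is not installed
    requests = pytest.importorskip("requests")
    assert requests.__name__ == "requests"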

Conclusion Pytest Tutorial

By testing our Task Management System, we demonstrated the power of Pytest, including fixtures, parameterization, mocking, parallel execution, and custom hooks. These features ensure our application is robust and reliable.

I hope this PyTest Tutorial taught you how to use PyTest in Python. Start using PyTest today to make your Python testing more reliable!
