Pytest: Complete Guide to Testing in Python
Introduction to Pytest
Testing is a critically important part of software development. Without quality tests, it is difficult to guarantee code reliability and stable operation when changes are introduced. Pytest is a modern, powerful, and user‑friendly framework for automated testing in Python that greatly simplifies writing and executing tests.
Pytest has gained popularity among developers thanks to its concise syntax, rich extensibility, and robust fixture system. It significantly outperforms the standard unittest module in terms of ease of use and test development speed.
Installing and Configuring Pytest
Installation via pip
```bash
pip install pytest
```
Verify Installation
```bash
pytest --version
```
First Simple Test
Create a file `test_example.py`:
```python
def test_addition():
    assert 2 + 2 == 4

def test_string_operations():
    text = "Hello, World!"
    assert "Hello" in text
    assert len(text) == 13
```
Running Tests
```bash
pytest
```
Pytest automatically discovers all test files and functions that follow naming conventions.
Test Structure and Organization
Naming Conventions
Pytest uses the following conventions for automatic test discovery:
- Test files: `test_*.py` or `*_test.py`
- Test functions: `def test_...()`
- Test classes: `class Test...` (without an `__init__` method)
- Test methods in classes: `def test_...()`
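For example, a single file can exercise all of these conventions at once (the file name `test_shapes.py` and the `area` helper below are illustrative):

```python
# test_shapes.py  (hypothetical file name matching the test_*.py pattern)

def area(w, h):
    # Helper under test; in a real project this would live in src/
    return w * h

def test_area():                  # collected: function name starts with test_
    assert area(3, 4) == 12

class TestArea:                   # collected: class name starts with Test, no __init__
    def test_zero_width(self):
        assert area(0, 10) == 0
```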
Test Directory Layout
```text
project/
├── src/
│   ├── calculator.py
│   └── utils.py
├── tests/
│   ├── test_calculator.py
│   ├── test_utils.py
│   └── conftest.py
└── pytest.ini
```
Running Tests with Various Options
```bash
# Run all tests
pytest

# Run tests in a specific directory
pytest tests/

# Run a single file
pytest test_calculator.py

# Run a specific test
pytest test_calculator.py::test_add

# Run tests in a class
pytest test_calculator.py::TestCalculator

# Run a specific method in a class
pytest test_calculator.py::TestCalculator::test_multiply
```
Basics of Writing Tests
Using assert for Checks
Pytest uses Python’s built‑in assert statement for condition verification:
```python
def test_basic_assertions():
    # Equality check
    assert 2 + 2 == 4

    # Inequality check
    assert 3 * 3 != 8

    # Membership check
    assert "Python" in "Python testing"

    # Type check
    assert isinstance(42, int)

    # Truthiness check
    assert bool([1, 2, 3])

    # Falsiness check
    assert not bool([])
```
Testing Exceptions
```python
import pytest

def divide(a, b):
    if b == 0:
        raise ValueError("Division by zero is not allowed")
    return a / b

def test_division_by_zero():
    with pytest.raises(ValueError):
        divide(10, 0)

def test_exception_message():
    with pytest.raises(ValueError, match="Division by zero"):
        divide(10, 0)

def test_exception_type():
    with pytest.raises(ZeroDivisionError):
        1 / 0
```
Organizing Tests in Classes
```python
class TestCalculator:
    def test_addition(self):
        assert 2 + 3 == 5

    def test_subtraction(self):
        assert 5 - 3 == 2

    def test_multiplication(self):
        assert 3 * 4 == 12

    def test_division(self):
        assert 10 / 2 == 5
```
Fixtures
Fixture Basics
Fixtures allow you to prepare data, create connections, and clean up resources before and after tests:
```python
import pytest

@pytest.fixture
def sample_user():
    return {
        "name": "Ivan Petrov",
        "email": "ivan@example.com",
        "age": 30
    }

@pytest.fixture
def sample_list():
    return [1, 2, 3, 4, 5]

def test_user_data(sample_user):
    assert sample_user["name"] == "Ivan Petrov"
    assert sample_user["age"] > 18

def test_list_operations(sample_list):
    assert len(sample_list) == 5
    assert sum(sample_list) == 15
```
Fixture Scopes
```python
@pytest.fixture(scope="function")  # default
def function_fixture():
    return "function level"

@pytest.fixture(scope="module")
def module_fixture():
    return "module level"

@pytest.fixture(scope="class")
def class_fixture():
    return "class level"

@pytest.fixture(scope="session")
def session_fixture():
    return "session level"
```
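Conceptually, the scope controls how long pytest caches the fixture's return value: a wider scope means fewer setups. A simplified stdlib model of that caching (the names here are illustrative, not pytest internals):

```python
# Simplified model of fixture scoping: pytest computes a fixture once per
# scope "instance" and caches the result until that scope ends.
cache = {}
setup_calls = 0

def expensive_setup():
    global setup_calls
    setup_calls += 1
    return "db connection"

def get_fixture(name, factory, scope_instance):
    # One cached value per (fixture name, scope instance), like pytest's cache
    key = (name, scope_instance)
    if key not in cache:
        cache[key] = factory()
    return cache[key]

# Two tests in the same module reuse a module-scoped fixture's value:
a = get_fixture("db", expensive_setup, "module: tests/test_users.py")
b = get_fixture("db", expensive_setup, "module: tests/test_users.py")
assert setup_calls == 1 and a is b
```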
Fixtures with Finalization
```python
@pytest.fixture
def database_connection():
    # Setup (create_connection is a placeholder for your DB client)
    connection = create_connection()
    print("Database connection established")
    yield connection
    # Teardown
    connection.close()
    print("Database connection closed")

@pytest.fixture
def temp_file():
    import os
    import tempfile

    # Create a temporary file
    fd, path = tempfile.mkstemp()
    yield path
    # Cleanup
    os.close(fd)
    os.unlink(path)
```
Test Parametrization
Basic Parametrization
```python
@pytest.mark.parametrize("a, b, expected", [
    (2, 3, 5),
    (10, 20, 30),
    (-1, 1, 0),
    (0, 0, 0)
])
def test_addition(a, b, expected):
    assert a + b == expected

@pytest.mark.parametrize("input_value, expected", [
    ("hello", "HELLO"),
    ("world", "WORLD"),
    ("Python", "PYTHON")
])
def test_string_upper(input_value, expected):
    assert input_value.upper() == expected
```
Multiple Parametrization
```python
@pytest.mark.parametrize("x", [1, 2, 3])
@pytest.mark.parametrize("y", [4, 5, 6])
def test_multiply(x, y):
    assert x * y == y * x
```
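Stacked parametrize decorators run the test once for every combination of values, here 3 × 3 = 9 runs. The resulting cross product can be previewed with the standard library (a sketch of the combinations, not pytest's internal mechanism):

```python
from itertools import product

xs = [1, 2, 3]
ys = [4, 5, 6]

# One test invocation per (y, x) combination, like stacked @parametrize
cases = list(product(ys, xs))
assert len(cases) == 9
assert (4, 1) in cases
```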
Parametrization with Named IDs
```python
@pytest.mark.parametrize("test_input, expected", [
    pytest.param("3+5", 8, id="simple_addition"),
    pytest.param("2*4", 8, id="multiplication"),
    pytest.param("10-2", 8, id="subtraction")
])
def test_eval(test_input, expected):
    assert eval(test_input) == expected
```
Markers and Test Grouping
Creating Custom Markers
```python
@pytest.mark.slow
def test_slow_operation():
    import time
    time.sleep(2)
    assert True

@pytest.mark.database
def test_database_operation():
    assert True

@pytest.mark.api
def test_api_call():
    assert True
```
Configuring Markers in pytest.ini
```ini
[pytest]
markers =
    slow: marks tests as slow
    database: marks tests as database tests
    api: marks tests as API tests
    integration: marks tests as integration tests
```
Running Tests by Markers
```bash
# Run only slow tests
pytest -m slow

# Run database tests
pytest -m database

# Combine markers
pytest -m "slow and database"
pytest -m "slow or api"
pytest -m "not slow"
```
Skipping Tests and Expecting Failures
Unconditional Skip
```python
@pytest.mark.skip(reason="Feature not implemented yet")
def test_future_feature():
    assert False

@pytest.mark.skip
def test_temporary_disabled():
    assert True
```
Conditional Skip
```python
import sys

@pytest.mark.skipif(sys.version_info < (3, 8), reason="Requires Python 3.8+")
def test_walrus_operator():
    if (n := 42) > 40:
        assert n == 42

@pytest.mark.skipif(not hasattr(sys, "gettrace"), reason="No trace function")
def test_debug_mode():
    assert True
```
Expecting Failures
```python
@pytest.mark.xfail
def test_known_bug():
    assert 1 == 2  # Known bug

@pytest.mark.xfail(reason="Bug #123: Division returns wrong result")
def test_division_bug():
    assert 10 / 3 == 3.333333333333333

@pytest.mark.xfail(sys.platform == "win32", reason="Bug on Windows")
def test_windows_specific():
    assert True
```
Working with Temporary Files and Directories
Built‑in File Fixtures
```python
def test_create_file(tmp_path):
    # tmp_path is a pathlib.Path object
    test_file = tmp_path / "test.txt"
    test_file.write_text("Hello, World!")
    assert test_file.read_text() == "Hello, World!"
    assert test_file.exists()

def test_create_directory(tmp_path):
    new_dir = tmp_path / "subdir"
    new_dir.mkdir()
    test_file = new_dir / "nested.txt"
    test_file.write_text("Nested content")
    assert test_file.exists()
    assert test_file.parent == new_dir

def test_temporary_directory(tmpdir):
    # tmpdir is a py.path.local object (legacy)
    test_file = tmpdir.join("legacy.txt")
    test_file.write("Legacy content")
    assert test_file.read() == "Legacy content"
```
Mock Objects and Monkeypatch
Using monkeypatch
```python
import sys

def get_user_id():
    return 42

def get_username():
    return "admin"

def test_monkeypatch_function(monkeypatch):
    # Patch the function on this module so the call below sees the stub
    monkeypatch.setattr(sys.modules[__name__], "get_user_id", lambda: 123)
    assert get_user_id() == 123

def test_monkeypatch_environment(monkeypatch):
    import os
    monkeypatch.setenv("TEST_VAR", "test_value")
    assert os.environ["TEST_VAR"] == "test_value"

def test_monkeypatch_delete_attr(monkeypatch):
    monkeypatch.delattr("sys.platform")
    assert not hasattr(sys, "platform")
```
Mocking with pytest-mock
```python
# pip install pytest-mock

def external_api_call():
    import requests
    response = requests.get("https://api.example.com/data")
    return response.json()

def test_api_call_mock(mocker):
    mock_response = mocker.Mock()
    mock_response.json.return_value = {"status": "success"}
    mocker.patch("requests.get", return_value=mock_response)
    result = external_api_call()
    assert result == {"status": "success"}
```
Pytest Configuration
pytest.ini File
```ini
[pytest]
# Additional command-line options
addopts = -v --tb=short --strict-markers

# Paths to search for tests
testpaths = tests

# Patterns for test files
python_files = test_*.py *_test.py

# Patterns for test functions
python_functions = test_*

# Patterns for test classes
python_classes = Test*

# Minimum pytest version
minversion = 6.0

# Markers
markers =
    slow: marks tests as slow
    integration: marks tests as integration tests
    unit: marks tests as unit tests

# Warnings
filterwarnings =
    ignore::UserWarning
    ignore::DeprecationWarning
```
Configuration via pyproject.toml
```toml
[tool.pytest.ini_options]
addopts = "-v --tb=short"
testpaths = ["tests"]
python_files = ["test_*.py", "*_test.py"]
python_functions = ["test_*"]
python_classes = ["Test*"]
markers = [
    "slow: marks tests as slow",
    "integration: marks tests as integration tests",
    "unit: marks tests as unit tests"
]
```
Plugins and Extensions
Popular Plugins
```bash
# Code coverage
pip install pytest-cov

# Django integration
pip install pytest-django

# Asynchronous testing
pip install pytest-asyncio

# Mocking
pip install pytest-mock

# Parallel execution
pip install pytest-xdist

# HTML reports
pip install pytest-html
```
Using pytest-cov
```bash
# Run with coverage
pytest --cov=myproject

# Generate an HTML report
pytest --cov=myproject --cov-report=html

# Show uncovered lines in the terminal
pytest --cov=myproject --cov-report=term-missing
```
Coverage Configuration in .coveragerc
```ini
[run]
source = .
omit =
    */tests/*
    */venv/*
    setup.py

[report]
exclude_lines =
    pragma: no cover
    def __repr__
    raise AssertionError
    raise NotImplementedError
```
Debugging and Analyzing Tests
Command‑Line Options for Debugging
```bash
# Verbose output
pytest -v

# Very verbose output
pytest -vv

# Show stdout
pytest -s

# Stop after first failure
pytest --maxfail=1

# Short traceback
pytest --tb=short

# One-line traceback per failure
pytest --tb=line

# No traceback
pytest --tb=no

# Long traceback with verbose output
pytest --tb=long -v
```
Using pdb for Debugging
```python
def test_debug_example():
    x = 10
    y = 20
    import pdb; pdb.set_trace()  # Breakpoint
    result = x + y
    assert result == 30
```

```bash
# Run with automatic pdb on failure
pytest --pdb

# Run with pdb on the first failure only
pytest --pdb --maxfail=1
```
CI/CD Integration
GitHub Actions
```yaml
name: Tests

on: [push, pull_request]

jobs:
  test:
    runs-on: ubuntu-latest
    strategy:
      matrix:
        python-version: ['3.8', '3.9', '3.10', '3.11']
    steps:
      - uses: actions/checkout@v3
      - name: Set up Python ${{ matrix.python-version }}
        uses: actions/setup-python@v4
        with:
          python-version: ${{ matrix.python-version }}
      - name: Install dependencies
        run: |
          python -m pip install --upgrade pip
          pip install pytest pytest-cov
          pip install -r requirements.txt
      - name: Run tests
        run: |
          pytest --cov=./ --cov-report=xml
      - name: Upload coverage to Codecov
        uses: codecov/codecov-action@v3
```
GitLab CI
```yaml
stages:
  - test

test:
  stage: test
  image: python:3.11
  script:
    - pip install pytest pytest-cov
    - pip install -r requirements.txt
    - pytest --cov=./ --cov-report=xml --junitxml=report.xml
  artifacts:
    reports:
      junit: report.xml
      coverage_report:
        coverage_format: cobertura
        path: coverage.xml
```
Advanced Features
Automatic Fixture Usage
```python
@pytest.fixture(autouse=True)
def setup_teardown():
    print("Setup before each test")
    yield
    print("Teardown after each test")

def test_example1():
    assert True

def test_example2():
    assert True
```
Fixture Factories
```python
@pytest.fixture
def user_factory():
    def _create_user(name="Test User", email="test@example.com"):
        return {"name": name, "email": email}
    return _create_user

def test_user_creation(user_factory):
    user1 = user_factory()
    user2 = user_factory(name="John", email="john@example.com")
    assert user1["name"] == "Test User"
    assert user2["name"] == "John"
```
Parametrizing Fixtures
```python
@pytest.fixture(params=["sqlite", "postgresql", "mysql"])
def database_url(request):
    if request.param == "sqlite":
        return "sqlite:///test.db"
    elif request.param == "postgresql":
        return "postgresql://user:pass@localhost/test"
    elif request.param == "mysql":
        return "mysql://user:pass@localhost/test"

def test_database_connection(database_url):
    assert database_url.startswith(("sqlite", "postgresql", "mysql"))
```
Key Pytest Methods and Functions
| Function/Method | Description | Example Usage |
|---|---|---|
| `pytest.main()` | Programmatic test runner | `pytest.main(['-v', 'tests/'])` |
| `pytest.fixture()` | Decorator for creating fixtures | `@pytest.fixture` on `def data(): return [1, 2, 3]` |
| `pytest.mark.parametrize()` | Test parametrization | `@pytest.mark.parametrize("x", [1, 2, 3])` |
| `pytest.mark.skip()` | Skip a test | `@pytest.mark.skip(reason="Not ready")` |
| `pytest.mark.skipif()` | Conditional skip | `@pytest.mark.skipif(sys.platform == "win32", reason="POSIX only")` |
| `pytest.mark.xfail()` | Expect failure | `@pytest.mark.xfail(reason="Known bug")` |
| `pytest.raises()` | Exception checking | `with pytest.raises(ValueError): func()` |
| `pytest.warns()` | Warning checking | `with pytest.warns(UserWarning): func()` |
| `pytest.deprecated_call()` | Deprecated call checking | `with pytest.deprecated_call(): old_func()` |
| `pytest.approx()` | Approximate comparison | `assert 0.1 + 0.2 == pytest.approx(0.3)` |
| `pytest.fail()` | Force test failure | `pytest.fail("Test failed for reason")` |
| `pytest.skip()` | Skip a test from inside the function | `if condition: pytest.skip("Skipping")` |
| `pytest.importorskip()` | Skip if a module is missing | `np = pytest.importorskip("numpy")` |
| `pytest.register_assert_rewrite()` | Register assert rewriting for a module | `pytest.register_assert_rewrite("mymodule")` |
| `pytest.exit()` | Exit pytest | `pytest.exit("Exiting tests")` |
Built‑in Fixtures
| Fixture | Scope | Description |
|---|---|---|
| `tmp_path` | function | Temporary directory (`pathlib.Path`) |
| `tmpdir` | function | Temporary directory (`py.path.local`, legacy) |
| `tmp_path_factory` | session | Factory for temporary directories |
| `tmpdir_factory` | session | Legacy factory for temporary directories |
| `monkeypatch` | function | Patch objects and environment variables |
| `capsys` | function | Capture stdout/stderr |
| `capsysbinary` | function | Capture stdout/stderr in binary mode |
| `capfd` | function | Capture file descriptors |
| `capfdbinary` | function | Capture file descriptors in binary mode |
| `caplog` | function | Capture log output |
| `request` | function | Information about the requesting test |
| `pytestconfig` | session | Pytest configuration object |
| `record_property` | function | Record properties for reports |
| `record_testsuite_property` | session | Record properties for the test suite |
| `recwarn` | function | Capture warnings |
Comparison with Other Frameworks
| Characteristic | Pytest | unittest | nose2 | doctest |
|---|---|---|---|---|
| Syntax | Simple | Verbose | Medium | Built‑in |
| Fixtures | Powerful | Limited | Available | None |
| Parametrization | Built‑in | None | Available | None |
| Plugins | Many | Few | Medium | None |
| Auto‑discovery | Excellent | Good | Good | Limited |
| Reports | Detailed | Basic | Good | Simple |
| Performance | High | Medium | High | Low |
| Learning Curve | Gentle | Steep | Medium | Easy |
Frequently Asked Questions
What is Pytest and how does it differ from unittest?
Pytest is a modern Python testing framework that offers a simpler syntax, powerful fixtures, and a rich ecosystem of plugins compared to the standard unittest module.
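The difference is easiest to see side by side; here is the same check written both ways (the test names are illustrative):

```python
import unittest

# unittest: a TestCase subclass and camelCase assert methods are required
class TestUpperUnittest(unittest.TestCase):
    def test_upper(self):
        self.assertEqual("pytest".upper(), "PYTEST")

# pytest: a plain function and a bare assert are enough
def test_upper_pytest():
    assert "pytest".upper() == "PYTEST"
```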
How can I run only specific tests?
Use filtering options such as `pytest test_file.py::test_function`, `pytest -k "test_name"`, or `pytest -m "marker_name"`.
What are fixtures and why do I need them?
Fixtures are functions that provide test data, configurations, or resources. They promote reuse of setup code and keep tests clean.
How does automatic test discovery work?
Pytest automatically finds files matching `test_*.py` or `*_test.py`, and within them collects functions starting with `test_` and classes starting with `Test`.
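The file-name patterns are ordinary glob patterns, so you can check whether a name would be picked up with the standard library (a rough illustration of the matching, not pytest's actual collector):

```python
from fnmatch import fnmatch

FILE_PATTERNS = ["test_*.py", "*_test.py"]  # pytest's default python_files

def would_collect(filename):
    # True if the file name matches any discovery pattern
    return any(fnmatch(filename, p) for p in FILE_PATTERNS)

assert would_collect("test_calculator.py")
assert would_collect("calculator_test.py")
assert not would_collect("calculator.py")
```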
Can I run tests in parallel?
Yes, use the pytest-xdist plugin: `pip install pytest-xdist`, then `pytest -n auto` to auto-detect the number of processes.
How do I set up code coverage?
Install pytest-cov and run `pytest --cov=myproject --cov-report=html` to generate a coverage report.
Does Pytest support asynchronous tests?
Yes, via the pytest-asyncio plugin. Install it and use the `@pytest.mark.asyncio` decorator on async test functions.
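Under the hood the plugin drives the coroutine test on an event loop; roughly what it does can be sketched with the standard library alone (`fetch_value` is a stand-in for real async code):

```python
import asyncio

async def fetch_value():
    await asyncio.sleep(0)   # stand-in for real async I/O
    return 42

async def test_fetch_value():
    # With pytest-asyncio this function would carry @pytest.mark.asyncio
    assert await fetch_value() == 42

# Roughly what the plugin does: run the coroutine test on an event loop
asyncio.run(test_fetch_value())
```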
Where should I store shared fixtures?
Place common fixtures in a `conftest.py` file at the root of the test directory. Pytest automatically loads them for all tests.
How do I integrate Pytest with Django?
Use the pytest-django plugin: `pip install pytest-django`, and add a pytest.ini with the `DJANGO_SETTINGS_MODULE` setting.
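A minimal pytest.ini for pytest-django might look like this (`myproject.settings` is a placeholder for your own settings module):

```ini
[pytest]
DJANGO_SETTINGS_MODULE = myproject.settings
python_files = tests.py test_*.py *_tests.py
```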
Can I use Pytest for API testing?
Absolutely. Pytest works well for API testing. Use libraries like requests or httpx together with fixtures to create test data.
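As a sketch of the pattern, a fixture can supply a client object and the test asserts on the response; `FakeClient` below is a hypothetical stand-in for a real requests or httpx session:

```python
# Sketch of an API test; FakeClient stands in for a requests/httpx session.
class FakeClient:
    def get(self, path):
        # A real client would perform an HTTP request here
        return {"status": 200, "json": {"id": 1, "name": "Widget"}}

def client():
    # In a real suite this would be a @pytest.fixture returning a session
    return FakeClient()

def test_get_product():
    response = client().get("/products/1")
    assert response["status"] == 200
    assert response["json"]["name"] == "Widget"
```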
Pytest is a powerful and flexible testing tool that greatly simplifies writing and maintaining tests. Its extensive plugin ecosystem and ease of use make it the optimal choice for most Python projects.