Coverage.py - Checking Code Coverage with Tests

What Is Coverage.py

Coverage.py is a powerful code‑coverage analysis tool for Python that enables developers to identify which parts of their code were executed during testing and which remained untouched. This tool is an essential component of the code‑quality assurance process and helps uncover potential gaps in test coverage.

The primary goal of Coverage.py is to provide detailed insight into how comprehensively your tests cover the written code. It monitors program execution and generates reports that show coverage percentages for each file, module, or even individual lines of code.

Key Features of Coverage.py

Coverage.py offers a wide range of capabilities for measuring code coverage:

Line coverage measurement – the core feature that tracks which lines of code were executed during tests.

Branch analysis – an advanced feature that verifies whether all possible execution paths in conditional statements (if, elif, else) were exercised.

Report generation – creates detailed reports in multiple formats: text, HTML, XML, JSON.

Framework integration – seamless compatibility with popular testing frameworks such as pytest and unittest, as well as the legacy nose.

Exclusion configuration – allows you to omit specific files, directories, or lines from coverage analysis.

Installation and Initial Setup

Standard Installation

Install Coverage.py using pip:

pip install coverage

Installation with Additional Features

If you work with pytest, it’s recommended to install the pytest‑cov plugin:

pip install pytest-cov

For TOML‑based configuration support (only needed on Python versions before 3.11, which lack the built‑in tomllib module):

pip install coverage[toml]

Verify Installation

After installing, confirm that Coverage.py is functioning correctly:

coverage --version

Core Commands and Usage

Run Coverage Analysis

Basic command to run coverage for a Python script:

coverage run my_script.py

Run with additional source specification:

coverage run --source=my_package my_script.py

Run with unittest Module

coverage run -m unittest discover

Run with pytest

coverage run -m pytest

Or use the pytest‑cov integration:

pytest --cov=my_module tests/

View Coverage Report

Text Report in Terminal

coverage report

Sample output:

Name                 Stmts   Miss  Cover
----------------------------------------
my_module.py            25      3    88%
utils.py                15      0   100%
main.py                 20      5    75%
----------------------------------------
TOTAL                   60      8    87%

Detailed Report with Missing Lines

coverage report --show-missing

HTML Report

coverage html

After execution, open htmlcov/index.html in a browser to explore the interactive report.

XML Report for CI/CD

coverage xml

JSON Report

coverage json

Manage Coverage Data

Erase Data

coverage erase

Combine Data

coverage combine

Summary Table of Coverage.py Commands

Command            Description                                Key Options
-----------------  -----------------------------------------  -------------------------------------------------
coverage run       Execute program while measuring coverage   --source, --omit, --include, --branch
coverage report    Generate a text coverage report            --show-missing, --skip-covered, --include, --omit
coverage html      Create an HTML coverage report             --directory, --include, --omit, --title
coverage xml       Produce an XML coverage report             --include, --omit, -o
coverage json      Produce a JSON coverage report             --include, --omit, -o
coverage erase     Delete stored coverage data                None
coverage combine   Merge multiple coverage data sets          --append, --keep
coverage annotate  Generate annotated source files            --include, --omit, --directory
coverage debug     Show debugging information                 data, sys, config

Coverage.py Configuration

.coveragerc File

Create a .coveragerc file at the project root to customize Coverage.py behavior:

[run]
# Enable branch analysis
branch = True

# Define source packages
source = my_package

# Exclude files and directories
omit = 
    */tests/*
    */migrations/*
    */__init__.py
    */venv/*
    */env/*
    setup.py

# Parallel execution support
parallel = True

[report]
# Show missing lines
show_missing = True

# Do not skip fully covered files
skip_covered = False

# Fail if coverage falls below this percent
fail_under = 80

# Decimal precision
precision = 2

# Lines to exclude from coverage
exclude_lines =
    pragma: no cover
    def __repr__
    if self.debug:
    if settings.DEBUG
    raise AssertionError
    raise NotImplementedError
    if 0:
    if __name__ == .__main__.:
    class .*\bProtocol\):
    @(abc\.)?abstractmethod

[html]
# Output directory for HTML report
directory = htmlcov

# Report title
title = Coverage Report

[xml]
# Output file for XML report
output = coverage.xml

Configuration via pyproject.toml

Modern projects often use pyproject.toml for configuration:

[tool.coverage.run]
branch = true
source = ["my_package"]
omit = [
    "*/tests/*",
    "*/migrations/*",
    "*/__init__.py",
]

[tool.coverage.report]
show_missing = true
skip_covered = false
fail_under = 80
precision = 2
exclude_lines = [
    "pragma: no cover",
    "def __repr__",
    "if self.debug:",
    "if settings.DEBUG",
    "raise AssertionError",
    "raise NotImplementedError",
    "if 0:",
    "if __name__ == .__main__.:",
]

[tool.coverage.html]
directory = "htmlcov"
title = "Coverage Report"

[tool.coverage.xml]
output = "coverage.xml"

Branch Coverage Analysis

Understanding Branch Coverage

Branch coverage is a more sophisticated metric that checks whether every possible path in conditional statements has been executed.

Enabling Branch Coverage

In the configuration file:

[run]
branch = True

Or via the command line:

coverage run --branch my_script.py

Branch Coverage Example

def check_number(x):
    if x > 0:
        return "positive"
    elif x < 0:
        return "negative"
    else:
        return "zero"

With this three‑way conditional, even plain line coverage flags the elif and else branches when you test only a positive value, because each branch contains its own return. Branch coverage becomes essential for conditionals without an else clause, where every line can execute while one exit of the if is never taken.
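The following self‑contained sketch (hypothetical clamp_negative function and test name) shows a conditional whose lines are all executed while one branch exit is never taken:

```python
# demo.py - a function whose untaken branch is easy to miss
def clamp_negative(x):
    # With `coverage run --branch`, both exits of this `if` are tracked.
    if x < 0:
        x = 0
    return x

# test_demo.py - exercises only the True branch of the `if`
def test_clamp_negative():
    assert clamp_negative(-5) == 0

# Running `coverage run --branch -m pytest` and then `coverage report`
# would show every line as executed, yet mark the `if` as a partial
# branch: the x >= 0 path is never taken.
```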

Integration with Testing Frameworks

Using pytest

Basic Usage

pytest --cov=my_package

Generate HTML Report

pytest --cov=my_package --cov-report html

Enforce Minimum Coverage Threshold

pytest --cov=my_package --cov-fail-under=90

Combine Multiple Report Types

pytest --cov=my_package --cov-report html --cov-report term --cov-report xml

Using unittest

coverage run -m unittest discover
coverage report
coverage html

Using nose

coverage run -m nose
coverage report

Excluding Code from Coverage

Pragma Comments

def debug_function():
    print("Debug information")  # pragma: no cover

def main():
    if DEBUG:  # pragma: no cover
        debug_function()
    # rest of the code

Excluding Whole Blocks

def development_only_function():  # pragma: no cover
    pass

class DebugClass:  # pragma: no cover
    def __init__(self):
        pass

Configuration‑Based Exclusions

In .coveragerc:

[run]
omit = 
    */tests/*
    */debug.py
    */development/*

Working with Reports

Understanding HTML Reports

The HTML report provides the most granular view of coverage:

  • Green lines – executed during testing
  • Red lines – never executed
  • Yellow lines – partially executed (branch coverage)

Coverage Metrics Explained

  • Statements – total number of executable lines
  • Missing – lines that were not executed
  • Coverage – percentage of covered statements
  • Branch – branch‑coverage details

CI/CD Integration

GitHub Actions

name: Tests with Coverage

on: [push, pull_request]

jobs:
  test:
    runs-on: ubuntu-latest
    steps:
    - uses: actions/checkout@v2
    
    - name: Set up Python
      uses: actions/setup-python@v2
      with:
        python-version: '3.9'
    
    - name: Install dependencies
      run: |
        pip install -r requirements.txt
        pip install coverage
    
    - name: Run tests with coverage
      run: |
        coverage run -m pytest
        coverage report
        coverage xml
    
    - name: Upload coverage to Codecov
      uses: codecov/codecov-action@v1
      with:
        file: ./coverage.xml

GitLab CI

test:
  stage: test
  script:
    - pip install -r requirements.txt
    - pip install coverage
    - coverage run -m pytest
    - coverage report
    - coverage xml
  coverage: '/TOTAL.*\s+(\d+%)$/'
  artifacts:
    reports:
      coverage_report:
        coverage_format: cobertura
        path: coverage.xml

Jenkins

pipeline {
    agent any
    
    stages {
        stage('Test') {
            steps {
                sh 'pip install -r requirements.txt'
                sh 'pip install coverage'
                sh 'coverage run -m pytest'
                sh 'coverage report'
                sh 'coverage xml'
            }
        }
    }
    
    post {
        always {
            publishHTML([
                allowMissing: false,
                alwaysLinkToLastBuild: true,
                keepAll: true,
                reportDir: 'htmlcov',
                reportFiles: 'index.html',
                reportName: 'Coverage Report'
            ])
        }
    }
}

IDE Integration

PyCharm

PyCharm Professional includes built‑in Coverage.py support:

  1. Open **Run/Debug Configurations**
  2. Select your test configuration
  3. Navigate to the **Coverage** tab
  4. Enable **Code Coverage**
  5. Adjust coverage options as needed

Visual Studio Code

VS Code offers extensions for coverage visualization:

  • Python Test Explorer – run tests with coverage
  • Coverage Gutters – display coverage markers in the editor

Example settings.json configuration:

{
    "python.testing.pytestEnabled": true,
    "python.testing.pytestArgs": [
        "--cov=my_package",
        "--cov-report=html"
    ]
}

Advanced Features

Parallel Execution

For projects that run tests in parallel:

[run]
parallel = True

After the test run, combine the results:

coverage combine
coverage report

Plugins and Extensions

Coverage.py supports a plugin architecture for extra functionality:

  • django_coverage_plugin – coverage for Django projects
  • pytest-cov – pytest integration
  • coverage-badge – generate badge images for README files

Programmatic API

You can control Coverage.py directly from Python code:

import coverage

# Create a Coverage object
cov = coverage.Coverage()

# Start measurement
cov.start()

# Code under test
import my_module
my_module.some_function()

# Stop measurement
cov.stop()

# Save data
cov.save()

# Produce a report
cov.report()

Performance Optimization

Reducing Overhead

  • Use the --source option to limit analysis to specific packages
  • Exclude unnecessary files with omit
  • Leverage parallel execution for large codebases

Settings for Large Projects

[run]
# Focus on the main package
source = my_package

# Exclude third‑party libraries and tests
omit = 
    */site-packages/*
    */tests/*
    */migrations/*
    */venv/*

# Enable parallel mode
parallel = True

# Measure code running under multiprocessing
concurrency = multiprocessing

Frequently Asked Questions

How do I configure Coverage.py for a Django project?

Install the Django plugin and adjust your .coveragerc:

pip install django_coverage_plugin

[run]
plugins = django_coverage_plugin
source = .
omit = 
    */migrations/*
    */venv/*
    */env/*
    manage.py
    */settings/*
    */tests/*

Why is my coverage percentage low?

Common reasons include:

  • Insufficient test cases
  • Dead or unused code
  • Misconfigured source/omit settings that include irrelevant files
  • Lack of tests for exception handling and edge cases
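The last point is a frequent culprit: error‑handling branches run only on bad input. A minimal sketch (hypothetical parse_port function) of an except path that optimistic tests never reach:

```python
def parse_port(value):
    # The except branch runs only for invalid input; suites that pass
    # only well-formed values leave it uncovered.
    try:
        port = int(value)
    except ValueError:
        return None
    return port if 0 < port < 65536 else None

# Covering the error path requires deliberately bad input:
assert parse_port("8080") == 8080
assert parse_port("not-a-port") is None
```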

How can I exclude specific lines from coverage?

Use the # pragma: no cover comment or define exclude_lines in the configuration:

[report]
exclude_lines =
    pragma: no cover
    def __repr__
    raise AssertionError
    raise NotImplementedError

Is it safe to run Coverage.py in production?

Running Coverage.py in production is not recommended because it adds noticeable overhead. Use it only in development and CI environments.

How do I add Coverage.py to pre‑commit hooks?

Create a .pre-commit-config.yaml file:

repos:
  - repo: local
    hooks:
      - id: coverage
        name: coverage
        entry: coverage run -m pytest
        language: system
        pass_filenames: false
        always_run: true

What does “partial” coverage mean in reports?

“Partial” indicates that only some branches of a conditional statement were executed. For example, an if‑else block where only the if side ran will be marked as partially covered.
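A minimal sketch of exactly that case, using a hypothetical grade function:

```python
def grade(score):
    # An if/else where a test suite may exercise only one side:
    if score >= 60:
        return "pass"
    else:
        return "fail"

assert grade(75) == "pass"
# The else side never runs, so the conditional is reported as
# partially covered (and the `return "fail"` line as missed).
```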

Best Practices

Target Coverage Recommendations

  • 80‑90 % – solid baseline for most projects
  • 100 % – rarely necessary and can lead to over‑testing
  • Prioritize critical modules and business‑logic code

Test Writing Guidelines

  • Cover both positive and negative scenarios
  • Include boundary‑value tests
  • Don’t forget exception and error‑handling tests
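These guidelines can be illustrated with a small sketch (hypothetical is_adult function with a boundary at 18):

```python
def is_adult(age):
    # Boundary at 18: tests should probe 17, 18, and 19,
    # not just obviously large or small values.
    return age >= 18

# Positive, negative, and boundary cases together:
assert is_adult(30) is True   # typical positive
assert is_adult(5) is False   # typical negative
assert is_adult(18) is True   # boundary value
assert is_adult(17) is False  # just below the boundary
```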

Maintaining Code Quality

  • Review coverage reports regularly
  • Integrate Coverage.py into code‑review workflows
  • Automate coverage checks in CI pipelines

Alternatives and Complementary Tools

Comparison with Other Solutions

  • pytest‑cov – pytest plugin built on top of Coverage.py
  • nose‑cov – coverage plugin for nose
  • mutmut – mutation testing tool that works alongside coverage analysis

Additional Services

  • Codecov – cloud platform for visualizing and tracking coverage
  • Coveralls – another hosted coverage reporting service
  • SonarQube – comprehensive code‑quality inspection suite

Conclusion

Coverage.py is an indispensable tool for maintaining high code quality in Python projects. It delivers granular insight into test coverage, highlights dead code, and guides improvements in testing strategies.

When used together with solid testing practices, Coverage.py helps you build more reliable, maintainable applications. It is especially valuable in team environments where consistent quality standards are essential.

Remember, high coverage is a means to an end—not the end itself. Leverage Coverage.py as part of a broader quality‑assurance toolkit that includes code reviews, static analysis, and modern DevOps practices.
