
When you’re writing tests, you’re likely focused on edge cases, correct functionality, proper error handling, and so much more.

It’s super easy to end up with entire sections of your codebase that no test ever reaches, not because your tests are bad, but because there might be dead code, unreachable code, or broken test wiring.

This is where looking at test coverage comes in handy.

When you run your pytest suite, an external tool keeps track of which code gets executed and which doesn’t, then produces a report that you can look into further:

----------- coverage: python 3.12 -----------
Name                    Stmts   Miss  Cover
-------------------------------------------
src/orders.py              45     12    73%
src/payments.py            38      0   100%
src/notifications.py       22     18    18%
-------------------------------------------
TOTAL                     105     30    71%

The report here shows that 100% of payments.py is being tested - great! It also tells us that notifications.py is only 18% covered - not so great!

Note: it’s important to remember that coverage doesn’t determine whether a test was written for a chunk of code - it only checks whether a line of code was executed at least once during the test run.

For example, if we’re testing the following function:

def format_currency(amount: float) -> str:
    return f"${amount:.2f}"

def process_order(order_id: str, total: float) -> dict:
    return {
        "order_id": order_id, 
        "total": format_currency(total)
    }

Our test case may look as such:

# test_orders.py
def test_process_order():
    result = process_order("ORD-1", 49.99)
    assert result["order_id"] == "ORD-1"

But when we run a coverage report, it will show that 100% of the code is covered - even though we never wrote a single assertion for format_currency! Coverage counts those lines because process_order happens to call the function.
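One way to close that gap is to test format_currency directly, asserting on its output rather than merely executing it. A minimal sketch (the function is repeated here from src/orders.py so the snippet stands alone):

```python
# test_orders.py
def format_currency(amount: float) -> str:
    # repeated from src/orders.py so this snippet is self-contained
    return f"${amount:.2f}"

def test_format_currency():
    # assert on the actual output, not just that the line ran
    assert format_currency(49.99) == "$49.99"
    assert format_currency(5) == "$5.00"
```

Both tests now contribute to coverage, but only this one would catch a regression in the formatting logic itself.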

How to check for coverage with your tests

First, you’re going to need to install it with either pip or uv:

pip install pytest-cov coverage

# OR

uv add --dev pytest-cov coverage

pytest-cov is the pytest plugin that wires coverage in so you don’t have to invoke the coverage command-line tool yourself. To use it, add the --cov flag to your normal pytest command, pointing at the code you’re testing:

pytest --cov=src tests/
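If you find yourself typing the same flags on every run, they can live in your pytest configuration instead. A sketch using pyproject.toml (the source path here is an assumption based on the layout above):

```toml
# pyproject.toml
[tool.pytest.ini_options]
addopts = "--cov=src --cov-report=term-missing"
```

With this in place, a plain `pytest tests/` picks up the coverage flags automatically.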

Going back up to the report, there’s a “Miss” column - this tells you how many statements were not executed, but not which lines they are. For that, pass in the --cov-report=term-missing argument:

pytest --cov=src --cov-report=term-missing tests/

Name                    Stmts   Miss  Cover  Missing
-----------------------------------------------------
src/notifications.py       22     18    18%  12-45, 58-72

This tells us that lines 12-45 and 58-72 aren’t being executed, giving you a concrete starting point to go look at why that code isn’t reached.
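If some of those lines turn out to be intentionally unreachable in your test environment (say, a platform-specific branch), coverage.py lets you exclude them with a pragma comment instead of leaving the percentage artificially low. A minimal sketch - the function and branch here are hypothetical:

```python
import sys

def notification_channel() -> str:
    # the Windows-only branch never runs in a Linux CI job, so it is
    # excluded from the coverage report with "# pragma: no cover"
    if sys.platform == "win32":  # pragma: no cover
        return "toast"
    return "log"
```

Excluded lines don’t count as statements at all, so they affect neither the Stmts nor the Miss column.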

Additional ways to use the coverage tool

There’s more you can do with this tool, such as hiding fully covered files by adding the :skip-covered modifier to --cov-report:

pytest --cov=src --cov-report=term-missing:skip-covered tests/

----------- coverage -----------
Name                       Stmts   Miss  Cover  Missing
--------------------------------------------------------
src/notifications.py          22     18    18%  12-45, 58-72
src/orders.py                 45     12    73%  88-103
--------------------------------------------------------
TOTAL                        105     30    71%

This is super handy when you have lots of modules in your codebase, so you don’t have to scan past a wall of 100% coverage to find the gaps.

Some projects are split into multiple test suites: unit tests, integration tests that hit individual services, or maybe a completely separate suite for a specific module.

Running each suite with the commands as-is would replace the coverage data each time, but passing --cov-append lets you run the suites separately while accumulating results into the same .coverage data file:

# run unit tests first (fresh coverage data)
pytest --cov=src tests/unit/

# then integration tests; --cov-append adds to the same .coverage file
pytest --cov=src --cov-append tests/integration/

# final report reflects both runs combined
coverage report --show-missing --skip-covered
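One more flag worth knowing about: line coverage can mark a function fully covered even when an entire decision path goes untested. Passing --cov-branch tells the tool to track both outcomes of every condition. In this hypothetical function, a single test with is_member=True executes every line, yet only branch coverage flags the untested non-member path:

```python
def apply_discount(total: float, is_member: bool) -> float:
    # with plain line coverage, one call with is_member=True touches
    # every line; the implicit "skip the if" path only shows up as
    # missed when branch coverage (--cov-branch) is enabled
    if is_member:
        total *= 0.9
    return round(total, 2)
```

A suite that tests both `apply_discount(100.0, True)` and `apply_discount(100.0, False)` satisfies branch coverage for this function; one that tests only the member case does not.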

I’m curious - do you run coverage alongside your tests? If so, have you run into anything interesting with it, like spotting dead code? Reply to this email and let me know!

Happy coding!

📧 Join the Python Snacks Newsletter! 🐍

Want even more Python-related content that’s useful? Here are 3 reasons why you should subscribe to the Python Snacks newsletter:

  1. Get Ahead in Python with bite-sized Python tips and tricks delivered straight to your inbox, like the one above.

  2. Exclusive Subscriber Perks: Receive a curated selection of up to 6 high-impact Python resources, tips, and exclusive insights with each email.

  3. Get Smarter with Python in under 5 minutes. Your next Python breakthrough could be just an email away.

You can unsubscribe at any time.

Interested in starting a newsletter or a blog?

Do you have a wealth of knowledge and insights to share with the world? Starting your own newsletter or blog is an excellent way to establish yourself as an authority in your field, connect with a like-minded community, and open up new opportunities.

If TikTok, Twitter, Facebook, or other social media platforms were to get banned, you’d lose all your followers. This is why you should start a newsletter: you own your audience.

This article may contain affiliate links. Affiliate links come at no cost to you and support the costs of this blog. Should you purchase a product/service from an affiliate link, it will come at no additional cost to you.
