Many Python developers use print() to see what's happening in their code while it's executing. While this is a quick way to gain insight, it has plenty of drawbacks.

For instance, if you run the script as a background process, how would you know if it crashed? What if it hit a function it wasn't supposed to? You can't know for certain what happened in your code.

So, having print statements in your code can be a hindrance (unless it's a one-off, throwaway script). Python's logging module helps clean these statements up and lets you configure the specifics of each output, such as severity level and format.

Why you don’t want to be using print

Take the following code, for example:

import sys

def is_connected_to_db(ok: bool):
    if not ok:
        print("database connection failed")
        sys.exit(1)

    print("successful db connection")

is_connected_to_db(ok=True)
is_connected_to_db(ok=False)

When we run this, we’d see this output:

successful db connection
database connection failed

Nothing is inherently wrong with this approach - it's behaving the way we want by showing us what the code did. However, we can make this significantly better with logging, as there are 2 major drawbacks here:

  1. There isn't a severity level associated with the output. So when we look through our print statements, we have to read each line to see whether it was a success or a failure.

  2. Wiring this output into monitoring software such as Grafana makes any kind of visualization nearly impossible, as you'd have to build a custom parser to get it into a useful format.

Fortunately, the logging module helps address these issues while adding further functionality. If we replace our print statements with log statements, our code would look something like this:

import logging
import sys

# Format how the output is going to look
logging.basicConfig(
    level=logging.INFO,
    format="%(levelname)s [%(asctime)s] - %(message)s"
)

logger = logging.getLogger(__name__)

# Original function, replacing print for logging
def is_connected_to_db(ok: bool):
    if not ok:
        logger.critical("database connection failed")
        sys.exit(1)

    logger.info("successful db connection")

is_connected_to_db(ok=True)
is_connected_to_db(ok=False)

When we run it, we’ll see:

INFO [2025-12-16 21:07:42,318] - successful db connection
CRITICAL [2025-12-16 21:07:42,319] - database connection failed

This output gives us much more insight into what events happened, when they happened, and whether anything truly went wrong.

By switching over to logging, we address the 2 major issues from above:

  • On an initial pass-through we can spot right away that something went wrong by scanning for WARNING/ERROR/CRITICAL statements, and we know how severe it is.

  • Monitoring software can parse the output if it's sent to a file, so you don't have to comb through the logs yourself looking for issues (see the sketch below).
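
To route the records to a file that a monitoring agent can pick up, basicConfig accepts a filename argument. Here's a minimal sketch, with "app.log" as an illustrative path:

import logging

# Write records to a file instead of stderr so a log shipper or
# monitoring agent can pick them up. The path is just an example.
logging.basicConfig(
    filename="app.log",
    level=logging.INFO,
    format="%(levelname)s [%(asctime)s] - %(message)s",
)

logger = logging.getLogger(__name__)
logger.info("successful db connection")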

Logging levels

By default, the logging library gives you 5 levels of severity, which are as follows:

  • DEBUG: Detailed diagnostic information, typically of interest only when diagnosing a problem.

  • INFO: Confirmation that things are working as expected or general information about program flow.

  • WARNING: An indication that something unexpected happened or might happen in the near future (the default level if none is specified).

  • ERROR: Due to a more serious problem, the software has not been able to perform some function.

  • CRITICAL: A serious error, indicating that the program itself may be unable to continue running.
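
The configured level acts as a filter: only records at or above it are emitted. A minimal sketch, assuming the level is set to WARNING (the messages here are made up for illustration):

import logging

logging.basicConfig(level=logging.WARNING, format="%(levelname)s - %(message)s")
logger = logging.getLogger(__name__)

logger.debug("connection pool size: 5")        # suppressed, below WARNING
logger.info("successful db connection")        # suppressed, below WARNING
logger.warning("db connection is slow")        # emitted
logger.error("query failed")                   # emitted
logger.critical("database connection failed")  # emitted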

Other kinds of logging

You don't have to use the logging module from Python's standard library. There are a few alternatives, with structlog being one you may want to consider:

  • structlog - designed for structured logging, allowing you to output logs in a format such as JSON. I personally use this when I build applications in AWS because it makes wiring up monitoring tools like CloudWatch and Grafana easier.

  • loguru - provides built-in support for colored output, automatic log file rotation, and simplified log destination management. I haven't used this library myself, but it does simplify some of the headaches of the standard library's logging module.

If we take a look at all 3 outputs side by side, we'll see each one is slightly different:

# standard library
INFO:root:user logged in

# structlog (JSON)
{"event":"user logged in","level":"info"}

# loguru
2025-12-16 21:03:10 | INFO | user logged in
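
As a rough sketch of how the structlog line above could be produced (the processor list here is my own assumption; structlog's default configuration renders nicely to the console rather than to plain JSON):

import structlog

# Emit each event as a JSON object: add_log_level injects the "level"
# key and JSONRenderer serializes the event dictionary.
structlog.configure(
    processors=[
        structlog.processors.add_log_level,
        structlog.processors.JSONRenderer(),
    ]
)

log = structlog.get_logger()
log.info("user logged in")
# prints something like: {"event": "user logged in", "level": "info"}

loguru, by contrast, produces its timestamped, colored output with no configuration: import its ready-made logger with from loguru import logger and call logger.info("user logged in").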

Happy coding!

📧 Join the Python Snacks Newsletter! 🐍

Want even more Python-related content that's useful? Here are 3 reasons why you should subscribe to the Python Snacks newsletter:

  1. Get Ahead in Python with bite-sized Python tips and tricks delivered straight to your inbox, like the one above.

  2. Exclusive Subscriber Perks: Receive a curated selection of up to 6 high-impact Python resources, tips, and exclusive insights with each email.

  3. Get Smarter with Python in under 5 minutes. Your next Python breakthrough could be just an email away.

You can unsubscribe at any time.

Interested in starting a newsletter or a blog?

Do you have a wealth of knowledge and insights to share with the world? Starting your own newsletter or blog is an excellent way to establish yourself as an authority in your field, connect with a like-minded community, and open up new opportunities.

If TikTok, Twitter, Facebook, or other social media platforms were to get banned, you’d lose all your followers. This is why you should start a newsletter: you own your audience.

This article may contain affiliate links, which help support the costs of this blog. Should you purchase a product/service through an affiliate link, it comes at no additional cost to you.
