Using Python's LRU Cache
Caching your program's output can significantly improve its performance. Python's functools.lru_cache decorator makes creating caches easy.

Have you ever noticed slow performance when you call one of your own functions?
Maybe it's a function that queries an API, does some expensive computation, or parses a file. Even though you're passing in the same arguments, it's still recomputing everything from scratch.
Caching is a mechanism that allows us to speed up our programs by storing the results of function calls and re-using them when the same inputs appear again.
Thankfully, Python's functools library lets us implement this easily with the lru_cache decorator:

```python
from functools import lru_cache

@lru_cache(maxsize=128)
def get_weather_data(zip_code: str):
    # fetch_wx_data_by_zipcode stands in for your own API call
    data = fetch_wx_data_by_zipcode(zip_code)
    return data.json()
```
In case you're not familiar with decorators, check out this article about decorators.
What is lru_cache and how does it work?
This is a built-in Python decorator that automatically caches the results of function calls so that if the same inputs are used again, Python skips recomputation and returns the saved result.
There are multiple cache-eviction policies (such as FIFO, LIFO, and MRU), but LRU stands for "least recently used". The cache gains an entry every time the function runs with arguments it hasn't seen before.
When the cache fills up and needs to make space for a new entry, it evicts the entry that was least recently accessed (not the one used the fewest times overall, which is a different policy, LFU).
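To make the eviction order concrete, here's a minimal sketch using a hypothetical square function and a deliberately tiny maxsize=2, inspecting the cache with the decorator's built-in cache_info() method:

```python
from functools import lru_cache

# Tiny cache so eviction is easy to observe
@lru_cache(maxsize=2)
def square(x):
    return x * x

square(1)  # miss -> cached
square(2)  # miss -> cached
square(1)  # hit: 1 is now the most recently used
square(3)  # miss -> cache is full, evicts 2 (the least recently used)
square(2)  # miss again, because 2 was evicted

print(square.cache_info())
# CacheInfo(hits=1, misses=4, maxsize=2, currsize=2)
```

Note that evicting based on recency, not call counts, is exactly what keeps the second call to square(2) from being a hit here.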
Performance of lru_cache
To show the performance benefit, we can time an uncached and a cached version of the same function:
```python
import time
from functools import lru_cache

# Simulate an expensive function
def slow_function(x):
    time.sleep(1)  # simulate heavy work
    return x * 2

# Cached version
@lru_cache(maxsize=None)
def cached_slow_function(x):
    time.sleep(1)  # simulate heavy work
    return x * 2

def test_function(func, label):
    start = time.perf_counter()
    for i in [1, 2, 3, 2, 3, 1]:  # intentional duplicates
        result = func(i)
    end = time.perf_counter()
    print(f"{label} took {end - start:.2f} seconds.")
```
We see that the cached version is 2x quicker, since the three duplicate calls hit the cache instead of sleeping:
```python
>>> test_function(slow_function, "No Cache")
No Cache took 6.03 seconds.
>>> test_function(cached_slow_function, "With lru_cache")
With lru_cache took 3.01 seconds.
```
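One caveat worth knowing before relying on this in real code: cached results never expire on their own. If the underlying data can go stale (like the weather example earlier), the decorated function exposes a cache_clear() method you can call to reset it. A quick sketch:

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def double(x):
    return x * 2

double(5)
print(double.cache_info().currsize)  # 1 entry cached

double.cache_clear()                 # drop all cached entries
print(double.cache_info().currsize)  # 0
```

You might call cache_clear() on a timer, or after you know the source data has changed.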
Summary
In short, caching is worth reaching for whenever a function is deterministic (the same inputs always produce the same output) and gets called repeatedly with the same arguments. The functools.lru_cache decorator gives you a commonly used caching mechanism without having to implement it yourself.
However, the main drawback here is that all arguments must be hashable (e.g. strings, tuples); you're unable to use dictionaries or lists. In addition, cached results are stored in memory, which does lead to increased memory consumption.
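You can see the hashability restriction in action with a small example (the total function here is purely illustrative):

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def total(values):
    return sum(values)

print(total((1, 2, 3)))   # 6 -- tuples are hashable, so this works

try:
    total([1, 2, 3])      # lists are not hashable
except TypeError as e:
    print(e)              # unhashable type: 'list'
```

A common workaround is to convert a list to a tuple before passing it in.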
Happy coding!
📧 Join the Python Snacks Newsletter! 🐍
Want even more Python-related content that's useful? Here are 3 reasons why you should subscribe to the Python Snacks newsletter:
Get Ahead in Python with bite-sized Python tips and tricks delivered straight to your inbox, like the one above.
Exclusive Subscriber Perks: Receive a curated selection of up to 6 high-impact Python resources, tips, and exclusive insights with each email.
Get Smarter with Python in under 5 minutes. Your next Python breakthrough could be just an email away.
You can unsubscribe at any time.
Interested in starting a newsletter or a blog?
Do you have a wealth of knowledge and insights to share with the world? Starting your own newsletter or blog is an excellent way to establish yourself as an authority in your field, connect with a like-minded community, and open up new opportunities.
If TikTok, Twitter, Facebook, or other social media platforms were to get banned, you’d lose all your followers. This is why you should start a newsletter: you own your audience.
This article may contain affiliate links. Affiliate links come at no cost to you and support the costs of this blog. Should you purchase a product/service from an affiliate link, it will come at no additional cost to you.