[Django]-Difference between @cached_property and @lru_cache decorator

30👍

First and foremost, lru_cache is a decorator provided by the Python standard library itself as of version 3.4; cached_property was provided by Django for many years before being added to the Python standard library (as functools.cached_property) in version 3.8, released in October 2019. That being said, they are similar.

lru_cache is particularly useful in functional programming. It saves the results of function calls, keyed by the arguments they were called with. When a function decorated with lru_cache is called again with the same arguments, the decorator simply returns the cached result instead of re-running the function. This is a technique called memoization, closely related to dynamic programming. Used this way, it can drastically speed up code that repeatedly calls computationally expensive functions.
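A minimal sketch of that memoization effect, using the classic Fibonacci example (the naive recursive version is exponential; with lru_cache each distinct argument is computed exactly once):

```python
import functools

@functools.lru_cache(maxsize=None)
def fib(n):
    # Each distinct n is computed once; repeat calls return the cached value.
    if n < 2:
        return n
    return fib(n - 1) + fib(n - 2)

print(fib(35))           # 9227465, computed almost instantly
print(fib.cache_info())  # hits/misses show the memoization at work
```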

LRU (least recently used) is only one of several cache replacement policies; another common one is LFU (least frequently used), although the Python standard library only provides lru_cache. Both policies accomplish memoization but differ in what they evict. Once an lru_cache fills up (maxsize defaults to 128), the next decorated call with new arguments has to kick something out, and the least recently used entry is the one replaced by the new data. An LFU cache would instead evict the entry that is used least often.
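The LRU eviction behaviour can be observed directly through cache_info(). A small sketch with a deliberately tiny maxsize:

```python
import functools

@functools.lru_cache(maxsize=2)
def square(n):
    return n * n

square(1); square(2)  # cache now holds entries for 1 and 2 (two misses)
square(1)             # hit; 1 becomes the most recently used entry
square(3)             # cache is full, so 2 (least recently used) is evicted

info = square.cache_info()
print(info.hits, info.misses, info.currsize)  # 1 3 2
```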

cached_property is similar to lru_cache in that it caches the result of an expensive call. The difference is that it can only be used on methods, i.e. functions that belong to an object, and only on methods whose sole parameter is self. In Django development you would typically want it on a model method that hits the database. The Django docs show it on a Person model with a property method friends; that method presumably hits the database to gather the set of people who are friends of that Person instance. Because database calls are expensive, we’d want to cache the result for later use.
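A minimal sketch of that pattern, using functools.cached_property (Django’s django.utils.functional.cached_property behaves the same way); the Person class and the db_hits counter standing in for real database queries are hypothetical:

```python
import functools

class Person:
    def __init__(self, name):
        self.name = name
        self.db_hits = 0  # stand-in counter for real database queries

    @functools.cached_property
    def friends(self):
        # Imagine an expensive ORM query here; we just count the calls.
        self.db_hits += 1
        return ["alice", "bob"]

p = Person("carol")
p.friends; p.friends; p.friends
print(p.db_hits)  # 1 -- the "query" ran once; later accesses hit the cache
```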

12👍

  1. A major difference is that lru_cache keeps the objects in its cache alive, which can lead to memory leaks, especially if the instances on which the lru_cache-decorated method is called are large (see: https://bugs.python.org/issue19859)
import functools

class A:

  @property
  @functools.lru_cache(maxsize=None)
  def x(self):
    return 123

for _ in range(100):
  A().x  # Call lru_cache on 100 different `A` instances

# The instances of `A()` are never garbage-collected:
assert A.x.fget.cache_info().currsize == 100

With cached_property, the cached value is stored in the instance’s own __dict__, so it is garbage-collected together with the instance – no leak.

import functools

class B:

  @functools.cached_property
  def x(self):
    return 123

b = B()
print(vars(b))  # {}
b.x
print(vars(b))  # {'x': 123}
del b  # b is garbage-collected
  2. Another difference is that @property attributes are read-only, while @cached_property attributes are writable (see the Python docs):
A().x = 123
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
AttributeError: can't set attribute
B().x = 123  # Works

This is because @cached_property replaces itself with the computed value in the instance’s __dict__, so subsequent accesses of b.x bypass the B.x.__get__ descriptor call.
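Because the cached value lives in the instance’s __dict__, you can also invalidate it with del, which forces a recomputation on the next access. A small sketch (the Report class and its runs counter are hypothetical, added only to make the recomputation visible):

```python
import functools

class Report:
    runs = 0  # class-level counter, just to show recomputation

    @functools.cached_property
    def data(self):
        Report.runs += 1
        return [1, 2, 3]

r = Report()
r.data              # computed and stored in r.__dict__
r.data              # served from the instance dict, no recomputation
del r.data          # removes the cached entry, invalidating the cache
r.data              # recomputed on next access
print(Report.runs)  # 2
```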

  3. Another difference, which likely doesn’t matter in most cases, is that cached_property is more performant when the same attribute is accessed many times, while lru_cache adds overhead for the function call and attribute lookup. The difference only becomes visible with huge numbers of accesses:
import timeit

# Warm up with 10_000 distinct instances (each A() adds a new lru_cache entry)
[A().x for _ in range(10_000)]
[B().x for _ in range(10_000)]

a = A()
b = B()

print(timeit.timeit(lambda: a.x, number=1_000_000))  # ~0.83
print(timeit.timeit(lambda: b.x, number=1_000_000))  # ~0.57

6👍

They serve different purposes.

lru_cache keeps the most recently used results – you can specify maxsize, which determines how many results of your function it will save. Once you surpass that number, the least recently used result is discarded and the new one is saved.

cached_property just computes the result once and saves it. Unlike lru_cache, it doesn’t take arguments (you can think of it as an lru_cache with maxsize=1 applied to a method with no arguments).
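That analogy can be made concrete by stacking @property on top of lru_cache(maxsize=1), though note that, unlike cached_property, this maxsize=1 cache is shared by all instances of the class (a second instance would evict the first). The Squarer class here is a made-up example:

```python
import functools

class Squarer:
    def __init__(self, n):
        self.n = n

    # Roughly equivalent to @functools.cached_property for a single instance,
    # but the maxsize=1 cache belongs to the class, not the instance.
    @property
    @functools.lru_cache(maxsize=1)
    def doubled(self):
        return self.n * 2

s = Squarer(21)
print(s.doubled)  # 42 -- computed on the first access
print(s.doubled)  # 42 -- second access is a cache hit
print(Squarer.doubled.fget.cache_info().hits)  # 1
```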

👤Mariy
