Short Answer
If you are using PostgreSQL or Oracle, you can use Django’s built-in iterator:
queryset.iterator(chunk_size=1000)
This causes Django to use server-side cursors and not cache models as it iterates through the queryset. As of Django 4.1, this even works with prefetch_related.
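For example, a minimal sketch on Django 4.1+ (Author and its books relation are hypothetical models used only for illustration):

# Hypothetical models: Author with a reverse relation named "books".
# On Django 4.1+, the prefetch is performed chunk by chunk as the iterator advances.
for author in Author.objects.prefetch_related("books").iterator(chunk_size=1000):
    # len() on the prefetched queryset avoids issuing an extra COUNT query per author
    print(author.pk, len(author.books.all()))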
For other databases, you can use the following:
def queryset_iterator(queryset, page_size=1000):
    page = queryset.order_by("pk")[:page_size]
    while page:
        for obj in page:
            yield obj
            pk = obj.pk
        page = queryset.filter(pk__gt=pk).order_by("pk")[:page_size]
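A usage sketch (the "active" filter field is a hypothetical example):

# Iterate a large, filtered queryset with constant memory.
# The "active" field on MyModel is an assumed example field.
total = 0
for obj in queryset_iterator(MyModel.objects.filter(active=True), page_size=500):
    total += 1
print(f"visited {total} objects")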
If you want to get back pages rather than individual objects to combine with other optimizations such as bulk_update, use this:
def queryset_to_pages(queryset, page_size=1000):
    page = queryset.order_by("pk")[:page_size]
    while page:
        yield page
        pk = max(obj.pk for obj in page)
        page = queryset.filter(pk__gt=pk).order_by("pk")[:page_size]
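For example, a page-at-a-time bulk_update could look like this (a sketch; is_archived is an assumed boolean field, and bulk_update requires Django 2.2+):

# Flag every row while holding at most one page of objects in memory.
# "is_archived" is a hypothetical boolean field on MyModel.
for page in queryset_to_pages(MyModel.objects.all(), page_size=1000):
    objs = list(page)
    for obj in objs:
        obj.is_archived = True
    MyModel.objects.bulk_update(objs, ["is_archived"])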
Performance Profiling on PostgreSQL
I profiled a number of different approaches on a PostgreSQL table with about 200,000 rows on Django 3.2 and Postgres 13. For every query, I added up the sum of the ids, both to ensure that Django was actually retrieving the objects and so that I could verify correctness of iteration between queries. All of the timings were taken after several iterations over the table in question to minimize caching advantages of later tests.
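For reference, each measurement below was of this general shape (a rough sketch of the harness, not the exact code used for the numbers):

# Wall time via time.perf_counter(); the id sum doubles as a correctness check
# so every approach can be asserted against the same value of x.
import time

start = time.perf_counter()
x = sum(i.id for i in MyModel.objects.all())
print(f"sum={x}, wall time={time.perf_counter() - start:.2f}s")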
Basic Iteration
The basic approach is just iterating over the table. The main issue with this approach is that the amount of memory used is not constant; it grows with the size of the table, and I’ve seen this run out of memory on larger tables.
x = sum(i.id for i in MyModel.objects.all())
Wall time: 3.53 s, 22MB of memory (BAD)
Django Iterator
The Django iterator (at least as of Django 3.2) fixes the memory issue with a minor performance benefit. Presumably this comes from Django spending less time managing its result cache.
assert sum(i.id for i in MyModel.objects.all().iterator(chunk_size=1000)) == x
Wall time: 3.11 s, <1MB of memory
Custom Iterator
The natural comparison point is attempting to do the paging ourselves with progressively advancing queries on the primary key. While this is an improvement over naive iteration in that it uses constant memory, it actually loses to Django’s built-in iterator on speed because it makes more database queries.
def queryset_iterator(queryset, page_size=1000):
    page = queryset.order_by("pk")[:page_size]
    while page:
        for obj in page:
            yield obj
            pk = obj.pk
        page = queryset.filter(pk__gt=pk).order_by("pk")[:page_size]
assert sum(i.id for i in queryset_iterator(MyModel.objects.all())) == x
Wall time: 3.65 s, <1MB of memory
Custom Paging Function
The main reason to use the custom iteration is so that you get the results back in pages. This function is very useful to plug into bulk updates while using only constant memory. It’s a bit slower than queryset_iterator in my tests and I don’t have a coherent theory as to why, but the slowdown isn’t substantial.
def queryset_to_pages(queryset, page_size=1000):
    page = queryset.order_by("pk")[:page_size]
    while page:
        yield page
        pk = max(obj.pk for obj in page)
        page = queryset.filter(pk__gt=pk).order_by("pk")[:page_size]
assert sum(i.id for page in queryset_to_pages(MyModel.objects.all()) for i in page) == x
Wall time: 4.49 s, <1MB of memory
Alternative Custom Paging Function
Given that Django’s queryset iterator is faster than doing the paging ourselves, the queryset pager can alternatively be implemented on top of it. It’s a little bit faster than the previous version, but the implementation is messier. Readability matters, which is why my personal preference is the previous paging function, but this one can be better if your queryset doesn’t have a primary key in the results (for whatever reason).
def queryset_to_pages2(queryset, page_size=1000):
    page = []
    page_count = 0
    for obj in queryset.iterator():
        page.append(obj)
        page_count += 1
        if page_count == page_size:
            yield page
            page = []
            page_count = 0
    if page:  # don't yield a trailing empty page
        yield page
assert sum(i.id for page in queryset_to_pages2(MyModel.objects.all()) for i in page) == x
Wall time: 4.33 s, <1MB of memory
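One case where this matters: a values() queryset that doesn’t include the primary key can’t be paged with the pk-based functions above, but it still works here (a sketch; the email field is a hypothetical example):

# This queryset yields dicts like {"email": ...} with no pk available,
# so queryset_to_pages2, which pages on Django's own iterator, is the one to use.
for page in queryset_to_pages2(MyModel.objects.values("email"), page_size=1000):
    print(f"got a page of {len(page)} rows")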
Bad Approaches
The following are approaches you should never use (many of which are suggested in the question) along with why.
Do NOT Use Slicing on an Unordered Queryset
Whatever you do, do NOT slice an unordered queryset. This does not correctly iterate over the table. The reason is that the slice operation issues a SQL limit + offset query based on your queryset, and Django querysets have no order guarantee unless you use order_by. Additionally, PostgreSQL does not apply a default order, and the Postgres docs specifically warn against using limit + offset without order by. As a result, each time you take a slice you get a non-deterministic slice of your table, which means your slices may overlap and may not cover all rows of the table between them. In my experience, this only happens if something else is modifying data in the table while you are doing the iteration, which only makes this problem more pernicious, because it means the bug might not show up if you are testing your code in isolation.
def very_bad_iterator(queryset, page_size=1000):
    counter = 0
    count = queryset.count()
    while counter < count:
        for model in queryset[counter:counter + page_size].iterator():
            yield model
        counter += page_size
assert sum(i.id for i in very_bad_iterator(MyModel.objects.all())) == x
Assertion Error; i.e. INCORRECT RESULT COMPUTED!!!
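You can see the problem by printing the SQL Django generates for an unordered slice; there is a LIMIT and OFFSET but no ORDER BY, so the database is free to return rows in a different order each time (the SQL in the comment is indicative, not exact):

# Slicing an unordered queryset produces LIMIT/OFFSET with no ORDER BY.
print(MyModel.objects.all()[1000:2000].query)
# e.g. SELECT ... FROM "myapp_mymodel" LIMIT 1000 OFFSET 1000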
Do NOT use Slicing for Whole-Table Iteration in General
Even if we order the queryset, slicing is abysmal from a performance perspective. This is because SQL offset is a linear time operation, which means that a limit + offset paged iteration of a table will be quadratic in the size of the table, which you absolutely do not want.
def bad_iterator(queryset, page_size=1000):
    counter = 0
    count = queryset.count()
    while counter < count:
        for model in queryset.order_by("id")[counter:counter + page_size].iterator():
            yield model
        counter += page_size
assert sum(i.id for i in bad_iterator(MyModel.objects.all())) == x
Wall time: 15s (BAD), <1MB of memory
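The difference shows up directly in the generated SQL: offset paging makes the database walk past every skipped row on each page, while the keyset paging used by queryset_iterator seeks straight to the next page through the primary-key index (a sketch assuming the default id primary key; the SQL in the comments is indicative):

# Offset paging: every page re-scans the rows it skips.
print(MyModel.objects.order_by("id")[100000:101000].query)
# ... ORDER BY "id" ASC LIMIT 1000 OFFSET 100000

# Keyset paging: jump straight to pk > last_seen_pk via the index.
print(MyModel.objects.filter(pk__gt=100000).order_by("pk")[:1000].query)
# ... WHERE "id" > 100000 ORDER BY "id" ASC LIMIT 1000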
Do NOT use Django’s Paginator for Whole-Table Iteration
Django comes with a built-in Paginator. It may be tempting to think that it is appropriate for doing a paged iteration of a database, but it is not. The point of Paginator is for returning a single page of a result to a UI or an API endpoint. It is substantially slower than any of the good approaches at iterating over a table.
from django.core.paginator import Paginator

def bad_paged_iterator(queryset, page_size=1000):
    p = Paginator(queryset.order_by("pk"), page_size)
    for i in p.page_range:
        yield p.get_page(i)
assert sum(i.id for page in bad_paged_iterator(MyModel.objects.all()) for i in page) == x
Wall time: 13.1 s (BAD), <1MB of memory
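For contrast, this is the kind of job Paginator is actually meant for: serving a single requested page from a view (a sketch with a hypothetical API view):

from django.core.paginator import Paginator
from django.http import JsonResponse

def my_model_list(request):
    # Hypothetical endpoint: return only the page the client asked for.
    page_number = int(request.GET.get("page", 1))
    paginator = Paginator(MyModel.objects.order_by("pk"), 100)
    page = paginator.get_page(page_number)
    return JsonResponse({
        "results": [obj.id for obj in page],
        "num_pages": paginator.num_pages,
    })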
The essential answer: use raw SQL with server-side cursors.
Sadly, as of Django 1.5.2 there is no formal way to create a server-side MySQL cursor (I’m not sure about other database engines), so I wrote some magic code to solve this problem.
For Django 1.5.2 and MySQLdb 1.2.4, the following code will work. Also, it’s well commented.
Caution: This is not based on public APIs, so it will probably break in future Django versions.
# This script should be tested under a Django shell, e.g., ./manage.py shell
from types import MethodType

import MySQLdb.cursors
import MySQLdb.connections
from django.db import connection
from django.db.backends.util import CursorDebugWrapper


def close_sscursor(self):
    """An instance method which replaces the close() method of the old cursor.

    Closing the server-side cursor with the original close() method would be
    quite slow and memory-intensive if the large result set was not exhausted,
    because fetchall() is called internally to get the remaining records.
    Notice that the close() method is also called when the cursor is garbage
    collected.

    This method is more efficient at closing the cursor, but if the result set
    is not fully iterated, the next cursor created from the same connection
    won't work properly. You can avoid this by either (1) closing the connection
    before creating a new cursor, or (2) iterating the result set before closing
    the server-side cursor.
    """
    if isinstance(self, CursorDebugWrapper):
        self.cursor.cursor.connection = None
    else:
        # This is for a CursorWrapper object
        self.cursor.connection = None


def get_sscursor(connection, cursorclass=MySQLdb.cursors.SSCursor):
    """Get a server-side MySQL cursor."""
    if connection.settings_dict['ENGINE'] != 'django.db.backends.mysql':
        raise NotImplementedError('Only MySQL engine is supported')
    cursor = connection.cursor()
    if isinstance(cursor, CursorDebugWrapper):
        # Get the real MySQLdb.connections.Connection object
        conn = cursor.cursor.cursor.connection
        # Replace the internal client-side cursor with a server-side cursor
        cursor.cursor.cursor = conn.cursor(cursorclass=cursorclass)
    else:
        # This is for a CursorWrapper object
        conn = cursor.cursor.connection
        cursor.cursor = conn.cursor(cursorclass=cursorclass)
    # Replace the old close() method
    cursor.close = MethodType(close_sscursor, cursor)
    return cursor


# Get the server-side cursor
cursor = get_sscursor(connection)

# Run a query with a large result set. Notice that the memory consumption is low.
cursor.execute('SELECT * FROM million_record_table')

# Fetch a single row, fetchmany() rows, or iterate via "for row in cursor:"
cursor.fetchone()

# You can interrupt the iteration at any time. This calls the new close() method,
# so no warning is shown.
cursor.close()

# The connection must be closed to let new cursors work properly; see the comments
# in close_sscursor().
connection.close()
There is another option available. It wouldn’t make the iteration faster (in fact it would probably slow it down), but it would use far less memory. Depending on your needs, this may be appropriate.
large_qs = MyModel.objects.all().values_list("id", flat=True)

for model_id in large_qs:
    model_object = MyModel.objects.get(id=model_id)
    # do whatever you need to do with the model here
Only the ids are loaded into memory, and the objects are retrieved and discarded as needed. Note the increased database load and slower runtime, both tradeoffs for the reduction in memory usage.
I’ve used this when running async scheduled tasks on worker instances, for which it doesn’t really matter if they are slow, but if they try to use way too much memory they may crash the instance and therefore abort the process.