[Django]-Remove duplicates in Django ORM — multiple rows

78👍

✅

from django.db import models

def remove_duplicated_records(model, fields):
    """
    Removes records from `model` duplicated on `fields`
    while leaving the most recent one (biggest `id`).
    """
    duplicates = model.objects.values(*fields)

    # override any model specific ordering (for `.annotate()`)
    duplicates = duplicates.order_by()

    # group by same values of `fields`; count how many rows are the same
    duplicates = duplicates.annotate(
        max_id=models.Max("id"), count_id=models.Count("id")
    )

    # leave out only the ones which are actually duplicated
    duplicates = duplicates.filter(count_id__gt=1)

    for duplicate in duplicates:
        to_delete = model.objects.filter(**{x: duplicate[x] for x in fields})

        # leave out the latest duplicated record
        # you can use `Min` if you wish to leave out the first record
        to_delete = to_delete.exclude(id=duplicate["max_id"])

        to_delete.delete()
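
A minimal usage sketch (the Book model and its author and title fields are hypothetical, not part of the answer above):

from myapp.models import Book  # hypothetical app and model

# keep only the newest Book row for every (author, title) pair
remove_duplicated_records(Book, ["author", "title"])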

You shouldn’t need to do this often. Use a unique_together constraint on the database instead.
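
For reference, a rough sketch of such a constraint using UniqueConstraint, the newer alternative to unique_together (the Book model and its fields are made up for illustration):

from django.db import models

class Book(models.Model):
    author = models.CharField(max_length=100)
    title = models.CharField(max_length=200)

    class Meta:
        constraints = [
            # the database rejects a second row with the same (author, title)
            models.UniqueConstraint(
                fields=["author", "title"], name="unique_author_title"
            ),
        ]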

This leaves the record with the biggest id in the DB. If you want to keep the original record (the first one), modify the code a bit with models.Min. You can also use a completely different field, like a creation date or something.
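
For illustration, a variant of the function above that keeps the oldest record instead (a sketch only, with Max swapped for Min):

from django.db import models

def remove_duplicates_keep_first(model, fields):
    """Like above, but keeps the oldest record (smallest `id`)."""
    duplicates = (
        model.objects.values(*fields)
        .order_by()
        .annotate(min_id=models.Min("id"), count_id=models.Count("id"))
        .filter(count_id__gt=1)
    )
    for duplicate in duplicates:
        (
            model.objects.filter(**{x: duplicate[x] for x in fields})
            .exclude(id=duplicate["min_id"])  # spare the earliest record
            .delete()
        )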

Underlying SQL

When annotating, the Django ORM adds a GROUP BY statement over all model fields used in the query; hence the use of the .values() method. GROUP BY groups together all records having identical values for those fields. The duplicated groups (more than one id for the same values of `fields`) are then filtered out in the HAVING statement generated by calling .filter() on the annotated QuerySet.

SELECT
    field_1,
    …
    field_n,
    MAX(id) as max_id,
    COUNT(id) as count_id
FROM
    app_mymodel
GROUP BY
    field_1,
    …
    field_n
HAVING
    COUNT(id) > 1
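
If you want to inspect the SQL your database actually receives, you can print the compiled query of the annotated QuerySet (a sketch; MyModel and the field names are placeholders):

from django.db import models
from myapp.models import MyModel  # placeholder

duplicates = (
    MyModel.objects.values("field_1", "field_n")
    .order_by()
    .annotate(max_id=models.Max("id"), count_id=models.Count("id"))
    .filter(count_id__gt=1)
)
print(duplicates.query)  # shows the generated GROUP BY / HAVING clauses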

The duplicated records are then deleted in the for loop, with the exception of the most recent one (biggest id) in each group.

Empty .order_by()

Just to be sure, it’s always wise to add an empty .order_by() call before aggregating a QuerySet.

The fields used for ordering the QuerySet are also included in the GROUP BY statement. An empty .order_by() overrides the columns declared in the model’s Meta, so they are not included in the SQL query (e.g. a default ordering by date can ruin the results).

You might not need to override it right now, but someone might add a default ordering later and thereby break your precious delete-duplicates code without even knowing it. Yes, I’m sure you have 100% test coverage…

Just add empty .order_by() to be safe. 😉
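
To illustrate, a hypothetical model with a default ordering declared in Meta:

from django.db import models

class Event(models.Model):
    name = models.CharField(max_length=100)
    created = models.DateTimeField(auto_now_add=True)

    class Meta:
        ordering = ["created"]  # sneaks into GROUP BY during aggregation

# groups by (name, created) because of Meta.ordering -- likely not what you want
per_timestamp = Event.objects.values("name").annotate(n=models.Count("id"))

# empty .order_by() clears the default ordering, so this groups by name only
per_name = Event.objects.values("name").order_by().annotate(n=models.Count("id"))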

https://docs.djangoproject.com/en/3.2/topics/db/aggregation/#interaction-with-default-ordering-or-order-by

Transaction

Of course you should consider doing it all in a single transaction.
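
A minimal sketch, assuming the remove_duplicated_records function above and the hypothetical Book model:

from django.db import transaction

with transaction.atomic():
    # either every duplicate is deleted or, on error, none of them
    remove_duplicated_records(Book, ["author", "title"])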

https://docs.djangoproject.com/en/3.2/topics/db/transactions/#django.db.transaction.atomic

1👍

If you want to delete duplicates on a single column or on multiple columns, you don’t need to iterate over millions of records; a consolidated sketch of the whole approach follows after the steps below.

  1. Fetch the columns you want to deduplicate on (don’t forget to include the primary key column)

    fetch = Model.objects.all().values("id", "skuid", "review", "date_time")
    
  2. Read the result with pandas (I used pandas instead of an ORM query)

    import pandas as pd
    df = pd.DataFrame(list(fetch))
    
  3. Drop duplicates on unique columns

    uniq_df = df.drop_duplicates(subset=["skuid", "review", "date_time"])
    # don't include the primary key in `subset`
    
  4. Now you’ll have the unique records, from which you can pick the primary keys

    primary_keys = uniq_df["id"].tolist()
    
  5. Finally, it’s showtime (exclude those ids and delete the rest of the records)

    Model.objects.exclude(pk__in=primary_keys).delete()
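
Putting the five steps together, a consolidated sketch (Model and the column names are the placeholders used in the steps above; note that the selected columns are loaded into memory):

import pandas as pd

from myapp.models import Model  # placeholder

# 1. fetch the primary key plus the columns that define a duplicate
fetch = Model.objects.values("id", "skuid", "review", "date_time")

# 2. load the rows into a DataFrame
df = pd.DataFrame(list(fetch))

# 3. keep one row per (skuid, review, date_time) combination
uniq_df = df.drop_duplicates(subset=["skuid", "review", "date_time"])

# 4. collect the primary keys of the rows to keep
primary_keys = uniq_df["id"].tolist()

# 5. delete everything that is not in the keep list
Model.objects.exclude(pk__in=primary_keys).delete()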
    
