[Django]-Increment a counter and trigger an action when a threshold is exceeded

5👍

As far as I can tell, Django's ORM doesn't support this out of the box. That doesn't mean it can't be done, though; you just need to drop down to the SQL level (exposed in Django's ORM via a Manager's `raw` method) to make it work.

If you are using PostgreSQL >= 8.2, you can use RETURNING to get the final value of failure_count without any additional locking (the database still takes a row lock, but only for as long as it takes to set the value; no extra time is lost communicating with the client):

# ASSUMPTIONS: all IDs are valid and IDs are unique.
# More defensive code is necessary if either of these
# assumptions does not hold.
# Note: use your model's actual table name here (by default
# Django names it <app_label>_thingy), and include the primary
# key in RETURNING so raw() can build model instances.
failure_count = Thingy.objects.raw("""
    UPDATE Thingy
    SET failure_count = failure_count + 1
    WHERE id = %s
    RETURNING id, failure_count;
""", [thingy_id])[0].failure_count

if failure_count == THRESHOLD:
    issue_warning_for(thingy_id)
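Outside of a database, the same increment-and-check pattern can be sketched in plain Python, with a lock standing in for the database's row lock. This is only an illustration of the pattern above; the `FailureCounter` class and the `THRESHOLD` value are made up for the example:

```python
import threading

THRESHOLD = 3  # illustrative value


def issue_warning_for(thingy_id):
    print(f"warning: thingy {thingy_id} reached {THRESHOLD} failures")


class FailureCounter:
    """Thread-safe counter mirroring UPDATE ... RETURNING:
    the increment and the read of the new value happen under
    a single lock, so no other caller can interleave."""

    def __init__(self):
        self._lock = threading.Lock()
        self._counts = {}

    def record_failure(self, thingy_id):
        # Increment and read back atomically, like RETURNING does.
        with self._lock:
            self._counts[thingy_id] = self._counts.get(thingy_id, 0) + 1
            failure_count = self._counts[thingy_id]
        # Act on the returned value outside the critical section.
        if failure_count == THRESHOLD:
            issue_warning_for(thingy_id)
        return failure_count
```

Because the check compares against the value returned from the locked section (not a later re-read), exactly one caller sees `failure_count == THRESHOLD`, which is the same property RETURNING gives you at the SQL level.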

0👍

I don't really know why you have to do this without locking; how many tasks do you have running concurrently?

However, I think there is one way to do it without explicit locking:

Add another model, for example Failure:

class Failure(models.Model):
    thingy = models.ForeignKey(Thingy)

Your *report_failure* function would then look like this:

from django.db import transaction

# Commit the current transaction so subsequent queries see
# fresh data rather than a stale snapshot.
@transaction.commit_manually
def flush_transaction():
    transaction.commit()

@transaction.commit_on_success
def report_failure(thingy_id):
    thingy = Thingy.objects.get(id=thingy_id)
    # Uncomment the following line if you find the query result
    # is cached (i.e. you are not seeing updated data):
    #flush_transaction()

    current = thingy.failure_set.count()
    if current >= THRESHOLD:
        issue_warning_for(thingy_id)
    Failure.objects.create(thingy=thingy)

I know this approach is not great because it creates a lot of Failure records, but it is the only idea I can come up with. Sorry about that.
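The row-counting approach above can be sketched without Django at all, using an in-memory SQLite table in place of the Failure model. The table name, `report_failure`, and `THRESHOLD` here are illustrative, not part of any library:

```python
import sqlite3

THRESHOLD = 3  # illustrative value

conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE failure (id INTEGER PRIMARY KEY, thingy_id INTEGER)"
)


def issue_warning_for(thingy_id):
    print(f"warning: thingy {thingy_id} has reached {THRESHOLD} failures")


def report_failure(thingy_id):
    # Count the failures recorded so far, then add one more,
    # mirroring thingy.failure_set.count() followed by
    # Failure.objects.create(thingy=thingy).
    current = conn.execute(
        "SELECT COUNT(*) FROM failure WHERE thingy_id = ?", (thingy_id,)
    ).fetchone()[0]
    if current >= THRESHOLD:
        issue_warning_for(thingy_id)
    conn.execute(
        "INSERT INTO failure (thingy_id) VALUES (?)", (thingy_id,)
    )
    conn.commit()
    return current
```

Note the inherent race this answer trades for lock-freedom: between the COUNT and the INSERT, two concurrent callers can read the same count, so the warning may fire twice or be skipped entirely.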

👤vutran
