2👍
Depending on the database type you are using, you might want to use a trigger to calculate the derived field. That way, the stored value and the columns it is derived from can never get out of sync: the field (length) is re-calculated every time start or end changes.
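As an illustration only, here is a minimal PostgreSQL trigger sketch of that idea; the table and column names (intervals, start_ts, end_ts, length) are assumptions, not from the original question:

```sql
-- Assumed table: intervals(start_ts timestamptz, end_ts timestamptz, length interval)
CREATE OR REPLACE FUNCTION set_interval_length() RETURNS trigger AS $$
BEGIN
    -- Recompute the derived column from the two source columns
    NEW.length := NEW.end_ts - NEW.start_ts;
    RETURN NEW;
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER trg_set_interval_length
    BEFORE INSERT OR UPDATE ON intervals
    FOR EACH ROW
    EXECUTE PROCEDURE set_interval_length();
```

Because the trigger runs inside the database, the column stays correct even for writes that bypass the application code.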
1👍
I’d store the length, but I’d make sure the calculation was done in my insert and update stored procedures (sprocs), so that as long as everyone uses your sprocs there is no extra overhead for them.
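A rough sketch of that idea in MySQL syntax; the table, column, and procedure names are invented for illustration, and an equivalent update procedure would recompute the value the same way:

```sql
-- Assumed table: intervals(id INT AUTO_INCREMENT PRIMARY KEY,
--                          start_ts DATETIME, end_ts DATETIME, length_secs INT)
DELIMITER //
CREATE PROCEDURE insert_interval(IN p_start DATETIME, IN p_end DATETIME)
BEGIN
    -- The derived column is computed once here, so callers never supply it
    INSERT INTO intervals (start_ts, end_ts, length_secs)
    VALUES (p_start, p_end, TIMESTAMPDIFF(SECOND, p_start, p_end));
END //
DELIMITER ;
```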
1👍
Unfortunately, neither of your target databases supports computed columns. I would do the following:

- First, determine whether you really have a performance problem. It is true that `WHERE end - start = ?` will perform more slowly than `WHERE length = ?`, but you don’t define what a “really big table” is in your application, nor what the required performance is. There is no need to optimize away a problem that may not exist.
- Determine whether you can tolerate any latency in your searches. If so, you can add the calculated column to the table and dedicate a separate task, running every five minutes, every hour, or whatever interval fits, to fill in the values.
- In PostgreSQL you could consider a materialized view, which I believe is supported at the engine level (see Catcall’s comment below, and the sketch after this list).
- Finally, if all else fails, consider using a trigger to maintain the calculated column.
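Two hedged sketches of the latency-tolerant options in PostgreSQL syntax, again with assumed table and column names (intervals, start_ts, end_ts, length): a scheduled back-fill of a stored column, and a materialized view refreshed on whatever schedule the acceptable latency allows:

```sql
-- Option 1: periodic back-fill of a stored length column (run from a scheduled job)
UPDATE intervals
SET length = end_ts - start_ts
WHERE length IS DISTINCT FROM (end_ts - start_ts);

-- Option 2: a materialized view that precomputes and indexes the value
CREATE MATERIALIZED VIEW interval_lengths AS
SELECT id, start_ts, end_ts, end_ts - start_ts AS length
FROM intervals;

CREATE INDEX interval_lengths_length_idx ON interval_lengths (length);

-- Refresh as often as the tolerated search latency allows
REFRESH MATERIALIZED VIEW interval_lengths;
```

Note that REFRESH MATERIALIZED VIEW rebuilds the whole view, so on a very large table the scheduled UPDATE of only the stale rows may be the cheaper of the two.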
Source: stackexchange.com