Be conscious of having enough threads/processes/pods to maintain availability if your application blocks while serving each HTTP request (e.g. Django). There will be some pod startup time if you're using a Horizontal Pod Autoscaler, and with a high-traffic application I had much better availability running uwsgi and the application together in each pod (same container), with a separate nginx pod doing reverse proxying and pooling requests when all the uwsgi workers were busy.
YMMV, but at the end of the day availability is more important than sticking to the single-process-per-pod rule of thumb. Just understand the downsides, such as less isolation between the processes within the same container. Logs are available on a per-container basis, so the built-in kubectl logs functionality won't separate output from anything sharing a container.
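As a sketch of the in-pod side of that setup, a uwsgi configuration along these lines keeps several blocking workers available per pod (the module name and all numbers here are illustrative placeholders, not tuned recommendations):

```ini
; uwsgi.ini -- illustrative values, tune for your own workload
[uwsgi]
module = myproject.wsgi:application  ; hypothetical Django project
http = 0.0.0.0:8000
master = true
processes = 8        ; several blocking workers per pod
threads = 2
listen = 128         ; backlog for requests queued while workers are busy
harakiri = 30        ; recycle workers stuck longer than 30s
vacuum = true
```

The separate nginx pod would then proxy to this port and absorb bursts while all workers are occupied.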
The recommended way to manage this in Kubernetes is to increase the number of Pods based on the workload's requirements.
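For example, a HorizontalPodAutoscaler (autoscaling/v2 API) can grow and shrink a Deployment's replica count from observed CPU utilization; the names and thresholds below are placeholders:

```yaml
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: django-app-hpa          # placeholder name
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: django-app            # placeholder Deployment to scale
  minReplicas: 2
  maxReplicas: 10
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 70   # add pods above ~70% average CPU
```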
We use a deployment model in which a Django-based app is served by gunicorn with a couple of worker processes. We have also scaled this pod to 2-3 replicas and seen performance improvements.
It's ultimately down to what works for your app.
The advantage of scaling pods is that you can configure it dynamically, so you don't waste resources.
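A minimal sketch of that model, assuming a hypothetical project and image name: a Deployment running gunicorn with a few workers per pod, replicated 2-3 times:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: django-app              # placeholder name
spec:
  replicas: 3                   # 2-3 replicas as described above
  selector:
    matchLabels:
      app: django-app
  template:
    metadata:
      labels:
        app: django-app
    spec:
      containers:
        - name: web
          image: registry.example.com/django-app:latest  # placeholder image
          # a couple of gunicorn workers inside each pod
          command: ["gunicorn", "myproject.wsgi:application",
                    "--workers", "3", "--bind", "0.0.0.0:8000"]
          ports:
            - containerPort: 8000
```

Pairing this Deployment with an HPA lets the replica count follow load instead of staying fixed.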