5👍
From the latest Gunicorn docs, on the command line you can use:
--log-file - ("-" means log to stderr)
--log-level debug
or in the config file you can use:
errorlog = '-'
accesslog = '-'
loglevel = 'debug'
but there is no mention of the parameter format you specified in your question:
--log-file=-
--log-level=debug
The default for the access log is None (ref), meaning access logging is disabled out of the box, so this is a shot in the dark, but it may explain why you are not receiving detailed log information. Here is a sample config file from the Gunicorn source, and the latest Gunicorn config docs.
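For instance, a minimal config file that sends both logs to the console might look like the sketch below (the filename `gunicorn.conf.py` is just a convention; pass whatever name you use with `-c`):

```python
# Minimal gunicorn.conf.py sketch: send both logs to the console
# and raise verbosity. Start with: gunicorn -c gunicorn.conf.py myapp.wsgi
errorlog = '-'       # '-' means write the error log to stderr
accesslog = '-'      # '-' means write the access log to stdout
loglevel = 'debug'   # granularity of the error log output
```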
Also, you may want to look into adjusting the LOGGING configuration in your Django settings.
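As a sketch, a minimal LOGGING dict in settings.py that routes Django's own messages to the console, where Gunicorn can capture them (the 'console' handler name is arbitrary, and the DEBUG level is an assumption; tune both to taste):

```python
# Sketch of a Django settings.py LOGGING dict. Sends the 'django'
# logger's output to stderr so it shows up alongside Gunicorn's logs.
LOGGING = {
    'version': 1,
    'disable_existing_loggers': False,
    'handlers': {
        'console': {'class': 'logging.StreamHandler'},
    },
    'loggers': {
        'django': {'handlers': ['console'], 'level': 'DEBUG'},
    },
}
```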
-1👍
While I am also looking for a good answer on how to see how many workers are busy, you are solving this problem the wrong way. For a task that takes that long you need a task queue, such as Celery backed by RabbitMQ, to do the heavy lifting asynchronously while your request/response cycle stays fast.
I have a script on my site that can take 5+ minutes to complete, and here’s a good pattern I’m using:
- When the request first comes in, spawn the task and immediately return an HTTP 202 Accepted, whose defined meaning is: “The request has been accepted for processing, but the processing has not been completed.”
- Have your front-end poll the same endpoint (every 30 seconds should suffice). As long as the task is not complete, keep returning a 202.
- When the task finally completes, return a 200 along with any data the front-end needs.
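The steps above can be sketched framework-agnostically like this (the in-memory TASK_RESULTS store and the start_task/poll_task helpers are stand-ins I made up; in production you would check a Celery AsyncResult instead):

```python
# Sketch of the 202-accepted polling pattern.
# TASK_RESULTS is a hypothetical stand-in for a real result backend:
# an absent key means the task is still running.
import uuid

TASK_RESULTS = {}  # task_id -> result data

def start_task():
    """Accept the work and return (status, task_id) immediately."""
    task_id = str(uuid.uuid4())
    # In a real app: my_task.apply_async(task_id=task_id)
    return 202, task_id

def poll_task(task_id):
    """Return (202, None) while pending, (200, data) once finished."""
    if task_id not in TASK_RESULTS:
        return 202, None
    return 200, TASK_RESULTS[task_id]
```

The front-end keeps hitting the polling endpoint with the task_id until the status flips from 202 to 200.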
For my site, we want to refresh data that is more than 10 minutes old. I pass the Accept-Datetime header to indicate the oldest data the client will accept. If our locally cached copy is older than that, we spawn the task cycle described above.
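A sketch of that freshness check, assuming the header carries an ordinary HTTP-date (the needs_refresh name and cached_at parameter are illustrative, not part of any library):

```python
# Decide whether a cached copy is too old for what the client asked for.
# Assumes the Accept-Datetime header value is an HTTP-date, e.g.
# 'Fri, 01 Jan 2021 00:00:00 +0000'.
from datetime import datetime, timezone
from email.utils import parsedate_to_datetime

def needs_refresh(cached_at, accept_datetime_header):
    """True if the cache timestamp is older than the client will accept."""
    acceptable = parsedate_to_datetime(accept_datetime_header)
    return cached_at < acceptable
```

If needs_refresh returns True, you spawn the background task and fall into the 202 polling cycle; otherwise you serve the cached copy with a 200 right away.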