Been using RQ for about a year now.
This answer depends COMPLETELY on what you're running. If your jobs are CPU/memory intensive, you obviously can't spin up many workers. For example, I do a lot of number crunching, so I run about 2, sometimes 3, RQ workers on a 2 GB RAM VPS. I'm not sure if this holds for everyone, but an idle django-rq worker eats about 150 MB of RAM from the get-go (maybe I configured something wrong). When it actually processes a job, RAM usage sometimes goes as high as 700 MB per worker.
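If you want to check what a worker process is actually consuming, you can log peak RSS from inside a job. A minimal stdlib sketch (note that `ru_maxrss` is reported in kilobytes on Linux but in bytes on macOS, which this normalizes):

```python
import resource
import sys

def peak_rss_mb():
    # Peak resident set size of the current process.
    # ru_maxrss is KB on Linux, bytes on macOS, so normalize per-platform.
    rss = resource.getrusage(resource.RUSAGE_SELF).ru_maxrss
    return rss / 1024 if sys.platform != "darwin" else rss / (1024 * 1024)

# Call (or log) this at the end of a job to see how much RAM it needed.
print(f"peak RSS: {peak_rss_mb():.0f} MB")
```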
If you pack in too many jobs, you get a JobFailed error with no clear indication of why. Because of the asynchronous nature of RQ, you really can't tell unless you put in a ton of logging or accept the overhead of measuring and collecting CPU/memory usage. Either that, or run htop and watch utilization manually.
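For a bit more visibility into why jobs die, RQ lets you attach custom exception handlers to a worker. A minimal sketch (the `log_failure` name is mine; actually attaching it requires a running Redis, so that part is commented out):

```python
import logging

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("rq.worker")

def log_failure(job, exc_type, exc_value, traceback):
    # Record which job died and why, so failures aren't silent.
    log.error("Job %s failed: %s: %s", job.id, exc_type.__name__, exc_value)
    return True  # returning True passes the job on to the next handler

# Attaching the handler (needs a running Redis, so shown commented out):
# from redis import Redis
# from rq import Queue, Worker
# worker = Worker([Queue(connection=Redis())], exception_handlers=[log_failure])
# worker.work()
```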
My recommendation:
- scale horizontally (fewer workers per server, more servers) instead of vertically (one beefy machine with tons of workers)
- limit execution time per job… a hundred 1-minute jobs are better than one 100-minute job
- use the microdict and blist modules for large CSV / list processing… they are roughly 100x more efficient in RAM / CPU usage
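The second recommendation can be sketched like this: chunk the work and enqueue each piece with a `job_timeout`, so no single job can run away. `process_chunk` and `all_rows` are hypothetical names here, and the enqueue part needs a running Redis, so it's commented out:

```python
def chunks(rows, size=1000):
    # Split a big list of rows into small, independently retryable pieces.
    for i in range(0, len(rows), size):
        yield rows[i:i + size]

# from redis import Redis
# from rq import Queue
#
# q = Queue(connection=Redis())
# for chunk in chunks(all_rows):
#     # job_timeout kills any job that runs longer than 60 seconds
#     q.enqueue(process_chunk, chunk, job_timeout=60)
```

Small jobs also mean a crash or OOM kill only loses one chunk, not the whole run.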