When multiple users access your webapp “at the same time”, it’s usually not actually at the same time. Let’s say you’re using uWSGI to serve your webapp, which means your webapp has a bunch of workers that handle requests.
Let’s take the simplest case, where you have just a single worker. In this case, everything is sequential: uWSGI automatically queues up all the requests in the order they came in and feeds them to your webapp one by one. Hopefully it’s clear that this can’t cause any race conditions or incorrect output.
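To make that concrete, here’s a minimal sketch (the filename, port, and the 10ms of pretend work are all made up for illustration):

```python
# app.py -- a toy WSGI app; run with one worker, e.g.:
#   uwsgi --http :8000 --wsgi-file app.py --processes 1
import time

def application(environ, start_response):
    # With a single worker, while this runs, every other incoming
    # request just waits in uWSGI's queue -- strictly one at a time.
    time.sleep(0.01)  # pretend this request takes ~10ms of work
    start_response("200 OK", [("Content-Type", "text/plain")])
    return [b"done\n"]
```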
However, if you’re getting, say, 10 requests a second, that one worker needs to finish each request in under 100ms on average (1 second / 10 requests). If it can’t do that, requests will start to pile up.
Also, suppose you do get 10 requests “instantaneously”, and each one takes that worker about 10ms of work. Then the first request returns in 10ms, the second in 20ms, the third in 30ms, and so on, since each one has to wait for the ones ahead of it. (This isn’t exact, since there’s also network round-trip time, but let’s ignore that for now.)
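Back-of-the-envelope, with that assumed 10ms of work per request:

```python
# 10 requests arrive in one burst; a single worker does ~10ms of work each.
service_ms = 10
for i in range(1, 11):
    print(f"request {i} finishes ~{i * service_ms}ms after the burst arrives")
# request 1 finishes ~10ms, request 2 ~20ms, ..., request 10 ~100ms
```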
Now let’s say you set up multiple workers (separate processes) to help process those requests faster. Since they are separate processes, they don’t share memory, so there still aren’t any race conditions. (Note that this changes if you do stuff like read from or write to a shared file; see the locking sketch at the end.)
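You can see the “separate memory” part for yourself with a sketch like this (the counter and filenames are hypothetical):

```python
# app.py -- each uWSGI worker process gets its OWN copy of this counter,
# so incrementing it can't race with the other workers.
import os

counter = 0

def application(environ, start_response):
    global counter
    counter += 1  # safe: no other worker shares this variable
    body = f"pid={os.getpid()} count={counter}\n".encode()
    start_response("200 OK", [("Content-Type", "text/plain")])
    return [body]

# Run with e.g. `uwsgi --http :8000 --wsgi-file app.py --processes 4`;
# repeated requests show different pids, each keeping its own count.
```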
Here you can have multiple workers processing requests in parallel. But it’s still just uWSGI maintaining a queue of requests, handing one to each worker whenever that worker finishes its current request and frees up.
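As for the file caveat above: once workers share state through a file, two of them can both read the same value and both write back the same increment, losing an update, so you need cross-process coordination. One common approach is an OS-level advisory lock; here’s a sketch using fcntl (POSIX-only, and the filename is made up):

```python
import fcntl
import os

def bump_counter(path="counter.txt"):
    fd = os.open(path, os.O_RDWR | os.O_CREAT)  # create on first use
    with os.fdopen(fd, "r+") as f:
        fcntl.flock(f, fcntl.LOCK_EX)  # block until no other worker holds the lock
        n = int(f.read() or 0)
        f.seek(0)
        f.truncate()
        f.write(str(n + 1))
    # lock is released automatically when the file is closed
```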