0👍
The execution flow should be the same as when you run it on your local machine. Take the following code:
import tornado.ioloop
import tornado.web

print('This is executed only once and global')
name = 'myname'


class MainHandler(tornado.web.RequestHandler):
    print('This is executed only once')

    def get(self):
        print('This is executed for each request')
        self.write("Hello, world %s " % name)


application = tornado.web.Application([
    (r"/", MainHandler),
])

if __name__ == "__main__":
    application.listen(8888)
    tornado.ioloop.IOLoop.instance().start()
When we run it and load the page, we’ll get the following output:
$ ./bin/python server.py
This is executed only once and global
This is executed only once
This is executed for each request
WARNING:tornado.access:404 GET /favicon.ico (::1) 0.52ms
This is executed for each request
WARNING:tornado.access:404 GET /favicon.ico (::1) 0.25ms
This means the code outside the class body is executed once, when the server starts, and any state kept there stays around until the server is restarted. The class body of MainHandler is also executed only once, when the class is defined. For each incoming request, Tornado creates a handler instance and calls its get method, so only the code inside get starts from a fresh state on every request. I assume ‘client’ in your question means an incoming web request.
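To make that concrete, here is a minimal sketch (not part of your code) that counts requests in a module-level variable; the names request_count and CounterHandler are made up for illustration:

import tornado.ioloop
import tornado.web

request_count = 0  # module-level: one copy shared by every request in this process


class CounterHandler(tornado.web.RequestHandler):
    def get(self):
        global request_count
        request_count += 1                    # survives from one request to the next
        greeting = 'fresh on every request'   # local variable: rebuilt per request
        self.write("request #%d, local state is %s" % (request_count, greeting))


if __name__ == "__main__":
    tornado.web.Application([(r"/", CounterHandler)]).listen(8888)
    tornado.ioloop.IOLoop.instance().start()

Hitting the page repeatedly shows request_count climbing while the local variable is rebuilt each time.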
Usually in a normal Python WSGI application, the way to keep shared state between functions without explicitly passing parameters down the call chain is to use a thread-local object. I don’t know much about Tornado, but from a brief read it seems to serve requests from a single-threaded event loop rather than one thread per request, so thread-locals won’t isolate per-request state; you’ll have to consult the docs for the recommended way of sharing state between functions.
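I can’t say this is the officially recommended way, but one pattern Tornado supports is passing shared objects to handlers through the third element of the URL spec and picking them up in initialize(). A rough sketch under that assumption (the shared_cache dict is just for illustration):

import tornado.ioloop
import tornado.web


class MainHandler(tornado.web.RequestHandler):
    def initialize(self, shared_cache):
        # Tornado calls initialize() with the keyword arguments given in the
        # URL spec below, once per new handler instance (i.e. per request).
        self.shared_cache = shared_cache

    def get(self):
        self.shared_cache['hits'] = self.shared_cache.get('hits', 0) + 1
        self.write("hits so far in this process: %d" % self.shared_cache['hits'])


if __name__ == "__main__":
    shared_cache = {}  # one dict shared by every request handled by this process
    application = tornado.web.Application([
        (r"/", MainHandler, dict(shared_cache=shared_cache)),
    ])
    application.listen(8888)
    tornado.ioloop.IOLoop.instance().start()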
1👍
Each client will have its own running version. Definitely.
If you want some sort of global variables shared across processes, you should use an inter-process communication tool (message passing, synchronization primitives, shared memory, or RPC). Redis, for example.
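A rough sketch of that idea with the redis-py client (this assumes a Redis server on localhost:6379; the key name 'visits' is arbitrary):

import redis
import tornado.ioloop
import tornado.web

# assumes a Redis server listening on localhost:6379 and the redis-py package
store = redis.Redis(host='localhost', port=6379)


class VisitHandler(tornado.web.RequestHandler):
    def get(self):
        visits = store.incr('visits')  # atomic increment, visible to every process
        self.write("total visits across all processes: %d" % visits)


if __name__ == "__main__":
    tornado.web.Application([(r"/", VisitHandler)]).listen(8888)
    tornado.ioloop.IOLoop.instance().start()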
0👍
With Tornado you’ll have at least one process per machine/VM (Heroku calls these “dynos”); in multicore environments you’ll want to run multiple processes per machine (one per core). Each process handles many users, so in the simple case where there is only one process you can use global variables to share state between users, although as you grow to multiple dynos and processes you’ll need some sort of inter-process communication.
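For what it’s worth, here is a rough sketch of the one-process-per-core setup on a single machine using HTTPServer’s forking start(); the counter global is only there to show that each forked process keeps its own independent copy of module-level state:

import os

import tornado.httpserver
import tornado.ioloop
import tornado.web

counter = 0  # every forked process gets its own independent copy of this


class MainHandler(tornado.web.RequestHandler):
    def get(self):
        global counter
        counter += 1
        self.write("pid %d has served %d requests" % (os.getpid(), counter))


if __name__ == "__main__":
    application = tornado.web.Application([(r"/", MainHandler)])
    server = tornado.httpserver.HTTPServer(application)
    server.bind(8888)
    server.start(0)  # 0 means fork one server process per CPU core
    tornado.ioloop.IOLoop.instance().start()

Because the counters diverge between PIDs, anything that must be consistent across processes (or dynos) has to go through an external store such as Redis, as suggested above.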