[Django]-Python Django Global Variables

24👍

You mustn’t declare global variables. Settings (constants) are OK if done right. But variables violate the shared-nothing architecture and might cause a lot of trouble (in the best case they’ll be inconsistent).

I would simply store those statistics in the cache. (Well, actually I would store them in the database, but you made it clear you believe that will have a negative impact on performance, so…)

The cache framework’s incr() and decr() methods are especially suitable for counting. See the docs for more info.
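A minimal sketch, assuming a cache backend that supports atomic increments (the key name hits is arbitrary):

from django.core.cache import cache

def count_hit():
    # create the key once; add() does nothing if the key already exists
    cache.add('hits', 0)
    # incr() is atomic on backends such as Memcached and Redis
    return cache.incr('hits')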

👤muhuk

76👍

Why mustn’t one declare global variables? O_o. It just looks like propaganda. If the author knows what he wants and what the side effects will be, why not? Maybe it’s just a quick experiment.

You could declare your counter as a model class member. Then, to deal with the race condition, you have to add a method that waits while some other client, from another thread, is working with the counter. Something like this:

import threading

from django.db import models


class MyModel(models.Model):
    # class-level (per-process) counter shared by all instances
    _counter = 0
    _counter_lock = threading.Lock()

    @classmethod
    def increment_counter(cls):
        # the lock serializes increments across threads
        with cls._counter_lock:
            cls._counter += 1

    def some_action(self):
        # core code
        self.increment_counter()


# somewhere else
print(MyModel._counter)

Remember however: you have to run your application in a single process. So if you’ve deployed the application under Apache, be sure it is configured to spawn many threads, but not many processes. If you’re experimenting with ./manage.py runserver, no action is required.
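For example, under mod_wsgi a single daemon process with many threads could be configured along these lines (the process group name and paths are illustrative):

# one process, many threads: the class-level counter stays consistent
WSGIDaemonProcess myproject processes=1 threads=15
WSGIProcessGroup myproject
WSGIScriptAlias / /path/to/myproject/wsgi.py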

👤nkrkv

3👍

I would create a “config.py” file in the project root directory and put all global variables inside:

x = 10
my_string = ''

In “views.py”:

from your_project import config

def my_view(request):
    y = config.x + 20
    my_title = config.my_string
    ...

The benefit of creating this file is that the variables can be shared across multiple .py files, and they are easy to manage.

👤Ken

0👍

If you want to avoid the hassle of the Django database, e.g. migrations or performance issues, you might consider the in-memory database Redis. Redis guarantees consistency even if there are multiple Django processes.
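A minimal sketch using the redis-py client, assuming a Redis server on localhost:6379 (the key name request_count is arbitrary):

import redis

r = redis.Redis(host='localhost', port=6379, db=0)

def count_request():
    # INCR is atomic on the Redis server, so concurrent Django
    # processes can all increment the same counter safely
    return r.incr('request_count')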

0👍

You can use variables from settings.py.

See the example below. It’s an app that counts requests.

settings.py

REQ_COUNTER = 0

views.py

from django.http import HttpResponse

# replace your_project with your project folder name
from your_project.settings import REQ_COUNTER

def count_req(request):
    global REQ_COUNTER
    # note: this rebinds this module's per-process copy of the value;
    # it does not write back to settings.py
    REQ_COUNTER = REQ_COUNTER + 1
    return HttpResponse(REQ_COUNTER)

Thanks 🙂

0👍

multiprocessing.Manager may be used to share variables between all Django sub-processes started by a main process:

from multiprocessing import Manager

from rest_framework.decorators import api_view
from rest_framework.response import Response

# shared integer stored in the Manager's server process
total_count = Manager().Value('i', 0)

@api_view(['GET'])
def hello_count(request):
    # note: this read-modify-write is not atomic, so concurrent
    # requests may occasionally lose an increment
    total_count.value = total_count.value + 1
    return Response(data={'count': total_count.value})

total_count will be increased across all running processes (checked with multiple workers run by the gunicorn app server).

However, if you run python manage.py shell, total_count will be 0 for that process, because it is a separate master process with its own memory space. If you want the same shared value for all processes inside a server (or even across multiple servers), you have to use an in-memory DB (Redis, etc.), which is the better and more stable practice for production environments.

👤rzlvmp
