5👍
Basically your problem here is that browsers use a request-response pattern: they send a request and then get back an answer immediately. You have two options: polling the server periodically, or some kind of notification system.
Notifications could use long polling (the client makes a request and the server doesn’t respond until there’s data), WebSockets, or HTML5 server-sent events.
Now, the thing is that these notification systems don’t integrate too well with a traditional Django deployment, since they result in an open socket and a corresponding hanging thread. So if your webserver has 10 Django threads, one browser with 10 tabs could tie up all of them.
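To make the long-polling mechanics concrete, here is a minimal framework-free sketch (names like `LongPollChannel` are illustrative, not any Django API). Note how `wait_for_data` blocks its thread until data arrives — exactly the hanging thread described above.

```python
import threading

class LongPollChannel:
    """Minimal long-polling core: a waiting request blocks until data
    is published or the timeout expires."""

    def __init__(self):
        self._event = threading.Event()
        self._payload = None

    def publish(self, payload):
        # Server side: store the data and wake up any waiting request.
        self._payload = payload
        self._event.set()

    def wait_for_data(self, timeout=30.0):
        # Request handler side: block until publish() is called, then
        # return the data; return None if the timeout expires first.
        if self._event.wait(timeout):
            self._event.clear()
            return self._payload
        return None
```

In a real deployment the timeout matters: the client re-issues the request after each empty response, so a hung proxy or dropped connection only costs one poll interval.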
Work is underway to change that, but in the meantime, unless you have a hard real-time requirement or lots of clients, just set a timer and poll every x seconds, where x depends on what latency is acceptable. Depending on how your data is stored, I would probably put in a simple mechanism so that the server doesn’t send the whole dataset each time, but only either what’s new or a carry-on-nothing-changed return code.
For instance, on the first request, the server may put in a timestamp or serial number in the response, and then the client asks for any changes since that timestamp/serial number.
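The serial-number scheme above can be sketched like this (a toy in-memory version; `ChangeLog` and its method names are illustrative, not a Django API):

```python
import itertools

class ChangeLog:
    """The server tags every change with an increasing serial number;
    clients poll with the last serial they saw and get back only newer
    changes, plus the serial to use on the next poll."""

    def __init__(self):
        self._serial = itertools.count(1)
        self._changes = []  # list of (serial, item)

    def record(self, item):
        self._changes.append((next(self._serial), item))

    def changes_since(self, last_serial):
        # Poll response: only items newer than last_serial. An empty
        # "items" list is the carry-on-nothing-changed case.
        new = [(s, item) for s, item in self._changes if s > last_serial]
        latest = new[-1][0] if new else last_serial
        return {"serial": latest, "items": [item for _, item in new]}
```

Because the client always echoes back the last serial it received, a dropped response is harmless: the next poll simply returns the same changes again.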
A notification system gives you better latency with lower overhead, but it is probably also going to be more difficult to deploy, and will probably be overkill if this is just an app for internal use. Even with a notification system, you need to do some careful protocol design to be sure not to miss something.
4👍
In my opinion it’s okay to use something like a heartbeat timer in your frontend which triggers a data fetch every second or so, especially if you have implemented caching on the backend.
A more sophisticated version could use something like Django Channels to handle the communication via WebSockets.
I would say it depends on the maturity of the project.
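The backend-caching part of the heartbeat approach can be sketched as a small wrapper (pure-Python toy; `cached`, `fetch`, and `ttl` are illustrative names, not a Django API — in a real project you would use Django’s cache framework instead):

```python
import time

def cached(fetch, ttl=1.0):
    """Wrap an expensive fetch so repeated heartbeat polls within `ttl`
    seconds reuse the last result instead of hitting the database."""
    state = {"at": 0.0, "value": None}

    def wrapper():
        now = time.monotonic()
        if now - state["at"] >= ttl:
            state["value"] = fetch()
            state["at"] = now
        return state["value"]

    return wrapper
```

With a one-second heartbeat and a one-second TTL, many browser tabs polling at once still cost roughly one real query per second.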
1👍
Something like this worked for me:
# views.py
from django.shortcuts import render
from bokeh.client import pull_session
from bokeh.embed import autoload_server

def my_line_chart(request):
    session = pull_session(url="http://localhost:5006/myapp")
    bokeh_script = autoload_server(None, url="http://localhost:5006/myapp",
                                   session_id=session.id)
    return render(request, 'line_charts.html', {'the_script': bokeh_script})
Then, in your Bokeh server app, use source.stream():
# myapp.py
from bokeh.io import curdoc
# ... everything else here (figure `p`, ColumnDataSource `source`, etc.) ...

def update():
    new_data = qu()  # qu() returns the new data to stream
    source.stream(new_data, rollover=60)  # keep only the last 60 points
    print(source.data)  # if you want to see the new data

curdoc().add_root(p)
curdoc().add_periodic_callback(update, 10000)  # call update() every 10000 ms
Then start your Bokeh server, allowing connections from Django:
bokeh serve --allow-websocket-origin=127.0.0.1:8000 myapp.py  # you can add app2.py too
I used port 8000 because that is my Django port, and port 5006 in views.py because that is the Bokeh server’s (Tornado) port.
Check the ColumnDataSource class for more details.
Hope it helps.