Real-time notifications using Python and Dojo

On one side there is TornadoWeb, which is async and non-blocking; on the other side there is Dojo. If I use Tornado, how can I communicate with Dojo?
The other problem: if I use a WSGI solution like Flask, can I implement notifications with it? Or does Dojo need an "open connection" to talk to the server, which WSGI cannot provide? In other words, will Apache or CherryPy not work with Dojo?
And if WSGI can't push to Dojo, what about using Atom feeds to implement notifications under WSGI?
NB: the notifications will be divided in two: notifications about products, sent to all users, and notifications about specific users; the latter will use sessions...
And one last question: what about WebSockets and HTML5? Must the server be compatible to use this option with the browser?

I'm not sure why Dojo seems to be the problem in the communication.
Dojo provides AJAX wrappers which you can use for near-real-time notifications in a web app under light load, by making an AJAX request every 1-5 seconds.
If the app will have a lot of users, frequent AJAX requests can quickly cause too much overhead. Fortunately, you don't have to use Dojo to communicate with the server. You could have a look at Socket.IO and, if you want to stick to Python on the server side, gevent-socketio. It uses the best technology available in the web browser (WebSockets, Flash sockets, comet) to provide real-time communication.
There is also dojox.socket but I think it's less robust (and far less popular).
You should remember, however, that by using any kind of persistent connection (be it WebSockets, Socket.IO or dojox.socket) you need an asynchronous server able to maintain many simultaneous connections.
The solution you choose should depend on the web app itself and its user base.
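For a concrete picture, here is a minimal sketch of an asynchronous notification server in Tornado (mentioned in the question); the /notifications URL and the global client registry are my own assumptions, not a drop-in implementation:

    # Sketch of a Tornado WebSocket notifier; URL and client registry
    # are assumptions for illustration.
    import tornado.ioloop
    import tornado.web
    import tornado.websocket

    clients = set()  # all currently connected browsers

    class NotificationHandler(tornado.websocket.WebSocketHandler):
        def open(self):
            clients.add(self)       # register the new connection

        def on_close(self):
            clients.discard(self)   # forget it when the browser leaves

    def broadcast(message):
        # Push a product notification to every connected user.
        for client in clients:
            client.write_message(message)

    app = tornado.web.Application([(r"/notifications", NotificationHandler)])
    app.listen(8888)
    tornado.ioloop.IOLoop.current().start()

Per-user notifications would key the registry by session ID instead of broadcasting to the whole set.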

Related

Server Sent Events and Ajax VS Websockets and Ajax

I am creating an application (Nuxt.js) and am having trouble determining a good approach for sending data to the API (Express.js) and retrieving real-time updates. It seems that I can create "bi-di" connections with both protocols [Server-Sent Events (SSE) plus Axios, or WebSockets (WS)].
Both technologies work with most browsers, so I do not see a need to add additional libraries such as Socket.IO (for those individuals who do not have a current browser: too bad).
The application is based on user input of form data/clicks. Other users are then notified/updated with the information. At that point, a user can respond, and the chain goes on (a basic chat-like flow: some information will be exchanged quickly, some rarely or never).
In my experience, the user flow relies more heavily on listening for changes than on actually changing the data, which is why I'm considering SSE. Unfortunately, both protocols have their flaws.
WebSockets:
Not all components will require a WS to get/post information, so it doesn't make sense to upgrade a basic HTTP connection at additional server expense. Therefore another method besides WS will be required (Axios/SSR). Example: checking whether a user name exists.
Security firewalls may prevent WS from operating properly.
express-ws makes sockets easy on the API end.
I believe you can have more than 6 concurrent connections per user (which may be a pro and a con).
Server-Sent Events:
Seems like the technology is fading in favor of WS.
Listening to SSE events seems to be as easy as listening to WS events.
No need to upgrade the connection, but I will have to use node-spdy within the Express.js API. This may also be a good implementation for WS due to multiplexing.
A little more backend code to set up HTTP/2 and emit the SSEs (ugly code as well, so helper functions will be written).
Limited by HTTP/1.1's cap of 6 concurrent connections per domain, which is a problem as users could easily max this out (e.g. by having multiple chat windows open).
TLDR
The application will be more "feed"-oriented, with occasional posting (which can be handled by Axios). However, users will be listening to multiple "feeds", and the HTTP connection limit will be a problem. I do not know what the solution would be, because SSE seems like the better option since I do not need to continually handshake. If this handshake were truly inconsequential (which, from everything I have read, isn't the case), then WS would likely be a better alternative. Unfortunately, there is so much conflicting information regarding the two.
Thoughts?
SSE, Web Sockets, and normal HTTP requests (via AJAX or Fetch API) are all different tools for different jobs.
SSE
Unidirectional, from server to client.
Text-based data only. (Anything else must be serialized, e.g. as JSON.)
Simple API, widely compatible, auto-reconnects, has built-in provision for catching up on possibly missed events.
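As a sketch of what the SSE wire format looks like, here is a minimal endpoint (Flask is an arbitrary choice, and the /events URL is a placeholder):

    # Sketch of an SSE endpoint; the "id:" field is what lets the
    # browser send Last-Event-ID on reconnect to catch up.
    import json
    import time
    from flask import Flask, Response

    app = Flask(__name__)

    def event_stream():
        event_id = 0
        while True:
            event_id += 1
            payload = json.dumps({"tick": event_id})
            yield f"id: {event_id}\ndata: {payload}\n\n"
            time.sleep(1)

    @app.route("/events")
    def events():
        return Response(event_stream(), mimetype="text/event-stream")

In the browser, new EventSource("/events") handles parsing and reconnection automatically.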
Web Sockets
Bi-directional.
Text or binary data.
Requires you to implement your own meaning for the data sent.
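That "own meaning" usually ends up as a small JSON envelope. A sketch with the Python websockets package (assuming a recent version; the message types are my own convention):

    # Sketch: a JSON envelope gives WebSocket frames their meaning.
    import asyncio
    import json
    import websockets

    async def handler(websocket):
        async for raw in websocket:
            message = json.loads(raw)
            if message.get("type") == "chat":
                reply = {"type": "chat", "payload": message["payload"]}
            else:
                reply = {"type": "error", "payload": "unknown type"}
            await websocket.send(json.dumps(reply))

    async def main():
        async with websockets.serve(handler, "localhost", 8765):
            await asyncio.Future()  # run forever

    asyncio.run(main())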
Standard HTTP Requests
Client to Server or Server to Client, but only one direction at a time.
Text or binary data.
Requires extra effort to stream a server-to-client response in realtime (see the sketch after this list).
Streaming from client-to-server requires that the entire data be known at the time of the request. (You can't do an event stream, for example.)
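That "extra effort" for server-to-client streaming looks roughly like this: a chunked response the client must parse itself, with no framing or reconnection built in (Flask again, with a hypothetical /stream URL):

    # Sketch of streaming a plain HTTP response in chunks.
    import time
    from flask import Flask, Response

    app = Flask(__name__)

    @app.route("/stream")
    def stream():
        def generate():
            for i in range(10):
                yield f"row {i}\n"   # one chunk per second
                time.sleep(1)
        return Response(generate(), mimetype="text/plain")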
How to decide:
Are you streaming event-like data from the server to the client? Use SSE. It's purpose-built for this and is a dead simple way to go.
Are you sending data in only one direction, and you don't need to spontaneously notify clients of something? Use a normal HTTP request.
Do you need to send bidirectional data with a long-term established connection? Use Web Sockets.
From your description, it sounds like either SSE or Web Sockets would be appropriate for your use case. I'd probably lean towards SSE, while sending the random API calls from the client with normal HTTP requests.
"I do not know what the solution would be, because SSE seems like the better option since I do not need to continually handshake. If this handshake were truly inconsequential (which, from everything I have read, isn't the case), then WS would likely be a better alternative."
Keep in mind that you can simply configure your server with HTTP keep-alive, making this point moot.
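On the client side the same effect comes from connection pooling; for example, with Python's requests library a Session reuses one TCP (and TLS) connection across calls (the URL is a placeholder):

    # Sketch: keep-alive means repeated requests skip the handshake.
    import requests

    session = requests.Session()   # pools connections per host
    for _ in range(5):
        r = session.get("https://example.com/api/poll")  # placeholder
        print(r.status_code)       # same underlying socket each time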
I personally avoid using WebSockets for two-way communication between client and server.
I try to use sockets to broadcast data from the server to users or to a single user (socket) so they get real-time updates, but for POST requests from client to server I tend to use Axios or something similar, because I don't want to pass sensitive data (like access keys, etc.) over the socket.
My data flow goes something like this:
The user posts data to the server using Axios, SSE, or whatever.
The backend server does what it has to and notifies the socket server that an event has occurred.
The socket server then notifies whoever it has to.
My problem with using sockets to send data from client to server is authentication. Technically, you can't pass anything through a socket that is not available to client-side JavaScript, meaning that to authenticate the action you would have to send sensitive information through the WebSocket. This is an issue for multiple reasons: if your sensitive data can be accessed by client-side JS, a whole class of attacks becomes possible, and someone could also eavesdrop on the communication between the WS and the client. This is why I use API calls (Axios, etc.) and store sensitive data in http-only cookies.
So once the server wants to notify the user that something has happened, you can easily do that by telling the WebSocket server to send the data to that user.
You also want to keep your API server stateless, meaning no sockets in your API. I use a separate server just for WebSocket connections, and my API server and WebSocket server communicate using Redis. Pub/sub is a really neat feature for internal server communication and state management (see the sketch below).
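A minimal sketch of that Redis hand-off (the channel name and message shape are my own assumptions; shown in Python for brevity):

    # Sketch: the API server publishes, the WebSocket server subscribes.
    import json
    import redis

    # --- in the API server, after handling the POST ---
    r = redis.Redis()
    r.publish("notifications", json.dumps({"user": 42, "event": "new_message"}))

    # --- in the WebSocket server process ---
    sub = redis.Redis().pubsub()
    sub.subscribe("notifications")
    for message in sub.listen():
        if message["type"] == "message":
            event = json.loads(message["data"])
            # look up this user's socket and push the event to it
            print("forward to user", event["user"])

The same pattern extends to the multiple-feeds point below: one Redis channel per feed, all multiplexed over the user's single WebSocket.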
And to answer your question regarding multiple connections: you can use a single connection between your WebSocket server and the client, and broadcast data using channels. So one channel would be for the notification feed, another channel could be for the story feed, and so on.
I hope this makes sense to you. This stack has worked really well for me.

Any other option to get realtime data from server than API calls or Sockets?

Simple question: are there any other options for fetching data from server to client to achieve real-time refresh (for example, a real-time table) other than:
Calling an API request inside some loop
Subscribing to a real-time WebSocket server
I mean these as the core options. Sure, there are many libraries and patterns, but they seem to use one of those two methods.
For Web Applications (browser clients):
There's SSE (Server-Sent Events, a.k.a. EventSource), WebSockets, and polling (short/long).
Other than that, you'll be working with non-standard solutions (e.g. Flash sockets, etc.).
IMHO, WebSockets are the best for realtime updates and there are plenty of tools to make the development easy enough.
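Of the polling variants, long polling is the one that feels closest to push: the server holds the request open until an event arrives or a timeout expires. A sketch (Flask, with a hypothetical shared queue):

    # Sketch of a long-polling endpoint.
    import queue
    from flask import Flask, jsonify

    app = Flask(__name__)
    events = queue.Queue()   # filled elsewhere by the application

    @app.route("/poll")
    def poll():
        try:
            event = events.get(timeout=25)    # block up to 25 seconds
            return jsonify({"event": event})
        except queue.Empty:
            return jsonify({"event": None})   # client simply re-polls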

Unconventional to bundle web socket server with REST API?

For an enterprise REST API (PHP in this case), is it bad practice to include a WebSocket server alongside the REST API? The pairing of the two makes a nice mix with event-dispatching services, but I'm not sure whether these two services are different enough to warrant separation. I guess the only con I can see at the moment would be that if the REST API were to go down, your WebSocket servers would also be down, which removes the possibility of fail-over for any connected clients, or something to that degree.
If you're looking for a really robust way to manage web sockets, check out http://faye.jcoglan.com/ - it has libraries for JavaScript, Ruby, etc, and runs independently of your other servers.
If you don't need that kind of resilience, then I wouldn't worry about mixing your REST and web socket APIs on the same server.

GCM (Google Cloud Messaging) Bulk with Linux

Does anyone have an idea about the best way (implementation) to send bulk Google Cloud Messaging notifications from a Linux server? (Personally I'd prefer a non-Java implementation.) Any help, link, or suggestion is appreciated.
Edit
I didn't try any method for bulk messaging. I know there is a PHP implementation for GCM too, but I'd like to know what to consider before going with an implementation. For example: how to handle failed messages, whether there is any limit on HTTP requests to the GCM server, etc.
Finally, I found the best answer to my own question. You can send a message to 1,000 Google Cloud Messaging recipients using one HTTP request. Sending bulk messages shouldn't be that complicated: any language or tool capable of sending the appropriate HTTP request to the GCM server is enough.
"GCM allows you to attach up to 1,000 recipients to a single message, letting you easily contact large user bases quickly when appropriate, while minimizing the work load on your server."
As shown by this example, it seems that the server-side code can even be written in C#. This question also confirms that the approach works. Other people seem to be able to set up standalone Java applications, as shown here.
If you have to set up a Linux server to send GCM push notifications, you can freely choose between C# and Java at your own discretion.
As far as C/C++ is concerned, however, things are a little more complicated. This question (PHP) shows that GCM notifications can be sent using cURL, so I suspect that a C/C++ implementation using libcurl would be possible. However, you'll have to tweak it yourself, given that it does not seem to be the "standard way" to use GCM.
If you are familiar with PHP, then implement it in PHP. Since GCM only needs a couple of HTTPS requests, you can easily implement it in any language, even batch processing with curl (I am using this for testing). You can find the calls here.
Note that you need a curl.exe that is capable of doing HTTPS. The link from Avio's answer shows you how to do that in PHP; stick to that and do not use C++.
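To make the 1,000-recipient point concrete, here is a sketch in Python using requests against the legacy GCM HTTP endpoint; the API key and registration IDs are placeholders, and real code would retry failures with backoff:

    # Sketch: one HTTPS POST can address up to 1,000 registration IDs.
    import requests

    API_KEY = "YOUR_SERVER_API_KEY"                           # placeholder
    registration_ids = ["device_token_1", "device_token_2"]   # up to 1,000

    resp = requests.post(
        "https://android.googleapis.com/gcm/send",
        headers={
            "Authorization": f"key={API_KEY}",
            "Content-Type": "application/json",
        },
        json={
            "registration_ids": registration_ids,
            "data": {"message": "hello"},
        },
    )
    result = resp.json()
    # Collect the IDs that failed so they can be retried later.
    failed = [
        reg_id
        for reg_id, item in zip(registration_ids, result["results"])
        if "error" in item
    ]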

What is a robust browser-based method for uploading (very) large files?

I'm looking at replacing our current file-upload solution, a bespoke Java application which transmits files and metadata using SFTP, with a browser-based solution. My intention is to have finer-grained control over who can and cannot upload by tying the upload to an authenticated session in a web app. This will also enable me to collect reliable data about who uploaded what and when, etc., in a straightforward manner.
My concern is that we need to be able to support uploading huge files: think 100 GB or more. As such, I don't think standard HTTP is appropriate; I don't trust it to be reliable, and I want to be able to provide user feedback such as progress bars.
The best idea I've come up with so far is an embedded applet which uses SFTP to push the file, but I'd like to do this using only JS or similar if at all possible.
There is a project that aims to enable resumable uploads: https://tus.io/.
Its client library provides a progress bar and resume-on-interruption in the browser.
You can integrate the server part into your app to manage authentication yourself, while still benefiting from resumability!
Here is a blog post, https://tus.io/blog/2018/09/25/adoption.html, in which they mention it being used by Cloudflare.
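For a feel of what the tus protocol does underneath, here is a sketch of the core flow over plain HTTP in Python; the server URL is a placeholder, and a real client would use the official libraries:

    # Sketch of the tus 1.0 flow: create an upload, then send the
    # file in PATCH chunks tracked by Upload-Offset.
    import os
    import requests

    SERVER = "https://tus.example.com/files/"   # placeholder URL
    path = "huge-file.bin"
    size = os.path.getsize(path)

    # 1. Create the upload; the server answers with its Location.
    create = requests.post(SERVER, headers={
        "Tus-Resumable": "1.0.0",
        "Upload-Length": str(size),
    })
    upload_url = create.headers["Location"]

    # 2. Send chunks; after an interruption, a HEAD request returns
    #    the current Upload-Offset so the client can resume there.
    offset = 0
    with open(path, "rb") as f:
        while offset < size:
            chunk = f.read(5 * 1024 * 1024)     # 5 MB per request
            patch = requests.patch(upload_url, data=chunk, headers={
                "Tus-Resumable": "1.0.0",
                "Upload-Offset": str(offset),
                "Content-Type": "application/offset+octet-stream",
            })
            offset = int(patch.headers["Upload-Offset"])
            print(f"{offset}/{size} bytes")     # progress feedback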