First off, I'm not too familiar with Restlet, I'm just starting out. I wanted to implement a broadcast chat room where a message sent by one client would be broadcast to all other clients.
My attempt was to use a resource on the server side to which the client would send the message (as a String) using POST. The other clients would constantly have to poll this resource to receive the message. I know this method must be horribly inefficient.
I was wondering if there is a better method where a change on the server side (in this case the arrival of the string message) would result in the server alerting the clients of this update.
Some of this will come in version 2.1 with the new NIO connector. Within a web page, you might consider using technologies like Comet or HTML5 WebSockets.
See the specification page on the Restlet developer wiki: http://wiki.restlet.org/developers/172-restlet/g3/354-restlet.html
Thierry
I have a WinForms application and I want to receive inbound SMS using Twilio. I am using VB.NET. The code samples I find on the Twilio website use web applications and MVC. Can anyone help me figure out how to use it in WinForms?
I did not find enough to try anything out.
Twilio uses a standard way of notifying your service, called webhooks. When an SMS, phone call, or something else happens that you have configured a webhook for, Twilio sends an HTTP request to the URL configured as the webhook.
This does mean that you have to have a publicly reachable web server that can accept those HTTP requests with the details of the SMS, phone call, etc. That's why the samples use ASP.NET, as this is only possible with web technology.
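To make the webhook idea concrete, here is a minimal sketch of a webhook receiver, written in TypeScript with Express purely for illustration (the route path is an assumption; Twilio's own .NET samples use ASP.NET controllers instead):

import express from "express";

const app = express();
// Twilio sends webhook data as application/x-www-form-urlencoded
app.use(express.urlencoded({ extended: false }));

// Hypothetical route; this URL must be publicly reachable and configured
// as the inbound-message webhook for your Twilio phone number.
app.post("/sms", (req, res) => {
  const from = req.body.From;   // sender's phone number
  const body = req.body.Body;   // text of the inbound SMS
  console.log(`Inbound SMS from ${from}: ${body}`);
  // Respond with empty TwiML so Twilio doesn't send a reply SMS
  res.type("text/xml").send("<Response></Response>");
});

app.listen(3000);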
WinForms runs on your computer and doesn't expose any public web endpoints to receive the webhook HTTP requests, so you can't receive them directly. However, depending on your use case, you have options.
If you don't need real-time updates, you can read the message history using the Twilio C# .NET SDK.
In your WinForms app, you could add a button to refresh the messages on click. Alternatively, you could query the messages every X seconds to give it a more real-time feel, even though it isn't really real-time.
Warning: to use the Twilio API to get the messages, you'll need to embed your Twilio credentials in your WinForms app. Anyone who has access to your app will be able to read those credentials. Keep that security risk in mind!
The second option is to use ASP.NET to receive the webhook HTTP requests, and then use SignalR or WebSockets to notify any connected clients, of which your WinForms app would be one.
For example, when Twilio receives an SMS, the ASP.NET application receives the webhook HTTP request, the ASP.NET app then sends the SMS details to all clients connected to your SignalR hub, and your WinForms app receives the SMS payload which it can use to update its UI.
The second option is a lot more work and requires more infrastructure, since the ASP.NET app needs to be hosted somewhere. We don't have a tutorial for that, but I'd be happy to forward more links to docs etc. if you have questions.
I am creating an application (Nuxt.js) and am having trouble determining a good approach for sending data to the API (Express.js) and retrieving real-time updates. It seems that I can get bidirectional communication with either approach: Server-Sent Events (SSE) combined with Axios, or WebSockets (WS).
Both technologies work with most browsers, so I do not see a need to add additional libraries such as socket.io - for those individuals that do not have a current browser, too bad.
The application is based on user input of form data/clicks. Other users are then notified/updated with the information. At that point, a user can respond and the chain goes on (a basic chat-like flow; some information will be exchanged quickly while some may not be, or ever).
In my experience, the user flow relies more heavily on listening for changes than on actually changing the data - hence why I'm considering SSE. Unfortunately, both approaches have their flaws.
Websockets:
Not all components will require a WS to get/post information, so it doesn't make sense to upgrade a basic HTTP connection at the additional server expense. Therefore another method besides WS will be required (Axios/SSR). Example: checking to see if a user name exists
Security firewalls may prevent WS from operating properly
express-ws makes sockets easy on the API end (see the sketch after this list)
I believe you can have more than 6 concurrent connections per user (which may be both a pro and a con)
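For reference, a minimal sketch of the express-ws wiring mentioned above (route and echo behavior are illustrative):

import express from "express";
import expressWs from "express-ws";

// express-ws patches the Express app with a .ws() route handler
const { app } = expressWs(express());

// Hypothetical chat route: echo every message back to the sender
app.ws("/chat", (ws, _req) => {
  ws.on("message", (msg) => {
    // A real chat room would broadcast to other clients instead
    ws.send(`echo: ${msg}`);
  });
});

app.listen(3000);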
Server Sent Events
The technology seems to be fading in favor of WS
Listening to the events seems to be as easy as listening to events with WS
No need to upgrade the connection, but I will have to use node-spdy within the Express.js API - this may also be a good setup for WS due to multiplexing
A little more backend code to set up HTTP/2 and emit the SSEs (ugly code as well, so helper functions will be written - see the sketch after this list)
Limited by HTTP/1.1 restrictions (browsers allow only about 6 concurrent connections per host), which is a problem as users could easily max this out (e.g. having multiple chat windows open)
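For what it's worth, the server side of SSE in plain Express is only a few lines. A rough sketch (route and payload are made up, and this uses ordinary HTTP/1.1 rather than node-spdy/HTTP/2):

import express from "express";

const app = express();

// Hypothetical feed endpoint: each connected client gets a timestamped event
app.get("/feed", (req, res) => {
  res.set({
    "Content-Type": "text/event-stream",
    "Cache-Control": "no-cache",
    Connection: "keep-alive",
  });
  res.flushHeaders();

  // SSE messages are plain text blocks terminated by a blank line
  const timer = setInterval(() => {
    res.write(`data: ${JSON.stringify({ time: Date.now() })}\n\n`);
  }, 5000);

  // Stop writing when the client disconnects
  req.on("close", () => clearInterval(timer));
});

app.listen(3000);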
TLDR
The application will be more "feed" oriented, with occasional posting (which can be handled by Axios). However, users will be listening to multiple "feeds", and the HTTP connection limit will be a problem. I do not know what the solution would be, because SSE seems like the better option as I do not need to continually handshake. If this handshake is truly inconsequential (which, from everything I have read, isn't the case) then WS is likely the better alternative. Unfortunately, there is a lot of conflicting information regarding the two.
Thoughts?
SSE, Web Sockets, and normal HTTP requests (via AJAX or Fetch API) are all different tools for different jobs.
SSE
Unidirectional, from server to client.
Text-based data only. (Anything else must be serialized, e.g. as JSON.)
Simple API, widely compatible, auto-reconnects, has built-in provision for catching up on possibly missed events.
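A minimal browser-side sketch (the endpoint name and event name are made up):

// EventSource reconnects automatically if the connection drops
const source = new EventSource("/feed");

// Fires for every message sent without an explicit event name
source.onmessage = (event) => {
  console.log("update:", JSON.parse(event.data));
};

// Named events can be listened to individually
source.addEventListener("chat", (event) => {
  console.log("chat message:", (event as MessageEvent).data);
});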
Web Sockets
Bi-directional.
Text or binary data.
Requires you to implement your own meaning for the data sent.
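For comparison with the SSE client above, a minimal browser-side WebSocket sketch (URL and message shape are made up; you decide what the payload means):

// One connection, messages flowing both ways
const ws = new WebSocket("wss://example.com/chat");

ws.onopen = () => {
  // The payload format is up to you; here a JSON envelope with a type field
  ws.send(JSON.stringify({ type: "join", room: "general" }));
};

ws.onmessage = (event) => {
  const msg = JSON.parse(event.data);
  console.log("received:", msg);
};

// Unlike EventSource, you must handle reconnection yourself
ws.onclose = () => console.log("disconnected");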
Standard HTTP Requests
Client to Server or Server to Client, but only one direction at a time.
Text or binary data.
Requires extra effort to stream server-to-client response in realtime.
Streaming from client-to-server requires that the entire data be known at the time of the request. (You can't do an event stream, for example.)
How to decide:
Are you streaming event-like data from the server to the client? Use SSE. It's purpose-built for this and is a dead simple way to go.
Are you sending data in only one direction, and you don't need to spontaneously notify clients of something? Use a normal HTTP request.
Do you need to send bidirectional data with a long-term established connection? Use Web Sockets.
From your description, it sounds like either SSE or Web Sockets would be appropriate for your use case. I'd probably lean towards SSE, while sending the occasional API calls from the client as normal HTTP requests.
I do not know what the solution would be because SSE seems like the better option as I do not need to continually handshake. If this handshake is truly inconsequential (which from everything I have read isn't the case) then WS is likely a better alternative.
Keep in mind that you can simply configure your server with HTTP keep-alive, making this point moot.
I personally avoid using websockets as a 2-way communication between client and server.
I try to use sockets to broadcast data from the server to users or to a single user (socket), so they can get real-time updates, but for the POST requests from client to server I tend to use axios or something similar, because I don't want to pass sensitive data (like access keys, etc.) over the socket.
My data flow goes something like this:
User posts data to the server using axios, SSE or whatever
Backend server does what it has to and notifies the socket server that an event has occurred
Socket server then notifies whoever it has to
My problem with using sockets to send data from client to server is the authentication issue. Technically, you can't pass anything that is not available to client-side JavaScript through a socket, meaning that to authenticate the action you would have to send sensitive information through a WebSocket. This is an issue for multiple reasons - if your sensitive data can be accessed using client-side JS, there are a bunch of attacks that can be done here. Someone can also listen in on the communication between the WebSocket and the client. This is why I use API calls (axios etc.) and store sensitive data in HTTP-only cookies.
So once the server wants to notify the user that something has happened, you can easily do that by telling the WebSocket server to send the data to the user.
You also want to keep your API server stateless, meaning no sockets in your API. I use a separate server just for WebSocket connections, and my API server and WebSocket server communicate using Redis. Pub/sub is a really neat feature for internal server communication and state management.
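As a rough sketch of that split (the channel name, payload, and broadcast helper are all illustrative; this uses the ioredis package):

import Redis from "ioredis";

// --- API server process: publish an event after handling the HTTP request ---
const pub = new Redis();

async function onNewChatMessage(userId: string, text: string) {
  // Persist the message first (omitted), then tell the socket server about it
  await pub.publish("events", JSON.stringify({ type: "chat", userId, text }));
}

// --- WebSocket server process: subscribe and forward events to clients ---
const sub = new Redis();

// Placeholder for whatever socket layer actually delivers to the user
function broadcastToUser(userId: string, payload: unknown) {
  console.log("would push to", userId, payload);
}

sub.subscribe("events").then(() => {
  sub.on("message", (_channel, raw) => {
    const event = JSON.parse(raw);
    broadcastToUser(event.userId, event);
  });
});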
And to answer your question regarding multiple connections - you can use a single connection between your WebSocket server and the client, and broadcast data using channels. So one channel would be for the notification feed, another channel could be for the story feed, etc.
I hope this makes sense to you. This stack has worked really well for me.
We are developing an ASP.NET Core app that is hosted as an Azure Web Application.
We also use the Azure SignalR Service.
Everything works great as long as we have a single instance of the Web App, but once we scale it out we have the following problem:
From the controller's action we resolve IHubContext and send a message to the Hub's client. Everything works great so far.
The Hub's client accepts the response and sends it to the TheHub endpoint.
The problem here is that the response could be sent to another instance of the Web App. So we send the request from instance #1, but the response is sent to instance #2 with a 50% chance, and instance #1 never receives the response.
Any ideas how we could make it work so the instance that emitted the request actually receives the response?
SignalR has support for scale-out scenarios out of the box; this is called a backplane. The idea is that, with the help of a backplane component, SignalR events are spread across all instances. For the ASP.NET framework, use one of these packages:
Microsoft.AspNet.SignalR.ServiceBus3
Microsoft.AspNet.SignalR.ServiceBus
Microsoft.AspNet.SignalR.StackExchangeRedis
Microsoft.AspNet.SignalR.SqlServer
For ASP.NET Core, only Redis is ported, via the Microsoft.AspNetCore.SignalR.StackExchangeRedis package, but there are some backplanes provided by the community; see https://github.com/thomaslevesque/AspNetCore.SignalR.AzureServiceBus
public void ConfigureServices(IServiceCollection services)
{
    services.AddSignalR()
        .AddAzureSignalR(options =>
        {
            options.ApplicationName = "app1";
        });
}
You could specify an ApplicationName in the server SDK for different server groups.
It will help your server generate access tokens like ......?hub=app1_<your_hub> during negotiation, which helps the Azure SignalR Service (ASRS) instances differentiate connections coming from different server groups.
Unfortunately, I didn't find a reliable solution that wouldn't require somewhat clumsy workarounds.
But there are 2 solutions I can offer for this scenario:
@LexLi suggested a good approach to solve this problem. You can basically make your web app a SignalR client as well and make it a member of a group. This way every instance of the web app is also a client, and the instance that receives the response from the Hub's client can pass that response on to the group of web app instances.
You could leverage Azure Service Bus topics. Once started, each instance subscribes to a topic. Then, when any instance receives a response from the Hub's client, it places the response into the Service Bus topic, and every instance receives this response from the topic.
I was hoping that there would be a better solution for such a problem.
Problem description
I am working on a Xamarin application that consumes a REST API written in Python Flask.
The Xamarin application offers virtual shopping lists where users can collaborate on buying stuff they have on a shared list.
To improve the user experience, I want to be able to actively notify the user about finished items on the list.
Possible solutions:
Synchronous API polling from client side
Notifications are stored by the API in a relational database and have a flag indicating whether the user has received the notification already.
The API has an endpoint GET /users/:user_id/notifications/ that queries the database for notifications and returns a JSON response with them (a rough client-side polling sketch follows this list).
Advantages
fairly simple to implement
API service remains stateless, making horizontal scaling with a load balancer easier
Problems
synchronous polling creates a huge amount of HTTP requests
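The client in this question is Xamarin, but the polling pattern itself is language-agnostic; purely as an illustration, a throwaway sketch in TypeScript against the endpoint described above (the interval and user id are made up):

// Poll the notifications endpoint periodically and log any entries returned
async function pollNotifications(userId: string) {
  const res = await fetch(`/users/${userId}/notifications/`);
  if (!res.ok) return;
  const notifications = await res.json();
  for (const n of notifications) {
    console.log("new notification:", n);
  }
}

// Every poll is a full HTTP request, which is exactly the overhead noted above
setInterval(() => pollNotifications("42"), 30_000);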
Websocket endpoint on the API
The API has an endpoint POST /users/:user_id/notifications/register which creates a WebSocket connection between the client and the API.
The connection is stored in a global dictionary in which each entry maps a client id to a WebSocket connection.
When a new notification is created, the endpoint does a lookup in the connection dictionary by comparing the owner id of the notification with the dictionary entries. The notification is sent to the appropriate user through the WebSocket.
Notifications are stored in the database as in the first approach.
When a user calls the endpoint, a new WebSocket connection is established first, and upon success the API sends all unseen notifications from the database to the user. (A rough sketch of this connection registry follows this list.)
Advantages
API can push notifications to clients asynchronously
Problems
When a user terminates the WebSocket connection, his dictionary entry will persist
Retaining one WebSocket connection per user permanently adds additional overhead to the API
Horizontal scalability of the API is more difficult because the service is not stateless anymore (WebSocket connection information is kept in the API's memory)
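The API here is Flask, but the connection-registry idea is language-agnostic; as a rough illustration only, here it is sketched in TypeScript with the ws package (the user-id lookup is simplified and hypothetical; removing the entry in the close handler is what avoids the stale-entry problem listed above):

import { WebSocketServer, WebSocket } from "ws";

// One live socket per user id, kept in process memory; this is the state
// that makes horizontal scaling harder, as noted above
const connections = new Map<string, WebSocket>();

const wss = new WebSocketServer({ port: 8080 });

wss.on("connection", (ws, req) => {
  // Simplified: real code would authenticate and derive the user id properly
  const url = new URL(req.url ?? "/", "http://localhost");
  const userId = url.searchParams.get("user") ?? "anonymous";
  connections.set(userId, ws);

  // Remove the entry when the client disconnects so it does not persist
  ws.on("close", () => connections.delete(userId));
});

// Called when a new notification is created for a user
export function pushNotification(userId: string, payload: unknown) {
  connections.get(userId)?.send(JSON.stringify(payload));
}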
RabbitMQ
The API uses a RabbitMQ service to send notifications to the client. Every client subscribes to his own notification queue to prevent the broadcasting of messages. (A rough sketch follows this list.)
Advantages
API remains stateless
Problems
Notifications need to be re-sent to the exchange when a user is offline
The number of queues grows drastically
Additional costs for the RabbitMQ service
High temporary load on the RabbitMQ service when many users come online at the same time
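Again, the API in question is Flask (where pika would be the usual client); purely as an illustration of the queue-per-user idea, a rough TypeScript sketch with amqplib (queue naming and connection details are assumptions):

import amqp from "amqplib";

// API side: publish a notification to the user's own queue
async function publishNotification(userId: string, payload: unknown) {
  const conn = await amqp.connect("amqp://localhost");
  const channel = await conn.createChannel();
  const queue = `notifications.${userId}`; // one queue per user

  // durable + persistent so messages survive a broker restart while offline
  await channel.assertQueue(queue, { durable: true });
  channel.sendToQueue(queue, Buffer.from(JSON.stringify(payload)), { persistent: true });

  await channel.close();
  await conn.close();
}

// Client side: consume only this user's queue
async function consumeNotifications(userId: string) {
  const conn = await amqp.connect("amqp://localhost");
  const channel = await conn.createChannel();
  const queue = `notifications.${userId}`;

  await channel.assertQueue(queue, { durable: true });
  await channel.consume(queue, (msg) => {
    if (!msg) return;
    console.log("notification:", JSON.parse(msg.content.toString()));
    channel.ack(msg);
  });
}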
Final words
It would be interesting to hear the opinion of others.
I believe the active distribution of notifications from backend services to clients is a very common use case.
best,
D
I would use RabbitMQ and consume events, forwarding them as push notifications. This works while the user is not actively connected to the website and enhances engagement, since each user will return to the website when notified. For more information see How to setup basic web push notification functionality using a Flask backend, How to send push notifications to a browser in ASP.NET Core, or Sending Notifications with Spring Boot, Angular, and Firebase Cloud Messaging. This way RabbitMQ will not wait until the user is back online. If the user is online, you can forward the notification directly to the Xamarin application via WebSockets and a load balancer like NGINX that can handle many WebSockets in an optimized way.
Synchronous API polling from the client side is the least preferred way, since it overloads the web server with requests even when nothing has changed.
I don't think the scalability of WebSockets is a problem. You can scale out easily with pub/sub. Holding many long-lived connections is the more serious hotspot.
For one-way communication, I would suggest Server-Sent Events. In the end, it usually depends on what your team is confident with.
I can recommend a different approach for an API that provides JSON, which is called GraphQL.
It supports subscription capabilities that are pushed by the GraphQL API server (using WebSockets).
GraphQL is considered today to be better than a RESTful API, since it's very flexible and you can get exactly the data you need with one query.
I have a question with regards to the Comet implementation. I know that it is used to handle asynchronous requests, similar to what can now be achieved through the Servlet 3.0 async functionality. Yet what I do not understand is how the push is done to the calling client.
In WebSockets we open the connection by providing the IP and port. With Comet, how do you connect to the server in order to receive callbacks when the server pushes data?
The Wikipedia page on Comet (programming) is a pretty good resource for this question (sorry it's so obvious).
Comet is an umbrella term for using HTTP to simulate a bi-directional connection between a client and server. Ultimately you make an HTTP request to the server and attempt to hold it open (long polling and streaming). With long polling, the connection closes after a given interval or when data is returned, and the client immediately re-opens it. With streaming, the connection is held open as long as possible and new data is sent over the existing connection.
How these are achieved differs between web browsers - that's why Comet is classed as a hack. Again, the Wikipedia page should provide almost all the information you need.
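A bare-bones long-polling loop on the client looks something like this sketch (the endpoint is made up; real Comet libraries add back-off, error handling, and per-browser workarounds):

// Repeatedly issue a request that the server holds open until it has data
async function longPoll(url: string) {
  while (true) {
    try {
      // The server either returns new data or times out after e.g. 30 seconds
      const res = await fetch(url);
      if (res.ok) {
        const events = await res.json();
        events.forEach((e: unknown) => console.log("event:", e));
      }
    } catch {
      // Network hiccup: wait a moment before reconnecting
      await new Promise((r) => setTimeout(r, 1000));
    }
    // Loop immediately re-issues the request so the server can push again
  }
}

longPoll("/comet/events");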
I wrote an article covering the history of realtime web communication (with a focus on the client side) and why WebSockets are a game-changer.