Managing connections on a command-based TCP socket API in Node.js

I built a RESTful API based on Express.js which communicates with a remote server over a TCP socket using JSON. Requested URLs are converted into the appropriate JSON messages, a new TCP socket is opened, and the message is sent. When a message arrives on the same connection, an event is fired, the JSON reply is evaluated, and a new JSON message is returned as the result of the GET request.
Possible paths:
Async (currently in use) - open a new connection to the server for each request.
Sync - queue all the requests and wait for each response; blocking code.
Track - send all the requests at once and receive the answers asynchronously, using a tracker ID on each request to match it with its answer.
What would be the best direction to take? Is there a common pattern for this kind of application?

Option 1 (async, a new connection for each request) is probably the easiest to implement.
If you want to reuse the socket for efficiency, you should come up with your own "keep-alive" mechanism: essentially streaming multiple requests and answers over the same socket.
I'd probably use a double CRLF ('\r\n\r\n') as the delimiter between JSON requests, fire a 'request' event for each one, and simply write back the answer asynchronously. Delimiter-less streaming is possible, but it requires extra parsing whenever you receive a partial JSON string from the socket.

Related

ASP.NET Core and 102 status code implementation

I have a long-running operation which is called via a Web API. The description of status code 102 says:
An interim response used to inform the client that the server has
accepted the complete request, but has not yet completed it.
This status code SHOULD only be sent when the server has a reasonable
expectation that the request will take significant time to complete.
As guidance, if a method is taking longer than 20 seconds (a
reasonable, but arbitrary value) to process the server SHOULD return a
102 (Processing) response. The server MUST send a final response after
the request has been completed.
So I want to return a 102 status code to the client, and then the client waits for the result of the operation. How can I implement this in .NET?
I read this thread: How To Return Http 102 Processing in Asp.Net Web Api?
That thread has a good explanation of what is necessary, but no working answer. I don't understand how to implement it in .NET; I need more than theory.
Using HTTP 102 requires the server to send two responses for one request. ASP.NET (Core or not) does not support sending a response to the client without completely ending the request. Any attempt to send two responses will end up throwing an exception and simply not working (I tried a couple of different ways).
There's a good discussion here about how it's not actually in the HTTP spec, so implementing it isn't really required.
There are a couple alternatives I can think of:
Use web sockets (a persistent connection that allows data to be sent back and forth), like with SignalR, for example.
If your request takes a long time because it's fetching data from elsewhere, you can pull that data in as a stream and send it to the client as a stream. That sends the data as it comes in, rather than loading it all into memory before sending it. Here's an example of streaming data from a database to the response: https://stackoverflow.com/a/45682190/1202807

Binding Request inside Data attribute of Send Indication

When two peers use WebRTC with TURN as a relay server, we've noticed that from time to time the data inside a Send Indication or ChannelData message is actually a valid STUN Binding Request (type 0x0001). The other peer responds in the same way with a valid Binding Response (type 0x0101). This happens repeatedly during the whole conversation. Both peers are forced to use the TURN server. What is the purpose of encapsulating an ordinary STUN message inside the data attribute of a TURN transmission frame? Is it described in any document?
Here is an example of Channel Data frame:
[0x40,0x00,0x00,0x70,0x00,0x01,0x00,0x5c,0x21,0x12,0xa4,0x42,0x71,0x75,0x6d,0x6a,0x6f,0x66,0x69,0x6f...]
0x40,0x00 - channel number
0x00,0x70 - length of data
0x00,0x01,0x00,0x5c,0x21,0x12... - data, that can be parsed to a Binding Request
These are ICE connectivity checks (described in RFC 5245) running over TURN, as well as the consent freshness checks described in RFC 7675.
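For reference, a sketch of peeling off the ChannelData header shown above and checking whether the payload is a STUN message; STUN messages carry the fixed magic cookie 0x2112A442 at offset 4 of the payload:

```javascript
// Decode a TURN ChannelData frame (channel number, length, payload) and
// report whether its payload looks like a STUN message.
function decodeChannelData(frame) {
  const channelNumber = frame.readUInt16BE(0); // 0x4000-0x7FFF for ChannelData
  const length = frame.readUInt16BE(2);        // payload length in bytes
  const payload = frame.subarray(4, 4 + length);

  // STUN messages start with a 16-bit type whose two top bits are zero,
  // followed by a 16-bit length and the fixed magic cookie 0x2112A442.
  const isStun =
    payload.length >= 8 &&
    (payload.readUInt16BE(0) & 0xc000) === 0 &&
    payload.readUInt32BE(4) === 0x2112a442;

  return {
    channelNumber,
    length,
    messageType: payload.length >= 2 ? payload.readUInt16BE(0) : null,
    isStun,
  };
}
```

Running this on the (truncated) frame from the question yields channel 0x4000, length 0x70, and message type 0x0001, i.e. a Binding Request.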

Instant and graceful response to the client from server side while using NServiceBus.Send

I agree that the command pattern is an excellent way of building loosely coupled applications. My concern is how to respond to the client instantly and gracefully about the status of the request. For example, a client makes a request to place an order. Typically, the order is created and the order ID is sent back to the browser as a JSON response. With the command pattern, particularly with NServiceBus, how is it possible to send that response?
Isn't this what Return and Reply are for?
Full duplex
Replying to a message
Handling responses

Using Grizzly with JMS/ActiveMQ

I'm working on a proof-of-concept project designed to explore the benefits of offloading work from a NIO server to a message queue for backend processing. I'm using Grizzly for the NIO boilerplate stuff, and Spring Integration for the messaging (with JMS/ActiveMQ as the messaging implementation). Basically, what I want to do is this:
Client connection -> Server -> Server creates "work-to-be-done" message -> JMS/ActiveMQ
On the ActiveMQ message queue, a number of "workers" will be actively consuming these messages, processing them, and placing the result on another queue. The server is listening for "response messages" on that queue, and once a message is picked up it will execute the following:
Response queue -> Server serializes the message to something the client can understand -> back to the client
My immediate problem is my lack of understanding of Grizzly, specifically how to decouple the event handling from the messaging. The server has to create the work-to-be-done message in such a way that when the reply comes back from a worker, the server knows who the client was (i.e. can find the related FilterChainContext in Grizzly) in order to send the TCP message.
I might be able to use FilterChainContext.getAddress() and place that on the work message, but I'm not sure how to code a method which takes a peer address and a message and somehow sends that (FilterChainContext.write()) when it has no FilterChainContext.
I'm now playing with the idea of keeping a Map around, but I'm apprehensive about this approach because I don't want entries to go stale in the map if something happens to the message during serialization or processing.
Ideas and suggestions are welcome.
-Michael
You could use the TCP adapters/gateways (which have an option to use NIO), together with custom (de)serializers. If you must use Grizzly, you could write a server connection factory implementation. In the case of the outbound adapter (or inbound gateway), the endpoint is registered as a 'TcpListener' (using the connectionId) and the SI message contains the IpHeaders.CONNECTION_ID header used to determine which connection gets the reply. When a connection closes, it is unregistered (removed from the map).

WCF Service- Sending back object to calling App

My WCF service (hosted as a Windows Service) has some 'SendEmail' methods, which send out emails after doing some processing.
Now I have another requirement: the client wants to preview emails before they are sent out, so my WCF service needs to return the whole email object to the calling web app.
If the client is happy with the email object, they can simply click 'Send out', which will call the WCF service again to send the emails.
Because it can sometimes take a while to prepare the email object, I do not want the calling application to wait until it is ready.
Can anyone please advise what changes I need to make to my WCF service (which currently has all one-way operations)?
Also, should I go for async operations, message queuing, or maybe a duplex contract?
Thank you!
Based on your description, I think you will have to:
Change the current operation from sending the email to storing it (probably in a database).
Add an operation for retrieving prepared emails for the current user.
Add a method to confirm sending one or more emails and remove them from storage.
The process will be:
The user triggers an HTTP request, which results in calling your WCF service for processing (first operation).
The WCF service initiates the processing (asynchronously, or the first operation is one-way so that the client doesn't have to wait).
The processing saves the email somehow.
Depending on the duration of the processing, you can either use AJAX to poll the web app, which in turn polls the WCF service for prepared emails, or create a separate page which the user has to visit to see the prepared emails. Both methods use the second operation.
The user checks the prepared email(s) and triggers an HTTP request, which results in calling the third operation to send those emails.
You have multiple options:
Use Ladislav's approach. Only to add: the service returns a token, and the client then uses the token to poll until a timeout or a successful response. The server also keeps these temporary emails for a while and purges them after a timeout.
Use duplex communication so that the server also gets a way to call back the client, and does so when it has finished processing. But don't do this; here is my view on why not.
Use an asynchronous approach. You can find nice info here.
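The token-and-purge refinement in the first option can be sketched as a small store; this is a language-neutral illustration in Node, not WCF API, and all names and the TTL are placeholders:

```javascript
// Sketch of token polling: the service stores prepared emails under a
// token, the client polls with that token, and stale entries are purged
// after a timeout. `now` is injectable so the expiry logic is testable.
class EmailPreviewStore {
  constructor(ttlMs, now = Date.now) {
    this.ttlMs = ttlMs;
    this.now = now;
    this.entries = new Map();
    this.nextToken = 1;
  }

  // First operation: processing finished, store the prepared email.
  store(email) {
    const token = String(this.nextToken++);
    this.entries.set(token, { email, createdAt: this.now() });
    return token;
  }

  // Second operation: the client polls with its token.
  poll(token) {
    const entry = this.entries.get(token);
    if (!entry) return { status: 'unknown' };
    if (this.now() - entry.createdAt > this.ttlMs) {
      this.entries.delete(token); // purge after timeout
      return { status: 'expired' };
    }
    return { status: 'ready', email: entry.email };
  }

  // Third operation: the client confirmed, send and remove from storage.
  confirm(token) {
    return this.entries.delete(token);
  }
}
```

The client polls until it sees 'ready' (then previews and confirms) or gives up after its own timeout; the server-side TTL guarantees unclaimed previews don't accumulate.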