Detecting end of Flux data vs. error - Angular 5

We're currently looking at SSE using Angular 5 and Spring 5 WebFlux. The basic application works correctly, but while investigating error handling we've noticed that the EventSource in the Angular application sees no difference between Spring closing the connection because it reached the end of the Flux stream of data and an error occurring (e.g. terminating the application mid-transfer).
The examples we've based our investigations on are the following.
https://thepracticaldeveloper.com/2017/11/04/full-reactive-stack-ii-the-angularjs-client/
http://javasampleapproach.com/reactive-programming/angular-4-spring-webflux-spring-data-reactive-mongodb-example-full-reactive-angular-4-http-client-spring-boot-restapi-server#25_Service
Both onerror and the completion function on the EventSource get called in all three cases: when the data is sent successfully by Spring and the end of the stream is reached, which closes the connection; when we Ctrl+C the application mid-stream; and when we throw an exception in the middle of sending data.
The event argument just contains {type: 'error'} in all three cases.

From what I understand, SSE streams are mainly about infinite streams; the spec doesn't seem to offer a standard way of signaling the end of a stream to clients (they will try to reconnect by default).
You could implement that in your controller by returning a Flux<ServerSentEvent> and terminating the Flux with a custom event:
return Flux.concat(
        fetchUserEvents(),
        Flux.just(ServerSentEvent.builder().event("disconnect").build())
);
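For reference, here is a minimal controller sketch built around that idea (the endpoint path, event names, and the fetchUserEvents() stub are illustrative assumptions, not taken from the original application):

import org.springframework.http.MediaType;
import org.springframework.http.codec.ServerSentEvent;
import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.RestController;
import reactor.core.publisher.Flux;

@RestController
public class UserEventController {

    // Hypothetical event source; stands in for whatever produces your data.
    private Flux<ServerSentEvent<String>> fetchUserEvents() {
        return Flux.just("one", "two", "three")
                .map(data -> ServerSentEvent.builder(data).event("user-event").build());
    }

    @GetMapping(path = "/events", produces = MediaType.TEXT_EVENT_STREAM_VALUE)
    public Flux<ServerSentEvent<String>> events() {
        // Terminate the stream with a custom "disconnect" event so the client
        // can tell normal completion apart from a dropped connection.
        return Flux.concat(
                fetchUserEvents(),
                Flux.just(ServerSentEvent.<String>builder().event("disconnect").build()));
    }
}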
On the client side, you could correctly close the connection when you're done, leaving all other cases as errors and letting the browser reconnect automatically:
evtSource.addEventListener("disconnect", function(e) {
    evtSource.close();
}, false);
This is rather annoying, so I've raised SPR-16761 to improve SSE support there.

Related

SignalR - Unhandled Rejection (Error): WebSocket closed with status code: 1000 ()

An ASP.NET Core application with React+Redux on the client side, using SignalR.
I'm getting the following error on the client side:
Unhandled Rejection (Error): WebSocket closed with status code: 1000 ().
Seems like this is a "normal closure", but there's no code to close the connection.
The application sends small images at 60 FPS per viewport, in several viewports. This occupies the JS thread almost completely, to the extent that I'd assume it may prevent SignalR from maintaining its keep-alive.
I tried setting the server-side SignalR timeouts to their maximum values, but that did not prevent the issue from recurring.
What is it that could cause the SignalR socket to close without invoking close and without an error message?
I'm guessing the browser or the server could close out of self-preservation or on reaching set limits.
Most likely: the default maximum size of a hub message (MaximumReceiveMessageSize) is 32 KB, and an image could easily surpass this. You could turn on EnableDetailedErrors to see if there's more info.
If the browser is unable to send quickly enough, it will need to buffer, and this buffer can't grow infinitely. You could also run into some sort of anti-malware protection based either on hogging the JS thread (maybe use workers?) or on using too much network I/O. The server can also close for similar reasons.
As for why the error message is vague: the browser literally can't give you much feedback about this - see the warning text before 9.3.4. Edit: this is wrong and only applies to close code 1006.
To solve the issue, I turned on the logs as Jesper suggested.
The issue was that I was cancelling a CancellationToken passed to the SendAsync method. For some odd reason, cancelling the send closes the socket (I'd expect it to cancel only the specific message, not close the connection).

How to handle a network time-out exception with RabbitMQ while sending messages asynchronously using the spring-amqp library

I have written a program which requires interaction between multiple queues - the consumer of one queue writes a message to another queue, and the same program has a consumer that takes action on that queue.
Problem: how to handle network time-out issues with the queue while sending messages asynchronously using the spring-amqp library? RabbitTemplate.send() should throw an exception if there are network issues.
Currently, my code calls RabbitTemplate.send(), which returns immediately, and this works fine. But if the network is down, send() still returns immediately without throwing any exception, and the client code assumes success. As a result, I have an inconsistent state in the DB saying the message was successfully processed. Please note that the call to send() is wrapped inside a transactional block, and the goal is that if writing to the queue fails, the DB commit must also roll back. I am exploring the following solutions, but with no success:
Can we configure the rabbitTemplate to throw a runtime exception on any network connectivity issue, so that the caller is notified? Please suggest how to do this.
Shall we use the synchronous sendAndReceive() call, even though it delays processing? Another problem I observed with this function: my consumer code gets notified while sendAndReceive() is still blocked writing the message to the queue. Please advise if we can delay the queue notification until sendAndReceive() has returned. The call to sendAndReceive() did throw an AMQP exception when the network was down, which we were able to capture, but it has an associated performance cost.
My application is multi-threaded; if multiple threads send messages using sendAndReceive(), how does the spring-amqp library manage the queue communication? Does it internally create a channel per request? If messages are delivered via the same channel, it would hurt performance a lot for a multi-threaded application.
Can someone share sample code for using sendAndReceive() with best practices?
Do we have any function in the spring-amqp library to check the health of the RabbitMQ server before calling send()? I explored rabbitTemplate.isRunning(), but I'm not getting proper results. If any specific configuration is required, please suggest it.
Any other solution to consider for guaranteed message delivery, or for handling network time-outs by throwing runtime exceptions to the client?
As per Gary's comment below, I have set rabbitTemplate.setChannelTransacted(true), and it makes the call synchronous. The next part of the problem: with a transaction block on the outer function, the call to RabbitTemplate.send() still returns immediately. I expect the transaction block of the outer function to wait for the inner function to return; otherwise I don't get the expected result, as my DB changes are persisted even though setChannelTransacted is true. I tried various transaction propagation levels with no success. Please advise if I am doing anything wrong, and review the transaction propagation settings below.
@Transactional
public void notifyQueueAndDB(DBRequest dbRequest) {
    logger.info("Updating request in DB");
    dbService.updateRequest(dbRequest);
    // Below is the call to the RabbitMQ library. If sendMessage fails because of a
    // network outage, I want the DB commit to be rolled back as well.
    mqService.sendMessage(dbRequest);
}
MQService is defined in another library of the project; snippet below.
// Note: Spring's proxy-based @Transactional has no effect on a private method,
// so this annotation is not applied as written.
@Transactional(propagation = Propagation.NESTED)
private void sendMessage(......) {
    try {
        ....
        rabbitTemplate.send(this.queueExchange, queueName, amqpMessage);
    } catch (Exception exception) {
        throw exception;
    }
}
Enable transactions so that the send is synchronous.
or
Use Publisher confirms and wait for the confirmation to be received.
Either one will be quite a bit slower.
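A rough sketch of both options (the host, exchange, and routing key are placeholders, and the confirms API has moved around between Spring AMQP versions, so treat this as a starting point rather than a drop-in implementation):

import org.springframework.amqp.rabbit.connection.CachingConnectionFactory;
import org.springframework.amqp.rabbit.connection.CorrelationData;
import org.springframework.amqp.rabbit.core.RabbitTemplate;

public class GuaranteedSendSketch {

    // Option 1: transacted channel. send() commits a channel transaction and
    // throws if the broker is unreachable, so a surrounding DB transaction
    // can roll back.
    static RabbitTemplate transactedTemplate() {
        CachingConnectionFactory cf = new CachingConnectionFactory("localhost");
        // The factory caches channels, so concurrent senders each check out
        // their own channel rather than sharing one.
        RabbitTemplate template = new RabbitTemplate(cf);
        template.setChannelTransacted(true);
        return template;
    }

    // Option 2: publisher confirms. The broker acks (or nacks) each message
    // asynchronously. Note that a channel cannot be transacted and use
    // confirms at the same time - pick one option or the other.
    static RabbitTemplate confirmingTemplate() {
        CachingConnectionFactory cf = new CachingConnectionFactory("localhost");
        cf.setPublisherConfirms(true);
        RabbitTemplate template = new RabbitTemplate(cf);
        template.setConfirmCallback((correlationData, ack, cause) -> {
            if (!ack) {
                // The broker refused the message; compensate or retry here.
                System.err.println("Message nacked: " + cause);
            }
        });
        return template;
    }

    public static void main(String[] args) {
        transactedTemplate().convertAndSend("myExchange", "my.routing.key",
                "payload", new CorrelationData("msg-1"));
    }
}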

WCF client causes server to hang until connection fault

The below text is an effort to expand and add color to this question:
How do I prevent a misbehaving client from taking down the entire service?
I have essentially this scenario: a WCF service is up and running with a client callback having straightforward, simple one-way communication, not very different from this one:
public interface IMyClientContract
{
    [OperationContract(IsOneWay = true)]
    void SomethingChanged(simpleObject myObj);
}
I'm calling this method potentially thousands of times a second from the service to what will eventually be about 50 concurrently connected clients, with as low latency as possible (<15 ms would be nice). This works fine until I set a breakpoint on one of the client apps connected to the server; after maybe 2-5 seconds the service hangs, and none of the other clients receive any data for about 30 seconds or so, until the service registers a connection fault event and disconnects the offending client. After this, all the other clients continue on their merry way receiving messages.
I've done research on serviceThrottling, concurrency tweaking, setting threadpool minimum threads, WCF secret sauces, and the whole nine yards, but at the end of the day the article MSDN - WCF Essentials: One-Way Calls, Callbacks, and Events describes exactly the issue I'm having without really making a recommendation.
The third solution that allows the service to safely call back to the client is to have the callback contract operations configured as one-way operations. Doing so enables the service to call back even when concurrency is set to single-threaded, because there will not be any reply message to contend for the lock.
but earlier in the article it describes the issue I'm seeing, only from a client perspective:
When one-way calls reach the service, they may not be dispatched all at once and may be queued up on the service side to be dispatched one at a time, all according to the service configured concurrency mode behavior and session mode. How many messages (whether one-way or request-reply) the service is willing to queue up is a product of the configured channel and the reliability mode. If the number of queued messages has exceeded the queue's capacity, then the client will block, even when issuing a one-way call
I can only assume that the reverse is true: the number of queued messages to the client has exceeded the queue capacity, and the threadpool is now filled with blocked threads all attempting to call this client.
What is the right way to handle this? Should I research a way to check how many messages are queued at the service communication layer per client and abort their connections after a certain limit is reached?
It almost seems that if the WCF service itself is blocking on a queue filling up then all the async / oneway / fire-and-forget strategies I could ever implement inside the service will still get blocked whenever one client's queue gets full.
Don't know much about the client callbacks, but it sounds similar to generic WCF code blocking issues. I often solve these problems by spawning a BackgroundWorker and performing the client call in that thread. Meanwhile, the main thread counts how long the child thread is taking. If the child has not finished in a few milliseconds, the main thread just moves on and abandons the thread (it eventually dies by itself, so there's no memory leak). This is basically what Mr. Graves suggests with the phrase "fire-and-forget".
Update:
I implemented a fire-and-forget setup to call the clients' callback channels, and the server no longer blocks once the buffer to a client fills up.
MyEvent is an event with a delegate that matches one of the methods defined in the WCF client contract; when clients connect, I'm essentially adding their callback to the event:
MyEvent += OperationContext.Current.GetCallbackChannel<IFancyClientContract>().SomethingChanged;
etc... and then to send this data to all clients, I'm doing the following:
//serialize using protobuf-net
using (var ms = new MemoryStream())
{
    ProtoBuf.Serializer.Serialize(ms, new SpecialDataTransferObject(inputData));
    // ToArray() rather than GetBuffer(): GetBuffer() returns the whole internal
    // buffer, including unused trailing bytes, which can corrupt the payload.
    byte[] data = ms.ToArray();
    Parallel.ForEach(MyEvent.GetInvocationList(), p => ThreadUtil.FireAndForget(p, data));
}
In the ThreadUtil class, I made essentially the following change to the code defined in the fire-and-forget article:
static void InvokeWrappedDelegate(Delegate d, object[] args)
{
    try
    {
        d.DynamicInvoke(args);
    }
    catch (Exception ex)
    {
        // This will eventually throw once the client's WCF callback channel has
        // filled up and timed out, and it will throw once for every single payload
        // you ever tried sending them, so do some smarter logging here!
        Console.WriteLine("Error calling client, attempting to disconnect.");
        try
        {
            // d.Target is an IContextChannel object, kept in a dictionary of active
            // connections, cross-referenced by hashcode just for this exact occasion.
            MyService.SingletonServiceController.TerminateClientChannelByHashcode(d.Target.GetHashCode());
        }
        catch (Exception ex2)
        {
            Console.WriteLine("Attempt to disconnect client failed: " + ex2.ToString());
        }
    }
}
I don't have any good ideas for how to go and kill all the pending packets the server is still waiting to see delivered. Once I get the first exception, I should in theory be able to go and terminate all the other requests in some queue somewhere, but this setup is functional and meets the objectives.

Uncatchable errors in node.js

So I'm trying to write a simple TCP socket server that broadcasts information to all connected clients. When a user connects, they get added to the list of clients, and when the stream emits the close event, they get removed from the client list.
This works well, except that sometimes I'm sending a message just as a user disconnects.
I've tried wrapping stream.write() in a try/catch block, but no luck. It seems like the error is uncatchable.
The solution is to add a listener for the stream's 'error' event. This might seem counter-intuitive at first, but the justification for it is sound.
stream.write() sends data asynchronously. By the time node has realized that writing to the socket raised an error, your code has moved on past the call to stream.write, so there's no way for it to raise the error there.
Instead, what node does in this situation is emit an 'error' event from the stream, and EventEmitter is coded such that if there are no listeners for an 'error' event, the error is raised as a top-level exception and the process ends.
Peter is quite right, and there is also another way: you can install a catch-all error handler with
process.on('uncaughtException', function(error) {
    // process error
});
This will catch everything that is thrown.
It's usually better to do it Peter's way if possible; however, if you were writing, say, a test framework, it may be a good idea to use process.on('uncaughtException', ...).
Here is a gist which covers (I think) all the different ways of handling errors in nodejs:
http://gist.github.com/636290
I had the same problem with the time server example from here.
My clients get killed, and the time server then tries to write to a closed socket.
Setting an error handler does not work, as the error event only fires on reception, and the time server does no receiving (see the stream event documentation).
My solution is to set a handler on the stream's close event:
stream.on('close', function() {
    subscribers.remove(stream);
    stream.end();
    console.log('Subscriber CLOSE: ' + subscribers.length + " total.\n");
});

How can I recognize ActiveMQ disconnect using NMS and C#

I have a C# publisher and subscriber that talk to each other using ActiveMQ and NMS. Everything works fine, except I have no way to know when ActiveMQ goes down. This is particularly bad for the consumer: it stops getting data, but aside from the fact that data stops showing up, no errors or events are raised.
Is there a way to detect the disconnect using NMS (particularly the Apache.NMS.IConnection or Apache.NMS.ISession objects)?
I downloaded the implementation I'm using from Spring, but I'm not using any Spring-specific implementations; everything I'm using is in the Apache.NMS and Apache.NMS.ActiveMQ namespaces.
Well, it's been a long time since this question was asked, but now you have several events available:
m_connection.ConnectionInterruptedListener += new ConnectionInterruptedListener(OnConnectionInterruptedListener);
m_connection.ConnectionResumedListener += new ConnectionResumedListener(OnConnectionResumedListener);
m_connection.ExceptionListener += new ExceptionListener(OnExceptionListener);
where m_connection is an IConnection object.
With these three events you will be able to find out when your broker is down (among other useful information, such as when the connection resumes or when it encounters an exception).
Note: if you are in failover mode, these exceptions will be swallowed by the failover transport layer and handled automatically by it, so you will not receive any of these events.