Detecting a closed connection in Restlet

I have a Restlet server application that needs to do some cleanup when the response representation has been incompletely delivered (because the user hit the stop button, etc.). So far, I have found two callbacks that are called after the representation has been sent:
Override OutputRepresentation.release() in my custom representation
Pass a Uniform instance to Resource.setOnSent(Uniform)
Both of these are called whether or not the response representation has been completely delivered. How can I check whether the representation has been completely delivered?
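For context, a minimal sketch of the second callback mentioned above (based on the Uniform interface, registered from inside the resource), which illustrates the problem: it fires after the representation was sent, but carries no indication of whether delivery completed.
// Sketch: register the on-sent callback from inside the resource handler.
// org.restlet.Uniform declares a single handle(Request, Response) method.
setOnSent(new org.restlet.Uniform() {
    @Override
    public void handle(org.restlet.Request request, org.restlet.Response response) {
        // cleanup could go here, but nothing here distinguishes a complete
        // delivery from an aborted one
    }
});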

I worked around this by relying on the fact that Restlet throws an IOException when the connection is broken, so I can simply do my cleanup in a catch block.
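For reference, a minimal sketch of that workaround, assuming the Restlet 2.x OutputRepresentation API; the streamed data source and the cleanup method are illustrative.
import java.io.IOException;
import java.io.InputStream;
import java.io.OutputStream;
import org.restlet.data.MediaType;
import org.restlet.representation.OutputRepresentation;

public class CleanupRepresentation extends OutputRepresentation {

    private final InputStream source; // illustrative data source streamed to the client

    public CleanupRepresentation(InputStream source) {
        super(MediaType.APPLICATION_OCTET_STREAM);
        this.source = source;
    }

    @Override
    public void write(OutputStream outputStream) throws IOException {
        try {
            byte[] buffer = new byte[8192];
            int read;
            while ((read = source.read(buffer)) != -1) {
                outputStream.write(buffer, 0, read); // throws IOException if the client disconnects
            }
        } catch (IOException brokenConnection) {
            cleanupPartialDelivery(); // illustrative cleanup hook
            throw brokenConnection;
        } finally {
            source.close();
        }
    }

    private void cleanupPartialDelivery() {
        // remove temp files, decrement counters, etc.
    }
}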

Related

A failure to decode a RabbitMQ message fails the Reactive Messaging readiness check

I've encountered a problem when using SmallRye Reactive Messaging with Quarkus, for a RabbitMQ incoming handler.
The message being published to RabbitMQ has a JSON content type, and the method signature of the handling code is written accordingly:
@Incoming("event-name")
@Acknowledgment(Acknowledgment.Strategy.POST_PROCESSING)
public void processEvent(final JsonObject payload)
However, when the message body contains bad JSON that cannot be parsed successfully, this method is never invoked and an io.vertx.core.json.DecodeException is thrown. The failure handling calls into io.smallrye.reactive.messaging.providers.extension.HealthCenter.reportApplicationFailure(), which means the health-check endpoint then produces a DOWN response. The app in question runs in Kubernetes, so the pod gets restarted, but the new instance picks up the same message and fails in the same way. The only way to deal with the issue seems to be to manually remove the bad message from the queue.
Looking at the docs (https://quarkus.io/guides/rabbitmq-reference#health-reporting), it suggests that a failed message should be nacked and the failure-strategy should handle it, but because the message isn't parsed successfully, it never gets as far as the processing, so the failure strategy isn't called.
I'm actually not certain whether this is the intended behaviour in this circumstance, whether I can do something about it, or whether it is genuinely a bug. I'm using Quarkus 2.12.0.
My expectation is that it should be possible to handle this situation without causing the health check to fail, and to dequeue the bad message so that it isn't picked up again and again.
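One direction to try (a sketch only: it assumes the connector's JSON decoding is driven by the method's parameter type rather than done eagerly from the content_type header, which is exactly the uncertainty here) is to accept the raw payload and parse it in application code, so that a bad body becomes an ordinary exception you can catch, and the poison message gets acked and dropped instead of failing the readiness check:
import io.vertx.core.json.DecodeException;
import io.vertx.core.json.JsonObject;
import org.eclipse.microprofile.reactive.messaging.Acknowledgment;
import org.eclipse.microprofile.reactive.messaging.Incoming;

@Incoming("event-name")
@Acknowledgment(Acknowledgment.Strategy.POST_PROCESSING)
public void processEvent(final String rawPayload) {
    final JsonObject payload;
    try {
        payload = new JsonObject(rawPayload); // parse manually instead of relying on the connector
    } catch (final DecodeException e) {
        // log and return normally: with POST_PROCESSING the message is then acked,
        // so the bad message is removed from the queue rather than redelivered
        return;
    }
    // ... normal processing of payload ...
}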

How to handle network timeout exceptions with RabbitMQ while sending messages asynchronously using the spring-amqp library

I have written a program that involves multiple queues: the consumer of one queue writes a message to another queue, and the same program has a consumer that acts on that queue.
Problem: how do I handle network timeout issues with the queue while sending messages asynchronously using the Spring AMQP library? Alternatively, the RabbitTemplate.send() function should throw an exception if there are network issues.
Currently I call RabbitTemplate.send(), which returns immediately, and it works fine. But if the network is down, send() still returns immediately, doesn't throw any exception, and the client code assumes success. As a result I end up with an inconsistent state in the DB saying the message was successfully processed. Please note that the call to send() is wrapped inside a transactional block, and the goal is that if writing to the queue fails, the DB commit must also roll back. I am exploring the following options, so far without success:
Can we configure RabbitTemplate to throw a runtime exception on any network connectivity issue, so that the caller is notified? Please suggest how to do this.
Should we use the synchronous sendAndReceive() call, even though it adds processing delay? Another problem I observed with this function: my consumer code gets notified while sendAndReceive() is still blocked writing the message to the queue. Please advise whether we can delay the notification to the queue until sendAndReceive() has returned. The call to sendAndReceive() did throw an AMQP exception when the network was down, which we were able to capture, but it has an associated performance cost.
My application is multi-threaded. If multiple threads send messages using sendAndReceive(), how does the spring-amqp library manage the queue communication? Does it internally create a channel per request? If messages are delivered via the same channel, it would hurt performance significantly for a multi-threaded application.
Can someone share sample code for using sendAndReceive() with best practices?
Does the spring-amqp library have a function to check the health of the RabbitMQ server before calling send()? I explored rabbitTemplate.isRunning() but am not getting proper results. If any specific configuration is required, please suggest it.
Is there any other solution to consider for guaranteed message delivery, or for handling network timeouts by throwing runtime exceptions to the client?
As per Gary's comment below, I have set rabbitTemplate.setChannelTransacted(true); and it makes the call synchronous. The next part of the problem is that if I have a transactional block on the outer method, the call to RabbitTemplate.send() still returns immediately. I expect the transactional block of the outer function to wait for the inner function to return; otherwise I don't get the expected result, as my DB changes are persisted even though setChannelTransacted is true. I tried various transaction propagation levels without success. Please advise whether I am doing anything wrong, and review the transaction propagation settings below.
@Transactional
public void notifyQueueAndDB(DBRequest dbRequest) {
    logger.info("Updating Request in DB");
    dbService.updateRequest(dbRequest);
    // Below is the call to the RabbitMQ library
    mqService.sendMessage(dbRequest); // If sendMessage fails because of a network outage, I want the DB commit to be rolled back as well.
}
MQService is defined in another library of the project; snippet below.
@Transactional(propagation = Propagation.NESTED)
private void sendMessage(......) {
    try {
        ....
        rabbitTemplate.send(this.queueExchange, queueName, amqpMessage);
    } catch (Exception exception) {
        throw exception;
    }
}
Enable transactions so that the send is synchronous, or use publisher confirms and wait for the confirmation to be received.
Either one will be quite a bit slower.
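For reference, a minimal sketch of the first option, assuming Spring AMQP's RabbitTemplate/ConnectionFactory; the configuration class and bean wiring are illustrative, and the publisher-confirms alternative is noted in comments.
import org.springframework.amqp.rabbit.connection.ConnectionFactory;
import org.springframework.amqp.rabbit.core.RabbitTemplate;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

@Configuration
public class RabbitConfig {

    @Bean
    public RabbitTemplate rabbitTemplate(ConnectionFactory connectionFactory) {
        RabbitTemplate template = new RabbitTemplate(connectionFactory);
        // With a transacted channel the publish is committed or rolled back together
        // with the surrounding Spring-managed transaction, and connection/commit
        // failures surface as exceptions instead of being silently dropped.
        template.setChannelTransacted(true);
        return template;
    }

    // Alternative (publisher confirms; Spring AMQP 2.2+ APIs assumed):
    //   cachingConnectionFactory.setPublisherConfirmType(CachingConnectionFactory.ConfirmType.CORRELATED);
    //   CorrelationData correlation = new CorrelationData("some-id");
    //   rabbitTemplate.convertAndSend(exchange, routingKey, payload, correlation);
    //   if (!correlation.getFuture().get(5, TimeUnit.SECONDS).isAck()) {
    //       // treat the send as failed and roll back the business transaction
    //   }
}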

Twisted - success (or failure) callback for LineReceiver sendLine

I'm still trying to master Twisted while in the midst of finishing an application that uses it.
My question is:
My application uses LineReceiver.sendLine to send messages from a Twisted TCP server.
I would like to know if the sendLine succeeded.
I gather that I need to somehow add a success (and error?) callback to sendLine but I don't know how to do this.
Thanks for any pointers / examples
You need to define "succeeded" in order to come up with an answer to this.
All sendLine does immediately (probably) is add some bytes to a send buffer. In some sense, as long as it doesn't raise an exception (eg, MemoryError because your line is too long or TypeError because your line was the number 3 instead of an actual line) it has succeeded.
That's not a very useful kind of success, though. Unfortunately, the useful kind of success is more like "the bytes were added to the send buffer, the send buffer was flushed to the socket, the peer received the bytes, and the receiving application acted on the data in a persistent way".
Nothing in LineReceiver can tell you that all those things happened. The standard solution is to add some kind of acknowledgement to your protocol: when the receiving application has acted on the data, it sends back some bytes that tell the original sender the message has been handled.
You won't get LineReceiver.sendLine to help you much here because all it really knows how to do is send some bytes in a particular format. You need a more complex protocol to handle acknowledgements.
Fortunately, Twisted comes with a few. twisted.protocols.amp is one: it offers remote method calls (complete with responses) as a basic feature. I find that AMP is suitable for a wide range of applications so it's often safe to recommend for new development. It largely supersedes the older twisted.spread (aka "PB") which also provides both remote method calls and remote object references (and is therefore more complex - in my experience, more complex than most applications need). There are also some options that are a bit more standard: for example, Twisted Web includes an HTTP implementation (HTTP, as you may know, is good at request/response style interaction).

Netty SSL mode strange behavior

I am trying to understand why Netty's SSL mode works in this strange way.
The problem is the following: when any SSL client (an HTTPS browser, a Java client using SSL, or any other SSL client application) connects to the Netty server, I initially get the full message and can correctly recognize the protocol used. But for as long as the channel stays connected, the following messages have a strange structure, which does not happen the same way in non-SSL mode.
As an example, in the messageReceived method when an HTTPS browser connects to my server:
I have used PortUnificationServerHandler to switch protocols (without using Netty's HTTP handler; this is just an example, because I use SSL mode for my own protocol too).
The first message is fine: I get the full header, beginning with GET or POST.
Then I send my response...
The second message is only one byte long and contains only "G" or "P".
The third message is the rest, beginning with either "ET" or "OST", followed by the remainder of the HTTP header and body...
Here again follows my response...
The fourth message is again only one byte long and again contains a single byte...
The fifth message is again the rest... and the game goes on this way.
It does not matter which sub-protocol is used, HTTP or anything else: after the first message, I first get one byte and then, in the following message, the rest of the request.
I wanted to build a kind of proxy that takes the SSL data and forwards it unencrypted to another listener. But when I forward it directly, without waiting for the full request, the target listener (an HTTP server, for example) cannot handle such data: if the target first gets only one byte (even if the next message contains the rest), the channel is closed immediately and the request is abandoned.
My first thought was to do the following: cache the first byte temporarily, wait for the next message, join the two, and only then respond. That works fine, but sometimes it is not the correct approach, because that one byte is sometimes really the last byte of the message; if I cache it and wrongly wait for another message, I can wait forever, because the HTTPS browser expects a response at that point and sends no more data.
Now the question: is it possible to fix this problem with SSL? Maybe there are special settings that influence this behavior?
I want the fully joined message at once, as it is, and not first one byte and then the rest.
Can you please confirm that newer Netty versions behave the same way when using PortUnificationServerHandler (but without the Netty HTTP handler; try your own handler)?
Is this behavior really intended? I do not believe it was designed to work this way.
What you're experiencing is likely to be due to the countermeasures against the BEAST attack.
This isn't a problem. What seems to be the problem is that you're assuming that you're meant to read data in terms of messages/packets. This is not the case: TCP (and TLS/SSL) are meant to be used as streams of continuous data. You should keep reading data while data is available. Where to split incoming data where it's meaningful is guided by the application protocol. For HTTP, the indications are the blank line after the header and the Content-Length or chunked transfer encoding for the entity.
If you define your own protocol, you'll need a similar mechanism, whether you use plain HTTP or SSL/TLS. Assuming you don't need it only works by chance.
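To make the framing point concrete, here is a sketch assuming the Netty 4 pipeline API; the 4-byte length prefix is just one possible framing for a custom protocol, and MyProtocolHandler is an illustrative application handler.
import io.netty.channel.ChannelInitializer;
import io.netty.channel.socket.SocketChannel;
import io.netty.handler.codec.LengthFieldBasedFrameDecoder;
import io.netty.handler.ssl.SslContext;

public class FramedPipelineInitializer extends ChannelInitializer<SocketChannel> {

    private final SslContext sslContext; // built elsewhere, e.g. via SslContextBuilder

    public FramedPipelineInitializer(SslContext sslContext) {
        this.sslContext = sslContext;
    }

    @Override
    protected void initChannel(SocketChannel ch) {
        ch.pipeline().addLast("ssl", sslContext.newHandler(ch.alloc()));
        // The frame decoder buffers the decrypted stream and only passes complete
        // frames downstream, no matter how TLS/TCP split the bytes (one byte now,
        // the rest later). For HTTP you would use HttpServerCodec + HttpObjectAggregator instead.
        ch.pipeline().addLast("framer", new LengthFieldBasedFrameDecoder(1048576, 0, 4, 0, 4));
        ch.pipeline().addLast("handler", new MyProtocolHandler()); // illustrative handler, now sees whole messages
    }
}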
I had experienced this issue and found it was caused by using JDK 1.7. Moving back to JDK 1.6 solved it. I did not have time to investigate further, but have assumed for now that the SSLEngine implementation changed in the JDK. I will investigate further when time permits.

How to detect a WCF session crash before calling a contract method?

I am using session mode for my WCF service. The problem is the following: if the session is broken and no longer exists, the client can't know it before calling a contract method.
For example, if the service has been restarted, the client's session id is invalid, because that session has been closed on the server side.
I check the channel state before calling the contract, and its value is CommunicationState.Opened even if the session is already broken. So, when I call the contract after this check, I get a CommunicationException with this message:
The remote endpoint no longer recognizes this sequence. This is most likely due to an abort on the remote endpoint. The value of wsrm:Identifier is not a known Sequence identifier. The reliable session was faulted.
Is there any workaround? I need a way to get an appropriate session state before calling a contract so that I can restore it without getting an exception.
P.S. The CommunicationException type is general, so I can't detect a session crash by catching this exception.
P.P.S. I have asked a similar question here, but in that case I didn't know the reason; now I don't know how to avoid it.
No, there is no workaround: all you can (and should) do is apply proper defensive programming principles so that you can catch and handle those kinds of exceptions as they happen.
If the server crashes or the network goes down, unfortunately, there's no mechanism to inform all potential clients of this case - they'll just find out the next time they try to call.
Update: yes, CommunicationException is just the common base class for all WCF-related exceptions. Check out the MSDN docs for the descendant exceptions you can catch to be more specific: EndpointNotFoundException, FaultException (or FaultException<T>), ProtocolException, and many more!