Why does the TLS heartbeat extension allow user-supplied data?

The heartbeat protocol requires the other end to reply with the same data that was sent to it, to know that the other end is alive. Wouldn't sending a certain fixed message be simpler? Is it to prevent some kind of attack?

At least the size of the packet seems to be relevant because, according to RFC 6520, section 5.1, the heartbeat message will be used with DTLS (i.e. TLS over UDP) for PMTU discovery, in which case it needs messages of different sizes. Apart from that, it might simply be modelled after ICMP ping, where you can also specify the payload content for no particular reason.

Just like with ICMP Ping, the idea is to ensure you can match up a "pong" heartbeat response you received with whichever "ping" heartbeat request you made. Some packets may get lost or arrive out of order and if you send the requests fast enough and all the response contents are the same, there's no way to tell which of your requests were answered.
One might think, "WHO CARES? I just got a response; therefore, the other side is alive and well, ready to do my bidding :D!" But what if the response was actually for a heartbeat request 10 minutes ago (an extreme case, maybe due to the server being overloaded)? If you just sent another heartbeat request a few seconds ago and the expected responses are the same for all (a "fixed message"), then you would have no way to tell the difference.
A timely response is important in determining the health of the connection. From RFC 6520, page 3:
... after a number of retransmissions without
receiving a corresponding HeartbeatResponse message having the
expected payload, the DTLS connection SHOULD be terminated.
By allowing the requester to specify the return payload (and assuming the requester always generates a unique payload), the requester can match up a heartbeat response to a particular heartbeat request made, and therefore be able to calculate the round-trip time, expiring the connection if appropriate.
This of course only really makes sense if you are using TLS over a non-reliable protocol like UDP instead of TCP.
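To make the matching concrete, here is a minimal Java sketch (hypothetical helper names, not part of RFC 6520) of the bookkeeping a requester could do: remember each unique payload together with its send time, then compute the round-trip time when the matching response arrives.

import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ThreadLocalRandom;

public class HeartbeatTracker {
    // Outstanding requests: unique payload token -> send time in nanoseconds.
    private final Map<Long, Long> pending = new ConcurrentHashMap<>();

    long sendHeartbeat() {
        long payload = ThreadLocalRandom.current().nextLong(); // unique payload
        pending.put(payload, System.nanoTime());
        // ... transmit the HeartbeatRequest carrying `payload` here ...
        return payload;
    }

    // On a HeartbeatResponse, match it to the request it answers and
    // compute the round-trip time; stale or unknown payloads are ignored.
    void onHeartbeatResponse(long payload) {
        Long sentAt = pending.remove(payload);
        if (sentAt != null) {
            long rttNanos = System.nanoTime() - sentAt;
            System.out.printf("round-trip time: %.1f ms%n", rttNanos / 1e6);
        }
    }
}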
So why allow the requester to specify the length of the payload? Couldn't it be inferred?
See this excellent answer: https://security.stackexchange.com/a/55608/44094
... seems to be part of an attempt at genericity and coherence. In the SSL/TLS standard, all messages follow regular encoding rules, using a specific presentation language. No part of the protocol "infers" length from the record length.
One gain of not inferring length from the outer structure is that it makes it much easier to include optional extensions afterwards. This was done with ClientHello messages, for instance.
In short, YES, it could have been inferred, but for consistency with the existing format and for future-proofing, the size is spec'd out so that other data can follow in the same message.
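As an illustration of that explicit encoding, here is a hedged Java sketch of parsing an RFC 6520 HeartbeatMessage. The point is that the length is a field of the message itself, and the receiver must still validate it against the actual record size: RFC 6520 requires silently discarding a message whose payload_length is too large.

import java.nio.ByteBuffer;

public class HeartbeatParseSketch {
    // record contains the body of a heartbeat record:
    // 1 byte type, 2 bytes payload_length, payload, >= 16 bytes padding.
    static byte[] parsePayload(ByteBuffer record) {
        int type = record.get() & 0xFF; // heartbeat_request(1) or heartbeat_response(2)
        int payloadLength = record.getShort() & 0xFFFF;
        if (payloadLength + 16 > record.remaining()) {
            return null; // inconsistent length: discard the message silently
        }
        byte[] payload = new byte[payloadLength];
        record.get(payload);
        return payload;
    }
}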

What happens to client message if Server does not exist in UDP Socket programming?

I ran client.java, but when I filled in the form and pressed the send button, it jammed and I could not do anything.
Is there any explanation for this?
TL;DR: the User Datagram Protocol (UDP) is "fire-and-forget".
Unreliable – When a UDP message is sent, it cannot be known if it will reach its destination; it could get lost along the way. There is no concept of acknowledgment, retransmission, or timeout.
So if a UDP message is sent and nobody listens then the packet is just dropped. (UDP packets can also be silently dropped due to other network issues/congestion.)
While there could be a prior error, such as failing to resolve the server's IP (e.g. an invalid hostname) or attempting to use an invalid IP, once the UDP packet is out the door, it's out the door and is considered "successfully sent".
Now, if a program is waiting on a response that never comes (i.e. the server is down or the packet was "otherwise lost"), then that could be... problematic.
That is, this code which requires a UDP response message to continue would "hang":
sendUDPToServerThatNeverResponds();
// There is no guarantee the server will get the UDP message,
// much less that it will send a reply or the reply will get back
// to the client..
waitForUDPReplyFromServerThatWillNeverCome();
Since UDP has no reliability guarantee or retry mechanism, this must be handled in code. For example, in the above maybe the code would wait for 1 second and retry sending a packet, and after 5 seconds of no responses it would report an error to the client.
i = 0;
reply = null;
while (i++ < 5) {
    sendUDPToServerThatMayOrMayNotRespond();
    reply = waitForUDPReplyForOneSecond();
    if (reply != null)
        break;
}
if (reply != null)
    doSomethingAwesome();
else
    showErrorToUser();
Of course, "just using TCP" can often make these sorts of tasks simpler due to the stream and reliability characteristics that the Transmission Control Protocol (TCP) provides. For example, the pseudo-code above is not very robust, as the client must also be prepared to handle latent/slow UDP packet arrival from previous requests.
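For comparison, here is a minimal self-contained Java version of that retry loop using a real per-attempt socket timeout (the hostname, port, and payload are made up):

import java.net.DatagramPacket;
import java.net.DatagramSocket;
import java.net.InetAddress;
import java.net.SocketTimeoutException;

public class UdpRetryClient {
    public static void main(String[] args) throws Exception {
        byte[] request = "ping".getBytes();
        byte[] buffer = new byte[512];
        try (DatagramSocket socket = new DatagramSocket()) {
            socket.setSoTimeout(1000); // wait at most 1 second per attempt
            InetAddress server = InetAddress.getByName("example.com"); // hypothetical server
            for (int attempt = 0; attempt < 5; attempt++) {
                socket.send(new DatagramPacket(request, request.length, server, 9999));
                try {
                    DatagramPacket reply = new DatagramPacket(buffer, buffer.length);
                    socket.receive(reply); // blocks until a reply arrives or the timeout fires
                    System.out.println("got reply: " + new String(reply.getData(), 0, reply.getLength()));
                    return; // doSomethingAwesome() would go here
                } catch (SocketTimeoutException e) {
                    // no reply within 1 second; loop around and resend
                }
            }
            System.err.println("no response after 5 attempts"); // showErrorToUser()
        }
    }
}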
(Also, given the current "screenshot", the code might be as flawed as while(true) {} - make sure to provide an SSCCE and relevant code with questions.)

Details of SSL communication

I have been trying to find out exactly how SSL works, and have found descriptions of the packet sequence that starts the conversation, but not how requests are processed. Here is a link to an example showing the initial handshake:
https://www.eventhelix.com/RealtimeMantra/Networking/SSL.pdf
Once communication is established, both sides are sharing a private session key which is updated every so often. I would like to know details on:
I assume that an attacker cannot just replicate an observed packet and execute it multiple times? This is a so-called replay attack.
How is encryption done using AES-256? If both sides simply applied the algorithm, then a replay attack would work. So I assume there is some kind of chaining so that each packet uses different encryption.
The session key switches every interval (like once every 30 or 60 minutes). How does this exchange work? What messages are exchanged, and what happens if a message is sent before the exchange but arrives after the switch?
The most recent underlying mechanism is TLS 1.2. Is this the same for SSL and SSH, or are the two protocols different?
An explanation is always good, but a link to relevant documentation would also be extremely helpful. If these interlocking parts are too much, I can split them out into separate questions, but there is a lot of overlapping information in the sections above.
I assume that an attacker cannot just replicate an observed packet and execute it multiple times? This is a so-called replay attack.
Correct. TLS is immune to replay attacks.
How is encryption done using AES-256? If both sides simply applied the algorithm, then a replay attack would work. So I assume there is some kind of chaining so that each packet uses different encryption.
There is chaining, and sequence numbers, and also a MAC for each message.
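As a rough illustration of why sequence numbers defeat replays: each record's MAC is computed over an implicit per-connection sequence number as well as the data, so a replayed record no longer matches the receiver's expected sequence number and fails verification. This Java sketch is simplified; the real TLS 1.2 MAC input also covers the record type, version, and length fields.

import javax.crypto.Mac;
import javax.crypto.spec.SecretKeySpec;
import java.nio.ByteBuffer;

public class RecordMacSketch {
    // Simplified: MAC over (64-bit sequence number || fragment).
    static byte[] mac(byte[] macKey, long seqNum, byte[] fragment) throws Exception {
        Mac hmac = Mac.getInstance("HmacSHA256");
        hmac.init(new SecretKeySpec(macKey, "HmacSHA256"));
        hmac.update(ByteBuffer.allocate(8).putLong(seqNum).array());
        return hmac.doFinal(fragment);
    }
}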
The session key switches every interval (like once every 30 or 60 minutes). How does this exchange work? What messages are exchanged
Another ClientHello with the same session ID, and an 'abbreviated handshake' after that, which changes the session key. Details are in RFC 5246 (TLS 1.2).
and what happens if a message is sent before the exchange but arrives after the switch?
The switch is by mutual agreement, and any message that arrives before the same sender's ChangeCipherSpec message is decrypted under the old parameters.
The most recent underlying mechanism is TLS 1.2. Is this the same for SSL and SSH, or are the two protocols different?
They are different.

Handling Telnet negotiation

I'm trying to implement a Telnet client using C++ and Qt for the GUI.
I have no idea how to handle the telnet negotiations.
Every telnet command is preceded by IAC, e.g.
IAC WILL SUPPRESS_GO_AHEAD
The following is how I handle the negotiation:
Search for the IAC character in the received buffer
According to the command and option, respond to the request
My questions are as follows:
It seems that the telnet server won't wait for a client response after a negotiation command is sent,
e.g. (it sends two or more commands without waiting for a client response):
IAC WILL SUPPRESS_GO_AHEAD
IAC WILL ECHO
How should I handle such a situation? Handle both requests or just the last one?
What would the option values be if I don't respond to the requests? Are they set to defaults?
Why isn't the IAC character (255) treated as data instead of a command?
Yes, it is allowed to send out several negotiations for different options without synchronously waiting for a response after each of them.
Actually, it's important for each side to try to continue (possibly after some timeout, if you did decide to wait for a response) even if it didn't receive a reply. There are legitimate situations according to the RFC in which there shouldn't or mustn't be a reply, and the other side might also simply ignore the request for whatever reason; you don't have control over that.
You need to consider both negotiation requests the server sent, as they are both valid requests (you may choose to deny one or both, of course).
I suggest you handle both of them (whatever "handling" means in your case) as soon as you notice them, so as not to risk getting the server stuck if it decides to wait for your replies.
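Here is a minimal sketch of such handling, in Java for illustration even though the question uses C++/Qt, assuming a client that simply refuses every option the server proposes; a real client would consult a table of options it actually supports.

import java.io.IOException;
import java.io.InputStream;
import java.io.OutputStream;

public class NegotiationSketch {
    static final int IAC = 255, DONT = 254, DO = 253, WONT = 252, WILL = 251;

    // Call for every byte read from the server; negotiation bytes are
    // consumed here, anything else is plain data for the application.
    static void handleByte(int b, InputStream in, OutputStream out) throws IOException {
        if (b != IAC) {
            return; // plain data byte; pass through to the application
        }
        int command = in.read();
        if (command == WILL || command == WONT || command == DO || command == DONT) {
            int option = in.read();
            if (command == WILL) reply(out, DONT, option);    // refuse: "please don't"
            else if (command == DO) reply(out, WONT, option); // refuse: "we won't"
            // WONT/DONT need no reply while the option is already disabled
        }
    }

    static void reply(OutputStream out, int command, int option) throws IOException {
        out.write(IAC);
        out.write(command);
        out.write(option);
        out.flush();
    }
}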
One possible way to go about it is presented by Daniel J. Bernstein in RFC 1143. It uses a finite state machine (FSM) and is quite robust against negotiation loops.
A compliant server (the same goes for a compliant client) defaults all negotiable options to WON'T and DON'T (i.e. disabled) at the start of the connection and doesn't consider them enabled until a request for DO or WILL was acknowledged by a WILL or DO reply, respectively.
Not all servers (or clients for that matter) behave properly, of course, but you cannot anticipate all ways a peer might misbehave, so just assume that all options are disabled until enabling them was requested and the reply was positive.
I'll assume here that what you're actually asking is how the server is going to send you a byte of 255 as data without you misinterpreting it as an IAC control sequence (and vice versa, how you should send a byte of 255 as data to the server without it misinterpreting it as a telnet command).
The answer is simply that instead of a single byte of 255, the server (and your client in the opposite direction) sends IAC followed by another byte of 255, so in effect doubling all values of 255 that are part of the data stream.
Upon receiving an IAC followed by 255 over the network, your client (and the server in the opposite direction) must replace that with a single data byte of 255 in the data stream it returns.
This is also covered in RFC 854.
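A short sketch of the escaping on the sending side (the receiving side does the reverse, collapsing IAC IAC back into one data byte of 255):

import java.io.ByteArrayOutputStream;

public class TelnetIac {
    static final int IAC = 255;

    // Double every 0xFF byte in outgoing data so the peer does not
    // mistake it for the start of a command sequence.
    static byte[] escape(byte[] data) {
        ByteArrayOutputStream out = new ByteArrayOutputStream();
        for (byte b : data) {
            out.write(b);
            if ((b & 0xFF) == IAC) {
                out.write(b); // IAC IAC on the wire -> one data byte 255
            }
        }
        return out.toByteArray();
    }
}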

Immediate flag in RabbitMQ

I have clients that use an API. The API sends messages to RabbitMQ, and RabbitMQ passes them to workers.
I need to reply to clients if something went wrong: the message wasn't routed to a certain queue, or wasn't picked up for processing in time (full confirmation).
A task that starts only after 5-10 seconds makes no sense.
Accordingly, I must use the mandatory and immediate flags.
I can't increase the number of workers, and I can't run workers on other servers. That's a requirement.
As far as I can find, the immediate flag has not been supported since RabbitMQ 3.0.
The RabbitMQ developers suggest using TTL=0 for a queue instead, but then I won't be able to check the status of a message.
Is there any way to change that behavior? Please share your experience of how you have solved problems like this.
Thank you.
I'm not sure, but after reading your original question in Russian, it might be that using both publisher and consumer confirms is what you want. See the last three paragraphs in this answer.
As you want to get the message result for a published message from your worker, it looks like the RPC pattern is what you want. See the RabbitMQ RPC tutorial. Pick the programming-language section there that you are most comfortable with; the overall concept is the same. You may also find Direct reply-to useful.
It's not the same as the immediate flag's functionality, but if all your publishers operate with the immediate scenario, it may be that the AMQP protocol is not the best choice for this kind of task. Immediate means "deliver this message right now or burn in hell", and there may be situations where you publish more than you can process. In such cases RPC plus a response timeout may be a good choice on the application side (e.g. a socket timeout). But that doesn't work well for non-idempotent RPC calls while the message is still being processed, so you may want to use a per-queue or per-message TTL (or set a queue length limit). If a message is dead-lettered, you can pick it up there (in case you need that for some reason).
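For the TTL route mentioned above, a hedged sketch with the RabbitMQ Java client (queue and exchange names are made up): a per-queue TTL of 0 expires messages that cannot be delivered to a consumer immediately, and a dead-letter exchange gives you somewhere to inspect them afterwards.

import com.rabbitmq.client.Channel;
import java.util.HashMap;
import java.util.Map;

public class ZeroTtlQueueSketch {
    static void declare(Channel channel) throws Exception {
        Map<String, Object> args = new HashMap<>();
        args.put("x-message-ttl", 0);              // expire on arrival if not deliverable
        args.put("x-dead-letter-exchange", "dlx"); // hypothetical dead-letter exchange
        channel.queueDeclare("task_queue", true, false, false, args);
    }
}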
TL;DR
As to how "something" can go wrong, it can do so at different levels, which for simplicity we can define as:
before RabbitMQ: sending-application failures and network problems;
inside RabbitMQ: say, a missing destination queue, a message timeout, a queue length limit, or some hard and unexpected internal error;
after RabbitMQ: in most cases an error in the message-processing application, or an outage in a third-party service such as the data-persistence or caching layer.
Some errors, like a network outage or hardware failure, are a bit epic and are not the subject of this Q&A.
The typical scenario for guaranteed message delivery is to use publisher confirms or transactions (which are slower). Once you get a confirm, it means that RabbitMQ got your message and, if a route exists, placed it in a queue; if not, it is dropped OR, if the mandatory flag is set, it is returned with the basic.return method.
For consumers it's similar: after the client acks a message received via basic.consume/basic.get, the message is considered received and is removed from the queue.
So when you use confirms on both ends, you are protected from message loss (assuming we don't run into some bug in RabbitMQ itself).
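A minimal sketch of that setup with the RabbitMQ Java client (host, queue name, and timeout are placeholders): publisher confirms plus the mandatory flag, with a return listener to catch unroutable messages.

import com.rabbitmq.client.Channel;
import com.rabbitmq.client.Connection;
import com.rabbitmq.client.ConnectionFactory;
import com.rabbitmq.client.MessageProperties;

public class ConfirmedPublish {
    public static void main(String[] args) throws Exception {
        ConnectionFactory factory = new ConnectionFactory();
        factory.setHost("localhost");
        try (Connection conn = factory.newConnection();
             Channel channel = conn.createChannel()) {
            channel.confirmSelect(); // enable publisher confirms on this channel

            // Unroutable mandatory messages come back via basic.return.
            channel.addReturnListener((replyCode, replyText, exchange, routingKey, props, body) ->
                    System.err.println("returned as unroutable: " + replyText));

            boolean mandatory = true;
            channel.basicPublish("", "task_queue", mandatory,
                    MessageProperties.PERSISTENT_TEXT_PLAIN, "hello".getBytes());

            channel.waitForConfirmsOrDie(5000); // broker must ack within 5 s or this throws
        }
    }
}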
Bogdan, thank you for your reply.
It seems I didn't express my thoughts clearly enough.
The scheme may look like this. Each component of the system must do what it is supposed to do :)
The idea is to make every component simpler.
How a task is performed:
Clients go to the HTTP API with requests and must obtain a response like this:
Positive - it has been put into the queue
Negative - a response with an error and the reason
When I was talking about confirmation, I meant that I must know that a message was delivered (if there are no free workers, RabbitMQ can remove the message), and the client must be notified.
If a sent message couldn't be delivered to a certain queue, the client must be notified.
How a message is handled:
Messages are sent for processing.
The status of the processing is written into HeartBeat.
Status:
Clients obtain the status from HeartBeat themselves and then decide what they have to do.
I'm not sure that RPC would be useful for us, i.e. RPC means that clients must wait for a response from the server. Tasks may run for a long time. It is an excess binding between clients and servers, with additional logic on the client side.
A limited queue size may not be useful either.
There is a possible situation where the size of the queue is greater than the number of workers (a problem in the configuration or the defined settings).
Then the idea with 5-10 seconds doesn't make sense.
TTL isn't useful because of:
Setting the TTL to 0 causes messages to be expired upon reaching a
queue unless they can be delivered to a consumer immediately. Thus
this provides an alternative to basic.publish's immediate flag, which
the RabbitMQ server does not support. Unlike that flag, no
basic.returns are issued, and if a dead letter exchange is set then
messages will be dead-lettered.
Direct reply-to:
The RPC server will then see a reply-to property with a generated
name. It should publish to the default exchange ("") with the routing
key set to this value (i.e. just as if it were sending to a reply
queue as usual). The message will then be sent straight to the client
consumer.
Then I will not be able to route messages.
So, I'm sorry if I'm floundering with the terminology; I'm new to AMQP and RabbitMQ.

About losing HTTP Requests

I have a server to which my client sends an HTTP GET request with some values. The server on its end simply stores these values in a database.
Now, I am observing that sometimes these values do not show up in the database. One of the following could have happened:
The client never sent it
The server never received it
The server failed in writing to the database
My strongest suspicion is reason 2, but I am unable to explain it completely. Since this is an HTTP request (which means there is TCP underneath), reliable delivery of the GET request should be guaranteed, right? Is it possible that even though I send a GET request to the server, it was never received by the server? If yes, what is TCP doing there?
Or, can I confidently assert that if the server is up and running and everything sent to the server is written to the database, then the absence of the details of the GET request in the database means the client never sent it?
Not sure if the details will help, but I am running a Tomcat server and I am just sending a name-value pair through the GET request.
There are a few things you seem to be missing. First of all, yes, if TCP finishes successfully, you pretty much have a guarantee that your message (i.e. the TCP payload) has reached the other side: TCP ensures that it will take care of lost packets and the order in which packets arrive. However, this is not universally foolproof, as there are still things beyond the powers of TCP (think of a physical disconnect from cutting through an Ethernet cable). There is also no assertion regarding the syntactic correctness of the protocol "above". Any check beyond delivering a bit-perfect copy is simply not TCP's concern.
So, there is a chance that the requests issued by your client are faulty, or that they are indeed correct but not parsed correctly by your server. The former strikes me as more likely than the latter, as Tomcat is a very mature piece of software. I think it would help tremendously if you recorded and analysed some of your generated traffic with e.g. Wireshark.
You do not really mention which database you have in use, but some sacrifice ACID compliance in favour of increased write speed. The nature of these databases means that you can never be really sure whether something actually got written to disk or is still residing in some buffer in memory. Should you happen to use such a DB, this would be another line of investigation.
Programmatically, I advise you to take the following steps when dealing with HTTP traffic (a sketch follows this list):
Did writing to the socket finish without error?
Could a response be read from the socket?
Does the response carry a code in the 2xx range (indicating a successful operation)?
If any of these fail, you should really log something.
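A minimal Java sketch of those three checks with HttpURLConnection (the URL is hypothetical; POST is used here, anticipating the next point):

import java.io.IOException;
import java.io.OutputStream;
import java.net.HttpURLConnection;
import java.net.URL;
import java.nio.charset.StandardCharsets;

public class LoggedRequest {
    public static void main(String[] args) throws IOException {
        URL url = new URL("http://example.com/store"); // hypothetical endpoint
        HttpURLConnection conn = (HttpURLConnection) url.openConnection();
        conn.setRequestMethod("POST");
        conn.setDoOutput(true);
        byte[] body = "name=value".getBytes(StandardCharsets.UTF_8);
        try (OutputStream out = conn.getOutputStream()) {
            out.write(body); // check 1: did writing finish without error?
        }
        int code = conn.getResponseCode(); // check 2: could a response be read?
        if (code < 200 || code >= 300) {   // check 3: is it in the 2xx range?
            System.err.println("request failed with HTTP " + code);
        }
    }
}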
On a related note, what you are doing there does not call for the GET method but for POST, as you are changing application state. Consider it a nice-to-have ;)
Without knowing the specifics, you can break it down into two parts: the HTTP request and the DB write. The client will receive a 200 OK response from the server when its GET request has been handled. I've written code under Tomcat to connect to a MySQL DB using a DAO. In the case of a failure, an exception would be thrown and logged. Whichever method you're using, you'll want to figure out how failures are logged.