How to answer an invalid ISO8583 message

Our system receives ISO 8583 messages which we decode and handle appropriately. Now we are getting invalid ISO messages in between which our system can't handle. In fact, it sends nothing in return. This causes a timeout on the other side. As a consequence, the (invalid) transaction is reverted, which then causes quite a mess, as there is nothing to be reverted.
Can anyone give me a clue on how to deal with/answer an invalid/undecodable ISO8583 message? Is there a standard answer (e.g. 'NAK' like)?

According to the ISO 8583 spec, 6XX-class messages (or 16XX, if you're using the 1993 version) are appropriate for administrative notifications. Generally, a 644 or 1644 MTI is prescribed for notifying the sender of a problem processing a message, where
X6XX - Indicates an administrative message, often containing the details of a failure
XX4X - Indicates that the message is a notification; the sender is not to repeat the message
XXX4 - Indicates the source of the message (acquirer/issuer/other); here, it's Other
Putting it all together, your message should have at least the following fields:
MTI: 1644
DE-24 (Function code): 650 (Unable to parse message)
Of course, you should also include the standard message identification fields: DE-7, 11, 12, and 39. These fields will be necessary for the message sender to match your response with the request.
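For illustration, here is a minimal sketch of building such a reject notification, assuming the jPOS library (ISOMsg/ISOChannel); the exact function code and which fields to echo depend on the interchange specification you are implementing:

```java
import java.io.IOException;

import org.jpos.iso.ISOChannel;
import org.jpos.iso.ISOException;
import org.jpos.iso.ISOMsg;

// Sketch only: send a 1644 "unable to parse" notification via jPOS.
// Echo DE-7/DE-11 from the request when they are readable.
public class ParseFailureResponder {
    void rejectUnparseable(ISOChannel channel, String transmissionDateTime, String stan)
            throws IOException, ISOException {
        ISOMsg reject = new ISOMsg();
        reject.setMTI("1644");                     // administrative (6) notification (4), origin "other" (4)
        reject.set(24, "650");                     // DE-24 function code: unable to parse message
        if (transmissionDateTime != null)
            reject.set(7, transmissionDateTime);   // DE-7, echoed from the request
        if (stan != null)
            reject.set(11, stan);                  // DE-11 STAN, echoed from the request
        channel.send(reject);
    }
}
```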

I don't think there is a standard way of handling invalid ISO 8583 request messages. You didn't say why you are receiving invalid request messages, and without knowing that it is difficult to suggest how you should handle them.
Depending on the situation it may be best not to answer invalid ISO 8583 requests. In fact I know of systems that not only don't answer invalid request messages but will also blacklist the device that sent the invalid message and refuse to answer all other messages from it.
If you do decide not to respond to invalid request messages, then, as you have found out, the client is likely to time out and then attempt to reverse the transaction. This is not usually a problem, because servers will usually respond with an approval message to a reversal request for a transaction that doesn't exist. Remember that when a client times out after sending a request, it doesn't know whether the request was processed or even whether it was received. So a server has to be prepared to handle both:
1. a request that was received and processed (by undoing the transaction and then responding with an approval), and
2. a request that was never received/processed (by responding with an approval).
NOTE: In case 2 there is no need to undo the transaction, because the transaction never took place.
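A minimal sketch of that server-side rule (plain Java; the transaction store and the Transaction/Response types are hypothetical stand-ins for a real persistence layer):

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Sketch of the reversal rule described above.
class ReversalHandler {
    record Transaction(String stan, long amount) {}
    enum Response { APPROVED }

    private final Map<String, Transaction> completed = new ConcurrentHashMap<>();

    Response handleReversal(String originalStan) {
        Transaction original = completed.remove(originalStan);
        if (original != null) {
            undo(original);  // case 1: the original was processed, so undo it
        }
        // Case 2 (original never received/processed) falls through: there is
        // nothing to undo, but the client still gets an approval so it can
        // stop retrying.
        return Response.APPROVED;
    }

    private void undo(Transaction t) {
        // reverse the financial effect of t in the system of record
    }
}
```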

From my experience with integrating ISO links, invalid ISO messages are usually, by industry standard, handled by the acquiring host dropping the connection, followed by an angry mail from the acquirer's service provider accusing you of segfaulting their mainframe.
Other than that, different implementations, when implemented well, will handle invalid messages differently: from what @kolossus said in case the parser fails completely, to a normal xx10 response with a specific response code, such as RC 12 "Invalid transaction", when just some subfields don't make sense (such as problems with the packaging of complex subfields with tokens, track2 parsing, etc.).
The practical reason why @kolossus's solution doesn't really make sense, and why Stuard has a point, is this: if the client has issues forming ISO messages, then it almost certainly has problems parsing them too, so another ISO message doesn't really tell the client anything except provoking a parser exception on its side as well.
The end result will be the same - a technical reversal by the client, just not after a timeout. Basically, with ISO 8583, the best way to handle invalid messages is not to have them; there's no clean way.
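A sketch of that two-tier handling (ParsedMessage, Response, and the parse/process methods are hypothetical placeholders for whatever ISO 8583 stack you use):

```java
// Sketch: a total parse failure gets no functional reply (or the 1644
// shown earlier), while a structurally valid message with bad subfields
// gets a normal response carrying a reject code such as RC 12.
class RequestDispatcher {
    interface ParsedMessage { boolean subfieldsValid(); }
    interface Response {}

    Response handle(byte[] raw) {
        ParsedMessage msg;
        try {
            msg = parse(raw);                   // full parse of the ISO message
        } catch (IllegalArgumentException e) {
            return null;                        // undecodable: no functional reply
        }
        if (!msg.subfieldsValid()) {            // e.g. broken track2 or token data
            return rejectWithCode(msg, "12");   // normal xx10 reply, RC 12 "Invalid transaction"
        }
        return process(msg);
    }

    // placeholders for the real stack
    ParsedMessage parse(byte[] raw) { throw new IllegalArgumentException("stub"); }
    Response rejectWithCode(ParsedMessage m, String rc) { return null; }
    Response process(ParsedMessage m) { return null; }
}
```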

Related

Message versioning in RabbitMQ / AMQP?

What is the recommended way to deal with message versioning? The main schools of thought appear to be:
Always create a new message class as the message structure changes
Never use (pure) serialized objects as a message. Always use some kind of version header field and a byte stream body field. In this way, the receiver can always accept the message and check the version number before attempting to read the message body.
Never use binary serialized objects as a message. Instead, use a textual form such as JSON. In this way, the receiver can always accept the message, check the version number, and then (when possible) understand the message body.
As I want to keep my messages compact, I am considering using Google Protocol Buffers, which would allow me to satisfy both 2 and 3.
However, I am interested in real-world experiences and advice on how to handle the versioning of messages as their structure changes.
In this case "version" will be basically some metadata about the message, And these metadata are some instruction/hints to the processing algorithm. So I willsuggest to add such metadata in the header (outside of the payload), so that consumer can read the metadata first before trying to read/understand and process the message payload. For example, if you keep the version info in the payload and due to some reason your (message payload is corrupted) then algorithm will fail parse the message, then it can not event reach the metadata you have put there.
You may consider having both the version and the type info of the payload in one header.
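For example, a minimal sketch with the RabbitMQ Java client (the queue name and header keys are arbitrary example values):

```java
import java.util.HashMap;
import java.util.Map;

import com.rabbitmq.client.AMQP;
import com.rabbitmq.client.Channel;
import com.rabbitmq.client.Connection;
import com.rabbitmq.client.ConnectionFactory;

// Sketch: carry version and type metadata in AMQP message headers so a
// consumer can inspect them before attempting to decode the payload.
public class VersionedPublisher {
    public static void main(String[] args) throws Exception {
        ConnectionFactory factory = new ConnectionFactory();
        try (Connection conn = factory.newConnection();
             Channel channel = conn.createChannel()) {
            channel.queueDeclare("events", true, false, false, null);

            Map<String, Object> headers = new HashMap<>();
            headers.put("schema-version", 2);            // read this first...
            headers.put("payload-type", "OrderCreated");

            AMQP.BasicProperties props = new AMQP.BasicProperties.Builder()
                    .headers(headers)
                    .build();

            byte[] body = serializeSomehow();            // ...before decoding this
            channel.basicPublish("", "events", props, body);
        }
    }

    static byte[] serializeSomehow() {
        return new byte[0]; // stand-in for your protobuf/JSON encoding
    }
}
```

On the consume side you would read schema-version and payload-type from the delivery's properties and pick a decoder before touching the body.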

JSON:API HTTP status code for duplicate content creation avoidance

Suppose I have an endpoint that supports creating new messages. I am avoiding creating the same message twice in the backend, in case the user tries to push the button twice (or in case the frontend app behaves strangely).
Currently, for the duplicate action, my server responds with a 303 See Other pointing to the previously created resource URL. But I see I could also use a 302 Found. Which one seems more appropriate?
Note that the duplicate avoidance strategy can be more complex (e.g. for an appointment we would check whether the POSTed appointment is within one hour of an existing one).
I recommend using HTTP Status Code 409: Conflict.
The 3XX family of status codes is generally used when the client needs to take additional action, such as following a redirect, to complete the request. More generally, status codes communicate back to the client what actions they need to take or provide them with necessary information about the request.
Generally, for these kinds of "bad" requests (such as repeated requests failing due to duplication), you would respond with a 400-class status code to indicate to the client that there was an issue with their request and that it was not processed. You can use the response body to communicate the issue more precisely.
Also consider: if the request is just "fire and forget" from the client, then as long as you've handled the duplication case and no further behavior is needed from the client, it might be acceptable to send a 200 response. This tells the client "the request was received and handled appropriately; there is nothing more you need to do." However, this is a bit deceptive, as it does not surface the error to the client or allow for any modified behavior.
The JSON:API specification defines:
A server MUST return 409 Conflict when processing a POST request to create a resource with a client-generated ID that already exists.
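For illustration, a minimal sketch of the 409 behaviour, assuming Spring Web (the duplicate check is deliberately simplistic, keyed on the raw content):

```java
import java.net.URI;
import java.util.Map;
import java.util.UUID;
import java.util.concurrent.ConcurrentHashMap;

import org.springframework.http.HttpStatus;
import org.springframework.http.ResponseEntity;
import org.springframework.web.bind.annotation.PostMapping;
import org.springframework.web.bind.annotation.RequestBody;
import org.springframework.web.bind.annotation.RestController;

// Sketch: answer 409 Conflict when the posted content duplicates an
// existing message, pointing the client at the existing resource.
@RestController
public class MessageController {
    private final Map<String, String> idByContent = new ConcurrentHashMap<>();

    @PostMapping("/messages")
    public ResponseEntity<?> create(@RequestBody String content) {
        String id = UUID.randomUUID().toString();
        String existingId = idByContent.putIfAbsent(content, id); // atomic check-and-create
        if (existingId != null) {
            // Duplicate: report the conflict and where the original lives.
            return ResponseEntity.status(HttpStatus.CONFLICT)
                    .body(Map.of("error", "duplicate message",
                                 "existing", "/messages/" + existingId));
        }
        return ResponseEntity.created(URI.create("/messages/" + id)).build();
    }
}
```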

Multiple acknowledge for the same delivery tag

In my project I saw that there is a chance of acknowledging the same delivery tag twice. When this happens, the consumer gets unbound from the queue and no further messages come to the consumer (Observed using the RabbitMQ management dashboard).
How can I check that a given delivery tag has already been acknowledged? Is there a recommended way to handle such scenario using the RabbitMQ API?
I tried to avoid acknowledging twice in my code but unfortunately it is not possible due to some design issues.
The AMQP protocol reference is pretty clear about this:
A message MUST not be acknowledged more than once. The receiving peer MUST validate that a non-zero delivery-tag refers to a delivered message, and raise a channel exception if this is not the case. ...
A quick test reveals that, at least in current versions, this does not cause a consumer to stop working, but that behavior might be implementation-dependent.
In short, you would have to review your design to avoid this situation.
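If the call sites truly can't be untangled, a guard on the consumer side can at least make the ack idempotent. A sketch with the RabbitMQ Java client (the class name is made up; note the set of seen tags grows until pruned, e.g. on channel close):

```java
import java.io.IOException;
import java.util.Set;
import java.util.concurrent.ConcurrentHashMap;

import com.rabbitmq.client.AMQP;
import com.rabbitmq.client.Channel;
import com.rabbitmq.client.DefaultConsumer;
import com.rabbitmq.client.Envelope;

// Sketch: remember which delivery tags were already acked so that a second
// ack of the same tag becomes a no-op instead of a protocol violation.
public class IdempotentAckConsumer extends DefaultConsumer {
    private final Set<Long> acked = ConcurrentHashMap.newKeySet();

    public IdempotentAckConsumer(Channel channel) {
        super(channel);
    }

    @Override
    public void handleDelivery(String consumerTag, Envelope envelope,
                               AMQP.BasicProperties properties, byte[] body)
            throws IOException {
        long tag = envelope.getDeliveryTag();
        process(body);
        safeAck(tag);
        safeAck(tag); // the second call is silently ignored
    }

    private void safeAck(long tag) throws IOException {
        if (acked.add(tag)) {                  // true only on the first call
            getChannel().basicAck(tag, false);
        }
    }

    private void process(byte[] body) { /* application logic */ }
}
```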

Can a CAN message have two reliable recipients?

In my situation multiple modules report their state over a CAN bus to a central processor, which replies and drives them. There's also a supervising processor, which listens in on the CAN bus and analyzes incoming messages from the modules for critically dangerous situations (two different modules reporting activating outputs which are absolutely forbidden from being activated simultaneously).
This all works okay as long as the CAN bus is noise-free.
The CAN bus guarantees that a message is received: the message will be resent if no recipient confirms receiving it. The problem begins if there's more than one recipient and all of them absolutely must receive the message.
If the line is clean, both receive it, confirm it, and everything is okay.
If the message is badly damaged, neither will receive it, and it will be resent. That's okay.
But if the noise on the line is "just on the brink", one of them will receive it and confirm it, while the other fails to receive it (the noise on its end of the bus being just minimally worse); and since the sender got the confirmation, the message won't be resent.
Is there a reliable way to assure two different recipients of a message both receive it? ...other than sending two messages with two addresses, specifically? (it's essential that the supervising CPU hears the same messages as the main CPU, not just similar)
There is no way at the CAN layer to detect receipt by more than one module. You would need to add messages to your communication protocol to confirm receipt if this is absolutely critical. As mentioned, you could have each module receive the same message and send a unique reply.
Some general thoughts:
1) Are the important messages broadcast periodically? If so, the recipient can check that the periodicity of the message is correct and fail safely if the period is violated (see the sketch after this list).
2) CAN is a very robust network. In my many years, I have not seen noise affect a single node as you describe, other than when the node was at the end of an exceedingly (and irrationally) long wire. You are correct to worry about this scenario and to design your message format and system to be robust to all CAN failures. Generally, when safety or reliability was paramount, we would have more than one CAN bus communicating the information, along with a number of crosscheck messages to verify not only that the path was intact but that the device on the other end was operating intelligently. Our general assumption was that if crosscheck messages were making the trip, then our operational messages were making the trip successfully as well.
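A sketch of the periodicity check from point 1 (plain Java bookkeeping; the period and tolerance are example values):

```java
// Sketch: the receiver fails safe when an expected periodic message is
// overdue, covering the "one node missed the frame" case.
class PeriodicMessageWatchdog {
    private static final long PERIOD_MS = 100;    // expected broadcast period
    private static final long TOLERANCE_MS = 50;  // allowed jitter

    private volatile long lastSeenMs = System.currentTimeMillis();

    // Call this from the CAN receive path for the monitored message ID.
    void onMessageReceived() {
        lastSeenMs = System.currentTimeMillis();
    }

    // Poll this periodically; enter a safe state if the message is overdue.
    boolean isOverdue() {
        return System.currentTimeMillis() - lastSeenMs > PERIOD_MS + TOLERANCE_MS;
    }
}
```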
Obviously not. It fails even in the simple case that one receiver is shut down: there is no way for the master to detect this (for this single packet). You would need an extended CAN with more acknowledge slots, one slot per recipient.
But you could require that each recipient confirm the message with a unique response message. Your master can then detect, by a timeout, that not all recipients received the message.
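A sketch of that confirmation scheme on the master's side (plain Java modelling only the bookkeeping; recipient IDs and the timeout are illustrative):

```java
import java.util.Set;
import java.util.concurrent.ConcurrentHashMap;

// Sketch: after sending a message, the master expects one unique
// confirmation frame per recipient within a deadline; anything still
// outstanding after the deadline triggers a resend or a fail-safe.
class ConfirmedBroadcast {
    private final Set<Integer> outstanding = ConcurrentHashMap.newKeySet();
    private final long deadlineMs;

    ConfirmedBroadcast(Set<Integer> recipientIds, long timeoutMs) {
        outstanding.addAll(recipientIds);
        deadlineMs = System.currentTimeMillis() + timeoutMs;
    }

    // Call when a recipient's confirmation frame arrives on the bus.
    void onConfirmation(int recipientId) {
        outstanding.remove(recipientId);
    }

    boolean allConfirmed() {
        return outstanding.isEmpty();
    }

    // true -> at least one recipient never confirmed: resend or fail safe
    boolean timedOut() {
        return System.currentTimeMillis() > deadlineMs && !outstanding.isEmpty();
    }
}
```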

Why does the TLS heartbeat extension allow user supplied data?

The heartbeat protocol requires the other end to reply with the same data that was sent to it, to know that the other end is alive. Wouldn't sending a certain fixed message be simpler? Is it to prevent some kind of attack?
At least the size of the packet seems to be relevant, because according to RFC 6520, section 5.1, the heartbeat message will be used with DTLS (TLS over datagram transports such as UDP) for PMTU discovery, in which case it needs messages of different sizes. Apart from that, it might simply be modelled after ICMP ping, where you can also specify the payload content for no particular reason.
Just like with ICMP Ping, the idea is to ensure you can match up a "pong" heartbeat response you received with whichever "ping" heartbeat request you made. Some packets may get lost or arrive out of order and if you send the requests fast enough and all the response contents are the same, there's no way to tell which of your requests were answered.
One might think, "WHO CARES? I just got a response; therefore, the other side is alive and well, ready to do my bidding :D!" But what if the response was actually for a heartbeat request from 10 minutes ago (an extreme case, perhaps due to the server being overloaded)? If you just sent another heartbeat request a few seconds ago and the expected responses are the same for all of them (a "fixed message"), then you have no way to tell the difference.
A timely response is important in determining the health of the connection. From RFC6520 page 3:
... after a number of retransmissions without receiving a corresponding HeartbeatResponse message having the expected payload, the DTLS connection SHOULD be terminated.
By allowing the requester to specify the return payload (and assuming the requester always generates a unique payload), the requester can match up a heartbeat response to a particular heartbeat request made, and therefore be able to calculate the round-trip time, expiring the connection if appropriate.
This of course only makes much sense if you are using TLS over a non-reliable protocol like UDP instead of TCP.
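A sketch of the matching this enables on the requester's side (plain Java; the payload size and bookkeeping are illustrative):

```java
import java.security.SecureRandom;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Sketch: a unique payload per heartbeat request lets the requester match
// a response to the exact request it answers and compute a per-request
// round-trip time, discarding stale or unknown echoes.
class HeartbeatMatcher {
    private final Map<String, Long> pending = new ConcurrentHashMap<>();
    private final SecureRandom random = new SecureRandom();

    byte[] newRequestPayload() {
        byte[] payload = new byte[16];
        random.nextBytes(payload);                   // unique per request
        pending.put(hex(payload), System.nanoTime());
        return payload;
    }

    // Returns the round-trip time in nanoseconds, or -1 for an unknown
    // (stale or forged) payload.
    long onResponse(byte[] echoedPayload) {
        Long sentAt = pending.remove(hex(echoedPayload));
        return sentAt == null ? -1 : System.nanoTime() - sentAt;
    }

    private static String hex(byte[] bytes) {
        StringBuilder sb = new StringBuilder();
        for (byte b : bytes) sb.append(String.format("%02x", b));
        return sb.toString();
    }
}
```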
So why allow the requester to specify the length of the payload? Couldn't it be inferred?
See this excellent answer: https://security.stackexchange.com/a/55608/44094
... seems to be part of an attempt at genericity and coherence. In the SSL/TLS standard, all messages follow regular encoding rules, using a specific presentation language. No part of the protocol "infers" length from the record length.
One gain of not inferring length from the outer structure is that it makes it much easier to include optional extensions afterwards. This was done with ClientHello messages, for instance.
In short, YES, it could've been, but for consistency with existing format and for future proofing, the size is spec'd out so that other data can follow the same message.