RabbitMQ scheduled message and revoke feature

Is it possible to schedule a message using RabbitMQ and also remove the message (which is scheduled to be processed) when certain conditions are met?
We have a requirement where we need to call an external service to get some data. The call is asynchronous. The client calls the API endpoint of the server, specifying the data that it needs. The server just responds with an acknowledgement that it has received the request from the client. Internally, the server also starts processing the client request, and it will later call the client's API endpoint with the actual response to the query it received from the client.
There is a time limit (30 sec) within which the client waits for the response from the server. If the client receives the response within 30 sec, it will proceed with the execution. Even if the client does not receive a response from the server within 30 sec, it will still proceed with the other steps.
There are thousands of independent transactions (request and response) happening each second between the client and server. How can the client keep track of the requests sent and responses received in the most effective way using RabbitMQ?
Can the RabbitMQ plugin rabbitmq_delayed_message_exchange be used for this scenario, in which the client pushes new messages into a queue along with an x-delay header (30 sec)? How can the scheduled message be removed from the queue in case the client receives the response from the server before the 30 sec elapse?

I'd do the following:
1. Make the response go through RabbitMQ too (using RPC).
2. Make sure that the name of the response queue is also sent as a parameter that is used to route the message by some exchange policy (routing key, or use a headers exchange).
3. Set up a DLX (dead-letter exchange) with the correct policy for step 2.
4. Set a 30s TTL on the client->server queue (a setup sketch follows below).
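A minimal sketch of that topology using the Python pika client (all exchange and queue names here are assumptions, not taken from the question):

```python
import pika

connection = pika.BlockingConnection(pika.ConnectionParameters("localhost"))
channel = connection.channel()

# Exchange the clients publish requests to. A fanout exchange ignores the
# routing key for routing, so the routing key is free to carry the name of
# the client's reply-to queue (step 2).
channel.exchange_declare(exchange="rpc", exchange_type="fanout")

# Dead-letter exchange (step 3): expired requests are re-published here with
# their original routing key (the reply-to queue name), which sends them
# straight back to the client that posted them.
channel.exchange_declare(exchange="dlx", exchange_type="direct")

# Client -> server queue with a 30s per-queue TTL and the DLX attached (step 4).
channel.queue_declare(
    queue="rpc.requests",
    arguments={
        "x-message-ttl": 30000,           # 30 seconds, in milliseconds
        "x-dead-letter-exchange": "dlx",
    },
)
channel.queue_bind(queue="rpc.requests", exchange="rpc")
```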
How would this work in the usual case?
1. The client creates the reply-to queue.
2. The client sends the request to the server.
3. The client consumes from the reply-to queue.
4. The server consumes the message and posts the response to the reply-to queue (sketched below).
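On the server side this could look roughly as follows, continuing the made-up names from the setup sketch above; handle() is a hypothetical placeholder for the actual processing:

```python
import pika

connection = pika.BlockingConnection(pika.ConnectionParameters("localhost"))
channel = connection.channel()

def on_request(ch, method, props, body):
    reply = handle(body)  # hypothetical business logic, not defined here
    # Post the response straight to the reply-to queue via the default exchange,
    # echoing the correlation id so the client can match it to its request.
    ch.basic_publish(
        exchange="",
        routing_key=props.reply_to,
        properties=pika.BasicProperties(correlation_id=props.correlation_id),
        body=reply,
    )
    ch.basic_ack(delivery_tag=method.delivery_tag)

channel.basic_consume(queue="rpc.requests", on_message_callback=on_request)
channel.start_consuming()
```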
What would happen with a timeout?
1. The client creates the reply-to queue.
2. The client sends the request to the server.
3. The client consumes from the reply-to queue.
4. The request message's TTL expires.
5. RabbitMQ dead-letters the request message to the reply-to queue.
6. The client receives its own request back instead of a response, which it can treat as the timeout signal (see the client-side sketch below).
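And a sketch of the client side under the same assumptions. RabbitMQ adds an x-death header to dead-lettered messages, which gives the client an easy way to tell a timed-out request apart from a real response:

```python
import uuid
import pika

connection = pika.BlockingConnection(pika.ConnectionParameters("localhost"))
channel = connection.channel()

# Exclusive, server-named reply-to queue, also bound to the DLX under its own
# name so that expired requests are routed back here.
reply_queue = channel.queue_declare(queue="", exclusive=True).method.queue
channel.queue_bind(queue=reply_queue, exchange="dlx", routing_key=reply_queue)

corr_id = str(uuid.uuid4())
channel.basic_publish(
    exchange="rpc",
    routing_key=reply_queue,   # carried along so the DLX can route an expired request back
    properties=pika.BasicProperties(reply_to=reply_queue, correlation_id=corr_id),
    body=b'{"query": "..."}',
)

def on_reply(ch, method, props, body):
    if props.headers and "x-death" in props.headers:
        # Our own request came back via the DLX: treat it as the 30s timeout.
        print("timed out:", props.correlation_id)
    else:
        print("response:", props.correlation_id, body)

channel.basic_consume(queue=reply_queue, on_message_callback=on_reply, auto_ack=True)
channel.start_consuming()
```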

Related

Add SessionId to IoTHub messages before sending to a ServiceBus queue

We are using IoTHub Routes to direct messages to the ServiceBus queues. One of the queues is Session enabled for the sake of ordered message processing.
Is it possible to enrich messages for that particular endpoint and add SessionId to them right in IotHub before directing to the queue? The value for the SessionId is inside the JSON content of the message.
As of writing this, it's not possible to do so. Session-enabled queues can be added as an endpoint to IoTHub, but they then stay Unreachable there.

MQTT message delivery when the user comes online

Is it possible to use MQTT + Mosquitto (or any broker like RabbitMQ or Redis) for push notifications instead of FCM?
Let's assume we are using MQTT + Mosquitto.
Here is the scenario I need:
A user A sends a message to user B, but user B is currently offline. Whenever user B comes online, he should be notified about his pending message.
How can this scenario be implemented with a broker?
MQTT has a concept of "persistent sessions". There's a flag called "clean session" that the client sends to the broker in the connect packet when first connecting. By setting this flag to false, the client is asking the broker to "remember me".
Then, if the client disconnects or loses its connection, the broker will hold messages for the client until the next time it reconnects, and send them to the client in the order received.
In MQTT, each client is required to have a unique "ClientID". This is how the broker recognizes the client when it reconnects. The client uses subscriptions to tell the broker what messages that it wants the first time it connects, and then after that the broker remembers the list of subscriptions for that client and all the messages that match those subscriptions.
So, for your scenario, Client B would need to connect once with a persistent session, and then after that, the broker will hold messages for it whenever it disconnects.
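A minimal sketch with the Python paho-mqtt client (1.x callback API; the broker address, topic, and client id are made-up examples):

```python
import paho.mqtt.client as mqtt

# clean_session=False plus a fixed client_id asks the broker to keep a
# persistent session for this client across disconnects.
client = mqtt.Client(client_id="user-b-device", clean_session=False)

def on_connect(client, userdata, flags, rc):
    # Subscribe with QoS 1: QoS 0 messages are not guaranteed to be queued
    # for an offline client, QoS 1/2 messages are.
    client.subscribe("chat/user-b/inbox", qos=1)

def on_message(client, userdata, msg):
    print("message for user B:", msg.topic, msg.payload.decode())

client.on_connect = on_connect
client.on_message = on_message
client.connect("broker.example.com", 1883)
client.loop_forever()
```

Note that the sender also has to publish with QoS 1 or 2 for the broker to queue the message while B is offline.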

If a server calls an API, what IP will the API detect?

Consider the case where there exists a simple client-server web application where the client sends requests to the server. If the server sends a request to an external API, what IP and header values will be detected by the API? The ones of the client that first sent the request to the server, or the ones of the server?
Only the IP of the party that actually makes the request will be visible to the API. So if there is a chain of requests, only the IP of the last hop will be accessible to the receiving party.
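As a small illustration, a hypothetical external API built with Flask would see the address of whoever opened the connection to it:

```python
from flask import Flask, request

app = Flask(__name__)

@app.route("/api")
def api():
    # remote_addr is the IP of the peer that made this HTTP request.
    # If a backend server calls this endpoint on behalf of its own client,
    # remote_addr is the backend server's IP, not the original client's.
    return {"caller_ip": request.remote_addr}

if __name__ == "__main__":
    app.run()
```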

Should ConnectionToken in SignalR be protected when using SSL for transport?

What exactly is the role of the ConnectionToken in SignalR?
I inspected the SignalR handshake in Fiddler and saw that a ConnectionToken is being passed in the response to the negotiate request and then passed in all subsequent requests.
However, when inspecting the WS frames, I saw no trace of that ConnectionToken. Is it because Fiddler hides it from me, or is it simply not passed on the wire?
If it's not passed on the wire, what is its purpose?
If it is passed on the wire, is it considered to be a secret even if the transport is over SSL? How can an attacker exploit that token?
The connection token is an encrypted string containing the connection id and, if available, the user name. It needs to be sent with each HTTP request sent by the client to the server. If the server receives a request without the connection token, or if it cannot decrypt the connection token, it will reject the request. To read more on the connection token and how it works, take a look at this article.
You don't see the connection token in WebSocket frames because the connection token was validated when the WebSocket was opened (the connect request) and further validation is not needed (it is impossible for someone else to use this WebSocket). You would see the connection token again in case the connection was dropped and the client tried to reconnect.
Other transports send more HTTP requests (e.g. for sending messages), and you will see that basically each of these requests (except for ping) contains the connection token. You can take a look at the SignalR protocol description I wrote some time ago for more details.
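To make that concrete, here is a rough sketch of what a non-WebSocket client does over HTTP; the endpoint paths and parameter names follow the classic ASP.NET SignalR protocol and are given as an illustration rather than a verified client:

```python
import requests

BASE = "https://example.com/signalr"  # assumed classic ASP.NET SignalR endpoint

session = requests.Session()

# The negotiate response carries the encrypted ConnectionToken.
negotiate = session.get(BASE + "/negotiate", params={"clientProtocol": "1.5"}).json()
token = negotiate["ConnectionToken"]

# Every later HTTP request (connect, send, abort, ...) passes the token back
# as a query-string parameter; the server rejects requests whose token is
# missing or cannot be decrypted.
session.get(
    BASE + "/connect",
    params={
        "transport": "longPolling",
        "clientProtocol": "1.5",
        "connectionToken": token,
    },
)
```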

How does client cert authentication work on a per-directory basis?

Based on the documentation, Apache allows requesting client certificate authentication for one directory while not requesting it for another directory.
http://httpd.apache.org/docs/2.2/ssl/ssl_howto.html#arbitraryclients
How is it possible?
I assumed that TLS/SSL first does a handshake (including client certificate validation) and only after that is the HTTP request sent over the secured channel. And this HTTP request contains the URL.
So it looks like, in order to get the URL (a directory), you need to have already done (or skipped) client certificate authentication.
So it's not clear to me how Apache can check the URL first and decide later whether to request client certificate authentication or not.
It uses SSL/TLS renegotiation: the server sends a TLS Hello Request message to ask the client to trigger a new handshake by sending a new Client Hello message (and this time the server will send a Certificate Request after its Server Hello message).
The Hello Request message could in principle happen at any time during the HTTP exchange. For this particular feature, the server sends it just after receiving the request (therefore knowing which resource was requested), but before sending its response.
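In httpd.conf this corresponds to the per-directory pattern from the linked howto; the sketch below assumes a CA file path and location of your own:

```apache
# Don't ask for a client certificate by default ...
SSLVerifyClient none
SSLCACertificateFile "conf/ssl.crt/ca.crt"

# ... but require one (triggering renegotiation) for this location only.
<Location "/secure/area">
    SSLVerifyClient require
    SSLVerifyDepth 1
</Location>
```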