MQTT message delivery when user comes online - firebase-cloud-messaging

Is it possible to use MQTT + Mosquitto (or any broker like RabbitMQ or Redis) for push notifications instead of FCM?
Let's assume we are using MQTT + Mosquitto.
Here is the scenario I need:
User A sends a message to user B, but user B is currently offline. When user B comes online, he should be notified about his pending message.
How can I implement this scenario with a broker?

MQTT has a concept of "persistent sessions". There's a flag called "clean session" that the client sends to the broker in the connect packet when first connecting. By setting this flag to false, the client is asking the broker to "remember me".
Then, if the client disconnects or loses its connection, the broker will hold messages for the client until the next time it reconnects, and deliver them in the order received.
In MQTT, each client is required to have a unique "ClientID"; this is how the broker recognizes the client when it reconnects. The client uses subscriptions to tell the broker which messages it wants the first time it connects, and after that the broker remembers that client's subscription list and holds any messages matching those subscriptions.
So, for your scenario, Client B would need to connect once with a persistent session, and then after that, the broker will hold messages for it whenever it disconnects.
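To make the mechanism concrete, here is a toy in-memory model of the broker-side behavior described above (a real deployment would use an actual broker such as Mosquitto and a client library such as paho-mqtt, connecting with CleanSession=False; the class and names here are purely illustrative):

```python
# Toy model of MQTT persistent sessions: when a client with a persistent
# session is offline, the broker queues matching messages and delivers
# them, in order, on reconnect. Illustration only -- not a real broker.

class ToyBroker:
    def __init__(self):
        self.sessions = {}   # client_id -> {"subs": set(), "queued": []}
        self.online = set()

    def connect(self, client_id, clean_session):
        # clean_session=True (or a first connect) starts a fresh session;
        # clean_session=False asks the broker to "remember me".
        if clean_session or client_id not in self.sessions:
            self.sessions[client_id] = {"subs": set(), "queued": []}
        self.online.add(client_id)
        # Hand over messages queued while the client was offline, in order.
        pending = self.sessions[client_id]["queued"]
        self.sessions[client_id]["queued"] = []
        return pending

    def disconnect(self, client_id):
        self.online.discard(client_id)

    def subscribe(self, client_id, topic):
        self.sessions[client_id]["subs"].add(topic)

    def publish(self, topic, payload):
        for cid, sess in self.sessions.items():
            if topic in sess["subs"] and cid not in self.online:
                sess["queued"].append((topic, payload))  # held by broker


broker = ToyBroker()
broker.connect("user-b", clean_session=False)     # persistent session
broker.subscribe("user-b", "chat/user-b")
broker.disconnect("user-b")                       # B goes offline
broker.publish("chat/user-b", "hello from A")     # broker queues it
print(broker.connect("user-b", clean_session=False))
# -> [('chat/user-b', 'hello from A')]
```

The key point the toy captures: the subscription and the queued messages live in the broker's session state, keyed by the ClientID, so the client only has to subscribe once.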

Related

Add SessionId to IoTHub messages before sending to a ServiceBus queue

We are using IoTHub Routes to direct messages to the ServiceBus queues. One of the queues is Session enabled for the sake of ordered message processing.
Is it possible to enrich messages for that particular endpoint and add SessionId to them right in IotHub before directing to the queue? The value for the SessionId is inside the JSON content of the message.
As of writing this, it's not possible to do so. Session-enabled queues can be added as an endpoint to IoT Hub, but they then show up as Unreachable there.

RabbitMQ scheduled message and revoke feature

Is it possible to schedule a message using RabbitMQ and also remove the message (which is scheduled to be processed) when certain conditions are met?
We have a requirement where we need to call an external service to get some data. The call is asynchronous: the client calls the API endpoint of the server, specifying the data that it needs, and the server responds with an acknowledgement that it has received the request. Internally, the server starts processing the client's request and will later call the client's API endpoint with the actual response to the query it received.
There is a time limit (30 s) for the client to wait for the server's response. If the client receives the response within 30 s, it proceeds with the execution; if it does not, it proceeds with other steps anyway.
There are thousands of independent transactions (request and response) happening each second between the client and server. How can the client keep track of the requests and responses received most effectively using RabbitMQ?
Can the RabbitMQ plugin rabbitmq_delayed_message_exchange be used for this scenario, with the client pushing new messages onto a queue along with an x-delay header (30 s)? And how can the scheduled message be removed from the queue if the client receives the response from the server before the 30 s elapse?
I'd do the following:
Make the response go through RabbitMQ too (using RPC)
Make sure the name of the reply queue is also sent as a parameter that is used to route the message by some exchange policy (a routing key, or a headers exchange)
Set up a DLX exchange with the correct policy for step 2
Set a 30 s TTL on the client->server queue
How'd this work in the usual case?
Client creates the reply-to queue
Client sends the request to the server
Client consumes from the reply-to queue
Server consumes the message and posts the response to the reply-to queue
What'd happen with a timeout?
Client creates the reply-to queue
Client sends the request to the server
Client consumes from the reply-to queue
Request message TTL triggers
RMQ deadletters the request message to the reply-to queue
Client receives its own request instead of the response
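The moving parts of this scheme can be sketched in plain Python (the queue and exchange names are illustrative; with pika these dicts would be passed as `arguments` to `channel.queue_declare`):

```python
# Sketch of the AMQP pieces for the TTL + dead-letter timeout scheme.
# Names "dlx" and the argument values are illustrative assumptions.

# Client -> server queue: every request expires after 30 s and is then
# dead-lettered to the "dlx" exchange, which routes it (by the reply-to
# queue name used as routing key) back into the caller's reply queue.
request_queue_args = {
    "x-message-ttl": 30_000,          # the 30 s timeout
    "x-dead-letter-exchange": "dlx",  # where expired requests go
}

def handle_reply(message):
    """What the client does when something arrives on its reply-to queue.

    RabbitMQ stamps dead-lettered messages with an "x-death" header, so
    the client can tell its own expired request apart from a real response.
    """
    if "x-death" in message.get("headers", {}):
        return "timeout"   # our own request came back, expired
    return "response"      # normal server reply

print(handle_reply({"headers": {}, "body": "result"}))        # -> response
print(handle_reply({"headers": {"x-death": [{}]}}))           # -> timeout
```

This also answers the "how do I remove the scheduled message" part: you don't have to. If the server answers in time, the real response arrives first and the client simply ignores the later dead-lettered copy (or acks and drops it).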

Push notifications in IBM Mobilefirst foundation server with APNS

1) Are push notification messages from IBM MobileFirst guaranteed, at least as far as delivering them to the APNS server? What happens if the APNS server cannot be reached from MFP; is there any retry mechanism? How can I know a push message was delivered?
2) Is there a time out value that we can control when MFP connects to APNS to send push message?
3) Are there any other such settings related to push with APNS in MFP? Where can I find details and explanations of such configurations/settings/properties?
1) The MFP server does its best to deliver push notifications to the respective mediator. If delivery to the APNS server does not happen successfully, the MFP server retries dispatching the notification. After multiple retries, if the notification still cannot be sent to APNS, this is logged and can be found in the standard logs. That is where you should start analyzing your network settings.
If push notification is delivered successfully to the mediator, the "message sent" count is incremented. This can be found either by accessing the Push notifications tab in Operations Console or using REST API calls.
2) A timeout value for the connection to APNS? There is no such timeout value that can be controlled. Communication with APNS happens over persistent socket connections; there is, however, a timeout value for keeping this socket open:
"push.apns.connectionIdleTimeout"
3) Refer to the KnowledgeCenter link on Push Properties.

PubNub connections per channel

Does subscribing to multiple PubNub channels share an HTTP connection or create separate connections?
The reason for asking is that clients will receive notifications from a central hub.
We can use channels for routing the notification types. (Like REST).
We can have a single channel for events, with an event_type field. (Like SOAP).
The former is preferable in terms of implementation simplicity, so just checking if there are any drawbacks.
PubNub now offers Channel Groups and Wildcard Subscribe via the Stream Controller add-on
Channel Groups
PubNub now offers Channel Groups so that one client connection can subscribe to 20K channels at once (10 channel groups X 2000 channels in each channel group). See Channel Groups KBs for more details.
Wildcard Subscribe
Subscribe to a.b.* and publish to any channel that is prefixed by a.b. (a.b.c, a.b.d, a.b.aa, etc) and your a.b.* subscribe will get those messages. See Wildcard KBs for more details.
PubNub Connections Per Channel
PubNub SDK client connections use one TCP connection per SDK instance. The number of channels does not increase the open TCP connection count: multiple PubNub channels share a single connection, because PubNub uses multiplexing, allowing your channel messages to be received over only one TCP connection.
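You can see what multiplexing looks like at the wire level: the SDK folds every subscribed channel into one comma-separated path segment of a single long-poll request, so N channels still cost one connection. A small sketch (the endpoint shape is illustrative, loosely modeled on PubNub's subscribe URL; use the official SDK in practice):

```python
# Illustrative sketch of a multiplexed subscribe path: many channels,
# one HTTP long-poll request, one TCP connection.
from urllib.parse import quote

def subscribe_path(sub_key, channels, timetoken=0):
    # All channels ride in a single comma-separated path segment.
    joined = ",".join(quote(c, safe="") for c in channels)
    return f"/subscribe/{sub_key}/{joined}/0/{timetoken}"

print(subscribe_path("demo", ["orders", "alerts", "chat.room1"]))
# -> /subscribe/demo/orders,alerts,chat.room1/0/0
```

So for the question above: channel-per-notification-type (the REST-like design) adds no connection overhead.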

WCF Transport security weakness

On 2nd edition of "Programming WCF Services" By Lowy, ch 10, page 512.
Lowy said about Transport security: Its main downside is that it can only guarantee transfer security point-point, meaning when the client connects directly to the service. Having multiple intermediaries between the client and the service renders Transport security questionable, as those intermediaries may not be secure. Consequently, Transport security is typically used only by intranet applications.
HTTPS is one of the Transport security options. How does the previous paragraph apply to HTTPS?! HTTPS encrypts everything all the way from one endpoint to the other. Also, every e-commerce application in the world uses HTTPS; how can you limit it to intranet applications?!
Thanks
HTTPS encrypts data from point-to-point, and once the data reaches one of the points and is decrypted, no security guarantee is made from that point onwards. Intermediary nodes, however, cannot read the information.
Message security, on the other hand, can encrypt data to be decrypted only by a certain recipient, which can be a separate entity from the receiving end. The receiving end might eventually forward the encrypted message to the intended recipient who will be able to decrypt the message.
An analogy would be email. If you establish a connection with your mail server using transport security (e.g. HTTPS), any information is guaranteed to be secured from your machine to the mail server. However, anyone with access to the mail server (e.g. server administrators) will be able to read the content of the email.
On the other hand, if you use message security, the actual email message is encrypted (not merely the communication between you and the server), so that even once the message is received by the server, it is still encrypted. Only when the email server forwards your message to the intended recipient can that recipient decrypt it using his own private key, keeping the email private across the whole delivery path while not requiring direct communication between sender and recipient, as transport-level security does.
Of course, some parts of the message must remain visible to the email server, for example the recipient's address, and so you may want to use both levels of security: message security ensures the mail server (or any party except the recipient) can't read the content of your email, and transport security additionally ensures that a third party listening in on the communication between you and your mail server can't find out who you're sending an email to (unless the mail server divulges that information to that third party).
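The visibility difference can be shown with a toy sketch. The XOR "cipher" below is a stand-in for real cryptography (it is NOT secure); the point is only where plaintext becomes visible:

```python
# Toy contrast between transport and message security. XOR stands in for
# TLS / real message encryption -- illustration of visibility only.

def xor(data: bytes, key: bytes) -> bytes:
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

# --- Transport security: encrypted per hop, decrypted at the server ---
link_key = b"hop1"                          # key for this one hop
wire = xor(b"meet at noon", link_key)       # protected on the wire...
at_server = xor(wire, link_key)             # ...but the server reads plaintext
print(at_server)                            # b'meet at noon'

# --- Message security: encrypted for the final recipient only ---
recipient_key = b"bobs-secret"              # known only to the recipient
envelope = xor(b"meet at noon", recipient_key)
# The server stores and forwards `envelope` but cannot read it; only the
# recipient, holding recipient_key, recovers the plaintext.
print(xor(envelope, recipient_key))         # b'meet at noon'
```

With transport security the intermediate server holds the plaintext; with message security it only ever holds the opaque envelope, which is exactly Lowy's point about untrusted intermediaries.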