We are working on our chatbot, which is connected to UIB. Some of our messages have a fairly complex structure, and we need to split them up so they arrive in the proper order. Consider a single message with the following content structure: <text><image><text>. To send this message to a WhatsApp user, we need to split the content into three messages (#1 <text>; #2 <image>; #3 <text>). If we send these messages one by one, the WhatsApp client might receive them in the order <text><text><image>, because posting images takes longer than posting text messages. We have a workaround (adding a delay between requests), but images can be large, so sending them takes a long time. We could keep increasing the delay, but that is not a good way to solve this.
So, my question is the following:
Is it possible to make a request to check a message's status, i.e. whether it has been delivered to WhatsApp's servers or not? It doesn't actually matter whether the message was delivered to the end user, because users might be offline. We just need to know that the messages have reached the WhatsApp server in the proper order.
You'll get the read/delivery status in the webhook URL.
Please contact support@uib.ai for further clarifications.
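For illustration, here is a minimal sketch of the idea: hold back message N+1 until the webhook reports that message N has reached the server. The payload fields ("message_id", "status") and the send_to_uib helper are assumptions for the sketch, not UIB's actual API.

```python
# Sketch: serialize sends by waiting for a delivery-status webhook.
# Hypothetical: the webhook payload fields ("message_id", "status") and
# send_to_uib() -- replace them with UIB's real API and payload format.
import threading
from flask import Flask, request

app = Flask(__name__)
delivered = {}  # message_id -> threading.Event

def send_to_uib(part):
    """Hypothetical helper that posts one message part and returns its id."""
    raise NotImplementedError

@app.route("/webhook", methods=["POST"])
def webhook():
    status = request.get_json()
    if status.get("status") == "delivered_to_server":
        delivered.setdefault(status["message_id"], threading.Event()).set()
    return "", 200

def send_in_order(parts, timeout=60):
    for part in parts:  # e.g. [text1, image, text2]
        msg_id = send_to_uib(part)
        event = delivered.setdefault(msg_id, threading.Event())
        # Block until the webhook confirms this part reached the server.
        if not event.wait(timeout):
            raise TimeoutError(f"no delivery confirmation for {msg_id}")
```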
Related
We have a website that serves two million users. When there is a new event on the website, we send an FCM notification to our users' mobile app. But the website does not have enough resources to serve lots of users at once.
Can we send FCM messages to a topic chunk by chunk, or deliberately decrease the fanout rate and put a delay between each fanout?
What is your suggestion?
There is no way to control the fanout rate of topics in Firebase Cloud Messaging.
The only options I can think of are to:
1. Create a number of more specific topics (e.g. topic-001, topic-002, ..., topic-100), subscribe each client to one of the topics at random (a form of sharding), and then send a message to each topic in turn with a delay in between them (see the sketch below).
2. Use a data-only message, and delay the display in your application code by a random amount.
3. Stop using topics and deliver straight to FCM tokens in your code, so that you fully control when each individual message gets sent.
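A minimal sketch of the first option using the firebase-admin Python SDK; the shard count, topic naming, and delay are arbitrary choices for illustration:

```python
# Sketch: sharded topics with a throttled fanout (option 1 above).
# Assumes clients were already subscribed to one of N_SHARDS topics at
# random; shard count, topic names, and the delay are illustrative.
import time
import firebase_admin
from firebase_admin import messaging

firebase_admin.initialize_app()  # uses GOOGLE_APPLICATION_CREDENTIALS

N_SHARDS = 100
DELAY_SECONDS = 30  # tune so your website survives each wave of traffic

def fan_out(title, body):
    for i in range(1, N_SHARDS + 1):
        message = messaging.Message(
            topic=f"topic-{i:03d}",
            notification=messaging.Notification(title=title, body=body),
        )
        messaging.send(message)
        time.sleep(DELAY_SECONDS)  # spread the load across the shards
```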
I want to attach a button to a media group.
To do this, I intercept each message and check whether it carries the same mediaGroup_id; if so, I save the file_id to the database.
After all messages from the media group have been received, I send them to a group in a separate channel. Here is the problem: how can I determine that a given message is the last one in the media group? I had a stupid idea: create a job with a delay of a few seconds, enough time to receive the entire media group, and then send the whole media group in that job. However, I am worried about the reliability of this method, and it will surely be buggy if one day I have to go asynchronous.
Then, in the main channel, I send a message containing a link to the media group and a button, as I wanted.
Is there some way to do this more elegantly?
That actually sounds rather reasonable, and in fact I know a bot that does something very similar. The reason this works is that Telegram apparently first uploads all the media files and then sends all the messages at once, rather than looping over "upload, then send".
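For what it's worth, here is a sketch of the debounce approach: every incoming message with the same media group id resets a timer, and the group is flushed only after the timer fires with no new arrivals. The on_message signature and send_album helper are placeholders, not a specific bot framework's API:

```python
# Sketch: debounce a Telegram media group with asyncio.
# Placeholders/assumptions: on_message() is whatever your framework calls
# per update, and send_album() is your code that posts the collected group.
import asyncio

FLUSH_DELAY = 2.0  # seconds with no new parts before the group is "complete"
pending = {}       # media_group_id -> {"items": [...], "task": asyncio.Task}

async def send_album(media_group_id, items):
    """Placeholder: post the collected file_ids to the separate channel."""
    ...

async def _flush_later(media_group_id):
    await asyncio.sleep(FLUSH_DELAY)
    group = pending.pop(media_group_id)
    await send_album(media_group_id, group["items"])

async def on_message(media_group_id, file_id):
    group = pending.setdefault(media_group_id, {"items": [], "task": None})
    group["items"].append(file_id)
    # A new part arrived: cancel the pending flush and restart the timer.
    if group["task"] is not None:
        group["task"].cancel()
    group["task"] = asyncio.create_task(_flush_later(media_group_id))
```

Because the timer restarts on every part, a slow connection delays the flush rather than splitting the album, which addresses the reliability worry somewhat; the delay is still a heuristic, not a guarantee.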
We are sending push notifications to Android devices via the GCM API.
People can subscribe to different topics and receive an alert every couple of days.
There are between 100,000 and 1,000,000 users subscribed to a given topic, so we wanted to speed things up by using more than ten connections.
We see responses asking us to retry, so we retry after the specified period of time, as stated in the docs.
Can we get rid of retries by using more connections and sending the requests more slowly?
Or is the quota set per API key, meaning that opening more connections will actually hurt us?
EDIT:
We are using the GCM HTTP interface, to be precise the erlang-gcm library: https://github.com/pdincau/gcm-erlang. We are sending messages to 1M users. We are not sending to a topic; we are performing a multicast send to a list of users. The gcm-erlang library allows us to pass 1000 users per request (which is also the limit of the GCM API). This means we have to perform at least 1000 requests.
It takes around 10 minutes to process all those 1000 requests, so we wanted to run them in parallel, but that doesn't make it any faster. Here I found information on throttling: https://stuff.mit.edu/afs/sipb/project/android/docs/google/gcm/adv.html#throttling
"Messages are throttled on a per application [basis]"
Does that mean that even though these are messages to different users, we are still throttled, because they all use the single API key of our mobile application?
Will the XMPP endpoint be faster?
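For reference, the chunk-and-parallelize pattern being described looks roughly like this in Python against the legacy GCM HTTP endpoint; the API key and worker count are placeholders, and the 1000-ID chunk size is the API limit mentioned above:

```python
# Sketch: multicast 1M tokens in 1000-ID chunks, several requests in flight.
# Uses the legacy GCM HTTP endpoint this question refers to; API_KEY and
# the worker count are placeholders to tune.
from concurrent.futures import ThreadPoolExecutor
import requests

GCM_URL = "https://gcm-http.googleapis.com/gcm/send"
API_KEY = "YOUR_SERVER_API_KEY"  # placeholder
CHUNK = 1000  # hard limit on registration_ids per request

def send_chunk(tokens):
    resp = requests.post(
        GCM_URL,
        json={"registration_ids": tokens, "data": {"alert": "hello"}},
        headers={"Authorization": f"key={API_KEY}"},
        timeout=30,
    )
    resp.raise_for_status()
    return resp.json()  # inspect per-token results / retry hints here

def multicast(all_tokens, workers=10):
    chunks = [all_tokens[i:i + CHUNK] for i in range(0, len(all_tokens), CHUNK)]
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return list(pool.map(send_chunk, chunks))
```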
It is weird that parallelizing the requests didn't make them faster. How come? Are you sure the bottleneck is not on your side?
No, it doesn't look like you're being throttled (you would receive errors if you were, instead of waiting in line).
I still don't understand why topics don't work for you. They seem like a good match.
Anyway, if you want to send messages individually, I would highly recommend switching to XMPP. You will be able to send one hundred messages at a time per connection and open up to 1000 connections (but you're not going to need that many, really).
In my latest project, I am using MassTransit (2.10.1) with RabbitMQ.
In some scenarios, a producer is allowed to send a bulk of messages to the queue.
For example, the user sets up a bulk notification to his list of contacts; the list could be as large as 100,000 contacts in some cases. This sends one message per contact to the queue (I need to keep track of each message). Now, since messages are processed in order of arrival (as I understand it), that user clogs up the queue for a long time, while another user, who may have done something simple such as sending a test message to himself, waits for the processing to end.
I have considered separate queues for regular vs. bulk operations, but this still doesn't solve the problem for small bulks (a user with dozens of contacts waiting behind users with hundreds of thousands) and causes extra maintenance.
The ideal solution for me, I think, would involve manipulating the routing in such a way that the consumer handles X messages from one user, then X messages from the next user, and so on, then moves back to the beginning of the queue until all messages are processed.
Is that possible? Is there a better solution?
Thanks in advance.
You will have to write code to manage this yourself. RabbitMQ doesn't really have any built-in mechanism to handle a scenario like this without your code getting involved.
If you want to process a few messages at a time from the bulk queue, then go back to the normal queue, then back to bulk, you'll need two queues and code to manage which one is being pulled from, and when.
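As a rough illustration (not MassTransit-specific), here is a sketch of that alternation using the pika Python client; the queue names "normal" and "bulk" and the per-turn budget X are illustrative assumptions:

```python
# Sketch: alternate between a "normal" and a "bulk" queue so one user's
# 100k-message batch cannot starve everyone else. Queue names and the
# per-turn budget (X) are illustrative assumptions.
import time
import pika

X = 10  # messages drained from one queue before switching to the other

def handle(body):
    """Placeholder for the real notification-sending logic."""
    print("processing", body)

def consume_alternating(host="localhost"):
    conn = pika.BlockingConnection(pika.ConnectionParameters(host))
    ch = conn.channel()
    for q in ("normal", "bulk"):
        ch.queue_declare(queue=q, durable=True)
    while True:
        idle = True
        for q in ("normal", "bulk"):
            for _ in range(X):
                method, _props, body = ch.basic_get(queue=q)
                if method is None:  # this queue is empty right now
                    break
                idle = False
                handle(body)
                ch.basic_ack(method.delivery_tag)
        if idle:
            time.sleep(0.5)  # both queues empty; avoid busy-waiting
```

True per-user fairness within the bulk queue would still need application-level bookkeeping on top of this, which is the "your code getting involved" part.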
Just my opinion, seeing as there is no built-in way that I know of... Have you considered using whatever storage you already use to store the notifications? Publish just one message with a list of notifications, store it in your DB, and then have a "retrieve notifications for user" consumer. The response would be one message; it may have a massive payload, but if even that gets bogged down, add skip and take properties to the message and force them to be between 0 and 50 (or whatever). In what scenario would you want to show a user 100,000 notifications at once?
We are currently evaluating RabbitMQ, trying to determine how best to implement some of our processes as messaging apps instead of the traditional DB store-and-grab. Here is the scenario: we have a department of users who perform similar tasks. As they submit work to the server applications, we would like the server app to send messages back into a notification window saying what was done, to all the users, not just the one submitting the work. This is all easy to do.
The question is, we would like these messages to live for, say, 4 hours in the queue. If a new user logs in, or say a supervisor, they would get all the messages from the last 4 hours delivered to their notification window. This gives them a quick way to review what has recently happened and what is going on without having to ask others: "Have you talked to John?", "Did you email him his itinerary?", etc.
So, how do we publish messages that have a lifetime of x hours from the time they were published, AND where any new consumers that connect will get all of these messages delivered in chronological order? Preferably, the messages would just disappear from the queue after they have expired.
Thanks
RabbitMQ has Per-Queue Message TTL and Per-Message TTL. If I understand the requirement correctly, you can use them for your task.
In addition to the above answer, it would be better to have the application/client publish messages to two queues: consumers consume from one of the queues, while the other queue is configured with per-queue message TTL or per-message TTL to retain the messages.
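To make the two-queue idea concrete, a sketch with the pika Python client: a fanout exchange copies each notification into a live queue and into a retained queue whose messages expire after 4 hours via per-queue message TTL. The exchange and queue names are illustrative assumptions:

```python
# Sketch: one fanout exchange feeding a "live" queue (consumed normally)
# and a "retained" queue whose messages expire after 4 hours via
# per-queue message TTL. Exchange/queue names are illustrative.
import pika

FOUR_HOURS_MS = 4 * 60 * 60 * 1000

conn = pika.BlockingConnection(pika.ConnectionParameters("localhost"))
ch = conn.channel()

ch.exchange_declare(exchange="notifications", exchange_type="fanout")
ch.queue_declare(queue="notifications.live", durable=True)
ch.queue_declare(
    queue="notifications.retained",
    durable=True,
    arguments={"x-message-ttl": FOUR_HOURS_MS},  # per-queue message TTL
)
ch.queue_bind(queue="notifications.live", exchange="notifications")
ch.queue_bind(queue="notifications.retained", exchange="notifications")

# Every message published to the exchange lands in both queues:
ch.basic_publish(exchange="notifications", routing_key="",
                 body=b"John's itinerary was emailed")
```

One caveat: reading from the retained queue consumes its messages, so replaying the history to each new user takes extra care, which is part of why the next answer suggests a database with TTL instead.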
You queue messages to get them reliably from one point to another, so the sender can work independently of the receiver. What you propose is really a temporary persistent store.
A SQL database would fit perfectly, but MongoDB would also work nicely. You drop a document into MongoDB, give it a TTL, and let the database handle the expiration.
http://docs.mongodb.org/master/tutorial/expire-data/
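As a quick sketch of that tutorial's approach using PyMongo; the collection and field names are arbitrary:

```python
# Sketch: MongoDB TTL index (per the tutorial linked above). Documents are
# removed automatically ~4 hours after their createdAt timestamp; the
# collection and field names are arbitrary choices.
import datetime
from pymongo import MongoClient

db = MongoClient("mongodb://localhost:27017")["notifications_db"]
notes = db["notifications"]

# TTL index: MongoDB's background monitor deletes docs once createdAt
# is older than expireAfterSeconds.
notes.create_index("createdAt", expireAfterSeconds=4 * 60 * 60)

notes.insert_one({
    "text": "Did you email John his itinerary?",
    "createdAt": datetime.datetime.utcnow(),
})

# A newly connecting client reads the last 4 hours in chronological order:
recent = notes.find().sort("createdAt", 1)
```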