Send FCM Message to a topic chunk by chunk - firebase-cloud-messaging

We have a website with two million users. When there are new events on the website, we send an FCM notification to our users' mobile app. But the website does not have enough resources to handle many users at once.
Can we send FCM messages to a topic chunk by chunk, or deliberately decrease the fanout rate and put a delay between each fanout?
What is your suggestion?

There is no way to control the fanout rate of topics in Firebase Cloud Messaging.
The only options I can think of are to:
1. Create a number of more specific topics (e.g. topic-001, topic-002, ... topic-100), subscribe each client to one of the topics at random (a form of sharding), and then send a message to each topic in turn with a delay in between (see the sketch below).
2. Use a data-only message and delay the display in your application code by a random amount.
3. Stop using topics and deliver straight to FCM tokens in your code, so that you fully control when each individual message is sent.
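A minimal sketch of option 1, assuming the firebase-admin Python SDK; the shard count, topic naming, and delay value are illustrative, not prescriptive:

```python
# Sharded topics: subscribe each client to a random shard, then fan out
# to one shard at a time with a deliberate delay between sends.
import random
import time

import firebase_admin
from firebase_admin import messaging

firebase_admin.initialize_app()  # uses GOOGLE_APPLICATION_CREDENTIALS

NUM_SHARDS = 100  # illustrative shard count

def subscribe_client(token: str, base: str) -> None:
    """Subscribe a client's registration token to one shard at random."""
    shard = f"{base}-{random.randrange(NUM_SHARDS):03d}"
    messaging.subscribe_to_topic([token], shard)

def send_in_chunks(base: str, title: str, body: str, delay_s: float = 5.0) -> None:
    """Send the same notification to each topic shard, pausing in between."""
    for i in range(NUM_SHARDS):
        message = messaging.Message(
            notification=messaging.Notification(title=title, body=body),
            topic=f"{base}-{i:03d}",
        )
        messaging.send(message)
        time.sleep(delay_s)  # throttle the fanout deliberately
```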

Related

Firebase Messaging Topic quota exceeded

I'm receiving the error "Topic quota exceeded" when trying to send a push.
I thought Firebase Cloud Messaging didn't have limitations; what am I doing wrong?
As far as I know there are no hard limitations; you can reach 1,000 at once. Beyond that, Firebase will need some more time to send to everyone. Even if you use your own server to send the push notifications, it will be the same.
The frequency of new subscriptions is rate-limited per project. If you send too many subscription requests in a short period of time, FCM servers will respond with a "quota exceeded" response.
There is no limit on topics, but there is a time cost for processing them after crossing a certain number. FCM limits the number of concurrent message fanouts per project to 1,000. After that, FCM may reject additional fanout requests or defer them until some of the in-progress fanouts complete. I attached the related docs below; please go through them for more info.
Same question on a forum
Topic messaging
Fanout Throttling
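If you do hit the error, a minimal retry sketch, assuming the firebase-admin Python SDK (the backoff parameters are arbitrary):

```python
# Retry a send when FCM reports the topic/fanout quota is exhausted.
import time

from firebase_admin import messaging

def send_with_backoff(message: messaging.Message, max_attempts: int = 5) -> str:
    """Retry with exponential backoff on QuotaExceededError."""
    for attempt in range(max_attempts):
        try:
            return messaging.send(message)
        except messaging.QuotaExceededError:
            time.sleep(2 ** attempt)  # back off: 1s, 2s, 4s, ...
    raise RuntimeError("quota still exceeded after retries")
```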

WhatsApp messages order

We are working on our chatbot that is connected to UIB. Some of our messages have a fairly complex structure, and we need to split them up in order to send them in the proper order. Consider a single message with the following content structure: <text><image><text>. To send this message to a WhatsApp user, we need to split the content into three messages (#1 <text>; #2 <image>; #3 <text>). If we send these messages one by one, the WhatsApp client might receive them in the order <text><text><image>, because posting images takes longer than posting text messages. We have a workaround (adding a delay between requests), but images can be large, so they take a long time to send. We could keep increasing the delay, but that's not a good way to handle this.
So, my question is the following:
Is it possible to make a request to check the message status, i.e. whether it has been delivered to the WhatsApp servers or not? It doesn't actually matter whether the messages were delivered to the end user, because users might be offline. We just need to know that the messages reached the WhatsApp server in the proper order.
You'll get the read/delivery status in the webhook URL.
Please contact support@uib.ai for further clarifications.
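If the webhook reports a delivered-to-server status, you can gate each part on the previous part's acknowledgement instead of guessing a delay. A sketch, assuming a Flask webhook; the send_message() helper and the payload fields (status, message_id) are hypothetical stand-ins for the actual UIB API:

```python
# Send part N+1 only after the webhook confirms part N reached the server.
import queue

from flask import Flask, request

app = Flask(__name__)
acked = queue.Queue()  # message IDs confirmed by the webhook

@app.route("/webhook", methods=["POST"])
def webhook():
    event = request.get_json()
    if event.get("status") == "sent":  # hypothetical "reached server" status
        acked.put(event["message_id"])
    return "", 200

def send_message(part) -> str:
    """Hypothetical send call; returns the provider's message ID."""
    raise NotImplementedError("wire this to the UIB send API")

def send_in_order(parts) -> None:
    """Send each part and block until the server acknowledges it."""
    for part in parts:
        message_id = send_message(part)
        ack = acked.get(timeout=60)  # block until the webhook fires
        assert ack == message_id, "out-of-order acknowledgement"
```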

How would I use Redis + Azure Event Hubs to handle mobile push notification archiving for billions of topics?

I need to design a system that allows:
Users to subscribe to any topic
No defined topic limit
Control over sending to one device, or to all
Recovery when offline clients (or APNS) drop a notification, with a way to catch up via REST
Discarding all updates older than age T.
I studied many different solutions, such as Notification Hubs, Service Bus, Event Hub... and now discovered Kafka and not sure if that's a good fit.
Draft architecture
Use an Event Hub to listen for mobile deviceID registrations and for userIDs requesting topic subscriptions, and pass those to Redis, below.
If registering a phone / subscribing to a topic, save the deviceID and userID to the topic key.
If sending a message to a topic, query Redis for the topic key, and send the result to a FIFO queue for processing.
Pipe the output of the previous query into the built-in Redis Pub/Sub features to alert worker roles that there is work pending.
While the workers send notices to Apple and Firebase, archive the sent notices to an in-memory store, below.
The archive server maintains a history of sent events, so that out-of-sync devices can catch up on the most up-to-date information, LIFO-queue style (a sketch of this Redis data model follows).
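A minimal sketch of that Redis data model, using redis-py; the key names and the age limit are illustrative assumptions:

```python
# Topic membership as sets, fanout work via a FIFO list + Pub/Sub wake-up,
# and a time-scored archive trimmed by age T.
import json
import time

import redis

r = redis.Redis()
MAX_AGE_S = 24 * 3600  # discard archived events older than age T

def subscribe(topic: str, device_id: str, user_id: str) -> None:
    """Save the device/user to the topic key (a Redis set)."""
    r.sadd(f"topic:{topic}", f"{user_id}:{device_id}")

def send_to_topic(topic: str, payload: dict) -> None:
    """Queue one work item per subscriber, then wake the workers."""
    for member in r.smembers(f"topic:{topic}"):
        r.rpush("work:fifo", json.dumps({"to": member.decode(), "msg": payload}))
    r.publish("work:pending", topic)  # alert worker roles via Pub/Sub

def archive(topic: str, payload: dict) -> None:
    """Keep sent events in a sorted set scored by time, trimmed by age."""
    now = time.time()
    r.zadd(f"archive:{topic}", {json.dumps(payload): now})
    r.zremrangebyscore(f"archive:{topic}", 0, now - MAX_AGE_S)
```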
Question
What are your thoughts on using this approach to solve the above needs?
What other things should I learn, research, or experiment with (measure)?

Message throttling in GCM / FCM push notification

I would like to know what is meant by message throttling in Google FCM push notifications. I am trying to implement a sample push notification using FCM, but I didn't understand the message throttling mentioned in their steps. I couldn't find any documentation about it either.
https://aerogear.org/docs/unifiedpush/aerogear-push-android/guides/#google-setup
Could someone clarify about this term?
This documentation on throttling from https://stuff.mit.edu explains it really well:
To prevent abuse (such as sending a flood of messages to a device) and to optimize for the overall network efficiency and battery life of devices, GCM implements throttling of messages using a token bucket scheme. Messages are throttled on a per-application and per-collapse-key basis (including non-collapsible messages). Each application collapse key is granted some initial tokens, and new tokens are granted periodically thereafter. Each token is valid for a single message sent to the device. If an application collapse key exhausts its supply of available tokens, new messages are buffered in a pending queue until new tokens become available at the time of the periodic grant. Thus throttling in between periodic grant intervals may add to the latency of message delivery for an application collapse key that sends a large number of messages within a short period of time. Messages in the pending queue of an application collapse key may be delivered before the time of the next periodic grant, if they are piggybacked with messages belonging to a non-throttled category by GCM for network and battery efficiency reasons.
On a simpler note, I guess you can simply see throttling like a funnel that prevents an overflow of messages (normally for downstream messaging), regulating the in-flow of messages to avoid flooding.
For example, if you send 1000 messages to a single device (let's also say that all are sent successfully), there's a chance that GCM will throttle your messages so that only a few actually push through, or each message will be delivered, but not simultaneously.
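To make the quoted scheme concrete, here is a toy token-bucket model in Python; the token counts and grant interval are invented for illustration and are not GCM's real values:

```python
# Toy model of per-collapse-key throttling: spend a token per message,
# buffer the rest until the next periodic grant.
import time

class TokenBucket:
    def __init__(self, initial_tokens: int = 20, grant: int = 20,
                 grant_interval_s: float = 60.0):
        self.tokens = initial_tokens
        self.grant = grant
        self.grant_interval_s = grant_interval_s
        self.last_grant = time.monotonic()
        self.pending = []  # messages buffered while out of tokens

    def _refill(self) -> None:
        # New tokens are granted periodically.
        if time.monotonic() - self.last_grant >= self.grant_interval_s:
            self.tokens += self.grant
            self.last_grant = time.monotonic()

    def send(self, message: str) -> bool:
        """Each token is valid for a single message; otherwise buffer it."""
        self._refill()
        if self.tokens > 0:
            self.tokens -= 1
            return True   # delivered immediately
        self.pending.append(message)
        return False      # queued until the next periodic grant
```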

Sending huge amount of emails using Amazon SES

I'm going to use Amazon SES for sending emails on the website I'm currently building. Following the sample Java code provided in their API documentation, I developed the functionality and was able to send emails. But when it comes to handling a huge number of emails in a very short period of time, what is the best mechanism to follow? Do they provide any queue mechanism for emails? I couldn't find this in their API documentation, and their technical support is available only to users with a paid account.
Has anyone come across a solution to this problem?
Generally I use a custom SQS solution for a batch mailing process like this.
Sending more than a few emails from a web server isn't ideal, so I usually have the website submit the request for the emails to a back-end process in a single call. I then create an SQS message for each recipient and (in my case) use a Windows service that pulls messages from SQS and sends the emails at the pace I want them to go out. If errors are encountered, the message stays in the queue and gets retried automatically.
With an architecture like this, depending on your volumes you can spin up new instances automatically if the SQS queue size gets too large for a single instance to process in a timely manner.
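A sketch of that worker loop, using boto3; the queue URL, sender address, message format, and pacing are placeholder assumptions:

```python
# Pull one recipient per SQS message and send via SES at a controlled pace.
# Failed sends are not deleted, so SQS redelivers them automatically.
import json
import time

import boto3

QUEUE_URL = "https://sqs.us-east-1.amazonaws.com/123456789012/mail-queue"  # placeholder
SENDER = "noreply@example.com"  # placeholder

sqs = boto3.client("sqs")
ses = boto3.client("ses")

def worker(pace_s: float = 0.1) -> None:
    """Drain the mail queue, sending one email per SQS message."""
    while True:
        resp = sqs.receive_message(QueueUrl=QUEUE_URL,
                                   MaxNumberOfMessages=10,
                                   WaitTimeSeconds=20)
        for m in resp.get("Messages", []):
            job = json.loads(m["Body"])  # assumed {"to", "subject", "body"}
            ses.send_email(
                Source=SENDER,
                Destination={"ToAddresses": [job["to"]]},
                Message={
                    "Subject": {"Data": job["subject"]},
                    "Body": {"Text": {"Data": job["body"]}},
                },
            )
            # Delete only after a successful send; on failure the message
            # reappears after the visibility timeout and is retried.
            sqs.delete_message(QueueUrl=QUEUE_URL,
                               ReceiptHandle=m["ReceiptHandle"])
            time.sleep(pace_s)  # control the outbound send rate
```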