We launched a new mobile app for iOS and Android in the first week of January, and we use FCM to send push notifications to users.
Thus far we've sent (based on the Firebase console report) ~60k notifications to our users, and overall it's a very solid and reliable platform. We split our sends into groups of 1,000 push tokens/devices.
Question: roughly 15 times since we launched we've received no result back from the cURL request that sends the notifications upstream to FCM, and on one occasion we received a 500 error.
To work around this rather than just assume success, we detect when the result isn't what we expect on success, log the response (e.g. "no result"), then wait 5 seconds and retry, up to 3 times. (Our log message records the try number as well.)
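For readers who want to see the shape of that workaround, here is a minimal sketch in Python, assuming the legacy FCM HTTP endpoint; SERVER_KEY, the payload, and the logging are placeholders, while the 5-second wait, 3-attempt cap, and 1,000-token batches mirror what's described above.

    import time
    import requests

    FCM_URL = "https://fcm.googleapis.com/fcm/send"    # legacy FCM HTTP endpoint
    SERVER_KEY = "YOUR_SERVER_KEY"                      # placeholder server key

    def send_batch(tokens, notification, max_tries=3, wait_seconds=5):
        """Send one batch of up to 1,000 tokens, retrying on empty or error responses."""
        body = {"registration_ids": tokens, "notification": notification}
        headers = {"Authorization": "key=" + SERVER_KEY,
                   "Content-Type": "application/json"}
        for attempt in range(1, max_tries + 1):
            try:
                resp = requests.post(FCM_URL, json=body, headers=headers, timeout=10)
                if resp.status_code == 200 and resp.text:
                    return resp.json()                  # normal success path
                print("try %d: unexpected response %s" % (attempt, resp.status_code))
            except requests.RequestException as exc:
                print("try %d: no result (%s)" % (attempt, exc))
            if attempt < max_tries:
                time.sleep(wait_seconds)                # wait 5 seconds before retrying
        return None                                     # all attempts failed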
Maybe twice a week we receive the 'first try' log message (meaning the first attempt failed and the second attempt kicked off 5 seconds later), and only once (this week) have we received the 'second try' message.
We're wondering: is this normal behavior for FCM? Is there some paid level of support or access that would reduce these retries for us? I don't think there is an SLA for FCM, but generally speaking, are others seeing this same behavior, and is the rate I've described here what you'd consider 'normal'?
Thx!
Answer received from Google today:
Hello!
If I've correctly understood this, you have sent 60k messages and received 16 failures? That comes out to around 99.97% success. Three nines is pretty much industry gold, so things are looking stellar so far.
There is no paid FCM version, but all clients, regardless of payment plan, run on the best hardware available, so you're already on the premium servers. :)
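For reference, the success-rate arithmetic behind that figure, using the numbers from the question (roughly 60,000 sends, 15 empty responses plus one 500):

    sent = 60_000
    failures = 16                                  # ~15 empty responses + one 500 error
    success_rate = 100 * (sent - failures) / sent
    print(round(success_rate, 3))                  # ≈ 99.973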
Related
I have a free-tier subscription to Azure IoT Hub with only two edge devices connected to it, one of them mostly off. Yesterday my hub recorded a slew of messages: 25K messages within 45 minutes (5 to 5:45 pm PST). A few related issues:
I'm not sure what these messages were. I'll add message storage for the future, but I'm wondering if there's a way to debug this.
Ever since then, I haven't been able to use the IoT hub; I get a "message count exceeded" error. That made sense till around 5 pm PST today (same day UTC), but I'm not sure why it is still blocking me after that.
I tried to change my F1 hub to the Basic tier, but that wasn't allowed because I am apparently "not allowed to downgrade".
Any help with any of these?
1. I'm not sure what these messages were. I'll add message storage for the future, but wondering if there's a way to debug this.
IoT Hub operations monitoring enables you to monitor the status of operations on your IoT hub in real time. You can use it to monitor Device identity operations, Device telemetry, Cloud-to-device messages, Connections, File uploads, and Message routing.
2. Ever since then, I haven't been able to use the IoT hub. I get a "message count exceeded" error. That made sense till around 5 pm PST today (same day UTC), but not sure why it is still blocking me after that.
IoT Hub Free edition enables you to transmit up to a total of 8,000 messages per day, and register up to 500 device identities. The device identity limit is only present for the Free Edition.
3. I tried to change my F1 hub to the Basic tier, but that wasn't allowed because I am apparently "not allowed to downgrade".
You cannot switch from Free to one of the paid editions. The free edition is meant to test out proof-of-concept solutions only.
Confirming the earlier answer, the only solution is to delete the old hub and create a new free one, which is simple enough.
I still haven't figured out what those specific error messages were, but I do notice that when there are errors such as CA certificate auth failures, lots of messages get sent up. I'm still working with MSFT support on the CA certificate signing issues, but this one is a side effect.
For future reference, look at your hub's metrics, and note that (i) the quota gets reset at midnight UTC, but (ii) the violations do not.
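To make the quota behaviour above concrete, here is a small Python sketch (plain arithmetic, not an Azure API) that compares the observed burst with the free-tier daily quota and computes the next midnight-UTC reset; the 25,000 and 8,000 figures come from the thread above.

    from datetime import datetime, timedelta, timezone

    DAILY_QUOTA = 8_000      # IoT Hub free edition: 8,000 messages per day
    recorded = 25_000        # burst observed in 45 minutes in the question

    now = datetime.now(timezone.utc)
    next_reset = (now + timedelta(days=1)).replace(hour=0, minute=0,
                                                   second=0, microsecond=0)

    print("over daily quota:", recorded > DAILY_QUOTA)   # True
    print("quota resets at:", next_reset.isoformat())    # next midnight UTC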
I have configured Fabric and Crashlytics in my application. I have added the call to test crashes:
Crashlytics.sharedInstance().crash()
I am seeing those crashes reported in the Dashboard with the stack traces and everything.
In the Settings menu under Notifications, I have all the alerts set to On, including Issue Velocity Alert.
According to this answer, the Issue Velocity Alert works like this:
If an issue is causing a crash in 1% of all user sessions within the past hour, you'll be notified.
I have received a New Fatal Issue Alert for the calls to crash(), which shows that I am receiving alerts correctly.
But I haven't received any Issue Velocity Alert. Since 100% of my sessions have crashed with the same error, I should be receiving it, right? The first crash happened 3 hours ago.
Note that I have tested it with 1 user on 1 device.
Why am I not receiving any alerts?
Paul from Fabric here. Crashlytics has a minimum threshold of unique users of an app before Velocity Alerts will be sent. I can't say what the exact numbers are, but they're designed to prevent apps with few users from getting spammed by our issue reporting.
This is what I found out looking into it. A velocity alert is sent when all of the following are true (see the sketch after the source link below):
An issue in an app exceeds the defined threshold for that app.
The app has 250 sessions in that time period.
There was no alert previously raised for the issue in the app.
Source: https://firebase.google.com/docs/crashlytics/velocity-alerts
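As an illustration only (this is my reading of those documented conditions, not Crashlytics code), a tiny Python sketch of how they combine, and why a single test device never triggers the alert:

    def should_send_velocity_alert(crash_session_pct, threshold_pct,
                                   sessions_in_period, already_alerted):
        """Combine the three documented conditions for a velocity alert."""
        return (crash_session_pct > threshold_pct    # issue exceeds the app's threshold
                and sessions_in_period >= 250        # enough sessions in the period
                and not already_alerted)             # no alert raised for it before

    # One user crashing one device repeatedly never reaches 250 sessions,
    # so no velocity alert fires even at a 100% crash rate.
    print(should_send_velocity_alert(100.0, 1.0, 1, False))   # False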
Personally, I find the name velocity alert quite misleading.
I'm sending messages over GCM with TTL=15, and they arrive just fine. Despite that, the developer console (where GCM messages can be tracked) shows status=expired.
According to Google's docs, expired means:
Reached their time-to-live (TTL) and expired.
Am I doing something wrong? Perhaps I'm not acking the message on my Android app?
As a reference:
time_to_live: This parameter specifies how long (in seconds) the message should be kept in GCM storage if the device is offline. The maximum time to live supported is 4 weeks, and the default value is 4 weeks. For more information, see Setting the lifespan of a message.
So a TTL of 15 seconds is too short to track reliably; you may want to increase this value.
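For example, a downstream send with a longer lifespan could look like the following sketch (Python; the server key and registration token are placeholders, and the legacy GCM HTTP endpoint is assumed):

    import requests

    payload = {
        "to": "DEVICE_REGISTRATION_TOKEN",      # placeholder registration token
        "time_to_live": 3600,                   # keep for 1 hour instead of 15 seconds
        "data": {"message": "hello"},
    }
    headers = {"Authorization": "key=YOUR_SERVER_KEY",   # placeholder server key
               "Content-Type": "application/json"}

    resp = requests.post("https://gcm-http.googleapis.com/gcm/send",
                         json=payload, headers=headers, timeout=10)
    print(resp.status_code, resp.text)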
I have a QuickBlox account that we're using internally for testing. Very low throughput (a total of around 600 messages across 2 days, and never more than 3 or 4 per second at the very peak).
Today the messages stopped sending in the chatroom. There don't appear to be any errors coming through the network panel of Chrome, and no errors are popping up in the admin panel.
As a test, without changing any client code, I created a new room and simply updated my config so my client pointed there. This worked with absolutely no problems.
Is there anything I may be missing here? Is this possibly a free-tier thing where only a few hundred messages may be sent at any one time, or is this more likely something client-side?
There were some maintenance periods; you should have received emails about them.
So maybe you were trying to use chat during one of those periods.
600 messages across 2 days is a very small volume, so there are no problems with limits here.
I'm getting 406 Not Acceptable as the response when I try to send a push notification. I understand the problem and I've fixed the code that was causing it, but I'm not quite sure how to make the error go away; the server responds with 406 to every one of my push notification requests. Will this resolve by itself after a period of time? Thanks.
This isn't an error you should (or could) do something about. It just means that you sent too many notifications to the same device or too many notifications per second, and you have to wait until you can send more.
406 Not Acceptable
This error occurs when an unauthenticated cloud service has reached the per-day throttling limit for a subscription, or when a cloud service (authenticated or unauthenticated) has sent too many notifications per second. The cloud service can try to re-send the push notification every hour after receiving this error. The cloud service may need to wait up to 24 hours before normal notification flow will resume.
(Source)
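As a sketch of what 'wait and retry every hour' could look like on the sending side (illustrative only; the channel URI, headers, and payload are placeholders, and only the 406 handling follows the quoted guidance):

    import time
    import requests

    def send_with_throttle_backoff(channel_uri, payload, headers,
                                   max_attempts=3, wait_seconds=3600):
        """POST a notification; on 406 (throttled), wait an hour and try again."""
        resp = None
        for attempt in range(max_attempts):
            resp = requests.post(channel_uri, data=payload,
                                 headers=headers, timeout=10)
            if resp.status_code != 406:
                return resp                  # delivered, or a different error to handle
            time.sleep(wait_seconds)         # throttled: back off before retrying
        return resp                          # still throttled after all attempts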
Fixed itself after a while, half a day or so.