Latency in Google web push GCM responses after January 2019 - google-cloud-messaging

We use the https://github.com/web-push-libs/web-push package. We divide all tokens into groups of 1000 and send web push notifications by issuing a POST request to GCM for each token in a foreach loop. The system worked smoothly for about two years, but for roughly the last month response times on the GCM side have grown much longer. Detailed debugging shows that encryption (the auth token and p256dh key) and dispatching the GCM requests complete within 3 seconds for all 1000 tokens, but the responses come back late from the GCM side. As a result, the total push time has increased and we are experiencing timeout problems.
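For reference, a simplified sketch of the sending step (illustrative TypeScript, not our exact production code; here the per-token sends are dispatched concurrently and settled together rather than awaited one by one in the foreach):

import * as webpush from 'web-push';

// Simplified sketch: `subscriptions` holds one batch of 1000 records
// ({ endpoint, keys: { p256dh, auth } }) loaded from our database.
async function sendBatch(subscriptions: webpush.PushSubscription[], payload: string): Promise<void> {
  // webpush.sendNotification encrypts the payload and POSTs it to the
  // push service (GCM here); all requests are dispatched concurrently.
  const results = await Promise.allSettled(
    subscriptions.map(sub => webpush.sendNotification(sub, payload, { TTL: 60 }))
  );
  // Each rejected entry is a token whose response failed or came back late.
  results.forEach((result, i) => {
    if (result.status === 'rejected') {
      console.error('push failed for', subscriptions[i].endpoint, result.reason);
    }
  });
}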
Has anyone encountered and solved a problem like this?

Related

Mass Transit - Are messages still delivered after timeout with RabbitMQ

This may be a very stupid question but I haven't been able to find a definitive answer online. I'm using the MassTransit Request/Response pattern with RabbitMQ as my message broker.
I have a request to add a user to a database and a consumer running in a separate service that consumes that request and sends a response.
The request has a ten-second timeout. My question is: if the request times out before the consumer is able to consume it, is the request removed, or will it eventually be consumed while the request client simply times out and moves on?
The request client's default timeout is 30 seconds (which you indicated you are changing to ten seconds). This setting applies both to the request client timeout (the point at which it stops waiting for a response) and to the time-to-live of the message sent.
If you want to extend the message TimeToLive, you can change that value when sending the request using:
// Extends the message time-to-live beyond the request timeout (ten days here):
await client.GetResponse<T>(request, x =>
    x.Execute(context => context.TimeToLive = TimeSpan.FromDays(10)));
TL;DR - yes, the request message will expire after 10 seconds and be automatically removed from the queue by the message broker.

HTTP status for exponential authentication timeout

On my web server, all API requests related to authentication (and made before it completes) are subject to an exponential timeout.
For example, after a user fails to log in a few times, the delay before the server will accept another request grows from 2 to 4, 8, 16 seconds, and so on.
Any request made during such a delay period is immediately rejected by the server, with a Retry-After header included.
What HTTP status code should the server return in this case?
I think it should be
429 Too Many Requests
http://developer.mozilla.org/en-US/docs/Web/HTTP/Status/429
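For illustration, a minimal sketch of returning 429 with a Retry-After header during the lockout window, assuming an Express server and a hypothetical in-memory lockout map (not a production rate limiter):

import express from 'express';

const app = express();
// Hypothetical store: client key -> epoch ms until which requests are rejected.
const lockoutUntil = new Map<string, number>();

app.post('/login', (req, res) => {
  const key = req.ip ?? 'unknown';
  const until = lockoutUntil.get(key) ?? 0;
  const now = Date.now();
  if (now < until) {
    // Still inside the exponential delay window: reject immediately.
    res.set('Retry-After', String(Math.ceil((until - now) / 1000)));
    res.status(429).send('Too Many Requests');
    return;
  }
  // ...verify credentials here; on failure, double the window: 2s, 4s, 8s, ...
});

app.listen(3000);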

Google Authentication error "500. That’s an error"

For about a week now I have been consistently getting "500. That’s an error" from the accounts.google.com side when trying to authenticate with Google.
The first request to
https://accounts.google.com/o/oauth2/auth?scope=openid%20email%20https://adwords.google.com/api/adwords/%20https://www.googleapis.com/auth/drive%20https://spreadsheets.google.com/feeds%20https://docs.google.com/feeds&response_type=token&redirect_uri=https://<XXXXXX>/&state=<XXXX>2&client_id=<MY_CLIENT_NUMBER>.apps.googleusercontent.com&hd=
returns 200, but then the second request to
https://accounts.google.com/signin/oauth?hd&client_id=<app_id>.apps.googleusercontent.com&as=-XXXXXX&nosignup=1&destination=https://<my_app_uri>&approval_state=<somewhatrandomstate>&xsrfsig=<signature>
almost always fails with 500.
I'm using Java Google API client version 1.22.0 and my application is deployed on AWS (region eu-central-1). I'm currently signed in to multiple Google accounts, so the Account Chooser is triggered.
Any ideas what could be the problem? This auth flow worked fine for a long time before this.
The problem was a very slight change of behavior on Google's side, mainly in the handling of the hd parameter. Before this change the hd parameter was accepted even when empty, but now an empty hd usually results in status code 500.
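A sketch of the workaround (illustrative TypeScript with placeholder values, not the Google client library API): build the URL so that hd is omitted entirely when there is no hosted-domain value, instead of sending an empty "hd=".

// Illustrative sketch; the point is simply to drop the hd parameter
// instead of sending it empty.
const clientId = '<MY_CLIENT_NUMBER>.apps.googleusercontent.com';
const redirectUri = 'https://<XXXXXX>/';
const state = '<XXXX>';
const hostedDomain: string = ''; // empty: must be omitted, not sent as "hd="

const params = new URLSearchParams({
  scope: 'openid email',
  response_type: 'token',
  redirect_uri: redirectUri,
  state,
  client_id: clientId,
});
if (hostedDomain) {
  params.set('hd', hostedDomain); // only append hd when it is non-empty
}
const authUrl = `https://accounts.google.com/o/oauth2/auth?${params}`;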

Any way to get more details on GCM failure?

I'm currently working on a push notification API that will work with several apps at once, handling notifications and reducing programming time for future apps. It's already partially working, as I'm able to register and receive notifications on Android devices.
Eventually, one of our apps is going to send broadcast notifications to registered users, but some tokens might have expired, which will lead to a GCM failure. I have already tested this: sending an array of tokens to GCM in a single HTTP call works well, as devices with valid tokens received their notifications.
What I wasn't able to find while searching the GCM documentation was a way to get more details in case of failure. For example, when I send a notification to two users, one with a valid token and the other with an invalid one, I get this result:
{
  "multicast_id": 7625209716676388798,
  "success": 1,
  "failure": 1,
  "canonical_ids": 0,
  "results": [
    {"error": "InvalidRegistration"},
    {"message_id": "0:1466511379030431%c4718df8f9fd7ecd"}
  ]
}
We can see that one of the messages failed to send, but what I'm looking for is a way to get more details, ideally the token that caused the failure, so I can remove it from my database.
Is there any way to achieve that? Using the message_id maybe? Or is there any way to find the invalid tokens stored in my database so I can clear them? I might have missed something in the documentation; even a link to it would be useful.
Based on this documentation, the GCM server will respond to your server with some information about the token you used to try to send the push notification.
Also, according to this link, if the app server fails to complete its part of the registration handshake, the client app should retry sending the registration token to the server or delete the registration token. Wiping old tokens from the GCM servers can be done with InstanceID.deleteToken().
Check these links:
How to remove gcm id from server which is not used
GCM get invalid tokens when sending to multiple devices at once
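One detail worth adding: in the legacy GCM HTTP response, the results array comes back in the same order as the registration_ids array you sent, so a failing token is identified by its index. A minimal sketch of that mapping (the interfaces and function name are illustrative, not from any GCM SDK):

interface GcmResult {
  message_id?: string;
  registration_id?: string; // present when a canonical ID replaces the token
  error?: string;           // e.g. "InvalidRegistration", "NotRegistered"
}

interface GcmResponse {
  multicast_id: number;
  success: number;
  failure: number;
  canonical_ids: number;
  results: GcmResult[];
}

// results[i] corresponds to tokens[i], so collect the tokens whose entries
// report an invalid or no-longer-registered token and purge them from the DB.
function findDeadTokens(tokens: string[], response: GcmResponse): string[] {
  return response.results
    .map((result, i) => ({ result, token: tokens[i] }))
    .filter(({ result }) =>
      result.error === 'InvalidRegistration' || result.error === 'NotRegistered')
    .map(({ token }) => token);
}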

Pubnub and Multiple XHR requests

I created a simple webpage with a PubNub (3.4) subscription, and about every 5 minutes I see an XHR request to PubNub in my Chrome console. Is this correct behavior? Thanks for any insight!
In doSubscribe
XHR finished loading: "https://ps3.pubnub.com/time/0". pubnub-3.4.min.js:11
XHR finished loading: "https://ps2.pubnub.com/time/0". pubnub-3.4.min.js:11
XHR finished loading: "https://ps1.pubnub.com/subscribe/demo/xxxxx/0/0?uuid=fb81a2a0-3fdc-4be1-94b2-dd23ce0c4bcd". pubnub-3.4.min.js:11
XHR finished loading: "https://ps1.pubnub.com/subscribe/demo/xxxxx/0/13569794952114592?uuid=fb81a2a0-3fdc-4be1-94b2-dd23ce0c4bcd". pubnub-3.4.min.js:11
XHR finished loading: "https://ps1.pubnub.com/subscribe/demo/xxxxx/0/13569794952114592?uuid=fb81a2a0-3fdc-4be1-94b2-dd23ce0c4bcd". pubnub-3.4.min.js:11
Yes, this is expected as PubNub uses HTTP Long-Polling for communication between the client and server.
ANSWER: PubNub and Multiple XHR requests
These 5-minute responses you show are pings from the PubNub Cloud. Yes, this is expected and correct behavior. Read on to learn how PubNub streams data to a client like Google Chrome; also see "What are the Blank Messages my Application Keeps Receiving" to learn a bit more. Continue reading for the specifics:
PubNub Socket Connections
The PubNub JavaScript client for mobile and web browsers like Chrome (WebKit) maintains a socket connection that lasts from one hour up to 24 hours, depending on network traffic. Connections are refreshed after 24 hours and automatically reconnect with reliable data delivery (catch-up of missed messages). This is because PubNub maintains a cloud queue for your clients which re-delivers any missed messages.
PubNub Dropped Connection Catch-up
PubNub uses cloud queues to hold messages in memory until the data has been delivered to your client device, such as Google Chrome. So if you drop your connection, the data sent while you were offline is retained, and once the connection has been restored you will receive your messages.
PubNub XHR Pings and Requests
In your Chrome dev console you will see pings every 300 seconds, which is 5 minutes. These pings are an application-layer mechanism that ensures the data stream is still active and alive, which is helpful across internet loss and restoration and provides better-than-TCP-keepalive connectivity and improved reliability. During the data transmission cycle, the connection state is maintained as described in the previous section, with connections lasting up to 24 hours before they are recycled.
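To make the cycle concrete, here is a minimal sketch of the long-polling pattern described above (not the PubNub SDK's actual internals; the URL and response shape are illustrative): each response, or keep-alive ping, immediately triggers the next subscribe request, which is why the dev console shows a steady trickle of XHRs.

// Illustrative long-poll loop, not PubNub SDK code. The server holds each
// request open until there is data (or a keep-alive ping), and the client
// resubscribes immediately with the last timetoken so nothing is missed.
async function longPoll(baseUrl: string): Promise<void> {
  let timetoken = '0'; // '0' asks the server for the current point in time
  for (;;) {
    const res = await fetch(`${baseUrl}/subscribe/demo/my_channel/0/${timetoken}`);
    // Assumed response shape: [messages, nextTimetoken]
    const [messages, nextTimetoken] = (await res.json()) as [unknown[], string];
    messages.forEach(m => console.log('message:', m));
    timetoken = nextTimetoken; // resume from where we left off (catch-up)
  }
}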