Response time when sending POST to HTTP GCM server is very high - google-cloud-messaging

I am developing an application server to send GCM messages. My application server sends POST requests to the GCM connection servers over HTTP. The average response time of a request is acceptable (around 100 milliseconds), but sometimes I get responses that take more than 1 second, and occasionally even 2 or 4 seconds or more.
I'm using the gcm-server library to send my requests to the GCM servers. I believe it's owned by Google.
To diagnose the problem, I analyzed the gcm-server library code but couldn't find any clue or answer. Ultimately, I decided to send some valid POSTs to the GCM server with curl. The results were the same. I'm sure this is not the expected behavior; maybe I'm doing something wrong, but I can't find or understand what it is.
Below is the script I'm running to test this. It sends 1000 POST requests to GCM without any explicit Keep-Alive configuration (by default cURL has Keep-Alive enabled).
#!/bin/bash
for i in {1..1000}; do
  curl -s -w "%{http_code} - %{time_total}\n" -o /dev/null -X POST \
    -H "Content-Type: application/json" \
    -H "Authorization: key={putHereOneGCMValidToken}" \
    -H "Cache-Control: no-cache" \
    -d '{
      "collapse_key": "92667ba1-c8e9-4018-bf14-156417065641",
      "delay_while_idle": false,
      "data": {
        "field1": "value1",
        "field2": "value2"
      },
      "time_to_live": 0,
      "registration_ids": [
        "REGISTRATION_ID_1",
        "REGISTRATION_ID_2",
        "BAD_REGISTRATION_ID"
      ]
    }' "https://gcm-http.googleapis.com/gcm/send"
  sleep 0.2
done
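One caveat about the Keep-Alive assumption above: each loop iteration spawns a new curl process, so connections are not actually reused between requests, and every request pays a fresh TCP/TLS handshake. A variant that drives all requests through a single curl invocation (and therefore a single connection) could look like the sketch below; the token is a placeholder as before.
# Sketch: one curl process, many URLs -> the connection is reused,
# so per-request handshake time drops out of time_total.
urls=""
for i in {1..100}; do
  urls="$urls https://gcm-http.googleapis.com/gcm/send"
done
curl -s -w "%{http_code} - %{time_total}\n" -o /dev/null -X POST \
  -H "Content-Type: application/json" \
  -H "Authorization: key={putHereOneGCMValidToken}" \
  -d '{"registration_ids": ["REGISTRATION_ID_1"]}' \
  $urls   # intentionally unquoted so each URL is a separate argument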
Reference:
GCM HTTP server documentation

You are queuing up too many downstream messages on your server. Try sending with a lower number of test messages. If you are testing or sending to multiple devices or users, try Device Group Messaging or Send Messages to Topics.
Device Group Messaging
With device group messaging, app servers can send a single message to multiple instances of an app running on devices belonging to a group.
Send Messages to Topics
GCM topic messaging allows your app server to send a message to multiple devices that have opted in to a particular topic.
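For illustration, a topic send uses the same HTTP endpoint as the script above, but targets a /topics/ path instead of a list of registration IDs. A minimal sketch, with the API key and topic name as placeholders:
curl -X POST \
  -H "Content-Type: application/json" \
  -H "Authorization: key={putHereOneGCMValidToken}" \
  -d '{
    "to": "/topics/global",
    "data": { "field1": "value1" }
  }' "https://gcm-http.googleapis.com/gcm/send"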
Also, you must consider external factors, like an intermittent internet connection, delaying receipt of the message. Lastly, you might want to check this related SO question about high server CPU after sending GCM.
Hope this helps!

Related

FCM batch messages URL

I want to send multiple notifications to Firebase with one HTTP request using the REST API.
The documentation says: "You should combine requests and send"
https://firebase.google.com/docs/cloud-messaging/send-message#send-messages-to-multiple-devices
The curl command for sending is:
curl *** -H 'Content-Type: multipart/mixed; boundary="subrequest_boundary"' https://fcm.googleapis.com/batch
I found that the https://fcm.googleapis.com/batch URL is no longer supported: https://developers.googleblog.com/2018/03/discontinuing-support-for-json-rpc-and.html
What is the right URL for sending multiple notifications to FCM?
There is a new API for sending messages through Firebase Cloud Messaging, called the versioned API. It lives on /v1/projects/<your-project-id>/messages:send and is fully documented in the Firebase documentation on sending requests.
In this new, versioned API you can send multiple messages with a single multipart request. This process is fully documented in the section on sending a batch of messages and is also wrapped by most of the Admin SDKs that are available.
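For illustration, here is roughly what such a multipart batch request looks like, following the batch-of-messages documentation. This is a sketch only: the project ID (my-project), the OAuth2 access token, and the device tokens are placeholders.
# batch_body.txt -- each part wraps one ordinary :send subrequest
--subrequest_boundary
Content-Type: application/http
Content-Transfer-Encoding: binary

POST /v1/projects/my-project/messages:send
Content-Type: application/json

{"message": {"token": "DEVICE_TOKEN_1", "notification": {"title": "Hi"}}}
--subrequest_boundary
Content-Type: application/http
Content-Transfer-Encoding: binary

POST /v1/projects/my-project/messages:send
Content-Type: application/json

{"message": {"token": "DEVICE_TOKEN_2", "notification": {"title": "Hi"}}}
--subrequest_boundary--

# The request itself posts that body to the batch URL from the docs:
curl -X POST \
  -H "Authorization: Bearer $ACCESS_TOKEN" \
  -H 'Content-Type: multipart/mixed; boundary="subrequest_boundary"' \
  --data-binary @batch_body.txt \
  https://fcm.googleapis.com/batch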

Is it possible to retrieve more than 400 messages from an ActiveMQ queue via the Jolokia API?

I have an error queue in ActiveMQ, which is populated by Apache Camel's onException error handler. There could be thousands of messages in this queue.
Instead of using the ActiveMQ web console, I am building a custom web admin that integrates statistics from several other components, so I want to include the ActiveMQ statistics as well.
ActiveMQ version: 5.14.3
I have looked at the Jolokia JMX API and its operations. For instance, I send the following payload to the broker's Jolokia API endpoint:
{
  "type": "exec",
  "mbean": "org.apache.activemq:type=Broker,brokerName=localhost,destinationType=Queue,destinationName=test.errors",
  "operation": "browse(java.lang.String)",
  "arguments": ["EXCEPTION_TYPE LIKE '%jdbc%'"]
}
The header field EXCEPTION_TYPE is already populated via an Apache Camel route. I have more than 10k messages in this queue at the moment. I made a POST request to my broker's API endpoint with the payload shown above (the full call is sketched below). Although I had more than 10k messages, the request returned just 400 of them, due to the max page size limitation hard-coded in the source code. This means I cannot get more than 400 messages at a time via Jolokia. I also tried the browseMessages() method; it seems to do the same thing in general.
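For reference, the full call looks something like this; the broker host, Jolokia port, and credentials are placeholders for whatever your installation uses:
curl -u admin:admin \
  -H "Content-Type: application/json" \
  -X POST http://localhost:8161/api/jolokia \
  -d '{
    "type": "exec",
    "mbean": "org.apache.activemq:type=Broker,brokerName=localhost,destinationType=Queue,destinationName=test.errors",
    "operation": "browse(java.lang.String)",
    "arguments": ["EXCEPTION_TYPE LIKE '\''%jdbc%'\''"]
  }'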
Is it possible to browse all these messages (say, when there are 10k+ of them)?
Or is it possible to paginate them? I could not see a relevant operation for that.
I tried to see if Hawtio does something special to retrieve all the messages, but the result is the same (a maximum of 400 messages).
The ActiveMQ web console does fetch all the messages, probably because it is tightly coupled with the ActiveMQ project.
I am not restricted to just JMX/Jolokia. If these stats can be fetched via some other API, that's equally fine.
Any input would be great!

Splunk Enterprise HEC not sending data

I've installed the Splunk Enterprise trial and enabled the HTTP Event Collector feature as described here, which enables sending machine data from my app into Splunk.
I tried to send a POST request using Postman to Splunk and got no response.
method: POST
URL: http://localhost:8088/services/collector
Authorization: my generated token
Why is there no response if I've already enabled the HEC feature? It seems that no server is listening on that port at all.
What I also don't understand about Splunk is where my data is stored. Is Splunk Enterprise data stored only locally, intended for use inside a company's LAN? Or does Splunk store all my data on its own cloud servers? Do Splunk Enterprise and Splunk Cloud differ in that respect?
You have 2 choices. Send JSON data using the HEC event data format specification, for example:
curl -k "https://mysplunkserver.example.com:8088/services/collector" \
-H "Authorization: Splunk CF179AE4-3C99-45F5-A7CC-3284AA91CF67" \
-d '{"event": "Hello, world!", "sourcetype": "manual"}'
Or send raw data; for this you need to use the raw endpoint, for example:
curl -k "https://mysplunkserver.example.com:8088/services/collector/raw?channel=00872DC6-AC83-4EDE-8BD5-92A6D49C4987" \
  -H "Authorization: Splunk CF179AE4-3C99-45F5-A7CC-3284AA91CF67" \
  -d '1, 2, 3... Hello, world!'
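Note that HEC listens over HTTPS by default, so a plain http:// URL like the one in the question typically gets no response. To verify that the listener is up at all, recent Splunk versions expose a health endpoint; a quick check might look like this (host and port as configured on your side):
curl -k https://localhost:8088/services/collector/health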

Telit GE910 SSL protocol with Firebase

I'm working on a project with an Arduino and a Telit GE910 QUAD V3 GPRS board. I want to log data to Firebase, but I'm having problems: I cannot use external libraries to make HTTP requests because I have to follow the GPRS AT command syntax. Do you think a workaround for the SSL protocol is possible?
My main problem is that the certificate is too big, and the protocol seems too heavy for my ATMEGA328P-MU.
Thanks a lot!
Use the REST API to make HTTPS requests directly to the Firebase API.
For example, you can perform a PUT request to https://<instance>.firebaseio.com/path/to/data.json to write data, and a GET request to the same URL to read it back.
You can try this out from the command line using a demo server:
> curl -X PUT -d '{ "first": "Michael", "last": "Wulf" }' https://sample-rest-endpoint.firebaseio-demo.com/users/kato/name.json
> curl https://sample-rest-endpoint.firebaseio-demo.com/users/kato/name.json

Heroku application log streaming

I'm trying to use the new Heroku API to stream my application's logs using curl.
Here is what the docs say (https://devcenter.heroku.com/articles/platform-api-reference#app):
Streaming is performed by doing an HTTP GET method on the provided logplex url and retrieving log lines utilizing chunked encoding.
So first I retrieve the logplex url:
curl -X POST https://api.heroku.com/apps/my-app/log-sessions \
-H "Accept: application/vnd.heroku+json; version=3" \
-H "Authorization:XXX" -v
Then I get something like this in the response:
"logplex_url":"https://logplex.heroku.com/sessions/abcdef-079b-4264-a83c-031feb31bfc2?srv=132456798"
So I make another curl call:
curl -X GET "https://logplex.heroku.com/sessions/abcdef-8a7e-442f-a164-4c64e845b62d?srv=123456798" -H "Transfer-Encoding: chunked"
I get a persistent connection, but nothing comes through...
If I don't specify the Transfer-Encoding header, I get the logs, but the connection closes.
Is it really possible to stream the logs, as the reference specifies?
It is possible; unfortunately, I was mistaken when writing that part of the docs, and it is not chunked encoding as I believed. We do use this interface in the CLI and in log2viz, but it is unfortunately not standard HTTP per se.
Basically, you should make a normal HTTP request and read back the HTTP response headers. Given the headers returned, you would then normally read from the socket until you get a zero-length read, at which point you can assume you are done and finish up. In the logplex case, we are reluctant to block (perhaps indefinitely), so we go ahead and return an empty read. Then we simply expect that, when you are done, you close the socket yourself.
Unfortunately, I was unable to figure out how to do this with cURL, but I can point to the examples in our open source code where we tackle this, and hopefully that will help:
toolbelt - https://github.com/heroku/heroku/blob/master/lib/heroku/client.rb#L482
log2viz - https://github.com/heroku/log2viz/blob/master/app.rb#L153
Hopefully that helps clarify the current situation at least; I'll try to update the docs to reflect this. Thanks for the detailed report, and let me know if you have additional questions I can help with.