GCM: how to differentiate between two apps using the same Sender ID?

I'm working on an Android project with multiple flavors, and of course each one has debug and release builds. Our setup is to have the debug builds' API endpoints pointing to the staging server and the release builds pointing to the live server. I'm using the Google Play Services plugin for Gradle:
classpath 'com.google.gms:google-services:2.1.0-alpha5'
The problem I'm facing is with google-services.json, since it's not possible to have one for debug and one for release (for each flavor).
In our project Flavors represent countries (with different brand names and languages).
Why do I want to use a different GCM sender per flavor per build type?
Because in both apps users subscribe to topics, and we send push messages to topics rather than to individual users; we collect GCM tokens just for reference. Since sending is always topic-based, having the same topic name in debug and release will get you into trouble:
any accident during testing, such as sending to topic "X", will end up delivering messages to live users as well as staging (debug) users.
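To make the risk concrete: our server-side sends are plain HTTP POSTs to a topic, roughly like this (a minimal sketch against the legacy GCM HTTP endpoint; the API key and payload are placeholders):

import java.net.HttpURLConnection;
import java.net.URL;

public class TopicSender {
    // Placeholder for the server key of whichever GCM project is configured.
    private static final String API_KEY = "AIza...";

    public static void main(String[] args) throws Exception {
        URL url = new URL("https://gcm-http.googleapis.com/gcm/send");
        HttpURLConnection conn = (HttpURLConnection) url.openConnection();
        conn.setRequestMethod("POST");
        conn.setRequestProperty("Authorization", "key=" + API_KEY);
        conn.setRequestProperty("Content-Type", "application/json");
        conn.setDoOutput(true);
        // The topic name "X" exists in both debug and release builds, so a
        // slip here reaches live users as well as staging ones.
        String body = "{\"to\": \"/topics/X\", \"data\": {\"message\": \"test\"}}";
        conn.getOutputStream().write(body.getBytes("UTF-8"));
        System.out.println("GCM response: " + conn.getResponseCode());
    }
}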
The other thing: we still don't want to configure it per build type alone, because not all flavors will have the same sender.
So the ideal solution would be something like the following:
-app/src/flavor1Debug/google-services.json (with project_number:"1000001")
-app/src/flavor1Release/google-services.json (with project_number:"1000002")
-app/src/flavor2Debug/google-services.json (with project_number:"9000001")
-app/src/flavor2Release/google-services.json (with project_number:"9000002")
but it seems that the Google Play Services plugin only looks for:
- app/google-services.json
- app/src/flavor1/google-services.json
- app/src/flavor2/google-services.json
- app/src/debug/google-services.json
- app/src/release/google-services.json
Is there any other way to separate the sender ID for each flavor + build type combination?
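The workaround I'm experimenting with in the meantime is to bypass the json file for registration and pass an explicit sender ID per variant. GCM_SENDER_ID below is a BuildConfig field I define myself via buildConfigField in each flavor/build-type block of build.gradle; it is not something the google-services plugin provides:

import android.app.IntentService;
import android.content.Intent;
import android.util.Log;

import com.google.android.gms.gcm.GcmPubSub;
import com.google.android.gms.gcm.GoogleCloudMessaging;
import com.google.android.gms.iid.InstanceID;

import java.io.IOException;

public class RegistrationIntentService extends IntentService {
    public RegistrationIntentService() {
        super("RegistrationIntentService");
    }

    @Override
    protected void onHandleIntent(Intent intent) {
        try {
            // BuildConfig.GCM_SENDER_ID is set per variant in build.gradle, e.g.
            //   buildConfigField "String", "GCM_SENDER_ID", '"1000001"'
            // so each flavor+build-type registers against its own GCM project.
            String token = InstanceID.getInstance(this).getToken(
                    BuildConfig.GCM_SENDER_ID,
                    GoogleCloudMessaging.INSTANCE_ID_SCOPE, null);
            GcmPubSub.getInstance(this).subscribe(token, "/topics/X", null);
        } catch (IOException e) {
            Log.w("Registration", "GCM token/subscription failed", e);
        }
    }
}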


Using Azure IoT - telemetry from a Windows desktop application

I work for a company that manufactures large scientific instruments, a single instrument having 100+ components: pumps, temperature sensors, valves, switches and so on. I write the WPF desktop software that customers use to control their instrument, which is connected to the PC via a serial or TCP connection. The concept is the same either way: to change a pump's speed, for example, I would send a "command" to the instrument, where an FPGA and custom firmware take care of handling that command. The desktop software also needs to display dozens of "readback" values (temperatures, pressures, valve states, etc.), which are again retrieved by issuing a "command" to request a particular readback value from the instrument.
We're considering implementing some kind of telemetry service, whereby the desktop application will record maybe a couple of dozen readback values, each having its own interval - weekly, daily, hourly, per minute or per second.
Now, I could write my own telemetry solution, whereby I record the data locally to disk then upload to a server (say) once a week, but I've been wondering if I could utilise Azure IoT for collecting the data instead. After wading through the documentation and concepts I'm still none the wiser! I get the feeling it is designed for "physical" IoT devices that are directly connected to the internet, rather than for data being sent from a desktop application.
Assuming this is feasible, I'd be grateful for any pointers to the relevant areas of Azure IoT. Also, how would I map a single instrument and all its components (valves, pumps, etc) to an Azure IoT "device"? I'm assuming each component would be a device, in which case is it possible to group multiple devices together to represent one customer instrument?
Finally, how is the collected data reported on? Is there something built-in to Azure, or is it essentially a glorified database that would require bespoke software to analyse the recorded data?
Azure IoT would give you:
Device SDKs for connecting (MQTT or AMQP), sending telemetry, receiving commands, receiving messages, reporting properties, and receiving property update requests.
An HA/DR service (IoT Hub) for managing devices and their authentication, and for configuring telemetry routes (where to route the incoming messages).
Service SDKs for managing devices, sending commands, requesting property updates, and sending messages.
If it matches your solution, you could also make use of the Device Provisioning Service, where devices connect and are assigned an IoT hub. This would make sense, for instance, if you have devices around the world and wish to have them connect to the closest IoT hub you have deployed.
Those are the building blocks. You'd integrate the device SDK into your WPF app. It doesn't have to be a physical device, but the fact that it has access to sensor data makes it behave like one, so it seems like a good fit. Then you'd build a service app using the Service SDKs to manage the fleet of WPF apps (each representing an instrument with components, right?). For monitoring telemetry, it depends on how you choose to route it. By default, telemetry goes to an Event Hub instance created for you, and you'd use the Event Hubs SDK to subscribe to those messages. Alternatively, or in addition, telemetry messages can be routed to Azure Storage, where you could perform historical analysis. There are other routing options too.
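For illustration, here is roughly what the device side looks like with the Azure IoT device SDK (shown in Java here; your WPF app would use the equivalent C# SDK). The connection string, payload, and property name are placeholders, so treat this as a sketch rather than a full implementation:

import com.microsoft.azure.sdk.iot.device.DeviceClient;
import com.microsoft.azure.sdk.iot.device.IotHubClientProtocol;
import com.microsoft.azure.sdk.iot.device.Message;

public class InstrumentTelemetry {
    public static void main(String[] args) throws Exception {
        // Placeholder: issued when you register the device with your IoT hub.
        String connString = "HostName=<hub>.azure-devices.net;DeviceId=<instrument-id>;SharedAccessKey=<key>";

        DeviceClient client = new DeviceClient(connString, IotHubClientProtocol.MQTT);
        client.open();

        // One batch of readback values as a JSON telemetry message.
        Message msg = new Message("{\"pumpSpeed\": 1200, \"tempC\": 37.5}");
        // Application properties can be used by IoT Hub routing rules.
        msg.setProperty("component", "pump1");

        client.sendEventAsync(msg,
                (statusCode, context) -> System.out.println("Send status: " + statusCode),
                null);
    }
}

Whether each component is its own IoT Hub device or the whole instrument is one device sending component-tagged messages (as above) is a modelling choice: tagging keeps registration simple, while per-component devices give you per-component credentials.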
Does that help?

Can you update a message on google nearby message API?

I have two simple questions that I did not find answers to in the official documentation for the Google Nearby Messages API:
https://developers.google.com/nearby/messages/android/pub-sub
If you publish multiple messages with the publish method (from the same instance of an app), are the messages saved as several different messages, or are they updated and overwritten (in the Cloud Console)?
Is it possible to update a message with the publish method?
I'm building an application where each user sees what others are posting, but I just need to know the most up-to-date data of each user, I don't need all the messages.
Thank you.
With Pub/Sub, you publish messages to a queue; once published, they can't be updated or deleted.
On the consumer side, messages are usually delivered in order, but without any guarantee. Each message carries a publish timestamp.
In your use case, it could be interesting to keep the user ID and the latest processed timestamp in memory. If your application is distributed, the best option is to store this data in Memorystore.
That way, when a message comes in:
- either it is newer than the value in Memorystore, and you process it;
- or it is older, and you trash it.
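A minimal sketch of that filter (plain Java; a ConcurrentHashMap stands in for Memorystore here):

import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

public class LatestMessageFilter {
    // userId -> publish timestamp of the newest message processed so far.
    // In a distributed deployment this state would live in Memorystore instead.
    private final Map<String, Long> latestSeen = new ConcurrentHashMap<>();

    // Returns true if this message is the newest seen for the user and should
    // be processed; older (out-of-order) messages get trashed by the caller.
    public boolean shouldProcess(String userId, long publishTimestampMillis) {
        // merge() atomically keeps the maximum timestamp per user.
        long newest = latestSeen.merge(userId, publishTimestampMillis, Math::max);
        return newest == publishTimestampMillis;
    }
}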

Redis - Max Subscriptions/Connections?

I'm designing a Java Spring based real-time notification and chat system using Redis and WebSockets (with SockJS and STOMP). The requirement is for each user to subscribe to a unique channel (the channel name being the user ID), because notifications can be targeted to a single user and chat conversations can be 1-on-1. The very reason I'm using Redis is to get an event triggered on the corresponding application server (there are many) where the user is connected via WebSocket. As I understand it, when a publish happens to, say, "user1", and I want the onMessage handler fired for just that target user:
Do I need to maintain one Redis connection per user?
Is it okay to open 15k connections at a time, with 15k unique subscriptions, for that many users connected to the system at once?
Since you have tagged the question with Redisson, I presume you are using it already. If your choice of WebSocket framework is flexible, i.e. not limited to SockJS with STOMP, you could consider the netty-socketio project. It is written by the author of Redisson, and the integration between the two couldn't be more natural.
Netty-socketio is fully compatible with the popular Socket.IO client-side JS library and is used commercially by plenty of companies.
It doesn't require one Redis connection per user, and there are deployments known to have already exceeded your numbers.
This is mentioned in the project's README file.
Customer feedback in 2014:
"To stress test the solution we run 30 000 simultaneous websocket clients and managed to peak at total of about 140 000 messages per second with less than 1 second average delay." (c) Viktor Endersz - Kambi Sports Solutions

Event notification on new e-mail in IBM Domino

Is it possible to subscribe to mail events on an IBM Domino server?
I need a service similar to the one provided by Microsoft Exchange Event Notification, where you can subscribe to events and get notified when there are changes, e.g. the arrival of a new e-mail. The solution needs to be server-side, since I can't rely on users having their client running.
Unfortunately, as per my comment above, there is no pre-packaged equivalent to the push, pull, and streaming subscription services that EWS supports. A Notes client can get notifications via the Notes RPC protocol, and there's obviously also some technology in IBM's Notes Traveler mobile product, but nothing I'm aware of as a pre-packaged web service or even as a notifications API. You would have to build it, and there are a variety of ways you could go about it.
For push or streaming subscriptions, one way would be a Notes C API plugin using the Extension Manager, running on the server and monitoring the mailboxes. You might be able to use a DSAPI plugin into Domino's HTTP stack to manage the incoming connections and feed the data out to subscribers, but honestly I have no idea whether Domino's HTTP stack can handle the persistent connections that the subscription model implies. Alternatively, the Extension Manager plugin could quickly hand the data over to code written in any other language you want, running on any web stack. Of course, you'll have to deal with security across all the linked-together parts.
For pull subscriptions, it's really more of a polling architecture, with state saved somewhere so that only changes since the last call are delivered. You have any number of options for that. You could use Domino's built-in HTTP server, obviously, so you could write your own Domino-hosted web service for this. You could also use the Domino Data Service, a REST API, with all necessary state information stored on the client side. (On a quick look, I don't see a good option for getting all new docs since a specified date-time via the Domino Data Service, but it might be possible.)
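As a rough sketch of that polling option (plain Java; the URL is hypothetical, and, as noted above, you would have to verify that the Domino Data Service supports a since-date-time filter, so the "since" parameter below is a placeholder):

import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class MailPoller {
    // Hypothetical Domino Data Service URL; check your server's configuration.
    private static final String FEED_URL =
            "https://domino.example.com/mail/jdoe.nsf/api/data/documents";

    public static void main(String[] args) throws Exception {
        HttpClient http = HttpClient.newHttpClient();
        // Client-side state: only fetch changes since the last poll.
        String lastSeen = "2016-01-01T00:00:00Z"; // stub; persist this between polls
        HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create(FEED_URL + "?since=" + lastSeen)) // "since" is a placeholder
                .header("Accept", "application/json")
                .build();
        HttpResponse<String> response = http.send(request, HttpResponse.BodyHandlers.ofString());
        // Parse the JSON, notify subscribers about new mail documents, then
        // persist the newest timestamp for the next poll.
        System.out.println(response.body());
    }
}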
I do worry a bit about scalability of any custom solution for this. My understanding is that Microsoft has quite a bit of caching and optimization in their services in order to address scale. Obviously, you can build whatever you need for that into your own web service, but it will likely add a lot of effort.

Offline client and messages to azure

I'm playing around with Windows Azure and I would like to build a cloud-hosted server application that receives messages from many different clients, such as mobile and desktop.
I would like to build the client so that they work while in "offline-mode", i.e. I would like the client to build up a local queue of messages that are sent to the azure server as soon as they get online.
Can I accomplish this using WCF and/or Azure's queuing mechanism, so that I don't have to worry about whether the client is online or offline when I write the code?
You won't need queuing in the cloud to accomplish this. For the client app to be "offline-enabled" you need to do the queuing on the client. There are many options for this: a local database, XML files, etc. Whenever the app senses network availability, you can upload your queue to Azure. And yes, you can use WCF for that.
For the client queue/sync stuff you could take a look at the Sync Framework.
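A minimal sketch of such a client-side queue (plain Java; the file path and the upload call are placeholders for whatever storage and WCF/HTTP call your client uses):

import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;
import java.nio.file.StandardOpenOption;
import java.util.Collections;

public class OfflineMessageQueue {
    // Each pending message is one line in a local file, so it survives restarts.
    private final Path queueFile = Paths.get("pending-messages.log");

    public void enqueue(String message) throws Exception {
        Files.write(queueFile, Collections.singletonList(message),
                StandardOpenOption.CREATE, StandardOpenOption.APPEND);
    }

    // Call this whenever the app detects network availability.
    public void flush() throws Exception {
        if (!Files.exists(queueFile)) {
            return;
        }
        for (String message : Files.readAllLines(queueFile)) {
            uploadToServer(message);
        }
        Files.delete(queueFile); // only reached if every upload succeeded
    }

    private void uploadToServer(String message) {
        // Placeholder for the actual call to your Azure-hosted service.
    }
}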
I haven't found a great need for the queue so far. Maybe I'm just not seeing it from my app's point of view. It could also be that the data you can store in a queue message is minimal: you basically store short text strings (like record IDs), and then you have to do something with the ID when you pull it from the queue, such as look it up, delete it, whatever.
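For reference, the basic produce/consume pattern against an Azure queue looks roughly like this (a sketch using the classic Azure Storage SDK for Java; the connection string and queue name are placeholders):

import com.microsoft.azure.storage.CloudStorageAccount;
import com.microsoft.azure.storage.queue.CloudQueue;
import com.microsoft.azure.storage.queue.CloudQueueMessage;

public class QueueRoundTrip {
    public static void main(String[] args) throws Exception {
        CloudStorageAccount account = CloudStorageAccount.parse(
                "DefaultEndpointsProtocol=https;AccountName=<name>;AccountKey=<key>");
        CloudQueue queue = account.createCloudQueueClient().getQueueReference("workitems");
        queue.createIfNotExists();

        // Producer: store a short string, typically a record id.
        queue.addMessage(new CloudQueueMessage("record-42"));

        // Consumer: pull the id, act on it, then delete the message.
        CloudQueueMessage msg = queue.retrieveMessage();
        if (msg != null) {
            String recordId = msg.getMessageContentAsString();
            // ...look the record up, process it, whatever...
            queue.deleteMessage(msg);
        }
    }
}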
In my app, I didn't use the queue at all, just as Peter suggests. I wrote directly to table storage (accessed via its REST interface using StorageClient) from the client. If you want to look at a concrete example, take a look at http://www.netalerts.mobi/traffic. Like you, I wanted to learn Azure, so I built a small web site.
There's a worker role that wakes up every 60 seconds. Using one thread, it retrieves any new data from its source (screen-scraping a web page). New entries are stored directly in table storage (no need for a queue). Another thread deletes entries in table storage that are older than a specified threshold (there's no issue with running multiple threads against table storage). And then I'm working on the third thread, which is designed to send notifications to handheld devices.
The app itself is a web role, obviously.