How to get notification subscriptions in Worklight - ibm-mobilefirst

I want to trigger VERY basic push notifications, but I don't see any way to iterate through all the subscribed users without rolling my own tool to read the notification_user database table. Surely, with such an extensive (and expensive) product as Worklight, there is an API to do this?
The only way I see is to call another HTTP adapter from within my adapter (to go get the user subscriptions). And I've tried using the HTTP interface to retrieve the subscribed users via:
http://<server:port>/<context>/console/api/push/get?adapterName=PushAdapter&eventSource=EventSource
and many variations, but nothing seems to work through browser testing (the doc indicates all are GET requests). There are NO examples of what the "Push, Event Sources" format for the API should be (should the "API Context" value be "Push", or "EventSources", or what?). The chart given in the Worklight 6.0 Information Center is the bare minimum (how difficult would it have been to include an example of each?)
Basically, I want to iterate through a specific adapter/event source in a WL adapter, grab the "options" that were passed in when the user subscribed, and perform some business logic to decide whether to send out a notification. You would think this would be a very common pattern, but I don't see any examples of this type of model.
Anyone have suggestions for similar processing with WL 6?
Thanks.

You're not wrong. Worklight has three methods to send out notifications to subscribed users/devices.
See
WL.Server.notifyAllDevices
WL.Server.notifyDevice
WL.Server.notifyDeviceSubscription
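To get at the "options" business logic the question asks about, a rough sketch (the subscription/options shape and the alertType rule here are hypothetical, and the Worklight calls are shown only in comments since they need a running server):

```javascript
// Decide which subscriptions should receive a push, based on the options
// captured when each user subscribed. The options shape is made up for
// illustration; substitute whatever your client passed at subscribe time.
function pickSubscriptionsToNotify(subscriptions) {
  return subscriptions.filter(function (sub) {
    // Example rule: only notify users who opted into price alerts.
    return sub.options && sub.options.alertType === 'price';
  });
}

// Inside a Worklight adapter you would then look up each user's
// subscription and send, along the lines of:
//
//   var userSubscription = WL.Server.getUserNotificationSubscription(
//       'PushAdapter.EventSource', userId);
//   WL.Server.notifyAllDevices(userSubscription, {
//     badge: 1,
//     alert: 'Price changed!',
//     payload: {}
//   });
```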

Related

Assigning webhooks to Firebase Messaging "subscribe to topic" event

In my current project I am using the Kreait Firebase PHP SDK to send out push notifications to Android & iOS devices that have subscribed to notifications on named topics. No issues thus far. However, rather than have fixed topic names I would now like to generate topic names based on the current "condition" of the connecting device. The condition could, for example, be a geographic location.
This is not too difficult either and I have modified my app to handle this. However, in order to put the ability to have such autogenerated topics to use I need to know the topic names on my server so I can send out targeted messages via Kreait. I find Google's Firebase documentation a bit dense at times and have not been able to establish whether it is possible to assign webhooks that get called by Firebase whenever a SubscribeToTopic, UnsubscribeFromTopic event occurs.
A simple question - does FCM even offer anything like this capability? If so, any pointers to the relevant documentation would be much appreciated.
There is no public API to get a list of topic names from Firebase, nor is there a way to hook into the subscription mechanism.
Your best bet is to simply make two calls when a user subscribes to a topic: one to Firebase, and one to your own backend API that keeps a list of active topics/conditions.
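That backend can be very small. A minimal in-memory sketch of the topic registry (in production this would be a database table, and the client would hit this endpoint right after calling Firebase's subscribeToTopic):

```javascript
// In-memory registry of topic subscriptions kept alongside Firebase,
// so the server always knows which autogenerated topics are active
// and can target them via the Admin SDK / Kreait.
class TopicRegistry {
  constructor() {
    this.topics = new Map(); // topic name -> Set of device tokens
  }
  subscribe(topic, token) {
    if (!this.topics.has(topic)) this.topics.set(topic, new Set());
    this.topics.get(topic).add(token);
  }
  unsubscribe(topic, token) {
    const set = this.topics.get(topic);
    if (set) {
      set.delete(token);
      if (set.size === 0) this.topics.delete(topic); // topic no longer active
    }
  }
  activeTopics() {
    return Array.from(this.topics.keys());
  }
}
```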

Application Insights strategies for web api serving multiple clients

We have a back end API, running ASP.Net Core, with two front ends: A SPA web site (Vuejs) and a progressive web page (for mobile users). The front ends are basically only client code and all services are on different domains. We don't use cookies as authentication uses bearer tokens.
We've been playing with Application Insights for monitoring, but as the documentation is not very descriptive for our situations, I would like to get some more inputs for what is the best strategy and possibilities for:
Tracking users and metrics without cookies from e.g. the button click in the applications to the server call, Entity Framework/SQL query (I see that this is currently not supported, How to enable dependency tracking with Application Insights in an Asp.Net Core project), processing data and presentation of the result on the client.
Separating calls from mobile and standard web in an easy manner in Application Insights queries. Any way to show this in the standard charts that show up initially would be beneficial.
Making sure that our strategy will also fit in situations where other external clients will access the API, and we should be able to identify these easily, and see how much load they are creating for the system.
Doing all of the above with the least amount of code.
This might be worthy of several independent questions if you want specifics on any of them. (And generally your last bullet is always implied, isn't it? :))
What have you tried so far? Most of the "best way for you" kinds of things are going to be opinions, though.
For general answers:
re: tracking users...
If you're already doing user info/auth for other purposes, you'd just set the various context.user.* fields with the info you have on the incoming request's telemetry context. All other telemetry that occurs using that same telemetry context would then inherit whatever user info you already have.
re: separating calls from mobile and standard...
If you're already running these as different services/domains and using the same instrumentation key for both, then the domain/host info of page views or requests is already there; you can filter/group on it in the portal, or write custom queries in the Analytics portal to analyze it that way. If you know which site it is regardless of the host, you could add that as a custom property in the telemetry context; you could also do that to avoid dealing with host info.
re: external callers via an api
Similarly, if you're already exposing an API and using auth, you should (ideally) already know who the inbound callers are, and you can set that info in custom properties as well.
In general, custom properties (string:string key/value pairs) and custom metrics (string:double key/value pairs) are your friends. You can set them on contexts so all the events generated in that context inherit the same properties, or explicitly set them on individual TrackEvent calls (or any of the other Track* calls) to send specific properties/metrics with a single event.
You can also use telemetry initializers to augment or filter any telemetry that's being generated automatically (like requests or dependencies on the server side, or page views and AJAX dependencies on the client side).
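As a sketch of that last point, a client-side initializer that stamps every telemetry item with a custom property (here a made-up "clientType") so mobile and web traffic can be separated in queries. Names follow the classic Application Insights JavaScript SDK; check your SDK version for the exact envelope shape:

```javascript
// Adds a clientType custom property to a telemetry envelope before it is
// sent, so every item from this client can be filtered/grouped on it.
function stampClientType(envelope, clientType) {
  envelope.data = envelope.data || {};
  envelope.data.baseData = envelope.data.baseData || {};
  var props = envelope.data.baseData.properties =
      envelope.data.baseData.properties || {};
  props.clientType = clientType; // e.g. 'pwa' or 'spa'
  return envelope;
}

// Wiring it up (assumes the snippet-loaded `appInsights` global):
//
//   appInsights.queue.push(function () {
//     appInsights.context.addTelemetryInitializer(function (envelope) {
//       stampClientType(envelope, 'pwa');
//     });
//   });
```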

After I've built a conversation flow in a chatbot, how'd I get the chatbot to actually perform the desired actions?

For example, if I’ve built a full conversational flow in a service like API.AI that results in a booking being made, how do I actually make that booking sync to a third-party calendar?
Can this be done directly between the two? Would I need to build an application to sit between the two?
I’m tech inexperienced, so I’m curious how these things work…
You will need to add "fulfillment" to your API.AI app, and yes, have a custom application (the "webhook") in between.
That is, once you've collected all the information to make that booking, you don't want to just say "Thank you, here's the booking information you've provided [...]", you want to do things with it. That's what fulfillment does. API.AI will send a REST call to your webhook with the information the intent has, you do whatever you want with it (e.g. actually add the booking to the calendar), and you also return the response that you want API.AI to give; that takes the place of the "text response" you would normally provide for the intent.
To set this up on the API.AI side, there are two steps: Find "fulfillment" in the menu for your app, and tell it how to connect to your webhook. Then go to any intent where you want the webhook to be called when it's matched, and select "use webhook" under "fulfillment".
The more involved part may be to actually provide a webhook that API.AI can call - that's where your custom logic goes, it sits between, in your example, the API.AI app and the calendar application and makes things actually happen.
Useful reading: https://docs.api.ai/docs/webhook
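The webhook itself can be tiny. A sketch of the handler logic, using the v1-era API.AI request/response shapes (the `service`/`date` parameter names and the `bookInCalendar` callback are hypothetical stand-ins for your intent's parameters and your calendar integration):

```javascript
// Handles one fulfillment call: API.AI POSTs the matched intent's
// parameters, the webhook does the real work (booking), and returns
// the text the agent should speak instead of its static text response.
function handleFulfillment(apiAiRequest, bookInCalendar) {
  var params = (apiAiRequest.result && apiAiRequest.result.parameters) || {};
  bookInCalendar(params); // e.g. create the event via a calendar API
  var text = 'Booked ' + params.service + ' for ' + params.date + '.';
  return { speech: text, displayText: text, source: 'booking-webhook' };
}
```

You would expose this behind an HTTPS endpoint (Express, Cloud Functions, etc.) and point the "fulfillment" URL at it.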

Why can't I delete native events with Cronofy.com?

I am using Cronofy to integrate my application (only locally tested yet) with multiple calendar platforms.
I am having trouble getting it to update or delete events which are created natively (google or outlook) and I cannot understand why. The documentation (https://www.cronofy.com/developers/api/) is not sufficing to understand it and there's not much more out there besides that.
When I send a request to delete a native event I do get a 202 HTTP response back, but the event remains in my Google/Outlook calendar. If I do the same for one of my own events, it deletes smoothly with the same 202.
How can I make it work? I've read about the auth flow and that 202 means the request is processing, but this processing time seems to be taking too long for that to be the explanation (~2 days).
As standard, we sandbox calendar access and don't allow developers to edit existing events in end-users' calendars.
There is a process you can go through to request extended permissions on one or more of a user's calendars if you need this functionality. Let me know via support@cronofy.com if you would like access to this.
We differentiate between 'managed' and 'unmanaged' events in our API to help streamline the kinds of operations different use cases require.
Managed events are events that are created by your application. When they are created we require an event_id which is your id for the event in your application. You have complete control over events with an event_id. In order to delete a managed event you would pass the event_id as the identifying parameter https://www.cronofy.com/developers/api/#delete-event
Unmanaged events are events created by the user in their calendar. These have an event_uid which is used to identify the event. If you have sufficient permission to delete unmanaged events then you would pass this event_uid as the identifying parameter.
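The two cases differ only in which identifier you send. A small sketch of building the JSON body for DELETE /v1/calendars/{calendar_id}/events (the event objects here are illustrative):

```javascript
// Builds the delete-event request body: managed events are addressed by
// your own event_id, unmanaged (user-created) events by Cronofy's
// event_uid. Exactly one identifier should be sent.
function deleteEventBody(event) {
  if (event.event_id) {
    return { event_id: event.event_id };   // managed: your application's id
  }
  if (event.event_uid) {
    return { event_uid: event.event_uid }; // unmanaged: Cronofy's id
  }
  throw new Error('event needs an event_id or event_uid');
}
```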
The reason we're returning a 202 is that our API is asynchronous. Every API request is a journal operation which is executed by a worker. We don't inline calls to downstream APIs. Instead we protect your application performance from having to deal with whether a calendar server is available and responsive to meet your request.
I hope this helps explain what you're seeing. Any questions, let me know either here or at support@cronofy.com.
Adam

Error monitor for different applications

Currently we have many applications, where each application has its own error notification and reporting mechanism, so we clearly have many problems:
Lack of consistent error monitoring across different systems/applications: different GUIs, interfaces, different messages, etc.
Different approaches for error notification per application (many applications use email notifications, other applications publish messages to queue, etc.).
Separated configuration settings for reporting and monitoring per application: notification frequency, message recipients, etc.
You could add many other issues to the list, but the point is clear. Currently there is a plan to develop a custom application or service to provide a consistent and common solution for this situation.
Anyway, I am not sure it is a good idea to create a custom application for this; I am sure there should be a framework, platform, or existing solution or product (preferably open source) that already solves this problem. So my question is: do you know what project or product to check before deciding to create our custom application?
Thanks!
Have a look at AlertGrid, which works as a centralized event handler and notification dispatcher. It can collect events from different sources, and you can easily manage event handling by creating rules in a visual editor. So you can filter events and raise notifications (email, SMS, phone - works worldwide) whenever your custom condition is met. You can react not only to events that occurred but also to ones that did not occur (detect missing 'heartbeats'). All you need is to feed AlertGrid with events (Signals), via a very simple API.
I'm not quite sure if this is what you're looking for. I'm in the AlertGrid dev team, if you had any questions - feel free to ask. We constantly develop this tool and appreciate any feedback.
Depending on how much information is written to application logs, you could consider using Hyperic. It's open source and has a lot of the features you are looking for.
http://www.hyperic.com/
Bugsnag is an awesome option if you are looking to do cross platform error monitoring. It supports some 50 platforms in one interface. https://blog.bugsnag.com/react-native-plus-code-push/