I'm using Grafana and have set an alert on a graph. How can I call an external API or web service when the alert fires? Thanks.
So the goal is to get information into an external service. I am making the assumption that your particular external API / web service is not in the list of supported notification channels.
Personally, in this case I would suggest using the webhook notification channel option, as it gives a TON of information to work with:
{
  "dashboardId": 1,
  "evalMatches": [
    {
      "value": 1,
      "metric": "Count",
      "tags": {}
    }
  ],
  "imageUrl": "https://grafana.com/assets/img/blog/mixed_styles.png",
  "message": "Notification Message",
  "orgId": 1,
  "panelId": 2,
  "ruleId": 1,
  "ruleName": "Panel Title alert",
  "ruleUrl": "http://localhost:3000/d/hZ7BuVbWz/test-dashboard?fullscreen\u0026edit\u0026tab=alert\u0026panelId=2\u0026orgId=1",
  "state": "alerting",
  "tags": {
    "tag name": "tag value"
  },
  "title": "[Alerting] Panel Title alert"
}
This can be sent to any service that is capable of receiving webhooks and translating them into whatever you need for your external API endpoint. I might suggest the following:
integromat.com (free account gives 1,000 operations / month)
n8n.io (OSS and self-hosted, with limited direct integrations, but it does have HTTP, so you can use that to interact with whatever you need, including internal stuff)
Once in either of these tools, you build a webhook receiver and then a workflow that will translate the action into the formats needed by your external API / service.
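If your external service can accept the payload directly (or after light reshaping), you do not even need a middleman: point the webhook channel at a small relay you host yourself. Below is a minimal sketch in Java (standard library only, Java 11+); the listen port, the /grafana-alert path and the https://example.com/api/notify target are assumptions to replace with your own values.

import com.sun.net.httpserver.HttpServer;
import java.io.IOException;
import java.net.InetSocketAddress;
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.nio.charset.StandardCharsets;

public class GrafanaWebhookRelay {

    private static final String EXTERNAL_API = "https://example.com/api/notify"; // assumption: your service's endpoint
    private static final HttpClient CLIENT = HttpClient.newHttpClient();

    public static void main(String[] args) throws IOException {
        HttpServer server = HttpServer.create(new InetSocketAddress(8080), 0);
        server.createContext("/grafana-alert", exchange -> {
            // Read the raw webhook payload Grafana sends (see the JSON example above).
            String payload = new String(exchange.getRequestBody().readAllBytes(), StandardCharsets.UTF_8);

            // Forward it to the external API. Here it is passed through as-is; in practice
            // you would map fields such as "ruleName" and "state" into whatever the API expects.
            HttpRequest request = HttpRequest.newBuilder(URI.create(EXTERNAL_API))
                    .header("Content-Type", "application/json")
                    .POST(HttpRequest.BodyPublishers.ofString(payload))
                    .build();
            CLIENT.sendAsync(request, HttpResponse.BodyHandlers.discarding());

            // Acknowledge quickly so Grafana marks the notification as delivered.
            exchange.sendResponseHeaders(200, -1);
            exchange.close();
        });
        server.start();
    }
}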
How do I handle and return a human readable error in a Java Azure Function app?
All the examples a Google search turns up are just simple instructions on how to do a try-catch, which is not my question.
More specifically, how do we design the return status code and the response body of the request in a way that provides the most flexibility across a wide array of situations?
Given that we are not integrating Spring Boot in this case, we do not have access to anything Spring.
Given that my API generally returns an object that we will call Pojo1, what is the best way to return an informative message on error?
NOTE: Of course, I do know there are situations where you want security through obscurity, in which case I would probably choose logging errors to app insights. This is not my question though.
Well, you can set custom headers while returning the response; this can be done with a setHeader method.
You can also use Azure Service Bus or Event Grid, which can carry specific messages about the errors.
Also, you can use Azure Monitor, which collects all the errors and can notify you when anything happens.
Refer to this article by Eugen Paraschiv for an in-depth explanation of how to use setHeader.
Refer to this documentation on Azure Service Bus and this documentation on Event Grid.
Refer to this documentation on Azure Monitor logs.
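To make the headers/status-code suggestion concrete: the Azure Functions Java library lets you build the response explicitly via request.createResponseBuilder(...), which exposes status, header(...) and body(...). A rough sketch, assuming Pojo1, ErrorBody and loadPojo are placeholders for your own model and logic:

import com.microsoft.azure.functions.*;
import com.microsoft.azure.functions.annotation.*;

import java.util.Optional;

public class GetPojoFunction {

    // Minimal, machine-readable error payload; shape it however your clients expect.
    public static class ErrorBody {
        public String code;
        public String message;
        public ErrorBody(String code, String message) { this.code = code; this.message = message; }
    }

    // Placeholder for the object your API normally returns.
    public static class Pojo1 { public String id; }

    @FunctionName("getPojo")
    public HttpResponseMessage run(
            @HttpTrigger(name = "req", methods = { HttpMethod.GET },
                         authLevel = AuthorizationLevel.FUNCTION)
            HttpRequestMessage<Optional<String>> request,
            ExecutionContext context) {

        String id = request.getQueryParameters().get("id");
        if (id == null || id.isBlank()) {
            // Client error: explicit status code, custom header if you want one, structured body.
            return request.createResponseBuilder(HttpStatus.BAD_REQUEST)
                    .header("Content-Type", "application/json")
                    .body(new ErrorBody("MISSING_ID", "Query parameter 'id' is required"))
                    .build();
        }

        try {
            Pojo1 result = loadPojo(id); // hypothetical lookup
            return request.createResponseBuilder(HttpStatus.OK)
                    .header("Content-Type", "application/json")
                    .body(result)
                    .build();
        } catch (Exception e) {
            // Log the details, return only a safe, informative message to the caller.
            context.getLogger().severe("Lookup failed: " + e.getMessage());
            return request.createResponseBuilder(HttpStatus.INTERNAL_SERVER_ERROR)
                    .header("Content-Type", "application/json")
                    .body(new ErrorBody("LOOKUP_FAILED", "Could not load the requested resource"))
                    .build();
        }
    }

    private Pojo1 loadPojo(String id) { Pojo1 p = new Pojo1(); p.id = id; return p; }
}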
I am trying to learn more about .Net Core Health Checks.
I understand the concept of a webhook, i.e. it notifies you that an event has occurred in a third-party application. However, I do not understand the concept of a webhook in the context of the Health Checks UI. If I set up the Health Checks UI, there are two menu items in the sidebar: Health Checks (as expected) and Webhooks.
What are webhooks used for in the Health Checks UI? I have spent hours Googling this and all I have found is this: https://github.com/Xabaril/AspNetCore.Diagnostics.HealthChecks/blob/master/doc/webhooks.md, which has not helped.
Webhooks in the Health Checks UI are outgoing notifications: when one of the monitored health checks starts failing, the UI POSTs the configured payload to the given URI (and the restorePayload once the check is healthy again), with placeholders such as [[LIVENESS]] and [[FAILURE]] replaced. You configure them in Startup.cs:
services.AddHealthChecksUI(options => {
    options.AddWebhookNotification("email",
        uri: "http://localhost:5008/api/noti/email",
        payload: "{ \"message\": \"Webhook report for [[LIVENESS]]: [[FAILURE]] - Description: [[DESCRIPTIONS]]\"}",
        restorePayload: "{ \"message\": \"[[LIVENESS]] is back to life\"}");
}).AddInMemoryStorage();
We are developing an SAP Fiori app to be used on the Launchpad and also as an offline-enabled hybrid app, using the SAP SDK and its Kapsel plug-ins. One issue we are facing at the moment is OData message handling.
On the Gateway, we use the message container to add additional information to the response:
" ABAP snippet, random Gateway entity method
[...]
DATA(lo_message_container) = me->mo_context->get_message_container( ).
lo_message_container->add_message(
iv_msg_type = /iwbep/cl_cos_logger=>warning
iv_msg_number = '123'
iv_msg_id = 'ZFOO'
).
" optional, only used for 'true' errors
RAISE EXCEPTION TYPE /iwbep/cx_mgw_busi_exception
EXPORTING
message_container = lo_message_container.
In the Fiori app, we can access that data directly from the message manager and apply it to a MessageView control.
// Fiori part (Desktop, online)
var aMessageData = sap.ui.getCore().getMessageManager().getMessageModel().getData();
However, our offline app always ends up with an empty message model. After a sync or flush, the message model is still empty, even after triggering message-generating methods in the backend.
The only way to get some kind of messages is to raise a /iwbep/cx_mgw_busi_exception and pass the message container. The messages can be found, in an unparsed state, in the /ErrorArchive entity and be read for further use.
// Hybrid App part, offline, after sync and flush
this.getModel().read("/ErrorArchive", { success: .... })
This approach limits us to negative, "exception-worthy" messages only. We also have to code some parts of our app twice (desktop vs. offline app).
So: is there a "proper" way to access those messages after an offline sync and flush?
For analyzing the issue, you might use the tool ILOData as seen in this blog:
Step by Step with the SAP Cloud Platform SDK for Android — Part 6c — Using ILOData
Note, ILOData is part of the Kapsel SDK, so while the blog above was part of a series on the SAP Cloud Platform SDK for Android, it also applies to Kapsel apps.
ILOData is a command line based tool that lets you execute OData requests and queries against an offline store.
It functions as an offline OData client, without the need for an application.
Therefore, it’s a good tool to use to test data from the backend system, as well as verify app behavior.
If a client has a problem with some entries on their device, the offline store from the device can be retrieved using the sendStore method and then ILOData can be used to query the database.
This blog about Kapsel Offline OData plugin might also be helpful.
I am using Openfire. I am able to add users and groups, but I am now stuck at sending a message from one user to another. I went through the available libraries but did not find a suitable one. I tried the XMPP BOSH library but am getting this error:
"message": "Declaration of XMPPHP_BOSH::connect($server, $wait = '1', $session = false) should be compatible with XMPPHP_XMLStream::connect($timeout = 30, $persistent = false, $sendinit = true)",
"exception": "ErrorException",
The REST API Plugin does not provide the feature you are looking for (1:1 messaging). The REST API Plugin is made to manage the Openfire instance (users, groups, channels, etc.).
To send one-to-one messages, you could use the Openfire chat plugin (https://github.com/igniterealtime/openfire-chat).
Example:
POST /restapi/v1/chat/{streamid}/messages/{destination}
{
  "body" : "desired message"
}
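For completeness, here is a hedged sketch of issuing that POST from plain Java (11+). The host, port, stream id, destination and the Authorization value are placeholders; adapt them to your Openfire installation and whatever authentication the chat plugin is configured with there.

import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class SendChatMessage {
    public static void main(String[] args) throws Exception {
        // Placeholders: replace with your real Openfire host, stream id and destination.
        String url = "http://your-openfire-host:9090/restapi/v1/chat/{streamid}/messages/{destination}"
                .replace("{streamid}", "mystream")
                .replace("{destination}", "romeo@your-openfire-host");

        HttpRequest request = HttpRequest.newBuilder(URI.create(url))
                .header("Content-Type", "application/json")
                .header("Accept", "application/json")
                .header("Authorization", "your-shared-secret") // assumption: adjust to your plugin's auth setup
                .POST(HttpRequest.BodyPublishers.ofString("{ \"body\" : \"desired message\" }"))
                .build();

        HttpResponse<String> response = HttpClient.newHttpClient()
                .send(request, HttpResponse.BodyHandlers.ofString());
        System.out.println(response.statusCode() + " " + response.body());
    }
}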
I want to make a unified "inbox" for messages from across multiple platforms. Some of them are widely supported by all major chatbot services, like Facebook Messenger; others are more obscure, like WhatsApp; and others are plain unsupported (like Steam Web Chat).
I've encountered several solutions that have some sort of "one-click" integration for the most popular messengers, but I can't find one that will let you integrate third-party messengers (which ideally have an API to read/send messages at the very least) into a chatbot-like service. Is there such a thing out there?
PS: I don't really care about fancy AI conversational support, I'd just like to receive all messages into, say, one webhook I can then act on, and also be able to reply to them.
API.ai doesn't have an 'integration pooling' architecture; it treats each platform as a separate integration or conversation. Given that, you'll have to build your own server-side message pooling solution that plugs into all your third-party APIs, pools/queues messages across all streams before passing them to API.ai, and keeps some message-ID/tracking system on your server so you know which third-party API to respond to with API.ai's response. Something like this as an aggregate/pooling function should work:
// Shared in-memory queue so messages from every platform are sent to API.ai one at a time.
var request = require('request'); // assumes the 'request' HTTP client package is installed

var queue = [];
var queueProcessing = false;

// Called by each platform integration when a new message arrives.
function queueRequest(apiAiRequest) {
    queue.push(apiAiRequest);
    if (queueProcessing) {
        return; // the queue is already being drained
    }
    queueProcessing = true;
    processQueue();
}

// Sends the queued requests to API.ai sequentially.
function processQueue() {
    if (queue.length === 0) {
        queueProcessing = false;
        return;
    }
    var currentRequest = queue.shift();
    // Send to API.ai
    request(currentRequest, function (error, response, body) {
        if (error || (response && response.body && response.body.error)) {
            console.log("Error sending messages!");
        }
        processQueue(); // move on to the next queued message
    });
}
What I would do is have a Node.js backend. Direct every messaging integration to it and then direct that to API.AI.
So the flow would be: messaging platform -> your Node.js backend -> API.AI -> your Node.js backend -> back out to the originating platform.
There is a service called Message.io which, I believe, does what you want. They support the widest range of platforms.
Message.io sits between your bot and the messaging platforms: you receive messages in a standardized way from Message.io, and when sending messages out to users, it converts them to the appropriate format for the platform you're responding to.