Setting up AWS CloudWatch Alarms to alert via SNS if in INSUFFICIENT_DATA state

I haven't been able to get a clear, straight answer on how to do this, and there doesn't seem to be a single doc page or forum post that covers it.
Currently, I have CloudWatch alarms in place that monitor a variety of metrics and alert on breach, routed through SNS to Slack. What I'm looking to do after the fact is make sure that any alarm in the INSUFFICIENT_DATA state sends its own alert to Slack as well. I assumed setting up composite alarms would lead me down the right path, but that method doesn't actually appear to let me send alerts on that state. The CloudWatch agents and their configs were rolled out to the instances via a Chef recipe.
My infrastructure is on the larger side, with at least five metrics being monitored per host, which leaves me with 1370+ alarms. Getting these alarms to alert on INSUFFICIENT_DATA needs to be done (ideally) in one large swathe, or at least in large batches, and I haven't found a sensible way to do that either. Scripting it out could be the answer, sure, but I'm unsure how to write a script that grabs each and every alarm name and adds this specific alert action to it.
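To make the scripting idea concrete, here is a minimal boto3 sketch of the loop I have in mind. The SNS topic ARN is a placeholder, and it assumes plain single-metric alarms that use Statistic; alarms built on metric math or extended statistics would need extra fields carried over, since put_metric_alarm replaces the whole alarm definition.

```python
import boto3

# Placeholder ARN for the SNS topic that feeds Slack.
TOPIC_ARN = "arn:aws:sns:us-east-1:123456789012:slack-alerts"

cw = boto3.client("cloudwatch")

# describe_alarms is paginated; walk every page.
for page in cw.get_paginator("describe_alarms").paginate():
    for alarm in page["MetricAlarms"]:
        actions = alarm.get("InsufficientDataActions", [])
        if TOPIC_ARN in actions:
            continue  # already wired up
        # put_metric_alarm overwrites the alarm, so resend the existing
        # definition with the new InsufficientDataActions appended.
        # Assumes a plain Statistic-based alarm (no metric math or
        # extended statistics).
        cw.put_metric_alarm(
            AlarmName=alarm["AlarmName"],
            MetricName=alarm["MetricName"],
            Namespace=alarm["Namespace"],
            Statistic=alarm["Statistic"],
            Dimensions=alarm["Dimensions"],
            Period=alarm["Period"],
            EvaluationPeriods=alarm["EvaluationPeriods"],
            Threshold=alarm["Threshold"],
            ComparisonOperator=alarm["ComparisonOperator"],
            AlarmActions=alarm.get("AlarmActions", []),
            OKActions=alarm.get("OKActions", []),
            InsufficientDataActions=actions + [TOPIC_ARN],
        )
```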
If anyone has any idea how to proceed, I'd be immensely grateful. This is a big blocker right now, so I'm willing to work out a solution one phase at a time. Thanks very much!

Related

How can I send scheduled messages to multiple Telegram groups?

I've added the Telegram bot to multiple groups as an admin and registered the bot in IntelliJ using Java as described here, so I can receive and send messages. What I need to do now is send a message like:
"TODAY IS <DAY_OF_THE_WEEK>, THE BEST DAY OF THE WEEK!"
plus an animation (a different one for each day of the week) to all of the groups every day at a certain time. I am new to all this and would really appreciate the help. I'm not even sure whether I should use Java or another language for this; I picked Java only because the tutorial used it and I wanted to follow the instructions directly without having to change anything along the way.
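For what it's worth, the Bot API itself is language-agnostic; a rough Python sketch of the daily send (token, chat ids, and animation file_ids are all placeholders) could look like this, run once a day from cron or any scheduler:

```python
import datetime
import requests

BOT_TOKEN = "<bot-token>"                          # placeholder
GROUP_CHAT_IDS = [-1001111111111, -1002222222222]  # placeholder group ids
# One cached animation file_id per weekday (Monday=0 .. Sunday=6); placeholders.
ANIMATIONS = ["<mon>", "<tue>", "<wed>", "<thu>", "<fri>", "<sat>", "<sun>"]

def send_daily_post() -> None:
    today = datetime.date.today()
    text = f"TODAY IS {today.strftime('%A').upper()}, THE BEST DAY OF THE WEEK!"
    for chat_id in GROUP_CHAT_IDS:
        # Plain HTTP calls to the Bot API; no library needed.
        requests.post(
            f"https://api.telegram.org/bot{BOT_TOKEN}/sendMessage",
            json={"chat_id": chat_id, "text": text},
            timeout=10,
        )
        requests.post(
            f"https://api.telegram.org/bot{BOT_TOKEN}/sendAnimation",
            json={"chat_id": chat_id, "animation": ANIMATIONS[today.weekday()]},
            timeout=10,
        )

if __name__ == "__main__":
    send_daily_post()
```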

CRUD with single drag/drop or other action via API?

This is my first post/question. If there's an existing thread that answers my question, I missed it in my search and would definitely appreciate a link! Please also let me know if I should be asking this elsewhere.
My question relates to Salesforce.
I have a use case where a client has a monthly batch of files that need to be made available on various cloud-based storage/distribution platforms like Box and Dropbox, plus other less ubiquitous tools specific to the sector. Currently, the client logs into each distribution platform one at a time and uploads the files; then, if any files need to be updated or removed/restricted, the client logs into each platform one by one and takes the necessary action. Obviously this process is tedious and laborious and leaves multiple gaps for error.

The client and I are discussing a solution that would allow create/read/update/delete actions on all of the distribution platforms without having to leave their Salesforce org. I am aware of the existing AppExchange integrations for Box, Dropbox, etc., but to my knowledge they don't quite do everything we need. They tie in nicely, and there are use cases where they are powerful tools, but my understanding is that each would still require a dedicated tab within the Object and repeated drags and drops of the same files to each tab. The end goal is that the client drags and drops one time and the file is pushed to all of the platforms; likewise, they choose "delete" one time from within Salesforce and the file is removed/restricted on every distribution platform.
I am a certified SF Admin 1, so perhaps this should be in my wheelhouse, but I feel unsure how to approach it. My feeling is that this calls for a combination of API integrations and Process/Flow work; a sketch of the shape I have in mind is below. Any ideas, input, or guidance any of you can offer would be so greatly appreciated!!
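Purely to illustrate the pattern (not Salesforce-specific, and every name here is made up), this Python sketch shows one action being replayed against every platform through a common adapter interface; in practice this logic would live in middleware that Salesforce calls out to:

```python
from abc import ABC, abstractmethod

class Distributor(ABC):
    """Common interface each platform adapter (Box, Dropbox, ...) implements."""
    @abstractmethod
    def upload(self, name: str, data: bytes) -> None: ...
    @abstractmethod
    def delete(self, name: str) -> None: ...

class BoxDistributor(Distributor):
    # Hypothetical adapter; real code would call the Box API/SDK here.
    def upload(self, name: str, data: bytes) -> None:
        print(f"Box: uploaded {name}")
    def delete(self, name: str) -> None:
        print(f"Box: deleted {name}")

def fan_out(distributors, action: str, *args):
    """Apply one CRUD action ('upload' or 'delete') to every platform,
    collecting per-platform failures instead of stopping at the first."""
    errors = {}
    for d in distributors:
        try:
            getattr(d, action)(*args)
        except Exception as exc:
            errors[type(d).__name__] = exc
    return errors

# One drag-and-drop in Salesforce would translate to one fan_out call:
# fan_out([BoxDistributor(), ...], "upload", "report.pdf", b"...")
```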
Thanks so much!

What is a good strategy for polling Microsoft Graph API so that I don't get throttled?

My goal is to maintain "real-time" (or as close as possible) information about the email messages sent by a group of users.
My ideal solution would be to periodically query the API for messages by all users in the group. This feature is not (yet?) implemented.
My second choice would be to create subscriptions (https://graph.microsoft.io/en-us/docs/api-reference/v1.0/api/subscription_post_subscriptions) for every member of the group and then request message information once I become aware of an event. The problem is that, in practice, I am only allowed to create 20-30 simultaneous subscriptions (see "Issues to use Webhook for Microsoft Graph API"), which might not be enough.
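For reference, each such subscription is a single POST to /subscriptions; a rough Python sketch (token acquisition omitted, and the notification URL is a placeholder):

```python
from datetime import datetime, timedelta, timezone
import requests

GRAPH = "https://graph.microsoft.com/v1.0"

def create_message_subscription(token: str, user_id: str,
                                notification_url: str) -> dict:
    """Subscribe to new-message notifications for one user."""
    payload = {
        "changeType": "created",
        "notificationUrl": notification_url,  # must be a reachable HTTPS endpoint
        "resource": f"users/{user_id}/messages",
        # Message subscriptions are short-lived and must be renewed.
        "expirationDateTime": (datetime.now(timezone.utc)
                               + timedelta(days=2)).strftime("%Y-%m-%dT%H:%M:%SZ"),
    }
    resp = requests.post(f"{GRAPH}/subscriptions", json=payload,
                         headers={"Authorization": f"Bearer {token}"}, timeout=10)
    resp.raise_for_status()
    return resp.json()
```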
So I'm stuck with polling all the users in a cycle. The main problem with this approach is that I can't find any information on how many requests are "too many", i.e. at what point I get throttled. I want to maximize the number of requests to minimize the time it takes to get through one cycle.
A solution that comes to mind is an adaptive program that slowly decreases the time between requests until it gets throttled, then abruptly adds some time back until a nice balance is found and maintained. That seems like a lot of work, though. Right now I'm working on the assumption that 1 request/second is about the highest I can safely go (0.5 seconds on average per round trip, then a cool-down of another 0.5 seconds).
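In rough Python, that adaptive idea might look like the following (the names, constants, and AIMD-style tuning are all made up; Graph does return a Retry-After header on 429 responses):

```python
import time
import requests

def poll_forever(user_urls: list[str], token: str,
                 min_delay: float = 0.25, max_delay: float = 10.0) -> None:
    """Creep faster while requests succeed; back off hard on HTTP 429."""
    delay = 1.0  # seconds between requests; the knob being tuned
    headers = {"Authorization": f"Bearer {token}"}
    while True:
        for url in user_urls:
            resp = requests.get(url, headers=headers, timeout=10)
            if resp.status_code == 429:
                # Honour Retry-After if present, then slow the whole loop down.
                time.sleep(int(resp.headers.get("Retry-After", "5")))
                delay = min(delay * 2, max_delay)
            else:
                # ... process resp.json() here ...
                delay = max(delay * 0.95, min_delay)
            time.sleep(delay)
```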
What is the best way to deal with an unknown throttling limit in general, and Microsoft Graph in particular?
Edit:
While I think the accepted answer is a good response, it might not be suitable for all cases. For instance, if you don't want to use the Office 365 API and you don't mind using beta features, you might check out this (delta tokens), which seems to be designed for near-real-time syncing of the data.
The only potential downside of the accepted answer is that you still need a subscription for each user you want to track (I think...), and there are limits on those. Very curious how other people have tackled this problem.
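A rough sketch of such a delta-query loop against one mailbox folder (beta endpoint at the time of writing; token handling omitted):

```python
import requests

def sync_inbox(token: str, delta_link: str | None = None):
    """One pass of a delta-query sync: returns changed messages plus the
    deltaLink to store and replay on the next pass."""
    url = delta_link or ("https://graph.microsoft.com/beta"
                         "/me/mailFolders/inbox/messages/delta")
    headers = {"Authorization": f"Bearer {token}"}
    changes = []
    while url:
        resp = requests.get(url, headers=headers, timeout=10)
        resp.raise_for_status()
        page = resp.json()
        changes.extend(page.get("value", []))
        if "@odata.deltaLink" in page:      # caught up; save for next pass
            return changes, page["@odata.deltaLink"]
        url = page.get("@odata.nextLink")   # more pages in this pass
    return changes, None
```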
Outlook Streaming Notifications are still in Preview, but you may want to take a look at them. These APIs provide a more robust notification model than simple webhooks. You would still need to establish multiple subscriptions, but I expect you'll see far less latency.

"reasonable" use of web APIs to sync data

My goal is to synchronize a web-application with an internal database. The web-application has a public API, but in order to fully synchronize the two sources I would need to make around 2000 separate API calls every time. My instinct tells me that this is excessive and possibly irresponsible, but I lack the experience to know for sure.
In this particular case the web-application is Asana, but I've encountered similar situations before with other services. Is there any way to know if you're abusing a service through excessive API calls? I know I'm not going to DoS a company like Asana, but I can't shake the feeling that there must be a better way than making ~150k requests per day.
The only other option I can think of is to update the web-service only when I know there's been a change in the database, but I'll lose a lot of capability that way.
I apologize for the subjectivity of this question, but I'm really hoping that someone can explain if there's any kind of etiquette that's expected when using public APIs.
(I work at Asana)
This is an excellent question, or rather set of questions.
You are designing a system that will repeatedly make requests for every object. What will happen as the number of objects grows? Even if your initial request rate were reasonable, this would suffer problems with scalability. A more scalable solution is one that scales with the number of changes in the system. This will also grow over time, but much more slowly - the number of changes a single user can make per day is relatively constant, but the total number of objects they've created over time grows and grows. So my first piece of advice would be to avoid doing things this way, and instead find a way to detect changes and just act on those. It would be interesting to know why you feel you'll lose capability by taking this approach.
Now, I happen to know that the Asana API does not currently provide you with any friendly mechanism to just detect changes in the system. This is a commonly requested feature and we are looking into it, though I unfortunately cannot promise a delivery date. So you might be left with no choice but to poll our system for now.
As for being polite to the API, many service providers set limits on their API usage to prevent accidental or malicious use of the API from impacting the service to their other customers -- Asana is no exception. Sometimes these limits are published, other times not, and there is no standard limit: it all depends on the service. But it is very thoughtful of you to be curious about service limitations.
That said, 150k requests per day is, for the Asana API, kind of a lot. If all of our API users sent us that much traffic, we might be serving more requests per day than Google Web Search, and we're not quite that scalable yet. :) Technically, we might sometimes be able to handle that volume from a single user.
If you must poll, try to poll at intervals of, say, 15 minutes, but please do not poll your entire workspace that often; it's likely to be too much traffic/data. We're working on providing you with a better solution.
If you do happen to make too many requests of the Asana API, you will get back HTTP status code 429 instead of your desired response; you can read more about that here (https://asana.com/developers/documentation/getting-started/errors).
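As an illustration only (not official sample code), here is a small Python retry wrapper around GET requests that backs off on 429, using the Retry-After header when the response includes one:

```python
import time
import requests

ASANA_API = "https://app.asana.com/api/1.0"

def get_with_retry(path: str, token: str, max_attempts: int = 5) -> dict:
    """GET an Asana endpoint, sleeping out 429 responses before retrying."""
    for attempt in range(max_attempts):
        resp = requests.get(f"{ASANA_API}{path}",
                            headers={"Authorization": f"Bearer {token}"},
                            timeout=10)
        if resp.status_code != 429:
            resp.raise_for_status()
            return resp.json()
        # Back off: honour Retry-After if given, else exponential fallback.
        time.sleep(int(resp.headers.get("Retry-After", str(2 ** attempt))))
    raise RuntimeError(f"still throttled after {max_attempts} attempts")
```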

Error monitor for different applications

Currently we have many applications, each with its own error notification and reporting mechanism, which clearly creates many problems:
Lack of consistent error monitoring across different systems/applications: different GUIs, interfaces, different messages, etc.
Different approaches to error notification per application (many applications use email notifications, others publish messages to a queue, etc.).
Separated configuration settings for reporting and monitoring per application: notification frequency, message recipients, etc.
You could add many other issues to the list, but the point is clear. There is currently a plan to develop a custom application or service to provide a consistent, common solution for this situation.
That said, I am not sure creating a custom application is a good idea; I suspect there is a framework, platform, or existing product (preferably open source) that already solves this problem. So my question is: which projects or products should we look at before deciding to build our own?
Thanks!
Have a look at AlertGrid; it works as a centralized event handler and notification dispatcher. It can collect events from different sources, and you can easily manage event handling by creating rules in a visual editor. You can filter events and raise notifications (email, SMS, phone; works worldwide) whenever your custom condition is met. You can react not only to events that occurred but also to ones that did not occur (detecting missing 'heartbeats'). All you need is to feed AlertGrid with events (signals) via a very simple API.
I'm not quite sure if this is what you're looking for. I'm on the AlertGrid dev team, so if you have any questions, feel free to ask. We are constantly developing this tool and appreciate any feedback.
Depending on how much information is written to application logs, you could consider using Hyperic. It's open source and has a lot of the features you are looking for.
http://www.hyperic.com/
Bugsnag is an awesome option if you are looking to do cross-platform error monitoring. It supports some 50 platforms in one interface. https://blog.bugsnag.com/react-native-plus-code-push/