AWS CloudWatch: Email Specific Logs

I have a series of logs inside my CloudWatch log groups.
They look like this:
2023-01-27T08:00:00 - {LogTitle} Unfortunately {DataName} has an error {error}
I want to be able to send individual emails for specific LogTitles and specific DataNames.
For example, I might want an email when the "LogTitle" is "Importer", but not for others, as they happen too frequently.
Any idea how I should approach this?
I have read about using Dimensions inside a metric alarm and metric filter, but I don't know much about them.
Also, I intend to use Terraform to define this AWS log setup as infrastructure, so that I can redeploy it easily without going through manual AWS setup.
Any help appreciated to point me in the right direction.
Thanks.
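One approach that fits this: create a CloudWatch metric filter that matches only the log lines you care about, attach a metric alarm to the resulting metric, and have the alarm publish to an SNS topic with an email subscription. Below is a minimal Terraform sketch; the log group name, metric namespace, email address, and the filter pattern (which assumes the literal words "Importer" and "has an error" appear in the lines you want) are all hypothetical placeholders to adapt to your setup.

resource "aws_cloudwatch_log_metric_filter" "importer_errors" {
  name           = "importer-errors"
  log_group_name = "/my/app/logs"                     # hypothetical log group name
  # Match only lines containing both quoted terms; tune to your real log format.
  pattern        = "\"Importer\" \"has an error\""

  metric_transformation {
    name      = "ImporterErrorCount"
    namespace = "MyApp/Logs"                          # hypothetical namespace
    value     = "1"                                   # emit 1 per matching line
  }
}

resource "aws_sns_topic" "alerts" {
  name = "importer-error-alerts"
}

resource "aws_sns_topic_subscription" "email" {
  topic_arn = aws_sns_topic.alerts.arn
  protocol  = "email"
  endpoint  = "you@example.com"                       # SNS sends a confirmation email first
}

resource "aws_cloudwatch_metric_alarm" "importer_errors" {
  alarm_name          = "importer-error-alarm"
  namespace           = "MyApp/Logs"
  metric_name         = "ImporterErrorCount"
  statistic           = "Sum"
  period              = 300                           # evaluate 5-minute windows
  evaluation_periods  = 1
  threshold           = 1
  comparison_operator = "GreaterThanOrEqualToThreshold"
  treat_missing_data  = "notBreaching"                # no matching logs = no alarm
  alarm_actions       = [aws_sns_topic.alerts.arn]
}

For separate emails per LogTitle/DataName combination, the simplest route is one filter/alarm pair per combination. The dimensions argument of metric_transformation can instead split a single metric by a field extracted by the pattern, but each alarm still has to target specific dimension values, so it mainly saves filter definitions rather than alarms.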

Related

Is there any way to use variables in Looker while sending alerts?

I am sending alerts when a certain condition is met on a Looker dashboard. However, the mail (i.e. the alert) sent by Looker is very limited. I wanted to know if there is any way to attach certain variables, such as current_time and other details, to it when the alert is triggered. As of now, what I see is that it can only send text along with the dashboard link.
I would like to have something like:
An alert was triggered from #{variable1} at #{current_time}
where the #{variable1}, #{current_time} can be treated as variables.
I am not sure if there is any support for this. So far the documentation hasn't helped me.

CargoWise eAdaptor API manual + integration with QlikSense

Just wanted to ask if somebody has a manual for the eAdaptor API for CargoWise.
We are trying to get data from the system into QlikSense via the REST API and we are not entirely sure how to do it. We do have a URL that will probably get us to some "middle man", but we still need to create a query for a specific (for example) shipment.
If somebody was working with this API and would have some insight, any help would be much appreciated.
There is an "eAdaptor Developers Guide.pdf" download which contains all the info about the CargoWise side of things.
You'll need someone with a CargoWise instance to download it for you from the following location in the CargoWise portal:

Setting up alerts for metrics in Splunk

I'm sending data to Splunk and everything is working just fine: I can see the data that I'm sending, and I can run a query and get results. Right now I'm only using a test data set, but eventually people will be sending their own fields (as well as the mandatory ones). My question is: since I don't know what kind of data they will be sending, can I still set up alerts for them? Can I create something general?
It's pretty hard to create a generic alert that's actually useful. You may be able to craft something using the mandatory fields, but it may not be all that helpful.
If you're opposed to letting users create their own alerts, then let them come to you with what they want.

AWS CloudWatch Logs Insights: find logs close to another log or timestamp

In AWS CloudWatch Logs Insights, often after you filter your logs and find an interesting log message, you want to see what was happening right before or after it.
What is the best way to find log messages right next to another message?
I imagine a good way is to filter based on @timestamp, and Logs Insights conveniently provides a datefloor function, but I cannot figure out the syntax that works for comparing timestamps.
In Logs Insights, if you query for:
fields @timestamp, @message, @logStream
| filter @message like /<Your Log Message>/
you'll get a link where you can access the log stream. Clicking there will bring you right to the context before/after the log you're interested in.
It's a fair bit of work for something that should be a single click but it's the only work-around I'm aware of. Feel free to go bug the AWS team to build this as a 1-click feature right from the logs themselves.
Edit:
Something I didn't know when I wrote this answer: this trick only works if you're querying a single log group. If you're querying multiple, it still shows the logStream but it's not clickable.

How to configure PagerDuty alerts in Splunk Cloud?

I've run into a few different issues with the PagerDuty integration in Splunk Cloud.
The documentation on PagerDuty's site is either outdated or not applicable to Splunk Cloud, or else there's something wrong with the way my Splunk Cloud account is configured (it could be a permissions issue): https://www.pagerduty.com/docs/guides/splunk-integration-guide/. I don't see an Alert Actions page in Splunk Cloud, though I do have a Searches, Reports and Alerts page.
I've configured PD alerts in Splunk using the alert_logevent app, but it's not clear whether I should be using some other app instead. These alerts do fire when there are search hits, but I'm seeing another issue (below). The alert_webhook app type seems like it might be appropriate, but I was unable to get it to work correctly. I cannot create an alert type using the pagerduty_incident app... although I can set it as a Trigger Action (I guess this is how it's supposed to work; I don't find the UI too intuitive here).
When my alerts fire and create incidents in PagerDuty, I do not see a way to set the PagerDuty incident severity.
Also, the PD incidents include a link back to Splunk, which I believe should open the query with the search hits that generated the alert. However, the link brings me to a page with a "Page Not Found!" error. It contains a link to "more information about my request", which brings up a Splunk query with no hits. This query looks like "index=_internal, host=SOME_HOST_ON_SPLUNK_CLOUD, source=*web_service.log, log_level=ERROR, requestid=A_REQUEST_ID". It is not clear to me whether this is a config issue, a bug in Splunk Cloud, or possibly even a permissions issue with my account.
Any help is appreciated.
I'm also a Splunk Cloud + PagerDuty customer and ran into the same issue. The PagerDuty App for Splunk seems to create all incidents as Critical but you can set different severities with event rules.
One way to do this dynamically is to rename your Splunk alerts with the desired severity level and then create a PagerDuty event rule for each level that looks for that keyword in the Summary. For example...
If the following condition is met:
Summary contains "TEST"
Then perform the following actions:
Set severity = Info
[screenshot of the example in the event rule edit screen]
It's a bit of a pain to rename your existing alerts in Splunk, but it works.
If your severity levels in Splunk are set programmatically, as in Enterprise Security, then another method would be to modify the PagerDuty App for Splunk to send the $alert.severity$ custom alert action token as a custom detail in the webhook payload and use that as the event rule condition instead of Summary... but that seems harder.