AWS CloudWatch Logs Insights: find logs close to another log or timestamp

On AWS CloudWatch Logs Insights, often after you filter your logs and find an interesting log message, you want to see what was happening right before or after that message.
What is the best way to find log messages right next to another message?
I imagine a good way is to filter based on @timestamp, and they conveniently provide a datefloor function, but I cannot figure out the syntax that works for equality against timestamps.

In Logs Insights, if you query for:
fields @timestamp, @message, @logStream
| filter @message like /<Your Log Message>/
you'll get results that include the log stream as a clickable link. Clicking it brings you right to the context before/after the log you're interested in.
It's a fair bit of work for something that should be a single click but it's the only work-around I'm aware of. Feel free to go bug the AWS team to build this as a 1-click feature right from the logs themselves.
Edit:
Something I didn't know when I wrote this answer: this trick only works if you're querying a single log group. If you're querying multiple, it still shows the logStream but it's not clickable.
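For what it's worth, the timestamp-based approach from the question can be approximated with datefloor and fromMillis, by keeping only events in the same time bucket as the interesting one. This is only a sketch, not a tested recipe; the epoch-millisecond value and the one-minute bucket below are placeholder assumptions you would swap for your own:
# keep only events in the same one-minute bucket as the interesting event
fields @timestamp, @message, @logStream
| filter datefloor(@timestamp, 1m) = datefloor(fromMillis(1674806400000), 1m)
| sort @timestamp asc
Widening the period (5m, 1h, ...) widens the amount of context returned, and you can always fall back to simply narrowing the console's time range picker around the event instead.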

Related

AWS CloudWatch Email Specific Logs

I have a series of logs inside my CloudWatch log groups.
They look like this:
2023-01-27T08:00:00 - {LogTitle} Unfortunately {DataName} has an error {error}
I want to be able to send individual emails for specific LogTitles and specific DataNames.
I might want to know about "LogTitle": "Importer" and get an email, but I don't want emails for the others, as they happen too frequently.
Any idea how I should approach this?
I have read about using Dimensions inside a metric alarm and metric filter but I don't know much about it.
Also, I intend to use Terraform to code this AWS log service as infrastructure. That way I can deploy it again easily without going through manual AWS setup.
Any help appreciated to point me in the right direction.
Thanks.
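For illustration, here is a sketch of the kind of setup being described: a CloudWatch metric filter that counts matching log lines, plus a metric alarm that notifies an SNS topic with an email subscription. Every name, the log group, the filter pattern, and the email address below are placeholder assumptions, not values from the question:
# Count log lines mentioning "Importer" as a custom metric (placeholder names throughout).
resource "aws_cloudwatch_log_metric_filter" "importer_errors" {
  name           = "importer-errors"
  log_group_name = "/my/app/log-group"
  pattern        = "Importer"

  metric_transformation {
    name      = "ImporterErrorCount"
    namespace = "Custom/AppLogs"
    value     = "1"
  }
}

# Email subscribers of this topic receive the alarm notifications.
resource "aws_sns_topic" "importer_alerts" {
  name = "importer-alerts"
}

resource "aws_sns_topic_subscription" "importer_alerts_email" {
  topic_arn = aws_sns_topic.importer_alerts.arn
  protocol  = "email"
  endpoint  = "you@example.com"
}

# Alarm whenever at least one matching line shows up in a five-minute window.
resource "aws_cloudwatch_metric_alarm" "importer_errors" {
  alarm_name          = "importer-error-alarm"
  namespace           = "Custom/AppLogs"
  metric_name         = "ImporterErrorCount"
  statistic           = "Sum"
  period              = 300
  evaluation_periods  = 1
  threshold           = 1
  comparison_operator = "GreaterThanOrEqualToThreshold"
  treat_missing_data  = "notBreaching"
  alarm_actions       = [aws_sns_topic.importer_alerts.arn]
}
One filter/alarm pair per LogTitle you care about keeps the emails limited to the titles you actually want to hear about.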

Is it possible to link to a job in the BigQuery console?

If I execute a BigQuery job using the REST API (i.e. bigquery.googleapis.com), the response includes a selfLink that looks something like this:
https://bigquery.googleapis.com/bigquery/v2/projects/my-project/jobs/job_0123456789ABCDEF?location=EU
In the UI (i.e. console.cloud.google.com) I can see the very same job in the project's query history.
Is it possible to use the information in that API response to construct a URL that takes a person directly to the information about that query in the UI? This would be really useful because we could log a message containing that URL, so that anyone viewing the logs can see a user-friendly UI for that job.
I suspect the answer is "no" but just thought I'd ask.
I believe you can share this link:
https://console.cloud.google.com/bigquery?project=<my-project>&j=bq:<location>:<job_id>&page=queryresults
For example: https://console.cloud.google.com/bigquery?project=my-project&j=bq:US:2846160a-9a13-4192-9bff-e691ff2adab6&page=queryresults
If a user has the BQ Job List permission in that project, then when they open the link they will be able to see the query that was run in the UI, along with the job information.
But they can't see the query results, which is intended behavior. Instead they will get a warning:
Access Denied: User does not have permission to access results of another user's job.
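If you want to generate that console URL automatically from the API response, a small helper along these lines would do it. This is a sketch that assumes the selfLink always has the .../projects/<project>/jobs/<job_id>?location=<location> shape shown above; the class and method names are made up for illustration:
import java.net.URI;

public final class BigQueryConsoleLink {

    // Turns the REST API selfLink into the console URL format from this answer:
    // https://console.cloud.google.com/bigquery?project=<project>&j=bq:<location>:<job_id>&page=queryresults
    public static String fromSelfLink(String selfLink) {
        URI uri = URI.create(selfLink);

        // Path looks like: /bigquery/v2/projects/<project>/jobs/<job_id>
        String[] parts = uri.getPath().split("/");
        String project = parts[4];
        String jobId = parts[6];

        // Query string looks like: location=EU
        String location = uri.getQuery().replace("location=", "");

        return "https://console.cloud.google.com/bigquery?project=" + project
            + "&j=bq:" + location + ":" + jobId + "&page=queryresults";
    }

    public static void main(String[] args) {
        System.out.println(fromSelfLink(
            "https://bigquery.googleapis.com/bigquery/v2/projects/my-project/jobs/job_0123456789ABCDEF?location=EU"));
    }
}
Logging the result of that helper next to the job id gives anyone reading the logs a one-click path to the job in the console, subject to the permission caveat above.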

Setting up alerts for metrics in Splunk

I'm sending data to Splunk and everything is working just fine: I can see the data that I'm sending, run a query, and get results. Right now I'm only using a test data set, but eventually people will be sending their own fields (as well as the mandatory ones). My question is: since I don't know what kind of data they will be sending, can I still set up alerts for them? Can I create something general?
It's pretty hard to create a generic alert that's actually useful. You may be able to craft something using the mandatory fields, but it may not be all that helpful.
If you're opposed to letting users create their own alerts then let them come to you with what they want.

How to configure PagerDuty alerts in Splunk Cloud?

I've run into a few different issues with the PagerDuty integration in Splunk Cloud.
The documentation on PagerDuty's site is either outdated, not applicable to Splunk Cloud, or else there's something wrong with the way my Splunk Cloud account is configured (could be a permissions issue): https://www.pagerduty.com/docs/guides/splunk-integration-guide/. I don't see an Alert Actions page in Splunk Cloud, though I do have a Searches, Reports and Alerts page.
I've configured PD alerts in Splunk using the alert_logevent app, but it's not clear if I should instead be using some other app. These alerts do fire when there are search hits, but I'm seeing another issue (below). The alert_webhook app type seems like it might be appropriate, but I was unable to get it to work correctly. I cannot create an alert type using the pagerduty_incident app, although I can set it as a Trigger Action (I guess this is how it's supposed to work; I don't find the UI too intuitive here).
When my alerts fire and create incidents in PagerDuty, I do not see a way to set the PagerDuty incident severity.
Also, the PD incidents include a link back to Splunk, which I believe should open the query with the search hits that generated the alert. However, the link brings me to a page with a Page Not Found! error. It contains a link to "more information about my request", which brings up a Splunk query with no hits. This query looks like "index=_internal, host=SOME_HOST_ON_SPLUNK_CLOUD, source=*web_service.log, log_level=ERROR, requestid=A_REQUEST_ID". It is not clear to me if this is a config issue, a bug in Splunk Cloud, or possibly even a permissions issue for my account.
Any help is appreciated.
I'm also a Splunk Cloud + PagerDuty customer and ran into the same issue. The PagerDuty App for Splunk seems to create all incidents as Critical, but you can set different severities with event rules.
One way to do this dynamically is to rename your Splunk alerts with the desired severity level and then create a PagerDuty event rule for each level that looks for the keyword in the Summary. For example...
If the following condition is met:
Summary contains "TEST"
Then perform the following actions:
Set severity = Info
(Screenshot of the example in the event rule edit screen.)
It's a bit of a pain to rename your existing alerts in Splunk, but it works.
If your severity levels in Splunk are set programmatically, as in Enterprise Security, then another method would be to modify the PagerDuty App for Splunk to send the $alert.severity$ custom alert action token as a custom detail in the webhook payload and use that as the event rule condition instead of Summary... but that seems harder.

How to log requests and user info like username automatically into a log file to track user activity in Liferay?

In Liferay 7 Enterprise Edition,
I want to log user info like user_name to external log files automatically on each request, to track user activity. How can I do that?
Without using the auditing plugin.
When I tried to log a POST request (login, for example), it didn't contain any info about the user.
This kind of thing is much harder than you might think...
Getting access to the current user is really easy. As Victor pointed out, you can use the ThemeDisplay object to get the current user. If you don't have the request around, you can use the PrincipalThreadLocal to find the current user id.
That gives you the who, but certainly not the "what is the user doing" aspect. Since the portal aggregates the HTML fragments of many portlets, from a servlet filter perspective it would be hard to glean which of the available portlets on an incoming URL is actually being interacted with. You could try a portlet filter to narrow the field, but that will only tell you which portlet is being accessed, not what they are doing with it.
Although you rejected the built-in audit functionality available in DXP, it really is the answer for tracking who did what in the portal, because it has the necessary touch points to get those two pieces and put them together.
Now if you rejected the built-in audit functionality because you want a file and not a database entry, that is easy to solve. Go to the System Settings control panel, find the Logging Audit Message Processor, and enable it. It will write the audit events out to a file in CSV format, but you should have the source for modules/apps/foundation/portal-security-audit/portal-security-audit-router/src/main/java/com/liferay/portal/security/audit/router/internal/LoggingAuditMessageProcessor.java, so you can use it as a basis to write your own format.
Look at this code:
https://github.com/amusarra/liferay-portal-security-audit
in particular the portal-security-audit-capture-events module, which catches the login events.
This seems like a job for a filter; the user information is normally extracted from the ThemeDisplay, like in:
// The ThemeDisplay is stored as a request attribute under WebKeys.THEME_DISPLAY
ThemeDisplay themeDisplay = (ThemeDisplay) request.getAttribute(WebKeys.THEME_DISPLAY);
// Real user id, even when the session is impersonating another user
long userId = themeDisplay.getRealUserId();
If you want to track specific portlets, an OSGi portlet filter would do the job.
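As a rough illustration of that last suggestion, here is a minimal OSGi portlet filter that logs the current user for every render of one portlet. It is a sketch, not production code: the target portlet name is a placeholder, and it assumes a Liferay 7 workspace with the portal-kernel and portlet APIs on the classpath:
import java.io.IOException;

import javax.portlet.PortletException;
import javax.portlet.RenderRequest;
import javax.portlet.RenderResponse;
import javax.portlet.filter.FilterChain;
import javax.portlet.filter.FilterConfig;
import javax.portlet.filter.PortletFilter;
import javax.portlet.filter.RenderFilter;

import org.osgi.service.component.annotations.Component;

import com.liferay.portal.kernel.log.Log;
import com.liferay.portal.kernel.log.LogFactoryUtil;
import com.liferay.portal.kernel.security.auth.PrincipalThreadLocal;

@Component(
    immediate = true,
    // Placeholder portlet name - point this at the portlet(s) you want to track.
    property = "javax.portlet.name=com_example_portlet_TargetPortlet",
    service = PortletFilter.class
)
public class UserActivityRenderFilter implements RenderFilter {

    private static final Log _log = LogFactoryUtil.getLog(UserActivityRenderFilter.class);

    @Override
    public void init(FilterConfig filterConfig) {
    }

    @Override
    public void destroy() {
    }

    @Override
    public void doFilter(RenderRequest request, RenderResponse response, FilterChain chain)
        throws IOException, PortletException {

        // PrincipalThreadLocal holds the id of the currently authenticated user,
        // which is handy when unwrapping the portlet request is inconvenient.
        String userId = PrincipalThreadLocal.getName();

        _log.info("User " + userId + " rendered portlet " + request.getWindowID());

        chain.doFilter(request, response);
    }
}
Routing that logger's category to its own appender in your logging configuration keeps the activity trail in a separate external file, which is what the question asks for.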