Is it possible to have custom fields in CloudWatch Logs from my application's logs? - amazon-cloudwatch

I have several NodeJS applications running on ECS Fargate, and their logs are shipped to CloudWatch. I'd like custom fields to show up in CloudWatch for each log message, such as application.name and application.version, and possibly fields derived from the content of the log message. Say my log message is [ERROR] Prototype: flabebes-flower crashed and I'd like to pull out the log level ERROR and the name of the prototype flabebes-flower. Is it possible to have these fields in CloudWatch? If so, how can I accomplish this? I know how to achieve this with Filebeat processors shipping the logs to Elasticsearch, and I already have that solution, but I'd like to explore moving away from Elasticsearch and using CloudWatch alone, without having to write my own parsers.

There are basically two options:
If your log messages always have the same format, you can use the parse feature of CloudWatch Logs Insights to extract these fields, e.g.,
parse @message "[*] Prototype: * crashed" as level, prototype
If the metadata that you want to extract into custom fields is not in a parsable format, you can configure your application to log in JSON format and add the metadata to the JSON log within your application (how to do this depends on the logging library you use). Your JSON log can then look something like this:
{"prototype":"flabebes-flower","level":"error","message":"[ERROR] Prototype: flabebes-flower crashed","timestamp":"27.05.2022, 18:09:46"}
Again with CloudWatch Logs Insights, you can access the custom fields prototype and level. CloudWatch automatically parses the JSON for you, so there is no need for the parse command as in the first method.
This allows you, e.g., to run the query
fields @timestamp, message
| filter level = "error"
to get all error messages.

Related

How to set log4j2 log levels and categories in DataWeave 2.x?

When using the log() function in DataWeave I have a few questions:
How can I set a log level and log category so my logs are handled by log4j2 the same way as log messages from the Logger components in the Mule flow?
How can I suppress logging the expression? If the expression result is very large (what if it is streaming data?), I might want to log only the first argument to log() and skip the actual DataWeave expression evaluation.
There is no way to set the logging level with the log() function in DataWeave. As an alternative, you could implement a custom logging function in a custom DataWeave module that allows you to set a level.
You could use the same custom function to implement some logic; however, there is the general problem of determining whether a payload is big without fully consuming it. In any case, DataWeave logging is meant to be a debugging tool and should not be used in production or for big payloads. The best practice is to avoid logging at all unless you need to debug an issue, and then to remove the logging.
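If you do go the custom-module route, a minimal sketch could look like this (the module and function names are hypothetical; note that the "level" here is only text in the message, it does not change the actual log4j2 level):

%dw 2.0
// CustomLog.dwl - hypothetical helper module

// Prepend a level label to the prefix; log4j2 still logs at its default level.
fun logWith(level: String, label: String, value: Any): Any =
    log("$(level) $(label)", value)

// Log only the label and return the value untouched, so a large
// (possibly streaming) payload is never serialized into the log.
fun logLabelOnly(label: String, value: Any): Any =
    if (log(label, true)) value else value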

How to make scheduled query in BigQuery less verbose?

Using the console GUI, I managed to set up jobs that trigger and send an email notification when a condition in the data (checked with a SQL query) is met.
The email is being sent successfully and the process works as expected. However, when the message comes through Outlook it is very verbose and includes boilerplate that is not needed. Is there a way, in settings or through code, to make the message less verbose? See below for reference.
All I really need the email to display is what's below the yellow highlighted line; thank you.
It looks like the documentation states that the messages are **not** configurable:
https://cloud.google.com/bigquery-transfer/docs/transfer-run-notifications#email_notifications
But as an alternative, you could publish to a Pub/Sub topic instead. This would allow you to access the full payload and extract the portions you want.
Steps to configure Pub/Sub notifications:
https://cloud.google.com/bigquery-transfer/docs/transfer-run-notifications#notifications
The payload looks like this:
https://cloud.google.com/bigquery-transfer/docs/reference/datatransfer/rest/v1/projects.locations.transferConfigs.runs#TransferRun
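A minimal sketch of a subscriber that extracts only what you need (this uses the @google-cloud/pubsub Node client; the subscription name and the exact TransferRun fields pulled out are assumptions):

import { PubSub } from "@google-cloud/pubsub";

// listen on the subscription attached to the transfer-notification topic
const subscription = new PubSub().subscription("bq-transfer-run-notifications");

subscription.on("message", (message) => {
  // the message data is the TransferRun resource as JSON
  const run = JSON.parse(message.data.toString());

  // keep only the parts you want in the notification email
  const summary = `${run.name}: ${run.state} ${run.errorStatus?.message ?? ""}`;
  console.log(summary); // e.g. hand this off to your own mailer
  message.ack();
});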

Pulling summary report for monitoring using reporting task in NiFi

Working on a piece of the project where a report needs to be generated with all the flow details (memory used, number of records processed, processors that ran successfully or failed, etc.). Most of the details are present on the Summary tab, but the requirement is to have separate reports.
Can anyone help me with a solution/steps/examples/screens/videos?
Thanks much.
Every underlying behavior of the UX/UI that Apache NiFi provides is also accessible through an API (in fact, the UI calls the API to perform each of these tasks). So you can invoke the GET /system-diagnostics API to return that information in JSON form, and then parse this data and present it in whatever form you like.
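For example, a minimal sketch in Node 18+ (using its built-in fetch; the host and the snapshot field names are assumptions based on the documented response shape, and a real instance may also need authentication):

// fetch system diagnostics from the NiFi REST API and print a few stats
async function fetchNifiReport(): Promise<void> {
  const res = await fetch("https://nifi.example.com/nifi-api/system-diagnostics");
  const body = await res.json();

  // the aggregate numbers live under systemDiagnostics.aggregateSnapshot
  const snapshot = body.systemDiagnostics.aggregateSnapshot;
  console.log(`Heap used: ${snapshot.usedHeap} (${snapshot.heapUtilization})`);
  console.log(`Threads:   ${snapshot.totalThreads}`);
}

fetchNifiReport();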

Mule 4 batch - how to send On Complete phase response to HTTP Listener?

I have a common scenario, but I am not able to figure out the solution with Mule 4 batch. In my flow I have an HTTP Listener which invokes the flow, then I call a DB select, and then I use a batch job to upsert data into Salesforce.
By default, batch creates stats in the On Complete phase, and my requirement is to send exactly those stats back as the response, but I am not able to access them outside of the batch. I tried vars, attributes, and even VM publish (in which case the response does not go back to the listener).
Can someone please guide me on this? I'm attaching the flow design for reference.
Thanks.
You can't. Batch works in the background; your flow will be long gone before your batch is done.
My suggestion is that you (1) store the reporting data somewhere and (2) get to the data using another request.
Here's the documentation: https://docs.mulesoft.com/mule-runtime/4.2/batch-processing-concept
You can store the payload of the On Complete phase in an Object Store and retrieve it later to build your report. The payload in the On Complete phase is a Java object (a BatchJobResult) with the properties you need for your report (e.g. loadedRecords, failedRecords, etc.).
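A minimal sketch of that approach (the Object Store name and key are assumptions, and a global <os:object-store name="statsStore"/> element is presumed to be defined):

<batch:on-complete>
    <!-- keep only the serializable stats needed for the report -->
    <set-payload value="#[{loaded: payload.loadedRecords, failed: payload.failedRecords, total: payload.totalRecords}]" />
    <os:store key="lastRunStats" objectStore="statsStore" />
</batch:on-complete>

A separate flow behind another HTTP Listener can then read the stats back with <os:retrieve key="lastRunStats" objectStore="statsStore" /> and build the report response from them.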

Is it possible to write a plugin for Glimpse's existing SQL tab?

I'm trying to log my SQL queries, and the currently available extensions don't support our in-house SQL library. I have written a custom plugin which logs what I want, but it has limited functionality and doesn't integrate with the existing SQL tab.
Currently, I'm logging to my custom plugin using a single helper method inside my DAL's base class. This method takes the SqlCommand and duration in order to show data on my custom tab:
// simplified example:
Stopwatch sw = Stopwatch.StartNew(); // time the round trip ourselves
sqlCommand.Connection = sqlConnection;
sqlConnection.Open();
object result = sqlCommand.ExecuteScalar();
sqlConnection.Close();
sw.Stop();
long duration = sw.ElapsedMilliseconds;
LogSqlActivity(sqlCommand, null, duration); // hand command + timing to the custom tab
This works well on my 'custom' tab but unfortunately means I don't get metrics shown on Glimpse's HUD.
Is there a way I can provide Glimpse directly with the info it needs (in terms of method names, and parameters) so it displays natively on the SQL tab?
The following advice is based on the fact that you can't use DbProviderFactory and you can't use a proxied SqlCommand, etc.
The data that appears in the "out-of-the-box" SQL tab is based on messages of given types being published through our internal Message Broker (see below for information on this). Because of the above limitations in your case, to get things lighting up correctly (i.e. your data showing up in the HUD and the SQL tab), you will need to simulate the work that we do under the covers when we publish these messages. This shouldn't be that difficult and, once done, should just work moving forward.
If you have a look at the various proxies we have here, you will be able to see what messages we publish in what circumstances. Here are some highlights:
DbCommand:
Log command start - here
Log command error - here
Log command end - here
DbConnection:
Log connection open - here
Log connection closed - here
DbTransaction:
Log Started - here
Log committed - here
Log rollback - here
Other:
Command row count here - Glimpse calculates this at the DbDataReader level, but you could do it elsewhere as well
Now that you have an idea of what messages we are expecting and how we generate them, as long as you pass in the right data when you publish those messages, everything should just light up. If you are interested, here is the code that looks for the messages that you will be publishing.
Message Broker: If you look at the GlimpseConfiguration here you will see how to access the Broker. This can be done statically if needed (as we do here). From there you can publish the messages you need.
Helpers: For generating some of the above messages, you can use the helpers inside the Support class here. I would have shifted all the code for publishing the actual messages to this class, but I didn't think there would be too many people doing what you are doing.
Update 1
Starting point: With the above approach you shouldn't need to write your own plugin. You should just be able to access the broker via GlimpseConfiguration.GetConfiguredMessageBroker() (make sure you check whether it's null, which it is if Glimpse is turned off, etc.) and publish your messages.
I would imagine that you would put the inspection that leverages the broker and publishes the messages wherever you have knowledge of the information that needs to be collected (i.e. inside your custom lib). Normally this would require references to Glimpse inside your lib (which you may not want), so to protect against this, from your lib you would call a proxy (which could be another VS project) that has the Glimpse dependency. Hence your ADO lib only has references to your own code.
To get your toes wet, try just publishing a couple of fake connection and command messages. Assuming the broker you get from GlimpseConfiguration.GetConfiguredMessageBroker() isn't null, these should just show up. Then you can work towards getting real data into it from your lib.
Update 2
Obsolete Broker Access
It's marked as obsolete because it's going to change in v2. You will still be able to do what you need to do, but the way of accessing the broker has changed. For what you currently need to do, this is fine.
Sometimes null
As you have found, this is really dependent on where in the page lifecycle you currently are. To get around this, I would probably change my original recommendation a little.
In the code where you are currently creating messages and pushing them to the message bus, try putting them into HttpContext.Current.Items instead. If you haven't used it before, this is a store which ASP.NET provides out of the box and which lasts the lifetime of a given request. You could keep a list in there: still create the message objects as you are doing, but put them into that list instead of pushing them through the broker.
Then, create an HttpModule (it's really simple to do) which taps into the PostLogRequest event. Within this handler, you would pull the list out of the context, iterate through it, and push each message into the message broker (accessing it the same way you have been).
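A minimal sketch of that module (the item key, list type, and the Glimpse namespace are assumptions; it also presumes the IIS integrated pipeline, where PostLogRequest is available, and a generic Publish method on the broker):

using System.Collections.Generic;
using System.Web;
using Glimpse.Core.Framework; // assumed namespace for GlimpseConfiguration

public class GlimpseFlushModule : IHttpModule
{
    public void Init(HttpApplication application)
    {
        // PostLogRequest fires near the end of the request lifecycle,
        // by which point the broker should be available.
        application.PostLogRequest += (sender, e) =>
        {
            var pending = HttpContext.Current.Items["GlimpsePendingMessages"]
                as List<object>; // hypothetical key your DAL writes to
            var broker = GlimpseConfiguration.GetConfiguredMessageBroker();
            if (pending == null || broker == null) return;

            foreach (var message in pending)
            {
                // dynamic dispatch so the broker's generic Publish<T>
                // resolves against each message's runtime type
                broker.Publish((dynamic)message);
            }
        };
    }

    public void Dispose() { }
}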