How to create alerts using CloudWatch for Spring Boot metrics? - amazon-cloudwatch

I have a Spring Boot microservice (Spring Cloud Gateway) deployed on AWS EKS. It sends metrics to CloudWatch, but I can't figure out how to define an alert on an aggregate metric.
This is a sample of the metrics:
I added a custom tag ("instanceId") which provides the instance identifier of the pod:
@Bean
public MeterRegistryCustomizer<CloudWatchMeterRegistry> cloudWatchMetricsCommonTags() {
    return (registry) -> registry.config().commonTags("instanceId", EC2MetadataUtils.getInstanceId());
}
This query provides the slowest request:
SELECT MAX("spring.cloud.gateway.requests.max") FROM "tas-gpp-gateway"
But it seems to be impossible to define an alert on it. I found this issue, but I can't provide all the possible metric dimensions, because outcome, httpMethod and instanceId take many different values. In fact, according to the documentation, you should:
provide a set of metrics
define a math expression using the metrics
create an alert based on the math expression
For example:
But this is not an option, because I can't list all the possible dimensions of the metric spring.cloud.gateway.requests.max.
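For what it's worth, newer CloudWatch releases support alarms defined directly on a Metrics Insights query, which sidesteps listing dimensions entirely. A hedged sketch using the AWS CLI (the alarm name, threshold and period are made-up examples; check that Metrics Insights alarms are available in your region):

```shell
# Create an alarm on the Metrics Insights query itself, so no
# per-dimension metric list is needed. Threshold/period are examples.
aws cloudwatch put-metric-alarm \
  --alarm-name slowest-gateway-request \
  --comparison-operator GreaterThanThreshold \
  --threshold 5 \
  --evaluation-periods 1 \
  --metrics '[{
    "Id": "q1",
    "Expression": "SELECT MAX(\"spring.cloud.gateway.requests.max\") FROM \"tas-gpp-gateway\"",
    "Period": 300,
    "ReturnData": true
  }]'
```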

Related

Update Micronaut metric without a controller or a job

I'm working on a Micronaut project, using Kotlin, and I want to use Micrometer to create a metric for the number of entries I have in the database.
My question is: must the update of this metric rely on a cron job? What I want is that when the metric's endpoint is requested (e.g. /metrics/name) it automatically updates the metric and returns the response. However, I don't want a dedicated route in a controller for it. Is there any way of doing this?
Regards
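In case it helps frame the question: Micrometer gauges are pull-based, meaning the value function you register is invoked whenever the metric is read (such as on a /metrics scrape), so no cron job is required. A minimal plain-Java sketch of that pull model (the Gauge type below is a stand-in for illustration, not Micrometer's class):

```java
import java.util.ArrayList;
import java.util.List;
import java.util.function.Supplier;

// Illustration of the pull model Micrometer gauges use: the value
// function runs every time the metric is read (e.g. on a /metrics
// scrape), so no job is needed to keep it fresh.
public class LazyGaugeDemo {
    record Gauge(String name, Supplier<Double> value) {
        double read() { return value.get(); }   // evaluated on each scrape
    }

    public static void main(String[] args) {
        List<String> rows = new ArrayList<>();  // stands in for the DB table
        Gauge entries = new Gauge("db.entries", () -> (double) rows.size());

        System.out.println(entries.read());     // 0.0 -- table is empty
        rows.add("first row");
        System.out.println(entries.read());     // 1.0 -- fresh, no job involved
    }
}
```

With a real Micrometer registry, the equivalent would be registering a gauge whose value function runs a count query against the repository.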

How to set log4j2 log levels and categories in DataWeave 2.x?

When using the log() function in DataWeave I have a few questions:
How can I set a log level and log category so my logs are handled by log4j2 the same way as log messages from the Logger components in the Mule flow?
How can I suppress logging the expression? If the expression result is very large (what if it is streaming data?) I might want to log only the first argument to log() and skip evaluating the actual DW expression.
There is no way to set the logging level with the log() function in DataWeave. As an alternative you could implement a custom logging function, in a custom module, that allows setting the level.
You could use the same custom function to implement some logic; however, there is the generic problem of determining whether a payload is big without fully consuming it. In any case, DataWeave logging is meant to be used as a debugging tool and should not be used in production or for big payloads. The best practice is to avoid logging at all unless you need to debug an issue, and then to remove the logging afterwards.

Using AWS Elastic Inference without making changes to the client code

I have an endpoint deployed in SageMaker with a Tensorflow model, and I make calls to it using the Scala SDK like this:
val runtime = AmazonSageMakerRuntimeClientBuilder
  .standard()
  .withCredentials(credentialsProvider)
  .build()
...
val invokeEndpointResult = runtime.invokeEndpoint(request)
Can I use SageMaker's Elastic Inference with this code as it is and gain the performance enhancement of EI?
I have tried running an endpoint with a configuration of 8 ml.m5d.xlarge instances versus a configuration of 8 ml.m5d.xlarge instances with an added EI accelerator of ml.eia2.xlarge, but looking at the CloudWatch metrics I get the same number of invocations per minute, and the total run time (on the same input) is the same.
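For context on how EI attaches: the accelerator is declared in the SageMaker endpoint configuration, not in the client, so the InvokeEndpoint call itself stays unchanged. A hedged sketch (the names below are placeholders; note that the model must also be deployed with an EI-enabled TensorFlow Serving container for the accelerator to actually be used):

```shell
# The accelerator is part of the endpoint configuration; the client
# code that calls InvokeEndpoint does not change.
aws sagemaker create-endpoint-config \
  --endpoint-config-name my-config-with-ei \
  --production-variants '[{
    "VariantName": "AllTraffic",
    "ModelName": "my-model",
    "InstanceType": "ml.m5d.xlarge",
    "InitialInstanceCount": 8,
    "AcceleratorType": "ml.eia2.xlarge"
  }]'
```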

How To: Save JMeterVariable values to influxdb with the sampleresults

I'd like to store some JMeterVariables together with the sampleResults to an influxdb using a BackendListenerClient for influxdb (I am using package rocks.nt.apm.jmeter to get the raw results).
My current test logs in as a random customer, requests some random entities and logs out. Most of the results are within a range; I'd like to zoom in on certain extreme sample results and find out which customer / requested entity they belong to. We have seen in the past that we can find performance issues with specific configurations this way.
I store customer and entity ID in a variable. My issue is that the JMeterVariables are not accessible from the BackendListenerClient. I looked at the sample_variables property, but this property will store the variables in the sampleEvent, which is not accessible in the BackendListener.
I could use the threadName or sample label to store the vars, but I saw the CSVWriter can actually write the var values from the event, which is a much nicer solution.
Looking forward on your thoughts,
Best regards, Spud
You got it right - the Backend Listener is not customizable in terms of fine-shaping the data you're sending to InfluxDB.
Alas.
However, there's a Swiss Army knife always available in JMeter: the JSR223 components.
The JSR223 Listener, in your case.
The InfluxDB line protocol is as simple as can be, and HTTP/REST libraries are in abundance (Apache HttpClient is already included with standard JMeter, to my recollection - no additional jars needed): just pick it all up, form your time series as you like, toss it at your InfluxDB REST endpoint, and the job's done.

Spring sleuth Runtime Sampling and Tracing Decision

I am trying to integrate my application with Spring Sleuth.
I was able to do a successful integration and I can see spans getting exported to Zipkin.
I am exporting to Zipkin over HTTP.
Spring boot version - 1.5.10.RELEASE
Sleuth - 1.3.2.RELEASE
Cloud- Edgware.SR2
But now I need to do this in a more controlled way, as the application is already running in production and people are scared about the overhead which Sleuth can add through @NewSpan on the methods.
I need to decide at runtime whether the trace should be added or not (not talking about exporting). For example, for actuator endpoints the trace is not added at all; I assume this has no overhead on the application. Putting X-B3-Sampled = 0 stops exporting but still adds tracing information. I want something like the skipPattern property, but at runtime.
Always export the trace if the service exceeds a certain threshold or in case of an exception.
If I am not exporting spans to Zipkin, will there be any overhead from the tracing information?
What about this solution? I guess this will work for sampling specific requests at runtime.
@Bean
public Sampler customSampler() {
    return new Sampler() {
        @Override
        public boolean isSampled(Span span) {
            logger.info("Inside sampling " + span.getTraceId());
            HttpServletRequest httpServletRequest = HttpUtils.getRequest();
            if (httpServletRequest != null && httpServletRequest.getServletPath().startsWith("/test")) {
                return true;
            } else {
                return false;
            }
        }
    };
}
people are scared about the overhead which Sleuth can add through @NewSpan on the methods.
Do they have any information about the overhead? Have they turned it on and seen the application start to lag significantly? What are they scared of? Is this a high-frequency trading application where every microsecond counts?
I need to decide at runtime whether the trace should be added or not (not talking about exporting). For example, for actuator endpoints the trace is not added at all; I assume this has no overhead on the application. Putting X-B3-Sampled = 0 stops exporting but still adds tracing information. I want something like the skipPattern property, but at runtime.
I don't think that's possible. The instrumentation is set up by adding interceptors, aspects etc. They are started upon application initialization.
Always export the trace if service exceeds a certain threshold or in case of Exception.
With the new Brave-based instrumentation (Sleuth 2.0.0) you will be able to do this in a much easier way. Prior to that version you would have to implement your own SpanReporter that checks the tags (whether they contain an error tag) and, if that's the case, sends the span to Zipkin, and otherwise does not.
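The filtering idea can be sketched in plain Java. The Span type below is a stand-in for illustration, not Sleuth's class; in a real application you would implement Sleuth's SpanReporter and delegate to the Zipkin reporter only for matching spans.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Map;

// Sketch of "only export spans that carry an error tag".
public class ErrorOnlyReporterDemo {
    record Span(String name, Map<String, String> tags) {}

    static List<Span> exported = new ArrayList<>();

    static void report(Span span) {
        if (span.tags().containsKey("error")) {   // forward only failures
            exported.add(span);                   // i.e. send to Zipkin
        }
    }

    public static void main(String[] args) {
        report(new Span("ok-call", Map.of("http.status_code", "200")));
        report(new Span("failed-call", Map.of("error", "timeout")));
        System.out.println(exported.size());      // only the error span made it
        System.out.println(exported.get(0).name());
    }
}
```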
If I am not exporting Spans to zipkin then will there be any overhead by tracing information?
Yes, there is, because the tracing data still has to be created and passed along. However, the overhead is small.