Per Spring Cloud Sleuth's span sampling, we can control the sampling rate.
Samplers do not stop span (correlation) IDs from being generated, but
they do prevent the tags and events from being attached and exported. By
default you get a strategy that continues to trace if a span is
already active, but new spans are always marked as non-exportable.
To reduce a performance bottleneck, can we disable span ID generation in a deployed instance at runtime, without restarting the application?
In Edgware, the sampler bean uses @RefreshScope, so you can change the sampling percentage at runtime. However, I don't know if that's exactly what you're asking for. Most likely you're asking about disabling Sleuth entirely at runtime. That's unfortunately not possible by default. What you can do, however, is register a custom Random bean that is @RefreshScope'd and that will generate a fixed ID when required.
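To illustrate what the refresh-scoped percentage sampler buys you, here is a minimal Python sketch (the class and field names are hypothetical, not Sleuth's API): a sampler whose rate can be mutated at runtime, so changing it from 1.0 to 0.0 effectively stops spans from being exported without a restart.

```python
import random

class PercentageSampler:
    """Illustrative probabilistic sampler: keeps a span with probability
    `rate`. The mutable `rate` mimics what Spring's @RefreshScope achieves
    when the sampling-percentage property is refreshed at runtime."""

    def __init__(self, rate: float):
        self.rate = rate  # can be changed at runtime, no restart needed

    def is_sampled(self) -> bool:
        # rate == 0.0 exports nothing; rate == 1.0 exports every span
        return random.random() < self.rate

sampler = PercentageSampler(rate=1.0)
assert sampler.is_sampled()      # always exported at 100%
sampler.rate = 0.0               # "refresh" the property at runtime
assert not sampler.is_sampled()  # never exported at 0%
```

Note that, matching the quoted documentation, this only controls the export decision; the IDs themselves would still be generated.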
Related
Is there an API point or whatever other queryable source where I can get the total queued data?:
Setting up a little dataflow in NiFi to monitor NiFi itself sounds sketchy, but if it's a common practice, so be it. Anyway, I cannot find the API endpoint to get that total.
Note: I have a single NiFi instance; I don't have, nor will I implement, S2S reporting, since I am on a single-instance, single-node NiFi setup.
The Site-to-Site Reporting tasks were developed because they work for clustered, standalone, and multiple instances thereof. You'd just need to put an Input Port on your canvas and have the reporting task send to that.
An alternative as of NiFi 1.10.0 (via NIFI-6780) is to get the nifi-sql-reporting-nar and use QueryNiFiReportingTask; you can use a SQL query to get the metrics you want. It uses a RecordSinkService controller service to determine how to send the results, and there are various implementations such as Site-to-Site, Kafka, Database, etc. The NAR is not included in the standard NiFi distribution due to size constraints, but you can get the latest version (1.11.4) here, or change the URL to match your NiFi version.
@jonayreyes You can find information about how to get queue data from the NiFi API here:
NiFi Rest API - FlowFile Count Monitoring
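For a single standalone instance, the totals can also be read directly over the REST API. A hedged sketch in Python: the `/nifi-api/flow/status` endpoint and its `controllerStatus.flowFilesQueued` / `bytesQueued` fields are what recent NiFi versions expose, but verify them against your version's REST API docs.

```python
import json
from urllib.request import urlopen

def total_queued(status_json):
    """Extract (flowfiles_queued, bytes_queued) from the body of
    GET /nifi-api/flow/status. Field names per recent NiFi versions;
    check your version's API documentation."""
    status = json.loads(status_json)["controllerStatus"]
    return status["flowFilesQueued"], status["bytesQueued"]

# Usage against a live instance (host/port are placeholders):
# with urlopen("http://localhost:8080/nifi-api/flow/status") as resp:
#     files, size = total_queued(resp.read())
```

Polling this on a schedule avoids needing any reporting task or S2S setup at all.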
I am using Spring Cloud Finchley.RC2 with Spring Boot 2, along with Sleuth and Zipkin.
I have a facade layer which uses Project Reactor. The facade calls services in parallel, and each service stores some tracing info in RabbitMQ.
The issue is that I am seeing spans in Zipkin like:
facade.async
service.publish (because of MQ)
How can I stop such traces from being captured?
Can you follow the guidelines described here https://stackoverflow.com/help/how-to-ask and, for the next question you ask, provide more details? For example, I have no idea how exactly you use Sleuth. Anyway, I'll try to answer...
You can create a SpanAdjuster bean that will analyze the span information (e.g. span tags) and, based on that information, change the sampling decision so that the span is not sent to Zipkin.
Another option is to wrap the default span reporter in a similar logic.
Yet another option is to verify what kind of thread is creating this span and toggle tracing off for it (assuming it's a @Scheduled method) - https://cloud.spring.io/spring-cloud-static/Finchley.RC2/single/spring-cloud.html#__literal_scheduled_literal_annotated_methods
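The "wrap the default span reporter" option can be sketched as follows. This is plain Python, not Sleuth's actual Reporter<Span> interface; the span names and the dict-shaped spans are illustrative only:

```python
def filtering_reporter(delegate, dropped_names):
    """Wrap a span reporter so that spans whose name matches one of
    `dropped_names` are never forwarded (i.e. never reach Zipkin).
    In Sleuth, `delegate` would be the default Reporter<Span>."""
    def report(span):
        if span["name"] in dropped_names:
            return  # silently discard instead of reporting
        delegate(span)
    return report

sent = []  # stands in for the real Zipkin reporter
report = filtering_reporter(sent.append, {"facade.async", "service.publish"})
report({"name": "facade.async"})  # dropped
report({"name": "http.get"})      # forwarded
```

The same shape applies to a SpanAdjuster: inspect the span, and flip the sampling/export decision based on what you find.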
I'm using NServiceBus as an abstraction layer for Azure Service Bus (in case we move away from Azure). I find that when working with multiple subscribers (who subscribe to the same events) the number of duplicate messages increases. I know Azure Service Bus (ASB) has a way of detecting these duplicates, and I can see that the feature is configurable through NServiceBus (according to the documentation). However, I can only find a sample of achieving duplicate detection by means of a configuration section. What I require is a sample of how to achieve this in code.
Thanks
Suraj
You can specify configuration using a code-based approach as well. NServiceBus has two contracts that can help with that: IConfigurationSource and IProvideConfiguration<T>. Here's an example of how you can take a configuration file section (UnicastBusConfig) and specify values via code.
Specifically to what you've asked, implementing IProvideConfiguration<AzureServiceBusQueueConfig> will allow you to configure the ASB transport, specifying duplicate detection and such.
The observation about the number of duplicates increasing as you add subscribers feels like a symptom, not the problem. That is probably a different question, not related to the configuration. That said, I'd look into it before enabling the native de-duplication. While you can specify RequiresDuplicateDetection and DuplicateDetectionHistoryTimeWindow, be aware that ASB performs duplicate detection on the ID property only. Also, it is better to build your handlers to be idempotent, rather than relying on the native de-duplication.
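The idempotent-handler suggestion can be sketched as follows. This is language-agnostic logic expressed in Python, not NServiceBus code; in a real handler the seen-ID store would be durable (e.g. a table updated in the same transaction as the business change), not an in-memory set:

```python
class IdempotentHandler:
    """Sketch of an idempotent message handler: remember processed
    message IDs so a redelivered duplicate has no additional effect."""

    def __init__(self):
        self.seen = set()      # durable store in a real system
        self.processed = []    # stands in for the business side effect

    def handle(self, message_id, payload):
        if message_id in self.seen:
            return  # duplicate delivery: ignore
        self.seen.add(message_id)
        self.processed.append(payload)

h = IdempotentHandler()
h.handle("m-1", "create order")
h.handle("m-1", "create order")  # duplicate delivery, ignored
```

With handlers built this way, duplicate deliveries (whatever their cause) become harmless, independently of whether ASB's native detection is enabled.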
I want to know if there is a way to capture the bulletin messages (basically errors) that appear on the NiFi UI and store them in some attribute/file so that they can be looked at later. The screen gets refreshed every 5 minutes, and if there is a failure in any of the processors I would want to know the reason for it.
I am not particularly talking about the logging part here.
As you know, the bulletins reflect messages that are already logged, so all this content is already stored in {NIFI_HOME}/logs/nifi-app.log. However, if you wanted to consume the bulletins directly, you have a couple of different options.
You could consume the bulletins from the REST API. There are a couple endpoints for accessing the bulletins.
http[s]://{host}:{port}/nifi-api/controller/process-groups/{process-group-id}/status?recursive=true
This request will get the status (including bulletins) of all components under the specified Process Group. You can use the alias 'root' for the root-level Process Group. The recursive flag indicates whether to return just the children of that Process Group or all descendant components.
http[s]://{host}:{port}/nifi-api/controller/status
This request will get the status (including bulletins) of the Controller level components. This includes any reported bulletins from Controller Services, Reporting Tasks, and the NiFi Framework itself (clustering messages, etc).
http[s]://{host}:{port}/nifi-api/controller/bulletin-board?limit=n&sourceId={id}&message={str}
This request will access all bulletins, and supports filtering by component and by message, as well as limiting the number of bulletins returned.
You could also create a Reporting Task implementation, which has access to the bulletin repository. Reporting Tasks are an extension point meant to report details from the NiFi instance. This would require some Java code, but would allow you to report the bulletins however you like. Here is an example that reports metrics to Ambari [1].
[1] https://github.com/apache/nifi/blob/master/nifi-nar-bundles/nifi-ambari-bundle/nifi-ambari-reporting-task/src/main/java/org/apache/nifi/reporting/ambari/AmbariReportingTask.java
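If you go the REST route, a small polling script can persist bulletins beyond the UI's refresh window. A sketch in Python; the response field names (bulletinBoard, bulletins, sourceName, message) are assumptions to verify against your NiFi version's REST API documentation:

```python
import json
from urllib.request import urlopen

def extract_bulletins(body):
    """Pull (source, message) pairs out of a bulletin-board response.
    Field names here are assumptions; check your NiFi version's API docs."""
    board = json.loads(body)["bulletinBoard"]
    return [(b["bulletin"]["sourceName"], b["bulletin"]["message"])
            for b in board["bulletins"]]

# Poll periodically and append to a file so bulletins outlive the
# UI's 5-minute window (host/port are placeholders):
# with urlopen("http://localhost:8080/nifi-api/controller/bulletin-board?limit=100") as r:
#     for source, msg in extract_bulletins(r.read()):
#         print(source, msg)
```

The `limit` and `sourceId` query parameters from the endpoint above can be used to narrow the poll to the processors you care about.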
My JMeter load test includes a setUp thread group that makes 190 http requests to my test environment prior to running my test thread groups. This results in a spike of requests at the beginning of the test that appear to be included in the results of the load test. The results for average throughput are higher than they would be without the setUp thread group traffic. Is there any way to exclude the setUp thread group from the test results?
There are several options:
The Synthesis Report plugin allows you to set telemetry start and end offset values, so you will be able to filter out unneeded data.
You can add a Listener of your choice to the 2nd Thread Group - it will record only samplers in its scope.
You can run your JMeter tests via the Taurus tool - the report comes in a fancy web interface where you can zoom in on the part of the report you need.
Which Listener/Reporter do you use to check your results?
Some reporters can show you not only average throughput but also split it by request or by thread. So you can rename your setUp requests using one label and just exclude those values from your total. I don't know how to do it using built-in methods, but you can easily export your result table data as a file, or manipulate it using scripting.
If you just want to prevent the spike on your graph, use a Constant Throughput Timer for your setUp thread group. It will slow down request sending (by increasing the delays between requests) to meet the defined throughput.
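If the setUp traffic does end up in the results file, you can also strip it out afterwards with a small script. A sketch that filters a JTL (CSV) results file by thread name; the column names follow JMeter's default CSV header, and the "setUp" prefix is an assumption based on the default naming of setUp Thread Groups, so adjust both to your setup:

```python
import csv
import io

def without_setup(jtl_csv, prefix="setUp"):
    """Drop samples recorded by a setUp Thread Group from a JTL (CSV)
    results file, keyed on the threadName column."""
    rows = csv.DictReader(io.StringIO(jtl_csv))
    return [r for r in rows if not r["threadName"].startswith(prefix)]

# Minimal JTL excerpt (real files have more columns):
sample = """timeStamp,elapsed,label,threadName
1,120,login,setUp Thread Group 1-1
2,80,search,Thread Group 1-1
"""
kept = without_setup(sample)  # only the 'search' sample remains
```

Recomputing average throughput over the filtered rows then gives you figures free of the startup spike.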
If your recording captures extra HTTP requests which are not required, click on "Add Suggested Excludes", or exclude them by defining patterns yourself, so that they will not appear in your result analysis.