Get start time and end time of a Span when using Spring Cloud Sleuth instrumentation

We are using Spring Cloud Sleuth to instrument microservices that use Pub/Sub (via the Spring Cloud Stream binder). Currently Pub/Sub traces are not propagated automatically to GCP Cloud Trace, so we are writing the traces with the Google Cloud Trace service client, which internally calls the Trace v2 API. For each span we also have to set the start time and end time.
In order to handle a retry scenario, I need to get the start time of a span. Is there a provision to get the start time and end time of a span? The Tracer object does not have a method that returns these values. Any help will be appreciated.

No, you can't get them from a Span; they are available only once the span has finished. If you're using Brave as the Tracer implementation, you can create your own SpanHandler and access that information from the MutableSpan instance.
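For illustration, a minimal sketch of such a handler, assuming the Brave-based Sleuth setup picks up SpanHandler beans automatically (the class name and the trace-writer call are made up for this example):

    import brave.handler.MutableSpan;
    import brave.handler.SpanHandler;
    import brave.propagation.TraceContext;
    import org.springframework.stereotype.Component;

    @Component
    public class TimestampCapturingSpanHandler extends SpanHandler {

        @Override
        public boolean end(TraceContext context, MutableSpan span, Cause cause) {
            // Both timestamps are microseconds since the epoch and are only
            // populated once the span has finished.
            long startMicros = span.startTimestamp();
            long finishMicros = span.finishTimestamp();
            // Hand them to your Cloud Trace writer, keyed by trace/span id, e.g.:
            // traceWriter.record(context.traceIdString(), context.spanIdString(),
            //                    startMicros, finishMicros);
            return true; // keep the span so other handlers/reporters still see it
        }
    }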

Related

Apache Ignite's Continuous Queries event handler group & sequencing

We are trying to use the Continuous Query feature of Ignite, but we are facing an issue handling the events. Below is our problem statement:
We have defined a Continuous Query with a remote filter for a cache and shared the filter definition with the Thick Client.
We are running multiple replicas of the "Thin Client" in a Kubernetes cluster.
Now the problem is that each instance of the "Thin Client" running in the k8s cluster has registered the remote filter, and each instance receives the event and tries to process the data in parallel. This results in duplicate processing, or even data being overwritten in my store.
Is there any way to form a consumer group and ensure that only one instance of the "Thin Client" receives the notification and processes the data?
My Thick Client and Thin Clients are in .NET.
I couldn't find any details in the Ignite documentation:
https://ignite.apache.org/docs/latest/key-value-api/continuous-queries
Here each thin client is starting its own continuous query, and thereby, by design, each thin client gets its own copy of every event to consume. If you want to route an event to a specific client, you need to start only one continuous query and distribute its events to your app as you see fit.
Take a look at Ignite messaging to see whether it fits your use case.
Also check out the distributed Queue/Set, which have unique delivery guarantees; a sketch combining both ideas follows.
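A rough Java sketch of the "one query, distributed queue" combination (cache name, queue name, and types are placeholders; the Ignite.NET API mirrors these calls):

    import javax.cache.event.CacheEntryEvent;
    import org.apache.ignite.Ignite;
    import org.apache.ignite.IgniteCache;
    import org.apache.ignite.IgniteQueue;
    import org.apache.ignite.Ignition;
    import org.apache.ignite.cache.query.ContinuousQuery;
    import org.apache.ignite.configuration.CollectionConfiguration;

    public class SingleQueryFanOut {
        public static void main(String[] args) {
            Ignite ignite = Ignition.start();
            IgniteCache<Integer, String> cache = ignite.getOrCreateCache("myCache");

            // A distributed queue: each element is taken by exactly one consumer,
            // which gives you the "consumer group" semantics you are after.
            IgniteQueue<String> workQueue =
                ignite.queue("cache-events", 0 /* unbounded */, new CollectionConfiguration());

            // Start ONE continuous query (on a single designated node), not one per replica.
            ContinuousQuery<Integer, String> qry = new ContinuousQuery<>();
            qry.setLocalListener(events -> {
                for (CacheEntryEvent<? extends Integer, ? extends String> e : events)
                    workQueue.put(e.getValue());
            });
            cache.query(qry); // keep the returned cursor open for as long as you want events

            // Each replica then consumes from the queue instead of registering its own query:
            // String next = workQueue.take();
        }
    }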

How to get request duration from sleuth tracer

I have my setup with Sleuth (spring-cloud-starter-sleuth) and Zipkin (spring-cloud-sleuth-zipkin), which works perfectly fine. At the same time, I would like to log the duration of requests to ELK. I was trying to get this information from the current span like

    tracer.currentSpan().context()

but I don't see anything related to duration or start time. Any ideas how I can get the duration, or when the current span (request) was started?
You can't get it from the span nor from the context. If you're using Brave, you can register your own SpanHandler bean that handles finished spans; there you have access to a MutableSpan that can give you the duration.
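A minimal sketch of such a bean, assuming the Brave-based Sleuth setup registers SpanHandler beans (the logging is just an example of what you might ship to ELK):

    import brave.handler.MutableSpan;
    import brave.handler.SpanHandler;
    import brave.propagation.TraceContext;
    import org.slf4j.Logger;
    import org.slf4j.LoggerFactory;
    import org.springframework.context.annotation.Bean;
    import org.springframework.context.annotation.Configuration;

    @Configuration
    class DurationLoggingConfig {

        private static final Logger log = LoggerFactory.getLogger(DurationLoggingConfig.class);

        @Bean
        SpanHandler durationLoggingSpanHandler() {
            return new SpanHandler() {
                @Override
                public boolean end(TraceContext context, MutableSpan span, Cause cause) {
                    // Timestamps are microseconds since the epoch.
                    long durationMicros = span.finishTimestamp() - span.startTimestamp();
                    log.info("span={} durationMs={}", span.name(), durationMicros / 1000);
                    return true; // don't drop the span
                }
            };
        }
    }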

API or other queryable source for getting total NiFi queued data

Is there an API endpoint or some other queryable source where I can get the total queued data?
Setting up a little dataflow in NiFi to monitor NiFi itself sounds sketchy, but if it's common practice, so be it. Either way, I cannot find the API endpoint to get that total.
Note: I have a single NiFi instance. I don't have, nor will I implement, S2S reporting, since I am on a single-instance, single-node NiFi setup.
The Site-to-Site Reporting tasks were developed because they work for clustered, standalone, and multiple instances thereof. You'd just need to put an Input Port on your canvas and have the reporting task send to it.
An alternative as of NiFi 1.10.0 (via NIFI-6780) is to get the nifi-sql-reporting-nar and use QueryNiFiReportingTask; you can then use a SQL query to get exactly the metrics you want. It uses a RecordSinkService controller service to determine how to send the results, with various implementations such as Site-to-Site, Kafka, and Database. The NAR is not included in the standard NiFi distribution due to size constraints, but you can get the latest version (1.11.4) here, or change the URL to match your NiFi version.
#jonayreyes You can find information about how to get queue data from the NiFi API here:
NiFi Rest API - FlowFile Count Monitoring
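If you just need the cluster-wide totals from a single instance, a plain REST call may also do: as far as I recall, GET /nifi-api/flow/status returns a controllerStatus object with the total queued FlowFile count and bytes (the endpoint and field names below are from memory; verify against your NiFi version):

    import java.net.URI;
    import java.net.http.HttpClient;
    import java.net.http.HttpRequest;
    import java.net.http.HttpResponse;

    public class QueuedDataCheck {
        public static void main(String[] args) throws Exception {
            HttpClient client = HttpClient.newHttpClient();
            HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create("http://localhost:8080/nifi-api/flow/status"))
                .GET()
                .build();
            HttpResponse<String> response =
                client.send(request, HttpResponse.BodyHandlers.ofString());
            // The JSON body should contain controllerStatus.flowFilesQueued and
            // controllerStatus.bytesQueued (plus a human-readable "queued" string);
            // parse it with the JSON library of your choice.
            System.out.println(response.body());
        }
    }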

Creating multiple subs on same topic to implement load sharing (pub/sub)

I spent almost a day on the Google Pub/Sub documentation to create a small app. I am thinking of switching from RabbitMQ to Google Pub/Sub. Here is my question:
I have an app that pushes messages to a topic (T). I wanted to do load sharing via subscribers, so I created 3 subscribers to T. I kept the name of all 3 subs the same (S), so that I don't get the same message 3 times.
I have 2 issues:
Nowhere in the console do I see 3 subscribers to T; it shows only 1.
If I try to start all 3 subscriber instances at the same time, I get "A service error has occurred." The error disappears if I start them sequentially.
Lastly, is Google serious about Pub/Sub? Looking at the documentation and public participation, I am not sure whether I should switch to Google Pub/Sub.
Thanks,
In Pub/Sub, each subscription gets a copy of every message. So to load-balance message handling, you don't want 3 different subscriptions, but a single subscription that distributes messages to 3 workers.
If you are using pull delivery, simply create a single subscription (as a one-time action when you set up the system) and have each worker pull from that same subscription, as in the sketch below.
If you are using push delivery, have a single subscription pushing to a single endpoint that provides load balancing (e.g. push to an HTTP load balancer with multiple instances in a backend service).
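With the Java client, for example, each worker simply attaches a streaming-pull Subscriber to the one shared subscription (project and subscription ids are placeholders):

    import com.google.cloud.pubsub.v1.AckReplyConsumer;
    import com.google.cloud.pubsub.v1.MessageReceiver;
    import com.google.cloud.pubsub.v1.Subscriber;
    import com.google.pubsub.v1.ProjectSubscriptionName;
    import com.google.pubsub.v1.PubsubMessage;

    public class Worker {
        public static void main(String[] args) {
            // Every worker uses the SAME subscription S on topic T; Pub/Sub then
            // spreads the messages across the connected subscribers.
            ProjectSubscriptionName subscription =
                ProjectSubscriptionName.of("my-project", "S");

            MessageReceiver receiver = (PubsubMessage message, AckReplyConsumer consumer) -> {
                System.out.println("Received: " + message.getData().toStringUtf8());
                consumer.ack(); // ack so the message is not redelivered to another worker
            };

            Subscriber subscriber = Subscriber.newBuilder(subscription, receiver).build();
            subscriber.startAsync().awaitRunning();
            subscriber.awaitTerminated(); // block and keep pulling
        }
    }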
Google is serious about Pub/Sub. It is deeply integrated into many products (GCS, BigQuery, Dataflow, Stackdriver, Cloud Functions, etc.), and Google uses it internally.
As per the documentation on GCP (https://cloud.google.com/pubsub/architecture):
Load-balanced subscribers are possible, but all of them have to use the same subscription. I don't have a code sample or POC ready, but I'm working on one.

Disable Sleuth from storing some traces

I am using Spring Cloud Finchley.RC2 with Spring Boot 2, along with Sleuth and Zipkin.
I have a facade layer which uses Project Reactor. The facade calls services in parallel, and each service stores some tracing info in RabbitMQ.
The issue is that I am seeing some spans in Zipkin like
facade.async
service.publish (because of MQ)
How can I stop such traces from being captured?
Can you follow the guidelines described here https://stackoverflow.com/help/how-to-ask and ask your next question with more details? E.g. I have no idea how exactly you use Sleuth. Anyway, I'll try to answer...
You can create a SpanAdjuster bean that analyzes the span information (e.g. span tags) and, based on that information, changes the sampling decision so that the span is not sent to Zipkin.
Another option is to wrap the default span reporter in similar logic; see the sketch below.
Yet another option is to verify what kind of thread is creating this span and toggle it off (assuming that it's a @Scheduled method) - https://cloud.spring.io/spring-cloud-static/Finchley.RC2/single/spring-cloud.html#__literal_scheduled_literal_annotated_methods
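For the reporter-wrapping option, a rough sketch (the span-name matching and bean wiring are illustrative; adapt them to whatever identifies your unwanted spans and to your actual reporter setup):

    import org.springframework.context.annotation.Bean;
    import org.springframework.context.annotation.Configuration;
    import zipkin2.Span;
    import zipkin2.reporter.Reporter;

    @Configuration
    class FilteringReporterConfig {

        @Bean
        Reporter<Span> filteringReporter() {
            Reporter<Span> delegate = Reporter.CONSOLE; // swap in your real (e.g. async HTTP) reporter
            return span -> {
                String name = span.name();
                if (name != null && (name.contains("async") || name.contains("publish")))
                    return; // silently drop the spans you don't want in Zipkin
                delegate.report(span);
            };
        }
    }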