How to set up the Jaeger backend when services are running on different hosts? - jaeger

I'm following this tutorial: https://github.com/yurishkuro/opentracing-tutorial/tree/master/java/src/main/java/lesson03. What needs to be set so that services running on different hosts can send data to the same backend?

You have two options:
1. Run the Jaeger agent (or an OpenTelemetry Collector) on each host that runs your applications and let the agent forward trace data to a central Jaeger collector. The Jaeger client can continue emitting data via the UDP port in this case.
2. Configure the Jaeger client with an HTTP endpoint of the Jaeger collector.
For (2), you can pass this environment variable to your applications:
JAEGER_ENDPOINT=http://jaeger-collector:14268/api/traces
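If you prefer to configure this in code, here is a minimal sketch using the jaeger-client-java Configuration API from the tutorial (the collector hostname is an assumption; Configuration.fromEnv(serviceName) achieves the same thing by reading JAEGER_ENDPOINT from the environment):

import io.jaegertracing.Configuration;
import io.jaegertracing.Configuration.ReporterConfiguration;
import io.jaegertracing.Configuration.SamplerConfiguration;
import io.jaegertracing.Configuration.SenderConfiguration;
import io.opentracing.Tracer;

public final class TracerFactory {
    public static Tracer create(String serviceName) {
        // Report spans over HTTP directly to the collector instead of via UDP to a local agent.
        SenderConfiguration sender = new SenderConfiguration()
                .withEndpoint("http://jaeger-collector:14268/api/traces"); // assumed hostname
        return new Configuration(serviceName)
                .withSampler(new SamplerConfiguration().withType("const").withParam(1))
                .withReporter(new ReporterConfiguration().withSender(sender).withLogSpans(true))
                .getTracer();
    }
}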
Additional references:
https://www.jaegertracing.io/docs/latest/client-features/
https://github.com/jaegertracing/jaeger-client-java/tree/master/jaeger-core


EdgeX: Listen to IoT Messaging Bus from Redis Server

EdgeX uses Redis PubSub by default for its messaging bus (https://docs.edgexfoundry.org/2.3/microservices/application/Triggers/).
I have started the Redis server locally.
I have Core Data and/or Device Services running, which I believe are
also configured by default to use Redis Pub/Sub.
I have a Virtual Device Service that publishes data to the
edgex/events/# topic
(https://docs.edgexfoundry.org/2.3/microservices/device/virtual/Ch-VirtualDevice/).
Finally, I have configured my Application Service to subscribe to
the topic edgex/events/#, as shown in the example.
[Trigger.EdgexMessageBus]
Type = "redis"  # message bus type (i.e. "redis", "mqtt", or "zero" for ZeroMQ)
  [Trigger.EdgexMessageBus.SubscribeHost]
  Host = "localhost"
  Port = 6379
  Protocol = "redis"
  SubscribeTopics = "edgex/events/#"
  [Trigger.EdgexMessageBus.PublishHost]
  Host = "localhost"
  Port = 6379
  Protocol = "redis"
  PublishTopic = ""  # optional if publishing a response back to the MessageBus
The Application Service is able to receive all the messages sent to the topic.
However, when I go directly to the Redis server (using redis-cli) and run SUBSCRIBE edgex/events/# or any other variant (edgex/events, edgex), nothing appears. Even checking PUBSUB CHANNELS shows that there are no active channels.
I am assuming that since EdgeX is using my localhost Redis server (or any remote server, for that matter), I'd be able to connect to that Redis server directly, subscribe to the topic that EdgeX is publishing to, and see the same messages.
Am I missing anything?
Thanks!
The EdgeX implementation is using PSUBSCRIBE with wildcards; the only command that will give you visibility is PUBSUB NUMPAT. You will need to identify the correct pattern for what you are trying to subscribe to AND have your subscriber running before anything is published, as Redis Pub/Sub is fire-and-forget.
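For example (a sketch; it assumes the EdgeX Redis implementation translates the MQTT-style # wildcard into a Redis glob pattern), in redis-cli you would count pattern subscriptions and subscribe with a glob rather than the literal #, before the Virtual Device Service starts publishing:

PUBSUB NUMPAT
PSUBSCRIBE edgex/events/*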
Rather than going directly to Redis, I recommend using the EdgeX Application Services to subscribe and then either operating on the results directly or feeding them to an external service.

Apache Ignite server node health check

I am working on launching an Apache Ignite (v2.13.0) cluster in AWS. I am planning to use Amazon ECS for container management, running these container nodes on EC2 instances.
I am fronting these instances with an Application Load Balancer and using TcpDiscoveryAlbIpFinder from the Apache Ignite aws-ext module to find other nodes in the cluster. As part of setting up an ALB in AWS, you add a listener that routes traffic to registered healthy targets, represented by a target group. The nodes in the target group are tested periodically via a health check, which sends a request to a configured port and path and determines health from the returned status codes.
My question is whether there is an out-of-the-box path on an Apache Ignite server that I should use for health checks.
I looked for documentation online on how others have set this up but came up dry.
Cheers!
You can use the PROBE/VERSION REST commands to implement these checks.
Example usage: https://www.gridgain.com/docs/latest/installation-guide/kubernetes/amazon-eks-deployment
https://www.gridgain.com/docs/latest/developers-guide/restapi#probe
Most people use the REST API for health checks.
readinessProbe:
  with auth:    http://localhost:8080/ignite?cmd=probe&ignite.login=ignite&ignite.password=ignite
  without auth: http://localhost:8080/ignite?cmd=probe
livenessProbe:
  with auth:    http://localhost:8080/ignite?cmd=version&ignite.login=ignite&ignite.password=ignite
  without auth: http://localhost:8080/ignite?cmd=version
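For the ALB target group in the question, this maps to an HTTP health check against the REST port (8080 by default, assuming the ignite-rest-http module is on the classpath) with the path /ignite?cmd=probe. A quick manual check, as a sketch:

curl -i 'http://localhost:8080/ignite?cmd=probe'
# An HTTP 200 response indicates the node has started and is ready to accept requests.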

What is the X-Ray daemon?

When we are running a serverless app, say an Elastic Beanstalk app, a DynamoDB table, and an SNS topic, there will be three X-Ray daemons, one in each JVM, right?
If so, how could X-Ray trace a request from the beginning to the end?
Will the trace ID travel with the HTTP request header or something like that?
According to the docs, "The AWS X-Ray daemon is a software application that listens for traffic on UDP port 2000, gathers raw segment data, and relays it to the AWS X-Ray API. The daemon works in conjunction with the AWS X-Ray SDKs and must be running so that data sent by the SDKs can reach the X-Ray service."
It's important to note that the X-Ray SDK produces what's called a remote subsegment to mimic the result of the downstream call in a client-side manner. For a service like DynamoDB, this is what you would see. For something like SNS, the trace header information is propagated downstream through the HTTP headers. The X-Ray daemon is used to forward segments generated when a service receives an upstream request; DynamoDB doesn't do this yet, and SNS forwards the context through the trace header mentioned above.
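To the last question: yes, the trace context travels in an HTTP header. The tracing header has this shape (values here are illustrative, following the documented format):

X-Amzn-Trace-Id: Root=1-5759e988-bd862e3fe1be46a994272793;Parent=53995c3f42cd8ad8;Sampled=1

Root is the trace ID that follows the request end to end, Parent identifies the upstream segment, and Sampled carries the sampling decision.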
The daemon isn't part of the JVM; it's an external process that runs on an instance and forwards the trace data to the X-Ray service. Technically, it can run on a single instance, the same instance, or every instance.

Should I register pod or kubernete service to consul on kubernetes cluster

I have deployed Ocelot and Consul on the Kubernetes cluster. Ocelot acts as the API gateway, which distributes requests to the internal services, and Consul is in charge of service discovery and health checks. (BTW, I deployed Consul on the Kubernetes cluster following Consul's official documentation.)
My service (an ASP.NET Core Web API) is also deployed to the Kubernetes cluster with 3 replicas. I didn't create a Kubernetes Service object, as those pods will only be consumed by Ocelot, which is in the same cluster.
The architecture is something like below:
       ocelot
         |
       consul
        /  \
  webapi1   webapi2 ...
   (pod)     (pod)  ...
Also, IMO, Consul can de-register a pod (webapi) when the pod is dead, so I don't see any need to create a Kubernetes Service object.
Now my question: is it right to register each pod (webapi) with Consul when the pod starts up? Or should I create a Kubernetes Service object in front of those pods (webapi) and register the Service object with Consul?
A headless Service is the answer. The Kubernetes environment is dynamic in nature.
Regarding "de-register a pod (webapi) when the pod is dead": yes, and the docs explain why Pod IPs alone can't be relied on:
Kubernetes Pods are mortal. They are born and when they die, they are not resurrected. While each Pod gets its own IP address, even those IP addresses cannot be relied upon to be stable over time. A Kubernetes Service is an abstraction which defines a logical set of Pods and provides a stable IP.
That's why it is recommended to use a headless Service, which fits this situation, as mentioned in the first lines of the docs:
Sometimes you don't need or want load-balancing and a single Service IP. In this case, you can create "headless" Services by specifying "None" for the cluster IP (.spec.clusterIP).
A headless Service doesn't get a ClusterIP. If you do an nslookup on the headless Service, it resolves to the IPs of all Pods behind it, and K8s takes care of adding/removing Pod IPs as Pods come and go. And I believe you can register/provide this headless Service name in Consul.
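A minimal manifest sketch of such a headless Service (the name, label selector, and port are assumptions matching the webapi pods in the question):

apiVersion: v1
kind: Service
metadata:
  name: webapi            # assumed name; this is what you would register in Consul
spec:
  clusterIP: None         # "None" is what makes the Service headless
  selector:
    app: webapi           # assumed label on the webapi pods
  ports:
    - port: 80            # assumed container port of the ASP.NET Core webapi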
Please refer to this blog for a detailed walkthrough.
UPDATE 1:
This YouTube video may give you some idea. (Even I have to watch it!)

Mule HA Cluster - Application configuration issue

We are working on a Mule HA cluster PoC with two separate server nodes, and we were able to create the cluster. We developed a small dummy application with an HTTP endpoint and a reliability-pattern implementation, which loops for a period and prints a value. When we deploy the application to the Mule HA cluster, it deploys successfully and an application log file is generated on both servers, but it runs on only one server. In the application we can point the HTTP endpoint at only one server IP. Could anyone please clarify the following queries?
1. In our case, why is the application running on only one server (whichever server the IP points to is the one that executes)?
2. Will the Mule HA cluster create a virtual IP?
3. If not, which IP do we need to configure in the application for HTTP endpoints?
4. Do we need a load balancer for HTTP-based endpoint requests? If so, which IP should be configured for the HTTP endpoint in the application, given that we don't have a virtual IP for the Mule HA cluster?
Really appreciate any help on this.
Environment: Mule EE ESB v 3.4.2 & Private cloud.
1) You are seeing one server processing requests because you are sending them to the same server each time.
2) Mule HA will not create a virtual IP.
3/4) You need to place a load balancer in front of the Mule nodes in order to distribute the load when using HTTP inbound endpoints. You do not need to decide which IP to place in the HTTP connector within the application; the load balancer will route each request to one of the nodes.
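In practice that means the same application binds its HTTP endpoint on every node and the load balancer targets each node's own IP. A sketch using the Mule 3 HTTP transport (the port and path are assumptions):

<flow name="httpTestFlow">
    <!-- Binding to 0.0.0.0 lets the identical app listen on whichever node it is deployed to;
         the load balancer then targets each node's IP on port 8081. -->
    <http:inbound-endpoint exchange-pattern="request-response"
                           host="0.0.0.0" port="8081" path="test"/>
    <logger level="INFO" message="Request handled by #[server.host]"/>
</flow>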
Creating a Mule cluster will just allow your Mule applications to share information through its shared memory (VM transport and Object Store) and make the polling endpoints poll from only a single node. In the case of HTTP, the application will listen on each of the nodes, but you need to put a load balancer in front of your Mule nodes to distribute load. I recommend you read the High Availability documentation. But the more important question is: why do you need to create a cluster? You can have two separate Mule servers with your application deployed and have a load balancer send requests to them.