Cannot see the trace data in zipkin - instrumentation

I'm new to Zipkin and the Brave API for distributed tracing. I've set up a Zipkin server on my localhost listening on port 9411. I've run the function below, but no trace data shows up in my Zipkin server. Could someone point out what I'm missing?
public static void main(String[] args) {
    Sender sender = OkHttpSender.create("http://localhost:9411/api/v1/spans");
    Reporter reporter = AsyncReporter.builder(sender).build();

    // Now, create a tracer with the service name you want to see in Zipkin.
    Tracer tracer = Tracer.newBuilder()
            .localServiceName("my-service")
            .reporter(reporter)
            .build();

    Span twoPhase = tracer.newTrace().name("twoPhase").start();
    try {
        Span prepare = tracer.newChild(twoPhase.context()).name("prepare").start();
        try {
            System.out.print("prepare");
        } finally {
            prepare.finish();
        }
        Span commit = tracer.newChild(twoPhase.context()).name("commit").start();
        try {
            System.out.print("commit");
        } finally {
            commit.finish();
        }
    } finally {
        twoPhase.finish();
    }
}

It seems to be a timing issue. If we add a small delay between the execution of the child spans, for instance a Thread.sleep(1) in between:
Span prepare = tracer.newChild(twoPhase.context()).name("prepare").start();
try {
    System.out.print("prepare");
} finally {
    prepare.finish();
}

Thread.sleep(1); // <<<

Span commit = tracer.newChild(twoPhase.context()).name("commit").start();
try {
    System.out.print("commit");
} finally {
    commit.finish();
}
Then we get to see the spans in the Zipkin UI.
I've faced something like this before, when Zipkin dropped spans that I was (mistakenly) assigning wrong timestamps to.
For reference and ease of reproduction, I've set up a project that reproduces this issue / "fix".
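Another thing worth double-checking, since the reporter in the question is asynchronous: in a short-lived main() the JVM can exit before the AsyncReporter has actually sent anything to the server. Below is a minimal sketch of forcing delivery before exit, assuming the same Brave / zipkin-reporter setup as in the question (flush() pushes any buffered spans out):

public static void main(String[] args) throws Exception {
    Sender sender = OkHttpSender.create("http://localhost:9411/api/v1/spans");
    // Keep the concrete AsyncReporter type so flush()/close() are available.
    AsyncReporter<zipkin.Span> reporter = AsyncReporter.builder(sender).build();

    Tracer tracer = Tracer.newBuilder()
            .localServiceName("my-service")
            .reporter(reporter)
            .build();

    // ... create and finish the twoPhase / prepare / commit spans exactly as in the question ...

    // Push anything still buffered to the Zipkin server before the JVM exits.
    reporter.flush();
    reporter.close();
    sender.close();
}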

RabbitMQ Camel Consumer - Consume a single message

I have a scenario where I want to "pull" messages off a RabbitMQ queue/topic and process them one at a time.
Specifically if there are already messages sitting on the queue when the consumer starts up.
I have tried the following with no success (meaning, each of these options reads the queue until it is either empty or until another thread closes the context).
1. Stopping the route as soon as the first message is processed
final CamelContext context = new DefaultCamelContext();
final Boolean[] routeComplete = { Boolean.FALSE };
try {
    context.addRoutes(new RouteBuilder() {
        @Override
        public void configure() throws Exception {
            RouteDefinition route = from("rabbitmq:harley?queue=IN&declare=false&autoDelete=false&hostname=localhost&portNumber=5672")
                    .routeId("RabbitRoute"); // id referenced by stopRoute() below
            route.process(new Processor() {
                Thread stopThread;

                @Override
                public void process(final Exchange exchange) throws Exception {
                    String name = exchange.getIn().getHeader(Exchange.FILE_NAME_ONLY, String.class);
                    String body = exchange.getIn().getBody(String.class);
                    // Do some stuff
                    routeComplete[0] = Boolean.TRUE;
                    if (stopThread == null) {
                        // Stop the route from a separate thread so the consumer thread isn't blocked.
                        stopThread = new Thread() {
                            @Override
                            public void run() {
                                try {
                                    ((DefaultCamelContext) exchange.getContext()).stopRoute("RabbitRoute");
                                } catch (Exception e) {
                                    // ignored
                                }
                            }
                        };
                        stopThread.start();
                    }
                }
            });
        }
    });
    context.start();
    while (!routeComplete[0].booleanValue()) {
        Thread.sleep(100);
    }
    context.stop();
} catch (Exception e) {
    e.printStackTrace();
}
2. Similar to 1, but using a latch rather than a while loop and sleep.
3. Using a PollingConsumer
final CamelContext context = new DefaultCamelContext();
context.start();
Endpoint re = context.getEndpoint(srcRoute);
re.start();
try {
    PollingConsumer consumer = re.createPollingConsumer();
    consumer.start();
    Exchange exchange = consumer.receive();
    String bb = exchange.getIn().getBody(String.class);
    consumer.stop();
} catch (Exception e) {
    String mm = e.getMessage();
}
4. Using a ConsumerTemplate - code similar to the above.
I have also tried enabling prefetch and setting the maximum number of exchanges to 1 (roughly as sketched below).
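For reference, the prefetch attempt looked roughly like this (from memory, so the exact camel-rabbitmq option names may be slightly off; the log endpoint is just a placeholder):

public class PrefetchOneRoute extends RouteBuilder {
    @Override
    public void configure() {
        // prefetchEnabled / prefetchCount are camel-rabbitmq consumer options.
        from("rabbitmq:harley?queue=IN&declare=false&autoDelete=false"
                + "&hostname=localhost&portNumber=5672"
                + "&prefetchEnabled=true&prefetchCount=1")
            .to("log:single-message");
    }
}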
None of these appear to work: if there are 3 messages on the queue, all of them are read before I am able to stop the route.
If I were to use the standard RabbitMQ Java API I would use a basicGet() call which lets me read a single message, but for other reasons I would prefer to use a Camel consumer.
Has anyone successfully been able to process a single message on a queue that holds multiple messages using a Camel RabbitMQ Consumer?
Thanks.
This is not the primary intention of the component, as it is designed for continuous consumption. But I have created a ticket to look into supporting basicGet (single receive). There is a new Spring-based RabbitMQ component coming from 3.8 onwards, so it is going to be implemented there (first): https://issues.apache.org/jira/browse/CAMEL-16048
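In the meantime, a possible workaround is to drop down to the plain RabbitMQ Java client (amqp-client) just for the single receive, along these lines (queue name and connection details taken from the question; the ack handling is only a sketch):

import java.nio.charset.StandardCharsets;
import com.rabbitmq.client.Channel;
import com.rabbitmq.client.Connection;
import com.rabbitmq.client.ConnectionFactory;
import com.rabbitmq.client.GetResponse;

public class SingleMessageFetch {
    public static void main(String[] args) throws Exception {
        ConnectionFactory factory = new ConnectionFactory();
        factory.setHost("localhost");
        factory.setPort(5672);
        try (Connection connection = factory.newConnection();
             Channel channel = connection.createChannel()) {
            // basicGet pulls at most one message; autoAck=false so we only ack after processing.
            GetResponse response = channel.basicGet("IN", false);
            if (response != null) {
                String body = new String(response.getBody(), StandardCharsets.UTF_8);
                // process the single message here
                channel.basicAck(response.getEnvelope().getDeliveryTag(), false);
            }
        }
    }
}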

NServiceBus test client not receiving Messages

I am very new to NServiceBus, so I am trying to get it working with a simple test solution using LearningPersistence; obviously this will be changed soon!
So I have 3 projects:
IceDataExtractor - Client which sends a message
IceProcessManager - Processes messages
Messages - Contains a single message class
I am using the standard code generated by NServiceBus.Bootstrap.WindowsService 2.0.1
Here is the page I used to get the sample.
I then modified as follows
Ice Data Extractor
private async Task AsyncOnStart()
{
    try
    {
        var endpointConfiguration = new EndpointConfiguration("IceDataExtractor");
        var transport = endpointConfiguration.UseTransport<LearningTransport>();
        transport.Routing().RouteToEndpoint(typeof(TestMessage), "IceProcessManager");

        endpointConfiguration.UseSerialization<JsonSerializer>();

        //TODO: optionally choose a different error queue. Perhaps on a remote machine
        // https://docs.particular.net/nservicebus/recoverability/
        endpointConfiguration.SendFailedMessagesTo("error");
        //TODO: optionally choose a different audit queue. Perhaps on a remote machine
        // https://docs.particular.net/nservicebus/operations/auditing
        endpointConfiguration.AuditProcessedMessagesTo("audit");
        endpointConfiguration.DefineCriticalErrorAction(OnCriticalError);

        //TODO: For production use select a durable persistence.
        // https://docs.particular.net/nservicebus/persistence/
        endpointConfiguration.UsePersistence<LearningPersistence>();

        //TODO: For production use script the installation.
        endpointConfiguration.EnableInstallers();

        endpointConfiguration.Conventions()
            .DefiningCommandsAs(t => t.Namespace != null && t.Namespace.StartsWith("Messages") &&
                                     t.Namespace.EndsWith("Commands"));

        endpoint = await Endpoint.Start(endpointConfiguration)
            .ConfigureAwait(false);
        PerformStartupOperations();

        var testMessage = new TestMessage { Id = Guid.NewGuid() };
        await endpoint.Send(testMessage).ConfigureAwait(false);
    }
    catch (Exception exception)
    {
        logger.Fatal("Failed to start", exception);
        Environment.FailFast("Failed to start", exception);
    }
}
Ice Process Manager
private async Task AsyncOnStart()
{
    try
    {
        var endpointConfiguration = new EndpointConfiguration("IceDataExtractor");
        var transport = endpointConfiguration.UseTransport<LearningTransport>();
        transport.Routing().RouteToEndpoint(typeof(TestMessage), "IceProcessManager");

        endpointConfiguration.UseSerialization<JsonSerializer>();

        //TODO: optionally choose a different error queue. Perhaps on a remote machine
        // https://docs.particular.net/nservicebus/recoverability/
        endpointConfiguration.SendFailedMessagesTo("error");
        //TODO: optionally choose a different audit queue. Perhaps on a remote machine
        // https://docs.particular.net/nservicebus/operations/auditing
        endpointConfiguration.AuditProcessedMessagesTo("audit");
        endpointConfiguration.DefineCriticalErrorAction(OnCriticalError);

        //TODO: For production use select a durable persistence.
        // https://docs.particular.net/nservicebus/persistence/
        endpointConfiguration.UsePersistence<LearningPersistence>();

        //TODO: For production use script the installation.
        endpointConfiguration.EnableInstallers();

        endpointConfiguration.Conventions()
            .DefiningCommandsAs(t => t.Namespace != null && t.Namespace.StartsWith("Messages") &&
                                     t.Namespace.EndsWith("Commands"));

        endpoint = await Endpoint.Start(endpointConfiguration)
            .ConfigureAwait(false);
        PerformStartupOperations();

        var testMessage = new TestMessage { Id = Guid.NewGuid() };
        await endpoint.Send(testMessage).ConfigureAwait(false);
    }
    catch (Exception exception)
    {
        logger.Fatal("Failed to start", exception);
        Environment.FailFast("Failed to start", exception);
    }
}
TestMessage class
using System;

namespace Messages.Commands
{
    public class TestMessage
    {
        public Guid Id { get; set; }
    }
}
This all compiles and runs fine, other than performance warnings which I don't think matter.
I have a message handler
TestMessageHandler
using System;
using System.Threading.Tasks;
using Messages.Commands;
using NServiceBus;

namespace IceProcessManager
{
    public class TestMessageHandler : IHandleMessages<TestMessage>
    {
        public Task Handle(TestMessage message, IMessageHandlerContext context)
        {
            Console.WriteLine("Handled TestMessage ID: {0}", message.Id);
            return Task.CompletedTask;
        }
    }
}
As you can see from the screenshot, no message is being received by the IceProcessManager. What am I doing wrong? I was thinking initially that I was sending the message too early, i.e. before the ProcessManager was up and running, but this is not the problem, because if I leave the ProcessManager running (i.e. run it from Explorer) and then run the extractor, no message is received.
Ideally I would like to have sent lots of messages to test this but I am not familiar with async stuff yet!
Can someone help please?
Paul
If I am not missing something, you are using the same endpoint name for both instances:
var endpointConfiguration = new EndpointConfiguration("IceDataExtractor");
while you are routing the message to "IceProcessManager", an endpoint which therefore never exists.
I guess you might have pasted the wrong code?

What is the cleanest way to listen to JMS from inside a Spring-batch step?

Spring Batch documentation recommends using the JmsItemReader, which is a wrapper around the JmsTemplate. However, I have discovered that the JmsTemplate has some issues - see http://activemq.apache.org/jmstemplate-gotchas.html .
This post came to my attention only because the queue appeared to disappear before I could actually read the data off of it. The opportunity to miss messages seems like a fairly significant issue to me.
For consumers, at least, try using a DefaultMessageListenerContainer coupled with a SingleConnectionFactory or any such connection factory; it does not need a scheduler to wake the consumers up. There are lots of examples explaining this; this one is really good at explaining the details:
http://bsnyderblog.blogspot.com/2010/05/tuning-jms-message-consumption-in.html
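For what it's worth, here is a hand-wired sketch of that suggestion, assuming ActiveMQ as the broker (the broker URL and queue name are placeholders):

import javax.jms.ConnectionFactory;
import javax.jms.Message;
import javax.jms.MessageListener;
import org.apache.activemq.ActiveMQConnectionFactory;
import org.springframework.jms.connection.SingleConnectionFactory;
import org.springframework.jms.listener.DefaultMessageListenerContainer;

public class JmsListenerSetup {
    public static void main(String[] args) {
        ConnectionFactory target = new ActiveMQConnectionFactory("tcp://localhost:61616");
        SingleConnectionFactory connectionFactory = new SingleConnectionFactory(target);

        DefaultMessageListenerContainer container = new DefaultMessageListenerContainer();
        container.setConnectionFactory(connectionFactory);
        container.setDestinationName("my.queue");
        container.setConcurrentConsumers(1);
        // Messages are pushed to the listener as they arrive; no scheduler/polling involved.
        container.setMessageListener(new MessageListener() {
            @Override
            public void onMessage(Message message) {
                // hand the message over to the batch processing code here
            }
        });
        container.afterPropertiesSet();
        container.start();
    }
}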
Here is the solution I ended up with. Since the query was about the "cleanest" way to listen to JMS from within a spring-batch step, I'm going to leave the question open for a while longer just in case there's a better way.
1. In a job listener, implement queue setup and teardown inside the beforeJob and afterJob events, respectively:
public void beforeJob(JobExecution jobExecution) {
    try {
        jobParameters = jobExecution.getJobParameters();
        readerConnection = connectionFactory.createConnection();
        readerConnection.start();
    } catch (JMSException ex) {
        // handle the exception as appropriate
    }
}

public void afterJob(JobExecution jobExecution) {
    try {
        readerConnection.close();
    } catch (JMSException e) {
        // handle the exception as appropriate
    }
}
2. In the reader, implement the StepListener and beforeStep / afterStep methods.
public void beforeStep(StepExecution stepExecution) {
    this.stepExecution = stepExecution;
    this.setJobExecution(stepExecution.getJobExecution());
    try {
        this.connection = jmsJobExecutionListener.getReaderConnection();
        this.jmsSession = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);
        this.messageConsumer = jmsSession.createConsumer(jmsJobExecutionListener.getQueue());
    } catch (JMSException ex) {
        // handle the exception as appropriate
    }
}

public ExitStatus afterStep(StepExecution stepExecution) {
    try {
        messageConsumer.close();
        jmsSession.close();
    } catch (JMSException e) {
        // handle the exception as appropriate
    }
    return stepExecution.getExitStatus();
}
3. Implement the read() method:
public TreeModel<SelectedDataElementNode> read() throws Exception,
        UnexpectedInputException, ParseException,
        NonTransientResourceException {
    logger.debug("Attempting to receive message on connection: {}", connection.toString());
    ObjectMessage msg = (ObjectMessage) messageConsumer.receive();
    logger.debug("Received: {}", msg.toString());
    Object result = msg.getObject();
    // Cast the payload to the item type the step expects.
    return (TreeModel<SelectedDataElementNode>) result;
}
4. Add the listeners to the Spring Batch context as appropriate:
<batch:job id="doStuff">
    <batch:listeners>
        <batch:listener ref="jmsJobExecutionListener" />
    </batch:listeners>
    ... snip ...
    <batch:step id="step0003-do-stuff">
        <batch:tasklet transaction-manager="jtaTransactionManager"
                       start-limit="100">
            <batch:chunk reader="selectedDataJmsReader" writer="someWriter"
                         commit-interval="1" />
        </batch:tasklet>
        <batch:listeners>
            <batch:listener ref="selectedDataJmsReader" />
        </batch:listeners>
    </batch:step>
</batch:job>

play, java8 - re-use test fake application in tests?

Is there any way to stop the actor system from shutting down and starting up between tests?
I keep getting Akka exceptions complaining about the actor system being down.
I can mock/stub to get rid of the reliance on the fake app, but that needs a bit of work - I'm hoping to be able to just start one static test application and run different things in it.
E.g. I have a (crappy) test like this - can I somehow re-use the running app between tests? It still seems to shut down somewhere along the line.
running(Fixtures.testSvr, HTMLUNIT, browser -> new JavaTestKit(system) {{
    F.Promise<TestResponseObject> resultPromise =
            client.makeRequest("request", "parameterObject", system.dispatcher());
    boolean gotUnmarshallingException = false;
    try {
        Await.result(resultPromise.wrapped(), TotesTestFixtures.timeout.duration());
    } catch (Exception e) {
        if (e instanceof exceptions.UnmarshallingException) {
            gotUnmarshallingException = true;
        }
    }
    if (!gotUnmarshallingException) {
        fail();
    }
}});
You can try to get rid of the running() method (it stops the test server at the end) and initialize a test server yourself, but I don't know if Akka will be available to you:
// testServer / testbrowser as static fields; PORT is whatever free port you pick.
private static TestServer testServer;
private static TestBrowser testbrowser;

@BeforeClass
public static void start() {
    testServer = testServer(PORT, fakeApplication(inMemoryDatabase()));
    testServer.start();
    // Maybe you don't need this...
    try {
        testbrowser = new TestBrowser(HTMLUNIT, "http://localhost:" + PORT);
    } catch (Exception e) {
    }
}

@Test
public void testOne() {
    new JavaTestKit(system) {
        // (...)
    };
}

@AfterClass
public static void stop() {
    testServer.stop();
}

Maximum threads issue

To begin with, I checked the existing discussions regarding this issue and couldn't find an answer to my problem, which is why I'm opening this question.
I've set up a web service using Restlet 2.0.15. The implementation is only for the server. The connections to the server are made through a webpage, and therefore I didn't use ClientResource.
Most of the answers to the thread-pool exhaustion problem suggested adding exhaust() + release() calls.
The processing of the web service can be described as a single function: receive GET requests from the webpage, query the database, frame the results in XML, and return the final representation. I used a Filter to override beforeHandle and afterHandle.
The component creation code:
Component component = new Component();
component.getServers().add(Protocol.HTTP, 8188);
component.getContext().getParameters().add("maxThreads", "512");
component.getContext().getParameters().add("minThreads", "100");
component.getContext().getParameters().add("lowThreads", "145");
component.getContext().getParameters().add("maxQueued", "100");
component.getContext().getParameters().add("maxTotalConnections", "100");
component.getContext().getParameters().add("maxIoIdleTimeMs", "100");
component.getDefaultHost().attach("/orcamento2013", new ServerApp());
component.start();
The parameters are the result of a discussion on this forum, plus modifications on my part in an attempt to maximize efficiency.
Coming to the Application, the code is as follows:
@Override
public synchronized Restlet createInboundRoot() {
    // Create a router Restlet that routes each call to a
    // new instance of HelloWorldResource.
    Router router = new Router(getContext());
    // Defines only one route
    router.attach("/{taxes}", ServerImpl.class);
    //router.attach("/acores/{taxes}", ServerImplAcores.class);
    System.out.println(router.getRoutes().size());
    OriginFilter originFilter = new OriginFilter(getContext());
    originFilter.setNext(router);
    return originFilter;
}
I used an example Filter found in a discussion here, too. The implementation is as follows:
public OriginFilter(Context context) {
    super(context);
}

@Override
protected int beforeHandle(Request request, Response response) {
    if (Method.OPTIONS.equals(request.getMethod())) {
        Form requestHeaders = (Form) request.getAttributes().get("org.restlet.http.headers");
        String origin = requestHeaders.getFirstValue("Origin", true);
        Form responseHeaders = (Form) response.getAttributes().get("org.restlet.http.headers");
        if (responseHeaders == null) {
            responseHeaders = new Form();
            response.getAttributes().put("org.restlet.http.headers", responseHeaders);
            responseHeaders.add("Access-Control-Allow-Origin", origin);
            responseHeaders.add("Access-Control-Allow-Methods", "GET,POST,DELETE");
            responseHeaders.add("Access-Control-Allow-Headers", "Content-Type");
            responseHeaders.add("Access-Control-Allow-Credentials", "true");
            response.setEntity(new EmptyRepresentation());
            return SKIP;
        }
    }
    return super.beforeHandle(request, response);
}
@Override
protected void afterHandle(Request request, Response response) {
    if (!Method.OPTIONS.equals(request.getMethod())) {
        Form requestHeaders = (Form) request.getAttributes().get("org.restlet.http.headers");
        String origin = requestHeaders.getFirstValue("Origin", true);
        Form responseHeaders = (Form) response.getAttributes().get("org.restlet.http.headers");
        if (responseHeaders == null) {
            responseHeaders = new Form();
            response.getAttributes().put("org.restlet.http.headers", responseHeaders);
            responseHeaders.add("Access-Control-Allow-Origin", origin);
            responseHeaders.add("Access-Control-Allow-Methods", "GET,POST,DELETE");
            responseHeaders.add("Access-Control-Allow-Headers", "Content-Type");
            responseHeaders.add("Access-Control-Allow-Credentials", "true");
        }
    }
    super.afterHandle(request, response);

    Representation requestRepresentation = request.getEntity();
    if (requestRepresentation != null) {
        try {
            requestRepresentation.exhaust();
        } catch (IOException e) {
            // handle exception
        }
        requestRepresentation.release();
    }
    Representation responseRepresentation = response.getEntity();
    if (responseRepresentation != null) {
        try {
            responseRepresentation.exhaust();
        } catch (IOException ex) {
            Logger.getLogger(OriginFilter.class.getName()).log(Level.SEVERE, null, ex);
        } finally {
        }
    }
}
The responseRepresentation does not get a release() call because that crashes the process, giving the warning WARNING: A response with a 200 (Ok) status should have an entity (...)
The code of the ServerResource implementation is the following:
public class ServerImpl extends ServerResource {

    String itemName;

    @Override
    protected void doInit() throws ResourceException {
        this.itemName = (String) getRequest().getAttributes().get("taxes");
    }

    @Get("xml")
    public Representation makeItWork() throws SAXException, IOException {
        DomRepresentation representation = new DomRepresentation(MediaType.TEXT_XML);
        DAL dal = new DAL();
        String ip = getRequest().getCurrent().getClientInfo().getAddress();
        System.out.println(itemName);
        double tax = Double.parseDouble(itemName);
        Document myXML = Auxiliar.getMyXML(tax, dal, ip);
        myXML.normalizeDocument();
        representation.setDocument(myXML);
        return representation;
    }

    @Override
    protected void doRelease() throws ResourceException {
        super.doRelease();
    }
}
I've tried the solutions provided in other threads, but none of them seem to work. Firstly, the thread pool does not seem to be enlarged by the parameters I set, as the warnings state that the available thread pool is 10. As mentioned before, increasing the maxThreads value only seems to postpone the problem.
Example: INFO: Worker service tasks: 0 queued, 10 active, 17 completed, 27 scheduled.
There could be some error related to the Restlet version, but I downloaded the stable version to verify this was not the issue. The web service handles around 5000 requests per day, which is not much. Note: inserting the release() call either in the ServerResource or in the OriginFilter produces an error and the aforementioned warning ("WARNING: A response with a 200 (Ok) status should have an entity (...)").
Please guide.
Thanks!
By reading this site, I resolved the server-side problem I described by upgrading the Restlet distribution to version 2.1.
You will need to alter some code. You should consult the respective migration guide.