Inconsistent Results in neo4j-ogm - Related to Session Scope?

We are developing a Spring Boot REST application using Spring Data Neo4j. Recently we upgraded to Spring Data Neo4j 4.2 along with OGM 2.1.1, using the embedded driver.
In our application we provide some GET operations in which we build some object structures from nodes fetched from Neo4j.
As soon as we process multiple requests in parallel, we face inconsistent results, i.e., our object structures have a different size on each request.
We are not really sure about the reasons for this inconsistent behavior, but it is probably related to the session handling in OGM. We know that Sessions are not thread-safe, but we have no idea how to deal with this issue in SD 4.2. Before 4.2 we changed the session scope to prototype when defining the session bean, but the configuration changed in SD 4.2.
Configuration before 4.2
@Bean
@Scope(value = "prototype", proxyMode = ScopedProxyMode.TARGET_CLASS)
public Session getSession() throws Exception {
    // Assuming the configuration class extends Neo4jConfiguration,
    // as was typical before SDN 4.2.
    return super.getSession();
}
We could narrow the source of our problems to the place where we are loading elements from Neo4j via a Repository class:
repository.findOne(id, -1);
If we place this call in a synchronized block, no inconsistencies occur.
synchronized (this) {
    repository.findOne(id, -1);
}
We are probably missing some important point about using SD 4.2/OGM, but could not find any useful information in the documentation or on the web.
Is it still possible/necessary to change the session scope in SD 4.2?

This is a bug in the OGM. See: https://jira.spring.io/browse/DATAGRAPH-951
We hope to have a fix for this in the next version (OGM 2.1.2).
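Until that release is available, one stopgap, based on the synchronized workaround from the question, is to funnel all loads through a single lock, e.g. in a small wrapper service. This is only a sketch under the assumption that serializing repository access is acceptable for your request volume; the repository and entity names are placeholders, not from the original post.
import org.springframework.stereotype.Service;

// Hypothetical wrapper that serializes repository access until the OGM fix
// (2.1.2) ships. NodeRepository/NodeEntity are placeholder names.
@Service
public class SynchronizedNodeReader {

    private final NodeRepository repository;
    private final Object lock = new Object();

    public SynchronizedNodeReader(NodeRepository repository) {
        this.repository = repository;
    }

    public NodeEntity findOne(Long id) {
        // One load at a time, mirroring the synchronized block that removed
        // the inconsistencies in the question.
        synchronized (lock) {
            return repository.findOne(id, -1);
        }
    }
}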

Related

Custom SpanAdjuster is not working in Sleuth 1.3.X

I'm using Sleuth 1.3.X to add distributed tracing to a microservice. I'm trying to change the span name, and I came across this link.
It says that the SpanReporter should inject the SpanAdjuster and allow span manipulation before the actual reporting is done.
How can I do that?
Here is my SpanAdjuster:
@Bean
SpanAdjuster mySpanAdjuster() {
    return span -> {
        if ("/rest/XYZ/message".equals(span.tags().get("http.path"))) {
            // Return the rebuilt span, otherwise the new name is discarded.
            return Span.builder().from(span).name("Rest API").build();
        }
        return span;
    };
}
You seem to be doing this as the docs suggest. Check out the Sleuth auto-configuration; the adjuster should be injected. It might be that you are using an incompatible version of another module/project, e.g. Spring Boot.
Also, we are about to release Sleuth 3.1.0, so you are two major versions behind; the 1.x line is not supported anymore, so even if this is a bug, there won't be a new 1.x release to fix it.
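If a version mismatch is the suspicion, a quick sanity check is to log the versions actually present on the classpath at startup. This is just a JDK-level sketch; it reads whatever Implementation-Version the jar manifests declare, which some builds omit:
import org.springframework.boot.SpringBootVersion;
import org.springframework.cloud.sleuth.SpanAdjuster;

// Minimal check: print the Sleuth and Boot versions actually loaded,
// to rule out a mismatched dependency tree.
public class VersionCheck {
    public static void main(String[] args) {
        // May be null if the Sleuth jar's manifest lacks Implementation-Version.
        System.out.println("Sleuth:      "
                + SpanAdjuster.class.getPackage().getImplementationVersion());
        System.out.println("Spring Boot: " + SpringBootVersion.getVersion());
    }
}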

How does the distributed executor service in Redisson work with regard to scoping / closures?

If I push a Runnable to a Redisson distributed executor service, what rules am I required to abide by?
Surely I cannot have free rein; I do not see how that would be possible. Yet it is not mentioned in the docs at all, nor are any rules apparently enforced by the API, like R extends Serializable or similar.
If I pass this runnable:
Runnable task = () -> {
    // What can I access here, and have it be recreated in whatever server
    // instance picks it up later for execution?
    // newlyCreatedInstanceCreatedJustBeforeThisRunnableWasCreated.isAccessible(); // ?
    // newlyComplexInstanceSuchAsADatabaseDriverThatIsAccessedHere.isAccessible(); // ?
    // transactionalHibernateEntityContainingStaticReferencesToComplexObjects...
    // I think you get the point.
    // Does Redisson serialize everything within this scope?
    // When it is recreated later, surely I cannot have access to those exact
    // objects, unless they run on the same server, right?
    // If the server goes down and up, or another server executes this
    // runnable, then what happens?
    // What rules do we have to abide by here?
};
Also, what rules do we have to abide by when pushing something to an RQueue, RBlockingDeque, or Redisson live objects?
It is not clear from the docs.
Also, it would be great if a link to a single-page documentation site could be provided. The one here requires a lot of clicking and navigation:
https://github.com/redisson/redisson/wiki/Table-of-Content
https://github.com/redisson/redisson/wiki/9.-distributed-services#933-distributed-executor-service-tasks
You can have access to the RedissonClient instance and the task id; the full state of the task object will be serialized.
A TaskRetry setting is applied to each task: if a task hasn't been executed within 5 minutes of being started, it is requeued.
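For reference, the retry interval can be overridden per executor; a sketch assuming the ExecutorOptions API described in the wiki page linked above, with a placeholder executor name:
import java.util.concurrent.TimeUnit;
import org.redisson.api.ExecutorOptions;
import org.redisson.api.RExecutorService;

// Assumes 'redisson' is an already-configured RedissonClient.
// Tasks of this executor are requeued after 10 minutes instead of 5.
RExecutorService executor = redisson.getExecutorService(
        "myExecutor",
        ExecutorOptions.defaults().taskRetryInterval(10, TimeUnit.MINUTES));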
I agree that the documentation is lacking some "under the hood" explanations.
I was able to execute DB reads and inserts through the Callable/Runnable that was submitted to the remote ExecutorService.
I configured a single Redis instance on a remote VM, with the database and the app running locally on my laptop.
The tasks were executed without any errors.
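To make the serialization rule concrete, here is a minimal sketch of a self-contained task built around the @RInject injection mechanism from the Redisson docs; the class and field names are illustrative, not from the original answer:
import java.io.Serializable;
import java.util.concurrent.Callable;
import org.redisson.api.RedissonClient;
import org.redisson.api.annotation.RInject;

// Sketch of a task for a distributed executor. Only the task's own fields
// travel over the wire; environment-specific resources (connections, drivers,
// attached Hibernate entities) must be re-acquired inside call().
public class CountTask implements Callable<Long>, Serializable {

    // Serialized with the task: keep fields simple and serializable.
    private final String mapName;

    // NOT serialized: Redisson injects a fresh client on the executing node.
    @RInject
    private transient RedissonClient redisson;

    public CountTask(String mapName) {
        this.mapName = mapName;
    }

    @Override
    public Long call() {
        // Re-acquire resources here, on whichever node runs the task.
        return (long) redisson.getMap(mapName).size();
    }
}
A captured lambda, by contrast, drags its enclosing scope into the serialized state (or fails to serialize at all), which is exactly the trap the question's comments hint at.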

Spring Sleuth Runtime Sampling and Tracing Decision

I am trying to integrate my application with Spring Sleuth.
I was able to do a successful integration and I can see spans getting exported to Zipkin.
I am exporting to Zipkin over HTTP.
Spring Boot version - 1.5.10.RELEASE
Sleuth - 1.3.2.RELEASE
Cloud - Edgware.SR2
But now I need to do this in a more controlled way, as the application is already running in production and people are scared about the overhead which Sleuth can introduce by adding @NewSpan on the methods.
I need to decide at runtime whether the trace should be added or not (not talking about exporting). For actuator endpoints, for example, the trace is not added at all; I assume this has no overhead on the application. Putting X-B3-Sampled = 0 stops exporting but still adds tracing information. I want something like the skipPattern property, but at runtime.
Always export the trace if the service call exceeds a certain threshold or in case of an exception.
If I am not exporting spans to Zipkin, will there be any overhead from the tracing information?
What about this solution? I guess this will work for sampling specific requests at runtime.
@Bean
public Sampler customSampler() {
    return new Sampler() {
        @Override
        public boolean isSampled(Span span) {
            logger.info("Inside sampling " + span.getTraceId());
            HttpServletRequest httpServletRequest = HttpUtils.getRequest();
            return httpServletRequest != null
                    && httpServletRequest.getServletPath().startsWith("/test");
        }
    };
}
people are scared about the overhead which Sleuth can introduce by adding @NewSpan on the methods.
Do they have any information about the overhead? Have they turned it on and seen the application start to lag significantly? What are they scared of? Is this a high-frequency trading application where every microsecond counts?
I need to decide at runtime whether the trace should be added or not (not talking about exporting). For actuator endpoints, for example, the trace is not added at all; I assume this has no overhead on the application. Putting X-B3-Sampled = 0 stops exporting but still adds tracing information. I want something like the skipPattern property, but at runtime.
I don't think that's possible. The instrumentation is set up by adding interceptors, aspects, etc., and these are wired up at application initialization.
Always export the trace if the service call exceeds a certain threshold or in case of an exception.
With the new Brave tracer instrumentation (Sleuth 2.0.0) you will be able to do this in a much easier way. Prior to that version you would have to implement your own SpanReporter that inspects the tags (e.g. whether the span contains an error tag) and, only if that is the case, sends the span to Zipkin.
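Here is a minimal sketch of such a filtering reporter for Sleuth 1.3.x; the delegate wiring and the "error" tag key are assumptions based on common Sleuth conventions, not code from the original answer:
import org.springframework.cloud.sleuth.Span;
import org.springframework.cloud.sleuth.SpanReporter;

// Filtering reporter: forwards a span to the real (e.g. Zipkin-bound)
// reporter only when it carries an error tag; every other span is dropped.
public class ErrorOnlySpanReporter implements SpanReporter {

    private final SpanReporter delegate;

    public ErrorOnlySpanReporter(SpanReporter delegate) {
        this.delegate = delegate;
    }

    @Override
    public void report(Span span) {
        // "error" is the conventional tag key; adjust if your setup differs.
        if (span.tags().containsKey("error")) {
            delegate.report(span);
        }
    }
}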
If I am not exporting spans to Zipkin, will there be any overhead from the tracing information?
Yes, there is, because you still need to create and pass the tracing data around. However, the overhead is small.

ServiceLoader issue in WebLogic 12c

I have been trying to refactor our Activiti implementation to use CDI but ran into a number of problems. I've spent way too much time trying to resolve this already, but I just can't let it go... I think I've pinned the problem down now: I set up a clean, structured WAR without involving Activiti and have been able to reproduce what I think is the main problem.
Basically I have jar1 and jar2, both CDI enabled by including META-INF/beans.xml. Both jars specify a class in META-INF/services/test.TheTest pointing to implementations local to the respective jar. jar1 depends on jar2. Also, both jars point to an implementation of javax.enterprise.inject.spi.Extension, triggering the scenario. In each implementation of Extension, I have a method like:
public void afterDeploymentValidation(
        @Observes AfterDeploymentValidation event, BeanManager beanManager) {
    System.out.println("In jar1 extension");
    ServiceLoader<TheTest> loader = ServiceLoader.load(TheTest.class);
    Iterator<TheTest> serviceIterator = loader.iterator();
    List<TheTest> discoveredLookups = new ArrayList<TheTest>();
    while (serviceIterator.hasNext()) {
        TheTest serviceInstance = serviceIterator.next();
        discoveredLookups.add(serviceInstance);
        System.out.println(serviceInstance.getClass().getName());
    }
}
Now, my problem is that the ServiceLoader does not see any implementations in either case when running on WebLogic 12c. The same code works perfectly fine in both JBoss 7.1.1 and GlassFish, listing both implementations of the test.TheTest interface.
Is it fair to assume that this is indeed a problem in WebLogic 12c, or am I doing something wrong? Please bear in mind that I am simply trying to emulate the production setup we use when incorporating Activiti.
Regards,
/Petter
There is a Classloader Analysis Tool provided with WLS; have you checked whether it helps with diagnosing your issue?
You can access this tool by going to ip:port/wls-cat/index.jsp, where port is the port of the managed server where your application is deployed.
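Independent of the analysis tool, one thing worth trying (an assumption on my part, not something the WLS docs promise) is to pass an explicit classloader to ServiceLoader, since application servers differ in which loader ServiceLoader.load(Class) ends up consulting:
// Variant of the lookup that pins the classloader explicitly. If the context
// classloader can see both jars' META-INF/services entries, this lists both
// implementations; if not, the problem is classloader visibility.
ServiceLoader<TheTest> loader = ServiceLoader.load(
        TheTest.class, Thread.currentThread().getContextClassLoader());
for (TheTest serviceInstance : loader) {
    System.out.println(serviceInstance.getClass().getName());
}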

Jetty JDBCSessionManager not serializable

I have a bit of a problem using the JDBCSessionManager in Jetty 7. For some reason it tries to persist the SessionManager when persisting the SessionAuthentication:
16:46:02,455 WARN org.eclipse.jetty.util.log - Problem persisting changed session data id=b75j2q0lak5s1o2zuryj05h9y
java.io.NotSerializableException: org.eclipse.jetty.server.session.JDBCSessionManager
Setup code:
server.setSessionIdManager(getSessionIdManager());
final SessionManager jdbcSessionManager = new JDBCSessionManager();
jdbcSessionManager.setIdManager(server.getSessionIdManager());
context.setSessionHandler(new SessionHandler(jdbcSessionManager));
server.setHandler(context);

private SessionIdManager getSessionIdManager() {
    JDBCSessionIdManager idMan = new JDBCSessionIdManager(server);
    idMan.setDriverInfo("com.mysql.jdbc.Driver",
            "jdbc:mysql://localhost/monty?user=xxxx&password=Xxxx");
    idMan.setWorkerName("monty");
    return idMan;
}
Has anyone experienced something similar?
I wouldn't recommend serializing anything JDBC-related into a session. My preferred mode of operation is to acquire, use, and close any and all database resources, such as connections, statements, and result sets, in the narrowest scope possible. I use connection pools to amortize the cost of opening database connections. That's the way I think you should go, too.
Besides, you have no choice if the class doesn't implement java.io.Serializable. Perhaps the designers were trying to express my feelings in code.
I checked the javadocs for JDBCSessionManager. Neither the leaf class nor any of its superclasses implements Serializable.
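That claim is easy to confirm at runtime with a JDK-only check (illustrative, not from the original answer):
import java.io.Serializable;
import org.eclipse.jetty.server.session.JDBCSessionManager;

// Prints false, which matches the NotSerializableException: the session
// manager must not be reachable from any object stored in the session.
public class SerializableCheck {
    public static void main(String[] args) {
        System.out.println(
                Serializable.class.isAssignableFrom(JDBCSessionManager.class));
    }
}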