I have a bit of a problem using the JDBCSessionManager in Jetty 7. For some reason it tries to persist the SessionManager when persisting the SessionAuthentication:
16:46:02,455 WARN org.eclipse.jetty.util.log - Problem persisting changed session data id=b75j2q0lak5s1o2zuryj05h9y
java.io.NotSerializableException: org.eclipse.jetty.server.session.JDBCSessionManager
Setup code:
server.setSessionIdManager(getSessionIdManager());
final SessionManager jdbcSessionManager = new JDBCSessionManager();
jdbcSessionManager.setIdManager(server.getSessionIdManager());
context.setSessionHandler(new SessionHandler(jdbcSessionManager));
server.setHandler(context);
private SessionIdManager getSessionIdManager() {
JDBCSessionIdManager idMan = new JDBCSessionIdManager(server);
idMan.setDriverInfo("com.mysql.jdbc.Driver", "jdbc:mysql://localhost/monty?user=xxxx&password=Xxxx");
idMan.setWorkerName("monty");
return idMan;
}
Has anyone experienced something similar?
I wouldn't recommend serializing anything JDBC-related into a session. My preferred mode of operation is to acquire, use, and close any and all database resources, such as connections, statements, and result sets, in the narrowest scope possible. I use connection pools to amortize the cost of opening database connections. That's the way I think you should go, too.
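A minimal sketch of that pattern (the DAO class, table, and column names here are made up), using try-with-resources (Java 7+) so every resource is closed in the narrowest scope and the pooled connection is released immediately:
import javax.sql.DataSource;
import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;

// Acquire, use, and close: each call borrows a connection from the pool
// behind the DataSource and returns it before the method exits.
public final class UserDao {

    private final DataSource dataSource;

    public UserDao(DataSource dataSource) {
        this.dataSource = dataSource;
    }

    public String findName(long id) throws SQLException {
        String sql = "SELECT name FROM users WHERE id = ?";
        try (Connection con = dataSource.getConnection();
             PreparedStatement ps = con.prepareStatement(sql)) {
            ps.setLong(1, id);
            try (ResultSet rs = ps.executeQuery()) {
                return rs.next() ? rs.getString("name") : null;
            }
        } // the connection goes back to the pool here, never into a session
    }
}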
Besides, you have no choice if the class doesn't implement java.io.Serializable. Perhaps the designers were trying to express my feelings in code.
I checked the javadocs for JDBCSessionManager. Neither the leaf class nor any of its superclasses implement Serializable.
If I push a Runnable to a Redisson distributed executor service, what rules am I required to abide by?
Surely I can't have free rein; I don't see how that would be possible. Yet it isn't mentioned in the docs at all, nor are any rules apparently enforced by the API, like R extends Serializable or similar.
If I pass this runnable:
Runnable task = () -> {
    // What can I access here, and have it be recreated in whatever server instance picks it up later for execution?
    // newlyCreatedInstanceCreatedJustBeforeThisRunnableWasCreated.isAccessible(); // ?
    // newlyComplexInstanceSuchAsADatabaseDriverThatIsAccessedHere.isAccessible(); // ?
    // transactionalHibernateEntityContainingStaticReferencesToComplexObjects....
    // I think you get the point.
    // Does Redisson serialize everything within this scope?
    // When it is recreated later, surely I can not have access to those exact objects, unless they run on the same server, right?
    // If the server goes down and up, or another server executes this runnable, then what happens?
    // What rules do we have to abide by here?
};
Also, what rules do we have to abide by when pushing something to an RQueue, an RBlockingDeque, or Redisson live objects?
It is not clear from the docs.
Also, it would be great if a link to a single-page documentation site could be provided. The one here requires a lot of clicking and navigation:
https://github.com/redisson/redisson/wiki/Table-of-Content
https://github.com/redisson/redisson/wiki/9.-distributed-services#933-distributed-executor-service-tasks
You have access to the RedissonClient and the taskId. The full state of the task object will be serialized.
A task retry setting is applied to each task: if a task hasn't been executed within 5 minutes of being started, it will be requeued.
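A minimal sketch of what that implies (the task class, bucket key, executor name, and Redis address below are made up): the task and all of its fields must be serializable by the configured codec, and the executing node re-injects a RedissonClient into fields annotated with @RInject.
import java.io.Serializable;
import org.redisson.Redisson;
import org.redisson.api.RExecutorService;
import org.redisson.api.RedissonClient;
import org.redisson.api.annotation.RInject;
import org.redisson.config.Config;

// Everything in the task's fields is serialized and shipped to whichever
// node picks the task up; live objects from the submitting JVM are not.
public class ExampleTask implements Runnable, Serializable {

    private final String payload; // travels with the task

    @RInject
    private transient RedissonClient redisson; // re-injected on the executing node

    public ExampleTask(String payload) {
        this.payload = payload;
    }

    @Override
    public void run() {
        // Only the serialized fields and the injected client exist here.
        redisson.getBucket("result:" + payload).set("done");
    }

    public static void main(String[] args) {
        Config config = new Config();
        config.useSingleServer().setAddress("redis://127.0.0.1:6379");
        RedissonClient client = Redisson.create(config);

        RExecutorService executor = client.getExecutorService("myExecutor");
        executor.submit(new ExampleTask("task-42"));
        client.shutdown();
    }
}
This is also why the lambda in the question is risky: whatever it captures must itself be serializable, and on the executing node those captures are recreated from their serialized form rather than shared with the submitting server.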
I agree that the documentation is lacking some "under the hood" explanations.
I was able to execute db reads and inserts through the Callable/Runnable that was submitted to the remote ExecutorService.
I configured a single Redis instance on a remote VM, with the database and the app running locally on my laptop.
The tasks were executed without any errors.
I am looking for a way to intercept Session.SaveChanges() so that I may execute some extra work using the same session instance (this is handy in some cases).
Edit: The point about re-using the session is that I have more work that needs to run in the same transaction.
I am already aware of (and make use of) IDocumentStoreListener - but this interface doesn't help because it does not give me access to the current session.
I can't find anything in the RavenDB documentation about a way to intercept the call to SaveChanges and get a handle on the current session. Does anyone know of a way?
Opening a new session is free (in terms of performance), and I think IDocumentStoreListener was designed for exactly what you're looking for. I don't know of anything else that works the way you describe.
By implementing
void AfterStore(string key, object entityInstance, RavenJObject metadata);
you have all the information about the stored entity and can then do what you need.
We are developing a Spring Boot REST application using Spring Data Neo4j. Recently we upgraded to Spring Data Neo4j 4.2 along with OGM 2.1.1, using the embedded driver.
In our application we provide some GET operations in which we build some object structures from nodes fetched from Neo4j.
As soon as we process multiple requests in parallel, we are facing inconsistent results, i.e., our object structures have a different size on each request.
We are not really sure about the reason for this inconsistent behavior, but it is probably related to the session handling in OGM. We know that Sessions are not thread-safe, but we have no idea how to deal with this issue in SD 4.2. Before 4.2 we changed the session scope to prototype when defining the session bean, but the configuration changed in SD 4.2.
Configuration before 4.2
@Bean
@Scope(value = "prototype", proxyMode = ScopedProxyMode.TARGET_CLASS)
public Session getSession() throws Exception {
    // assuming the config class extended Neo4jConfiguration, as was common before 4.2
    return super.getSession();
}
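In 4.2, as far as we understand the new style, the equivalent setup looks roughly like this (a sketch; the package names are placeholders): instead of a prototype-scoped Session bean, you declare a SessionFactory and a transaction manager, and Spring injects a proxied Session.
import org.neo4j.ogm.session.SessionFactory;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.data.neo4j.repository.config.EnableNeo4jRepositories;
import org.springframework.data.neo4j.transaction.Neo4jTransactionManager;
import org.springframework.transaction.annotation.EnableTransactionManagement;

@Configuration
@EnableNeo4jRepositories("com.example.repository")
@EnableTransactionManagement
public class Neo4jConfig {

    @Bean
    public SessionFactory sessionFactory() {
        // reads ogm.properties for the (embedded) driver settings
        return new SessionFactory("com.example.domain");
    }

    @Bean
    public Neo4jTransactionManager transactionManager() {
        return new Neo4jTransactionManager(sessionFactory());
    }
}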
We could narrow the source of our problems to the place where we are loading elements from Neo4j via a Repository class:
repository.findOne(id,-1);
If we place this call in a synchronized block, no inconsistencies occur.
synchronized (this) {
repository.findOne(id,-1);
}
We are probably missing some important point in using SD 4.2/OGM, but could not find any useful information in the documentation or on the web.
Is it still possible/necessary to change the session scope in SD 4.2?
This is a bug in the OGM. See: https://jira.spring.io/browse/DATAGRAPH-951
We hope to have a fix for this in the next version (OGM 2.1.2).
I fixed a bug related to the way we were using BasicDataSource, and though I understand part of it, I still have some unanswered questions :)
Problem:
The application was not able to auto-connect to the database after a db failure.
Application is using org.apache.commons.dbcp.BasicDataSource class as a TCP-connection pool for a JDBC connection to Oracle db.
Fix:
After some research I discovered that testOnBorrow and testOnReturn were not set on the BasicDataSource. I provided a validation query to test connections, and this fixed the problem.
The maximum number of connections in the pool was set to 1.
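Expressed as code, the fix looked roughly like this (driver, URL, and credentials are placeholders):
import org.apache.commons.dbcp.BasicDataSource;

public class PoolConfig {

    public static BasicDataSource create() {
        BasicDataSource ds = new BasicDataSource();
        ds.setDriverClassName("oracle.jdbc.OracleDriver");
        ds.setUrl("jdbc:oracle:thin:@//dbhost:1521/ORCL");
        ds.setUsername("user");
        ds.setPassword("secret");
        ds.setMaxActive(1);                          // the pool was capped at one connection
        ds.setValidationQuery("SELECT 1 FROM DUAL"); // query used to test connections
        ds.setTestOnBorrow(true);                    // validate before handing a connection out
        ds.setTestOnReturn(true);                    // validate when it comes back
        return ds;
    }
}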
My Understanding:
The connection pool would hand over a connection to the application.
What I think was happening is that the application MAGICALLY returned the bad connection to the pool when the db crashed. Since the pool did not know it was a bad connection, it would hand over the same connection the next time the application needed one, preventing the application from auto-reconnecting to the db.
Now, after the fix, whenever a bad connection is returned to the connection pool it is discarded and won't be used again.
I also know that BasicDataSource wraps the connection before giving it to the application, so that whenever the application calls con.close(), BasicDataSource knows the connection is no longer in use and takes care of either returning it to the pool or discarding it.
Unanswered Question:
However, what I do not understand is what makes the application MAGICALLY return the connection to the connection pool when it is broken. [Note that the con.close() method is not called when the connection exits ungracefully.] There is no way for BasicDataSource to know that the connection closed, or is there? Can someone point me to the code for that?
Is my overall understanding of why the fix worked correct?
Now, I know that this is kind of an old thread, but it's high in Google search results, so I thought I might give it a quick answer. For more information on configuring the BasicDataSource, you should reference the DBCP configuration page: http://commons.apache.org/proper/commons-dbcp/configuration.html
To answer the "unanswered" question of "How does BasicDataSource know when a connection is abandoned and needs to be returned to the connection pool?" (paraphrased)...
org.apache.commons.dbcp.BasicDataSource is able to monitor traffic and usage on the connections it offers by using a wrapper class for the Connection. Every time you call a method on the connection or any Statements created from the connection, you are actually calling a wrapper class that implements an interface or extends a base class with those same methods (Hurray for Polymorphism!). These custom methods allow the DataSource to know whether or not a Connection is active.
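A stripped-down sketch of that idea (the names here are hypothetical, not DBCP's actual classes): every delegated call refreshes a last-used timestamp, and close() hands the physical connection back to the pool instead of closing it.
import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.SQLException;
import java.util.function.Consumer;

class TrackedConnection {

    private final Connection real;
    private final Consumer<Connection> returnToPool;
    private volatile long lastUsed = System.currentTimeMillis();

    TrackedConnection(Connection real, Consumer<Connection> returnToPool) {
        this.real = real;
        this.returnToPool = returnToPool;
    }

    PreparedStatement prepareStatement(String sql) throws SQLException {
        lastUsed = System.currentTimeMillis(); // activity resets the abandonment clock
        return real.prepareStatement(sql);
    }

    void close() {
        returnToPool.accept(real); // back to the pool, not physically closed
    }

    long lastUsed() {
        return lastUsed; // the pool compares this against removeAbandonedTimeout
    }
}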
On the BasicDataSource itself, there is a property called "removeAbandoned" and another called "removeAbandonedTimeout" that configure this behavior of returning abandoned connections to the connection pool.
"removeAbandoned" is a boolean that indicates whether abandoned connections should be returned to the pool. Defaults to "false".
"removeAbandonedTimeout" is an int, that represents the number of seconds of inactivity that should be allowed to pass before a connection is considered to be abandoned. Default value is 300 (about 5 minutes).
Looking at the test for abandoned connections, it appears that when all connections in the pool are "in use" when a new connection is requested, the "in-use" connections are tested for abandonment (they maintain a timestamp of last used time).
See BasicDataSource#setRemoveAbandoned(boolean) and BasicDataSource#setRemoveAbandonedTimeout(int)
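For example (a sketch using the properties described above):
import org.apache.commons.dbcp.BasicDataSource;

public class AbandonedConfig {

    public static void configure(BasicDataSource ds) {
        ds.setRemoveAbandoned(true);       // reclaim connections that look abandoned
        ds.setRemoveAbandonedTimeout(300); // seconds of inactivity before reclaiming
        ds.setLogAbandoned(true);          // log a stack trace of the code that borrowed it
    }
}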
Regardless of how clever or not your connection pool is in closing abandoned connections, you should always ensure each connection is closed in a finally block, e.g.:
Connection conn = getConnection();
try {
... // perform work
} finally {
conn.close();
}
Or use some other means such as Apache DBUtils.
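With Commons DbUtils, for instance, a QueryRunner opens, uses, and closes the connection internally, so nothing leaks even without an explicit finally block (a sketch; the table and column names are made up):
import javax.sql.DataSource;
import java.sql.SQLException;
import org.apache.commons.dbutils.QueryRunner;
import org.apache.commons.dbutils.handlers.ScalarHandler;

public class DbUtilsExample {

    public static String fetchName(DataSource ds, long id) throws SQLException {
        QueryRunner runner = new QueryRunner(ds);
        // connection, statement, and result set are all managed internally
        return runner.query("SELECT name FROM users WHERE id = ?",
                new ScalarHandler<String>(), id);
    }
}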
I'd like to ensure, that when I'm persisting any data to the database, using Fluent NHibernate, the operations are executed inside a transaction. Is there any way of checking that a transaction is active via an interceptor? Or any other eventing mechanism?
More specifically, I'm using System.Transactions.TransactionScope for transaction management, and just want to guard against forgetting to use it.
If you had one place in your code that built your session, you could start the transaction there and fix the problem at a stroke.
I haven't tried this, but I think you could create a listener implementing IFlushEventListener. Something like:
public void OnFlush(FlushEvent @event)
{
    if (!@event.Session.Transaction.IsActive)
    {
        throw new Exception("Flushing session without an active transaction!");
    }
}
It's not clear to me (and Google didn't help) exactly when OnFlush is called. There may also be an implicit transaction that could set IsActive to true.
If you had been using Spring.Net for your transaction handling, you could use an anonymous inner object to ensure that your DAOs/ServiceLayer Objects are always exposed with a TransactionAdvice around their service methods.
See the spring documentation for an example.
To what end? NHProf will give you warnings if you're not executing inside a transaction. Generally you should be developing with this tool open anyway...