I'm using the BoneCP connection pooling mechanism and I want to manage my transactions with the Spring framework's transaction support. I found an example of Spring transaction management and tried to apply it. I got a DataSource instance from my connection pool and passed it to a newly created DataSourceTransactionManager, as below.
DataSource dataSource = new BoneCPDataSource(getConnectionPool().getConfig());
DataSourceTransactionManager transactionManager = new DataSourceTransactionManager();
transactionManager.setDataSource(dataSource);
// Note: data-access code must use this same DataSource instance,
// otherwise its connections will not participate in the managed transaction.
But when I tested it, I saw that the data was written to the store before the commit operation.
Could this be related to creating a new data source before the transaction manager is created? Or do you have any other ideas?
I found the cause of the problem. I was using the SDB RDF storage component, and the issue is related to the implementation of SDB's add-triple method: I found that it directly calls the commit method of the current SQL connection. There is no problem with integrating DataSourceTransactionManager with the BoneCP connection pool.
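To illustrate why a component calling commit() on the shared connection defeats the surrounding transaction manager, here is a minimal toy model. This is plain Java for illustration only, not the real JDBC or Spring API; ToyConnection and its methods are made-up names.

```java
import java.util.ArrayList;
import java.util.List;

// Toy stand-in for a connection with autocommit disabled:
// writes are buffered and only become visible in the store on commit().
class ToyConnection {
    private final List<String> pending = new ArrayList<>();
    private final List<String> store;   // the "database"

    ToyConnection(List<String> store) { this.store = store; }

    void write(String row) { pending.add(row); }

    void commit() {                     // flush buffered writes
        store.addAll(pending);
        pending.clear();
    }
}

public class PrematureCommitDemo {
    public static void main(String[] args) {
        List<String> store = new ArrayList<>();
        ToyConnection conn = new ToyConnection(store);

        conn.write("triple-1");
        // A component that commits on its own (like SDB's add-triple
        // code did) makes the data visible long before the transaction
        // manager's commit would have run:
        conn.commit();

        System.out.println(store.contains("triple-1")); // prints "true"
    }
}
```

The point of the sketch: once any layer calls commit() on the underlying connection, the transaction manager has nothing left to control.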
Related
From my understanding, when you register a DbContext using AddDbContext, it is registered with a scoped lifetime, which means one DbContext per HTTP request. And, as I understand it, you don't need to use a using statement because the connection will be closed and cleaned up for you if you are not performing long-running DB calls. But when is the connection to the database opened? Does it open/close the connection automatically after each DB call, or is the connection opened at the time the DbContext is injected into a service and closed when the request finishes?
The reason I ask is that I am rewriting an existing app that uses both classic ASP and ASP.NET 4.8 (using EF and Microsoft Enterprise Data Library). I had previously implemented repositories/services and had the DbContext in a repository where all the calls in that repository used EF. Other repositories use SQL statements via Dapper. I was asked to remove the repositories and use our old method of creating separate DataAccess files (each implementing the same partial class), named based on the database being used rather than on functionality. I really don't like this idea, because we have projects where there are hundreds of database calls in one file simply because they happen to use the same database (some even using multiple databases), and we end up with DataAccess methods that are very similar or the same but named differently. I refactored the repositories to follow this pattern. Since all the files implement the same partial class, I had to inject the DbContext into the constructor, which a good chunk of the database calls will not use. After doing so, I started wondering whether the calls that don't use EF will now have an extra open DB connection for no reason.
But when is the connection to the database opened?
From this we can see:
By default, a single DbContext will be created and used for each HTTP request. A connection can be opened and closed many times during the lifetime of a DbContext, but if the DbContext starts a transaction, the same underlying connection will remain open and be reused for the lifetime of the transaction.
When you create a DbContext, it's not immediately initialized, nor is its connection immediately opened.
It opens a connection when loading or saving and closes it once it's done. If you force it by starting a transaction and keeping it open, the connection will remain open for the lifetime of the DbContext.
See this to know more.
Every time you use Spring's JdbcTemplate, does it actually create a new connection to the SQL server?
Answer 1:
In short: yes, it does close the connection. The long answer: it depends.
When you don't have a Spring-managed transaction, the JdbcTemplate will call the close() method on the Connection. However, if there is already a connection available due to Spring's transaction management, closing the connection will be handled by Spring's transaction support, which in turn will also call close() on the Connection.
The only difference is when the connection is closed; close() will be called either way.
Whether the connection is actually closed depends on which DataSource is used. In general, when using a connection pool, the connection is returned to the pool instead of actually being closed.
Answer 2:
Yes, it does.
And if the connection was obtained from a connection pool, it won't actually be closed; rather, it will be sent back to the pool.
Answer 3:
There is no need to close the connection manually; the Spring container itself takes care of that. Kindly refer to this Spring URL:
http://docs.spring.io/spring/docs/3.0.x/spring-framework-reference/html/jdbc.html
Answer 4:
You can also close the connection yourself when using JdbcTemplate; in some cases it is necessary to close the connection after executing a query, otherwise you may run into connection issues. For more details, visit:
Close connection in JDBC template: http://www.javaiq.in/2019/05/jdbctemplate.html
Link is: https://inneka.com/programming/spring/does-springs-jdbctemplate-close-the-connection-after-query-timeout/
Understanding the DataSource interface is the key to answering this question. JdbcTemplate has a dependency on a DataSource, and the official Javadoc for the DataSource interface says:
A DataSource is a factory for connections to the physical data source that this DataSource object represents.
It means that every time a JdbcTemplate is used to execute a SQL query, it requests a connection from the DataSource. The DataSource retrieves a connection from the connection pool, if available, and gives it to JdbcTemplate. JdbcTemplate then executes the SQL query and releases the connection back to the pool.
So yes, a connection is needed every time JdbcTemplate executes a SQL query, but that connection is always fetched from the connection pool that any pooling implementation of the DataSource interface maintains.
Maintaining a connection pool is a lot more time-efficient than creating a new connection on demand. Obviously, considering memory limits, there has to be an upper cap on the connection pool size.
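The pool-return behaviour described above can be sketched with a minimal toy pool. This is plain Java for illustration, assuming nothing beyond the idea itself; it is not BoneCP's or Spring's actual implementation, and SimplePool/PooledConnection are invented names.

```java
import java.util.ArrayDeque;
import java.util.Deque;

// Toy connection whose close() hands it back to the pool
// instead of tearing down any physical resource.
class PooledConnection {
    private final SimplePool pool;
    PooledConnection(SimplePool pool) { this.pool = pool; }
    public void close() { pool.release(this); }
}

class SimplePool {
    private final Deque<PooledConnection> idle = new ArrayDeque<>();

    SimplePool(int size) {
        for (int i = 0; i < size; i++) idle.push(new PooledConnection(this));
    }

    PooledConnection getConnection() { return idle.pop(); }
    void release(PooledConnection c) { idle.push(c); }
    int available()                  { return idle.size(); }
}

public class PoolDemo {
    public static void main(String[] args) {
        SimplePool pool = new SimplePool(2);
        PooledConnection c = pool.getConnection();
        System.out.println(pool.available()); // prints "1"
        c.close();                            // returned to the pool, not destroyed
        System.out.println(pool.available()); // prints "2"
    }
}
```

Real pools wrap the physical java.sql.Connection in a proxy whose close() does exactly this hand-back, which is why calling close() through JdbcTemplate is cheap.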
I've been doing a lot of reading on this one, and some of the documentation doesn't seem to match reality. Some of the potential causes discussed elsewhere would be appropriate here; however, they only apply to SQL Server 2008 or earlier.
I define a transaction scope. I use a number of different EF contexts (in different method calls) within the transaction scope; however, all but one of them are used only for data reads. The final use of a context is to create and add some new objects to the context, and then call
context.SaveChanges()
IIS is running on one server. The DB (SQL Server 2012) is running on another server (Windows Server 2012).
When I execute this code, I receive the error:
Network access for Distributed Transaction Manager (MSDTC) has been disabled. Please enable DTC for network access in the security configuration for MSDTC using the Component Services Administrative tool.
Obviously, if I enable DTC on the IIS machine, this goes away. However, why should I need to?
This:
http://msdn.microsoft.com/en-us/library/ms229978.aspx
states:
• At least one durable resource that does not support single-phase notifications is enlisted in the transaction.
• At least two durable resources that support single-phase notifications are enlisted in the transaction.
Which I understand is not the case here.
OK. I'm not entirely sure whether this should have been happening (according to the Microsoft documentation), but I have figured out why, and the solution.
I'm using the ASP.NET membership provider, and have two connection strings in my web.config. I thought the fact that they were pointing to the same DB was enough for them to be considered the same "durable resource".
However, I found that the membership connection string also had:
Connection Timeout=60;App=EntityFramework
whereas the Entity Framework connection string didn't.
Making the two connection strings identical meant that the transaction was not escalated to MSDTC.
I found this answer:
1. The long answer to why Quartz requires two data sources; if you want an even deeper answer, I believe I'll need to dig into the source code or do more research:
a. JobStoreCMT relies upon transactions being managed by the application which is using Quartz. A JTA transaction must be in progress before attempting to schedule (or unschedule) jobs/triggers. This allows the "work" of scheduling to be part of the application's "larger" transaction. JobStoreCMT actually requires the use of two datasources: one whose connections have their transactions managed by the application server (via JTA), and one whose connections do not participate in global (JTA) transactions. JobStoreCMT is appropriate when applications are using JTA transactions (such as via EJB Session Beans) to perform their work. (Ref: http://quartz-scheduler.org/documentation/quartz-1.x/configuration/ConfigJobStoreCMT)
However, there is a suspected conflict with a non-transactional driver in our particular application. Does anyone know if Quartz (JobStoreCMT) can work with just a transactional data source?
Does anyone know if Quartz (JobStoreCMT) can work with just a transactional data source?
No, you must have a datasource of each type. Invocations on the API by the client application use the XA-capable connections, so that the work joins the application's transaction. Work done by the scheduler's internal threads uses the non-XA connections.
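As a sketch, a JobStoreCMT configuration names both datasources. The property keys below come from the Quartz 1.x configuration reference; the datasource names (myManagedDS, myNonManagedDS) and the JNDI URLs are placeholders you would replace with your own.

```properties
# Job store that expects container-managed (JTA) transactions
org.quartz.jobStore.class = org.quartz.impl.jdbcjobstore.JobStoreCMT

# Datasource whose connections participate in the application's JTA transaction
org.quartz.jobStore.dataSource = myManagedDS
org.quartz.dataSource.myManagedDS.jndiURL = java:comp/env/jdbc/myManagedDS

# Datasource with plain (non-JTA) connections for the scheduler's own threads
org.quartz.jobStore.nonManagedTXDataSource = myNonManagedDS
org.quartz.dataSource.myNonManagedDS.jndiURL = java:comp/env/jdbc/myNonManagedDS
```

If only the JTA datasource were configured, the scheduler's background threads would have no way to do their housekeeping outside the application's transactions, which is why both are mandatory.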
I am considering using Fluent NHibernate for a new application with SQL Server 2008 and I am having trouble understanding the connection handling behavior I am seeing.
I am monitoring connections using sp_who2 and here is what I see:
When the SessionFactory is created, a single connection is opened. This connection seems to stay open until the app is killed.
No connection is opened when a new session is opened. (That's OK; I understand NHibernate waits until the last moment to create database connections.)
No new connection is opened even when I run a query through NHibernate. I must assume it is using the connection created when the SessionFactory was created, which is still open. I set a breakpoint after the query (before the session was closed), and no new connections had appeared in sp_who.
Running an entire app through a single connection is not acceptable (obviously). How can I ensure each ISession gets its own connection? I'm sure there's something obvious I'm missing here...
Thanks in advance.
The behaviour you are seeing is nothing NHibernate-specific; connection pooling is default behaviour in SQL Server.
Even if it sounds awkward at first glance, it is actually a good thing, because creating and maintaining new connections is expensive.
(for more information, see the Wikipedia article about connection pooling)
So there is no need to try to get NH to open a new connection for each session, because reusing existing connections actually improves SQL Server performance.