Setting Transaction Isolation Level in Berkeley DB Java Edition for Distributed Transactions (XA)

I am using distributed transactions in a BDB JE application to coordinate transactions across multiple BDB JE environments, and I want to set the transaction isolation level to serializable. To begin a distributed transaction, I use an Xid that I generate myself (and must ensure is globally unique), eschewing BDB JE's native Transaction class. The transaction branch that starts is thread-local, so I pass null for the transaction parameter in operations. So how do I set the isolation level? Does it already default to serializable? My Google-fu isn't turning anything up...

I am a huge fan of Stack Overflow, but I'm also the Product Manager for Oracle Berkeley DB, so I have to first suggest that the "right place" to ask this kind of question is on the OTN Forum for BDB JE (http://forums.oracle.com/forums/forum.jspa?forumID=273).
Here is the transaction-processing guide for BDB JE: http://download.oracle.com/berkeley-db/docs/je/3.2.76/TransactionGettingStarted/BerkeleyDB-JE-Txn.pdf
And here is information about LockMode http://download.oracle.com/berkeley-db/docs/je/3.3.62/java/com/sleepycat/je/LockMode.html
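For what it's worth, here is a minimal, unofficial sketch of one way to do it. It assumes JE's XAEnvironment (which implements XAResource) and EnvironmentConfig.setTxnSerializableIsolation(true), which makes serializable (degree 3) isolation the default for transactions that don't configure their own; my understanding is that this covers XA branches where you pass null for the Transaction parameter, but verify that against the Javadoc for your JE version. The environment path and Xid contents are placeholders.

import java.io.File;
import javax.transaction.xa.XAResource;
import javax.transaction.xa.Xid;
import com.sleepycat.je.EnvironmentConfig;
import com.sleepycat.je.XAEnvironment;

public class XaSerializableSketch {
    public static void main(String[] args) throws Exception {
        EnvironmentConfig config = new EnvironmentConfig();
        config.setAllowCreate(true);
        config.setTransactional(true);
        // Default all transactions, including XA branches, to serializable.
        config.setTxnSerializableIsolation(true);

        XAEnvironment env = new XAEnvironment(new File("/tmp/je-env"), config);

        Xid xid = new SimpleXid(1, "gtrid-placeholder".getBytes(), "bqual".getBytes());
        env.start(xid, XAResource.TMNOFLAGS);
        // ... perform Database operations passing null for the Transaction;
        // they join the XA branch associated with this thread ...
        env.end(xid, XAResource.TMSUCCESS);
        env.prepare(xid);
        env.commit(xid, false);
        env.close();
    }

    // Trivial Xid implementation for the sketch; you must guarantee
    // global uniqueness of the ids yourself.
    static class SimpleXid implements Xid {
        private final int formatId;
        private final byte[] gtrid;
        private final byte[] bqual;
        SimpleXid(int formatId, byte[] gtrid, byte[] bqual) {
            this.formatId = formatId;
            this.gtrid = gtrid;
            this.bqual = bqual;
        }
        public int getFormatId() { return formatId; }
        public byte[] getGlobalTransactionId() { return gtrid; }
        public byte[] getBranchQualifier() { return bqual; }
    }
}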
Try Google-Fu of:
site:download.oracle.com berkeley db java edition
The docs for all products live in the Oracle "DocArch" system, which publishes to the download server (because when it was first designed there was no "Interweb", only CD-ROMs and printed materials which you would download, get it?).
Good luck.

Related

Groovy set isolation level - locking UPDATE SQL transaction

Is it possible in pure Groovy to lock an UPDATE transaction for writing (leaving it free for reading)?
The DB behind is MSSQL.
I see there are ways to do it in Java, or at the level of a stored procedure, but I am interested in the Groovy way.
This is possible using the optimistic transaction isolation levels READ COMMITTED SNAPSHOT or SNAPSHOT. They use row versioning for reads instead of shared locks: when a row is updated (and locked with an exclusive lock), its contents are copied to the version store in tempdb, so other processes don't wait for the update to finish but simply read the previous version of the row from the version store.
Here is more reading about snapshot isolation levels: https://msdn.microsoft.com/en-us/library/tcbchxcb(v=vs.110).aspx
Because both of them need the version store in tempdb, they can't simply be specified on a connection; an ALTER DATABASE is needed instead: https://technet.microsoft.com/en-us/library/ms175095(v=sql.105).aspx
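For completeness, here is a sketch of the setup, in plain Java since Groovy calls the same JDBC API. The database name, table, and credentials are made up, and it assumes the Microsoft JDBC driver plus ALTER DATABASE permission.

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

public class RcsiSketch {
    public static void main(String[] args) throws Exception {
        String url = "jdbc:sqlserver://localhost;databaseName=mydb;"
                + "user=app;password=secret";
        try (Connection conn = DriverManager.getConnection(url);
             Statement st = conn.createStatement()) {
            // One-time, database-level switch; ROLLBACK IMMEDIATE kicks out
            // other sessions so the ALTER can take effect.
            st.execute("ALTER DATABASE mydb SET READ_COMMITTED_SNAPSHOT ON "
                    + "WITH ROLLBACK IMMEDIATE");
            // Ordinary READ COMMITTED reads now use row versions from tempdb,
            // so they are not blocked by a concurrent UPDATE's exclusive locks.
            try (ResultSet rs = st.executeQuery("SELECT id, credits FROM accounts")) {
                while (rs.next()) {
                    System.out.println(rs.getLong("id") + ": " + rs.getBigDecimal("credits"));
                }
            }
        }
    }
}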

JTA Transaction Support for Gemfire

Can anyone please help me with the queries below?
1. How do I achieve 100% consistency between the cache and the database if I want both GemFire and the database to participate in a JTA transaction as regular transactional resources (with two-phase commit support)?
2. Does the "last resource" optimization guarantee 100% consistency?
3. Which JTA transaction managers are supported and tested with the "last resource" optimization?
4. Which external transaction managers are supported and tested with GemFire?
For a non-last-resource JTA transaction, GemFire registers as a Synchronization, so it has a say in the outcome of the transaction; but if the GemFire server were to die within the small window between the beforeCommit() and afterCommit() calls, GemFire would be inconsistent with the other resources in the transaction. When GemFire is used as a last resource, it gets the final say in the outcome of the transaction, so this window is effectively closed.
You will need to consult the documentation of your JTA transaction manager to see whether it can guarantee 100% consistency.
In GemFire, the last-resource optimization is tested with WebLogic, and GemFire works with JTA transaction managers that register themselves in JNDI under the following names (a lookup sketch follows the list):
"java:/TransactionManager"
"java:comp/TransactionManager"
"java:appserver/TransactionManager"
"java:pm/TransactionManager"
"java:comp/UserTransaction"

.NET SqlDataReader isolated read

I have a SQL Server database which stores accounts with credits (about 200,000 records), and a separate table which stores the transactions (about 20,000,000).
Whenever a transaction is added to the database the credits are updated.
What I need to do is update client programs (using a web service) to store the credits locally, and whenever new transactions are added to the server they are sent to the clients as well (using timestamps for the delta). My main problem is creating the first data set for the client. I need to supply the list of all accounts and the last timestamp on the transaction table.
This means I have to produce the list and the last timestamp within a single snapshot, because any updates during the creation of the list would cause a mismatch between the credits total and the last transaction timestamp.
I've researched the ALLOW_SNAPSHOT_ISOLATION setting and using snapshot isolation on the SqlCommand transaction, but from what I've read this will induce a significant performance penalty. Is this true, and can this problem be solved by other means?
"but from what I've read this will induce a significant performance penalty."
I don't even want to know where you read that. I'll refer you to the official document. The costs come from additional tempdb space used for row versions and from traversing old row versions. These problems do not concern you if you have a low write rate.
Snapshot isolation is a boon for solving blocking and consistency issues. It is a perfect match for your scenario.
Many SQL Server questions on Stack Overflow lead me to comment "did you investigate snapshot isolation yet?". Most underused feature.
Oracle and Postgres have it always on.
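To make that concrete, here is a minimal sketch of the initial sync under SNAPSHOT isolation. It assumes the Microsoft JDBC driver (whose SQLServerConnection.TRANSACTION_SNAPSHOT constant requests the SNAPSHOT level), a made-up schema, and ALLOW_SNAPSHOT_ISOLATION already enabled on the database.

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;
import com.microsoft.sqlserver.jdbc.SQLServerConnection;

public class InitialSyncSketch {
    public static void main(String[] args) throws Exception {
        String url = "jdbc:sqlserver://localhost;databaseName=mydb;"
                + "user=app;password=secret";
        try (Connection conn = DriverManager.getConnection(url)) {
            conn.setAutoCommit(false);
            conn.setTransactionIsolation(SQLServerConnection.TRANSACTION_SNAPSHOT);
            try (Statement st = conn.createStatement()) {
                // Both queries see the same transaction-start snapshot, so the
                // account list and the high-water-mark timestamp match exactly.
                try (ResultSet rs = st.executeQuery(
                        "SELECT account_id, credits FROM accounts")) {
                    while (rs.next()) {
                        // ... stream each account to the client ...
                    }
                }
                try (ResultSet rs = st.executeQuery(
                        "SELECT MAX(ts) AS last_ts FROM transactions")) {
                    if (rs.next()) {
                        System.out.println("delta high-water mark: "
                                + rs.getTimestamp("last_ts"));
                    }
                }
            }
            conn.commit();
        }
    }
}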
Don't jump onto the SI wagon hastily. Like everything else, it has its benefits and its drawbacks.
As far as the drawbacks are concerned, the application might, for example, count on blocking behaviour and/or be willing to wait for the latest version of the data. You should thoroughly test the application under SI to be sure it behaves correctly. Further, a long-running uncommitted transaction can make a mess of the version store and lead to dramatic tempdb growth, so monitoring is a must.
Also, SI might be overkill for you if you don't normally have blocking issues.
Instead, if what you need is a one-off or close to it, create a database snapshot of your database, create the initial list from that snapshot, and then simply drop it.
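A rough sketch of that one-off approach; all names and the snapshot file path are hypothetical, and database snapshots have edition and permission requirements worth checking first.

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.Statement;

public class OneOffSnapshotSketch {
    public static void main(String[] args) throws Exception {
        String url = "jdbc:sqlserver://localhost;databaseName=master;"
                + "user=app;password=secret";
        try (Connection conn = DriverManager.getConnection(url);
             Statement st = conn.createStatement()) {
            // Freeze a consistent, read-only view of mydb.
            st.execute("CREATE DATABASE mydb_snap ON "
                    + "(NAME = mydb_data, FILENAME = 'C:\\snap\\mydb_snap.ss') "
                    + "AS SNAPSHOT OF mydb");
            // ... build the initial account list and read MAX(ts) from the
            // tables in mydb_snap, then throw the snapshot away ...
            st.execute("DROP DATABASE mydb_snap");
        }
    }
}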

What do I expect from changing default transaction isolation level from READ_COMMITTED_SNAPSHOT to READ_COMMITTED?

In SQL Server the default isolation level is READ_COMMITTED but in SQL Azure the default level is READ_COMMITTED_SNAPSHOT.
Suppose I change the default level in my SQL Azure database to READ_COMMITTED (using SET TRANSACTION ISOLATION LEVEL) so that it behaves like SQL Server.
What negative consequences should I expect?
Your application logic may break. Really, it depends a lot on what you're doing. Overall, some pointers:
True SNAPSHOT has far fewer 'surprises' than RCSI. Because the snapshot point is clearly defined in true SNAPSHOT as the moment the transaction started, it does not suffer from RCSI's issue of seeing different row versions inside the same transaction (under RCSI each statement gets its own snapshot, which leads to very subtle and difficult-to-understand issues).
You will get update conflicts instead of deadlocks, though not exactly 'instead of'. There are some differences, and the app definitely may not expect the new error code 3960.
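As an illustration, retry logic that used to catch only deadlock victims may need to treat both codes as transient. This is a sketch with made-up names, not a drop-in utility.

import java.sql.Connection;
import java.sql.SQLException;

public class SnapshotRetrySketch {
    interface SqlWork {
        void run(Connection conn) throws SQLException;
    }

    static void runWithRetry(Connection conn, SqlWork work) throws SQLException {
        for (int attempt = 1; ; attempt++) {
            try {
                work.run(conn);
                conn.commit();
                return;
            } catch (SQLException e) {
                conn.rollback();
                // 3960 = snapshot update conflict, 1205 = deadlock victim.
                boolean retryable = e.getErrorCode() == 3960
                        || e.getErrorCode() == 1205;
                if (retryable && attempt < 3) {
                    continue; // transient; rerun the whole transaction
                }
                throw e;
            }
        }
    }
}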
I would recommend going over Implementing Snapshot or Read Committed Snapshot Isolation in SQL Server: A Guide.

Using IsolationLevel.Snapshot but DB is still locking

I'm part of a team building an ADO.NET based web-site. We sometimes have several developers and an automated testing tool working simultaneously on a development copy of the database.
We use snapshot isolation level, which, to the best of my knowledge, uses optimistic concurrency: rather than locking, it hopes for the best and throws an exception if you try to commit a transaction whose affected rows have been altered by another party during the transaction.
To use snapshot isolation level we use:
ALTER DATABASE <database name>
SET ALLOW_SNAPSHOT_ISOLATION ON;
and in C#:
Transaction = SqlConnection.BeginTransaction(IsolationLevel.Snapshot);
Note that IsolationLevel Snapshot isn't the same as ReadCommitted Snapshot, which we've also tried, but are not currently using.
When one of the developers enters debug mode and pauses the .NET app, they will hold a connection with an active transaction while debugging. Now, I'd expect this not to be a problem - after all, all transactions are using snapshot isolation level, so while one transaction is paused, other transactions should be able to proceed normally since the paused transaction isn't holding any locks. Of course, when the paused transaction completes, it is likely to detect a conflict; but that's acceptable so long as other developers and the automated tests can proceed unhindered.
However, in practice, when one person halts a transaction while debugging, all other DB users attempting to access the same rows are blocked despite using snapshot isolation level.
Does anybody know why this occurs, and/or how I can achieve true optimistic (non-blocking) concurrency?
The resolution (an unfortunate one for me): Remus Rusanu noted that writers always block other writers; this is backed up by MSDN - it doesn't quite come out and say so, but only ever mentions avoiding reader-writer locks. In short, the behavior I want isn't implemented in SQL Server.
SNAPSHOT isolation level affects, like all isolation levels, only reads. Writes are still blocking each other. If you believe that what you see are read blocks, then you should investigate further and check out the resource types and resource names on which blocking occurs (wait_type and wait_resource in sys.dm_exec_requests).
I wouldn't advise making code changes in order to support a scenario that involves developers staring at the debugger for minutes on end. If you believe this scenario can repeat in production (i.e. a client hangs), then it is a different story. To achieve what you want, you must minimize writes and perform all writes at the end of the transaction, in one single call that commits before returning. This way no client can hold X locks for a long time (it cannot hang while holding X locks). In practice this is pretty hard to pull off and requires a lot of discipline on the part of developers in how they write the data access code.
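A sketch of that pattern with a hypothetical schema: all reads, validation, and user interaction happen before any transaction is opened, and the write transaction (with its X locks) lives only for the final call.

import java.math.BigDecimal;
import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.SQLException;
import javax.sql.DataSource;

public class WriteAtEndSketch {
    record Debit(long accountId, BigDecimal amount) {}

    static void apply(DataSource ds, Debit debit) throws SQLException {
        // Everything slow (reads, computation, debugger pauses...) has already
        // happened by this point, outside any transaction.
        try (Connection conn = ds.getConnection()) {
            conn.setAutoCommit(false);
            try (PreparedStatement ps = conn.prepareStatement(
                    "UPDATE accounts SET credits = credits - ? WHERE id = ?")) {
                ps.setBigDecimal(1, debit.amount());
                ps.setLong(2, debit.accountId());
                ps.executeUpdate();
            }
            conn.commit(); // X locks are held only for this short window
        }
    }
}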
Have you looked at the locks when one developer pauses the transaction? Also, just turning on the snapshot isolation level does not have much effect on its own. Have you set ALLOW_SNAPSHOT_ISOLATION ON?
Here are the steps:
ALTER DATABASE <database name>
SET READ_COMMITTED_SNAPSHOT ON;
GO
ALTER DATABASE <database name>
SET ALLOW_SNAPSHOT_ISOLATION ON;
GO
After the database has been enabled for snapshot isolation, developers and users must then request that their transactions be run in this snapshot mode. This must be done before starting a transaction, either by a client-side directive on the ADO.NET transaction object or within their Transact-SQL query by using the following statement:
SET TRANSACTION ISOLATION LEVEL SNAPSHOT
Raj