Rollback not working properly when using SQLAlchemy on SQL Server

I am facing an issue with SQLAlchemy's sessions in Python. The underlying database is SQL Server.
What am I trying to do?
Insert into parent and child tables (both have an auto-increment PK) as part of one transaction, rolling back the changes on both if any error occurs, and otherwise committing the changes.
This is how the pseudo-code looks:
session = Session()  # get the session from somewhere (autocommit=False)
try:
    parent = Parent(...)
    session.add(parent)
    session.flush()
    # By now I expect the new ID to be populated on the parent object.
    child = Child(...)
    session.add(child)
    session.flush()
    # raise Exception('abc') ->
    # In case of this exception I expect no changes to be written to the database,
    # but surprisingly, after a mere flush I can already see the changes in SQL Server.
    session.commit()
except Exception:
    session.rollback()
Now when I check the logs, in the exception case I see this set of queries going in:
[20210612 03:43:23:530 base.py:727 INFO] BEGIN (implicit)
[20210612 03:43:23:531 base.py:1234 INFO] INSERT INTO PARENT TABLE
[20210612 03:43:23:534 base.py:1234 INFO] INSERT INTO CHILD TABLE
[20210612 03:43:23:574 base.py:747 INFO] ROLLBACK
Any ideas how this is happening and what the corrective actions might be?
Please let me know in case any other detail is required.
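One detail worth pinning down: flush() really does send the INSERTs, but inside the still-open transaction, so they only become durable at commit(). If the client used to look at SQL Server reads with READ UNCOMMITTED (or NOLOCK hints), it will see flushed-but-uncommitted rows that vanish again on ROLLBACK, which matches the log above. A minimal sketch to distinguish the two cases; the connection URL and parent_table name are placeholders, not anything from the question:

from sqlalchemy import create_engine, text

# Hypothetical connection URL and table name; substitute your own.
engine = create_engine("mssql+pyodbc://user:pass@mydsn")

# Writer: an INSERT inside a still-open transaction, like session.flush().
writer = engine.connect()
tx = writer.begin()
writer.execute(text("INSERT INTO parent_table (name) VALUES ('x')"))

# A reader at the default READ COMMITTED level will block on (or not see)
# the uncommitted row; a READ UNCOMMITTED reader will see it.
reader = engine.connect().execution_options(isolation_level="READ UNCOMMITTED")
print(reader.execute(text("SELECT COUNT(*) FROM parent_table")).scalar())

tx.rollback()  # the row disappears again, matching the ROLLBACK in the log
writer.close()
reader.close()

If the row only shows up for the READ UNCOMMITTED reader, the rollback is working as designed and it is the inspection tool that is reading dirty data.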

Related

Airflow/SQLAlchemy Error - Loading context has changed within a load/refresh handler

I am attempting to use clairvoyant's db-cleanup DAG to clear metadata in our xcom table, but when I run it, I receive the following warning, printed thousands of times before I manually stop the job so as not to take down our MySQL instance:
SAWarning: Loading context for <BaseXCom at 0x7f26f789b370> has changed within a load/refresh handler, suggesting a row refresh operation took place. If this event handler is expected to be emitting row refresh operations within an existing load or refresh operation, set restore_load_context=True when establishing the listener to ensure the context remains unchanged when the event handler completes.
The other cleanup tasks work fine, but it is the xcom table in particular I am having trouble with. We have hundreds/thousands of active DAGs, so the xcom table is being written to nearly every second or two. I think that is what is causing this error: the data is continually changing while it is being queried.
I have been unable to find the cause of this or any examples of how it can be resolved. I tried adding a "restore_load_context": True line as per the SQLAlchemy docs, but it did not work.
Here are the snippets I attempted to add to the database object and the cleanup task:
{
    "airflow_db_model": XCom,
    "age_check_column": XCom.execution_date,
    "keep_last": False,
    "keep_last_filters": None,
    "keep_last_group_by": None,
    "restore_load_context": True,
},
....
def cleanup_function(**context):
    logging.info("Retrieving max_execution_date from XCom")
    max_date = context["ti"].xcom_pull(
        task_ids=print_configuration.task_id, key="max_date"
    )
    max_date = dateutil.parser.parse(max_date)  # stored as ISO 8601 str in XCom
    airflow_db_model = context["params"].get("airflow_db_model")
    state = context["params"].get("state")
    age_check_column = context["params"].get("age_check_column")
    keep_last = context["params"].get("keep_last")
    keep_last_filters = context["params"].get("keep_last_filters")
    keep_last_group_by = context["params"].get("keep_last_group_by")
    restore_load_context = context["params"].get("restore_load_context")
So as not to paste too much code here: I am otherwise using the same code as the db-cleanup DAG. Has anyone encountered this and found a way to resolve it?
I am very inexperienced with SQLAlchemy and am entirely unsure where else to place this code or how to go about it.
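Judging from the warning text itself ("set restore_load_context=True when establishing the listener"), restore_load_context is a keyword passed when the load/refresh event listener is registered, not a key in the cleanup DAG's param dict, which would explain why the snippet above has no effect. A minimal sketch of where the flag actually goes; the handler body is a placeholder, and in this scenario the real listener lives somewhere inside Airflow or the DAG's dependencies, so that registration is the one that would need the flag:

from sqlalchemy import event

from airflow.models.xcom import BaseXCom  # class named in the warning; import path may vary by Airflow version

# The flag belongs on the listener registration itself:
@event.listens_for(BaseXCom, "load", restore_load_context=True)
def receive_load(target, context):
    ...  # a handler that may emit a row refresh while a load is in progress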

Doctrine deadlock with ORM updates

I'm trying to figure out what is causing deadlocks in my Symfony 2 application. I'm running a cronjob that does batch updates on a fairly large dataset, and one part of it causes this error:
Doctrine\DBAL\DBALException: An exception occurred while executing
'UPDATE SpotEvent SET ts = ?, current = ? WHERE id = ?' with params
["2015-12-28 00:35:27", 1, 39316]:
SQLSTATE[40P01]: Deadlock detected: 7 ERROR: deadlock detected
DETAIL: Process 32030 waits for ShareLock on transaction 2130787; blocked by process 32029.
Process 32029 waits for ShareLock on transaction 2130786; blocked by process 32030.
HINT: See server log for query details.
CONTEXT: while updating tuple (105,68) in relation "spotevent" (uncaught exception)
at /home/maf/symfony/vendor/doctrine/dbal/lib/Doctrine/DBAL/DBALException.php line 91
while running console command
The code causing it is basically this:
check event
if (already in database) {
    update timestamp
} else {
    create new
}
From what I see in the error, the first branch causes the deadlock, but from what I read about deadlocks, the second should be more likely. In any case I don't understand why I have a deadlock at all.
I should say I am running this job in 6 parallel processes. However, there is no overlap between them (i.e. job 1 checks 1-200, job 2 checks 201-400, etc.).
I'm using PostgreSQL as the database backend. My "check event" step is done using DQL, everything else is pure ORM.
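There is no accepted answer in this thread, but the two standard mitigations for SQLSTATE 40P01 are to make all writers touch rows in a consistent order (even disjoint ID ranges can collide on index or foreign-key locks) and to retry the transaction when the database picks it as the deadlock victim, since the victim is rolled back and is safe to re-run. A sketch of the retry pattern, shown here in Python with psycopg2 against the same PostgreSQL backend rather than in Doctrine; the DSN is illustrative and the UPDATE mirrors the one in the error message:

import psycopg2
from psycopg2 import errors

conn = psycopg2.connect("dbname=app")  # illustrative DSN

def update_event_with_retry(event_id, ts, retries=3):
    """Retry the UPDATE when PostgreSQL reports a deadlock (SQLSTATE 40P01)."""
    for _ in range(retries):
        try:
            with conn.cursor() as cur:
                cur.execute(
                    "UPDATE SpotEvent SET ts = %s, current = %s WHERE id = %s",
                    (ts, True, event_id),
                )
            conn.commit()
            return
        except errors.DeadlockDetected:
            conn.rollback()  # the victim's locks are released; try again
    raise RuntimeError("still deadlocking after %d attempts" % retries)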

Data is not properly stored to hsqldb when using pooled data source by dbcp

I'm using HSQLDB to create cached tables and indexed tables.
The data being stored arrives at a pretty high frequency, so I need to use a connection pool.
Also, because there is a lot of data, I do not call CHECKPOINT on every commit, but rather expect the data to be flushed after 50,000 rows are inserted.
So the thing is that I can see the .data file growing, but when I connect with an HSQLDB client, I don't see the tables or the data.
So I ran 2 simple tests: one inserted a single row, and one inserted 60,000 rows into a new table. In both cases I couldn't see the result in any HSQLDB client.
(Note that I use shutdown=true.)
When I add a CHECKPOINT after each commit, it solves the problem.
Also, if I specify in the connection string to use the log, it solves the problem (though I don't want the log in production). Not using a pooled connection also solved the problem, and lastly, so did using the pooled data source and explicitly closing it before shutdown.
So I guess that some connections in the connection pool are not being closed, preventing the db from somehow committing the changes and making them available to the client. But then, why couldn't I see the result even with 60,000 rows?
I would also expect the pool to be closed automatically...
What am I doing wrong? What is happening behind the scenes?
The code to get the data source looks like this:
Class.forName("org.hsqldb.jdbcDriver");
String url = "jdbc:hsqldb:" + m_dbRoot + dbName + "/db" + ";hsqldb.log_data=false;shutdown=true;hsqldb.nio_data_file=false";
ConnectionFactory connectionFactory = new DriverManagerConnectionFactory(url, user, password);
GenericObjectPool connectionPool = new GenericObjectPool();
KeyedObjectPoolFactory stmtPool = new GenericKeyedObjectPoolFactory(null);
new PoolableConnectionFactory(connectionFactory, connectionPool, stmtPool, null, false, true);
DataSource ds = new PoolingDataSource(connectionPool);
And I'm using this Pooled data source to create table:
Connection c = m_dataSource.getConnection();
Statement st = c.createStatement();
String script = String.format("CREATE CACHED TABLE IF NOT EXISTS %s (id %s NOT NULL, entity %s NOT NULL, PRIMARY KEY (id));", m_tableName, m_idGenerator.getIdType(), TABLE_ENTITY_TYPE);
st.execute(script);
st.close();
c.close();
And insert rows:
Connection c = m_dataSource.getConnection();
c.setAutoCommit(false);
PreparedStatement stmt = c.prepareStatement(m_sqlInsert);
stmt.setObject(1, id);
stmt.setBinaryStream(2, Serializer.Helper.serialize(m_serializer, entity));
stmt.executeUpdate();
stmt.close();
c.commit();
c.close();
So the above seems to add data, but it cannot be seen.
When I explicitly called
connectionPool.close();
Then, and only then, could I see the result.
I also tried to use JDBCDataSource and it worked as well.
So what is going on? And what is the right way to do this?
Your method of accessing the database from outside your application process is simply wrong.
Only one Java process at a time is supposed to connect to a file: database.
In order to achieve your aim, launch an HSQLDB server within your application, using exactly the same JDBC URL. Then connect to this server from the external client.
See the Guide:
http://www.hsqldb.org/doc/2.0/guide/listeners-chapt.html#lsc_app_start
Update: The OP commented that the external client was used after the application had stopped. Because you have turned the log off with hsqldb.log_data=false, nothing is persisted permanently. You need to perform an explicit CHECKPOINT or SHUTDOWN when your application completes its work. You cannot rely on shutdown=true at all, even without connection pooling.
See the Guide:
http://www.hsqldb.org/doc/2.0/guide/deployment-chapt.html#dec_bulk_operations

"This SqlTransaction has completed; it is no longer usable."... configuration error?

I've been working on this for about a day and a half now, and have searched numerous blogs and help articles on the Web. I found several questions on SO related to this error, but I didn't think they quite applied to my situation (or, in some cases, unfortunately, I couldn't understand them well enough to implement :P). I'm not sure I can describe this well enough to get help... but here goes:
We have a .NET app to track our resources. There's an export function to copy a resource to the time tracking system and the billing system; this accesses a stored procedure that links to the time and billing databases.
I recently moved the billing system database to a new server (original server: Server 2003 SP2, SQL 2005; new server: Server 2008 R2, SQL 2008 R2). I have a Linked Server set up that points to the 2008 databases. I updated the stored procedure to point to the 2008 server, and then I got an error about MSDTC and RPC (http://www.safnet.com/writing/tech/archives/2007/06/server_myserver.html). I enabled 'rpc/rpc out' on the Linked Server and set MSDTC to allow Network Access (something like this: http://www.sqlwebpedia.com/content/msdtc-troubleshooting).
Now I'm getting the above error when I try to run the export function: "This SqlTransaction has completed; it is no longer usable." What seems odd to me is that when I just run the stored procedure (from SSMS), it completes successfully.
Has anyone seen this before? Have I missed something in the configuration? I keep going over the same pages, and the only thing I found was that I didn't reboot after making the MSDTC changes (mentioned in here: http://social.msdn.microsoft.com/forums/en-US/adodotnetdataproviders/thread/7172223f-acbe-4472-8cdf-feec80fd2e64/).
I can post part or all of the stored procedure, if it would help... please let me know.
I believe this error message is due to a "zombie transaction".
Look for possible areas where the transaction is being committed twice (or rolled back twice, or rolled back and committed, etc.). Does the .NET code commit the transaction after the SP has already committed it? Does the .NET code roll it back on encountering an error, then attempt to roll it back again in a catch (or finally) clause?
It's possible an error condition was never being hit on the old server, and thus the faulty "double rollback" code was never hit. Maybe now you have a situation where there is some configuration error on the new server, and now the faulty code is getting hit via exception handling.
Can you debug into the error code? Do you have a stack trace?
I had this recently after refactoring in a new connection manager. A new routine accepted a transaction so it could be run as part of a batch, problem was with a using block:
public IEnumerable<T> Query<T>(IDbTransaction transaction, string command, dynamic param = null)
{
    using (transaction.Connection)
    {
        using (transaction)
        {
            return transaction.Connection.Query<T>(command, new DynamicParameters(param), transaction, commandType: CommandType.StoredProcedure);
        }
    }
}
It looks as though the outer using was closing the underlying connection, so any attempt to commit or roll back the transaction threw up the message "This SqlTransaction has completed; it is no longer usable."
I removed the usings, added a covering test, and the problem went away.
public IEnumerable<T> Query<T>(IDbTransaction transaction, string command, dynamic param = null)
{
    return transaction.Connection.Query<T>(command, new DynamicParameters(param), transaction, commandType: CommandType.StoredProcedure);
}
Check for anything that might be closing the connection while inside the context of a transaction.
Had the exact same problem and just could not find the right solution.
Hope this helps somebody.
I have a .NET Core 3.1 WebApi with EF Core. Upon receiving multiple calls at the same time, the application was trying to add and save changes to the database at the same time.
In my case, the problem was that the table the data would be saved in did not have a primary key set.
Somehow, when the migration was run from the application, EF Core missed that the ID in the model was supposed to be a primary key.
I found the problem by opening the SQL Profiler and seeing that all transactions were successfully submitted to the database (from the application), but only one new row was created. The profiler also showed that some type of deadlock was happening, but I couldn't see much more in the trace logs of the profiler.
On further inspection, I noticed that the primary key identifier was missing on the "Id" column.
The exceptions I got from my application were:
This SqlTransaction has
completed; it is no longer usable.
and/or
An exception has been raised that is likely due to a transient
failure. Consider enabling transient error resiliency by adding
'EnableRetryOnFailure()' to the 'UseSqlServer' call.
I had the same problem. This error occurs because of connection pooling. When two or more users access the system, connection pooling reuses a connection, and the transaction along with it. If the first user executes a commit or a rollback, the transaction is no longer usable.
I recently ran across a similar situation. To debug in any Visual Studio IDE version, open Exceptions from Debug (Ctrl+D, E), check all the checkboxes in the "Thrown" column, and run the application in debug mode. I realized that one of the tables was not imported properly into the new database, so an internal SqlException was killing the connection, which resulted in this error.
The gist of the story is: if previously working code returns this error on a new database, it could be a missing-database-schema issue, which you can uncover with the debugging tip above.
Hope It Helps,
HydTechie
Also check for any long-running processes executed from your .NET app against the DB. For example, you may be calling a stored procedure or query which does not have enough time to finish, which can show up in your logs as:
Execution Timeout Expired. The timeout period elapsed prior to
completion of the operation or the server is not responding.
This SqlTransaction has completed; it is no longer usable.
Check the command timeout settings
Try to run a trace (profiler) and see what is happening on the DB side...
In my case the problem was that one of the queries included in the transaction was raising an exception, and even though the exception was "gracefully" handled, it still managed to roll back the entire transaction.
My pseudo-code was like:
var transaction = connection.BeginTransaction();

for (all the lines in a file)
{
    try
    {
        InsertLineInTable(); // INSERT statement might fail and throw an exception
    }
    catch
    {
        // notify the user about the error on line x and continue
    }
}

// Commit and Rollback will fail if one of the queries
// in InsertLineInTable threw an exception
if (CheckTableForErrors())
{
    transaction.Commit();
}
else
{
    transaction.Rollback();
}
Here is a way to detect a zombie transaction:
SqlTransaction trans = connection.BeginTransaction();

// some db calls here

if (trans.Connection != null) // detecting a zombie transaction
{
    trans.Commit();
}
Decompiling the SqlTransaction class, you will see the following:
public SqlConnection Connection
{
    get
    {
        if (this.IsZombied)
            return (SqlConnection) null;
        return this._connection;
    }
}
I noticed that if the connection is closed, the transaction becomes a zombie and thus cannot commit.
In my case, it was because I had the Commit() inside a finally block while the connection was in the try block. This arrangement was causing the connection to be disposed and garbage collected. The solution was to put the Commit inside the try block instead.
For what it's worth, I've run into this on previously working code. I had added SELECT statements in a trigger for debug testing and forgot to remove them. Entity Framework / MVC doesn't play nice when other stuff is output to the "grid". Make sure to check for any rogue queries and remove them.
In my case, I had some code that needed to execute after committing the transaction, in the same try-catch block. One piece of that code threw an error, and the try block handed the error over to its catch block, which contains the transaction rollback.
That shows a similar error. For example, look at the code structure below:
SqlTransaction trans = null;
try
{
    trans = Con.BeginTransaction();
    // your code
    trans.Commit();
    // your code having errors
}
catch (Exception ex)
{
    trans.Rollback(); // transaction rollback
    // error message
}
finally
{
    // connection close
}
Hope it will help someone :)

"Using a single session in multiple threads is likely a bug" error in NHProf when using NServiceBus

When executing an NServiceBus handler that uses NHibernate for its data access operations, I am seeing an error that I am not sure I need to be concerned about.
The handler has code that does something like this:
using (var tx = Session.BeginTransaction())
{
    var accountGroup = _groupRepository.FindByID(message.GroupID);
    accountGroup.CreateAccount(message.AccountNumber);
    tx.Commit();
}
When I profile this process, I see the following lines:
enlisted session in distributed transaction with isolation level: Serializable
begin transaction with isolation level: Unspecified
SELECT ... FROM AccountGroups this_ WHERE this_.ID = 123
INSERT INTO Accounts ...
commit transaction
commit transaction
The first commit message is generated by my code when I call tx.Commit(). The second commit message, I believe, occurs when we leave the handler's Handle method and is issued by NServiceBus. This second call to commit generates an alert in NHProf that states "Using a single session in multiple threads is likely a bug".
I don't think this is an issue, because there really is nothing to commit at that time, but am I doing something inappropriate here? I do want to run my code within a transaction, but when I do, I get this alert.
Any ideas?
This isn't an issue; what is happening is that NH Prof detects that the DTC commit is happening in another thread.
It should actually handle DTC commits properly, so I am not sure what is going on. At a guess, using both the DTC commit and a standard commit is confusing it.
I'll fix it.