How to log in Hibernate which part of the code caused a given SQL

We can turn on all of the SQL-related logging with the following settings in Spring:
spring.jpa.properties.hibernate.show_sql=true
spring.jpa.properties.hibernate.use_sql_comments=true
spring.jpa.properties.hibernate.format_sql=true
logging.level.org.hibernate.type=trace
If we have a single explicit Hibernate/Spring Data call like
myEntityRepository.save(myEntity);
or
entityManager.persist(myEntity);
then it is easy to debug what happened just by reading the generated SQL in the log.
But how would you debug it when there isn't any explicit ORM action, like here:
@Transactional
void doHundredOfTask(Long id) {
    MyEntity myEntity = myEntityRepository.findById(id).orElseThrow();
    // here comes a ton of actions on the entity, like setting fields
    // and setting/adding to collections:
    // myEntity.setField1();
    // myEntity.setField2();
    // ...
    // myEntity.setField_N();
    // myEntity.getSomeList().get(0).setSomeField();
    // no ORM action
}
At the end we don't explicitly save anything, but after the transaction Hibernate will flush the changes, so a massive amount of SQL will show up in the log. If you have a ton of actions on the entity and on its associations, it is extremely hard to debug why a given SQL statement was triggered.
Is there a way to tie each generated SQL statement to the triggering code in the log?
Edit: right now all I can do is split the code into smaller chunks or comment out parts of it, but this process is slow.

p6spy can print a stack trace for each executed SQL statement. Here is the configuration to enable this: stacktrace=true.
How to configure p6spy for a Maven project:
Add the p6spy dependency:
<dependency>
    <groupId>p6spy</groupId>
    <artifactId>p6spy</artifactId>
    <version>3.9.1</version>
</dependency>
Wrap the JDBC connection with p6spy:
spring.datasource.url=jdbc:p6spy:mysql://localhost:3306/xxx
spring.datasource.driver-class-name=com.p6spy.engine.spy.P6SpyDriver
Add a spy.properties config at src/main/resources/spy.properties:
stacktrace=true
appender=com.p6spy.engine.spy.appender.Slf4JLogger
logMessageFormat=com.p6spy.engine.spy.appender.MultiLineFormat
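If the full stack traces are too noisy, p6spy also has a stacktraceclass option: when set together with stacktrace=true, only stack traces containing the given class name are printed, which narrows the output to your own code. A sketch (the package name here is an example, and exact behavior may vary by p6spy version):
# only print stack traces that mention your own package (example value)
stacktraceclass=com.mycompany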
With p6spy in place, you can remove the Hibernate logging properties below:
spring.jpa.properties.hibernate.show_sql=true
spring.jpa.properties.hibernate.use_sql_comments=true
spring.jpa.properties.hibernate.format_sql=true
With this configuration, p6spy will output each SQL statement together with the stack trace that triggered it; the application frame near the bottom of the trace (here com.springapp.Test.test) points at the code that caused the statement. E.g.:
select x0_.id as id1_7_ from X x0_
15:10:16.166 default [main] INFO c.p.e.spy.appender.Slf4JLogger[logException]-39 -
java.lang.Exception: null
at com.p6spy.engine.common.P6LogQuery.doLog(P6LogQuery.java:126)
...
at org.hibernate.loader.Loader.getResultSet(Loader.java:2341)
at org.hibernate.loader.Loader.executeQueryStatement(Loader.java:2094)
...
at com.springapp.Test.test(Test.java:36)
...

Related

Run a fallback script when a Liquibase script fails in Gradle

I'm using Liquibase with Gradle in order to apply database changes.
I have three activities in runList:
runList='stop_job, execute_changes, start_job'
It works fine in case I don't have any exceptions, but if something fails on the second step (execute_changes), it stops there and does not execute the "start_job" activity.
Is it possible to introduce something like a fallback activity or a "finally" block?
You could use failOnError:false. It defines whether the migration will fail if an error occurs while executing the changeset; the default value is true. A sketch is shown below.
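In an XML changelog, the attribute sits on the changeset itself; a minimal sketch (the id, author, and SQL body are placeholders, not from the original post):
<changeSet id="execute-changes" author="your-name" failOnError="false">
    <sql>EXEC execute_changes_job</sql>
</changeSet>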

Full SQL statement logging on Dropwizard

I have a Dropwizard application using JDBI and SQL Server. I would like to get all SQL statements logged with their parameters, but I don't seem to be able to.
This is what's usually recommended:
logging:
  level: INFO
  loggers:
    "org.skife": TRACE
    "com.microsoft.sqlserver.jdbc": TRACE
But this only logs the statements, without the parameters:
TRACE [2016-07-08 16:40:27,711] org.skife.jdbi.v2.DBI: statement:[/* LocationDAO.detail */ EXEC [api].[GetCountryCodes] @CountryId = ?] took 487 millis
DEBUG [2016-07-08 16:37:44,499] com.microsoft.sqlserver.jdbc.Connection: ENTRY /* LocationDAO.detail */ EXEC [api].[GetCountryCodes] @CountryId = ?
Is there any way to get the actual statement run against the database?
Using p6spy seems to be the easiest way to go. Just add the dependency:
<dependency>
    <groupId>p6spy</groupId>
    <artifactId>p6spy</artifactId>
    <version>2.3.1</version>
</dependency>
In the database config, use the p6spy driver class instead and slightly modify your connection URL:
database:
  driverClass: com.p6spy.engine.spy.P6SpyDriver
  url: jdbc:p6spy:sqlserver://10.0.82.95;Database=psprd1
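As in the Hibernate answer above, a spy.properties file on the classpath controls where and how p6spy logs; a minimal sketch (exact option names can vary between p6spy versions, so verify against its docs):
appender=com.p6spy.engine.spy.appender.Slf4JLogger
logMessageFormat=com.p6spy.engine.spy.appender.MultiLineFormat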

Pentaho Data Integration: Error Handling

I'm building out an ETL process with Pentaho Data Integration (CE), and I'm trying to operationalize my transformations and jobs so that they can be monitored. Specifically, I want to be able to catch any errors and then send them to an error-reporting service like Honeybadger or New Relic. I understand how to do row-level error reporting, but I don't see a way to do job or transformation failure reporting.
Here is an example job.
The down path is where the transformation succeeds but has row errors; there we can just filter the results and log them.
The path to the right is the case where the transformation fails altogether (e.g. the DB credentials are wrong). This is where I'm having trouble: I can't figure out how to get the error info to be sent.
How do I capture transformation failures to be logged?
You cannot capture job-level error details inside the job itself.
However, there are other options for monitoring.
The first option is using database logging for transformations or jobs (see the "Log" tab in the job/transformation parameters dialog). This way you always have up-to-date information about the execution status, so you can, say, write a job that periodically scans the logging database and sends error reports wherever you need.
That said, this option is fairly heavyweight to develop and support, and not too flexible for further modifications. So in our company we ended up with monitoring at the job-execution level: when you run a job with kitchen.bat and it fails for any reason, kitchen exits with an error status, so you can easily examine it and react with whatever tools you'd like, such as .bat commands, PowerShell, or (in our case) Jenkins CI. A sketch of such a wrapper follows.
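For instance, a minimal .bat wrapper (the job path and the reporting hook are placeholders) could look like:
call kitchen.bat /file:C:\etl\my_job.kjb
if %ERRORLEVEL% neq 0 (
    echo Job failed with exit code %ERRORLEVEL%
    rem call your error-reporting service here, e.g. via curl or a PowerShell script
)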
You could use the writeToLog("e", "Message") function in the Modified Java Script Value step.
Documentation:
// Writes a string to the defined Kettle Log.
//
// Usage:
// writeToLog(var);
// 1: String - The Message which should be written to
// the Kettle Debug Log
//
// writeToLog(var,var);
// 1: String - The Type of the Log
// d - Debug
// l - Detailed
// e - Error
// m - Minimal
// r - RowLevel
//
// 2: String - The Message which should be written to
// the Kettle Log

Why is NHibernate's adonet.batch_size setting ignored - session.SetBatchSize() throws an exception?

I am using NHibernate and switched from my local SQLExpress database to Oracle 11g.
My code started to complain: the session object's SetBatchSize() method throws a System.NotSupportedException:
No batch size was defined for the session factory, batching is disabled. Set adonet.batch_size = 1 to enable batching.
It worked on the SQLExpress database. OK, so I added this
<property name="adonet.batch_size">1</property>
to the config, but it still throws the same exception. The session's Batcher property is set to this:
Value: {NHibernate.AdoNet.NonBatchingBatcher}
Type: NHibernate.Engine.IBatcher {NHibernate.AdoNet.NonBatchingBatcher}
It does not make any difference if I try to set the batch size inside or outside the transaction.
NHibernate only has batchers for some RDBMSs. If it doesn't find one for the database in question, it defaults to NonBatchingBatcher, which is not able to batch at all. You could implement your own IBatcher, or configure a batcher factory if your NHibernate version ships one for your driver, as sketched below.
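For example, some NHibernate versions include a batcher factory for ODP.NET that you can opt into via configuration. This is only a sketch; whether the class exists, and under which name, depends on your NHibernate version, so treat it as an assumption to verify:
<property name="adonet.batch_size">20</property>
<property name="adonet.factory_class">NHibernate.AdoNet.OracleDataClientBatchingBatcherFactory, NHibernate</property>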

"This SqlTransaction has completed; it is no longer usable."... configuration error?

I've been working on this for about a day and a half now and have searched numerous blogs and help articles on the Web. I found several questions on SO related to this error, but I didn't think they quite applied to my situation (or in some cases, unfortunately, I couldn't understand them well enough to implement them :P). I'm not sure I can describe this well enough to get help... but here goes:
We have a .NET app to track our resources. There's an export function to copy a resource to the time tracking system and the billing system; this accesses a stored procedure that links to the time and billing databases.
I recently moved the billing system database to a new server (original server: Server 2003 SP2, SQL 2005; new server: Server 2008 R2, SQL 2008 R2). I have a Linked Server set up that points to the 2008 databases. I updated the stored procedure to point to the 2008 server, and then I got an error about MSDTC and RPC (http://www.safnet.com/writing/tech/archives/2007/06/server_myserver.html). I enabled 'rpc/rpc out' on the Linked Server and set MSDTC to allow Network Access (something like this: http://www.sqlwebpedia.com/content/msdtc-troubleshooting).
Now I'm getting the above when I try to run the export function: "This SqlTransaction has completed; it is no longer usable." What seems odd to me is that when I just run the stored procedure (from SSMS), it completes successfully.
Has anyone seen this before? Have I missed something in the configuration? I keep going over the same pages, and the only thing I found was that I didn't reboot after making the MSDTC changes (mentioned here: http://social.msdn.microsoft.com/forums/en-US/adodotnetdataproviders/thread/7172223f-acbe-4472-8cdf-feec80fd2e64/).
I can post part or all of the stored procedure, if it would help... please let me know.
I believe this error message is due to a "zombie transaction".
Look for possible areas where the transaction is being committed twice (or rolled back twice, or rolled back and committed, etc.). Does the .NET code commit the transaction after the SP has already committed it? Does the .NET code roll it back on encountering an error, then attempt to roll it back again in a catch (or finally) clause?
It's possible an error condition was never being hit on the old server, and thus the faulty "double rollback" code was never hit. Maybe now you have a situation where there is some configuration error on the new server, and now the faulty code is getting hit via exception handling.
Can you debug into the error code? Do you have a stack trace?
I had this recently after refactoring in a new connection manager. A new routine accepted a transaction so it could be run as part of a batch; the problem was with a using block:
public IEnumerable<T> Query<T>(IDbTransaction transaction, string command, dynamic param = null)
{
    using (transaction.Connection)
    {
        using (transaction)
        {
            return transaction.Connection.Query<T>(command, new DynamicParameters(param), transaction, commandType: CommandType.StoredProcedure);
        }
    }
}
It looks as though the outer using was closing the underlying connection, so any attempt to commit or roll back the transaction threw up the message "This SqlTransaction has completed; it is no longer usable."
I removed the usings, added a covering test, and the problem went away:
public IEnumerable<T> Query<T>(IDbTransaction transaction, string command, dynamic param = null)
{
    return transaction.Connection.Query<T>(command, new DynamicParameters(param), transaction, commandType: CommandType.StoredProcedure);
}
Check for anything that might be closing the connection while inside the context of a transaction.
Had the exact same problem and just could not find the right solution.
Hope this helps somebody.
I have a .NET Core 3.1 WebApi with EF Core. Upon receiving multiple calls at the same time, the application was trying to add and save changes to the database at the same time.
In my case the problem was that the table the data would be saved in did not have a primary key set.
Somehow EF Core had missed, when the migration was run from the application, that the ID in the model was supposed to be a primary key.
I found the problem by opening the SQL Profiler and seeing that all transactions were successfully submitted to the database (from the application) but only one new row was created. The profiler also showed that some type of deadlock was happening, but I couldn't see much more in the profiler's trace logs.
On further inspection I noticed that the primary key identifier was missing on the column "Id".
The exceptions I got from my application were:
This SqlTransaction has completed; it is no longer usable.
and/or
An exception has been raised that is likely due to a transient failure. Consider enabling transient error resiliency by adding 'EnableRetryOnFailure()' to the 'UseSqlServer' call.
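If you hit the same situation, declaring the key explicitly in the model is one way to make sure the migration creates it. A minimal EF Core sketch (MyEntity and Id are placeholder names, not from the original post):
protected override void OnModelCreating(ModelBuilder modelBuilder)
{
    // ensure the Id property is mapped as the primary key
    modelBuilder.Entity<MyEntity>().HasKey(e => e.Id);
}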
I had the same problem. This error can occur because of connection pooling: when two or more users access the system, the connection pool may reuse a connection, and its transaction along with it. If the first user executes a commit or rollback, the transaction is no longer usable.
I recently ran across a similar situation. To debug it in any VS IDE version, open Exceptions from Debug (Ctrl + D, E), check all the checkboxes in the "Thrown" column, and run the application in debug mode. I realized that one of the tables was not imported properly into the new database, so an internal SqlException was killing the connection, which resulted in this error.
The gist of the story: if previously working code returns this error on a new database, it could be a missing database schema issue, which you can pin down with the debugging tip above.
Hope it helps,
HydTechie
Also check for any long-running processes executed from your .NET app against the DB. For example, you may be calling a stored procedure or query which does not have enough time to finish, which can show in your logs as:
Execution Timeout Expired. The timeout period elapsed prior to completion of the operation or the server is not responding.
This SqlTransaction has completed; it is no longer usable.
Check the command timeout settings.
Try to run a trace (profiler) and see what is happening on the DB side...
In my case the problem was that one of the queries included in the transaction was raising an exception, and even though the exception was "gracefully" handled, it still managed to roll back the entire transaction.
My pseudo-code was like:
var transaction = connection.BeginTransaction();
for (all the lines in a file)
{
    try
    {
        InsertLineInTable(); // INSERT statement might fail and throw an exception
    }
    catch
    {
        // notify the user about the error on line x and continue
    }
}
// Commit and Rollback will fail if one of the queries
// in InsertLineInTable threw an exception
if (CheckTableForErrors())
{
    transaction.Commit();
}
else
{
    transaction.Rollback();
}
Here is a way to detect a zombie transaction:
SqlTransaction trans = connection.BeginTransaction();
// some db calls here
if (trans.Connection != null) // detecting a zombie transaction
{
    trans.Commit();
}
Decompiling the SqlTransaction class, you will see the following:
public SqlConnection Connection
{
    get
    {
        if (this.IsZombied)
            return (SqlConnection) null;
        return this._connection;
    }
}
I noticed that if the connection is closed, the transaction becomes a zombie and thus cannot commit.
In my case, it was because I had the Commit() inside a finally block while the connection was in the try block; this arrangement caused the connection to be disposed and garbage collected before the commit. The solution was to put the Commit() inside the try block instead, as sketched below.
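A minimal sketch of the corrected arrangement (connectionString and the commands are placeholders):
using (var connection = new SqlConnection(connectionString))
{
    connection.Open();
    var transaction = connection.BeginTransaction();
    try
    {
        // ... execute commands enlisted in the transaction ...
        transaction.Commit(); // commit while the connection is still open
    }
    catch
    {
        transaction.Rollback();
        throw;
    }
}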
For what it's worth, I've run into this on previously working code. I had added SELECT statements in a trigger for debug testing and forgot to remove them. Entity Framework / MVC doesn't play nice when other stuff is output to the "grid". Make sure to check for any rogue queries and remove them.
In my case, I had some code that needed to execute after committing the transaction, inside the same try-catch block. One of those statements threw an error, so the try block handed the error to its catch block, which contained the transaction rollback.
That will show a similar error. For example, look at the code structure below:
SqlTransaction trans = null;
try
{
    trans = Con.BeginTransaction();
    // your code
    trans.Commit();
    // your code having errors
}
catch (Exception ex)
{
    trans.Rollback(); // transaction rollback
    // error message
}
finally
{
    // connection close
}
Hope it will help someone :)