In my app code, when something goes wrong at runtime I want as much information as possible about the circumstances to be logged, including, where applicable, the stack trace of any Exception that was thrown.
But during testing I don't want these stack traces to be logged, cluttering up the log file to no purpose.
If this is a situation where the test itself creates the Exception object, you can potentially give it a message that identifies it as a dummy Exception, like so:
given:
indexMgr.queryParser = Mock( QueryParser ){
    parse(_) >> { throw new Exception( 'dummy parse problem' )}
}
and then in the app code do this:
try {
    query = queryParser.parse(queryString)
} catch( e ) {
    log.debug( "QP exception msg: $e.message" )
    // we don't want a stack trace to be logged if this is a dummy Exception deliberately thrown during testing
    if( ! e.message.contains( 'dummy' )) {
        // this will log the stack trace of e
        log.error( 'query threw Exception in QP.parse()', e )
    }
    return false
}
... but there are two problems with this: firstly, an expected Exception will not always be created by the testing code rather than by the app code; and secondly, it feels wrong for the actual app code to check for conditions that identify the conduct of a test.
Is there a "best practice" way of tackling this?
If simply dropping the stack trace from the log line is acceptable, you can configure the exception conversion logic in the logger's pattern layout. Below is an example using Log4j 2:
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

public class ExceptionOutput {
    private static final Logger LOG = LoggerFactory.getLogger(ExceptionOutput.class);

    public static void main(String[] args) {
        LOG.info("Foo", new NullPointerException("MESSAGE"));
    }
}
Log4j 2 configuration:
<?xml version="1.0" encoding="UTF-8"?>
<Configuration status="WARN">
  <Appenders>
    <Console name="Console" target="SYSTEM_OUT">
      <PatternLayout pattern="%d{HH:mm:ss.SSS} [%t] %-5level %logger{36} - %msg%n"
                     alwaysWriteExceptions="false"/>
    </Console>
  </Appenders>
  <Loggers>
    <Root level="info">
      <AppenderRef ref="Console"/>
    </Root>
  </Loggers>
</Configuration>
Take note of alwaysWriteExceptions="false": it disables exception output completely. Now if we run the code we will get:
01:04:50.151 [main] INFO ExceptionOutput - Foo
But if you revert to alwaysWriteExceptions=true, which is also the default behaviour if the parameter is omitted, then you get:
01:07:03.018 [main] INFO ExceptionOutput - Foo
java.lang.NullPointerException: MESSAGE
at ExceptionOutput.main(ExceptionOutput.java:8)
But there is more to this. For more flexibility you can use the %throwable{...} conversion word in the pattern, as explained in the Patterns table of the Log4j 2 layouts documentation. To apply the logic only to tests, you can put a log4j2-test.xml on the test classpath, which Log4j 2 picks up in preference to log4j2.xml. Similar exception-conversion functionality exists in other logging libraries, e.g. Logback.
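For example, a log4j2-test.xml along these lines would keep the exception message but drop the stack trace during tests (a minimal sketch; the pattern is an assumption, adapt it to your own layout):

<?xml version="1.0" encoding="UTF-8"?>
<!-- log4j2-test.xml: picked up from the test classpath in preference to log4j2.xml -->
<Configuration status="WARN">
  <Appenders>
    <Console name="Console" target="SYSTEM_OUT">
      <!-- %throwable{short.message} emits only the exception's message, no stack trace -->
      <PatternLayout pattern="%d{HH:mm:ss.SSS} [%t] %-5level %logger{36} - %msg %throwable{short.message}%n"/>
    </Console>
  </Appenders>
  <Loggers>
    <Root level="info">
      <AppenderRef ref="Console"/>
    </Root>
  </Loggers>
</Configuration>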
I'm using jclouds 2.5.0. It's working perfectly in all of our deployments except for one. In this case, we're seeing the following jclouds message in our log4j2 logs:
2022-07-14 21:37:29.263 +0000,3124098302712886 {} ERROR o.j.h.h.BackoffLimitedRetryHandler [clrd-highpri-1] Cannot retry after server error, command has exceeded retry limit 5: [method=org.jclouds.aws.s3.AWSS3Client.public abstract java.lang.String org.jclouds.s3.S3Client.getBucketLocation(java.lang.String)[hammerspace-data-bucket-us-west-2], request=GET https://s3.amazonaws.com/hammerspace-data-bucket-us-west-2?location HTTP/1.1]
This message occurs during a getBlob call, so I'm assuming part of getBlob is determining the bucket from which the blob should be retrieved. This call is failing 5 times - but not just failing with a bad return code - it's hanging and timing out, so these 5 retries take up the lion's share of the time it takes to download the blob.
After getBlob finally stops calling getBucketLocation, it then tries the download with the default region (us-east-1). Since the bucket is actually in us-west-2, the download takes a bit longer than it should, but - again - the actual download bottleneck is the failed calls to getBucketLocation.
Has anyone seen anything like this before?
I'd also be interested in knowing how to turn on more jclouds logging. I used to uncomment lines like this in my log4j2.xml file:
<!-- <logger name="org.jclouds" level="debug" additivity="true" /> -->
<!-- <logger name="jclouds.compute" level="debug" additivity="true" /> -->
<!-- <logger name="jclouds.wire" level="debug" additivity="true" /> -->
<!-- <logger name="jclouds.headers" level="debug" additivity="true" /> -->
<!-- <logger name="jclouds.ssh" level="debug" additivity="true" /> -->
<!-- <logger name="software.amazon.awssdk" level="debug" additivity="true" /> -->
<!-- <logger name="org.apache.http.wire" level="debug" additivity="true" /> -->
But these don't seem to have any effect in 2.5.0 anymore.
Finally, if anyone knows how I can stop getBlob from calling getBucketLocation, I'd much appreciate some advice here. I'm thinking there must be a way to specify the desired bucket to the jclouds blob context up front so it doesn't have to resolve it.
John
[Update 1]
We thought originally the problem was that we didn't have our IAM profile configured correctly for the bucket, but after playing with it, we were able to run the AWS command line tool from the same host against that bucket and it didn't hang, while jclouds still hangs on getBucketLocation on the same box. I'm completely stumped by this. It HAS to be something internal to jclouds 2.5.0 with the AWS provider.
I've discovered the root cause of this issue and thought there might be others out there that would like to know what's going on.
Amazon publishes a general workflow that allows clients to always find the correct URL endpoint for a given bucket (an example exchange follows the list):
ask s3.amazonaws.com for the bucket location
use the url returned to make the container specific request (get/put, etc)
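Step 1 is S3's GetBucketLocation call; roughly, the exchange looks like this (illustrative only, reusing the bucket name from the log message above):

GET https://s3.amazonaws.com/hammerspace-data-bucket-us-west-2?location HTTP/1.1

HTTP/1.1 200 OK
<?xml version="1.0" encoding="UTF-8"?>
<LocationConstraint xmlns="http://s3.amazonaws.com/doc/2006-03-01/">us-west-2</LocationConstraint>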
If a client is slightly more intelligent, it will ask only on the first request and cache the bucket location URL and reuse it in subsequent requests.
If a client is even more intelligent, and it notices a region-specific URL is specified, it will use that URL directly to attempt a request. Upon failure, it will then fall back to asking s3.amazonaws.com for the bucket location, cache it and use it.
Apparently, jclouds is only at intelligence level 1 above. It completely ignores the specified URL, but it does at least cache the results from the first getBucketLocation call and use that region-specific URL, as needed.
Internally, it's using a Google Guava LoadingCache for this process. It might be nice if there were a mechanism in jclouds to pre-load this cache with known region-specific URLs for a given bucket. Then it would not have to go off-box for the getLocation data, even on the first request.
I hope this is helpful to others. It sure cost me a lot of pain to find out. And since I received no answers from any of my jclouds mailing list queries, I have to assume that there was no one in the jclouds community that understood how this worked either. (Or perhaps I just didn't word my query well enough.)
[UPDATE]
I did find a workaround for this. I wrote this static inner class in my jclouds-consuming client:
@ConfiguresHttpApi
private static class BucketToRegionHack extends AWSS3HttpApiModule {
    private String region;
    private String bucket;

    public void setBucketForRegion(String region, String bucket) {
        this.region = region;
        this.bucket = bucket;
    }

    @Override
    @SuppressWarnings("Guava")
    protected CacheLoader<String, Optional<String>> bucketToRegion(Supplier<Set<String>> regionSupplier, S3Client client) {
        Set<String> regions = regionSupplier.get();
        if (regions.isEmpty()) {
            return new CacheLoader<String, Optional<String>>() {
                @Override
                @SuppressWarnings({"Guava", "NullableProblems"})
                public Optional<String> load(String bucket) {
                    if (BucketToRegionHack.this.bucket != null && BucketToRegionHack.this.bucket.equals(bucket)) {
                        return Optional.of(BucketToRegionHack.this.region);
                    }
                    return Optional.absent();
                }

                @Override
                public String toString() {
                    return "noRegions()";
                }
            };
        } else if (regions.size() == 1) {
            final String onlyRegion = Iterables.getOnlyElement(regions);
            return new CacheLoader<String, Optional<String>>() {
                @SuppressWarnings("OptionalUsedAsFieldOrParameterType")
                final Optional<String> onlyRegionOption = Optional.of(onlyRegion);

                @Override
                @SuppressWarnings("NullableProblems")
                public Optional<String> load(String bucket) {
                    if (BucketToRegionHack.this.bucket != null && BucketToRegionHack.this.bucket.equals(bucket)) {
                        return Optional.of(BucketToRegionHack.this.region);
                    }
                    return onlyRegionOption;
                }

                @Override
                public String toString() {
                    return "onlyRegion(" + onlyRegion + ")";
                }
            };
        } else {
            return new CacheLoader<String, Optional<String>>() {
                @Override
                @SuppressWarnings("NullableProblems")
                public Optional<String> load(String bucket) {
                    if (BucketToRegionHack.this.bucket != null && BucketToRegionHack.this.bucket.equals(bucket)) {
                        return Optional.of(BucketToRegionHack.this.region);
                    }
                    try {
                        return Optional.fromNullable(client.getBucketLocation(bucket));
                    } catch (ContainerNotFoundException e) {
                        return Optional.absent();
                    }
                }

                @Override
                public String toString() {
                    return "bucketToRegion()";
                }
            };
        }
    }
}
This is mostly a copy of the code as it exists in S3HttpApiModule in jclouds. Then I added the following snippet to my init code where I'm setting up the JClouds client:
BucketToRegionHack b2mModule = new BucketToRegionHack();
contextBuilder.modules(ImmutableSet.of(b2mModule));
Pattern pattern = Pattern.compile("s3-([a-z0-9-]+)\\.amazonaws\\.com");
Matcher matcher = pattern.matcher(endpoint);
if (matcher.find()) {
    String region = matcher.group(1);
    b2mModule.setBucketForRegion(region, cspInfo.getContainer());
}
...where 'contextBuilder' is the jclouds context builder I'm using. This essentially overrides the S3HttpApiModule with my own version, which allows me to provide my own bucket-to-region method that pre-loads the LoadingCache with my known bucket and region.
A better fix for this would expose a way for users to simply preload the loading cache with a map of buckets to regions so no calls to getBucketLocation would be made for those that are pre-loaded.
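For illustration, such an API could look something like the sketch below (hypothetical, not part of jclouds; it simply generalizes the workaround above from a single bucket/region pair to a map):

// Hypothetical module that pre-loads the bucket-to-region cache from a map
// and falls back to getBucketLocation for buckets that are not in the map.
@ConfiguresHttpApi
private static class PreloadedBucketToRegionModule extends AWSS3HttpApiModule {
    private final Map<String, String> bucketRegions;

    PreloadedBucketToRegionModule(Map<String, String> bucketRegions) {
        this.bucketRegions = bucketRegions;
    }

    @Override
    protected CacheLoader<String, Optional<String>> bucketToRegion(Supplier<Set<String>> regionSupplier, final S3Client client) {
        return new CacheLoader<String, Optional<String>>() {
            @Override
            public Optional<String> load(String bucket) {
                String region = bucketRegions.get(bucket);
                if (region != null) {
                    return Optional.of(region); // known up front: no off-box call
                }
                try {
                    return Optional.fromNullable(client.getBucketLocation(bucket));
                } catch (ContainerNotFoundException e) {
                    return Optional.absent();
                }
            }
        };
    }
}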
I am deploying a web app (WAR) to a Tomcat 8 web container.
The WAR includes in the '/WEB-INF/lib' directory the following jTDS JDBC driver:
<dependency org="net.sourceforge.jtds" name="jtds" rev="1.3.1" />
(file is: jtds-1.3.1.jar).
This is how the resource is defined in META-INF/context.xml:
<Resource name="jdbc/jtds/sybase/somedb"
auth="Container"
type="javax.sql.DataSource"
driverClassName="net.sourceforge.jtds.jdbc.Driver"
url="jdbc:jtds:sybase://localhost:2501/somedb"
username="someuser" password="somepassword"
/>
In my code I obtain the javax.sql.DataSource the normal way:
InitialContext cxt = new InitialContext();
if (cxt == null) {
    throw new RuntimeException("Uh oh -- no context!");
}
DataSource ds = (DataSource) cxt.lookup(lookupName);
I further verify (by printing) that the DataSource object ds is of the expected type:
org.apache.tomcat.dbcp.dbcp2.BasicDataSource
… but when I try to get a connection out of it:
Connection conn = ds.getConnection();
… I get the following trace:
java.lang.AbstractMethodError
net.sourceforge.jtds.jdbc.JtdsConnection.isValid(JtdsConnection.java:2833)
org.apache.tomcat.dbcp.dbcp2.DelegatingConnection.isValid(DelegatingConnection.java:924)
org.apache.tomcat.dbcp.dbcp2.PoolableConnection.validate(PoolableConnection.java:282)
org.apache.tomcat.dbcp.dbcp2.PoolableConnectionFactory.validateConnection(PoolableConnectionFactory.java:359)
org.apache.tomcat.dbcp.dbcp2.BasicDataSource.validateConnectionFactory(BasicDataSource.java:2316)
org.apache.tomcat.dbcp.dbcp2.BasicDataSource.createPoolableConnectionFactory(BasicDataSource.java:2299)
org.apache.tomcat.dbcp.dbcp2.BasicDataSource.createDataSource(BasicDataSource.java:2043)
org.apache.tomcat.dbcp.dbcp2.BasicDataSource.getConnection(BasicDataSource.java:1543)
What gives?
Turns out I had to add:
validationQuery="select 1"
in the Resource declaration in context.xml.
This is mentioned here (although misspelled as validateQuery).
Digging into the implementation of JtdsConnection one sees:
/* (non-Javadoc)
 * @see java.sql.Connection#isValid(int)
 */
public boolean isValid(int timeout) throws SQLException {
    // TODO Auto-generated method stub
    throw new AbstractMethodError();
}
This is really weird: AbstractMethodError is normally thrown by the JVM at link time, and unimplemented methods ought to throw UnsupportedOperationException instead. At any rate, the following code from PoolableConnection shows why the presence or absence of validationQuery in context.xml changes things. Your validationQuery is passed as the value of the sql String parameter in the method below (or null if you don't define a validationQuery):
public void validate(String sql, int timeout) throws SQLException {
    ...
    if (sql == null || sql.length() == 0) {
        ...
        if (!isValid(timeout)) {
            throw new SQLException("isValid() returned false");
        }
        return;
    }
    ...
}
So basically if no validationQuery is present, then the connection's own implementation of isValid is consulted which in the case of JtdsConnection weirdly throws AbstractMethodError.
The answer mentioned above by Marcus worked for me when I encountered this problem. To give a specific example of how the validationQuery setting looks in the context.xml file:
<Resource name="jdbc/myDB" auth="Container" type="javax.sql.DataSource"
driverClassName="net.sourceforge.jtds.jdbc.Driver"
url="jdbc:jtds:sqlserver://SQLSERVER01:1433/mydbname;instance=MYDBINSTANCE"
username="dbuserid" password="dbpassword"
validationQuery="select 1"
/>
The validationQuery setting goes in with the driver settings for each of your db connections, so each time you add another db entry to your context.xml file, you will need to include this setting alongside its driver settings.
The above answer works. If you are setting it up for a standalone Java application, set the validation query on the datasource:
import org.apache.commons.dbcp2.BasicDataSource;

BasicDataSource ds = new BasicDataSource();
ds.setUsername(user);
ds.setPassword(getPassword());
ds.setUrl(jdbcUrl);
ds.setDriverClassName(driver);
ds.setMaxTotal(10);
ds.setValidationQuery("select 1"); // DBCP throws an error without this query
Using Spring Integration with RabbitMQ in my project, I face a problem.
The project consists of receiving messages from a queue, tracing the incoming message, processing the message using a service-activator, and tracing the response or the exception thrown by the service activator.
Here is the sample configuration:
<!-- inbound-gateway -->
<int-amqp:inbound-gateway id="inboundGateway"
request-channel="gatewayRequestChannel"
queue-names="myQueue"
connection-factory="rabbitMQConnectionFactory"
reply-channel="gatewayResponseChannel"
error-channel="gatewayErrorChannel"
error-handler="rabbitMQErrorHandler"
mapped-request-headers="traceId"
mapped-reply-headers="traceId" />
<!-- section to dispatch incoming messages to trace and execute service-activator -->
<int:publish-subscribe-channel id="gatewayRequestChannel" />
<int:bridge input-channel="gatewayRequestChannel" output-channel="traceChannel"/>
<int:bridge input-channel="gatewayRequestChannel" output-channel="serviceActivatorInputChannel"/>
<!-- the trace channel-->
<int:logging-channel-adapter id="traceChannel"
expression="headers['traceId'] + '= [Headers=' + headers + ', Payload=' + payload+']'" logger-name="my.logger" level="DEBUG" />
<!-- service activator which may throw an exception -->
<int:service-activator ref="myBean" method="myMethod" input-channel="serviceActivatorInputChannel" output-channel="serviceActivatorOutputChannel"/>
<!-- section to dispatch output-messages from service-activator to trace them and return them to the gateway -->
<int:publish-subscribe-channel id="serviceActivatorOutputChannel" />
<int:bridge input-channel="serviceActivatorOutputChannel"
output-channel="traceChannel" />
<int:bridge input-channel="serviceActivatorOutputChannel"
output-channel="gatewayResponseChannel" />
<!-- section to dispatch exceptions from service-activator to trace them and return them to the gateway -->
<int:bridge input-channel="gatewayErrorChannel"
output-channel="traceChannel" />
<int:bridge input-channel="gatewayErrorChannel"
output-channel="gatewayResponseChannel" />
I simplified the code to suit my explanation. The idea is to trace the input and output/error messages coming to and going from the service-activator. To do this, I use a message header named traceId. This identifier is used as a correlation identifier to associate the request-message with its response (the two messages share the same traceId value).
Everything is working fine when no exception is thrown by the service-activator.
But when an exception is thrown, it seems a new message is generated by the gateway, without my original traceId header.
Looking a little bit into the gateway code, I found the following piece of code in the class org.springframework.integration.gateway.MessagingGatewaySupport:
private Object doSendAndReceive(Object object, boolean shouldConvert) {
    ...
    if (error != null) {
        if (this.errorChannel != null) {
            Message<?> errorMessage = new ErrorMessage(error);
            Message<?> errorFlowReply = null;
            try {
                errorFlowReply = this.messagingTemplate.sendAndReceive(this.errorChannel, errorMessage);
            }
            ...
        }
    ...
}
It seems that, when an exception occurs, a new message (an ErrorMessage) is created with the exception as its payload and is sent to the gateway's errorChannel. Here is where I lose my custom headers.
Is there a way to preserve my custom headers when an exception occurs? (Maybe there is a way to configure this and I am missing it.) Or maybe I am not implementing my flow in the right way; if that is the case, any comment or suggestion is welcome.
By the way, I am using the version 4.0.3.RELEASE of the spring-integration-core artifact.
Thanks for your answers.
Edit: as Gary Russell said, this example is missing the following publish/subscribe channel configuration:
<int:publish-subscribe-channel id="gatewayErrorChannel"/>
The message on the error-channel is an ErrorMessage. It has two properties: cause (the original exception) and failedMessage (the message at the point of failure). The ErrorMessage does not get the failedMessage's headers.
You can't just send the ErrorMessage back to the gateway without some extra work.
Typically, error flows will perform some analysis of the error before returning a response.
If you want to restore some custom header, you will need a header enricher on the error flow.
Something like
<int:header-enricher ...>
    <int:header name="traceId" expression="payload.failedMessage.headers.traceId" />
</int:header-enricher>
In addition, your configuration is a little odd in that you have 2 subscribers on gatewayErrorChannel. Unless it is a <publish-subscribe-channel/>, these consumers will get alternate messages; it seems like you expect them both to get it so you need to declare the channel properly.
Does anybody know how to handle unchecked / runtime exceptions in Mule?
I mean, in my Java code, for a certain reason, I am throwing an Exception, and I want Mule to detect it and route it to the proper flow, where I can log or print that Exception.
So, what exactly should I place in my flow in the Mule config file to achieve that?
My Java code:
public Object xyz(Map payload) throws Exception {
    if (payload.isEmpty()) {
        throw new Exception("New Exception");
    }
    return payload; // assumed happy-path return so the snippet compiles
}
My Mule config file:
<flow name="APIAuthenticate">
    <http:inbound-endpoint address="http://localhost:1212/jcore/authorize"
                           transformer-refs="HttpParams"
                           responseTransformer-refs="JavaObjectToJson"
                           contentType="application/json" encoding="UTF-8">
        <not-filter>
            <wildcard-filter pattern="/favicon.ico"/>
        </not-filter>
    </http:inbound-endpoint>
    <component class="main.java.com.abc.XYZ"/>
</flow>
Any help will be deeply appreciated..!!
Configuring a default-exception-strategy in your flow should allow you to catch the exceptions (even runtime ones) and deal with them.
Read the error handling reference guide for more info.
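For instance, something along these lines in the flow from the question should route the exception to a handler (a rough sketch only; the vm endpoint path is an assumption, and the exact elements vary by Mule version):

<flow name="APIAuthenticate">
    <!-- inbound endpoint and component as in your config ... -->
    <component class="main.java.com.abc.XYZ"/>
    <default-exception-strategy>
        <!-- the exception message is routed here when the component throws -->
        <vm:outbound-endpoint path="error.handler"/>
    </default-exception-strategy>
</flow>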
OK, I did some hit and trial and figured out that:
When an Exception is thrown, an Exception Strategy such as default-exception-strategy or custom-exception-strategy is required; it routes the flow to some class that handles the exception and performs the required actions.
But when we return an Exception (like below), we can use Mule's exception-payload-filter or choice router to handle it, as sketched after the code below.
public Object xyz(Map payload) throws Exception {
    if (payload.isEmpty()) {
        return new Exception("New Exception");
    }
    return payload; // assumed happy-path return so the snippet compiles
}
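As a sketch of that second case (assuming Mule 3.x; the expression syntax and logger messages are assumptions, adjust to your version):

<choice>
    <when expression="payload instanceof java.lang.Exception" evaluator="groovy">
        <!-- the component returned an Exception object: log it -->
        <logger message="Component returned an exception" level="ERROR"/>
    </when>
    <otherwise>
        <!-- normal payload: continue the flow -->
        <logger message="Component returned a normal payload" level="INFO"/>
    </otherwise>
</choice>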
Please correct me if I am wrong.
Also, if there are other answers to this question, please be kind enough to post them.
I want to log, in the console or in a file, all the queries that Grails executes, to check performance.
I have tried configuring this without success.
Any idea would help.
Setting
datasource {
    ...
    logSql = true
}
in DataSource.groovy (as per these instructions) was enough to get it working in my environment. It seems that parts of the FAQ are out of date (e.g. the many-to-many columns backwards question) so this might also be something that changed in the meantime.
I find it more useful to do the following, which is to enable Hibernate's logging to log the SQL along with bind variables (so you can see the values passed into your calls, and easily replicate the SQL in your editor or otherwise).
In your Config.groovy, add the following to your log4j block:
log4j = {
    // Enable Hibernate SQL logging with param values
    trace 'org.hibernate.type'
    debug 'org.hibernate.SQL'
    // the rest of your logging config
    // ...
}
For Grails 3.x:
Option #1: add the following to logback.groovy:
logger("org.hibernate.SQL", DEBUG, ["STDOUT"], false)
logger("org.hibernate.type.descriptor.sql.BasicBinder", TRACE, ["STDOUT"], false)
or
Option #2: add the following to dataSource in application.yml. However, this approach does not log the parameter values:
environments:
    local:
        dataSource:
            logSql: true
            formatSql: true
Try this:
log4j = {
    ...
    debug 'org.hibernate.SQL'
    trace 'org.hibernate.type.descriptor.sql.BasicBinder'
}
It avoids the performance problems of trace logging the Hibernate type package. This works with Hibernate 3.6 and above. I got this from: https://burtbeckwith.com/blog/?p=1604
Solution is only for development, not production.
All the answers above work and are correct, but they do not show the complete query in a nice human-readable way. If you want to see the final query (without any ?, ? placeholders), you have two options.
A) proxy your jdbc connection with log4jdbc or p6Spy.
B) look at it at the database level. For example, this is really easy to do with MySQL: find out where your general_log_file is, and activate the general log if it is not active already.
mysql command line> show variables like "%general_log%";
mysql command line> set global general_log = true;
Now everything is logged to your log file. A Mac/Linux example to show a nice stream of your queries:
tail -f path_to_log_file
The following works for me:
grails-app/conf/application.yml
# ...
hibernate:
    format_sql: true        # <<<<<<< ADD THIS <<<<<<<
    cache:
        queries: false
        use_second_level_cache: true
# ...
environments:
    development:
        dataSource:
            logSql: true    # <<<<<<< ADD THIS <<<<<<<
            dbCreate: create-drop
            url: jdbc:h2:mem:...
# ...
grails-app/conf/logback.groovy
// ...
appender('STDOUT', ConsoleAppender) {
    encoder(PatternLayoutEncoder) {
        pattern = "%level %logger - %msg%n"
    }
}

// >>>>>>> ADD IT >>>>>>>
logger 'org.hibernate.type.descriptor.sql.BasicBinder', TRACE, ['STDOUT']
logger 'org.hibernate.SQL', TRACE, ['STDOUT']
// <<<<<<< ADD IT <<<<<<<

root(ERROR, ['STDOUT'])

def targetDir = BuildSettings.TARGET_DIR
// ...
Source: http://sergiodelamo.es/log-sql-grails-3-app/
Purely for reference, but I use p6spy to log the SQL queries. It's a small intermediate JDBC driver. The exact query is logged as it would be sent to the server (with parameters included).
Include it in your project:
runtime 'p6spy:p6spy:3.0.0'
Change your datasource driver:
driverClassName: com.p6spy.engine.spy.P6SpyDriver
And your jdbc url:
url: jdbc:p6spy:mysql://
Configure it using spy.properties (in grails-app/conf).
driverlist=org.h2.Driver,com.mysql.jdbc.Driver
autoflush=true
appender=com.p6spy.engine.spy.appender.StdoutLogger
databaseDialectDateFormat=yyyy-MM-dd
logMessageFormat=com.p6spy.engine.spy.appender.MultiLineFormat
Don't forget to disable this for production!
I know this was asked and answered long back, but I just happened to see this question and couldn't stop myself from sharing the SQL logging approach we use in our project.
Hope it is of some help.
Currently we use it in the development environment.
We are using the log4jdbc DriverSpy to log SQL.
Configuration:
In your BuildConfig.groovy, add the dependencies below:
dependencies {
    .....
    runtime 'org.lazyluke:log4jdbc-remix:0.2.7'
}
And in your DataSource (or wherever you have defined the data-source-related configuration), add:
datasources {
    .....
    driverClassName: "net.sf.log4jdbc.DriverSpy",
    url: "jdbc:log4jdbc:oracle:thin:@(DESCRIPTION =(ADDRESS_LIST = (ADDRESS = (PROTOCOL = TCP)(HOST = XXXXX.XX>XXX)(PORT = 1521))) (CONNECT_DATA = (SID = XXXX)(SERVER =DEDICATED)))",
    ....
}

log4j = {
    info 'jdbc.sqlonly' //, 'jdbc.resultsettable'
}
From my personal experience I found it quite useful and helpful while debugging.
You can also find more information on this site: https://code.google.com/p/log4jdbc-remix/
Kind Regards
If you have the console plugin installed, you can get sql logging with this little code snippet.
// grails 2.3
def logger = ctx.sessionFactory.settings.sqlStatementLogger

// grails 3.3
def logger = ctx.sessionFactory.currentSession.jdbcCoordinator.statementPreparer.jdbcServices.sqlStatementLogger

logger.logToStdout = true
try {
    <code that will log sql queries>
}
finally {
    logger.logToStdout = false
}
This is a variation on many of the solutions above, but allows you to tweak the value at runtime. And just like the other solutions that deal with logToStdout it only shows the queries and not the bind values.
The idea was stolen from a Burt Beckwith post I read some years ago that I can't find right now. It has been edited to work with Grails 3.3.
A similar technique can be used to turn on logging for specific integration tests:
class SomeIntegrationSpec extends IntegrationSpec {
    def sessionFactory

    def setup() {
        sessionFactory.settings.sqlStatementLogger.logToStdout = true
    }

    def cleanup() {
        sessionFactory.settings.sqlStatementLogger.logToStdout = false
    }

    void "some test"() {
        ...
    }
}
This will turn on sql logging for just the tests in this one file.
For a particular block of code we can also create a method that accepts a closure, e.g.:
import org.apache.log4j.Level
import org.apache.log4j.Logger

static def executeBlockAndGenerateSqlLogs(Closure closure) {
    Logger sqlLogger = Logger.getLogger("org.hibernate.SQL")
    Level currentLevel = sqlLogger.level
    sqlLogger.setLevel(Level.TRACE)
    def result = closure.call()
    sqlLogger.setLevel(currentLevel)
    result
}

executeBlockAndGenerateSqlLogs { DomainClazz.findByPropertyName("property value") }
logback.xml
Grails 5 and above only accepts logback.xml. Add the following inside the configuration tag:
<logger name="org.hibernate.SQL" level="DEBUG" additivity="false">
<appender-ref ref="STDOUT" />
</logger>
<logger name="org.hibernate.type.descriptor.sql.BasicBinder" level="TRACE" additivity="false">
<appender-ref ref="STDOUT" />
</logger>
application.yml
To better visualize SQL queries, add the following:
dataSource:
    formatSql: true
If you want a logback configuration for development only, you can add the following to the environments > development block, with the logging configuration itself placed in conf/logback-dev.xml:
logging:
    config: classpath:logback-dev.xml