How to display detailed Hibernate logs?

I have a Spring Boot application with multiple endpoints and database operations, and it is sometimes difficult to find the location of a specific SQL call.
Is it possible to change the Hibernate logging to show more detail for each SQL statement?
Currently the output looks like this:
2020-01-15 16:40:23.059 DEBUG 24348 --- [nio-8083-exec-2] org.hibernate.SQL : select id, name from ......
But I want it to show the originating Java class instead of "org.hibernate.SQL".
Thank you.

You can enable the logging of the SQL statements:
log4j.logger.org.hibernate.SQL=DEBUG, myLogger
and also the actual parameters passed to the queries:
log4j.logger.org.hibernate.type=TRACE, myLogger
The log originator will always be the Hibernate logger (org.hibernate.SQL). I guess you want to log your DAO method? You will need to add that logging in your own code (obviously, in the case of exceptions, you can log the entire stack trace and see the invocation chain).
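Since this is a Spring Boot application, the equivalent application.properties settings are logging.level.org.hibernate.SQL=DEBUG and logging.level.org.hibernate.type=TRACE.
If the goal is just to see which of your classes issued a query, the simplest option is to log from the DAO itself with a logger named after that class; the DAO line then appears right next to the org.hibernate.SQL line it triggers. A minimal sketch (UserDao and the User entity are hypothetical names):

import javax.persistence.EntityManager;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

public class UserDao {

    // Logger named after the DAO class, so the originating class shows up in the log output
    private static final Logger log = LoggerFactory.getLogger(UserDao.class);

    private final EntityManager entityManager;

    public UserDao(EntityManager entityManager) {
        this.entityManager = entityManager;
    }

    public User findById(long id) {
        // Logged with "UserDao" as the logger name, immediately before the
        // org.hibernate.SQL output for the select this call triggers
        log.debug("findById({})", id);
        return entityManager.find(User.class, id);
    }
}

Correlating the DAO log line with the adjacent org.hibernate.SQL line gives you the origin of each statement without touching Hibernate's own logger.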

Related

Enable Impala Impersonation on Superset

Is there a way to make the logged-in user (on Superset) run the queries on Impala?
I tried to enable the "Impersonate the logged on user" option on Databases, but with no success: all the queries run on Impala as the superset user.
I'm trying to achieve the same! This will not completely answer the question, since it still does not work, but I want to share my research in order to maybe help another soul trying to use this tool outside very basic use cases.
I went deep into the code and found out that impersonation is not implemented for Impala, so you cannot achieve this from the UI. I found this PR https://github.com/apache/superset/pull/4699 that for whatever reason was never merged into the codebase, and tried to copy and paste its code into my Superset version (1.1.0), but it didn't work. By adding some logs I can see that the configuration with the impersonation is updated, but the actual Impala query still runs as the user I used to start the process.
As you can imagine, I am a complete noob at this. However, I found out that the impersonation happens when you create a cursor, and there is a constructor parameter through which you can pass the impersonation configuration.
I managed to correctly (at least to my understanding) implement impersonation for the SQL lab part.
In the sql_lab.py module you have to add the following lines to the execute_sql_statements method:
with closing(engine.raw_connection()) as conn:
    # closing the connection closes the cursor as well
    cursor = conn.cursor(**database.cursor_kwargs)
where cursor_kwargs is defined in db_engine_specs/impala.py as follows:
@classmethod
def get_configuration_for_impersonation(cls, uri, impersonate_user, username):
    logger.info(
        'Passing Impala execution_options.cursor_configuration for impersonation')
    return {'execution_options': {
        'cursor_configuration': {'impala.doas.user': username}}}

@classmethod
def get_cursor_configuration_for_impersonation(cls, uri, impersonate_user,
                                               username):
    logger.debug('Passing Impala cursor configuration for impersonation')
    return {'configuration': {'impala.doas.user': username}}
Finally, in models/core.py you have to add the following bit in the get_sqla_engine def:
params = extra.get("engine_params", {})  # that was already there, just for you to find the right line
self.cursor_kwargs = self.db_engine_spec.get_cursor_configuration_for_impersonation(
    str(url), self.impersonate_user, effective_username)  # this is the line I added
...
params.update(self.get_encrypted_extra())  # already there

# new stuff
configuration = {}
configuration.update(
    self.db_engine_spec.get_configuration_for_impersonation(
        str(url),
        self.impersonate_user,
        effective_username))
if configuration:
    params.update(configuration)
As you can see, I just shamelessly pasted the code from the PR. However, as I already said, this only works for SQL Lab: the dashboards use an entirely different way of querying Impala that I have not yet figured out, and in that path there isn't anything like this:
with closing(engine.raw_connection()) as conn:
    # closing the connection closes the cursor as well
    cursor = conn.cursor(**database.cursor_kwargs)
My gut (and debugging) feeling is that you first need to understand the SQLAlchemy part and extend a new ImpalaEngine class that uses a custom cursor with the impersonation configuration. Or something like that; it is not as simple (if we want to call this simple) as the sql_lab part. So the trick is to find out where the query is executed and create a cursor with the impersonation configuration. Easy, isn't it?
I hope this sheds some light for you and the others who have this issue. Let me know if you find another way to solve it, or if this comment was useful.
Update: something really useful
A colleague of mine successfully implemented impersonation with Impala without touching any Superset-related code, instead working directly with the impyla lib. A PR was opened with the code to change. You can apply the patch directly in the impyla source used by Superset; you have to edit both dbapi.py and hiveserver2.py.
As a reminder: we are still testing this and we do not know if it works with different accounts using the same superset instance.

Get stacktrace of errors from PyRFC call?

Up to now I only get an error message if something inside my SAP RFC function goes wrong:
pyrfc._exception.ABAPRuntimeError: RFC_ABAP_MESSAGE (rc=4): key=No authorization,
message=No authorization [MSG: class=00, type=E, number=001, v1-4:=No authorization;;;]
It would speed up development a lot if I could get a stack trace of the ABAP function. Is there a way to get a stack trace like, for example, in Python?
Related: https://softwarerecs.stackexchange.com/questions/52350/sentry-event-from-exception-to-html
Sentry uses a particular JSON format to represent a stack trace and the contents of the local variables. The link above contains an example.
A stack trace inside ABAP can be obtained with the class cl_abap_get_call_stack.
Local variables are not included in the stack trace returned by cl_abap_get_call_stack, but you could use a log-point to monitor local variables together with the stack trace. Log-points can be created, changed and viewed in transaction SAAB.
An example code snippet:
DATA(formatted_stack) =
  cl_abap_get_call_stack=>format_call_stack_with_struct(
    cl_abap_get_call_stack=>get_call_stack( ) ).

LOG-POINT ID my_log_point
  FIELDS formatted_stack local_variable1 local_variable2.
For the authorization error, please check transaction SU53. If you see the red authorization object S_RFC, it means you are not allowed to call the function module at all!
With the ABAP 7.53 release, a structure called EPP (Extended Passport) was introduced.
It seems to do what you want, i.e. show the trace of the called system. I put "seems to" because I have no 7.53+ system at hand, so I cannot check it in practice.
From the description it should do what you want:
An Extended Passport (EPP) is a data structure that can be sent from a client to a server and is used to analyze call stacks.
Extended Passport can be used by frameworks and analysis tools to track external call stacks in communication between clients and servers beyond system boundaries. The values of the EPP components can be saved to log files and used for monitoring. One example of this is short dumps, which all display the most important EPP components.
The DEMO_EPP program gives the following usage pattern for EPP:
cl_demo_epp=>init( ).

"this program
cl_demo_epp=>append( ).

"Calling RFC to remote instance
CALL FUNCTION 'DEMO_RFM_EPP_1' DESTINATION instance.

"New SAP LUW
CALL FUNCTION 'DEMO_UPDATE_DELETE' IN UPDATE TASK
  EXPORTING
    values = VALUE demo_update_tab( ).
COMMIT WORK.
cl_demo_epp=>append( ).

cl_demo_output=>new(
  )->begin_section( `Extended Passport (EPP)`
  )->display( name = 'EPP Trace'
              data = cl_demo_epp=>get( ) ).

How to build Query Parameter using Spring Traverson

I have a Spring Data REST web service with QueryDSL web support enabled, so I can query any of the fields directly, like below:
http://localhost:9000/api/prod1007?cinfo1=0126486035
I was using Traverson to access this service, but Traverson does not generate the query parameter as above. Below is my code (I have tried both withTemplateParameters() and withParameters() at the Hop level).
Code:
Map<String, Object> parameters = new HashMap<>();
parameters.put("cinfo1", "0127498374");

PagedResources<Tbpinstance> items = traverson
        .follow(Hop.rel("prod1007"))
        .withTemplateParameters(parameters)
        .toObject(resourceParameterizedTypeReference);
Any help is much appreciated. Thanks!
Traverson needs to know where to put those parameters. They could be path parameters, or they could be query parameters. Furthermore, Traverson navigates the service from the root, so the parameters might need to be inserted somewhere in the middle, and not in the final step only.
For these reasons the server needs to clearly tell how to use the parameters. Traverson needs a HATEOAS-"directory" for the service. When Traverson HTTP GETs the http://localhost:9000/api document, it needs to contain a link similar to this:
"_links" : {
"product" : {
"href" : "http://localhost:9000/api/prod1007{?cinfo1}",
"templated" : true
},
}
Now it knows that the cinfo1 parameter is a query parameter and will be able to put it into its place.
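For reference, such a templated link can be produced by hand with plain spring-hateoas on the server side; a minimal sketch (the apiRoot method and the "product" rel are assumptions chosen to match the example above):

import org.springframework.hateoas.Link;
import org.springframework.hateoas.ResourceSupport;

// Sketch of the API root resource: it advertises a templated "product" link,
// so clients such as Traverson can tell that cinfo1 is a query parameter
public ResourceSupport apiRoot() {
    ResourceSupport root = new ResourceSupport();
    root.add(new Link("http://localhost:9000/api/prod1007{?cinfo1}", "product"));
    return root;
}

Because the href contains a template variable, spring-hateoas marks the link with "templated" : true when serializing it.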
@ZeroOne, you are entirely correct: that is what the response from the server should look like. Currently spring-hateoas does not support responses that look like that (I expect it will in the future, as I have seen comments by Oliver Gierke indicating that spring-hateoas is going through a major upgrade).
At the time of writing, to generate responses from the server as you describe, we have used the spring-hateoas-ext project mentioned in https://github.com/spring-projects/spring-hateoas/issues/169. You can find the code at https://github.com/dschulten/hydra-java#affordancebuilder-for-rich-hyperlinks-from-v-0-2-0.
It is a drop-in replacement for the spring-hateoas ControllerLinkBuilder.
Here is the Maven dependency we use (but check for the latest version):
<!-- Drop-in replacement for the spring-hateoas ControllerLinkBuilder -->
<dependency>
    <groupId>de.escalon.hypermedia</groupId>
    <artifactId>spring-hateoas-ext</artifactId>
    <version>0.3.0-beta6</version>
</dependency>
Here's the import we use in our ResourceAssemblers.
import static de.escalon.hypermedia.spring.AffordanceBuilder.*;
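With that import in place, links are built just as with ControllerLinkBuilder. A sketch, assuming (per the hydra-java README) that an unset @RequestParam argument is kept as a URI template variable; ProductController, getProduct and resource are hypothetical names:

// Passing null for the cinfo1 request parameter leaves it in the template,
// producing a link like http://localhost:9000/api/prod1007{?cinfo1}
Link productLink = linkTo(methodOn(ProductController.class).getProduct(null))
        .withRel("product");
resource.add(productLink);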

How to get the prepared statement query after binding

I am using a prepared statement like this:
PreparedStatement pstmt = getConnection().prepareStatement(INSERT_QUERY);
pstmt.setInt(1, userDetails.getUsersId());
log.debug("SQL for inserting child transactions " + pstmt.toString());
I want to log the exact SQL statement after binding into the log file, but this is not working: it logs something like SQLServerPreparedStatement:7. I searched on the internet but did not find a satisfactory answer. Any help will be appreciated.
You can try to enable the JDBC internal logging by setting a PrintWriter on the DriverManager. Log4j2 provides an IO Streams module, so you can include the output in your normal log file. Sample code:
import java.io.PrintWriter;
import java.sql.DriverManager;
import org.apache.logging.log4j.Level;
import org.apache.logging.log4j.io.IoBuilder;

PrintWriter logger = IoBuilder.forLogger(DriverManager.class)
        .setLevel(Level.DEBUG)
        .buildPrintWriter();
DriverManager.setLogWriter(logger);
In addition, individual JDBC drivers often provide proprietary logging mechanisms, usually enabled with a system property. You will need to consult the documentation for your specific driver for the details.
Alternatively, you can write a utility function which replaces each '?' placeholder with the corresponding bind variable, purely for logging purposes.
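A minimal sketch of such a helper (the names are illustrative; it does no SQL escaping and ignores '?' characters inside string literals, so use it for logging only):

import java.util.Arrays;
import java.util.List;

public final class SqlLogUtil {

    // Replaces each '?' in order with the corresponding bound value.
    // Numbers are inlined as-is; everything else is wrapped in quotes.
    public static String render(String sql, List<?> params) {
        StringBuilder sb = new StringBuilder();
        int i = 0;
        for (char c : sql.toCharArray()) {
            if (c == '?' && i < params.size()) {
                Object p = params.get(i++);
                sb.append(p instanceof Number ? p.toString() : "'" + p + "'");
            } else {
                sb.append(c);
            }
        }
        return sb.toString();
    }
}

Usage, matching the snippet in the question (you keep the bound values in a list yourself, since JDBC does not expose them back):
log.debug(SqlLogUtil.render(INSERT_QUERY, Arrays.asList(userDetails.getUsersId())));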

Determine request Uri from WCF Data Services LINQ query for FirstOrDefault against Azure without executing it?

Problem
I would like to trace the Uri that will be generated by a LINQ query executed against a Microsoft.WindowsAzure.StorageClient.TableServiceContext object. TableServiceContext just extends System.Data.Services.Client.DataServiceContext with a couple of properties.
The issue I am having is that the query executes fine against our Azure Table Storage instance when we run the web role on a dev machine in debug mode (we are connecting to Azure storage in the cloud not using Dev Storage). I can get the resulting query Uri using Fiddler or just hovering over the statement in the debugger.
However, when we deploy the web role to Azure, the query fails against the exact same Azure Table Storage source with a ResourceNotFound DataServiceClientException. We have had ResourceNotFound errors before that dealt with the behavior of FirstOrDefault() on empty tables. This is not the problem here.
As one approach to the problem, I wanted to compare the query Uri that is being generated when the web role is deployed versus when it is running on a dev machine.
Question
Does anyone know a way to get the query Uri for the query that will be sent when the FirstOrDefault() method is called? I know that you can call ToString() on the IQueryable returned from the TableServiceContext, but my concern is that when FirstOrDefault() is called the Uri might be further optimized, and the ToString() of the IQueryable might not be what is ultimately sent to the server.
If someone has another approach to the problem I am open to suggestions. It seems to be a general problem with LINQ when trying to determine what will happen when the expression tree is finally evaluated. I am open to suggestions here as well because my LINQ skills could use some improvement.
Sample Code
public void AddSomething(string ProjectID, string Username) {
    TableServiceContext context = new TableServiceContext();
    var qry = context.Somethings.Where(m => m.RowKey == Username
                                         && m.PartitionKey == ProjectID);
    System.Diagnostics.Trace.TraceInformation(qry.ToString());
    // ^ Here I would like to trace the Uri that will be generated
    // and sent to the server when the qry.FirstOrDefault() call below is executed.
    if (qry.FirstOrDefault() == null) {
        // ^ This statement generates an error when the web role is running
        // in the fabric
        ...
    }
}
Edit: Update and Answer
Steve provided the right answer. Our problem was exactly as described in the blog post he links, which covers an issue with PartitionKey/RowKey ordering in single-entity queries that was fixed by an update to the Azure OS. This explains the discrepancy between our dev machines and the web role deployed to Azure.
When I indicated we had dealt with the ResourceNotFound issue before in our existence checks, we had dealt with it in two ways in our code: one way was using exception handling to deal with the ResourceNotFound error; the other was to put the RowKey first in the LINQ query (as some MS people had indicated was appropriate).
It turns out we have several places where the RowKey was first instead of using the exception handling. We will address this by refactoring our code to target .NET 4 and setting the IgnoreResourceNotFoundException = true property on the TableServiceContext.
Lesson learned (more than once): don't depend on quirky undocumented behavior.
Aside
We were able to get the query Uris. They did turn out to be different (as the blog post indicated they would be). Here are the results:
Query Uri from Dev Fabric:
https://ourproject.table.core.windows.net/Somethings()?$filter=(RowKey eq 'test19@gmail.com') and (PartitionKey eq '41e0c1ae-e74d-458e-8a93-d2972d9ea53c')
Query Uri from Azure Fabric:
https://ourproject.table.core.windows.net/Somethings(RowKey='test19@gmail.com',PartitionKey='41e0c1ae-e74d-458e-8a93-d2972d9ea53c')
I can do one better... I think I know what the problem is. :)
See http://blogs.msdn.com/b/windowsazurestorage/archive/2010/07/26/how-wcf-data-service-changes-in-os-1-4-affects-windows-azure-table-clients.aspx.
Specifically, it used to be the case (in previous Guest OS builds) that writing the query as you did, with the RowKey predicate before the PartitionKey predicate, resulted in a filter query, while the reverse (PartitionKey preceding RowKey) resulted in the kind of single-entity query that raises an exception when the result set is empty.
I think the right fix for you (as indicated in the above blog post) is to set the IgnoreResourceNotFoundException to true on your context.