Get stacktrace of errors from PyRFC call? - abap

Up to now I only get an error message if something goes wrong inside my SAP RFC function module:
pyrfc._exception.ABAPRuntimeError: RFC_ABAP_MESSAGE (rc=4): key=No authorization,
message=No authorization [MSG: class=00, type=E, number=001, v1-4:=No authorization;;;]
It would speed up development a lot if I could get a stack trace of the ABAP function. Is there a way to get a stack trace like, for example, in Python?
Related: https://softwarerecs.stackexchange.com/questions/52350/sentry-event-from-exception-to-html
Sentry uses a particular JSON format to represent a stack trace and the contents of the local variables; the link above contains an example.
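For reference, this is roughly how the call and the error handling look on the Python side today (a minimal sketch; the connection parameters and the function module name are placeholders, not my real system):
from pyrfc import Connection, ABAPRuntimeError

conn = Connection(ashost='10.0.0.1', sysnr='00', client='100',
                  user='demo', passwd='secret')
try:
    result = conn.call('Z_SOME_FUNCTION_MODULE')
except ABAPRuntimeError as exc:
    # only the short message fields of the ABAP error are exposed here;
    # there is no ABAP call stack attached to the exception
    print(exc.key, exc.message, exc.msg_class, exc.msg_type, exc.msg_number)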

The call stack inside ABAP can be obtained with the class cl_abap_get_call_stack.
Local variables are not included in the stack trace returned by this class.
But you could use a log-point to monitor local variables together with the stack trace. Log-points can be created, changed and viewed in transaction SAAB.
An example code snippet:
" get the current call stack and format it for logging
DATA(formatted_stack) =
  cl_abap_get_call_stack=>format_call_stack_with_struct(
    cl_abap_get_call_stack=>get_call_stack( ) ).
" write the formatted stack plus selected local variables to the log-point
LOG-POINT ID my_log_point
  FIELDS formatted_stack
         local_variable1 local_variable2.
For the authorization error, check transaction SU53.
If the authorization object S_RFC shows up in red there, you are not allowed to call the function module at all!

The ABAP 7.53 release introduced a structure called EPP (Extended Passport).
It seems to do something like what you want, i.e. show a trace of the called system. I say "seems to" because I have no 7.53+ system at hand, so I cannot check it in practice.
From the description it should do what you want:
An Extended Passport (EPP) is a data structure that can be sent from a client to a server and is used to analyze call stacks
Extended Passport can be used by frameworks and analysis tools to track external call stacks in communication between clients and servers beyond system boundaries. The values of the EPP components can be saved to log files and used for monitoring. One example of this are short dumps, which all display the most important EPP components.
The demo program DEMO_EPP shows the following usage pattern of EPP:
cl_demo_epp=>init( ).

"this program
cl_demo_epp=>append( ).

"Calling RFC to remote instance
CALL FUNCTION 'DEMO_RFM_EPP_1' DESTINATION instance.

"New SAP LUW
CALL FUNCTION 'DEMO_UPDATE_DELETE' IN UPDATE TASK
  EXPORTING
    values = VALUE demo_update_tab( ).
COMMIT WORK.

cl_demo_epp=>append( ).

cl_demo_output=>new(
  )->begin_section( `Extended Passport (EPP)`
  )->display( name = 'EPP Trace'
              data = cl_demo_epp=>get( ) ).

The value of the variable on the right hand side of an assignment has been updated after the left hand side variable's value changed afterwards [duplicate]

Does Karate support a feature where you can define a variable in one scenario and reuse it in other scenarios in the same feature file? I tried doing that but get an error. What's the best way to reuse variables within the same feature file?
Scenario: Get the request Id
* url baseUrl
Given path 'eam'
When method get
Then status 200
And def reqId = response.teams[0].resourceRequestId
Scenario: Use the above generated Id
* url baseUrl
* print 'From the previous Scenario: ' + reqId
Error:
Caused by: javax.script.ScriptException: ReferenceError: "reqId" is not defined in <eval> at line number 1
Use a Background: section. Here is an example.
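A minimal sketch of that approach, reusing the baseUrl, the 'eam' path and the response shape from the question (the callonce file name is just a placeholder):
Feature: reuse a value defined in the Background

Background:
* url baseUrl
Given path 'eam'
When method get
Then status 200
* def reqId = response.teams[0].resourceRequestId
# to run the set-up only once for the whole feature instead of per Scenario:
# * def result = callonce read('get-request-id.feature')

Scenario: use the id fetched in the Background
* print 'reqId from the Background:', reqId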
EDIT: a variable defined in the Background: will be re-initialized for every Scenario, which is standard testing-framework "set up" behavior. You can use hooks such as callonce if you want the initialization to happen only once.
If you are trying to modify a variable in one Scenario and expect it to still hold that modified value when the next Scenario starts, you have misunderstood the concept of a Scenario. Just combine your steps into one Scenario, because think about it: that is the "flow" you are trying to test.
Each Scenario should be able to run stand-alone. In the future the execution order of Scenario-s could even be random or run in parallel.
Another way to explain this is - if you comment out one Scenario other ones should continue to work.
Please don't think of the Scenario as a way to "document" the important parts of your test. You can always use comments (e.g. # foo bar). Some teams assume that each HTTP "end point" should live in a separate Scenario - but this is absolutely not recommended. Look at the Hello World example itself: it deliberately shows 2 calls, a POST and a GET!
You can easily re-use code using call so you should not be worrying about whether code-duplication will be an issue.
Also - it is fine to have some code duplication, if it makes the flow easier to read. See this answer for details - and also read this article by Google.
EDIT: if you would like to read another answer that answers a similar question: https://stackoverflow.com/a/59433600/143475

How to display detailed Hibernate log?

I have a Spring Boot application with multiple endpoints and database operations, and sometimes it is difficult to find where a specific SQL call comes from.
Is it possible to make the Hibernate logs detailed enough to show the origin of the SQL?
Now the output is like:
2020-01-15 16:40:23.059 DEBUG 24348 --- [nio-8083-exec-2] org.hibernate.SQL : select id, name from ......
But I want it to show the originating Java class instead of "org.hibernate.SQL".
Thank you.
You can enable the logging of the SQL statements:
log4j.logger.org.hibernate.SQL=DEBUG, myLogger
and also the actual parameters passed to the queries:
log4j.logger.org.hibernate.type=TRACE, eclLogger
The log originator will always be the Hibernate logger (org.hibernate.SQL). I guess you want to log your DAO method instead? You will need to add that logging in your own code (and in the case of exceptions you can of course log the entire stack trace and see the invocation chain).
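If you are on Spring Boot with the default Logback setup, the same two switches can be set in application.properties (a minimal sketch; in Hibernate 5 the bound parameters are logged under the org.hibernate.type.descriptor.sql category):
logging.level.org.hibernate.SQL=DEBUG
logging.level.org.hibernate.type.descriptor.sql=TRACE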

"Could not get Timeline data" when using Timeline Visualization with Comma IDE

After implementing the answer to this question on how to set up a script for time visualization in this project (which uses a small extension to the published Log::Timeline that allows me to set the logging file from the program itself), I still get the same error
12:18 Timeline connection error: Could not get timeline data: java.net.ConnectException: Conexión rehusada
(which means "connection refused"). I've also checked the created files, and they are empty; nothing gets written to them. I'm using this to log:
class Events does Log::Timeline::Event['ConcurrentEA', 'App', 'Log'] { }
(as per the README.md file). It's probably the case that there's no such thing as a default implementation, as shown in the tests, but, in that case, what would be the correct way of making it print to the file and also connect to the timeline visualizer?
If you want to use the timeline visualization, leave the defaults for logging, commenting out any modification of the standard logging output. In my case:
#BEGIN {
#    PROCESS::<$LOG-TIMELINE-OUTPUT>
#        = Log::Timeline::Output::JSONLines.new(
#              path => log-file
#          )
#}
I am not really sure whether this would have happened if the output file had been defined via an environment variable, but in any case it is better to be on the safe side. You can turn the output file back on when you eventually move the script into production.
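If you do want file output later, it may be safer to let Log::Timeline pick the output itself through its environment variable instead of overriding PROCESS::<$LOG-TIMELINE-OUTPUT>; a sketch, assuming the LOG_TIMELINE_JSON_LINES variable documented by the module (the script and file names are placeholders):
LOG_TIMELINE_JSON_LINES=app-events.log raku app.raku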

Biztalk 2009: SQL to WCF-SQL adapter migration; orchestration not receiving message?

As the title says, I have a receive location that currently uses the SQL adapter (receive port) to call (poll?) a stored procedure. The stored proc returns a FOR XML result.
The received message then activates an orchestration which reads data from the message into some variables (expression shape).
Orchestration looks like:
LongScope[ AtomicScope[ Receive location -> Expression ] ][Error handling]
I tried a direct migration to WCF-SQL with XmlPolling as the InboundOperationType, but it throws a null exception during the variable assignment (I assume).
Additional detail:
I caught the message from the receive location by filtering on pipelineName with a send port. There is a slight difference between the messages retrieved by the SQL and the WCF-SQL adapter:
SQL:
<rootNode xmlns="namespace"><row data1="data1" data2="data2" /></rootNode>
WCF-SQL:
<rootNode xmlns="namespace"><row data1="data1" data2="data2" xmlns="" /></rootNode>
This should not matter, if this MSDN post is correct.
I also went into the orchestration debugger. The weird thing is that when using the SQL adapter the message also shows as null, but the variables are assigned without a problem. I also tried adding a send port directly after the receive port to dump the message; nothing came out.
I would appreciate any info/suggestion/solution.
Do tell me if I'm missing any info.
Irrelevant Info:
As of this post the receive port doesn't even trigger anymore. I don't know why. Rebooting the PC.
Also, I suspect BizTalk gave me bruxism and led to me needing 6 fillings.
The difference between the XML from the SQL and the WCF-SQL adapter has nothing to do with the MSDN post you are linking to.
In the 2nd XML (WCF-SQL adapter), the row node does not have a namespace (xmlns="" removes the default namespace). In the 1st XML (SQL adapter), the row node inherits the default namespace "namespace" from its parent 'rootNode'.
Regarding the Receive Port not triggering anymore:
Are you sure your Host Instance(s) are still running?
My solution:
I added xmlns='namespace' as a piece of 'data' in the stored procedure's FOR XML output.
The adapter recognized it and removed it (since it was the same as the parent node's namespace), allowing me to keep using the old schema.
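A sketch of how that can look in the polling statement; this uses the built-in WITH XMLNAMESPACES clause to put the same default namespace on the row elements instead of hand-crafting the xmlns attribute (table, column and namespace names are placeholders):
WITH XMLNAMESPACES (DEFAULT 'namespace')
SELECT data1, data2
FROM dbo.SourceTable
FOR XML PATH('row'), ROOT('rootNode');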
Filler:
I generated a schema from the output of the WCF-SQL adapter,
but I couldn't replace my old one with it, since the expression shape would not recognize its child elements (var = messageObject.childElement).
So I created a map to map the new schema back to the old one.
That didn't work either, because both schemas shared the same target namespace, and BizTalk complained at runtime that it couldn't decide which schema to use.