How to have a run script with display_trace() in it for debug purposes - vlab

I have a typical sort of "run" script (below).
When I want to debug the scenario, I use the VLAB GUI, and in this instance I want the trace window open, so I have display_trace() at the end of the run script.
However, often I just want to run this scenario in batch as part of a regression test. The problem is that VLAB throws an exception on display_trace() when in batch mode.
I don't really like the idea of
try:
    display_trace()
except:
    pass
(or even catching the specific error that is thrown) ... it just feels "messy". What if there is a valid exception raised by display_trace() that I then miss?
Is there some way not to call display_trace() at all if I'm in batch mode?
run script:
from vlab import *
import os
image_path = os.path.join('o5e', 'bin', 'o5e_dbg.elf')
load('ecu.sim', args=['--testbench=testbench_o5e', '--image=%s' % image_path] + __args__)
# First set up MMU
add_sw_execute_breakpoint(get_address("BamExit"))
run(blocking=True)
# Then we can set breakpoints in user code space
add_sw_execute_breakpoint(get_address("init_variables"))
run(blocking=True)
# Trace stuff
vtf_sink = trace.sink.vtf("o5e.vtf")
add_trace("+src:ecu.core_system.Core0.InstrTraceMsg", sink=vtf_sink)
add_trace(get_ports("ecu.core_system.Core0", kind="bus"), sink=vtf_sink)
display_trace(vtf_sink)

The "interface_mode" session property can be used to query whether VLAB was launched in "graphical" mode, where the VLAB IDE is displayed, or in "text" mode.
You could use this property to conditionally call vlab.display_trace():
in_graphical_mode = vlab.get_properties()["interface_mode"] == "graphical"
if in_graphical_mode:
    vlab.display_trace()
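Applied to the run script above, the tail end might look like this (a sketch; it assumes an explicit import vlab alongside the star import so the vlab. prefix resolves, and it reuses the vtf_sink from earlier):
import vlab

# ... load, breakpoints, and add_trace calls as above ...

# Only open the trace window when the VLAB IDE is actually running;
# in batch ("text") mode this branch is skipped, so no exception is raised.
if vlab.get_properties()["interface_mode"] == "graphical":
    vlab.display_trace(vtf_sink)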

Related

Why am I getting "java.lang.NoClassDefFoundError: Could not initialize class io.mockk.impl.JvmMockKGateway" when using quarkusDev task in IntelliJ?

I am using Gradle 7.5, Quarkus 2.12.3 and mockk 1.13.3. When I run the quarkusDev task from the command line and then start continuous testing (by pressing r), all tests pass OK.
However, when I do the same from IntelliJ (as a Gradle run configuration), all tests fail with the error:
java.lang.NoClassDefFoundError: Could not initialize class io.mockk.impl.JvmMockKGateway
How can I fix that?
Masked thrown exception
After much debugging I found the problem. The thrown exception actually originates in HotSpotVirtualMachine.java and is thrown while attaching ByteBuddy as a Java agent. Here is the relevant code:
// The tool should be a different VM to the target. This check will
// eventually be enforced by the target VM.
if (!ALLOW_ATTACH_SELF && (pid == 0 || pid == CURRENT_PID)) {
    throw new IOException("Can not attach to current VM");
}
Turning the check off
So the check can be turned off by setting the ALLOW_ATTACH_SELF constant to true. The constant is set from a system property named jdk.attach.allowAttachSelf:
String s = VM.getSavedProperty("jdk.attach.allowAttachSelf");
ALLOW_ATTACH_SELF = "".equals(s) || Boolean.parseBoolean(s);
So, in my case, I simply added the following JVM argument to my Gradle file, and the tests started to pass:
tasks.quarkusDev {
    jvmArgs += "-Djdk.attach.allowAttachSelf"
}

Python: If error occurs anywhere, do specific line of code

I have a script I'm trying to write to process a large amount of data. There is, of course, potential for errors. In the script I need to connect to databases. If the script encounters an error, the code never reaches the point where the connection to the database is terminated. I'd like to have something in my Python code that will recognize that an error has occurred, no matter where, and at the very least close those databases. Does something like this exist? I know I can use try/except, but that only works if I know exactly where the error could occur. I'm basically looking for a catch-all that closes my databases in the event an error occurs in a location I didn't anticipate.
To run certain cleanup code even if there is an error, use the finally block:
try:
    # do stuff, possible exception
except:
    # run this if exception
finally:
    # always run this, even if exception
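For the database case specifically, a minimal sketch (assuming the standard-library sqlite3 module and a hypothetical process_records function; substitute your own driver and processing code):
import sqlite3

conn = sqlite3.connect("data.db")
try:
    process_records(conn)  # hypothetical processing step; may raise anywhere
finally:
    conn.close()  # always runs, so the connection is closed even on error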
Reference: https://docs.python.org/3/tutorial/errors.html#defining-clean-up-actions

WebSphere wsadmin testConnection error message

I'm trying to write a script to test all DataSources of a WebSphere Cell/Node/Cluster. While this is possible from the Admin Console, a script is better for certain audiences.
So I found the following article from IBM, https://www.ibm.com/support/knowledgecenter/en/SSAW57_8.5.5/com.ibm.websphere.nd.multiplatform.doc/ae/txml_testconnection.html, which looks promising as it describes exactly what I need.
After having a basic script like:
ds_ids = AdminConfig.list("DataSource").splitlines()
for ds_id in ds_ids:
    AdminControl.testConnection(ds_id)
I experienced some undocumented behavior. Contrary to the article above, the testConnection function does not always return a String, but may also throw an exception.
So I simply use a try/except block:
import sys

try:
    AdminControl.testConnection(ds_id)
except:  # it actually is a com.ibm.ws.scripting.ScriptingException
    exc_type, exc_value, exc_traceback = sys.exc_info()
Now when I print exc_value, this is what one gets:
com.ibm.ws.scripting.ScriptingException: com.ibm.websphere.management.exception.AdminException: javax.management.MBeanException: Exception thrown in RequiredModelMBean while trying to invoke operation testConnection
Now this error message is always the same no matter what's wrong. I tested authentication errors, missing WebSphere Variables and missing driver classes.
While the Admin Console prints reasonable messages, the script keeps printing the same meaningless message.
The really weird thing is that, as long as I don't catch the exception and the script just exits on the error, a descriptive error message is shown.
Accessing the Java exception's cause via exc_value.getCause() gives None.
I've also had a look at the DataSource MBeans, but as they only exist if the servers are started, I quickly gave up on them.
I hope someone knows how to access the error messages I see when not catching the Exception.
Thanks in advance.
After all the research and testing, AdminControl seems to be nothing more than a convenience facade for some of the commonly used MBeans.
So I tried issuing the Test Connection Service (like in the Java example at https://www.ibm.com/support/knowledgecenter/en/SSEQTP_8.5.5/com.ibm.websphere.base.doc/ae/cdat_testcon.html) directly:
from com.ibm.ws.scripting import ScriptingException

ds_id = AdminConfig.list("DataSource").splitlines()[0]
# other queries may be 'process=server1' or 'process=dmgr'
ds_cfg_helpers = __wat.AdminControl.queryNames("WebSphere:process=nodeagent,type=DataSourceCfgHelper,*").splitlines()
try:
    # invoke MBean method directly
    warning_cnt = __wat.AdminControl.invoke(ds_cfg_helpers[0], "testConnection", ds_id)
    if warning_cnt == "0":
        print "success"
    else:
        print "%s warning(s)" % warning_cnt
except ScriptingException as exc:
    # get to the root of all evil, ignoring exception wrappers
    exc_cause = exc
    while exc_cause.getCause():
        exc_cause = exc_cause.getCause()
    print exc_cause
This works the way I hoped for. The downside is that the code gets much more complicated if one needs to test DataSources that are defined at all kinds of scopes (Cell/Node/Cluster/Server/Application).
I didn't need that, so I left it out, but I still hope the example is useful to others too.
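For reference, here is an untested sketch that applies the same cause-unwrapping to every DataSource in the cell (it reuses the nodeagent query and the ScriptingException import from above; the process= part would need adjusting for other topologies):
helper = __wat.AdminControl.queryNames("WebSphere:process=nodeagent,type=DataSourceCfgHelper,*").splitlines()[0]

for ds_id in AdminConfig.list("DataSource").splitlines():
    try:
        warning_cnt = __wat.AdminControl.invoke(helper, "testConnection", ds_id)
        print "%s: OK (%s warning(s))" % (ds_id, warning_cnt)
    except ScriptingException as exc:
        # walk the wrapper chain down to the root cause
        exc_cause = exc
        while exc_cause.getCause():
            exc_cause = exc_cause.getCause()
        print "%s: FAILED - %s" % (ds_id, exc_cause)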

Robotframework - get failing keyword / stack trace of failure

I have a keyword called "Debug Teardown" which prints the test status and then runs the DebugLibrary Debug keyword if the test has failed.
I would like to be able to log to console which keyword has caused the failure, so I can more effectively debug my test.
Is it possible to get the stack trace or most recent test keyword, and log it to the console?
Here is my Debug Teardown keyword:
Debug Teardown
    Run Keyword If Test Failed    Log    ${TEST STATUS}: ${TEST MESSAGE}    ERROR
    Run Keyword If Test Failed    Debug
You can get a bit more information if you create a listener and also set the log level to DEBUG. Inside the listener you can save the results of log commands, and then when a keyword fails you can print it out or do whatever else you want.
For example, here's a listener that will print to stdout the last log message when a keyword fails:
from __future__ import print_function

class my_listener():
    ROBOT_LISTENER_API_VERSION = 2

    def __init__(self):
        # register this library instance as its own listener
        self.ROBOT_LIBRARY_LISTENER = self
        self.last_log = None

    def _log_message(self, message):
        # remember the most recent log message
        self.last_log = message

    def _end_keyword(self, name, attrs):
        # on failure, print the last message that was logged
        if attrs['status'] == 'FAIL':
            print("\n******\n", self.last_log['message'])
You would use it by importing the listener like a normal library, and also setting the log level to DEBUG (otherwise you'll get the error but no stack trace).
Example:
*** Settings ***
Library           my_listener
Suite Setup       set log level    DEBUG

*** Test cases ***
Example
    some keyword
You might also be able to utilize Set Suite Variable to update a "global" variable as you go; the last value set would be the one that failed.

how to test whether program exits or not

I want to test the following class:
from random import randint

class End(object):
    def __init__(self):
        self.quips = ['You dead', 'You broke everything you can', 'You turn you head off']

    def play(self):
        print self.quips[randint(0, len(self.quips)-1)]
        exit(1)
I want to test it with nosetests so I can see that the class exits correctly with code 1. I tried different variants, but nosetests returns an error like:
File "C:\Python27\lib\site.py", line 372, in __call__
raise SystemExit(code)
SystemExit: 1
----------------------------------------------------------------------
Ran 1 test in 5.297s
FAILED (errors=1)
Of course I can assume that it exits, but I want the test to return an OK status, not an error. Sorry if my question is stupid; I'm very new to Python and this is my very first time trying to test something.
I would recommend using the assertRaises context manager. Here is an example test that ensures that the play() method exits:
import unittest
import end

class TestEnd(unittest.TestCase):
    def testPlayExits(self):
        """Test that the play method exits."""
        ender = end.End()
        with self.assertRaises(SystemExit) as exitexception:
            ender.play()
        # Check for the requested exit code.
        self.assertEqual(exitexception.exception.code, 1)
As you can see in the traceback, sys.exit()* raises an exception called SystemExit when you call it. So that's what you want to test for with nose's assert_raises(). If you are writing tests with unittest2.TestCase, that's self.assertRaises.
*Actually you used the plain built-in exit(), but you really should use sys.exit() in a program.
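For completeness, the same check with nose's function-style asserts might look like this (a sketch assuming the class lives in a module named end, as in the unittest example above):
from nose.tools import assert_raises

import end

def test_play_exits():
    ender = end.End()
    # assert_raises works as a context manager, just like assertRaises
    with assert_raises(SystemExit) as cm:
        ender.play()
    # the SystemExit instance carries the exit code passed to exit()
    assert cm.exception.code == 1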