Detect whether test has failed within fixture

I am debugging an intermittent test failure. For this purpose I want to dump a lot of debug information when a test fails. Dumping the debug data is quite a slow process that produces a lot of output, so I do not want to do it for every test.
I am using pytest, and a yield autouse fixture should work great:
@pytest.yield_fixture(scope="function", autouse=True)
def dump_on_failure(request):
    prepare_debug_dump()
    yield
    if test_failed(request):
        debug_dump()
The problem is that I can't figure out how to detect whether the test failed or not. There was a question about this already, and there is even a note on the pytest website:
if request.node.rep_setup.failed:
    print("setting up a test failed!", request.node.nodeid)
elif request.node.rep_setup.passed:
    if request.node.rep_call.failed:
        print("executing test failed", request.node.nodeid)
Unfortunately this code does not work anymore. There are no rep_setup and rep_call attributes on the node object. I tried to dig through the request and node objects, but no luck.
Does anybody know how to detect whether a test failed?

"There are no rep_setup and rep_calls symbols in node object."

No, the rep_setup and rep_call attributes are still there.
Add this code into your root conftest.py. It will check pass/fail for every test function.
import pytest

@pytest.mark.tryfirst
def pytest_runtest_makereport(item, call, __multicall__):
    # Uses the legacy __multicall__ hook API; newer pytest versions
    # use the hookwrapper style shown in the next answer.
    rep = __multicall__.execute()
    setattr(item, "rep_" + rep.when, rep)
    return rep

@pytest.fixture(scope='function', autouse=True)
def test_debug_log(request):
    def test_result():
        if request.node.rep_setup.failed:
            print("setting up a test failed!", request.node.nodeid)
        elif request.node.rep_setup.passed:
            if request.node.rep_call.failed:
                print("executing test failed", request.node.nodeid)
    request.addfinalizer(test_result)

I use something like this, maybe it can be useful in your case:
@pytest.hookimpl(tryfirst=True, hookwrapper=True)
def pytest_runtest_makereport(item):
    outcome = yield
    rep = outcome.get_result()
    if rep.failed:
        if rep.when == "setup":
            print("Setup failed!")
        elif rep.when == "call":
            print("Test method failed")
    elif rep.passed and rep.when == "call":
        print("Test {} passed.".format(item.name))


How to read the console output in Python without executing any command

I have an API which prints a success or error message to the console. I am new to Python and trying to read the response. Google throws up so many examples that use subprocess, but I don't want to run or call any command or subprocess. I just want to read the output after the API call below.
This is the response in the console on success:
17:50:52 | Logged in!!
This is the GitHub link for the SDK and documentation:
https://github.com/5paisa/py5paisa
This is the code:
from py5paisa import FivePaisaClient

email = "myemailid@gmail.com"
pw = "mypassword"
dob = "mydateofbirth"
cred = {
    "APP_NAME": "app-name",
    "APP_SOURCE": "app-src",
    "USER_ID": "user-id",
    "PASSWORD": "pw",
    "USER_KEY": "user-key",
    "ENCRYPTION_KEY": "enc-key",
}
client = FivePaisaClient(email=email, passwd=pw, dob=dob, cred=cred)
client.login()
In general it is bad practice to get a value from stdout. There are ways, but they are pretty tricky, because stdout is not made for this. And the problem doesn't come from you but from the API, which is badly designed: it should return a value, e.g. True or False at least, to tell you whether you logged in, and it doesn't.
So, according to their documentation it is not possible to know directly whether you're logged in, but you may be able to tell by checking the client_code attribute on the client object.
If client.client_code is equal to something then you should be logged in, and if it is equal to something else then you are not. You can compare its value after a successful login and after a failed one (wrong credentials, for instance). Then you can add a condition: if it is None or False or 0 (you will have to check this yourself), the login failed.
Can you try doing the following with a successful and failed login:
client.login()
print(client.client_code)
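A sketch of the resulting check (the exact sentinel value that client_code holds on a failed login is an assumption; verify it against your own runs):

client.login()
# Compare what client_code holds after a known-good and a known-bad login,
# then pick the right sentinel below (None/False/0 are guesses, not documented).
if client.client_code not in (None, False, 0):
    print("Logged in as", client.client_code)
else:
    print("Login failed")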
Source of the API:
# Login function:
# (...)
message = res["body"]["Message"]
if message == "":
    log_response("Logged in!!")
else:
    log_response(message)
self._set_client_code(res["body"]["ClientCode"])
# (...)

# _set_client_code function:
def _set_client_code(self, client_code):
    try:
        self.client_code = client_code  # <<<< That's what we want
    except Exception as e:
        log_response(e)
Since this question asks how to capture stdout, one way to accomplish it is to intercept the log message before it hits stdout.
The minimum code to capture a log message within a Python script looks like this:
#!/usr/bin/env python3
import logging

logger = logging.getLogger(__name__)

class RequestHandler(logging.Handler):
    def emit(self, record):
        if record.getMessage().startswith("Hello"):
            print("hello detected")

handler = RequestHandler()
logger.addHandler(handler)
logger.warning("Hello world")
Putting it all together you may be able to do something like this:
import logging

from py5paisa import FivePaisaClient

email = "myemailid@gmail.com"
pw = "mypassword"
dob = "mydateofbirth"
cred = {
    "APP_NAME": "app-name",
    "APP_SOURCE": "app-src",
    "USER_ID": "user-id",
    "PASSWORD": "pw",
    "USER_KEY": "user-key",
    "ENCRYPTION_KEY": "enc-key",
}
client = FivePaisaClient(email=email, passwd=pw, dob=dob, cred=cred)

class PaisaClient(logging.Handler):
    def __init__(self):
        super().__init__()
        self.loggedin = False  # this is the variable we can use to see if we are "logged in"

    def emit(self, record):
        if record.getMessage().startswith("Logged in!!"):
            self.loggedin = True

    def login(self):
        client.login()

# Install the handler on the root logger so the py5paisa library's log
# records reach it.
# Tutorial here: https://betterstack.com/community/questions/how-to-disable-logging-from-python-request-library/
c = PaisaClient()
logging.basicConfig(handlers=[c], level=0, force=True)
c.login()
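The handler's flag can then be checked after the call returns (a sketch):

if c.loggedin:
    print("Logged in successfully")
else:
    print("Login failed, or the message was never logged")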

How to extract the [Documentation] text from Robot framework test case

I am trying to extract the content of the [Documentation] section as a string for comparison with other parts in a Python script.
I was told to use the Robot Framework API (https://robot-framework.readthedocs.io/en/stable/) to extract it, but I have no idea how.
However, I am required to work with version 3.1.2.
Example:
*** Test Cases ***
ATC Verify that Sensor Battery can enable and disable manufacturing mode
    [Documentation]    E1: This is the description of the test 1
    ...    E2: This is the description of the test 2
    [Tags]    E1    TRACE{Trace_of_E1}
    ...    E2    TRACE{Trace_of_E2}
Extract the string as
E1: This is the description of the test 1
E2: This is the description of the test 2
Have a look at these examples. I did something similar to generate a test plan description. I tried to adapt my code to your requirements, and this could maybe work for you.
import os
import re

from robot.api.parsing import get_model, ModelVisitor, Token


class RobotParser(ModelVisitor):
    def __init__(self):
        # Holds the formatted documentation text
        self.text = ''

    def get_text(self):
        return self.text

    def visit_TestCase(self, node):
        # The matched `TestCase` node is a block with `header` and
        # `body` attributes. `header` is a statement with familiar
        # `get_token` and `get_value` methods for getting certain
        # tokens or their value.
        for keyword in node.body:
            # skip statements that have no [Documentation] token
            if keyword.get_value(Token.DOCUMENTATION) is None:
                continue
            self.text += keyword.get_value(Token.ARGUMENT)

    def visit_Documentation(self, node):
        # The matched Documentation node with its value
        self.text += node.value + '\n'

    def visit_File(self, node):
        # Call `generic_visit` to visit also the child nodes.
        return self.generic_visit(node)


if __name__ == "__main__":
    path = "../tests"
    for filename in os.listdir(path):
        if re.match(r".*\.robot", filename):
            model = get_model(os.path.join(path, filename))
            robot_parser = RobotParser()
            robot_parser.visit(model)
            text = robot_parser.get_text()
The code marked as the best answer didn't quite work for me and has a lot of redundancy, but it inspired me enough to get into the parsing and rewrite it in a much more readable and efficient way that actually works as is. You just have to have your own way of walking the filesystem and calling the get_robot_metadata(filepath) function.
from robot.api.parsing import get_model, ModelVisitor, Token


class RobotParser(ModelVisitor):
    def __init__(self):
        self.testcases = {}

    def visit_TestCase(self, node):
        testcasename = node.header.name
        self.testcases[testcasename] = {}
        for section in node.body:
            if section.get_value(Token.DOCUMENTATION) is not None:
                documentation = section.value
                self.testcases[testcasename]['Documentation'] = documentation
            elif section.get_value(Token.TAGS) is not None:
                tags = section.values
                self.testcases[testcasename]['Tags'] = tags

    def get_testcases(self):
        return self.testcases


def get_robot_metadata(filepath):
    if filepath.endswith('.robot'):
        robot_parser = RobotParser()
        model = get_model(filepath)
        robot_parser.visit(model)
        metadata = robot_parser.get_testcases()
        return metadata
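A usage sketch (the tests directory path is illustrative):

import os

path = "tests"
for filename in os.listdir(path):
    if filename.endswith(".robot"):
        metadata = get_robot_metadata(os.path.join(path, filename))
        for name, fields in metadata.items():
            print(name, "->", fields.get('Documentation'))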
This function will be able to extract the [Documentation] section from the testcase:
def documentation_extractor(testcase):
    documentation = []
    for setting in testcase.settings:
        if len(setting) > 2 and setting[1].lower() == "[documentation]":
            for doc in setting[2:]:
                if doc.startswith("#"):
                    # the start of a comment, so skip the rest of the line
                    break
                documentation.append(doc)
            break
    return "\n".join(documentation)

Multiple vs single asserts per test in Selenium?

I'm creating automated smoke tests. I've read that it is not good practice to use more than one assert in unit tests; does this rule also apply to WebDriver tests with Selenium?
In my smoke tests I sometimes use more than 20 asserts to verify that information like section titles, column titles, and other text that should appear is shown correctly.
Would it be better to separate the asserts into different tests, or is it OK to have multiple asserts in a single test?
If I separate them into different tests, the run time will increase a lot.
Here is an example of the code:
if self.claimSummaryPage.check_if_claim_exists():
    assert self.claimSummaryPage.return_claim_summary_mosaic_text() == 'RESUMEN'
    assert self.claimSummaryPage.return_claim_notes_mosaic_text() == 'NOTAS'
    assert self.claimSummaryPage.return_claim_documents_mosaic_text() == 'DOCUMENTOS'
    assert self.claimSummaryPage.return_claim_payments_mosaic_text() == 'PAGOS'
    assert self.claimSummaryPage.return_claim_services_mosaic_text() == 'SERVICIOS'
    assert "Detalles del siniestro: " + claim_number in self.claimSummaryPage.return_claim_title_text()
    assert self.claimSummaryPage.return_claim_status_text() in self.claimSummaryPage.CLAIM_STATUS
    self.claimSummaryPage.check_claim_back_button_exists()
    assert self.claimSummaryPage.return_claim_date_of_loss_title() == 'Fecha y hora'
    assert self.claimSummaryPage.return_claim_reported_by_title() == 'Denunciante'
    assert self.claimSummaryPage.return_claim_loss_location_title() == 'Lugar'
    assert self.claimSummaryPage.return_claim_how_reported_title() == 'Reportado en'
    assert self.claimSummaryPage.return_claim_what_happened_title() == '¿Qué sucedió?'
    assert self.claimSummaryPage.return_claim_adjuster_title() == 'Tramitadores'
    assert self.claimSummaryPage.return_claim_parties_involved_title() == 'Partes implicadas'
    if self.claimSummaryPage.check_if_claim_has_exposures():
        assert self.claimSummaryPage.return_claim_adjuster_table_name_column_title() == 'Nombre'
        assert self.claimSummaryPage.return_claim_adjuster_table_segment_column_title() == 'Segmento'
        assert self.claimSummaryPage.return_claim_adjuster_table_incident_column_title() == 'Incidente'
        assert self.claimSummaryPage.return_claim_adjuster_table_state_column_title() == 'Estado'
    else:
        assert self.claimSummaryPage.return_claim_adjuster_table_no_exposures_label_text() == 'No se encontraron exposiciones'
if self.claimSummaryPage.return_claim_lob(claim_number) == "AUTO":
    assert self.claimSummaryPageAuto.return_claim_loss_cause() in self.claimSummaryPageAuto.CLAIM_AUTO_LOSS_CAUSE
    assert self.claimSummaryPageAuto.return_claim_involved_vehicles_title() == 'Vehículos involucrados'
    self.claimSummaryPageAuto.verify_claim_has_involved_vehicles()
    assert self.claimSummaryPageAuto.return_claim_involved_vehicles_table_make_column_title() == 'Marca'
    assert self.claimSummaryPageAuto.return_claim_involved_vehicles_table_model_column_title() == 'Modelo'
    assert self.claimSummaryPageAuto.return_claim_involved_vehicles_table_year_column_title() == 'Año'
    assert self.claimSummaryPageAuto.return_claim_involved_vehicles_table_license_column_title() == 'Patente'
    assert self.claimSummaryPageAuto.return_claim_involved_vehicles_table_loss_party_column_title() == 'Parte vinculada'
    assert self.claimSummaryPageAuto.return_claim_involved_vehicles_table_damage_column_title() == 'Daños'
    assert self.claimSummaryPageAuto.return_claim_involved_vehicles_table_damage_type_column_title() == 'Tipo de daño'
    assert self.claimSummaryPageAuto.return_claim_involved_vehicles_table_first_item_loss_party_text() in self.claimSummaryPageAuto.VEHICLE_LOSS_PARTY
Tests should test the system and user behavior, not just the assertions.
You can change your tests like this (a generic example).
Let user and summary be the page objects, then:
summary class:
class Summary {
    static expectedDetails = ["something1", "something2"]

    getDetails() {
        return [self.claimSummaryPage.return_claim_payments_mosaic_text()]
    }
}
now your test:
test("validate user can successfully login and view claim summary") {
    user.userLogins()
    details = summary.getDetails()
    assert(details).to.be.equal(Summary.expectedDetails)
}
Here, instead of individually validating each string, we save the values to an array and compare the resulting array with the expected array.
This is a much cleaner approach. Don't put assertions in the page object.
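In Python terms, a sketch against the asker's page object (the method names come from the question; EXPECTED_MOSAIC_TITLES is illustrative):

EXPECTED_MOSAIC_TITLES = ['RESUMEN', 'NOTAS', 'DOCUMENTOS', 'PAGOS', 'SERVICIOS']

def get_mosaic_titles(page):
    # Gather every title in one pass so a single assert reports all
    # mismatches at once instead of stopping at the first failure.
    return [
        page.return_claim_summary_mosaic_text(),
        page.return_claim_notes_mosaic_text(),
        page.return_claim_documents_mosaic_text(),
        page.return_claim_payments_mosaic_text(),
        page.return_claim_services_mosaic_text(),
    ]

assert get_mosaic_titles(self.claimSummaryPage) == EXPECTED_MOSAIC_TITLES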
Given that GUI tests take much more time, it would probably not be efficient to have just one assert per test. The best approach would probably be a test suite in which you execute one assert per test during the same run. I've also worked on a project where we implemented our own assert methods for the GUI tests: they cache the results of all asserts during the test, go through them at the end, and fail the test if any of the cached assertions failed. That was due to the nature of the system we worked with at the time, but maybe it could be a way to solve it for you.
This works as long as the assertions are not on anything that would cause an error if you continue after a failure, e.g. if a step in a process would fail.
Example:
my_assertion_cache = list()

def assert_equals(a, b):
    try:
        assert a == b
    except AssertionError:
        # preferably add a reference to the locator that failed into the message below
        my_assertion_cache.append(f"{a} and {b} were expected to be equal")

def run_after_each_test():
    assert my_assertion_cache == []
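A sketch of wiring this into pytest with an autouse fixture (assumes the cache above; the fixture name is illustrative):

import pytest

@pytest.fixture(autouse=True)
def flush_assertion_cache():
    my_assertion_cache.clear()
    yield
    # copy and reset the cache before asserting so one failure doesn't leak into the next test
    failures = list(my_assertion_cache)
    my_assertion_cache.clear()
    assert failures == [], "soft assertion failures:\n" + "\n".join(failures)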
Generally, if your first assert fails, the others will not be executed when you have multiple assertions in one test.
On the other hand, if you do not perform any new action in your test (you are on a page checking some UI without performing a click, a select, or any other new action), you can use multiple assertions.
Remember, automated tests exist so you don't need to run tests manually, and they can identify problems faster and with more precision. This is why the recommendation is one assertion per test.
So the question can be translated like this: do I want to identify only one issue, or all possible issues, with the automated tests?

How can I know the command used? - Discord.py

I am trying to build error handling for my discord.py bot. How do I know what command was used when the error popped up?
@bot.event
async def on_command_error(ctx, error):
    print("error: ", error)
    if search("not found", str(error)):
        c_f = random.choice([f"`{command used}` was not found, silly.",
                             "Ehm.. Since when do we have `{command used}`?",
                             "I don't know what `{command used}` is?"])
        embed = discord.Embed(title=c_f, description=f"Please use existing commands. {ctx.author.mention}", color=error_color)
        embed.timestamp = datetime.utcnow()
        embed.set_footer(text=bot_name, icon_url=icon_uri)
        await ctx.send(embed=embed)
    elif search("cooldown", str(error)):
        c_d = random.choice(["Did you drink energy drinks!?", "Why are you stressing, buddy.", "Duhh, wait, you're on cooldown!"])
        second_remain = round(error.retry_after, 1)
        embed = discord.Embed(title=c_d, description=f"Try again after {second_remain}s. {ctx.author.mention}", color=error_color)
        embed.timestamp = datetime.utcnow()
        embed.set_footer(text=bot_name, icon_url=icon_uri)
        await ctx.send(embed=embed)
    else:
        raise error
Any attribute I can use?
You can use ctx.command:
from discord.ext import commands

@bot.event
async def on_command_error(ctx, exception):
    error = getattr(exception, "original", exception)
    if hasattr(ctx.command, "on_error"):  # if a command has its own handler
        return
    elif isinstance(error, commands.CommandNotFound):
        return
    if isinstance(error, commands.CommandInvokeError):
        print(ctx.command)
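For the "not found" branch in the question specifically, ctx.command is None because no command matched; I believe ctx.invoked_with holds the word the user typed (worth verifying against your discord.py version), so a sketch would be:

import random
from discord.ext import commands

@bot.event
async def on_command_error(ctx, error):
    if isinstance(error, commands.CommandNotFound):
        attempted = ctx.invoked_with  # the command name the user actually typed
        c_f = random.choice([f"`{attempted}` was not found, silly.",
                             f"Ehm.. Since when do we have `{attempted}`?",
                             f"I don't know what `{attempted}` is?"])
        await ctx.send(c_f)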
Another solution is to add an error handler to the command specifically; this can also help diagnose an issue with a particular command more precisely.
You can attach error events to a specific listener, just like you did for all commands, but adding them individually:
@bot.command()
async def command_name(ctx):
    ...

@command_name.error
async def command_name_error(ctx, error):
    if isinstance(error, commands.CommandInvokeError):
        await ctx.send("An error from this command: " + str(error))
With @command_name.error, put your command name before the .error; this creates an error listener for that command that fires whenever it raises an error.

How do I catch SocketExceptions in MonkeyRunner?

When using MonkeyRunner, every so often I get an error like:
120830 18:39:32.755:S [MainThread] [com.android.chimpchat.adb.AdbChimpDevice] Unable to get variable: display.density
120830 18:39:32.755:S [MainThread] [com.android.chimpchat.adb.AdbChimpDevice]java.net.SocketException: Connection reset
From what I've read, sometimes the adb connection goes bad, and you need to reconnect. The only problem is, I'm not able to catch the SocketException. I'll wrap my code like so:
try:
    density = self.device.getProperty('display.density')
except:
    print 'This will never print.'
But the exception is apparently not raised all the way to the caller. I've verified that MonkeyRunner/jython can catch Java exceptions the way I'd expect:
>>> from java.io import FileInputStream
>>> def test_java_exceptions():
...     try:
...         FileInputStream('bad mojo')
...     except:
...         print 'Caught it!'
...
>>> test_java_exceptions()
Caught it!
How can I deal with these socket exceptions?
You will get that error every other time you start MonkeyRunner, because the monkey --port 12345 command on the device isn't stopped when your script stops. It is a bug in monkey.
A nicer way to solve this issue is to kill monkey when SIGINT is sent to your script (when you press Ctrl+C), in other words: $ killall com.android.commands.monkey.
Quick way to do it:
import sys
import signal
from com.android.monkeyrunner import MonkeyRunner, MonkeyDevice

device = None

def execute():
    global device
    device = MonkeyRunner.waitForConnection()
    # your code

def exitGracefully(signum, frame=None):
    signal.signal(signal.SIGINT, signal.getsignal(signal.SIGINT))
    device.shell('killall com.android.commands.monkey')
    sys.exit(1)

if __name__ == '__main__':
    signal.signal(signal.SIGINT, exitGracefully)
    execute()
Edit: as an addendum, I also found a way to notice the Java errors: Monkey Runner throwing socket exception broken pipe on touch
Edit: the signal handler receives two parameters (the signal number and the current stack frame), so I made the second one optional.
Below is the workaround I ended up using. Any function that can suffer from adb failures just needs to use the following decorator:
from subprocess import call, PIPE, Popen
from time import sleep

def check_connection(f):
    """
    adb is unstable and cannot be trusted. When there's a problem, a
    SocketException will be thrown, but caught internally by MonkeyRunner
    and simply logged. As a hacky solution, this checks whether the stderr
    log grows after f is called (a false positive isn't going to cause any
    harm). If so, the connection will be repaired and the decorated
    function/method will be called again.

    Make sure that stderr is redirected at the command line to the file
    specified by config.STDERR. Also, this decorator will only work for
    functions/methods that take a Device object as the first argument.
    """
    def wrapper(*args, **kwargs):
        while True:
            cmd = "wc -l %s | awk '{print $1}'" % config.STDERR
            p = Popen(cmd, shell=True, stdout=PIPE)
            (line_count_pre, stderr) = p.communicate()
            line_count_pre = line_count_pre.strip()
            f(*args, **kwargs)
            p = Popen(cmd, shell=True, stdout=PIPE)
            (line_count_post, stderr) = p.communicate()
            line_count_post = line_count_post.strip()
            if line_count_pre == line_count_post:
                # the connection was fine
                break
            print 'Connection error. Restarting adb...'
            sleep(1)
            call('adb kill-server', shell=True)
            call('adb start-server', shell=True)
            args[0].connection = MonkeyRunner.waitForConnection()
    return wrapper
Because this may create a new connection, you need to wrap your current connection in a Device object so that it can be replaced. Here's my Device class (most of the class is for convenience; the only thing that's necessary is the connection member):
class Device:
    def __init__(self):
        self.connection = MonkeyRunner.waitForConnection()
        self.width = int(self.connection.getProperty('display.width'))
        self.height = int(self.connection.getProperty('display.height'))
        self.model = self.connection.getProperty('build.model')

    def touch(self, x, y, press=MonkeyDevice.DOWN_AND_UP):
        self.connection.touch(x, y, press)
An example of how to use the decorator:
@check_connection
def screenshot(device, filename):
    screen = device.connection.takeSnapshot()
    screen.writeToFile(filename + '.png', 'png')
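Putting it together, a usage sketch (the filename is illustrative):

device = Device()
# if adb dies during the snapshot, the decorator restarts it and retries
screenshot(device, 'home_screen')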