Multiple vs single asserts per test in Selenium?

I'm creating automated smoke tests. I've read that it is not a good practice to use more than one assert in unit tests; does this rule also apply to WebDriver tests with Selenium?
In my smoke tests I sometimes use more than 20 asserts to verify that information such as section titles, column titles and other text that should appear is shown correctly.
Would it be better to split the asserts into separate tests, or is it OK to have multiple asserts in a single test?
If I split them into separate tests, the run time will increase a lot.
Here is an example of the code:
if self.claimSummaryPage.check_if_claim_exists():
    assert self.claimSummaryPage.return_claim_summary_mosaic_text() == 'RESUMEN'
    assert self.claimSummaryPage.return_claim_notes_mosaic_text() == 'NOTAS'
    assert self.claimSummaryPage.return_claim_documents_mosaic_text() == 'DOCUMENTOS'
    assert self.claimSummaryPage.return_claim_payments_mosaic_text() == 'PAGOS'
    assert self.claimSummaryPage.return_claim_services_mosaic_text() == 'SERVICIOS'
    assert "Detalles del siniestro: " + claim_number in self.claimSummaryPage.return_claim_title_text()
    assert self.claimSummaryPage.return_claim_status_text() in self.claimSummaryPage.CLAIM_STATUS
    self.claimSummaryPage.check_claim_back_button_exists()
    assert self.claimSummaryPage.return_claim_date_of_loss_title() == 'Fecha y hora'
    assert self.claimSummaryPage.return_claim_reported_by_title() == 'Denunciante'
    assert self.claimSummaryPage.return_claim_loss_location_title() == 'Lugar'
    assert self.claimSummaryPage.return_claim_how_reported_title() == 'Reportado en'
    assert self.claimSummaryPage.return_claim_what_happened_title() == '¿Qué sucedió?'
    assert self.claimSummaryPage.return_claim_adjuster_title() == 'Tramitadores'
    assert self.claimSummaryPage.return_claim_parties_involved_title() == 'Partes implicadas'
    if self.claimSummaryPage.check_if_claim_has_exposures():
        assert self.claimSummaryPage.return_claim_adjuster_table_name_column_title() == 'Nombre'
        assert self.claimSummaryPage.return_claim_adjuster_table_segment_column_title() == 'Segmento'
        assert self.claimSummaryPage.return_claim_adjuster_table_incident_column_title() == 'Incidente'
        assert self.claimSummaryPage.return_claim_adjuster_table_state_column_title() == 'Estado'
    else:
        assert self.claimSummaryPage.return_claim_adjuster_table_no_exposures_label_text() == 'No se encontraron exposiciones'
    if self.claimSummaryPage.return_claim_lob(claim_number) == "AUTO":
        assert self.claimSummaryPageAuto.return_claim_loss_cause() in self.claimSummaryPageAuto.CLAIM_AUTO_LOSS_CAUSE
        assert self.claimSummaryPageAuto.return_claim_involved_vehicles_title() == 'Vehículos involucrados'
        self.claimSummaryPageAuto.verify_claim_has_involved_vehicles()
        assert self.claimSummaryPageAuto.return_claim_involved_vehicles_table_make_column_title() == 'Marca'
        assert self.claimSummaryPageAuto.return_claim_involved_vehicles_table_model_column_title() == 'Modelo'
        assert self.claimSummaryPageAuto.return_claim_involved_vehicles_table_year_column_title() == 'Año'
        assert self.claimSummaryPageAuto.return_claim_involved_vehicles_table_license_column_title() == 'Patente'
        assert self.claimSummaryPageAuto.return_claim_involved_vehicles_table_loss_party_column_title() == 'Parte vinculada'
        assert self.claimSummaryPageAuto.return_claim_involved_vehicles_table_damage_column_title() == 'Daños'
        assert self.claimSummaryPageAuto.return_claim_involved_vehicles_table_damage_type_column_title() == 'Tipo de daño'
        assert self.claimSummaryPageAuto.return_claim_involved_vehicles_table_first_item_loss_party_text() in self.claimSummaryPageAuto.VEHICLE_LOSS_PARTY

Tests should test the system and user behaviour, not just stack up assertions.
You can restructure your tests like this (a generic example):
Let user and the claim summary page be page objects; then a small summary helper class could look like this:

class Summary:
    # expected mosaic titles, in the order they appear on the page
    EXPECTED_DETAILS = ['RESUMEN', 'NOTAS', 'DOCUMENTOS', 'PAGOS', 'SERVICIOS']

    def __init__(self, claim_summary_page):
        self.page = claim_summary_page

    def get_details(self):
        return [self.page.return_claim_summary_mosaic_text(),
                self.page.return_claim_notes_mosaic_text(),
                self.page.return_claim_documents_mosaic_text(),
                self.page.return_claim_payments_mosaic_text(),
                self.page.return_claim_services_mosaic_text()]

Now your test:

def test_user_can_log_in_and_view_claim_summary(user, claim_summary_page):
    user.log_in()
    assert Summary(claim_summary_page).get_details() == Summary.EXPECTED_DETAILS

Here, instead of validating each string individually, we collect the actual values into a list and compare the resulting list with the expected list in a single assertion.
This is a much cleaner approach. Don't put assertions in the page object.

Given that GUI tests take much more time, it would probably not be efficient to have just one assert in each test. The best option is probably a test suite in which each test makes one assertion but everything executes during the same run. I've also worked on a project where we implemented our own assert methods for the GUI tests: they cache the results of all asserts made during a test, go through them at the end, and fail the test if any of the cached assertions failed. That was due to the nature of the system we worked with at the time, but maybe it could be a way to solve it for you?
This works as long as the assertions are not on anything that would cause an error if you continue after a failure, e.g. a step in a multi-step process.
Example:
my_assertion_cache = list()

def assert_equals(a, b):
    try:
        assert a == b
    except AssertionError:
        # preferably add a reference to the locator that failed into the message below
        my_assertion_cache.append(f"{a} and {b} were expected to be equal")

def run_after_each_test():
    # fail the test if any cached assertion failed
    assert my_assertion_cache == []
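If you use pytest, one way to wire this in is an autouse fixture that clears the cache before each test and checks it afterwards. A minimal sketch, assuming the helpers above live in a module called soft_asserts (that module name is hypothetical):

import pytest

from soft_asserts import my_assertion_cache  # hypothetical module holding the helpers above

@pytest.fixture(autouse=True)
def soft_assertion_check():
    # start every test with an empty cache
    my_assertion_cache.clear()
    yield
    # after the test body, fail if any soft assertion was recorded
    assert my_assertion_cache == [], "soft assertions failed: " + "; ".join(my_assertion_cache)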

Generally, if your first assert fails, the remaining asserts will not be executed when you have multiple assertions in one test.
On the other hand, if your test does not perform any new action, i.e. you are on a page and only checking some UI without performing any click, select or other action, you can use multiple assertions.
Remember that automated tests exist so you don't need to run tests manually, and so they can identify problems faster and with more precision. This is why the recommendation is one assertion, one test.
So the question can be translated like this: do I want to identify only one issue, or all possible issues, with the automated tests?
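One pragmatic middle ground in pytest is to parametrize the checks, so that every expected title is reported as its own test case while the browser is started only once. A minimal sketch, assuming a session-scoped claim_summary_page fixture (the fixture wiring is an assumption; the getter names come from the question):

import pytest

MOSAIC_TITLES = [
    ("return_claim_summary_mosaic_text", "RESUMEN"),
    ("return_claim_notes_mosaic_text", "NOTAS"),
    ("return_claim_documents_mosaic_text", "DOCUMENTOS"),
    ("return_claim_payments_mosaic_text", "PAGOS"),
    ("return_claim_services_mosaic_text", "SERVICIOS"),
]

@pytest.mark.parametrize("getter, expected", MOSAIC_TITLES)
def test_mosaic_title(claim_summary_page, getter, expected):
    # look the page-object method up by name and compare its text with the expectation;
    # every mismatch shows up as a separate failing test
    assert getattr(claim_summary_page, getter)() == expected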

Related

How to extract the [Documentation] text from Robot framework test case

I am trying to extract the content of the [Documentation] section as a string for comparison with other data in a Python script.
I was told to use the Robot Framework API https://robot-framework.readthedocs.io/en/stable/
to extract it, but I have no idea how.
However, I am required to work with version 3.1.2.
Example:
*** Test Cases ***
ATC Verify that Sensor Battery can enable and disable manufacturing mode
    [Documentation]    E1: This is the description of the test 1
    ...                E2: This is the description of the test 2
    [Tags]    E1    TRACE{Trace_of_E1}
    ...       E2    TRACE{Trace_of_E2}
I want to extract the string as:
E1: This is the description of the test 1
E2: This is the description of the test 2
Have a look at this example. I did something similar to generate test plan descriptions. I have tried to adapt my code to your requirements, so this could maybe work for you.
import os
import re

from robot.api.parsing import get_model, ModelVisitor, Token


class RobotParser(ModelVisitor):
    def __init__(self):
        # Collects the documentation text of all test cases
        self.text = ''

    def get_text(self):
        return self.text

    def visit_TestCase(self, node):
        # The matched `TestCase` node is a block with `header` and
        # `body` attributes. Each statement in `body` has the familiar
        # `get_token` and `get_value` methods for getting certain
        # tokens or their value.
        for statement in node.body:
            # skip statements that are not [Documentation]
            if statement.get_value(Token.DOCUMENTATION) is None:
                continue
            self.text += statement.get_value(Token.ARGUMENT)

    def visit_File(self, node):
        # Call `generic_visit` to visit the child nodes as well.
        return self.generic_visit(node)


if __name__ == "__main__":
    path = "../tests"
    for filename in os.listdir(path):
        if re.match(r".*\.robot", filename):
            model = get_model(os.path.join(path, filename))
            robot_parser = RobotParser()
            robot_parser.visit(model)
            text = robot_parser.get_text()
The code marked as the best answer didn't quite work for me and has a lot of redundancy, but it inspired me enough to get into the parsing and write it in a much more readable and efficient way that actually works as is. You just have to have your own way of iterating through the file system and calling the get_robot_metadata(filepath) function.
from robot.api.parsing import get_model, ModelVisitor, Token


class RobotParser(ModelVisitor):
    def __init__(self):
        self.testcases = {}

    def visit_TestCase(self, node):
        testcasename = node.header.name
        self.testcases[testcasename] = {}
        for section in node.body:
            if section.get_value(Token.DOCUMENTATION) is not None:
                documentation = section.value
                self.testcases[testcasename]['Documentation'] = documentation
            elif section.get_value(Token.TAGS) is not None:
                tags = section.values
                self.testcases[testcasename]['Tags'] = tags

    def get_testcases(self):
        return self.testcases


def get_robot_metadata(filepath):
    if filepath.endswith('.robot'):
        robot_parser = RobotParser()
        model = get_model(filepath)
        robot_parser.visit(model)
        metadata = robot_parser.get_testcases()
        return metadata
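A small usage sketch for the function above (the "tests" directory path is just an example): walk a suite folder, collect the metadata of each .robot file, and print the extracted [Documentation] text per test case.

import os

for root, _, files in os.walk("tests"):
    for name in files:
        if name.endswith(".robot"):
            metadata = get_robot_metadata(os.path.join(root, name))
            for test_name, data in metadata.items():
                # data holds 'Documentation' and 'Tags' when they are present
                print(test_name, "->", data.get('Documentation', ''))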
Alternatively, this function will extract the [Documentation] section from a test case:
def documentation_extractor(testcase):
    documentation = []
    for setting in testcase.settings:
        if len(setting) > 2 and setting[1].lower() == "[documentation]":
            for doc in setting[2:]:
                if doc.startswith("#"):
                    # the start of a comment, so skip the rest of the line
                    break
                documentation.append(doc)
            break
    return "\n".join(documentation)

Jenkins - Display names of failed tests TestNG

In my Jenkinsfile I'm counting all test results like this:
AbstractTestResultAction testResultAction = currentBuild.rawBuild.getAction(AbstractTestResultAction.class)
if (testResultAction != null) {
    def total = testResultAction.totalCount
    def failed = testResultAction.failCount
    def skipped = testResultAction.skipCount
    def passed = total - failed - skipped
}
But I also want to display the names of all failed tests in the Slack message.
So far, I've tried to generate it with def failedTests = testResultAction.getResult().getFailedTests(), but it returns an unspecific name like hudson.tasks.junit.CaseResult#37e0fb97.
Is there any way to display the full name of the test? I am using Selenium + TestNG.
You can get the description using the .getTitle() method:
def failedTests = testResultAction.getResult().getFailedTests().collect { it.getTitle() }

Is there a way to assert and fail a request after polling in Karate?

I have a request where I get "Processing" or "Submitted" in a response parameter, depending on whether the request is still in process or has passed, respectively.
I am able to poll and see whether the status is "Processing" or "Submitted", but I am unable to fail the request if I still do not get the expected status after polling 5 times.
How can I fail the request when a certain number of retries does not give me the expected response?
The answer is in your question.
I assume you are polling using a JS function. If so, you can return a boolean from it: return false if the condition is not met and true if it is, then assert the returned value in your feature file.
* def pollingFunc =
"""
function(x) {
  // your polling logic, which retrieves the current status
  if (status == x) {
    return true;
  } else {
    return false;
  }
}
"""
In the feature file:
* def statusFound = pollingFunc("Processed")
* assert statusFound == true
If the expected status is not obtained after polling, the assert will fail the test.

Detect whether test has failed within fixture

I am debugging an intermittent test failure. For this purpose I want to dump a lot of debug information if a test fails. Dumping the debug data is quite a slow process and produces a lot of data, so I do not want to do it for every test.
I am using pytest, and a yield autouse fixture should work great:
@pytest.yield_fixture(scope="function", autouse=True)
def dump_on_failure(request):
    prepare_debug_dump()
    yield
    if test_failed(request):
        debug_dump()
The problem is that I can't figure out how to detect whether the test has failed or not. There have been questions about this already, and there is even a note on the pytest website:
if request.node.rep_setup.failed:
    print("setting up a test failed!", request.node.nodeid)
elif request.node.rep_setup.passed:
    if request.node.rep_call.failed:
        print("executing test failed", request.node.nodeid)
Unfortunately this code does not work anymore: there are no rep_setup and rep_call attributes on the node object. I tried to dig through the request and node objects, but no luck.
Does anybody know how to detect whether a test failed?
There are no rep_setup and rep_call attributes on the node object.
No, the rep_setup and rep_call attributes are still there.
Add the code below to your root conftest.py. It will record the pass/fail result of every test function.
import pytest

@pytest.hookimpl(tryfirst=True, hookwrapper=True)
def pytest_runtest_makereport(item, call):
    # run the remaining hooks and get the report for the current phase
    outcome = yield
    rep = outcome.get_result()
    # store the report on the item as rep_setup / rep_call / rep_teardown
    setattr(item, "rep_" + rep.when, rep)

@pytest.fixture(scope='function', autouse=True)
def test_debug_log(request):
    def test_result():
        if request.node.rep_setup.failed:
            print("setting up a test failed!", request.node.nodeid)
        elif request.node.rep_setup.passed:
            if request.node.rep_call.failed:
                print("executing test failed", request.node.nodeid)
    request.addfinalizer(test_result)
I use something like this, maybe it can be useful in your case:
@pytest.hookimpl(tryfirst=True, hookwrapper=True)
def pytest_runtest_makereport(item):
    outcome = yield
    rep = outcome.get_result()
    if rep.failed:
        if rep.when == "setup":
            print("Setup failed!")
        elif rep.when == "call":
            print("Test method failed")
    elif rep.passed and rep.when == "call":
        print("Test {} passed.".format(item.name))

Selenium: how to check that an element loads correctly more than once and log the results

I am trying to test a website with Selenium using Robot Framework. I am going backwards and forwards between 2 pages, checking that the second (problem) page is loaded each time, and keeping a count of the results. I'm using the following code:
def check_contents_page_loads(self, passed, failed, attempts, counter):
    self.driver.get(self.mm + '/config/views')
    attempts = int(attempts)
    passed = int(passed)
    failed = int(failed)
    counter = int(counter)
    return_passed = str(passed)
    return_counter = str(counter)
    try:
        while attempts > 0:
            attempts -= 1
            counter += 1
            self.driver.find_element_by_class_name("nav-config").click()
            time.sleep(5)
            self.driver.find_element_by_class_name("nav-content").click()
            time.sleep(10)
            test = self.driver.find_element_by_class_name("resource-navigator").is_displayed()
            print(test)
            if test == "True":
                passed += 1
            else:
                failed += 1
    except Exception, ex:
        logging.exception('dasse %s , %s' % (ex, Exception))
        return False
    return return_passed, return_counter
When the page is present this works, but if it is not, I get the following error message:
Cannot assign return values: Expected list-like object, got bool instead.
From Robot I am sending the following values:
${passed}    Set Variable    0
${failed}    Set Variable    0
${attempts}    Set Variable    20
${counter}    Set Variable    0
${return_passed}    ${return_counter}    Check Contents Page Loads    ${passed}    ${failed}    ${attempts}    ${counter}
Should Be Equal    ${return_passed}    ${return_counter}
When an exception is thrown, you're returning a bare bool (False), but Robot Framework expects a list-like value with two items to unpack into ${return_passed} and ${return_counter}. You need to correct that.
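A minimal sketch of one way to correct it (a standalone version of the question's keyword: the driver is passed in, the sleeps are omitted, and the test == "True" comparison is also fixed here). The key point is to return the two counts from both code paths, so Robot Framework always gets something it can unpack.

import logging

def check_contents_page_loads(driver, attempts):
    """Hypothetical reworked keyword: always returns two string values."""
    passed = failed = counter = 0
    try:
        for _ in range(int(attempts)):
            counter += 1
            driver.find_element_by_class_name("nav-config").click()
            driver.find_element_by_class_name("nav-content").click()
            if driver.find_element_by_class_name("resource-navigator").is_displayed():
                passed += 1
            else:
                failed += 1
    except Exception:
        # even when something goes wrong, still return the counts gathered so far
        logging.exception("content page failed to load on attempt %s", counter)
    return str(passed), str(counter)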