I am writing an XCTest.
In my test I have BOOL stam = true;
and then XCTest(stam == true, @"value is expected to be true");
However, the test is failing and I get this error message:
((value == true) is true) failed - value is expected to be true
You typically need to use the assertion macros provided by the testing framework.
Did you try
XCTAssertEqual(stam, true, @"value is expected to be true");
I would like to call an external GraphQL API (without authentication for the moment).
Here is my code:
local open_api = "https://graphqlzero.almansi.me/api"
local payload = '{"query": "query { post(id: 1) { id title body }}"}'
local headers = {
}
local function craftCall()
    local response
    local data
    pcall(function ()
        response = HttpService:PostAsync(open_api, payload, Enum.HttpContentType.ApplicationJson, false, headers)
        data = HttpService:JSONDecode(response)
    end)
    if not data then return false end
    print(data)
    return false
end

if craftCall() then
    print("Success")
else
    print("Something went wrong")
end
I always get "Something went wrong". I need some help figuring out what is going wrong; in particular, I don't know whether I am formatting the payload correctly.
After your HTTP call, you never return a success result. You have only covered the failure cases:
if not data then return false end
print(data)
return false
So your conditional, if craftCall() then always evaluates to false.
Why not make it return true or data after the print(data)? Then you'll know that it made it to the end of the call successfully.
local function craftCall()
    local success, result = pcall(function()
        local response = HttpService:PostAsync(open_api, payload, Enum.HttpContentType.ApplicationJson, false, headers)
        return HttpService:JSONDecode(response)
    end)
    if not success then
        warn("PostAsync failed with error : ", result)
        return false
    end
    -- return the parsed data
    return result
end
It looks like a bug, but I am asking just in case I have missed something.
Karate version is 1.1.0.
The POST request is shown below. Note that the content type is text/turtle, and I use logPrettyResponse because later requests in the test cover RDF/XML and other serialisations.
* configure logPrettyResponse = true
Given path '/graph'
* text payload =
"""
<http://example.com/a7460f22-561a-4dde-922b-665bd9cf3bd9> <http://schema.org/description> "Test"@en .
"""
And request payload
And header Accept = 'text/turtle'
And header Content-Type = 'text/turtle'
When method POST
Then status 200
And I get the error below:
ERROR com.intuit.karate - org.xml.sax.SAXParseException; lineNumber: 2; columnNumber: 8; Element type "http:" must be followed by either attribute specifications, ">" or "/>"., http call failed
The reason this seems to be happening is that the fromString method in JsValue.java tries to parse the Turtle as XML, because the case '<' condition matches when the response is Turtle.
public static Object fromString(String raw, boolean jsonStrict, ResourceType resourceType) {
    String trimmed = raw.trim();
    if (trimmed.isEmpty()) {
        return raw;
    }
    if (resourceType != null && resourceType.isBinary()) {
        return raw;
    }
    switch (trimmed.charAt(0)) {
        case '{':
        case '[':
            return jsonStrict ? JsonUtils.fromJsonStrict(raw) : JsonUtils.fromJson(raw);
        case '<':
            if (resourceType == null || resourceType.isXml()) {
                return XmlUtils.toXmlDoc(raw);
            } else {
                return raw;
            }
        default:
            return raw;
    }
}
To work around the problem I have set logPrettyResponse to false. I would love to hear any other thoughts, or to have someone confirm that this is a bug.
I'm creating automated smoke tests. I've read that it is not good practice to use more than one assert in unit tests; does this rule also apply to WebDriver tests with Selenium?
In my smoke tests I sometimes use more than 20 asserts to verify that information such as section titles, column titles, and other text that should appear is shown correctly.
Would it be better to separate the asserts into different tests, or is it OK to have multiple asserts in a single test?
If I separate them into different tests, the run time will increase a lot.
Here is an example of the code:
if self.claimSummaryPage.check_if_claim_exists():
    assert self.claimSummaryPage.return_claim_summary_mosaic_text() == 'RESUMEN'
    assert self.claimSummaryPage.return_claim_notes_mosaic_text() == 'NOTAS'
    assert self.claimSummaryPage.return_claim_documents_mosaic_text() == 'DOCUMENTOS'
    assert self.claimSummaryPage.return_claim_payments_mosaic_text() == 'PAGOS'
    assert self.claimSummaryPage.return_claim_services_mosaic_text() == 'SERVICIOS'
    assert "Detalles del siniestro: " + claim_number in self.claimSummaryPage.return_claim_title_text()
    assert self.claimSummaryPage.return_claim_status_text() in self.claimSummaryPage.CLAIM_STATUS
    self.claimSummaryPage.check_claim_back_button_exists()
    assert self.claimSummaryPage.return_claim_date_of_loss_title() == 'Fecha y hora'
    assert self.claimSummaryPage.return_claim_reported_by_title() == 'Denunciante'
    assert self.claimSummaryPage.return_claim_loss_location_title() == 'Lugar'
    assert self.claimSummaryPage.return_claim_how_reported_title() == 'Reportado en'
    assert self.claimSummaryPage.return_claim_what_happened_title() == '¿Qué sucedió?'
    assert self.claimSummaryPage.return_claim_adjuster_title() == 'Tramitadores'
    assert self.claimSummaryPage.return_claim_parties_involved_title() == 'Partes implicadas'
    if self.claimSummaryPage.check_if_claim_has_exposures():
        assert self.claimSummaryPage.return_claim_adjuster_table_name_column_title() == 'Nombre'
        assert self.claimSummaryPage.return_claim_adjuster_table_segment_column_title() == 'Segmento'
        assert self.claimSummaryPage.return_claim_adjuster_table_incident_column_title() == 'Incidente'
        assert self.claimSummaryPage.return_claim_adjuster_table_state_column_title() == 'Estado'
    else:
        assert self.claimSummaryPage.return_claim_adjuster_table_no_exposures_label_text() == 'No se encontraron exposiciones'
    if self.claimSummaryPage.return_claim_lob(claim_number) == "AUTO":
        assert self.claimSummaryPageAuto.return_claim_loss_cause() in self.claimSummaryPageAuto.CLAIM_AUTO_LOSS_CAUSE
        assert self.claimSummaryPageAuto.return_claim_involved_vehicles_title() == 'Vehículos involucrados'
        self.claimSummaryPageAuto.verify_claim_has_involved_vehicles()
        assert self.claimSummaryPageAuto.return_claim_involved_vehicles_table_make_column_title() == 'Marca'
        assert self.claimSummaryPageAuto.return_claim_involved_vehicles_table_model_column_title() == 'Modelo'
        assert self.claimSummaryPageAuto.return_claim_involved_vehicles_table_year_column_title() == 'Año'
        assert self.claimSummaryPageAuto.return_claim_involved_vehicles_table_license_column_title() == 'Patente'
        assert self.claimSummaryPageAuto.return_claim_involved_vehicles_table_loss_party_column_title() == 'Parte vinculada'
        assert self.claimSummaryPageAuto.return_claim_involved_vehicles_table_damage_column_title() == 'Daños'
        assert self.claimSummaryPageAuto.return_claim_involved_vehicles_table_damage_type_column_title() == 'Tipo de daño'
        assert self.claimSummaryPageAuto.return_claim_involved_vehicles_table_first_item_loss_party_text() in self.claimSummaryPageAuto.VEHICLE_LOSS_PARTY
Tests should test the system and user behaviour, not just the assertions.
You can change your tests like this (a generic example):
Let user and summary be the page objects, so:
summary class:
class summary {
    public static expectedDetails = ["something1", "something2"]
    function getDetails() {
        return [self.claimSummaryPage.return_claim_payments_mosaic_text()]
    }
}
now your test:
test("validate user can successfully login and view claim summary") {
    user.userlogins()
    details = summary.getDetails()
    assert(details).to.be.equal(summary.expectedDetails)
}
Here, instead of validating each string individually, we save the values into an array and compare the resulting array with the expected array.
This is a much cleaner approach. Don't put assertions in the page object.
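Applied to the Python code from the question, the same idea could look roughly like this. It is only a sketch: collect_mosaic_titles and EXPECTED_MOSAIC_TITLES are assumed names, while the page-object methods are the ones already used in the question.

EXPECTED_MOSAIC_TITLES = ['RESUMEN', 'NOTAS', 'DOCUMENTOS', 'PAGOS', 'SERVICIOS']

def collect_mosaic_titles(page):
    # Gather the actual titles in one pass so a single comparison covers all of them.
    return [
        page.return_claim_summary_mosaic_text(),
        page.return_claim_notes_mosaic_text(),
        page.return_claim_documents_mosaic_text(),
        page.return_claim_payments_mosaic_text(),
        page.return_claim_services_mosaic_text(),
    ]

# One assertion in the test; the comparison failure shows every title that is wrong.
assert collect_mosaic_titles(self.claimSummaryPage) == EXPECTED_MOSAIC_TITLES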
Given that GUI tests take much more time, it would probably not be efficient to have just one assert in each test. A better option would probably be a test suite in which you execute one assert per test, but run everything in the same run. I have also worked on a project where we implemented our own assert methods for the GUI tests: they cache the results of all assertions made during a test, go through them at the end, and fail the test if any of the cached assertions failed. That was due to the nature of the system we worked with at the time, but maybe it could be a way to solve this for you?
This works as long as the assertions are not on anything that would cause an error if you continued after a failure, e.g. if a step in a process failed.
Example:
my_assertion_cache = list()

def assert_equals(a, b):
    try:
        assert a == b
    except AssertionError:
        # preferably add a reference to the locator where this failed into the message below
        my_assertion_cache.append(f"{a} and {b} were expected to be equal")

def run_after_each_test():
    failures = list(my_assertion_cache)
    my_assertion_cache.clear()  # reset for the next test
    assert failures == [], failures
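A hypothetical usage sketch (pytest-style; claim_summary_page is an assumed fixture, and the teardown hook name depends on your test framework):

def test_claim_summary_titles(claim_summary_page):
    assert_equals(claim_summary_page.return_claim_summary_mosaic_text(), 'RESUMEN')
    assert_equals(claim_summary_page.return_claim_notes_mosaic_text(), 'NOTAS')
    # ...more soft assertions; none of them aborts the test on failure...

def teardown_function(function):
    # Fails the test at the end if any soft assertion was recorded during it.
    run_after_each_test()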
Generally, if your first assert fails, the others will not be executed when you have multiple assertions in one test.
On the other hand, if you do not perform any new action in your test (for example, you are on a page and only checking some UI, without performing any click, select, or other action), you can use multiple assertions.
Remember that automated tests exist so you don't need to run tests manually, and so problems can be identified faster and with more precision. This is why the recommendation is one assertion, one test.
So the question can be translated like this: do I want to identify only one issue, or all possible issues, with the automated tests?
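If you do want one assertion per reported test case without writing dozens of test functions, one option (a sketch, not something from the answers above) is pytest's parametrize. Here claim_summary_page is an assumed fixture, and per-test browser setup still runs for every case unless the fixture has a wider scope.

import pytest

# Hypothetical mapping of page-object getter names to expected titles.
TITLE_CASES = [
    ("return_claim_summary_mosaic_text", "RESUMEN"),
    ("return_claim_notes_mosaic_text", "NOTAS"),
    ("return_claim_documents_mosaic_text", "DOCUMENTOS"),
]

@pytest.mark.parametrize("getter, expected", TITLE_CASES)
def test_claim_summary_title(claim_summary_page, getter, expected):
    # Each title check becomes its own test case, so every failure is reported separately.
    assert getattr(claim_summary_page, getter)() == expected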
I have a request where I get Processing or Submitted in a response parameter, depending on whether the request is still in process or has passed.
I am able to poll and check whether the status is "Processing" or "Submitted", but I am unable to fail the request if I still do not get the expected status after polling 5 times.
How can I fail the request when a certain number of retries does not give me the expected response?
The answer is in your question.
I assume you are polling using a JS function.
If so, you can return a boolean from it: return false if the condition is not met, return true if it is, and then assert the returned value in your feature file.
* def pollingFunc =
"""
function(x) {
  // your polling logic which retrieves the status
  if (status == x) {
    return true;
  } else {
    return false;
  }
}
"""
In the feature:
* def statusFound = pollingFunc("Processed" )
* assert (statusFound == true)
If the expected status is not obtained after polling, the assert will fail the test.
I check an async write with the following code.
BOOL bOk = ::GetOverlappedResult(hFile, pOverlapped, dwBytesTransferred, TRUE);
if ( FALSE == bOk )
{
    TRACE_ERROR_NO_ASSERT(GetOverlappedResult);
}
bOk is TRUE, but dwBytesTransferred is 0, and pOverlapped->Internal is 258 (timeout).
My question is: did my async operation time out and will it finish later, or did it just fail? Should I call CancelIo to cancel this timed-out operation, like this?
BOOL bOk = ::GetOverlappedResult(hFile, pOverlapped, dwBytesTransferred, TRUE);
if ( FALSE == bOk )
{
    TRACE_ERROR_NO_ASSERT(GetOverlappedResult);
    return FALSE;
}
if ( 0 == dwBytesTransferred )
{
    CancelIoEx(hFile, pOverlapped); // is this necessary?
}
I checked the MSDN documentation, but there is no description of this condition.
Thanks in advance.
Your operation is finished (it failed) with STATUS_TIMEOUT; this is a final status and the operation is complete. You do not need to, and cannot, cancel it, because it has already finished. The fact that GetOverlappedResult returns TRUE and does not set an error code here is simply bad (I would say broken) design of this Win32 API: it unconditionally returns TRUE and leaves the last error unset whenever the status is non-negative (0 <= status), so it handles STATUS_TIMEOUT incorrectly (I suspect you are working with a serial (COM) port).
So the formal answer:
Your operation is finished.
You should not call CancelIo.
However, I think using GetOverlappedResult for asynchronous I/O makes little sense in general; it is better to use an APC or an IOCP completion instead.