Can TestComplete (SmartBear) run unittest classes in script steps?

My question is whether I can write scripts inside TestComplete using unittest.
According to the TestComplete (TC) documentation, it is possible to run unittest scripts --> https://support.smartbear.com/testcomplete/docs/working-with/integration/unit-test-frameworks/pyunit.html
However, all the examples explicitly involve the UnitTest module, whereas I want to use the Scripts module.
Simply put, my intent is to automate a desktop app using the Page Object Model pattern. I am aware that there are Python libraries like PyAutoGUI, but the application under test is a nightmarish mess, so it is easier to handle with the help of TC: I gave PyAutoGUI, the Appium driver, and Java a try and got stuck after just a few elements. (The AUT mixes Qt, Java Swing, JavaFX, and browser locators, and probably some WPF items too, so it is easier to let TC handle it.)
My idea is to:
record some actions on the desktop,
convert them to a script,
pack them into classes, one per window and/or widget (like the internal web browser),
use unittest test classes (inheriting from unittest.TestCase) to run assertions in a clean manner. => Here is the problem.
Writing things like this in TC won't work:
import unittest

class SimpleTest(unittest.TestCase):
    def foo(self):
        a = 'abc'
        b = 'abc'
        self.assertEqual(a, b)

    def bar(self):
        a = 'abc'
        b = 'def'
        self.assertEqual(a, b)  # I would expect TC to throw an
                                # assertion exception here,
                                # but all I got is 'passed' in the log

# if __name__ == '__main__' won't work in TC:
# if __name__ == '__main__':
#     unittest.main()
If I write it this way, it works as "intended":
def foo():
    a = 'abc'
    b = 'abc'
    assert a == b

def bar():
    a = 'abc'
    b = 'def'
    assert a == b  # BOOM! FAILED
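For reference, the pattern the linked PyUnit page revolves around is a plain driver routine that loads and runs the suite explicitly, instead of relying on unittest.main(). A minimal sketch along those lines (run_simple_tests is a hypothetical entry point to call as the TC script item; note that unittest only collects methods whose names start with test, which is another reason foo and bar above never ran):

import unittest

class SimpleTest(unittest.TestCase):
    # unittest only collects methods whose names start with 'test'
    def test_foo(self):
        self.assertEqual('abc', 'abc')

    def test_bar(self):
        self.assertEqual('abc', 'def')  # should be reported as a failure

def run_simple_tests():
    # Build and run the suite explicitly instead of calling unittest.main()
    suite = unittest.TestLoader().loadTestsFromTestCase(SimpleTest)
    result = unittest.TextTestRunner(verbosity=2).run(suite)
    # Raise so the hosting runner can mark the script step as failed
    if not result.wasSuccessful():
        failed = len(result.failures) + len(result.errors)
        raise AssertionError('%d unittest failure(s)' % failed)

Calling bar directly as a script routine bypasses unittest entirely, which is why the plain assert version behaves as expected.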

Related

Access Pytest result in teardown of Appium test

My conftest file for my Appium/Python test framework looks like this:
@pytest.fixture()
def setup(request):
    desired_caps = {
        ...
    }
    request.cls.driver = webdriver.Remote(
        command_executor="https://blah.com",
        desired_capabilities=desired_caps
    )
    yield request.cls.driver
    request.cls.driver.quit()
And what I am trying to do is be able to access pytest results from within the 'yield' section, and send a pass/fail result to BrowserStack using the command:
driver.execute_script('browserstack_executor: {"action": "setSessionStatus", "arguments": {"status":"passed", "reason": "All elements located and assertions passed!"}}')
The problem is, the only method I know of to access the pytest results utilizes a hook in conftest, i.e.:
@pytest.hookimpl(hookwrapper=True)
def pytest_runtest_makereport(item, call):
    outcome = yield
    result = outcome.get_result()
    if result.when == "call" and result.passed:
        do_something
    if result.when == "call" and result.failed:
        do_something_else
But how do I integrate these two? In other words, how can I take the result from the hook, and then use the driver instance from the setup fixture to run the execute_script command? Everything I have tried leads to issues with not being able to access the Appium driver instance. Please help!
Update:
I have achieved this by using a global variable in the hook to save the result, and then using that data in the fixture to send the corresponding message, but I know this is not ideal. So the question remains: how can I store a variable from the hook in conftest that gets the pytest result, and pass it to the yield section of the setup fixture?
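A cleaner route, for what it's worth, is the recipe from the pytest documentation for making test results available in fixtures: attach the report to the test item inside the hook, then read it back through request.node in the fixture's teardown. A sketch under that assumption (the webdriver import and capabilities are placeholders, as in the question):

# conftest.py
import pytest
from appium import webdriver  # assumed Appium Python client

@pytest.hookimpl(hookwrapper=True)
def pytest_runtest_makereport(item, call):
    outcome = yield
    result = outcome.get_result()
    # Stash the report on the test item so fixtures can read it via request.node
    setattr(item, "rep_" + result.when, result)

@pytest.fixture()
def setup(request):
    desired_caps = {}  # placeholder, as in the question
    request.cls.driver = webdriver.Remote(
        command_executor="https://blah.com",  # placeholder URL from the question
        desired_capabilities=desired_caps
    )
    yield request.cls.driver
    # After the yield, the test has run and the hook above has attached its report
    report = getattr(request.node, "rep_call", None)
    status = "passed" if report is not None and report.passed else "failed"
    request.cls.driver.execute_script(
        'browserstack_executor: {"action": "setSessionStatus", '
        '"arguments": {"status": "%s"}}' % status
    )
    request.cls.driver.quit()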

Karate: Scenario fails if it contains __arg and is run in 'stand-alone' mode

I have run into a problem: when I try to run a Scenario containing the built-in __arg variable as 'stand-alone' (not 'called'), my test fails with an error (I do not #ignore the called Scenario, so that it can be used in both 'called' and 'stand-alone' modes):
evaluation (js) failed: __arg, javax.script.ScriptException: ReferenceError: "__arg" is not defined in <eval> at line number 1
stack trace: jdk.nashorn.api.scripting.NashornScriptEngine.throwAsScriptException(NashornScriptEngine.java:470)
The following two simple features should be enough to reproduce it.
called-standalone.feature:
Feature: Called + Stand-alone Scenario

  Scenario: Should not fail on __arg when run as stand-alone
    * def a = __arg
    * print a
caller.feature:
Feature: Caller

  Scenario: call without args
    When def res = call read('called-standalone.feature')
    Then match res.a == null

  Scenario: call with args
    When def res = call read('called-standalone.feature') {some: 42}
    Then match res.a == {some: 42}
Putting these two features into the skeleton project and running mvn test will show the error.
I expect this to work, as the docs say that 'called' Karate scripts can behave like 'normal' Karate tests in 'stand-alone' mode:
‘called’ Karate scripts don’t need to use any special keywords to ‘return’ data and can behave like ‘normal’ Karate tests in ‘stand-alone’ mode if needed
All Karate variables have to be "defined" at run time. This is a rule that cannot be relaxed.
So you should re-design your scripts. The good thing is that you can use karate.get() to set a "default value":
* def a = karate.get('__arg', null)
That should answer your question.

Using mocks in a Karate DSL feature file with a stand-alone run

I have a REST service written in a language other than Java.
It has a few dependencies on other REST services.
For example, the service under development and testing is A; the other services are B and C, respectively.
I want to run system tests for A; some tests require B and/or C to be online and to handle queries from A.
I wrote b-mock.feature and c-mock.feature to represent those services as mocks.
I also wrote some a-test-smth.feature files to run tests against A.
Is it possible to add some information to a-test-smth.feature to enable certain mocks for a concrete test?
Right now I have to run the standalone karate.jar twice: first for mocking, second to run the tests. That approach works, but I can't check that
some API calls to A do not require B or C,
and I can't emulate service B being down, or returning a slow or incorrect response.
Thanks.
Are you using Java? If so, the best approach is to perform the set-up of your test in Java code. You can start two mocks for B and C and then start the main test for your service A. At the end, do clean-up if needed.
You can refer this as an example: https://github.com/intuit/karate/tree/master/karate-netty#consumer-provider-example
Row 3 shows how you can start a mock and run a Karate test.
If you are not using Java and would like to use only the stand-alone JAR, it is actually possible using Java interop, and quite easy; I just tried it.
EDIT: This API is now built into Karate, so you don't need to write the extra JS code below: https://github.com/intuit/karate/tree/master/karate-netty#within-a-karate-test
(Obsolete)
First, create this bit of JavaScript code, which is smart enough to start a Karate mock:
function() {
  var Mock = Java.type('com.intuit.karate.netty.FeatureServer');
  var file = new java.io.File('src/test/java/mock/web/cats-mock.feature');
  var server = Mock.start(file, 0, false, null);
  return server.port;
}
And this is how it can look in the Background of your main Karate test. You can see how to add conditional logic if needed, and you have plenty of ways to change things based on your environment.
Background:
  * def starter = read('start-mock.js')
  * def port = karate.env == 'mock' ? starter() : 8080
  * url 'http://localhost:' + port + '/cats'
Does this answer your question? Let me know and I will add this trick to the documentation!

How to run a specific test case in the selected environment in SoapUI

I have multiple environments and a lot of test cases, but not all test cases need to be run in every environment. Is there a way to run only specific test cases from a test suite based on the selected environment?
For Example
If I select Environment1, it will run the following test cases
TC0001
TC0002
TC0003
TC0004
TC0005
If I select Environment2, it will run only the following test cases
TC0001
TC0003
TC0005
There can be different solutions to achieve this; since you have multiple environments, you are presumably using the Pro software.
I would achieve it using the test suite's Setup Script:
Create a test-suite-level custom property, using the same name as your environment. For instance, if DEV is the environment defined, use that as the test suite property name, and provide the list of test case names, separated by commas, as the value for that property, say TC1, TC2, etc.
Similarly, define the other environments and their values as well.
Copy the script below into the test suite's Setup Script; when executed, it enables or disables the test cases according to the environment and the property value.
Test Suite's Setup Script
/**
 * This is soapui's Setup Script
 * which enables / disables required
 * test cases based on the user list
 * for that specific environment
 **/
def disableTestCase(testCaze) {
    testCaze.disabled = true
}

def enableTestCase(testCaze) {
    testCaze.disabled = false
}

def getEnvironmentSpecificList(def testSuite) {
    def currentEnv = testSuite.project.activeEnvironment.NAME
    def enableList = testSuite.getPropertyValue(currentEnv).split(',').collect { it.trim() }
    log.info "List of test for enable: ${enableList}"
    enableList
}

def userList = getEnvironmentSpecificList(testSuite)
testSuite.testCaseList.each { kase ->
    if (userList.contains(kase.name)) {
        enableTestCase(kase)
    } else {
        disableTestCase(kase)
    }
}
Another way to achieve this is to use the Event feature of ReadyAPI: with TestRunListener.beforeRun() you can filter whether each test case should execute or be ignored.
EDIT:
If you are using ReadyAPI, you can use the new feature that lets you tag test cases. A test case can be tagged with multiple values, and you can execute tests by specific tags. In that case, you may not need the Setup Script approach, which is for the Open Source edition. Refer to the documentation for more details.
Note that this tag feature is specific to the Pro software; the Open Source edition does not have it.

How do I dynamically trigger downstream builds in Jenkins?

We want to dynamically trigger integration tests in different downstream builds in jenkins. We have a parametrized integration test project that takes a test name as a parameter. We dynamically determine our test names from the git repo.
We have a parent project that uses jenkins-cli to start a build of the integration project for each test found in the source code. The parent project and integration project are related via matching fingerprints.
The problem with this approach is that aggregating test results doesn't work. I think the problem is that the "downstream" integration tests are started via jenkins-cli, so Jenkins doesn't realize they are downstream.
I've looked at many jenkins plugins to try to get this working. The Join and Parameterized Trigger plugins don't help because they expect a static list of projects to build. The parameter factories available for Parameterized Trigger won't work either because there's no factory to create an arbitrary list of parameters. The Log Trigger plugin won't work.
The Groovy Postbuild Plugin looks like it should work, but I couldn't figure out how to trigger a build from it.
def job = hudson.model.Hudson.instance.getJob("job")
def params = new StringParameterValue('PARAMTEST', "somestring")
def paramsAction = new ParametersAction(params)
def cause = new hudson.model.Cause.UpstreamCause(currentBuild)
def causeAction = new hudson.model.CauseAction(cause)
hudson.model.Hudson.instance.queue.schedule(job, 0, causeAction, paramsAction)
This is what finally worked for me.
NOTE: The Pipeline Plugin should render this question moot, but I haven't had a chance to update our infrastructure.
To start a downstream job without parameters:
job = manager.hudson.getItem(name)
cause = new hudson.model.Cause.UpstreamCause(manager.build)
causeAction = new hudson.model.CauseAction(cause)
manager.hudson.queue.schedule(job, 0, causeAction)
To start a downstream job with parameters, you have to add a ParametersAction. Suppose Job1 has parameters A and C which default to "B" and "D" respectively. I.e.:
A == "B"
C == "D"
Suppose Job2 has the same A and C parameters, but also takes parameter E, which defaults to "F". The following post-build script in Job1 will copy its A and C parameters and set parameter E to the concatenation of A's and C's values:
params = []
val = ''
manager.build.properties.actions.each {
    if (it instanceof hudson.model.ParametersAction) {
        it.parameters.each {
            value = it.createVariableResolver(manager.build).resolve(it.name)
            params += it
            val += value
        }
    }
}
params += new hudson.model.StringParameterValue('E', val)
paramsAction = new hudson.model.ParametersAction(params)
jobName = 'Job2'
job = manager.hudson.getItem(jobName)
cause = new hudson.model.Cause.UpstreamCause(manager.build)
causeAction = new hudson.model.CauseAction(cause)
def waitingItem = manager.hudson.queue.schedule(job, 0, causeAction, paramsAction)
def childFuture = waitingItem.getFuture()
def childBuild = childFuture.get()
// Record the triggered build so it shows up as a downstream build
hudson.plugins.parameterizedtrigger.BuildInfoExporterAction.addBuildInfoExporterAction(
    manager.build, jobName, childBuild.number, childBuild.result
)
You have to add $JENKINS_HOME/plugins/parameterized-trigger/WEB-INF/classes to the Groovy Postbuild plugin's Additional groovy classpath.
Execute this Groovy script:
import hudson.model.*
import jenkins.model.*

def build = Thread.currentThread().executable
def jobPattern = "PUTHEREYOURJOBNAME"
def matchedJobs = Jenkins.instance.items.findAll { job ->
    job.name =~ /$jobPattern/
}
matchedJobs.each { job ->
    println "Scheduling job name is: ${job.name}"
    job.scheduleBuild(1, new Cause.UpstreamCause(build), new ParametersAction([
        new StringParameterValue("PROPERTY1", "PROPERTY1VALUE"),
        new StringParameterValue("PROPERTY2", "PROPERTY2VALUE")
    ]))
}
If you don't need to pass properties from one build to the other, just leave the ParametersAction out.
The build you schedule will have the same "cause" as your initial build; that's a nice way to pass along the "Changes". If you don't need this, just don't use new Cause.UpstreamCause(build) in the call.
Since you are already starting the downstream jobs dynamically, how about waiting until they are done and copying the test result files to the parent workspace? (I would archive them on the downstream jobs and then just download the build artifacts, as sketched below.) You might need to aggregate the files manually, depending on whether the test plugin can work with several test result pages. Then, in a post-build step of the parent job, configure the appropriate test plugin.
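As a rough illustration of the download step (not part of the original answer): Jenkins exposes archived artifacts at a predictable URL, so a small script running in the parent job could pull them. The host, job names, and artifact path below are hypothetical placeholders:

import requests  # assumes the requests library is available

JENKINS_URL = 'http://jenkins.example.com'  # hypothetical Jenkins host
DOWNSTREAM_JOBS = ['integration-test-a', 'integration-test-b']  # hypothetical job names

for job in DOWNSTREAM_JOBS:
    # Standard Jenkins artifact URL layout: /job/<name>/<build>/artifact/<path>
    url = f'{JENKINS_URL}/job/{job}/lastCompletedBuild/artifact/results.xml'
    response = requests.get(url)
    response.raise_for_status()
    with open(f'{job}-results.xml', 'wb') as out:
        out.write(response.content)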
Using the Groovy Postbuild Plugin, maybe something like this will work (I haven't tried it):
def job = hudson.getItem(jobname)
hudson.queue.schedule(job)
I am actually surprised that, if you fingerprint both jobs (e.g. with the BUILD_TAG variable of the parent job), the aggregated results are not picked up. In my understanding, Jenkins simply looks at md5sums to relate jobs (see "Aggregate downstream test results"), and triggering via the CLI should not affect aggregating results. Somehow, there is something additional going on to maintain the upstream/downstream relation that I am not aware of...
This worked for me using "Execute system Groovy script":
import hudson.model.*
def currentBuild = Thread.currentThread().executable
def job = hudson.model.Hudson.instance.getJob("jobname")
def params = new StringParameterValue('paramname', "somestring")
def paramsAction = new ParametersAction(params)
def cause = new hudson.model.Cause.UpstreamCause(currentBuild)
def causeAction = new hudson.model.CauseAction(cause)
hudson.model.Hudson.instance.queue.schedule(job, 0, causeAction, paramsAction)