How to add a test case to a test suite in XCUITest? - xctest

I'm trying to run a custom test suite which includes several test cases. For example, I've written 4 test scripts (test_login_success, test_login_fail, test_register_xxx, test_register_yyy), and I just want to run the test_login_* ones. How do I set the defaultTestSuite and add test cases to it?

The test cases you create belong to their class. If you want to customise test runs, consider updating to the new Xcode 11. The new version of Xcode has a test plans feature that gives you better control over test execution.
Introduction video:
https://developer.apple.com/videos/play/wwdc2019/413/
If you prefer to stay on an earlier Xcode, you can add a scheme for each scenario.
Also, you can pass test names to the xcodebuild shell command.
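For example, Xcode 8 and later support the -only-testing option, which selects individual tests. A rough sketch (the scheme, destination, bundle, and class names below are placeholders, not from the question):
xcodebuild test -scheme MyApp \
  -destination 'platform=iOS Simulator,name=iPhone 11' \
  -only-testing:MyAppUITests/LoginTests/test_login_success \
  -only-testing:MyAppUITests/LoginTests/test_login_fail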

Related

Why do my selenium xunit tests in Visual Studio run in parallel?

I'm writing unit tests using Selenium and xunit and on their own they run great but if I select all of the tests (multiple classes) in Test Explorer (they appear to be grouped by class - this is not intentional) they run in parallel. Run Tests in Parallel is not selected. Each one of my tests creates and then deletes test data so they obviously can't run in parallel. One test might delete data right after another test created that data and so the test would fail. So how can I run all of my tests and not have them run in parallel? I guess I could make them all use one partial class that spans multiple files but that's not my first choice.
I found a solution (although not an explanation). Just put [Collection("Sequential")] on each test class, as the first line under each namespace. xUnit runs all test classes in the same collection sequentially, so this forces everything to run one at a time; a sketch follows below.
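A minimal sketch of that attribute in place (namespace and class names are illustrative, not from the question):
using Xunit;

namespace MyApp.Tests
{
    [Collection("Sequential")]
    public class LoginTests
    {
        [Fact]
        public void CreatesThenDeletesTestData() { /* ... */ }
    }

    [Collection("Sequential")]
    public class RegistrationTests
    {
        [Fact]
        public void TouchesTheSameTestData() { /* ... */ }
    }
}
Because both classes declare the same collection, xUnit will never run their tests at the same time.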

Serial execution of package tests

I have implemented several packages for a web API, each with their own test cases. When each package is tested using go test ./api/pkgname the tests pass. If I want to run all tests at once with go test ./api/... test cases always fail.
In each test case, I recreate the entire schema using DROP SCHEMA public CASCADE followed by CREATE SCHEMA public and apply all migrations. The test suite reports errors back at random, saying a relation/table does not exist, so I guess each test suite (per package) is run in parallel somehow, thus messing up the DB state.
I tried to pass along some test flags like go test -cpu 1 -parallel 0 ./src/api/... with no success.
Could the problem here be tests running in parallel, and if yes, how can I force serial execution?
Update:
Currently I use this workaround to run the tests, but I still wonder if there's a better solution
find <dir> -type d -exec go test {} \;
As others have pointed out, -parallel doesn't do the job (it only controls parallelism within a single package). However, you can use the flag -p=1 to run through the package tests in series. This is documented here:
http://golang.org/src/cmd/go/testflag.go
but (as far as I can tell) not on the command line, in go help, etc. I'm not sure it is meant to stick around (although I'd argue that if it is removed, -parallel should be fixed).
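For example, using the path from the question, this builds and tests one package at a time:
go test -p 1 ./api/...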
The go tool is provided to make running unit tests easier, using the convention that *_test.go files contain unit tests. Because it assumes they are unit tests, it also assumes they are hermetic. It sounds like your tests either aren't unit tests, or they violate the assumptions a unit test should fulfill.
If you mean for these tests to be unit tests, then you probably need a mock database. An in-memory mock of your database will ensure that each unit test is hermetic and can't be interfered with by other unit tests.
If you mean for these tests to be integration tests, you are probably better off not using the go tool for them. What you probably want is a separate test binary whose execution you can control, with your integration test scripts in there.
The good news is that creating a mock in Go is insanely easy. Change your code to take an interface with the methods you care about for the database, then write an in-memory implementation of that interface for testing purposes and pass it into the application code you want to test (see the sketch below).
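A rough sketch of that pattern; the names (UserStore, Greet, memStore) are hypothetical, not from the answer:
package store

import (
	"fmt"
	"testing"
)

// UserStore is the narrow interface the application depends on,
// instead of a concrete database handle.
type UserStore interface {
	GetName(id int) (string, error)
}

// Greet is the application code under test; it only sees the interface.
func Greet(s UserStore, id int) string {
	name, err := s.GetName(id)
	if err != nil {
		return "hello, stranger"
	}
	return "hello, " + name
}

// memStore is the in-memory mock used only by tests.
type memStore struct{ names map[int]string }

func (m *memStore) GetName(id int) (string, error) {
	name, ok := m.names[id]
	if !ok {
		return "", fmt.Errorf("no user with id %d", id)
	}
	return name, nil
}

func TestGreet(t *testing.T) {
	s := &memStore{names: map[int]string{1: "alice"}}
	if got := Greet(s, 1); got != "hello, alice" {
		t.Errorf("Greet(s, 1) = %q, want %q", got, "hello, alice")
	}
}
No real database is touched, so the test is hermetic and can't interfere with any other package's tests.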
Just to clarify, @Jeremy's answer is still the accepted one:
Since my integration tests were only run on one package (api), I removed the separate test binary in the end and created a pattern to separate test types by:
Unit tests use the normal TestX name
Integration tests use Test_X
I created shell scripts (utest.sh/itest.sh) to run either of those.
For unit tests go test -run="^(Test|Benchmark)[^_](.*)"
For integration tests go test -run="^(Test|Benchmark)_(.*)"
Run both using the normal go test
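Under that convention, the two kinds of tests might look like this (the function names are placeholders, not from the answer):
package api

import "testing"

// Unit test: matched by -run="^(Test|Benchmark)[^_](.*)"
func TestParseToken(t *testing.T) { /* ... */ }

// Integration test: matched by -run="^(Test|Benchmark)_(.*)"
func Test_UserSignupFlow(t *testing.T) { /* ... */ }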

Is there any advanced console test runner for xUnit.net

I am searching for an advanced console based test runner for xUnit.net. The requirements are:
It should be able to list all executed tests
It should be able to list all tests within an assembly
You should be able to filter the tests by namespace, class name and method name
It should work with mono
Automatic re-execution of the tests after compilation would be nice
Do you know any?
The new AssemblyRunner should be able to do some of the things on your requirements list now, such as listing tests.
For an example of its use, see the sample on GitHub.
https://github.com/xunit/samples.xunit/blob/master/TestRunner/Program.cs
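A rough sketch of what that looks like, based on the linked sample and assuming the xunit.runner.utility package (the assembly name is a placeholder):
using System;
using System.Threading;
using Xunit.Runners;

class Program
{
    static readonly ManualResetEvent Finished = new ManualResetEvent(false);

    static int Main()
    {
        using (var runner = AssemblyRunner.WithoutAppDomain("MyTests.dll"))
        {
            runner.OnTestPassed = info => Console.WriteLine("PASS " + info.TestDisplayName);
            runner.OnTestFailed = info => Console.WriteLine("FAIL " + info.TestDisplayName + ": " + info.ExceptionMessage);
            runner.OnExecutionComplete = info => Finished.Set();
            runner.Start(); // a type name can be passed here to run a single class
            Finished.WaitOne();
        }
        return 0;
    }
}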
The existing xunit.runner.console can also filter tests using traits.
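For example (the assembly and trait names here are hypothetical):
xunit.console.exe MyTests.dll -trait "Category=Smoke"
The v2 console runner also accepts -class, -method, and -namespace switches, which would cover the filtering requirements above.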

What is test harness?

I am having some difficulty understanding what a test harness is, along with related common terms like test case and test script in automation testing.
So this is what I got so far:
Automation testing is the use of special software (other than the software being tested) to control the execution of tests and compare the actual results with the expected results. It also involves setting up test pre-conditions. This kind of testing is most suitable for tests that are carried out frequently.
Now, I am having some problems with test harness. I read that it consists of a test suite of test cases, input files, output files, and test scripts.
Now my question is what is the difference between test case and test script?
How do you use the software to test the different functions of the application under test (AUT)? I also came across some terms like suite master and case agents.
Several broad questions there, will try to answer based on my experience.
Think of a test harness as an 'enabler' that actually does all the work of (1) executing tests using a (2) test library and (3) generating reports. It requires that your test scripts be designed to handle different (4) test data and (5) test scenarios. Essentially, once the test harness is in place and prerequisite data is prepared (aka data prep), someone should be able to click a button or run one command to execute all your tests and generate reports.
A test harness is most likely a collection of different things that make all of the above happen. If you wrote unit tests while developing your application, that would be part of a test harness. You would also have other tests for the functionality of your app, like: user logs in to site, sees favourites pane, recent messages and notifications. Then you add in a 'runner' of sorts that goes through all of your "test scripts" and runs them (instead of you having to execute tests one at a time). If it feels like a test harness is more of a conceptual collection rather than a single piece of software, then you're understanding this correctly :-)
Now my question is what is the difference between test case and test script?
Simple but not entirely correct answer: A Test Case defines test objectives, description, pre-conditions, steps (descriptive or specific), expected results. A Test Script would then be the actual automated script that you execute to do that test. That's in an Automation context. And it changes. A lot.
What certifications like ISTQB define as test scenarios is usually referred to as test cases in some companies and countries. In others, the terms test case and test script are swapped when referring to manual testing (when the steps are given in detail but are not part of an automation harness). Others say that test script exclusively means an automated test. On the other hand, one can also argue that several test cases can be combined in one test script and vice versa. So that begs the question: how does a test procedure fit in?
A test development stage can have: "Test procedures, test scenarios, test cases, test datasets, test scripts to use in testing software."
If you assume a > (is larger than/collection of) relation, how would you relate those? Rhetorical question - that differs based on where you work, who your client is, etc. Best thing is to define it with your colleagues/clients and agree on the understanding of the terms rather than the definition. I currently go with test script = automated script, based on a pre-existing manual test case or a test scenario.
Also, how do you use the software to test the different functions of the AUT?
You write different tests to test different things. Each test does certain actions and checks if the AUT's output matches what you expected - If displayed_value == expected_value. An input file could be used to provide data for the test- list of test usernames and passwords, for instance. Or run the same test with different data - login as a different user with different messages, etc.
Take a look at Robot Framework and Selenium. A Robot Framework test (written in text or HTML files) combined with the Selenium library would allow you to write an automated test which tests something specific, like a home page validation. You would write a separate test to ensure that a user can see all his/her messages, another to test clearing notifications, and so on.
test harness: A test environment comprised of stubs and drivers needed to execute a test.
Test harnesses and stubs will be used to replicate the missing items (components not yet included in the tests or external systems).
Often, when small-scale Integration Testing of several modules or components is performed, it is necessary to devise or improvise methods and tools to get the test data to the components under test. This is often called a test harness. Because of the need to understand the technicalities required to build a test harness this testing is almost always done by the development team.
A test harness may facilitate the testing of components or part of a system by simulating the environment in which that test object will run. This may be done either because other components of that environment are not yet available and are replaced by stubs and/or drivers, or simply to provide a predictable and controllable environment in which any faults can be localized to the object under test. These are usually bespoke programs generated by developers to help in the testing process. If they are used in a mature organisation it is quite possible that these harnesses will be considered as ‘Test Assets’ and subject to Version Control & Configuration Management.
A test harness contains all the information required to compile and run a test. This includes test cases, source files under test, stubs, and Target Deployment Port (TDP) configuration settings.
A Test Harness is the collection of all the items needed to test software at the unit, module, application or system level and provides the mechanism to execute the test. Every item such as input data, test parameters, test case, test script, expected output data, test tool, and test result report is part of the test harness.

How do I run a subset of OCUnit tests in Xcode

I have a suite of unit tests that I use before checking in my project. However, very often it's the case that only one of them finds some regression in the code. In these cases I'd like to only run that particular unit test while debugging the failure. I haven't found any way to do this in Xcode. Is it possible?
If you're happy restricting your testing to a single test class, a simple option is to create a second test target (duplicate the existing target, change the product name and remove the contents of the "Compile Sources" build phase, if you wish) and add only the test source file you're trying to fix to it.
Alternatively, you can use the "Other Test Flags" option to pass a -SenTest argument to otest, the test runner:
% /Developer/Tools/otest
2009-08-29 22:28:39.555 otest[70089:10b] Usage: otest [-SenTest Self | All | None |
<TestCaseClassName/testMethodName>] <path of unit to be tested>
More information about using this method is here.
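For example, to run a single test method you could set Other Test Flags to something like the following (the class and method names are placeholders, following the <TestCaseClassName/testMethodName> form from the usage text above):
-SenTest MyTestCase/testSomething
Passing -SenTest Self, All, or None selects the corresponding broader scopes.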
Thanks for that push in the right direction. I ended up using the same basic concept, but I added a GUI that lets you select what gets run, as well as a nice red/green status for each test. If anyone is interested, the code is at the URL below. The UI needs more spit and polish, but it seems to be working.
http://github.com/nall/XcodeUnitTestGUI/tree/master
After I started the project above, I found this project which is really fantastic.
http://github.com/gabriel/gh-unit
For new readers: a much better way, now available in Xcode, is to edit the scheme for the target to be tested and select "Test" in the left-hand column of the scheme pane.
Use the widgets in the Tests column to expand targets and suites.
You can enable or disable tests per test target, per suite, or per individual test using the checkboxes on the right.