How to skip Clojure Midje tests

If I have a Clojure test suite written using the Midje testing framework, how do I skip individual tests? For example, if I were using JUnit in Java and wished to skip a single test, I would add an @Ignore annotation above that test method. Is there an equivalent to this for Midje?
I realise that I could add a label to my test metadata and then run the test suite excluding that label. For example, if I labelled my test with :dontrun, I could then run the test suite with "lein midje :filter -dontrun". That would involve a change to the Continuous Integration task that runs the test suite, though, and I'd prefer not to do that. Is there a test label equivalent to JUnit's @Ignore, so that I only need to change the Midje test code and not my Continuous Integration task?
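For reference, the label approach described above would look like this in Midje (the fact body here is just an illustration):

(fact :dontrun "an interesting sum"
  (sum-up 1 2) => 4)

and "lein midje :filter -dontrun" would then exclude it.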

future-fact does what you want, just substitute (or wrap) your fact with it:
(future-fact "an interesting sum"
  (sum-up 1 2) => 4)
This will, instead of executing the test code, print a message during the test run:
WORK TO DO "an interesting sum" at (some_tests.clj:23)

Related

How to add test case to testsuite in xcuitest?

I'm trying to run a custom test suite which includes several test cases. For example, I've written four test scripts (test_login_success, test_login_fail, test_register_xxx, test_register_yyy), and I just want to run the test_login_* ones. How do I set the defaultTestSuite and add test cases to it?
The test cases you create belong to their class. If you want to customise test runs, you should consider updating to the new Xcode 11. The new version of Xcode has a test plans feature that gives you better control over test execution.
Introduction video:
https://developer.apple.com/videos/play/wwdc2019/413/
If you prefer to stay on a previous Xcode, you should add schemes for your scenarios.
You can also pass test names to the xcodebuild shell command.
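For example, a hypothetical invocation (the scheme and target names here are invented) that runs only the login tests via xcodebuild's -only-testing flag:

xcodebuild test -scheme MyApp -destination 'platform=iOS Simulator,name=iPhone 11' -only-testing:MyAppUITests/LoginTests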

Skipping test steps in testng

I am following the POM approach with TestNG for designing my framework. Consider a scenario where a test case fails at the nth test step inside a @Test method. Can anyone suggest how I can skip the remaining test steps (from n+1 onwards)?
Since I am automating manual test cases, each @Test corresponds to one test case, so I cannot split the steps into multiple @Test methods. When a test step fails, it needs to skip the next steps in that @Test method and proceed to the next test case.
Also I need the count of test steps skipped in the result.
Kindly help.
Looks like you are basically looking for something along the lines of what a BDD tool such as Cucumber provides. Cucumber lets you create a .feature file which contains one or more scenarios (you can visualize each of your scenarios as one @Test annotated test method).
You could then leverage one of the Cucumber integrations, i.e.,
choose either the JUnit integration, or
choose the TestNG integration,
and let one of these run your BDD tests.
Here, when a particular step fails, all subsequent steps are aborted (which is what you are asking for).
Outside of Cucumber, I don't think you can have this done via any other mechanism. The reporting needs (such as how many steps were skipped) can be fulfilled by any Cucumber-based report.
You should start from here: http://docs.cucumber.io/guides/10-minute-tutorial/
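For illustration, a minimal TestNG-backed runner might look like this (the package names and features path are assumptions, and the exact imports vary with your Cucumber version; this sketch assumes the io.cucumber:cucumber-testng dependency):

import io.cucumber.testng.AbstractTestNGCucumberTests;
import io.cucumber.testng.CucumberOptions;

// Runs every scenario found under the features directory as a TestNG
// test. When a step fails, Cucumber aborts the remaining steps of that
// scenario and reports them as skipped, which covers both requirements.
@CucumberOptions(features = "src/test/resources/features",
        glue = "com.example.steps")
public class RunCucumberTest extends AbstractTestNGCucumberTests {
}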

Serial execution of package tests

I have implemented several packages for a web API, each with its own test cases. When each package is tested separately with go test ./api/pkgname, the tests pass. If I run all tests at once with go test ./api/..., the test cases always fail.
In each test case, I recreate the entire schema using DROP SCHEMA public CASCADE followed by CREATE SCHEMA public and apply all migrations. The test suite reports errors back at random, saying a relation/table does not exist, so I guess each test suite (per package) is run in parallel somehow, thus messing up the DB state.
I tried to pass along some test flags like go test -cpu 1 -parallel 0 ./src/api/... with no success.
Could the problem here be tests running in parallel, and if yes, how can I force serial execution?
Update:
Currently I use this workaround to run the tests, but I still wonder if there's a better solution:
find <dir> -type d -exec go test {} \;
As others have pointed out, -parallel doesn't do the job (it only controls parallelism within a package). However, you can use the flag -p=1 to run the package tests in series. This is documented here:
http://golang.org/src/cmd/go/testflag.go
but (AFAICT) not in go help or the other command-line documentation. I'm not sure it is meant to stick around (although I'd argue that if it is removed, -parallel should be fixed).
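With the layout from the question, the whole tree then runs one package at a time:

go test -p=1 ./api/...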
The go tool is provided to make running unit tests easier, using the convention that *_test.go files contain unit tests. Because it assumes they are unit tests, it also assumes they are hermetic. It sounds like your tests either aren't unit tests, or they violate the assumptions that a unit test should fulfill.
If you mean for these tests to be unit tests, then you probably need a mock database. A mock of your database, preferably in memory, will ensure that each unit test is hermetic and can't be interfered with by other unit tests.
If you mean for these tests to be integration tests, you are probably better off not using the go tool for them. What you probably want is to create a separate test binary whose execution you can control, and write your integration test scripts in there.
The good news is that creating a mock in Go is insanely easy. Change your code to take an interface with the database methods you care about, then write an in-memory implementation of that interface for testing purposes and pass it into the application code you want to test.
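A minimal sketch of that pattern (the UserStore interface and every name below are invented for illustration):

package api

import "fmt"

// UserStore covers only the database operations the code under test
// actually uses.
type UserStore interface {
	GetUser(id int) (string, error)
}

// fakeStore is an in-memory stand-in used in tests, keeping each unit
// test hermetic: no shared schema, no DROP/CREATE races between packages.
type fakeStore struct{ users map[int]string }

func (f *fakeStore) GetUser(id int) (string, error) {
	name, ok := f.users[id]
	if !ok {
		return "", fmt.Errorf("user %d: not found", id)
	}
	return name, nil
}

// Production code depends on the interface, so tests inject fakeStore
// while main wires in the real database-backed implementation.
func Greet(s UserStore, id int) (string, error) {
	name, err := s.GetUser(id)
	if err != nil {
		return "", err
	}
	return "hello, " + name, nil
}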
Just to clarify, @Jeremy's answer is still the accepted one:
Since my integration tests were only run against one package (api), I removed the separate test binary in the end and created a naming pattern to separate the test types:
Unit tests use the normal TestX name
Integration tests use Test_X
I created shell scripts (utest.sh/itest.sh) to run either of those:
For unit tests: go test -run="^(Test|Benchmark)[^_](.*)"
For integration tests: go test -run="^(Test|Benchmark)_(.*)"
Run both using the normal go test
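Under that convention, a test file would look like this (the function names and bodies are hypothetical):

package api

import "testing"

// Unit test: matched by utest.sh's pattern ^(Test|Benchmark)[^_](.*)
func TestSum(t *testing.T) {
	// hermetic, logic-only assertions go here
}

// Integration test: matched by itest.sh's pattern ^(Test|Benchmark)_(.*),
// and therefore excluded from plain unit-test runs.
func Test_SumAgainstRealDB(t *testing.T) {
	// talks to the real schema; run serially via itest.sh
}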

What is a test harness?

I am facing some difficulty understanding the term test harness and related terms like test case and test script in automation testing.
So this is what I got so far:
Automation testing is the use of special software (other than the software being tested) to control the execution of tests and compare the actual results with the expected results. It also involves setting up test pre-conditions. This kind of testing is most suitable for tests that are carried out frequently.
Now, I am having some problems with test harness. I read that it consists of a test suite of test cases, input files, output files, and test scripts.
Now my question is what is the difference between test case and test script?
How do you use the software to test the different functions of the application under test (AUT)? I also came across some terms like suite master and case agents.
Several broad questions there, will try to answer based on my experience.
Think of a test harness as an 'enabler' that actually does all the work of (1) executing tests using a (2) test library and (3) generating reports. It requires that your test scripts are designed to handle different (4) test data and (5) test scenarios. Essentially, when the test harness is in place and the prerequisite data is prepared (aka data prep), someone should be able to click a button or run one command to execute all your tests and generate reports.
A test harness is most likely a collection of different things that make all of the above happen. If you wrote unit tests while developing your application, that would be part of a test harness. You would also have other tests for the functionality of your app, like: user logs in to site, sees favourites pane, recent messages and notifications. Then you add in a 'runner' of sorts that goes through all of your "test scripts" and runs them (instead of you having to execute tests one at a time). If it feels like a test harness is more of a conceptual collection rather than a single piece of software, then you're understanding this correctly :-)
Now my question is what is the difference between test case and test script?
Simple but not entirely correct answer: A Test Case defines test objectives, description, pre-conditions, steps (descriptive or specific), expected results. A Test Script would then be the actual automated script that you execute to do that test. That's in an Automation context. And it changes. A lot.
What certifications like ISTQB define as test scenarios is usually referred to as test cases in some companies and countries. In others, test cases are flipped with test scripts when referring to manual testing (when the steps are given in detail but not part of an automation harness). Others say that test scripts exclusively mean automated tests. On the other hand, one can also argue that several test cases can be combined in a test script and vice-versa. So that begs the question, how does a test procedure fit in?
A test development stage can have: "Test procedures, test scenarios, test cases, test datasets, test scripts to use in testing software."
If you assume a > (is larger than/collection of) relation, how would you relate those? Rhetorical question - that differs based on where you work, who your client is, etc. Best thing is to define it with your colleagues/clients and agree on the understanding of the terms rather than the definition. I currently go with test script = automated script, based on a pre-existing manual test case or a test scenario.
Also, how do you use the software to test the different functions of the AUT?
You write different tests to test different things. Each test performs certain actions and checks whether the AUT's output matches what you expected, e.g. whether displayed_value == expected_value. An input file could be used to provide data for the test (a list of test usernames and passwords, for instance), or to run the same test with different data: login as a different user with different messages, etc.
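As a toy illustration of that check (JUnit, with invented names; in a real harness the fetch would go through something like a Selenium page object):

import static org.junit.Assert.assertEquals;

import org.junit.Test;

public class HomePageTest {
    @Test
    public void titleMatchesExpectedValue() {
        String displayedValue = fetchHomePageTitle(); // would come from the AUT
        String expectedValue = "My Site - Home";
        assertEquals(expectedValue, displayedValue);
    }

    // Stand-in for the code that drives the application under test.
    private String fetchHomePageTitle() {
        return "My Site - Home";
    }
}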
Take a look at Robot Framework and Selenium. A Robot Framework test (written in text or HTML files) combined with the Selenium library would allow you to write an automated test which tests something specific, like a home page validation. You would write a separate test to ensure that a user can see all his/her messages, another to test clearing notifications, and so on.
test harness: A test environment comprised of stubs and drivers needed to execute a test.
Test harnesses and stubs will be used to replicate the missing items (components not yet included in the tests or external systems).
Often, when small-scale Integration Testing of several modules or components is performed, it is necessary to devise or improvise methods and tools to get the test data to the components under test. This is often called a test harness. Because of the need to understand the technicalities required to build a test harness this testing is almost always done by the development team.
A test harness may facilitate the testing of components or part of a system by simulating the environment in which that test object will run. This may be done either because other components of that environment are not yet available and are replaced by stubs and/or drivers, or simply to provide a predictable and controllable environment in which any faults can be localized to the object under test. These are usually bespoke programs generated by developers to help in the testing process. If they are used in a mature organisation it is quite possible that these harnesses will be considered as ‘Test Assets’ and subject to Version Control & Configuration Management.
A test harness contains all the information required to compile and run a test. This includes test cases, source files under test, stubs, and Target Deployment Port (TDP) configuration settings.
A Test Harness is the collection of all the items needed to test software at the unit, module, application or system level and provides the mechanism to execute the test. Every item such as input data, test parameters, test case, test script, expected output data, test tool, and test result report is part of the test harness.

Using categories vs parametrized tests from JUnit with selenium rc

I currently have a set of unit tests which are run with the Parameterized test runner built into JUnit. As I do not want to create a new Selenium instance for each test case (I'd rather log in, navigate to a screen, and run a set of tests), I am looking at other options for automating my tests.
I want to set up different tests in different classes which all leverage the same test method. I found the Categories runner also offered by JUnit; however, as this appears to be a way to set up a test suite, I am not sure it will help.
I guess the short question is, if I have a bunch of selenium tests spread out to different classes is there a way I can get these tests to run in one test method specified elsewhere?
You can create a test suite in JUnit, in which you list the different classes containing your tests.
In JUnit 4 a test suite is written like this:
import org.junit.runner.RunWith;
import org.junit.runners.Suite;

@RunWith(Suite.class)
@Suite.SuiteClasses({TestClass1.class, TestClass2.class})
public class TestSuite {
    // nothing
}
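If the goal is to avoid creating a new Selenium instance per test, note that @BeforeClass/@AfterClass methods on the suite class itself run once around the whole suite, so a shared session can live there. A sketch (SeleniumHolder is an invented helper your test classes would share):

import org.junit.AfterClass;
import org.junit.BeforeClass;
import org.junit.runner.RunWith;
import org.junit.runners.Suite;

@RunWith(Suite.class)
@Suite.SuiteClasses({TestClass1.class, TestClass2.class})
public class SharedSessionSuite {
    // Runs once before any class in the suite: start the browser,
    // log in, and navigate to the starting screen.
    @BeforeClass
    public static void startSelenium() {
        SeleniumHolder.start(); // hypothetical shared helper
    }

    // Runs once after the whole suite: tear the session down.
    @AfterClass
    public static void stopSelenium() {
        SeleniumHolder.stop();
    }
}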