What are good patterns to describe your class and test methods using @DisplayName?

I've started to use the @DisplayName annotation from the Jupiter API (JUnit 5) to describe what's going on in my tests. This feature is very useful for helping other developers better understand what the tests are intended to do (because your texts can contain spaces, special characters, and even emojis).
For now, I'm using the following strategy to create my descriptions:
Class level: "Check [the general feature being tested]"
Method level: "When [condition to be met]"
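For example, a test class following this strategy currently looks roughly like the sketch below (the class under test and the method names are just illustrative):

import org.junit.jupiter.api.DisplayName;
import org.junit.jupiter.api.Test;

@DisplayName("Check the discount calculation")
class DiscountCalculatorTest {

    @Test
    @DisplayName("When the customer has no previous orders")
    void noDiscountForNewCustomers() {
        // arrange / act / assert ...
    }

    @Test
    @DisplayName("When the order total exceeds the threshold 💰")
    void discountAboveThreshold() {
        // arrange / act / assert ...
    }
}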
However, I'm wondering if there are better ways to describe the tests. So my question is: what are good patterns to follow when using the @DisplayName annotation to improve my test descriptions?
I'm looking for things like:
Keywords that help to categorize tests by their testing objectives
Emojis to indicate features, importance, etc...

Related

When to use BDD and when just unit tests?

I have been given the task of writing tests for a future Django Channels + DRF project, don't ask why (we only have Swagger documentation for now). The tests have to cover the user use cases (i.e. scenarios that may be complex). I researched this and found BDD. Here is the question: considering that our project may later have simple unit tests too, what should I use? BDD seems decent, but I think it may be excessive for our case, and maybe there is a way of just writing unit tests for the user use-case scenarios that I can get by with. Does anyone have experience with this? It would be great if you could provide articles and code examples.
Scenarios are a bit different to use-cases. A use-case often covers several capabilities. For instance, in the simple laundry use-case shown here, a housekeeper does several things when performing a wash:
washes each load
dries each load
folds certain items
irons some items
All of these go into the "weekly laundry" use-case.
A scenario in BDD is much more fine-grained. It describes one capability taking place in a particular context or set of contexts. So for instance you might have:
Given the weekly laundry has been washed and dried
And it contains several sheets
And some underpants
When the housekeeper does the folding
Then the sheets should be folded
But the underpants should not.
You can see that we've skipped a couple of the capabilities. This scenario is focused on the capability of folding, and shows how a well-behaved housekeeper would do it. Washing and drying would have to be covered in separate scenarios.
So that's the difference between a use-case and a scenario. Now let's look at a unit test.
When we write code, we don't write it all in one big class or function. We split it up into small pieces. In the same way that a scenario describes an example of the behaviour of the system from the perspective of the users, a unit test describes the behaviour of a class or other small piece of code from the perspective of its users - usually other classes!
So let's imagine that we're on a car purchasing site. We have several capabilities:
Authentication
Searching for cars
Purchasing a car
Listing a car
Removing a car from the list
Each of these will have lots of different classes making it up. Even searching for a car could involve a front-end, a search component, a database of cars, a persistence layer, a web server, etc. For each piece of code, we describe the behaviour of that code.
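For instance, an example at this level for a search component could look like the sketch below; everything in it (Car, CarSearch, the method names) is invented for illustration, and the assertions are JUnit, but the same shape works in any framework:

import static org.junit.jupiter.api.Assertions.assertEquals;
import java.util.List;
import org.junit.jupiter.api.Test;

class CarSearchTest {

    // Minimal stand-ins for the production code under test, invented for this sketch.
    record Car(String make, int year) {}

    static class CarSearch {
        private final List<Car> catalogue;
        CarSearch(List<Car> catalogue) { this.catalogue = catalogue; }
        List<Car> byMake(String make) {
            return catalogue.stream().filter(c -> c.make().equals(make)).toList();
        }
    }

    @Test
    void returnsOnlyCarsMatchingTheRequestedMake() {
        // Given a catalogue containing two makes
        CarSearch search = new CarSearch(
                List.of(new Car("Ford", 2018), new Car("Toyota", 2020)));

        // When we search for one make
        List<Car> results = search.byMake("Toyota");

        // Then only cars of that make come back
        assertEquals(1, results.size());
        assertEquals("Toyota", results.get(0).make());
    }
}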
(BDD actually started out at this level; with examples of how classes behave - JBehave was intended to replace JUnit. But JUnit got better and we didn't need this bit any more. I still find it helpful to think of these as examples rather than tests.)
Typically I'll have both scenarios and unit tests in my codebase; one set of them looking from a user / stakeholder perspective at the whole system, and the other set describing my classes in finer detail.
The scenarios help me show how the system behaves and why it's valuable. The unit tests help me drive out good design and separate responsibilities. Both of them provide living documentation which helps to keep the system maintainable and make it easier for newcomers to come on board.
Generally this is how I program:
I have a rough idea of what I want to achieve
I talk to someone about it and write down some scenarios
If we don't quite know what we're looking for, I'll get something working (a spike)
Once we understand better what we're looking for, I automate the scenario first
I take the simplest case and start writing the UI
When the UI needs another class to work, I write some examples of how that code should work (unit tests) first
Then I write the code (or refactor it, because spikes are messy)
When that code needs another class to work, I write some examples of it
If I don't have code that's needed at any point in my unit tests, I use mocks.
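As a rough sketch of that last point (Mockito syntax; PaymentGateway and PurchaseService are invented stand-ins for code that may not exist yet):

import static org.mockito.Mockito.mock;
import static org.mockito.Mockito.verify;
import static org.mockito.Mockito.when;
import org.junit.jupiter.api.Test;

class PurchaseServiceTest {

    // Invented collaborator: only the interface needs to exist to write the example.
    interface PaymentGateway {
        boolean charge(String buyerId, int amount);
    }

    // Invented class under test, kept minimal so the sketch is self-contained.
    static class PurchaseService {
        private final PaymentGateway payments;
        PurchaseService(PaymentGateway payments) { this.payments = payments; }
        void purchase(String buyerId, int amount) { payments.charge(buyerId, amount); }
    }

    @Test
    void chargesTheBuyerWhenACarIsPurchased() {
        PaymentGateway payments = mock(PaymentGateway.class);
        when(payments.charge("buyer-42", 15_000)).thenReturn(true);

        new PurchaseService(payments).purchase("buyer-42", 15_000);

        verify(payments).charge("buyer-42", 15_000);
    }
}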
Generally we keep the scenarios and the unit tests in different places.
You can see some examples of how I've done this here. It's a tetris game with scenarios which automate the whole game through the UI, and lower-level unit tests which describe the behaviour of particular pieces like the heartbeat which drops the shapes.
Having said that - if your codebase is very simple, you can probably get away with just the scenarios or just the unit tests; you might not need both. But if it starts getting more complex, consider refactoring and adding whatever you need. It's OK to be pragmatic about it, as long as it's easy to change.

Code reuse in automated acceptance tests without excessive abstraction

I was recently hired as part of a team at my work whose focus is writing acceptance test suites for our company's 3D modeling software. We use an in-house C# framework for writing and running them, which essentially amounts to subclassing the TestBase class and overriding the Test() method, where generally all the setup, testing, and teardown is done.
While writing my tests, I've noticed that a lot of my code ends up being boilerplate code I rewrite often. I've been interested in trying to extract that code to be reusable and make my code DRYer, but I've struggled to find a way to do so without overabstracting my tests when they should be largely self-contained. I've tried a number of approaches, but they've run into issues:
Using inheritance: the most naive solution, this works well at first and lets me write tests quickly, but it has run into the usual pitfalls, i.e., test classes becoming too rigid, being unable to share code across cousin subclasses, and logic being obfuscated within the inheritance tree. For instance, I have abstract RotateVolumeTest and TranslateVolumeTest classes that both inherit from ModifyVolumeTest, but I know that relatively soon I'm going to want to rotate and translate a volume, so this is going to be a problem.
Using composition through interfaces: this solves a lot of the problems with the previous approach, letting me reuse code flexibly for my tests, but it leads to a lot of abstraction and seeming 'class bloat' -- now I have IVolumeModifier, ISetsUp, etc., all of which make it less clear what the test is actually doing.
Helper methods in a static utility class: this has been very helpful, especially for using a Factory pattern to generate the complex objects I need for tests quickly. However, it's felt 'icky' to put some methods in there that I know aren't actually very general, instead being used for a small subset of tests that need to share very specific code.
Using a testing framework like xUnit.net or similar to share code through [SetUp] and [TearDown] methods in a test suite, generally all in the same class: I've strongly preferred this approach, as it offers the reusability I've wanted without abstracting away from the test code, but my team isn't interested in it. I've tried to show the potential benefits of adopting a framework like that for our tests, but the consensus seems to be that it's not worth the refactoring effort of rewriting our existing test classes. It's a valid point, and I think it's unlikely I'll be able to convince them further, especially as a relatively new hire; so unless I want to make my test classes vastly different from the rest of the code base, this one's off the table.
Copy and paste code where it's needed: this is the current approach we use, along with #3 and adding methods to TestBase. I know opinions differ on whether copy/paste coding is acceptable for test code where it of course isn't for production, but I feel that using this approach is going to make my tests much harder to maintain or change in the long run, as I now have N places I need to fix logic if a bug shows up (which plenty have already and only needed to be fixed in one).
At this point I'm really not sure what other options I have but to opt for #5, as much as it slows down my ability to write tests quickly or robustly, in order to stay consistent with the current code base. Any thoughts or input are very much appreciated.
I personally believe the most important thing for a successful testing framework is abstraction. Make it as easy as possible to write the test. The key points for me are that you end up with more tests, and that the writer focuses more on what they are testing than on how to write the test. Every testing framework I have seen that doesn't focus on abstraction has failed in more ways than one and ended up being a maintainability nightmare.
If the logic is not used anywhere but a single test class, then leave it in that test class, and refactor later if it is needed in more than one place.
I would opt for any of these approaches except #5.
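To make the 'abstraction without bloat' idea a bit more concrete, one middle ground is composition with small helper objects that each test creates and combines as needed, instead of either a deep inheritance tree or copy-paste. A rough structural sketch (written in Java for brevity, since the same shape works in C#; Volume, TestBase and all the helper names below are invented stand-ins for the in-house framework):

// Minimal stand-ins so the sketch hangs together; the real ones come from the framework.
class Volume { double angle; double x, y, z; }
abstract class TestBase { public abstract void Test(); }

// Small, focused helpers: no IVolumeModifier-style interfaces, no base-class coupling.
class VolumeBuilder {
    Volume defaultCube() { return new Volume(); }
}

class VolumeOps {
    void rotate(Volume v, double degrees) { v.angle += degrees; }
    void translate(Volume v, double dx, double dy, double dz) { v.x += dx; v.y += dy; v.z += dz; }
}

// A test that needs both operations simply composes both helpers,
// which the RotateVolumeTest/TranslateVolumeTest hierarchy could not express.
class RotateThenTranslateTest extends TestBase {
    private final VolumeBuilder volumes = new VolumeBuilder();
    private final VolumeOps ops = new VolumeOps();

    @Override
    public void Test() {
        Volume cube = volumes.defaultCube();
        ops.rotate(cube, 90.0);
        ops.translate(cube, 1.0, 0.0, 0.0);
        // assert on the resulting geometry here
    }
}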

How should an automation script of 100 test cases be written using Selenium WebDriver?

Kindly explain whether I should write just one Java file for all the test cases, or an individual Java file for each test case.
You don't give enough details for a specific answer, so I'll try to put down some guiding principles. Those principles are just software design 101, so you might want to do some learning and reading in that direction.
The key question really is: how similar are your tests.
they vary just in the values
You really have just one test, which you put in a loop in order to iterate through all the values. Note that a behavior can also be a value. In this case you could use the Strategy Pattern.
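For example, with JUnit 5 that 'one test, many values' shape can be written as a parameterized test (the login check here is an invented stub standing in for whatever the WebDriver code would actually do):

import static org.junit.jupiter.api.Assertions.assertEquals;
import org.junit.jupiter.params.ParameterizedTest;
import org.junit.jupiter.params.provider.CsvSource;

class LoginTest {

    @ParameterizedTest
    @CsvSource({
            "alice,correct-password,true",
            "alice,wrong-password,false",
            "bob,correct-password,false"
    })
    void loginOutcomeDependsOnCredentials(String user, String password, boolean expected) {
        assertEquals(expected, loginSucceeds(user, password));
    }

    // Invented stub so the sketch is self-contained; the real version would drive the page with WebDriver.
    private boolean loginSucceeds(String user, String password) {
        return "alice".equals(user) && "correct-password".equals(password);
    }
}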
they are variations of the same test idea
You probably want some classes representing the elements of the tests, which then get combined into tests; for example, the elements might be TestSteps. If the combining is really simple, it might be feasible to put it all in one class, but with 100 tests that is unlikely.
completely independent tests
You are probably better off putting them in different classes/files. But you will probably still find lots of stuff to reuse (for example PageObjects), which should go into separate classes.
In the end, for 100 tests I would expect maybe 50 classes: many test classes containing 1-20 tests each that share a lot of code, plus a healthy dose of classes that encapsulate common functionality (PageObjects, Matchers, predefined TestSteps, and so on).
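As an illustration of the PageObject part of that, a small Selenium sketch (the page, the element locators and the URL are all invented):

import org.openqa.selenium.By;
import org.openqa.selenium.WebDriver;

// Encapsulates how the login page is driven, so 100 tests don't repeat the locators.
class LoginPage {
    private final WebDriver driver;

    LoginPage(WebDriver driver) {
        this.driver = driver;
    }

    LoginPage open() {
        driver.get("https://example.test/login");   // invented URL
        return this;
    }

    HomePage loginAs(String user, String password) {
        driver.findElement(By.id("username")).sendKeys(user);
        driver.findElement(By.id("password")).sendKeys(password);
        driver.findElement(By.id("submit")).click();
        return new HomePage(driver);
    }
}

class HomePage {
    private final WebDriver driver;
    HomePage(WebDriver driver) { this.driver = driver; }

    String welcomeMessage() {
        return driver.findElement(By.id("welcome")).getText();
    }
}

A test then reads as a single line, e.g. new LoginPage(driver).open().loginAs("alice", "secret"), and if the page's markup changes, only the PageObject needs updating.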
According to one source you should use one class per test case, but use inheritance between classes:
Think of each class as a test case, and focus it on a particular aspect (or component) of the system you're testing. This provides an easy way to add new test cases (simply create a new class) and modify and update existing tests (by removing/disabling test methods within a class). It can greatly help organize your test suites by allowing existing tests (e.g., individual methods) to be easily combined together.

Testing in Lisp

I am new to Lisp, and I am learning Scheme through the SICP videos. One thing that seems not to be covered (at least at the point where I am) is how to do testing in Lisp.
In usual object oriented programs there is a kind of horizontal separation of concerns: methods are tied to the object they act upon, and to decompose a problem you need to fragment it in the construction of several objects that can be used side by side.
In Lisp (at least in Scheme), a different kind of abstraction seems prevalent: in order to attack a problem you design a hierarchy of domain-specific languages, each of which is built upon the previous one and acts at a coarser level of detail and a higher level of abstraction.
(Of course this is a very rough description, and objects can be used vertically, or even as building blocks of DSLs.)
I was wondering whether this has some effect on testing best practices. So the question is twofold:
What are the best practices while testing in Lisp? Are unit tests as fundamental as in other languages?
What are the main test frameworks (if any) for Lisp? Are there mocking frameworks as well? Of course this will depend on the dialect, but I'd be interested in answers for Scheme, CL, Clojure or other Lisps.
Here's a Clojure specific answer, but I expect most of it would be equally applicable to other Lisps as well.
Clojure has its own testing framework called clojure.test. This lets you simply define assertions with the "is" macro:
(deftest addition
  (is (= 4 (+ 2 2)))
  (is (= 7 (+ 3 4))))
In general I find that unit testing in Clojure/Lisp follows very similar best practices to testing in other languages. It's the same principle: you want to write focused tests that confirm your assumptions about how a specific piece of code behaves.
The main differences / features I've noticed in Clojure testing are:
Since Clojure encourages functional programming, it tends to be the case that tests are simpler to write because you don't have to worry as much about mutable state - you only need to confirm that the output is correct for a given input, and not worry about lots of setup code etc.
Macros can be handy for testing - e.g. if you want to generate a large number of tests that follow a similar pattern programmatically
It's often handy to test at the REPL to get a quick check of expected behaviour. You can then copy the test code into a proper unit test if you like.
Since Clojure is a dynamic language you may need to write some extra tests that check the type of returned objects. This would be unnecessary in a statically typed language where the compiler could provide such checks.
RackUnit is the unit-testing framework that's part of Racket, a language and implementation that grew out of Scheme. Its documentation contains a chapter about its philosophy: http://docs.racket-lang.org/rackunit/index.html.
Testing frameworks that I am aware of for Common Lisp include Stefil (in two flavours, hu.dwim.stefil and the older stefil), FiveAM, and lisp-unit. Searching the Quicklisp library list also turned up "unit-test", "xlunit", and monkeylib-test-framework.
I think that Stefil and FiveAM are most commonly used.
You can get all from quicklisp.
Update: Just seen on Vladimir Sedach's blog: Eos, which is claimed to be a drop-in replacement for FiveAM without external dependencies.

How to plan for whitebox testing

I'm relatively new to the world of white-box testing and need help designing a test plan for one of the projects that I'm currently working on. At the moment I'm just scouting around looking for testable pieces of code and then writing some unit tests for that. I somehow feel that is by far not the way it should be done. Please could you give me advice on how best to prepare myself for testing this project? Are there any tools or test plan templates that I could use? The language being used is C++, if it makes a difference.
One of the goals of white-box testing is to cover 100% (or as close as possible) of the code statements. I suggest finding a C++ code coverage tool so that you can see what code your tests execute and what code you have missed. Then design tests so that as much code as possible is tested.
Another suggestion is to look at boundary conditions in if statements, for loops, while loops, etc., and test these for any 'gray' areas, false positives and false negatives.
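For example, if the code under test contains a check like if (age >= 18), the boundary tests would pin down the values on each side of that edge. A minimal sketch (written in Java/JUnit for brevity, since the idea is the same with a C++ framework such as Google Test; the isAdult check is an invented stand-in):

import static org.junit.jupiter.api.Assertions.assertFalse;
import static org.junit.jupiter.api.Assertions.assertTrue;
import org.junit.jupiter.api.Test;

class AgeCheckBoundaryTest {

    // Invented stand-in for the production code containing: if (age >= 18) ...
    private boolean isAdult(int age) {
        return age >= 18;
    }

    @Test
    void justBelowTheBoundaryIsRejected() {
        assertFalse(isAdult(17));
    }

    @Test
    void exactlyOnTheBoundaryIsAccepted() {
        assertTrue(isAdult(18));   // would catch an off-by-one '>' instead of '>='
    }

    @Test
    void justAboveTheBoundaryIsAccepted() {
        assertTrue(isAdult(19));
    }
}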
You could also design tests to look at the life cycle of important variables. Test their definition, their usage and their destruction to make sure they are being used correctly :)
There are three ideas to get you started. Good luck!
At the moment I'm just scouting around looking for testable pieces of code and then writing some unit tests for that. I somehow feel that is by far not the way it should be done.
People say that one of the main benefits of 'test driven development' is that it encourages you to design your components with testability in mind: it makes your components more testable.
My personal (non-TDD) approach is as follows:
Understand the functionality required and implemented: both 'a priori' (i.e. by reading/knowing the software functional specification), and by reading the source code to reverse-engineer the functionality
Implement black box tests for all the implemented/required functionality (see for example 'Should one test internal implementation, or only test public behaviour?').
My testing therefore isn't quite 'white box', except that I reverse-engineer the functionality being tested. I then test that reverse-engineered functionality, and avoid having any useless (and therefore untested) code. I could (but don't often) use a code coverage tool to see how much of the source code is exercised by the black box tests.
Try "Working Effectively with Legacy Code": http://www.amazon.com/Working-Effectively-Legacy-Michael-Feathers/dp/0131177052
It's relevant since by 'legacy' he means code that has no tests. It's also a rather good book.
Relevant tools are: http://code.google.com/p/googletest/ and http://code.google.com/p/gmock/
There may be other unit test and mock frameworks, but I have familiarity with these and I recommend them highly.