Let's say you have to test a new feature that sums two numbers, and you need to automate that behaviour with BDD.
Of the two approaches below, which would be the better one for automation, and why?
1) Using fixed input and expecting the same output. Ex: Input -> 3,5 Output -> 8
OR
2) Using two random numbers on every run and validating the result against the conventional sum.
The first.
BDD isn't really about testing; it's about using examples to illustrate desired behaviour. The examples we use are "exemplars", specifically picked for that illustration.
In your case, the sum is a pretty trivial problem. When we're dealing with more complex business behaviour though, we'll ask, "Can you give me an example?" The conversation that follows is the most important part of BDD. From that, we get realistic examples of the kind of inputs we'll be handling, and not just the expected output but also the value of that output, and who it's valuable to.
Once we automate the scenarios, they provide tests as a nice by-product, but that's not all they do. They're also living documentation. Business people can read them to see what a system does, and team members can use them to get a feel for the capabilities already in place.
That's a lot harder if the scenarios are generic ("a random number" and "another random number" and "a result") rather than specific ("2" and "3" and "5").
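To make that concrete, here is a minimal sketch in pytest (the trivial `add` function stands in for whatever actually produces the sum):

```python
import random

def add(a, b):
    # The (trivial) system under test.
    return a + b

# Specific exemplar: reads as living documentation.
def test_sum_of_two_and_three_is_five():
    # Given the numbers 2 and 3
    # When I ask for their sum
    # Then the result should be 5
    assert add(2, 3) == 5

# Randomised version: it re-implements the behaviour it checks,
# and a reader learns nothing concrete about the system from it.
def test_sum_of_two_random_numbers():
    a, b = random.randint(0, 100), random.randint(0, 100)
    assert add(a, b) == a + b
```

The second test also has to duplicate the production logic (`a + b`) to predict the answer, which is a clue that it isn't illustrating behaviour so much as re-deriving it.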
I like Lunivore's answer; it's interesting. Expect this thread to be banned from SO in 15 mins! :)
You haven't explained the business value in what you're doing. Trivially, using a random input is more complex than a fixed input, so more things could go wrong, making it worse. But maybe you WANT to test random inputs? Maybe there is some kind of value, but you haven't given enough information?
I'd say that you're putting the cart before the horse, and looking at it from a junior TDD programmer's perspective, not a senior BDD developer's perspective. The "business value" is something that is often above someone's particular paygrade. But that's where BDD starts from; it's not where it ends up.
I'd ask you, "What is the perceived value in being able to sum these numbers? Are you a crazy person? Why are you talking to a consultant about this?"
If you have nothing, you cannot write a test because there is nothing to test. This seems pretty obvious to me but never seems to be addressed by proponents of TDD.
In order to write a test, you have to first decide what the method or function looks like that you're going to test. You have to know what parameters to pass to it and what you expect to get back. That is what comes first, not the test.
Tests can never come first. The thing that comes first is the design which specifies what classes and methods are going to exist.
It's true that in order to write a test, the test writer must form some conception of how the test code can interact with the System Under Test. In that sense, conceptual design 'comes first'.
Test-driven development (TDD), however, is valuable because it's not (only) a quality assurance methodology. It's first and foremost a fast feedback loop.
While you may have an initial design in mind, once you start to write a test, you may discover that this design doesn't work (or is awkward to use). This often happens, and should cause you to immediately adjust course.
The red-green-refactor cycle offers a model for thinking about TDD. Each such cycle may take only a minute or two.
Thus, you may start with an initial design in mind, but then adjust it (or completely rethink it) every other minute.
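As a rough illustration of how quickly that feedback arrives, here is what one such cycle might look like (the `Cart` class is an invented example, not from any particular codebase):

```python
# RED: write a small failing test for the behaviour you want next.
def test_empty_cart_total_is_zero():
    assert Cart().total() == 0  # fails at first: Cart doesn't exist yet

# GREEN: write the simplest code that makes the test pass.
class Cart:
    def total(self):
        return 0

# REFACTOR: with the test green, reshape the design safely.
# If calling Cart() felt awkward while writing the test, that is
# exactly the design feedback TDD is meant to surface this early.
```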
never seems to be addressed by proponents of TDD
I disagree. Plenty of introductions to TDD discuss this. Two good books that discuss this (and much more) are Kent Beck's Test-Driven Development by Example and Nat Pryce and Steve Freeman's Growing Object-Oriented Software, Guided by Tests.
It's the other way round.
If you write a test that calls a function which does not exist, your test suite fails and you get an error forcing you to define that function, just like writing any other test forces you to write the implementation.
Your tests don't need to run to be good tests. But this kind of test is not meant to stay in your test suite. They are sometimes referred to as "staircase tests": you need to write them to get going but they are only instrumental.
What happens generally is that as soon as this test passes, you make it fail again by being more specific. Technically, the test you end up with is the same one you would have written after the fact, and it didn't take more time to write, but during this process you were able to run the test suite one or more times, so you spend less time in an invalid state, so to speak.
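A hypothetical example of that progression (the `parse_price` function is invented for illustration):

```python
# Step 1, the "staircase" test: just call the function you wish existed.
# The suite fails with a NameError, which forces you to define it.
def test_parse_price_exists():
    parse_price("$5")

# Step 2: once that passes trivially, make it fail again by being specific.
def test_parse_price_returns_amount_in_cents():
    assert parse_price("$5") == 500

# The implementation the tests forced into existence:
def parse_price(text):
    return int(text.lstrip("$")) * 100
```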
I would like to add that there is nothing untrue in your question, but your conclusion doesn't follow from the premise: it is true that what comes first is the specification, but there is nothing inconsistent about formalising this specification in a test before the code is written. The spec, and the tests, force you to write the code. TDD is an incremental way of formalising the spec that ensures the spec always comes first.
To write a test, you have to first decide what the method or function looks like that you're going to test. You have to know what parameters to pass to it and what you expect to get back. THAT is what comes first, NOT the test. Tests can NEVER come first. The thing that comes first is the design which specifies what classes and methods are going to exist.
Not quite right (not entirely wrong either - It's Complicated[tm])
If you look at the first example in Test Driven Development by Example, you'll see that Beck doesn't begin with classes and methods. He doesn't even begin with a test.
The very first thing that he creates is a "to-do" list, where each of the entries is a representation of a behavior (my terminology, not his). So we see things like:
$5 + 10 CHF = $10 if rate is 2:1
These days, you'd be more likely to see this idea expressed as a Hoare triple (Given/When/Then, Arrange/Act/Assert, etc.). But what we have here is a reminder to the programmer that we want an automated check that measures the result of adding two different currencies together, and confirms that the result matches some specification.
In his exercise, his to-do list includes a "simpler" test, which is the one he attempts first:
$5 * 2 = $10
That same to-do list also includes some other concerns he has about the design, NOT expressed in test form. Also, the list grows as he works through the problem.
In this sense, the test absolutely comes first. We write the test in a language to be consumed by humans. Translating the test into a language understood by the machine comes later.
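For instance, the "$5 * 2 = $10" item might eventually be described to the machine as something like this (a loose Python sketch; Beck's own example is in Java and evolves over many small steps):

```python
class Dollar:
    def __init__(self, amount):
        self.amount = amount

    def times(self, multiplier):
        return Dollar(self.amount * multiplier)

def test_multiplication():
    five = Dollar(5)
    product = five.times(2)
    assert product.amount == 10
```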
In the second step, where we describe the test to the machine, things get messier. It is absolutely the case that, as we are designing the test, we are also designing the communication protocol that allows the test to measure what the production code does. So there's a certain amount of communication design that is happening in parallel with the "test" design.
But even here, the test is not specifying all of the classes that are going to exist, it's only specifying what it needs to perform its measurement. We describe a facade, but we aren't specifying what lies beyond that facade.
It can happen, as we design more of the system, that the facade we specify is used only by tests, as a way of communicating with a different underlying design of production code.
(Note: I say classes here for consistency with the question and with early literature, taken primarily from examples in Smalltalk or Java. Feel free to substitute "functions" for "classes" if that makes you more comfortable.)
Now, the most common case is that the facade is the production code; we don't typically add elements to the design until we have a non-speculative motivation for them.
"Unit testing" puts some strain on these ideas - how can you possibly write a unit test without first designing your unit boundaries?
The real answer is an unfortunate one -- Kent Beck didn't write unit tests. He wrote "programmer tests" (a term that got retconned in later) and called them unit tests.
Using the testing language of the 1990s (which is when all this mess started), a more appropriate term is probably "composite tests".
You've also got "the London School", that was trying to figure out how to TDD a particular design style; writing a test for that style requires a more complicated testing facade "up front" (roles and interfaces and stable substitute implementations and so on).
It can also be worth keeping in mind the setting.
(Disclaimer: this isn't something I witnessed first hand - think "based on a true story" rather than "facts")
TDD (and its parent idea "test first" programming in XP) are pushing back against "up front design" of the sort where you decide what the class hierarchy and relationships should be, and document them, before you actually sit down to write the code.
The core argument being that the design process needs shorter feedback loops; that we don't get deeply committed to a particular design until we've acquired a lot of evidence that it is going to work out OK.
All that said, yes it is absolutely the case that TDD, as a technique, works so much better in the hands of someone who is already good at software design. See Michael Feathers on the Look, Ma, no hands! era.
There is no magic.
I was wondering if anyone uses BDD for testing a Big Data ETL application.
I can see how BDD can be used for testing applications that have a client interacting with them, but in the case of a Big Data ETL application there is no client interaction, so it's hard to see what 'When' I might use.
For example:
Given 100 events of type A occur
And 50 events of type B occur after 5 minutes
Then the database rows should be:
| Type | Count | Bucket |
| A    | 100   | 1      |
| B    | 50    | 2      |
But that seems wrong.
Anyone with an insight?
Can you give me an example of what you'd expect to see in an ETL output?
There are a couple of responses you could give to this. One might be the different kinds of database rows you'd expect, and the fact that some of them will probably be repeated, but not others. That was something that struck me as weird, but if you're used to working with star schemas then you'll probably notice other differences instead.
Normally I'd steer people away from talking about the database, but if you're working with star schemas, I think it's OK to mention the facts and dimensions (I haven't worked with ETL a lot, but I do remember talking through specific examples of these and what I would expect to see).
The alternative is to use the client.
I saw that you said there was no client; however, there's always a client, even if it's one that might exist in the future. There are implications for ETL which run across security, performance and access, amongst others. It's worth having a client, even if it's a string-based or SQL-based toy, to explore the things which might trip you up.
Why are you doing this? What's new about the thing the business or users or customers will be able to do when this is in place, that they can't do already? And can you get hold of an example of that?
"We'll be able to understand how X is performing against Y standard."
Great. Can you give me an example of some X, some Y, and some standard? How will you measure the performance? What data will you be looking for? Should everyone be able to see that data? Can you think of any scenario where someone shouldn't be able to access that?
Those examples become the ETL equivalent of scenarios; the conversations retain the same pattern. You just end up automating them at a different level, since your API is machine-oriented rather than human-oriented, and some of your conversations will be about monitoring instead of testing. Your conversations should still be with the people.
Your "when" will be the query or report that you run, within the data, permission and security context in which you run it.
BDD works fine for application logic inside the Big Data space. Remember the test pyramid principle: have your unit tests, and build your integration and acceptance tests with BDD, within your sprint. It's not recommended to have your test data maintained externally, so validating the E2E flow with all its moving pieces needs to stay lightweight. Practice the TDD model where the situation permits.
My friend asked me this question today: how would you test a vending machine, and what are its test cases? I was able to give some test cases, but those were just random thoughts. I want to know how to systematically test a product or a piece of software. There are lots of kinds of testing: unit testing, functional testing, integration testing, stress testing, etc. How do I test systematically and think like a real tester? Can someone please explain how all these kinds of testing differ and which ones apply in a real scenario, for example testing a file system?
Even long-time, well respected, professional testers will tell you: It is an art more than a science.
My trick to designing new test cases starts with the various types of tests you mention, and it must include all those to be thorough, but I try to find a list of all the ways I can interact with the code/product.
For the vending machine example, there are tons of parts, inside and out.
Simple testing, exercising the product as it is designed to work, gives plenty of cases:
Does it give the correct change
How fast can it process the request
What if an item is out of stock
What if it is overfilled
What if the change drawer is full
What if the items are too big, or badly racked
What if the user puts in too little money
What if it is out of change
Then there are the interesting cases, which normal users wouldn't think about:
What if you try to tip it over
Give it a fake coin
Steal from it
Put a coin in with a string
Give it funny amounts of change
Give it half-ripped bills
Pry it open with a crow-bar
Feed it bad power/brownout
Turn it off in the middle of various operations
The way to think like a tester is to figure out every possible way you can attack it, from all the "funny cases" in usual scenarios, to all the methods that are completely outside of how it should be used. Any point of input, including ones you might think the developers/owners have control over, is fair game.
You can also use many automated test tools, such as pairwise test selection, model-based test toolkits, or for software, various stress/load and security tools.
I feel like this answer was a good start, but I now realize it was only half of the story.
Coming up with every single way you can possibly test the system is important. You need to learn to stretch the limits of your imagination, your problem decomposition skills, your understanding of chains of functionality/failure, and your domain knowledge about the thing you are testing. This is the point I was attempting to make above. With the right mindset, and with enough vigilance, these skills will start to improve very quickly - within a year, or within a few years (depending on the complexity of the domain).
The second level of becoming a very competent tester is to determine which tests you should care about. You will always be able to break every system, in a ton of different ways. Whether those failures are important or not is a more interesting question, and is often much more difficult to answer. The benefit to answering this question, though, is two-fold.
First, if you know why it is important to fix pieces of the system that break (or to skip fixing them!), then you can understand where you should focus your efforts. You know what you can afford to spend less time testing, and what you must spend more time on.
Second, and more importantly, you will help your team expose where they should be focusing their efforts. You will start to uncover things that are called "second-order unknowns". Your team doesn't know what it doesn't know.
The primary trick that helps you accomplish this is to always ask "why?", until whoever you are asking is stumped.
An example:
Q: Why this test?
A: Because I want to exercise all functionality in the system.
Q: Why does this system function this way?
A: Because of the decisions that the programmer made, based on the product specifications.
Q: Why did our product specifications ask for this?
A: Because the company that we are writing the software for had a requirement that the software works this way.
Q: Why did that company we are contracting for add that as a requirement?
A: Because their users need to do :thing:
Q: Why do the users need to do :thing:?
A: Because they are trying to accomplish :xyz:
Q: Why do they need to accomplish :xyz:?
A: Because they save money by doing :abc:
Q: Why did they choose :xyz: to solve :abc:?
A: ... good question.
Q: What could they do instead?
A: ... now that I think about it, there's a ton of options! Maybe one of them works better?
With practice, you will start knowing which specific "why" questions to ask, and which to focus on. You will also learn to start deeper down the chain, and be less mechanical in your approach.
This is no longer just about ensuring that the product matches the specifications that the dev, pm, customer, or end user provided. It also helps determine if the solution you are providing is the highest quality solution that your team could provide.
A hidden requirement of this is that you must learn that half your job as a tester is to ask questions all the time. You might think that your teammates will be annoyed at this, but hopefully I've shown that it is crucial both to your development and to the quality of the product you are testing. Smart and curious teammates who care about the product (who aren't busy and frustrated) will love your questions.
@brett:
Suppose you have the system with you that you want to test. The main thing is to make sure you have a test scenario or test plan. Once you have that, it becomes much clearer how and what to test about the system.
Once you have a test plan, your vision becomes clear about what is expected and what is unexpected. For unexpected behaviour, you can recheck and file an issue if you think it is incorrect. I have answered for the general case; if you have a real-world scenario, it may be really helpful to give guidelines on that.
All,
I am a developer but would like to know more about testing processes and methods. I believe this helps me write more solid code, as it improves the cases I can cover with my unit tests before delivering the product to the test team. I have recently started looking at Test-Driven Development and the exploratory testing approach to software projects.
Now it's easier for me to find test cases for the code that I have written. But I am curious to know how to discover test cases when I am not the developer of the functionality under test.
Say, for example, we have a basic user registration form like the ones we see on various websites. Assuming the person testing it is not the developer of the form, how should one go about testing its input fields, and what would your strategy be? How would you discover test cases? I believe this kind of testing benefits from an exploratory approach; I may be wrong here, though.
I would appreciate your views on this.
Bugs! One of my favorite starting places on a project for adding new test cases is to take a look at the bug tracking system. The existing bugs are test cases in their own right, but they also can steer you towards new test cases. If a particular module is buggy, it can lead you to develop more test cases in that area. If a particular developer seems to introduce a certain class of bugs, it can guide testing of future projects by that developer.
Another useful consideration is to look more at testing techniques, than test cases. In your example of a registration form, how would you attack it from a business requirements perspective? Security? Concurrency? Valid/invalid input?
Testing Computer Software is a good book on how to do all kinds of different types of testing: black box, white box, test case design, planning, managing a testing project, and probably a lot more I missed.
For the example you give, I would do something like this:
For each field, I would think about the possible values you can enter, both valid and invalid. I would look for boundary cases; if a field is numeric, what happens if I enter a value one less than the lower bound? What happens if I enter the lower bound as a value? Etc.
I would then use a tool like Microsoft's Pairwise Independent Combinatorial Testing (PICT) Tool to generate as few test scenarios as I could across the cases for all input fields.
I would also write an automated test to pound away on the form using random input, capture the results, and see if the responses make sense (virtual monkeys at a keyboard).
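As a sketch of the first and last of those ideas for a single numeric field (the bounds and the `validate_age` function are made up):

```python
import random

LOWER, UPPER = 1, 120  # hypothetical valid range for an "age" field

def validate_age(value):
    # Stand-in for the form's real validation logic.
    return LOWER <= value <= UPPER

# Boundary cases: just below, at, and just above each bound.
def test_age_boundaries():
    assert not validate_age(LOWER - 1)
    assert validate_age(LOWER)
    assert validate_age(UPPER)
    assert not validate_age(UPPER + 1)

# Virtual monkeys: random inputs, checking the response stays sensible.
def test_age_random_inputs():
    for _ in range(1000):
        value = random.randint(-10_000, 10_000)
        result = validate_age(value)
        # Whatever the input, the form should answer cleanly, not crash.
        assert result in (True, False)
```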
Ask questions. Keep a list of question words and force yourself to come up with questions about the product or a feature. Lists like this can help you get out of the proverbial box or rut. Don't spend too much time on a question word if nothing comes to you.
Who
Whose
What
Where
When
Why
How
How much
Then, when you answer them, ask "else" questions. This forces you to distrust, for a moment at least, your initial conclusions.
Who else
Whose else
etc..
Then, ask the "not" questions--negate or refute your assumptions, and challenge them.
Who not (eg, Who might not need access to this secure feature, and why?)
What not (what data will the user not care about? What will the user not put in this text box? Are you sure?)
etc...
Other modifiers to the questions could be:
W else
W not
W risks
W different
Combine two question words, e.g., who and when.
In the case of the form, I'd look at what I can enter into it and test various boundary conditions there, e.g., what happens if no username is supplied? A few different forms of testing come to mind:
Black box testing - This is where you test without looking inside what is being tested. The challenge here is that not being able to see inside makes it harder to judge which tests are useful and how many different tests are worthwhile. This is, of course, what some default testing can look like.
White box testing - This is where you can look at the code and have metrics like code coverage to ensure that you are covering a percentage of the code base. This is generally better, since you know more about what is actually being done.
There are also performance tests, as distinct from logic tests, that are worth noting somewhere, e.g., how fast the form validates me, rather than just whether the form does this.
Identify your assumptions from different perspectives:
How can users possibly misunderstand this?
Why do I think it acts or should act this way?
What biases might I have about how this software should work?
How do I know the requirements/design/implementation is what's needed?
What other perspectives (users, administrators, managers, developers, legal) might exist on priority, importance, goals, etc, of this software?
Is the right software being built?
Do I really know what a valid name/phone number/ID number/address/etc looks like?
What am I missing?
How might I be mistaken about (insert noun here)?
Also, use any of the mnemonics and testing lists noted here:
http://www.qualityperspectives.ca/resources_mnemonics.html
Discussing test ideas with others. When you explain your ideas to someone else, you tend to see ways to refine or expand on them.
Group brainstorming sessions (or informally in pairs when necessary).
see these brainstorming techniques
Make data tables with major features listed across the top and side, and consider possible interactions between each pair (see the sketch after this list). Doing this in three dimensions can get unwieldy.
Keep test catalogs with common questions and problem types for different kinds of tasks such as integer validation and workflow steps etc.
Make use of Exploratory Testing Dynamics and the Satisfice Heuristic Test Strategy Model by James Bach. Both offer general ways to start thinking more broadly or differently about the product, which can help you move between different boxes and heuristics in your testing.
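For the data-table idea above, even a few lines of code can crank out the pairs to consider (feature names are placeholders):

```python
from itertools import combinations

features = ["login", "search", "checkout", "profile", "notifications"]

# Every unordered pair of features; each one is a prompt for a
# "what happens when these two interact?" test idea.
for a, b in combinations(features, 2):
    print(f"How does {a} behave while {b} is in use?")
```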
As a second interview, I get people to sit down and write code... I try to make the problem really technology-independent.
The programming problems I have don't really exercise people's OO abilities. I try to keep the coding problem solvable within two hours or so, and I've struggled to find a problem small enough yet involved enough that it exposes people's OO design skills.
Any suggestions?
This is a problem I use in some trainings; it looks simple but is tricky OOP-wise:
Create model classes that will properly represent the following constructs:
Define a Shape object, where the object is any two-dimensional figure, with the following characteristics: a name, a perimeter, and a surface area.
Define a Circle, retaining and accurately outputting the values of the aforementioned characteristics of a Shape.
Define a Triangle. This time, the name of the triangle should take into account whether it is equilateral (all 3 sides are the same length), isosceles (only 2 sides are the same length), or scalene (no 2 sides are the same).
You can go on and on with quadrilaterals (which include squares, rectangles, rhombi, etc.) and other polygons.
The way they solve the above problems will separate the people who understand OOP from those who don't.
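For reference, one plausible shape of a good answer in Python (a sketch only; plenty of equally valid designs exist, and the discussion it provokes matters more than the code):

```python
import math

class Shape:
    name = "shape"

    def perimeter(self):
        raise NotImplementedError

    def area(self):
        raise NotImplementedError

class Circle(Shape):
    name = "circle"

    def __init__(self, radius):
        self.radius = radius

    def perimeter(self):
        return 2 * math.pi * self.radius

    def area(self):
        return math.pi * self.radius ** 2

class Triangle(Shape):
    def __init__(self, a, b, c):
        self.sides = (a, b, c)

    @property
    def name(self):
        # The name is derived from the data, not stored alongside it.
        kind = {1: "equilateral", 2: "isosceles", 3: "scalene"}
        return kind[len(set(self.sides))] + " triangle"

    def perimeter(self):
        return sum(self.sides)

    def area(self):
        # Heron's formula.
        s = self.perimeter() / 2
        a, b, c = self.sides
        return math.sqrt(s * (s - a) * (s - b) * (s - c))
```

Whether a candidate treats the triangle's name as stored data or derived behaviour is exactly the kind of thing this exercise exposes.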
Ideally, you want to present a problem that appears difficult, but has a simple, elegant, obvious solution if you think in OO terms.
Perhaps:
We need to control access to a customer web site.
Each customer may have one or more people who access the site.
Different people from different customers may be able to view different parts of the site.
The same person may work for more than one customer.
Customers want to manage permissions based on the person, department, team, or project.
Design a solution for this using object-oriented techniques.
One OO solution is to have a Person, a Customer, an Account, and AccountPermissions, where the Account specifies a Person and a Customer and an optional parent Account. The use of a recursive Account object collapses the otherwise cumbersome person/team/department/project structure that a direct ERD solution might yield.
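A bare-bones sketch of that shape of solution (class names as in the answer; the permission details are invented):

```python
class Person:
    def __init__(self, name):
        self.name = name

class Customer:
    def __init__(self, name):
        self.name = name

class AccountPermissions:
    def __init__(self, viewable_sections):
        self.viewable_sections = set(viewable_sections)

class Account:
    # Links a Person to a Customer. A parent Account lets a team,
    # department, or project grant permissions to its child accounts.
    def __init__(self, person, customer, permissions, parent=None):
        self.person = person
        self.customer = customer
        self.permissions = permissions
        self.parent = parent

    def can_view(self, section):
        if section in self.permissions.viewable_sections:
            return True
        return self.parent.can_view(section) if self.parent else False
```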
I have used the FizzBuzz programming test, and shockingly I can corroborate the claims made by the article. As a follow-up, I have asked candidates to compute the angle(s) between the hands on an analog clock. We set up a laptop with VS 2008 installed and the stub in place; all they have to do is fill in the implementation.
I am always stunned at how poorly candidates do on these two questions. I really am.
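For reference, the clock-hands problem boils down to a few lines once the insight lands that the hour hand also moves with the minutes (a Python sketch rather than the VS 2008 stub):

```python
def clock_angle(hour, minute):
    # Minute hand: 6 degrees per minute. Hour hand: 30 degrees per
    # hour plus 0.5 degrees for each elapsed minute.
    minute_angle = 6 * minute
    hour_angle = 30 * (hour % 12) + 0.5 * minute
    diff = abs(hour_angle - minute_angle)
    return min(diff, 360 - diff)  # report the smaller angle

assert clock_angle(3, 0) == 90.0
assert clock_angle(12, 0) == 0.0
assert clock_angle(9, 30) == 105.0
```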
Designing a Social Security application is something I ask a lot of people to do during interviews.
The nice thing about this is that everyone is aware of how it works and what things to keep track of.
They also have to justify their design and this really helps me get inside their head :)
(As there is lots of flexibility here)
Whether or not people do some coding in the interview, I make it a point to ask this:
"Tell me about a problem you solved recently using object-oriented programming." You'd be surprised how often people cannot answer that simple question. A lot of times I get a blank stare, or they say something like "what do you mean? I program in .NET, which is all object oriented."
These aren't specifically OO questions, but check out the other questions tagged interview-questions.
Edit: What about implementing some design patterns? I don't have the best knowledge in the area, but it seems as if you would be getting two questions for the price of one: you can test for both OO and design patterns in the one question.
How about some sort of simple GUI? It's got inheritance, overriding, possibly events. If you mean for them to actually implement it as part of the test, you could hand them a blank Windows Form with an OnPaint() and tell them to get to it.
You could do worse than ask them to design a MapReduce library with a single-process implementation. Will the interface still work for a distributed implementation? What's the exception-handling policy? Should there be special support for chaining MapReduce jobs in a pipeline? What's the interface to the inputs and outputs? How are inputs chunked up? Can different inputs in one job go to different mappers? What defaults are reasonable?
A good solution in Python takes about a page of code.
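Something like the following single-process skeleton (the interface choices here are mine, not canonical; the interview mileage comes from interrogating them):

```python
from collections import defaultdict
from itertools import chain

def map_reduce(inputs, mapper, reducer):
    # Map phase: each input item yields (key, value) pairs.
    pairs = chain.from_iterable(mapper(item) for item in inputs)
    # Shuffle phase: group values by key.
    groups = defaultdict(list)
    for key, value in pairs:
        groups[key].append(value)
    # Reduce phase: fold each group down to a single result.
    return {key: reducer(key, values) for key, values in groups.items()}

# Word count, the canonical example:
def split_words(line):
    return [(word, 1) for word in line.split()]

def count(word, ones):
    return sum(ones)

print(map_reduce(["a b a", "b c"], split_words, count))
# {'a': 2, 'b': 2, 'c': 1}
```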
I've got a super simple set. The idea is mainly to use them to filter out people who really don't know their stuff rather than filtering in the rock stars.
These are all 5 minute white-board type questions, so they are really not that hard. But the act of writing up code, and talking through it reveals a lot about a candidate - and is brilliant for exposing those that can otherwise BS through the talk.
Write a method that takes a radius of a circle as an argument, and returns the area of the circle (You would be amazed how many people struggle on this one!)
Write a program that accepts a series of numbers as arguments from the command line. Add them up, and print the sum
Write a class that acts as a keyed counter (basically a map that keeps track of how many times each key is "counted")
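For the third one, a sketch of the kind of answer that clears the bar (in Python; `collections.Counter` already does this, but the point is watching someone build the map themselves):

```python
class KeyedCounter:
    def __init__(self):
        self._counts = {}

    def count(self, key):
        # Increment the tally for this key, starting from zero.
        self._counts[key] = self._counts.get(key, 0) + 1

    def get(self, key):
        return self._counts.get(key, 0)

c = KeyedCounter()
c.count("apples"); c.count("apples"); c.count("pears")
print(c.get("apples"), c.get("pears"), c.get("plums"))  # 2 1 0
```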