Is BDD really applicable at the UI layer?

BDD is an "outside-in" methodology, which as I understand it, means you start with what you know. You write your stories and scenarios, and then implement the outermost domain objects, moving "inwards" and "deliberately" discovering collaborators as you go--down through service layers, domain layers, etc. For a collaborator that doesn't exist yet, you mock it (or "fake it") until you make it. (I'm stealing some of these terms straight from Dan North and Kent Beck).
So, how does a UI fit into this?
Borrowing from one of North's blog entries, in which he rewrites this:
Given an unauthenticated user
When the user tries to navigate to the welcome page
Then they should be redirected to the login page
When the user enters a valid name in the Name field
And the user enters the corresponding password in the Password field
And the user presses the Login button
Then they should be directed to the welcome page
into this:
Given an unauthenticated user
When the user tries to access a restricted asset
Then they should be directed to a login page
When the user submits valid credentials
Then they should be redirected back to the restricted content
He does this to eliminate language from non-relevant domains, one of which is the UI ("Name field", "Password field", "Login button"). Now the UI can change and the story (or rather, the story's intent) doesn't break.
So when I write the implementation for this story, do I use the UI or not? Is it better to fire up a browser and execute "the user submits valid credentials" via a Selenium test, or to connect to the underlying implementation directly (such as an authentication service)? BTW, I'm using jBehave as my BDD framework, but it could just as easily be Cucumber, rSpec, or a number of others.
I tend not to test the UI in an automated fashion, and I'm cautious of GUI automation tools like Selenium because I think the tests (1) can be overly brittle and (2) get run where the cost of execution is the greatest. So my inclination is to manually test the UI for aesthetics and usability and leave the business logic to lower, more easily automatable layers. (And possibly layers less likely to change.)
But I'm open to being converted on this. So, is BDD for UI or not?
PS. I have read all the posts on SO I could find on this topic, and none really address my question. This one gets closest, but I'm not talking about separating the UI into a separate story; rather, I'm talking about ignoring it entirely for the purposes of BDD.

Most people who use automated BDD tools use it at the UI layer. I've seen a few teams take it to the next layer down - the controller or presenter layer - because their UI changes too frequently. One team automated from the UI on their customer-facing site and from the controller on the admin site, since if something was broken they could easily fix it.
Mostly BDD is designed to help you have clear, unambiguous conversations with your stakeholders (or to help you discover the places where ambiguity still exists!) and carry the language into the code. The conversations are much more important than the tools.
If you use the language that the business use when writing your steps, and keep them at a high level as Dan suggests, they should be far less brittle and more easily maintainable. These scenarios aren't really tests; they're examples of how you're going to use the system, which you can use in conversation, and which give you tests as a nice by-product. Having the conversations around the examples is more important than the automation, whichever level you do it at.
I'd say, if your UI is stable, give automation a try, and if it doesn't work for you, either drop to a lower level or ensure you've got sufficient manual testing. If you're testing aesthetics anyway, that will help (and never, ever use automation to test aesthetics!). If your UI is not stable, don't automate it - you're just adding commitment to something that you know is probably going to change, and automation in that case will make it harder.
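To make the "automate the UI or drop to a lower level" choice concrete, here is a minimal sketch of the same step implemented both ways, assuming Cucumber-style Ruby with Capybara for the browser variant; the field labels, credentials and the AuthenticationService name are my own illustrations, not from the original question.

# Sketch only: the step phrase comes from the scenario above; field labels,
# paths and AuthenticationService are illustrative assumptions.

# Option 1 - drive the real UI through the browser (Capybara helpers).
When /^the user submits valid credentials$/ do
  fill_in "Name", with: "alice"
  fill_in "Password", with: "s3cret"
  click_button "Login"
end

# Option 2 - an alternative (you would pick one, not define both): exercise
# the service layer directly and skip the browser entirely.
When /^the user submits valid credentials$/ do
  @session = AuthenticationService.new.authenticate("alice", "s3cret")
end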

I'm new to BDD myself, but I found the cuke4ninja site to help in this regard. What they suggest (my interpretation) is that you have your step definitions, which are high level and UI agnostic; these call into a "workflow" class, which groups details like "click this button" and "populate this field" into a method that captures the workflow under test; and that in turn calls into a "screen driver" class that handles the UI automation for that particular screen. That way all the UI automation code is abstracted away from the step definitions and lives in a single location, and if the UI changes, you just have to change the code in the "screen driver" instead of in multiple tests. Here is the relevant page where it is discussed.
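A rough sketch of that layering in Ruby (the class names, page paths and labels are my own illustration, not taken from cuke4ninja):

# Layering sketch: step definition -> workflow class -> screen driver class.
class LoginScreenDriver
  # Only this class knows about the concrete widgets on the login screen.
  def initialize(browser)
    @browser = browser
  end

  def open
    @browser.visit("/login")
  end

  def fill_credentials(name, password)
    @browser.fill_in("Name", with: name)
    @browser.fill_in("Password", with: password)
  end

  def submit
    @browser.click_button("Login")
  end
end

class AuthenticationWorkflow
  # Groups the individual clicks into one business-meaningful action.
  def initialize(screen)
    @screen = screen
  end

  def log_in_as(name, password)
    @screen.open
    @screen.fill_credentials(name, password)
    @screen.submit
  end
end

# High-level, UI-agnostic step definition.
When /^the user submits valid credentials$/ do
  workflow = AuthenticationWorkflow.new(LoginScreenDriver.new(Capybara.current_session))
  workflow.log_in_as("alice", "s3cret")
end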

What does BDD describe?
In teams following Behaviour Driven Development (BDD), the Acceptance Criteria (aka Rules) should describe "what the system does" instead of "how the system does it".
So where are the UI/UX details captured in a team that follows BDD?
In teams using BDD, the User Interface (UI) and User Experience (UX) layer details (buttons, clicks, animations, etc.) should not be included as Acceptance Criteria (aka Rules) in text format under a ticket (e.g. issued with a software ticketing tool such as JIRA, GitLab, etc.). Instead they should be included within the design screens (wireframes, user journeys, individual screens, etc.). For example, text notes may be embedded on the design screens as annotations, or just as comments next to the screens.


What can developers do to assist in automation testing?

The company I work for is starting a new web application, and I have requested that front end developers make this application Automation Friendly.
The previous application used the React framework, and very few elements had unique IDs (or any unique identifier at all). This time around, I have asked the developers to include a custom data attribute, specifically for automation.
I am looking for anyone who may have experience in this kind of situation.
What have you asked your developers to do to assist in automation?
Are there any standards or guidelines for naming elements in an application to suit Selenium automation?
Are custom data attributes the best way to go? Are there other options?
Any advice/guidance would be greatly appreciated!
Web Applications can be difficult to test if they aren’t made with testing in mind. This is especially true for Single Page Applications (SPAs). SPAs support heavy interaction without incurring additional page loads (e.g. Facebook, Gmail). Instead of page loads, these SPAs use AJAX requests to relay data back and forth from the server.
As per #ChrisChua from ThousandEyes, these are some of the best practices to keep in mind as you develop your web application to make testing easier (a short sketch follows the list):
Add classes that are meaningful.
Classes should indicate the element’s functionality and state.
Use functional names in IDs and classes for action elements.
Dynamically generated classes and IDs are not helpful for testing.
Add targetable DOM feedback to indicate application state.
Never hard-code content in test code!
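As an illustration of stable, functional hooks (including the custom data attribute approach the question asks about), here is a minimal sketch using Selenium's Ruby bindings; the data-test attribute name, the markup and the URL are assumptions for illustration, not an established standard.

# Sketch: target elements by a dedicated automation attribute instead of
# brittle, dynamically generated layout classes. Assumed markup:
#   <button class="btn btn-primary" data-test="submit-order">Place order</button>
require "selenium-webdriver"

driver = Selenium::WebDriver.for :chrome
driver.navigate.to "https://example.test/checkout"

# This selector survives restyling and layout changes as long as the
# attribute stays on the element.
driver.find_element(css: "[data-test='submit-order']").click

driver.quit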
Conclusion
It's true some of these are not easy changes, as the developer may have to think harder about using test-friendly designs rather than "something that just works". However, it will definitely help with maintainability of the testing, which would reduce costs in the long run.
tl;dr
A couple of references:
Nicolas Gallagher’s article on HTML Semantics and Front-end Architecture
Yandex’s Block Element Modifier
W3C’s recommendation on good class names
Pamela Fox’s article on Frontend Architectures
Test Driven Development makes it easier to automate. In my opinion, testers should be developers, or ex-developers who are good at finding and preventing mistakes. Write the tests in the same project that the solution is being developed in. Developers can also put in IDs, even though they sometimes don't want to, and if the "testers" are good they can even submit pull requests (on GitHub, for example) for code improvements that will allow them to test better. Think of testers as part of your development team, where everyone is allowed to contribute code. It helps with accountability and improves autonomy. Everyone is there to help each other, and if all the code is in the same project, and all code is reviewed and approved before merging to master, everyone is a potential developer. The old days of manual testing are dying. Separating Dev from Testing is a brick wall. Tear it down.

Why is it advisable to maintain a clean browser state for each test in automated tests?

It is advisable to maintain a clean browser state for each test, so that browser sessions are not shared between tests.
The user uses a web application. The usage flow is converted into various use cases, and we design test cases for them. The user doesn't clean the browser state before he/she goes to another use case; the browser state is shared between use cases.
Why is it advisable to clean the browser state after a test when the user doesn't?
It depends on your goal. If you are conducting functional regression testing of product features, you are essentially conducting an experiment. When conducting experiments you want to have strict control over as many variables as possible, so that when the experiment fails you'll have a better chance at figuring out what broke.
If your goal is to test the product's behavior in a browser with a variety of states, then you should put the browser into the specific state that you want to test.
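As a minimal sketch of that "strict control" idea, assuming RSpec with Selenium's Ruby bindings: starting a fresh browser per example is the simplest way to guarantee a known state (the URL and selector are illustrative).

require "selenium-webdriver"

# Each example gets a brand-new browser session, so no cookies, local storage
# or login state can leak in from a previous test and confound the result.
RSpec.describe "checkout" do
  before(:each) { @driver = Selenium::WebDriver.for :chrome }
  after(:each)  { @driver.quit }

  it "starts from a known, logged-out state" do
    @driver.navigate.to "https://example.test/checkout"
    expect(@driver.find_element(css: "[data-test='login-link']").displayed?).to be true
  end
end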
"People do the darndest things", so you would run a 'beta test', or conduct user experience trials, or release the product into the wild for people to just use it. you would not attempt to automate that sort of testing for the simple reasons that it's complex, and will drive you crazy trying to reproduce any found problem to the extent that someone could identify and resolve the problem.

Should your BDD specifications be separated from your UI tests?

Yesterday I went to a great presentation by Gojko Adzic about BDD. I might have missed one or two things he said, so here is one question that will hopefully clear some things up for me.
Often when you see BDD examples online, they include steps tied to the UI. In Gherkin you can often see something like:
Scenario: Successful booking
Given I am on the page ...
When I enter the following ...
Or something like that.
My question is, does that really have anything to do with BDD? Shouldn't BDD be targeted towards the domain, with those kinds of tests kept as UI or regression tests? What I am thinking about is something like this, using Gherkin syntax to describe some kind of booking system:
BDD Spec:
Scenario: Successful booking
Given I am an authenticated user
When I place an order with the following items
| item | price ($) |
| book1 | 5 |
Then I should expect to pay $5
And I should get a confirmation mail of my order
Note that I am not mentioning the UI at all; I am only describing how the domain works, and this test should be run on every build. Then you can have your UI test (also Gherkin):
Scenario: Successful booking
Given I am logged in on the site
And I go to the page for item book1
And I click add to basket
Then I should have a basket with 1 item and $5
When I click checkout
Then I should get to the checkout page
and it continues; maybe it should be separated into two or three scenarios, but that is not the point. These kinds of tests are heavier to run and should probably only be run on nightly builds. The point of the question is still: should you separate your BDD specs from your UI/regression tests?
BDD boils down to collaboration between QA, other non-technical people and the developers, using natural language as definitions for the functionality. So from this perspective, both of your examples are likely to be understood as a feature by the stakeholders.
But in general, the more abstract you can make the "story", the better. The difference, or potential issue, isn't so much about mentioning the UI, but about potential assumptions and discussions regarding the layout. The initial workflow of the application is likely to change, so changes in the feature definition might require new discussions with the stakeholders - talks that might not really be required if the goal remains the same.
That said, practicality will come into play very quickly. An ambiguous feature definition will require a more exact definition for the UI, and that is in turn turned into step definitions or tests written with other UI testing tools. Writing the actual step definitions for the feature files is the hardest part, so writing new scenarios is rather quick once you have a comprehensive set of step definitions in place. Not reusing these in UI tests seems like a waste, so we use the same set for UI tests.
We separate the UI tests only in the sense that not all scenarios are discussed with the stakeholders. The most important ones are tagged; each feature has one or two scenarios tagged, and the rest are UI tests. Also, sometimes the feature files are not written by the person writing the step definitions, so this allows the UI test writer to be less knowledgeable about the specifics of how the UI operations are implemented in the framework in use.
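For instance, one way to keep a single set of step definitions while running only the business-facing scenarios on every build is to switch drivers by tag; a sketch assuming Cucumber with Capybara (the tag names, file path and tag-expression syntax are illustrative and follow recent Cucumber versions):

# features/support/hooks.rb - sketch only; the @ui tag is our own convention.
# Untagged, domain-level scenarios run in-process (fast, on every build);
# scenarios tagged @ui additionally drive a real browser (e.g. nightly).

Before("@ui") do
  Capybara.current_driver = :selenium
end

Before("not @ui") do
  Capybara.use_default_driver   # :rack_test, in-process and fast
end

# Fast suite on every build:   cucumber --tags "not @ui"
# Browser suite nightly:       cucumber --tags "@ui"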
Trying to separate out domain and UI tests is not recommended.
The big advantage of using Cucumber is that your specs (tests) can be read and understood by non-technical stakeholders. Those people probably aren't too concerned about user interface details.
So a good approach is to simply push the UI stuff down a layer to within the step definitions, e.g.
# The UI details live only inside the step definition (Capybara-style helpers),
# so the scenario text itself stays at the domain level.
Given /^I place an order for "([^"]*)"$/ do |item|
  visit catalog_url
  click_link item
  click_button "Add To Basket"
  click_button "Submit Order"
end

Where to find good test case templates/examples? [closed]

I'm trying to establish more formal requirements and testing procedures than we have now, but I can't find any good reference examples of the documents involved.
At the moment, after feature freeze, testers "click through the application" before deployment; however, there is no formal specification of what needs to be tested.
First, I'm thinking about a document which specifies every feature that needs to be tested, something like this (making this up):
user registration form
country dropdown (are countries fetched from the server correctly?)
password validation (are all password rules observed, is user notified if password is too weak?)
thank-you-for-registration
...and so on. This could also serve as something the client can sign off as part of the requirements before programmers start coding. After the feature list is complete, I'm thinking about making this list the first column in a spreadsheet which also says when the feature was last tested, whether it worked, and if it didn't work, how it broke. This would give me a document testers could fill in after each testing cycle, so that programmers have a to-do list with information on what doesn't work and when it broke.
Secondly, I'm thinking of test cases for testers, with detailed steps like:
Load user registration form.
(Feature 1.1) Check country dropdown menu.
Is country dropdown populated with countries?
Are names of countries localized?
Is the sort order correct for each language?
(Feature 1.2) Enter these passwords: "a", "bob", "password", "password123", "password123#". Only the last password should be accepted.
Press "OK".
(Feature 2) Check thank-you note.
Is the text localized to every supported language?
This would give testers specific cases and checklist what to pay attention to, with pointers to the features in the first document. This would also give me something to start automating testing process (currently we don't have much testing automation apart from unit tests).
I'm looking for some examples of how others have done this, without too much paperwork. Typically, a tester should be able to go through all the tests in an hour or two. I'm looking for a simple way to make the client agree on which features we should implement for the next version, and for testers to verify that all new features are implemented and all existing features are working, and to report it to the programmers.
This is mostly internal testing material, which should be a couple of Word/Excel documents. I'm trying to keep one testing/bugfixing cycle under two days. I'm tracking programming time, implementation of new features and customer tickets in other ways (JIRA), this would basically be testing documentation. This is lifecycle I had in mind:
PM makes list of features. Customer signs it. (Document 1 is created.)
Test cases are created. (Document 2.)
Programmers implement features.
Testers test features according to test cases. (And report bugs through Document 1.)
Programmers fix bugs.
GOTO 4 until all bugs are fixed.
End of internal testing; product is shown to customer.
Does anyone have pointers to where some sample documents with test cases can be found? Also, all tips regarding the process I outlined above are welcome. :)
I've developed two documents I use.
One is for more "standard" websites (e.g. a business web presence):
http://pm4web.blogspot.com/2008/07/quality-test-plan.html
The other one I use for web-based applications:
http://pm4web.blogspot.com/2008/07/writing-system-test-plan.html
Hope that helps.
First, I think combining the requirements document with the test case document makes the most sense, since much of the information is the same for both. Having the requirements in front of the testers and the test cases in front of the users and developers reinforces the requirements and provides varying viewpoints on them. Here's a good starting point for the document layout: http://www.volere.co.uk/template.htm#anchor326763 - if you add steps to test, resulting expectations of the test, and edge/bound cases, you should have a pretty solid requirements spec and testing spec in one.
For the steps, don't forget to include an evaluate step, where you, the testers, developers, etc. evaluate the testing results and update the requirements/test doc for the next round (you will often run into things that you could not have thought of and should add to the spec, both from a requirements perspective and a testing one).
I also highly recommend using mindmapping/work-breakdown-structure to ensure you have all of the requirements properly captured.
David Peterson's Concordion web-site has a very good page on technique for writing good specifications (as well as a framework for executing said specifications). His advice is simple and concise.
As well, you may want to check out Dan North's classic blog post on Behavior-Driven Development (BDD). Very helpful!
You absolutely need a detailed specification before starting work; otherwise your developers don't know what to write or when they have finished. Joel Spolsky has written a good essay on this topic, with examples. Don't expect the spec to remain unchanged during development though: build revisions into the plan.
meade, above, has recommended combining the spec with the tests. This is known as Test Driven Development and is a very good idea. It pins things down in a way that natural language often doesn't, and cuts down the amount of work.
You also need to think about unit tests and automation. This is a big time saver and quality booster. The GUI level tests may be difficult to automate, but you should make the GUI layer as thin as possible, and have automated tests for the functions underneath. This is a huge time saver later in development because you can test the whole application thoroughly as often as you like. Manual tests are expensive and slow, so there is a strong temptation to cut corners: "we only changed the Foo module, so we only need to repeat tests 7, 8 and 9". Then the customer phones up complaining that something in the Bar module is broken, and it turns out that Foo has an obscure side effect on Bar that the developers missed. Automated tests would catch this because automated tests are cheap to run. See here for a true story about such a bug.
If your application is big enough to need it then specify modules using TDD, and turn those module tests into automated tests.
An hour to run through all the manual tests sounds a bit optimistic, unless it's a very simple application. Don't forget you have to test all the error cases as well as the main path.
Go through old bug reports and build up your test cases from them. You can test for specific old bugs and also make more generalizations. Since the same sorts of bugs tend to crop up over and over again this will give you a test suite that's more about catching real bugs and less about the impossible (or very expensive) task of full coverage.
Make use of GUI and web automation. Selenium, for example. A lot can be automated, much more than you think. Your user registration scenario, for example, is easily automated. Even if the results must be checked by a human, for example cross-browser testing to make sure things look right, the test can be recorded and replayed later while the QA engineer watches. Developers can even record the steps to reproduce hard-to-automate bugs and pass that on to QA rather than taking on the time-consuming, and often flawed, task of writing down instructions. Save them as part of the project. Give them good descriptions as to the intent of the test. Link them to a ticket. Should the GUI change so the test doesn't work any more, and it will happen, you can rewrite the test to cover its intention.
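For instance, the registration checks from the question could be automated with something like the following sketch, assuming Capybara with RSpec; the URL, field labels and messages are assumptions based on the feature list above.

# Sketch of automating part of the registration flow; labels, URL and
# messages are illustrative and would need to match the real form.
require "capybara/rspec"

RSpec.describe "user registration", type: :feature do
  it "rejects a weak password and accepts a strong one" do
    visit "/register"
    select "Croatia", from: "Country"

    fill_in "Password", with: "password123"      # weak: should be refused
    click_button "OK"
    expect(page).to have_content "password is too weak"

    fill_in "Password", with: "password123#"     # strong: should be accepted
    click_button "OK"
    expect(page).to have_content "Thank you for registering"
  end
end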
I will amplify what Paul Johnson said about making the GUI layer as thin as possible. Separate form (the GUI or HTML or formatting) from functionality (what it does) and automate testing the functionality. Have functions which generate the country list, test that thoroughly. Then a function which uses that to generate HTML or AJAX or whatever, and you only have to test that it looks about right because the function doing the actual work is well tested. User login. Password checks. Emails. These can all be written to work without a GUI. This will drastically cut down on the amount of slow, expensive, flawed manual testing which has to be done.
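A tiny sketch of that separation in plain Ruby; the country data and function names are made up for illustration:

# Functionality: pure logic, thoroughly testable without any GUI or browser.
COUNTRY_NAMES = {
  "en" => { "DE" => "Germany", "FR" => "France", "HR" => "Croatia" },
  "de" => { "DE" => "Deutschland", "FR" => "Frankreich", "HR" => "Kroatien" }
}.freeze

def country_options(locale)
  # Localised names in sorted order for that locale - the part worth testing hard.
  COUNTRY_NAMES.fetch(locale).sort_by { |_code, name| name }
end

# Form: a thin layer that only turns already-tested data into markup, so it
# needs little more than a quick "does it look about right" check.
def country_select_html(locale)
  options = country_options(locale).map { |code, name| %(<option value="#{code}">#{name}</option>) }
  %(<select name="country">#{options.join}</select>)
end

# Minimal check of the logic - no browser required.
raise "sort order wrong" unless country_options("de").map(&:last) == ["Deutschland", "Frankreich", "Kroatien"]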

User stories vs use cases

Are use cases just multiple user stories?
What are the benefits of using user stories over use cases, and vice versa? When should you use one over the other?
Do all agile methodologies use user stories?
Actually, the original use cases (see Jacobson's OOSE) were pretty lightweight, much as user stories are now. Over time, they evolved until a common format for "use cases" now is a complicated document with inputs, outputs, inheritance, uses relationships, pseudocode, etc. Programmers, in general, try to convert everything into programming.
In any case, the attempt to define what distinguishes a "use case" from a "user story" from a "scenario" is pretty futile, as it's hard to find two authorities who agree.
Personally, I find the pattern "[Actor] [verbs] [noun] to get [business value]" helpful. If it gets over about a paragraph of text, it may be too big.
When it comes down to it "agile" is just a label, and people disagree over exactly what it means. Similarly people call very different things "use cases."
In my experience the primary difference between the two is that a user story is focused on the user, and is usually shorter and less formal - ideally, it should easily fit on a postcard. It probably doesn't give details of error handling etc.
Use cases can be much more formal (although some people write them informally too) - they focus on every interaction with the system, and may well go into more detail about several different systems/actors/etc within the same use case.
That's just my experience though - chances are everyone will have used these tools in different ways. I wouldn't get too hung up about labels - just use what works for your project.
Use cases aren't compilations of user stories.
User stories are generally much simpler than use cases. I think use cases try to cover absolutely everything to do with the behaviour of some aspect of the system. That is, all behaviours, all error paths and all exception handling.
The recommended template for a user story is:
As a (role) I want (something) so that (benefit)
(Thanks Mike Cohn for providing this simple template)
Descriptions of behaviour expressed like this are more agile.
This sort of template lets you describe behaviour using different levels of detail. For example:
for those stories being implemented in a much later sprint, you can describe behaviour in a high level sort of way, e.g. as an ops team member I want to monitor the system remotely so that I can determine system health while on the road.
for those stories being implemented in the next sprint, you can describe behaviour in a slightly more detailed way, e.g. as an ops team member I want to have a dedicated ops-only login so that I can check system health.
for those stories being implemented in the current sprint, you can describe behaviour in a highly detailed way, e.g. as an ops team member I want to have a web interface so that I can check current status of the ingest ftp server.
IMHO use cases are much more carved in stone! And hence they can be a problem to update after the initial version.
HTH
cheers,
Rob
In one word, no.
Use Cases are typically detailed specifications laying out how some particular piece of functionality is going to work, or how a specific user is going to utilize the system. It typically is in the voice of a specific user (or Actor) and is fairly self-contained.
A user story on the other hand is "an invitation for discussion". It is typically one or two sentences. Here is one good resource for that. And Mike Cohn's User Stories Applied is well worth it.
The typical syntax is "As a <user> I need <functionality> to achieve <business value>", or "To achieve <business value> as a <user> I need <functionality>" which drives home the value of the story.
User stories are not meant to be stand-alone, but meant to invite discussion of the story between the developer and the customer (or customer proxy).
You can think of a use case as a user story, but not the other way around. A use case will cover multiple "endings" to the story and special requirements (e.g. form fields must be entered in format xyz, and error message 123 is shown if the user enters a field in the wrong format). Also, a use case can include additional references to external documents, such as security guidelines.
User stories are a tool used in agile development to make sure you create the product your users really need.
They describe WHY you should build this or that feature, rather than HOW or WHAT the feature is.
From my personal experience, it's a great way to balance the client's and the developers' visions to create a better product.
Unlike user stories, a use case focuses on WHO uses your product. Here is the difference.
I'd say there is no other instrument quite like user stories for an agile developer. If you want to learn how to write them successfully, check out this post.