Robot Framework - BDD style - no Given present - API

When writing a BDD-style test case in Robot Framework, if there is no setup keyword to run, what should be written in the Given statement?
I am writing an API test using the BDD style in Robot Framework, but for the very first test that will be executed there is no Given step to run. Is there any placeholder we can use?
Please suggest.

If there is no Given statement, then you should skip it. IMHO, you shouldn't write one just for styling either. BDD scenarios are meant to express the business requirement, and a dummy Given-When-Then statement that doesn't explain a requirement doesn't make sense.
For example, your scenario should look like:
When the user registers only with First Name and Last Name
And doesn't enter Age
Then the error message "Please enter Age" should be displayed

You can just skip it; don't write it. Or, if you do want to have something there, for styling for example, write Given No Operation.
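For example, a minimal sketch in Robot Framework syntax (the When/Then keywords are hypothetical user keywords you would define yourself). Robot Framework ignores the leading Given/When/Then prefix when matching keyword names, so Given No Operation resolves to the built-in No Operation keyword:

*** Test Cases ***
First API Test
    Given No Operation
    When The User Requests The Health Endpoint
    Then The Response Status Should Be 200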

There's always some context in which the scenario is taking place. Maybe the application is already running, or already installed, or you're already on the home page. In the case of the API, whatever service you're providing is available.
That context is your Given.
The trigger for behaviour - the thing which causes a change to happen - is the event, or the When.
It's often the case that when you start up an application or service, a default state is created. There's no trigger for it; it's just how things start. So for instance, you might see something like:
Given the Tetris game is running
Then the grid should be empty
If your scenario is concerned with whether the game starts up correctly, you can phrase it as a When:
When I start the game
Then the grid should be empty
Even then, there's probably a:
Given the game is installed
If working with an API, with the assumption that the API is available, I might put a check here to find out whether it really is (yes, I really do mean putting an assertion in the Given). If the test fails, it's usually because the service didn't start, which is usually because you've got environment problems. It's a great way to flag it up.
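Sticking with Robot Framework, a sketch of such a check, assuming a recent version of the RequestsLibrary is installed and that your service exposes a health endpoint (the URL and keyword name are made up):

*** Settings ***
Library    RequestsLibrary

*** Keywords ***
The API Should Be Available
    # A failing response here points at environment problems,
    # not at the behaviour under test.
    ${response}=    GET    http://localhost:8080/health
    Status Should Be    200    ${response}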
It's also OK to put the step in as English, but leave it empty in code, if you're really confident about the starting state being in good shape.
Given I am on Google.com
The purpose of scenarios in BDD isn't just to automate tests; it's to illustrate the intent and value of the behaviour using concrete examples. So assume you start with nothing. No internet. No application. No API. What needs to change for your scenario to run? That's your context. It's more common, I've found, to have a missing When than a missing Given, for those starting scenarios, since no user has triggered anything.
Thinking of your scenarios as living documentation with examples, rather than tests, might help to clarify what you need to include. BDD produces tests as a nice by-product of exploring behaviour through conversation and automating the result.
You might also like this blog post on 4 different ways of handling Givens.

Related

Is there a way we can chain the scenarios in Karate, like Java method chaining?

I have been using Karate for the past 6 months, and I am really impressed with the features it offers.
I know Karate is meant to test APIs individually, but we are also trying to use it for E2E tests that involve calling multiple scenarios step by step.
Our feature file looks like below:
1. Call Feature1:Scenario1
2. Call Feature2:Scenario2
.....
Note: We are re-using scenarios for both API testing and E2E testing. Sometimes I find it difficult to remember all the feature files.
Can we chain the scenario calls like Java method chaining? I doubt the feature file will let us do that. We need your valuable suggestions; please let us know if you feel our approach is not correct.
First, I'd like to quote the documentation: https://github.com/intuit/karate#script-structure
Variables set using def in the Background will be re-set before every Scenario. If you are looking for a way to do something only once per Feature, take a look at callonce. On the other hand, if you are expecting a variable in the Background to be modified by one Scenario so that later ones can see the updated value - that is not how you should think of them, and you should combine your 'flow' into one Scenario. Keep in mind that you should be able to comment-out a Scenario or skip some via tags without impacting any others. Note that the parallel runner will run Scenario-s in parallel, which means they can run in any order.
So by default, I actually recommend that teams have Scenario-s with multiple API calls within them. There is nothing wrong with that, and I really don't understand why some folks assume that you should have a Scenario for every GET or POST etc. I thought the "hello world" example would have made that clear, but evidently not.
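For example, a single Scenario that chains a POST and a GET is perfectly idiomatic Karate (this is a sketch; baseUrl is assumed to come from karate-config.js, and the path and payload are made up):

Scenario: create a user and then fetch it
    Given url baseUrl
    And path 'users'
    And request { name: 'Fred' }
    When method post
    Then status 201
    # the response of the first call feeds the second one
    * def id = response.id
    Given path 'users', id
    When method get
    Then status 200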
If you have multiple Scenario-s in a Feature, just run the feature and all the Scenario-s will be executed or "chained". So what's the problem?
I think you need to change some of your assumptions. Karate is designed for integration testing. If you really need to have a separate set of tests that test one API at a time, please create separate feature files. The whole point of Karate is that there is so little code needed - that code-duplication is perfectly ok.
Let me point you to this article by Google. For test-automation, you should NOT be trying to re-use things all over the place. It does more harm than good.
For a great example of what happens when you try to apply "too much re-use" in Karate, see this: https://stackoverflow.com/a/54126724/143475

Only having one assertion statement in a functional test?

This question is similar to this one but regarding functional tests rather than unit tests.
I'm currently testing a UI using Selenium and I was wondering if only one assertion statement is needed, or if it depends on the test.
For example, if I want to test a basic Facebook login, would it suffice to use just one assertion statement for the end state (e.g. finding an element that only exists when logged in), or should the test be more detailed and include more than one assertion statement (check that you're on the correct site, check inputs, check for an element that only exists when logged in, etc.)?
Let me try to answer your questions one by one:
If only one assertion statement is needed, or if it depends on the test - think of a manual test case: it consists of several steps, but at the end we cross-check the actual result against the expected result. The same idea is implemented in automation through assertions. So as per best practice an assertion statement is a must, even though nothing technically forces you to include one.
Should the test be more detailed and include more than one assertion statement - you can always have multiple assertions in a test case; no issue with that. But you must remember that if one assertion fails, the rest of the assertions won't be executed, which gives you a single result, either Pass or Fail. If you want to keep multiple validation points that all get executed regardless of whether each one passes or fails, you have to take the help of an if/else block so that all your validations run.
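If JUnit 5 is available, its assertAll is another way to get that "all validations run" behaviour without hand-rolled if/else blocks. A minimal sketch (the page object here is a made-up stub standing in for real UI lookups):

import static org.junit.jupiter.api.Assertions.assertAll;
import static org.junit.jupiter.api.Assertions.assertEquals;
import static org.junit.jupiter.api.Assertions.assertTrue;

import org.junit.jupiter.api.Test;

class LoginPageValidationTest {

    // Hypothetical page-object stub standing in for real Selenium lookups.
    static class LoginPage {
        String title() { return "Log In"; }
        boolean emailFieldVisible() { return true; }
        boolean passwordFieldVisible() { return true; }
    }

    @Test
    void loginPageValidations() {
        LoginPage page = new LoginPage();
        // All three assertions execute even if one of them fails,
        // and every failure is reported together at the end.
        assertAll(
            () -> assertEquals("Log In", page.title()),
            () -> assertTrue(page.emailFieldVisible()),
            () -> assertTrue(page.passwordFieldVisible())
        );
    }
}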
Let me know if this Answers your query.
The one-assertion-per-testcase rule is a bit over the top for functional tests. For functional end-to-end tests, I think the general guide should be: test only ONE behavior. Use as many asserts as you need to verify this ONE behavior.
If a test is failing you want to understand what is not working without reading the actual test-code. Having multiple assertions in a single test could lead to testing multiple behaviors and multiple reasons to fail. This is sub-optimal.
Do be practical, as functional end-to-end tests tend to be slow. Multiplying the same steps to test a slightly different assert seems a waste of run time. Your test-suites should also be fast if you want to run them on each check-in. Therefore, don't write too many tests on this level. Keep a good balance as the test-pyramid suggests.
For your example I think one assert would be enough. The behavior to check is whether the user is now logged in, not whether all the elements are on the page. Also keep in mind that if the implementation of the site changes, you need to update all your tests. The less you assert, the more maintainable your tests will be.
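A sketch of that single-behaviour approach with Selenium and JUnit (the URL and locators are invented; in real code the driver setup would live in a shared fixture):

import static org.junit.Assert.assertTrue;

import org.junit.Test;
import org.openqa.selenium.By;
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.chrome.ChromeDriver;

public class LoginTest {

    @Test
    public void userCanLogIn() {
        WebDriver driver = new ChromeDriver();
        try {
            driver.get("https://example.com/login");
            driver.findElement(By.id("email")).sendKeys("fred@example.com");
            driver.findElement(By.id("password")).sendKeys("secret");
            driver.findElement(By.id("login-button")).click();
            // One assertion for the one behaviour under test:
            // an element that only exists when logged in.
            assertTrue(driver.findElements(By.id("logout-link")).size() > 0);
        } finally {
            driver.quit();
        }
    }
}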

What name is given to tests that determine if a system will run an application?

I want to essentially check the environment and see if the necessary conditions for my application are in place. That is not unit testing; what is the name for that?
EDIT: I found an example whilst doing something else: otrs.checkModules, from the OTRS ticketing system (which has changed a lot since I last looked). That is what I am talking about. Hopefully that clarifies the question somewhat?
I might call it precondition testing, but I don't think that's an official term. But you are essentially trying to assert the preconditions in order to execute your app.
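A sketch of what such a precondition (or "environment smoke") check might look like in Java; the environment variable, host, and port are placeholders for whatever your application actually needs:

import java.io.IOException;
import java.net.InetSocketAddress;
import java.net.Socket;

public class PreconditionCheck {

    public static void main(String[] args) {
        // Placeholder preconditions: a required env var and a reachable service.
        require(System.getenv("APP_HOME") != null, "APP_HOME is not set");
        require(reachable("localhost", 5432), "database is not reachable");
        System.out.println("All preconditions satisfied.");
    }

    static void require(boolean condition, String message) {
        if (!condition) {
            System.err.println("Precondition failed: " + message);
            System.exit(1);
        }
    }

    static boolean reachable(String host, int port) {
        try (Socket socket = new Socket()) {
            socket.connect(new InetSocketAddress(host, port), 2000);
            return true;
        } catch (IOException e) {
            return false;
        }
    }
}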

BDD GUI Automation

I've started a new role in my life. I was a front-end web developer, but I've now been moved to testing web software, or rather, automating the testing of the software. I believe I am to pursue a BDD (Behavior Driven Development) methodology. I am fairly lost as to what to use and how to piece it together.
The code that is being used/written is in Java to write a web interface for the application to test. I have documentation of the tests to run, but I've been curious how to go about automating it.
I've been directed to Cucumber as one of the "languages" to help with the automation. I have done some research and came across a web site with a synopsis of BDD tools and frameworks,
8 Best Behavior Driven Development (BDD) Tools and Testing Frameworks. This helped a little, but then I got a little confused about how to implement it. It seems that Selenium is a common denominator in a lot of the BDD frameworks for testing a GUI, but it still doesn't seem to help describe what to do.
I then came across the term Functional Testing tool, and I think that confused me even more. Do they all test a GUI?
I think the one that looked like an all-in-one package was SmartBear TestComplete, and then there is, what seems to be, a similar application by SmartBear called SmartBear TestLeft, but I think I saw that they still used Cucumber for the BDD part. There are a few others that looked like they might work as well, but I guess the other question is: what's the cheapest route?
I guess the biggest problem I have is how to make these tests more dynamic, as the UI/browser dimensions can easily change from system to system, and how do I go about writing automation that can handle this, and tie into a BDD methodology?
Does anyone have any suggestions here? Does anybody out there do this?
Thanks in advance.
BDD Architecture
BDD automation typically consists of a few layers:
The natural language steps
The wiring that ties the steps to their definition
The step definitions, which usually access page objects
Page objects, which provide all the capabilities of a page or widget
Automation over the actual code being exercised, often through the GUI.
The wiring between natural language steps and the step definitions is normally done by the BDD tool (Cucumber).
The automation is normally done using the automation tool (Selenium). Sometimes people do skip the GUI, perhaps targeting an API or the MVC layer instead. It depends how complex the functionality in your web page is. If in doubt, give Selenium a try. I've written automation frameworks for desktop apps; the principle's the same regardless.
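As a sketch of that wiring with Cucumber-JVM (the step text, LoginPage, and DriverFactory are all invented for illustration):

import io.cucumber.java.en.When;

public class LoginSteps {

    // Cucumber matches the natural-language step to this method via the
    // annotation text; the step then delegates to a page object.
    @When("the user logs in")
    public void theUserLogsIn() {
        new LoginPage(DriverFactory.current()).logInAs("fred", "secret");
    }
}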
Keeping it maintainable
To make the steps easy to maintain and change, keep the steps at a fairly high level. This is frequently referred to as "declarative" as opposed to "imperative". For instance, this is too detailed:
When Fred provides his receipt
And his receipt is scanned
And the cashier clicks "Refund to original card"
And the card is inserted...
Think about what the user is trying to achieve:
When Fred gets a refund to his original card
Generally a scenario will have a few Givens or Thens, but typically only one When (unless you have something like users interacting or time passing, where both events are needed to illustrate the behaviour).
Your page objects in this scenario might well be a "RefundPageObject" or perhaps, if that's too large, a "RefundToCardPageObject". This pattern allows multiple scenario steps to access the same capabilities without duplication, which means that if the way the capabilities are exercised changes, you only need to change them in one place.
Different page objects could also be used for different systems.
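A sketch of what such a page object might look like over Selenium (the locators are invented; real ones depend on your markup):

import org.openqa.selenium.By;
import org.openqa.selenium.WebDriver;

public class RefundPageObject {

    private final WebDriver driver;

    public RefundPageObject(WebDriver driver) {
        this.driver = driver;
    }

    // One method per capability of the page, so scenario steps
    // never need to touch locators directly.
    public void refundToOriginalCard() {
        driver.findElement(By.cssSelector("[data-test-id='refund-to-card']")).click();
    }

    public String confirmationMessage() {
        return driver.findElement(By.cssSelector("[data-test-id='refund-confirmation']")).getText();
    }
}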
Getting started
If you're attacking this for the first time, start by getting an empty scenario that just runs and passes without doing anything (make the steps empty). When you've done this, you'll have successfully wired up Cucumber.
Write the production code that would make the scenario run. (This is the other way round from the way you'd normally do it; normally you'd write the scenario code first. I've found this is a good way to get started though.)
When you can run your scenario manually, add the automation directly to the steps (you've only got one scenario at this point). Use your favourite assertion package (JUnit) to check the outcome you're after. You'll probably need to change your code so that you can automate over it easily, e.g. by giving relevant test ids to elements in your webpage.
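For instance, a Then step might assert through the page object against an element located by one of those test ids (names are invented, as before):

import static org.junit.Assert.assertEquals;

import io.cucumber.java.en.Then;

public class RefundAssertionSteps {

    @Then("the refund is confirmed")
    public void theRefundIsConfirmed() {
        // The page object reads an element found via a test id that was
        // added to the page specifically to make automation easier.
        assertEquals("Refund complete",
            new RefundPageObject(DriverFactory.current()).confirmationMessage());
    }
}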
Once you've got one scenario running, try to write any subsequent scenarios first; this helps you think about your design and the testability of what you're about to do. When you start adding more scenarios, start extracting that automation out into page objects too.
Once you've got a few scenarios, have a think about how you might want to address different systems. Avoid using lots of "if" statements if you can; those are hard to maintain. Injecting different implementations of page objects is probably better (the frameworks may well support this by now; I haven't used them in a while).
Keep refactoring as you add more scenarios. If the steps are too big, split them up. If the page objects are too big, divide them into widgets. I like to organize my scenarios by user / stakeholder capabilities (normally related to the "when" but sometimes to the "then") then by different contexts.
So to summarize:
Write an empty scenario
Write the code to make that pass manually
Wire up the scenario using your automation tool; it should now run!
Write another scenario, this time writing the automation before the production code
Refactor the automation, moving it out of the steps into page objects
Keep refactoring as you add more scenarios.
Now you've got a fully wired BDD framework, and you're in a good place to keep going while making it maintainable.
A final hint
Think of this as living documentation, rather than tests. BDD scenarios hardly ever pick up bugs in good teams; anything they catch is usually a code design issue, so address it at that level. It helps people work out what the code does and doesn't do yet, and why it's valuable.
The most important part of BDD is having the conversations about how the code works. If you're automating tests for code that already exists, see if you can find someone to talk to about the complicated bits, at least, and verify your understanding with them. This will also help you to use the right language in the scenarios.
See my post on using BDD with legacy systems for more. There are lots of hints for beginners on that blog too.
Since you feel lost as to where to start, I will point you to some blog posts I have written that talk a bit about your problem.
Some categories that may help you:
http://www.thinkcode.se/blog/category/Cucumber
http://www.thinkcode.se/blog/category/Selenium
This rather long and old post might give you hints as well:
http://www.thinkcode.se/blog/2012/11/01/cucumberjvm-not-just-for-testing-guis
Notice that the versions are dated, but hopefully it can give you some ideas as to what to look for.
I am not an expert on test automation, but I am currently working in this area, so let me share some ideas and hope they help you at the current stage.
We have used Selenium + Cucumber + IntelliJ for testing a web application, and TestComplete + Cucumber + IntelliJ for testing a Java desktop application.
For the web application, we provided a test mode in the application itself, which allows us to get useful details of the product and the environment, and also to easily trigger events by clicking buttons and inputting text into the test panel while in test mode.
I hope these are helpful for you.

When/how frequently should I test?

As a novice developer who is getting into the rhythm of my first professional project, I'm trying to develop good habits as soon as possible. However, I've found that I often forget to test, put it off, or do a whole bunch of tests at the end of a build instead of one at a time.
My question is: what rhythm do you like to get into when working on large projects, and where does testing fit into it?
Well, if you want to follow the TDD guys, before you start to code ;)
I am very much in the same position as you. I want to get more into testing, but I am currently in a position where we are working to "get the code out" rather than "get the code out right" which scares the crap out of me. So I am slowly trying to integrate testing processes in my development cycle.
Currently, I test as I code, trying to bust the code as I write it. I do find it hard to get into the TDD mindset. It's taking time, but that is the way I would want to work.
EDIT:
I thought I should probably expand on this; this is my basic "working process":
1. Plan what I want from the code: possible object design, whatever.
2. Create my first class, and add a huge comment to the top outlining what my "vision" for the class is.
3. Outline the basic test scenarios. These will basically become the unit tests.
4. Create my first method, also writing a short comment explaining how it is expected to work.
5. Write an automated test to see if it does what I expect (a sketch of such a test follows after this list).
6. Repeat steps 4-6 for each method (note the automated tests are in a huge list that runs on F5).
7. I then create some beefy tests to emulate the class in the working environment, obviously fixing any issues.
8. If any new bugs come to light following this, I go back and write a new test for the bug, make sure it fails (this also serves as proof-of-concept for the bug), then fix it.
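As a sketch of step 5, a small automated test for a hypothetical method (the class, rule, and values are all invented):

import static org.junit.Assert.assertEquals;

import org.junit.Test;

public class PriceCalculatorTest {

    // Hypothetical production code under test: 10% off orders over 100.
    static class PriceCalculator {
        double totalWithDiscount(double subtotal) {
            return subtotal > 100.0 ? subtotal * 0.9 : subtotal;
        }
    }

    // Step 5: one automated test checking the method does what I expect.
    @Test
    public void appliesTenPercentDiscountOverOneHundred() {
        PriceCalculator calc = new PriceCalculator();
        assertEquals(135.0, calc.totalWithDiscount(150.0), 0.001);
    }
}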
I hope that helps. Open to comments on how to improve this; as I said, it is a concern of mine.
Before you check the code in.
First and often.
If I'm creating some new functionality for the system, I'll initially define the interfaces and then write unit tests for those interfaces. To work out what tests to write, consider the API of the interface and the functionality it provides, then get out a pen and paper and think for a while about potential error conditions or ways to prove that it is doing the correct job. If this is too difficult, then it's likely that your API isn't good enough.
As regards the tests, see if you can avoid writing "integration" tests that cover more than one specific object, and keep them as "unit" tests.
Then create a default implementation of your interface (one that does nothing and returns rubbish values, but doesn't throw exceptions), plug it into the tests, and make sure the tests fail (this tests that your tests work! :) ). Then write in the functionality and re-run the tests.
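A sketch of that technique (the interface and the expected values are invented):

import static org.junit.Assert.assertEquals;

import org.junit.Test;

public class TaxCalculatorTest {

    // The interface is defined first, from the tests' point of view.
    interface TaxCalculator {
        double taxFor(double amount);
    }

    // Default implementation: does nothing useful, returns a rubbish value,
    // but never throws. Plugging it in should make the test fail, which
    // proves that the test itself works.
    static class StubTaxCalculator implements TaxCalculator {
        public double taxFor(double amount) {
            return -1.0; // rubbish on purpose
        }
    }

    @Test
    public void twentyPercentTaxOnOneHundred() {
        TaxCalculator calc = new StubTaxCalculator(); // later: the real implementation
        assertEquals(20.0, calc.taxFor(100.0), 0.001);
    }
}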
This mechanism isn't perfect but will cover a lot of simple coding mistakes and provide you with an opportunity to run your new feature without having to plug it into the entire application.
Following this you then need to test it in the main application with the combination of existing features.
This is where testing is more difficult, and if possible it should be partially outsourced to a good QA tester, as they'll have the knack of breaking things. Although it helps if you have these skills too.
Getting testing right is a knack that you have to pick up, to be honest. My own experience comes from my own naive deployments and the subsequent bugs that were reported by the users when they used the software in anger.
At first when this happened to me, I found it irritating that the user was intentionally trying to break my software, and I wanted to mark all the "bugs" down as "training issues". However, after reflecting on it I realised that it is our role (as developers) to make the application as simple and reliable to use as possible, even by idiots. It is our role to empower idiots, and that's why we get paid the dollar. Idiot handling.
To effectively test like this you have to get into the mindset of trying to break everything. Assume the mantle of a user that bashes the buttons and generally attempts to destroy your application in weird and wonderful ways.
Assume that if you don't find flaws, they will be discovered in production, to your company's serious loss of face. Take full responsibility for all of these issues, and curse yourself when a bug you are responsible (or even partly responsible) for is discovered in production.
If you do most of the above then you should start to produce much more robust code, however it is a bit of an art form and requires a lot of experience to be good at.
A good key to remember is
"Test early, test often and test again, when you think you are done"
When to test? When it's important that the code works correctly!
When hacking something together for myself, I test at the end. Bad practice, but these are usually small things that I'll use a few times and that's it.
On a larger project, I write tests before I write a class and I run the tests after every change to that class.
I test constantly. After I finish even a loop inside of a function, I run the program and hit a breakpoint at the top of the loop, then run through it. This is all just to make sure that the process is doing exactly what I want it to.
Then, once a function is finished, you test it in its entirety. You probably want to set a breakpoint just before the function is called, and step through in your debugger to make sure that it works perfectly.
I guess I would say: "Test often."
I've only recently added unit testing to my regular work flow but I write unit tests:
to express the requirements for each new code module (right after I write the interface but before writing the implementation)
every time I think "it had better ... by the time I'm done"
when something breaks, to quantify the bug and prove that I've fixed it
when I write code which explicitly allocates or deallocates memory---I loathe hunting for memory leaks...
I run the tests on most builds, and always before running the code.
Start with unit testing. Specifically, check out TDD, Test Driven Development. The concept behind TDD is you write the unit tests first, then write your code. If the test fails, you go back and re-work your code. If it passes, you move on to the next one.
I take a hybrid approach to TDD. I don't like to write tests against nothing, so I usually write some of the code first, then put the unit tests in. It's an iterative process, one which you're never really done with. You change the code, you run your tests. If there's any failures, fix and repeat.
The other sort of testing is integration testing, which comes along later in the process and might typically be done by a QA testing team. In any case, integration testing addresses the need to test the pieces as a whole. It's the working product you're concerned with testing. This one is more difficult to deal with because it usually involves having automated testing tools (like Robot, for example).
Also, take a look at a product like CruiseControl.NET for continuous builds. CC.NET is nice because it will run your unit tests with each build, notifying you immediately of any failures.
We don't do TDD here (though some have advocated it), but our rule is that you're supposed to check your unit tests in with your changes. It doesn't always happen, but it's easy to go back and look at a specific changeset and see whether or not tests were written.
I find that if I wait until the end of writing some new feature to test, I forget many of the edge cases that I thought might break the feature. This is ok if you are doing things to learn for yourself, but in a professional environment, I find my flow to be the classic form of: Red, Green, Refactor.
Red: Write your test so that it fails. That way you know the test is asserting against the correct variable.
Green: Make your new test pass in the easiest way possible. If that means hard-coding it, that's ok. This is great for those that just want something to work right away.
Refactor: Now that your test passes, you can go back and change your code with confidence. Your new change broke your test? Great, your change had an implication you didn't realize, now your test is telling you.
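A tiny illustration of the rhythm, with the phases marked in comments (names and values are invented):

import static org.junit.Assert.assertEquals;

import org.junit.Test;

public class GreetingTest {

    // Red: written first, and it fails against an empty implementation.
    @Test
    public void greetsUserByName() {
        assertEquals("Hello, Fred!", Greeting.forName("Fred"));
    }

    static class Greeting {
        // Green was the easiest thing that could pass:
        //     return "Hello, Fred!";
        // Refactor: generalise, with the test as a safety net.
        static String forName(String name) {
            return "Hello, " + name + "!";
        }
    }
}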
This rhythm has sped up my development over time, because I basically build up a compiled history of all the things I thought needed to be checked in order for a feature to work! This, in turn, leads to many other benefits that I won't get into here...
Lots of great answers here!
I try to test at the lowest level that makes sense:
If a single computation or conditional is difficult or complex, add test code while you're writing it and ensure each piece works. Comment out the test code when you're done, but leave it there to document how you tested the algorithm.
Test each function.
Exercise each branch at least once.
Exercise the boundary conditions -- input values at which the code changes its behavior -- to catch "off by one" errors (see the sketch after this list).
Test various combinations of valid and invalid inputs.
Look for situations that might break the code, and test them.
Test each module with the same strategy as above.
Test the body of code as a whole, to ensure the components interact properly. If you've been diligent about lower-level testing, this is essentially a "confidence test" to ensure nothing broke during assembly.
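Here is the boundary-condition sketch promised above; the free-shipping threshold is an invented rule:

import static org.junit.Assert.assertEquals;

import org.junit.Test;

public class ShippingTest {

    // Hypothetical rule under test: orders of 50.0 or more ship free.
    static double shippingFor(double orderTotal) {
        return orderTotal >= 50.0 ? 0.0 : 5.0;
    }

    // Exercise both sides of the boundary, and the boundary itself,
    // to catch off-by-one errors in the comparison.
    @Test
    public void boundaryConditions() {
        assertEquals(5.0, shippingFor(49.99), 0.001);
        assertEquals(0.0, shippingFor(50.00), 0.001);
        assertEquals(0.0, shippingFor(50.01), 0.001);
    }
}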
Since most of my code is for embedded devices, I pay particular attention to robustness, interaction between various threads, tasks, and components, and unexpected use of resources: memory, CPU, filesystem space, etc.
In general, the earlier you encounter an error, the easier it is to isolate, identify, and fix it--and the more time you get to spend creating, rather than chasing your tail.*
* I know, -1 for the gratuitous buffer-pointer reference!