Cucumber and JUnit Testing [closed]

Closed. This question does not meet Stack Overflow guidelines. It is not currently accepting answers.
We don’t allow questions seeking recommendations for books, tools, software libraries, and more. You can edit the question so it can be answered with facts and citations.
Closed 4 years ago.
We are building a new application and we are considering what testing frameworks to use. There are two types of testing we want to do:
Test all the possible logical pathways the execution thread can take (so this is more a technical/developer level testing strategy). We can do this by creating a program to combinatorially generate all the required test data.
Test only the required business use cases for the application (so this is more oriented towards the QAs and BAs).
My thinking is that for (1) we use JUnit and for (2) we use Cucumber. I have no experience with Cucumber. My question is: can (1) and (2) be achieved with one framework like Cucumber, or is it best practice to separate them out as I describe above?

I'd tend to agree with your assessment that JUnit (or another unit testing framework) is best suited for category 1 while Cucumber is well suited for category 2. Cucumber is a framework for writing natural language (more or less) specifications (in the Gherkin language), and as such its strength really lies in writing executable application specifications.
For purely technical testing, in order to enforce maximum test coverage, you're really only making it more difficult for yourself by writing the tests in a business-level language (Gherkin/Cucumber). Writing the tests with e.g. JUnit will involve much less friction.
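The combinatorial generation of test data mentioned in the question is simple to sketch. Here is an illustrative Python example using itertools.product (the input dimensions are hypothetical; in practice you would feed the resulting tuples into JUnit parameterized tests or a similar harness):

```python
from itertools import product

# Hypothetical input dimensions for a checkout flow; every combination
# becomes one test case, covering all logical pathways exhaustively.
user_types = ["guest", "member", "admin"]
payment_methods = ["card", "cash"]
stock_levels = [0, 1, 10]

test_cases = list(product(user_types, payment_methods, stock_levels))
print(len(test_cases))  # 3 * 2 * 3 = 18 combinations
```

Be aware that exhaustive combination grows multiplicatively with each dimension, which is exactly why this style of testing belongs at the unit level rather than in business-facing Gherkin scenarios.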
For a good understanding of Cucumber and its role in the development chain, in comparison to other (BDD) tools (e.g. RSpec), I'd suggest reading The RSpec Book. In particular, it recommends RSpec, which is more similar to xUnit frameworks, for testing isolated parts of your system, and Cucumber for testing your application as a whole. This book is especially valuable in that it is authored by the creators of said tools (RSpec/Cucumber), so you get to know how these tools are intended to be used.
An example Cucumber specification (the test itself is the Scenario block):
Feature: Serve coffee
  Coffee should not be served until paid for
  Coffee should not be served until the button has been pressed
  If there is no coffee left then money should be refunded

  Scenario: Buy last coffee
    Given there are 1 coffees left in the machine
    And I have deposited 1$
    When I press the coffee button
    Then I should be served a coffee
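To show how such steps bind to executable code, here is a minimal hand-rolled sketch of step matching in Python. This is purely illustrative of the mechanism: real Cucumber step definitions use the framework's own annotations and glue code, and all the step functions here are hypothetical.

```python
import re

# Hypothetical step definitions: each pattern maps to a function that
# mutates a shared "world" state, mimicking Cucumber glue code.
def given_coffees(world, n):
    world["coffees"] = int(n)

def given_deposit(world, amount):
    world["deposited"] = int(amount)

def when_press_button(world):
    if world.get("coffees", 0) > 0 and world.get("deposited", 0) >= 1:
        world["coffees"] -= 1
        world["served"] = True

def then_served(world):
    assert world.get("served"), "expected a coffee to be served"

STEPS = [
    (r"there are (\d+) coffees left in the machine", given_coffees),
    (r"I have deposited (\d+)\$", given_deposit),
    (r"I press the coffee button", when_press_button),
    (r"I should be served a coffee", then_served),
]

def run_scenario(lines):
    world = {}
    for line in lines:
        # Strip the Gherkin keyword, then match against each pattern.
        text = line.split(" ", 1)[1]
        for pattern, fn in STEPS:
            m = re.fullmatch(pattern, text)
            if m:
                fn(world, *m.groups())
                break
    return world

world = run_scenario([
    "Given there are 1 coffees left in the machine",
    "And I have deposited 1$",
    "When I press the coffee button",
    "Then I should be served a coffee",
])
print(world["coffees"])  # 0 coffees remain after serving the last one
```

The point is that the Gherkin text stays readable by BAs and QAs while the matching code lives with the developers, which is exactly the split the question is asking about.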

Related

What is the actual meaning of "software testing"? [closed]

Closed. This question needs details or clarity. It is not currently accepting answers.
Closed 1 year ago.
Does testing mean checking that the software or application is free of errors, or checking that the product's functionality is correct as per the client's requirements? Can testing improve the quality, reliability and performance of a product?
Software testing has evolved in a number of ways over the years: from an activity completed by developers during the software build, to dedicated test factories that validate and verify software deliveries, to approaches that write tests as specifications before the code is written (and more in between).
Many view software testing as verifying the requirements of the software have been delivered and validating they have been delivered in the right way. This type of software testing lends itself well to automated testing approaches where a boolean outcome of the test is achievable. As such, the approach to this testing typically uses a Triple A (Arrange, Act, Assert) model, whether at Unit, Integration, Acceptance or GUI test levels.
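The Arrange-Act-Assert shape looks the same at any test level. A minimal Python illustration (the Basket class is hypothetical, invented only to show the three phases):

```python
class Basket:
    # Minimal hypothetical class under test.
    def __init__(self):
        self.items = []

    def add(self, name, price):
        self.items.append((name, price))

    def total(self):
        return sum(price for _, price in self.items)

def test_basket_total():
    # Arrange: set up the object and its inputs.
    basket = Basket()
    # Act: perform the behaviour under test.
    basket.add("coffee", 3)
    basket.add("muffin", 2)
    # Assert: check the boolean-outcome expectation.
    assert basket.total() == 5

test_basket_total()
print("ok")
```

The boolean outcome of the final assert is what makes this style automatable, which is the property the paragraph above is pointing at.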
However, opponents of this approach reject the term 'testing' in favour of 'automated checking', and believe software 'testing' is the process of defining risk and designing experiments to expose the manifestations of those risks through a better understanding of the software's context, purpose, user communities and goals. These attributes can be difficult to describe in boolean terms, and therefore (as described by James Bach) software testing can be seen as a learning opportunity to make informed choices about the software being developed.
A strong approach to software testing would consider both approaches and select the most relevant aspects from each.

Test result reporting [closed]

Closed. This question does not meet Stack Overflow guidelines. It is not currently accepting answers.
Closed 2 years ago.
I am using TestNG and Selenium webdriver via Java.
Is there any tool that can help generate detailed test results? For example, suppose I have a test case that fails more often than not: is there a tool that can statistically report which test cases fail more often than others, for instance in a graph or pie chart?
XL Testview
Have a look at XL Testview from XebiaLabs.
Test analytics and decision support that spans testing tools
See all your test results in one single dashboard
Analyze test results across multiple test tools
Track release metrics and quality trends over time
Use real-time quality data to make the best go/no-go release decisions
I haven't used it, but it seems to track results over time. Seems pretty interesting.
Test Result Analyzer
Or have a look at the Test Result Analyzer plugin for Jenkins.
Many of us have a requirement of knowing the execution status of a test package, test class or test-method across multiple builds. This plugin is an implementation of the said requirement and shows a table containing the execution status of a package, class or test-method across builds.
This plugin supports JUnit and TestNG result sets. It looks like the minimum you want, and it is free. :)
Tesults handles all this, including identifying recurring or continuing failures. In general it's a great central place for a team to view and assign failing tests. Please be aware that I work at Tesults.
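All of these tools are, at heart, computing variations of the same statistic. A sketch in Python over hypothetical pass/fail records shows the idea (real tools obtain this data by parsing the JUnit/TestNG XML reports from each build):

```python
from collections import Counter

# Hypothetical run history: (test name, passed?) per execution.
runs = [
    ("test_login", False), ("test_login", True), ("test_login", False),
    ("test_search", True), ("test_search", True),
    ("test_checkout", False), ("test_checkout", True),
]

failures = Counter(name for name, passed in runs if not passed)
totals = Counter(name for name, _ in runs)

# Failure rate per test, sorted worst-first; a dashboard would chart this.
for name, fails in failures.most_common():
    rate = fails / totals[name]
    print(f"{name}: {fails}/{totals[name]} failed ({rate:.0%})")
```

Once you have per-test failure rates like this, rendering them as a bar or pie chart is the easy part; the hard part the commercial tools solve is collecting the history across many builds and tools.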

What is the most appropriate GUI testing tool for an MS Dynamics CRM hybrid application? [closed]

Closed. This question does not meet Stack Overflow guidelines. It is not currently accepting answers.
Closed 8 years ago.
I have recently been assigned as a tester on a web app that integrates with Microsoft Dynamics CRM.
There are a lot of repetitive testing tasks that could be automated to accelerate the testing effort.
I proposed this to my boss and said that I could start hacking together some Watir scripts. However, he wants me to do more research (he is happy to invest the cash if there is something out there that can save us time; he is heavily attached to the idea that there is some kind of record-and-playback tool that cranks out robust scripts, but I am not convinced).
This is my tool experience so far:
webdriver (Python)
watir-webdriver (just a dabble for an interview)
TestComplete (small suite of tests for a webapp in 2011)
QTP (in 2009)
Can someone please recommend some tools for me? I don't really know where to start.
It sounds like
Selenium / Webdriver is widely used, widely supported and a good price (free :) )
"Telerik TestStudio" is quite popular but seems like overkill for what I want to do
"QTP" is unreliable and overpriced.
"TestComplete" has some scattered support.
Since I'm already handy with Ruby, I am leaning towards running with the Watir option. Does this seem like a reasonable course?
I would suggest going with the open-source solutions: either Watir or Selenium. Both should work; beyond that it depends on your liking. Personally I use Robot Framework with its Selenium library, and it works very well and has quite a dynamic community.
Note that you should also consider whether you can do part of your testing below the UI. You could probably run some tests against the API offered by Dynamics and used by your web app. That would be quicker and more robust.
I would recommend selenium-webdriver. As you said, it's widely used, widely supported and a good price (free). As you already know Ruby, you can write your tests in Ruby using selenium-webdriver.

BDD tests should be written by Developers OR Testers? [closed]

Closed. This question is opinion-based. It is not currently accepting answers.
Closed 9 years ago.
In our team the Developers are arguing they should not be writing the BDD tests since BDD tests are automation tests and QA team should be writing it.
Is that how everyone else out there does it? Or do you have Developers writing BDD tests?
By the way...we use SCRUM methodology on our team.
Regards
This depends on your team and which development methodology you are using.
In Scrum, developers should write the tests, and QA (which, strictly speaking, does not exist as a separate part of the development team) may perform infrequent manual tests that cannot be automated (such as usability tests, information perception, colour choice). In this sense QA becomes a third-party service that does not participate in everyday development; a team may occasionally use this service to get specialised feedback.
From the Scrum Guide (with my emphasis):
Scrum recognizes no sub-teams in the Development Team, regardless of particular domains that need to be addressed like testing or business analysis; there are no exceptions to this rule; [...]
In (iterative) waterfall, QA and customers can write BDD and acceptance tests. They can do this in plain English, leaving the programmatic implementation of the tests to the developers.
The fact that the tests are automated doesn't mean that developers should delegate writing tests to QA.
BDD (behaviour-driven development) is a method by which developers write automated test cases. Anyone who writes code can write these. If a team is already following TDD, BDD may not be required, depending on the case. BDD is aimed at developers building software around behaviour, using tools such as SpecFlow.
I hope this will help.

Making test cases maintainable [closed]

Closed. This question needs to be more focused. It is not currently accepting answers.
Closed 5 years ago.
How do you make test cases maintainable or generic in an agile environment where requirements change frequently?
This is a question that was asked to my friend in an interview.
Write tests at a higher level of abstraction
Write intent-revealing tests rather than tests that mimic user clicks on the UI
Use BDD frameworks like Spock, Cucumber, etc.
Re-use: identify reusable features and re-use them; e.g. login steps can be written once and re-used across other features
Write more tests at the service level than end-to-end
Use formal techniques to reduce the number of regression tests
Equivalence Class Partitioning
Combinatorial Testing
Boundary Values
Create a test strategy for the entire team
Move white-box testing to unit and integration tests
Clearly identify what will be automated by testers and what should be automated by developers; e.g. most white-box tests can be realized as unit tests. The Testing Quadrants model is what I use heavily.
And most importantly, ditch tools from vendors like Mercury and IBM.
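The formal techniques in the list above are what actually shrink a regression suite. A sketch of boundary-value selection over an equivalence-class partition, using a hypothetical age-validation rule (valid ages are 18 to 65) chosen only for illustration:

```python
# Hypothetical rule under test: an age is valid iff 18 <= age <= 65.
def is_valid_age(age):
    return 18 <= age <= 65

# Equivalence classes: below range, in range, above range.
# Boundary values: test only the edges of each class instead of every
# possible age, cutting thousands of candidate cases down to six.
boundary_cases = [17, 18, 19, 64, 65, 66]
expected = [False, True, True, True, True, False]

results = [is_valid_age(a) for a in boundary_cases]
assert results == expected
print("boundary cases pass")
```

Because each boundary case represents its whole equivalence class, a requirement change (say, the range shifting to 21-65) only means updating the boundary list, which is precisely the maintainability the question asks about.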
My short answer to this is treat your test suite with the same respect you treat the rest of your code base.
Automated tests are code, and important code at that. Pay as much attention to keeping them well factored and clean as you do everything else, and you can't go far wrong.