BDD - How to Automatically Avoid Duplicate Step Definitions When Working in a Team (Java / Gherkin)

When many members work on the same Cucumber project, it is almost inevitable that steps get duplicated with slightly different wording or contexts. Besides agreeing on a unifying Gherkin convention in the team, is there any way to avoid duplicates automatically in code?
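For what it's worth, one low-effort option: as far as I know, Cucumber-JVM itself rejects two step definitions registered with the same pattern, and a dry run also flags steps that match more than one definition, so a dedicated dry-run runner in the CI build can catch literal and ambiguous duplicates automatically (it cannot spot two differently-worded steps that mean the same thing). A minimal sketch, assuming the io.cucumber.junit JUnit 4 runner and a hypothetical com.example.steps glue package:

    import io.cucumber.junit.Cucumber;
    import io.cucumber.junit.CucumberOptions;
    import org.junit.runner.RunWith;

    // Dry-run runner: Cucumber loads all glue and matches every Gherkin step to a
    // definition without executing anything, so duplicate and ambiguous step
    // definitions fail the build early.
    @RunWith(Cucumber.class)
    @CucumberOptions(
            features = "src/test/resources/features",
            glue = "com.example.steps",   // hypothetical glue package
            dryRun = true
    )
    public class StepDefinitionAuditTest {
    }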

Related

How to re-use Cucumber steps in different projects?

I have a project with Cucumber tests for login and other functionality. There are different projects which use the same login function. I would like to reuse the Cucumber login steps from one project in another project.
Eg:
Project1->TestLogin1
Project2->TestLogin1
In general don't try and do this. Cucumber scenarios should describe the behaviour of your system and their implementation should be specific to each particular system. People have been trying to do this in the cukes community for years, generally with little success.
Sure, with something as simple as login you could share ... until one application starts allowing you to register via Facebook whilst the other requires you to confirm via email.
In practice, the amount you save by sharing (which is very small) is offset by what you lose in being able to make your scenarios specific to the world of your application.
You could definitely benefit from sharing step definitions between projects, because there is likely to be a lot of overlap between certain parts of an app, such as admin tasks.
If you use an IDE for feature editing, you may then be able to benefit from leveraging those step defs through autocomplete.
It should be possible to package step defs into repos that are then included by module. You might be able to leverage tags or hooks to aid in setup so the context is correct.
Whether it’s worth the effort of coordinating across many projects will likely depend on your use case.
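To make the module idea above concrete: Cucumber-JVM's glue option accepts several packages, so shared step definitions can live in a library that each project adds as a test dependency alongside its own glue. A rough sketch, assuming the io.cucumber.junit runner and hypothetical package names:

    import io.cucumber.junit.Cucumber;
    import io.cucumber.junit.CucumberOptions;
    import org.junit.runner.RunWith;

    // Runner in Project2 that combines step definitions from a shared library
    // with the project's own glue. Both package names are hypothetical.
    @RunWith(Cucumber.class)
    @CucumberOptions(
            features = "src/test/resources/features",
            glue = { "com.shared.login.steps", "com.project2.steps" }
    )
    public class RunCucumberTest {
    }

Tags or Before hooks in the shared package can then take care of the per-project setup mentioned above, e.g. pointing the shared login steps at the right base URL.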

Documenting functional tests

I'm about to start writing e2e tests for a web app I've been working on for the last few months. I am currently investigating how best to document these tests. In my company, the way it's been done before (on older, non-web programs) is to have a big Word document that describes the action of each test and the expected result. Tests are then run with third-party software, and if any test fails, we can use the documentation to troubleshoot.
This way works fine, but I'm wondering if there is a more efficient, "web-based" way of documenting the e2e tests. We have no prior experience with web-based apps, and my research led me to ObservableHQ's JavaScript-based notebooks. I thought maybe it is possible to integrate the actual tests into them, along with the test specifications, and then run the code blocks from there. But I'm not sure this approach is worth the extra effort compared to the current way we do things.
I guess what I'm asking is: how are other developers documenting e2e tests for web-based apps, and what lessons have they learned?
If you can, use an automation framework that makes you build the tests from a specification. This is typically a Markdown file which describes the business case being tested. Each of the steps is executed by the framework, which means you can reuse steps as you build out the specifications. An example of this is Gauge; you can read their documentation on building specifications to get a better idea of what I mean (a small sketch follows the list of advantages below).
There are a few advantages to following this approach:
The specifications are stored alongside the code, so the test cases follow the code as it evolves. In the 'old days', when this was stored in documents, keeping it in sync with versions of the code was a challenge.
The tests are self-documenting: the specification both drives the test and documents it.
The test reports are produced in HTML and therefore are easier to understand.
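For illustration, a rough sketch of how this can look with Gauge's Java runner (hypothetical spec and step names, assuming the gauge-java @Step annotation): the Markdown specification is the documentation, and each bullet step maps to an annotated method.

    // login.spec (Gauge Markdown specification - this file *is* the documentation):
    //   # Customer login
    //   ## A registered customer can sign in
    //   * Open the login page
    //   * Sign in as "alice" with password "secret"
    //   * The dashboard is shown

    import com.thoughtworks.gauge.Step;

    public class LoginSteps {

        @Step("Open the login page")
        public void openLoginPage() {
            // drive the browser or API client here
        }

        @Step("Sign in as <user> with password <password>")
        public void signIn(String user, String password) {
            // the quoted values in the spec arrive as method parameters
        }

        @Step("The dashboard is shown")
        public void dashboardIsShown() {
            // assert on the resulting page or response
        }
    }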
Good documentation is key, and when talking about end-to-end testing it can be a little more challenging. Use cases and their data organization are the first things to address. You want your test case input and output verification organized in a cohesive way, including the specification and use case description.
Some projects with e2e test case documentation examples:
Cloud storage mirror
Cross vendor database synchronizer
Finally, you might be interested in test data organization.

Can someone explain the differences between Test-Driven Development, Agile Development, SCRUM & Unit Testing

And further, how do they relate to each other, if they do at all?
What would one do to understand the various pieces of a simple question: how do I properly build a testing facility for my (web or other) application?
Agile Development
This is a banner term for many things, too numerous to mention, including Scrum and TDD. It typically, but not always, follows the Agile Manifesto.
SCRUM
This is a particular flavour of agile. The process diagram on Wikipedia shows it well; see Wikipedia for more info.
Unit Testing
This is the art of writing code that tests code. Failing tests indicate a problem in your solution.
Test Driven Development
This is the practice of writing tests before code, some of the advantages being that untested code isn't added to the solution, and that the code written is testable.
A proper testing facility usually leverages something along the lines of xUnit, JUnit, NUnit or MSTest, depending on the framework used. These tests are typically run via a Continuous Integration build on some kind of build server, i.e. a build that runs every time the code changes and executes the tests. This way problems are identified more quickly.
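To illustrate, here is a minimal JUnit 5 sketch of such a test (ShoppingCart is a hypothetical class under test); a TDD practitioner would write this before the production code, and the CI build would run it on every change:

    import org.junit.jupiter.api.Test;
    import static org.junit.jupiter.api.Assertions.assertEquals;

    class ShoppingCartTest {

        @Test
        void totalIsTheSumOfItemPrices() {
            ShoppingCart cart = new ShoppingCart();   // hypothetical class under test

            cart.add("book", 1000);                   // prices in cents
            cart.add("pen", 250);

            // a failing assertion here signals a problem in the solution
            assertEquals(1250, cart.totalInCents());
        }
    }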

Testing reusable components / services across multiple systems

I'm currently starting a new project where we are hoping to develop a new system using reusable components and services.
We currently have 30+ systems that all have common elements, but at the moment we develop each system in isolation so it feels like we are often duplicating code and then of course we have 30+ separate code bases to maintain and support.
What we would like to do is create a generic platform using shared components to enable quick development of new collections, reuse code and automated tests, and reduce the code base that needs to be maintained.
Our thoughts so far are that we would have a common code base for specific modules for example User Management and Secure System Access, these modules could consist of their own generic web module, API and Context. This would create a generic package of code.
We could then deploy these different components/packages to build up a new system and avoid coding the same modules over and over again; if the new system needs to manage users, you pull in the User Management package and it does what you need. However, because we have 30+ systems, we will deploy the components multiple times, once for each collection. We also appreciate that some of the systems will need unique functionality, so there would be the potential to add extensions to the generic modules for system-specific needs, or to skip one of the generic modules and create a new one while still using the rest of the generic components (a rough sketch of the extension case follows the example below).
For example, say we have four generic components A, B, C and D that make up a system. These could be deployed to create the following system set-ups:
System 1 - A, B, C and D (Happy with all generic components)
System 2 - Aa, B, C and D (extended component A to include specific functionality)
System 3 - A, E, C and F (Can't reuse components B and D so create specific ones, but still reuse components A and C)
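A rough sketch of the "System 2 - Aa" case mentioned above (purely illustrative names): the shared package ships a generic component, and the system extends only the behaviour it needs.

    // File: GenericUserManagement.java (shared "component A" module)
    public class GenericUserManagement {
        public void createUser(String username) {
            // behaviour common to every system
        }
    }

    // File: System2UserManagement.java (System 2 only - the "Aa" extension)
    public class System2UserManagement extends GenericUserManagement {
        @Override
        public void createUser(String username) {
            super.createUser(username);
            notifyLegacyDirectory(username);   // functionality only System 2 needs
        }

        private void notifyLegacyDirectory(String username) {
            // system-specific integration
        }
    }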
This throws up a few issues for me, as I need to be able to test both the platform and each system to ensure they work, and this is the first time I've come across having to test a set-up like this.
I've done some reading around microservices and how to test them, but these often approach the problem for a single system built from microservices, whereas we are looking at multiple systems with different configurations.
My thoughts so far are that, for the generic components utilised by the different collections, I can create automated tests at the base code level; those tests confirm the generic functionality, so it should not be necessary to retest those functions for every system that uses the component, other than perhaps a manual sense check after deployment. Then, at each system level, additional automated tests can be added to check whatever specific functionality has been created.
Ideally, I'd like some sort of testing platform set up so that if a change is made to a core component such as User Management, it would be possible to trigger all the automated tests at the core level and then all of the system-specific tests for every system that shares the component, to ensure that changes don't affect core functionality or create a knock-on effect in the specific systems. After that, only a quick manual check would be required. I'm keen to remove the massive manual test overhead of checking 30+ systems each time a shared component is changed.
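One pattern that might fit this (illustrative names only, not a prescription): each generic component ships an abstract contract test, every system extends it against its own deployment, and system-specific tests are layered on top. Roughly:

    import org.junit.jupiter.api.Test;
    import static org.junit.jupiter.api.Assertions.assertTrue;

    // Shipped alongside the generic User Management component: every system runs
    // the same contract test against its own deployment.
    public abstract class UserManagementContractTest {

        // each system supplies a client pointed at its own instance (hypothetical type)
        protected abstract UserManagementClient client();

        @Test
        void createdUsersCanBeFoundAgain() {
            client().createUser("contract-check-user");
            assertTrue(client().userExists("contract-check-user"));
        }
    }

    // In System 2's test project: inherits the shared checks, adds its own on top.
    class System2UserManagementTest extends UserManagementContractTest {
        @Override
        protected UserManagementClient client() {
            return new UserManagementClient("https://system2.example/api");   // hypothetical URL
        }

        // ... plus System 2-specific tests here
    }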
We work in an agile way, and for our current projects we have a strong continuous integration process set up: when a developer checks in some code (Visual Studio), this triggers a CI build (TeamCity / Octopus) that runs all of the unit tests; provided these pass, it then triggers an integration build that runs my QA automated tests, which are a mixture of tests run at an API level and web tests using SpecFlow with PhantomJS or Selenium WebDriver. We would like to keep this sort of framework in place to keep the quick feedback loops.
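For illustration (the stack above is SpecFlow/.NET, but keeping to Java for consistency with the rest of this thread), the web-level tests in such an integration build are typically thin step definitions driving a browser; a rough Cucumber + Selenium equivalent with a hypothetical URL and element id would look like:

    import io.cucumber.java.en.Given;
    import io.cucumber.java.en.Then;
    import org.openqa.selenium.By;
    import org.openqa.selenium.WebDriver;
    import org.openqa.selenium.chrome.ChromeDriver;
    import static org.junit.jupiter.api.Assertions.assertTrue;

    public class LoginWebSteps {

        // in a real suite the driver lifecycle belongs in @Before/@After hooks
        private final WebDriver driver = new ChromeDriver();

        @Given("I am on the login page")
        public void openLoginPage() {
            driver.get("https://system-under-test.example/login");   // hypothetical URL
        }

        @Then("I see the dashboard")
        public void dashboardIsVisible() {
            assertTrue(driver.findElement(By.id("dashboard")).isDisplayed());   // hypothetical element id
            driver.quit();
        }
    }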
It all sounds great in theory, but where I'm struggling is trying to put something into practice and create a sound testing strategy to cover this kind of system set up.
So really what I'm hoping is that someone out there has encountered something similar in the past and has thoughts, proven in practice, on the best way to tackle this.
I'm keen to get a better understanding of how I could set up a testing platform / rig to aid the continuous integration for all systems considering that each system could potentially look different, yet have shared code.
Any thoughts or links to blogs / whitepapers etc. that you think might help would be much appreciated!!
Your approach is quite good, and since I'll soon have to face the same issues as you, I can give you my ideas so far. I'm pretty sure that to
create a sound testing strategy to cover this kind of system set up
can't be squeezed into one post. So the big picture looks like this (to me): you're in the middle of an Enterprise Application Integration process, and the fundamental area to cover with tests will be the data migration. Maybe you need to consider the concept of Service-Oriented Architecture for your
generic platform using shared components
since it will enable you to provide application functionality as services to other applications. An indirect benefit here is that SOA dramatically simplifies testing: services are autonomous, stateless, have fully documented interfaces, and are separate from the cross-cutting concerns of the implementation. There are a lot of resources on this, such as E2E testing or efficiently testing SOA.

How do you organize your release tests?

In the company where I work we have major releases twice every year. Extensive testing (automated and manual) is done the weeks before.
The automated tests produce logfiles, the results of the manual tests are written down in test plans (Word documents). As you can imagine this results in a lot of different files to be managed and interpreted by the test engineers.
How do you organize your release tests?
E.g. do you use a bug tracker? Do you use any other tools? How do you specify what has to be tested? Who does the testing? What is the ratio of developers to testers?
You could use a combination of a bug tracker (JIRA, Mantis, Bugzilla) and a test case management tool like TestLink.
It's almost impossible to properly organise the testing without keeping good track of your tests and their results.
We use PMC suite(1) and it has a very useful organisation structure for the tests:
Test Scenarios (batteries of tests)
Test Cases (linked to the Requirements)
Test runs with their respective results
These are linked to the Bugs which are in their turn linked to the Tasks etc.
When a manual test is run the tester executes a scenario and goes through the test cases, with the results being tracked. All found issues are documented as Bugs.
1. It's developed by my company, but please don't consider this to be an ad :)
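Purely as an illustration of that hierarchy (hypothetical names, not the tool's actual schema), the relationships could be modelled roughly as:

    import java.util.List;

    // Hypothetical sketch of the organisation structure, not the tool's real schema.
    record TestScenario(String name, List<TestCase> testCases) { }              // a battery of tests
    record TestCase(String name, String requirementId, List<TestRun> runs) { }  // linked to a requirement
    record TestRun(String tester, boolean passed, List<Bug> bugsFound) { }      // one execution and its result
    record Bug(String id, String linkedTaskId) { }                              // linked on to tasks etc.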
If you develop with MS products and technologies you could always look at Team Foundation Server. I find it fits perfectly for managing automated unit testing/builds, managing bugs, managing test results, assigning testing tasks, etc. That is what we use. It's not cheap though, but worth the investment if it's in the budget.