Using hooks in feature files in SpecFlow - Background

To execute something at the TestRun, Feature, Scenario or Step level, I understand we can use hooks. What I would like to find out is how these can be written in the feature file when I am writing the spec.
Based on my understanding, I can use Background to write common steps that run before each scenario within the feature. However, it is advised that we should not have a long list of steps in the Background section. Also, if I have something that is common to the test run or to multiple features that I can group with a tag, is there any syntax I can use to write this in the feature file?

The hook implementations cannot be expressed in Gherkin in the feature files; they must be implemented in the binding (step implementation) files:
Tag the scenarios and/or features with a tag such as @foo, and in a class decorated with the [Binding] attribute, annotate a method with a hook attribute such as [BeforeFeature("foo")] or [BeforeScenario("foo")]. (Test-run level hooks like [BeforeTestRun] run once for the whole run and are not filtered by tags.)
More information and the available hooks can be found in the SpecFlow documentation.
If you're worried about having a long list of steps in the Background, maybe the steps are too verbose and you can consider joining them into a single step. If you need to do something for all tests in the test run, it's probably not important to mention it in the feature anyway, so it can go in a step implementation file as described above.

How to link a feature file to multiple step definition files in Python BDD

I am developing an automation framework based on pytest-bdd. Based on functionality, I have multiple feature files and multiple step definition files. Some scenarios take steps from other step definition files.
For example, I have a Login module and a User Details module. To validate a step in the User Details module, I have to start with steps from the Login module.
However, in pytest-bdd I see a one-to-one mapping between feature files and step definition files.
Please let me know if this is a limitation of the pytest-bdd framework.
Yes, as far as I have worked with pytest-bdd, you can only map one step definition file to a single feature file, but there are workarounds:
1. Use conftest.py to keep all the common steps that you want to call across multiple feature files.
2. Import methods from one step definition module into another so they can be reused.
I have had a similar experience, and I realized that if I don't use a 1:1 mapping of feature file to step definition file, it results in step-definition-not-found errors, e.g.
pytest_bdd.exceptions.StepDefinitionNotFoundError: Step definition is not found:
So I stick with the safe approach of 1:1 mapping.
I would like to hear more thoughts and feedback on this.

ArchUnit to test actual layered architecture

Currently in our project we have a layered architecture implemented in the following way, where the Controller, Service and Repository classes are placed in the same package for each feature, for instance:
feature1:
Feature1Controller
Feature1Service
Feature1Repository
feature2:
Feature2Controller
Feature2Service
Feature2Repository
I've found the following example of an ArchUnit test where such classes are placed in dedicated packages: https://github.com/TNG/ArchUnit-Examples/blob/master/example-junit5/src/test/java/com/tngtech/archunit/exampletest/junit5/LayeredArchitectureTest.java
Please suggest whether it is possible to test a layered architecture when all layers are in a single package.
If the naming conventions are followed consistently across your project, how about writing custom rules instead of using layeredArchitecture()? For example:
classes().that().haveSimpleNameEndingWith("Service")
    .should().onlyBeAccessed().byClassesThat().haveSimpleNameEndingWith("Controller");

noClasses().that().haveSimpleNameEndingWith("Service")
    .should().accessClassesThat().haveSimpleNameEndingWith("Controller");
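To actually run rules like these, they would typically be declared as @ArchTest fields in a JUnit test class. A minimal sketch using the archunit-junit5 integration (the package name com.example.app is illustrative):

import com.tngtech.archunit.junit.AnalyzeClasses;
import com.tngtech.archunit.junit.ArchTest;
import com.tngtech.archunit.lang.ArchRule;

import static com.tngtech.archunit.lang.syntax.ArchRuleDefinition.noClasses;

// Scans the given package once and evaluates every @ArchTest rule against it
@AnalyzeClasses(packages = "com.example.app")
class NamingConventionLayerTest {

    // Services must never depend back on controllers
    @ArchTest
    static final ArchRule services_should_not_access_controllers =
            noClasses().that().haveSimpleNameEndingWith("Service")
                    .should().accessClassesThat().haveSimpleNameEndingWith("Controller");
}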
I know this question is rather old. But for the record, this has been possible for a while using predicates for the layers, e.g.
layeredArchitecture().consideringAllDependencies()
.layer("Controllers").definedBy(HasName.Predicates.nameEndingWith("Controller"))
.layer("Services").definedBy(HasName.Predicates.nameEndingWith("Service"))
.layer("Repository").definedBy(HasName.Predicates.nameEndingWith("Repository"))
.whereLayer("Controllers").mayNotBeAccessedByAnyLayer()
.whereLayer("Services").mayOnlyBeAccessedByLayers("Controllers")
.whereLayer("Repository").mayOnlyBeAccessedByLayers("Services")
However, I'm not sure how well this works in practice, because usually you don't just have classes following this naming pattern and nothing else. A service might also take some POJO as a method parameter type (e.g. MyInput), and perhaps that class should not be used by repositories either. Also, when using forward dependency rules (mayOnlyAccessLayers(..)), this can then cause unwanted violations.

OpenTest custom test actors

I'm really impressed with the OpenTest project. I found it highly intriguing how many ideas this project shares with some projects I created and worked on, like your epic architecture with actors pulling tasks, and many others :)
Have you thought about including other automation technologies to base actors on?
I can see two main groups:
1. Established test automation tooling like TestCafe (support for non-Selenium GUI testing could strengthen the whole solution a lot).
2. Custom tooling needed for specific tasks. It would be great to have an actor with some domain-specific capabilities. As far as I can see, this could currently be achieved by introducing another layer of execution workers called by an actor over a REST API. What I mean is the possibility of using/including such tools as new 'actor types' with their own custom keywords.
Thank you for your nice words. We spent a lot of time thinking through the architecture and implementation of OpenTest and it's very rewarding to see that people understand and appreciate the design.
Implementing new keywords (test actions) can be done without creating custom test actors, by creating a new Java class that inherits from the TestAction base class and overrides its run method. For a simple example, you can take a look at the implementation of the Delay test action. You can then package the new test action in a JAR and drop it (along with any dependencies) in the user-jars subdirectory in your test actor's working directory. The test actor will dynamically load all the JARs it finds in there and will find the new test action class (using reflection) so you can make use of it in your tests. Some useful info and things to look out for (a small sketch follows this list):
Your Java project is going to have to define a dependency on the opentest-base project (which is where the TestAction base class is implemented).
When you copy the JAR to where your test actor is, make sure to copy any dependency JARs along with it. Please note that a lot of the dependencies that you might need are already included with the core test actor binaries (you can have a look at the pom.xml to see what they are).
If you happen to have any dependencies that conflict with the other JARs that are included with the core test actor binaries, you can apply a technique called shading to "hide" the conflicting classes under a different package name. Most of the time you're not going to need this, but if you do and you get stuck, let me know and I'll give you some pointers.
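As a rough illustration, a custom test action could look something like the sketch below. The class and argument names are made up, and the exact package of the TestAction base class and the helper methods (readStringArgument, writeOutput) should be verified against the opentest-base project and the sample project linked below:

// Hypothetical custom keyword; verify the TestAction package and helper
// method names against the opentest-base sources before relying on this.
import org.getopentest.base.TestAction;

public class Concatenate extends TestAction {
    @Override
    public void run() {
        super.run();

        // Arguments as passed to the keyword from the test's YAML file
        String first = readStringArgument("first");
        String second = readStringArgument("second");

        // Expose the result so subsequent test actions can read it
        writeOutput("result", first + second);
    }
}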
Here's a sample project that demonstrates how to build an OpenTest extension that adds a couple of custom keywords: https://github.com/adrianth/opentest-extension-sample
And here's an extensive video tutorial about creating custom OpenTest keywords: https://getopentest.org/tutorials/custom-keywords.html

How to reuse JavaScript functions (written in a feature file) in Karate from other .feature files

So, for reusability, how can I reuse a particular piece of code from one feature file in another feature file?
I don't want to keep functions outside in js files.
As of now, this is not possible with Karate.
IMHO, this is not even a valid enhancement request. If you really want to reuse the code, it would be a better idea to keep it outside the feature file as a JS function and call it from different feature files as and when needed.
Peter Thomas, the author of Karate, mentioned here that reuse of a whole feature is possible, but one cannot reuse a particular scenario from a feature file.
I don't want to keep functions outside in js files.
You don't have to. Please read the documentation. There are multiple ways to reuse code:
the call keyword for re-usable features
Background / hooks
calling Java
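For the last option, calling Java, the reusable logic can live in a plain Java class that Karate loads with Java.type. A minimal, hypothetical helper (class and method names are made up):

package examples.util;

import java.util.UUID;

public class TestUtils {

    // Static methods are convenient because Karate can call them
    // without instantiating the class.
    public static String randomEmail() {
        return "user-" + UUID.randomUUID() + "@example.com";
    }
}

In a feature file this would then be used with something like * def TestUtils = Java.type('examples.util.TestUtils') followed by * def email = TestUtils.randomEmail().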

How to provide specific GWT implementations

Suppose I am working on exposing some of my server-side classes to a GWT application, but certain parts could be done much better using GWT-specific components (JSNI, for instance).
What are some techniques for doing so without being too hacky?
For instance, I am aware of using a subpackage and the <super-source/> tag, but this requires the package names to be different, which causes Eclipse to complain. The general solution in the community is to then tell Eclipse to use that as a source folder, but then Eclipse complains about there being two classes with the same name.
Ideally, there would just be a way to keep everything in a single source tree, and actually have different classes which apply the alternate implementations. This would feel like a more OO approach.
I would like to be able to add a suffix like _gwt to a class to accomplish this automatically, and I know I could write a script to do this kind of transformation, but that is a kludge for sure.
I've been considering using Google's GIN/GUICE libraries for my projects in general, and I think there might be some kind of a solution there, but I am not sure as I have not thoroughly investigated it.
What are some solutions you have tried in the past on GWT projects?
The easiest way to have split implementations is to use super-source code, but only enough to instantiate a uniquely-named instance or dispatch to a different method. Ideally, the super-source implementation is just a few lines long, and not so bad that you can't roll it by hand.
To work around the Eclipse / javac double-mapping and package name issues, the GWT source uses two top-level roots for user code: user/src and user/super. For example, the AutoBeans package has a split-implementation of JSON quoting and evaluation, one for the JVM and one for the browser.
There's really no non-kludgy way to implement super-source, as this is a feature way outside what you can specify in the language. There's nothing that lets you say "use this implementation in this environment" without the use of some external tool.
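As a rough sketch of that pattern (all names are illustrative), the JVM and browser versions share one fully-qualified class name, and the super-source root declared in the module's .gwt.xml points the GWT compiler at the override:

// Default (JVM) version, e.g. src/com/example/shared/UuidFactory.java
package com.example.shared;

public class UuidFactory {
    public static String next() {
        return java.util.UUID.randomUUID().toString();
    }
}

// Browser version: lives under the directory referenced by <super-source/> in
// the module's .gwt.xml; the path below that root mirrors the package
// (com/example/shared/UuidFactory.java), so the GWT compiler picks this class
// up instead of the JVM one.
package com.example.shared;

public class UuidFactory {
    public static native String next() /*-{
        // Delegate to a JS implementation available on the page (illustrative)
        return $wnd.myApp.generateUuid();
    }-*/;
}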