Proper Convention for Maintaining Data and Feature Files in a Karate Framework [closed] - karate

This is not an issue as such, but I would like to understand the proper conventions for maintaining a Karate framework.
We have a Karate framework with a Maven build:
Test data -> is it better maintained under src/main/resources or src/test/java (with a custom folder)?
Features -> we have functional-flow features which in turn call some re-usable features; in this case, is it better to maintain both (functional-flow and re-usable features) under src/main/java or src/test/java?
Karate config -> is it better kept under src/main/resources or src/test/java?
To add, with reference to https://github.com/karatelabs/karate/tree/master/karate-core, I observed that the test data, feature files, and config are all under src/test.
Another reason to ask: we have many products lined up and we use Karate for API test automation, so we would like to establish a proper convention that makes further implementation and maintenance easier.
We are looking for recommendations suited to long-term use of the Karate framework.
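For reference, the convention followed in the Karate repository linked above (and recommended by the Karate documentation) keeps everything under src/test, with karate-config.js at the root of the test classpath. A minimal sketch, with hypothetical package, folder, and file names:

    src/test/java
    ├── karate-config.js          // environment config at the classpath root
    └── examples                  // hypothetical package
        ├── ExamplesTest.java     // JUnit runner for the suite
        ├── users
        │   └── users.feature     // functional-flow feature
        └── common
            └── auth.feature      // re-usable feature, called from the flows
    src/test/resources            // optional home for larger test data (JSON, CSV)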

Related

How to get karate executable jar file without changing the project structure? [duplicate]

Problem statement: Every service has a separate repository. What is the best way to use a common framework across several service repositories?
We are trying to create an API test automation framework using "Karate".
We want to create a framework (which can be distributed, e.g. as a JAR) such that it can be used across all of the microservice project repositories.
As the creator of Karate, I strongly recommend you don't do this. In the long term this makes all your projects depend on one common framework - and you should try to reduce the creation of "home grown" frameworks. Especially for a testing framework, you should try not to force teams to depend on an additional library which you need to maintain and version-control. Re-use can cause more harm than good especially in the context of testing, see this article at the Google Testing Blog.
That said, since Karate can read files from the classpath, you can "ship" a JAR file with common Java classes and even feature or JS files that all your projects can inherit from or "re-use". In fact, karate-base.js has been designed to solve for common bootstrap logic or variables / parameters being supplied from a JAR file.
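For illustration, a feature in a consuming project could call a shared feature packaged in such a JAR like this (the classpath location, and the idea that auth.feature defines a token variable, are hypothetical):

    # users.feature in the consuming project
    Feature: user flow re-using a common feature shipped in the shared JAR

    Background:
      # 'classpath:' resolves into JAR dependencies as well as local folders
      * def auth = call read('classpath:common/auth.feature')

    Scenario: get users with a shared auth token
      Given url baseUrl
      And path 'users'
      And header Authorization = 'Bearer ' + auth.token
      When method get
      Then status 200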
Short Answer: use normal Java techniques (Maven / Gradle) to create a re-usable JAR file. There are multiple ways to use resources (Java, *.feature, JS) from a JAR file. It is up to you how to structure your Maven (or Gradle) projects to make this happen.
EDIT: for those looking for how to create a "runnable" JAR, please see https://stackoverflow.com/a/56553194/143475
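As a sketch of the karate-base.js idea mentioned above: it sits at the root of the shared JAR's classpath and is processed before each consuming project's own karate-config.js, which makes it a natural home for common bootstrap variables (the values below are placeholders):

    // karate-base.js - packaged at the classpath root of the shared JAR
    function fn() {
      // common defaults that every consuming project inherits,
      // and can override in its own karate-config.js
      var config = {
        baseUrl: 'https://api.example.com',
        defaultTimeout: 5000
      };
      return config;
    }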

Testing Symfony3 best practices [closed]

I would like to know which is the best technology for testing Symfony apps. My idea is to test all the application layers, starting with the database (repositories and queries), then the services, controllers, and views.
I have looked into this, but I'm not sure which direction to follow. I found tools like PhpSpec and Behat, but I'm not sure they fit my needs or which one is best... What do you suggest?
This is what we've been doing at work for the last 4 years.
Use Behat for testing the behaviour of the app. We mainly cover how GUI-related features behave (Twig templates, browser interaction, file upload, client-side validation, API request and response, etc.); however, you can also cover how the app's functionality behaves (DB CRUD operations, emailing features, commands, queuing systems such as RabbitMQ or Beanstalk, etc.), so you're not limited at all.
Use PhpSpec for testing the functionality of the app, whether part of a feature or the whole of it. We cover services, utility classes, listeners, and so on.
We do not test controllers, because we keep them as "thin" as possible by following the "thin controller, fat services" approach.
In general, Behat is powerful enough to cover most cases, so we have more Behat tests than PhpSpec ones. You can also bring in any other appropriate testing tool, such as PHPUnit, so you're free to test the rest however you like.
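For a flavour of the Behat side, a minimal scenario using the standard MinkExtension steps might look like this (the page path, field labels, and messages are made up):

    # features/login.feature - hypothetical GUI behaviour covered by Behat
    Feature: User login

      Scenario: A registered user signs in
        Given I am on "/login"
        When I fill in "email" with "user@example.com"
        And I fill in "password" with "secret"
        And I press "Sign in"
        Then I should see "Welcome back"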
Plenty of Behat examples
Plenty of Symfony examples
A few PhpSpec examples

Best practices for designing Gherkin based Web Test Automation framework using selenium? [closed]

What are the best practices for designing a Gherkin-based UI automation framework using Selenium? Specifically:
Browser instance
Feature-wise or page-wise step definitions
Exception handling
Logging functionality
Execution according to feature or scenario using MSTest
Integration with a continuous integration tool like Jenkins
Have you invested any time in looking at what's possible so far?
Browser instance - Doesn't that depend on which browser you want Selenium to automate? For example, would you want to run the same actions on different browsers to test that it works on each one?
Feature-wise or page-wise steps - SpecFlow doesn't care; it treats all bindings as global, so it really is a personal choice. The only issue comes when you mix bindings from different classes and expect them to share data, but even then SpecFlow has some pretty neat DI-like instantiation (context injection) to make that easier - see the sketch after this list.
Exception handling - this isn't relevant during testing. You simply want something that gets out of the way and lets you see it fail when expected.
Logging - During testing you don't care. Just pick something with a null logger.
Execution of specific tests - see ReSharper or the built-in runner in VS2012+, or even better, NCrunch.
CI integration - since SpecFlow tests are just NUnit or MSTest tests, any CI system should handle them. I'd pick TeamCity, as it's probably the standard for .NET CI.
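To illustrate the context-injection point above, a plain context class can be constructor-injected into bindings spread across classes, and SpecFlow supplies the same instance for the duration of a scenario (class, step, and member names here are hypothetical):

    using System.Collections.Generic;
    using NUnit.Framework;
    using TechTalk.SpecFlow;

    // Hypothetical shared state, created fresh per scenario by SpecFlow's built-in DI
    public class CartContext
    {
        public List<string> Items { get; } = new List<string>();
    }

    [Binding]
    public class AddToCartSteps
    {
        private readonly CartContext _cart;

        // SpecFlow injects the same CartContext into every binding class
        // that asks for it within a single scenario
        public AddToCartSteps(CartContext cart) => _cart = cart;

        [When(@"I add ""(.*)"" to the cart")]
        public void WhenIAddItemToTheCart(string item) => _cart.Items.Add(item);
    }

    [Binding]
    public class CartAssertionSteps
    {
        private readonly CartContext _cart;

        public CartAssertionSteps(CartContext cart) => _cart = cart;

        [Then(@"the cart contains (\d+) items?")]
        public void ThenTheCartContainsItems(int count) =>
            Assert.AreEqual(count, _cart.Items.Count);
    }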

How to diagram automated testing? [closed]

I have a large legacy .NET application that has evolved and grown over the years to include many components and moving parts. I want to develop a strategy for developing automated unit and integration tests for this application and to that end I think a graphical representation would be key.
What I am picturing is some sort of diagram I could use to guide the process of writing up the test cases, help achieve better coverage, and eventually refer back to once a specific test fails. Does anyone have any thoughts on what type of diagram could fulfill this goal? My guess is this would be a variant of the classic functional block diagram, but I have not found examples that specifically relate to the design of an automated testing strategy.
Could this be what you are looking for?
The UTP provides extensions to UML to support the design, visualization, specification, analysis, construction, and documentation of the artifacts involved in testing. It is independent of implementation languages and technologies, and can be applied in a variety of domains of development.
UML Testing Profile: http://utp.omg.org/

How to organize information about program solution? [closed]

Hello. I am developing a system that works with a stock exchange (the "system" below), and there is a lot of information my program needs in order to interact with this system. The system has a formally declared interface, but various details beyond this declaration, as well as the requirements for my program, change often. How can I organize the available information about this system and the requirements for my program so that it is both easy to understand and easy to change?
Your first and foremost goal is to create documentation for the relevant APIs your program exposes, and then add documentation for the configuration files; maybe even set up a validator for that configuration.
Automatically generated content from code annotations (depending on your solution, it might be .NET's XML docs or PHPdoc, etc.) is the first step – this will help you document classes and interfaces as you work on the code. The next step is documenting non-code assets. If you have XML configuration, you can write schemas to validate against, for example.
After that comes integration documentation – steps that need to be taken on the production server and/or workstations to install, upgrade and maintain the application, including support scripts.