Running all tests in the Production Template in Shopware 6.3.5.2

We are building a shop for a customer on Shopware 6.3.5.2 and want to use tests to
1. ensure that core functionality is not broken by our customizations (static plugins), and
2. write new tests for new functionality.
There is the "Running End-to-End Tests" guide, but it seems to be aimed at core development and uses psh.phar, which is not available in the Production Template.
How should this be done?
Edit: This question is meant a bit more broadly and also concerns unit tests.

Actually, you can use the E2E tests from the platform project, as Cypress itself doesn't care which URL it runs against. However, as you already noticed, you cannot use the psh commands to run them. You can run the tests through the basic Cypress CLI instead, setting your shop's URL as the baseUrl of the tests, for example via this command:
./node_modules/.bin/cypress run --config baseUrl="<your-url>"
It works with cypress open as well.
The only thing that may become troublesome is the setToInitialState command used in most of the tests, which unfortunately takes care of cleaning up Shopware's database using psh scripts. You may need to override that command so that it resets the database of the Production Template instead.
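As a rough sketch (assuming the Shopware E2E test suite registers setToInitialState via Cypress.Commands.add, and that you have prepared a dump of a clean Production Template database yourself), the override in cypress/support/commands.js could look something like this:

// Skip the original psh-based reset (originalFn is intentionally not called)
// and restore a prepared dump instead. The MySQL credentials, database name,
// dump path and console path are placeholders for your own setup.
Cypress.Commands.overwrite('setToInitialState', (originalFn) => {
    return cy.exec('mysql -u app -papp shopware < e2e/clean-db.sql')
        .then(() => cy.exec('bin/console cache:clear'));
});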
I hope I was able to help a bit. 🙏

There are actually two parts here:
1. ensure that core functionality is not broken by our customizations (static plugins)
2. write new tests for new functionality
Re 1: For regression tests like this I would suggest end-to-end tests: either test through the UI with tools like Selenium, or through the HTTP API (I don't know whether the Shopware API is sufficient for extensive regression tests).
Re 2: Since plugins do not run on their own, I would extract all relevant functionality into plain old PHP classes that are independent of Shopware and test those in isolation. Explore whether some of that functionality can be made visible through an API and test the plugin integration through this. Depending on the actual plugin, you might have to resort to UI tests again.
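For the API-driven checks mentioned in both points, a minimal sketch of what such a regression test could look like with Cypress (the Store API route, the sw-access-key header, and the response shape are assumptions based on Shopware 6 defaults and may differ in your version):

// Smoke test against the Store API; SW_ACCESS_KEY is a placeholder
// environment variable holding your sales channel access key.
describe('store-api smoke test', () => {
    it('lists products', () => {
        cy.request({
            method: 'POST',
            url: '/store-api/v3/product',
            headers: { 'sw-access-key': Cypress.env('SW_ACCESS_KEY') },
        }).then((response) => {
            expect(response.status).to.eq(200);
            expect(response.body.elements).to.be.an('array');
        });
    });
});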

Related

Should the Cypress testing framework be installed separately from the testee project?

I have a big web project with a separate backend and a front-end (webpack). I'm going to use Cypress to create end-to-end tests.
What is not clear is where I should add the Cypress tests and Cypress itself. The documentation says to add it right to the testee project and shows how to run the tests against the production website (whose URL is different from the local dev project's). This means I'm not able to run the tests against the development project, because the Cypress test runner and the testee project can't be run simultaneously as they share the same terminal.
If so, is the best solution to set up one more project, used only for testing purposes, containing just Cypress and the tests themselves? Is that good practice, and if so, what kind of project should it be?
We have the same setup at work: we include the Cypress folder in the front-end repo. I'd agree with keeping it right next to the project because you then have easy access to that code, e.g. utility functions, selectors, etc. As for the terminal issue, you should be able to run your project locally in one terminal tab and the Cypress test runner in another.
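As a minimal illustration (assuming an npm-based webpack project; the script names are just examples), the scripts section of the front-end's package.json could look like this:

"scripts": {
    "start": "webpack serve --mode development",
    "cy:open": "cypress open",
    "cy:run": "cypress run"
}

Then npm start keeps the dev server running in one terminal tab, while npm run cy:open launches the Cypress runner in a second one.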

E2E Test Automation workflow with GitLab CI/CD

I am to build a test automation system for E2E testing for a company. The product is React/Node.js based and runs in the cloud (Docker & Kubernetes). The code is stored in GitLab repositories, for which CI/CD pipelines are set up for test/lint/deployment.
I plan to use Jest for test orchestration and Selenium / Appium for the UI testing (FRW being in TypeScript), while creating a generator to test our proprietary backend interface.
My code is in a similar repository and will be containerized and uploaded to the test environment.
In my former workplaces we used TeamCity and similar tools to manage test sessions, but I can't seem to find the right link between our existing GitLab CI/CD setup and the E2E testing framework.
I know it could be implemented as part of the pipeline, but to me that seems lacking (which may also be down to my inexperience).
Could you advise some tools/methods for handling test session management for system testing in such an environment?
(with a GUI where I can see the progress of all sessions and manage them: run, rerun, run on certain platforms only, etc.)

Automated Testing for testers with no coding required

I'm trying to improve the testing process where I work, but without adjusting the structure.
What we have: VSTS, Selenium IDE, and testers who write test cases but not code.
What I'd like to do is find a way to marry our TFS continuous integration with the Selenium tests we write. These are NOT the code-driven Selenium tests, but rather the IDE version where users click through and set assertions using the IDE (all are just UI tests). I know we can export those test plans as a .side file, but what I can't figure out is how to have our TFS server execute them as part of a deployment or build pipeline.
Ideally, developers/DevOps would set up projects in TFS from the outset with whatever solution makes sense to execute these Selenium .side files, but afterwards the testers would manage adding/modifying those test cases elsewhere.
The real goal here is to not have testers writing or checking in code, only writing these UI Selenium tests, while having TFS execute them as part of CI.
Researching this on the internet almost always leads me to something that requires testers to write code.
I don't think you can automate testing entirely without code; at the least, you need a test project containing your automated tests.
Generally, in Azure DevOps, we use the Visual Studio Test task to run tests. This task supports the following ways of selecting tests:
Test assembly: Use this option to specify one or more test assemblies that contain your tests. You can optionally specify a filter criteria to select only specific tests.
Test plan: Use this option to run tests from your test plan that have an automated test method associated with it. To learn more about how to associate tests with a test case work item, see Associate automated tests with test cases.
Test run: Use this option when you are setting up an environment to run tests from test plans. This option should not be used when running tests in a continuous integration/continuous deployment (CI/CD) pipeline.
This was a question that I had as well, and I think I found an imperfect but better solution.
I wasn't able to get my Selenium IDE tests running with Jenkins, but I was able to get them to run with TeamCity, another CI.
I created a build step like the following:
Runner type: Command Line
Working directory: where the Selenium IDE .side file is located
Run: Custom Script
With the build script content that I usually use to run my Selenium IDE tests, such as: selenium-side-runner sidefile.side
I also added the following so I could output the results in JUnit or another format: --output-directory=results --output-format=junit
You can also add the following so the tests run headlessly (this only works in Chrome): -c "goog:chromeOptions.args=[--headless,--nogpu] browserName=chrome"
Finally, I also use --filter to run one test suite at a time, but that is optional too.
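Putting those pieces together, the build script content ends up as a single command along these lines (the .side file name and output directory are just placeholders):

selenium-side-runner sidefile.side --output-directory=results --output-format=junit -c "goog:chromeOptions.args=[--headless,--nogpu] browserName=chrome"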
I then used another build step to export the results to our test manager, Xray, but I think that is beyond the scope of this question.
The problem with this solution is that it still runs directly from a user's individual machine, but this can be worked around.

BDD Cucumber test management tool

Is there an open source tool available to control the running of BDD Cucumber tests?
We are developing BDD cucumber tests and would like the option to control the tests when running them (start/stop/pause/restart) using an open source (or proprietary) test tool.
The short answer is: yes.
The somewhat longer answer is that it depends on your ecosystem.
If you are using Java, then any build tool will be sufficient, that is, Maven, Gradle, or similar. These are easy to integrate into your continuous integration (CI) environment. With a tool chain like that, you can execute Cucumber on every build and will always know whether your system works or not.
Yes, but only on a small scale (automation tests) and with limited control over running and managing the tests. On a larger scale, with multiple branches and projects, I think you have to move to Jenkins for full control.
The following link describes the comparison: https://www.saashub.com/compare-jenkins-vs-cucumber

Organization and Structure of Web Application Testing Framework

So I'm looking to bring web application testing into our .NET environment with a framework such as Selenium. At first it'll probably be the developers writing the tests, but later it may be just the QA team. I'm wondering where the tests should actually live: should they live in the same solution as the web application, or in a completely separate solution that is just for the tests? Please note these are regression tests that will be done by automating a web browser, so access to the web app's assemblies is not required. The answer probably depends on the environment and other factors, but I'm curious what other people have done in this situation.
Regression Testing covers both Unit and Functional Tests. Functional tests exercise the complete program with various inputs. Unit tests exercise individual functions, subroutines, or object methods.
Unit tests are part of the solution's code and should live with the primary code, as with Microsoft MVC. Since functional tests examine the whole system and not just components, they can live anywhere. However, since your functional tests are automated scripts, they should be included inside the solution.
The advantage of having both functional and unit tests live with the code is project management: having all project-related files in one repository links the code version with the test version. Test scripts need to be stored in a repository (version control system) just like any other project code, so it is good to keep them with the solution.
That way the test team can do white-box testing (testing with access to the code) by checking out the solution just like a developer. Their work can be saved, shared, and documented inside Visual Studio. Microsoft even includes some web-based management tools with Team Foundation Server that can be used to manage testing, with open communication between the test team and the developers.