Is there some documentation on the best ways to organize the deployment of Fitnesse for use in projects?
I have many questions:
How should it be stored? Should the whole FitNesse root be stored in SVN? How do you deal with acceptance tests that span multiple SVN repositories?
We have some code that runs only on Linux (the server) and other code that runs only on Windows (the client), which together make up the complete system. How do you run these? Do you have multiple FitNesse servers?
In the company where I work, we are setting up FitNesse for functional tests, integrated with SVN and Selenium.
Here is our basic idea:
Store FitNesse in a repository on SVN (yes, the root)
Store the Selenium tests in another SVN repository (per project, as both .html and TestNG-generated .java)
Use Hudson to automate checkouts from SVN and put everything to run on a QA environment. If a FitNesse acceptance test spans multiple SVN repositories, Hudson is able to download and build the projects, so FitNesse does not need to deal with this issue.
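As an illustrative sketch of such a Hudson job (the repository URLs, paths, and port are hypothetical, not from the setup above), an "Execute shell" build step could check out both repositories and start FitNesse against the checked-out root:

```shell
# Hypothetical Hudson/Jenkins "Execute shell" build step:
# check out the FitNesse root and the Selenium tests from SVN.
svn checkout http://svn.example.com/fitnesse-root fitnesse-root
svn checkout http://svn.example.com/selenium-tests selenium-tests

# Start FitNesse against the checked-out root on port 8080.
java -jar fitnesse-standalone.jar -d fitnesse-root -p 8080
```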
We are still integrating the tools. We also use Jira, Testlink, Sonar and MediaWiki.
I have a big web project with a separate backend and a front-end (webpack). I'm going to use Cypress to create end-to-end tests.
What is not clear is where I should add Cypress and the Cypress tests themselves. The documentation says to add them right to the project under test, and it shows how to run the tests against the production website (whose URL differs from the local dev project's). This means I'm not able to run the tests against the development project, because the Cypress test runner and the project under test can't run simultaneously when they share the same terminal.
If so, is the best solution to set up one more project, purely for testing purposes, containing only Cypress and the tests themselves? Is that good practice, and if so, what kind of project should it be?
We have the same setup at work: we include the Cypress folder in the front-end repo. I'd agree with keeping it right next to the project, because you then have easy access to that code, i.e. utility functions, selectors, etc. As for the terminal issue, you can run your project locally in one terminal tab and the Cypress test runner in another.
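For example, the two processes can be wired up as separate npm scripts in the front-end's package.json, each run in its own terminal (script names, the port, and the webpack dev-server command are assumptions about the project, not from the original posts):

```json
{
  "scripts": {
    "dev": "webpack serve --port 3000",
    "cy:open": "cypress open",
    "cy:run": "cypress run --config baseUrl=http://localhost:3000"
  }
}
```

Then `npm run dev` in one tab and `npm run cy:open` in another lets the interactive runner point at the local dev server instead of production.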
I am to build a test automation system for E2E testing at a company. The product is React/Node.js based and runs in the cloud (Docker & Kubernetes). The code is stored in GitLab repositories, with CI/CD pipelines set up for test/lint/deployment.
I plan to use Jest for test orchestration and Selenium/Appium for the UI testing (the framework being written in TypeScript), while creating a generator to test our proprietary backend interface.
My code is in a similar repository and will be containerized and uploaded to the test environment.
In my former workplaces we used TeamCity and similar tools to manage test sessions, but I haven't been able to find the right link between our existing GitLab CI/CD setup and the E2E testing framework.
I know it could be implemented as part of the pipeline, but to me that seems lacking (which may also be down to my inexperience).
Could you advise some tools/methods for handling test session management for system testing in such an environment?
(with a GUI where I can see the progress of all sessions and manage them: run, rerun, run on certain platforms only, etc.)
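The "part of the pipeline" approach mentioned above can be sketched as a dedicated stage in the existing .gitlab-ci.yml; GitLab's pipeline UI then gives per-job progress, retry, and manual-run controls. This is only a sketch; the stage name, image, scripts, and report path are assumptions about the project:

```yaml
e2e-tests:
  stage: e2e
  image: node:18          # hypothetical runner image
  script:
    - npm ci
    - npm run e2e         # hypothetical script running the Jest/Selenium suite
  artifacts:
    when: always
    paths:
      - reports/          # publish test reports to the pipeline UI
  rules:
    - if: $CI_PIPELINE_SOURCE == "schedule"   # e.g. nightly full runs
    - when: manual                            # on-demand runs from the UI
```

For the "run on certain platforms only" requirement, a common companion is a Selenium Grid (or a hosted grid service) that the job targets via environment variables.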
We have built an automation framework which uses Selenium WebDriver + SpecFlow + NUnit, and we are using Bamboo as our CI server to run the job against every build.
We wrote a build.xml to handle our targets (clean, init, install the latest build, run the Selenium scripts, uninstall the build, etc.).
The ant command reads the tag name from build.xml and runs the respective features/scenarios based on tags (like @smoke, @Regression) with NUnit on the CI machine.
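A build.xml target of the kind described might look like this (the paths and runner name are assumptions; SpecFlow exposes tags as NUnit categories, which is how a console runner can filter on them):

```xml
<!-- Hypothetical ant target: run only scenarios tagged @smoke.
     SpecFlow turns tags into NUnit categories, so the console
     runner can select them with /include. -->
<target name="smoke">
  <exec executable="nunit-console.exe">
    <arg value="/include:smoke"/>
    <arg value="SeleniumTests.dll"/>
  </exec>
</target>
```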
Now our requirement is to use Selenium Grid to distribute the scripts across different machines and execute them with the above setup. The Grid has to divide the scripts based on feature files or on tags. How can we achieve this?
Is there anything that needs to be done under [BeforeFeature] and [BeforeScenario]?
If you could provide detailed steps, or a link that explains them, that would be a great help.
Thanks,
Ashok
You have misunderstood the role Grid plays in distributed parallel testing. It does not "divide the scripts", but simply provides a single hub resource through which multiple tests can open concurrent sessions.
It is the role of the test runner (in your case SpecFlow/NUnit) to divide the tests and start multiple threads.
I believe you would need SpecFlow+ (http://www.specflow.org/plus/), but it does have a license cost.
It should be possible to create your own multithreaded test runner for SpecFlow, but that will require programming and technical knowledge.
If you want a free, open-source approach to parallel test execution in .NET, there is MbUnit (http://code.google.com/p/mb-unit), but this would require you to rewrite your tests.
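To illustrate the hub's role described above (a sketch only: the hub and application URLs are hypothetical, and this is plain Selenium Java rather than the poster's C#/SpecFlow bindings), each test, on whichever thread or machine the runner starts it, simply opens its session against the single hub, and the Grid routes it to a free node:

```java
import java.net.URL;
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.remote.DesiredCapabilities;
import org.openqa.selenium.remote.RemoteWebDriver;

public class GridSessionExample {
    public static void main(String[] args) throws Exception {
        // Every test points at the one hub; the Grid picks a matching node.
        URL hub = new URL("http://grid-hub.example.com:4444/wd/hub");
        WebDriver driver = new RemoteWebDriver(hub, DesiredCapabilities.firefox());
        try {
            driver.get("http://app-under-test.example.com/");
        } finally {
            driver.quit();
        }
    }
}
```

The parallelism itself (how many such sessions run at once) is decided entirely by the runner, not by the Grid.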
I have several projects on GitHub, and I'd like to link them to Jenkins in order to automate testing and improve code quality.
Is there any free online way to do it?
The Jenkins Git plugin allows you to link a Jenkins job to a Git repository (including GitHub). Then use build steps to run your unit tests.
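If you manage jobs as code, the same setup can be captured in a Jenkinsfile checked into the GitHub repository (a sketch only; the Maven test command is an assumption about your build tool):

```groovy
pipeline {
    agent any
    stages {
        stage('Checkout') {
            steps { checkout scm }   // uses the job's configured GitHub repo
        }
        stage('Test') {
            steps { sh 'mvn test' }  // hypothetical: swap in your build tool
        }
    }
}
```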
I would like to know what your Hadoop development environment looks like.
Do you deploy jars to test cluster, or run jars in local mode?
What IDE do you use and what plugins do you use?
How do you deploy completed projects to be run on servers?
What are your other recommendations for setting up my own Hadoop development/test environment?
It's extremely common to see people writing Java MR jobs in an IDE like Eclipse or IntelliJ. Some even use plugins like Karmasphere's dev tools, which are handy.
As for testing, the normal process is to unit test business logic as you normally would. You can unit test some of the surrounding MR infrastructure using the MRUnit classes (see Hadoop's contrib). The next step is usually testing in the local job runner, but note there are a number of caveats here: the distributed cache doesn't work in local mode, and you're single-threaded (so static variables are accessible in ways they won't be in production).
The next step (and the most common test environment) is pseudo-distributed mode: all daemons running, but on a single box. This runs code in different JVMs with multiple tasks in parallel and will reveal most developer errors.
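As a minimal illustration of the "unit test business logic as you normally would" step (the class and method names are invented for this sketch), keep the per-record logic out of the Mapper so it can be exercised without any Hadoop classes at all:

```java
import java.util.ArrayList;
import java.util.List;

public class LineTokenizer {
    // Pure business logic extracted from a map() method: lowercases a line
    // and splits it into words. Because it has no Hadoop dependencies, a
    // plain unit test covers it without any cluster or job runner.
    public static List<String> tokenize(String line) {
        List<String> words = new ArrayList<>();
        for (String w : line.toLowerCase().split("\\s+")) {
            if (!w.isEmpty()) {
                words.add(w);
            }
        }
        return words;
    }

    public static void main(String[] args) {
        System.out.println(tokenize("Hello  Hadoop world")); // prints [hello, hadoop, world]
    }
}
```

The Mapper then becomes a thin wrapper that calls `tokenize` and emits the results, and MRUnit only needs to cover that thin layer.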
MR job jars are distributed to the client machine in different ways. Usually custom deployment processes are seen here. Some folks use tools like Capistrano or config management tools like Chef or Puppet to automate this.
My personal development is usually done in Eclipse with Maven. I build jars using Maven's Assembly plugin (packages all dependencies in a single jar for easier deployment, but fatter jars). I regularly test using MRUnit and then pseudo-distributed mode. The local job runner isn't very useful in my experience. Deployment is almost always via a configuration management system. Testing can be automated with a CI server like Hudson.
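The fat-jar build mentioned above is typically configured via the Assembly plugin's built-in `jar-with-dependencies` descriptor in the pom.xml (the main class below is a hypothetical placeholder):

```xml
<plugin>
  <groupId>org.apache.maven.plugins</groupId>
  <artifactId>maven-assembly-plugin</artifactId>
  <configuration>
    <descriptorRefs>
      <!-- built-in descriptor that packs all dependencies into one jar -->
      <descriptorRef>jar-with-dependencies</descriptorRef>
    </descriptorRefs>
    <archive>
      <manifest>
        <mainClass>com.example.mr.JobDriver</mainClass> <!-- hypothetical -->
      </manifest>
    </archive>
  </configuration>
</plugin>
```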
Hope this helps.