As described in the Play framework documentation, I'd like to import data from a YAML file in order to perform tests. However, I'd like to keep, or at least roll back after the tests, the existing entries in the database.
Any hints are appreciated.
regards
- alex
The easiest approach is to use the Fixtures class.
So, in your unit/functional test, you can do
@Before   // org.junit.Before
public void setup() {
    Fixtures.deleteAll();      // play.test.Fixtures: wipe all existing rows
    Fixtures.load("data.yml"); // reload the fixture data from the YAML file
}
This will clear all the data out, and reload the data into the database before the test is executed.
To achieve the same thing for your Selenium tests, you just do
#{selenium delete:'all', load:'data.yml'}
You can't easily revert the database back to what it was prior to the unit test, but I would suggest that your test database should be entirely populated by your YAML file anyway, so that you have complete control over the data that your tests run against.
As far as I can tell, DbUnit, an extension of JUnit, would be an appropriate solution for this problem.
We have QA and DEV environments in our automation repo. We are using Karate as our framework. We have a TestParallel class and have integrated the Allure report.
How could we run all tests in QA first and then in DEV, back to back, using the TestParallel class and see the results in the same report?
Thanks for such a great tool btw.
We are going to try and make this easier in the next version.
For now, you have to aggregate the reports yourself. Can you try this and let us know how it goes.
Use the Runner class twice to run your tests with different settings, with karate.env set to QA for the first run and DEV for the second.
The important part is using a different value for the workingDir, e.g. target/reports/qa and then target/reports/dev; otherwise the second run will overwrite the first.
Now, when generating the HTML report, you can provide target/reports as the source folder. This should work for the Maven Cucumber Reports; for Allure, please figure this out on your own.
If the above approach does not work well enough for your needs, please figure out a way to manually aggregate the Results object you get from each instance of the Runner; this should not be too complicated as Java code (a rough sketch follows).
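For reference, here is a rough sketch of the two-run-and-aggregate idea. It assumes a Karate 1.x Runner builder that exposes karateEnv and reportDir; the method names and the classpath:features path are assumptions, so adjust them to your Karate version and project layout.
import com.intuit.karate.Results;
import com.intuit.karate.Runner;

public class QaThenDevRunner {

    public static void main(String[] args) {
        // first run: QA environment, reports written to target/reports/qa
        Results qa = Runner.path("classpath:features")
                .karateEnv("qa")
                .reportDir("target/reports/qa")
                .parallel(5);

        // second run: DEV environment, reports written to target/reports/dev
        Results dev = Runner.path("classpath:features")
                .karateEnv("dev")
                .reportDir("target/reports/dev")
                .parallel(5);

        // manual aggregation: fail the build if either run failed, then point the
        // HTML report generator at target/reports so both runs end up in one report
        int failures = qa.getFailCount() + dev.getFailCount();
        if (failures > 0) {
            throw new RuntimeException(failures + " scenario(s) failed");
        }
    }
}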
I'm using Selenium IDE 2.3.0 to record actions in my web application and create tests.
Before every test I have to clear all cookies, load the main page, log in with a specific user and submit the login form. These ~10 commands are fixed, and every test case needs them, but I don't want to record them or copy them from other tests every time.
Is there a way to configure how "empty" test cases are created?
I know I could create a prepare.html file or something and prepend it to a test suite. But I need to be able to run either a single test or all tests at once, so every test case must include the commands.
Ok I finally came up with a solution that suits me. I wrote custom commands setUpTest and tearDownTest, so I only have to add those two manually to each test.
I used this post to get started:
Adding custom commands to Selenium IDE
Selenium supports object-oriented design. You could create a class that encapsulates the commands you are referring to and always executes them; in each of your tests you can then call that class's supporting method before the test proper runs.
A great resource for doing this is here.
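For tests exported to Java/WebDriver (rather than kept as Selenium IDE HTML), a minimal sketch of that base-class idea might look like the following; the URL, field IDs and credentials are placeholders.
import org.junit.After;
import org.junit.Before;
import org.openqa.selenium.By;
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.firefox.FirefoxDriver;

// Every test class extends this instead of repeating the ~10 login commands.
public abstract class LoggedInTestBase {

    protected WebDriver driver;

    @Before
    public void logIn() {
        driver = new FirefoxDriver();
        driver.get("http://localhost:8080/");                        // load the main page (placeholder URL)
        driver.manage().deleteAllCookies();                          // clear all cookies for that domain
        driver.navigate().refresh();                                 // reload so the cleared session takes effect
        driver.findElement(By.id("username")).sendKeys("testuser");  // field IDs/credentials are assumptions
        driver.findElement(By.id("password")).sendKeys("secret");
        driver.findElement(By.id("loginForm")).submit();             // submit the login form
    }

    @After
    public void tearDown() {
        driver.quit();
    }
}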
Currently our Jenkins server only displays a history/graph for the overall number of passed/skipped/failed tests - I'm assuming that's the behavior out of the box.
If you select a single test, you'll get information about how long the test has been failing (assuming it did fail).
However, what we'd like to see is a history for that single test across the different builds, to identify whether the test has been failing in the past (and when) even though it just passed. If you find a build where it failed, you could click on it and investigate what might have caused the failure; if it passes again, you could check whether something actually fixed the test or whether it was failing randomly all along.
Is this something that can be done somehow through the config, or do we need an additional plugin for this? If yes, which one?
Not sure if this makes much difference, but we're using Java (Maven) & TestNG (Surefire).
Both the TestNG plugin and the JUnit plugin will actually display history of the test results.
You just need to pick a given result and then:
For JUnit, click on "History" on the left side.
For TestNG, you will see the history in the graph above the result. You can click on the bars to see older results, and if you click closer to the edge, the scope of the displayed results will adjust.
The Test Results Analyzer plugin does the job for me. There appear to be other suitable plugins out there as well.
https://wiki.jenkins-ci.org/display/JENKINS/Test+Results+Analyzer+Plugin
Does the Static Code Analysis plugin help?
I intend to perform some automated integration tests. This requires the db to be put back into a 'clean state'. Is this the fastest/best way to do this?
var cfg = new Configuration();
cfg.Configure();
cfg.AddAssembly("Bla");

// either recreate the schema in one call...
new SchemaExport(cfg).Execute(false, true, false);

// ...or drop and create explicitly
var se = new SchemaExport(cfg);
se.Drop(false, true);
se.Create(false, true);
Yes, it almost is. You don't have to create a new Configuration object before each test; you can create it once and reuse it. You can make it faster by using an in-memory database like SQLite for testing.
My integration tests do SessionFactory creation in a base class constructor, and SchemaExport in test fixture setup. I also test against SQLite running as an in-memory database for extra speed.
Ayende gave a good example of this approach in this blog post. Tobin Harris' article includes timing data of drop/create vs delete.
I use Proteus, an open-source library. Before each test, your current set of data is saved automatically and the set you want to test against is loaded (an empty DB, for example). After the last test, the saved set of data is reloaded, so the data present in the database before the tests is restored.
Since NHibernate is database-independent, another interesting option, if you are having it generate your database, is to run your tests against something like SQLite in memory. Things will run MUCH faster that way.
Here is an article on showing how to do this with ActiveRecord but you can take the concept and use it without ActiveRecord.
And here is a discussion if you're having trouble getting it working (I did at first).
In my testing code I need to have a blank/empty database for each test method. Is there code that would achieve that, to call in the @Before of the test?
Actually, you can always use JPQL:
em.createQuery("DELETE FROM MyEntity m").executeUpdate();
But note that there is no guarantee that the entity cache will be cleaned as well. For unit-test purposes, though, it looks like a good solution.
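If the cached entities are a concern, here is a short sketch of clearing them right after the bulk delete, using standard JPA calls; the second line only matters if a second-level cache is configured.
em.createQuery("DELETE FROM MyEntity m").executeUpdate();
em.clear();                                          // detach everything in the persistence context
em.getEntityManagerFactory().getCache().evictAll();  // evict the shared (second-level) cache, if any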
In my testing code I need to have a blank/empty database at each method.
I would run the test methods inside a transaction (and roll back at the end of each method). That's the usual approach. I don't see the point of committing a transaction and writing data to the database if you DELETE it just after. Just don't commit.
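A minimal sketch of that approach with plain JPA and JUnit follows; the persistence unit name test-pu is an assumption, and with Spring the @Transactional test support gives you the same behaviour out of the box.
import javax.persistence.EntityManager;
import javax.persistence.EntityManagerFactory;
import javax.persistence.Persistence;
import org.junit.After;
import org.junit.Before;

public abstract class TransactionalTestBase {

    protected static final EntityManagerFactory emf =
            Persistence.createEntityManagerFactory("test-pu"); // assumed persistence unit name

    protected EntityManager em;

    @Before
    public void begin() {
        em = emf.createEntityManager();
        em.getTransaction().begin();     // every test method runs inside this transaction
    }

    @After
    public void rollBack() {
        em.getTransaction().rollback();  // nothing the test wrote ever reaches the database
        em.close();
    }
}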
An alternative (not exclusive) would be to use DbUnit to put your database in a known state before a test execution. When doing this, you usually don't need to clean it up.
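A minimal DbUnit sketch, assuming a plain JDBC connection and a flat-XML dataset file; the JDBC URL, credentials and file path are placeholders.
import java.io.File;
import java.sql.Connection;
import java.sql.DriverManager;
import org.dbunit.database.DatabaseConnection;
import org.dbunit.database.IDatabaseConnection;
import org.dbunit.dataset.IDataSet;
import org.dbunit.dataset.xml.FlatXmlDataSetBuilder;
import org.dbunit.operation.DatabaseOperation;
import org.junit.Before;

public class MyDaoTest {

    @Before
    public void seedDatabase() throws Exception {
        Connection jdbc = DriverManager.getConnection("jdbc:h2:mem:test", "sa", "");
        IDatabaseConnection connection = new DatabaseConnection(jdbc);
        IDataSet dataSet = new FlatXmlDataSetBuilder().build(new File("src/test/resources/dataset.xml"));

        // delete everything referenced by the dataset, then insert its rows,
        // leaving the tables in a known state before each test
        DatabaseOperation.CLEAN_INSERT.execute(connection, dataSet);

        connection.close();
    }
}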
Another option would be to use raw JDBC to drop the database if it exists and then have JPA recreate the whole schema. It will be pretty slow, though.
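A rough sketch of that idea, assuming an H2 database and the standard JPA 2.1 schema-generation property; the URL, credentials and persistence unit name are placeholders.
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.Statement;
import java.util.HashMap;
import java.util.Map;
import javax.persistence.EntityManagerFactory;
import javax.persistence.Persistence;

public class SchemaReset {

    public static EntityManagerFactory dropAndRecreate() throws Exception {
        // raw JDBC: wipe whatever is there (DROP ALL OBJECTS is H2 syntax; other databases differ)
        try (Connection c = DriverManager.getConnection("jdbc:h2:mem:test", "sa", "");
             Statement st = c.createStatement()) {
            st.execute("DROP ALL OBJECTS");
        }

        // have JPA rebuild the schema from the mappings when the factory is created
        Map<String, String> props = new HashMap<>();
        props.put("javax.persistence.schema-generation.database.action", "create");
        return Persistence.createEntityManagerFactory("test-pu", props); // assumed unit name
    }
}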