I was thinking of implementing automated tests for different parts of an ActivePivot server, most importantly post-processors.
Since I am at the very beginning, I would like to know more about the state of the art in this field, the best practices, and any caveats to avoid.
If you have any experience, I will be delighted to read from you.
Cheers,
Pascal
That is a very broad question. An ActivePivot solution is a piece of Java software and inherits all the best practices regarding the testing and continuous build of a software project.
But here are some basic ActivePivot entry points:
How, where and when to write tests?
Write JUnit tests, run them with Maven, and set up a continuous build with Jenkins.
How to embed a (real, non-trivial) ActivePivot instance within a unit test?
Start an embedded Jetty web application server. The ActivePivot Sandbox application is an example of that (look at com.quartetfs.pivot.jettyserver.JettyServer). If you want a series of unit tests to run against the same ActivePivot instance, you can start the Jetty server statically (for instance in a static method annotated with @BeforeClass). In any case, don't forget to stop it at the end of the tests.
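For example, a minimal JUnit 4 sketch using the raw Jetty API (the port and webapp path are assumptions; the Sandbox's JettyServer class wraps similar calls):

import org.eclipse.jetty.server.Server;
import org.eclipse.jetty.webapp.WebAppContext;
import org.junit.AfterClass;
import org.junit.BeforeClass;

public class EmbeddedActivePivotTest {

    private static Server server;

    // Started once for the whole class, so every test method runs
    // against the same ActivePivot instance.
    @BeforeClass
    public static void startServer() throws Exception {
        server = new Server(8080);
        server.setHandler(new WebAppContext("src/main/webapp", "/"));
        server.start();
    }

    // Don't forget to stop the server at the end of the tests.
    @AfterClass
    public static void stopServer() throws Exception {
        server.stop();
    }

    // ... test methods querying the running instance ...
}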
How to write performance tests?
In the Sandbox project, there is a small MDX benchmark called com.quartetfs.pivot.client.MDXBenchmark. It is easy to enrich and a good starting point. There is also com.quartetfs.pivot.client.WebServiceClient, which illustrates connecting to ActivePivot.
How to test post-processors?
As of ActivePivot release 4.3.5 there is no framework dedicated to isolated post-processor testing. Post-processors are tested through queries (MDX queries or GetAggregates queries). Of course, if your post-processor implementation has some utility methods, those can be tested one by one in standard unit tests.
To test an ActivePivot-based project, the simplest approach is to reuse your Spring configuration. This can be done with ClassPathXmlApplicationContext:
ApplicationContext context = new ClassPathXmlApplicationContext("beans.xml");
This simple test will check that your Spring configuration is actually OK. Then, if you want to run a query, you could do the following:
IQueriesService queriesService = context.getBean(IQueriesService.class);
queriesService.execute(new MDXQuery(someMDX));
If you want to check your loading layer, you can do:
IStoreUniverse storeUniverse = context.getBean(IStoreUniverse.class);
for (IRelationalStore store : storeUniverse.values()) {
    assertEquals(hardcodedValue1, store.getSize());
    assertEquals(hardcodedValue2, store.search("someKey", "someValue").size());
}
This way, you don't need to start a web-app container, which may fail because it needs some port to be available (meaning, for instance, that you can't run several tests at the same time).
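Putting those pieces together, a minimal JUnit 4 sketch (the MDX string is a placeholder, and beans.xml stands in for your real configuration file):

import static org.junit.Assert.assertNotNull;

import org.junit.AfterClass;
import org.junit.BeforeClass;
import org.junit.Test;
import org.springframework.context.support.ClassPathXmlApplicationContext;

// IQueriesService and MDXQuery come from the ActivePivot jars.
public class SpringConfigurationTest {

    private static ClassPathXmlApplicationContext context;

    @BeforeClass
    public static void createContext() {
        // Fails the whole class early if the Spring configuration is broken.
        context = new ClassPathXmlApplicationContext("beans.xml");
    }

    @AfterClass
    public static void closeContext() {
        context.close();
    }

    @Test
    public void runsAnMdxQuery() throws Exception {
        IQueriesService queriesService = context.getBean(IQueriesService.class);
        assertNotNull(queriesService);
        queriesService.execute(new MDXQuery("SELECT FROM [SomeCube]")); // placeholder MDX
    }
}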
Post-processors should be either Basic or DynamicAggregation post-processors, which are easy to test: focus on .init and the evaluation methods called on point ILocations. Advanced post-processors cannot reasonably be unit-tested. I then advise writing MDX queries that are as simple as possible while still relevant to the post-processor.
Any unit-test framework and mock framework could be used; still, I advise using JUnit and Mockito.
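For the utility-method case, a minimal sketch with JUnit and Mockito (the helper and the WeightProvider collaborator are hypothetical, not ActivePivot API):

import static org.junit.Assert.assertEquals;
import static org.mockito.Mockito.mock;
import static org.mockito.Mockito.when;

import org.junit.Test;

public class PostProcessorUtilTest {

    // Hypothetical collaborator a post-processor might delegate to.
    interface WeightProvider {
        double weightFor(String measureName);
    }

    // Hypothetical utility method extracted from a post-processor:
    // a weighted combination of two underlying aggregates.
    static double weightedSum(double a, double b, WeightProvider weights, String measure) {
        double w = weights.weightFor(measure);
        return w * a + (1 - w) * b;
    }

    @Test
    public void combinesUnderlyingMeasures() {
        WeightProvider weights = mock(WeightProvider.class);
        when(weights.weightFor("pnl")).thenReturn(0.25);
        assertEquals(0.25 * 2.0 + 0.75 * 4.0, weightedSum(2.0, 4.0, weights, "pnl"), 1e-9);
    }
}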
I would recommend using Spring's JUnit support (SpringJUnit4ClassRunner) to launch the context. You can then autowire the beans and access things like the queries service and the ActivePivot manager directly.
@RunWith(SpringJUnit4ClassRunner.class)
@ContextConfiguration(locations = {"classpath:SPRING-INF/ActivePivot.xml", "classpath:customTestContext-test.xml"})
...
@Resource
private IActivePivotManager manager;

@Resource
private IQueriesService queriesService;

@Test
public void testManagerOk() {
    assertNotNull(manager);
    assertEquals(State.STARTED, manager.getStatus());
}

@Test
public void testQueryOk() {
    // run a query with the queries service
}
...
You can define custom test properties for the tests in a separate context file, say for loading a set of test data.
Related
I have a project with several hundred test files. Some of the test files use the @DataJpaTest annotation, some are MockMvc-based controller tests, and some use mocked objects without any database dependency. Depending on test execution order, I see that the context needs to be re-initialized for the different flavors of test files. Is there a way to control the execution order of test files so that context reloads can be avoided? Say, all mock tests first, followed by controller tests, and then the @DataJpaTest ones?
Right now test execution takes about 30 minutes, and I am looking for ways to speed it up.
JUnit Jupiter provides options to control the test execution order.
However, you should also look into your test setup and check whether your tests create too many application contexts.
The Spring Test framework can cache application contexts and reuse them across test classes. See the Spring Test documentation for more info.
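For example, since JUnit 5.8 you can register a class orderer. A sketch that pushes all @DataJpaTest classes to the end, so classes sharing a context flavor run together (the grouping rule is an assumption to adapt):

import java.util.Comparator;

import org.junit.jupiter.api.ClassDescriptor;
import org.junit.jupiter.api.ClassOrderer;
import org.junit.jupiter.api.ClassOrdererContext;
import org.springframework.boot.test.autoconfigure.orm.jpa.DataJpaTest;

// Orders test classes so that all @DataJpaTest classes run last,
// letting Spring reuse each cached context for longer stretches.
public class ContextGroupingOrderer implements ClassOrderer {

    @Override
    public void orderClasses(ClassOrdererContext context) {
        context.getClassDescriptors().sort(
                Comparator.comparingInt((ClassDescriptor d) -> d.isAnnotated(DataJpaTest.class) ? 1 : 0));
    }
}

Register it globally by setting junit.jupiter.testclass.order.default=com.example.ContextGroupingOrderer in junit-platform.properties (the package name is a placeholder).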
I am working with XDocument within an MVC4 Web API application in Visual Studio 2010 and am unsure about the testing strategy.
Most of my unit tests make use of an in-memory XDocument, which works well for controller, service, and repository tests.
However, I have the XDocument.Load(filename) and XDocument.Save(filename) scenarios, which I would like to test (with either unit or integration tests).
I have been looking at the following question/answer on SO here, but I'm not sure how to proceed.
public class PathProvider
{
    public virtual string GetPath()
    {
        return HttpContext.Current.Server.MapPath("App_Data/policies.xml");
    }
}
PathProvider pathProvider = new PathProvider();
XDocument xdoc = XDocument.Load(pathProvider.GetPath());
So, I get that I can now mock calls to whatever calls XDocument.Load(pathProvider.GetPath()).
Should I then be trying to test that PathProvider works? If so, how would I approach this?
Thanks
Davy
Should I then be trying to test that PathProvider works? If so, how would I approach this?
My answer is no, at least not with an automated test to begin with.
Based purely on the code snippet you provided, the PathProvider is a wrapper (adapter) around the ASP.NET framework. The only tests I would rely on here would be collaboration tests, e.g. I would verify that GetPath() is invoked when you expect it to be. That being said, context is key here.
PathProvider pathProvider = new PathProvider();
XDocument xdoc = XDocument.Load(pathProvider.GetPath());
The above code reeks of "testing the framework", so I would not even bother to unit test it. If you really wanted to ensure that this portion of code does the right thing with the XML files and so forth, I'd fall back to an integration test, though do consider that this may be slow and brittle.
My solution therefore would be to abstract the concept of the XML document being loaded, as you have with the PathProvider. From here, manual testing would suffice. Along the way, if there is any domain logic included in such adapters, I would extract classes/methods which you could test in isolation, without the need to worry about XML or document loading.
We use TestNG and Selenium WebDriver to test our web application.
Now our problem is that we often have several tests that need to run in a certain order, e.g.:
login to application
enter some data
edit the data
check that it's displayed correctly
Now obviously these tests need to run in that precise order.
At the same time, we have many other tests which are totally independent from the list of tests above.
So we'd like to be able to somehow put tests into "groups" (not necessarily groups in the TestNG sense), and then run them such that:
tests inside one "group" always run together and in the same order
but different test "groups" as a whole can run in any order
The second point is important, because we want to avoid dependencies between tests in different groups (so different test "groups" can be used and developed independently).
Is there a way to achieve this using TestNG?
Solutions we tried
At first we just put tests that belong together into one class, and used dependsOnMethods to make them run in the right order. This used to work in TestNG V5, but in V6 TestNG will sometimes interleave tests from different classes (while respecting the ordering imposed by dependsOnMethods). There does not seem to be a way to tell TestNG "Always run tests from one class together".
We considered writing a method interceptor. However, this has the disadvantage that running tests from inside an IDE becomes more difficult (because directly invoking a test on a class would not use the interceptor). Also, tests using dependsOnMethods cannot be ordered by the interceptor, so we'd have to stop using that. We'd probably have to create our own annotation to specify ordering, and we'd like to use standard TestNG features as far as possible.
The TestNG docs propose using preserve-order to order tests. That looks promising, but only works if you list every test method separately, which seems redundant and hard to maintain.
Is there a better way to achieve this?
I am also open for any other suggestions on how to handle tests that build on each other, without having to impose a total order on all tests.
PS
alanning's answer points out that we could simply keep all tests independent by doing the necessary setup inside each test. That is in principle a good idea (and some tests do this); however, sometimes we need to test a complete workflow, with each step depending on all previous steps (as in my example). Doing that with "independent" tests would mean running the same multi-step setup over and over, which would make our already slow tests even slower. Instead of three tests doing:
Test 1: login to application
Test 2: enter some data
Test 3: edit the data
we would get
Test 1: login to application
Test 2: login to application, enter some data
Test 3: login to application, enter some data, edit the data
etc.
In addition to needlessly increasing testing time, this also feels unnatural - it should be possible to model a workflow as a series of tests.
If there's no other way, this is probably how we'll do it, but we are looking for a better solution, without repeating the same setup calls.
You are mixing "functionality" and "test". Separating them will solve your problem.
For example, create a helper class/method that executes the steps to log in, then call that class/method in your Login test and all other tests that require the user to be logged in.
Your other tests do not actually need to rely on your Login "Test", just the login class/method.
If later back-end modifications introduce a bug in the login process, all of the tests which rely on the Login helper class/method will still fail as expected.
Update:
It turns out this already has a name: the Page Object pattern. Here is a page with Java examples of using this pattern:
http://code.google.com/p/selenium/wiki/PageObjects
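For instance, a minimal WebDriver sketch of such a helper (the URL and the element locators are assumptions to adapt to your application):

import org.openqa.selenium.By;
import org.openqa.selenium.WebDriver;

// Shared by every test that needs an authenticated session, so no test
// has to depend on a separate "login test".
public class LoginHelper {

    public static void login(WebDriver driver, String username, String password) {
        driver.get("http://example.com/login");                   // assumed URL
        driver.findElement(By.id("username")).sendKeys(username); // assumed locator
        driver.findElement(By.id("password")).sendKeys(password); // assumed locator
        driver.findElement(By.id("loginButton")).click();         // assumed locator
    }
}

Each test then calls LoginHelper.login(driver, user, password) in its setup instead of depending on a login test.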
Try dependsOnGroups along with dependsOnMethods, and add all the methods of one class to the same group.
For example:
#Test(groups={"cls1","other"})
public void cls1test1(){
}
#Test(groups={"cls1","other"}, dependsOnMethods="cls1test1", alwaysrun=true)
public void cls1test2(){
}
In class 2
#Test(groups={"cls2","other"}, dependsOnGroups="cls1", alwaysrun=true)
public void cls2test1(){
}
#Test(groups={"cls2","other"}, dependsOnMethods="cls2test1", dependsOnGroups="cls1", alwaysrun=true)
public void cls2test2(){
}
There is an easy (whilst hacky) workaround for this if you are comfortable with your first approach:
At first we just put tests that belong together into one class, and used dependsOnMethods to make them run in the right order. This used to work in TestNG V5, but in V6 TestNG will sometimes interleave tests from different classes (while respecting the ordering imposed by dependsOnMethods). There does not seem to be a way to tell TestNG "Always run tests from one class together".
We had a similar problem: we needed our tests to run class-wise because we couldn't guarantee that the test classes wouldn't interfere with each other.
This is what we did:
Put a
@Test(dependsOnGroups = { "dummyGroupToMakeTestNGTreatThisAsDependentClass" })
annotation on an abstract test class or interface that all your tests inherit from.
This will put all your methods in the "first group" (group as described in this paragraph, not a TestNG group). Within each group, the ordering is class-wise.
Thanks to Cedric Beust, who provided a very quick answer for this.
Edit:
The group dummyGroupToMakeTestNGTreatThisAsDependentClass actually has to exist, but you can just add a dummy test case for that purpose, as sketched below.
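A minimal sketch of the workaround (the group name comes from the answer above; parking the dummy test in its own holder class is an assumption):

import org.testng.annotations.Test;

// The depended-on group must actually exist, so one dummy test carries it.
public class DummyGroupHolder {
    @Test(groups = "dummyGroupToMakeTestNGTreatThisAsDependentClass")
    public void dummy() {
    }
}

// In a separate file: every real test class extends this base class; the
// class-level @Test makes TestNG treat each subclass as a dependent class,
// so its methods are kept together.
@Test(dependsOnGroups = { "dummyGroupToMakeTestNGTreatThisAsDependentClass" })
public abstract class AbstractTestBase {
}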
Here's the situation that I'm working with:
Build tests in Selenium
Get all the tests running correctly (in Firefox)
Export all the tests to MSTest (so that each test can be run in IE, Chrome and FF)
If any test needs to be modified, do that editing in Selenium IDE
So it's a very one-way workflow. However, I'd now like to do a bit more automation. For instance, I'd like every test to run under each of two accounts. I'm running into a maintenance issue: if I have 6 tests that I want to run under two accounts, suddenly I'd need 12 tests in the Selenium IDE. That's too much editing, and a ton of that code is exactly the same.
How can I share chunks of Selenium tests among tests? Should I use Selenium IDE to develop the test first time then never use it again (only doing edits in VS after that)?
Selenium code is very linear after you export it from the IDE.
For example (ignore syntax):
someTestMethod() {
    selenium.open("http://someLoginPage.com");
    selenium.type("usernameField", "foo");
    selenium.type("passwordField", "bar");
    selenium.click("loginButton");
    selenium.waitForPageToLoad("30000");
    assertTrue(selenium.isTextPresent("Welcome * foo"));
}
This is the login page. Every single one of your tests will have to use it. You should refactor it into a method.
someTestMethod() {
    selenium.open("http://someLoginPage.com");
    String username = "foo";
    String password = "bar";
    performLogin(username, password);
}

performLogin(String username, String password) {
    selenium.type("usernameField", username);
    selenium.type("passwordField", password);
    selenium.click("loginButton");
    selenium.waitForPageToLoad("30000");
    assertTrue(selenium.isTextPresent("Welcome * " + username));
}
The performLogin() method does not have to be in the same file as your test code itself. You can create a separate class for it with your methods and share it between your tests.
We have classes that correspond to certain functionalities on our UI. For example, we have many ways to search in our app. All methods that helps you with search functionality will be in the SearchUtil class.
Structuring your tests similarly will give you the following advantages:
If the UI changes (say, the id of a field), you go to your one method, update the id, and you are good to go.
If the flow of your logic changes, you also have only one place to update.
To verify that your changes worked, you only have to run one of the tests. All other tests use the same code, so they should work too.
The code is also a lot more expressive. With well-named methods, you create a higher level of abstraction that is easier to read and understand.
Flexible and extensible! The possibilities are limitless. At this point you can use conditions, loops, and exceptions, and you can do your own reporting, etc.
This website is an excellent resource on what you are trying to accomplish.
Good Luck!
There are two aspects to consider regarding code reuse:
Eliminating code duplication in your own code base -- c_maker touched on this.
Eliminating code duplication from code generated by Selenium IDE.
I should point out that my comments lean heavily toward the one-way workflow that you are using, jcollum, but even more so: I use the IDE to generate code just once for a given test case. I never go back to the IDE to modify the test case and re-export it. (I do keep the IDE test case around as a diagnostic tool, to experiment with things while I am fine-tuning and customizing my test case in code; in my case, C#.)
The reasons I favor using IDE tests only as a starting point are:
IDE tests will always have a lot of code duplication from one test to another; sometimes even within one test. That is just the nature of the beast.
In code I can make the test case more "user-friendly", i.e. I can encapsulate arcane locators within a meaningfully named property or method, so it is much clearer what the test case is doing.
Working in code rather than the IDE just provides much greater flexibility.
So back to IDE-generated code: it always has massive amounts of duplication. Example:
verifyText "//form[#id='aspnetForm']/div[2]/div/div[2]/div[1]/span" Home
generates this block of code:
try
{
    Assert.AreEqual("Home",
        selenium.GetText("//form[@id='aspnetForm']/div[2]/div/div[2]/div[1]/span"));
}
catch (AssertionException e)
{
    verificationErrors.Append(e.Message);
}
Each subsequent verifyText command generates an identical block of code, differing only by the two parameters.
My solution to this pungent code smell was to develop Selenium Sushi, a Visual Studio C# project template and library that lets you eliminate most if not all of this duplication. With the library I can simply write this one line of code to match the original line of code from the IDE test case:
Verify.AreEqual("Home",
selenium.GetText("//form[#id='aspnetForm']/div[2]/div/div[2]/div[1]/span"));
I have an extensive article covering this (Web Testing with Selenium Sushi: A Practical Guide and Toolset), published on Simple-Talk.com in February 2011.
You can also put some fragments or one-liners, e.g.
note( "now on page: " . $sel->get_location() . ", " . $sel->get_title() ;
into the "code snippets" collection of your IDE ( I use Eclipse).
That's not true reuse, but hey it works for me for throwaway testscripts or quick enhancements of existing testscripts.
I have a project that builds with Maven 2 and runs a series of JUnit test cases against the code. This has worked fine up until this point, but I now have 2 tests that must run in a certain sequence for things to work correctly, let's say TestA and TestB (A then B). Unfortunately, Maven 2 doesn't understand this, so I'm looking for a way of convincing it to run the tests in this order.
The problem is that I'm setting some final static fields in TestB, but I'm doing this from TestA, which itself uses those fields, and successful execution of the test depends on those fields being set to their new values (there is absolutely no way around this, otherwise I would have taken that road long before now). So it is imperative that TestA loads first, and it will of course cause TestB to be loaded when it tries to access it. However, Maven 2 has decided that it will run TestB then TestA, which means those final fields are already set and cannot be changed.
So what I'm looking for is either a way to specify the order in which the tests are executed (A then B, every time), or a way to easily cause TestB to be reloaded by whichever classloader that JUnit uses.
EDIT - one other possibility might be an option like the one the old JUnit GUI tool had, which causes all classes to be reloaded for each test. I have looked and looked and not found such a flag in the Maven JUnit plugin; if such a thing exists, that would also work.
Fork mode can force each test to run in its own JVM so every class is re-loaded for each test.
<plugin>
  <groupId>org.apache.maven.plugins</groupId>
  <artifactId>maven-surefire-plugin</artifactId>
  <configuration>
    <forkMode>pertest</forkMode>
  </configuration>
</plugin>
The test order in JUnit is intentionally undefined; this is not a Maven issue, and it's probably just luck that your tests were running OK up until now.
Sal's answer addresses your question directly; however, forking the JVM for each test can hugely increase your build time when you have a large test count.
An alternative approach would be to use a testing library such as PowerMock (it currently works with EasyMock and Mockito) to clear the static fields in TestB's initialisation. This avoids the need for any JVM forking and ensures your tests stay portable.
From the PowerMock website:
PowerMock is a framework that extends other mock libraries such as EasyMock with more powerful capabilities. PowerMock uses a custom classloader and bytecode manipulation to enable mocking of static methods, constructors, final classes and methods, and private methods, as well as removal of static initializers and more. By using a custom classloader, no changes need to be made to the IDE or continuous integration servers, which simplifies adoption. Developers familiar with EasyMock will find PowerMock easy to use, since the entire expectation API is the same, both for static methods and constructors. PowerMock extends the EasyMock API with a small number of methods and annotations to enable the extra features. From version 1.1 PowerMock also has basic support for Mockito.
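For the scenario above, a minimal sketch using PowerMock's reflection helper (the field name "sharedValue" and the replacement value are hypothetical; note that truly final static fields may still resist reflection depending on the JVM):

import org.junit.Before;
import org.junit.Test;
import org.powermock.reflect.Whitebox;

public class TestA {

    @Before
    public void resetSharedState() {
        // Reset the static field on TestB that this test also reads
        // (hypothetical field name), instead of relying on class-loading order.
        Whitebox.setInternalState(TestB.class, "sharedValue", "value-for-TestA");
    }

    @Test
    public void runsRegardlessOfClassOrder() {
        // ... assertions that depend on the state set above ...
    }
}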