I am working with XDocument within an MVC4 Web API application in Visual Studio 2010 and am unsure about the testing strategy.
Most of my unit tests make use of an in-memory XDocument, which works well for controller, service, and repository tests.
However, I also have the XDocument.Load(filename) and XDocument.Save(filename) scenarios, which I would like to test (either with unit or integration tests).
I have been looking at the following question/answer on SO here, but I'm not sure how to proceed.
public class PathProvider
{
    public virtual string GetPath()
    {
        return HttpContext.Current.Server.MapPath("App_Data/policies.xml");
    }
}
PathProvider pathProvider = new PathProvider();
XDocument xdoc = XDocument.Load(pathProvider.GetPath());
So I get that I can now mock PathProvider in whatever code calls XDocument.Load(pathProvider.GetPath()).
Should I then be trying to test that PathProvider works? If so, how would I approach this?
Thanks
Davy
Should I then be trying to test that PathProvider works? If so, how would I approach this?
My answer is no, at least not with an automated test to begin with.
Judging purely by the code snippet you provided, PathProvider is a wrapper (adapter) around the ASP.NET framework. The only tests I would rely on here would be collaboration tests, e.g. I would verify that GetPath() is invoked when you expect it to be. That being said, context is key here.
PathProvider pathProvider = new PathProvider();
XDocument xdoc = XDocument.Load(pathProvider.GetPath());
The above code reeks of "testing the framework", so I would not even bother to unit test it. If you really wanted to ensure that this portion of code does the right thing with the XML files and so forth, I'd fall back to an integration test, though do consider that such tests may be slow and brittle.
My solution therefore would be to abstract the concept of the XML document being loaded, as you have with the PathProvider. From here, manual testing would suffice. Along the way, if there is any domain logic in such adapters, I would extract classes/methods that you can test in isolation without worrying about XML or document loading.
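To make the collaboration test mentioned above concrete, here is a minimal sketch in Java with Mockito (in .NET, Moq works the same way). The PolicyLoader class and the paths are made up for illustration:

import static org.mockito.Mockito.*;

import org.junit.Test;

public class PolicyLoaderTest {

    // Java analogue of the asker's PathProvider; getPath() must be
    // overridable (non-final) so Mockito can stub it.
    static class PathProvider {
        public String getPath() { return "App_Data/policies.xml"; }
    }

    // Made-up stand-in for whatever calls XDocument.Load(pathProvider.GetPath()).
    static class PolicyLoader {
        private final PathProvider pathProvider;
        PolicyLoader(PathProvider pathProvider) { this.pathProvider = pathProvider; }
        void load() {
            String path = pathProvider.getPath();
            // ... load and parse the document at 'path' ...
        }
    }

    @Test
    public void loadAsksThePathProviderForThePath() {
        PathProvider pathProvider = mock(PathProvider.class);
        when(pathProvider.getPath()).thenReturn("test-data/policies.xml");

        new PolicyLoader(pathProvider).load();

        // The collaboration under test: GetPath() is invoked as expected.
        verify(pathProvider).getPath();
    }
}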
Related
I am thinking of implementing automated tests for different parts of an ActivePivot server, most importantly post-processors.
Since I am at the very beginning, I would like to know more about the state of the art in this field, what the best practices are, and whether there are any caveats to avoid.
If you have any experience, I would be delighted to hear from you.
Cheers,
Pascal
That is a very broad question. An ActivePivot solution is a piece of Java software and inherits all the best practices regarding testing and continuous builds of a software project.
But here are some basic ActivePivot entry points:
How, where and when to write tests?
Write JUnit tests, run them with Maven, and set up continuous builds with Jenkins.
How to embed a (real, non-trivial) ActivePivot instance within a unit test?
Start an embedded Jetty web application server. The ActivePivot Sandbox application is an example of that (look at com.quartetfs.pivot.jettyserver.JettyServer). If you want a series of unit tests to run against the same ActivePivot instance, you can start the Jetty server statically (for instance in a static method annotated with @BeforeClass). In any case, don't forget to stop it at the end of the tests.
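For instance, a minimal sketch of that pattern using the plain org.eclipse.jetty API (the port, webapp path, and class names are illustrative; the Sandbox's JettyServer wraps something similar):

import org.eclipse.jetty.server.Server;
import org.eclipse.jetty.webapp.WebAppContext;
import org.junit.AfterClass;
import org.junit.BeforeClass;

public class EmbeddedServerTestBase {

    private static Server server;

    @BeforeClass
    public static void startServer() throws Exception {
        server = new Server(9090);
        server.setHandler(new WebAppContext("src/main/webapp", "/"));
        // The web application is now live for every test in the class.
        server.start();
    }

    @AfterClass
    public static void stopServer() throws Exception {
        // Don't forget to stop it at the end of the tests.
        if (server != null) {
            server.stop();
        }
    }
}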
How to write performance tests?
In the Sandbox project, there is a small MDX benchmark called com.quartetfs.pivot.client.MDXBenchmark. It is easy to enrich and a good starting point. There is also com.quartetfs.pivot.client.WebServiceClient, which illustrates connecting to ActivePivot.
How to test post-processors?
As of ActivePivot release 4.3.5 there is no framework dedicated to isolated post-processor testing. Post-processors are tested through queries (MDX queries or GetAggregates queries). Of course, if your post-processor implementation has some utility methods, those can be tested one by one in standard unit tests.
To test an ActivePivot-based project, the simplest approach is to reuse your Spring configuration. This can be done with ClassPathXmlApplicationContext:
ApplicationContext context = new ClassPathXmlApplicationContext("beans.xml");
This simple test will check that your Spring configuration is actually OK. Then, if you want to run a query, you could do the following:
IQueriesService queriesService = context.getBean(IQueriesService.class);
queriesService.execute(new MDXQuery(someMDX));
If you want to check your loading layer, you can do:
IStoreUniverse storeUniverse = context.getBean(IStoreUniverse.class);
for (IRelationalStore store : storeUniverse.values()) {
    assertEquals(hardcodedValue1, store.getSize());
    assertEquals(hardcodedValue2, store.search("someKey", "someValue").size());
}
This way, you don't need to start a web-app container, which may fail because it needs some port to be available (meaning for instance you can't run several tests at the same time).
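Putting those snippets together, a complete JUnit test might look like the following sketch ("beans.xml" and the MDX string are placeholders; IQueriesService and MDXQuery are the ActivePivot types mentioned above, their imports omitted here):

import static org.junit.Assert.assertNotNull;

import org.junit.BeforeClass;
import org.junit.Test;
import org.springframework.context.ApplicationContext;
import org.springframework.context.support.ClassPathXmlApplicationContext;

public class ActivePivotContextTest {

    private static ApplicationContext context;

    @BeforeClass
    public static void loadContext() {
        // Throws immediately if the Spring configuration is broken.
        context = new ClassPathXmlApplicationContext("beans.xml");
    }

    @Test
    public void queriesServiceIsExposed() {
        assertNotNull(context.getBean(IQueriesService.class));
    }

    @Test
    public void simpleMdxQueryRuns() {
        IQueriesService queriesService = context.getBean(IQueriesService.class);
        queriesService.execute(new MDXQuery("SELECT FROM [SomeCube]")); // illustrative MDX
    }
}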
Post-processors should be either Basic or DynamicAggregation post-processors, which are easy to test: focus on init() and the evaluation methods called on point ILocations. Advanced post-processors cannot reasonably be unit-tested. For those, I advise writing MDX queries that are as simple as possible while still being relevant to the post-processor.
Any unit-test framework and mock framework could be used. Still, I advise using JUnit and Mockito.
I would recommend using the SpringJUnit4ClassRunner to launch the context. You can then autowire the beans and access things like the queries service and the ActivePivot manager directly.
@RunWith(SpringJUnit4ClassRunner.class)
@ContextConfiguration(locations = {"classpath:SPRING-INF/ActivePivot.xml", "classpath:customTestContext-test.xml"})
...
@Resource
private IActivePivotManager manager;

@Resource
private IQueriesService queriesService;

@Test
public void testManagerOk() {
    assertNotNull(manager);
    assertTrue(manager.getStatus().equals(State.STARTED));
}

@Test
public void testQueryExecution() {
    // run a query with the queries service
}
...
You can define custom test properties for the tests in a separate context file, say for loading a set of test data.
Can anyone tell me whether there are any tools or Eclipse-based plugins available for generating relevant test cases for Apex classes on the Salesforce platform? It seems that with code coverage, Salesforce is not checking outcomes the way we expect with JUnit; they want to verify that test cases exercise the flows of the source classes (i.e. the paths the code goes through).
Please don't take this post the wrong way; I don't want anyone to write test cases for my code :). I have posted this question because of the way Salesforce expects code coverage to work. Thanks.
Although Salesforce requires a certain percentage of code coverage for your test cases, you really need to be writing cases that check the results to ensure that the code behaves as designed.
So, even if there were a tool that could generate a test class achieving 100% coverage of your code, it wouldn't be able to test the results of those method calls, leaving you with a false sense of having "tested code".
I've found that breaking up long methods into separate, sometimes static, methods makes it easier to do unit testing. You can test each individual method, and not worry so much about tweaking parameters to a single method so that it covers all execution paths.
It's now possible to generate test classes automatically for your class/trigger/batch. You can install the "Test Class Generator" app from AppExchange and see it working.
This can really help you generate test classes and save a lot of your development time.
Here's the situation that I'm working with:
Build tests in Selenium
Get all the tests running correctly (in Firefox)
Export all the tests to MSTest (so that each test can be run in IE, Chrome and FF)
If any test needs to be modified, do that editing in Selenium IDE
So it's a very one-way workflow. However, I'd now like to do a bit more automation. For instance, I'd like every test to run under each of two accounts, and I'm running into a maintenance issue: if I have 6 tests that I want to run under two accounts, suddenly I'd need 12 tests in Selenium IDE. That's too much editing, and a ton of that code is exactly the same.
How can I share chunks of Selenium tests among tests? Should I use Selenium IDE to develop each test the first time and then never use it again (only doing edits in VS after that)?
Selenium code is very linear after you export it from the IDE.
For example (ignore syntax):
someTestMethod() {
    selenium.open("http://someLoginPage.com");
    selenium.type("usernameField", "foo");
    selenium.type("passwordField", "bar");
    selenium.click("loginButton");
    selenium.waitForPageToLoad("30000");
    assertTrue(selenium.isTextPresent("Welcome * foo"));
}
This is the login page. Every single one of your tests will have to use it. You should refactor it into a method.
someTestMethod() {
    selenium.open("http://someLoginPage.com");
    String username = "foo";
    String password = "bar";
    performLogin(username, password);
}

performLogin(String username, String password) {
    selenium.type("usernameField", username);
    selenium.type("passwordField", password);
    selenium.click("loginButton");
    selenium.waitForPageToLoad("30000");
    assertTrue(selenium.isTextPresent("Welcome * " + username));
}
The performLogin() method does not have to be in the same file as your test code itself. You can create a separate class for it with your methods and share it between your tests.
We have classes that correspond to certain functionalities in our UI. For example, we have many ways to search in our app. All methods that help with the search functionality live in the SearchUtil class.
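For illustration, such a utility class might look like this sketch (the locator names and the method are made up):

import com.thoughtworks.selenium.Selenium;

// Shared helper for all search-related actions; the Selenium instance is
// injected so every test can reuse the same methods.
public class SearchUtil {

    private final Selenium selenium;

    public SearchUtil(Selenium selenium) {
        this.selenium = selenium;
    }

    public void searchFor(String term) {
        selenium.type("searchField", term);
        selenium.click("searchButton");
        selenium.waitForPageToLoad("30000");
    }
}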
Structuring your tests similarly will give you the following advantages:
If the UI changes (e.g. the id of a field), you go to your one method, update the id, and you are good to go
If the flow of your logic changes, you also have only one place to update
To test whether your changes worked, you only have to run one of the tests to verify. All other tests use the same code, so it should work.
The code is a lot more expressive as you look at it. With well-named methods, you create a higher level of abstraction that is easier to read and understand.
Flexible and extensible! The possibilities are limitless. At this point you can use conditions, loops, exceptions, you can do your own reporting, etc...
This website is an excellent resource on what you are trying to accomplish.
Good Luck!
There are two aspects to consider regarding code reuse:
Eliminating code duplication in your own code base -- c_maker touched on this.
Eliminating code duplication from code generated by Selenium IDE.
I should point out that my comments lean heavily to the one-way workflow that you are using, jcollum, but even more so: I use IDE to generate code just once for a given test case. I never go back to the IDE to modify the test case and re-export it. (I do keep the IDE test case around as a diagnostic tool, for when I want to experiment with things while I am fine-tuning and customizing my test case in code; in my case, C#.)
The reasons I favor using IDE tests only as a starting point are:
IDE tests will always have a lot of code duplication from one test to another; sometimes even within one test. That is just the nature of the beast.
In code I can make the test case more "user-friendly", i.e. I can encapsulate arcane locators within a meaningfully named property or method so it is much clearer what the test case is doing (see the sketch after this list).
Working in code rather than the IDE just provides much greater flexibility.
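Here is a minimal sketch of that locator encapsulation (shown in Java to match the other examples here; the idea is identical in C#, and the XPath is the one from the IDE export below):

// Hide the arcane XPath behind a method whose name states the intent.
private String breadcrumbHomeLocator() {
    return "//form[@id='aspnetForm']/div[2]/div/div[2]/div[1]/span";
}

// ... so an assertion reads as:
// assertEquals("Home", selenium.getText(breadcrumbHomeLocator()));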
So back to IDE-generated code: it always has massive amounts of duplication. Example:
verifyText "//form[#id='aspnetForm']/div[2]/div/div[2]/div[1]/span" Home
generates this block of code:
try
{
    Assert.AreEqual("Home",
        selenium.GetText("//form[@id='aspnetForm']/div[2]/div/div[2]/div[1]/span"));
}
catch (AssertionException e)
{
    verificationErrors.Append(e.Message);
}
Each subsequent verifyText command generates an identical block of code, differing only by the two parameters.
My solution to this pungent code smell was to develop Selenium Sushi, a Visual Studio C# project template and library that lets you eliminate most if not all of this duplication. With the library I can simply write this one line of code to match the original line of code from the IDE test case:
Verify.AreEqual("Home",
selenium.GetText("//form[#id='aspnetForm']/div[2]/div/div[2]/div[1]/span"));
I have an extensive article covering this (Web Testing with Selenium Sushi: A Practical Guide and Toolset) that was just published on Simple-Talk.com in February, 2011.
You can also put some fragments or one-liners, e.g.
note( "now on page: " . $sel->get_location() . ", " . $sel->get_title() ;
into the "code snippets" collection of your IDE ( I use Eclipse).
That's not true reuse, but hey it works for me for throwaway testscripts or quick enhancements of existing testscripts.
My team is working on educating some of our developers about testing. They understand why to write tests and are on board that they should write tests, but are falling a little short on writing good tests.
I just saw a commit like this:
public class SomeTest {
    @Test
    public void testSomething() {
        System.out.println(new MySomething().getData());
    }
}
So they were at least making sure their code gave them the expected output by looking.
It will be a bit before we can really sell the idea of code reviews. In the meantime I was considering having JUnit fail any tests that do not have actual assertXXX or fail statements in them. I would then like that failure message to say something like "Your tests should use assertions and actually examine the output!".
I fully expect this to lead to calls like assertTrue(1 == 1);. We're working on team buy-in for proper testing and code reviews; in the meantime, are there any technical mechanisms we can use to make life easier for the developers that already get it? What about technical mechanisms to help the new guys understand?
I think you should consider organizational changes: mentoring, training, code reviews.
The tools can only help you if you're using them in good faith with a base understanding of the goals. If one of these is missing they won't help you.
Humans are just intelligent enough to do dumb things that work around metrics. I don't think your assessment that "they" are on board is correct if they can't write a single useful test. Automatic tools are simply not the correct tools at this stage. You can't learn by being told by a program what to do next.
You can use a static code analyzer.
I use PMD, which includes a JUnit rule set. There are a lot of IDE plugins that will mark rule violations in the IDE. You can configure the rule sets to your needs.
You will also profit from the other rule sets, which will warn you about code style / best-practice violations (although sometimes you have to decide whether the tool or you are the fool :-)).
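For example, a minimal PMD ruleset that enables the relevant JUnit rule might look like this (rule-reference syntax shown for PMD 5.x; adjust the path to your PMD version):

<?xml version="1.0"?>
<ruleset name="test-quality"
         xmlns="http://pmd.sourceforge.net/ruleset/2.0.0">
    <description>Flag JUnit tests that contain no assertion.</description>
    <!-- Ships with PMD's JUnit rule set: flags test methods without assert/fail. -->
    <rule ref="rulesets/java/junit.xml/JUnitTestsShouldIncludeAssert"/>
</ruleset>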
To answer the stated question for future viewers:
JUnit uses reflection to run each test method; if any Exception or Error is thrown, the test fails, otherwise it succeeds. The Assert class is just a utility class.
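To make that concrete, here is a tiny sketch of the mechanism (illustrative only, not JUnit's actual implementation):

import java.lang.reflect.InvocationTargetException;
import java.lang.reflect.Method;

public class TinyRunner {

    // Find @Test methods by reflection, invoke them, and treat anything
    // thrown by the test body (Exception or Error) as a failure.
    public static void run(Class<?> testClass) throws Exception {
        for (Method method : testClass.getMethods()) {
            if (method.isAnnotationPresent(org.junit.Test.class)) {
                try {
                    method.invoke(testClass.newInstance());
                    System.out.println("PASS " + method.getName());
                } catch (InvocationTargetException e) {
                    // The test body threw: that is what makes a test fail.
                    System.out.println("FAIL " + method.getName() + ": " + e.getCause());
                }
            }
        }
    }
}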
Can I test my client-side GWT code without GWTTestCase? I've heard somewhere (I think it was at one of the Google I/O 2009 sessions) that they were successfully testing their code with a fake DOM, in the JVM rather than in JavaScript with a real DOM. That would be brilliant. The point of this would be to gain speed (an order of magnitude). Does anybody have any idea how to do this? My first question on Stack Overflow, hope I'm doing this right.
Thanks.
Eugen.
You should check out the Google I/O session by Ray Ryan.
Basically, you can use the Model/View/Presenter pattern and abstract away all the DOM-accessing code into the 'View' portion. By doing this, you can create a mock view and test the model/presenter using standard JUnit tests, running in the JVM, without the need for a browser or a DOM.
Not quite what you're looking for, but you should use the Model-View-Presenter pattern. Any code that requires the DOM should go in your View classes, and should be as dumb as possible. Complex logic goes in your Presenter classes. You can then test your presenter classes without needing a GWTTestCase.
E.g., a view might have a method like:
void setResponseText(String text);
Your presenter test case can then look something like:
void testSayHi() {
    // Record the expected call on the mock view; setResponseText is void,
    // so (EasyMock-style) it is recorded by simply invoking it before replay.
    mockView.setResponseText("hi there");
    replayMocks();
    presenter.sayHi();
    verifyMocks();
}
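For completeness, here is a minimal sketch of the view interface and presenter that a test like the above would exercise (all names are illustrative): the view stays dumb, so the presenter can be tested on the JVM against a mock view, with no DOM at all.

// The view exposes only simple setters; no logic lives here.
public interface GreetingView {
    void setResponseText(String text);
}

// All the interesting logic lives in the presenter, which only knows the
// view interface and is therefore trivially testable with a mock.
public class GreetingPresenter {

    private final GreetingView view;

    public GreetingPresenter(GreetingView view) {
        this.view = view;
    }

    public void sayHi() {
        view.setResponseText("hi there");
    }
}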