How to automate testing of Tridion templates (with TOM.NET)

I have a recurring problem in templating projects: I can't really test my work in any way other than running the templates in Template Builder. This is a major problem if I'm working on a TBB that is used in several different templates, because after changing the code in the TBB I should retest all of those templates (and probably with several different pages/components, as there might be slightly different cases depending on the content).
As you can see, in big projects where TBBs are reused a lot, changing them costs a lot of time due to the amount of testing necessary, and I would be eager to find a solution for this. I know that unit testing is virtually impossible with the current TOM.NET (most classes/methods are internal), so what could be an alternative way to achieve automated testing?
One solution I have looked into is to use the Core Service to initiate the rendering process of a template with some test content and then check whether the output is as expected, but achieving this requires quite a lot of code and thus produces unwanted overhead (although I think it still takes less time than manually retesting the cases). Also, this doesn't really allow you to test individual TBBs unless you (programmatically) create separate templates with individual TBBs (or a subset of them). The good thing about this solution is that you could run the tests on your local laptop while developing, assuming you can connect to the Tridion server (you'd still have to upload your code to Tridion before running the tests, so it's not a completely ideal solution).
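For concreteness, this is roughly what such a test could look like (a minimal sketch with NUnit; the page URI and the TemplateTestHarness helper are hypothetical, and the helper would wrap whatever rendering entry point you end up using - Core Service, the Template Builder web service, or fetching the published output):

```csharp
using System;
using NUnit.Framework;

// Hypothetical harness: implement RenderPage against whatever rendering entry
// point your setup exposes (Core Service, Template Builder web service, or an
// HTTP GET of the published page).
public static class TemplateTestHarness
{
    public static string RenderPage(string pageTcmUri)
    {
        throw new NotImplementedException("Wire this up to your rendering entry point.");
    }
}

[TestFixture]
public class ArticleTemplateTests
{
    // Hypothetical TCM URI of a page that holds known, stable test content.
    private const string TestPageId = "tcm:5-1234-64";

    [Test]
    public void ArticlePage_RendersTitleAndSummary()
    {
        string html = TemplateTestHarness.RenderPage(TestPageId);

        // Assertions against the known test content, not against live content.
        StringAssert.Contains("<h1>Test Article Title</h1>", html);
        StringAssert.Contains("class=\"summary\"", html);
    }
}
```

Once the harness exists, each additional template or content case only costs one small test like this.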
I know that another alternative is to use DD4T/CWA, where you can pretty much handle all the testing in the front end, as the templates are (usually) quite simple.
Any other ideas?

I agree that the emphasis is on automated testing rather than unit testing (which, after all, is mostly about object oriented programming). With Tridion work, it's about transforming data. What you need to test data transforms is to have known inputs, and to be able to make assertions about the outputs. I've tried various approaches over the years, but the most effective so far has been the following:
1) For every template, keep test content in a dedicated Folder, and test pages in a dedicated Structure Group. The content is the input to your tests, and isn't intended to change unless the test requirements change.
2) Put the components on the pages. Publish the pages. Keep it simple: you can often have a page for a single test scenario. You can automate publishing the pages if that helps.
3) Use web testing tools to verify the output. This could be HtmlUnit, Selenium or whatever.
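As a minimal sketch of step 3, here is what one such check could look like with Selenium WebDriver and NUnit (the URL and selectors are hypothetical and would point at a page published from the dedicated test Structure Group):

```csharp
using NUnit.Framework;
using OpenQA.Selenium;
using OpenQA.Selenium.Chrome;

[TestFixture]
public class PublishedPageTests
{
    private IWebDriver _driver;

    [SetUp]
    public void SetUp() => _driver = new ChromeDriver();

    [TearDown]
    public void TearDown() => _driver.Quit();

    [Test]
    public void TestPage_ShowsExpectedComponentPresentation()
    {
        _driver.Navigate().GoToUrl("http://staging.example.com/tests/article-all-fields.html");

        // The test content never changes, so the assertions can be exact.
        Assert.That(_driver.FindElement(By.CssSelector("h1.title")).Text,
                    Is.EqualTo("Test Article Title"));
        Assert.That(_driver.FindElements(By.CssSelector("div.related-item")).Count,
                    Is.EqualTo(3));
    }
}
```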
Basically - Tridion is an engine for executing transforms. You don't need a specialised test execution engine for this part, although it's useful to use one for testing the output.
Mocking the package sounds attractive, but as Vesa says, it can turn into a huge amount of work. The simple approach I have outlined works in practice, and was proved on a significant project. You could add variations on the theme if you like: one thing I've considered, but never done on a project, is to use the blueprint to give you more isolation. For example, you could test your page templates by localising your component templates to generate static and predictable component presentations. Suffice it to say that there's enough scope for creativity once you unshackle yourself from the baggage of unit testing approaches.

I have some experience with the Core Service scenario. You will just need to write some helpers to upload your templates, create compound templates, and run them. The tricky part, however, is verification.
You will need to write some test templates that will help you with verification. One way is to write a .NET template that you pass expected values to, and it does the verification. The other way is to write a DreamWeaver template that prints values from the package, which you then check against the expected values. The advantage of this method is that these values will be returned to you as the result of the Core Service Render action, and you can do all the verification on the client side.
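As a minimal sketch of the client-side check, assume a (hypothetical) debug DreamWeaver template that writes each package value on its own line as Name=Value; the test parses the rendered output and asserts against the expected values. A string literal stands in for the Render result here to keep the sketch self-contained.

```csharp
using System;
using System.Collections.Generic;
using NUnit.Framework;

public static class PackageOutputParser
{
    // Turns the debug template's "Name=Value" lines back into a dictionary.
    public static IDictionary<string, string> Parse(string renderedOutput)
    {
        var values = new Dictionary<string, string>();
        foreach (var line in renderedOutput.Split(new[] { '\n', '\r' },
                                                  StringSplitOptions.RemoveEmptyEntries))
        {
            int separator = line.IndexOf('=');
            if (separator > 0)
            {
                values[line.Substring(0, separator)] = line.Substring(separator + 1);
            }
        }
        return values;
    }
}

[TestFixture]
public class PackageValueTests
{
    [Test]
    public void ParsedOutput_ContainsExpectedPackageValues()
    {
        // In a real run this string would come from the Core Service Render call.
        string rendered = "Article.Title=Test Article Title\nArticle.LinkUrl=/articles/test";
        var values = PackageOutputParser.Parse(rendered);

        Assert.That(values["Article.Title"], Is.EqualTo("Test Article Title"));
        Assert.That(values["Article.LinkUrl"], Does.StartWith("/articles/"));
    }
}
```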
But the most difficult part is the dataset creation. It will probably take most of your time.

You could try to isolate the majority of the code in classes that can be unit tested.
I guess the main problem here is that Engine and Package are sealed, so you cannot easily mock them. But you can minimize the interaction with those objects and put the meat of your code in classes that take the relevant input and return the output that should be put in the package, etc.
I think you could get a lot of coverage of your TBBs just from unit tests with this approach.
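A minimal sketch of that separation (the TOM.NET namespaces and package keys are quoted from memory and may need adjusting for your Tridion version): the TBB is a thin adapter, and all the logic lives in a plain class that NUnit can test without Tridion.

```csharp
using NUnit.Framework;
using Tridion.ContentManager.Templating;
using Tridion.ContentManager.Templating.Assembly;

// Pure logic: no Engine, no Package, trivially unit-testable.
public static class TeaserBuilder
{
    public static string Build(string title, string summary)
    {
        return "<div class=\"teaser\"><h2>" + title + "</h2><p>" + summary + "</p></div>";
    }
}

// Thin TBB wrapper: only moves data in and out of the package.
public class RenderTeaser : ITemplate
{
    public void Transform(Engine engine, Package package)
    {
        string title = package.GetValue("Component.Fields.Title");
        string summary = package.GetValue("Component.Fields.Summary");

        string html = TeaserBuilder.Build(title, summary);
        package.PushItem("Output", package.CreateStringItem(ContentType.Html, html));
    }
}

// The logic class is covered by plain unit tests on the developer machine.
[TestFixture]
public class TeaserBuilderTests
{
    [Test]
    public void Build_WrapsTitleInHeading()
    {
        Assert.That(TeaserBuilder.Build("Hi", "Short"), Does.Contain("<h2>Hi</h2>"));
    }
}
```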

At a customer I've seen an implementation where the tests invoke the same web service that Template Builder uses to execute the templates, evaluate the results, etc.
Probably worth exploring.

I would suggest writing your own TestRunner with 2 goals: Create test data and run tests.
Create test data: The idea is to create sample data sets automatically (all fields, some fields, and only mandatory fields). (Bonus points for using Chuck Norris quotes instead of lorem ipsum.) The title of the sample content uses a naming scheme, like [TestContent], and/or the content lives in its own folder with metadata attached (to find it later).
Create test pages: Find the test content. Use GetListUsingItems to find pages where the template is used. Copy the page, paste it into a TestContent Structure Group, and save. Open the page, add the test content, remove the other content, and save the page with a special naming scheme.
Run tests: Find the test content, preview each page, and write out a report with rendering time, success status, and number of characters.
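A minimal sketch of the "run tests" loop; how you find the test pages and invoke preview (e.g. via the Core Service) varies, so those parts are passed in as parameters here rather than implemented.

```csharp
using System;
using System.Collections.Generic;
using System.Diagnostics;

public static class TemplateTestRunner
{
    // testPageIds: pages found in the TestContent Structure Group.
    // preview: renders one page and returns its output.
    public static void Run(IEnumerable<string> testPageIds, Func<string, string> preview)
    {
        foreach (string pageId in testPageIds)
        {
            var stopwatch = Stopwatch.StartNew();
            string output = string.Empty;
            string status = "OK";
            try
            {
                output = preview(pageId);
            }
            catch (Exception ex)
            {
                status = "FAILED: " + ex.Message;
            }
            stopwatch.Stop();

            // Report: page, status, rendering time, number of characters.
            Console.WriteLine("{0}\t{1}\t{2} ms\t{3} chars",
                pageId, status, stopwatch.ElapsedMilliseconds, output.Length);
        }
    }
}
```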

I consider your problem completely technology agnostic, regardless of the approach you use (thinking in the context of Tridion).
The problem is that you are modifying one thing that is used in multiple places (Component/Page Templates), and those places need to be tested before you push that as a valid change.
Even if you make proper changes, the code runs fine and you have a result, maybe it's not the result expected by the other TBBs that consume your output. That is the problem itself, unfortunately :(
If the problem is that you have to test all the templates using that TBB, that is still a problem with no solution. If the problem is that you don't want to impact the current platform with your changes/testing, nor interfere with other development going on, that is a different scenario.
I would solve the second one by creating a separate publication inheriting from the publication with the valid code/data to test (or have that created in advance), making your changes there and testing.
This approach makes sense if you are using the TBB as part of many Component/Page Templates.
If you have the luxury of granularity in the front end (your TBB produces an atomic piece of code), the complexity of the scenario would be slightly reduced, but you still have to test all the scenarios anyway.

Related

Test-Automation using MetaProgramming

I want to learn test automation using metaprogramming. I googled it and could not find anything. Can anybody suggest some resources where I can get info about "how to use metaprogramming to make test automation easier"?
That's a broad topic and not a lot has been written about it, because of the "dark corners" of metaprogramming.
What do you mean by "metaprogramming"?
As background, I consider metaprogramming to be any activity in which a tool (which we call a "metaprogramming tool") is used to inspect or modify the application software to achieve some effect.
Many people consider "reflection" to be a kind of metaprogramming; others consider (C++-style) templates to be metaprogramming; some suggest aspect-oriented programming.
I sort of agree, but think these are weak versions of what you want, because each has severe limits on what it can see or do to source code. What you really want is a metaprogramming tool that has access to everything in your source program (yes, comments too!). Such tools are called Program Transformation Systems (PTS); they work by parsing the source code and operating on the parsed representation of the program. (I happen to build one of these; see my bio.) PTSes can then analyze the code accurately, and/or make reliable changes to the code and regenerate valid source with the changes. PS: a PTS can implement all those other metaprogramming techniques as special cases, so it is strictly more general.
Where can you use metaprogramming for testing?
There are at least three areas in which metaprogramming might play a role:
1) Collection of information from tests
2) Generation of tests
3) Avoidance of tests
Collection.
Collection of test results depends on the nature of the tests. Many tests are focused on "is this white/black box functioning correctly?" Assuming the tests are written somehow, they have to have access to the box under test, be able to invoke that box in realistic ways, determine if the result is correct, and often tabulate the results so that post-testing quality assessments can be made.
Access is the first problem. The black box to be tested may not be easily accessible to a testing framework: driven by a UI event, in a non-public routine, or buried deep inside another function where it is hard to get at.
You may need metaprogramming to "temporarily" modify the program to provide access to the box that needs testing (e.g., change a Private method to Public so it can be called from outside). Such changes exist only for the duration of the test project; you throw the modified program away because nobody wants it for anything but the test results. Yes, you have to ensure that the code transformations applied to make things visible don't change the program functionality.
The second problem is exercising the targeted black box in a realistic environment. Each code module runs in a world in which it assumes data and the environment are "properly" configured. The test program can set up that world explicitly by making calls on lots of the program elements or using its own custom code; this is usually the bulk of a test routine, and this code is hard to write and fragile (the application under test keeps changing; so do its assumptions about the world). One might use metaprogramming to instrument the application to collect the environment under which a test might need to run, thus avoiding the problem of writing all the setup code.
Finally, one might want to record more than just "test failed/passed". Often it is useful to know exactly what code got tested ("test coverage"). One can instrument the application to collect what-got-executed data; here's how to do it for code blocks using a PTS: http://www.semdesigns.com/Company/Publications/TestCoverage.pdf . More sophisticated instrumentation might be used to capture information about which paths through the code have been executed. Uncovered code and/or uncovered paths show where tests have not been applied; for that code you arguably know nothing about what the program does, let alone whether it is buggy in a straightforward way.
Generation of tests
Someone/thing has to produce tests; we've already discussed how to produce the set-up-the-environment part. What about the functional part?
Under the assumption that the program has been debugged (e.g., already tested by hand and fixed), one could use metaprogramming to instrument the code to capture the results of execution of a black box (e.g., instance execution post-conditions). By exercising the program, one can then produce results that are "correct by definition", and these can be transformed into tests. In this way, one might construct a huge variety of regression tests for an existing program; these will be valuable in verifying that further enhancements to the program don't break most of its functionality.
Often a function has qualitatively different behaviors on different ranges of input (e.g., for x < 10 it produces x + 1, else it produces x * x). Ideally one would like to provide a test for each qualitatively different result (e.g., x < 10, x >= 10), which means one would like to partition the input ranges. Metaprogramming can help here, too, by enumerating all (partial) paths through the module and providing the predicate that controls each path.
The separate predicates each represent the input space partition of interest.
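To make the partitioning idea concrete, here is a tiny sketch based on the x < 10 function above: one test per qualitatively different branch, with the branch predicates (x < 10, x >= 10) taken from the paths through the function.

```csharp
using NUnit.Framework;

public static class Sample
{
    // The example function from the text: x < 10 produces x + 1, else x * x.
    public static int Compute(int x) => x < 10 ? x + 1 : x * x;
}

[TestFixture]
public class ComputePartitionTests
{
    [Test]
    public void BelowTen_Increments() => Assert.That(Sample.Compute(3), Is.EqualTo(4));

    [Test]
    public void TenOrAbove_Squares() => Assert.That(Sample.Compute(12), Is.EqualTo(144));
}
```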
Avoidance of Tests
One only tests code one does not trust (surely you aren't testing the JDK?). Any code constructed by a reliable method doesn't need tests (the JDK was constructed this way, or at least Oracle is happy to have you believe it).
Metaprogramming can be used to automatically generate code from specifications or DSLs, in reliable ways. Such generated code is correct-by-construction (we can argue about what degree of rigour), and doesn't need tests. You might need to test that the DSL expression achieves the functionality you desired, but you don't have to worry about whether the generated code is right.

Is there a good way to run specflow tests in a new app domain?

Due to some constraints on our production code, we have some .NET services that need to be run with their own config file. We've been using app-domains to provide arbitrary config files to these services at test run time.
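For illustration, the kind of per-domain setup we are talking about looks roughly like this (a minimal sketch; the type and config file names are hypothetical): the service runs in a child AppDomain that gets its own config file, and the tests talk to it through a MarshalByRefObject proxy.

```csharp
using System;

// Lives in the child domain; only this proxy crosses the domain boundary.
public class ServiceProxy : MarshalByRefObject
{
    public string Invoke(string request)
    {
        // Calls into the real service; it sees the child domain's config file.
        return "response for " + request;
    }
}

public static class ServiceDomainHost
{
    public static (AppDomain Domain, ServiceProxy Proxy) Start(string configFile)
    {
        var setup = new AppDomainSetup
        {
            ApplicationBase = AppDomain.CurrentDomain.BaseDirectory,
            ConfigurationFile = configFile   // e.g. "ServiceUnderTest.config"
        };
        var domain = AppDomain.CreateDomain("ServiceUnderTest", null, setup);
        var proxy = (ServiceProxy)domain.CreateInstanceAndUnwrap(
            typeof(ServiceProxy).Assembly.FullName,
            typeof(ServiceProxy).FullName);
        return (domain, proxy);
    }
}
```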
The problem comes when we try to use SpecFlow for these tests: since each step is called separately, from an overall runner class that we don't have direct access to, pushing test data across app-domain boundaries for every single step is pretty messy. Everything ends up in all sorts of odd lambdas, and serializability needs to be considered, when most of the time we shouldn't need to care about that in a test code context (internal data objects, that sort of thing).
Does anyone have a method by which SpecFlow can be convinced to run all of its steps in a provided app-domain, or generally just play nicer with the app-domain concept in general?
Would it be possible to write a plugin / test generator that did this, and if so would this be very technically complicated? I had a look at that sort of extensibility but couldn't find the right place to start to do this, so I may have missed it.
(I'm aware that "Refactor your service so you don't need arbitrary config files" would also solve the underlying problem, but for the purposes of this question please assume I can't do that - I'm interested in whether SpecFlow can be configured to solve this, whether on its own or by extending it.)
Edit: After some more investigation I think this -should- be possible by using a custom unit test generator plugin? The problem I then have is there's basically zero documentation on that, and not many examples around on the internet. If you can give me a good example that I can look at to adapt that would go a long way...

Does anyone have Parasoft .Test or JTest experience?

First, I have no experience with Parasoft .Test or JTest. I have read in the datasheet that the product can automatically generate unit tests,
but I am wondering how useful the auto-generated unit tests are. Does it really require no additional effort from the developer?
Any experience sharing is welcome.
Thanks a lot!
We used JTest for our product recently. We didn't use the standard product, we used the Eclipse Plugin. The standard product is built on the OSGI framework (read: it's like Eclipse), but you have to import and create your projects. We were already using Eclipse, so it made sense for us to simply use the plugin, which has all of the same capabilities.
While there are many things that JTest can do for you, there are also many irritating things about it. For example, JTest's static analysis tool is what is really worthwhile, IMHO. It can look for lots of errors and has a pretty good reporting system. Unit test generation is okay, but I think I spent as much or more time fixing and enhancing the generated tests than I would have spent just writing them myself. Administering JTest is also somewhat complicated and involved.
The built-in mechanisms to make unit tests, stub objects, parameterized unit tests, etc. are not well documented. At least, my little brain couldn't make good use of them in the two years we used the product. However, a lot of their super awesome features (like GUI tracing, command-line interface, the Bug Detective, reporting system etc.) all require extra, very expensive licenses.
Really, Jtest just gives you an easy way to manage the execution of static and unit testing. But it's really expensive. I can't believe they charge thousands of dollars per license of that stuff. You'll also find that they will want to train you, which you almost need because the documentation is pretty bad. Which is odd, because the user's guide is like 900 pages long.
But here's a big hint: you can do it for free. If I had to do it over, I would have pushed hard for using these products (which, oddly enough, look and feel very similar to Jtest)
http://code.google.com/javadevtools/codepro/doc/index.html
I wouldn't get Jtest thinking that this will be a small something to add to your developer's routine. Jtest can become a huge time and process sink.
JTest is very, very useful. Yes, it generates its own test cases, which require quite a lot of effort to fix. I use it in a different way: I delete all the unnecessary generated test cases and created another file which sets up the database connection and various other parameters. After that configuration the code will work without mocking if all of the code is ready; if it is not ready, you can stub the required methods.
The static code analyzer is good (for example, for checking null pointer exceptions).
Checking code conventions is very good.
You can write your custom code guidelines as use cases and execute them against your code.
Code coverage.
Debugging while testing.
The auto-generated unit tests still need a developer to decide which results are correct, so you have to sit down and do the job. A lot of the boilerplate code is of course auto-generated, so that is a small time saver. I haven't used it much, but did evaluate JTest for an earlier employer. It seemed like a great product, if I remember correctly. :)
Alas, there will never be a silver bullet that addresses all unit testing requirements, but JTest and .Test (and C++Test, for that matter) are about as close as you will get. Uggwar is correct that the developer will still need to verify outcomes for the basic auto-generated tests; however, there is a whole lot more to it.
These tools can be used to create basic regression tests; these are there to tell you when something has changed, not whether what they are testing is right or wrong. You can also trace a running application and then generate JUnit/NUnit/CppUnit tests that recreate what was going on in the application. These tend to be far more useful tests, which are used as regression tests for items of functionality.
Other functionality includes the ability to generate stubs, use spreadsheets as data sources, and provide an object repository. There is a whole lot more too...
Give them a try.
http://www.parasoft.com

Best practices for TDD and reporting

I am trying to become more familiar with test-driven approaches. One drawback for me is that a major part of my code is generated content for reporting (PDF documents, chart images). There is always a complex designer involved, and there is no easy test of correctness. No chance to test just fragments!
Do you know TDD practices for this situation?
Some applications or frameworks are just inherently unit test-unfriendly, and there's really not a lot you can do about it.
I prefer to avoid such frameworks altogether, but if absolutely forced to deal with such issues, it can be helpful to extract all logic into a testable library, leaving only declarative code behind in the framework.
The question I ask myself in these situations is "how do I know I got it right"?
I've written a lot of code in my career, and almost all of it didn't work the first time. Almost every time I've gone back and changed code for a refactoring, feature change, performance, or bug fix, I've broken it again. TDD protects me from myself (thank goodness!).
In the case of generated code, I don't feel compelled to test the code. That is, I trust the code generator. However, I do want to test my inputs to the code generators. Exactly how to do that depends on the situation, but the general approach is to ask myself how I might be getting it wrong, and then to figure out how to verify that I got it right.
Maybe I write an automated test. Maybe I inspect something manually, but that's pretty risky. Maybe something else. It depends on the situation.
To put a slightly different spin on answers from Mark Seemann and Jay Bazuzi:
Your problem is that the reporting front-end produces a data format whose output you cannot easily inspect in the "verify" part of your tests.
The way to deal with this kind of problem is to:
Have some very high-level integration tests that superficially verify that your back-end code hooks correctly into your front-end code. I usually call those tests "smoke tests", as in "if I turn on the power and it smokes, it's bad".
Find a different way to test your back-end reporting code. Either test an intermediate output data structure, or implement an alternate output front-end that is more test-friendly, HTML, plaintext, whatever.
This is similar to the common problem of testing web apps: it is not possible to automatically test that "the page looks right". But it is sufficient to test that the words and numbers in the page data are correct (using a programmatic browser such as mechanize and a page scraper), and to have a few superficial functional tests (with Selenium or Windmill) if the page is critically dependent on JavaScript.
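As a minimal sketch of the second option, testing an intermediate data structure rather than the rendered PDF or chart (ReportRow and SalesReportBuilder are hypothetical stand-ins for whatever feeds your report designer):

```csharp
using System.Collections.Generic;
using System.Linq;
using NUnit.Framework;

public class ReportRow { public string Region; public decimal Total; }

public static class SalesReportBuilder
{
    // Builds the data the report designer will later render.
    public static IList<ReportRow> Build(IEnumerable<(string Region, decimal Amount)> sales) =>
        sales.GroupBy(s => s.Region)
             .Select(g => new ReportRow { Region = g.Key, Total = g.Sum(s => s.Amount) })
             .ToList();
}

[TestFixture]
public class SalesReportBuilderTests
{
    [Test]
    public void Build_SumsAmountsPerRegion()
    {
        var rows = SalesReportBuilder.Build(new[] { ("North", 10m), ("North", 5m), ("South", 7m) });

        Assert.That(rows.Single(r => r.Region == "North").Total, Is.EqualTo(15m));
        Assert.That(rows.Single(r => r.Region == "South").Total, Is.EqualTo(7m));
    }
}
```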
You could try using a web service for your reporting data source and test that, but you are not going to have unit tests for the rendering. This is the exact same problem you have when testing views. Sure, you can use a web testing framework like Selenium, but you probably won't be practicing true TDD. You'll be creating tests after your code is done.
In short, use common sense. It probably does not make sense to attempt to test the rendering of a report. You can have manual test cases that a tester will have to go through by hand or simply check the reports yourself.
You might also want to check out "How Much Unit Test Coverage Do You Need? - The Testivus Answer"
You could use Acceptance Test Driven Development to replace the unit tests and have validated reports for well-known data used as references.
However, this kind of test does not give as fine-grained a diagnostic as unit tests do; it usually only provides a PASS/FAIL result, and, should the reports change often, the references need to be regenerated and re-validated as well.
Consider extracting the text from the PDF and checking it. This won't give you formatting, however. Some pdf extraction programs can pull out the images if the charts are in the pdf.
Faced with this situation, I try two approaches.
The Golden Master approach. Generate the report once, check it yourself, then save it as the "golden master". Write an automated test to compare its output with the golden master, and fail when they differ.
Automate the tests for the data, but check the format manually. I automate checks for the module that generates the report data, but to check the report format, I generate a report with hardcoded values and check the report by hand.
I strongly encourage you not to generate the full report just to check the correctness of the data on the report. When you want to check the report (not the data), then generate the report; when you want to check the data (not the format), then only generate the data.
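A minimal sketch of the golden master comparison (ExtractReportText is a hypothetical helper where a PDF/text extractor would be plugged in; the master file is the previously generated, manually approved output):

```csharp
using System.IO;
using NUnit.Framework;

[TestFixture]
public class ReportGoldenMasterTests
{
    [Test]
    public void MonthlyReport_MatchesApprovedMaster()
    {
        string current = ReportTestHelper.ExtractReportText("output/monthly-report.pdf");
        string master = File.ReadAllText("masters/monthly-report.txt");

        Assert.That(current, Is.EqualTo(master),
            "Report text differs from the approved golden master.");
    }
}

// Hypothetical helper: wire this up to the PDF/text extraction of your choice.
public static class ReportTestHelper
{
    public static string ExtractReportText(string pdfPath)
    {
        throw new System.NotImplementedException("Plug in a PDF text extractor here.");
    }
}
```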

BDD GUI Automation

I've started a new role in my life. I was a front-end web developer, but I've now been moved to testing web software, or rather, automating the testing of the software. I believe I am to pursue a BDD (Behavior Driven Development) methodology. I am fairly lost as to what to use and how to piece it together.
The code being used/written is Java, for a web interface to the application under test. I have documentation of the tests to run, but I've been curious how to go about automating it.
I've been directed to Cucumber as one of the "languages" to help with the automation. I have done some research and came across a web site with a synopsis of BDD tools/frameworks,
8 Best Behavior Driven Development (BDD) Tools and Testing Frameworks. This helped a little, but then I got a little confused about how to implement it. It seems that Selenium is a common denominator in a lot of the BDD frameworks for testing a GUI, but it still doesn't seem to describe what to do.
I then came across the term Functional Testing tool, and I think that confused me even more. Do they all test a GUI?
I think the one that looked like it was all one package was SmartBear TestComplete, and then there is what seems to be another, similar application by SmartBear called SmartBear TestLeft, but I think I saw that they still use Cucumber for the BDD part. There are a few others that looked like they might work as well, but I guess the other question is: what's the cheapest route?
I guess the biggest problem I have is how to make these tests more dynamic, as the UI/browser dimensions can easily change from system to system, and how do I go about writing automation that can handle this, and tie into a BDD methodology?
Does anyone have any suggestions here? Does anybody out there do this?
Thanks in advance.
BDD Architecture
BDD automation typically consists of a few layers:
The natural language steps
The wiring that ties the steps to their definition
The step definitions, which usually access page objects
Page objects, which provide all the capabilities of a page or widget
Automation over the actual code being exercised, often through the GUI.
The wiring between natural language steps and the step definitions is normally done by the BDD tool (Cucumber).
The automation is normally done using the automation tool (Selenium). Sometimes people do skip the GUI, perhaps targeting an API or the MVC layer instead. It depends how complex the functionality in your web page is. If in doubt, give Selenium a try. I've written automation frameworks for desktop apps; the principle's the same regardless.
Keeping it maintainable
To make the steps easy to maintain and change, keep the steps at a fairly high level. This is frequently referred to as "declarative" as opposed to "imperative". For instance, this is too detailed:
When Fred provides his receipt
And his receipt is scanned
And the cashier clicks "Refund to original card"
And the card is inserted...
Think about what the user is trying to achieve:
When Fred gets a refund to his original card
Generally a scenario will have a few Givens or Thens, but typically only one When (unless you have something like users interacting or time passing, where both events are needed to illustrate the behaviour).
Your page objects in this scenario might well be a "RefundPageObject" or perhaps, if that's too large, a "RefundToCardPageObject". This pattern allows multiple scenario steps to access the same capabilities without duplication, which means that if the way the capabilities are exercised changes, you only need to change them in one place.
Different page objects could also be used for different systems.
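To make the layering concrete, here is a minimal sketch, written with SpecFlow/C# syntax only to keep the examples in this thread in one language; the same shape applies to Cucumber-JVM step definitions in Java. The names, locators, and the IWebDriver injection are hypothetical.

```csharp
using OpenQA.Selenium;
using TechTalk.SpecFlow;

// Step definitions: thin, delegating to a page object.
[Binding]
public class RefundSteps
{
    private readonly RefundPage _refundPage;

    public RefundSteps(IWebDriver driver) => _refundPage = new RefundPage(driver);

    [When(@"Fred gets a refund to his original card")]
    public void WhenFredGetsARefundToHisOriginalCard()
    {
        _refundPage.RefundToOriginalCard("freds-receipt-1234");
    }
}

// Page object: owns the locators and the capabilities of the refund page.
public class RefundPage
{
    private readonly IWebDriver _driver;
    public RefundPage(IWebDriver driver) => _driver = driver;

    public void RefundToOriginalCard(string receiptId)
    {
        _driver.FindElement(By.Id("receipt-id")).SendKeys(receiptId);
        _driver.FindElement(By.Id("refund-to-original-card")).Click();
    }
}
```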
Getting started
If you're attacking this for the first time, start by getting an empty scenario that just runs and passes without doing anything (make the steps empty). When you've done this, you'll have successfully wired up Cucumber.
Write the production code that would make the scenario run. (This is the other way round from the way you'd normally do it; normally you'd write the scenario code first. I've found this is a good way to get started though.)
When you can run your scenario manually, add the automation directly to the steps (you've only got one scenario at this point). Use your favourite assertion package (JUnit) to get the outcome you're after. You'll probably need to change your code so that you can automate over it easily, e.g. by giving relevant test IDs to elements in your web page.
Once you've got one scenario running, try to write any subsequent scenarios first; this helps you think about your design and the testability of what you're about to do. When you start adding more scenarios, start extracting that automation out into page objects too.
Once you've got a few scenarios, have a think about how you might want to address different systems. Avoid using lots of "if" statements if you can; those are hard to maintain. Injecting different implementations of page objects is probably better (the frameworks may well support this by now; I haven't used them in a while).
Keep refactoring as you add more scenarios. If the steps are too big, split them up. If the page objects are too big, divide them into widgets. I like to organize my scenarios by user / stakeholder capabilities (normally related to the "when" but sometimes to the "then") then by different contexts.
So to summarize:
Write an empty scenario
Write the code to make that pass manually
Wire up the scenario using your automation tool; it should now run!
Write another scenario, this time writing the automation before the production code
Refactor the automation, moving it out of the steps into page objects
Keep refactoring as you add more scenarios.
Now you've got a fully wired BDD framework, and you're in a good place to keep going while making it maintainable.
A final hint
Think of this as living documentation, rather than tests. BDD scenarios hardly ever pick up bugs in good teams; anything they catch is usually a code design issue, so address it at that level. It helps people work out what the code does and doesn't do yet, and why it's valuable.
The most important part of BDD is having the conversations about how the code works. If you're automating tests for code that already exists, see if you can find someone to talk to about the complicated bits, at least, and verify your understanding with them. This will also help you to use the right language in the scenarios.
See my post on using BDD with legacy systems for more. There are lots of hints for beginners on that blog too.
Since you feel lost as to where to start, I will point you to some blog posts I have written that talk a bit about your problem.
Some categories that may help you:
http://www.thinkcode.se/blog/category/Cucumber
http://www.thinkcode.se/blog/category/Selenium
This rather long and old post might give you hints as well:
http://www.thinkcode.se/blog/2012/11/01/cucumberjvm-not-just-for-testing-guis
Note that the versions are dated, but hopefully it gives some ideas of what to look for.
I am not an expert on test automation, but I am currently working in this area, so let me share some ideas and hope they help you at your current stage.
We have used Selenium + Cucumber + IntelliJ for testing a web application, and TestComplete + Cucumber + IntelliJ for testing a Java desktop application.
As for testing the web application, we have provided a test mode in our web application which allows us to get some useful details of the product and the environment, and also allows us to easily trigger events by clicking buttons and entering text into the test panel while in test mode.
I hope these are helpful for you.