What to Assert in E2E Test Cases - Best Practices

I recently transferred to an existing (but new) project, still in beta, that already has a fair amount of unit, integration, and system / e2e tests. Presently, a large portion of the integration and system tests assert equality between an expected JSON payload and the actual JSON payload.
Considering the fast-moving nature of this project (and that it's in beta), changes often cause many tests to flip red because they change the JSON payload. It seems redundant to keep updating the expected JSON payloads to match the new output, but I'll do it without complaining if I know that's the ideal way to test.
My question (TLDR): In the case of a JSON API, if I am only testing "happy path" e2e / system test scenarios, what would be my ideal assertion statement? Should I be testing the entire payload against an expected payload, or would it make more sense to compare status codes and maybe some high-level JSON keys?
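For concreteness, here is a minimal sketch of the two styles I'm weighing (Java with REST Assured; the endpoint and field names are made up):

    import static io.restassured.RestAssured.given;
    import static org.hamcrest.Matchers.*;

    import org.junit.jupiter.api.Test;

    public class OrderApiHappyPathTest {

        // Style 1 (current): compare the whole response body against a stored expected
        // JSON document. Any intentional change to the payload turns the test red.

        // Style 2: assert the status code plus the high-level keys the consumer relies on.
        @Test
        void returnsOrderWithKeyFields() {
            given()
                .baseUri("https://api.example.test")   // hypothetical service
            .when()
                .get("/orders/42")                     // hypothetical endpoint
            .then()
                .statusCode(200)
                .body("id", equalTo(42))
                .body("status", equalTo("CONFIRMED"))
                .body("items", not(empty()));
        }
    }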

Related

Karate Framework as a Rules Engine

I have an API project that requires the payload to be validated against a set of rules. For this, I have written a Karate feature file with all the required assertions for my incoming request JSON payload. The feature file then returns true or false depending on which conditions are satisfied. This mechanism is currently working perfectly in my local environment.
Is this approach recommended for production use case? Will the karate framework in this format be capable of handling large volumes of requests coming in a very short span of time?
Developer of Karate here. I really like your question because it validates something I personally believe: that Karate simply makes it easy to work with JSON.
The reason I won't recommend this for production use is that Karate embeds a JS engine that can be targeted by a "script injection" attack. Karate is designed for users running it "locally" and has no built-in safeguards to prevent malicious attacks coming in via JSON payloads.
The other question is performance. Personally I am quite confident, because Karate is used in conjunction with Gatling and some work has gone into improving performance over the years. But at the end of the day, Karate does use a JS engine in interpreted mode, so you need to run a performance or load test yourself to validate whether Karate can handle the volume you expect.
Maybe you can contribute to Karate to address both of the above concerns!

How to develop regression tests for a calculation engine

I'm on a team developing a financial information web app. We haven't written many automated tests for it yet, so we've decided to add regression tests to the most critical parts of our program. I'm very new to automated testing, though, so I'm not entirely sure how I should go about writing the tests.
This post is long, so here's the tl;dr question: How can I write a regression test that checks whether a certain calculation is working? I don't just want to test the calculation, though - I also want to know if any of the components the calculation depends on for its inputs break. I don't need to know which component broke in particular, just that something's not working. What approach should I use?
This is our situation: We developed the app using a layered architecture, like this:
Views
|
V
Logic Managers <--> Financial Calculation Engines
|
V
Data Accessors
|
V
Database
We've determined that the calculation engines are the parts of our program that most need to have a regression test suite. These components contain the calculations and algorithms that we use to process raw financial data into useful results. Their corresponding Managers use them by calling their public methods, which accept raw financial data as parameters. When the engine methods return, they send back an object that contains processed financial results. The managers, meanwhile, get the raw financial data from the data accessors, which in turn get data from the database.
We decided we want to know as soon as a financial calculation "breaks", so that we know the bug is somewhere in whichever pieces of the program have been touched since the last run of the tests. This would let us use continuous testing to protect us from having the engines produce wrong results with no idea where to look.
When we thought about what this means, we realized that adding a unit test to each of the engines isn't enough. Let's say, for example, that an erroneous change to the data accessors means that they start pulling the wrong data. This data would then be sent up through the manager to the engine, which would produce the wrong results. However, the engine's algorithms themselves would still be working perfectly, so the unit test would still pass. This means that when we noticed the wrong numbers being generated, we would have no way of knowing when the bug was introduced, making it more difficult to track down and fix.
Instead, we would like to make regression tests that are able to pick up as soon as a bug appears anywhere that would cause the final results the engines output to be incorrect, even if the problem is that the wrong data is sent to the engines and not that the engines themselves have issues. When these tests fail, they wouldn't tell us where the problem is, but if we're continuously testing, we'll know as soon as a bug is checked in and have a small set of changes to look through to fix it.
So that's what we want to do. Unfortunately, we don't know how to create these tests. What approaches or patterns are useful for writing these types of regression tests?
Just a hint: you should check every part of the Financial Calculations Engine with the same inputs every time, and the objects returned should be identical every time.
Test the Data Accessors separately, with the same logic: same input, same output.
To do so, you need to mock some parts of the system (e.g. mock the data accessors to always return the same set of data).
Having separate unit tests for each part also locates the bug with more precision.
A couple of links to get into the idea:
http://www.ibm.com/developerworks/library/j-mocktest/index.html
http://www.slideshare.net/joewilson123/unit-testing-and-mocking
There are a lot of mocking frameworks around that can help you code the tests, like Mockito for Java projects.
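As a rough sketch of that idea (JUnit and Mockito; DataAccessor, LogicManager and CalculationEngine are hypothetical stand-ins for your own classes):

    import static org.junit.jupiter.api.Assertions.assertEquals;
    import static org.mockito.Mockito.*;

    import java.util.List;
    import org.junit.jupiter.api.Test;

    public class CalculationEngineRegressionTest {

        // Hypothetical stand-ins for your layers.
        interface DataAccessor { List<Double> loadRawFigures(String accountId); }

        static class CalculationEngine {
            double totalOf(List<Double> rawFigures) {
                return rawFigures.stream().mapToDouble(Double::doubleValue).sum();
            }
        }

        static class LogicManager {
            private final DataAccessor accessor;
            private final CalculationEngine engine;
            LogicManager(DataAccessor accessor, CalculationEngine engine) {
                this.accessor = accessor;
                this.engine = engine;
            }
            double calculateTotal(String accountId) {
                return engine.totalOf(accessor.loadRawFigures(accountId));
            }
        }

        @Test
        void engineProducesKnownResultForKnownRawData() {
            // Fix the input: the mocked accessor always returns the same raw data,
            // so a failure points at the manager/engine logic rather than the database.
            DataAccessor accessor = mock(DataAccessor.class);
            when(accessor.loadRawFigures("ACCOUNT-1")).thenReturn(List.of(100.0, 200.5, 34.06));

            LogicManager manager = new LogicManager(accessor, new CalculationEngine());

            // The expected value is the regression baseline from a previously verified run.
            assertEquals(334.56, manager.calculateTotal("ACCOUNT-1"), 0.001);
        }
    }

A separate test, run without the mock against a known test database, covers the data accessors themselves with the same fixed-input, fixed-output logic.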

Handling test data when going from running Selenium tests in series to parallel

I'd like to start running my existing Selenium tests in parallel, but I'm having trouble deciding on the best approach due to the way my current tests are written.
The first step of most of my tests is to get the DB into a clean state and then populate it with the data needed for the rest of the test. While this is great for isolating tests from each other, if I start running these same Selenium tests in parallel against the same SUT, they'll end up erasing other tests' data.
After much digging, I haven't been able to find any guidance or best-practices on how to deal with this situation. I've thought of a few ideas, but none have struck me as particularly awesome:
Rewrite the tests to not overwrite other tests' data, i.e. only add test data, never erase -- I could see this potentially leading to unexpected failures due to the variability of the database when each test is run. Anything from a different ordering of tests to an ill-placed failure could throw off the other tests. This just feels wrong.
Don't pre-populate the database -- Instead, create all needed data via Selenium itself. This would most replicate real-world usage, but would also take significantly longer than loading data directly into the database. This would probably negate any benefits from parallelization depending on how much test data each test case needs.
Have each Selenium node test a different copy of the SUT -- This way, each test would be free to do as it pleases with the database, since we can assume that no other test is touching it at the same time. The downside is that I'd need to have multiple databases set up and, at the start of each test case, figure out how to coordinate which database to initialize and how to signal to the node and SUT that this particular test case should be using this particular database. Not awful, but not what I would love to do if there's a better way.
Have each Selenium node test a different copy of the SUT, but break up the tests into distinct suites, one suite per node, before run-time -- Also viable, but not as flexible, since over time you'd have to keep going back and evening out the length of each suite as much as possible.
All in all, none of these seem like clear winners. Option 3 seems the most reasonable, but I also have doubts about whether that is even a feasible approach. After researching a bit, it looks like I'll need to write a custom test runner to facilitate running the tests in parallel anyway, but the parts regarding the initial test data still have me looking for a better way.
Anyone have any better ways of handling database initialization when running Selenium tests in parallel?
FWIW, the app and test suite are in PHP/PHPUnit.
Update
Since it sounds like the answer I'm looking for is very project-dependent, I'm at least going to attempt to come up with my own solution and report back with my findings.
There's no easy answer and it looks like you've thought out most of it. Also worth considering is to rewrite the tests to use separately partitioned data - this may or may not work depending on your domain (e.g. a separate bank account per node, if it's a banking app). Your pre-population of the DB could be restricted to static reference data, or you could pre-populate the data for each separate 'account'. Again, depends on how easy this is to do for your data.
I'm inclined to vote for 3, though, because database setup is relatively easy to script these days and the hardware requirements probably aren't too high for a small test data suite.
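To sketch what option 3 might look like in practice (shown in Java for illustration, even though your stack is PHP/PHPUnit; the worker-id property and the naming scheme for the SUT copies are assumptions):

    import org.openqa.selenium.WebDriver;
    import org.openqa.selenium.chrome.ChromeDriver;

    public abstract class ParallelSeleniumTestBase {

        protected WebDriver driver;
        protected String baseUrl;
        protected String dbName;

        public void setUp() {
            // The custom test runner (or CI job) hands each node a unique worker id.
            String workerId = System.getProperty("test.worker.id", "0");

            // Each worker gets its own copy of the SUT and its own database,
            // so tests can wipe and seed data without touching other nodes.
            baseUrl = "http://sut-" + workerId + ".example.test";  // assumed naming scheme
            dbName = "app_test_" + workerId;

            resetAndSeedDatabase(dbName);  // truncate + load fixtures for this worker only
            driver = new ChromeDriver();
        }

        protected abstract void resetAndSeedDatabase(String dbName);
    }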

How to automate testing of Tridion templates (with TOM.NET)

I have a recurring problem in templating projects. I can't really test my work in any other way than running the templates in Template Builder. This is a major problem if I'm working on a TBB that is used on several different templates because it means that after changing the code in the TBB I should retest all the templates (and probably with several different pages/components as there might be slightly different cases depending on the content).
As you can see, in big projects where TBBs are reused a lot, changing them costs a lot of time due to the amount of testing necessary, and I would be eager to find a solution for this. I know that unit testing is virtually impossible with the current TOM.NET (most classes/methods are internal), so what could be an alternative way to achieve automated testing?
One solution that I have looked into is to use the Core Service to initiate the rendering process of a template with some test content and then check whether the output is as expected, but achieving this requires quite a lot of code and thus produces unwanted overhead (I think it still takes less time than manually retesting the cases). Also, this doesn't really allow you to test individual TBBs unless you (programmatically) create separate templates with individual (or a subset of) TBBs. The good thing about this solution is that you could run the tests on your local laptop while developing, assuming you can connect to the Tridion server (you'd still have to upload your code to Tridion before running the tests, so it's not a completely ideal solution).
I know that other alternative is to use DD4T/CWA where you can pretty much handle all the testing in the front-end as the templates are (usually) quite simple.
Any other ideas?
I agree that the emphasis is on automated testing rather than unit testing (which, after all, is mostly about object oriented programming). With Tridion work, it's about transforming data. What you need to test data transforms is to have known inputs, and to be able to make assertions about the outputs. I've tried various approaches over the years, but the most effective so far has been the following:
1) For every template, keep test content in a dedicated Folder, and test pages in a dedicated Structure Group. The content is the input to your tests, and isn't intended to change unless the test requirements change.
2) Put the components on the pages. Publish the pages. Keep it simple: you can often have a page for a single test scenario. You can automate publishing the pages if that helps.
3) Use web testing tools to verify the output. This could be HtmlUnit, Selenium or whatever.
Basically - Tridion is an engine for executing transforms. You don't need a specialised test execution engine for this part, although it's useful to use one for testing the output.
Mocking the package sounds attractive, but as Vesa says, it can turn into a huge amount of work. The simple approach I have outlined works in practice, and was proved on a significant project. You could add variations on the theme if you like: one thing I've considered, but never done on a project, is to use the blueprint to give you more isolation. For example, you could test your page templates by localising your component templates to generate static and predictable component presentations. Suffice it to say that there's enough scope for creativity once you unshackle yourself from the baggage of unit testing approaches.
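To give a feel for step 3, a minimal sketch (Java with Selenium WebDriver; the published page URL and CSS selector are hypothetical and depend on your own output):

    import static org.junit.jupiter.api.Assertions.assertEquals;

    import org.junit.jupiter.api.AfterEach;
    import org.junit.jupiter.api.Test;
    import org.openqa.selenium.By;
    import org.openqa.selenium.WebDriver;
    import org.openqa.selenium.chrome.ChromeDriver;

    public class ArticleTemplateOutputTest {

        private final WebDriver driver = new ChromeDriver();

        @Test
        void publishedTestPageRendersTheKnownContent() {
            // This page lives in the test Structure Group and was published from
            // fixed test content, so the expected value is known in advance.
            driver.get("http://staging.example.test/test-pages/article-single-component.html");

            // Assert on the pieces of output you care about, not on a full HTML dump,
            // so cosmetic template changes don't break the test unnecessarily.
            assertEquals("Test Article Title",
                    driver.findElement(By.cssSelector("h1.article-title")).getText());
        }

        @AfterEach
        void tearDown() {
            driver.quit();
        }
    }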
I have some experience with the CoreService scenario. You will just need to write some helpers to upload your templates, create compound templates and run them. The tricky part, however, is verification.
You will need to write some test templates that will help you with verification. One way is to write a .NET template that you pass expected values to and that does the verification. The other way is to write a DreamWeaver template that prints values from the package, which you then check against the expected values. The advantage of this method is that these values are returned to you as the result of the CoreService Render action, so you can do all the verification on the client side.
But the most difficult part is the dataset creation. It will probably take most of your time.
You could try to isolate the majority of the code in classes that can be unit tested.
I guess the main problem here is that Engine and Package are sealed, so you cannot easily mock them up. But you can minimize the interaction with those objects and put the meat of your code in classes that take the relevant input and return the output that should be put in the package etc.
I think you could get a lot of coverage of your TBBs just from unit tests with this approach.
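The shape I have in mind is roughly this (sketched in Java purely to show the pattern - a real TBB would be .NET code, and the names here are made up): keep the transformation in a plain class that knows nothing about Engine or Package, and let the TBB itself only pull values out of the package and push the result back in.

    // Plain class: no Engine, no Package, fully unit-testable.
    public class ArticleSummaryBuilder {

        // Takes plain input values and returns the string the TBB should put in the package.
        public String build(String title, String body, int maxLength) {
            String text = title + " - " + body;
            return text.length() <= maxLength ? text : text.substring(0, maxLength) + "...";
        }
    }

The TBB then becomes a thin adapter that reads the fields, calls build(), and writes the output item; your unit tests only ever touch ArticleSummaryBuilder.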
At a customer I've seen an implementation where the tests invoke the same web service that Template Builder uses, and use it to execute the templates, evaluate the results, etc.
Probably worth exploring.
I would suggest writing your own TestRunner with 2 goals: Create test data and run tests.
Create test data: The idea is to create a sample dataset automatically (all fields, some fields, and only mandatory fields). (Bonus points for using Chuck Norris quotes instead of lorem ipsum.) The title of the sample content uses a naming scheme - like [TestContent] - and/or the content lives in its own folder with metadata attached (to find it later).
Create test pages: Find the TestContent. Use GetListUsingItems to find pages where the template is used. Copy the page, and paste it into a TestContent StructureGroup, save. Open the page, add the test content, remove the other content, and save page with special naming schema.
Run tests: Find the TestContent, preview each one, write out report with rendering time, success status, and # of chars.
I consider your problem to be completely technology agnostic, regardless of the approach you use (thinking in the context of Tridion).
The problem is that you are modifying one thing that is used in multiple places (Component/Page Templates), and those places need to be tested before you push that as a valid change.
Even if you make proper changes, and assuming the code runs fine and you have a result, it may not be the result that is expected by other TBBs that consume your output.
That is the problem itself unfortunately :(
If the problem is that you have to test all the Templates using that TBB, that is still a problem with no solution.
If the problem is that you don't want to impact the current platform with your changes/testing, nor interfere with other development going on, that is a different scenario.
I would solve the second one by creating a separate Publication inheriting from the Publication with valid code/data to test (or have that created in advance), making your changes there, and testing.
This approach makes sense if you are using the TBB as part of many Component/Page Templates.
If you have the luxury of granularity in the front end (your TBB produces an atomic piece of code), the complexity of the scenario would be slightly reduced, but you would still have to test all the scenarios anyway.

In functional testing, should I compare all tabular data rendered in the browser with the one coming from the DB?

I'm working on a test plan for a website where some tests are taking the following path:
Hit the requested URI and get the data rendered inside some table (20 rows per page).
Make a database query to get the data that is supposed to be rendered in that table.
Compare the 2 data row by row, they should match.
Is that a correct way of doing functional testing? If that request were an Ajax request, what would the answer be then? And would the answer differ for integration testing?
I have some reason to believe that this is wrong somehow... I still need your opinions, guys!
Yes, this could be a productive test. Either you have a fixed data set or you don't.
If you have a fixed data set, this is much easier to test, because all you're doing is comparing against a fixed output.
If you don't have a fixed data set, then you need to duplicate the business logic, effectively duplicating the work already done by the developer. Then you have two sets of logic to maintain.
The second is the best approach because you get two ways of doing the same thing, effectively a peer review of the specification and code. It's also very expensive in terms of time and resources, which is why most people choose to have a fixed data set.
To answer your question, if your business logic in the query is simple, then you can get a test very easily. However, the value that the test brings isn't great, because you aren't testing very much.
If the business logic is complex, you are getting more value from the test, but it's going to be harder to maintain in the long term.
For me, what your test does bring is a simple integration test that proves that the system reads correctly from the database, and displays the data correctly. This is a good test, even better if it is automated.
This seems fine for functional testing. Integration testing, in my mind, has to do with testing different technologies or components that are supposed to work together, which is generally broader than functional testing. But of course this sort of testing could also be considered integration testing, depending on how your application is put together and where the testing happens in the lifecycle of your development. For example, it may be that in order for this site to work you have to put together a few components that were developed independently; this might be one of the tests to validate that the integration works.
I don't see how this being Ajax or not would make the answer any different.
I will likely be a dissenting opinion here, but I don't consider this to be a productive test. What you are doing is simply duplicating the code which produces the page. And any time you introduce duplicated code (even across departments) you'll be looking at defects cropping up long-term.
It is far better to load the DB with known data (either through the app, or directly) and then check that the output matches what you'd expect. This also ensures that your DB layer, or DB itself, hasn't modified the data in a way you do not expect.
That is:
Load known data (preferably through the app itself)
Load the requested URI
Check that displayed data matches your known data
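A minimal sketch of that flow (Java with Selenium WebDriver; the URL, selectors and seed data are made up):

    import static org.junit.jupiter.api.Assertions.assertEquals;

    import java.util.List;
    import org.junit.jupiter.api.Test;
    import org.openqa.selenium.By;
    import org.openqa.selenium.WebDriver;
    import org.openqa.selenium.WebElement;
    import org.openqa.selenium.chrome.ChromeDriver;

    public class ReportTableDisplayTest {

        // The rows loaded into the DB before the test (via fixtures or the app itself).
        // Hard-coded here for illustration; in practice they come from your fixture loader.
        private final List<List<String>> knownRows = List.of(
                List.of("ACME Corp", "1,200.00"),
                List.of("Globex", "815.50"));

        @Test
        void tableShowsTheKnownData() {
            WebDriver driver = new ChromeDriver();
            try {
                driver.get("http://app.example.test/report?page=1");  // hypothetical URL

                List<WebElement> rows = driver.findElements(By.cssSelector("table#results tbody tr"));
                assertEquals(knownRows.size(), rows.size());

                for (int i = 0; i < knownRows.size(); i++) {
                    List<WebElement> cells = rows.get(i).findElements(By.tagName("td"));
                    // Compare the rendered cells with the data we loaded,
                    // not with a second query that re-implements the page's logic.
                    assertEquals(knownRows.get(i).get(0), cells.get(0).getText());
                    assertEquals(knownRows.get(i).get(1), cells.get(1).getText());
                }
            } finally {
                driver.quit();
            }
        }
    }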
This kind of test could be good for testing a large set of data with relatively little tester effort if there is not much developer logic between the database and the display to the end user. Our team has done this on a number of occasions, and it is especially useful for running large quantities of real production data through our tests to be sure that actual scenarios are handled as expected. Do make sure you do at least a little fixed input testing for rare scenarios that might be especially likely to be handled differently in the DB and on the web page - null values, special characters, and other oddities.
Personally, I would call this "integration testing", since you are testing the integration of the DB and the web site, and not "functional testing". For "functional testing", I'd probably want to make a mock of the datasource (e.g., the database) that will provide pre-written sets of data in the format you expect.
Having said that, if I had high confidence in the validity of the DB data and if the logic between the DB query and the web page display was very small and low-risk, I would probably not bother with the mock and would let the integration test cover the functionality as well. I don't know that testing the functionality and integration separately would be a big quality win in this case, and there are likely better things you could do with the available testing time. If there is a lot of logic around this data, you should probably test the integration separately from the functionality. Additional integration testing would probably include things like, "What if the database can't be reached?" and "What if the database is slow?".
While this technique will work with Ajax, make sure your testing tools will work with Ajax. Specifically, think about how you will capture the database query results and how you will gather the results displayed on the web page.
I'm assuming that the validity of the data in the query is being tested elsewhere, since you mentioned that this was just one type of test in the test plan. I'm also just discussing integration with the database and this report, not other features or components, and not other aspects of testing (performance, security, etc.), since that was the scope of your question.