Symfony - Unit testing with Lime

I'm trying to write some unit tests in Lime but the list of valid test methods in the documentation seems to be rather limited:
http://www.symfony-project.org/jobeet/1_4/Doctrine/en/08
I'm trying to write a number of tests which attempt to save a model with incorrect values.
Does Lime have a method that will work correctly for this?
A quick Google search on the topic brought up nothing useful.
Surely there must be an easy way to do this?
Any advice appreciated.
Thanks.

Here is a good article about unit testing database models. I found it very useful myself. It also recommends running model tests against an in-memory SQLite database, which is much faster than using a MySQL database.
http://webmozarts.com/2010/03/11/writing-efficient-tests/

Related

Is there a way we can chain scenarios in Karate like Java method chaining?

I have been using Karate for the past 6 months, and I am really impressed with the features it offers.
I know Karate is meant to test APIs individually, but we are also trying to use it for E2E tests that involve calling multiple scenarios step by step.
Our feature file looks like the following:
1. Call Feature1:Scenario1
2. Call Feature2:Scenario2
.....
Note: We are re-using scenarios for both API testing and E2E testing. Sometimes I find it difficult to remember all the feature files.
Can we chain the scenario calls like Java method chaining? I doubt the feature file will let us do that. We need your valuable suggestions; please let us know if you feel our approach is not correct.
First, I'd like to quote the documentation: https://github.com/intuit/karate#script-structure
Variables set using def in the Background will be re-set before every Scenario. If you are looking for a way to do something only once per Feature, take a look at callonce. On the other hand, if you are expecting a variable in the Background to be modified by one Scenario so that later ones can see the updated value - that is not how you should think of them, and you should combine your 'flow' into one Scenario. Keep in mind that you should be able to comment-out a Scenario or skip some via tags without impacting any others. Note that the parallel runner will run Scenario-s in parallel, which means they can run in any order.
So by default, I actually recommend teams to have Scenario-s with multiple API calls within them. There is nothing wrong with that, and I really don't understand why some folks assume that you should have a Scenario for every GET or POST etc. I thought the "hello world" example would have made that clear, but evidently not.
If you have multiple Scenario-s in a Feature, just run the feature and all Scenario-s will be executed or "chained". So what's the problem?
I think you need to change some of your assumptions. Karate is designed for integration testing. If you really need to have a separate set of tests that test one API at a time, please create separate feature files. The whole point of Karate is that there is so little code needed - that code-duplication is perfectly ok.
Let me point you to this article by Google. For test-automation, you should NOT be trying to re-use things all over the place. It does more harm than good.
For a great example of what happens when you try to apply "too much re-use" in Karate, see this: https://stackoverflow.com/a/54126724/143475

Is there a good way to run SpecFlow tests in a new app domain?

Due to some constraints on our production code, we have some .NET services that need to be run with their own config file. We've been using app-domains to provide arbitrary config files to these services at test run time.
The problem comes when we try to use SpecFlow for these tests. Since each step is called separately, from an overall runner class that we don't have direct access to, pushing test data across app-domain boundaries for every single STEP is pretty messy and results in everything being wrapped in all sorts of odd lambdas. On top of that, serializability needs to be considered, when most of the time we shouldn't need to care about that in test code (internal data objects, that sort of thing).
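For context, this is roughly the kind of plumbing we end up writing today (all names here are illustrative, not our real code); every step that touches the service has to marshal through something like this:

    using System;

    // Illustrative only: a MarshalByRefObject proxy that runs inside the child
    // AppDomain so the service under test picks up its own config file.
    public class ServiceProxy : MarshalByRefObject
    {
        public string CallService(string input)
        {
            // ... invoke the real service here; it reads the child domain's config ...
            return input;
        }
    }

    public static class ServiceDomain
    {
        public static string RunInChildDomain(string input)
        {
            var setup = new AppDomainSetup
            {
                ApplicationBase = AppDomain.CurrentDomain.BaseDirectory,
                // Point the child domain at the service's own config file.
                ConfigurationFile = "MyService.exe.config"
            };

            AppDomain domain = AppDomain.CreateDomain("ServiceUnderTest", null, setup);
            try
            {
                var proxy = (ServiceProxy)domain.CreateInstanceAndUnwrap(
                    typeof(ServiceProxy).Assembly.FullName,
                    typeof(ServiceProxy).FullName);

                // Arguments and return values must be serializable or MarshalByRef,
                // which is exactly the friction described above.
                return proxy.CallService(input);
            }
            finally
            {
                AppDomain.Unload(domain);
            }
        }
    }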
Does anyone have a method by which SpecFlow can be convinced to run all of its steps in a provided app domain, or just made to play nicer with the app-domain concept in general?
Would it be possible to write a plugin / test generator that did this, and if so, would it be very technically complicated? I had a look at that sort of extensibility but couldn't find the right place to start, so I may have missed it.
(I'm aware that "Refactor your service so you don't need arbitrary config files" would also solve the underlying problem, but for the purposes of this question please assume I can't do that - I'm interested in whether SpecFlow can be configured to solve this, whether on its own or by extending it.)
Edit: After some more investigation I think this -should- be possible using a custom unit test generator plugin? The problem I then have is that there's basically zero documentation on that, and not many examples around on the internet. If you can give me a good example that I can look at and adapt, that would go a long way...

NHibernate: Override ISession to fake the database during testing

I am working on a project that has over 2000 integration tests that go full circle through the database. I want to speed up the process, so I thought: why not fake the database?
We are using Fluent NHibernate as our ORM, which is probably why I am having such trouble. We have already implemented this concept in a program that does not use NHibernate, but rather basic CRUD operations.
Basically, on any CRUD operation I would like to save the object in memory, to say a dictionary. This will speed up our tests, which will hopefully shorten our build times. Not to mention the cool factor.
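Something along these lines is what I have in mind (IRepository<T> and the dictionary-backed fake are purely illustrative names, not our real interfaces):

    using System;
    using System.Collections.Generic;
    using System.Linq;

    // Illustrative sketch of the idea: CRUD operations backed by a dictionary
    // instead of the database.
    public interface IRepository<T>
    {
        void Save(T entity);
        T Get(Guid id);
        void Delete(Guid id);
        IEnumerable<T> GetAll();
    }

    public class InMemoryRepository<T> : IRepository<T>
    {
        private readonly Dictionary<Guid, T> _store = new Dictionary<Guid, T>();
        private readonly Func<T, Guid> _idSelector;

        public InMemoryRepository(Func<T, Guid> idSelector)
        {
            _idSelector = idSelector;
        }

        public void Save(T entity) { _store[_idSelector(entity)] = entity; }
        public T Get(Guid id) { return _store[id]; }
        public void Delete(Guid id) { _store.Remove(id); }
        public IEnumerable<T> GetAll() { return _store.Values.ToList(); }
    }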
I have looked into having two separate sessions and having the session factory use one or the other, but I would have to implement many methods/properties that I really don't care about.
I have also considered using a go-between class that does the translating myself, but it would probably mean changing A LOT of existing code. I am trying to limit the impact on the rest of the project as much as possible so regression testing will not be a HUGE factor.
Please let me know of any other suggestions anyone has!
Thanks!
Use a SQLite in-memory database. Here's the blog post that originally showed me how...
http://jasondentler.com/blog/2009/08/nhibernate-unit-testing-with-sqlite-in-memory-db/
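The core of it is a small base fixture along these lines (a sketch only; Product stands in for one of your mapped entities, and the exact SchemaExport overload may differ between NHibernate versions):

    using System;
    using FluentNHibernate.Cfg;
    using FluentNHibernate.Cfg.Db;
    using NHibernate;
    using NHibernate.Cfg;
    using NHibernate.Tool.hbm2ddl;

    // Sketch of an in-memory SQLite test base class. Each test gets a fresh
    // connection, and the in-memory database lives and dies with that connection.
    public abstract class InMemoryDatabaseTest : IDisposable
    {
        private static Configuration _configuration;
        private static ISessionFactory _sessionFactory;
        protected readonly ISession Session;

        protected InMemoryDatabaseTest()
        {
            if (_sessionFactory == null)
            {
                _sessionFactory = Fluently.Configure()
                    .Database(SQLiteConfiguration.Standard.InMemory().ShowSql())
                    .Mappings(m => m.FluentMappings.AddFromAssemblyOf<Product>())
                    .ExposeConfiguration(cfg => _configuration = cfg)
                    .BuildSessionFactory();
            }

            Session = _sessionFactory.OpenSession();
            // The schema must be re-created on every open connection, because the
            // in-memory database disappears when the connection closes.
            new SchemaExport(_configuration).Execute(false, true, false, Session.Connection, null);
        }

        public void Dispose()
        {
            Session.Dispose();
        }
    }

Your test classes then derive from this, use Session as usual, and never touch a real database server, which is where most of the speed-up comes from.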

Dependencies in Acceptance Testing

A co-worker and I are having a debate. We are on a craptastic legacy project and are slowly adding acceptance tests. He believes that we should be doing the work in the GUI/WatiN, then using a low-level library to query the database directly to get an 'end to end' test, as he puts it.
We are using NHibernate, and I advocate using the GUI/WatiN and then the NHibernate objects to do the assertions in the acceptance tests. He dislikes the dependency on NHibernate in the tests. My assertion was that we have/should have integration tests against the NHibernate objects to make sure they are working with the DB the way we intend, at which point there is no downside to using them in the acceptance tests to assert proper operation. I also think his low-level SQL dependence will make the tests fragile and duplicate business logic in a lot of cases.
Integration testing in our shop basically means it's a single component with a dependency, e.g. fileRepository/FileSystem, Domain-NHibernateObject/Database. Acceptance testing means coming in through the GUI. Unit means all dependencies have been/can be mocked/stubbed out and you've got a pure test in memory with only the method under test actually doing any real work. Let me know if my defs are off.
Anyway, any articles/docs/parchment with opinions on this subject you can point me at would be appreciated.
The only reason you'd ever automate tests is to make things easier to change. If you weren't changing them, you could get away with manual testing. Tying the tests to the database will make the database much harder to change.
Tying them to the NHibernate objects won't help very much either, I'm afraid!
The users of your system won't be using the database or NHibernate. How do they get the benefit (or provide the benefit to other stakeholders)? How will they be able to tell that it's working well? If you can capture that in the Acceptance Tests, you'll be able to change the underlying code and data while still maintaining the value of your application. If someone generates reports from the data, why not generate the same reports and check that their contents are what you expect? If the data is read by another system, can you get a copy of that system and see what it outputs to its users?
Anyway, that's my opinion - keep acceptance tests as close to the business value as possible - and here's a blog post I wrote which might help. You could also try the Behavior Driven Development group on Yahoo, who have a fair bit of experience amongst them.
Oh, and doing integration tests to check that your (N)Hibernate bindings are good is an excellent idea. Saved us on a couple of projects.

Best practices for TDD and reporting

I am trying to become more familiar with test-driven approaches. One drawback for me is that a major part of my code is generated content for reporting (PDF documents, chart images). There is always a complex designer involved, and there is no easy test of correctness. No chance to test just fragments!
Do you know TDD practices for this situation?
Some applications or frameworks are just inherently unit test-unfriendly, and there's really not a lot you can do about it.
I prefer to avoid such frameworks altogether, but if absolutely forced to deal with such issues, it can be helpful to extract all logic into a testable library, leaving only declarative code behind in the framework.
The question I ask myself in these situations is "how do I know I got it right"?
I've written a lot of code in my career, and almost all of it didn't work the first time. Almost every time I've gone back and changed code for a refactoring, feature change, performance, or bug fix, I've broken it again. TDD protects me from myself (thank goodness!).
In the case of generated code, I don't feel compelled to test the code. That is, I trust the code generator. However, I do want to test my inputs to the code generators. Exactly how to do that depends on the situation, but the general approach is to ask myself how I might be getting it wrong, and then to figure out how to verify that I got it right.
Maybe I write an automated test. Maybe I inspect something manually, but that's pretty risky. Maybe something else. It depends on the situation.
To put a slightly different spin on answers from Mark Seemann and Jay Bazuzi:
Your problem is that the reporting front-end produces a data format whose output you cannot easily inspect in the "verify" part of your tests.
The way to deal with this kind of problem is to:
1. Have some very high-level integration tests that superficially verify that your back-end code hooks correctly into your front-end code. I usually call those tests "smoke tests", as in "if I turn on the power and it smokes, it's bad".
2. Find a different way to test your back-end reporting code. Either test an intermediate output data structure, or implement an alternate output front-end that is more test-friendly: HTML, plain text, whatever.
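To make the second point concrete, here is a minimal sketch (ReportBuilder, ReportData and the NUnit usage are illustrative, not tied to any particular reporting tool): assert against the intermediate data structure before it ever reaches the PDF or chart renderer.

    using System.Linq;
    using NUnit.Framework;

    // Made-up types for illustration: the back end produces a plain data
    // structure, and the PDF/chart front end only renders it.
    public class ReportRow { public string Product; public decimal Total; }
    public class ReportData { public ReportRow[] Rows; public decimal GrandTotal; }

    public class ReportBuilder
    {
        public ReportData Build(decimal[] sales)
        {
            var rows = sales.Select((s, i) => new ReportRow { Product = "P" + i, Total = s }).ToArray();
            return new ReportData { Rows = rows, GrandTotal = sales.Sum() };
        }
    }

    [TestFixture]
    public class ReportDataTests
    {
        [Test]
        public void GrandTotal_is_the_sum_of_the_row_totals()
        {
            var data = new ReportBuilder().Build(new[] { 10m, 20m, 12.5m });

            Assert.AreEqual(3, data.Rows.Length);
            Assert.AreEqual(42.5m, data.GrandTotal);
            // The PDF renderer is never involved; formatting is checked separately.
        }
    }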
This is similar to the common problem of testing web apps: it is not possible to automatically test that "the page looks right". But it is sufficient to test that the words and numbers in the page data are correct (using a programmatic browser such as mechanize and a page scraper), and to have a few superficial functional tests (with Selenium or Windmill) if the page is critically dependent on JavaScript.
You could try using a web service for your reporting data source and test that, but you are not going to have unit tests for the rendering. This is the exact same problem you have when testing views. Sure, you can use a web testing framework like Selenium, but you probably won't be practicing true TDD. You'll be creating tests after your code is done.
In short, use common sense. It probably does not make sense to attempt to test the rendering of a report. You can have manual test cases that a tester will have to go through by hand or simply check the reports yourself.
You might also want to check out "How Much Unit Test Coverage Do You Need? - The Testivus Answer"
You could use Acceptance Test Driven Development to replace the unit tests and have validated reports for well-known data used as references.
However, this kind of test does not give as fine-grained a diagnostic as unit tests do; they usually only provide a PASS/FAIL result, and, should the reports change often, the references need to be regenerated and re-validated as well.
Consider extracting the text from the PDF and checking it. This won't give you formatting, however. Some PDF extraction programs can pull out the images if the charts are in the PDF.
Faced with this situation, I try two approaches.
1. The Golden Master approach (sketched at the end of this answer). Generate the report once, check it yourself, then save it as the "golden master". Write an automated test to compare its output with the golden master, and fail when they differ.
2. Automate the tests for the data, but check the format manually. I automate checks for the module that generates the report data, but to check the report format, I generate a report with hardcoded values and check the report by hand.
I strongly encourage you not to generate the full report just to check the correctness of the data on the report. When you want to check the report (not the data), then generate the report; when you want to check the data (not the format), then only generate the data.
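A minimal sketch of the golden-master comparison from the first approach (the report generator, the file name and the NUnit usage are all placeholders):

    using System.IO;
    using NUnit.Framework;

    [TestFixture]
    public class ReportGoldenMasterTests
    {
        // Stand-in for whatever renders the report from fixed, known inputs.
        private static string RenderReport()
        {
            return "Sales report\nP0\t10.0\nP1\t20.0\nTotal\t30.0\n";
        }

        [Test]
        public void Report_matches_the_approved_golden_master()
        {
            // golden-master.txt was generated once, inspected by a human,
            // and committed alongside the tests.
            string expected = File.ReadAllText("golden-master.txt");

            Assert.AreEqual(expected, RenderReport());
        }
    }

The weakness, of course, is that any intentional change to the report format means re-approving the master by hand.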