Unit testing and iPhone development - cocoa-touch

I'm currently using OCUnit, which ships with Xcode 3.2.4, for unit testing my application. My workflow is often to set some breakpoints in a failing unit test in order to quickly inspect the state. I'm using Apple's OCUnit setup:
http://developer.apple.com/library/ios/#documentation/Xcode/Conceptual/iphone_development/135-Unit_Testing_Applications/unit_testing_applications.html
but the setup above gives me some headaches. Apple distinguishes between application tests and logic tests. As I see it:
You cannot debug logic tests. It's as if they're invisibly run when you build your project.
You can debug application tests, but you have to run these on the device and not the simulator (what is the reason for this?)
This means that everything moves rather slowly with my current workflow. Any hints on getting application tests to run on the simulator? Or any pointers to another test framework?
Would e.g. google-toolbox-for-mac work better, either in general or for my specific needs?
Also, general comments on using breakpoints in the unit tests are welcome! :-)
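For reference, the kind of test I'm setting breakpoints in is just a plain OCUnit (SenTestingKit) test case like the sketch below; the class and the MyModel API are made up, but the same code runs as a logic test or an application test depending only on which target it belongs to:

```objc
#import <SenTestingKit/SenTestingKit.h>
#import "MyModel.h"   // hypothetical class under test

@interface MyModelTests : SenTestCase
@end

@implementation MyModelTests

- (void)testTotalPriceAddsLineItems {
    MyModel *model = [[[MyModel alloc] init] autorelease];
    [model addLineItemWithPrice:10];
    [model addLineItemWithPrice:5];
    // This is where I'd like to set a breakpoint and inspect `model`.
    STAssertEquals([model totalPrice], (NSInteger)15,
                   @"total should be the sum of the line items");
}

@end
```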

I have used the Google Toolbox testing rig in the past and it worked fine: it ran both on the Simulator and on the device, and I could debug my tests if I wanted to. Recently I got fed up with bundling so much code with each of my projects and tried the Apple way.
I also find the logic/app tests split weird, especially as I can’t find any way to debug the logic tests now. (And if you’re using some parts of AVFoundation that won’t build for Simulator, you are apparently out of luck with logic tests completely, which seems strange.) One of the pros is that I can run the logic tests quickly during build.
I guess this does not help you that much – the point is that you can debug the tests under GTM. And you might also want to check out this related question.

I know this isn't really a good answer, nor is it completely helpful to your cause, but I've been using NSLog statements to run my unit tests (if nothing is output to the console, then success). When I commented out a test method, that test simply wouldn't run. I found this much more predictable and reliable than OCUnit. I'd much rather use a real unit testing framework, but it was too frustrating to deal with the often strange errors that could occur with OCUnit, as well as the other shortfalls and missing features you describe above.

Functional testing coverage tool on Apache and JBoss

I'm looking for a tool that will give me the code coverage of my functional tests (not the unit-test code coverage). To elaborate: assume the QA team executes its test suites using Selenium. At the end of the run, I would like to know how much of the code (the target code, not the test code base) was invoked/tested.
I found a similar post for .NET, but in my case the web server is Apache and the application server is JBoss:
Coverage analysis for Functional Tests
Also, we have never done this type of analysis before. Is it worth the effort? Has anyone tried it?
I used to do code coverage testing on assembly code and Java code. It is definitely worth it. You will find as you get the coverage close to 100% that it gets more and more difficult to construct tests for the remaining code.
You may even find code that you can prove can never be executed. You will find code on the fringes that has never been tested, and you will be forced to run multi-user tests to force race conditions to occur, assuming that the code takes these into account.
On the assembly code, I had a 3000 line assembly program that took several months to test, but ran for 9 years without any bugs. Coverage testing proved its worth in that case as this code was deep inside a language interpreter.
As far as Java goes I used Clover: http://www.atlassian.com/software/clover/overview
This post: Open source code coverage libraries for JDK7? recommends JaCoCo, but I've never tried it.
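From its documentation, though, the basic recipe looks simple (untested by me, and the paths and package filter below are placeholders): attach the JaCoCo agent to the JBoss JVM, run the Selenium suite against the server, then shut the server down so the agent flushes its data file.

```sh
# Added to JBoss's JVM options (e.g. JAVA_OPTS in run.conf / standalone.conf);
# every server class matching the filter is then instrumented on load.
JAVA_OPTS="$JAVA_OPTS -javaagent:/opt/jacoco/jacocoagent.jar=destfile=/tmp/functional.exec,includes=com.example.*"

# After the Selenium run and a clean shutdown, /tmp/functional.exec plus the
# application's class files go into the JaCoCo Ant task or Maven plugin to
# produce the HTML coverage report.
```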
Thanks for the pointers, @Peter Wooster. I did a lot of digging into the Clover documentation, but unfortunately there is not even a clear indication that functional/integration coverage is supported by Clover, let alone good documentation for it.
Luckily I got to a link within the Clover documentation itself that talks about this and looks promising (thanks to a Google search). I was using Ant, so I didn't even search in the Maven 2 area, but the page talks about Ant as well. :)
https://confluence.atlassian.com/display/CLOVER/Using+Clover+in+various+environment+configurations
I will be trying this and will post an update here soon!

Frontend testing: what and how to test, and what tool to use?

I have been writing tests for my Ruby code for a while, but as a frontend developer I am obviously interested in bringing this into the frontend code I write. There are quite a few different options I have been playing around with:
CasperJS
Capybara & Rspec
Jasmine
Cucumber or just Rspec
What are people using for testing? And beyond that, what do people test? Just JavaScript? Links? Forms? Hardcoded content?
Any thoughts would be greatly appreciated.
I had the same questions a few months ago and, after talking to many developers and doing a lot of research, this is what I found out: you should unit test your JavaScript, write a small set of UI integration tests, and avoid record-and-playback testing tools. Let me explain that in more detail.
First, consider the test pyramid. This is an interesting analogy created by Mike Cohn that will help you decide which kind of testing you should be doing. At the bottom of the pyramid are the unit tests, which are solid and provide fast feedback. These should be the foundation of your test strategy and thus occupy the largest part of the pyramid. At the top, you have the UI tests. Those are the tests that interact with your UI directly, as Selenium does, for example. Although these tests might help you find bugs, they are more expensive and provide very slow feedback. Also, depending on the tool you use, they become very brittle, and you can end up spending more time maintaining these tests than writing actual production code. The service layer, in the middle, includes integration tests that do not require a UI. In Rails, for instance, you would test your REST interface directly instead of interacting with the DOM elements.
Now, back to your question. I found out that I could greatly reduce the number of bugs in my project, which is a web application written in Spring Roo (Java) with tons of JavaScript, simply by writing enough unit tests for the JS. In my application, there is a lot of logic written in JS, and that is the kind of thing I am testing here. I am not concerned about how the page will actually look or whether the animations play as they should. I test whether the modules I write in JS execute the expected logic, whether element classes are correctly assigned and whether error conditions are handled well. For these tests, I've been using Jasmine. It is a great tool: very easy to learn, with nice mocking capabilities, which are called spies. Jasmine-jQuery adds more great functionality if you are using jQuery. In particular, it allows you to specify fixtures, which are snippets of HTML, so you don't have to mock the DOM manually. I have integrated this tool with Maven, and these tests are part of my CI strategy.
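To give you an idea, a spec in my suite looks roughly like this (the cart module and the fixture file are made up for illustration; loadFixtures and toHaveClass come from jasmine-jquery):

```javascript
describe("cart", function () {
  beforeEach(function () {
    // jasmine-jquery fixture: a snippet of the page's real HTML,
    // so the DOM doesn't have to be mocked by hand.
    loadFixtures("cart.html");
  });

  it("marks a row as selected when it is clicked", function () {
    cart.init();
    $("#row-1").click();
    expect($("#row-1")).toHaveClass("selected");
  });

  it("reports bad input through the notifier", function () {
    var notify = spyOn(window, "alert");   // a Jasmine spy
    cart.addItem(null);                    // the error condition under test
    expect(notify).toHaveBeenCalled();
  });
});
```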
You have to be careful with UI tests, especially if you rely on record/playback tools like Selenium. Since the UI changes often, these tests keep breaking, and you will spend a lot of time finding out whether the tests really failed or are just outdated. Also, they don't add as much value as unit tests. Since they need an integrated environment to run, you will most likely run them only after you have finished developing, when the cost of fixing things is higher.
For smoke/regression tests, however, UI tests are very useful. If you need to automate these, then you should watch out for some dangers. Write your tests, don't record them. Recorded tests usually rely on automatically generated XPaths that break with every little change you make to your code. I believe Cucumber is a good framework for writing these tests, and you can use it along with WebDriver to automate the browser interaction. Write your code with tests in mind: in UI tests you will have to make elements easier to find, so you don't have to rely on complex XPaths, and adding class and id attributes where you usually wouldn't becomes frequent. Don't write tests for every small corner case. These tests are expensive to write and take too long to run, so you should focus on the cases that exercise most of your functionality. If you write too many tests at this level, you will probably be testing the same functionality that you have already covered in your unit tests (supposing you have written them).
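As a sketch of what "written, not recorded" looks like in practice (the scenario, selectors and URL are made up; Capybara drives the browser through its Selenium/WebDriver driver):

```ruby
# features/step_definitions/login_steps.rb
# Corresponding feature file:
#   Scenario: Valid user signs in
#     Given I am on the login page
#     When I sign in as "user@example.com"
#     Then I should see my dashboard

Given(/^I am on the login page$/) do
  visit "/login"
end

When(/^I sign in as "([^"]*)"$/) do |email|
  fill_in "email", :with => email        # stable ids/names instead of brittle XPaths
  fill_in "password", :with => "secret"
  click_button "Sign in"
end

Then(/^I should see my dashboard$/) do
  expect(page).to have_content("Dashboard")
end
```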
In my current project I am using Spock and Geb to write the UI tests. I find these tools amazing. They are written in Groovy, which suits my Java project better.
There are lots of options and tools for that, but the choice depends on whether you have a web UI or a desktop app.
Judging from the tools you've mentioned, it's a web UI, so I would suggest Selenium (a.k.a. WebDriver): http://seleniumhq.org/docs/
It supports a variety of languages (Ruby is on the list), it can be run against a variety of browsers, and it's quite easy to use, with lots of tutorials and tips available.
Oh, and it's free, of course :)
Since this post gets a lot of attention, I thought I would post my own answer to my question, as I write lots of tests now and how you test the frontend has moved on a lot.
In terms of FE testing, I spent a lot of time using Karma with Jasmine, although Karma will work nicely with other test suites like Mocha and QUnit. These are great, and Karma lets you run your tests directly in real browsers; the downside is that as your test suite gets large it can become quite slow.
More recently I have moved to Jest, which is much faster, and if you're writing a React app, using enzyme with snapshot testing gives you really good coverage. Talking of coverage, Jest has Istanbul coverage built in and already set up, and its mocking is really simple to use. The downside is that it doesn't test in a browser; it uses something called jsdom, which is fast but has a few quirks. Personally I don't find this a big deal, particularly as I compile my code through webpack/babel, which means cross-browser bugs are fairly few and far between, so it generally isn't an issue if you test manually anyway (and IMO you should).
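For example, a typical Jest + enzyme spec looks something like this (the Button component is made up, and it assumes the usual enzyme adapter and enzyme-to-json serializer are set up in your Jest config):

```javascript
import React from 'react';
import { shallow } from 'enzyme';
import toJson from 'enzyme-to-json';
import Button from './Button';            // hypothetical component under test

describe('Button', () => {
  it('renders consistently', () => {
    const wrapper = shallow(<Button label="Save" />);
    // The first run writes the snapshot; later runs fail if the output changes.
    expect(toJson(wrapper)).toMatchSnapshot();
  });

  it('calls onClick when pressed', () => {
    const onClick = jest.fn();             // Jest's built-in mock function
    const wrapper = shallow(<Button label="Save" onClick={onClick} />);
    wrapper.simulate('click');
    expect(onClick).toHaveBeenCalled();
  });
});
```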
In terms of working within the Rails stack, this is much easier now that the webpacker gem is available, and using npm and Node is generally much more accepted. I would recommend using nvm to manage your Node versions.
While this isn't strictly testing, I would also recommend using linting, as it picks up a lot of issues in your code. For JS I use ESLint with Prettier, and for SCSS/CSS I use stylelint.
In terms of what to test, I think the test pyramid that Carlos talks about is still relevant; after all, the theory doesn't change, just the tools. I would also add: be practical about tests. I would always test, but to what level and with what coverage will depend on the project. It is important to manage your time: spending hours or days testing a short-lifecycle project is hard to justify, whereas for larger or longer-term projects the benefits of a larger test suite are obviously greater.
Anyway I hope that helps people that look at the question.

What is the most modern way to handle Haskell testing?

I only recently started working on my latest Haskell project, and I would really like to test it. I was wondering where testing currently stands with regard to cutting-edge frameworks, test-running procedures and test code organization. It seems that previously tests were just a separate binary that returned a different exit code depending on whether the tests passed or failed - is this still the currently adopted setup, or are there other ways to integrate with cabal now?
QuickCheck may not be cutting edge anymore (at least for Haskell practitioners), but in combination with HUnit it's quite easy to get almost 100% coverage (I use HPC for coverage analysis).
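A minimal sketch of how that wires up as a cabal test suite, assuming the test-framework runner (the rev function stands in for your own code):

```haskell
-- In the .cabal file:
--   test-suite spec
--     type:           exitcode-stdio-1.0
--     main-is:        Spec.hs
--     build-depends:  base, QuickCheck, HUnit, test-framework,
--                     test-framework-quickcheck2, test-framework-hunit

import Test.Framework (defaultMain, testGroup)
import Test.Framework.Providers.QuickCheck2 (testProperty)
import Test.Framework.Providers.HUnit (testCase)
import Test.HUnit (assertEqual)

-- stand-in for the code under test
rev :: [Int] -> [Int]
rev = reverse

main :: IO ()
main = defaultMain
  [ testGroup "rev"
      [ testProperty "involution" (\xs -> rev (rev xs) == (xs :: [Int]))
      , testCase "empty list" (assertEqual "rev []" ([] :: [Int]) (rev []))
      ]
  ]
```

With that in place, `cabal test` builds and runs the binary and reports pass/fail from its exit code.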

App doesn't open when unit tests are running

I've written unit tests using Xcode 4's built-in framework before, and when I run them the default behavior seems to be that the app launches and then the tests run.
I've been moved onto a project that already has tons of unit tests, and I have recently added some of my own. In my unit tests I make a few calls to things in [UIApplication sharedApplication]. All of these tests fail because, for some reason, in this project the app does not open first, so [UIApplication sharedApplication] returns nil.
I'm assuming that because the default behavior is to have the app open, there has to be an option somewhere that has turned this off. I've been poking around in the scheme editor for a few minutes and can't get anything to make the app run. I've tried ticking the "Run" checkbox under the Build tab of the unit-test scheme, with no success.
Edit: So when comparing a new project to this one, I've noticed that under Build Settings -> Unit Testing -> Test Host there is $(BUNDLE_LOADER) defined. I tried assigning that to the new project, and when I hit done it just magically disappears. No error or explanation of any kind.
I have some strange habit of spending a ton of time trying to figure out a solution, giving up and posting on SO, then finding a solution a few minutes later.
The problem was that my app was running logic tests and needed to be running application tests. The Apple documentation is a little sketchy on the subject, but someone else on SO wrote up a pretty good solution:
how to implement application tests in xcode4?
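From memory, the build settings that answer boils down to are roughly these on the unit-test target (MyApp stands in for the host app's product name):

```
BUNDLE_LOADER = $(BUILT_PRODUCTS_DIR)/MyApp.app/MyApp
TEST_HOST = $(BUNDLE_LOADER)
```

With those set, the test bundle is injected into the running app, so calls like [UIApplication sharedApplication] no longer return nil.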

Code Coverage tool for BlackBerry

I'm looking for a code coverage tool that I can use with a BlackBerry application. I'm using J2ME-Unit for Unit Testing and I want to see how much of my code is being covered by my tests.
I've tried using Cobertura for J2ME, but after days of wrestling with it I failed to get any results (I believe the instrumentation is undone by the RAPC compilation). And despite this message, the project seems to be dead.
I've looked at JInjector but the project seems very incomplete. There is little (if any) documentation and although it claims to be able to work with BlackBerry projects, I haven't seen any places where it has been used for that purpose. I've played with the project quite a bit but to no avail.
I've also tried the "Coverage" view in the BlackBerry JDE, even though I use Eclipse for development. The view stays permanently blank, regardless of clicking "Refresh" and running the application from the JDE.
I've looked at most of the tools on this SO thread, but they won't work with J2ME/BlackBerry projects.
Has anyone had any success with any code coverage tools on the BlackBerry? If so, what tools have you used? How have you used them?
If anyone has managed to get JInjector or Cobertura for J2ME to work with a BlackBerry project, what did you have to do to get it working?
I can't speak for Cobertura or JInjector, because I don't know how they collect test coverage probe data. What is critical is how this data is captured (does it need Java runtime support only available in standard Java VMs?) and how it is exported to the test coverage display/report generation tools.
Our SD Java Test Coverage tool instruments your source code; at runtime this produces an array of native Java booleans representing the coverage data, without the need for any special VM support. Normally, as your application exits, this array is exported directly to a file, used by the test coverage display mechanism, by a TCVdump method provided with the test coverage tool.
Java (and the other programming languages used) in embedded systems often requires custom methods to extract the test coverage data. You might need to code a special dump procedure (in Java) to write out that boolean array to an accessible place. Our experience with building such custom dump procedures is that they are generally pretty simple (a few dozen lines); the real trick is deciding how and where to put the data so that it can be easily moved to the target file. Mostly this is just a peculiar pair of copies: the first copies the boolean array to some staging location, and the second writes the staged data into the destination file. (The standard TCVdump method is provided in source form to enable this kind of customization.)
While I haven't looked specifically at BlackBerry, if you can write the data anywhere, you can be pretty much assured you can achieve this. We've had success doing this with other embedded handset systems, such as Symbian.
If you want a complete overview of how to generally instrument code for test coverage following this strategy, see this paper: Branch Coverage for Arbitrary Languages Made Easy
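Purely as an illustration of that two-stage copy (everything here is hypothetical: the probe array, the class name and the output location all depend on how the instrumenter was configured), a BlackBerry/J2ME dump routine need not be much more than this:

```java
import java.io.DataOutputStream;
import javax.microedition.io.Connector;
import javax.microedition.io.file.FileConnection;

public final class CoverageDump {
    // `visited` is the instrumenter's boolean array of probe hits.
    public static void dump(boolean[] visited) {
        try {
            // Stage 1: snapshot the probes so the application can keep running.
            boolean[] copy = new boolean[visited.length];
            System.arraycopy(visited, 0, copy, 0, visited.length);

            // Stage 2: write the snapshot somewhere the desktop tooling can
            // reach it (SD card on a device, the simulator's file system otherwise).
            FileConnection fc = (FileConnection)
                    Connector.open("file:///SDCard/coverage.dat", Connector.READ_WRITE);
            if (!fc.exists()) {
                fc.create();
            }
            DataOutputStream out = new DataOutputStream(fc.openOutputStream());
            out.writeInt(copy.length);
            for (int i = 0; i < copy.length; i++) {
                out.writeBoolean(copy[i]);
            }
            out.close();
            fc.close();
        } catch (Exception e) {
            // Report through whatever logging the application already has.
        }
    }
}
```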
I was actively involved with JInjector while working at Google. We were able to use it to successfully obtain code coverage for BlackBerry applications. The application lifecycle for BlackBerry apps is less predictable than for J2ME, and we found we had to tweak the application code to ensure the coverage data was gathered. I didn't personally work on the BlackBerry apps; several other engineers did. I'd hoped we'd create an example BlackBerry application and make it available on the JInjector site, but events and life got in the way.
If you would be willing to provide a sample BlackBerry app with some unit tests, I'd be willing to spend a few hours trying to help you get the code coverage working. I'm not actively working with either J2ME or BlackBerry (I'm currently working on Android apps when I have time to experiment with mobile), so I'm quite rusty. I have a day job that doesn't involve much mobile test automation, but I continue to work on ways to improve test automation for mobile apps, e.g. http://code.google.com/p/mwta/downloads/list for Android test automation.
I'm julianharty at gmail.com