I am writing a simple app that fetches data from my server at a certain interval, processes it, and saves the results on the device.
Is there anything in Xcode that I can use to make something like unit tests? I would like to run tests (maybe with a cron job) so that Xcode builds the app, puts it on the device, and the tests run, automating the whole procedure.
I know how to compile and build on the machine and deploy to the device, but then I am stuck on the next step, since I do not know how to tell the device to run certain steps (nor do I know how to write them).
Is there anything I can use that would allow me to accomplish this?
I've heard of UIAutomation, but I am not sure it can actually do what I need (automate operations on a device, the way a unit test would); unit tests do not run on the device, so I am basically stuck doing the tests by hand.
Any help is more than welcome!
I am using Xcode 6.3.1 and OS X Server 4. I have a template for UI Automation, and I would like the tests' success or failure to be logged in the bot.
Is that possible?
The short answer is no: there is no way to get the results out of Instruments after the UI Automation run and display them alongside the bot's unit test results.
But...
If you really want to hack something together, you could make it work by using the info given HERE, then parsing the results and modifying the Xcode NodeJS server that displays the data so that it also displays the UI Automation results.
OPINION:
I would say the second option is not worth the time and effort, and that it would be better to use a framework like KIF, which runs UI tests like unit tests, so you can get the results in an Xcode bot.
How does automated testing work, and how can I try it? How do tools for automated testing work, and what do they do?
If possible, please post examples to clarify the ideas.
Any help on this topic is very welcome! Thanks.
Automated testing means writing a script for the tasks that we would otherwise test manually.
The tools are software in which we write a few lines of code in the sequence in which we wish to perform a particular test, then run that script to perform the tests and generate results.
Automated testing saves the hours we would otherwise spend manually repeating a series of test cases.
Probably the best place to start is the xUnit libraries (JUnit, PHPUnit, jsUnit, etc.). If you're interested in testing web interfaces, there's something called Selenium. These projects provide lots of code examples to look at. The tools let you set up some input values, run some code, and then verify that the final output of that code matches your expectations (aka assertions).
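To make the assertion idea concrete, here is a minimal JUnit sketch; the Calculator class and its add method are hypothetical stand-ins for whatever code you want to test:

    import org.junit.Test;
    import static org.junit.Assert.assertEquals;

    // Hypothetical class under test.
    class Calculator {
        int add(int a, int b) {
            return a + b;
        }
    }

    public class CalculatorTest {
        @Test
        public void addShouldSumTwoNumbers() {
            // Set up some input values.
            Calculator calc = new Calculator();
            // Run the code under test.
            int result = calc.add(2, 3);
            // Verify the output matches expectations (the assertion).
            assertEquals(5, result);
        }
    }

Run this through your IDE or build tool and JUnit reports a pass or failure for each @Test method; that report is the piece that gets automated.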
In more sophisticated development teams, these automated tests are run every time new code gets submitted to the project to make sure no new bugs are introduced. As Priyanka mentioned, they save lots of time and eliminate the possibility of human error because the tests are run automatically where they would otherwise be done manually.
I'm sorry for not being more specific. This is a very broad topic of discussion.
I would like to hear about your workflow for developing test cases for Selenium 2 / WebDriver. In JUnit, for example, a developer may start writing a test before he writes the functionality. Then he continuously runs that test against the functionality, possibly in a debugger, modifying code (which gets hot-swapped) to his heart's desire. Is there a more interactive way to write bits and pieces of Selenium code (Java) and see immediate results? Do you use Selenium IDE to assist you?
https://stackoverflow.com/a/92250/374512
Right now, I have a bunch of PageObjects and a bunch of test code that I wrote from scratch. Each time I make a change, I run the test, and it has to go through the sequence of logging into the application and navigating a bunch of pages to get to the point of my test. Starting a Firefox profile cold takes at least 5 seconds for the WebDriver, and navigation takes another few seconds. How do you code and test a Selenium test against a piece of UI functionality in an iterative manner? I want to be able to write a line of code and execute it against a UI in a particular state, a state that took some long sequence of steps to get to.
In my experience, there is no fast way to get to a certain state using a browser. Because Selenium tests start from scratch on each execution, there is always that start-up time to get to the point you want to test, which makes browser integration testing inherently slow. If there were a way to execute the tests against the UI in a particular state, that would also speed up any regression tests, but I don't believe that's possible, or the proper way to test a feature.
I've found it sufficient to program the actions in the page object alongside your development of the UI functionality, and then write your script to use those actions, similar to what you have been doing; see the sketch below. As for when to execute it, I would run it locally right before you commit to your repo and continuous integration environment. I think of running my tests as a checkpoint, or a prerequisite to committing code.
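As a rough sketch of that page-object style (assuming Selenium 2's Java bindings; the locators and page names here are hypothetical):

    // LoginPage.java
    import org.openqa.selenium.By;
    import org.openqa.selenium.WebDriver;

    // Page object for the login screen: every test reuses these actions
    // instead of repeating the raw element lookups.
    public class LoginPage {
        private final WebDriver driver;

        public LoginPage(WebDriver driver) {
            this.driver = driver;
        }

        // Logs in and hands back the page object for the next screen.
        public HomePage loginAs(String user, String password) {
            driver.findElement(By.id("username")).sendKeys(user);
            driver.findElement(By.id("password")).sendKeys(password);
            driver.findElement(By.id("login-button")).click();
            return new HomePage(driver);
        }
    }

    // HomePage.java
    import org.openqa.selenium.By;
    import org.openqa.selenium.WebDriver;

    // Hypothetical page object for the screen you actually want to test.
    public class HomePage {
        private final WebDriver driver;

        public HomePage(WebDriver driver) {
            this.driver = driver;
        }

        public String welcomeText() {
            return driver.findElement(By.id("welcome")).getText();
        }
    }

A test then composes the actions, e.g. new LoginPage(driver).loginAs("user", "secret").welcomeText(), so when the UI flow changes you update one page object instead of every script.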
Does anyone know a good way to mix and merge multiple testing frameworks together so that they can all be run in batch and return a solid overall total of which tests failed for which frameworks and suites/specs?
So let's say my testing setup for a particular project looks like this:
I'm using Rails (Ruby) and using RSpec to test that.
I'm also using Cucumber with my Rails application.
I'm using MochaJS with the Testacular runner for JavaScript testing.
I'm using Jasmine to test for some NodeJS applications that I'm using as well.
Now, to test each of these groups, I would have to launch each framework separately, run the "run tests" operation for each, and then tally up the results and figure out which tests failed and which didn't.
Does anyone know of a tool that is designed to do this?
You probably need build automation software to perform all these tasks together.
Whenever one of your test processes fails, you'll get detailed feedback.
As you're developing a Ruby application, maybe Buildr is the best choice, but you could just as well use Ant or Rant...
You can find a complete list of tools here:
http://en.wikipedia.org/wiki/List_of_build_automation_software
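If you want to see the basic shape of what such a tool does, here is a rough sketch of a batch runner in Java; the four commands are placeholders, so substitute whatever actually invokes your RSpec, Cucumber, Mocha, and Jasmine suites:

    import java.util.LinkedHashMap;
    import java.util.Map;

    public class TestBatchRunner {
        public static void main(String[] args) throws Exception {
            // Placeholder commands -- replace with your real invocations.
            Map<String, String[]> suites = new LinkedHashMap<String, String[]>();
            suites.put("RSpec",    new String[] {"bundle", "exec", "rspec"});
            suites.put("Cucumber", new String[] {"bundle", "exec", "cucumber"});
            suites.put("Mocha",    new String[] {"testacular", "start", "--single-run"});
            suites.put("Jasmine",  new String[] {"jasmine-node", "spec"});

            int failures = 0;
            for (Map.Entry<String, String[]> suite : suites.entrySet()) {
                // Run the suite, inheriting stdout/stderr so its output is visible.
                Process p = new ProcessBuilder(suite.getValue()).inheritIO().start();
                int exitCode = p.waitFor();
                // Test runners conventionally exit non-zero on failure.
                System.out.println(suite.getKey() + ": "
                        + (exitCode == 0 ? "PASSED" : "FAILED (exit " + exitCode + ")"));
                if (exitCode != 0) failures++;
            }
            System.out.println(failures + " of " + suites.size() + " suites failed.");
            System.exit(failures == 0 ? 0 : 1);
        }
    }

A real build tool gives you essentially this same loop, plus dependency handling and proper reports, which is why it's worth using one instead.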
I'm currently using OCUnit, which ships with Xcode 3.2.4, for unit testing my application. My workflow is often to set some breakpoints in a failing unit test in order to quickly inspect the state. I'm using Apple's OCUnit setup:
http://developer.apple.com/library/ios/#documentation/Xcode/Conceptual/iphone_development/135-Unit_Testing_Applications/unit_testing_applications.html
but the setup above gives me some headaches. Apple distinguishes between Application tests and Logic tests. As I see it:
You cannot debug logic tests. It's as if they're invisibly run when you build your project.
You can debug application tests, but you have to run these on the device and not the simulator (what is the reason for this?)
This means that everything moves kind of slowly with my current workflow. Any hints on getting app tests to run on the simulator? Or any pointers to another test framework?
Would, e.g., google-toolbox-for-mac work better in general, or for my specific needs?
Also, general comments on using breakpoints in the unit tests are welcome! :-)
I have used the Google Toolbox testing rig in the past and it worked fine: it ran both on the Simulator and the device, and I could debug my tests if I wanted to. Recently I got fed up with bundling so much code with each of my projects and tried the Apple way.
I also find the logic/app tests split weird, especially as I can’t find any way to debug the logic tests now. (And if you’re using some parts of AVFoundation that won’t build for Simulator, you are apparently out of luck with logic tests completely, which seems strange.) One of the pros is that I can run the logic tests quickly during build.
I guess this does not help you that much – the point is that you can debug the tests under GTM. And you might also want to check out this related question.
I know this isn't really a good answer, nor is it completely helpful to your cause, but I've been using NSLog to run my unit tests (if nothing is output to the console, then success). When I comment out a test method, that test simply doesn't run. I found this much more predictable and reliable than OCUnit. I'd much rather use a real, true unit tester, but it was too frustrating to deal with the often strange errors that could occur with OCUnit, and also the other shortfalls and lack of features you describe above.