How do you run one XCTestCase subclass's test method from inside another XCTestCase subclass's test method in Xcode 7?

I have a test suite for my point-of-sale app.
I have an XCTestCase subclass called "MathTest" which does various unit tests on math functions of the app. It also has a test method testTillMath that checks the register to see if the transaction totals all match up to expected values.
Then I have another XCTestCase subclass called "TicketBuildingTest" which has a test method called testCreateTickets that draws from an Excel spreadsheet data source, using whatever data is in the spreadsheet to assemble a specific batch of transactions into a special Core Data store specific to the test environment.
The testTillMath method will only succeed if testCreateTickets has first been run successfully.
How can I make testTillMath get run every time after testCreateTickets has finished?
I tried to #include MathTest.m from inside of TicketBuildingTest so that I could call testTillMath at the end of testCreateTickets, but Xcode won't let me do that include. The build fails with the error "linker command failed with exit code 1" due to "duplicate symbol _OBJC_CLASS_$_MathTest".
I realize there is likely to be more than one way to skin this cat; in PHPUnit I can specify a set of test methods to be run all in a row, in a certain order, running each test after the one before it completes. How can I do that in Xcode?

You've deliberately introduced "test pollution", i.e. a situation where the success or failure of one test depends on the success or failure of another test. This is a bad practice. Tests should be independent of each other. That way you know that when a test fails, it's failing due to the functionality within your app that it specifically tests, so you can track down that functionality and fix it. Debugging test pollution is a real ordeal, and you should avoid it at all costs.
Some testing frameworks (RSpec, for one; Cedar, for another) randomize the order in which tests are run, precisely to discourage the kind of test coupling you describe.
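If the underlying need is that testTillMath requires the ticket data to exist, the usual fix is to pull the ticket-building code out of testCreateTickets into a shared helper and call it from MathTest's setUp, so each test case builds (and tears down) its own fixture instead of depending on another test having run first. A minimal sketch in XCTest; TestTicketFactory and its methods are hypothetical names, not from the original project:

// TestTicketFactory.h: hypothetical shared helper used by both test cases
#import <Foundation/Foundation.h>

@interface TestTicketFactory : NSObject
+ (void)createTicketsFromSpreadsheet;   // builds the transactions in the test Core Data store
+ (void)reset;                          // removes everything the factory created
@end

// MathTest.m
#import <XCTest/XCTest.h>
#import "TestTicketFactory.h"

@interface MathTest : XCTestCase
@end

@implementation MathTest

- (void)setUp {
    [super setUp];
    // Build the fixture this test depends on, instead of relying on
    // TicketBuildingTest having run first.
    [TestTicketFactory createTicketsFromSpreadsheet];
}

- (void)tearDown {
    [TestTicketFactory reset];
    [super tearDown];
}

- (void)testTillMath {
    // ... assertions against the transaction totals ...
}

@end

TicketBuildingTest can call the same factory and then assert on what it produced, so both test cases stay independent and can run in any order.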

Related

Test assertions inside test doubles?

Is it good practice to write an EXPECT(something) inside a test double (e.g. spy or mock) method, to ensure the test double is used in a specific way for testing?
If not, what would be the preferred solution?
If you write a true Mock (as per the definition in xUnit Test Patterns), this is exactly what defines this kind of test double. It is set up with expectations about how it will be called and therefore also includes the assertions. That's also how mocking frameworks produce mock objects under the hood. See also the definition from xUnit Test Patterns:
How do we implement Behavior Verification for indirect outputs of the SUT?
How can we verify logic independently when it depends on indirect inputs from other software components?
Replace an object the system under test (SUT) depends on with a test-specific object that verifies it is being used correctly by the SUT.
Here, indirect outputs means that you don't want to verify that the method under test returns some value, but rather that something happens inside the method being tested that is behaviour relevant to callers of the method. For instance, that while executing some method the correct behaviour led to an expected important action, like sending an email or sending a message somewhere. The mock would be the doubled dependency that also verifies itself that this really happened, i.e. that the method under test really called the method of the dependency with the expected parameter(s).
A spy, on the other hand, just records things of interest that happened to the doubled dependency. Interrogating the spy about what happened (and sometimes also how often), and then judging whether that was correct by asserting on the expected events, is the responsibility of the test itself. So a mock is always also a spy, with the addition of the assertion (expectation) logic. See also Uncle Bob's blog post The Little Mocker for a great explanation of the different types of test doubles.
TL;DR
Yes, the mock includes the expectations (assertions) itself; the spy just records what happened and lets the test itself ask the spy and assert on the expected events.
Mocking frameworks implement mocks as explained above, since they all follow the xUnit patterns.
mock.Verify(p => p.Send(It.IsAny<string>()));
If you look at the above Moq example (C#), you see that the mock object itself is configured to perform the expected verification in the end. The framework makes sure that the mock's verification methods are executed. A hand-written mock would be set up the same way, and then you would call the verification method on the mock object yourself.
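For comparison, a hand-written mock in Objective-C (the main context of this thread) could look like the sketch below; the MessageSender protocol and all names are hypothetical. The expectation is configured up front, and the test calls verify on the mock at the end:

#import <Foundation/Foundation.h>

// Hypothetical collaborator the SUT depends on.
@protocol MessageSender <NSObject>
- (void)send:(NSString *)message;
@end

// Hand-written mock: records calls and carries its own verification logic.
@interface MockMessageSender : NSObject <MessageSender>
@property (nonatomic) NSUInteger expectedSendCount;   // the expectation, set up front
@property (nonatomic) NSUInteger actualSendCount;
- (void)verify;   // the mock asserts its own expectation
@end

@implementation MockMessageSender
- (void)send:(NSString *)message {
    self.actualSendCount += 1;
}
- (void)verify {
    NSAssert(self.actualSendCount == self.expectedSendCount,
             @"expected %lu call(s) to -send:, got %lu",
             (unsigned long)self.expectedSendCount,
             (unsigned long)self.actualSendCount);
}
@end

A spy would be the same object without -verify: it would only record actualSendCount, and the test itself would read that value and assert on it.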
Generally, you want to put all EXPECT statements inside individual tests to make your code readable.
If you want to enforce certain things on your test stub/spy, it is probably better to use exceptions or static asserts, because your test usually uses them as a black box. If it uses them in an unintended way, your code will either not compile, or it will throw and give you the full stack trace, which will also cause your test to fail (so you can catch the misuse).
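As a small sketch of that "fail loudly on misuse" idea (all names hypothetical): a stub that supports only the call the test cares about and throws on anything else, so unintended use surfaces as a failing test with a stack trace instead of silently wrong behaviour:

#import <Foundation/Foundation.h>

// Hypothetical price-lookup dependency of the point-of-sale code.
@protocol PriceSource <NSObject>
- (double)priceForSKU:(NSString *)sku;
- (void)reloadCatalog;
@end

@interface StubPriceSource : NSObject <PriceSource>
@end

@implementation StubPriceSource
- (double)priceForSKU:(NSString *)sku {
    return 9.99;   // canned value: the only behaviour this stub supports
}
- (void)reloadCatalog {
    // The test treats the stub as a black box, so misuse should fail loudly.
    [NSException raise:NSInternalInconsistencyException
                format:@"StubPriceSource does not support -reloadCatalog"];
}
@end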
For mocks, however, you have full control over their use and you can be very specific about how they are called and used inside each test. For example, in Google Test, using GMock matchers, you can say something like:
EXPECT_CALL(turtle, Forward(Ge(100)));
which means: expect Forward to be called on the mock object turtle with a parameter greater than or equal to 100. Any other value will cause the test to fail.
See this video for more examples on GMock matchers.
It is also very common to check general things in a test fixture (e.g. in SetUp or TearDown). For example, this sample from Google Test enforces that each test finishes within a certain amount of time, and the EXPECT statement is in TearDown rather than in each individual test.
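A minimal sketch of that fixture-level pattern (not the actual Google Test sample, just the same idea written against the gtest API): the fixture records a start time in SetUp, and the single EXPECT on the elapsed time lives in TearDown, so every test in the fixture inherits the check:

#include <chrono>
#include <gtest/gtest.h>

class QuickTest : public ::testing::Test {
protected:
    void SetUp() override {
        start_ = std::chrono::steady_clock::now();
    }
    void TearDown() override {
        auto elapsed = std::chrono::steady_clock::now() - start_;
        auto ms = std::chrono::duration_cast<std::chrono::milliseconds>(elapsed).count();
        // One shared expectation for every test in this fixture.
        EXPECT_LE(ms, 5000) << "test took longer than 5 seconds";
    }

private:
    std::chrono::steady_clock::time_point start_;
};

TEST_F(QuickTest, TotalsAddUpQuickly) {
    // ... test body; the timing check in TearDown applies automatically ...
}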

Extension lifecycle and state in JUnit 5

The JUnit 5 User Guide contains the following:
Usually, an extension is instantiated only once.
It's not very clear: when can an extension be instantiated multiple times? I'm maintaining a test suite with multiple extensions, and every extension stores its state in class fields. Everything works fine, but can I rely on this, or should I refactor this code to use ExtensionContext.Store?
The full sentence in the user guide reads: "Usually, an extension is instantiated only once. So the question becomes relevant: how do you keep the state from one invocation of an extension to the next?"
I think this sentence is meant to highlight that the same instance of an extension might be reused for multiple tests. I doubt that the instance would be replaced in the middle of a test.
Multiple instances of an extension can be created when a test uses programmatic extension registration (with @RegisterExtension). In that case, the test class creates its own instance of the extension, and JUnit cannot reuse this instance in other test classes. But an instance created by declarative extension registration (with @ExtendWith) might be used for multiple test classes.

gtest - why does one test affect behavior of other?

Currently I have a gtest suite with a test fixture class that has some member variables and functions.
I have a simple test, as well as more complex tests later on. If I comment out the complex tests, my simple test runs perfectly fine. However, when I include the other tests (even though I'm using gtest_filter to only run the first test), I start getting segfaults. I know it's impossible to debug without posting my code, but I guess I wanted to know more at a high level how this could occur. My understanding is that TEST_F constructs/destructs a new object every time it is run, so how could it be possible that the existence of a test affects another? Especially if I'm filtering, shouldn't the behavior be exactly the same?
TEST_F does construct and destruct a new fixture object for each test (taking "object" to mean an instance of the test fixture class): for every test, googletest creates a fresh fixture instance, calls SetUp, runs the test body, calls TearDown, and then destroys the instance. The fixture constructor and destructor therefore run once per test, not once per fixture.
What does persist across tests is static and global state: static members of the fixture (including anything set up in SetUpTestCase/SetUpTestSuite), globals, and singletons. Also note that static initializers in the other test files still run even when those tests are filtered out, and simply compiling them in changes the binary's memory layout, so a latent memory bug can start crashing in a test that previously seemed fine.
But because you did not provide an MCVE, we cannot assume further.
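A small sketch (not the question's code) that makes the actual lifecycle visible: run it and the constructor, SetUp, TearDown, and destructor each print once per test, while the static member is the only thing that survives from one test to the next:

#include <cstdio>
#include <gtest/gtest.h>

class LifecycleTest : public ::testing::Test {
protected:
    LifecycleTest()           { std::printf("ctor\n"); }
    ~LifecycleTest() override { std::printf("dtor\n"); }
    void SetUp() override     { std::printf("SetUp\n"); }
    void TearDown() override  { std::printf("TearDown\n"); }

    static int shared_;   // static state is what actually persists across tests
};

int LifecycleTest::shared_ = 0;

TEST_F(LifecycleTest, First)  { std::printf("shared_ = %d\n", shared_++); }
TEST_F(LifecycleTest, Second) { std::printf("shared_ = %d\n", shared_++); }

If your fixture (and the code it exercises) has no such static or global state, the crash is more likely a memory bug whose symptoms move around as the binary changes.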

Spock extension's start method invoked multiple times

I have a bunch of functional tests based on Spock and Geb. I want to perform some actions before and after the execution of these tests, so I created a global extension and added the required functionality to its start() and stop() methods. But the problem is that the start/stop methods are invoked before/after each Spock spec, even though the Spock documentation (http://spockframework.org/spock/docs/1.1/all_in_one.html#_global_extensions) states:
start() This is called once at the very start of the Spock execution
stop() This is called once at the very end of the Spock execution
Am I doing something wrong, or is the Spock documentation incorrect about the behaviour of these methods?
@MantasG Spock implements a JUnit Runner and does not control how it is executed. Global extensions are managed in a RunContext which is kept in a ThreadLocal. If Surefire uses multiple threads to execute tests, this will create multiple instances of RunContext, each with its own list of global extensions. If you are using an EmbeddedSpecRunner, this would also create a new isolated context.
This context will stay around until the thread dies. It would be more accurate to remove the context once the test run has finished, but the JUnit Runner SPI doesn't provide an adequate hook. That said, since most environments fork a new JVM for each test run, this shouldn't be much of a problem in practice.
Depending on what you want to do there are other ways:
You can use a JUnit RunListener and its testRunStarted/testRunFinished hooks. Note that you need to register this via Surefire.
If you really want to run something only once, you could use Failsafe instead of Surefire and use the pre- and post-integration-test phases.
You could hack something together using a static field with a counter for the start/stop calls: perform your start action when the counter is 0 and your stop action once the counter drops back to 0. Of course you'll need to make this thread safe.
Note that Surefire also supports forking multiple JVMs, and this will also impact options 1 and 3.
I believe Spock is invoked at the start of every test Spec in your test suite, so start and stop run with every one of those executions.
I think you might want to take a look at the Fixture Methods found in the same doc you linked in the question:
http://spockframework.org/spock/docs/1.1/all_in_one.html#_specification

Custom performance profiler for Objective C

I want to create a simple-to-use and lightweight performance profiling framework for Objective-C. My goal is to measure the bottlenecks of my application.
Just to mention that I am not a beginner and I am aware of Instruments/Time Profiler. This is not what I am looking for. Time Profiler is a great tool but is too developer-oriented. I want a framework that can collect performance data from QA or pre-production users, and even be incorporated into a real production environment to gather real data.
The main part of this framework is the ability to measure how much time is spent in an Objective-C message (I am going to profile only Objective-C messages).
The easiest way is to start a timer at the beginning of a message and stop it at the end. This is the simplest approach, but its disadvantage is that it is too tedious and error-prone: if a message has more than one return path, it requires adding the "stop timer" code before each return.
I am thinking of using method swizzling (just to note that I am aware Apple is not happy with method swizzling, but these profiled builds will be used internally only and will not be uploaded to the App Store).
My idea is to mark each message I want to profile and to automatically generate the code for the swizzled method (maybe using macros). When started, the application will swizzle the original selector with the generated one. The generated method will just start a timer, call the original method and then stop the timer. So in general the swizzled method will be just a wrapper around the original one.
One of the problems with the above idea is that I cannot think of an easy way to automatically generate the methods to use for swizzling.
So I will greatly appreciate it if anyone has ideas on how to automate the whole process. The perfect scenario is to write just one line of code anywhere, mentioning the class and the selector I want to profile, and have the rest generated automatically.
I would also be very thankful for any other ideas (besides method swizzling) on how to measure the performance.
I came up with a solution that works pretty well for me. First, just to clarify: I was unable to find an easy (and fast) way to automatically generate the appropriate swizzled methods for arbitrary selectors (i.e. with arbitrary arguments and return values) using only the selector name. So I had to specify the argument types and the return type for each selector, not only the selector name. In reality it should be relatively easy to create a small tool that parses all source files and automatically detects the argument types and return type of each selector we want to profile (and prepares the swizzled methods), but right now I don't need such an automated solution.
So right now my solution combines the above method swizzling idea with some C++ code and macros to automate and minimize the coding.
First, here is the simple C++ class that measures time:
class PerfTimer
{
public:
    PerfTimer(PerfProfiledDataCounter* perfProfiledDataCounter);
    ~PerfTimer();

private:
    uint64_t _startTime;
    PerfProfiledDataCounter* _perfProfiledDataCounter;
};
I am using C++ so that the destructor is executed when the object goes out of scope. The idea is to create a PerfTimer at the beginning of each swizzled method; it will take care of measuring the elapsed time for that method.
PerfProfiledDataCounter is a simple struct that counts the number of executions and the total elapsed time (so the average time spent can be computed).
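The constructor and destructor bodies are not shown in the post; a possible implementation, assuming PerfProfiledDataCounter has an execution-count field and a total-time field (those field names are illustrative), could use mach_absolute_time for cheap timestamps:

// Implementation file for the PerfTimer declared above (file names illustrative).
#include <cstdint>
#include <mach/mach_time.h>

// Illustrative layout; the post does not show the real struct.
struct PerfProfiledDataCounter
{
    uint64_t executionCount;
    uint64_t totalTimeNanos;
};

PerfTimer::PerfTimer(PerfProfiledDataCounter* perfProfiledDataCounter)
    : _startTime(mach_absolute_time())
    , _perfProfiledDataCounter(perfProfiledDataCounter)
{
}

PerfTimer::~PerfTimer()
{
    uint64_t elapsedTicks = mach_absolute_time() - _startTime;

    // Convert Mach ticks to nanoseconds (a real profiler would cache the timebase).
    mach_timebase_info_data_t timebase;
    mach_timebase_info(&timebase);
    uint64_t elapsedNanos = elapsedTicks * timebase.numer / timebase.denom;

    _perfProfiledDataCounter->executionCount += 1;
    _perfProfiledDataCounter->totalTimeNanos += elapsedNanos;
}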
Also, for each class I'd like to profile, I create a category named "__Performance_Profiler_Category" that conforms to the "__Performance_Profiler_Marker" protocol. To make this easier, I use macros that automatically create such categories. I also have a set of macros that take a selector name, return type, and argument types and create the profiled selector for each selector name.
For all of the above tasks, I've created a set of macros to help me. I also have a single file with the .mm extension that registers all classes and all selectors I'd like to profile. On app start, I use the runtime to retrieve all classes that conform to the "__Performance_Profiler_Marker" protocol (i.e. the registered ones) and search them for selectors that are marked for profiling (these selectors start with a predefined prefix). Note that this .mm file is the only file that needs the .mm extension; there is no need to change the file extension of each class I want to profile.
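A sketch of what that registration pass might look like (the protocol name is the one above; the "__pp_" prefix and the function name are illustrative): walk all registered classes, keep those conforming to the marker protocol, find the generated selectors by their prefix, and exchange implementations:

#import <Foundation/Foundation.h>
#import <objc/runtime.h>

@protocol __Performance_Profiler_Marker <NSObject>   // declared by the profiler
@end

static void PPSwizzleMarkedClasses(void)
{
    unsigned int classCount = 0;
    Class *classes = objc_copyClassList(&classCount);

    for (unsigned int i = 0; i < classCount; i++) {
        Class cls = classes[i];
        if (!class_conformsToProtocol(cls, @protocol(__Performance_Profiler_Marker)))
            continue;

        unsigned int methodCount = 0;
        Method *methods = class_copyMethodList(cls, &methodCount);
        for (unsigned int m = 0; m < methodCount; m++) {
            NSString *name = NSStringFromSelector(method_getName(methods[m]));

            // Generated wrappers start with a predefined prefix, e.g. "__pp_".
            if (![name hasPrefix:@"__pp_"])
                continue;

            SEL originalSelector = NSSelectorFromString([name substringFromIndex:5]);
            Method original = class_getInstanceMethod(cls, originalSelector);
            if (original)
                method_exchangeImplementations(original, methods[m]);
        }
        free(methods);
    }
    free(classes);
}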
Afterwards, the code swizzles the original selectors with the profiled ones. In each profiled method, I just create a PerfTimer and call through to the original implementation via the swizzled selector.
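And this is roughly the shape of one generated wrapper, for a hypothetical TillRegister class with a -totalWithTax: method (both names are made up; the category name is the author's). After the exchange, the prefixed selector resolves to the original implementation, so calling it from inside the wrapper invokes the real method while the PerfTimer on the stack times it:

// Compiled as Objective-C++ (.mm) so it can use the C++ PerfTimer.
@interface TillRegister : NSObject                 // hypothetical app class being profiled
- (double)totalWithTax:(double)rate;
@end

@interface TillRegister (__Performance_Profiler_Category) <__Performance_Profiler_Marker>
- (double)__pp_totalWithTax:(double)rate;          // generated wrapper selector
@end

@implementation TillRegister (__Performance_Profiler_Category)

static PerfProfiledDataCounter g_totalWithTaxCounter;   // stats storage for this selector

- (double)__pp_totalWithTax:(double)rate
{
    PerfTimer timer(&g_totalWithTaxCounter);   // RAII: starts the clock
    // After method_exchangeImplementations, __pp_totalWithTax: points at the
    // original implementation, so this call is not recursive.
    return [self __pp_totalWithTax:rate];
}   // timer goes out of scope here; its destructor records the elapsed time

@end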
In brief, that is my idea, and it turned out to work pretty smoothly.