Is it possible to mix jqwik @Property methods and JUnit 5 @Test methods in the same test file?

I'm porting some Python code that uses Hypothesis, and I'm trying to keep the sources as close as possible. The Python test file has both parameterized and non-parameterized methods. If I mark them all as @Property, the non-parameterized (and so identical) methods get called 1000 times.
I'm just learning jqwik, so I may be missing something easy. If not, I'll just break them into two files. Thanks.

For example-based tests jqwik has the annotation @Example, which runs your test method just a single time. Use it for your non-parameterized tests.
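A minimal sketch of what this can look like (the class, method names and properties are made up; it assumes jqwik's net.jqwik.api annotations are available on the test classpath):

import net.jqwik.api.Example;
import net.jqwik.api.ForAll;
import net.jqwik.api.Property;

class MixedTests {

    // Non-parameterized: @Example runs exactly once, like a plain unit test.
    @Example
    boolean emptyStringHasLengthZero() {
        return "".length() == 0;
    }

    // Parameterized: @Property runs with many generated inputs (1000 tries by default).
    @Property
    boolean concatenationLengthIsSumOfLengths(@ForAll String a, @ForAll String b) {
        return (a + b).length() == a.length() + b.length();
    }
}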

Related

gtest - why does one test affect the behavior of another?

Currently I have a gtest test fixture with some member variables and functions.
I have a simple test, as well as more complex tests later on. If I comment out the complex tests, my simple test runs perfectly fine. However, when I include the other tests (even though I'm using gtest_filter to only run the first test), I start getting segfaults. I know it's impossible to debug without posting my code, but I wanted to know at a high level how this could occur. My understanding is that TEST_F constructs/destructs a new object every time it is run, so how could the mere existence of a test affect another? Especially if I'm filtering, shouldn't the behavior be exactly the same?
TEST_F does construct and destruct a fresh "object" (an instance of your test fixture class) for each test.
For every test in the fixture, gtest runs the fixture constructor, then SetUp, then the test body, then TearDown, then the destructor; the next test gets a brand-new fixture instance.
So ordinary fixture members cannot leak from one test to another. What does persist across tests is static and global state: static members of the fixture, global objects, and anything prepared in SetUpTestSuite/SetUpTestCase. Globals defined alongside the other tests are initialized at program start even when those tests are excluded with --gtest_filter, which is one way merely compiling the other tests in can change behavior.
But because you did not provide an MCVE, we cannot say more than that.

How do you run one XCTestCase subclass's test method from inside another XCTestCase subclass's test method in Xcode 7?

I have a test suite for my point-of-sale app.
I have an XCTestCase subclass called "MathTest" which does various unit tests on math functions of the app. It also has a test method testTillMath that checks the register to see if the transaction totals all match up to expected values.
Then I have another XCTestCase subclass called "TicketBuildingTest" which has a test method called testCreateTickets that draws from an Excel spreadsheet data source, using whatever data is in the spreadsheet to assemble a specific batch of transactions into a special Core Data store specific to the test environment.
The testTillMath method will only succeed if testCreateTickets has first been run successfully.
How can I make testTillMath get run every time after testCreateTickets has finished?
I tried to #include MathTest.m from inside of TicketBuildingTest so that I could call testTillMath at the end of testCreateTickets, but Xcode won't let me do that include. The build fails with the error "linker command failed with exit code 1" due to "duplicate symbol _OBJC_CLASS_$_MathTest".
I realize there is likely to be more than one way to skin this cat; in PHPUnit I can specify a set of test methods to be run all in a row, in a certain order, running each test after the one before it completes. How can I do that in Xcode?
You've deliberately introduced "test pollution", i.e. a situation where the success or failure of one test is dependent on the success or failure of another test. This is a bad practice. Tests should be independent of each other. That way you know that when a test fails, it's failing due to the functionality within your app that it's specifically testing—so you can track down that functionality and fix it. Debugging test pollution is a real ordeal, and you should avoid it at all costs.
Some testing frameworks (RSpec, for one; Cedar for another) randomize the order in which tests are run, precisely to discourage the kind of test coupling you describe.

testNG : find which test classes will run before any of them are

I am working with TestNG, where I run an external test framework, receive the result data and assert on it. To run the external test framework I need to set up a specification of which tests should be run. To generate this specification I need to know which tests are selected in the TestNG .xml file.
The only way I could think of doing this is to parse the file manually, but I am hoping for a better solution than that.
Thanks for any answers!
//Flipbed
Edit:
My colleague found two solutions to the problem:
1. In @Factory and @DataProvider annotated methods it is possible to add a parameter of type ITestContext. Through that parameter one can call .getAllTestMethods().
2. Create a new class that implements IMethodInterceptor and override its intercept method. The method receives a List of all the methods that TestNG will run.
If someone has any other suggestions, feel free to add them.
//Flipbed
The solution we used was number 2 in my edit: we implemented IMethodInterceptor and used the method list together with the ITestContext both to see which tests will run and to modify that list.
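A minimal sketch of option 2 (the class name and the logging are made up; it assumes TestNG's standard listener API, with the interceptor registered via @Listeners or a <listeners> entry in testng.xml):

import java.util.List;

import org.testng.IMethodInstance;
import org.testng.IMethodInterceptor;
import org.testng.ITestContext;

public class RunPlanInterceptor implements IMethodInterceptor {

    @Override
    public List<IMethodInstance> intercept(List<IMethodInstance> methods, ITestContext context) {
        // At this point TestNG has already applied the includes/groups from the XML,
        // so 'methods' is the list of tests that will actually run.
        for (IMethodInstance instance : methods) {
            System.out.println("Will run: "
                    + instance.getMethod().getTestClass().getName()
                    + "." + instance.getMethod().getMethodName());
        }
        // Return the list unchanged; filter or reorder it here to modify the run.
        return methods;
    }
}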

Custom performance profiler for Objective C

I want to create a simple-to-use and lightweight performance profiling framework for Objective-C. My goal is to measure the bottlenecks of my application.
Just to mention that I am not a beginner and I am aware of Instruments/Time Profiler. That is not what I am looking for. Time Profiler is a great tool but it is too developer-oriented. I want a framework that can collect performance data from QA or pre-production users, and that can even be incorporated into a real production environment to gather real data.
The main part of this framework is the ability to measure how much time was spent in an Objective-C message (I am going to profile only Objective-C messages).
The easiest way is to start a timer at the beginning of a message and stop it at the end. This is the simplest way, but its disadvantage is that it is tedious and error-prone: if a message has more than one return path, the "stop timer" code has to be added before each return.
I am thinking of using method swizzling (just to note that I am aware that Apple is not happy with method swizzling, but these profiled builds will be used internally only and will not be uploaded to the App Store).
My idea is to mark each message I want to profile and to automatically generate the code for the swizzling method (maybe using macros). When started, the application will swizzle the original selector with the generated one. The generated one will just start a timer, call the original method and then stop the timer. So in general the swizzled method will be just a wrapper around the original one.
One of the problems with the above idea is that I cannot think of an easy way to automatically generate the methods to use for swizzling.
So I will greatly appreciate it if anyone has ideas on how to automate the whole process. The perfect scenario is to write just one line of code anywhere, mentioning the class and the selector I want to profile, and have the rest generated automatically.
I would also be very thankful for any other ideas (besides method swizzling) of how to measure the performance.
I came up with a solution that works pretty well for me. First, just to clarify: I was unable to find an easy (and fast) way to automatically generate the appropriate swizzled methods for arbitrary selectors (i.e. with arbitrary arguments and return values) using only the selector name. So I had to specify the argument types and the return value for each selector, not only the selector name. In principle it should be relatively easy to create a small tool that parses all source files, detects the argument types and return value of each selector we want to profile, and prepares the swizzled methods, but right now I don't need such an automated solution.
So right now my solution combines the above method-swizzling idea with some C++ code and macros to automate and minimize the coding.
First, here is the simple C++ class that measures the time:
class PerfTimer
{
public:
    // Records the start time on construction; results are accumulated
    // into the given counter.
    PerfTimer(PerfProfiledDataCounter* perfProfiledDataCounter);
    // Computes the elapsed time on destruction and adds it to the counter.
    ~PerfTimer();

private:
    uint64_t _startTime;
    PerfProfiledDataCounter* _perfProfiledDataCounter;
};
I am using C++ so that the destructor is executed when the object goes out of scope. The idea is to create a PerfTimer at the beginning of each swizzled method; it will take care of measuring the elapsed time for that method.
PerfProfiledDataCounter is a simple struct that counts the number of executions and the total elapsed time (so it can also report the average time spent).
Also, for each class I'd like to profile, I create a category named "__Performance_Profiler_Category" that conforms to the "__Performance_Profiler_Marker" protocol. To make creating these easier I use some macros that generate such categories automatically. I also have a set of macros that take a selector name, return type and argument types and create the profiling selectors for each selector name.
For all of the above tasks I've created a set of helper macros. I also have a single file with the .mm extension where I register all classes and selectors I'd like to profile. On app start, I use the runtime to retrieve all classes that conform to the "__Performance_Profiler_Marker" protocol (i.e. the registered ones) and search for selectors that are marked for profiling (these selectors start with a predefined prefix). Note that this .mm file is the only file that needs the .mm extension; there is no need to change the file extension of each class I want to profile.
Afterwards the code swizzles the original selectors with the profiled ones. In each profiled one, I just create a PerfTimer and call the swizzled method.
In brief, that is my idea, and it turned out to work pretty smoothly.

BDD and outside-in approach, how to start with testing

All,
I'm trying to grasp all the outside-in TDD and BDD stuff and would like you to help me to get it.
Let's say I need to implement Config Parameters functionality working as follows:
there are parameters in a file and in a database
both groups have to be merged into one parameter set
parameters from the database should override those from the files
Now I'd like to implement this with an outside-in approach, and I'm stuck right at the beginning. Hope you can help me get going.
My questions are:
What test should I start with? So far I just have something like this:
class ConfigurationAssemblerTest {
    @Test
    public void itShouldResultWithEmptyConfigurationWhenBothSourcesAreEmpty() {
        ConfigurationAssembler assembler = new ConfigurationAssembler();
        // what to put here ?
        Configuration config = assembler.getConfiguration();
        assertTrue(config.isEmpty());
    }
}
I don't know yet what dependencies I'll end up with. I don't know how I'm going to write all that stuff yet, and so on.
What should I put in this test to make it valid? Should I mock something? If so, how do I define those dependencies?
If you could show me the path to take with this (a rough plan, some test skeletons, what to do and in what order), that would be super cool. I know it's a lot of writing, so maybe you can point me to some resources? All the resources about the outside-in approach I've found cover only simple cases with no dependencies, etc.
And two questions about the mocking approach:
if mocking is about interactions and their verification, does it mean that there should be no state assertions in such tests (only mock verifications)?
if we replace something that doesn't exist yet with a mock just for the test, do we replace it later with the real version?
Thanks in advance.
Ok, that's indeed a lot of stuff. Let's start from the end:
Mocking is not only about 'interactions and their verification'; that would be only one half of the story. In fact, you're using it in two different ways:
Checking whether a certain call was made, and possibly also checking the arguments of the call (this is the 'interactions and verification' part).
Using mocks to replace dependencies of the class under test (CUT), possibly setting up return values on the mock objects as required. Here, you use mock objects to isolate the CUT from the rest of the system (so that you can handle the CUT as an isolated 'unit', which more or less runs in a sandbox).
I'd call the first form dynamic or 'interaction-based' unit testing; it uses the mocking framework's call-verification methods. The second one is the more traditional, 'static' kind of unit testing, which asserts a fact. A small sketch of both styles follows below.
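To make the difference concrete, here is a small, hedged JUnit 5 + Mockito sketch (the class and method names are made up, and List<String> merely stands in for an arbitrary dependency):

import static org.junit.jupiter.api.Assertions.assertEquals;
import static org.mockito.Mockito.mock;
import static org.mockito.Mockito.verify;
import static org.mockito.Mockito.when;

import java.util.List;

import org.junit.jupiter.api.Test;

class TwoUsesOfMocksTest {

    @Test
    void interactionStyle_verifiesThatACallWasMade() {
        @SuppressWarnings("unchecked")
        List<String> collaborator = mock(List.class);

        collaborator.add("hello");

        // 'Interactions and their verification': assert that the call happened
        // with the expected argument.
        verify(collaborator).add("hello");
    }

    @Test
    void stateStyle_stubsReturnValuesAndAssertsAResult() {
        @SuppressWarnings("unchecked")
        List<String> collaborator = mock(List.class);
        when(collaborator.size()).thenReturn(3);

        // The mock isolates the code under test; the assertion checks a fact (state).
        assertEquals(3, collaborator.size());
    }
}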
You shouldn't ever have the need to 'replace something that doesn't exist yet' (apart from the fact that this is, logically speaking, impossible). If you feel like you need to do this, it is a clear indication that you're trying to take the second step before the first.
Regarding your notion of an 'outside-in approach': to be honest, I've never heard of it before, so it doesn't seem to be a very prominent concept, and apparently not a very helpful one either, because it seems to confuse things more than it clarifies them (at least for the moment).
Now on to your first question (What test should I start with?):
First things first: you need some mechanism to read the configuration values from the file and from the database, and this functionality should be encapsulated in separate helper classes (you need, among other things, a clean separation of concerns to do TDD effectively; this is usually totally underemphasized when introducing TDD/BDD). I'd suggest an interface (e.g. IConfigurationReader) with two implementations (one for the file stuff and one for the database, e.g. FileConfigurationReader and DatabaseConfigurationReader). In TDD (not necessarily with a BDD approach) you would also have corresponding test fixtures. These fixtures would cover test cases like 'What happens if the underlying data store contains no/invalid/valid/other special values?'. This is what I'd advise you to start with.
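A minimal sketch of that suggested seam, under the assumption that the interface exposes a single method returning key/value pairs (the method name and the properties-file format are my invention, not part of the question):

import java.io.IOException;
import java.io.InputStream;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.Collections;
import java.util.HashMap;
import java.util.Map;
import java.util.Properties;

// Hypothetical reader interface: one implementation per configuration source.
interface IConfigurationReader {
    // Key/value pairs from this source; an empty map when the source holds nothing.
    Map<String, String> readConfigurationValues();
}

// File-based implementation; the database variant would implement the same
// interface but query a table instead of parsing a properties file.
class FileConfigurationReader implements IConfigurationReader {
    private final Path file;

    FileConfigurationReader(Path file) {
        this.file = file;
    }

    @Override
    public Map<String, String> readConfigurationValues() {
        if (!Files.exists(file)) {
            return Collections.emptyMap();
        }
        Properties props = new Properties();
        try (InputStream in = Files.newInputStream(file)) {
            props.load(in);
        } catch (IOException e) {
            throw new IllegalStateException("Cannot read config file " + file, e);
        }
        Map<String, String> values = new HashMap<>();
        for (String key : props.stringPropertyNames()) {
            values.put(key, props.getProperty(key));
        }
        return values;
    }
}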
Only then, with the reading mechanism in operation and your ConfigurationAssembler class having the necessary dependencies, would you start to write tests for and implement the ConfigurationAssembler class. Your test could then look like this (because I'm a C#/.NET guy I don't know the appropriate Java tools, so I'm using pseudo-code here):
class ConfigurationAssemblerTest {
    @Test
    public void itShouldResultWithEmptyConfigurationWhenBothSourcesAreEmpty() {
        IConfigurationReader fileConfigMock = new [Mock of FileConfigurationReader];
        fileConfigMock.[WhenAskedForConfigValues].[ReturnEmpty];

        IConfigurationReader dbConfigMock = new [Mock of DatabaseConfigurationReader];
        dbConfigMock.[WhenAskedForConfigValues].[ReturnEmpty];

        ConfigurationAssembler assembler = new ConfigurationAssembler(fileConfigMock, dbConfigMock);

        Configuration config = assembler.getConfiguration();

        assertTrue(config.isEmpty());
    }
}
Two things are important here:
The two reader objects are injected into the ConfigurationAssembler from outside via its constructor. This technique is called Dependency Injection. It is a very helpful and important architectural principle, which generally leads to a better and cleaner architecture (and greatly helps with unit testing, especially when using mock objects).
The test now asserts exactly what its name states: the ConfigurationAssembler returns ('assembles') an empty config when the underlying reading mechanisms each return an empty result set. And because we're using mock objects to provide the config values, the test runs in complete isolation. We can be sure that we're testing only the correct functioning of the ConfigurationAssembler class (namely its handling of empty values) and nothing else.
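For reference, here is a rough translation of the pseudo-code above into JUnit 5 with Mockito, assuming the hypothetical IConfigurationReader.readConfigurationValues() method from the sketch further up; the real names in your code base may differ:

import static org.junit.jupiter.api.Assertions.assertTrue;
import static org.mockito.Mockito.mock;
import static org.mockito.Mockito.when;

import java.util.Collections;

import org.junit.jupiter.api.Test;

class ConfigurationAssemblerTest {

    @Test
    void itShouldResultWithEmptyConfigurationWhenBothSourcesAreEmpty() {
        // Both readers are replaced by mocks that report an empty source.
        IConfigurationReader fileConfigMock = mock(IConfigurationReader.class);
        when(fileConfigMock.readConfigurationValues()).thenReturn(Collections.emptyMap());

        IConfigurationReader dbConfigMock = mock(IConfigurationReader.class);
        when(dbConfigMock.readConfigurationValues()).thenReturn(Collections.emptyMap());

        // Dependencies are injected through the constructor.
        ConfigurationAssembler assembler = new ConfigurationAssembler(fileConfigMock, dbConfigMock);

        Configuration config = assembler.getConfiguration();

        assertTrue(config.isEmpty());
    }
}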
Oh, and maybe it's easier for you to start with TDD instead of BDD, because BDD is only a subset of TDD and builds on top of the concepts of TDD. So you can only do (and understand) BDD effectively when you know TDD.
HTH!