ConstraintConfiguration not used by ConstraintVerifier in testing class? - optaplanner

I am currently working with dynamic weights configured in a ConstraintConfiguration class, but the class doesn't seem to be used while my tests execute. It is, however, used when the solver actually runs.
For example, one of the weights in the configuration class is 16. When a test expects a score of 1, the match is not multiplied by the weight and the result is 1. When actually solving, the weight is applied and the result is 16, as expected.
I am guessing that I'm missing something in my testing class.
Do you have to tell the ConstraintVerifier or the testing methods in the testing class that there is a ConstraintConfiguration? Or am I missing something else?
Thanks in advance.
My current ConstraintVerifier:
ConstraintVerifier<ExamScheduleConstraintProvider, ExamSchedule> constraintVerifier =
        ConstraintVerifier.build(new ExamScheduleConstraintProvider(), ExamSchedule.class, Exam.class);
Example of test code that won't pass:
constraintVerifier.verifyThat(ExamScheduleConstraintProvider::TestConstraint)
        .given(firstExam, secondExam)
        .penalizesBy(16);

The ConstraintVerifier offers two verification methods:
verifyThat(constraintFunction);
verifyThat();
verifyThat(constraintFunction) accepts a method reference to an individual constraint and verifies that the constraint, given certain entities and facts or an entire solution, penalizes or rewards by the expected match weight. Importantly, this verifier ignores constraint weights completely.
verifyThat() does not accept any argument and checks the score impact (including constraint weight coming from ConstraintConfiguration) of all the constraints defined in the ConstraintProvider implementation for the provided entities and facts or an entire solution.
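For example, here is a minimal sketch of both styles against the asker's classes. HardSoftScore, the -16 soft impact, and the test method shapes are assumptions for illustration, not the asker's actual domain:

import org.optaplanner.core.api.score.buildin.hardsoft.HardSoftScore;
import org.optaplanner.test.api.score.stream.ConstraintVerifier;

class ExamScheduleConstraintTest {

    private final ConstraintVerifier<ExamScheduleConstraintProvider, ExamSchedule> constraintVerifier =
            ConstraintVerifier.build(new ExamScheduleConstraintProvider(), ExamSchedule.class, Exam.class);

    // Unit style: checks the raw match weight only; the configured
    // weight of 16 is ignored by design.
    void verifySingleConstraint(Exam firstExam, Exam secondExam) {
        constraintVerifier.verifyThat(ExamScheduleConstraintProvider::TestConstraint)
                .given(firstExam, secondExam)
                .penalizesBy(1);
    }

    // Integration style: scores all constraints together and applies the
    // weights from the solution's ConstraintConfiguration.
    void verifyWholeSolution(ExamSchedule solution) {
        constraintVerifier.verifyThat()
                .givenSolution(solution)
                .scores(HardSoftScore.ofSoft(-16));
    }
}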

In addition to Radovan's response, I will provide some rationale for why ConstraintVerifier works the way it does.
For individual constraints, it does not take ConstraintConfiguration into account. That is because ConstraintConfiguration is solution-specific, and we want to be able to test constraints individually and independently. In this case, we only test match weights; match weights exist regardless of the solution, and regardless of constraint configuration. This would be your typical unit test. (If you prefer another way of thinking about this, consider the fact that constraint configuration is optional and most do not, in fact, use it.)
For integration testing, we support testing an entire solution. And when you test all the constraints as a whole, that is where we take your ConstraintConfiguration into account.

Related

when to use Property-Based Testing?

I am trying to learn Property-Based Testing (PBT). I think I know how to implement it, but when should I apply PBT?
For example, in this case I am testing whether the function getCurrentName() returns the expected name. Should I randomize this test?
@Test
public void getNameTest() {
    assertEquals(nameProxy, proxyFoto.getCurrentName());
}
Your question is so generic that it cannot have a specific answer. I suggest you look at some of the stuff that has been written about how to come up with good properties, e.g. https://johanneslink.net/how-to-specify-it/
As for your concrete example, whether writing a property for the current name makes sense depends on a few things:
How does the name get into the proxy object? Is there a reasonable chance that the behaviour differs depending on the shape, length, encoding, etc. of the name?
What is the name being used for? Should it be normalised, formatted, shortened or processed in any specific way?
Properties and PBT are about finding and falsifying assumptions about the behaviour of your code under test. If there is nothing that you might get wrong, any form of automated testing can be considered unnecessary. If there are quite a few edge cases and paths that could show unexpected behaviour, then PBT looks like a worthwhile approach.
As a pragmatic recommendation: start by translating some of your example tests into properties and see which ones are pulling their weight. Then try to add additional properties, e.g. by using ideas from the article I linked to.
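To make that translation concrete, here is a minimal sketch using jqwik (Johannes Link's PBT library for the JUnit platform). ProxyFoto and its setName method are assumed names; substitute your real construction code:

import net.jqwik.api.ForAll;
import net.jqwik.api.Property;

class ProxyFotoProperties {

    // Property: any name that goes into the proxy comes back unchanged.
    // Returning a boolean lets jqwik report a false result as a falsified case.
    @Property
    boolean currentNameRoundTrips(@ForAll String name) {
        ProxyFoto proxyFoto = new ProxyFoto(); // assumed constructor
        proxyFoto.setName(name);               // assumed setter
        return name.equals(proxyFoto.getCurrentName());
    }
}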

Aconcagua: Base Units - would equality comparison be better than identity?

I learned in another question that BaseUnits must be singletons. This has a number of disadvantages, including making client code a bit harder to work with (you have to store the singleton somewhere and provide access to it) and making code harder to serialize, e.g. via Fuel.
What is the benefit of this constraint? (I'm assuming it's so that users are safe if they load two Aconcagua clients which define e.g. BaseUnit subclass: #Pound differently)
In practice, is it worth it, or would it be better to treat BaseUnits like value objects? Especially in light of the fact that the paper itself uses expressions like 1 dollar, which already precludes units with the same name.
I wrote something about it in the other post.
Units are not really Singletons (as Singleton is defined in the gang of four book), and the idea is not to create a class per unit but to instantiate BaseUnit or DerivedUnit, etc., per unit you need.
So, for example you can create:
meter := BaseUnit named: 'meter'.
centimeter := ProportionalDerivedUnit
    basedUnit: meter
    convertionFactor: 1/100
    named: 'centimeter'.
And then write:
1*meter = (100*centimeter)
which will return true.
As I posted in the other question, #= is not overridden, so the default (identity comparison) is used.
So, you can do one of two things to make equality work:
Keep well-known objects (using global variables or a global root object to access them, as Chalten does)
Modify #= in Unit to make two units equal if they have the same name (or create a subclass with this definition of #=)
The main reasons to use the default #= implementation are:
It is the most generic solution
Units (in real life) are unique, so it makes sense for them to be unique in the model
It makes sense to have one "meter" object instead of creating a new one each time you need it
The main disadvantage is, as you noticed, that it can be confusing the first time you encounter it; but again, you only need a way to access the object and the problem is solved.
Regarding Fuel, the problem can be solved by saving the root object that defines all units (like TimeUnit in Chalten) or by implementing option 2 :-)
Hope this helps! Let me know if you have more questions!

Writing additional code to perform unit-tests involving ivars OK or should be avoided at all costs? [closed]

Still a novice when it comes to writing unit tests, I often come across cases where I'm left scratching my head as to what the right way to do things is. While writing tests for a planned design, I came across one of these dandruff-inducing instances. My design:
One ViewController sends a message to a data fetcher class based on the user's input. (The code below has been changed to protect the innocent.)
- (void)userPushedLocalBusinessButtons {
    [_businessDataFetcher fetchLocalData];
}

- (void)userPushedWorldwideBusinessButtons {
    [_businessDataFetcher fetchWorldwideData];
}
The data format is identical for these actions; it's the location the data fetcher should collect the data from that changes. So, in the BusinessDataFetcherClass I have these methods:
- (void)fetchLocalData {
    _dataAddress = @"localData.json";
    [self fetchData];
}

- (void)fetchWorldwideData {
    _dataAddress = @"worldwideData.json";
    [self fetchData];
}
The fetchData method fetches the data asynchronously and sends a notification with the collected data when done. Now, I would like to write unit tests checking that the ivar _dataAddress has changed when fetchLocalData or fetchWorldwideData has been executed.
This is clearly not possible without altering the code. Some would say that this could easily be remedied by making _dataAddress a public property, and that's one solution. Another would be to create a method returning the value of the _dataAddress ivar. I am not entirely happy with either alternative, as both force me to change the code just for the tests rather than improve the overall quality of the code-base itself.
I landed on the second alternative and included a method -(NSString *)dataAddress;. My question (as stated in the headline) is whether this is OK. Is my design the problem? Obviously the number one goal of TDD is to avoid regressions, but I believe improving overall code quality is also an important goal. Is adding the occasional fluff to be expected?
I would like to write unit tests checking that the ivar _dataAddress has changed when fetchLocalData or fetchWorldwideData has been executed.
When you write a unit test, it should test the external behavior of a class. The _dataAddress ivar is an implementation detail of the class. If you change the way fetching data works, the class might still work while the unit tests fail. That makes your unit test annoying, not helpful.
fetch the data asynchronously and send a notification with the collected data when done.
It sounds like this is the external behavior of those methods in that class, and it is what you should write your test to check. I don't know Objective-C, so here's a pseudo-code example:
setup expected local data (preferably with a mock)
call fetchLocalData on BusinessDataFetcherClass
wait a little bit
check that local data is populated on ViewController
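In Java-ish terms (an illustration only, since I don't know Objective-C either), that "wait" step is usually made deterministic with a latch rather than a sleep. BusinessDataFetcher and setListener are assumed stand-ins for the notification mechanism:

import java.util.concurrent.CountDownLatch;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicReference;

class BusinessDataFetcherAsyncTest {

    void fetchLocalDataNotifiesWithLocalData() throws InterruptedException {
        BusinessDataFetcher fetcher = new BusinessDataFetcher(); // assumed class
        CountDownLatch done = new CountDownLatch(1);
        AtomicReference<Object> received = new AtomicReference<>();

        // Register as the observer for the "done" notification
        // (standing in for NSNotificationCenter).
        fetcher.setListener(data -> {
            received.set(data);
            done.countDown(); // release the waiting test thread
        });

        fetcher.fetchLocalData();

        // Block until the notification arrives instead of sleeping blindly.
        if (!done.await(2, TimeUnit.SECONDS)) {
            throw new AssertionError("no notification received");
        }
        // ... assert that received.get() equals the expected local data ...
    }
}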
Is my design the problem?
Your design here does make writing tests a little harder, though it isn't a big problem. In particular, that "wait" which needs to happen in the test. The design problem that your tests are pointing out is that your class has at least two responsibilities: fetching data and managing asynchrony. If you split those responsibilities apart, they would each be easier to test.
Obviously the number one goal of TDD is to avoid regression, but I believe improving the overall code-quality is also an important goal. Is adding the occasional fluff to be expected?
I don't think you need more fluff in this case, but it does happen with unit tests sometimes. When you write code with tests, you end up with two clients of your code: the test code and the production code. It's the need to satisfy two clients in different contexts that forces some of this "fluff" in, or forces some design changes. The good news is that when you have a design that can easily satisfy two clients, you will probably be able to satisfy a third and a fourth fairly easily, should the need arise. To me, this effect is one of the most important benefits of TDD.
You don't want to test the internal state of your class -- this makes no sense. The only thing you care about is what your class is doing in its interactions with the outside world (whether info is going inwards or outwards or both ways).
To put it another way: once you've written a test for your class, rewriting the implementation (internals) of your class while maintaining its visible behaviour should not break your tests. If it does, your tests are broken IMO.
A good way of testing the behaviour of your class is to use Mock objects -- for example, see OCMock for iOS.
Mock objects allow you to test the behaviour of your target class. In order to do this, you need to write your target class in a certain way: in your example, you need to be able to pass in a network provider class, rather than have your class go off and use a certain provider which is hardcoded (re-usable components should never configure themselves, but be configured). Once you've set things up this way, your unit test class can pass in a mock network service provider which checks that the correct URL is being hit.
Mock objects might seem convoluted at first glance, but you're testing the correct thing -- the behaviour of your target class -- without polluting it with any special testing method etc.
Note also that making your code amenable to testing is also making it more amenable to re-use: your test cases become a second 'user' of your code.
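Here is the same idea sketched in Java with Mockito rather than OCMock. The NetworkProvider seam and the constructor-injected fetcher are assumptions illustrating the "configured, not self-configuring" point:

import static org.mockito.Mockito.mock;
import static org.mockito.Mockito.verify;

class BusinessDataFetcherBehaviorTest {

    // Assumed seam: the fetcher asks a provider for data instead of
    // hardcoding how the fetch happens.
    interface NetworkProvider {
        void fetch(String address);
    }

    // Assumed shape of the refactored class under test.
    static class BusinessDataFetcher {
        private final NetworkProvider provider;

        BusinessDataFetcher(NetworkProvider provider) {
            this.provider = provider; // configured from outside
        }

        void fetchLocalData() {
            provider.fetch("localData.json");
        }
    }

    void fetchLocalDataHitsTheLocalAddress() {
        NetworkProvider provider = mock(NetworkProvider.class);
        new BusinessDataFetcher(provider).fetchLocalData();
        // Behavioral assertion: the correct address was requested,
        // without exposing any internal state.
        verify(provider).fetch("localData.json");
    }
}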
I also am not an Objective-C developer, but I think the reason you're posting this is that you're listening to your code, and your code is telling you that something isn't quite right.
I would ask what you're doing with the results of the fetchData call. I suspect you're rendering the data somewhere. If iOS is rendering it, then there's probably a callback somewhere that you can assert on rather than asserting on the instance variable. If you're updating the UI from within the class, introducing an Observer to decouple your UI from the code that fetches the data will make it easier to test. You can then have your test register as the receiver and assert your state change there.
Hope that helps!
Brandon

How to test an object when I can't access state?

I have a factory class that creates an object based on a parameter it receives. The parameter is an identifier that tells it which object it should create.
Its first step is to use the data access layer to pull information for the object.
Its next step is to do some cleansing / transformations on the data.
Finally it creates the required object and returns it.
I want to ensure that the cleansing / transformation step went OK but the object that it returns does not expose any state so I'm not sure how to test it easily.
The data access layer and the database structure can't change because they have to work with legacy code.
I could test it further on in the system after the object gets used but that would lead to big tests that are hard to maintain.
I've also thought of exposing the state of the object, or putting the responsibility in another class and testing that, but both those options seem like I'm changing the system for testing.
Any thoughts on other ways to test something like this?
It sounds to me like you are trying to test too much within a unit test.
This is a symptom of your Unit trying to do too much.
You are trying to do three things here.
Get data from the data access layer.
Clean the data.
Build the object.
To fix this, I would move each of these responsibilities into its own unit (class / method), as you have suggested. Then you can test each unit on its own.
You are hesitant to do this as you don't want to change the system for testing. However, the advantage of unit testing is that it highlights flaws in the design. Not only are you changing the system for testing, you are improving it and making it more granular and thus more maintainable and reusable.
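For instance, a minimal Java sketch of that split, with all names (DataAccess, Cleanser, RawRecord, Product, etc.) assumed for illustration. The cleansing step now lives in its own class and can be tested without touching the legacy data layer or the returned object's state:

interface DataAccess {
    RawRecord load(String id);           // step 1: fetch from the legacy layer
}

class Cleanser {
    CleanRecord cleanse(RawRecord raw) { // step 2: cleansing / transformations
        // normalization logic lives here, testable on its own
        return new CleanRecord(raw.value().trim());
    }
}

class Factory {
    private final DataAccess dataAccess;
    private final Cleanser cleanser;

    Factory(DataAccess dataAccess, Cleanser cleanser) {
        this.dataAccess = dataAccess;
        this.cleanser = cleanser;
    }

    Product create(String id) {          // step 3: build the object
        return new Product(cleanser.cleanse(dataAccess.load(id)));
    }
}

record RawRecord(String value) {}
record CleanRecord(String value) {}
record Product(CleanRecord data) {}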
Your factory object is trying to do too much here. I recommend refactoring your code to give the responsibility of cleansing the data to another object, and testing that object's behaviour.
I've also thought of exposing the state of the object, or putting the responsibility in another class and testing that, but both those options seem like I'm changing the system for testing.
That's right, you are changing the system for testing. And it's a good thing. This is an example of Test Driven Design driving out a better design exhibiting looser coupling and higher cohesion, by forcing you down the path of giving your classes fewer responsibilities. (Ideally, each class would only have just one responsibility.) That's one of the key benefits of TDD, so don't fight it.
I know two ways to achieve this:
- in Java, by using reflection;
- (the best, IMO) programming to interfaces, so the test can supply an implementation through which it can access the data.
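A sketch of the reflection approach, to be used sparingly since it couples the test to private structure. The target object and its field name are whatever your factory actually produces:

import java.lang.reflect.Field;

class ReflectionPeek {

    // Reads a private field off any object for assertion purposes.
    static Object readPrivateField(Object target, String fieldName) throws Exception {
        Field field = target.getClass().getDeclaredField(fieldName);
        field.setAccessible(true); // bypass private access for the test only
        return field.get(target);
    }
}

A test could then assert on readPrivateField(product, "data"), at the cost of breaking whenever the field is renamed.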

What is a mock and when should you use it?

I just read the Wikipedia article on mock objects, but I'm still not entirely clear on their purpose. It appears they are objects created by a test framework when the actual object would be too complex or unpredictable (you know with 100% certainty what the values of the mock object are, because you fully control them).
However, I was under the impression that all testing is done with objects of known values, so I must be missing something. For example, in a course project, we were tasked with a calendar application. Our test suite consisted of event objects whose values we knew exactly, so we could test the interactions between multiple event objects, various subsystems, and the user interface. I'm guessing these are mock objects, but I don't know why you wouldn't always do this, because without objects of known values you can't test a system.
A mock object is not just an object with known values. It is an object that has the same interface as a complex object that you cannot use in test (like a database connection and result sets), but with an implementation that you can control in your test.
There are mocking frameworks that allow you to create these objects on the fly and in essence allow you to say something like: Make me an object with a method foo that takes an int and returns a bool. When I pass 0, it should return true. Then you can test the code that uses foo(), to make sure it reacts appropriately.
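In Java, for example, Mockito expresses exactly that request. A minimal sketch; the Service interface is an assumed stand-in for the complex collaborator:

import static org.mockito.Mockito.mock;
import static org.mockito.Mockito.when;

class MockSketch {

    // Assumed interface standing in for the complex collaborator.
    interface Service {
        boolean foo(int value);
    }

    void example() {
        // "Make me an object with a method foo that takes an int and returns
        // a bool. When I pass 0, it should return true."
        Service service = mock(Service.class);
        when(service.foo(0)).thenReturn(true);

        // Code under test can now call foo() against a controlled collaborator.
        boolean result = service.foo(0); // true, as stubbed
    }
}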
Martin Fowler has a great article on mocking:
http://martinfowler.com/articles/mocksArentStubs.html
Think of the classic case of having client and server software. To test the client, you need the server; to test the server, you need the client. This makes unit testing pretty much impossible - without using mocks. If you mock the server, you can test the client in isolation and vice versa.
The point of the mock is not to duplicate the behaviour of the thing it's mocking, though. It is more to act as a simple state machine whose state changes can be analysed by the test framework. So a client mock might generate test data, send it to the server and then analyse the response. You expect a certain response to a specific request, and so you can test whether you get it.
I agree with everything @Lou Franco says and you should definitely read the excellent Martin Fowler article on test doubles that @Lou Franco points you to.
The main purpose of any test double (fake, stub or mock) is to isolate the object under test so that your unit test is only testing that object (not its dependencies and the other types it collaborates or interacts with).
An object that provides the interface your object depends on can be used in place of the actual dependency, and expectations can be set that certain interactions will occur. This can be useful, but there is some controversy around state-based vs. interaction-based testing. Overuse of mock expectations will lead to brittle tests.
A further reason for test doubles is to remove dependencies on databases, file systems, or other types that are expensive to set up or that perform time-consuming operations. This means you can keep the time required to unit test the object you're interested in to a minimum.
Here's an example: if you're writing code that populates a database you may want to check if a particular method has added data to the database.
Setting up a copy of the database for testing has the problem that if you assume there are no records before the call to the tested method and one record after, then you need to roll back the database to a previous state, thus adding to the overhead for running the test.
If you assume there is only one more record than before, it may clash with a second tester (or even a second test in the same code) connecting to the same database, thus causing dependencies and making the tests fragile.
The mock allows you to keep the tests independent of each other and easy to set up.
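Sketched with Mockito under assumed names, where a Repository seam replaces the live database:

import static org.mockito.Mockito.mock;
import static org.mockito.Mockito.verify;

class ImporterTest {

    // Assumed seam: the code under test writes through this interface
    // instead of talking to the database directly.
    interface Repository {
        void insert(String record);
    }

    // Assumed code under test.
    static class Importer {
        private final Repository repository;

        Importer(Repository repository) {
            this.repository = repository;
        }

        void importRecord(String record) {
            repository.insert(record);
        }
    }

    void example() {
        Repository repository = mock(Repository.class);
        new Importer(repository).importRecord("row-1");

        // No database, no rollback, no clashes with other tests:
        // just check that the insert happened.
        verify(repository).insert("row-1");
    }
}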
This is just one example - I'm sure others can supply more.
I agree 100% with the other contributors on this topic, especially with the recommendation for the Martin Fowler article.
You might be interested in our book, see http://www.growing-object-oriented-software.com/. It's in Java, but the ideas still apply.