Why is assertEquals(double,double) no longer deprecated in JUnit 5?
Jupiter has both an assertion method that compares two doubles exactly and one that compares them with a given delta. You usually want the latter whenever any kind of calculation is involved that might introduce rounding errors. Sometimes, however, you want to make sure that a calculation has an exact result; that's when `assertEquals(double, double)` comes in handy.
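To make the two use cases concrete, here is a minimal Jupiter sketch (written in Kotlin; the class name and values are just illustrative):

```kotlin
import org.junit.jupiter.api.Assertions.assertEquals
import org.junit.jupiter.api.Test

class DoubleAssertionTest {

    @Test
    fun `exact comparison is fine when no rounding can occur`() {
        // 0.5 is exactly representable in binary, so the sum is exact.
        assertEquals(1.0, 0.5 + 0.5)
    }

    @Test
    fun `delta comparison is the right tool when rounding errors are possible`() {
        // 0.1 + 0.2 is not exactly 0.3 in binary floating point,
        // so the exact overload would fail here.
        assertEquals(0.3, 0.1 + 0.2, 1e-9)
    }
}
```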
The danger of having this method is that people might confuse the two use cases and use exact comparison when delta comparison would be the better choice. The designers of JUnit 4 considered that risk serious enough to steer users away from the method; the developers of Jupiter made a different judgement call.
I'm using both of these ways to get the simple class name in Kotlin, but I don't know which is best.
I mostly use it for logging, and only within the current class, which is why I use this in the examples below.
Can someone help me, please?
this::class.simpleName
OR:
this.javaClass.simpleName
It's probably more important to pick one method and be consistent. I doubt there is an established best practice on this particular matter (though you never know). It's also unlikely that either of these performs better or worse than the other.
That being said, this.javaClass will only be available when running on the JVM: https://kotlinlang.org/api/latest/jvm/stdlib/kotlin.jvm/java-class.html
Whereas this::class, I believe, is available regardless of whether you are targeting the JVM, JS or Native. As a rule of thumb, I tend to favor whichever option is common across all targets when there is a choice.
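For illustration, here are both in a typical logging situation (the OrderService class is made up):

```kotlin
class OrderService {

    fun process() {
        // Works on all Kotlin targets (JVM, JS, Native); returns String?,
        // and is null for anonymous objects.
        println("${this::class.simpleName}: processing")    // prints "OrderService: processing"

        // JVM only; returns a non-null String (empty for anonymous classes).
        println("${this.javaClass.simpleName}: processing") // prints "OrderService: processing"
    }
}

fun main() {
    OrderService().process()
}
```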
I have to take some legacy Delphi code pointing to a database and make it support a new, better database with a completely different schema. The updated database holds the same data. The code uses a combination of stored procedures and embedded SQL.
Is there a good test-driven development technique that will help make sure I don't break anything? This code has almost no unit tests, and I need to make changes to a lot of hard-coded SQL.
Just running the application after every change sounds error-prone and time-consuming. I love the idea of doing TDD or BDD, I'm just not sure how to do it here.
It's good that you want to get into unit testing, but I'd like to caution you against taking it on over-zealously.
Adding unit tests to legacy code is a major undertaking, and it's almost always totally unfeasible to halt other work just to add test cases. Also, unless you already have experience in TDD, that learning curve itself can prove a troublesome hurdle to overcome.
However, if you persevere, and take things one step at a time, your efforts will be rewarded in the end.
The problems you're likely to encounter:
Legacy applications are usually very difficult to 'retro-fit' with test cases. This is because the code wasn't written with testability in mind.
Many routines are doing too many things, so tests have to consider large numbers of side-effects.
Code is not properly self-contained, so setting up pre-conditions for a test is a lot of work.
Entry points for testing/checking behaviour are often missing, because they weren't needed for production code and therefore weren't added in the first place.
Code often relies on global state somewhere, either directly or via singletons. This global state (regardless of where it lies) plays havoc with your test cases.
Unit testing of databases is inherently more difficult than other kinds of unit testing. The reason for this is that test cases don't like global state - and databases are effectively massive containers of global state. Problems manifest themselves in many ways:
If you're using IDENTITY columns, auto-increment fields or number generators of any form, the generated values will differ from one test run to the next unless you have a way to reset them between tests.
Databases are slow. Once you've built up a large number of test cases it will be impractical to run all tests between every change. (One of my Db Test suites takes almost 10 minutes to run.)
If your database generates date/time values, these can also complicate testing. Especially if the database runs on a different machine.
Database testing is complicated by the fact that there are two aspects to the database: Its schema, and its data. So if you wish to test a new/changed stored procedure (part of the schema), it needs appropriate changes to the data and possibly to other aspects of the schema (such as tables/views).
Even without the above extra complications, there are the 'normal problems' you'll have to deal with.
Global state often crops up unexpectedly in some awkward places. Consider Now(), which returns a TDateTime: it relies on global state, namely the current date and time. If you have time/date-based rules in your system, those rules may return different results depending on when your tests are run. Unless you find an effective way to deal with this challenge, you'll have a number of "erratic" test cases.
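One common way to get that particular piece of global state under control is to inject the "current time" rather than reading it inside the rule. A sketch in Kotlin rather than Delphi (the Clock interface and the invoice rule are invented for the example):

```kotlin
import java.time.LocalDate

// Instead of calling the system clock (the equivalent of Now()) inside the rule,
// pass the "current" date in, so tests can control it.
fun interface Clock {
    fun today(): LocalDate
}

class InvoiceRules(private val clock: Clock) {
    // A date-based rule: an invoice is overdue 30 days after it was issued.
    fun isOverdue(issued: LocalDate): Boolean =
        clock.today().isAfter(issued.plusDays(30))
}

fun main() {
    // Production wiring uses the real system date...
    val production = InvoiceRules { LocalDate.now() }

    // ...while a test pins the date, so the result never depends on when the test runs.
    val fixed = InvoiceRules { LocalDate.of(2024, 1, 31) }
    println(fixed.isOverdue(LocalDate.of(2023, 12, 1))) // true
    println(fixed.isOverdue(LocalDate.of(2024, 1, 15))) // false
}
```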
Writing test cases is a fundamentally different programming paradigm from what most developers are used to. It can be extremely difficult to break old habits. The style of test case code is almost declarative: given this, when I do that, I expect this to have happened. Test cases need to be simple and clear about what they're trying to achieve.
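For example, a test in that style might read like this (Kotlin/JUnit 5; the ShoppingCart class is invented purely for the illustration):

```kotlin
import org.junit.jupiter.api.Assertions.assertEquals
import org.junit.jupiter.api.Test

// A tiny class under test, just for the example.
class ShoppingCart {
    private val items = mutableListOf<Pair<String, Double>>()
    fun add(name: String, price: Double) { items += name to price }
    fun total(): Double = items.sumOf { it.second }
}

class ShoppingCartTest {

    @Test
    fun `adding an item increases the total`() {
        // Given an empty cart
        val cart = ShoppingCart()
        // When I add an item costing 9.99
        cart.add("book", 9.99)
        // Then I expect the total to reflect exactly that item
        assertEquals(9.99, cart.total(), 0.001)
    }
}
```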
The learning curve can be tricky. Initially you may find yourself taking 3 times as long to write code if unfamiliar with test cases. And even though it will eventually improve (possibly even to the point where you're faster than you used to be with unstructured and haphazard testing) - other people around you will likely express frustration. (Not cool if it's your boss.)
Hopefully I haven't discouraged you; I do have some practical advice:
As the saying goes "Don't bite off more than you can chew."
Be prepared to start out slow. For the time being, carry on with most of your work in a way that's familiar to you. But force yourself to write 1 or 2 test cases every day. As you get more comfortable, you can increase this number.
Try to stick to the "tried and tested" principles.
The TDD workflow is: first write the test, and ensure the test fails. I know it is difficult to stick to the habit, but the principle serves a very important purpose. It's a level of confirmation that your test case proves the bug / missing feature. Far too often I've seen test case code that would pass with or without the production change, making the test somewhat useless.
For your database tests you'll need to establish a framework that works for you.
First, you'll need a mechanism of getting your database to a 'base-state'. One from which all your tests should be able to pass - no matter what order or how many times they are run. Typically this will involve some sort of Reset between tests (but it needs to be quite quick). Second, you'll need an easy way to update the schema of your database to what is expected by production code.
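The question is about Delphi, but the shape of such a framework sketched in JUnit 5/JDBC terms might look like this (the connection URL, tables and reference data are placeholders for whatever "base state" means in your system):

```kotlin
import org.junit.jupiter.api.AfterEach
import org.junit.jupiter.api.BeforeEach
import java.sql.Connection
import java.sql.DriverManager

abstract class DatabaseTestCase {

    protected lateinit var connection: Connection

    @BeforeEach
    fun resetToBaseState() {
        connection = DriverManager.getConnection("jdbc:yourdb://localhost/testdb")
        connection.createStatement().use { stmt ->
            // Wipe the volatile tables and re-load the small set of reference
            // data every test runs against. Keeping this script small is what
            // keeps the reset quick enough to run before every single test.
            stmt.execute("DELETE FROM orders")
            stmt.execute("DELETE FROM customers")
            stmt.execute("INSERT INTO customers (id, name) VALUES (1, 'base customer')")
        }
    }

    @AfterEach
    fun closeConnection() {
        connection.close()
    }
}
```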
Initially you'll only want to test new features, or bug fixes.
Avoid the temptation to test everything. Over time, your test case coverage will increase. Once your framework and patterns have been established, then you might get a chance to start adding tests just to increase coverage.
Refactoring existing code.
As you become familiar with testing, you'll learn about the coding habits that make testing more difficult. You'll probably find many such problems in legacy code. Such code will not be testable as is. You may need to refactor your code before you can even test it. Obviously this is not ideal, because you'd rather have tests that always pass to prove that your changes haven't broken anything. A good book on refactoring will give you some techniques you can use that will change the structure of your code without changing its behaviour.
Testing existing code.
When writing a test for an existing routine, look at the code and determine each of the inputs that can cause different behaviour. E.g. when there's an if statement, something will cause the condition to evaluate to True, and something else to False. At a minimum, you'll want a test for each permutation.
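A trivial Kotlin illustration of the idea (the shippingCost function is made up): one if statement, so at minimum one test per branch.

```kotlin
import org.junit.jupiter.api.Assertions.assertEquals
import org.junit.jupiter.api.Test

// One `if`, so there are two behaviours to pin down.
fun shippingCost(orderTotal: Double): Double =
    if (orderTotal >= 50.0) 0.0 else 4.95

class ShippingCostTest {

    @Test
    fun `orders of 50 or more ship for free`() {
        assertEquals(0.0, shippingCost(50.0))
    }

    @Test
    fun `orders under 50 pay the flat rate`() {
        assertEquals(4.95, shippingCost(49.99))
    }
}
```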
In your place I would use DUnit to create a unit test project. For each of the entities I would write test methods that run the old and the new SQL statements, and then write methods to compare the results.
I would write a TTestCase class named, let's say, TMyTestCase, add some helper methods to it, and then create my new test classes as subclasses of TMyTestCase.
The idea of the ancestor class is to provide common functionality that makes it easier to write the tests (the comparison methods, for instance), in order to enhance productivity and comfort.
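The answer above is about DUnit, but the same structure sketched in Kotlin/JUnit 5 terms looks roughly like this (the query helpers are stand-ins for however you execute SQL against each database; class and table names are invented):

```kotlin
import org.junit.jupiter.api.Assertions.assertEquals
import org.junit.jupiter.api.Test

// Stand-ins for whatever runs a query against the old and the new database.
fun queryOldDb(sql: String): List<Map<String, Any?>> = TODO("old-database access")
fun queryNewDb(sql: String): List<Map<String, Any?>> = TODO("new-database access")

// The common ancestor: shared comparison helpers live here.
abstract class MyTestCase {

    protected fun assertSameRows(expected: List<Map<String, Any?>>, actual: List<Map<String, Any?>>) {
        assertEquals(expected.size, actual.size, "row counts differ")
        expected.zip(actual).forEachIndexed { i, (old, new) ->
            assertEquals(old, new, "row $i differs")
        }
    }
}

// One subclass per entity, reusing the helpers from the ancestor.
class CustomerMigrationTest : MyTestCase() {

    @Test
    fun `old and new customer queries return the same data`() {
        assertSameRows(
            queryOldDb("SELECT id, name FROM customer"),
            queryNewDb("SELECT id, name FROM customers")
        )
    }
}
```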
You could start by building a database simulator: connect to it instead of the old database and see what it needs to do. A lot of work, though.
Requirements in my project keep changing too frequently. It has become very inconvenient to maintain test cases. Is it still advisable to use test cases? Or is there any good way to handle this problem?
This is part of the pain of having unit tests. You should stick with it.
You will be in a much better place when requirements settle down.
Without tests, you will be more vulnerable -- when rapid change occurs, things are very likely to be broken accidentally.
If you abandon testing now, you are likely never to pick it back up again....
If you have to change the code, then I think it is more important than ever to maintain the test harness. The test harness is a form of documentation.
This is one more argument I'll be making to whoever next tries to convince me that 100% test coverage is the holy grail.
Project requirements rarely change altogether (unless it is a very small project). There are always some assumptions, assertions and limits dictated by the laws of physics, after all :)
I propose going through the requirements and splitting them into tiers. Tier 1 requirements are less likely to change than those in tier 2. This way you can focus on the less volatile parts. Eventually whoever produces the requirements will get tired (or replaced, or bored).
The developers must be in even poorer shape: rapid requirement changes tend to produce spaghetti code. The test harness can be somewhat spaghetti-like itself, but it is a real lifesaver for them. It is very important to keep it fit under this kind of project organization.
Since you've tagged this question "TDD", think about how to implement a changed requirement via test-driven development. In the case of a new requirement, you would write a failing test that demonstrates the absence of the new feature. In the case of a changed requirement, you probably already have tests that show (by passing) that the feature is in its original state. So, test-drive your development. Change your passing tests so that they now require the new behavior - and fail - and now make them pass by implementing the changed behavior.
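Concretely, suppose a (made-up) discount rule changes from 10% to 15%; the existing test is what drives the change:

```kotlin
import org.junit.jupiter.api.Assertions.assertEquals
import org.junit.jupiter.api.Test

// Production code still implements the old requirement: a 10% discount.
fun discountedPrice(price: Double): Double = price * 0.90

class DiscountTest {

    @Test
    fun `a 100-unit order gets the new 15 percent discount`() {
        // Step 1: update the expectation to the changed requirement; this test now fails.
        // Step 2: change discountedPrice to `price * 0.85`; the test passes again.
        assertEquals(85.0, discountedPrice(100.0), 0.001)
    }
}
```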
You should take the opportunity to review your designs to see if there are parts that often change with changing requirements. You can even change the current design to split it into two partitions: one that mostly stays the same and one that mostly changes.
You might be able to isolate the changing parts so that when requirements change you only need to add new code/classes.
I've recently gotten the testing religion and have started primarily with unit testing. I code unit tests which illustrate that a function works under certain cases, specifically using the exact inputs I'm using. I may do a number of unit tests to exercise the function. Still, I haven't actually proved anything other than the function does what I expect it to do under the scenarios I've tested. There may be other inputs and scenarios I haven't thought of and thinking of edge cases is expensive, particularly on the margins.
This is all not very satisfying to me. When I start to think of having to come up with tests to satisfy branch and path coverage, and then integration testing, the prospective permutations can become a little maddening.
So, my question is, how can one prove (in the same vein of proving a theorem in mathematics) that a function works (and, in a perfect world, compose these 'proofs' into a proof that a system works)?
Is there a certain area of testing that covers an approach where you seek to prove a system works by proving that all of its functions work? Does anybody outside of academia bother with an approach like this? Are there tools and techniques to help?
I realize that my use of the word 'work' is not precise. I guess I mean that a function works when it does what some spec (written or implied) states that it should do and does nothing other than that.
Note, I'm not a mathematician, just a programmer.
In academia, there is a concept similar to induction in mathematics called structural induction. However, it only applies to functional programming languages and methods with no side effects at all. For others, it is very hard, if not impossible, to prove that a method works, because of the side effects.
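For a flavour of what structural induction looks like on a pure function, here is the standard textbook claim that list length distributes over append (using :: for cons and ++ for append):

$$
\begin{aligned}
\textbf{Claim:}\quad & \mathrm{length}(xs \mathbin{+\!\!+} ys) = \mathrm{length}(xs) + \mathrm{length}(ys)\\[2pt]
\textbf{Base case } (xs = []):\quad & \mathrm{length}([] \mathbin{+\!\!+} ys) = \mathrm{length}(ys) = \mathrm{length}([]) + \mathrm{length}(ys)\\[2pt]
\textbf{Inductive step } (xs = x :: xs'):\quad & \mathrm{length}((x :: xs') \mathbin{+\!\!+} ys) = 1 + \mathrm{length}(xs' \mathbin{+\!\!+} ys)\\
& = 1 + \mathrm{length}(xs') + \mathrm{length}(ys) \quad\text{(induction hypothesis)}\\
& = \mathrm{length}(x :: xs') + \mathrm{length}(ys)
\end{aligned}
$$

Because length and ++ are pure, the proof covers every possible input, which is exactly what a finite set of unit tests cannot give you.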
In TDD, you try to formulate the edge cases which a method has to fulfill to be valid; however, it is possible to miss such a case. Even if a (non-trivial) method fulfills all your tests, there can be a combination of arguments or a sequence of events you simply didn't think of which will break your code. Simply put: that's life. You can't possibly predict all outcomes in a non-trivial implementation, but you can make sure that the method works for specific edge cases, except for some cases that are so edgy you will cut yourself upon touching them (zing, bad pun).
It sounds like you are defining work as a function doing what you want it to do. Or in other words, you want to prove that you typed out the logic correctly.
Well, in this case, at least assuming a near-infinite amount of valid input for a function, you can't really prove something is correct; you can only disprove it. So the idea is to create good unit tests that span the various outputs of the function. That doesn't prove something correct, but it can be used to show that something is correct enough.
Proving that something works can be done by means of formal proofs. Some techniques are fairly well known, at least in academia. One that comes to mind is proof by induction. However, this approach is rather manual, and also fairly error-prone for mere mortals, if not simply way too complex.
A different, more manageable approach to formal verification is known as "model checking". With this approach you express your software in a suitable form that allows you to perform certain checks on it (with a tool). One such check could be checking for deadlocks/livelocks in multithreaded applications.
Another kind of check you can perform is to make sure that your application will at all times allow the same kinds of interactions as a simpler model of the same application, thereby reducing the chance of having made the same mistake in both the model and the real application. One tool for model checking is Spin, but there are many out there.
It seems that Wikipedia has an article on this subject too: Formal Verification
"It sounds like you are defining work as a function doing what you want it to do"
Usually you'll also want to verify that a function doesn't do what you didn't want it to do. The two definitions are close, but not the same: for example, an ADD() function can return the correct answer but also print out some extra debugging garbage.
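A Kotlin sketch of testing for the absence of that unwanted behaviour (the add function and its debug print are invented for the example):

```kotlin
import org.junit.jupiter.api.Assertions.assertEquals
import org.junit.jupiter.api.Test
import java.io.ByteArrayOutputStream
import java.io.PrintStream

// Function under test: the result is correct, but it also prints debug output.
fun add(a: Int, b: Int): Int {
    println("DEBUG: adding $a and $b")   // the unwanted side effect
    return a + b
}

class AddTest {

    @Test
    fun `returns the sum and writes nothing to stdout`() {
        val captured = ByteArrayOutputStream()
        val original = System.out
        System.setOut(PrintStream(captured))
        try {
            assertEquals(5, add(2, 3))
        } finally {
            System.setOut(original)
        }
        // This assertion fails until the debug print is removed.
        assertEquals("", captured.toString())
    }
}
```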
Let's say you're coding, and you come across an opportunity for simple code reuse (e.g. pulling a common piece of code out to an accessible place like a utility class or base class). You might find yourself thinking, "I know it's good to do this, but I have to get this done now, and if I need to make a change to this code and forget to change it in the other place, my testing framework will let me know."
In other words, you let the awesome tests you (or another developer) have written remind you to change the code in the other places too.
Is this a legitimate problem that we might find in ourselves or other developers?
You're asking whether unit tests encourage you to rely on them as a kind of TODO list? Yes, but I don't think that's sloppy coding. You are, after all, supposed to start with failing unit tests and code to the test; if you refactor some code and then once again code to the test, that isn't sloppy coding -- it's doing what you're supposed to.
I think the problem with unit tests is simply that you can't cover every corner case in a unit test, and sometimes people assume that a working test means a working app, which isn't true.
In the example you provide, good tests are in fact enabling you to implement a sloppy design; however, in my experience, bad tests wouldn't have discouraged you from doing the same.
The fallacy in your argument centers around the premise that "getting this done now" means you will save time by implementing sloppy design. The truth of the matter is that you are incurring technical debt whether your tests are good or not. Making a change to that code is now a much more complex task, whether you have a good testing framework to remind you of that or not.
"Although immature code may work fine and be completely acceptable to the customer, excess quantities will make a program unmasterable, leading to extreme specialization of programmers and finally an inflexible product." - Ward Cunningham
The strength of good testing practices may be in allowing you to incur that debt with some level of safety. As long as you continue to be aware that this area of the code is now weak, as a result of your choices, then it may be worth the tradeoff -- you ship your product sooner, at the cost of higher debt, with a lower risk of incurring bugs in the short run as a result.
If the tests are good and the code (sloppy or otherwise) passes them, all is good. It would be nice to have good code, but sloppy working code is better than good broken code.
I don't use tests as my first option to finding the code that needs changes. I'll use my IDE's search (or refactoring) functionality and look for all the places that call the method in question.
The tests are just a nice addition in case I was accidentally sloppy or accidentally introduced a bug. Tests don't make me sloppy from the start; they just reassure me once I think I'm done.
I would say that good tests enable you to fix sloppy coding.
You can certainly write incredibly sloppy code with or without tests. Unit testing makes it slightly easier to get away with it, but only in the short run.
If you have a set of logic copied in two places in your code (IMO the worst thing a developer can do), then you probably have inconsistent tests as well.
The most important job any programmer can do is ruthlessly refactor the code, removing ALL duplication. This almost always shows benefits on even a single iteration.
Why would you think that, if you had an error in code copied to two places, your tests would be any better?
It sounds to me more like sloppy developers and sloppy coding practices are what's leading to sloppy code in your example. The tests you described would prevent the sloppy code from ever getting too far.