Why can't a Mono hold a NULL - spring-webflux

Mono.just(null) will not compile. Why is that?
On a procedural level I get it. It does not make sense to have a processing queue without something to process. Can someone phrase this for me with some more technical depth?

There are high risks in letting null into an application or library, and if you can ban it, you should.
Null is like letting a bomb into your application: if something can potentially be null, then something can explode with a NullPointerException at any time.
Null creates enormous uncertainty in an application at all times. You should always clean out null as early as possible.
People solve this by doing null checks everywhere, which are basically redundant operations.
Sir Tony Hoare, the inventor of null, famously called it his:
billion-dollar mistake.
There are plenty of programming languages out there that don't have null; here is a long list:
List of languages without null
Having null in a stream makes no sense, as it would mean the Reactor library had to do null checks all over its code to make sure it never performs an operation on a null value. Everyone would have to do null checks in every flatMap, map, filter, and other operator, because there could be a potential NullPointerException in any of them.
So they had the opportunity to exclude null, which makes perfect sense: null is not a value, and only values can be transported in streams, so that's probably why they decided against allowing it.
Instead they decided to use the type Void to represent "nothing", which can be obtained by calling Mono<Void> nothing = Mono.empty(); it is type safe, will not explode if you touch it, and behaves as you expect, since the developers control it, not the runtime.
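For example, a minimal sketch using Reactor's Mono API (the values and the fallback string here are just for illustration):
```kotlin
import reactor.core.publisher.Mono

fun main() {
    // "Nothing" is represented by an empty Mono, never by null.
    val nothing: Mono<Void> = Mono.empty()
    nothing.subscribe()                  // completes without emitting anything

    // Downstream operators never need null checks: an empty Mono simply
    // completes empty, and a fallback can be supplied explicitly.
    val greeting: Mono<String> = Mono.empty<String>()
        .map { it.uppercase() }          // never runs, there is no value
        .defaultIfEmpty("fallback")      // explicit default instead of null

    println(greeting.block())            // prints "fallback"
}
```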
I would instead ask myself: why do you actually need null? I'm guessing because you have gotten used to it, and coding out of habit is bad.
95% of the time I have seen null used, it could have been removed.
Learn instead how to code without null; make it a habit to use default values, fallback values, etc. It will most likely make your program more robust and more deterministic.

As stipulated by the Reactive Streams specification:
Calling onSubscribe, onNext, onError or onComplete MUST return normally except when any provided parameter is null in which case it MUST throw a java.lang.NullPointerException to the caller […]
And Reactor is based on Reactive Streams.
Personally, I don't like this rule, but it is not mine to set; since I want to use the library, I can only follow it.
I guess Flux is the core of Reactor and Mono is incidental. For streams, banning null is probably a reasonable option, although it is inconvenient for Mono.

The reason has already been explained by others, and Mono.justOrEmpty(null) exists to make your life a little easier if you want the null to result in an empty Mono, but it needs to be a deliberate decision on your part.
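For example (a small sketch; findUserName is a hypothetical nullable lookup used only for illustration):
```kotlin
import reactor.core.publisher.Mono

// Hypothetical lookup that may return null.
fun findUserName(id: Long): String? = if (id == 1L) "alice" else null

fun main() {
    // The null is converted into an empty Mono at the boundary, deliberately.
    val name: Mono<String> = Mono.justOrEmpty(findUserName(42L))
        .defaultIfEmpty("anonymous")

    println(name.block())   // prints "anonymous"
}
```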

Related

Is Apple's goto bug possible in a Kotlin if-statement without braces?

Imagine in Kotlin:
if (this) doThis()
else if (that) doThat()
else doWhatEver()
I've read that one should always use braces (see Apple's goto fail)!
Rule 1.3.a
Braces shall always surround the blocks of code (a.k.a., compound
statements), following if, else, switch, while, do, and for
statements; single statements and empty statements following these
keywords shall also always be surrounded by braces.
How does the Kotlin compiler handle the lack of braces in the above code? I thought Kotlin might be intelligent enough to avoid failures on that?
The example you give is not ambiguous; it could only have one reasonable meaning.  And it's rather different from the issue you link to (which doesn't involve else clauses at all.)  So I'm not sure what you're asking.
Kotlin is similar to most C-like languages in how it interprets if (and else).  So strictly speaking, an error of that type is still possible.  But Kotlin has two features which can reduce the risk of such problems.
First is that, unlike C, Java, and similar languages, if can be used as an expression (returning a value).  When used this way, the compiler ensures that every branch returns a value; this will usually result in a compiler error if there's any confusion around multiple branches.
Second is the when structure, which functions like the C/Java switch statement, but avoids fall-through and hence the need for breaks; it can also be used as an expression, enforcing a single path through and a single return value.
So in Kotlin, the linked code would best have been written with a when, which would have been simpler as well as preventing that type of error.
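For illustration, here is the question's if/else chain as a when expression (doThis, doThat, and doWhatEver are stand-ins taken from the question, given trivial bodies so the sketch is self-contained):
```kotlin
// Placeholder implementations so the sketch compiles on its own.
fun doThis() = "this"
fun doThat() = "that"
fun doWhatEver() = "whatever"

// Because `when` is used as an expression, the compiler requires an
// exhaustive set of branches; a missing branch is a compile-time error,
// not a silent fall-through.
fun dispatch(thisCond: Boolean, thatCond: Boolean): String = when {
    thisCond -> doThis()
    thatCond -> doThat()
    else -> doWhatEver()
}
```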
Ultimately, I don't think it's really comparable, though.  The linked code is low-level C, which has very different practices and restrictions from general application code.  In particular, the use of goto for error clean-up is inherently error-prone.  And if they'd used else branches properly, it would have made the code rather clearer as well as preventing this error.
It's possible to write bad code in any language if you're determined enough!  A good language is one which makes it easier to write good code, and harder to write bad code.  (And I think Kotlin scores pretty well in that respect.) 

Should Kotlin's DAO return Optional or null?

Prior to Kotlin/JPA, I used to write my DAO layer like this:
public interface UserDao extends JpaRepository<User, Long> {
    Optional<User> findBySsn(String ssn);
}
And on the caller side, if I want to find a user or create one by SSN, I can write this:
val user = userDao.findBySsn(value).orElseGet {
    userDao.save(User(value))
}
It works well and looks fluent.
But since Kotlin introduces null-safety, there is another idiomatic way (DAO still in Java):
public interface UserDao extends JpaRepository<User, Long> {
    Optional<User> findBySsn(String ssn);

    @Query("select u from User u where u.ssn = :ssn")
    @Nullable
    User findBySsnNullable(@Param("ssn") String ssn);
}
And on the client side:
val user = userDao.findBySsnNullable(value)
    ?: userDao.save(User(value))
Both ways work well. But I wonder which is preferred? Is it good for Kotlin code to depend on Java 8's Optional in API design? What are the cons of a Kotlin project depending on (or intercommunicating via) Java 8's Optional/Stream API (since Kotlin has its own)?
Kotlin can compile to JavaScript (I haven't studied that). If the project depends on Java's Optional/Stream, will it have problems compiling to JS?
---- updated ----
According to JetBrains:
No, common code can only depend on other common libraries. Kotlin has
no support for translating Java bytecode into JS.
I wouldn't use Optional if you don't have to. It only adds unnecessary overhead as working with nullable types is more readable and more idiomatic in Kotlin. There's no advantage of using Optional in Kotlin.
Here's another discussion on that: https://discuss.kotlinlang.org/t/java-api-design-for-kotlin-consumption-optional-or-null/2455
I have to say I disagree with @s1m0nw1
What Kotlin does is to give us a nice way of dealing with a mistake. Kotlin cannot remove that mistake, because it's so inherent in the JVM and would damage the integration with legacy Java and poorly written Java. However, because it has a nice tool to deal with the bad design does not mean that we should embrace the bad design and do it ourselves.
Some reasoning:
Nullable types still give the same set of problems the second your Kotlin code is called from Java. Now you have just masked the underlying problem, actively damaging your clients. By itself this is a strong enough reason, IMO.
Null is still ambiguous: what does null mean? An error? A missing value? Something else? Null has no clear meaning; even if you assign it a meaning, you have to document it everywhere and hope that people read it. Value.EMPTY, Value.MISSING, throw new Exception() all have (somewhat) clearly defined meanings readable directly from your code. It's the same problem as with booleans as a shortcut for binary enum values, e.g.:
val testPassed = runTest()
if (testPassed) // ...
this has a clear meaning only as long as you have the variable name. Passing it on, refactoring, etc. can quickly obfuscate it. What the author meant:
val testResult = runTest()
if (testResult == TestResult.PASSED)
is clear and readable. So, per the same argument, let your code communicate your intent. Do not take the shortcut. Again, I see Kotlin's handling of null as extremely nice for dealing with poor code; I do not see it as an excuse for producing poor code.
Null is logically illogical: essentially it's a pointer that doesn't point, a nonsense "value" that isn't really a value. It's not really anything.
With nulls you will still have to do weird things for APIs that utilize various stream functionality, so by using null you will either have to do hacky things like those illustrated in the debate that s1m0nw1 links to, or do explicit conversions anyway. You'll end up having both anyway, saving exactly nothing by choosing null, and instead dealing with both the semantics of the nullable and of Optional.
It's a minimal amount of work to ensure that you return Optional. I mean, come on, we are not talking about a lot of extra code here. Don't be afraid of writing a few extra lines to do things right. Readability, flexibility, etc. should come before short code. Just because there's a new smart feature to write things briefly and avoid mistakes does not mean you have to use it all the time. Keep in mind that to a hammer everything looks like a nail :)
The problem with not doing null though is the lack of a proper "Maybe"-type in Java. The more correct solution is the Option/Maybe types utilized by functional languages, however, Java's Optional is only a half measure. It does not fully capture the semantics of a true Maybe/Option type, which is why you shouldn't use Optional for parameters and fields.
For parameters this isn't an issue, since you can easily ensure an overload that doesn't take that parameter to begin with - a task that's even easier in languages like Kotlin.
For fields it seems the current "solutions" are to define a proper Maybe/Option type, perhaps specific to each of your types, to ensure that type erasure and serialization don't hinder you. You could use the NullObject pattern, which seems like a slightly uglier way to do the exact same thing. Or you can keep the nasty null value but encapsulate it completely in your class. So far I've been doing the latter, but I'm not fond of it :/ pragmatism has its place, though ;)
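For illustration, a minimal Kotlin sketch of such a type-specific Maybe for a field (all names here are hypothetical):
```kotlin
// Hypothetical type-specific Maybe: no null and no java.util.Optional in the model.
sealed class MaybeMiddleName {
    object Missing : MaybeMiddleName()
    data class Present(val value: String) : MaybeMiddleName()
}

data class Person(
    val firstName: String,
    val middleName: MaybeMiddleName = MaybeMiddleName.Missing
)

// The compiler forces both cases to be handled explicitly.
fun fullName(p: Person): String = when (val m = p.middleName) {
    is MaybeMiddleName.Present -> "${p.firstName} ${m.value}"
    MaybeMiddleName.Missing -> p.firstName
}
```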

Anti-if purposes: How to check nulls?

I recently heard of the anti-if campaign and the efforts of some OOP gurus to write code without ifs, but using polymorphism instead. I just don't get how that should work, I mean, how it should ALWAYS work.
I already use polymorphism (I didn't know about the anti-if campaign), so I was curious about "bad" and "dangerous" ifs, and I went through my code (Java/Swift/Objective-C) to see where I use if most. It looks like these are the cases:
Checking for null values. This is the most common situation where I ever use ifs. If a value could possibly be null, I have to manage it in a correct way; before using it, I have to check that it's not null. I don't see how polymorphism could compensate for this without ifs.
Checking for correct values. An example: let's suppose that I have a login/signup application. I want to check that the user actually entered a password, and that it's longer than 5 characters. How could that possibly be done without ifs/switches? Again, it's not about the type but about the value.
(Optional) Checking for errors. Optional because it's similar to point 2 about correct values. If I get either a value or an error (passed as params) in a block/closure, how can I handle the error object if I can't check whether or not it's null?
If you know more about this campaign, please answer within that scope. I'm asking this to understand its purpose and whether what it advocates can effectively be done.
So, I know that not using ifs at all may not be the smartest idea ever; I'm just asking if and how it could effectively be done in an OOP program.
You'll never completely get rid of ifs, but you can minimize them.
Regarding null value checks, a method that would otherwise return a null value can return a Null Object instead, an object that doesn't represent a real value but implements some of the same behavior as a real value. Its callers can just call methods on the Null Object instead of checking to see if it's null. There is probably still an if inside the method, but there don't need to be any in the callers.
Regarding correct value checks, the best path is to prevent an object from being instantiated with incorrect attributes. All users of the object can then be confident that they don't have to inspect the object's attributes to use it. Similarly, if an object can have an attribute that is valid or invalid, it can hide that from its users by providing higher-level methods that do the right thing for the current attribute value. Again, there is still a if inside the object, but there don't need to be any in the callers.
Regarding error checks, there are a couple of strategies that are better than returning a possibly null error value that the caller might forget to check. One is raising an exception. Another is to return an object of a type that can hold either a result or an error and provides type-safe, if-free ways to operate on either result when appropriate, like Java's Optional or Haskell's Maybe.
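A minimal Kotlin sketch of that result-or-error idea (the type and names are made up for illustration):
```kotlin
// Hypothetical result-or-error type: callers operate on it type-safely
// instead of checking a possibly-null error value.
sealed class LoadResult {
    data class Success(val data: String) : LoadResult()
    data class Failure(val reason: Throwable) : LoadResult()
}

fun <R> LoadResult.fold(onSuccess: (String) -> R, onFailure: (Throwable) -> R): R =
    when (this) {
        is LoadResult.Success -> onSuccess(data)
        is LoadResult.Failure -> onFailure(reason)
    }

// Usage: both cases must be handled; there is nothing to null-check.
fun describe(result: LoadResult): String =
    result.fold({ "loaded: $it" }, { "failed: ${it.message}" })
```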
Note also that case statements are just concatenated ifs (in fact I'd have written the code on the campaign's home page with a switch rather than if/else if), and there are also patterns which replace case with polymorphism, such as the Strategy pattern.
This is a great question and is something that's asked at every OO bootcamp I've been a part of. To begin with, we need to understand why code with a lot of ifs is 'bad' or 'dangerous':
they increase the cyclomatic complexity of the code, making it hard to follow/understand.
they make tests more complicated to write. Ensuring that you test each branch flow in the method under test becomes increasingly more difficult with each conditional and makes test setup cumbersome.
they could be a sign that your code has not been broken into small enough methods
they could be a sign that your methods have not been encapsulated well
However, there is one important thing to remember - ifs cannot (and should not) be eliminated from the code completely. But, we can generally abstract them away using techniques like polymorphism, extracting small behaviours, and encapsulating these behaviours into the appropriate classes.
Now that we know some of the reasons why we should avoid ifs, let's tackle your questions:
Checking for null values: The Null Object pattern helps you eliminate null checks from your code (polymorphism FTW). Instead of returning null, you return a Special Case NullObject representation of the expected object. This NullObject has the same interface as your actual object, and you can safely call any of the object's methods without worrying about a null pointer exception being thrown.
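A minimal Kotlin sketch of that idea (the domain and names are hypothetical):
```kotlin
// Hypothetical Special Case / Null Object: callers never see null.
interface Customer {
    val name: String
    fun applyDiscount(price: Double): Double
}

class RegisteredCustomer(override val name: String) : Customer {
    override fun applyDiscount(price: Double) = price * 0.9
}

object UnknownCustomer : Customer {
    override val name = "guest"
    override fun applyDiscount(price: Double) = price   // no discount, no NPE
}

// The single remaining check (the elvis operator) lives inside the lookup;
// callers just call methods on whatever Customer they get back.
fun findCustomer(customers: Map<String, Customer>, id: String): Customer =
    customers[id] ?: UnknownCustomer
```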
Checking for correctness of values: There are a lot of ways to do this. For example, you could create a separate ValidationRule class for each of your validations and then chain calls to them together when you want to validate your object. Notice that the ifs still remain, but they get abstracted away into the individual ValidationRule implementations. Look up the Command pattern and the Chain Of Responsibility pattern for ideas.
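A minimal Kotlin sketch of that validation-rule idea (the rules and names are hypothetical); the ifs remain, but only inside the individual rules:
```kotlin
// Hypothetical validation chain: each rule encapsulates one check.
fun interface ValidationRule<T> {
    fun validate(value: T): List<String>   // error messages, empty when valid
}

val passwordNotBlank = ValidationRule<String> { pwd ->
    if (pwd.isBlank()) listOf("password must not be empty") else emptyList()
}

val passwordMinLength = ValidationRule<String> { pwd ->
    if (pwd.length <= 5) listOf("password must be longer than 5 characters") else emptyList()
}

// Callers chain the rules and only see the aggregated result.
fun validate(value: String, rules: List<ValidationRule<String>>): List<String> =
    rules.flatMap { it.validate(value) }

val errors = validate("abc", listOf(passwordNotBlank, passwordMinLength))
```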
It's better to use an if to check for null than to raise an exception. Also, in common cases, checking for null helps us prevent operations on uninitialized variables.
Use switch plus SOLID; other things follow from this.

Functional data-structures, OO notions of dispatched equality and comparison, StructuralEquality, and referential transparency

I have a very CPU intensive F# program that depends on persistent data-structures - about 40% of the total CPU time is spent in the Map module. So I thought I'd try out the PersistentHashMap in FSharpX collections. (BTW, this is already a big improvement over the previous version of F# in VS2013 where the same program spent 70% of its time in Map. I also notice that running programs with the debugger attached doesn't have the huge penalty it did before - good work guys...) There is also a hot-spot where I'm re-sorting all the time, where instead I should be adding to a Heap, so I thought I'd give that a go as well.
Two issues became immediately apparent:
(1) Swapping out one for the other from an interface perspective proved harder than it seems it should be - i.e., making a shim that let me switch from a Map to a PersistentMap, preserving both the needed module-based let-bound functions and the types necessary to use each map. I know that having full HM type-inference (and no type-classes) is orthogonal to LSP-style referential transparency for the most part - but maybe I was missing some way to do this better with a minimal amount of code.
(2) The biggest problem (which I'd like to focus on here) is the reliance of the F# functional data structures on OO-style dispatched equality and comparison via the IComparable (when 't : comparison), etc., family of interfaces.
Even for OO programs ISTM that the idea of dispatching equality and comparison is a bad idea -- an object "knows" how to perform its own domain-specific tasks, but it doesn't "know" for the most part what notion of equality is going to be necessary at various points in the program for various purposes -- so equality/comparison should not be part of the object's interface, but when these concepts are needed, they should always be mentioned explicitly. For example, there should never be a .Sort(), only a .SortWith(...). One could argue that even something as basic as structural equality in F# could be explicit a.StructEq(b) or a ~= b - otherwise you always get object.Equals -- but even stipulating that doing things this way is the best for a multi-paradigm language that's a first-class .Net citizen, it seems like there should at least be the option of using passed-in comparison and equality functions, but this is not the case.
This means that: (a) type constraints are enforced even if you don't want them, causing ripples of broken inferred typing (and hundreds of wavy red lines with it being unclear where the actual "problem" is), and (b) implementing a notion of equality or comparison that makes one container type happy in one part of your program (and in my case I want to use the same container and item type with two different notions of ordering in two different places) is likely to silently break things (or cause inefficiency, if one subsumes the other) in other parts of the code that depended on the default/previous implementation.
The only way around this that I could think of is wrapping each item in an adapter object using a new ... with object expression - but I really don't want to create so much garbage just to get the code to work.
So, ISTM that we could have a "pure" version of each persistent data structure that could be loaded if desired (even basics like List, etc.) that does not depend on dispatched equality/comparison/hashing and does not impose type constraints; all such needs would be met via functions passed in at the time of the call. (Dispatched eq/cmp would be used only for interop with BCL collections that don't accept delegates.) Then we could have a [EqCmpHashThrowNotImplemented] attribute, and I could be sure that there were no default operations happening at all, and I would feel better about the efficiency and predictability of my code. (This also lets one change from a Record to a Class or vice versa without worrying about any changes in behavior due to default implementations.) Again, this would be optional, but enabled with a simple import. (Which does mean that each base core collection type would have to be broken out into its own module, which isn't really a bad idea anyway.)
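The "pass the comparison explicitly at the call site" style being asked for is not F#-specific; by way of analogy, here is a small Kotlin sketch of ordering the same item type two different ways without relying on an intrinsic, dispatched comparison (the types and names are made up):
```kotlin
import java.util.TreeMap

// The item type implements no Comparable; every ordering is passed in.
data class Order(val id: Long, val total: Double)

val byId: Comparator<Order> = compareBy { it.id }
val byTotalDesc: Comparator<Order> = compareByDescending { it.total }

fun main() {
    val orders = listOf(Order(2, 10.0), Order(1, 99.0))

    // Same collection, two orderings, both stated explicitly at the call site.
    val chronological = orders.sortedWith(byId)
    val largestFirst = orders.sortedWith(byTotalDesc)

    // Even an ordered map can take its comparison as a constructor argument.
    val index = TreeMap<Order, String>(byId)
    index[orders[1]] = "first order"

    println(chronological)
    println(largestFirst)
    println(index.firstKey())
}
```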
If I've overlooked a better way to do things or there are some patterns people are using here, I'd be interested.

Exceptions or not? If "yes" - checked or not?

I suppose this scratches the surface of a vast topic, but what is the best approach to handling program work-flow, other than the ideal case where everything gets done in the best way?
Let's be a bit more specific:
a class has a method; that method operates on its parameters and returns a result.
e.g.:
public Map<Object,OurObject> doTheWork(OtherObject oo);
One possible outcome that I ruled out was returning null if something goes some way other than the ideal one.
Is there a correct way (a "silver bullet", or so-called "best practice") of handling that situation?
I see three other outcomes:
1 - the method returns an EMPTY_MAP;
2 - the method throws a checked exception;
3 - the method throws a RuntimeException;
If there is no generally correct answer to that question - what are the conditions that should be considered?
Regarding the principles of design by contract (in brief, the method's responsibility is to take care of the output, assuming the input parameters are correct): is it correct for the method to throw any exception, or is it better to return an empty result that is still correct in structure (i.e. not null)?
Returning null is like setting a trap for someone. (In cases like HashMaps we've been trained to look out for nulls, so going along with precedent is ok.) The user has to know to look out for the null, and they have to code an alternative path to handle that case (which sounds a lot like handling an exception), or risk an NPE (in which case an exception gets thrown anyway, just a less informative one, at a distance from the place where the original problem occurred, so information about what went wrong will be missing).
Returning an empty map makes it unclear whether anything went wrong or not. It isn't hurtful like returning null, but it is totally ambiguous. If somehow returning the map data is purely a best-effort thing and the user doesn't really need to know, this could be ok. (But this is pretty rare.)
If there is anything you expect the user to be able to do about the exception, throwing a checked exception may be appropriate.
If there is nothing the user can do about the exception, or if the exception would only occur due to a programming error, or due to something that should be caught during testing, then an argument can be made for using an unchecked exception. There is a lot of room for disagreement about what should be checked and what should be unchecked.
Which option would be most beneficial to the caller? If all the caller needs to know is that an empty map or null was returned, then that is fine. If it needs more information, then an exception would be a better choice.
As for what type of exception, if the caller really needs to do something in order to recover, then a checked exception is appropriate. However, an unchecked exception is often cleaner.
As for returning null or an empty map, it depends on what the caller is going to do with it. If you expect the caller to take that map and pass it around elsewhere, then an empty map is a better choice.
If something in your method breaks because of erroneous input, the user needs to know. Therefore, an exception is the best way to go. Returning empty or null data could potentially make the user think that nothing is wrong, and that is definitely not what you want.
I'll bet that there are plenty of good answers for this on Stack Overflow. Before I search for them:
Exceptions are thrown in exceptional situations only - they are not used for normal control flow.
Checked exceptions are thrown when the caller should be able to recover from the exception
Unchecked exceptions are thrown when the caller probably should not recover from the exception e.g. exceptions thrown due to programming errors.
An empty collection should be returned if it is a normal state. Especially when the result is a collection, the return value should not be null. If the return value can be null, then it should be clearly documented. If there's a need for more complicated analysis of the return state, you might be better off with an appropriately modeled result class.
It all depends on how big a problem it is if doTheWork fails.
If the problem is so large that the application might need to shut down, then throw a RuntimeException.
If the caller might be able to recover from the problem, then a checked exception is common.
If an empty Map has no other meaning, then it can be used as a return value, but you have to be careful that an empty Map is not also a valid value to return in a successful case.
If you do return just a null, I would suggest making a note in the Javadoc about the return value.
The best pattern in many cases is the try/do pattern, where many routines have two variations, e.g. TryGetValue() and GetValue(). They are designed to handle three scenarios:
The object was obtained successfully
The value couldn't be obtained, for a reason the caller might reasonably anticipate, and the system state is what the caller would expect given such a failure.
The value couldn't be obtained, for some reason the caller probably isn't anticipating, or else the system state is corrupted in a way the caller won't expect.
The idea behind the try/do pattern is that there are times when a caller will be prepared to handle a failure, but there are other times where the only thing the caller could do in case of failure would be to throw an exception. In the latter case, it would be better for the method which failed to throw the exception rather than requiring the caller to do it.
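A minimal Kotlin sketch of the try/do idea (the registry and names are hypothetical): getValue throws on behalf of callers that could only rethrow anyway, while getValueOrNull lets a caller that anticipates the miss handle it locally:
```kotlin
// Hypothetical registry illustrating the try/do (TryGet vs. Get) pattern.
class SettingsRegistry(private val values: Map<String, String>) {

    // "Try" variant: an anticipated miss is reported as null, no exception.
    fun getValueOrNull(key: String): String? = values[key]

    // "Do" variant: the method throws, so the caller doesn't have to.
    fun getValue(key: String): String =
        values[key] ?: throw NoSuchElementException("no setting for key '$key'")
}

fun main() {
    val registry = SettingsRegistry(mapOf("timeout" to "30"))

    // A caller prepared for the failure uses the try variant and a fallback.
    val retries = registry.getValueOrNull("retries") ?: "3"

    // A caller that cannot proceed without the value uses the throwing variant.
    val timeout = registry.getValue("timeout")

    println("timeout=$timeout retries=$retries")
}
```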