When looking at Arrow's documentation about functional error handling, one of the reasons listed for avoiding thrown exceptions is their performance cost (referencing "The hidden performance costs of instantiating Throwables").
So it is suggested to model errors/failures as an Effect.
When building an Effect, the shift() method should be used to interrupt a computation (and under the hood it is also what is used to "unwrap" the effects through the bind() method).
Looking at the shift() method implementation, it seems its magic is done by... throwing an exception. That means exceptions are created not only when we want to signal an error, but also to "unwrap" a missing Option, a Left instance of an Either, and all the other effect types exposed by the library.
What I'm not getting is whether there's some optimization done to avoid the issues with "the hidden performance costs of instantiating Throwables", or whether in the end they are not a real problem.
The claim that this is the biggest reason for using typed errors on the JVM is probably an overstatement; there are better reasons for using typed errors. Exceptions are not typed, so they are not tracked by the compiler, and that is what we want to avoid if we care about type-safety or purity. This will be better reflected in the documentation for 2.x.x.
Avoiding the performance penalty can be a benefit in hot loops, but in general application programming it can probably be neglected.
However, to answer your question about how this is dealt with in Kotlin and Arrow:
In Kotlin, cancellation of coroutines works through CancellationException, so Arrow is required to use this mechanism to work correctly within the language. You can find more details in the Arrow 2.x.x Raise design document.
It's possible to remove the performance penalty of exceptions, which is also what Arrow is doing (except for a small regression in a single version, which was fixed in the next release).
An example of this can also be found in the official KotlinX Coroutines library, which applies the same technique to disable stack traces for JobCancellationException.
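The core of the technique is that capturing the stack trace dominates the cost of instantiating a Throwable, so a control-flow exception can simply opt out of it. A minimal sketch of the general idea (this is not Arrow's actual internal class):

    // Instantiation is cheap because the stack is never walked.
    class ControlFlowSignal : Exception("control flow") {
        override fun fillInStackTrace(): Throwable = this
    }

The same effect can be achieved via the Throwable constructor overload that takes writableStackTrace = false, and such exceptions are often reused as singletons to avoid the allocation entirely.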
I love the idea of Result.
I love having encapsulated try/catch.
But I’m a little confused about how and when to use Result.
I currently use it like this:
My adapters and services return a Result. Failures and stack traces are logged, but nothing else is done with them:
    runCatching {
        // ... do something cool
    }.onFailure {
        logger.error("Something bad happened ", it)
    }
My Resource classes fold and handle the Result:
    return service.method().fold(
        onFailure = {
            Response.serverError().entity("Oops").build()
        },
        onSuccess = {
            Response.ok().entity(doSomethingWith(it)).build()
        }
    )
Is this really the correct way to use a Result? Or is there a more idiomatic way to code in Kotlin?
TL;DR: never in business code (prefer custom sealed classes), but you can consider Result if you build a framework that must relay all kinds of errors through a different way than the exception mechanism (e.g. kotlin coroutines).
First, there is actually a list of use cases motivating the initial introduction of Result, if you find it interesting. The same document also states:
The Result class is designed to capture generic failures of Kotlin functions for their latter processing and should be used in general-purpose API like futures, etc, that deal with invocation of Kotlin code blocks and must be able to represent both a successful and a failed result of execution. The Result class is not designed to represent domain-specific error conditions.
Also, here is a very nice article by the lead Kotlin designer about exceptions in Kotlin and more specifically when to use the type system instead of exceptions.
Most of what follows is my personal opinion. It's built from facts, but is still just an opinion, so take it with a grain of salt.
Do not use runCatching or catch Throwable in business code
Note that runCatching catches all sorts of Throwable, including JVM Errors like NoClassDefFoundError, ThreadDeath, OutOfMemoryError, or StackOverflowError. There is usually almost nothing you can do with such JVM errors (even reporting them might be impossible, for instance in case of OOME).
According to the Java documentation, the Error class "indicates serious problems that a reasonable application should not try to catch". There are some very special cases where you might want to try to recover from an Error, but that is quite exceptional (pun intended).
Catch-all mechanisms like this (even catch(Exception)) are usually not recommended unless you're implementing some kind of framework that needs to attempt to report errors in some way.
So, if we don't catch errors (and instead let them bubble up naturally), Result isn't part of the picture here.
Do not catch programming exceptions in business code
Apart from JVM errors, I believe exceptions due to programming errors shouldn't really be handled in a way that bloats the business code either. Using error(), check(), require() in the right places will make use of exceptions to detect bugs that cannot be caught by the compiler (IllegalStateException, IllegalArgumentException). It often doesn't make sense to catch these exceptions in business code, because they appear when the programmer made a mistake and the logic of the code is broken. You should instead fix the code's logic.
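For instance, a sketch of what that looks like in practice (the Account type and withdraw function are made up for illustration):

    // Made-up domain type for illustration.
    data class Account(val id: String, val isClosed: Boolean, var balance: Int)

    fun withdraw(account: Account, amount: Int) {
        // Programming errors surface as exceptions, not as Result values:
        require(amount > 0) { "amount must be positive, was $amount" }  // IllegalArgumentException
        check(!account.isClosed) { "account ${account.id} is closed" }  // IllegalStateException
        account.balance -= amount
    }

If one of these fires, the right response is to fix the calling code, not to catch the exception downstream.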
Sometimes it might still be useful to catch all exceptions (including these ones) around an area of code, and recover with a high-level replacement for the whole failed operation (usually in framework code, but sometimes in business code too). This will probably be done with some try-catch(Exception), though; Result will not be involved here, because the point of such code is to delimit with try-catch the high-level operation that can be replaced. Low-level code will not return Result, because it doesn't know whether there are higher-level operations that can be replaced with something in case of programming errors, or whether errors should just bubble up.
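A sketch of that pattern (Request, Page, and the functions are invented for illustration):

    data class Request(val path: String)
    data class Page(val html: String)

    fun buildPage(request: Request): Page =
        if (request.path.startsWith("/")) Page("<html>${request.path}</html>")
        else throw IllegalArgumentException("bad path: ${request.path}")

    // The try-catch delimits the whole replaceable high-level operation;
    // no Result flows through the lower-level code.
    fun renderPage(request: Request): Page =
        try {
            buildPage(request)
        } catch (e: Exception) {  // deliberately broad at this boundary
            Page("<html>Something went wrong</html>")
        }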
Modeling business errors
That leaves business errors for result-like types. By business errors, I mean things like missing entities, unknown values from external systems, bad user input, etc. However, I usually find better ways to model them than using kotlin.Result (it's not meant for this, as the design document stipulates). Modelling the absence of a value is usually easy enough with a nullable type: fun find(username: String): User?. Modelling a set of outcomes can be done with a custom sealed class that covers the different cases, like a result type but with specific error subtypes (and more interesting business data about the error than just a Throwable).
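For example, a hypothetical sealed hierarchy for a money-transfer operation (all names invented) might look like this:

    sealed class TransferResult {
        data class Success(val newBalance: Int) : TransferResult()
        // Error cases carry business data, not just a Throwable.
        data class InsufficientFunds(val missing: Int) : TransferResult()
        data class AccountNotFound(val accountId: String) : TransferResult()
    }

    fun report(result: TransferResult): String = when (result) {
        is TransferResult.Success -> "OK, new balance is ${result.newBalance}"
        is TransferResult.InsufficientFunds -> "Need ${result.missing} more"
        is TransferResult.AccountNotFound -> "No account ${result.accountId}"
    }

The exhaustive when means the compiler forces every caller to handle every error case, which is exactly the tracking that exceptions don't give you.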
So in short, in the end, I never use kotlin.Result myself in business code (I could consider it for generic framework code that needs to report all errors).
My adapters and services return a Result. Failures and stack traces are logged, but nothing else is done with them
A side note on that. As you can see, you're logging errors in the service itself, but things are unclear from the service consumer's perspective. The consumer receives a Result, so who's responsible for dealing with the error here? If it's a recoverable error, it may or may not be appropriate to log it as an error; maybe it's better logged as a warning, or not at all. Maybe the consumer knows the severity of the problem better than the service does. Also, the service makes no distinction between JVM errors, programming errors (IAE, ISE, etc.), and business errors in the way it logs them.
I am a newbie to Kotlin programming. While going through the advantages of Kotlin over Java, I came across the claim that by avoiding checked exceptions, Kotlin achieves type safety. Also, I do not understand how an exception handled with an empty catch block harms type safety (I read this on a blog).
Can it be explained with an example?
By itself, removing checked exceptions doesn't increase type safety. Kotlin's claim of improved type safety comes from the other features and idioms it introduces to replace checked exceptions.
In Kotlin, exceptions are not intended to be used for recoverable failures. They're only there to handle bugs and logic errors.
As a rule of thumb, you should not be catching exceptions in general Kotlin code.
[...]
Use exceptions for logic errors, type-safe results for everything else. Don’t use exceptions as a work-around to sneak a result value out of a function.
https://elizarov.medium.com/kotlin-and-exceptions-8062f589d07
The "type-safe results" referred to above are things like sealed classes, which provide a very controlled way to enumerate the possible types that a function can return. For example, you could have a sealed class with one implementation for a successful result and other implementations for each of the possible types of failure.
If there’s a single error condition and you are only interested in success or failure of the operation without any details, then prefer using null to indicate a failure. If there are multiple error conditions, then create a sealed class hierarchy to represent various results of your function. Kotlin has all those needs covered by design, including a powerful when expression.
https://elizarov.medium.com/kotlin-and-exceptions-8062f589d07
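As an illustration of both recommendations (all names here are invented):

    // One error condition, no details needed: null signals failure.
    fun parsePort(s: String): Int? = s.toIntOrNull()?.takeIf { it in 1..65535 }

    // Several error conditions: a sealed hierarchy plus an exhaustive `when`.
    sealed interface LoginResult
    object Success : LoginResult
    object WrongPassword : LoginResult
    data class Locked(val retryAfterSeconds: Int) : LoginResult

    fun describe(result: LoginResult): String = when (result) {
        Success -> "welcome"
        WrongPassword -> "try again"
        is Locked -> "locked, retry in ${result.retryAfterSeconds}s"
    }

The type safety comes from the compiler: if a new LoginResult subtype is added, every when expression over it stops compiling until the new case is handled, whereas a new exception type would propagate silently.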
I have a very CPU intensive F# program that depends on persistent data-structures - about 40% of the total CPU time is spent in the Map module. So I thought I'd try out the PersistentHashMap in FSharpX collections. (BTW, this is already a big improvement over the previous version of F# in VS2013 where the same program spent 70% of its time in Map. I also notice that running programs with the debugger attached doesn't have the huge penalty it did before - good work guys...) There is also a hot-spot where I'm re-sorting all the time, where instead I should be adding to a Heap, so I thought I'd give that a go as well.
Two issues became immediately apparent:
(1) Swapping out one for the other from an interface perspective proved harder than it seems it should be, i.e., making a shim that let me switch from a Map to a PersistentMap, preserving both the needed module-based let-bound functions and the types necessary to use each map. I know that having full HM type inference (and no type-classes) is orthogonal to LSP-style referential transparency for the most part, but maybe I was missing some way to do this better with a minimal amount of code.
(2) The biggest problem (which I'd like to focus on here) is the reliance of the F# functional data structures on OO-style dispatched equality and comparison via the IComparable (when 't : comparison), etc., family of interfaces.
Even for OO programs, ISTM that the idea of dispatching equality and comparison is a bad idea: an object "knows" how to perform its own domain-specific tasks, but it doesn't "know", for the most part, what notion of equality is going to be necessary at various points in the program for various purposes. So equality/comparison should not be part of the object's interface; when these concepts are needed, they should always be mentioned explicitly. For example, there should never be a .Sort(), only a .SortWith(...). One could argue that even something as basic as structural equality in F# should be explicit, a.StructEq(b) or a ~= b, since otherwise you always get object.Equals. But even stipulating that the current design is best for a multi-paradigm language that's a first-class .NET citizen, it seems like there should at least be the option of using passed-in comparison and equality functions, and this is not the case.
This means that (a) type constraints are enforced even if you don't want them, causing ripples of broken inferred typing (and hundreds of wavy red lines, with it being unclear where the actual "problem" is), and (b) by implementing a notion of equality or comparison that makes one container type happy in one part of your program (and in my case I want to use the same container and item type with two different notions of ordering in two different places), you are likely to silently break (or make inefficient, if one subsumes the other) other parts of the code that depended on the default/previous implementation.
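The alternative being argued for, orderings passed in at the use site rather than dispatched from the element type, looks like this (sketched in Kotlin rather than F#, since the idea is language-neutral and the other examples on this page are Kotlin; the Item type is invented):

    import java.util.TreeMap

    data class Item(val id: Int, val priority: Int)  // no natural ordering

    fun main() {
        // Same key type, two different orderings in two different places,
        // with no compareTo dispatched from Item itself.
        val byId = TreeMap<Item, String>(compareBy<Item> { it.id })
        val byPriority = TreeMap<Item, String>(compareByDescending<Item> { it.priority })
        byId[Item(2, 5)] = "a"
        byId[Item(1, 9)] = "b"
        byPriority.putAll(byId)
        println(byId.keys.map { it.id })              // [1, 2]
        println(byPriority.keys.map { it.priority })  // [9, 5]
    }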
The only way around this that I could think of is wrapping each item in an adapter object using a new ... with object expression, but I really don't want to create so much garbage just to get the code to work.
So, ISTM that we could have a "pure" version of each persistent data structure that could be loaded if desired (even basics like List, etc.) that does not depend on dispatched equality/comparison/hashing and does not impose type constraints; all such needs would be served by functions passed in at the call site. (Dispatched eq/cmp would be used only for interop with BCL collections that don't accept delegates.) Then we could have an [EqCmpHashThrowNotImplemented] attribute, and I could be sure that there were no default operations happening at all, and I would feel better about the efficiency and predictability of my code. (This also lets one change from a Record to a Class or vice versa without worrying about any changes in behavior due to default implementations.) Again, this would be optional, but enabled with a simple import. (Which does mean that each core collection type would have to be broken out into its own module, which isn't really a bad idea anyway.)
If I've overlooked a better way to do things or there are some patterns people are using here, I'd be interested.
Does the JVM (any implementation) ever re-compile, at runtime, code that it has already compiled?
It depends on what you mean by re-compiling, but the HotSpot VM will discard code that relies on optimistic assumptions when they are proven to be wrong or no longer relevant. See deoptimization:
Deoptimization is the process of changing an optimized stack frame to an unoptimized one. With respect to compiled methods, it is also the process of throwing away code with invalid optimistic optimizations, and replacing it by less-optimized, more robust code.
The fourth point is particularly interesting:
If a class is loaded that invalidates an earlier class hierarchy analysis, any affected method activations, in any thread, are forced to a safepoint and deoptimized.
This applies to optimistic method inlining as described in this paper:
A class hierarchy analysis (CHA) is used to detect virtual call sites where currently only one suitable method exists. This method is then optimistically inlined. If a class is loaded later that adds another suitable method and, therefore, the optimistic assumption no longer holds, the method is deoptimized.
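To make the scenario concrete, here is a sketch in Kotlin of the shape of code that triggers this (the names are invented, and whether deoptimization actually occurs depends on the VM and its heuristics; this is an illustration, not a benchmark):

    open class Base { open fun call() = 1 }

    // While Base is the only loaded implementation, CHA lets the JIT
    // devirtualize and inline call() here.
    fun hot(b: Base): Int = b.call()

    fun main() {
        val base = Base()
        repeat(1_000_000) { hot(base) }  // warm up: hot() likely compiled with call() inlined
        // The object expression's class is loaded only when this line runs.
        // It adds a second suitable call() implementation, invalidating the
        // CHA assumption, so compiled activations of hot() are deoptimized.
        val sub = object : Base() { override fun call() = 2 }
        println(hot(base) + hot(sub))  // prints 3
    }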
I've seen sample code (most notably in iOS GL projects) where the current GL texture name is cached, and a comparison is done before calling glBindTexture. The goal is to avoid unnecessary calls to glBindTexture.
e.g.
    if (textureName != cachedTextureName) {
        glBindTexture(GL_TEXTURE_2D, textureName);
        cachedTextureName = textureName;
    }
This approach requires that ALL calls to glBindTexture be handled similarly, preferably through a wrapper. But the inclusion of third-party code makes this problematic, as well as being Yet Another Thing to Remember.
Q. Is this a useful optimisation? Or is the OpenGL implementation smart enough to disregard calls to glBindTexture where the texture name has not changed?
Thanks.
This isn't called out in the spec as far as I've seen. In practice, it appears many implementations actually do some processing if you rebind the same texture: you'll see a relative performance drop if you don't test against the current texture first. I'd recommend using the test if possible, just to ensure that implementation details don't have an adverse effect on your program.
Generally speaking, it is quite useful to abstract over the OpenGL state machine with your own state handling, so you can query state such as the bound texture or the active matrices without calling glGet, which will almost always be slower than your custom handling. This also allows you to prohibit invalid behavior and generally makes your program easier to reason about.
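A sketch of such a wrapper in Kotlin (assuming the LWJGL bindings; the TextureBinder name and its invalidate() policy are made up for illustration):

    import org.lwjgl.opengl.GL11

    // Shadow the bound-texture state so redundant glBindTexture calls are
    // skipped and the current binding can be queried without a slow glGet.
    object TextureBinder {
        private const val UNKNOWN = -1
        private var cachedTextureName = UNKNOWN

        fun bind(textureName: Int) {
            if (textureName != cachedTextureName) {
                GL11.glBindTexture(GL11.GL_TEXTURE_2D, textureName)
                cachedTextureName = textureName
            }
        }

        // Third-party code that binds behind our back invalidates the cache;
        // call this after handing control to such code.
        fun invalidate() { cachedTextureName = UNKNOWN }
    }

Routing every bind through TextureBinder.bind(...) addresses the "Yet Another Thing to Remember" problem from the question, and invalidate() gives you an escape hatch around third-party code you can't wrap.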