Destructor in the Kotlin programming language

I am new to Kotlin and have written a class to perform database operations.
I open the database connection in the constructor (using an init block), but I want to close the connection using a destructor.
Any idea how to achieve this with a Kotlin destructor?
Currently I have written a separate function to close the connection, but I would like this to happen via a destructor, as in other programming languages such as PHP.

Handling resources that need to be closed in Kotlin
You can make your database wrapper implement Closeable. You can then use it like this:
val result = MyResource().use { resource ->
    resource.doThing()
}
This way the resource is available inside the use block; afterwards you get back result, which is whatever doThing() returns, and the resource is closed. Because you never stored the resource in a variable, you also avoid accidentally using it after it has been closed.
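As a rough sketch of what that can look like for a database class, assuming a JDBC-style connection (the class name, JDBC URL, and query below are invented placeholders, and a matching driver has to be on the classpath):

import java.io.Closeable
import java.sql.Connection
import java.sql.DriverManager

// Hypothetical wrapper: opens the connection up front, closes it in close().
class DatabaseWrapper(url: String) : Closeable {
    private val connection: Connection = DriverManager.getConnection(url)

    fun userCount(): Int =
        connection.createStatement().use { stmt ->
            stmt.executeQuery("SELECT COUNT(*) FROM users").use { rs ->
                rs.next()
                rs.getInt(1)
            }
        }

    // Called automatically at the end of a use { ... } block.
    override fun close() {
        connection.close()
    }
}

fun main() {
    val count = DatabaseWrapper("jdbc:h2:mem:demo").use { db ->
        db.userCount()
    }
    println(count)
}

The init block from the question can stay as it is; the only change is implementing Closeable and moving the existing "close connection" function into close().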
Why you should avoid finalize
Finalizers are not safe; the linked article describes some of the problems with them, such as:
They are not guaranteed to run at all
When they do run, there can be long delays before they do
The link sums up the problems like this:
Finalizers are unpredictable, often dangerous, and generally unnecessary. Their use can cause erratic behavior, poor performance, and portability problems. Finalizers have a few valid uses, which we’ll cover later in this item, but as a rule of thumb, you should avoid finalizers.
C++ programmers are cautioned not to think of finalizers as Java’s analog of C++ destructors. In C++, destructors are the normal way to reclaim the resources associated with an object, a necessary counterpart to constructors. In Java, the garbage collector reclaims the storage associated with an object when it becomes unreachable, requiring no special effort on the part of the programmer. C++ destructors are also used to reclaim other nonmemory resources. In Java, the try-finally block is generally used for this purpose.
If you really need to use finalize
This link shows how to override finalize, but it is a bad idea unless absolutely necessary.
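If you do go down that road, this is roughly what it looks like in Kotlin: because Any does not declare finalize(), you declare the method yourself without the override keyword (the class name here is made up, and note that finalization has been deprecated in recent JDKs):

class Resource {
    // Matches Object.finalize() at the JVM level; must not be private.
    protected fun finalize() {
        // last-chance cleanup; prefer Closeable/use instead
        println("finalizing Resource")
    }
}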

Related

Unnecessarily marking functions as suspending in favor of a common abstraction

I'm working on a project with an API running on the JVM and a JS client that accesses this API from the browser. The data classes of the objects which are converted to/from JSON live in a multiplatform module so that I can reuse the code on both platforms and don't accidentally end up with mismatched attributes. At this point it would be nice to also have the API's interface in this multiplatform module, which would then be implemented and hosted on the JVM and implemented and presented in the browser. However, all methods of this interface need to be suspending in the browser, since requests there are asynchronous (at least with Ktor's client, which I'm using), while they do not need to be suspending on the JVM.
Is there a good reason against having all those methods suspending even though I don't make use of it in the JVM? I know that methods usually should be suspending only if it's actually needed, but then I would be writing all the same interfaces (besides the suspend keyword) twice which seems like a lot of unnecessary boilerplate code to me. The methods which would unnecessarily be marked as suspending are called from suspending contexts (I'm using Ktor in the JVM too) so restricted usage wouldn't be a problem.
This seems like a matter of preference, really. Both using suspend and not using it have disadvantages, so you have to choose which weighs less.
From what you write, it seems that the advantage of using suspend (writing the code only once) outweighs the disadvantage of polluting the interface with an unnecessary modifier. I am not aware of the possible runtime overheads here. Personally, I would opt to go with suspend.
The methods which would unnecessarily be marked as suspending are called from suspending contexts (I'm using Ktor in the JVM too) so restricted usage wouldn't be a problem.
This is the key point: the biggest hassle of the unnecessary suspend is having to launch a coroutine. If you're already inside a coroutine, the overhead of just one function along the call path being unnecessarily suspend is very low: a single extra object allocated per call.
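As a rough sketch of the approach under discussion (the interface, data class, and implementations below are invented names, and the source-set split is indicated only by comments):

// commonMain: declared once, shared by the JVM server and the JS client.
interface UserApi {
    suspend fun fetchUser(id: Long): User
}

data class User(val id: Long, val name: String)

// jvmMain: the body happens to be synchronous, so suspend is "unnecessary" here,
// but callers inside Ktor handlers are already in a coroutine anyway.
class DatabaseUserApi : UserApi {
    override suspend fun fetchUser(id: Long): User = User(id, "from-db")
}

// jsMain: here the suspend modifier is genuinely needed, e.g. when the
// lookup delegates to an asynchronous HTTP call.
class HttpUserApi(private val httpLookup: suspend (Long) -> User) : UserApi {
    override suspend fun fetchUser(id: Long): User = httpLookup(id)
}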
While it's true that with one interface you avoid boilerplate at the cost of having to launch a coroutine on the JVM, I'd consider another perspective:
When designing your abstraction, IMO you shouldn't get too deep into implementation details. Instead of thinking about how the JVM and/or JS handles communication with the API, I'd ask the question: "Do I want to leave room for the platforms to handle this communication in an async/suspending way?" I believe this way you'll arrive at a more scalable solution, though it's true you'll lose out on some of the micro-optimizations.

Is polymorphism a waste when we know the exact type of a class before run-time?

Run-time polymorphism can be used to let the run-time dynamically choose the exact concrete class behind an abstract class/interface (take the Animal/Dog or Vehicle/Car examples).
But when we know the exact concrete class at coding time (compile time), do we really need to forcefully apply polymorphism?
When I write OO code, I tend to use the most general type I can on the left-hand side of the assignment. This immediately means that my answer to your question is: no.
Here's an example:
Animal x = new Dog();
...
x.move();
The reason I do this is that I'm probably going to split the beginning and the end of the operation into two distinct operations; my methods are extremely short in practice.
Applied to the same example:
void moveDog() {
    move(new Dog());
}

void move(Animal animal) {
    animal.move();
}
As you can see, it would make no sense for the move function to know what kind of animal it is really moving.
Generally, it is the compiler's duty to figure out whether, in a given code base, any call site can actually reach an overridden move() method. Some compilers can detect that no override will ever be invoked and then remove the dynamic dispatch at compile time. With some luck, my code above would compile the same whether the move function receives an Animal or a Dog.
Now, this is theory. In practice, two things matter. First, widely used compilers have still not adopted such aggressive optimization techniques as distinguishing calls that can be resolved statically from calls that require dynamic dispatch. Second, the first point doesn't matter much with the CPU power we have today.
I have been writing highly optimized code for fifteen years already and I have never met a situation in which I had to factor polymorphic calls out. That is why I strongly recommend applying polymorphism as much as possible. When the time comes to add classes or incorporate new features, polymorphic calls will likely be the tool that lets you seamlessly add new classes to the existing design. If you used overly concrete types during development, it could easily happen that you cannot add a new feature to the given code base.
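As a small Kotlin sketch of that last point (Animal/Dog/Cat are just the usual textbook names), note that adding a new subclass later requires no change to the code that calls move():

interface Animal {
    fun move()
}

class Dog : Animal {
    override fun move() = println("Dog runs")
}

// Added later: nothing below needs to change.
class Cat : Animal {
    override fun move() = println("Cat sneaks")
}

// Works for every present and future Animal.
fun move(animal: Animal) = animal.move()

fun main() {
    move(Dog())
    move(Cat())
}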
But when we know the exact concrete class at coding time (compile time), do we really need to forcefully apply polymorphism?
Knowing the type at compile time is not necessarily a yes/no thing across all the code in an app and an object's entire lifetime, given techniques for type erasure. But, ignoring those classic uses of polymorphism, there are still other potential reasons such as...
(sorry, this one's pretty obvious) to make it easier to change the implementation should another become available later
to make it easier to "mock" an implementation for testing (i.e. provide objects that pretend to provide some service or function, but have more scripted/controllable/observable behaviours to let tests put some dependent code through its paces); see the sketch after this list
hide aspects of the implementation that might otherwise have to be exposed (e.g. in C++, a class/struct definition must declare all the protected and private members)
this is sometimes for Intellectual Property protection; at other times, so more changes can be made to the implementation without having to change the "header" file, which would typically trigger recompilation of a lot of dependent code
to aid in modelling and application design, using the "interfaces" to cleanly specify the intended APIs, which can then provide a more stable reference for comparison as the implementations are fleshed out
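To illustrate the "mock an implementation for testing" point, here is a hedged Kotlin sketch; the Clock, FixedClock, and ReportGenerator names are invented for the example:

import java.time.Instant

// The abstraction the production code depends on.
interface Clock {
    fun now(): Instant
}

class SystemClock : Clock {
    override fun now(): Instant = Instant.now()
}

// A scripted implementation replaces the real one in tests.
class FixedClock(private val instant: Instant) : Clock {
    override fun now(): Instant = instant
}

class ReportGenerator(private val clock: Clock) {
    fun header(): String = "Report generated at ${clock.now()}"
}

fun main() {
    val generator = ReportGenerator(FixedClock(Instant.parse("2020-01-01T00:00:00Z")))
    println(generator.header())  // deterministic output, easy to assert on
}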

Are there any alternative concepts for handling unmanaged resources in garbage collected languages?

Garbage-collected object-oriented programming languages reclaim unused memory automatically, but all other kinds of resources (e.g. files, sockets, ...) still require manual release, since finalizers cannot be trusted to run in time (or at all).
Therefore such resource objects usually provide some kind of "close"- or "dispose"-method/pattern, which can be problematic for a number of reasons:
Dispose has to be called manually, which may pose problems when it is not clear at which point the resource has to be released (a problem similar to manual memory management)
The disposable-pattern is somewhat "viral", since each class containing a disposable resource must be made disposable as well in order to guarantee correct resource cleanup
An addition of a disposable member to a class, requiring the class to become disposable as well, changes the interface and the usage patterns of the class, thus breaking encapsulation
The disposable-pattern creates problems with inheritance, i.e. when a derived class is disposable, while the base class isn't
So, are there any alternative concepts/approaches for properly releasing such resources? Any papers/research in that direction?
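To make the "viral" point above concrete, here is a hedged Kotlin sketch with invented class names; each owner of a closeable resource is forced to become closeable itself:

import java.io.Closeable

// A low-level resource that must be released explicitly.
class FileJournal(private val path: String) : Closeable {
    override fun close() = println("releasing file handle for $path")
}

// Because it owns a FileJournal, this class has to become Closeable too...
class OrderStore(path: String) : Closeable {
    private val journal = FileJournal(path)
    override fun close() = journal.close()
}

// ...and so does anything that owns an OrderStore, all the way up.
class Application : Closeable {
    private val store = OrderStore("orders.journal")
    override fun close() = store.close()
}

fun main() {
    Application().use { /* run the app */ }
}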
One approach (in languages that support it) is to manually trigger a garbage collection event to cause finalizers to run. However, some languages (like Java) do not provide a reliable mechanism for doing so.

How do I debug singletons in Objective-C

My app contains several singletons (following this tutorial). I've noticed, however, that when the app crashes because of a singleton, it becomes nearly impossible to figure out where the crash came from. The app breaks at the main function with an EXC_BAD_ACCESS even though the problem lies in one of the singleton objects. Is there a guide to how I would debug my singleton objects when they are problematic?
If you don't want to change your design (as recommended in my other post), then consider the usual debugging facilities: assertions, unit tests, zombie tests, memory tests (GuardMalloc, scribbling), etc. These should identify the vast majority of issues you'll encounter.
Of course, you will have some restrictions on what you can and cannot do, notably regarding what cannot be tested independently using unit tests.
As well, reproducibility may be more difficult in some contexts when you are dealing with complex global state because you have created several enforced singletons. When the global state is quite large and complex, testing these types independently may not be fruitful in all cases, since the bug may appear only in a complex global state found in your app (say, when four singletons interact in a specific manner). If you have isolated the issue to interactions of multiple singleton instances (e.g. MONAudioFileCache and MONVideoCache), placing these objects in a container class will let you introduce coupling, which will help diagnose the problem. Although increasing coupling is normally considered a bad thing, this doesn't really increase coupling (it already exists as part of the global state); it simply concentrates the existing global-state dependencies -- you're not so much increasing them as concentrating them where the state of these singletons affects other components of the mutable global state.
If you still insist on using singletons, these may help:
Either make them thread-safe or add assertions to verify that mutations happen only on the main thread (for example). Too many people assume that an object with atomic properties is therefore thread-safe; that is false.
Encapsulate your data better, particularly data that mutates. For example: rather than handing out an array your class holds for the client to mutate, have the singleton class add the object to the array it holds. If you truly must expose the array to the client, return a copy. This is just basic OOD, but many Objective-C devs expose the majority of their ivars, disregarding the importance of encapsulation.
If it's not thread-safe and the class is used in a multithreaded context, make the class (not the client) implement proper thread safety.
Design singletons' error checking to be particularly robust. If the programmer passes an invalid argument or misuses the interface, just assert (with a clear message about the problem and its resolution).
Do write unit tests.
Detach state (e.g. if you can remove an ivar easily, do it).
Reduce the complexity of your state.
If something is still impossible to debug after writing and testing with thorough assertions, unit tests, zombie tests, memory tests (GuardMalloc, scribbling), etc., you are writing programs which are too complex (e.g. divide the complexity among multiple classes), or the requirements do not match the actual usage. If you're at that point, you should definitely refer to my other post. The more complex the global variable state, the more time it will take to debug, and the less you can reuse and test your programs when things do go wrong.
Good luck.
I scanned the article, and while it had some good ideas it also had some bad advice, and it should not be taken as gospel.
And, as others have suggested, if you have a lot of singleton objects it may mean that you're simply keeping too much state global/persistent. Normally only one or two of your own should be needed (in addition to those that other "packages" of one sort or another may implement).
As to debugging singletons, I don't understand why you say it's hard -- no worse than anything else, for the most part. If you're getting EXC_BAD_ACCESS it's because you've got some sort of addressing bug, and that's nothing specific to singleton schemes (unless you're using a very bad one).
Macros make debugging difficult because the lines of code they incorporate can't have breakpoints put in them. Deep six macros, if nothing else. In particular, the SYNTHESIZE_SINGLETON_FOR_CLASS macro from the article is interfering with debugging. Replace the call to this macro function with the code it generates for your singleton class.
Ugh, don't enforce singletons. Just create normal classes. If your app needs just one instance, add it to something that is created once, such as your app delegate.
Most Cocoa singleton implementations I've seen should not have been singletons.
Then you will be able to debug, test, create, mutate, and destroy these objects as usual.
The good part, of course, is that the majority of your global-variable pains will disappear when you implement these classes as normal objects.

Is it good convention for a class to perform functions on itself?

I've always been taught that if you are doing something to an object, that should be an external thing, so one would Save(Class) rather than having the object save itself: Class.Save().
I've noticed that in the .Net libraries, it is common to have a class modify itself as with String.Format() or sort itself as with List.Sort().
My question is, in strict OOP is it appropriate to have a class which performs functions on itself when called to do so, or should such functions be external and called on an object of the class' type?
Great question. I have just recently reflected on a very similar issue and was eventually going to ask much the same thing here on SO.
In OOP textbooks, you sometimes see examples such as Dog.Bark() or Person.SayHello(). I have come to the conclusion that those are bad examples. When you call those methods, you make a dog bark or a person say hello. However, in the real world you couldn't do this: a dog decides for itself when it's going to bark, and a person decides for themselves when they will say hello to someone. Therefore, these methods would more appropriately be modelled as events (where the programming language supports them).
You would e.g. have a function Attack(Dog), PlayWith(Dog), or Greet(Person) which would trigger the appropriate events.
Attack(dog) // triggers the Dog.Bark event
Greet(johnDoe) // triggers the Person.SaysHello event
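A hedged Kotlin sketch of that idea, with a hand-rolled "event" (the Dog, onBark, and attack names are invented):

class Dog {
    // A minimal "Bark event": listeners are notified when the dog decides to bark.
    private val barkListeners = mutableListOf<() -> Unit>()

    fun onBark(listener: () -> Unit) { barkListeners += listener }

    // The dog itself decides to bark in response to being attacked.
    internal fun provoke() = barkListeners.forEach { it() }
}

// External action: you don't make the dog bark, you attack it.
fun attack(dog: Dog) = dog.provoke()

fun main() {
    val dog = Dog()
    dog.onBark { println("Woof!") }
    attack(dog)  // prints "Woof!"
}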
As soon as you have more than one parameter, it won't be so easy deciding how to best write the code. Let's say I want to store a new item, say an integer, into a collection. There's many ways to formulate this; for example:
StoreInto(1, collection) // the "classic" procedural approach
1.StoreInto(collection) // possible in .NET with extension methods
Store(1).Into(collection) // possible by using state-keeping temporary objects
According to the thinking laid out above, the last variant would be the preferred one, because it doesn't force an object (the 1) to do something to itself. However, if you follow that programming style, it will soon become clear that this fluent interface-like code is quite verbose, and while it's easy to read, it can be tiring to write or even hard to remember the exact syntax.
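For what it's worth, a hedged Kotlin sketch of the third variant, using a small state-keeping temporary object (the store/into names are invented):

// Temporary object that remembers the value until the target is named.
class Store<T>(private val value: T) {
    fun into(collection: MutableCollection<T>) {
        collection.add(value)
    }
}

fun <T> store(value: T) = Store(value)

fun main() {
    val numbers = mutableListOf<Int>()
    store(1).into(numbers)  // reads almost like the prose version
    println(numbers)        // [1]
}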
P.S.: Concerning global functions: In the case of .NET (which you mentioned in your question), you don't have much choice, since the .NET languages do not provide for global functions. I think these would be technically possible with the CLI, but the languages disallow that feature. F# has global functions, but they can only be used from C# or VB.NET when they are packed into a module. I believe Java also doesn't have global functions.
I have come across scenarios where this lack is a pity (e.g. with fluent interface implementations). But generally, we're probably better off without global functions, as some developers might always fall back into old habits, and leave a procedural codebase for an OOP developer to maintain. Yikes.
By the way, in VB.NET you can mimic global functions by using modules. Example:
Globals.vb:
Module Globals
    Public Sub Save(ByVal obj As SomeClass)
        ...
    End Sub
End Module
Demo.vb:
Imports Globals
...
Dim obj As SomeClass = ...
Save(obj)
I guess the answer is "it depends"... For persistence of an object, I would side with having that behavior defined in a separate repository object. So with your Save() example I might have this:
repository.Save(class)
However with an Airplane object you may want the class to know how to fly with a method like so:
airplane.Fly()
This is one of the examples I've seen from Fowler about an anemic domain model. I don't think in this case you would want to have a separate service like this:
new airplaneService().Fly(airplane)
With static methods and extension methods it makes a ton of sense, as in your List.Sort() example. So it depends on your usage patterns. You wouldn't want to have to new up an instance of a ListSorter class just to be able to sort a list like this:
new listSorter().Sort(list)
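A hedged Kotlin sketch of that split, with invented names; behaviour that belongs to the object stays on it, while persistence lives in a repository:

class Airplane(val id: String) {
    // Behaviour the object itself should own.
    fun fly() = println("Airplane $id is flying")
}

// Persistence is a separate concern, so it lives outside the class.
class AirplaneRepository {
    private val saved = mutableMapOf<String, Airplane>()
    fun save(airplane: Airplane) { saved[airplane.id] = airplane }
}

fun main() {
    val plane = Airplane("N12345")
    plane.fly()                      // airplane.Fly()
    AirplaneRepository().save(plane) // repository.Save(airplane)
}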
In strict OOP (Smalltalk or Ruby), all methods belong to an instance object or a class object. In "real" OOP (like C++ or C#), you will have static methods that essentially stand completely on their own.
Going back to strict OOP, I'm more familiar with Ruby, and Ruby has several "pairs" of methods that either return a modified copy or modify the object in place -- a method ending with a ! indicates that the message modifies its receiver. For instance:
>> s = 'hello'
=> "hello"
>> s.reverse
=> "olleh"
>> s
=> "hello"
>> s.reverse!
=> "olleh"
>> s
=> "olleh"
The key is to find some middle ground between pure OOP and pure procedural that works for what you need to do. A class should do only one thing (and do it well). Most of the time, that won't include saving itself to disk, but that doesn't mean the class shouldn't know how to serialize itself to a stream, for instance.
I'm not sure what distinction you seem to be drawing when you say "doing something to an object". In many if not most cases, the class itself is the best place to define its operations, as under "strict OOP" it is the only code that has access to internal state on which those operations depend (information hiding, encapsulation, ...).
That said, if you have an operation which applies to several otherwise unrelated types, then it might make sense for each type to expose an interface which lets the operation do most of the work in a more or less standard way. To tie it in to your example, several classes might implement an interface ISaveable which exposes a Save method on each. Individual Save methods take advantage of their access to internal class state, but given a collection of ISaveable instances, some external code could define an operation for saving them to a custom store of some kind without having to know the messy details.
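A hedged Kotlin sketch of that idea; the Saveable, User, and AuditLog names are illustrative, and a byte array stands in for whatever custom store is used:

interface Saveable {
    fun save(): ByteArray
}

class User(private val name: String) : Saveable {
    // Uses its private state; callers never see name directly.
    override fun save(): ByteArray = name.encodeToByteArray()
}

class AuditLog(private val entries: List<String>) : Saveable {
    override fun save(): ByteArray = entries.joinToString("\n").encodeToByteArray()
}

// External code can persist any mix of Saveable things without knowing their internals.
fun saveAll(items: List<Saveable>, sink: (ByteArray) -> Unit) {
    items.forEach { sink(it.save()) }
}

fun main() {
    saveAll(listOf(User("alice"), AuditLog(listOf("login", "logout")))) { bytes ->
        println(bytes.size)
    }
}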
It depends on what information is needed to do the work. If the work is unrelated to the class (mostly equivalently, can be made to work on virtually any class with a common interface), for example, std::sort, then make it a free function. If it must know the internals, make it a member function.
Edit: Another important consideration is performance. In-place sorting, for example, can be miles faster than returning a new, sorted copy. This is why quicksort beats merge sort in the vast majority of cases even though merge sort has better worst-case guarantees: quicksort can be performed in place, whereas I've never heard of a practical in-place merge sort. Just because it's technically possible to perform an operation through the class's public interface doesn't mean that you actually should.