Difference between 'this' and 'self' in programming languages

In some languages, like Python, we use self, but in other languages such as Java, we use this.
Is there any special reason for this difference in name for the same concept?

This may not be a complete answer.
In PHP, self is used inside static class methods, while $this refers to the current object instance in non-static methods.
EDIT: In Java, this refers to the current object, similarly to PHP's $this. As for Python, this answer seems to explain self very well: https://stackoverflow.com/a/2709832/4490187

There is nothing special about the name self. It's the name preferred by convention by Pythonistas.
In Java, this plays the same role, though there it is a reserved keyword fixed by the language rather than a mere convention.
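For illustration, a minimal Kotlin sketch (Kotlin follows Java's this convention; names are made up): the receiver is usually implicit inside a member function, unlike Python's self, which must be declared as an explicit parameter.

class Greeter(val name: String) {
    fun greet(): String {
        // `this.name` and plain `name` refer to the same property here
        return "Hello, ${this.name}"
    }
}

fun main() {
    println(Greeter("world").greet())  // Hello, world
}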

Related

Is it a good practice to use Nothing in generics?

Like in this example:
sealed class Option<T>
object None : Option<Nothing>() // <-- like this
class Some<T> : Option<T>()
Or, if it's not a good practice, what should I use here instead?
Is there any official response or article on that? Or is there any argument that this is good practice?
I know that Nothing was designed as the return type of functions that never return, so I'm not sure whether using it as a generic type argument is a valid use.
I know there is an article that says you can do this, but I'm not sure I can trust it.
The author of koptional uses it too, but I don't know whether I can trust that either.
Also, it looks like Scala's Option is implemented similarly: None has type Option[Nothing], and Scala's Nothing is similar to Kotlin's Nothing.
I agree with @zsmb13's comment. Using Nothing in a generic type hierarchy is perfectly valid and even gives benefits over other options:
First, Nothing is embedded in the Kotlin type system as a subtype of any other type, so it plays well with generics variance. For example, Option<Nothing> can be passed where Option<out Foo> is expected.
Second, the compiler will perform control flow checks and detect unreachable code after a Nothing-returning member call when the type is statically known.
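As an illustration of the first point, here is a sketch of the hierarchy from the question with declaration-site variance (out) added, which is what makes the substitution described above type-check; the value property on Some is added for the example.

sealed class Option<out T>
object None : Option<Nothing>()
class Some<out T>(val value: T) : Option<T>()

fun describe(opt: Option<String>): String = when (opt) {
    is Some -> "Some(${opt.value})"   // smart-cast: value is a String here
    None -> "None"                    // accepted because Nothing <: String
}

fun main() {
    println(describe(Some("hello")))  // Some(hello)
    println(describe(None))           // None
}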
See also: A Whirlwind Tour of the Kotlin Type Hierarchy

Precondition functions in Kotlin - good practices

Being a newbie Kotlin coder, I wonder if there are good practices or even language constructs for declaring preconditions in functions.
In Java I have been using Guava's Preconditions checking utilities:
https://github.com/google/guava/wiki/PreconditionsExplained
After some further investigation I came across the require function:
https://kotlinlang.org/api/latest/jvm/stdlib/kotlin/require.html
Is this what is generally used for checking preconditions on functions?
Of course. You can find all of the precondition helpers in Preconditions.kt. In addition to the require function, there are also requireNotNull, check, and checkNotNull.
The Kotlin documentation describes them only briefly, but the JDK's Objects#requireNonNull documentation conveys the same intent:
Checks that the specified object reference is not null. This method is designed primarily for doing parameter validation in methods and constructors.
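A short sketch of how these helpers divide the work (the withdraw function and its messages are made up for illustration): require and requireNotNull validate arguments and throw IllegalArgumentException, while check validates state and throws IllegalStateException.

fun withdraw(amount: Int, balance: Int?): Int {
    // Argument validation: throws IllegalArgumentException on failure
    require(amount > 0) { "amount must be positive, was $amount" }
    val current = requireNotNull(balance) { "balance must not be null" }
    // State validation: throws IllegalStateException on failure
    check(current >= amount) { "insufficient funds: $current < $amount" }
    return current - amount
}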
I use assert() and require() from the stdlib.
https://kotlinlang.org/api/latest/jvm/stdlib/kotlin/assert.html
https://kotlinlang.org/api/latest/jvm/stdlib/kotlin/require.html
Actually, require appears not to be inherited: if a subclass overrides a function that contains a require statement, the require in the parent function is not enforced. A true precondition would also apply to a redefinition of the inherited function, so (IMO) require does not truly provide full precondition-checking functionality. A sketch of the experiment follows below.
(I say "appears" because, being new to Kotlin, I learned this from a simple inheritance experiment, and it's possible I'm wrong, e.g. a compiler bug or a mistake in my setup. I don't think that's likely, though.)
Yes, it seems that toolforger is right about require. I'd add that require is documented only as an ordinary stdlib function (see the links above), not as a language-level contract keyword, so we cannot count on it to implement the standard DBC "require" behavior. Logically it is simply a runtime check, closer to C's assert than to a true inherited precondition.

What is the formal comp. sci. name of this language property?

As a self-taught programmer, my definitions get fuzzy sometimes.
I'm very used to C and ObjC. In both of those your code must adhere to the language "structure". You can only do certain things in certain places. As an example, this is an error:
// beginning of file
NSLog(@"Hello world!"); // can't do this
@implementation MYClass
...
@end
However, in Ruby, anything you put anywhere is executed as the interpreter goes through it. So what is the difference between Ruby and Objective-C that allows this?
At first I thought the difference was that one is interpreted and the other compiled. Then I read some SO posts and the Wikipedia definitions: interpreted vs. compiled is a property of the implementation, not the language. So there could (theoretically) be an interpreted implementation of Objective-C. In that case, the restriction that statements cannot appear outside an implementation can't be a property of compiled languages, and the same argument applies to a hypothetical compiled implementation of Ruby. Or am I wrong in assuming that different implementations of a language would behave the same way?
I'm not sure there's a technical term for it, but in most programming languages the context of the statement is extremely important.
Ruby has a concept of a root or main context where code is allowed. Other scripting languages follow this convention, presumably made popular by languages like Perl which allowed for very concise programming.
This allows the following to be a complete and valid program:
print "Hello world!\n"
In other languages you need to define an entry point, such as a main routine, that is executed instead. Arbitrary code is not really allowed at the top level, which instead is reserved for things like function, type, constant, structure and class definitions.
A language like Ruby has a lot of control over the order in which the code is executed. C, by comparison, is usually composed of separate source files that are then linked together, with no inherent order to the way things are linked; all the modules are simply assembled into the final library or executable. This is why the main entry point is required: it defines which function runs first.
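For contrast, a minimal sketch in Kotlin (any compiled language with a designated entry point would illustrate the same thing): ordinary statements are rejected at the top level of a .kt file, which may only contain declarations.

// println("Hello world!")  // compile error: expressions are not allowed here

fun main() {
    println("Hello world!")  // statements must live inside a function body
}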
In short, it boils down to syntax, context, and language design considerations.
Ruby hides lots of stuff.
Ruby is OO like C++, Objective-C, and Java, and it has a main like C, but you don't see it.
puts(42) is a method call; its implicit receiver is the top-level object, called main. You can see it by typing puts self.
If you don't specify a receiver (receiver.method()), Ruby uses the implicit one, main.
Check available methods:
puts Object.private_methods.sort
Why can you put anything anywhere?
C and C++ look for a function called main and execute it first.
Ruby, on the other hand, doesn't need a main method or class to run first.
It executes code from the first line until it reaches the end of the file (or __END__ on a separate line).
class Strongman
puts "I'm the best!"
end
is essentially syntactic sugar for a Class.new call:
Strongman = Class.new do
puts "I'm the best!"
end
The same goes for module.
for calls each on the range and returns an object (the range itself), so you may think of it as something similar to a method call.
a = for i in 1..12; 42; end
puts a
# 1..12
In the end, it doesn't matter whether it is a method call or some structure like C's int main(): the programming language decides what runs first.

Can we have member variables in Interface?

I read somewhere that interfaces can have member variables.
Static final constants only; you can use them without qualification in classes that implement the interface. On the other hand, these unqualified names pollute the namespace. You can use them, and it is not obvious where they are coming from, since the qualification is optional.
I don't quite understand what they meant. Any help?
What you read is incorrect. Interfaces cannot have member variables.
In VB.Net the only allowable definitions inside an interface are
Properties
Methods
Events
Type Definitions (not legal in C#)
I'm not entirely sure what the above paragraph is referring to. Based on the text, though, it sounds like it's referring to Java: the phrase "static and final" is most often associated with Java code, not .Net (which uses static and readonly).
Can you give us some more context on it?
If you define a constant like this inside a class MyClass:
public static final int MY_CONSTANT = 1;
you can refer to it from other classes as MyClass.MY_CONSTANT, using the MyClass qualifier. This hints at where the constant is defined.
If you define such a constant in an interface MyInterface, you can still refer to it as MyInterface.MY_CONSTANT. However, in classes implementing MyInterface you can simply use MY_CONSTANT without the "MyInterface" prefix.
It may look convenient (fewer keystrokes), but it may lead to confusion, because without the qualifier (prefix) it is not clear where the constant was originally defined.
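As a side note, a hedged Kotlin sketch of the same line being drawn (illustrative names): an interface may declare properties and default methods, but properties cannot have backing fields, so the interface itself stores no state and implementers must supply the storage.

interface Named {
    val name: String                 // abstract property: no backing field
    // val cached = "x"              // compile error: property initializers
                                     // are not allowed in interfaces
    fun greeting() = "Hello, $name"  // default method may use the property
}

class Person(override val name: String) : Named

fun main() {
    println(Person("Ada").greeting())  // Hello, Ada
}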
Adding member variables to interfaces would bring in multiple inheritance (of state) through the back door.
Not available in .NET, sorry.
I wish it were there though.

Why aren't hot-swappable vtables a popular language feature?

In object-oriented programming, it's sometimes nice to be able to modify the behavior of an already-created object. Of course this can be done with relatively verbose techniques such as the strategy pattern. However, sometimes it would be nice to just completely change the type of the object by changing the vtable pointer after instantiation. This would be safe if, assuming you're switching from class A to class B:
class B is a subclass of class A and does not add any new fields, or
class B and class A have the same parent class, and neither does anything except override virtual functions from the parent class (no new fields or virtual functions).
In either case, A and B must have the same invariants.
This is hackable in C++ and the D programming language, because pointers can be arbitrarily cast around, but it's so ugly and hard to follow that I'd be scared to do it in code that needs to be understood by anyone else. Why isn't a higher-level way to do this generally provided?
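For comparison, a minimal Kotlin sketch of the strategy-pattern alternative the question calls verbose (all names are illustrative): behavior is changed at runtime by replacing a delegate rather than the object's class.

interface Behavior {
    fun foo(): String
}

class Widget(var behavior: Behavior) {
    fun foo() = behavior.foo()  // delegate instead of a vtable swap
}

fun main() {
    val w = Widget(object : Behavior { override fun foo() = "I am an A" })
    println(w.foo())  // I am an A

    // "Change the type" at runtime by swapping the delegate, not the class
    w.behavior = object : Behavior { override fun foo() = "I am a B" }
    println(w.foo())  // I am a B
}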
Because the mindset of most language designers is too static.
While such features are dangerous in the hand of programmers, they are necessary tools for library builders. For example, in Java one can create objects without calling a constructor (yes, you can!) but this power is only given to library designers. Still however, many features that library designers would kill for are alas not possible in Java. C# on the other hand is adding more and more dynamic features in each version. I am really looking forward to all the awesome libraries one can build using the upcoming DLR (dynamic language runtime).
In some dynamic languages such as Smalltalk (and also as far as I know Perl and Python, but not Ruby) it is totally possible to change the class of an object. In Pharo Smalltalk you achieve this with
object primitiveChangeClassTo: anotherObject
which changes the class of object to that of anotherObject. Please note that this is not the same as object become: anotherObject which exchanges all pointers of both objects.
You can do it in Python, by modifying the instance __class__ attribute:
>>> class A(object):
...     def foo(self):
...         print "I am an A"
...
>>> class B(object):
...     def foo(self):
...         print "I am a B"
...
>>> a = A()
>>> a.foo()
I am an A
>>> a.__class__
<class '__main__.A'>
>>> a.__class__ = B
>>> a
<__main__.B object at 0x017010B0>
>>> a.foo()
I am a B
However, in 12 years of Python programming I have never had a use for it, and I have never seen anyone else use it either. IMHO there is a huge danger that casual use of this feature will make your code hard to maintain and debug.
The only situation where I can imagine using it is runtime debugging, e.g. to change an instance of a class whose creation I don't control into a mock object or into a class that has been decorated with logging. I would not use it in production code.
You can do it in higher level languages - see the Smalltalk "become" message. The fact that this feature is almost impossible to use correctly even in ST could be the reason that statically typed languages like C++ don't support it.
To paraphrase the XoTcl documentation, it is because most languages that proclaim to be "object oriented" are not: they are class oriented. It sounds like XoTcl mixins, Ruby mixins, and Perl 6 roles provide the functionality you're looking for.
What you're talking about is monkey patching, which is available in several high-level dynamic languages:
A monkey patch (also spelled monkey-patch, MonkeyPatch) is a way to extend or modify the runtime code of dynamic languages (e.g. Smalltalk, JavaScript, Objective-C, Ruby, Perl, Python, Groovy, etc.) without altering the original source code.