What features do you wish were in common languages? [closed] - language-features

What features do you wish were in common languages? More precisely, I mean features which generally don't exist at all but would be nice to see, rather than, "I wish dynamic typing was popular."

I've often thought that "observable" would make a great field modifier (like public, private, static, etc.)
class GameState {
    observable int CurrentScore;
}
Then, other classes could declare an observer of that property:
class ScoreDisplay {
    observe GameState.CurrentScore(int oldValue, int newValue) {
        ...do stuff...
    }
}
The compiler would wrap all access to the CurrentScore property with notification code, and observers would be notified immediately upon the value's modification.
Sure, you can do the same thing in most programming languages with event listeners and property change handlers, but it's a huge pain in the ass and requires a lot of piecemeal plumbing, especially if you're not the author of the class whose values you want to observe. In that case, you usually have to write a wrapper subclass, delegating all operations to the original object and sending change events from mutator methods. Why can't the compiler generate all that dumb boilerplate code?
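For comparison, here is roughly the plumbing such a compiler could generate, written out by hand as a minimal C# sketch (the event shape and everything beyond the original GameState/ScoreDisplay names is my assumption, not the poster's proposal):

using System;

// Hand-written equivalent of what the proposed "observable" modifier
// could generate automatically.
public class GameState
{
    private int currentScore;

    // Observers subscribe here to hear about changes.
    public event Action<int, int> CurrentScoreChanged;

    public int CurrentScore
    {
        get { return currentScore; }
        set
        {
            int oldValue = currentScore;
            currentScore = value;
            CurrentScoreChanged?.Invoke(oldValue, value);
        }
    }
}

public class ScoreDisplay
{
    public ScoreDisplay(GameState state)
    {
        // Hand-written equivalent of "observe GameState.CurrentScore".
        state.CurrentScoreChanged += (oldValue, newValue) =>
            Console.WriteLine($"Score: {oldValue} -> {newValue}");
    }
}

Apart from the field and the one line of display logic, every line above is exactly the boilerplate the proposed keyword would absorb.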

I guess the most obvious answer is Lisp-like macros. Being able to process your code with your code is wonderfully "meta" and allows some pretty impressive features to be developed from (almost) scratch.
A close second is double (or multiple) dispatch in languages like C++. I would love it if polymorphism could extend to the parameters of a virtual function.

I'd love for more languages to have a type system like Haskell's. Haskell uses a really awesome type inference system, so you almost never have to declare types, yet it's still a strongly typed language.
I also really like the way you declare new types in Haskell. I think it's a lot nicer than, e.g., object-oriented systems. For example, to declare a binary tree in Haskell, I could do something like:
data Tree a = Node a (Tree a) (Tree a) | Empty  -- Empty, not Nothing, to avoid clashing with the Prelude's Nothing
So the composite data types are more like algebraic types than objects. I think it makes reasoning about the program a lot easier.
Plus, mixing in type classes is a lot nicer. A type class is a set of operations that a type supports -- sort of like an interface in a language like Java, but more like a mixin in a language like Ruby, I guess. It's kind of cool.
Ideally, I'd like to see a language like Python, but with data types and type classes like Haskell instead of objects.

I'm a big fan of closures / anonymous functions.
my $y = "world";
my $x = sub { print #_ , $y };
&$x( 'hello' ); #helloworld
and
my $adder = sub {
my $reg = $_[0];
my $result = {};
return sub { return $reg + $_[0]; }
};
print $adder->(4)->(3);
I just wish they were more commonplace.

Things from Lisp I miss in other languages:
Multiple return values
required, keyword, optional, and rest parameters (freely mixable) for functions
functions as first class objects (becoming more common nowadays)
tail call optimization
macros that operate on the language, not on the text
consistent syntax
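Several of these have since trickled into mainstream languages. As a small illustration (in C#, obviously not Lisp), tuples cover multiple return values and parameters can be optional and passed by keyword:

// A sketch of two of the items above as they later surfaced in C#:
// tuples (C# 7) give multiple return values, and parameters can be
// optional and named (C# 4).
public static class LispishFeatures
{
    // Multiple return values via a tuple.
    public static (int Quotient, int Remainder) DivMod(int a, int b)
        => (a / b, a % b);

    // Optional and keyword-style (named) parameters.
    public static string Greet(string name, string greeting = "Hello")
        => $"{greeting}, {name}!";
}

// Usage:
// var (q, r) = LispishFeatures.DivMod(17, 5); // q == 3, r == 2
// LispishFeatures.Greet(name: "Ada");         // "Hello, Ada!"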

To start things off, I wish the standard for strings was to use a prefix if you wanted to use escape codes, rather than their use being the default. E.g. in C# you can prefix with @ for a verbatim (raw) string. Similarly, Python has the r prefix. I'd rather use @/r when I don't want a raw string and do need escape codes.
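For contrast, here is how the current C# defaults look; the suggestion amounts to swapping the two cases:

string escaped  = "C:\\temp\\new";  // escape codes are the default today
string verbatim = @"C:\temp\new";   // @ opts out of escape processing
// The proposal: raw behaviour by default, with a prefix (like @ or
// Python's r) required only when you actually want escape codes.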

More powerful templates that are actually designed to be used for metaprogramming, rather than C++ templates that are really designed for relatively simple generics and are Turing-complete almost by accident. The D programming language has these, but it's not very mainstream yet.

immutable keyword. Yes, you can make immutable objects, but that's a lot of pain in most languages.
class JustAClass
{
    private readonly int id;
    private readonly MyClass obj;

    public MyClass Obj
    {
        get
        {
            return obj;
        }
    }
}
Apparently JustAClass is an immutable class. But that's not the case, because any other object holding the same reference can still modify the object behind obj.
So it would be better to introduce a new immutable keyword: when immutable is used, that object will be treated as immutable.
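Here is the aliasing problem spelled out as a small C# sketch; MyClass is filled in hypothetically, since the original snippet leaves it undefined:

// Demonstrates why JustAClass above is not truly immutable: the
// reference it holds is shared with outside code.
public class MyClass
{
    public int Value; // mutable state
}

public class JustAClass
{
    private readonly MyClass obj;
    public JustAClass(MyClass obj) { this.obj = obj; }
    public MyClass Obj { get { return obj; } }
}

// Usage:
// var shared = new MyClass { Value = 1 };
// var frozen = new JustAClass(shared);
// shared.Value = 42;     // mutates the "immutable" object's state
// frozen.Obj.Value = 99; // so does this, right through the getter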

I like some of the array manipulation capabilities found in the Ruby language. I wish we had some of that built into .Net and Java. Of course, you can always create such a library, but it would be nice not to have to do that!
Also, static indexers are awesome when you need them.

Type inference. It's slowly making its way into mainstream languages, but it's still not good enough. F# is the gold standard here.

I wish there was a self-reversing assignment operator, which rolled back when out of scope. This would be to replace:
saved_datafoobak = item.datafoobak
item.datafoobak = 'tootle'
item.handledata()
item.datafoobak = saved_datafoobak
with this
item.datafoobak #=# 'tootle'
item.handledata()
One could explicitly roll back such changes, but they'd roll back once out of scope, too. This kind of feature would be a bit error-prone, maybe, but it would also make for much cleaner code in some cases. Some sort of shallow clone might be a more effective way to do this:
itemclone = item.shallowclone
itemclone.datafoobak='tootle'
itemclone.handledata()
However, shallow clones might have issues if their functions modified their internal data...though so would reversible assignments.
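For what it's worth, a disposable guard can approximate the self-reversing assignment in C# today. This is only a sketch; ScopedAssign and the item members are invented names:

using System;

// A scoped, self-reversing assignment: the guard saves the old value
// and restores it when disposed (i.e. when the scope ends).
public sealed class ScopedAssign<T> : IDisposable
{
    private readonly Action<T> setter;
    private readonly T oldValue;

    public ScopedAssign(Func<T> getter, Action<T> setter, T newValue)
    {
        this.setter = setter;
        oldValue = getter();
        setter(newValue);
    }

    public void Dispose() => setter(oldValue); // roll back on scope exit
}

// Usage (item and DataFoobak are hypothetical):
// using (new ScopedAssign<string>(() => item.DataFoobak,
//                                 v => item.DataFoobak = v, "tootle"))
// {
//     item.HandleData(); // sees "tootle"
// } // old value restored here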

I'd like to see single-method and single-operator interfaces:
interface Addable<T> --> HasOperator( T = T + T)
interface Splittable<T> --> HasMethod( T[] = T.Split(T) )
...or something like that...
I envision it as being a typesafe implementation of duck-typing. The interfaces wouldn't be guarantees provided by the original class author. They'd be assertions made by a consumer of a third-party API, to provide limited type-safety in cases where the original authors hadn't anticipated.
(A good example of this in practice would be the INumeric interface that people have been clamoring for in C# since the dawn of time.)
In a duck-typed language like Ruby, you can call any method you want, and you won't know until runtime whether the operation is supported, because the method might not exist.
I'd like to be able to make small guarantees about type safety, so that I can polymorphically call methods on heterogeneous objects, as long as all of those objects have the method or operator that I want to invoke.
And I should be able to verify the existence of the methods/operators I want to call at compile time. Waiting until runtime is for suckers :o)
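As a postscript: C# 11's generic math (static abstract interface members) later delivered something close to this for operators, including that long-requested numeric constraint. A sketch using the real INumber&lt;T&gt; interface from System.Numerics:

using System.Collections.Generic;
using System.Numerics;

// INumber<T> (C# 11 / .NET 7 generic math) guarantees at compile time
// that T has + and Zero, so Sum only compiles for types that really
// support those operators.
public static class Algorithms
{
    public static T Sum<T>(IEnumerable<T> items) where T : INumber<T>
    {
        T total = T.Zero;
        foreach (var item in items)
            total += item;
        return total;
    }
}

It still doesn't let a consumer retrofit an interface onto a third-party type, though, which is the harder half of the proposal.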

Lisp style macros.
Multiple dispatch.
Tail call optimization.
First class continuations.

Call me silly, but I don't think every feature belongs in every language. It's the "jack of all trades, master of none" syndrome. I like having a variety of tools available, each one of which is the best it can be for a particular task.

Higher-order functions, like map, flatMap, foldLeft, foldRight, and so on. A type system like Scala's (builder safety). Compilers that can remove high-level libraries at compile time, while still having them available if you run in "interpreted" or "less-compiled" mode (speed... sometimes you need it).

There are several good answers here, but I will add some:
1 - The ability to get a string representation of the current and caller code, so that I could output a variable name and its value easily, or print the name of the current class, function, or a stack trace at any time (see the sketch after this list for how close C# gets).
2 - Pipes would be nice too. This feature is common in shells, but uncommon in other types of languages.
3 - The ability to delegate any number of methods to another class easily. This looks like inheritance, but even in the presence of inheritance, once in a while we need some kind of wrapper or stub which cannot be implemented as a child class, and forwarding all methods requires a lot of boilerplate code.
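For wish #1, C#'s caller-info attributes and nameof get surprisingly close; Diag here is a hypothetical helper, but the attributes are real:

using System;
using System.Runtime.CompilerServices;

// The compiler fills in the calling member, file, and line for you.
public static class Diag
{
    public static void Trace(string message,
        [CallerMemberName] string member = "",
        [CallerFilePath] string file = "",
        [CallerLineNumber] int line = 0)
    {
        Console.WriteLine($"{file}:{line} {member}: {message}");
    }
}

// Usage:
// int speed = 55;
// Diag.Trace($"{nameof(speed)} = {speed}"); // prints the variable name and value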

I'd like a language that was much more restrictive and was designed around producing good, maintainable code without any trickiness. Also, it should be designed to give the compiler the ability to check as much as possible at compile time.
Start with a new-ish, VM-based, heavily OO language.
Remove complexities like Operator Overloading and multiple inheritance if they exist.
Force all non-final variables to Private.
Members should default to "Final" but should have a "Variable" tag to override it. (This may require built-in support for the builder pattern to be fully effective).
Variables should not allow a "Null" value by default, but variables and parameters should have a "nullable" tag that indicates that null is acceptable for that variable.
It would also be nice to be able to avoid some common questionable patterns:
Some built-in way to simplify IOC/DI to eliminate singletons,
Eliminate Java-style checked exceptions so people stop writing empty catch blocks.
Finally focus on code readability:
Named Parameters
Remove the ability to create methods more than, say, 100 lines long.
Add some complexity analysis to help detect complicated methods and classes.
I'm sure I haven't named 1/10 of the items possible, but basically I'm talking about something that compiles to the same bytecode as C# or Java, but is so restrictive that a programmer can hardly help but write good code.
And yes, I know there are lint-type tools that will do some of this, but I've never seen them on any project I've worked on (and they wouldn't physically run on the code I'm working on now, for instance), so they aren't being very helpful, and I would love to see compilation actually fail when you type in a 101-line method...

Why is adding methods to a type different than adding a sub or an operator in perl6?

Making subs/procedures available for reuse is one core function of modules, and I would argue that it is the fundamental way a language becomes composable and therefore efficient with programmer time:
if you create a type in your module, I can create my own module that adds a sub that operates on your type. I do not have to extend your module to do that.
# your module
class Foo {
    has $.id;
    has $.name;
}

# my module
sub foo-str(Foo:D $f) is export {
    return "[{$f.id}-{$f.name}]";
}

# someone else using yours and mine together for profit
my $f = Foo.new(:id(1234), :name("brclge"));
say foo-str($f);
As seen in "Overloading operators for a class", this composability of modules works equally well for operators, which makes sense to me since operators are just a kind of syntactic sugar for subs anyway (in my head at least). Note that the definition of such an operator does not cause any surprising change in the behavior of existing code; you need to import it into your code explicitly to get access to it, just like the sub above.
Given this, I find it very odd that we do not have a similar mechanism for methods, see e.g. the discussion at How do you add a method to an existing class in Perl 6?, especially since perl6 is such a method-happy language. If I want to extend the usage of an existing type, I would want to do that in the same style as the original module was written in. If there is a .is-prime on Int, it must be possible for me to add a .is-semi-prime as well, right?
I read the discussion at the link above, but don't quite buy the "action at a distance" argument: how is that different from me exporting another multi sub from a module? For example, the Rust way of making this a lexical change (Trait + impl ... for) seems quite hygienic to me, and would be very much in line with the operator approach above.
More interesting (to me at least) than the technicalities is the question of language design: isn't the ability to provide new verbs (subs, operators, methods) for existing nouns (types) a core design goal for a language like perl6? If it is, why would it treat methods differently? And if it does treat them differently for a good reason, does that not mean we are using way too many non-composable methods as nouns where we should be using subs instead?
From a language design perspective, it all comes down to a simple question: which language are we speaking? In Perl 6, this is a question about which we always try to be very clear.
The notion of one's current language in Perl 6 is defined entirely in terms of lexical scope. Sub declarations are lexically scoped. When we import symbols from a module, including extra multi candidates, those are lexically scoped. When we perform language tweaks - such as introducing new operators - those are lexically scoped. Verbs in our current language - that is, subroutine calls - are those with a lexical definition. (Operators are simply sub calls with more interesting parsing.) Since lexical scopes are closed at the end of compile time, the compiler has a complete view of the current language. That's why sub calls to non-existent subs, or references to undeclared variables, are detected and reported at compile time, as well as some basic compile-time type checking; future Perl 6 versions are likely to extend the set of compile-time checks that can be expected. The current language is the static, early-bound, part of Perl 6.
By contrast, a method call is a verb to be interpreted in the target object's language. This is the dynamic, late-bound, part of Perl 6. While the most immediate result of that is the typical polymorphism found in various forms in implementations of OO, thanks to meta-programming even the manner in which a verb is interpreted is up for grabs. For example, a monitor will acquire a lock while it interprets the verb and release it afterwards. Other objects might have been constructed based on things other than Perl 6 code, and so the interpretation of a verb doesn't mean invoking code written as a Perl 6 method. Or the code might be somewhere over the network. Who knows? Well, certainly not the caller, and that's the point, and the power, and the risk, of late binding.
The Perl 6 answer to "I want to extend the range of verbs I can use with this object in my current language" is very simple: use language features that relate to extending the current language! There's even a special syntax, $obj.&foo, that allows for a verb foo to be defined in the current language - by writing a sub - and then invoked much as if it's a method on the object. However, the small syntactic distinction makes it clear to the reader - and to the compiler - what is going on, and which language is getting to define that verb.
Through the use of augment it is possible to extend the language defined by some type of objects. However, it's rarely the best way to do things, given that it will have global effect, and also scatter the definition of the language of the object.
Much of what we do in programming is about building languages. By that I don't mean new syntax; most of our new languages - even in a language as open to mutation as Perl 6 - are just nouns and verbs defined using standard language features. However, in any non-trivial program, we can't keep every detail of every language in mind at once. When I go to the restaurant and order a schnitzel, I don't know how the order will be transported to the kitchen, what the kitchen looks like, whether the schnitzel is hammered out, breaded, and cooked on demand, or just served from a (hopefully not too stale) cache of prepared schnitzels. The kitchen and I have just enough shared meaning to make the right kind of thing happen, but I don't know how they'll precisely react to my request and they need not know what I'll do in the meantime. This kind of thinking is acknowledged by OO itself - at least when we fully embrace it - and at a larger scale by concepts such as bounded contexts, as found in Domain Driven Design.
In summary, Perl 6 tries to help us keep our languages straight: to know what is in our current language, and what we express with only limited understanding. That distinction is encoded by the sub/method distinction, which also turns out to be a sensible place to hang a static/dynamic distinction too.

Is polymorphism a waste to apply to classes whose exact type we know prior to run-time?

Run-time polymorphism can be used to let the runtime dynamically bind to the exact concrete class behind an abstract class/interface. (Take Animal/Dog or Vehicle/Car as examples.)
But when we know the exact concrete class at coding time (compile time), do we really need to forcefully apply polymorphism?
When I write OO code, I tend to use the most general type I can on the left-hand side of the assignment. This immediately means that my answer to your question is: no.
Here's the example:
Animal x = new Dog();
...
x.move();
The reason why I'm doing this is that I'm probably going to split beginning and end of the operation into two distinct operations. My methods are extremely short in practice.
Applied to the same example:
void moveDog() {
    move(new Dog());
}

void move(Animal animal) {
    animal.move();
}
As you can see, it would make no sense for the move function to know what kind of animal it is really moving.
Generally, it is the compiler's duty to figure out whether, in a given code base, any call could actually reach an overriding move() method. Some compilers can detect that no overriding method can be the target and then remove the dynamic dispatch at compile time. With some luck, my code above would compile the same whether the move function receives Animal or Dog.
Now, this is theory. In practice, there are two important things. First, widely used compilers have still not adopted such aggressive optimization techniques as turning dynamically dispatched calls into static ones. Second, the first point doesn't matter much with the CPU power we have today.
I have been writing highly optimized code for fifteen years already, and I have met situations in which I had to factor polymorphic calls out. Even so, I strongly recommend applying polymorphism as much as possible. When the time comes to add new classes or incorporate new features, polymorphic calls will likely be the tool that lets you seamlessly add new classes to the existing design. If you used overly concrete types during development, it can easily happen that you cannot add a new feature to the given code base.
But when we know the exact concrete class at coding time (compile time), do we really need to forcefully apply polymorphism?
Knowing the type at compile time is not necessarily a yes/no thing across all the code in an app and an object's entire lifetime, given techniques for type erasure. But, ignoring those classic uses of polymorphism, there are still other potential reasons such as...
(sorry - pretty obvious one this) to make it easier to change the implementation should another become available later
to make it easier to "mock" an implementation for testing (i.e. provide objects that pretend to provide some service or function, but have more scripted/controllable/observable behaviours to let tests put some dependent code through its paces); see the sketch after this list
hide aspects of the implementation that might otherwise have to be exposed (e.g. in C++, a class/struct definition must declare all the protected and private members)
this is sometimes for intellectual property protection; at other times, so more changes can be made to the implementation without having to change the "header" file, which would typically trigger recompilation of a lot of dependent code
to aid in modelling and application design, using the "interfaces" to cleanly specify the intended APIs, which can then provide a more stable reference for comparison as the implementations are fleshed out
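A small C# sketch of the mocking point above (IClock, FakeClock, and GreetingService are illustrative names, not from the question):

using System;

// Code written against an interface...
public interface IClock
{
    DateTime Now { get; }
}

public class SystemClock : IClock
{
    public DateTime Now => DateTime.Now;
}

// ...can be exercised in tests with a scripted, controllable fake.
public class FakeClock : IClock
{
    public DateTime Now { get; set; } = new DateTime(2000, 1, 1);
}

public class GreetingService
{
    private readonly IClock clock;
    public GreetingService(IClock clock) { this.clock = clock; }

    public string Greet() =>
        clock.Now.Hour < 12 ? "Good morning" : "Good afternoon";
}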

Is it good convention for a class to perform functions on itself?

I've always been taught that if you are doing something to an object, that should be an external thing, so one would Save(Class) rather than having the object save itself: Class.Save().
I've noticed that in the .Net libraries, it is common to have a class modify itself as with String.Format() or sort itself as with List.Sort().
My question is, in strict OOP is it appropriate to have a class which performs functions on itself when called to do so, or should such functions be external and called on an object of the class' type?
Great question. I have just recently reflected on a very similar issue and was eventually going to ask much the same thing here on SO.
In OOP textbooks, you sometimes see examples such as Dog.Bark() or Person.SayHello(). I have come to the conclusion that those are bad examples. When you call those methods, you make a dog bark or a person say hello. However, in the real world you couldn't do this; a dog decides for itself when it's going to bark, and a person decides for themselves when they will say hello to someone. Therefore, these methods would more appropriately be modelled as events (where supported by the programming language).
You would e.g. have a function Attack(Dog), PlayWith(Dog), or Greet(Person) which would trigger the appropriate events.
Attack(dog) // triggers the Dog.Bark event
Greet(johnDoe) // triggers the Person.SaysHello event
As soon as you have more than one parameter, it won't be so easy deciding how to best write the code. Let's say I want to store a new item, say an integer, into a collection. There's many ways to formulate this; for example:
StoreInto(1, collection) // the "classic" procedural approach
1.StoreInto(collection) // possible in .NET with extension methods
Store(1).Into(collection) // possible by using state-keeping temporary objects
According to the thinking laid out above, the last variant would be the preferred one, because it doesn't force an object (the 1) to do something to itself. However, if you follow that programming style, it will soon become clear that this fluent interface-like code is quite verbose, and while it's easy to read, it can be tiring to write or even hard to remember the exact syntax.
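For the curious, the state-keeping temporary object behind Store(1).Into(collection) might be sketched like this in C# (the names come from the example above; the implementation is a guess):

using System.Collections.Generic;

// Store(x) captures the item; Into(...) performs the operation.
public static class Fluent
{
    public static StoreStep<T> Store<T>(T item) => new StoreStep<T>(item);

    public class StoreStep<T>
    {
        private readonly T item;
        public StoreStep(T item) { this.item = item; }
        public void Into(ICollection<T> collection) => collection.Add(item);
    }
}

// Usage:
// var numbers = new List<int>();
// Fluent.Store(1).Into(numbers);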
P.S.: Concerning global functions: In the case of .NET (which you mentioned in your question), you don't have much choice, since the .NET languages do not provide for global functions. I think these would be technically possible with the CLI, but the languages disallow that feature. F# has global functions, but they can only be used from C# or VB.NET when they are packed into a module. I believe Java also doesn't have global functions.
I have come across scenarios where this lack is a pity (e.g. with fluent interface implementations). But generally, we're probably better off without global functions, as some developers might always fall back into old habits, and leave a procedural codebase for an OOP developer to maintain. Yikes.
By the way, in VB.NET you can mimic global functions by using modules. Example:
Globals.vb:
Module Globals
    Public Sub Save(ByVal obj As SomeClass)
        ...
    End Sub
End Module
Demo.vb:
Imports Globals
...
Dim obj As SomeClass = ...
Save(obj)
I guess the answer is "It Depends"... for Persistence of an object I would side with having that behavior defined within a separate repository object. So with your Save() example I might have this:
repository.Save(class)
However with an Airplane object you may want the class to know how to fly with a method like so:
airplane.Fly()
This is one of the examples I've seen from Fowler about an anemic domain model. I don't think in this case you would want to have a separate service like this:
new airplaneService().Fly(airplane)
With static methods and extension methods it makes a ton of sense, as in your List.Sort() example. So it depends on your usage patterns. You wouldn't want to have to new up an instance of a ListSorter class just to be able to sort a list like this:
new listSorter().Sort(list)
In strict OOP (Smalltalk or Ruby), all methods belong to an instance object or a class object. In "real" OOP (like C++ or C#), you will have static methods that essentially stand completely on their own.
Going back to strict OOP, I'm more familiar with Ruby, and Ruby has several "pairs" of methods where one returns a modified copy and the other modifies the object in place; a method name ending with ! indicates that the message modifies its receiver. For instance:
>> s = 'hello'
=> "hello"
>> s.reverse
=> "olleh"
>> s
=> "hello"
>> s.reverse!
=> "olleh"
>> s
=> "olleh"
The key is to find some middle ground between pure OOP and pure procedural that works for what you need to do. A Class should do only one thing (and do it well). Most of the time, that won't include saving itself to disk, but that doesn't mean Class shouldn't know how to serialize itself to a stream, for instance.
I'm not sure what distinction you seem to be drawing when you say "doing something to an object". In many if not most cases, the class itself is the best place to define its operations, as under "strict OOP" it is the only code that has access to internal state on which those operations depend (information hiding, encapsulation, ...).
That said, if you have an operation which applies to several otherwise unrelated types, then it might make sense for each type to expose an interface which lets the operation do most of the work in a more or less standard way. To tie it in to your example, several classes might implement an interface ISaveable which exposes a Save method on each. Individual Save methods take advantage of their access to internal class state, but given a collection of ISaveable instances, some external code could define an operation for saving them to a custom store of some kind without having to know the messy details.
It depends on what information is needed to do the work. If the work is unrelated to the class (mostly equivalently, can be made to work on virtually any class with a common interface), for example, std::sort, then make it a free function. If it must know the internals, make it a member function.
Edit: Another important consideration is performance. In-place sorting, for example, can be far faster than returning a new, sorted copy. This is part of why quicksort usually beats merge sort in practice despite merge sort's stronger worst-case guarantee: quicksort can be performed in place, whereas practical in-place merge sorts are rare and complicated. Just because it's technically possible to perform an operation within the class's public interface doesn't mean that you actually should.
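In .NET terms, the same trade-off shows up directly in the standard library: List&lt;T&gt;.Sort() works in place, while LINQ's OrderBy builds a new sequence. A quick illustration:

using System;
using System.Collections.Generic;
using System.Linq;

class SortDemo
{
    static void Main()
    {
        var numbers = new List<int> { 3, 1, 2 };

        var copy = numbers.OrderBy(n => n).ToList(); // new list; original untouched
        numbers.Sort();                              // in place; no second list built

        Console.WriteLine(string.Join(",", numbers)); // 1,2,3
        Console.WriteLine(string.Join(",", copy));    // 1,2,3
    }
}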

Why stick to get-set and not car.speed() and car.speed(55) respectively?

Apart from unambiguous clarity, why should we stick to:
car.getSpeed() and car.setSpeed(55)
when this could be used as well :
car.speed() and car.speed(55)
I know that get() and set() are useful to keep any changes to the data member manageable by keeping everything in one place.
Also, obviously, I understand that car.speed() and car.speed(55) would be two overloads of the same function name, which some consider wrong; but then in PHP, and also in Zend Framework, the same action is used for GET, POST, and postbacks.
In VB and C# there are "properties", and are used by many, much to the disgust of purists I've heard, and there are things in Ruby like 5.times and .each, .to_i etc.
And you have operator overloading, multiple inheritance, virtual functions in C++, certain combinations of which could drive anyone nuts.
I mean to say that there are so many paradigms and ways in which things are done that it seems odd that nobody has tried the particular combination that I mentioned.
As for me, my reason is that it is short and cleaner to read the code.
Am I very wrong, slightly wrong, is this just odd and so not used, or what else?
If I still decide to stay correct, I could use car.speed() and car.setSpeed(55).
Is that wrong in any way (just omitting the "get" )?
Thanks for any explanations.
If I called car.speed(), I might think I am telling the car to speed, in other words to increase speed and break the speed limit. It is not clearly a getter.
Some languages allow you to declare const objects, and then restrict you to only calling functions that do not modify the data of the object. So it is necessary to have separate functions for modification and read operations. While you could use overloads on parameters to have two functions, I think it would be confusing.
Also, when you say it is clearer to read, I can argue that I have to do a look ahead to understand how to read it:
car.speed()
I read "car speed..." and then I see there is no number so I revise and think "get car speed".
car.getSpeed()
I read "for this car, get speed"
car.setSpeed(55)
I read "for this car, set speed to 55"
It seems you have basically cited other features of the language as being confusing, and then used that as a defense for making getters/setters more confusing? It almost sounds like you are admitting that what you have proposed is more confusing. These features are sometimes confusing because of how general-purpose they are. Sometimes abstractions can be more confusing, but in the end they often serve the purpose of being more reusable. I think if you wanted to argue in favor of speed() and speed(55), you'd want to show how that could enable new possibilities for the programmer.
On the other hand, C# does have something like what you describe, since properties behave differently as a getter or setter depending on the context in what they are used:
Console.WriteLine(car.Speed); // getter
car.Speed = 55;               // setter
But while it is a single property, there are two separate sections of code implementing the getting and the setting, and it is clear that this is a getter/setter and not a function named speed, because properties omit the (). So car.speed() is clearly a function, and car.speed is clearly a property getter.
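For reference, a minimal sketch of such a property with its two sections:

public class Car
{
    private int speed;

    // One property, two clearly separated bodies for get and set.
    public int Speed
    {
        get { return speed; }
        set { speed = value; }
    }
}

// Usage:
// car.Speed = 55;               // setter
// Console.WriteLine(car.Speed); // getter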
IMHO the C# style of having properties as syntactic sugar for get and set methods is the most expressive.
I prefer active objects which encapsulate operations rather than getters and setters, so you get a semantically richer objects.
For example, although an ADT rather than a business object, even the vector in C++ has paired functions:
size_type capacity() const // how many elements space is reserved for in the vector
void reserve(size_type n) // ensure space is reserved for at least n elements
and
void push_back ( const T& ) // inserts an element at the end
size_type size () const // the number of elements in the vector
If you drive a car, you can set the accelerator, clutch, brakes and gear selection, but you don't set the speed. You can read the speed off the speedometer. It's relatively rare to want both a setter and a getter on an object with behaviour.
FYI, Objective-C uses car.speed() and car.setSpeed(55) (except in a different syntax: [car speed] and [car setSpeed:55]).
It's all about convention.
There is no right answer, it's a matter of style, and ultimately it does not matter. Spend your brain cycles elsewhere.
FWIW I prefer the class.noun() for the getter, and class.verb() for the setter. Sometimes the verb is just setNoun(), but other times not. It depends on the noun. For example:
my_vector.size()
returns the size, and
my_vector.resize(some_size)
changes the size.
The Groovy approach to properties is quite excellent IMHO: http://groovy.codehaus.org/Groovy+Beans
The final benchmarks of your code should be this:
Does it work correctly?
Is it easy to fix if it breaks?
Is it easy to add new features in the future?
Is it easy for someone else to come in and fix/enhance it?
If those 4 points are covered, I can't imagine why anybody would have a problem with it. Most of the "Best Practices" are generally geared towards achieving those 4 points.
Use whichever style works for you, just be consistent about it, and you should be fine.
This is just a matter of convention. In Smalltalk, it's done the way you suggest and I don't recall ever hearing anybody complain about it. Getting the car's speed is car speed, and setting the car's speed to 55 is car speed:55.
If I were to venture a guess, I would say the reason this style didn't catch on is because of the two lines of descent through which object-oriented programming has come to us: C++ and Objective-C. In C++ (even more so early in its history), methods are very closely related to C functions, and C functions are conventionally named along the lines of setWhatever() and do not have overloading for different numbers of arguments, so that general style of naming was kept. Objective-C was largely preserved by NeXT (which later became Apple), and NeXT tended to favor verbosity in their APIs, and especially to distinguish between different kinds of methods: if you're doing anything but just accessing a property, NeXT wanted a verb to make it clear. So that became the convention in Cocoa, which is the de facto standard library for Objective-C these days.
It's convention. Java has a convention of getters and setters, C# has properties, Python has public fields, and JavaScript frameworks tend to use field() to get and field(value) to set.
Apart from unambiguous clarity, why should we stick to:
car.getSpeed() and car.setSpeed(55)
when this could be used as well : car.speed() and car.speed(55)
Because in all languages I've encountered, car.speed() and car.speed(55) are the same in terms of syntax. Just looking at them like that, both could return a value, which shouldn't be true of the latter if it's meant to be a setter.
What if you intend to call the setter but forget to put in the argument? The code is valid, so the compiler doesn't complain, and it doesn't throw an immediate runtime error; it's a silent bug.
.() means it's a verb.
no () means it's a noun.
car.Speed = 50;
x = car.Speed
car.Speed.set(30)
car.setProperty("Speed",30)
but
car.Speed()
implies command to exceed speed limit.

Why would I want to use Interfaces? [closed]

I understand that they force you to implement methods and such, but what I can't understand is why you would want to use them. Can anybody give me a good example or explanation of why I would want to implement this?
One specific example: interfaces are a good way of specifying a contract that other people's code must meet.
If I'm writing a library of code, I may write code that is valid for objects that have a certain set of behaviours. The best solution is to specify those behaviours in an interface (no implementation, just a description) and then use references to objects implementing that interface in my library code.
Then any random person can come along, create a class that implements that interface, instantiate an object of that class and pass it to my library code and expect it to work. Note: it is of course possible to strictly implement an interface while ignoring the intention of the interface, so merely implementing an interface is no guarantee that things will work. Stupid always finds a way! :-)
Another specific example: two teams working on different components that must co-operate. If the two teams sit down on day 1 and agree on a set of interfaces, then they can go their separate ways and implement their components around those interfaces. Team A can build test harnesses that simulate the component from Team B for testing, and vice versa. Parallel development, and fewer bugs.
The key point is that interfaces provide a layer of abstraction so that you can write code that is ignorant of unnecessary details.
The canonical example used in most textbooks is that of sorting routines. You can sort any class of objects so long as you have a way of comparing any two of the objects. You can make any class sortable therefore by implementing the IComparable interface, which forces you to implement a method for comparing two instances. All of the sort routines are written to handle references to IComparable objects, so as soon as you implement IComparable you can use any of those sort routines on collections of objects of your class.
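Concretely, in C# the contract is IComparable&lt;T&gt; (or the older non-generic IComparable); a minimal sketch with an illustrative Person class:

using System;
using System.Collections.Generic;

public class Person : IComparable<Person>
{
    public string Name { get; set; }
    public int Age { get; set; }

    // The one method the contract demands: order people by age.
    public int CompareTo(Person other) => Age.CompareTo(other.Age);
}

// Usage:
// var people = new List<Person> { /* ... */ };
// people.Sort(); // List<T>.Sort calls CompareTo; no sort code written here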
The easiest way of understanding interfaces is that they allow different objects to expose COMMON functionality. This allows the programmer to write much simpler, shorter code that programs to an interface; then, as long as the objects implement that interface, it will work.
Example 1:
There are many different database providers: MySQL, MSSQL, Oracle, etc. However, all database objects can DO the same things, so you will find many interfaces for database objects. If an object implements IDbConnection then it exposes the methods Open() and Close(). So if I want my program to be database-provider agnostic, I program to the interface and not to the specific providers.
IDbConnection connection = GetDatabaseConnectionFromConfig();
connection.Open();
// do stuff
connection.Close();
See, by programming to an interface (IDbConnection) I can now SWAP out any data provider in my config, but my code stays exactly the same. This flexibility can be extremely useful and easy to maintain. The downside is that I can only perform "generic" database operations and may not fully utilize the strengths each particular provider offers, so as with everything in programming you have a trade-off, and you must determine which scenario benefits you the most.
Example 2:
If you notice, almost all collections implement this interface called IEnumerable. IEnumerable returns an IEnumerator which has MoveNext(), Current, and Reset(). This allows C# to easily move through your collection. The reason it can do this is that exposing the IEnumerable interface means the object is KNOWN to expose the methods needed to go through it. This does two things: 1) foreach loops will now know how to enumerate the collection, and 2) you can now apply powerful LINQ expressions to your collection. Again, the reason interfaces are so useful here is that all collections have something in COMMON: they can be moved through. Each collection may be moved through in a different way (linked list vs array), but that is the beauty of interfaces: the implementation is hidden and irrelevant to the consumer of the interface. MoveNext() gives you the next item in the collection; it doesn't matter HOW it does it. Pretty nice, huh?
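As a tiny illustration, here is a hypothetical two-element collection; implementing IEnumerable&lt;T&gt; is all it takes for foreach and LINQ to work on it:

using System.Collections;
using System.Collections.Generic;

public class Pair<T> : IEnumerable<T>
{
    private readonly T first, second;

    public Pair(T first, T second)
    {
        this.first = first;
        this.second = second;
    }

    public IEnumerator<T> GetEnumerator()
    {
        yield return first;   // MoveNext()/Current are generated by
        yield return second;  // the compiler from this iterator block
    }

    IEnumerator IEnumerable.GetEnumerator() => GetEnumerator();
}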
Example 3:
When you are designing your own interfaces you just have to ask yourself one question. What do these things have in common? Once you find all the things that the objects share, you abstract those properties/methods into an interface so that each object can inherit from it. Then you can program against several objects using one interface.
And of course I have to give my favorite C++ polymorphic example, the animals example. All animals share certain characteristics. Let's say they can Move and Speak, and they all have a Name. Since I just identified what all my animals have in common, I can abstract those qualities into the IAnimal interface. Then I create a Bear object, an Owl object, and a Snake object, all implementing this interface. The reason you can store different objects together that implement the same interface is that interfaces represent an IS-A relationship. A bear IS-A animal, an owl IS-A animal, so it makes sense that I can collect them all as animals.
var animals = new IAnimal[] { new Bear(), new Owl(), new Snake() }; // different objects collected together because they implement the same interface
foreach (IAnimal animal in animals)
{
    Console.WriteLine(animal.Name);
    animal.Speak(); // a bear growls, an owl hoots, and a snake hisses
    animal.Move();  // the bear runs, the owl flies, the snake slithers
}
You can see that even though these animals perform each action in a different way, I can program against them all in one unified model and this is just one of the many benefits of Interfaces.
So again, the most important question with interfaces is: what do these objects have in common, such that you can program against DIFFERENT objects in the SAME way? It saves time, creates more flexible applications, hides complexity and implementation, and models real-world objects and situations, among many other benefits.
Hope this helps.
One typical example is a plugin architecture. Developer A writes the main app, and wants to make certain that all plugins written by developer B, C and D conform to what his app expects of them.
Interfaces define contracts, and that's the key word.
You use an interface when you need to define a contract in your program but you don't really care about the rest of the properties of the class that fulfills that contract as long as it does.
So, let's see an example. Suppose you have a method which provides the functionality to sort a list. First thing... what's a list? Do you really care what elements it holds in order to sort it? Your answer should be no... In .NET (for example) you have an interface called IList which defines the operations that a list MUST support, so you don't care about the actual details underneath the surface.
Back to the example: you don't really know the class of the objects in the list, nor do you care. If you can just compare the objects, you can sort them. So you declare a contract:
interface IComparable
{
    // Return -1 if this is less than CompareWith
    // Return 0 if the objects are equal
    // Return 1 if this is greater than CompareWith
    int Compare(object CompareWith);
}
That contract specifies that a method which accepts an object and returns an int must be implemented in order for something to be comparable. Now you have defined a contract, and from now on you don't care about the object itself but about the contract, so you can just do:
IComparable comp1 = list.GetItem(i) as IComparable;
if (comp1.Compare(list.GetItem(i + 1)) < 0)
    swapItem(list, i, i + 1);
PS: I know the examples are a bit naive but they are examples ...
When you need different classes to share the same methods, you use interfaces.
Interfaces are absolutely necessary in an object-oriented system that expects to make good use of polymorphism.
A classic example might be IVehicle, which has a Move() method. You could have classes Car, Bike and Tank which implement IVehicle. They can all Move(), and you could write code that doesn't care what kind of vehicle it's dealing with, just that it can Move().
void MoveAVehicle(IVehicle vehicle)
{
vehicle.Move();
}
The pedals on a car implement an interface. I'm from the US, where we drive on the right side of the road. Our steering wheels are on the left side of the car. The pedals for a manual transmission from left to right are clutch -> brake -> accelerator. When I went to Ireland, the driving was reversed: cars' steering wheels are on the right and they drive on the left side of the road... but the pedals, ah the pedals... they implemented the same interface... all three pedals were in the same order... so even if the class was different and the network that class operated on was different, I was still comfortable with the pedal interface. My brain was able to call my muscles on this car just like every other car.
Think of the numerous non-programming interfaces we can't live without. Then answer your own question.
Imagine the following basic interface which defines a basic CRUD mechanism:
interface Storable {
function create($data);
function read($id);
function update($data, $id);
function delete($id);
}
From this interface, you can tell that any object that implements it must have functionality to create, read, update and delete data. This could be a database connection, a CSV file reader, an XML file reader, or any other kind of mechanism that might want to use CRUD operations.
Thus, you could now have something like the following:
class Logger {
    private $storage;

    function __construct(Storable $storage) {
        $this->storage = $storage;
    }

    function writeLogEntry() {
        $this->storage->create("I am a log entry");
    }
}
This logger doesn't care if you pass in a database connection, or something that manipulates files on disk. All it needs to know is that it can call create() on it, and it'll work as expected.
The next question to arise from this is: if databases and CSV files, etc., can all store data, shouldn't they all inherit from a generic Storable object and thus do away with the need for interfaces? The answer is no: not every database connection might implement CRUD operations, and the same applies to every file reader.
Interfaces define what the object is capable of doing and how you need to use it... not what it is!
Interfaces are a form of polymorphism. An example:
Suppose you want to write some logging code. The logging is going to go somewhere (maybe to a file, or a serial port on the device the main code runs on, or to a socket, or thrown away like /dev/null). You don't know where: the user of your logging code needs to be free to determine that. In fact, your logging code doesn't care. It just wants something it can write bytes to.
So, you invent an interface called "something you can write bytes to". The logging code is given an instance of this interface (perhaps at runtime, perhaps it's configured at compile time. It's still polymorphism, just different kinds). You write one or more classes implementing the interface, and you can easily change where logging goes just by changing which one the logging code will use. Someone else can change where logging goes by writing their own implementations of the interface, without changing your code. That's basically what polymorphism amounts to - knowing just enough about an object to use it in a particular way, while allowing it to vary in all the respects you don't need to know about. An interface describes things you need to know.
C's file descriptors are basically an interface "something I can read and/or write bytes from and/or to", and almost every typed language has such interfaces lurking in its standard libraries: streams or whatever. Untyped languages usually have informal types (perhaps called contracts) that represent streams. So in practice you almost never have to actually invent this particular interface yourself: you use what the language gives you.
Logging and streams are just one example - interfaces happen whenever you can describe in abstract terms what an object is supposed to do, but don't want to tie it down to a particular implementation/class/whatever.
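A minimal sketch of the "something you can write bytes to" idea; ILogSink and friends are invented names, and in real .NET code you would reach for Stream or TextWriter instead:

using System;
using System.IO;

// The interface describes only what the logger needs to know.
public interface ILogSink
{
    void Write(string message);
}

public class FileSink : ILogSink
{
    private readonly string path;
    public FileSink(string path) { this.path = path; }

    public void Write(string message) =>
        File.AppendAllText(path, message + Environment.NewLine);
}

public class NullSink : ILogSink // the /dev/null of sinks
{
    public void Write(string message) { }
}

public class Logger
{
    private readonly ILogSink sink;
    public Logger(ILogSink sink) { this.sink = sink; }

    // The logger neither knows nor cares where the bytes go.
    public void Log(string message) => sink.Write(message);
}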
There are a number of reasons to do so. When you use an interface, you're ready for the future when you need to refactor or rewrite the code. You can also provide a sort of standardized API for simple operations.
For example, if you want to write a sort algorithm like quicksort, all you need to sort any list of objects is the ability to successfully compare two of them. If you create an interface, say ISortable, then anyone who creates objects can implement the ISortable interface and use your sort code.
If you're writing code that uses database storage, and you write to a storage interface, you can replace that code down the line.
Interfaces encourage looser coupling of your code so that you can have greater flexibility.
In an article in my blog I briefly describe three purposes interfaces have.
Interfaces may have different purposes:
Provide different implementations for the same goal. The typical example is a list, which may have different implementations for different performance use cases (LinkedList, ArrayList, etc.).
Allow criteria modification. For example, a sort function may accept a Comparable interface in order to provide any kind of sort criteria, based on the same algorithm.
Hide implementation details. This also makes it easier for a user to read the comments, since in the body of the interface there are only methods, fields and comments, no long chunks of code to skip.
Here's the article's full text: http://weblogs.manas.com.ar/ary/2007/11/
The best Java code I have ever seen defined almost all object references as instances of interfaces instead of instances of classes. It is a strong sign of quality code designed for flexibility and change.
As you noted, interfaces are good for when you want to force someone to provide things in a certain format.
Interfaces are good when data not being in a certain format can mean making dangerous assumptions in your code.
For example, at the moment I'm writing an application that will transform data from one format into another. I want to force those fields to be present so I know they will exist and will have a greater chance of being properly implemented. I don't care if another version comes out and it doesn't compile for them, because it's more likely that data is required anyway.
Interfaces are rarely used for this, since usually you can make assumptions or don't really require the data to do what you need to do.
An interface defines merely the interface. Later, you can define methods (on other classes) which accept interfaces as parameters (or, more accurately, objects which implement that interface). This way your method can operate on a large variety of objects whose only commonality is that they implement that interface.
First, they give you an additional layer of abstraction. You can say "for this function, this parameter must be an object that has these methods with these parameters." And you probably also want to pin down the meaning of those methods, in somewhat abstract terms, so you can still reason about the code. In duck-typed languages you get that for free: no need for explicit "interface" syntax, yet you probably still create a set of conceptual interfaces, something like contracts (as in Design by Contract).
Furthermore, interfaces are sometimes used for less "pure" purposes. In Java, they can be used to emulate multiple inheritance. In C++, you can use them to reduce compile times.
In general, they reduce coupling in your code. That's a good thing.
Your code may also be easier to test this way.
Let's say you want to keep track of a collection of stuff. Said collections must support a bunch of things, like adding and removing items, and checking if an item is in the collection.
You could then specify an interface ICollection with the methods add(), remove() and contains().
Code that doesn't need to know what kind of collection it is dealing with (List, Array, Hash-table, Red-black tree, etc.) can accept objects that implement the interface and work with them without knowing their actual type.
In .Net, I create base classes and inherit from them when the classes are somehow related. For example, base class Person could be inherited by Employee and Customer. Person might have common properties like address fields, name, telephone, and so forth. Employee might have its own department property. Customer has other exclusive properties.
Since a class can only inherit from one other class in .Net, I use interfaces for additional shared functionality. Sometimes interfaces are shared by classes that are otherwise unrelated. Using an interface creates a contract that developers will know is shared by all of the other classes implementing it. It also forces those classes to implement all of its members.
In C# interfaces are also extremely useful for allowing polymorphism across classes that do not share the same base class. Meaning, since we cannot have multiple inheritance, you can use interfaces to allow different types to be used. Explicit implementation also lets you keep members off a class's public surface while still making them available through the interface, without resorting to reflection, so it can be a good way to implement functionality while keeping your object model clean.
For example:
public interface IExample
{
    void Foo();
}

public class Example : IExample
{
    // explicit implementation syntax
    void IExample.Foo() { ... }
}

/* Usage */
Example e = new Example();
e.Foo();              // compile error: Foo is only visible through the interface
((IExample)e).Foo();  // success
I think you need to get a good understanding of design patterns to see their power.
Check out
Head First Design Patterns