C++0x OOP paradigm shifts? - oop

Are there any and if yes, which ones?

What do you mean by "paradigm shift"?
C++0x introduces many new features that will of course change the way you write programs.
There are little things that will probably have a big impact on the syntax used, but which won't change the semantics that much. Examples are lambda functions and range-based for-loop: they'll provide a better syntax for what we all are already doing.
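For instance, a rough sketch of the kind of code this enables (standard-library only; the example itself is mine, not from the original answer):
#include <algorithm>
#include <iostream>
#include <vector>

int main() {
    std::vector<int> values = {1, 2, 3, 4};
    int sum = 0;
    std::for_each(values.begin(), values.end(), [&sum](int v) { sum += v; });  // a lambda instead of a hand-written functor
    for (int v : values)                                                        // range-based for instead of iterator boilerplate
        std::cout << v << '\n';
    std::cout << "sum = " << sum << '\n';
    return 0;
}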
Then there are big things that will change the way things work. In particular:
Rvalue references could make you think in a different way about how objects work and how to use them: it will probably become easier to pass (and return) objects by value (see the sketch just below).
Explicit conversion operators will let us define conversion operators safely, whereas doing this in C++03 was risky.
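A minimal sketch of both points, using a made-up Buffer type:
#include <cstddef>
#include <utility>
#include <vector>

struct Buffer {
    std::vector<char> data;
    explicit Buffer(std::size_t n) : data(n) {}
    Buffer(Buffer&& other) : data(std::move(other.data)) {}    // move constructor: steal the storage rather than copy it
    explicit operator bool() const { return !data.empty(); }   // explicit conversion: no accidental use in arithmetic
};

Buffer makeBuffer() {
    Buffer b(1024 * 1024);
    return b;    // returned by value, but the vector's storage is moved, not copied
}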

C++0x does not introduce any new paradigms and doesn't change any paradigms.
Edit: The implementation of those paradigms, however, is subject to some pretty big changes with variadic templates and rvalue references, just to begin with.

As a matter of fact, I think that yes, there is a paradigm shift. Caveat: I have never written object-oriented code in C++.
The change that may allow a paradigm shift is the standardization of the smart pointer std::shared_ptr. The standard library now finally contains a well-implemented, efficient, and probably bug-free shared pointer.
C++ experts know how hard it is to get them right, and that most library implementations of reference-counting pointers probably contain subtle bugs. It’s therefore important to finally have a reliable implementation even if (for some brain-dead reason) the company forbids the use of Boost.
This might have drastic consequences for the number of memory leaks: if object-oriented C++ applications stopped leaking memory, that would be a paradigm shift.
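To illustrate, a minimal usage sketch (the Widget type here is made up):
#include <memory>

struct Widget { int value; };

int main() {
    std::shared_ptr<Widget> a(new Widget());   // reference count is 1
    std::shared_ptr<Widget> b = a;             // count is 2; ownership is shared
    a.reset();                                 // count drops to 1; nothing is freed yet
    return 0;
}   // b leaves scope, the count hits 0, and the Widget is deleted automatically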
On the other hand, companies that use their own smart pointers in OOP code will probably not switch to C++0x in the next ten years anyway.
(Just to emphasize this once more, since it’s been repeatedly misunderstood: I am not referring to the technology of smart pointers as a paradigm shift. I am referring to the complete disappearance of memory leaks in object-oriented architectures.)


How is it possible to have a purely object-oriented language?

Java is considered an OOP language, despite it not quite being purely OOP. Java contains 8 primitives, and in an interview, James Gosling explains why:
Bill Venners: Why are there primitive types in Java? Why wasn't everything just an object?

James Gosling: Totally an efficiency thing. There are all kinds of people who have built systems where ints and that are all objects. There are a variety of ways to do that, and all of them have some pretty serious problems. Some of them are just slow, because they allocate memory for everything. Some of them try to do objects where sometimes they are objects, sometimes they are not (which is what the standard LISP system did), and then things get really weird. It kind of works, but it's strange.
So it seems that both memory and speed are issues that Java's primitives address. However, this got me wondering: how can a language be truly, purely object-oriented?
If only a byte primitive existed, you could build from there, creating integers, chars, and eventually floats and doubles. But without any base structure at all, how could you build anything? Isn't at least some base primitive necessary? In other words, isn't a base data structure needed to expand from?
If you're asking if there are languages that have no way to interact with primitive types, then you might want to look at something like Scala. From that page:
Scala is a pure object-oriented language in the sense that every value is an object.
However, as you point out (for Kotlin):
the compiler maps them to JVM primitives when at all possible to save memory
If your definition of what object-oriented languages can be requires that everything is always represented as an object, then a purely object-oriented language is impossible. You can't build a language that runs on a real computer that only has objects. This is because the computer must have a way to represent the data natively. This is essentially what primitives in object-oriented languages are: the native forms of data that the underlying computer (or VM) can represent. No matter what you do, you will always need some non-object representation of data in order for the computer to do operations with it. Even if you built a JavaScript interpreter that really represented primitives as objects, in order to add two integers the interpreter would have to load the integers into CPU registers and use some form of an add instruction.
But that explanation sort of misses the point of object-oriented programming. A programming language is not the same as a program. Languages are just a tool for us to make computers do what we want - they don't actually exist at runtime. You would probably say that a program written in Kotlin or Scala is more object-oriented than a program written in C, despite both languages compiling to the same assembly instructions at runtime.
So, if you relax your definition of pure object-oriented programming to no longer be concerned with what the runtime representation of data is, then you'll find that purely object-oriented languages are possible. When programming Scala, you never interact with anything that's not an object. Even if your Int becomes a 'primitive' at runtime, it doesn't really matter, because you, as the programmer, never really have to think about that (at least, in an ideal world where performance and memory never matter). The language definition of Scala doesn't include the concept of primitives at all - they are part of the implementation of the language, not the language itself.
As far as your example of Java goes, Java probably isn't a purely object-oriented language by most definitions. It is, however, mostly object-oriented. Java is often mentioned as the de facto object-oriented language because it was much more object-oriented than what came before it.
Even further, the term object-oriented doesn't really have a definitive meaning. To some people it might mean that everything has to be an object, and to others it might just mean that there need to be objects, some definitions require the concept of classes, some don't, etc.

What are the advantages or features of object oriented programming?

What made everyone go from sequential languages to object-oriented languages?
According to Wikipedia, the features of object-oriented programming are data abstraction, encapsulation, messaging, modularity, polymorphism, and inheritance. For me, data abstraction, encapsulation, messaging, and modularity also exist in sequential languages. Only polymorphism and inheritance are specific to object-oriented programming. Is this correct?
Many non-OOP languages can certainly build those features. Just looking from a C vs. C++ area, you can provide encapsulation in C by using opaque pointers, with a suite of functions that take/return these opaque objects, and an internal set of functions that are all file-static. You can even do polymorphism and inheritance with function pointers and encapsulated objects.
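A rough sketch of that opaque-pointer style, with a made-up counter module (the struct layout is visible only to the implementation file):
/* counter.h -- callers see only an opaque pointer and a set of functions */
typedef struct Counter Counter;
Counter* counter_create(void);
void     counter_increment(Counter* c);
int      counter_value(const Counter* c);
void     counter_destroy(Counter* c);

/* counter.c -- the only file that knows the struct layout */
#include <stdlib.h>
struct Counter { int value; };
Counter* counter_create(void)            { return (Counter*)calloc(1, sizeof(Counter)); }
void     counter_increment(Counter* c)   { c->value++; }
int      counter_value(const Counter* c) { return c->value; }
void     counter_destroy(Counter* c)     { free(c); }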
Then again, we could also all still be programming in assembly or machine language. The reason to bring any feature into a language is to make it easier to use that feature.
Again, looking at C vs. C++, dealing with opaque pointers and the like is annoying, repetitive, and semi-difficult. With C++, you can achieve the same effect with much less code. It's obvious to everyone what is going on. It's a lot more difficult to break (though not impossible). Plus, you make it easy to break encapsulation if you need, since you can define language constructs like friend that provide exceptions where necessary.
And then there are those things that are really hard to implement without direct language support. Operator overloading is impossible of course, but function overloading is really, really hard to do without language support.
Most important of all, if it's in the language, then everyone does it the same way. There are multiple ways of implementing inheritance and polymorphism in C. All of them are incompatible with one another. And while C++ users could do any of those methods, they opt to use the actual language feature 99.9% of the time. This means it's much easier to read someone else's code and know what's going on. You don't have to guess what is opaque and what isn't. You don't have to guess at what is derived from what. You know it, since everyone does it the same way.
In any case, most of the OOP-lite languages (C++, Java, C#) can be used more or less like procedural ones if you want. You just ignore the objects. So in many ways, they get the best of both worlds.
The advantage can be summarized this way:
OOP can represent the real world more directly and precisely than previous paradigms, so the program becomes simpler and easier to understand.
And about this:
For me, data abstraction, encapsulation, messaging, and modularity also exist in sequential languages. Only polymorphism and inheritance are specific to object-oriented programming.
Most human-readable languages can provide data abstraction, encapsulation, messaging, and modularity (otherwise they would be machine languages), but OOP supports these concepts better. For example, to set the text of a widget in C, you would do something like this:
HANDLE myEditBox = CreateEditBox(hParent, ...);
SetText(myEditBox, "Hello!");
Notice you have a handle to an object, not an actual object. Now in C++ (OOP) you can do this:
EditBox myEditBox(...);
myEditBox.SetText("Hello!");
The difference is subtle, but important. The C style SetText(handle, "Hello!") does not make any distinction between the handle and the other parameters. You don't even know that there's a message to the object. The C++ style object.SetText("Hello!") is like saying explicitly: hey, object, set your text to "Hello!". Here, the notion of message and receiver (the object) is explicit.
C++ can also destroy objects automatically if they are not declared as pointers, which eliminates calls such as DestroyObject(myEditBox).
Also, without OOP you have very poor encapsulation, because most things are implemented with structures that contain only public members. So you can't hide data from users, which means someone might try to change things in an unexpected way, and that may cause bugs. This is quite common in large programs.
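For contrast, a small sketch of those two points with a class (this EditBox is made up, not a real widget API): the data is private, and the destructor runs by itself when the object goes out of scope.
#include <string>

class EditBox {
public:
    explicit EditBox(const std::string& initial) : text_(initial) {}
    void SetText(const std::string& t) { text_ = t; }
    ~EditBox() { /* release the underlying widget here; the caller never writes DestroyObject(...) */ }
private:
    std::string text_;   // hidden: callers cannot reach in and modify it directly
};

void demo() {
    EditBox box("");
    box.SetText("Hello!");
}   // box is destroyed automatically here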

Operator overloading - is it really reasonable to forbid?

Java forbids operator overloading, but coming from C++ I do not see any reason for that. In languages where operator symbols are symbols like any other, the same rules apply to "+" as to "plus" and there is no problem. So what is the point?
Edit: To be more concrete, show me which disadvantage overloaded "+" may have over overloaded "equals".
Just as with many other things in Java, this is a restriction because it may be confusing if used improperly. (Similarly, pointer arithmetic is forbidden because it is error prone.) I'm a big fan of Java, but I'm generally of the opinion that things shouldn't be forbidden just because they could be misused.
For instance, BigInteger would benefit greatly from overloading the + operator.
OK, I'll try my hand at this under the assumption that Gabriel Ščerbák is doing this for better reasons than railing against a language.
The issue for me is one of manageable complexity: How much of the code in front of me do I have to decode vs. simply read?
In most conventional languages, upon seeing the expression a + b I know what is going to happen. The variables a and b will be added together. I'm pretty confident that behind the scenes the code will be very concise, very fast native machine code that adds the two numbers, whether the numbers are short integers or double-precision or some mixture of the two. (In some languages I may have to also assume that these could be strings being concatenated, but that's a rant for an entirely different question -- but one that flavours this rant if you peer at it from the right angle.)
When I make my own user-defined type -- say the omnipresent Complex type (and why Complex isn't a standard data type in modern languages is way the Hell beyond me, but that, again, is a rant for a different question) -- if I overload an operator (or, rather, if the operator is overloaded for me -- I'm using a library, say), short of peering very closely at the code I will not know that I'm now calling (possibly-virtual) methods on objects instead of having very tight, concise code generated for me behind the scenes. I will not know of the hidden conversions, the hidden temporary variables, the ... well, everything that goes along with writing many operators. To find out what's really going on in my code I have to pay very close attention to every line and keep track of declarations that may be three screens away from my current location in the code. To say that this impedes my understanding of the code flowing before my eyes is an understatement. Important details are being lost because the syntactic sugar is making things taste too tasty.
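To make that concrete, a minimal sketch with a hypothetical Complex type: the a + b below looks like built-in arithmetic, but it is really a function call that builds and returns a temporary object.
struct Complex {
    double re, im;
};

Complex operator+(const Complex& a, const Complex& b) {
    Complex result = { a.re + b.re, a.im + b.im };   // a hidden temporary is constructed here
    return result;
}

void demo() {
    Complex a = { 1.0, 2.0 };
    Complex b = { 3.0, 4.0 };
    Complex c = a + b;   // reads like built-in arithmetic, but dispatches to operator+ above
}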
When I'm forced to use explicit methods on the objects (or even static methods or global methods where that applies) this is a signal to me, while I'm reading, that tells me of the potential cost overheads and bottlenecks and the like. I know, without even having to think for an instant, that I'm dealing with a method, that I've got dispatching overhead, that I may have temporary object creation and deletion overhead, etc. Everything's in front of me right before my eyes -- or at least enough indicators are in front of me that I know to be more careful.
I'm not intrinsically opposed to operator overloading. There are times when it makes code clearer, yes indeed, especially when you have complicated calculations over many baffling expressions. I can understand, however, exactly why someone might not want to put that into their language.
There is a further reason not to like operator overloading from the language designer's viewpoint. Operator overloading makes for very, very, very difficult grammars. C++ is already infamous for being nigh-unparseable and some of its constructs, like operator overloading, are the cause of it. Again from the viewpoint of someone writing the language I can fully understand why operator overloading was left off as a bad idea (or a good idea that's bad in implementation).
(This is all, of course, in addition to the other reasons you've already rejected. I'll submit my own overloading of operator-,() in my old C++ days in that stew just to be really annoying.)
There is no problem with operator overloading itself, but with how it has actually been used. As long as you overload operators in ways that make sense, the language still makes sense, but if you give operators other meanings, the language becomes inconsistent.
(One example is how the shift-left (<<) and shift-right (>>) operators have been overloaded in C++ to mean "output" and "input" respectively...)
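For instance, with the standard iostreams:
#include <iostream>

int main() {
    std::cout << "value: " << 42 << '\n';   // "shift left" here means "write to the stream"
    return 0;
}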
So, the reasoning when leaving out operator overloading was probably that the risk of misuse was greater than the benefits of having operator overloading.
I think that Java would benefit greatly from extending its operators to cover built-in Number object types. Early (pre-1.0) versions of Java were said to have it (in that there were no primitives - everything was an object) but the VM technology of the time made it prohibitive from a performance view.
But in terms of allowing user-defined operator overloading in general, it is not in the spirit of the Java language. The main problem is simply that it is hard to implement an operator that is consistent with what you expect from mathematics across object types, and it opens the door to a lot of bad implementations, which lead to a lot of hard-to-find (and therefore expensive) bugs. You can just look at how many bad equals implementations (as in, ones that violate the contract) there are in general Java code, and the problem would only get worse from there.
Of course there are languages that prioritize power and syntactical beauty over such concerns, and more power to them. It is just not Java.
Edit: How is a custom + operator different than a custom == implementation (captured in Java in the equals(Object) method)? It isn't, really. It is just that by allowing operator overloading, things that are intuitive to a sixth grader become untrue. The real world experience of equals(Object) implementations shows how such complex contracts become hard to enforce in the real world.
Further Edit: Let me clarify the above, as I shortened it while editing and lost the point. A + operator in math has certain properties, one of which is that it doesn't matter which order the numbers on either side appear - it has the same result. So consider even the simplest case of a + performing an add to a Collection:
Collection a = ...
Collection b = ...
a + b;
System.out.println(a);
System.out.println(b);
The intuitive understanding of + would lead to an expectation that a + b or b + a would give the same result, but of course they would not. Start mixing two object types that take each other as parameters in their plus method (say Collection and String) and things get harder to follow.
Now certainly it is possible to design operators on objects which are well understood and lead to better, more readable and more understandable code than without them. But the point is that more often than not in home-grown corporate APIs what you would end up seeing is obfuscated code.
There are a few problems:
Overloading the logical operators (&& and ||) changes behaviour, because their lazy (short-circuit) evaluation is lost.
Even in mathematical types there are ambiguities: is (3dpoint * 3dpoint) a cross product or a scalar product? (See the sketch below.)
You can't define new operators, so people reuse existing operators in novel ways, e.g. "string1 % string2" to mean split string1 on string2.
But you can't always protect idiots from themselves even with an outright ban.
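A sketch of the second point, with a made-up Vec3 type: named functions keep the two products distinct, whereas a single overloaded operator* would have to pick one meaning.
struct Vec3 { double x, y, z; };

double dot(const Vec3& a, const Vec3& b) {            // scalar product, named explicitly
    return a.x * b.x + a.y * b.y + a.z * b.z;
}

Vec3 cross(const Vec3& a, const Vec3& b) {            // cross product, named explicitly
    Vec3 r = { a.y * b.z - a.z * b.y,
               a.z * b.x - a.x * b.z,
               a.x * b.y - a.y * b.x };
    return r;
}
// Had both been written as "a * b", a reader could not tell which product was intended.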
The point is that whenever you see, for example, a plus sign being used in the code, you know exactly what it does given that you know the types of its operands (which you always do in Java, as it is strongly typed).

Can Procedural Programming use Objects?

I have seen a number of different topics on Stack Overflow discussing the differences between procedural and object-oriented programming. The question is: if the program uses an object, can it still be considered procedural?
Yes, and a lot of early Java was exactly that; you had a bunch of C programmers get into Java because it was "hot", people who didn't think in OOP. Lots of big classes with lots of static methods, lots of RTTI in case statements, lots of use of instanceof.
GLib has GObject which is object oriented programming implemented in pure C. While you can build up an API which begins to "feel" like OOP, it's still just plain "C" code with no actual classes (from the compiler's point of view). If you get far enough so you're starting to implement Object Oriented design patterns then I would call that OOP no matter what language it's written in. It's all about the feel of the code and how you have to think to write against it.
Procedural programming has to do with how you structure your program and model your domain. Just because at some point you instantiate an object, doesn't alone make your program oriented towards objects (i.e., object-oriented).
The distinction is entirely subjective. For example, if you code a C library using state passing, you are implementing something of a "tell" pattern, with the state as the object.
Classes can be considered as super types. When we converted from VB3 to VB6, our first pass was finding all the types we used, then finding all the subroutines and functions that took each type as a parameter. We moved those into the class definition, removed the parameter, and then tested, leaving the original flow of control intact.
Then we refactored our flow of control to use various patterns and object oriented techniques.
The heart of object orientation is about how you decompose the problem into smaller parts, and how these parts work together. It's about the philosophy. Using OO language does not necessarily mean a program written in it is OO; it's just easier to do OO with a language that supports common OO concepts out of the box.
To answer the question: "If the program uses an object can it still be considered procedural?" - That depends on what your definitions of object and procedural programming are. But in my opinion, the answer is a resounding "Yes". "Objects" are only a part of the philosophy that is OO, and using them "somewhere in your application" does not mean you're doing OO.
The answer to your question is yes. For example, I've got an old PHP legacy page to maintain. Most of the code is procedural, but I decided that some things can be maintained much more easily if I plug Zend Framework into the existing stuff and write some of my own classes to replace some of the old code. In general this application is still written and functioning in a mainly procedural way, but here and there a class or two are instantiated and used. I guess there is no clear border between procedural and OO. You can do it more cleanly or less cleanly. If you don't have enough layers for the size and complexity of your app, you'll end up with more procedural code automatically too...

Why is Syntactic Sugar sometimes considered a bad thing? [closed]

Syntactic sugar, IMHO, generally makes programs much more readable and easier to understand than coding from a very minimalistic set of primitives. I don't really see a downside to good, well thought out syntactic sugar. Why do some people basically think that syntactic sugar is at best superfluous and at worst something to be avoided?
Edit: I didn't want to name names, but since people asked, it seems like most C++ and Java programmers, for example, frankly don't care about their language's utter lack of syntactic sugar. In a lot of cases, it's not necessarily that they just like other parts of the language enough to make the lack of sugar worth the tradeoff, it's that they really don't care. Also, Lisp programmers seem almost proud of their language's strange notation (I won't call it syntax because it technically isn't), though in this case, it's more understandable because it allows Lisp's metaprogramming facilities to be as powerful as they are.
Syntactic sugar can in some cases interact in unpleasant ways.
Some specific examples:
The first is C#- (or Java-) specific: autoboxing combined with the lock/synchronized construct.
private int i;
private object o = new object();

private void SomethingNeedingLocking(bool b)
{
    object lk = b ? i : o;              // when b is true, i is boxed into a brand-new object here
    lock (lk) { /* do something */ }
}
In this example the helpful lock construct, which can use any object as a synchronization point, combined with autoboxing, leads to a possible bug: the lock is simply taken on a new boxed instance of i each time. It is arguable that the lock construct is over-helpful and that some other specific construct on which to lock would be better, but certainly the combination is still flawed.
Multiple variable declaration and pointers:
long* first, second;
A classic bug (though easy to spot): the sugar of declaring several variables in one statement doesn't combine with the pointer syntax, so only first is a long*, while second is a plain long.
Some constructs do not need other aspects of the sugar to cause issues; a classic example is the ++ operator. It neatly lets you avoid writing
i = i + 1;
A widely used construct (and one which itself has scope for bugs, since you must remember to update both occurrences of the variable if you wish to change from using i). However, since this is easy to embed within other expressions, the issue of prefix versus postfix rears its head.
When used within a for loop this doesn't matter, since the evaluation happens outside of any other evaluations; but used elsewhere it can be a source of confusion, since you may be embedding a very important aspect of the calculation (whether the current or the next value should be used) into a very small and easily missed form.
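A tiny illustration of the prefix/postfix difference:
int i = 1;
int a = i++;   // a == 1: the old value of i is used, then i becomes 2
int b = ++i;   // b == 3: i is incremented to 3 first, then its new value is used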
All the above (except perhaps the lock/box one which the compiler really should spot for you) are cases where the usage may well be fine, or experienced programmers may think "that's perfectly clear to me" but the scope for confusion exists, certainly for novice programmers or those moving to a different syntax.
"Syntactic sugar causes cancer of the semicolon." (Alan Perlis)
It is difficult to reason about syntactic sugar if the reasoning takes place without reference to a context. There are lots of examples about why "syntactic sugar" is good or bad, and all of them are meaningless without context.
You mention that syntactic sugar is good when it makes programs readable and easier to understand... and I can counter that saying that sometimes, syntactic sugar can affect the formal structure of a language, especially when syntactic sugar is a late addendum during the design of a programming language.
Instead of thinking in terms of syntactic sugar, I like to think in terms of well-designed languages that foster readability and ease of understanding, and badly designed languages.
Too much unnecessary sugar just adds bloat to a language. I would name names but then I would just get flamed. :) Also, sometimes languages employ syntactic sugar instead of doing a real implementation. For instance, there is a language that shall remain nameless whose "generics implementation" is just a thin layer of syntactic sugar.
Nonsense. C and Lisp programmers use syntactic sugar all the time.
Examples:
a[i] instead of *(a+i)
'(1 2 3) instead of (quote (1 2 3))
Syntax, in general, makes a language hard to learn, let alone master. Therefore, the smaller the set of syntax, the easier it is to learn and to try to master. This is a major reason why many new languages borrow the syntax from popular, existing languages.
Also, while I can simply avoid learning certain features I'm not interested in for whatever reason, I'll eventually find myself reading someone else's code who does like that feature and then I'll need to go learn that feature just to understand their code.
Syntactic sugar can either make your program more understandable, or less so. If you add syntactic sugar for trivial things, you just add cognitive burden, because the language becomes more complicated. On the other hand, if you can add syntactic sugar that manages to pinpoint a specific concept and highlight it, then you can win.
Personally, I've always found the term "syntactic sugar" ambiguous. I mean if you want to get technical, just about anything other than basic arithmetic, an if statement, and a goto is syntactic sugar.
I think what most people mean when they dismiss "syntactic sugar" is that a language feature makes something complicated overly simple. The most notorious example of this is Perl. But since I'm not a Perl expert, I'll give you an example of what I'm talking about in python (taken from this question):
reduce(list.__add__, map(lambda x: list(x), [mi.image_set.all() for mi in list_of_menuitems]))
This is an obvious attempt at making something simpler gone horribly, horribly wrong.
That's not to say I'm on the side of removing such features though. I think that such features just need to be used carefully.
I have always understood "syntactic sugar" to refer to any syntax added to an existing language that does not extend the capabilities of the language. Otherwise, anything less direct than binary machine language could be called syntactic sugar.
Even though they do not extend the capabilities of a language, they can still be very useful.
For example, LINQ is syntactic sugar because it doesn't add any new capabilities to C#3 that were not already possible in C#2. But to do the same thing as a simple LINQ expression in C#2 would take vastly more code to accomplish and be much harder to read.
Conversely, generics are not syntactic sugar, because you can do things with them in C#2 that were impossible with C#1, such as creating a collection class that can contain any value type without boxing.
See the Law of Leaky Abstractions - too much sugar and you just use it without understanding or knowing what is going on, and this makes it increasingly hard to debug if something does go wrong. It's not so much that "syntactic sugar" is a bad thing, just that a lot of programmers rely on it without really being aware of what they are shielded from, and then if the syntactic sugar runs into problems they're screwed.
Possibly because it leads to confusion in programmers who don't know what is really happening behind the scenes, which could in turn lead to some inefficient or poorly written code. Just a guess; I don't think it is a "bad thing" either.
It's more typing and more layers of abstraction. I'd much rather use a language that is designed to have higher levels of abstraction than a language with syntactic sugar tacked on to do a poor job of imitating features other languages have built in.