Changing interface in C++ - oop

I am reading an article on the extension of interfaces at the following link:
http://wiki.hsr.ch/APF/files/ExtensionInterface.pdf
On page 142 it says:
Over time the addition of these requests can bloat the interface with
functionality not anticipated in the initial framework design. If new
methods are added to the "universalComponent" interface directly, all
client code must be updated and recompiled. This is tedious and
error-prone.
My question is (assume we are developing in C++):
Why do we have to recompile client code if we add new methods to an interface
without modifying any of the interface's existing functions?
Thanks!

I haven't read the article, but for starters, I would suggest de-emphasizing the terms "method" and "interface" in C++. Those terms are popular in strict OO languages like Java, but C++ is a broader, multi-paradigm language.
With that said, "adding methods to interfaces" is really just adding more virtual member functions to a base class. Changing the base class changes the definition of all derived classes, and thus all code that requires the complete type of any derived class or of the base class must be recompiled.
C++ types are not a runtime feature. Types only exist at compile time, and the compiler must have full access to the type definitions. (Again in contrast to other languages!) The interface-implementation relationship exists purely at compile-time and cannot be "precompiled". So there's really no such thing as "modifying the interface" that would produce runtime-modularity. The "interface" concept is just a neat mnemonic that you can use when designing your application, but it does not save you from recompiling. Changing a class definition changes the internal representation of the class, and you cannot (in general) make a correct C++ program unless all parts of the program see the same class definitions.

Adding a method to a class that is involved in polymorphism (meaning it has at least one virtual member function) potentially changes the binary layout of objects of that class and its subclasses.
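To make the layout point concrete, here is a minimal C++ sketch (the names are illustrative, not from the article):

// Shape plays the role of the "interface": an abstract base class.
struct Shape {
    virtual double area() const = 0;
    virtual ~Shape() = default;
    // Adding a new virtual function here, e.g.
    //     virtual double perimeter() const = 0;
    // inserts a new slot into Shape's vtable. Client code compiled
    // against the old layout would dispatch through stale vtable
    // offsets, so every translation unit that sees Shape, or any
    // class derived from it, must be recompiled.
};

struct Circle : Shape {
    explicit Circle(double r) : radius(r) {}
    double area() const override { return 3.14159 * radius * radius; }
    double radius;
};

Whether a particular toolchain happens to keep working without a recompile is an ABI detail; the language rule is stricter, since every translation unit must agree on the class definition (the one-definition rule).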

Related

extending objects at run-time via categories?

Objective-C’s objects are pretty flexible when compared to similar languages like C++ and can be extended at runtime via Categories or through runtime functions.
Any idea what this sentence means? I am relatively new to Objective-C
While technically true, it may be confusing to the reader to call category extension "at runtime." As Justin Meiners explains, categories allow you to add additional methods to an existing class without requiring access to the existing class's source code. The use of categories is fairly common in Objective-C, though there are some dangers. If two different categories add the same method to the same class, then the behavior is undefined. Since you cannot know whether some other part of the system (perhaps even a system library) adds a category method, you typically must add a prefix to prevent collisions (for example, rather than swappedString, a better name would be something like rnc_swappedString if it were part of RNCryptor).
As I said, it is technically true that categories are added at runtime, but from the programmer's point of view, categories are written as though just part of the class, so most people think of them as being a compile-time choice. It is very rare to decide at runtime whether to add a category method or not.
As a beginner, you should be aware of categories, but slow to create new ones. Creating categories is a somewhat intermediate-level skill. It's not something to avoid, but not something you'll use every day. It's very easy to overuse them. See Justin's link for more information.
On the other hand, "runtime functions" really do add new functionality to existing classes or even specific objects at runtime, and are completely under the control of code. You can, at runtime, modify a class such that it responds to a method it didn't previously respond to. You can even generate entirely new classes at runtime that did not exist when the program was compiled, and you can change the class of existing objects. (This is exactly how Key-Value Observation is implemented.)
Modifying classes and objects using the runtime is an advanced skill. You should not even consider using these techniques in production code until you have significant experience. And when you have that experience, it will tell you that you very seldom want to do this anyway. You will know the runtime functions because they are C-based, with names like method_exchangeImplementations. You won't mistake them for normal ObjC (and you generally have to #import <objc/runtime.h> to get to them).
There is a middle ground that bleeds into runtime manipulation, called message forwarding and dynamic method resolution. This is often used for proxy objects, and is implemented with -forwardingTargetForSelector:, +resolveInstanceMethod:, and some similar methods. These are tools that allow classes to modify themselves at runtime, and they are much less dangerous than modifying other classes (i.e. "swizzling").
It's also important to consider how all of this translates to Swift. In general, Swift has discouraged and restricted the use of runtime class manipulation, but it embraces (and improves) category-like extensions. By the time you're experienced enough to dig into the runtime, you will likely find it an even more obscure skill than it is today. But you will use extensions (Swift's version of categories) in every program.
A category allows you to add functionality to an existing class that you do not have access to source code for (System frameworks, 3rd party APIs etc). This functionality is possible by adding methods to a class at runtime.
For example, let's say I wanted to add a method to NSString, called -swappedString, that swapped uppercase and lowercase letters. In static languages (such as C++), extending classes like this is more difficult: I would have to create a subclass of NSString (or a helper function). While my own code could take advantage of my subclass, any instance created in a library would not use my subclass and would not have my method.
Using categories I can extend any class, such as adding a -swappedString method to NSString, and use it transparently on any instance of the class: [anyString swappedString];.
You can learn more details from Apple's Docs

What is the difference between subtyping and inheritance in OO programming?

I could not find the main difference, and I am confused about when to use inheritance and when to use subtyping. I found some definitions, but they are not very clear.
What is the difference between subtyping and inheritance in object-oriented programming?
In addition to the answers already given, here's a link to an article I think is relevant.
Excerpts:
In the object-oriented framework, inheritance is usually presented as a feature that goes hand in hand with subtyping when one organizes abstract datatypes in a hierarchy of classes. However, the two are orthogonal ideas.
Subtyping refers to compatibility of interfaces. A type B is a subtype of A if every function that can be invoked on an object of type A can also be invoked on an object of type B.
Inheritance refers to reuse of implementations. A type B inherits from another type A if some functions for B are written in terms of functions of A.
However, subtyping and inheritance need not go hand in hand. Consider the data structure deque, a double-ended queue. A deque supports insertion and deletion at both ends, so it has four functions insert-front, delete-front, insert-rear and delete-rear. If we use just insert-rear and delete-front we get a normal queue. On the other hand, if we use just insert-front and delete-front, we get a stack. In other words, we can implement queues and stacks in terms of deques, so as datatypes, Stack and Queue inherit from Deque. On the other hand, neither Stack nor Queue are subtypes of Deque since they do not support all the functions provided by Deque. In fact, in this case, Deque is a subtype of both Stack and Queue!
I think that Java, C++, C# and their ilk have contributed to the confusion, as already noted, by the fact that they consolidate both ideas into a single class hierarchy. However, I think the example given above does justice to the ideas in a rather language-agnostic way. I'm sure others can give more examples.
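In C++, the Deque/Stack half of that example maps neatly onto private inheritance, which gives you exactly inheritance without subtyping. A minimal sketch, reusing std::deque as the base (the member names are illustrative):

#include <deque>

// Stack reuses std::deque's implementation (inheritance), but because the
// inheritance is private, a Stack cannot be used where a deque is
// expected (no subtyping).
template <typename T>
class Stack : private std::deque<T> {
public:
    void push(const T& v) { this->push_front(v); }   // insert-front
    void pop()            { this->pop_front(); }     // delete-front
    const T& top() const  { return this->front(); }
    bool empty() const    { return std::deque<T>::empty(); }
};

With this, Stack<int> s; s.push(1); compiles, but std::deque<int>* p = &s; is rejected with an "inaccessible base" error: the implementation was inherited, the type was not.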
A relative unfortunately died and left you his bookstore.
You can now read all the books there, sell them, you can look at his accounts, his customer list, etc. This is inheritance - you have everything the relative had. Inheritance is a form of code reuse.
You can also re-open the book store yourself, taking on all of the relative's roles and responsibilities, even though you add some changes of your own - this is subtyping - you are now a bookstore owner, just like your relative used to be.
Subtyping is a key component of OOP - you have an object of one type but which fulfills the interface of another type, so it can be used anywhere the other object could have been used.
In the languages you listed in your question - C++, Java and C# - the two are (almost) always used together, and thus the only way to inherit from something is to subtype it and vice versa. But other languages don't necessarily fuse the two concepts.
Inheritance is about gaining attributes (and/or functionality) of super types. For example:
class Base {
    //interface with included definitions
}

class Derived inherits Base {
    //Add some additional functionality.
    //Reuse Base without having to explicitly forward
    //the functions in Base
}
Here, a Derived cannot be used where a Base is expected, but it is able to act similarly to a Base, while adding behaviour or changing some aspect of Base's behaviour. Typically, Base would be a small helper class that provides both an interface and an implementation for some commonly desired functionality.
Subtype-polymorphism is about implementing an interface, and so being able to substitute different implementations of that interface at run-time:
class Interface {
    //some abstract interface, no definitions included
}

class Implementation implements Interface {
    //provide all the operations
    //required by the interface
}
Here, an Implementation can be used wherever an Interface is required, and different implementations can be substituted at run-time. The purpose is to allow code that uses Interface to be more widely useful.
Your confusion is justified. Java, C#, and C++ all conflate these two ideas into a single class hierarchy. However, the two concepts are not identical, and there do exist languages which separate the two.
If you inherit privately in C++, you get inheritance without subtyping. That is, given:
class Derived : Base // note the missing public before Base
You cannot write:
Base * p = new Derived(); // type error
Because Derived is not a subtype of Base. You merely inherited the implementation, not the type.
Subtyping doesn't have to be implemented via inheritance. Some subtyping that is not inheritance:
OCaml's polymorphic variants
Rust's lifetime annotations
Clean's uniqueness types
Go's interfaces
In a word: subtyping and inheritance are both forms of polymorphism (inheritance gives dynamic polymorphism, through overriding). Strictly speaking, inheritance is subclassing: there is no guarantee that a subclass stays compatible with its superclass (a subclass may discard superclass behaviour), whereas subtyping (such as implementing an interface) ensures that the class does not discard the expected behaviour.

Can an interface be retroactively implemented in VB.NET?

Is it possible to define an interface (e.g. MyClass Implements MyInterface) whose method/property definitions already match some of the methods/properties defined on a third-party (or a native) class?
For example, the DataRow class has a number of properties/methods that make it "row-like". What if I want to implement an interface (i.e. IRowLike) that defines certain methods and properties already existing on the native DataRow class (which I cannot directly touch or extend). I simply want the class to agree at runtime that it does indeed abide by some interface.
Interfaces afford a poor man's version of "duck typing". Once I have a set of classes that all abide by a given interface, I can define extension methods against that interface, and all classes that support the interface immediately gain new behavior. I know it may seem odd to want to retroactively apply an interface to third-party classes, but it would definitely allow us to do more with less code.
This isn't possible in .NET. A type defines the interfaces it implements in metadata at compile time, and its definition isn't alterable at runtime. It is possible to generate types at runtime which implement specific interfaces, but not to alter an existing type.
There are some alternatives, though. In VB.NET you could simply choose to use late binding on the type and access the interface methods in that manner (or dynamic in C#). The downside, of course, is that the code isn't statically verifiable.

OO principle: C#: design to interfaces and not concrete classes

I have some questions about the effects of using concrete classes and interfaces.
Say some chunk of code (call it chunkCode) uses concrete class A. Would I have to re-compile chunkCode if:
I add some new public methods to A? If so, isn't that a bit strange? After all, I still provide the interface chunkCode relies on. (Or do I have to re-compile because chunkCode has no way of knowing otherwise that this is true and that I haven't omitted some API?)
I add some new private methods to A?
I add a new public field to A?
I add a new private field to A?
Factory Design Pattern:
The main code doesn't care what the concrete type of the object is; it relies only on the API. But what would you do if a few methods are relevant to only one concrete type? That type implements the interface but adds some more public methods. Would you use some if (A is type1) statements (or the like) in the main code?
Thanks for any clarification
1) Compiling is not an activity in OO. It is a detail of specific OO implementations. If you want an answer for a specific implementation (e.g. Java), then you need to clarify.
In general, some would say that adding to an interface is not considered a breaking change, whereas others say you cannot change an interface once it is published, and you have to create a new interface.
Edit: You specified C#, so check out this question regarding breaking changes in .Net. I don't want to do that answer a disservice, so I won't try to replicate it here.
2) People often hack their designs to do this, but it is a sign that you have a poor design.
Good alternatives:
Create a method in your interface that allows you to invoke the custom behavior, but not be required to know what that behavior is.
Create an additional interface (and a new factory) that supports the new methods. The new interface does not have to inherit the old interface, but it can if it makes sense (if an is-a relationship can be expressed between the interfaces).
If your language supports it, use the Abstract Factory pattern and take advantage of covariant return types in the concrete factory; if you need a specific derived type, accept a concrete factory instead of an abstract one (see the sketch below).
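Here is a minimal C++ sketch of that last alternative; all of the names are invented for illustration:

struct Widget { virtual ~Widget() = default; };
struct FancyWidget : Widget { void fancyExtra() {} };

struct Factory {
    virtual ~Factory() = default;
    virtual Widget* create() const = 0;
};

struct FancyFactory : Factory {
    // Covariant return type: this overrides Factory::create, yet callers
    // that hold a FancyFactory directly get a FancyWidget without a cast.
    FancyWidget* create() const override { return new FancyWidget(); }
};

Generic code accepts a Factory& and stays ignorant of the concrete type; the one piece of code that genuinely needs fancyExtra() accepts a FancyFactory& and gets the derived type statically.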
Bad alternatives (anti-patterns):
Adding a method to the interface that does nothing in other derived classes.
Throwing an exception in a method that doesn't make sense for your derived class.
Adding query methods to the interface that tell the user if they can call a certain method.
Unless the method name is generic enough that the user wouldn't expect it to do anything (e.g. DoExtraProcessing), adding a method that is a no-op in most derived classes breaks the contract defined by that interface.
E.g.: Someone invoking bird.Fly() would expect it to actually do something. We know that chickens can't fly. So either a Chicken isn't a Bird, or Birds don't Fly.
Adding query methods is a poor work-around for this (e.g. adding a boolean CanFly() method or property to your interface), and so is throwing an exception. Neither gets around the fact that the type simply isn't substitutable. Check out the Liskov Substitution Principle (LSP).
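A minimal sketch of the Bird/Chicken situation described above (the code is illustrative, not from the original answer):

#include <stdexcept>

struct Bird {
    virtual void Fly() = 0;   // the implied contract: every Bird can fly
    virtual ~Bird() = default;
};

struct Chicken : Bird {
    // Throwing here breaks substitutability: code written against Bird
    // has no reason to expect Fly() to fail.
    void Fly() override { throw std::logic_error("chickens can't fly"); }
};

The clean fix is to restructure the hierarchy, for example by introducing a separate FlyingBird interface, rather than bolting a CanFly() query onto Bird.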
For your first question, the answer is NO for all your points. If it were otherwise, backward compatibility would not make any sense. You have to recompile chunkCode only if you break the API, that is, remove some functionality that chunkCode is using, change calling conventions, modify the number of parameters; these sorts of things are breaking changes.
For the second, I usually (but only if I really have to) use dynamic_cast in those situations.
Note that my answer is valid in the context of C++; I just saw that the question is language-agnostic (kind of tired at this hour; I'll remove the answer if it offends anybody).
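For what it's worth, the dynamic_cast approach looks roughly like this (a hedged sketch; the type names are made up):

// Shape is the interface the factory hands out; Circle adds one extra
// member function that is not part of the interface.
struct Shape { virtual ~Shape() = default; };
struct Circle : Shape {
    double radius() const { return 1.0; }
};

void useShape(Shape& s) {
    // The cast yields nullptr unless s really is a Circle.
    if (auto* c = dynamic_cast<Circle*>(&s)) {
        c->radius();   // Circle-specific call, taken only on this branch
    }
}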
Question 1: It depends on what language you are talking about, but it's always safer to recompile in either case, mostly because chunkCode does not know what actually exists inside A; recompiling refreshes its memory. It should work in Java without recompiling, though.
Question 2: No. The entire point of writing a Factory is to get rid of if (A is type1). These if statements are terrible from a maintenance perspective.
A Factory is designed to build objects of a similar type. If you are in a situation where you need such a statement, then that object is probably not of a similar type to the rest of the classes. If you are sure it is of a similar type with a similar interface, I would declare the extra function in all the concrete classes and give it a real implementation only in this one.
Ideally, all these concrete classes should have a common abstract base class or an interface that defines the API. Nothing other than what is declared in this interface should be called anywhere in the code, unless you are writing functions that take this specific class.

What is the definition of "interface" in object oriented programming

A friend of mine goes back and forth on what "interface" means in programming.
What is the best description of an "interface"?
To me, an interface is a blueprint of a class. Is this the best definition?
An interface is one of the more overloaded and confusing terms in development.
It is actually a concept of abstraction and encapsulation. For a given "box", it declares the "inputs" and "outputs" of that box. In the world of software, that usually means the operations that can be invoked on the box (along with arguments) and in some cases the return types of these operations.
What it does not do is define what the semantics of these operations are, although it is commonplace (and very good practice) to document them in proximity to the declaration (e.g., via comments), or to pick good naming conventions. Nevertheless, there are no guarantees that these intentions would be followed.
Here is an analogy: Take a look at your television when it is off. Its interface is its buttons, its various plugs, and the screen. Its semantics and behavior are that it takes inputs (e.g., cable programming) and has outputs (display on the screen, sound, etc.). However, when you look at a TV that is not plugged in, you are projecting your expected semantics onto an interface. For all you know, the TV could just explode when you plug it in. However, based on its "interface" you can assume that it won't make any coffee, since it doesn't have a water intake.
In object oriented programming, an interface generally defines the set of methods (or messages) that an instance of a class that has that interface could respond to.
What adds to the confusion is that in some languages, like Java, there is an actual interface with its language specific semantics. In Java, for example, it is a set of method declarations, with no implementation, but an interface also corresponds to a type and obeys various typing rules.
In other languages, like C++, you do not have interfaces. A class itself defines methods, but you could think of the interface of the class as the declarations of the non-private methods. Because of how C++ compiles, you get header files where you could have the "interface" of the class without actual implementation. You could also mimic Java interfaces with abstract classes with pure virtual functions, etc.
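For example, a Java-style interface mimicked with a C++ abstract class might look like this (a minimal sketch; the names are invented for illustration):

#include <iostream>
#include <string>

// The "interface": only pure virtual functions, no state, no definitions.
class Printer {
public:
    virtual void print(const std::string& text) = 0;
    virtual ~Printer() = default;
};

// A class that "implements the interface".
class ConsolePrinter : public Printer {
public:
    void print(const std::string& text) override {
        std::cout << text << '\n';
    }
};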
An interface is most certainly not a blueprint for a class. A blueprint, by one definition is a "detailed plan of action". An interface promises nothing about an action! The source of the confusion is that in most languages, if you have an interface type that defines a set of methods, the class that implements it "repeats" the same methods (but provides definition), so the interface looks like a skeleton or an outline of the class.
Consider the following situation:
You are in the middle of a large, empty room, when a zombie suddenly attacks you.
You have no weapon.
Luckily, a fellow living human is standing in the doorway of the room.
"Quick!" you shout at him. "Throw me something I can hit the zombie with!"
Now consider:
You didn't specify (nor do you care) exactly what your friend will choose to toss;
...But it doesn't matter, as long as:
It's something that can be tossed (He can't toss you the sofa)
It's something that you can grab hold of (Let's hope he didn't toss a shuriken)
It's something you can use to bash the zombie's brains out (That rules out pillows and such)
It doesn't matter whether you get a baseball bat or a hammer -
as long as it implements your three conditions, you're good.
To sum it up:
When you write an interface, you're basically saying: "I need something that..."
An interface is a contract you must comply with or are given, depending on whether you are the implementer or a user.
I don't think "blueprint" is a good word to use. A blueprint tells you how to build something. An interface specifically avoids telling you how to build something.
An interface defines how you can interact with a class, i.e. what methods it supports.
In programming, an interface defines what behavior an object will have, but it will not actually specify that behavior. It is a contract that guarantees that a certain class can do something.
Consider this piece of C# code here:
using System;

public interface IGenerate
{
    int Generate();
}

// Dependencies
public class KnownNumber : IGenerate
{
    public int Generate()
    {
        return 5;
    }
}

public class SecretNumber : IGenerate
{
    public int Generate()
    {
        return new Random().Next(0, 10);
    }
}

// What you care about
class Game
{
    public Game(IGenerate generator)
    {
        Console.WriteLine(generator.Generate());
    }
}

new Game(new SecretNumber());
new Game(new KnownNumber());
The Game class requires a secret number. For the sake of testing it, you would like to inject what will be used as a secret number (this principle is called Inversion of Control).
The Game class wants to be "open-minded" about what will actually create the random number, therefore it asks in its constructor for "anything that has a Generate method".
First, the interface specifies what operations an object will provide. It contains just what they look like, but no actual implementation is given; this is just the signature of the method. Conventionally, interfaces in C# are prefixed with an I.
The classes now implement the IGenerate interface. This means that the compiler will make sure that both have a method called Generate that returns an int.
The game is now being constructed with two different objects, each of which implements the correct interface. Other classes would produce an error when building the code.
Here I noticed the blueprint analogy you used:
A class is commonly seen as a blueprint for an object. An interface specifies something that a class will need to do, so one could argue that it is indeed just a blueprint for a class, but since a class does not necessarily need an interface, I would argue that the metaphor breaks down. Think of an interface as a contract. The class that "signs" it will be legally required (enforced by the compiler police) to comply with the terms and conditions in the contract. This means that it has to do what is specified in the interface.
This is all due to the statically typed nature of some OO languages, as it is the case with Java or C#. In Python on the other hand, another mechanism is used:
import random

# Dependencies
class KnownNumber(object):
    def generate(self):
        return 5

class SecretNumber(object):
    def generate(self):
        return random.randint(0, 10)

# What you care about
class SecretGame(object):
    def __init__(self, number_generator):
        number = number_generator.generate()
        print(number)
Here, none of the classes implement an interface. Python does not care about that, because the SecretGame class will just try to call whatever object is passed in. If the object HAS a generate() method, everything is fine. If it doesn't: KAPUTT!
This mistake will not be seen at compile time but at runtime, possibly when your program is already deployed and running. C# would notify you way before you came close to that.
The reason this mechanism is used is, naively stated, that in OO languages functions aren't naturally first-class citizens. As you can see, KnownNumber and SecretNumber contain JUST the functions to generate a number; one does not really need the classes at all. In Python, therefore, one could just throw them away and pass the functions on their own:
# OO approach
SecretGame(SecretNumber())
SecretGame(KnownNumber())

# Functional approach
# Dependencies
class SecretGame(object):
    def __init__(self, generate):
        number = generate()
        print(number)

SecretGame(lambda: random.randint(0, 10))
SecretGame(lambda: 5)
A lambda is just a function that is declared "in line, as you go".
A delegate is just the same in C#:
class Game
{
    public Game(Func<int> generate)
    {
        Console.WriteLine(generate());
    }
}

new Game(() => 5);
new Game(() => new Random().Next(0, 10));
Side note: The latter examples were not possible like this up to Java 7; there, interfaces were your only way of specifying this behavior. However, Java 8 introduced lambda expressions, so the C# example can be converted to Java very easily (Func<int> becomes java.util.function.IntSupplier and => becomes ->).
To me an interface is a blueprint of a class, is this the best definition?
No. A blueprint typically includes the internals, but an interface is purely about what is visible on the outside of a class, or, more accurately, of a family of classes that implement the interface.
The interface consists of the signatures of methods and values of constants, and also a (typically informal) "behavioral contract" between classes that implement the interface and others that use it.
Technically, I would describe an interface as a set of ways (methods, properties, accessors... the vocabulary depends on the language you are using) to interact with an object. If an object supports/implements an interface, then you can use all of the ways specified in the interface to interact with this object.
Semantically, an interface could also contain conventions about what you may or may not do (e.g., the order in which you may call the methods) and about what, in return, you may assume about the state of the object given how you interacted so far.
Personally, I see an interface as a template. If an interface contains the definitions for the methods foo() and bar(), then you know every class which implements this interface has the methods foo() and bar().
Consider a man (a user, or an object) who wants some work done. He will contact a middleman (the interface) who has contracts with companies (real-world objects created from implementing classes). A few types of work will be defined by him, which the companies will implement and return results for.
Each and every company will implement the work in its own way, but the result will be the same. In this way, the user gets his work done through a single interface.
I think an interface acts as the visible part of a system: a few commands whose inner workings are defined internally by the implementing subsystems.
An interface separates out operations on a class from the implementation within. Thus, some implementations may provide for many interfaces.
People would usually describe it as a "contract" for what must be available in the methods of the class.
It is absolutely not a blueprint, since that would also determine implementation. A full class definition could be said to be a blueprint.
An interface defines what a class that inherits from it must implement. In this way, multiple classes can inherit from an interface, and because of that inheritance, you can:
be sure that all members of the interface are implemented in the derived class (even if it's just to throw an exception)
abstract the class itself away from the caller (cast an instance of a class to the interface, and interact with it without needing to know what the actual derived class IS)
For more info, see http://msdn.microsoft.com/en-us/library/ms173156.aspx
In my opinion, "interface" has a broader meaning than the one commonly associated with it in Java. I would define an interface as a set of available operations with some common functionality that allow controlling/monitoring a module.
In this definition I try to cover both programmatic interfaces, where the client is some module, and human interfaces (a GUI, for example).
As others already said, an interface always has some contract behind it, in terms of inputs and outputs. The interface does not promise anything about the "how" of the operations; it only guarantees some properties of the outcome, given the current state, the selected operation and its parameters.
As noted above, "contract" and "protocol" are appropriate synonyms.
The interface comprises the methods and properties you can expect to be exposed by a class.
So if a class Cheetos Bag implements the Chip Bag interface, you should expect a Cheetos Bag to behave exactly like any other Chip Bag. (That is, expose the .attemptToOpenWithoutSpillingEverywhere() method, etc.)
A boundary across which two systems communicate.
Interfaces are how some OO languages achieve ad hoc polymorphism. Ad hoc polymorphism is simply functions with the same names operating on different types.
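In C++, for instance, plain function overloading is the textbook form of ad hoc polymorphism (a small illustrative sketch):

#include <iostream>
#include <string>

// The same name, a separate implementation per type: ad hoc polymorphism.
std::string describe(int x)    { return "int: " + std::to_string(x); }
std::string describe(double x) { return "double: " + std::to_string(x); }

int main() {
    std::cout << describe(42) << '\n';    // picks the int overload
    std::cout << describe(3.14) << '\n';  // picks the double overload
}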
Conventional definition - An interface is a contract that specifies the methods which need to be implemented by the class implementing it.
The definition of an interface has changed over time. Do you think an interface has method declarations only? What about static final variables, and what about default definitions after Java 8?
Interfaces were introduced to Java because of the diamond problem with multiple inheritance, and that's what they are actually intended to address.
Interfaces are constructs that were created to get around the multiple inheritance problem; they can have abstract methods, default definitions, and static final variables.
http://www.quora.com/Why-does-Java-allow-static-final-variables-in-interfaces-when-they-are-only-intended-to-be-contracts
In short, the basic problem an interface is trying to solve is to separate how we use something from how it is implemented. But you should consider that an interface is not a contract. Read more here.