What is the difference between the concept of 'class' and 'type'?

I know this question has already been asked, but I didn't quite get it. I would like to know which is the more fundamental concept: class or type. I have a few questions; please clear these up for me.
Is type the basis of a programming language's data types?
Is a type hard-coded into the language itself, while a class is something we can define ourselves?
What are untyped languages? Please give some examples.
Is type not something that falls only under OOP concepts - I mean, is it not restricted to the OOP world?
Please clear this up for me, thanks.

I haven't worked with many languages, so my answers may only be correct in terms of Java, C#, and Objective-C.
1/ I think 'type' is actually 'data type' in the way people usually talk about it.
2/ No. We can define both types and classes ourselves. An object of class A has type A. For example, if we define String s = "123"; then s has type String and belongs to class String. But the reverse is not always true.
For example:
class B {}
class A extends B {}
B b = new A();
then you can say b has (static) type B and belongs to both class A and class B. But b does not have type A.
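To see both facts at run time, here is a small self-contained Java sketch along the lines of the example above (the printing is only for illustration):
class B {}
class A extends B {}

public class TypeVsClass {
    public static void main(String[] args) {
        B b = new A();
        // The declared (static) type of b is B, but the object it refers
        // to is an instance of class A, and therefore of class B as well:
        System.out.println(b.getClass().getSimpleName()); // prints "A"
        System.out.println(b instanceof A); // true
        System.out.println(b instanceof B); // true
    }
}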
3/ An untyped language is a language that allows you to change the type of a variable, as in JavaScript:
var s = "123"; // type string
s = 123; // now type number
4/ I don't know much, but I think it is not restricted to OOP. It can apply to procedural programming as well.

It may well depend on the language. I treat types and classes as the same thing in OO, only making a distinction between a class (the definition of a family of objects) and an instance (or object), a specific concrete occurrence of a class.
I come originally from a C world where there was no real difference between language-defined types like int and types that you made yourself with typedef or struct.
Likewise, in C++, there's little difference (probably none) between std::string and any class you put together yourself, other than the fact that std::string will almost certainly be bug-free by now. The same can't always be said of our own code :-)
I've heard people suggest that types are classes without methods but I don't believe that distinction (again because of my C/C++ background).
There is a fundamental difference in some languages between built-in ("integral", in the sense of integrated rather than integer) types and class types. Classes can be extended, but int and float (examples from C++) cannot.

In OOP languages, a class specifies the definition of an object. In many cases, that class can serve as a type, for things like parameter matching in a function.
So, for an example, when you define a function, you specify the type of data that should be passed to the function and the type of data that is returned:
int AddOne(int value) { return value+1; } uses int types for the return value and the parameter being passed in.
In languages that have both, the concepts of type and class/object can almost become interchangeable. However, there are many languages that do not have both. For instance, I believe that standard C has no support for custom-defined objects, but it certainly does still have types. On the other hand, both PHP and JavaScript are examples of languages where type is very loosely defined (basically, types are either single item, collection/array/object, or undefined [JS only]), but they have full support for classes/objects.
Another key difference: you can have methods and custom functions associated with a class/object, but not with a standard data type.
Hopefully that clarified some. To answer your specific questions:
In some ways, type could be considered a base concept of programming, yes.
Yes, with the exception that classes can be treated as types in functions, as in the example above.
An untyped language is one that lets you use any type of variable interchangeably, meaning that you can handle a string with the same code that handles an int, for instance. In practice, most 'untyped' languages actually implement a concept called duck typing, so named from the saying 'if it acts like a duck, it should be treated like a duck': they attempt to use any variable as the type that makes sense for the code encountered. Again, PHP and JavaScript are two languages that do this.
Very true, type is applicable outside of the OOP world.

Related

What are the pros and cons of typing interface method parameters generic to make the interface more accessible?

I have an application that creates XML strings from ABAP structures using many different simple transformations, and consequently many different ABAP structures.
I want to use a more flexible OO approach with an interface to do this, but due to the different structures, the method signature's importing parameter is always different.
What are the pros and cons of typing the importing parameter generically instead, so I can implement one method from one interface in each class handling the different transformations?
INTERFACE if_transformer.
  METHODS transform_xml
    IMPORTING isource_structure TYPE REF TO data
    RETURNING VALUE(rxml_string) TYPE string.
ENDINTERFACE.

...

CLASS material_transformer DEFINITION.
  PUBLIC SECTION.
    INTERFACES if_transformer.
ENDCLASS.

CLASS material_transformer IMPLEMENTATION.
  METHOD if_transformer~transform_xml.
    FIELD-SYMBOLS <structure> TYPE concrete_structure.
    ASSIGN isource_structure->* TO <structure>.
    ...
  ENDMETHOD.
ENDCLASS.
The more specific your types, the earlier you notice errors, the easier your interfaces are to understand, and the less overhead you have in validating and casting/converting to the concrete types you want to handle.
For example, consider you type the method add_number as methods add_number importing n type i. The compiler will then reject the wrong statement add_number( 'xyz' ) and this class of errors will never make it into executable code. In turn, anybody reading that method's declaration can easily see that the method accepts only integer numbers, not floats, not packeds, and definitely not strings that contain numbers. Within the method, you can probably directly take the input n and do something with it, such as result = sum_so_far + n, without having to first validate the input or convert it to something else.
In contrast, consider you type the same method add_number as methods add_number importing n type ref to data. The compiler will gladly accept add_number( ref #( 'xyz' ) ), although it is complete nonsense; this class of error will thus only be detected at runtime, with type conversion exceptions that the code around this will have to react to in a meaningful way. People reading the method's declaration have to consult its docu, unit tests, and/or code to find out what kind of input it accepts; there is no way to guess it from the specification alone. Finally, within the method, you will first have to validate and convert the input before you can process it, such as is_integer( n ), cast, assign, and the like; in case the input is unacceptable, you need to find suitable error handling mechanisms, such as throwing nice exceptions.
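The same trade-off shows up in any statically typed language. As a rough Java analogy (the class and method names here are invented for illustration):
class Adder {
    private int sumSoFar = 0;

    // Specifically typed: the compiler rejects addNumber("xyz") outright,
    // and the body can use n directly.
    void addNumber(int n) {
        sumSoFar += n;
    }

    // Generically typed: addAnything("xyz") compiles happily, and the
    // nonsense only surfaces at run time as a ClassCastException.
    void addAnything(Object o) {
        sumSoFar += (Integer) o;
    }
}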
With softly typed languages like JavaScript, using generic types is the default. However, history shows that people often prefer stronger typing, at least on the server side, leading to follow-up evolutions such as TypeScript (and runtimes like Deno that embrace it). With strongly typed languages like ABAP, the rule of thumb is to choose as precise a data type as possible.
Note that there are several levels of relaxation in generic types. For example, you should consider resorting to "partially" generic types like simple, which accepts ABAP structures, or standard table, which accepts tables, before resorting to the maximum-generic type data.
Excellent answer by Florian. I might add that an approach I've used in similar scenarios is to use a factory class that inspects the input data and instantiates the appropriate class to deal with it.

Why is type inference impractical for object oriented languages?

I'm currently researching ideas for a new programming language where ideally I would like the language to mix some functional and procedural (object oriented) concepts.
One of the things that I'm really fascinated about with languages like Haskell is that it's statically typed, but you do not have to annotate types (magic thanks to Hindley-Milner!).
I would really like this for my language; however, after reading up on the subject, it seems that most agree that type inference is impractical/impossible with subtyping/object orientation, but I have not understood why. I do not know F#, but I understand that it uses Hindley-Milner AND is object-oriented.
I would really like an explanation for this and preferably examples on scenarios where type inference is impossible for object oriented languages.
To add to seppk's response: with structural object types the problem he describes actually goes away (f could be given a polymorphic type like ∀A ≤ {x : Int, y : Int}. A → Int, or even just {x : Int, y : Int} → Int). However, type inference is still problematic.
The fundamental reason is this: in a language without subtyping, the typing rules impose equality constraints on types. These are very nice to deal with during type checking, because they can usually be simplified immediately using unification on the types. With subtyping, however, these constraints are generalised to inequality constraints. You cannot use unification any more, which has at least three unpleasant consequences:
The number and complexity of constraints explodes combinatorially.
The information you have to display to the user in case of errors is incomprehensible.
Certain forms of quantification can quickly become undecidable.
Thus, type inference for subtyping is not impossible (there have been many papers on the subject in the 90s), but it is not very practical.
A much simpler alternative is employed by OCaml, which uses so-called row polymorphism in place of subtyping. That is actually tractable.
When using nominal typing (that is a typing system where two classes whose members have the same name and the same types are not interchangeable), there would be many possible types for a method like the following:
let f(obj) =
    obj.x + obj.y
Any class that has both a member x and a member y (of types that support the + operator) would qualify as a possible type for obj and the type inference algorithm would have no way of knowing which one is the one you want.
In F# the above code would need a type annotation. So F# has object orientation and type inference, but not at the same time (with the exception of local type inference (let myVar = expressionWhoseTypeIKnow), which always works).
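To make the ambiguity concrete in a nominally typed language, consider this hedged Java sketch (both classes are invented): the body obj.x + obj.y fits either class, and every future class that declares x and y, so an inference algorithm would have no principal type to assign; each signature has to be spelled out instead.
class Point { int x, y; }
class Size  { int x, y; }

class Demo {
    // Same body, but nominal typing forces a separate, explicit
    // parameter type for each acceptable class.
    static int f(Point obj) { return obj.x + obj.y; }
    static int f(Size obj)  { return obj.x + obj.y; }
}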

How to model class hierarchies in Haskell?

I am a C# developer. Coming from the OO side of the world, I start by thinking in terms of interfaces, classes, and type hierarchies. Because of the lack of OO in Haskell, I sometimes find myself stuck, and I cannot think of a way to model certain problems in Haskell.
How to model, in Haskell, real world situations involving class hierarchies such as the one shown here: http://www.braindelay.com/danielbray/endangered-object-oriented-programming/isHierarchy-4.gif
First of all: Standard OO design is not going to work nicely in Haskell. You can fight the language and try to make something similar, but it will be an exercise in frustration. So step one is look for Haskell-style solutions to your problem instead of looking for ways to write an OOP-style solution in Haskell.
But that's easier said than done! Where to even start?
So, let's disassemble the gritty details of what OOP does for us, and think about how those might look in Haskell.
Objects: Roughly speaking, an object is the combination of some data with methods operating on that data. In Haskell, data is normally structured using algebraic data types; methods can be thought of as functions taking the object's data as an initial, implicit argument.
Encapsulation: However, the ability to inspect an object's data is usually limited to its own methods. In Haskell, there are various ways to hide a piece of data, two examples are:
Define the data type in a separate module that doesn't export the type's constructors. Only functions in that module can inspect or create values of that type. This is somewhat comparable to protected or internal members.
Use partial application. Consider the function map with its arguments flipped. If you apply it to a list of Ints, you'll get a function of type (Int -> b) -> [b]. The list you gave it is still "there", in a sense, but nothing else can use it except through the function. This is comparable to private members, and the original function that's being partially applied is comparable to an OOP-style constructor.
"Ad-hoc" polymorphism: Often, in OO programming we only care that something implements a method; when we call it, the specific method called is determined based on the actual type. Haskell provides type classes for compile-time function overloading, which are in many ways more flexible than what's found in OOP languages.
Code reuse: Honestly, my opinion is that code reuse via inheritance was and is a mistake. Mix-ins as found in something like Ruby strike me as a better OO solution. At any rate, in any functional language, the standard approach is to factor out common behavior using higher-order functions, then specialize the general-purpose form. A classic example here are fold functions, which generalize almost all iterative loops, list transformations, and linearly recursive functions.
Interfaces: Depending on how you're using an interface, there are different options:
To decouple implementation: Polymorphic functions with type class constraints are what you want here. For example, the function sort has type (Ord a) => [a] -> [a]; it's completely decoupled from the details of the type you give it other than it must be a list of some type implementing Ord.
Working with multiple types with a shared interface: For this you need either a language extension for existential types, or to keep it simple, use some variation on partial application as above--instead of values and functions you can apply to them, apply the functions ahead of time and work with the results.
Subtyping, a.k.a. the "is-a" relationship: This is where you're mostly out of luck. But--speaking from experience, having been a professional C# developer for years--cases where you really need subtyping aren't terribly common. Instead, think about the above, and what behavior you're trying to capture with the subtyping relationship.
You might also find this blog post helpful; it gives a quick summary of what you'd use in Haskell to solve the same problems that some standard Design Patterns are often used for in OOP.
As a final addendum, as a C# programmer, you might find it interesting to research the connections between it and Haskell. Quite a few people responsible for C# are also Haskell programmers, and some recent additions to C# were heavily influenced by Haskell. Most notable is probably the monadic structure underlying LINQ, with IEnumerable being essentially the list monad.
Let's assume the following operations: humans can speak, dogs can bark, and all members of a species can mate with members of the same species if they have opposite genders. I would define this in Haskell like this:
data Gender = Male | Female deriving Eq

class Species s where
  gender :: s -> Gender

-- Returns true if s1 and s2 can conceive offspring
matable :: Species a => a -> a -> Bool
matable s1 s2 = gender s1 /= gender s2

data Human = Man | Woman
data Canine = Dog | Bitch

instance Species Human where
  gender Man = Male
  gender Woman = Female

instance Species Canine where
  gender Dog = Male
  gender Bitch = Female

bark Dog = "woof"
bark Bitch = "wow"

speak Man s = "The man says " ++ s
speak Woman s = "The woman says " ++ s
Now the operation matable has type Species s => s -> s -> Bool, bark has type Canine -> String and speak has type Human -> String -> String.
I don't know whether this helps, but given the rather abstract nature of the question, that's the best I could come up with.
Edit: In response to Daniel's comment:
A simple hierarchy for collections could look like this (ignoring already existing classes like Foldable and Functor):
class Foldable f where
  fold :: (a -> b -> a) -> a -> f b -> a

class Foldable m => Collection m where
  cmap :: (a -> b) -> m a -> m b
  cfilter :: (a -> Bool) -> m a -> m a

class Indexable i where
  atIndex :: i a -> Int -> a

instance Foldable [] where
  fold = foldl

instance Collection [] where
  cmap = map
  cfilter = filter

instance Indexable [] where
  atIndex = (!!)

sumOfEvenElements :: (Integral a, Collection c) => c a -> a
sumOfEvenElements c = fold (+) 0 (cfilter even c)
Now sumOfEvenElements takes any kind of collection of integrals and returns the sum of all even elements of that collection.
Instead of classes and objects, Haskell uses abstract data types. These are really two compatible views on the problem of organizing ways of constructing and observing information. The best help I know of on this subject is William Cook's essay Object-Oriented Programming Versus Abstract Data Types. He has some very clear explanations to the effect that
In a class-based system, code is organized around different ways of constructing abstractions. Generally each different way of constructing an abstraction is assigned its own class. The methods know how to observe properties of that construction only.
In an ADT-based system (like Haskell), code is organized around different ways of observing abstractions. Generally each different way of observing an abstraction is assigned its own function. The function knows all the ways the abstraction could be constructed, and it knows how to observe a single property, but of any construction.
Cook's paper will show you a nice matrix layout of abstractions and teach you how to organize any class as an ADT, or vice versa.
Class hierarchies involve one more element: the reuse of implementations through inheritance. In Haskell, such reuse is achieved through first-class functions instead: a function in a Primate abstraction is a value and an implementation of the Human abstraction can reuse any functions of the Primate abstraction, can wrap them to modify their results, and so on.
There is not an exact fit between design with class hierarchies and design with abstract data types. If you try to transliterate from one to the other, you will wind up with something awkward and not idiomatic—kind of like a FORTRAN program written in Java.
But if you understand the principles of class hierarchies and the principles of abstract data types, you can take a solution to a problem in one style and craft a reasonably idiomatic solution to the same problem in the other style. It does take practice.
Addendum: It's also possible to use Haskell's type-class system to try to emulate class hierarchies, but that's a different kettle of fish. Type classes are similar enough to ordinary classes that a number of standard examples work, but they are different enough that there can also be some very big surprises and misfits. While type classes are an invaluable tool for a Haskell programmer, I would recommend that anyone learning Haskell learn to design programs using abstract data types.
Haskell is my favorite language; it is a pure functional language.
It does not have side effects; there is no assignment.
If you find the transition to this language too hard, maybe F# is a better place to start with functional programming. F# is not pure.
Objects encapsulate state. There is a way to achieve this in Haskell, but it is one of the things that takes more time to learn, because you must learn some category-theory concepts to deeply understand monads. There is syntactic sugar that lets you see monads as a kind of non-destructive assignment, but in my opinion it is better to spend more time understanding the basics of category theory (the notion of a category) to get a deeper understanding.
Before trying to program in OO style in Haskell, you should ask yourself whether you really use the object-oriented style in C#: many programmers use OO languages, but their programs are written in the structured style.
The data declaration allows you to define data structures combining products (equivalent to struct in the C language) and unions (equivalent to union in C); the deriving part of the declaration allows you to inherit default method implementations.
A data type (data structure) belongs to a class if it has an implementation of the set of methods in the class.
For example, if you can define a show :: a -> String method for your data type, then it belongs to the class Show; you define your data type as an instance of the Show class.
This is different from the use of 'class' in some OO languages, where it is used as a way to define structures + methods.
A data type is abstract if it is independent of its implementation. You create, mutate, and destroy the object through an abstract interface; you do not need to know how it is implemented.
Abstraction is supported in Haskell, and it is very easy to declare.
For example this code from the Haskell site:
data Tree a = Nil
            | Node { left  :: Tree a,
                     value :: a,
                     right :: Tree a }
declares the selectors left, value, and right.
The constructors may be defined as follows if you want to add them to the export list in the module declaration:
node = Node
nil = Nil
Modules are built in a similar way as in Modula. Here is another example from the same site:
module Stack (Stack, empty, isEmpty, push, top, pop) where

empty :: Stack a
isEmpty :: Stack a -> Bool
push :: a -> Stack a -> Stack a
top :: Stack a -> a
pop :: Stack a -> (a, Stack a)

newtype Stack a = StackImpl [a] -- opaque!

empty = StackImpl []
isEmpty (StackImpl s) = null s
push x (StackImpl s) = StackImpl (x:s)
top (StackImpl s) = head s
pop (StackImpl (s:ss)) = (s, StackImpl ss)
There is more to say about this subject, I hope this comment helps!

Non-nullable reference types

I'm designing a language, and I'm wondering if it's reasonable to make reference types non-nullable by default, and use "?" for nullable value and reference types. Are there any problems with this? What would you do about this:
class Foo {
    Bar? b;
    Bar b2;
    Foo() {
        b.DoSomething();  // valid, but will cause exception
        b2.DoSomething(); // ?
    }
}
My current language design philosophy is that nullability should be something a programmer is forced to ask for, not given by default on reference types (in this, I agree with Tony Hoare - Google for his recent QCon talk).
On this specific example, with the non-nullable b2, it wouldn't even pass static checks: conservative analysis cannot guarantee that b2 isn't null, so the program is not semantically meaningful.
My ethos is simple enough. References are an indirection handle to some resource, which we can traverse to obtain access to that resource. Nullable references are either an indirection handle to a resource or a notification that the resource is not available, and one is never sure up front which semantics is being used. This gives either a multitude of checks up front (Is it null? No? Yay!) or the inevitable NPE (or equivalent). Most programming resources are, these days, not massively resource-constrained or bound to some finite underlying model - null references are, simplistically, one of...
Laziness: "I'll just bung a null in here". Which frankly, I don't have too much sympathy with
Confusion: "I don't know what to put in here yet". Typically also a legacy of older languages, where you had to declare your resource names before you knew what your resources were.
Errors: "It went wrong, here's a NULL". Better error reporting mechanisms are thus essential in a language
A hole: "I know I'll have something soon, give me a placeholder". This has more merit, and we can think of ways to combat this.
Of course, solving each of the cases that NULL currently caters for with a better linguistic choice is no small feat, and may add more confusion than it helps. We can always go to immutable resources, so NULL in its only useful states (error, and hole) isn't of much real use. Imperative techniques are here to stay though, and I'm frankly glad - this makes the search for better solutions in this space worthwhile.
Having reference types be non-nullable by default is the only reasonable choice. We are plagued by languages and runtimes that have screwed this up; you should do the Right Thing.
This feature was in Spec#. They defaulted to nullable references and used ! to indicate non-nullables. This was because they wanted backward compatibility.
In my dream language (of which I'd probably be the only user!) I'd make the same choice as you, non-nullable by default.
I would also make it illegal to use the . operator on a nullable reference (or anything else that would dereference it). How would you use them? You'd have to convert them to non-nullables first. How would you do this? By testing them for null.
In Java and C#, the if statement can only accept a bool test expression. I'd extend it to accept the name of a nullable reference variable:
if (myObj)
{
    // in this scope, myObj is non-nullable, so can be used
}
This special syntax would be unsurprising to C/C++ programmers. I'd prefer a special syntax like this to make it clear that we are doing a check that modifies the type of the name myObj within the truth-branch.
I'd add a further bit of sugar:
if (SomeMethodReturningANullable() into anotherObj)
{
    // anotherObj is non-nullable, so can be used
}
This just gives the name anotherObj to the result of the expression on the left of the into, so it can be used in the scope where it is valid.
I'd do the same kind of thing for the ?: operator.
string message = GetMessage() into m ? m : "No message available";
Note that string message is non-nullable, but so are the two possible results of the test above, so the assignment is valid.
And then maybe a bit of sugar for the presumably common case of substituting a value for null:
string message = GetMessage() or "No message available";
Obviously or would only be validly applied to a nullable type on the left side, and a non-nullable on the right side.
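Mainstream libraries have since grown equivalents of this sugar as ordinary API rather than new syntax. For instance, in Java (getMessage() here is a stand-in for any call that may return null):
import java.util.Optional;

class Defaults {
    static String getMessage() { return null; } // stand-in that may return null

    public static void main(String[] args) {
        // The library analogue of the proposed "or" sugar:
        String message = Optional.ofNullable(getMessage())
                                 .orElse("No message available");
        System.out.println(message); // prints "No message available"
    }
}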
(I'd also have a built-in notion of ownership for instance fields; the compiler would generate the IDisposable.Dispose method automatically, and the ~Destructor syntax would be used to augment Dispose, exactly as in C++/CLI.)
Spec# had another syntactic extension related to non-nullables, due to the problem of ensuring that non-nullables had been initialized correctly during construction:
class SpecSharpExampleClass
{
    private string! _nonNullableExampleField;

    public SpecSharpExampleClass(string s)
        : _nonNullableExampleField(s)
    {
    }
}
In other words, you have to initialize fields in the same way as you'd call other constructors with base or this - unless of course you initialize them directly next to the field declaration.
Have a look at the Elvis operator proposal for Java 7. This does something similar, in that it encapsulates a null check and method dispatch in one operator, with a specified return value if the object is null. Hence:
String s = mayBeNull?.toString() ?: "null";
checks whether mayBeNull is null, and assigns the string "null" to s if so, or the value of mayBeNull.toString() if not. Food for thought, perhaps.
A couple of examples of similar features in other languages:
boost::optional (C++)
Maybe (Haskell)
There's also Nullable<T> (from C#) but that is not such a good example because of the different treatment of reference vs. value types.
In your example you could add a conditional message send operator, e.g.
b?->DoSomething();
To send a message to b only if it is non-null.
Make nullability a configuration setting, enforceable in the author's source code. That way, you allow people who like nullable objects by default to enjoy them in their source code, while allowing those who would like all their objects to be non-nullable by default to have exactly that. Additionally, provide keywords or another facility to explicitly mark which of your declarations of objects and types can be nullable and which cannot, with something like nullable and non-nullable, to override the global defaults.
For instance
/// "translation unit 1"
#set nullable
{ /// Scope of default override, making all declarations within the scope nullable implicitly
Bar bar; /// Can be null
non-null Foo foo; /// Overriden, cannot be null
nullable FooBar foobar; /// Overriden, can be null, even without the scope definition above
}
/// Same style for opposite
/// ...
/// Top-bottom, until reset by scoped-setting or simply reset to another value
#set nullable;
/// Nullable types implicitly
#clear nullable;
/// Can also use '#set nullable = false' or '#set not-nullable = true'. Ugly, but human mind is a very original, mhm, thing.
Many people argue that giving everyone what they want is impossible, but if you are designing a new language, try new things. Tony Hoare introduced the concept of null in 1965 because he could not resist (his own words), and we have been paying for it ever since (also his own words; the man regrets it). The point is, smart, experienced people make mistakes that cost the rest of us; don't take anyone's advice on this page as if it were the only truth, including mine. Evaluate and think about it.
I've read many, many rants on how it's us poor, inexperienced programmers who don't really understand where to use null and where not, showing us patterns and anti-patterns meant to prevent us from shooting ourselves in the foot. All the while, millions of still-inexperienced programmers produce more code in languages that allow null. I may be inexperienced, but I know which of my objects don't benefit from being nullable.
Here we are, 13 years later, and C# did it.
And, yes, this is the biggest improvement in languages since Barbara Liskov and Stephen Zilles introduced abstract data types in 1974:
Programming With Abstract Data Types
Barbara Liskov, Massachusetts Institute of Technology, Project MAC, Cambridge, Massachusetts
Stephen Zilles, Cambridge Systems Group, IBM Systems Development Division, Cambridge, Massachusetts

Abstract: The motivation behind the work in very-high-level languages is to ease the programming task by providing the programmer with a language containing primitives or abstractions suitable to his problem area. The programmer is then able to spend his effort in the right place; he concentrates on solving his problem, and the resulting program will be more reliable as a result. Clearly, this is a worthwhile goal. Unfortunately, it is very difficult for a designer to select in advance all the abstractions which the users of his language might need. If a language is to be used at all, it is likely to be used to solve problems which its designer did not envision, and for which the abstractions embedded in the language are not sufficient. This paper presents an approach which allows the set of built-in abstractions to be augmented when the need for a new data abstraction is discovered. This approach to the handling of abstraction is an outgrowth of work on designing a language for structured programming. Relevant aspects of this language are described, and examples of the use and definitions of abstractions are given.
I think null values are good: They are a clear indication that you did something wrong. If you fail to initialize a reference somewhere, you'll get an immediate notice.
The alternative would be that values are sometimes initialized to a default value. Logical errors are then a lot more difficult to detect, unless you put detection logic in those default values. This would be the same as just getting a null pointer exception.

What's the difference between a method and a function?

Can someone provide a simple explanation of methods vs. functions in OOP context?
A function is a piece of code that is called by name. It can be passed data to operate on (i.e. the parameters) and can optionally return data (the return value). All data that is passed to a function is explicitly passed.
A method is a piece of code that is called by a name that is associated with an object. In most respects it is identical to a function except for two key differences:
A method is implicitly passed the object on which it was called.
A method is able to operate on data that is contained within the class (remembering that an object is an instance of a class - the class is the definition, the object is an instance of that data).
(this is a simplified explanation, ignoring issues of scope etc.)
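A minimal Java sketch of those two differences (the names are invented for illustration):
class Counter {
    private int count = 0; // data contained within the class

    // Method: implicitly receives the Counter it is called on ("this"),
    // so it can read and write count without being handed it explicitly.
    void increment() { count++; }
}

class Util {
    // Java has no free functions; a static method is the closest analogue:
    // everything it operates on must be passed explicitly.
    static int add(int a, int b) { return a + b; }
}

Calling new Counter().increment() passes the fresh Counter implicitly, while Util.add(2, 3) is handed everything it needs.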
A method is on an object, or is static on a class.
A function is independent of any object (and outside of any class).
For Java and C#, there are only methods.
For C, there are only functions.
For C++ and Python it would depend on whether or not you're in a class.
But in basic English:
Function: Standalone feature or functionality.
Method: One way of doing something, which has different approaches or methods, but related to the same aspect (aka class).
'method' is the object-oriented word for 'function'. That's pretty much all there is to it (ie., no real difference).
Unfortunately, I think a lot of the answers here are perpetuating or advancing the idea that there's some complex, meaningful difference.
Really - there isn't all that much to it, just different words for the same thing.
[late addition]
In fact, as Brian Neal pointed out in a comment to this question, the C++ standard never uses the term 'method' when referring to member functions. Some people may take that as an indication that C++ isn't really an object-oriented language; however, I prefer to take it as an indication that a pretty smart group of people didn't think there was a particularly strong reason to use a different term.
In general: methods are functions that belong to a class, while functions can live at any other scope of the code, so you could say that all methods are functions, but not all functions are methods.
Take the following python example:
class Door:
    def open(self):
        print('hello stranger')

def knock_door():
    a_door = Door()
    Door.open(a_door)

knock_door()
The example shows a class called "Door", which has a method or action called "open"; it is called a method because it was declared inside a class. There is another piece of code with "def" just below, which defines a function; it is a function because it is not declared inside a class. This function calls the method we defined inside our class, and finally the function is called by itself.
As you can see, you can call a function anywhere, but if you want to call a method you either have to pass an object of the type of the class in which the method is declared (Class.method(object)) or invoke the method on the object (object.method()), at least in Python.
Think of methods as things only one kind of entity can do. If you have a Dog class, it makes sense to have a bark function only inside that class, and that would be a method. If you also have a Person class, it could make sense to write a function "feed" that doesn't belong to any class, since both humans and dogs can be fed, and you could call that a function since it does not belong to any class in particular.
Simple way to remember:
Function → Free (Free means it can be anywhere, no need to be in an object or class)
Method → Member (A member of an object or class)
A very general definition of the main difference between a Function and a Method:
Functions are defined outside of classes, while Methods are defined inside of and part of classes.
The idea behind the object-oriented paradigm is to treat software as being composed of... well, "objects". Objects in the real world have properties; for instance, if you have an Employee, the employee has a name, an employee id, a position, he belongs to a department, etc.
The object also knows how to deal with its attributes and perform some operations on them. Let's say we want to know what an employee is doing right now: we would ask him.
employee whatAreYouDoing.
That "whatAreYouDoing" is a "message" sent to the object. The object knows how to answer that question; we say it has a "method" to resolve the question.
So, the ways objects have to expose their behavior are called methods. Methods are thus the artifact objects have in order to "do" something.
Other possible methods are
employee whatIsYourName
employee whatIsYourDepartmentsName
etc.
Functions, on the other hand, are the means a programming language has to compute some data; for instance, you might have the function addValues(8, 8) that returns 16:
// pseudo-code
function addValues( int x, int y ) return x + y
// call it
result = addValues( 8,8 )
print result // output is 16...
Since the first popular programming languages (such as Fortran, C, and Pascal) didn't cover the OO paradigm, they only called these artifacts "functions".
For instance, the previous function in C would be:
int addValues( int x, int y )
{
return x + y;
}
It is not "natural" to say an object has a "function" to perform some action, because functions are more related to mathematical stuff while an Employee has little mathematic on it, but you can have methods that do exactly the same as functions, for instance in Java this would be the equivalent addValues function.
public static int addValues( int x, int y ) {
return x + y;
}
Looks familiar? That's because Java has its roots in C++, and C++ in C.
In the end it is just a concept; in implementation they might look the same, but in OO documentation these are called methods.
Here's an example of the Employee object described earlier, in Java:
public class Employee {
    Department department;
    String name;

    public String whatsYourName() {
        return this.name;
    }

    public String whatsYourDepartmentsName() {
        return this.department.name();
    }

    public String whatAreYouDoing() {
        return "nothing";
    }

    // Ignore the following, only set here for completeness
    public Employee(String name) {
        this.name = name;
    }
}
// Usage sample.
Employee employee = new Employee("John"); // Creates an employee called John

// If I want to display what this employee is doing, I can use its methods
// to find out.
String name = employee.whatsYourName();
String doingWhat = employee.whatAreYouDoing();

// Print the info to the console.
System.out.printf("Employee %s is doing: %s", name, doingWhat);
Output:
Employee John is doing nothing.
The difference, then, is in the "domain" where it is applied.
AppleScript has the idea of a "natural language" metaphor, which at some point OO had too - Smalltalk, for instance. I hope this makes it reasonably easier for you to understand methods in objects.
NOTE: The code is not meant to be compiled, just to serve as an example. Feel free to modify the post and add a Python example.
In the OO world, the two are commonly used to mean the same thing.
From a pure Math and CS perspective, a function will always return the same result when called with the same arguments ( f(x,y) = (x + y) ). A method on the other hand, is typically associated with an instance of a class. Again though, most modern OO languages no longer use the term "function" for the most part. Many static methods can be quite like functions, as they typically have no state (not always true).
Let's say a function is a block of code (usually with its own scope, and sometimes with its own closure) that may receive some arguments and may also return a result.
A method is a function that is owned by an object (in some object-oriented systems, it is more correct to say it is owned by a class). Being "owned" by an object/class means that you refer to the method through the object/class; for example, in Java if you want to invoke a method "open()" owned by an object "door", you need to write "door.open()".
Usually methods also gain some extra attributes describing their behaviour within the object/class, for example: visibility (related to the object oriented concept of encapsulation) which defines from which objects (or classes) the method can be invoked.
In many object oriented languages, all "functions" belong to some object (or class) and so in these languages there are no functions that are not methods.
Methods are functions of classes. In normal jargon, people interchange method and function all over. Basically you can think of them as the same thing (it is debatable whether global functions should be called methods).
http://en.wikipedia.org/wiki/Method_(computer_science)
A function is a mathematical concept. For example:
f(x,y) = sin(x) + cos(y)
says that the function f() will return the sine of the first parameter added to the cosine of the second parameter. It's just math. As it happens, sin() and cos() are also functions. A function has another property: all calls to a function with the same parameters should return the same result.
A method, on the other hand, is a function that is related to an object in an object-oriented language. It has one implicit parameter: the object being acted upon (and its state).
So, if you have an object Z with a method g(x), you might see the following:
Z.g(x) = sin(x) + cos(Z.y)
In this case, the parameter x is passed in, the same as in the function example earlier. However, the parameter to cos() is a value that lives inside the object Z. Z and the data that lives inside it (Z.y) are implicit parameters to Z's g() method.
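Translated into a hedged Java sketch (Z and g as in the notation above):
class Z {
    double y;

    // x is passed explicitly, exactly as in the function example;
    // y (really this.y) is the object's state, an implicit parameter.
    double g(double x) {
        return Math.sin(x) + Math.cos(y);
    }
}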
Historically, there may have been a subtle difference, with a "method" being something that does not return a value and a "function" being one that does. Each language has its own lexicon of terms with special meaning.
In "C", the word "function" means a program routine.
In Java, the term "function" does not have any special meaning. Whereas "method" means one of the routines that forms the implementation of a class.
In C# that would translate as:
public void DoSomething() {} // method
public int DoSomethingAndReturnMeANumber() { return 42; } // function
But really, I reiterate that there is really no difference between the two concepts.
If you use the term "function" in informal discussions about Java, people will assume you meant "method" and carry on. Don't use it in proper documents or presentations about Java, or you will look silly.
A function or a method is a named, callable piece of code which performs some operations and optionally returns a value.
In the C language the term function is used. Java and C# people would say it's a method (and a function in this case is defined within a class/object).
A C++ programmer might call it a function or sometimes a method (depending on whether they are writing procedural-style C++ code or doing it the object-oriented way; a C/C++-only programmer would likely call it a function because the term 'method' is less often used in C/C++ literature).
You use a function by just calling its name, like:
result = mySum(num1, num2);
You would call a method by referencing its object first like,
result = MyCalc.mySum(num1,num2);
A function is a set of logic that can be used to manipulate data.
A method is a function that is used to manipulate the data of the object it belongs to.
So technically, if you have a function that is not completely related to your class but was declared in the class, it's not a method; it's called bad design.
In OO languages such as Object Pascal or C++, a "method" is a function associated with an object. So, for example, a "Dog" object might have a "bark" function, and this would be considered a "method". In contrast, the "StrLen" function stands alone (it provides the length of a string provided as an argument); it is thus just a "function". JavaScript is technically object-oriented as well, but faces many limitations compared to a full-blown language like C++, C#, or Pascal. Nonetheless, the distinction should still hold.
A couple of additional facts: C# is fully object oriented so you cannot create standalone "functions." In C# every function is bound to an object and is thus, technically, a "method." The kicker is that few people in C# refer to them as "methods" - they just use the term "functions" because there isn't any real distinction to be made.
Finally - just so any Pascal gurus don't jump on me here - Pascal also differentiates between "functions" (which return a value) and "procedures" which do not. C# does not make this distinction explicitly although you can, of course, choose to return a value or not.
Methods on a class act on the instance of the class, called the object.
class Example
{
    public int data = 0; // Each instance of Example holds its internal data. This is a "field", or "member variable".

    public void UpdateData() // .. and manipulates it (This is a method by the way)
    {
        data = data + 1;
    }

    public void PrintData() // This is also a method
    {
        Console.WriteLine(data);
    }
}

class Program
{
    public static void Main()
    {
        Example exampleObject1 = new Example();
        Example exampleObject2 = new Example();

        exampleObject1.UpdateData();
        exampleObject1.UpdateData();

        exampleObject2.UpdateData();

        exampleObject1.PrintData(); // Prints "2"
        exampleObject2.PrintData(); // Prints "1"
    }
}
Since you mentioned Python, the following might be a useful illustration of the relationship between methods and objects in most modern object-oriented languages. In a nutshell what they call a "method" is just a function that gets passed an extra argument (as other answers have pointed out), but Python makes that more explicit than most languages.
# perfectly normal function
def hello(greetee):
    print("Hello", greetee)

# generalise a bit (still a function though)
def greet(greeting, greetee):
    print(greeting, greetee)

# hide the greeting behind a layer of abstraction (still a function!)
def greet_with_greeter(greeter, greetee):
    print(greeter.greeting, greetee)

# very simple class we can pass to greet_with_greeter
class Greeter(object):
    def __init__(self, greeting):
        self.greeting = greeting

    # while we're at it, here's a method that uses self.greeting...
    def greet(self, greetee):
        print(self.greeting, greetee)

# save an object of class Greeter for later
hello_greeter = Greeter("Hello")

# now all of the following print the same message
hello("World")
greet("Hello", "World")
greet_with_greeter(hello_greeter, "World")
hello_greeter.greet("World")
Now compare the function greet_with_greeter and the method greet: the only difference is the name of the first parameter (in the function I called it "greeter", in the method I called it "self"). So I can use the greet method in exactly the same way as I use the greet_with_greeter function (using the "dot" syntax to get at it, since I defined it inside a class):
Greeter.greet(hello_greeter, "World")
So I've effectively turned a method into a function. Can I turn a function into a method? Well, as Python lets you mess with classes after they're defined, let's try:
Greeter.greet2 = greet_with_greeter
hello_greeter.greet2("World")
Yes, the function greet_with_greeter is now also known as the method greet2. This shows the only real difference between a method and a function: when you call a method "on" an object by calling object.method(args), the language magically turns it into method(object, args).
(OO purists might argue a method is something different from a function, and if you get into advanced Python or Ruby - or Smalltalk! - you will start to see their point. Also some languages give methods special access to bits of an object. But the main conceptual difference is still the hidden extra parameter.)
For me, a method and a function do the same thing, if I agree that:
a function may return a value
a function may expect parameters
Just like with any piece of code, you may have objects you put in and an object that comes out as a result. While doing that, they might change the state of an object, but that would not change their basic functioning for me.
There might be a definitional difference between calling functions of objects and calling other code. But isn't that a verbal distinction, and isn't that why people use the terms interchangeably? I would be careful with the computation example mentioned above, because I hire employees to do my calculations:
new Employer().calculateSum( 8, 8 );
By doing it that way I can rely on an employer being responsible for calculations. If he wants more money, I let him go and let the garbage collector's function of disposing of unused employees do the rest, and get a new employee.
Even arguing that a method is an object's function and a function is unconnected computation will not help me. The function descriptor itself, and ideally the function's documentation, will tell me what it needs and what it may return. The rest, like manipulating some object's state, is not really transparent to me. I expect both functions and methods to deliver and manipulate what they claim to, without my needing to know in detail how they do it.
Even a pure computational function might change the console's state or append to a logfile.
From my understanding, a method is any operation that can be performed on a class. It is a general term used in programming.
In many languages, methods are represented by functions and subroutines. The main distinction most languages make is that functions may return a value back to the caller while subroutines may not. However, many modern languages only have functions, which can optionally not return any value.
For example, let's say you want to describe a cat, and you would like it to be able to yawn. You would create a Cat class with a Yawn method, which would most likely be a function without any return value.
To a first-order approximation, a method (in C++-style OO) is another word for a member function, that is, a function that is part of a class.
In languages like C/C++ you can have functions which are not members of a class; you don't call a function not associated with a class a method.
IMHO people just wanted to invent a new word for easier communication between programmers when they wanted to refer to functions inside objects.
If you say methods, you mean functions inside a class.
If you say functions, you mean simply functions outside a class.
The truth is that both words are used to describe functions. Even if you use them interchangeably, nothing goes wrong; both words describe well what you want to achieve in your code.
A function is code that has to play a role (a function) of doing something.
A method is a method of resolving the problem.
It does the same thing. It is the same thing. If you want to be super precise and go along with the convention, you can call the functions inside objects methods.
Let's not overcomplicate what should be a very simple answer. Methods and functions are the same thing. You call a function a function when it is outside of a class, and you call a function a method when it is written inside a class.
Function is a concept mainly belonging to procedure-oriented programming, where a function is an entity that can process data and return a value.
Method is a concept of object-oriented programming, where a method is a member of a class that mostly does processing on the class members.
I am not an expert, but this is what I know:
Function is a C-language term; it refers to a piece of code, and the function name is the identifier used to call it.
Method is the OO term; typically it has a this pointer among its parameters. You cannot invoke this piece of code like a C function; you need to use an object to invoke it.
The invocation mechanisms are also different. Here, invoking means finding the address of the piece of code. In C/C++, the linker uses the function symbol to locate it at link time.
Objective-C is different: invoking means a C runtime function using data structures to find the address of the implementation, so everything is resolved at run time.
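Java can imitate that run-time lookup with reflection. A hedged sketch (Dog and bark are invented; real Objective-C dispatch goes through the runtime's message-send machinery, not through anything like this code):
import java.lang.reflect.Method;

class Dog {
    public String bark() { return "woof"; }
}

class DispatchDemo {
    public static void main(String[] args) throws Exception {
        Object obj = new Dog();
        // Resolve the method by name at run time, via the class's data
        // structures, instead of binding an address at link time.
        Method m = obj.getClass().getMethod("bark");
        System.out.println(m.invoke(obj)); // prints "woof"
    }
}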
TL;DR
A Function is a piece of code to run.
A Method is a Function inside an Object.
Example of a function:
function sum() {
    console.log("sum");
}
Example of a Method:
const obj = {
    a: 1,
    b: 2,
    sum() {
    }
}
So that's why we say that the "this" keyword inside a function is not very useful unless we use it with call, apply, or bind - because call, apply, and bind will call that function as a method on an object; basically, they convert a function into a method.
I know many others have already answered, but I found the following to be a simple yet effective single-line answer. Though it doesn't look much better than other answers here, if you read it carefully it has everything you need to know about method vs. function:
A method is a function that has a defined receiver, in OOP terms, a method is a function on an instance of an object.
A class is a collection of some data and functions, optionally with a constructor.
When you create an instance (a copy, a replication) of that particular class, the constructor initializes the class and returns an object.
Now the class has become an object (without the constructor),
and functions are known as methods in the object context.
So basically:
Class <==new==> Object
Function <==new==> Method
In Java it is generally said that the constructor has the same name as the class, but in reality that constructor is like an instance block and a static block, only with a user-defined return type (i.e., the class type).
A class can have a static block, an instance block, a constructor, and functions.
An object generally has only data & methods.
Function - a function is an independent piece of code that includes some logic; it must be called independently and is defined outside of a class.
Method - a method is an independent piece of code that is called in reference to some object and is defined inside a class.
The general answer is:
a method has object context (this, or a class-instance reference),
a function has no context (null, or global, or static).
But the answer to the question depends on the terminology of the language you use.
In JavaScript (ES6) you are free to customize a function's context (this) to anything you like, whereas normally it would be linked to the (this) object-instance context.
In the Java world you always hear "only OOP classes/objects, no functions", but if you look in detail at static methods in Java, they really run in a global/null context (or the context of the class, without instantiation), so they are just functions without an object. Java teachers may tell you that functions were a rudiment of C kept in C++ and dropped in Java, but they say that to simplify the history and avoid unnecessary questions from newbies. If you look at Java from version 7 on, you can find many elements of pure functional programming (derived not so much from C as from much older Lisp) added to simplify parallel computing, and that is not the OOP-classes style.
In the C++ and D worlds the separation is stricter: you have functions, and separately objects with methods and fields. But in practice you again see functions without this and methods with this (an object context).
In FreePascal/Lazarus and Borland Pascal/Delphi, the separation between functions and objects (variables and fields) is usually similar to C++.
Objective-C comes from the C world, so you must separate C functions from the Objective-C objects-with-methods add-on.
C# is very similar to Java, but has many C++ advantages.
In C++, "method" is sometimes used to reflect the notion of a member function of a class. However, recently I found a statement in the book «The C++ Programming Language, 4th Edition», on page 586 ("Derived Classes"):
A virtual function is sometimes called a method.
This is a little bit confusing, but he said sometimes, so it roughly makes sense: the creator of C++ tends to see methods as functions that can be invoked on objects and can behave polymorphically.