Could someone provide definitions for the terms generator, enumerator, and iterator? It seems different languages use them arbitrarily, and I would like to know the exact differences.
An enumerator is a way of labeling values. If you had a container of integers, you could define an enumeration for the possible values. It is easier to remember named values than numbers; for example, imagine if everyone you knew didn't have a name but instead was given a unique number. It would be much harder to remember.
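A minimal Python sketch of that idea, using the standard enum module (the Colour values are just hypothetical examples):

    from enum import Enum

    # A hypothetical enumeration: each possible value gets a readable name.
    class Colour(Enum):
        RED = 1
        GREEN = 2
        BLUE = 3

    favourite = Colour.GREEN                 # the variable holds one of the named values
    print(favourite.name, favourite.value)   # GREEN 2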
An iterator is an object that allows you to traverse a container. You iterate through it step by step. Some containers are straightforward to step through (like a contiguous array) but others aren't (like a linked list, where each element can be randomly scattered throughout memory, or a binary tree where there may be different possible orders to step through the data). Iterators allow you to traverse the container without worrying about these kinds of details.
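A minimal sketch of the iterator protocol in Python, with a hypothetical linked list (Node, LinkedList, and LinkedListIterator are illustrative names): callers step through the container without knowing how its nodes are laid out in memory.

    class Node:
        def __init__(self, value, next=None):
            self.value, self.next = value, next

    class LinkedList:
        def __init__(self, *values):
            self.head = None
            for v in reversed(values):           # build the list back to front
                self.head = Node(v, self.head)

        def __iter__(self):
            return LinkedListIterator(self.head)

    class LinkedListIterator:
        def __init__(self, node):
            self._node = node

        def __iter__(self):
            return self

        def __next__(self):
            if self._node is None:
                raise StopIteration              # signals the end of the traversal
            value, self._node = self._node.value, self._node.next
            return value

    for x in LinkedList(1, 2, 3):
        print(x)                                 # 1, 2, 3 -- traversal details stay hidden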
As for generators, I am not familiar with them so I will leave you this quote from wikipedia:
In computer science, a generator is a special routine that can be used to control the iteration behaviour of a loop. A generator is very similar to a function that returns an array, in that a generator has parameters, can be called, and generates a sequence of values. However, instead of building an array containing all the values and returning them all at once, a generator yields the values one at a time, which requires less memory and allows the caller to get started processing the first few values immediately. In short, a generator looks like a function but behaves like an iterator.
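For example, a hedged Python sketch of a generator (countdown is just an illustrative name): it looks like a function, but it yields values one at a time instead of building the whole sequence up front.

    def countdown(n):
        while n > 0:
            yield n          # execution pauses here until the caller asks for the next value
            n -= 1

    for value in countdown(3):
        print(value)         # 3, 2, 1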
Wikipedia provides perfectly good generic definitions:
Iterator
Generator
Enumerator
In short, an iterator is an object that can be read repeatedly using a loop; a generator is a function that acts like an iterator by returning its values one by one; and an enumerator is a data type listing the possible values that a variable of that type may hold, forcing it to only contain one of those values.
As far as I know, the usage of these terms is pretty consistent between languages, albeit with different syntax (obviously). What have you seen that has confused you?
In his book "Clean Code", Robert Martin states that the ideal number of function arguments is zero. Most people will agree that too many function arguments cause lots of issues. But as far as I can see, a function with zero arguments belongs to at least one of these categories:
1. The function is trivial and always returns the same value.
2. The function lacks referential transparency: its return value depends on external mutable state.
3. The purpose of the function is its side effects.
Now, functions of type (1) are not really useful, while (2) and (3) should be avoided for well-known reasons.
I am aware that the book I mentioned is about OOP, so functions typically belong to an object and get an object reference passed as an implicit argument. But still, accessing object attributes means either (2) or (3).
So what am I missing?
If you answer this question, please don't just communicate your opinion but provide reasonable arguments or specific examples. Otherwise it will probably get closed.
So what am I missing?
The key word in Martin's statement is "ideal". If you continue reading, he writes in depth about functions with one, two, and three arguments, but only mentions niladic functions in one context - testing. He claims that they are trivial to test, presumably because there is only one possible outcome, and it's either right or wrong.
So this is an ideal, and the principle to take from it is: the fewer arguments, the better. In reality, obviously, this ideal is rarely achieved. The purpose of a function is usually to take input, whether directly, or indirectly through the object or system state (note that I would call those "arguments" as well to be consistent with Martin's analysis), and provide an output. When you have multiple arguments, the number of test cases increases exponentially, maintenance is more difficult, etc.
So you are not "missing" anything, so long as you recognize that this is an ideal goal and not something that you should take as an absolute.
Some examples of "pure" niladic functions in C#:
DateTime.MinValue
String.Empty
Niladic functions in C# that return object or system state:
DateTime.Now
object.GetHashCode()
String.Length
The object's attributes will essentially serve as the arguments to the function. In this way, you can directly manipulate the object you are working with, rather than passing in and returning values that you then assign back to the object after performing some behaviour contained in the function.
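A small Python sketch of that point (Invoice and its fields are hypothetical names, not from the original answer): the method takes no explicit arguments because the state it needs already lives on the object.

    class Invoice:
        def __init__(self, net, vat_rate):
            self.net = net
            self.vat_rate = vat_rate

        def total(self):                           # niladic: no explicit arguments...
            return self.net * (1 + self.vat_rate)  # ...it reads the object's own attributes

    print(Invoice(100.0, 0.2).total())             # 120.0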
I am very confused about the concepts of polymorphism, overloading, and overriding because they seem the same to me. Please explain these concepts and how they differ from each other.
I am very confused, so please guide me properly.
Thanks
Polymorphism can be achieved through overriding. Put briefly, polymorphism refers to the ability of an object to provide different behaviors (use different implementations) depending on its own nature; specifically, depending on its position in the class hierarchy.
Method Overriding is when a method defined in a superclass or interface is re-defined by one of its subclasses, thus modifying/replacing the behavior the superclass provides. The decision of which implementation to call is made dynamically at runtime, depending on the object the operation is called on. Notice that the signature of the method remains the same when overriding.
Method Overloading is unrelated to this kind of runtime polymorphism. It refers to defining different forms of a method (usually by receiving a different number or different types of parameters). It can be seen as static polymorphism. The decision of which implementation to call is made at compile time. Notice that in this case the signature of the method must change.
Operator overloading is a different concept, related to polymorphism, which refers to the ability of a certain language-dependent operator to behave differently based on the type of its operands (for instance, + could mean concatenation with Strings and addition with numeric operands).
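A rough Python sketch of the distinction (Animal, Dog, Cat, and Money are hypothetical names). It shows overriding, where the implementation is chosen at runtime from the object's actual type, and operator overloading, where + behaves differently for a user-defined type; Python does not support signature-based method overloading, so that part is not shown.

    class Animal:
        def speak(self):
            return "..."

    class Dog(Animal):
        def speak(self):                 # overrides Animal.speak, same signature
            return "Woof"

    class Cat(Animal):
        def speak(self):
            return "Meow"

    class Money:
        def __init__(self, cents):
            self.cents = cents

        def __add__(self, other):        # operator overloading for +
            return Money(self.cents + other.cents)

    for a in (Dog(), Cat()):
        print(a.speak())                 # Woof, Meow -- chosen at runtime

    print((Money(150) + Money(250)).cents)   # 400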
The example in Wikipedia is quite illustrative.
The following related questions might be also useful:
Polymorphism vs Overriding vs Overloading
Polymorphism - Define In Just Two Sentences
In short, no, they are not the same.
Overloading means creating methods with the same name but different parameters.
Overriding means re-defining the body of a superclass method in a subclass, to change its behavior.
Polymorphism is a wide concept which includes overriding and overloading and much more in its scope. Wikipedia's description of polymorphism can help you understand polymorphism better. The Subtype polymorphism (or inclusion polymorphism) section in particular is where you should look.
I'm watching a video course/lectures from Stanford. The course is "The Structure and Interpretation of Computer Programs"
In the first OOP lecture, the instructor (Brian Harvey) describes an OOP method as one that gives different answers for the same question, while a function in functional programming gives a certain output for a certain input.
The following code is an example of a method in OOP that gives a different answer each time it's called:-
(define-class (counter)
  (instance-vars (count 0))
  (method (next)
    (set! count (+ count 1))
    count))
Now, although the course is illustrated in Scheme, I didn't pay much attention to the language itself, so I can't explain the code; but couldn't a similar function "next" do the same thing as this "next" method?
In C, I would declare a global variable, and each time increase it by one when calling next. I know C is procedural, but I'm guessing a similar thing can be done in Scheme.
Well. With all due respect to the lecturer, these are slightly fishy definitions of both "OOP" and "functional programming". Both terms are consistently used, well, inconsistently, both in industry and academic contexts, not to mention informal use. If you dig a bit deeper, what's really going on is that there are several orthogonal concepts--different axes along which a choice is made in how to approach a program--that are being conflated, with one set of choices being arbitrarily called "OOP" despite not having anything else tying them together.
Probably the two biggest distinctions involved here are:
Identity vs. value: Do you model things by implicit identity (based on memory location or whatnot) and allow them to change arbitrarily? Or do you model things by their value, with no inherent notion of identity? If you say x = 4 does that mean that x is an alias to the timeless Platonic ideal of the number 4, or is x the name of a thing that's currently a four, but could be something else later (while still being x)?
Data vs. behavior: Do you work with simple data structures whose representation can be inspected, manipulated, and transformed? Or do you work with abstracted behaviors that do things, representing data only in terms of the things you can do with it, and let these behavioral abstractions operate on each other?
Most standard imperative languages lean toward using identity and data--pointers to C structs are about as purely this approach as possible. OOP languages tend to be defined largely by opting for behavior over data, often leaning toward identity as well but not consistently (cf. the popularity of "immutable" objects).
Functional programming usually leans more toward values rather than identity, while mixing data and behavior to various degrees.
There's a lot more going on here as well but I think that's the key part of what you're wondering here.
If anyone's curious I've elaborated a bit on some of this before: Analyzing some essential concepts of many OOP languages, more on the identity/value issue and also formal vs. informal approaches, a look at the data/behavior distinction in functional programming, probably others I can't think of. Warning, I'm kind of long-winded, these are not for the faint of heart. :P
There is a page on the excellent Haskell wiki, where differences in Functional Programming and OOP are contrasted. The Haskell wiki is a wonderful resource for everything about functional programming in general in addition to helping with the Haskell language.
Functional programming and OOP Differences
The important difference between pure functional programming and object-oriented programming is:
Object-oriented:
  Data:
    OOP asks: "What can I do with the data?"
    Producer: Class
    Consumer: Class method
  State:
    The methods and objects in OOP have some internal state (method variables and object attributes) and they possibly have side effects affecting the state of the computer's peripherals, the global scope, or the state of an object or method. Variable assignment is one good sign of something having a state.
Functional:
  Data:
    Functional programming asks: "How is the data constructed?"
    Producer: Type Constructor
    Consumer: Function
  State:
    If a purely functional program ever assigns to a variable, the variable must be considered and handled as immutable. There must not be state in pure functional programming.
    Code with side effects is often separated from the main, purely functional body of code.
    State can be passed around as an argument to a function; this is called a continuation.
Functional substitutes for OOP generators
The way to do something similar to OOP style generators (which have an internal state) with pure functional programming is to approach the problem from a different point of view, by using one of these solutions depending on the use case:
1. Process some or all values in a sequence:
The sequence type can be a list, array, sequence, or vector.
Lisp has car and Haskell has head, which take the first item from a list.
Haskell also has take, which takes the first n items, and which supports lazy evaluation and thus infinite or cyclic sequences – like OOP generators do.
Both have such first-element functions, as well as various map, reduce, or fold functions for processing sequences with a function.
Matrices usually also have some ways to map or apply a function to each item.
2. Some values from a function are needed:
The indices might be from a discrete or continuous scale (integers or floats).
Make one pure function to generate the indices (events) and feed those to another pure function (behaviour). This is called Functional reactive programming. This is a form of Dataflow programming, along with cell-oriented programming. The Actor model is also somewhat similar in operation, and a very interesting alternative to threads for handling concurrency!
3. Use a closure to confine and encapsulate the state from the outside
This is the closest substitute for the OOP way with generators (which I think actually originated to imitate closures), and also the farthest from pure functional programming, because a closure has state.
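A minimal Python sketch of that closure-based substitute (make_counter is an illustrative name): the count lives in the enclosing scope rather than in an object's instance variable, much like the Scheme counter above.

    def make_counter():
        count = 0
        def next():
            nonlocal count       # the closure captures and updates the enclosing state
            count += 1
            return count
        return next

    counter = make_counter()
    print(counter(), counter(), counter())   # 1 2 3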
"Functional" in functional programming has traditionally referred to the meaning of mathematical functions. That is, the output of a mathematical function is based solely on the inputs passed to it. Nowadays such programming is more often called pure functional programming.
In pure functional programming reassigning state is not allowed, thus writing a function such as your C example would not be possible. You are only allowed to bind a value to a variable once. An example of a language where this would not be possible is Haskell.
Most functional programming languages (Scheme included) are impure and would allow you to do so. That said, what the lecturer is saying is that writing such a function is not possible in the traditional sense of functional programming.
Well, yeah, you could do that in C.
But it's not the same - in C++ you can make each object have its own count.
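A small Python sketch of that difference (Counter is a hypothetical name): unlike a single global counter, each instance keeps its own count.

    class Counter:
        def __init__(self):
            self.count = 0

        def next(self):
            self.count += 1
            return self.count

    a, b = Counter(), Counter()
    a.next(); a.next()
    print(a.next())   # 3
    print(b.next())   # 1 -- b has its own independent count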
Can you explain the difference between aggregation, containment, and delegation?
Since you've tagged this with COM, I'll assume you're asking how COM uses these terms - in COM terminology they mean something somewhat more specific than when used in general.
Conveniently, MSDN has pages that define these - I'll give a brief summary:
Containment/Delegation - when one outer object owns (contains) and makes use of (delegates to) an inner object. The two objects maintain distinct identities and separate sets of interfaces.
Aggregation - when two or more COM objects essentially pool their interfaces and behave as though they are a single COM object. The client code is then dealing with what appears to be a single object, but is in fact an 'aggregate' of other objects.
Aggregation is usually used when you want one object to 'inherit' a set of interfaces from another object. It's somewhat complex to implement, however: COM requires that from any interface on an object you must be able to QI to any other interface, so the various objects involved have to cooperate to ensure that you can QI from any interface on one of the objects to any interface on the other, and have reference counting work across both objects.
Containment describes the idea of one class having a data member that is an object of another class/type.
Delegation expresses the idea that one class uses another class to accomplish a task or goal.
Delegation is usually accomplished by containment.
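A brief Python sketch of containment and delegation outside of COM (Engine and Car are hypothetical names): the Car contains an Engine as a data member and delegates the work of starting to it.

    class Engine:
        def start(self):
            return "engine running"

    class Car:
        def __init__(self):
            self._engine = Engine()      # containment: Car owns an Engine

        def start(self):
            return self._engine.start()  # delegation: Car forwards the task to it

    print(Car().start())                 # engine running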
Aggregation and containment are generic concepts (concepts above COM or any other technology) of object composition. The Object Composition link also has a separate section on aggregation in COM.
Similarly, you can read about delegation.
I am trying to understand the core of object-oriented programming for a PHP or ActionScript project. As far as I understand, we will have a Main class that controls different elements of the project, for example a photo slider class, a music control class, etc. I create instances of those classes inside my Main class and use their methods or properties to control those objects.
I have studied many OOP articles, but most of them only talk about inheritance, encapsulation, etc. I am not sure if I am right about this, and I would appreciate it if someone could explain more about it. Thanks!
I was asking the same question when I was just starting my career, but I came to understand object orientation as I progressed.
Here is a very basic starting point in OOP.
Step 1 - Think about objects: just try to relate them to everyday household things (your laptop, your iPad, your mobile, your pet).
Step 2 -
Try to relate objects to each other (your TV and your remote); this gives you the basic idea of how objects should relate to each other.
Step 3 -
Try to visualize how things compose to create a full feature, like your body being composed of a heart, lungs, and many other organs.
Step 4 -
Try to think about object lifetime (for example, a car engine is less useful outside a car, so if the car is an object, then this object must contain an engine, and when the car object is destroyed the engine is destroyed too).
Step 5 -
Try to learn about polymorphism (like a screwdriver that can take many shapes according to your need, then map that onto your objects; if you are using C#, try to learn about overriding the ToString() method).
Step 6 -
Try to create a real-life boundary around your real-life object (like your house; you secure your house by various means).
That is the initial learning. Read as much text as you can find and try to learn through your own examples.
Lastly, OOP is an art first; try to visualize it.
My main suggestion is to look at objects as "smart serfs": each one of them has memory (the data members) and logic (the member functions).
In my experience, the biggest strength of OOP is the control that you have on the evolution of your design: if your software is remotely useful, it will change, and OOP gives you tools to make the change sustainable. In particular:
A class should change for only one reason, so it must solve only one problem (SINGLE RESPONSIBILITY PRINCIPLE)
Changing the behaviour of a class should be done by extending it, not by modifying it (OPEN/CLOSED PRINCIPLE); see the sketch after this list
Focus on interfaces, not on inheritance
Tell, don't ask! Give orders to your objects, do not use them as "data stores"
There are other principles, but I think that these are the ones that must be really understood to succeed in OOP.
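As a rough Python sketch of the open/closed idea mentioned above (Discount and SeasonalDiscount are hypothetical names), new behaviour is added by writing a new subclass rather than by editing existing code:

    class Discount:
        def apply(self, price):
            return price                     # default: no discount

    class SeasonalDiscount(Discount):
        def apply(self, price):
            return price * 0.9               # extension: the base class is untouched

    def checkout(price, discount):
        return discount.apply(price)

    print(checkout(100, Discount()))         # 100
    print(checkout(100, SeasonalDiscount())) # 90.0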
I'm not sure I ever understood OOP until I started programming in Ruby but I think I have a reasonable grasp of it now.
It was once explained to me as the components of a car and that helped a lot...
There's such a thing as a Car (the class).
my_car and girlfriends_car are both instances of Car.
my_car has these things that exist called Tyres.
my_car has four instances of Tyres - tyre1, tyre2, tyre3, tyre4
So I have two classes - Car, Tyre
and I have multiple instances of each class.
The Car class has an attribute called Car.colour.
my_car.colour is blue
girlfriends_car.colour is pink
The sticking point for me was understanding the difference between class methods and instance methods.
Instance Methods
An instance method is something like my_car.paint_green. It wouldn't make any sense to call Car.paint_green. Paint what car green? Nope. It has to be girlfriends_car.wrap_around_tree, because an instance method has to apply to an instance of that Class.
Class Methods
Say I wanted to build a car: my_new_car = Car.build
I call a class method because it wouldn't make any sense to call it on an instance: my_car.build? my_car is already built.
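A rough Python equivalent of that Ruby example (Car, paint_green, and build mirror the names used above):

    class Car:
        def __init__(self, colour="unpainted"):
            self.colour = colour

        def paint_green(self):       # instance method: needs a particular car
            self.colour = "green"

        @classmethod
        def build(cls):              # class method: called on Car itself
            return cls()

    my_car = Car.build()             # Car.build() makes sense
    my_car.paint_green()             # my_car.paint_green() makes sense
    print(my_car.colour)             # green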
Conclusion
If you're struggling to understand OOP then you should make sure that you understand the difference between the Class itself and instances of that Class. Furthermore, you should try to understand the difference between class methods and instance methods. I'd recommend learning some Ruby or Python just so you can get a fuller understanding of OOP without the added complications of writing OOP in a non-OOP language.
Great things happen with a true OOP language. In Ruby, EVERYTHING is a class. Even nothing (Nil) is a class. Strings are classes. Numbers are classes and every class is descended from the Object class so you can do neat things like inherit the instance_methods method from Object so String.instance_methods tells you all the instance methods for a string.
Hope that helps!
Kevin.
It seems like you're asking about the procedures or "how-tos" of OOP, not the concepts.
For the how-tos, you're mostly correct: I'm not specifically familiar with PHP or ActionScript, but for those of us in .NET, your program will have some entry point which takes control, and then it will call various objects, functions, methods, or whatever -- often passing control to other pieces of code -- to perform whatever you've decided.
In pseudo-code, it might look something like:
EntryPoint
    Initialize (instantiate) a Person
    Validate the Person's current properties
    Perform some kind of update and/or calculation
    Provide the result to the user
    Exit
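And a hedged Python rendering of that flow (Person and its fields are illustrative names only):

    class Person:
        def __init__(self, name, age):
            self.name = name
            self.age = age

        def is_valid(self):
            return bool(self.name) and self.age >= 0

        def birthday(self):
            self.age += 1

    def main():
        person = Person("Ada", 36)       # initialize (instantiate) a Person
        if not person.is_valid():        # validate its current properties
            raise ValueError("invalid person")
        person.birthday()                # perform some kind of update/calculation
        print(person.name, person.age)   # provide the result to the user

    if __name__ == "__main__":
        main()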
If what you're looking for is the "why" then you're already looking in the right places. The very definitions of the terms Encapsulation, Inheritance, etc. will shed light on why we do OOP.
It's mostly about grouping code that belongs to certain areas together. In non-OOP languages you often have the problem that you can't tell which functions are used for what or modify which structures, or functions tend to do too many loosely related things. One workaround is to introduce a strict naming scheme (e.g. start every function name with the name of the structure it's associated with). With OOP, every function is tied to a data structure (the object), which makes it easier to organize your code. As your code gets larger and the number of tasks grows, inheritance starts to make a difference.
A good example is a structure representing a shape and a function that returns its center. In non-OOP code, that function must distinguish between each kind of structure. That's a problem if you add a new shape: you have to teach your function how to calculate the center for that shape. Now imagine you also had functions to return the circumference and area and so on... Inheritance solves that problem.
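A small Python sketch of that shape example (Circle and Rectangle are hypothetical names): each shape knows how to compute its own center, so adding a new shape never means editing one big type-switching function.

    class Circle:
        def __init__(self, x, y, r):
            self.x, self.y, self.r = x, y, r

        def center(self):
            return (self.x, self.y)

    class Rectangle:
        def __init__(self, x, y, w, h):
            self.x, self.y, self.w, self.h = x, y, w, h

        def center(self):
            return (self.x + self.w / 2, self.y + self.h / 2)

    for shape in (Circle(0, 0, 1), Rectangle(0, 0, 4, 2)):
        print(shape.center())            # (0, 0) then (2.0, 1.0)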
Note that you can do OOP programming in non-OOP languages (see for example glib/gtk+ in C) but a "real" OOP language makes it easier and often less error-prone to code in OOP-style. On the other hand, you can mis-use almost every OOP language to write purely imperative code :-) And no language prevents one from writing stupid and inefficient code, but that's another story.
Not sure what sort of answer you're looking for, but I think 10s of 1000s of newly graduated comp sci students will agree: no amount of books and theory is a substitute for practice. In other words, I can explain encapsulation, polymorphism, inheritance at length, but it won't help teach you how to use OO effectively.
No one can tell you how to program. Over time, you'll discover that, no matter how many different projects you're working on, you're solving essentially the same problems over and over again. You'll probably ask yourself regularly:
How do I represent an object or a process in a meaningful way to the client?
How do I reuse functionality without copy-pasting code?
What actually goes in a class / how fine-grained should classes be?
How do I support variations in functionality in a class of objects based on specialization or type?
How do I support variations in functionality without rewriting existing code?
How do I structure large applications to make them easy to maintain?
How do I make my code easy to test?
What I'm doing seems really convoluted / hacky, is there an easier way?
Will someone else be able to maintain the code when I'm finished?
Will I be able to maintain the code in 6 months or a year from now?
etc.
There are lots of books on the subject, and they can give you a good head start if you need a little advice. But trust me, time and practice are all you need, and it won't be too long -- maybe 6 or 9 months on a real project -- before OO idioms are second nature.