Requesting a check in my understanding of Objective-C - objective-c

I have been learning Objective-C as my first language and understand Classes, Objects, instances, methods, OOP in general, etc., well enough to use the language and make simple applications work, but I wanted to check on a few fundamental questions that have never been explained in the examples I followed.
I think the questions are so simple that they will confuse a lot of people, but I hope it will make sense to someone out there.
(While learning Objective-C, the authors assume I have a basic computer programming background, yet I have found that a basic computer programming background is hard to come by, since everyone teaching computer programming assumes you already have one before teaching you something else. Hence the request for help with the fundamentals.)
Passing and Returning:
When declaring methods with parameters, how is the parameter stuff actually working if the arguments being passed into the parameters can have different names than the parameter names? I hope that makes sense. I know parameter names are variables for that very reason, but...
are the arguments themselves getting mapped to a look up table or something?
Second, the argument "types" (int, for example) have to match the parameter return types in order for them to be passed into the method, and you always have to make your arguments' values equal the parameter names somewhere else in your code listing before passing them into the method?
Is the following correct: After a method gets executed it returns a particular value (if it is not void) to the class or instance that is calling the method in the first place.
Is object oriented programming really just passing "your" objects' instance methods around along with the system-generated classes and methods to produce a result? If we are passing things to methods so they can do some work to them and then return something back why not do the work in the first place eliminating the need to pass anything? A theoretical question, I guess? I assume the answer would be: because that would be a crazy big tangled mess of a method with everything happening all at once, but I wanted to ask anyway.
Thank you for your time.

Variables are just places where values are stored. When you pass a variable as an argument, you aren't actually passing the variable itself — you're passing the value of the variable, which is copied into the argument. There's no mapping table or anything — it just takes the value of the variable and sticks it in the argument.
In the case of objects, the variable is a pointer to an object that exists somewhere in the program's memory space. In this case, the value of the pointer is copied just like any other variable, but it still points to the same object.
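For example, here is a minimal sketch (the Doubler class, the doubleOf: method and the variable names are all invented for illustration):
#import <Foundation/Foundation.h>

@interface Doubler : NSObject
- (int)doubleOf:(int)number;                     // 'number' is the parameter name
@end

@implementation Doubler
- (int)doubleOf:(int)number {
    return number * 2;                           // works on a copy of whatever value was passed in
}
@end

int main(void) {
    @autoreleasepool {
        Doubler *doubler = [[Doubler alloc] init];
        int myScore = 21;                        // the argument's name differs from the parameter's name
        int result = [doubler doubleOf:myScore]; // the value 21 is copied into 'number'
        NSLog(@"%d %d", myScore, result);        // myScore is still 21; result is 42

        NSString *greeting = @"hello";           // an object variable holds a pointer...
        NSString *sameObject = greeting;         // ...so copying the value copies the pointer,
        NSLog(@"%@ %@", greeting, sameObject);   // and both names refer to the same string object
    }
    return 0;
}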
(the argument "types" … have to match the parameter return types…) It isn't technically true that the types have to be the same, though they usually should be. Some types can be automatically converted to another type. For example, a char or short will be promoted to an int if you pass them to a function or method that takes an int. There's a complicated set of rules around type conversions. One thing you usually should not do is use casts to shut up compiler warnings about incompatible types — the compiler takes that to mean, "It's OK, I know what I'm doing," even if you really don't. Also, object types cannot ever be converted this way, since the variables are just pointers and the objects themselves live somewhere else. If you assign the value of an NSString* variable to an NSArray* variable, you're just lying to the compiler about what the pointer is pointing to, not turning the string into an array.
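A quick sketch of both points (the names are invented; the commented-out line exists only to show what would go wrong):
#import <Foundation/Foundation.h>

void logLength(int length) {                     // expects an int
    NSLog(@"length = %d", length);
}

int main(void) {
    @autoreleasepool {
        short width = 7;
        logLength(width);                        // fine: the short is promoted to an int automatically

        NSString *name = @"Alice";
        NSArray *items = (NSArray *)name;        // the cast silences the compiler but converts nothing
        NSLog(@"%@", items);                     // still prints the string; it never became an array
        // [items objectAtIndex:0];              // would typically crash with an unrecognized selector
    }
    return 0;
}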
Non-void functions and methods return a value to the place where they're called, yes.
(Is object-oriented programming…) Object-oriented programming is a way of structuring your program so that it can be conceptually described as a collection of objects sending messages to each other and doing things in response to those messages.
(why not do the work in the first place eliminating the need to pass anything) The primary problem in computer programming is writing code that humans can understand and improve later. Functions and methods allow us to break our code into manageable chunks that we can reason about. They also allow us to write code once and reuse it all over the place. If we didn't factor repeated code into functions, then we'd have to repeat the code every time it is needed, which both makes the program code much longer and introduces thousands of new opportunities for bugs to creep in. 50,000-line programs would become 500 million-line programs. Not only would the program be horrendously bug-ridden, but it would be such a huge ball of spaghetti that finding the bugs would be a Herculean task.
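As a tiny, hypothetical illustration of the "write it once, reuse it everywhere" point:
#import <Foundation/Foundation.h>

// A made-up helper: written once, reused everywhere it's needed.
NSString *formattedPrice(double amount) {
    return [NSString stringWithFormat:@"$%.2f", amount];
}

int main(void) {
    @autoreleasepool {
        // Without the helper, the stringWithFormat: line would be copy-pasted at
        // every one of these call sites, and every bug in it would be fixed N times.
        NSLog(@"%@", formattedPrice(3.5));       // $3.50
        NSLog(@"%@", formattedPrice(19.99));     // $19.99
    }
    return 0;
}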
By the way, I think you might like Uli Kusterer's Masters of the Void. It's a programming tutorial for Mac users who don't know anything about programming.

"If we are passing things to methods so they can do some work to them and then return something back why not do the work in the first place eliminating the need to pass anything?"
In the beginning, that's how it was done.
But then smart programmers noticed that they were repeating copies of some work and also running out of memory, so they decided to put that chunk of work in one central place to save memory, and then call it by passing in the data from where it was before.
They gave names to the locations where the data was stuffed, because the programs were big enough that nobody memorized all the numerical addresses for every bit of data any more.
Then really, really big computers finally got more than 16k of memory, and the programs started to become big unmanageable messes, so they codified the practice as part of structured programming. It's now a religious tenet.
But it's still done, by compilers when the inline flag is set, and also sometimes by hand on code that has to be really really fast on some very constrained processors by programmers who know when and where to make targeted trade-offs.
A little reading on the History of Computers is quite informative about how we got to where we are today, and why we do such strange things.

All those type checks are used (at most) only during the compilation stage, to catch errors in the code.
Actually, during execution, all variables are just blocks of memory whose contents get sent somewhere. For example, an 'id' and an 'int' are each represented as a raw value of a few bytes (4 bytes apiece on a 32-bit system), and you can write a cast like (int)someObject or (id)someInt to reinterpret one as the other; nothing is actually converted.
And about parameter names: they are used by the compiler only to let it know which memory area to send the data to.
That's a simplified explanation; in reality all that stuff is more complicated, but I think you'll get the main idea: during execution there are no variable names or types, and everything is done via operations on memory blocks.

Related

What's the purpose of using method overloading?

I want to know the exact reason why method overloading is done in OOP instead of using different method names for every variation, as this was asked at an interview. Please help me to understand this concept.
Without using any fancy terms, let's say you're building an API, and there's a method called crush which, let's say, crushes or destroys whatever parameter is given to it. If you follow your way, you'll have to use at least three different methods, one each for an int, a float and a char (I'm using the most general types as an example). Now the more types there are, the more methods you'll have to create, each with a different name. Therefore the developer using your API is going to have to remember so many different names for something as simple as a method that destroys its parameter. Besides being difficult, it's also much less readable, because again, you are remembering too many names for a singular function (function as in job).
Method overloading isn't used for everything; it's intended to be used for methods or functions that might take different types of data at different points, but internally follow a constant procedure or do a singular thing no matter what type of data they're passed.
You won't be writing one version of print that takes an int as a parameter, and returns the modulus of that, and another version of print that takes a string as an argument, and prints that to stdout. You can, but that's not how it's meant to be used.
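Since this compilation started from Objective-C, which does not overload methods by argument type, here is a hedged sketch of the "many names for one job" situation described above; in a language with overloading (Java, C++, C#) the three selectors below could all simply share the name crush, and the compiler would pick the right one from the argument's type:
#import <Foundation/Foundation.h>

@interface Crusher : NSObject
// Without overloading, each parameter type needs its own selector name.
- (void)crushInt:(int)value;
- (void)crushFloat:(float)value;
- (void)crushChar:(char)value;
@end

@implementation Crusher
- (void)crushInt:(int)value     { NSLog(@"crushing int %d", value); }
- (void)crushFloat:(float)value { NSLog(@"crushing float %f", value); }
- (void)crushChar:(char)value   { NSLog(@"crushing char %c", value); }
@end

int main(void) {
    @autoreleasepool {
        Crusher *crusher = [[Crusher alloc] init];
        [crusher crushInt:42];
        [crusher crushFloat:3.14f];
        [crusher crushChar:'x'];
    }
    return 0;
}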
It is mainly so as to be able to follow a relatively well-known software design principle called "Syntactic Consistency" from the book "Principles of Programming Languages" by Bruce J. MacLennan, which says the following:
Similar things should look similar, whereas
different things should look different.
When you see two functions with different names, you might be tempted to believe that they do different things. If they do in fact do different things, it is okay, but what if they do the same thing? In that case, it would be nice if the functions have the exact same name, so as to indicate that they do, in fact, do the same thing.
Of course you can misuse overloading. If you go around writing functions that do different things, taking advantage of overloading to give them the same name, then you are shooting yourself in the foot.

Manipulating Objects in Methods instead of returning new Objects?

Let’s say I have a method that populates a list with some kind of objects. What are the advantages and disadvantages of following method designs?
void populate (ArrayList<String> list, other parameters ...)
ArrayList<String> populate(other parameters ...)
Which one should I prefer?
This looks like a general issue about method design but I couldn't find a satisfying answer on google, probably for not using the right keywords.
The second one seems more functional and thread safe to me. I'd prefer it in most cases. (Like every rule, there are exceptions.)
The owner of the populate method could return an immutable List (why ArrayList?).
It's also thread safe if there is no state modified in the populate method. Only passed in parameters are used, and these can also be immutable.
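In Objective-C terms (a hypothetical ItemSource class, since the rest of this compilation is Objective-C flavoured), the two designs look roughly like this; the second hands back an immutable snapshot:
#import <Foundation/Foundation.h>

@interface ItemSource : NSObject
- (void)populateList:(NSMutableArray<NSString *> *)list;   // design 1: fill a caller-owned list
- (NSArray<NSString *> *)populatedList;                    // design 2: return a fresh list
@end

@implementation ItemSource
- (void)populateList:(NSMutableArray<NSString *> *)list {
    [list addObject:@"apple"];
    [list addObject:@"pear"];
}

- (NSArray<NSString *> *)populatedList {
    NSMutableArray<NSString *> *result = [NSMutableArray array];
    [result addObject:@"apple"];
    [result addObject:@"pear"];
    return [result copy];                                  // the caller gets an immutable copy of its own
}
@end

int main(void) {
    @autoreleasepool {
        ItemSource *source = [[ItemSource alloc] init];
        NSMutableArray<NSString *> *mine = [NSMutableArray array];
        [source populateList:mine];                          // design 1: my list gets mutated
        NSArray<NSString *> *fresh = [source populatedList]; // design 2: I'm handed a new list
        NSLog(@"%@ %@", mine, fresh);
    }
    return 0;
}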
Other than what #duffymo mentioned, the second one is easier to understand, and thus to use: it is obvious what its input and output are.
Advantages to the in-out parameter:
You don't have to create as many objects. In languages like C or C++, where allocation and deallocation can be expensive, that can be a plus. In Java/C#, not so much -- GC makes allocation cheap and deallocation all but invisible, so creating objects isn't as big a deal. (You still shouldn't create them willy-nilly, but if you need one, the overhead isn't as bad as in some manual-allocation languages.)
You get to specify the type of the list. Potential plus if you need to pass that array to some other code you don't control later.
Disadvantages:
Readability issues.
In almost all languages that support function arguments, the first case is assumed to mean "do something with the entries in this list". Modifying args violates the Principle of Least Astonishment. The second is assumed to mean "give me a list of stuff", which is what you're after.
Every time you say "ArrayList", or even "List", you take away a bit of flexibility. You add some overhead to your API. What if I don't want to create an ArrayList before calling your method? I shouldn't have to, if the method's whole purpose in life is to return me some entries. That's the API's job.
Encapsulation issues:
The method being passed a list to fill can't assume anything about that list (even that it's a list at all; it could be null).
The method passing the list can't guarantee anything about what the method does with it. If it's working correctly, sure, the API docs can say "this method won't destroy existing entries". But considering the chance of bugs, that may not be worth trusting. At least if the method returns its own list, the caller doesn't have to worry about what was in it before. And it doesn't have to worry about a bug from a thousand miles away corrupting data it should never have affected.
Thread safety issues.
The list could be locked by another thread, meaning that if we try to lock on it now it could potentially lock up the app.
Or, if not locked, it could still be modified by another thread, in which case we're no less screwed. Unless you're going to write extra code to handle concurrent-modification exceptions everywhere.
Returning a new list means every call to the method can have its own list. No thread can mess with another thread's return value, unless the class is very badly designed.
Side point: Being able to specify the type of the list often leads to dependencies on the type of the list. Notice how you're passing ArrayLists around everywhere. You're painting yourself into corners by saying "This is an ArrayList" when you don't need to, but when you're passing it to a dozen methods, that's a dozen methods you'll have to change. (Not entirely related, but only slightly tangential. You could change the types to List rather than ArrayList and get rid of this. But the more you're passing that list around, the more places you'll need to change.)
Short version: Unless you have a damn good reason, use the first syntax only if you're using the existing contents of the list in your method, i.e., if you're modifying it or doing something with the existing values. If you intend to return a list of entries, then return a List of entries.
The second method is the preferred way for many reasons.
Primarily because the function signature is clearer and shows what its intentions are.
It is actually recommended that you NEVER change the value of a parameter that is passed in to a function unless you explicitly mark it as an "out" parameter.
It will also be easier to use in expressions.
And it will be easier to change in the future, including taking it to a more functional approach (for threading, etc.) if you would like to.

Does over-using function calls affect performance? Specifically in Fortran

I habitually write code with lots of functions, I find it makes it clearer. But now I'm writing some code in Fortran which needs to be very efficient, and I'm wondering whether over-using functions will slow it down, or whether the compiler will work out what's going on and optimise?
I know in Java/Python etc each function is an object, and so creating lots of functions would require them to be created in memory. I also know that in Haskell the functions are reduced into each other, so it makes little difference there.
Does anyone know about the case with Fortran? Is there a difference with using intent/pure functions/declaring fewer local variables/anything else?
Function calls carry a performance cost in stack-based languages like Fortran: each call has to set up a new stack frame, and so on.
Because of this, most compilers will try to inline function calls aggressively, if it is possible. Most of the time the compiler will make the right choice on whether or not to inline certain functions in your program.
This automatic inlining means that there is often no extra cost at all for writing your code as functions.
This means that you should write your code as cleanly and as well organized as possible, and it is likely that the compiler will do these optimizations for you. It is more important that your overall strategy for solving the problem is efficient than to worry about the performance of function calls.
Just write the code in the simplest and most well-structured way you can, then when you have it written and tested you can profile it to see if there are any hotspots which require optimisation. Only at that point should you concern yourself with micro-optimisations, and this may not even be necessary if your compiler is doing its job.
I've just spent all morning tuning an app consisting of mixed C and Fortran, and of course it uses a lot of functions. What I found (and what I usually find) is not that functions are slow, but that certain function calls (and very few of them) don't really have to be done at all. For example, clearing memory blocks, just to be neat, but doing it at high frequency.
This is not a function of language, and not really a function of inlining either. Function calls could be free and you would still have the problem that the call tree tends to be more bushy than necessary. You need to find out where to prune it. This is the "profiling" method I rely on.
Whatever you do, find out what needs to be fixed. Don't Guess. Many people don't think of this kind of question as guessing, but when they find themselves asking "Will this work, will that help?", they're poking in the dark, rather than finding out where the problems are. Once they know where the problems are, the fix is obvious.
Typically subroutine/function calls in Fortran have very little overhead. While the language standard doesn't specify argument passing mechanisms, the typical implementation is "by reference", so no copying is involved, only the setup of a new procedure call. On most modern architectures this has little overhead. Selecting good algorithms is generally far more important than micro-optimizations.
An exception to calls being cheap can be the case in which the compiler has to create temporary arrays, for example, if the actual argument is a non-contiguous array subsection and the called procedure's dummy argument is a plain contiguous array. Suppose that the dummy argument is dimension (:). Calling it with an array of dimension (:) is simple. If you request a non-unit stride in the call, e.g., array(1:12:3), then the array section is non-contiguous and the compiler may need to create a temporary copy. Suppose that the actual argument is dimension (:,:). If the call has array(:,j), the sub-array is contiguous, since in Fortran the first index varies fastest in memory, and shouldn't need a copy. But array(i,:) is non-contiguous and might require a temporary copy.
Some compilers have options to warn you when temporary array copies are needed so that you can change your code, if you wish.

Passing object references needlessly through a middleman

I often find myself needing reference to an object that is several objects away, or so it seems. The options I see are passing a reference through a middle-man or just making something available statically. I understand the danger of global scope, but passing a reference through an object that does nothing with it feels ridiculous. I'm okay with a little bit passing around, I suppose. I suspect there's a line to be drawn somewhere.
Does anyone have insight on where to draw this line?
Or a good way to deal with the problem of distributing references amongst dependent objects?
Use the Law of Demeter (with moderation and good taste, not dogmatically). If you're coding a.b.c.d.e, something IS wrong -- you've nailed forevermore the implementation of a to have a b which has a c which... EEP!-) One or at the most two dots is the maximum you should be using. But the alternative is NOT to plump things into globals (and ensure thread-unsafe, buggy, hard-to-maintain code!), it is to have each object "surface" those characteristics it is designed to maintain as part of its interface to clients going forward, instead of just letting poor clients go through such unending chains of nested refs!
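A rough Objective-C sketch of "surfacing" a characteristic instead of making clients walk the chain (Customer, Account, Address and billingCity are all invented names):
#import <Foundation/Foundation.h>

@interface Address : NSObject
@property (copy) NSString *city;
@end
@implementation Address
@end

@interface Account : NSObject
@property (strong) Address *billingAddress;
@end
@implementation Account
@end

@interface Customer : NSObject
@property (strong) Account *account;
- (NSString *)billingCity;                      // surface the one thing callers actually need
@end
@implementation Customer
- (NSString *)billingCity {
    return self.account.billingAddress.city;    // the chain lives in one place only
}
@end

int main(void) {
    @autoreleasepool {
        Customer *customer = [[Customer alloc] init];
        customer.account = [[Account alloc] init];
        customer.account.billingAddress = [[Address alloc] init];
        customer.account.billingAddress.city = @"Lyon";
        // Client code now needs one hop instead of customer.account.billingAddress.city:
        NSLog(@"%@", customer.billingCity);
    }
    return 0;
}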
This smells of an abstraction that may need some improvement. You seem to be violating the Law of Demeter.
In some cases a global isn't too bad.
Consider, you're probably programming against an operating system's API. That's full of globals, you can probably access a file or the registry, write to the console. Look up a window handle. You can do loads of stuff to access state that is global across the whole computer, or even across the internet... and you don't have to pass a single reference to your class to access it. All this stuff is global if you access the OS's API.
So, when you consider the number of global things that often exist, a global in your own program probably isn't as bad as many people try and make out and scream about.
However, if you want to have very nice OO code that is all unit testable, I suppose you should be writing wrapper classes around any access to globals, whether they come from the OS or are declared yourself, to encapsulate them. This means your class that uses this global state can get references to the wrappers, and they could be replaced with fakes.
Hmm, anyway. I'm not quite sure what advice I'm trying to give here, other than say, structuring code is all a balance! And, how to do it for your particular problem depends on your preferences, preferences of people who will use the code, how you're feeling on the day on the academic to pragmatic scale, how big the code base is, how safety critical the system is and how far off the deadline for completion is.
I believe your question is revealing something about your classes. Maybe the responsibilities could be improved? Maybe moving some code would solve problems?
Tell, don't ask.
That's how it was explained to me. There is a natural tendency to call classes to obtain some data. Taken too far, asking too much typically leads to heavy "getter sequences". But there is another way. I must admit it is not easy to find, but it improves gradually in a specific code base and in the coder's habits.
Class A wants to perform a calculation, and asks B for its data. Sometimes, it is appropriate that A tells B to do the job, possibly passing some parameters. This could replace B's "getName()", used by A to check the validity of the name, with an "isValid()" method on B.
"Asking" has been replaced by "telling" (calling a method that executes the computation).
For me, this is the question I ask myself when I find too many getter calls. Gradually, the methods find their place in the correct object, and everything gets a bit simpler: I have fewer getters and fewer calls to them. I have less code, and it carries more meaning, a better alignment with the functional requirement.
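A minimal Objective-C sketch of the getName()/isValid() swap described above (the Person class and its validity rule are invented for illustration):
#import <Foundation/Foundation.h>

@interface Person : NSObject
@property (copy) NSString *name;
- (BOOL)isValid;                          // hypothetical rule: a person is valid if it has a name
@end

@implementation Person
- (BOOL)isValid {
    return self.name.length > 0;          // the object that owns the data applies the rule
}
@end

int main(void) {
    @autoreleasepool {
        Person *person = [[Person alloc] init];
        person.name = @"Ada";
        // "Telling" rather than "asking": no getter call just to re-implement Person's own rule.
        if ([person isValid]) {
            NSLog(@"valid person: %@", person.name);
        }
    }
    return 0;
}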
Move the data around
There are other cases where I move some data. For example, if a field moves two objects up, the length of the "getter chain" is reduced by two.
I believe nobody can find the correct model at first.
I first think about it (using hand-written diagrams is quick and a big help), then code it, then think again facing the real thing... Then I code the rest, and any smells I feel in the code, I think again...
Split and merge objects
If a method on A needs data from C, with B as a middle man, I can look at whether A and C have something in common. Possibly, A or a part of A could become C (a possible splitting of A, or a merging of A and C)...
However, there are cases where I keep the getters of course.
But it's less likely a long chain will be created.
A long chain will probably get broken by one of the techniques above.
I have three patterns for this:
Pass the necessary reference to the object's constructor -- the reference can then be stored as a data member of the object, and doesn't need to be passed again; this implies that the object's factory has the necessary reference. For example, when I'm creating a DOM, I pass the element name to the DOM node when I construct the DOM node.
Let things remember their parent, and get references to properties via their parent; this implies that the parent or ancestor has the necessary property. For example, when I'm creating a DOM, there are various things which are stored as properties of the top-level DomDocument ancestor, and its child nodes can access those properties via the reference which each one has to its parent.
Put all the different things which are passed around as references into a single class, and then pass around just that one class instance as the only thing that's passed around. For example, there are many properties required to render a DOM (e.g. the GDI graphics handle, the viewport coordinates, callback events, etc.) ... I put all of these things into a single 'Context' instance which is passed as the only parameter to the methods of the DOM nodes to be rendered, and each method can get whichever properties it needs out of that context parameter.
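A rough Objective-C sketch combining the first and third patterns (RenderContext, DOMNode and all their members are hypothetical):
#import <Foundation/Foundation.h>

// Pattern 3: bundle everything the render methods might need into one context object.
@interface RenderContext : NSObject
@property (strong) id graphicsHandle;     // whatever the platform drawing handle happens to be
@property double scale;
@property (strong) NSDictionary *options;
@end
@implementation RenderContext
@end

@interface DOMNode : NSObject
// Pattern 1: a dependency the node always needs is passed to the constructor and stored.
- (instancetype)initWithName:(NSString *)name;
// Everything render-specific arrives through the single context parameter.
- (void)renderWithContext:(RenderContext *)context;
@end

@implementation DOMNode {
    NSString *_name;
}
- (instancetype)initWithName:(NSString *)name {
    if ((self = [super init])) {
        _name = [name copy];
    }
    return self;
}
- (void)renderWithContext:(RenderContext *)context {
    // Each method pulls only the properties it needs out of the context.
    NSLog(@"rendering %@ at scale %.1f", _name, context.scale);
}
@end

int main(void) {
    @autoreleasepool {
        RenderContext *context = [[RenderContext alloc] init];
        context.scale = 2.0;
        DOMNode *node = [[DOMNode alloc] initWithName:@"title"];
        [node renderWithContext:context];  // one parameter carries everything that's needed
    }
    return 0;
}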

Understanding Variables in Visual Basic

What's the main problem if I don't declare the type of a variable? Like, Dim var1 versus Dim var1 as Integer.
The main reason to declare variable types in languages which allow you to use variant types is as a check on yourself. If you have a variable that you're using to hold a string, and then by accident you pass it to a function that expects an integer, the compiler can't tell you that you messed up unless you told it that that variable was supposed to always be a string. Instead, you're stuck with your string being reinterpreted as an integer, which is pretty much never going to give you what you want and the results will likely be confusing, and it will be hard to track down the bug.
In pretty much all languages there are a lot of constructs that you could leave out and your program would still work, but which exist as a check on the programmer. The first job of a compiler is to make your code into an executable. But the second job of the compiler is to try as much as possible to make sure that the programmer didn't make a mistake. Especially when your program gets large, it's easier to let the compiler find mistakes like this than to trust that you typed everything exactly right.
Additionally, there is usually some processing overhead associated with variants, but this is a more minor concern.
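In Objective-C terms (the language the opening question is about), the same trade-off looks roughly like this; the variable names are invented:
#import <Foundation/Foundation.h>

int main(void) {
    @autoreleasepool {
        id something = @"a string";         // like an undeclared variable: the compiler can't check much
        // [something objectAtIndex:0];     // compiles without complaint, but would crash at runtime

        NSString *title = @"a string";      // declared type
        // [title objectAtIndex:0];         // the compiler rejects this: NSString declares no such method
        NSLog(@"%@ %@", something, title);
    }
    return 0;
}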
There are a few reasons:
Vastly improved type safety.
Lower cognitive overhead; the compiler and Intellisense can help you.
Lower performance overhead; transforming things to and from Variant types has a small but nontrivial cost.
Eliminates need for naming warts (e.g., lblTitle to tell you that something is supposed to hold a Label).
Moves some kinds of runtime errors to compile-time errors, which is a big productivity win.
Someone else already mentioned Intellisense, but it's worth repeating.
Additionally, when you declare an explicit type for your variable you make it possible for the compiler to do all kinds of extra type checking and validation on your code that would not otherwise be possible. What happens is that now certain kinds of very common error are caught and fixed at compile time rather than run time. The user never sees them. You don't want to leave errors for run-time.
You say "it could be anything" — and that is true. But you then go on to say "so it must be good". That doesn't necessarily follow, and it's very dangerous. Not everything can be assigned or combined with everything else. It could be anything — but it is something, or rather some specific thing. Without an explicit type, the compiler has no way to know what and can't help you avoid errors.
Superficially, if you don't declare the type, Intellisense can't help you with it because it doesn't know what type it is.
Contrast this with Python's typing: Python also allows a developer to use a variable without declaring its type in advance, but every value it holds still has one definite type and is never silently reinterpreted as something else. A Variant, by contrast, can be assigned any type of value initially and a different type can be stored later on without any warning or complaint. So you can put a string into a variable that previously held a numeric.
Dim myvar1
myvar1 = 1
'A whole lot more code
myvar1 = "this string"
If you ever have to maintain someone else's code, you'll start to understand why this sort of thing (changing a variable type silently) can be extra tough to maintain. Especially if you're using a module level variable this could lead to some really interesting problems. This is along the same lines as using Option Explicit in VB code. Without Option Explicit you can do this sort of thing without realizing it:
myvar1 = 1
'A whole lot more code here too
myvarl = 2
In some fonts those two variable names would be impossible to distinguish and this could lead to some tough to find bugs.