What does it really mean that a programming language is stackless?

According to this answer
https://stackoverflow.com/questions/551950/what-stackless-programming-languages-are-available/671296#671296
all of these programming languages are stackless:
Stackless Python
PyPy
Lisp
Scheme
Tcl
Lua
Parrot VM
What does it really mean for them to be stackless? Does it mean they don't use a call stack? If they don't use a call stack, what do they use?

What does it really mean for them to be stackless? Does it mean they don't use a call stack?
Yes, that's about right.
If they don't use a call stack, what do they use?
The exact implementation will, of course, vary from language to language. In Stackless Python, there's a dispatcher which starts the Python interpreter running with the topmost frame and its results. The interpreter processes opcodes as needed, one at a time, until it reaches a CALL_FUNCTION opcode, the signal that you're about to enter a function. This causes the interpreter to build a new frame with the relevant information and return to the dispatcher with the unwind flag. From there, the dispatcher begins anew, pointing the interpreter at the topmost frame.
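To make that flow more concrete, here is a minimal sketch of the general idea in Kotlin, with invented names; this is not Stackless Python's actual code, just the shape of a dispatcher loop that runs heap-allocated frames instead of letting the interpreter recurse on the native call stack:

    // Hypothetical types: a Frame executes one step and reports whether it
    // called another function or finished.
    sealed interface StepResult
    class Frame(val caller: Frame?, val body: (Frame) -> StepResult) {
        fun step(): StepResult = body(this)
    }
    class PushFrame(val callee: Frame) : StepResult   // a CALL_FUNCTION-like opcode was hit
    class PopFrame(val value: Any?) : StepResult      // this frame finished; "return" a value

    // The dispatcher: the only loop that drives execution. Entering a function
    // swaps in a new topmost frame; returning swaps the caller back in. The
    // native stack never grows with the interpreted program's call depth.
    fun dispatch(entry: Frame) {
        var top: Frame? = entry
        while (top != null) {
            top = when (val result = top.step()) {
                is PushFrame -> result.callee
                is PopFrame -> top.caller
            }
        }
    }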
Stackless languages eschew call stacks for a number of reasons, but in many cases they do so because certain programming constructs become much easier to implement without one. The canonical one is continuations. Continuations are very powerful, very simple control structures that can represent any of the usual control structures you're probably already familiar with (while, do, if, switch, et cetera).
If that's confusing, you may want to try wrapping your head around the Wikipedia article, and in particular the cutesy continuation sandwich analogy:
Say you're in the kitchen in front of the refrigerator, thinking about a sandwich. You take a continuation right there and stick it in your pocket. Then you get some turkey and bread out of the refrigerator and make yourself a sandwich, which is now sitting on the counter. You invoke the continuation in your pocket, and you find yourself standing in front of the refrigerator again, thinking about a sandwich. But fortunately, there's a sandwich on the counter, and all the materials used to make it are gone. So you eat it.

They don't use a call stack, because they operate in continuation-passing style. If you're not familiar with tail call optimization, that's probably a good first step to understanding what this means.
To emulate the traditional call/return on this model, instead of pushing a return address and expecting the remainder of the frame to remain untouched, the caller closes over the remainder of its code and any variables that are still needed (the rest are freed). It then performs a tail call to the callee, passing this continuation as an argument. When the callee "returns", it does so by calling this continuation, passing the return value as an argument to it.
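Here is a rough Kotlin sketch of that emulation (illustrative only; the names are made up, and a real implementation would also trampoline these calls so the host stack doesn't grow):

    // Each CPS function takes an extra argument: the continuation to call
    // instead of returning.
    fun addCps(a: Int, b: Int, k: (Int) -> Unit) = k(a + b)
    fun squareCps(x: Int, k: (Int) -> Unit) = k(x * x)

    fun main() {
        // Computing square(1 + 2): the caller closes over "what happens next"
        // and hands it to the callee rather than waiting on a return address.
        addCps(1, 2) { sum ->
            squareCps(sum) { result ->
                println(result)   // prints 9
            }
        }
    }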
As far as the above goes, it's merely a complicated way to do function calls. However, it generalizes very nicely to more complex scenarios:
Exception/finally/etc. blocks are very easily modeled - if you can pass one "return" continuation as an argument, you can pass 2 (or more) just as easily (see the sketch after this list). Lisp-style "condition handler" blocks (which may or may not return control to the caller) are also easy - pass a continuation for the remainder of the function, which might or might not be called.
Multiple return values are similarly made easy - pass several arguments on to the continuation.
Return temporaries/copying are no longer different from function argument passing. This often makes it easier to eliminate temporaries.
Tail recursion optimization is trivial - the caller simply passes on the "return" continuation it received rather than capturing a new one.
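As a small illustration of the first and last points, here is a hedged Kotlin sketch (a hypothetical example, not any particular language's runtime) of passing a second "error" continuation, and of tail recursion simply forwarding the continuation it received:

    // Two continuations: the callee decides which one to invoke, so there is
    // no stack to unwind on the error path.
    fun parseIntCps(s: String, ok: (Int) -> Unit, err: (String) -> Unit) {
        val n = s.toIntOrNull()
        if (n != null) ok(n) else err("not a number: $s")
    }

    // Tail-recursive countdown in CPS: each step passes along the *same*
    // continuation `k` rather than capturing a new one.
    fun countdownCps(n: Int, k: (Int) -> Unit) {
        if (n == 0) k(0) else countdownCps(n - 1, k)
    }

    fun main() {
        parseIntCps("42", ok = { println("ok: $it") }, err = { println("failed: $it") })
        parseIntCps("oops", ok = { println("ok: $it") }, err = { println("failed: $it") })
        countdownCps(3) { println("done at $it") }
    }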

Synchronized collection that blocks on every method

I have a collection that is commonly used between different threads. In one thread I need to add items, remove items, retrieve items and iterate over the list of items. What I am looking for is a collection that blocks access to any of its read/write/remove methods whenever any of these methods are already being called. So if one thread retrieves an item, another thread has to wait until the reading has completed before it can remove an item from the collection.
Kotlin doesn't appear to provide this. However, I could create a wrapper class that provides the synchronization I'm looking for. Java does appear to offer the synchronizedList class but from what I read, this is really for blocking calls on a single method, meaning that no two threads can remove an item at the same time but one can remove while the other reads an item (which is what I am trying to avoid).
Are there any other solutions?
A wrapper such as the one returned by synchronizedList synchronizes calls to every method, using the wrapper itself as the lock. So one thread would be blocked from calling get(), say, while another thread is currently calling add(). (This is what the question seems to ask for.)
However, as the docs to that method point out, this does nothing to protect sequences of calls, such as you might use when iterating through a collection. If another thread changes the collection in between your calls to next(), then anything could happen. (This is what I think the question is really about!)
To handle that safely, your options include:
Manual synchronization. Surround each sequence of calls to the collection in a synchronized block that synchronises on the collection, e.g.:
    import java.util.Collections

    val list = Collections.synchronizedList(mutableListOf<String>())
    // …
    synchronized(list) {
        for (i in list) {
            // …
        }
    }
This is straightforward, and relatively easy to do if the collection is under your control. But if you miss any sequences, then you could get unexpected behaviour. Also, you'll need to keep your sequences short, to avoid holding the lock for an extended time and affecting performance.
Use a concurrent collection implementation which provides primitives letting you do all the processing you need in a single call, avoiding iteration and other sequences.
For maps, Java provides very good support with its ConcurrentMap interface, and high-performance implementations such as ConcurrentHashMap. These have methods allowing you to iterate, update single or multiple mappings, search, reduce, and perform many other whole-map operations in a single call, avoiding any concurrency problems (see the sketch after this list).
For sets (as per this question) you can use a ConcurrentSkipListSet, or you can create one from a ConcurrentHashMap with newKeySet().
For lists (as per this question), there are fewer options. (I think concurrent lists are much less commonly needed.) If you don't need random access, ConcurrentLinkedQueue may suffice. Or if modification is much less common than iteration, CopyOnWriteArrayList could work.
There are many other concurrent classes in the java.util.concurrent package, so it's well worth looking through to see if any of those is a better match for your particular case.
If you have specialised requirements, you could write your own collection implementation which supports them. Obviously this is more work, and only worthwhile if none of the above approaches does what you want.
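For the second option, here is a small sketch of what "do the processing in a single call" looks like with the standard java.util.concurrent API (the word-counting use case is just an invented example):

    import java.util.concurrent.ConcurrentHashMap

    fun main() {
        val counts = ConcurrentHashMap<String, Int>()

        // Atomic per-key read-modify-write: no external synchronized block needed.
        listOf("a", "b", "a").forEach { word -> counts.merge(word, 1, Int::plus) }
        counts.compute("b") { _, n -> (n ?: 0) + 10 }

        // Iteration is still possible: ConcurrentHashMap's iterators are weakly
        // consistent, so they tolerate concurrent updates without throwing.
        for ((word, n) in counts) println("$word -> $n")
    }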
In general, I think it's well worth stepping back and seeing whether iteration is really needed. Historically, in imperative languages all the way from FORTRAN through BASIC and C up to Java, the for loop has traditionally been the tool of choice (sometimes the only structure) for operating on collections of data — and for those of us who grew up on those languages, it's what we reach for instinctively. But the functional programming paradigm provides alternative tools, and so in languages like Kotlin which provide some of them, it's good to stop and ask ourselves “What am I ultimately trying to achieve here?” (Often what we want is actually to update all entries, or map to a new structure, or search for an element, or find the maximum — all of which have better approaches in Kotlin than low-level iteration.)
After all, if you can tell the compiler what you want to do, instead of how to do it, then your program is likely to be shorter and easier to read and maintain, freeing you to think about more important things!
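To illustrate with plain Kotlin stdlib calls (the data here is made up): expressing the goal directly often turns a whole iteration into one expression, which in the concurrent case also keeps the synchronized block very short:

    import java.util.Collections

    fun main() {
        val list = Collections.synchronizedList(mutableListOf("apple", "fig", "cherry"))

        // "Find the longest entry" as one expression inside a short critical
        // section, instead of a hand-written loop with its own temporaries.
        val longest = synchronized(list) { list.maxByOrNull { it.length } }
        println(longest)   // cherry
    }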

What are the specifics about the continuations upon which Raku(do) relies?

The topic of delimited continuations was barely discussed among programming language enthusiasts in the 1990s and 2000s. It has recently been re-emerging as a major thing in programming language discussions.
My hope is that someone can at least authoritatively say whether the continuations underlying Rakudo (as contrasted with Raku) do or don't have each of the six characteristics listed below. I say a bit more about the sort of answer I'm hoping for after the list.
Quoting verbatim (with a formatting touch up) from an online message[1] written by the person driving the work on adding continuations to the JVM:
Asymmetric: When the continuation suspends or yields, the execution returns to the caller (of Continuation.run()). Symmetric continuations don't have the notion of a caller. When they yield, they must specify another continuation to transfer the execution to. Neither symmetric nor asymmetric continuations are more powerful than one another, and each could be used to simulate the other.
Stackful: The continuation can be suspended at any depth in the call-stack, rather than in the same subroutine where the delimited context begins when the continuation is stackless (as is the case in C#). I.e. the continuation has its own stack rather than just a single subroutine frame. Stackful continuations are more powerful than stackless ones.
Delimited: The continuation captures the execution context that starts with a specific call (in our case, the body of a certain runnable) rather than the entire execution state all the way up to main(). Delimited continuations are strictly more powerful than undelimited ones (http://okmij.org/ftp/continuations/undelimited.html), the latter considered "not practically useful" (http://okmij.org/ftp/continuations/against-callcc.html).
Multi-prompt: Continuations can be nested, and anywhere in the call stack, any of the enclosing continuations can be suspended. This is similar to nesting of try/catch blocks, and throwing an exception of a certain type that unwinds the stack up to the nearest catch that handles it rather than just the nearest catch. An example of nested continuations can be using a Python-like generator inside a virtual thread. The generator code can do a blocking IO call, which will suspend the enclosing thread continuation, and not just the generator: https://youtu.be/9vupFNsND6o?t=2188
One-shot/non-reentrant: Every time we continue a suspended continuation its state is mutated, and we cannot continue it from the same suspension state multiple times (i.e. we can't go back in time). This is unlike reentrant continuations where every time we suspend them, a new immutable continuation object that represents a particular suspension point is returned. I.e. the continuation is a single point in time, and every time we continue it we go back to that state. Reentrant continuations are strictly more powerful than non-reentrant ones; i.e. they can do things that are strictly impossible with just one-shot continuations.
Cloneable: If we are able to clone a one-shot continuation we can provide the same ability as reentrant continuations. Even though the continuation is mutated every time we continue it, we can clone its state before continuing to create a snapshot of that point in time that we can return to later.
Aiui continuations aren't directly exposed in Raku, so perhaps the correct answer related to Raku (as against Rakudo) would be "there are no continuations". But that's not clear to me so in the following, in which I describe what I'm hoping might be in an answer if I'm very lucky, I'll pretend it makes some sense to talk about them in the context of both Raku and Rakudo as two distinct realms.
Here's the sort of answer I'm imagining would be possible (though I'm just somewhat wildly guessing at what is actually true):
"As a "100 year" language design, Raku's current underlying semantic [execution?] model requires, at minimum, stackless one-shot multi prompt delimited continuations.
From a theoretic pov, Raku's design can never expand to require that continuations are cloneable but it could theoretically expand to require they are stackful.
Rakudo implements the currently required continuation semantics.
MoarVM has support for these semantics built in, and could realistically track the theoretically possible expansions of requirements if Raku's design ever so expands.
The JVM and JS backends have suitable shims that achieve the same thing, albeit at a cost to performance. It seems plausible that the JVM backend could switch to using continuations that are native to the JVM if it comes to pass that it gets them, provided of course that they meet requirements, but my current impression is that it would likely realistically be perhaps a decade away, or more, before we would need to consider crossing that bridge."
(Or something vaguely like that.)
If an answer also provided a bit more detail on something like the above, perhaps some code links, that would be a particularly awesome addition.
Similarly, if an answer included a couple brief examples of how this continuation power surfaces in current Raku features, and a speculation about how it might one day, say 10 years from now, surface in other features, that would make an answer an over-the-top brilliant one.
PS. Thank you to #Larry who understood things deeply enough to know continuations needed to be part of the picture; to Stefan O'Rear for his contributions, including the initial implementations of what I think are one-shot multi prompt delimited continuations; and to jnthn for making the dream come true.
Footnotes
1 There is work underway to introduce continuations as a first class construct to the JVM. A key driver of this effort is Ron Pressler. The above is based on a message he wrote in November.
Rakudo uses continuations as an implementation strategy for two features:
gather/take - for implementing lazy iterators
Making await on the thread pool non-blocking
The characteristics of the continuations implemented follow the requirements of these language features. I'll go through them in a slightly different order than above because it eases explaining.
Stackful - yes, because we need to be able to do the take or await at any depth in the call stack relative to the gather or the thread pool worker's work loop. For example, you could write a recursive graph traversal algorithm inside of a gather and then take each encountered node. For await, this is at the heart of the difference between Raku's await and await as seen in many other languages: you don't have to refactor all the way up the call stack (see the contrast sketch after this list).
Delimited - yes. The continuation reset operation installs a tag (or "prompt"), and when we do a continuation control operation, we slice the stack at this delimiter. I can't imagine how you'd implement the Raku features involved without them being delimited.
Multi-prompt - yes, this is required because you can be iterating one data source provided by a gather inside of another gather's implementation, or do an await inside of a gather.
Asymmetric - after the continuation has been taken, execution continues after the reset instruction. In the await case, we go and find another task in the worker task queue, and in the take case we're back in the pull-one method of the iterator and can return the taken value. I think this approach fits well in a language where only a few features use continuations.
One-shot/non-reentrant - yes, and at least in MoarVM the memory safety of the runtime depends on this property. It is enforced by an atomic compare and swap operation, so if two threads were to race to invoke the continuation, only one could ever succeed. No Raku features need the additional complexity that reentrant continuations would imply.
Cloneable - no, because no Raku features need it. In theory this isn't too awful to implement in MoarVM in terms of saying "yes, we can do it", but I suspect it raises a lot of questions like "how deep should the clone go". If you just cloned all the invocation records and similar, you'd still share Scalar containers, Arrays, etc. between the clones.
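To make the "stackful" point above concrete by contrast, here is a hedged Kotlin sketch (not Rakudo code) of what the stackless approach looks like in practice: a suspension point may only appear inside a suspend function, so the marker has to propagate up through every caller, which is exactly the call-stack refactoring that Raku's await avoids:

    import kotlin.coroutines.Continuation
    import kotlin.coroutines.EmptyCoroutineContext
    import kotlin.coroutines.resume
    import kotlin.coroutines.startCoroutine
    import kotlin.coroutines.suspendCoroutine

    // The suspension point...
    suspend fun leaf(): String = suspendCoroutine { cont -> cont.resume("data") }

    // ...forces every caller on the way up to be marked `suspend` as well.
    suspend fun middle(): String = leaf()
    suspend fun top(): String = middle()

    fun main() {
        // Minimal stdlib-only driver, just to run the chain once.
        ::top.startCoroutine(Continuation(EmptyCoroutineContext) { result ->
            println(result.getOrNull())   // data
        })
    }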
As I understand it - though I follow from a distance - the JVM continuations are at least partly aimed at the same design space that the Raku await mechanism is in, and so I'd be surprised if they didn't end up providing what Raku needs. This would clearly simplify compilation of Raku code to the JVM (currently it does the global CPS transform as it does code generation, which curiously turned out simpler than I expected), and it'd almost certainly perform much better too, because the transform required probably obscures quite a few things from the perspective of the JIT compiler.
So far as code goes, you can see the current continuations implementation, which uses the continuation data structure which in turn has various bits of memory management. At the time of writing, these have all been significantly refactored as part of the new callstack representation required by ongoing dispatcher work; those changes do make working with continuations more efficient, but don't change the overall set of operations.

Differences between Red's 5 function types, and why does it distinguish them?

In Red, there are functions of datatypes function!, op!, native!, routine! and action!. What are the differences between them? As far as I know, function! is used for user-defined functions, op! for infix operators, and routine! for functions defined in Red/System, but why is there a need for the other two?
function!
As you've guessed yourself, function!s are user-defined functions that support refinements and typechecking, and can also contain embedded docstrings.
Typically, function! values are created with the func, function, does and has constructors, and utilize the so-called spec dialect; but, in theory, nothing stops you from making your own constructors or devising your own spec formats.
It's also worth noting that function!s fully support reflection.
op!
op!s are infix wrappers on top of the other 4 types of functions - they take one value on the left and the result of an expression on the right, and they also take precedence over other functions during evaluation.
op! values are limited to two arguments, don't support refinements, and have a limited support for reflection (e.g. you can't inspect their bodies with body-of).
routine!
routine!s exist in both realms of Red and Red/System (the low-level dialect on top of which the Red runtime is built). Their specs are written in the spec dialect, but their bodies contain Red/System code. Oh, and they support reflection.
Usually they are used for library bindings (like the SQL lib you've mentioned), interaction with the runtime, or for performance bottlenecks (Red/System is a compiled language, so rewriting performance-critical parts of your app as a set of routine!s will give you a significant boost, at the cost of mandatory compilation).
native!
native!s are functions written in Red/System (for performance, simplicity or feasibility reasons) and compiled down to native code (hence the name). Not sure what else can be said about them, aside from implementation details. native!s aren't very user-facing, so you might want to study Red's source code in case you have any questions left.
action!
action!s are a standardized set of functions written in Red/System (just like native!s) that each datatype implements (or inherits) as its "methods". action!s are polymorphic in the sense that they dispatch on their first argument:
>> add 1 2%
== 1.02
>> add 2% 1
== 102%
>> append [1] "2"
== [1 "2"]
>> append "1" [2]
== "12"
In mainstream languages this typically looks like "1".append([2]) or something like that.
The distinction between action!s and native!s boils down to a design choice:
you can have as many native!s as you want, but action!s, for efficiency, have a fixed-size dispatch table (which means that the maximum number of action!s per datatype is limited; the minimum number is two: make [to create a value] and mold [to serialize a value to string!]).
logically, action!s are organized around the datatype to which they belong, in one file, while native!s aren't really concerned with datatypes, and implement control flow, trigonometric functions, operations on sets, etc.
Coincidentally, we recently had a similar discussion about action!s and native!s in our community chat, which you might want to read. I can also recommend skimming through Rudolf Meijer's Red specification draft, and, of course, the official reference documentation.
As for "why" in your question - distinction between 5 types is just an implementation detail, inherited from Rebol. Logically, they all implement what you might call a "function" from conceptual standpoint, and fall into any-function! camp.
While to a caller it may seem much the same to run a function whose body is a BLOCK! of code as one which is implemented as native instructions...the implementation has to go down a different branch.
I don't know precisely what Red does in the compilation case, but the interpreter cases for Rebol2 and Red are similar. These different types are effectively part of a big switch() statement. If the interpreter looks in the cell describing the "function" and finds TYPE_NATIVE, it knows to interpret the cell's contents as containing a native function pointer. If it finds TYPE_FUNCTION, it knows to pick apart the cell as containing a pointer to a block of code to execute:
https://github.com/red/red/blob/cb39b45f90585c8f6392dc4ccfc82ebaa2e312f7/runtime/interpreter.reds#L752
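A toy Kotlin sketch of that dispatch idea (invented names, not Red's actual runtime): the evaluator branches on a tag attached to the value, and each branch uses a different calling convention:

    // Stand-ins for the "type byte in a value cell".
    sealed interface AnyFunctionValue
    class NativeFn(val code: (List<Any?>) -> Any?) : AnyFunctionValue   // body compiled into the executable
    class UserFn(val body: List<Any?>) : AnyFunctionValue                // body is a block to interpret
    class RoutineFn(val ffi: (List<Any?>) -> Any?) : AnyFunctionValue    // facade over a foreign library call

    // The "big switch()": same call site, different branch per function type.
    fun invoke(f: AnyFunctionValue, args: List<Any?>): Any? = when (f) {
        is NativeFn -> f.code(args)
        is UserFn -> interpretBlock(f.body, args)
        is RoutineFn -> f.ffi(args)
    }

    fun interpretBlock(body: List<Any?>, args: List<Any?>): Any? =
        body.lastOrNull()   // placeholder; the real evaluator would walk the block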
Now I myself would agree with your line of questioning, e.g. is this leaking an implementation detail to the user, who shouldn't be concerned with this facet of the type system?
But for what it is worth, there is a catch-all typeset called ANY-FUNCTION!:
>> any-function!
== make typeset! [native! action! op! function! routine!]
And you might think of that as "anything that obeys a function-like interface for calling". There are some complexities however, as OP! gets its first argument from the left...so that really is a matter of concern from an interface perspective.
Anyway... a NATIVE! (body is built as native code into the executable) vs. a FUNCTION! (body is a block of Red code run by interpretation or compilation) is just one distinction. A ROUTINE! is a facade built to interact with a DLL/library a la FFI that did not have a-priori knowledge of Red. An ACTION! is a very oversimplified attempt at what are called in other languages Generics. An OP! just gets its first argument from the left.
Point being that each of these might feel the same to a caller (except OP!), but the implementation has to do something different. The way it knows to do something different is via a type byte in a value cell. That's how Rebol2 did it--and Red followed Rebol2 fairly closely--so that's how it also does it. It means that any novel concept of what provides the implementation behind a function requires a new datatype, and it's probably not the greatest idea.
Red is based on Rebol and so has the same types.
function! is a user-defined function defined in Red
native! is a function in machine code
op! is an infix operator written in machine code
action! is a polymorphic function in machine code
routine! is a function imported from a dynamic library

What is the exhaustive list of guidelines/practices/rules to fully conform with functional paradigm?

I've started playing around with Kotlin, but I sense my own limitation in the way I program. My problem is that I still think in Java, therefore the style is still imperative. My question is to all functional programming zealots, and I believe it would be very useful to all people who are at the very beginning stage and also need to 'break' their brain to start building it again; to leave the comfort zone and start thinking pseudo and not in "whatever is your first language". I believe it is possible for highly experienced polyglot developers to chew the concepts down to plain advice on what makes your program written in an entirely functional way and what violates the paradigm. I don't know all the quirks, but please don't hesitate to include universally accepted terms which might be unknown to me (I can always look them up). At this point I need this set of rules to make myself suffer at first and not break them, but then I know I will feel it, analyze the guidelines and understand how they are worse/better, which of course is my own homework.
So example of these guidelines, would be something like:
Never change state, this can be avoided by using x, y, z
Operate using higher-order functions only (I may be wrong, this is just an example)
I hope the answer will give me a long-term reference to put myself in extreme conditions where I stop escaping to OOP whenever I feel uncomfortable. And now when I look at Kotlin I understand how I should've been thinking about problems: it is about intention, not about the structure imposed by one language or another. Intention can always be converted to a language of your choice and backed up by design patterns applicable to the language, but to find that middle ground I first need to lock myself out of my comfort zone.
Avoid mutable state like the plague.
One of the main points of using functional programming, possibly the main one, is to avoid all the little pitfalls, bugs and issues one needs to deal with when using mutable state. You should do everything you can in order to avoid mutating state. For instance, instead of using C-style for-loops where you need to keep a counter variable updated, use map and other higher-order functions in order to abstract away your iteration patterns. This also means that you should never change the value of a variable if you can avoid it. Instead, you should be defining almost all of your variables, preferably all of them, as constants, and using functions to compute new values from them instead of mutating them.
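A small Kotlin illustration of the difference (a trivial made-up example, just to show the shape):

    fun main() {
        val prices = listOf(10, 25, 5)

        // Imperative style: a mutable accumulator updated inside a loop.
        var total = 0
        for (p in prices) total += p

        // Functional style: everything is a constant computed from other values.
        val totalFp = prices.sum()
        val discounted = prices.map { it * 9 / 10 }

        println("$total $totalFp $discounted")   // 40 40 [9, 22, 4]
    }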
Avoid side-effects like the plague.
Mutable state's ugly cousin, side-effects. Side effects mean anything other than taking a value and returning a value in a function. If that function prints data, mutates global variables, sends messages to threads, or anything, anything other than simply taking its parameters, computing a value from them, and returning a value, that function has side-effects. Side-effects are important (see next bullet point), but if you use them a lot, they get impossible to track. Just think of how everyone tells you to avoid global variables in imperative programming. Functional programming goes a step further and tries to avoid all side-effects. The bulk of your program should be made of pure functions. (See ahead)
When you need to use side-effects, keep them contained.
Yes, I just told you to run away from side-effects. However, no program is useful without side-effects of some kind. Graphical User Interface? Side-effect. Audio output? Side-effect. Printing to a shell? Side-effect. So you can't really get rid of side-effects if you want to build useful stuff.
What you should do instead is write your code so that all your side-effecting code lives in a thin layer which mostly calls pure functions and then does the required side-effects using the result of these pure function calls.
Use pure functions for everything you can.
This is sort of the flipside of the previous point. A pure function is a function which has no side-effects and does not mutate anything. It can only take in parameters and return a value. You should use these a lot. For instance, instead of doing your logging within functions which are computing stuff, you should be constructing your log strings using pure functions, and then letting your side-effects layer call these pure functions, call more pure functions in order to format the log strings into a full log, and then output the log itself from your side-effects layer.
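A tiny Kotlin sketch of that split (hypothetical names): the pure functions build the log text, and only the outer layer performs the actual output:

    // Pure: same inputs always give the same string; no I/O, no mutation.
    fun formatLogLine(level: String, message: String): String = "[$level] $message"

    fun summarize(requestSizes: List<Int>): String =
        formatLogLine("INFO", "handled ${requestSizes.size} requests, largest = ${requestSizes.maxOrNull()}")

    // Impure shell: the only place where a side-effect (printing) happens.
    fun main() {
        println(summarize(listOf(3, 7, 5)))
    }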
Use higher-order functions to structure your code.
Higher-order functions are, in a way, the glue that makes functional programming work. A higher-order function is a function which takes one or more functions as parameters and/or returns a function. The power of higher-order functions is that they can encapsulate many of the patterns which you would use in an imperative-style program in a declarative manner. For instance, let's take a look at the three most common higher-order functions:
map is a function which takes a function and a list of values, applies its function argument to each of those values, and returns a new list with the results. map encapsulates the whole pattern of iterating over a list doing an operation on each value in a declarative manner.
filter is a function which takes a function which returns a boolean and a list of values, applies its function argument to each of those values and returns a list containing only those values for which its function argument returns true. It encapsulates the whole pattern of selecting results from a list in a declarative manner.
reduce, also known as fold, takes an initial value, a binary function and a list of values. It uses its function argument to combine the initial value with the first value of the list, then combines the result with the next value of the list and keeps on doing this until it has reduced the list to just one single value. It encapsulates the entire pattern of obtaining an aggregate value from a list of values.
This is in no way an exhaustive list of higher-order functions, but these three are the most common ones. I hope this has been enough to show how you can structure code which would require a lot of tracking variables using only functions in a declarative manner. If you use these higher-order functions well, it's likely you won't ever need a for or while loop again.
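In Kotlin those three look like this (reduce-with-an-initial-value is called fold in the standard library):

    fun main() {
        val xs = listOf(1, 2, 3, 4, 5)

        val doubled = xs.map { it * 2 }               // [2, 4, 6, 8, 10]
        val evens = xs.filter { it % 2 == 0 }         // [2, 4]
        val sum = xs.fold(0) { acc, x -> acc + x }    // 15

        println("$doubled $evens $sum")
    }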
This is definitely not an exhaustive list of functional programming practices, but I think most functional programmers would agree these five guidelines form the core of what functional programming is about. If you want to really learn how to apply these, my advice would be to learn a pure functional programming language such as Haskell, so you are forced to abandon the imperative paradigm and to learn how to structure things functionally instead. I would recommend the fantastic Haskell Programming from First Principles as a starting resource if you choose to go this way. In case you don't want to/can't put down the cash, Brent Yorgey's Haskell course at UPenn is also a great free resource.

Requesting a check in my understanding of Objective-C

I have been learning Objective-C as my first language and understand Classes, Objects, instances, methods, OOP in general, etc enough to use the language and make simple applications work, but I wanted to check on a few fundamental questions that have never been explained in examples I followed.
I think the questions are so simple that they will confuse a lot of people, but I hope it will make sense to someone out there.
(While learning Objective-C the authors are assuming I have a basic computer programming background, yet I have found that a basic computer programming background is hard to come by since everyone teaching computer programming assumes you already have one to start teaching you something else. Hence the help with the fundamentals)
Passing and Returning:
When declaring methods with parameters, how is the parameter stuff actually working if the arguments being passed into the parameters can have different names than the parameter names? I hope that makes sense. I know parameter names are variables for that very reason, but...
are the arguments themselves getting mapped to a look up table or something?
Second the argument "types" (int for example) have to match the parameter return types in order for them to be passed into the method, and you always have to make your arguments values equal the parameter names somewhere else in your code listing before passing them into the method?
Is the following correct: After a method gets executed it returns a particular value (if it is not void) to the class or instances that is calling the method in the first place.
Is object-oriented programming really just passing "your" Objects' instance methods around with the system-generated classes and methods to produce a result? If we are passing things to methods so they can do some work to them and then return something back why not do the work in the first place eliminating the need to pass anything? Theoretical question I guess? I assume the answer would be: Because that would be a crazy big tangled mess of a method with everything happening all at once, but I wanted to ask anyway.
Thank you for your time.
Variables are just places where values are stored. When you pass a variable as an argument, you aren't actually passing the variable itself — you're passing the value of the variable, which is copied into the argument. There's no mapping table or anything — it just takes the value of the variable and sticks it in the argument.
In the case of objects, the variable is a pointer to an object that exists somewhere in the program's memory space. In this case, the value of the pointer is copied just like any other variable, but it still points to the same object.
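The same behaviour can be seen in Kotlin (object variables there are references, much like Objective-C object pointers), which may help make this concrete:

    class Box(var contents: String)

    // The parameter receives a copy of the reference's value. Both copies point
    // at the same Box, so changes made through the parameter are visible to the
    // caller, even though the variable itself was passed by value.
    fun update(b: Box) {
        b.contents = "changed"
    }

    fun main() {
        val box = Box("original")
        update(box)
        println(box.contents)   // changed
    }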
(the argument "types" … have to match the parameter return types…) It isn't technically true that the types have to be the same, though they usually should be. Some types can be automatically converted to another type. For example, a char or short will be promoted to an int if you pass them to a function or method that takes an int. There's a complicated set of rules around type conversions. One thing you usually should not do is use casts to shut up compiler warnings about incompatible types — the compiler takes that to mean, "It's OK, I know what I'm doing," even if you really don't. Also, object types cannot ever be converted this way, since the variables are just pointers and the objects themselves live somewhere else. If you assign the value of an NSString* variable to an NSArray* variable, you're just lying to the compiler about what the pointer is pointing to, not turning the string into an array.
Non-void functions and methods return a value to the place where they're called, yes.
(Is object-oriented programming…) Object-oriented programming is a way of structuring your program so that it can be conceptually described as a collection of objects sending messages to each other and doing things in response to those messages.
(why not do the work in the first place eliminating the need to pass anything) The primary problem in computer programming is writing code that humans can understand and improve later. Functions and methods allow us to break our code into manageable chunks that we can reason about. They also allow us to write code once and reuse it all over the place. If we didn't factor repeated code into functions, then we'd have to repeat the code every time it is needed, which both makes the program code much longer and introduces thousands of new opportunities for bugs to creep in. 50,000-line programs would become 500 million-line programs. Not only would the program be horrendously bug-ridden, but it would be such a huge ball of spaghetti that finding the bugs would be a Herculean task.
By the way, I think you might like Uli Kusterer's Masters of the Void. It's a programming tutorial for Mac users who don't know anything about programming.
"If we are passing things to methods so they can do some work to them and then return something back why not do the work in the first place eliminating the need to pass anything?"
In the beginning, that's how it was done.
But then smart programmers noticed that they were repeating copies of some work and also running out of memory, so they decided to put that chunk of work in one central place to save memory, and then call it by passing in the data from where it was before.
They gave names to the locations where the data was stuffed, because the programs were big enough that nobody memorized all the numerical addresses for every bit of data any more.
Then really really big computers finally got more than 16K of memory, and the programs started to become big unmanageable messes, so they codified the practice as part of structured programming. It's now a religious tenet.
But it's still done, by compilers when the inline flag is set, and also sometimes by hand on code that has to be really really fast on some very constrained processors by programmers who know when and where to make targeted trade-offs.
A little reading on the History of Computers is quite informative about how we got to where we are today, and why we do such strange things.
All those type checks are used (at most) only during the compilation stage, to catch errors in code.
Actually, during execution, all variables are just blocks of memory which get sent somewhere. For example, the 'id' type and 'int' are both represented as a 4-byte raw value, and you can write (int) and (id) casts to convert one to the other.
And, about parameter names - they are used by the compiler only so it knows to which memory area to send some data.
That's a simplified explanation; in reality all that stuff is more complicated, but I think you'll get the main idea - during execution there are no variable names/types, everything is done via operations over memory blocks.