I will preface this question by stating that what I am about to ask is for educational and possibly debug purposes only.
How are block objects created internally in the Objective C runtime?
I see the hierarchy of classes that represent the various block types; the highest superclass in the hierarchy, below NSObject, is NSBlock. Dumping the class data shows that it implements the + alloc, + allocWithZone:, + copy and + copyWithZone: methods. None of the other block subclasses implement these class methods, which leads me to believe, perhaps mistakenly, that NSBlock is responsible for block handling.
But these methods seem not to be called at any point during a block's lifetime. I exchanged their implementations with my own and put a breakpoint in each, but they never get called. Doing the same exercise with NSObject's implementations gives me exactly what I want.
So I assume blocks are implemented in a different manner? Can anyone shed some light on how this implementation works? Even if I cannot hook into the allocation and copying of blocks, I would like to understand the internal implementation.
tl;dr
The compiler directly translates block literals into structs and functions. That's why you don't see an alloc call.
discussion
While blocks are full-fledged Objective-C objects, this fact is seldom exposed in their use, making them quite funny beasts.
One first quirk is that blocks are generally created on the stack (unless they are global blocks, i.e. blocks with no references to the surrounding context) and then moved to the heap only if needed. To this day, they are the only Objective-C objects that can be allocated on the stack.
Probably due to this weirdness in their allocation, the language designers decided to allow block creation exclusively through block literals (i.e. using the ^ operator).
In this way the compiler is in complete control of block allocation.
As explained in the clang specification, the compiler will automatically generate two structs and at least one function for each block literal it encounters:
a block literal struct
a block descriptor struct
a block invoke function
For instance for the literal
^ { printf("hello world\n"); }
on a 32-bit system the compiler will produce the following
struct __block_literal_1 {
    void *isa;
    int flags;
    int reserved;
    void (*invoke)(struct __block_literal_1 *);
    struct __block_descriptor_1 *descriptor;
};

void __block_invoke_1(struct __block_literal_1 *_block) {
    printf("hello world\n");
}

static struct __block_descriptor_1 {
    unsigned long int reserved;
    unsigned long int Block_size;
} __block_descriptor_1 = { 0, sizeof(struct __block_literal_1) };
(by the way, that block qualifies as global block, so it will be created at a fixed location in memory)
So blocks are Objective-C objects, but in a low-level fashion: they are just structs with an isa pointer. Although from a formal point of view they are instances of a concrete subclass of NSBlock, the Objective-C API is never used for allocation, so that's why you don't see an alloc call: literals are directly translated into structs by the compiler.
As described in other answers, block objects are created directly in global storage (by the compiler) or on the stack (by the compiled code). They aren't initially created on the heap.
Block objects are similar to bridged CoreFoundation objects: the Objective-C interface is a cover for an underlying C interface. A block object's -copyWithZone: method calls the _Block_copy() function, but some code calls _Block_copy() directly. That means a breakpoint on -copyWithZone: won't catch all of the copies.
(Yes, you can use block objects in plain C code. There's a qsort_b() function and an atexit_b() function and, uh, that might be it.)
Blocks are basically compiler magic. Unlike normal objects, they are actually allocated directly on the stack — they only get placed on the heap when you copy them.
You can read Clang's block implementation specification to get a good idea what goes on behind the scenes. To my understanding, the short version is that a struct type (representing the block and its captured state) and a function (to invoke the block) are defined, and any reference to the block is replaced with a value of the struct type that has its invoke pointer set to the function that was generated and its fields filled in with the appropriate state.
Related
I was wondering why objective-c (ARC) does not allow me to use a pointer to a struct (NSPoint, in this case)
My code works without one, I just want to question why it doesn't allow it as I have not found a reason on Google for it.
My current guess is because structs cannot contain objects, but I want to double check that; and want to know where the struct itself is saved. Thanks!
When migrating to ARC, the compiler no longer allowed this due to the complexity of how the struct containing object pointers would be initialized, copied, moved, or destroyed.
Apple called this out in the Transitioning to ARC guide, in the "ARC Enforces New Rules" section:
You cannot use object pointers in C structures. Rather than using a struct, you can create an Objective-C class to manage the data instead.
However, this is now allowed in LLVM as of this commit.
To quote directly from the commit message:
Declaring __strong pointer fields in structs was not allowed in
Objective-C ARC until now because that would make the struct
non-trivial to default-initialize, copy/move, and destroy, which is
not something C was designed to do. This patch lifts that restriction.
Special functions for non-trivial C structs are synthesized that are
needed to default-initialize, copy/move, and destroy the structs and
manage the ownership of the objects the __strong pointer fields point
to. Non-trivial structs passed to functions are destructed in the
callee function.
I know that when an object is instantiated on the heap, at the least enough memory is allocated to hold the object's ivars. My question is about how methods are stored by the compiler. Is there only one instance of method code in memory? Or is the code generated an intrinsic part of the object in memory, stored contiguously with the ivars and executed?
It seems like if the latter were the case, even trivial objects such as NSStrings would require a (relatively) large amount of memory (NSString inherits methods from NSObject, also).
Or is the method stored once in memory and passed a pointer to the object which owns it?
In a "standard" Objective-C runtime, every object contains, before any other instance variables, a pointer to the class it is a member of, as if the base Object class had an instance variable called:
Class isa;
Each object of a given class shares the same isa pointer.
The class contains a number of elements, including a pointer to the parent class, as well as an array of method lists. These methods are the ones implemented on this class specifically.
struct objc_class {
Class super_class;
...
struct objc_method_list **methodLists;
...
};
These method lists each contain an array of methods:
struct objc_method_list {
int method_count;
struct objc_method method_list[];
};
struct objc_method {
SEL method_name;
char *method_types;
IMP method_imp;
};
The IMP type here is a function pointer. It points to the (single) location in memory where the implementation of the method is stored, just like any other code.
A note: What I'm describing here is, in effect, the ObjC 1.0 runtime. The current version doesn't store classes and objects quite like this; it does a number of complicated, clever things to make method calls even faster. But what I'm describing is still the spirit of how it works, if not the exact way it does.
I've also left out a few fields in some of these structures which just confused the situation (e.g, backwards compatibility and/or padding). Read the real headers if you want to see all the gory details.
Methods are stored once in memory. Depending on the platform, they are paged into RAM as needed. If you really want more details, read the Mach-O and runtime guides from Apple. It's not usually something programmers concern themselves with any more unless they're doing something pretty low-level.
Objects don't really "own" methods. I suppose you could think of it as classes owning methods, so if you have 400 NSStrings you still only have one copy of each method in RAM.
When a method gets called, the first parameter is the object pointer, self. That's how a method knows where the data is that it needs to operate on.
Consider the following C++ method:
class Worker {
    ....
private:
    Node *node;
};
void Worker::Work()
{
NSBlockOperation *op=[NSBlockOperation blockOperationWithBlock: ^{
Tool hammer(node);
hammer.Use();
}];
....
}
What, exactly, does the block capture when it captures "node"? The language specification for blocks, http://clang.llvm.org/docs/BlockLanguageSpec.html, is clear for other cases:
Variables used within the scope of the compound statement are bound to the Block in the normal manner with the exception of those in automatic (stack) storage. Thus one may access functions and global variables as one would expect, as well as static local variables.
Local automatic (stack) variables referenced within the compound statement of a Block are imported and captured by the Block as const copies.
But here, do we capture the current value of this? A copy of this using Worker’s copy constructor? Or a reference to the place where node is stored?
In particular, suppose we say
{
Worker fred(someNode);
fred.Work();
}
The object fred may not exist any more when the block gets run. What is the value of node? (Assume that the underlying Node objects live forever, but Workers come and go.)
If instead we wrote
void Worker::Work()
{
Node *myNode=node;
NSBlockOperation *op=[NSBlockOperation blockOperationWithBlock: ^{
Tool hammer(myNode);
hammer.Use();
}];
....
}
is the outcome different?
According to this page:
In general you can use C++ objects within a block. Within a member function, references to member variables and functions are via an implicitly imported this pointer and thus appear mutable. There are two considerations that apply if a block is copied:
If you have a __block storage class for what would have been a stack-based C++ object, then the usual copy constructor is used.
If you use any other C++ stack-based object from within a block, it must have a const copy constructor. The C++ object is then copied using that constructor.
Empirically, I observe that it const copies the this pointer into the block. If the C++ instance pointed to by this is no longer at that address when the block executes (for instance, if the Worker instance on which Worker::Work() is called was stack-allocated on a higher frame), then you will get an EXC_BAD_ACCESS or worse (i.e. pointer aliasing). So it appears that:
It is capturing this, not copying instance variables by value.
Nothing is being done to keep the object pointed to by this alive.
Alternately, if I reference a locally stack-allocated (i.e. declared in this stack frame/scope) C++ object, I observe that its copy constructor gets called when it is initially captured by the block, and then again whenever the block is copied (for instance, by the operation queue when you enqueue the operation.)
To address your questions specifically:
But here, do we capture the current value of this? A copy of this using Worker’s copy constructor? Or a reference to the place where node is stored?
We capture this. Consider it a const-copy of an intptr_t if that helps.
The object fred may not exist any more when the block gets run. What is the value of node? (Assume that the underlying Node objects live forever, but Workers come and go.)
In this case, this has been captured by-value and node is effectively a pointer with the value this + <offset of node in Worker> but since the Worker instance is gone, it's effectively a garbage pointer.
I would infer no magic or other behavior other than exactly what's described in those docs.
In C++, when you write an instance variable node, without explicitly writing something->node, it is implicitly this->node. (Similar to how in Objective-C, if you write an instance variable node, without explicitly writing something->node, it is implicitly self->node.)
So the variable which is being used is this, and it is this that is captured. (Technically this is described in the standard as a separate expression type of its own, not a variable; but for all intents and purposes it acts as an implicit local variable of type Worker *const.) As with all non-__block variables, capturing it makes a const copy of this.
Blocks have memory management semantics when they capture a variable of Objective-C object pointer type. However, this does not have Objective-C object pointer type, so nothing is done with it in terms of memory management. (There is nothing that can be done in terms of C++ memory management anyway.) So yes, the C++ object pointed to by this could be invalid by the time the block runs.
I read everywhere that Objective-C has true dynamic binding, where as C++ has only Late binding. Unfortunately none of the books go on to explain it clearly or discuss the underlying implementation. For e.g C++ uses virtual table. How about Objective-C?
http://www.gnu.org/software/gnustep/resources/ObjCFun.html has a pretty good description.
Basically what dynamic binding means is that at the time that the method call is actually made, the decision is made about what method to invoke. And the method can, if you wish, be dynamically chosen at that point.
Edit: Here is a lot more detail to the best of my understanding. I do not promise that it is entirely correct, but it should be mostly right. Every object in Objective C is a struct whose first member, named isa, is a pointer to a class. Each class is itself an object that is traditionally laid out as:
struct objc_class {
Class isa;
Class super_class;
const char *name;
long version;
long info;
long instance_size;
struct objc_ivar_list *ivars;
struct objc_method_list **methodLists;
struct objc_cache *cache;
struct objc_protocol_list *protocols;
};
At runtime, here is pseudo-code for what happens on a method lookup:
Follow isa to find the class
if implementation = class.lookup_method(method):
call implementation
else if get_implementation = class.lookup_method(forwardInvocation):
implementation = get_implementation(method)
if implementation:
call implementation
else:
raise runtime error
else:
raise runtime error
And how does that lookup_method work?
def lookup_method (class, method):
if method in class.objc_cache:
return implementation from objc_cache
else if method in class.objc_method_list:
cache implementation from objc_method_list
return implementation
else if implementation = class.super_class.lookup_method(method):
cache implementation
return implementation
else:
return null
In response to the obvious question, yes this is much slower than C++'s virtual tables. According to benchmarks, about 1/3 of the speed. Every Objective C text immediately follows that up with the fact that in the real world, method lookup speed is almost never a bottleneck.
This is much more flexible than C++'s method lookup. For instance you can use forwardInvocation to cause unrecognized methods to go to an object that you have in a variable. This kind of delegation can be done without knowing what the type of that object will be at run time, or what methods it will support. You can also add methods to classes - even at runtime if you wish - without having access to the source code. You also have rich runtime introspection on classes and methods.
The obvious flip side, that any C++ programmer will be jumping up and down about, is that you've thrown away any hope of compile time type checking.
Does that explain the differences and give you sufficient detail to understand what is going on?
Dynamic binding and late binding are, in fact, the same thing. With static binding, or early binding, calls are resolved at compile time (along with checks on variables, expressions, and so on). With late binding, the compiler instead records method implementations in a v-table (virtual method table), and each call is bound at run time to the matching entry in the v-table.
Why does the new operator exist in modern languages such as C# and Java? Is it purely a self documenting code feature, or does it serve any actual purpose?
For instance the following example:
Class1 obj = new Class1();
Class1 foo()
{
return new Class1();
}
Is as easy to read as the more Pythonesque way of writing it:
Class1 obj = Class1();
Class1 foo()
{
return Class1();
}
EDIT: Cowan hit the nail on the head with the clarification of the question: Why did they choose this syntax?
It's a self documenting feature.
It's a way to make it possible to name a method "Class1" in some other class
Class1 obj = Class1();
In C# and Java, you need the "new" keyword because without it, it treats "Class1()" as a call to a method whose name is "Class1".
The usefulness is of documentation - it's easier to distinguish object creations from method invocations than in Python.
The reason is historic, and comes straight from the C++ syntax.
In C++, "Class1()" is an expression creating a Class1 instance on the stack. For instance:
vector a = vector();
In this case, a vector is created and copied to the vector a (an optimizer can remove the redundant copy in some cases).
Instead, "new Class1()" creates a Class1 instance on the heap, like in Java and C#, and returns a pointer to it, with a different access syntax, unlike Java and C#. Actually, the meaning of new can be redefined to use any special-purpose allocator, which still must refer to some kind of heap, so that the obtained object can be returned by reference.
Moreover, in Java/C#/C++, Class1() by itself could refer to any method/function, and it would be confusing. Java coding conventions actually avoid that, since they require class names to start with an upper case letter and method names to start with a lower case one, and probably that's the way Python avoids confusion in this case. A reader expects "Class1()" to create an object, "class1()" to be a function invocation, and "x.class1()" to be a method invocation (where 'x' can be 'self').
Finally, since in Python they chose to make classes be objects, and callable objects in particular, the syntax without 'new' would be allowed, and it would be inconsistent to allow having also another syntax.
The new operator in C# maps directly to the IL instruction called newobj which actually allocates the space for the new object's variables and then executes the constructor (called .ctor in IL). When executing the constructor -- much like C++ -- a reference to the initialized object is passed in as an invisible first parameter (like thiscall).
The thiscall-like convention allows the runtime to load and JIT all of the code in memory for a specific class only one time and reuse it for every instance of the class.
Java may have a similar opcode in its intermediate language, though I am not familiar enough to say.
C++ offers programmers a choice of allocating objects on the heap or on the stack.
Stack-based allocation is more efficient: allocation is cheaper, deallocation costs are truly zero, and the language provides assistance in demarcating object lifecycles, reducing the risk of forgetting to free the object.
On the other hand, in C++, you need to be very careful when publishing or sharing references to stack-based objects because stack-based objects are automatically freed when the stack frame is unwound, leading to dangling pointers.
With the new operator, all objects are allocated on the heap in Java or C#.
Class1 obj = Class1();
Actually, the compiler would try to find a method called Class1().
E.g. the following is a common Java bug:
public class MyClass
{
//Oops, this has a return type, so it's a method, not a constructor!
//Because no constructor is defined, Java will add a default one.
//init() will not get called if you do new MyClass();
public void MyClass()
{
init();
}
public void init()
{
...
}
}
Note: "all objects are allocated on the heap" does not mean stack allocation is not used under the hood occasionally.
For instance, in Java, Hotspot optimization like escape analysis uses stack allocation.
This analysis performed by the runtime compiler can conclude for example that an object on the heap is referenced only locally in a method and no reference can escape from this scope. If so, Hotspot can apply runtime optimizations. It can allocate the object on the stack or in registers instead of on the heap.
Such optimizations, though, are not always considered decisive...
The reason Java chose it was because the syntax was familiar to C++ developers. The reason C# chose it was because it was familiar to Java developers.
The reason the new operator is used in C++ is probably because with manual memory management it is very important to make it clear when memory is allocated. While the pythonesque syntax could work, it makes it less obvious that memory is allocated.
The new operator allocates the memory for the object(s), which is its purpose; as you say, it also self-documents which instance(s) (i.e. a new one) you're working with.
As others have noted, Java and C# provide the new syntax because C++ did. And C++ needed some way to distinguish between creating an object on the stack, creating an object on the heap, or calling a function or method that returned a pointer to an object.
C++ used this particular syntax because the early object-oriented language Simula used it. Bjarne Stroustrup was inspired by Simula, and sought to add Simula-like features to C. C had a function for allocating memory, but didn't guarantee that a constructor was also called.
From "The Design and Evolution of C++," 1994, by Bjarne Stroustrup, page 57:
Consequently, I introduced an operator to ensure that both allocation and initialization was done:
monitor* p = new monitor;
The operator was called new because that was the name of the corresponding Simula operator. The new operator invokes some allocation function to obtain memory and then invokes a constructor to initialize that memory. The combined operation is often called instantiation or simply object creation: it creates an object out of raw memory.
The notational convenience offered by operator new is significant. ..."
In addition to the remarks above: AFAIK, early drafts of Java 7 planned to remove the new keyword, but the idea was later cancelled.