Local variable being stored over different function calls - weird (Objective-C)

I'm bewildered by what's happening. It's more of a trick question I don't know the answer to.
I have the following function inside my main.m of an Objective-C program.
int incrementCounter() {
    int counter;
    counter += 2;
    return counter;
}
and inside my main() function:
NSLog(#"Counter: %d",incrementCounter());
NSLog(#"Counter: %d",incrementCounter());
NSLog(#"Counter: %d",incrementCounter());
The weird thing is that on one MacBook (I don't know the Xcode version), when I executed this, the output was:
2013-05-28 19:16:27.131 SOQ[4923:707] Counter: 1
2013-05-28 19:16:27.132 SOQ[4923:707] Counter: 3
2013-05-28 19:16:27.132 SOQ[4923:707] Counter: 5
My questions here:
1. How did counter get initialized to 1? (When I use static it gets initialized to zero.)
2. How is the value being stored over successive method calls?
3. When I check the memory location of this variable across different executions of the program, it remains the same. How does that happen?
And when I executed the same piece of code on another Mac (Lion, Xcode 4.2.1), it gave the following output:
2013-05-28 19:16:27.131 SOQ[4923:707] Counter: 32769
2013-05-28 19:16:27.132 SOQ[4923:707] Counter: 32769
2013-05-28 19:16:27.132 SOQ[4923:707] Counter: 32769
My questions here:
1. How does this behavior change from Mac to Mac? // I was thinking different compiler versions, but both were using the same (not sure though). Is there any other way?
2. How did the counter get initialized to 32767? I was thinking garbage value, but isn't the Objective-C compiler supposed to initialize a primitive like int to zero? And I'm getting the same 32769 over and over again. How?
Maybe I'm missing out on something basic. Any pointers?

How did counter get initialized to 1? (When I use static it gets initialized to zero.)
How is the value being stored over successive method calls?
When I check the memory location of this variable across different executions of the program, it remains the same. How does that happen?
How does this behavior change from Mac to Mac? // I was thinking different compiler versions, but both were using the same (not sure though). Is there any other way?
How did the counter get initialized to 32767? I was thinking garbage value, but isn't the Objective-C compiler supposed to initialize a primitive like int to zero? And I'm getting the same 32769 over and over again. How?
The answer to all these questions is: coincidence. You are using an uninitialized value. You can't rely on this value, and you should not use it at all. The compiler probably warns you about it too.
The compiler does not initialize local variables of primitive types. If you use ARC, local object pointers are initialized to nil automatically, but primitive types are not.
Primitive instance variables are initialized to 0 and instance variables pointing to objects are initialized to nil.
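As a rough sketch of the contrast (the function names below are just for illustration), a static local is zero-initialized once and keeps its value between calls, while a plain local gets neither guarantee:

// Hypothetical sketch: a static local persists across calls and starts at 0;
// a plain local is neither preserved nor given a defined starting value.
int incrementStaticCounter() {
    static int counter;   // zero-initialized once, persists across calls
    counter += 2;
    return counter;       // 2, 4, 6, ...
}

int incrementLocalCounter() {
    int counter;          // uninitialized: reading it is undefined behaviour
    counter += 2;
    return counter;       // whatever happened to be on the stack, plus 2
}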

In your incrementCounter function you are using counter which is an uninitialised variable, so the behaviour is 'undefined' - which means what you're seeing is perfectly valid behaviour (in that any behaviour would be valid since the behaviour is undefined).
So as for what is actually happening (or could potentially be happening, depending on the compiler and runtime implementations):
1-1: It's undefined what the initial value will be, but it might end up always being the same for multiple executions if the object loader behaves in a deterministic way and just happens to leave a certain bit-pattern in that memory prior to beginning execution of your program, or even just by accident.
1-2: When incrementCounter is called, some memory on the stack is allocated (the next available chunk, at the top of the stack), then incrementCounter increments and returns it. Then NSLog is called, using the same stack memory (but perhaps a different size). NSLog happens not to trample on that value (probably because the compiler puts it in a place used for return values and NSLog doesn't have a return value). Then incrementCounter is called again, the same stack memory is re-used, and the value just happens not to have been trampled. Try implementing a second otherIncrementCounter function in exactly the same way to see whether they both share the same memory for that return value; also take a look at &counter in both functions, it's probably the same (a sketch of this experiment follows after this list).
1-3: The local address-space that your program uses when it is run as a process is local to that process. A different program writing to the same address won't trample your process's memory. A kernel implementation can map this memory however it likes, for example it could use the same starting-point for the stack address space for all its processes.
2-1 & 2-2: See 1-1 and 1-2 above. Perhaps a different build of the kernel, object loader, OS, etc. In this case perhaps NSLog is doing something with that value, perhaps using it temporarily, to set it to 32767 each time. The implementation of NSLog may have changed or may just be compiled with a different compiler version on that machine - it's in a shared library that's loaded and linked at run-time, not compiled into your executable.
In general it's fairly pointless to wonder about these things; there are too many reasons that things could be working as they are. But it's useful to know how compilers and stacks work, so I hope my answers above give you some inspiration to learn about that a bit more. However, when programming (and not learning about compilers), simply don't rely on undefined behaviour; turn on lots of warnings and treat them as errors so you don't ever accidentally depend on the implementations of compilers, runtimes, or libraries.

Related

Difference between a constant and variable member in compiled or interpreted code

For a while now I have been a little confused about the role of constant members within a language, such as Java or C. I understand that at the source code level, they prevent certain critical members from being mutated and changed, but when compiled or interpreted, is there any difference between them and variable members at all or are they all just pointers to memory addresses?
I thought that perhaps the compiler/interpreter has to implement something special to allow a variable to be mutable, something it wouldn't have to do when handling a constant member (perhaps making execution faster or making it use less memory?). Is this true or am I completely up the wrong tree?
The const variable and the ordinary variable are not stored in the same place once your code is executed. The constant values go into the flash memory with your program. The variables go into the flash too, but are then copied into RAM so they can be modified as your program runs. Making a variable const lets your computer save time and space by not pushing everything into RAM. When you need to modify a value you push it into RAM anyway, but most of the time const variables will not be modified.
This is in addition to the software-design fact that you might want to prevent your code from modifying a value by mistake.
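As a rough illustration in plain C (hedged: exactly where things end up depends on the compiler and target), a const object typically lands in a read-only section while a mutable one lives in writable data or on the stack:

#include <stdio.h>

const int kMaxRetries = 3;   // typically placed in a read-only section
int retryCount = 0;          // writable data, modified at run time

int main(void) {
    // The compiler may even fold kMaxRetries directly into the instruction
    // stream instead of loading it from memory at all.
    for (retryCount = 0; retryCount < kMaxRetries; retryCount++) {
        printf("attempt %d\n", retryCount + 1);
    }
    return 0;
}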

Does class_getInstanceSize have a known bug about returning incorrect sizes?

Reading through the other questions that are similar to mine, I see that most people want to know why you would need to know the size of an instance, so I'll go ahead and tell you although it's not really central to the problem. I'm working on a project that requires allocating thousands to hundreds of thousands of very small objects, and the default allocation pattern for objects simply doesn't cut it. I've already worked around this issue by creating an object pool class, that allows a tremendous amount of objects to be allocated and initialized all at once; deallocation works flawlessly as well (objects are returned to the pool).
It actually works perfectly and isn't my issue, but I noticed class_getInstanceSize was returning unusually large sizes. For instance, a class that stores one size_t and two (including isA) Class instance variables is reported to be 40-52 bytes in size. I give a range because calling class_getInstanceSize multiple times, even in a row, has no guarantee of returning the same size. In fact, every object but NSObject seemingly reports random sizes that are far from what they should be.
As a test, I tried:
printf("Instance Size: %zu\n", class_getInstanceSize(objc_getClass("MyClassName"));
That line of code always returns a value that corresponds to the size that I've calculated by hand to be correct. For instance, the earlier example comes out to 12 bytes (32-bit) and 24 bytes (64-bit).
Thinking that the runtime may be doing something behind the scenes that requires more memory, I watched the actual memory use of each object. For the example given, the only memory read from or written to is in that 12/24 byte block that I've calculated to be the expected size.
class_getInstanceSize acts like this on both the Apple and GNU 2.0 runtimes. So is there a known bug with class_getInstanceSize that causes this behavior, or am I doing something fundamentally wrong? Before you blame my object pool: I've tried this same test in a brand new project using both the traditional alloc class method and by allocating the object using class_createInstance(self, 0); in a custom class method.
Two things I forgot to mention before: first, I'm almost entirely testing this on my own custom classes, so I know the trickery isn't down to the class actually being a class cluster or any of that nonsense; second, class_getInstanceSize([MyClassName class]) and class_getInstanceSize(self) (the latter run inside a class method) rarely produce the same result, despite both simply referencing the isa. Again, this happens in both runtimes.
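For reference, a minimal sketch of the kind of comparison described above (the class name is hypothetical); on an unmodified runtime both calls should report the same, stable size:

#import <Foundation/Foundation.h>
#import <objc/runtime.h>

@interface MyTinyObject : NSObject {
    size_t byteCount;   // one size_t plus the inherited isa pointer
}
@end

@implementation MyTinyObject
@end

int main(void) {
    @autoreleasepool {
        size_t a = class_getInstanceSize([MyTinyObject class]);
        size_t b = class_getInstanceSize(objc_getClass("MyTinyObject"));
        NSLog(@"Instance size: %zu vs %zu", a, b);   // expected to match
    }
    return 0;
}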
I think I've solved the problem and it was due to possibly the dumbest reason ever.
I use a profiling/debugging library that is old; in fact, I don't know its actual name (the library is libcsuomm; the header for it has no identifying info). All I know about it is that it was a library available on the computers in the compsci labs (I did a year of Comp-Sci before switching to a Geology major, graduating and never looking back).
Anyway, the point of the library is that it provides a number of profiling and debugging functionalities; the one I use it most for is memory leak detection, since it actually tracks per object unlike my other favorite memory-leak library (now unsupported, MSS) which is based in C and not aware of objects outside of raw allocations.
Because I use it so much when debugging, I always set it up by default without even thinking about it. So even when creating my test projects to try and pinpoint the bug, I set it up without even putting any thought into it. Well, it turns out that the library works by pulling some runtime trickery, so it can properly track objects. Things seem to work correctly now that I've disabled it, so I believe that it was the source of my problems.
Now I feel bad about jumping to conclusions about it being a bug, but at the time I couldn't see anything in my own code that could possibly cause that problem.

Objective-C memory management: how long is an object guaranteed to exist?

I have ARC code of the following form:
NSMutableData* someData = [NSMutableData dataWithLength:123];
...
CTRunGetGlyphs(run, CFRangeMake(0, 0), someData.mutableBytes);
...
const CGGlyph *glyphs = [someData mutableBytes];
...
...followed by code that reads memory from glyphs but does nothing with someData, which isn't referenced anymore. Note that CGGlyph is not an object type but an unsigned integer.
Do I have to worry that the memory in someData might get freed before I am done with glyphs (which is actually just pointing inside someData)?
All this code is WITHIN the same scope (i.e., a single selector), and glyphs and someData both fall out of scope at the same time.
PS In an earlier draft of this question I referred to 'garbage collection', which didn't really apply to my project. That's why some answers below give it equal treatment with what happens under ARC.
You are potentially in trouble whether you use GC or, as others have recommended instead, ARC. What you are dealing with is an internal pointer which is not considered an owning reference in either GC or ARC in general - unless the implementation has special-cased NSData. Without that owning reference either GC or ARC might remove the object. The problem you face is peculiar to internal pointers.
As you describe your situation, the safest thing to do is to hang onto the real reference. You could do this by assigning the NSData reference to either an instance variable or a static (method-local if you wish) variable and then assigning nil to that variable when you're done with the internal pointer. In the case of a static, be careful about concurrency!
In practice your code will probably work in both GC and ARC, probably more likely in ARC, but either could conceivably bite you especially as compilers change. For the cost of one variable declaration and one extra assignment you avoid the problem, cheap insurance.
[See this discussion as an example of short lifetime under ARC.]
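A sketch of that "hang onto the real reference" approach (the function name is made up; a static is shown, but an instance variable works the same way):

#import <Foundation/Foundation.h>
#import <CoreText/CoreText.h>

// Sketch only: the owning reference lives in a static, so the NSMutableData
// cannot be collected/released while its inner pointer is still in use.
// Beware of concurrency if you use a static.
static NSMutableData *gKeepAlive;

static void readGlyphs(CTRunRef run) {
    gKeepAlive = [NSMutableData dataWithLength:sizeof(CGGlyph) * CTRunGetGlyphCount(run)];
    CTRunGetGlyphs(run, CFRangeMake(0, 0), gKeepAlive.mutableBytes);
    const CGGlyph *glyphs = gKeepAlive.mutableBytes;

    // ... read from glyphs ...
    (void)glyphs;

    gKeepAlive = nil;   // done with the inner pointer, drop the owning reference
}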
Under actual, real garbage collection that code is potentially a problem. Objects may be released as soon as there is no longer any reference to them and the compiler may discard the reference at any time if you never use it again. For optimisation purposes scope is just a way of putting an upper limit on that sort of thing, not a way of dictating it absolutely.
You can use NSAllocateCollectable to attach lifecycle calculation to C primitive pointers, though it's messy and slightly convoluted.
Garbage collection was never implemented in iOS and is now deprecated on the Mac (as referenced at the very bottom of this FAQ), in both cases in favour of automatic reference counting (ARC). ARC adds retains and releases where it can see that they're implicitly needed. Sadly it can perform some neat tricks that weren't previously possible, such as retrieving objects from the autorelease pool if they've been used as return results. So that has the same net effect as the garbage collection approach — the object may be released at any point after the final reference to it vanishes.
A workaround would be to create a class like:
@interface PFDoNothing
+ (void)doNothingWith:(id)object;
@end
Which is implemented to do nothing. Post your autoreleased object to it after you've finished using the internal memory. Objective-C's dynamic dispatch means that it isn't safe for the compiler to optimise the call away — it has no way of knowing you (or the KVO mechanisms or whatever other actor) haven't done something like a method swizzle at runtime.
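A sketch of the matching implementation (it really is empty; the dynamic message send is the whole point):

@implementation PFDoNothing

+ (void)doNothingWith:(id)object {
    // Intentionally empty. Because the call is dispatched dynamically, the
    // compiler has to assume 'object' is still needed here and so cannot
    // release it any earlier.
}

@end

Usage would then be something like [PFDoNothing doNothingWith:someData]; placed after the last read through the inner pointer.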
EDIT: NSData being a special case because it offers direct C-level access to object-held memory, it's not difficult to find explicit discussions of the GC situation at least. See this thread on Cocoabuilder for a pretty good one though the same caveat as above applies, i.e. garbage collection is deprecated and automatic reference counting acts differently.
The following is a generic answer that does not necessarily reflect Objective-C GC support. However, various GC implementations, including ref-counting, can be thought of in terms of Reachability, quirks aside.
In a GC language, an object is guaranteed to exist as long as it is Strongly-Reachable; the "roots" of these Strong-Reachability graphs can vary by language and executing environment. The exact meaning of "Strongly" also varies, but generally means that the edges are Strong-References. (In a manual ref-counting scenario each edge can be thought of as an unmatched "retain" from a given "owner".)
C# on the CLR/.NET is one such implementation where a variable can remain in scope and yet not function as a "root" for a reachability graph. See the System.Timers.Timer class and look for GC.KeepAlive:
If the timer is declared in a long-running method, use KeepAlive to prevent garbage collection from occurring [on the timer object] before the method ends.
As of summer 2012, things are in the process of change for Apple objects that return inner pointers of non-object type. In the release notes for Mountain Lion, Apple says:
NS_RETURNS_INNER_POINTER
Methods which return pointers (other than Objective-C object types) have been decorated with the clang compiler attribute objc_returns_inner_pointer (when compiling with clang) to prevent the compiler from aggressively releasing the receiver expression of those messages, which no longer appear to be referenced, while the returned pointer may still be in use.
Inspection of the NSData.h header file shows that this also applies from iOS 6 onward.
Also note that NS_RETURNS_INNER_POINTER is defined as __attribute__((objc_returns_inner_pointer)) in the clang specification, which makes it such that the object's lifetime will be extended until at least the earliest of: the last use of the returned pointer, or any pointer derived from it, in the calling function; or the autorelease pool is restored to a previous state.
Caveats:
If you're using anything older than Mountain Lion or iOS 6 you will still need to use one of the methods discussed here (e.g., __attribute__((objc_precise_lifetime))) when declaring your local NSData or NSMutableData objects.
Also, even with the newest compiler and Apple libraries, if you use older or third-party libraries with objects that do not decorate their inner-pointer-returning methods with __attribute__((objc_returns_inner_pointer)), you will need to decorate your local variable declarations of such objects with __attribute__((objc_precise_lifetime)) or use one of the other methods discussed in the answers.
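For example, a hedged sketch of such a declaration (the surrounding function is hypothetical); the attribute pins someData's lifetime to its whole lexical scope, so ARC cannot release it while glyphs is still being read:

#import <Foundation/Foundation.h>
#import <CoreText/CoreText.h>

static void readGlyphs(CTRunRef run) {
    __attribute__((objc_precise_lifetime)) NSMutableData *someData =
        [NSMutableData dataWithLength:sizeof(CGGlyph) * CTRunGetGlyphCount(run)];

    CTRunGetGlyphs(run, CFRangeMake(0, 0), someData.mutableBytes);
    const CGGlyph *glyphs = someData.mutableBytes;

    // ... safe to read from glyphs anywhere within this scope ...
    (void)glyphs;
}

Foundation also wraps this attribute as the NS_VALID_UNTIL_END_OF_SCOPE macro, if I remember correctly, which reads a little better at the declaration site.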

Garbage value undetected in debug mode

I've recently discovered the following in my code:
for (NSInteger i; i < x; i++) {
    ...
}
Now, clearly, i should have been initialised. What I find strange is that in the "debug" profile (Xcode), this error goes undetected and the for loop executes without issue. When the application is built using the "release" profile, a crash occurs.
What flags are responsible for letting this kind of mistake execute in "debug" profile?
Thanks in advance.
This could be considered a Heisenbug. A declaration without an initialization will typically allocate some space in the stack frame for the variable and if you read the variable you will get whatever happened to be at that location in memory. When compiled for the debug profile the storage for variables can shift around compared to release. It just happens that whatever is in that location in memory for debug mode does not cause a crash (probably a positive number) but when in release mode it is some value that causes a crash (probably a negative number).
The Clang static analyzer should detect this. I always have the analyze-when-building option switched on.
In the C language, using an uninitialized variable isn't an error but undefined behavior.
Undefined behavior exists because C is designed to be a very efficient low-level language. Using an uninitialized variable is undefined behavior because it allows the compiler to optimize the variable allocation, as no default value is required.
But the compiler is licensed to do whatever it wants when undefined behavior occurs. The C Standard FAQ says:
Anything at all can happen; the Standard imposes no requirements. The program may fail to compile, or it may execute incorrectly (either crashing or silently generating incorrect results), or it may fortuitously do exactly what the programmer intended.
So any implementation of an undefined behavior is valid (even if it produces code that formats your hard drive).
Xcode uses different optimizations for the Debug and Release configurations. The Debug configuration has no optimization (the -O0 flag), so the compiled executable must stay close to your code, allowing you to debug it more easily. On the other hand, the Release configuration produces strongly optimized executables (the -Os flag) because you want your application to run fast.
Due to that difference, undefined behaviours may (or may not) produce different results in Release and Debug configurations.
Though the LLVM compiler is quite verbose, it does not emit warnings by default for all undefined behaviors. You may however run the static analyzer, which can detect that kind of issue.
More information about undefined behaviors and how they are handled by compilers can be found in What Every Programmer Should Know About Undefined Behavior.
I doubt it is so much flags as the compiler optimizing out the "unused" variable i. Release mode includes far more optimizations than debug mode.
Different compiler optimizations may or may not use a different memory location or register for your uninitialized variable. Different garbage (perhaps from previously used variables, computations or addresses used by your app) will be left in these different locations before you start using the variable.
The "responsibility" goes to not initializing the variable, as what garbage is left in what locations may not be visible to the compiler, especially in debug mode with most optimizations off (e.g. you got "lucky" with the debug build).
i has not been initialized. Writing just NSInteger i; declares the variable but does not initialize it.
You can initialize the variable as shown below:
for (NSInteger i = 1; i < x; i++) {
    ...
}

How does it know where my value is in memory?

When I write a program and tell it int c=5, it puts the value 5 into a little bit of its memory, but how does it remember which one? The only way I could think of would be to have another bit of memory to tell it, but then it would have to remember where it kept that as well, so how does it remember where everything is?
Your code gets compiled before execution; at that step your variable is replaced by the actual reference to the space where the value will be stored.
That at least is the general principle. In reality it will be way more complicated, but it's still the same basic idea.
There are lots of good answers here, but they all seem to miss one important point that I think was the main thrust of the OP's question, so here goes. I'm talking about compiled languages like C++; interpreted ones are much more complex.
When compiling your program, the compiler examines your code to find all the variables. Some variables are going to be global (or static), and some are going to be local. For the static variables, it assigns them fixed memory addresses. These addresses are likely to be sequential, and they start at some specific value. Due to the segmentation of memory on most architectures (and the virtual memory mechanisms), every application can (potentially) use the same memory addresses. Thus, if we assume the memory space programs are allowed to use starts at 0 for our example, every program you compile will put the first global variable at location 0. If that variable was 4 bytes, the next one would be at location 4, etc. These won't conflict with other programs running on your system because they're actually being mapped to an arbitrary sequential section of memory at run time. This is why it can assign a fixed address at compile time without worrying about hitting other programs.
For local variables, instead of being assigned a fixed address, they're assigned a fixed address relative to the stack pointer (which is usually a register). When a function is called that allocates variables on the stack, the stack pointer is simply moved by the required number of bytes, creating a gap in the used bytes on the stack. All the local variables are assigned fixed offsets to the stack pointer that put them into that gap. Every time a local variable is used, the real memory address is calculated by adding the stack pointer and the offset (neglecting caching values in registers). When the function returns, the stack pointer is reset to the way it was before the function was called, thus the entire stack frame including local variables is free to be overwritten by the next function call.
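A small C sketch of that difference (the addresses you see are whatever your loader and stack layout produce; the point is that the global's location is fixed relative to the data segment while the local's address is stack-pointer-relative):

#include <stdio.h>

int globalCounter = 0;            // fixed location chosen at link/load time

void show(void) {
    int localCounter = 0;         // address = stack pointer + fixed offset
    printf("global at %p, local at %p\n",
           (void *)&globalCounter, (void *)&localCounter);
}

int main(void) {
    show();    // the local's address depends on how deep the stack is here
    return 0;
}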
read Variable (programming) - Memory allocation:
http://en.wikipedia.org/wiki/Variable_(programming)#Memory_allocation
here is the text from the link (if you don't want to actually go there, but you are missing all the links within the text):
The specifics of variable allocation and the representation of their values vary widely, both among programming languages and among implementations of a given language. Many language implementations allocate space for local variables, whose extent lasts for a single function call on the call stack, and whose memory is automatically reclaimed when the function returns. (More generally, in name binding, the name of a variable is bound to the address of some particular block (contiguous sequence) of bytes in memory, and operations on the variable manipulate that block.) Referencing is more common for variables whose values have large or unknown sizes when the code is compiled. Such variables reference the location of the value instead of storing the value itself, which is allocated from a pool of memory called the heap.
Bound variables have values. A value, however, is an abstraction, an idea; in implementation, a value is represented by some data object, which is stored somewhere in computer memory. The program, or the runtime environment, must set aside memory for each data object and, since memory is finite, ensure that this memory is yielded for reuse when the object is no longer needed to represent some variable's value.
Objects allocated from the heap must be reclaimed—especially when the objects are no longer needed. In a garbage-collected language (such as C#, Java, and Lisp), the runtime environment automatically reclaims objects when extant variables can no longer refer to them. In non-garbage-collected languages, such as C, the program (and the programmer) must explicitly allocate memory, and then later free it, to reclaim its memory. Failure to do so leads to memory leaks, in which the heap is depleted as the program runs, risking eventual failure from exhausting available memory.
When a variable refers to a data structure created dynamically, some of its components may be only indirectly accessed through the variable. In such circumstances, garbage collectors (or analogous program features in languages that lack garbage collectors) must deal with a case where only a portion of the memory reachable from the variable needs to be reclaimed.
There's a multi-step dance that turns c = 5 into machine instructions to update a location in memory.
The compiler generates code in two parts. There's the instruction part (load a register with the address of C; load a register with the literal 5; store). And there's a data allocation part (leave 4 bytes of room at offset 0 for a variable known as "C").
A "linking loader" has to put this stuff into memory in a way that the OS will be able to run it. The loader requests memory and the OS allocates some blocks of virtual memory. The OS also maps the virtual memory to physical memory through an unrelated set of management mechanisms.
The loader puts the data page into one place and instruction part into another place. Notice that the instructions use relative addresses (an offset of 0 into the data page). The loader provides the actual location of the data page so that the instructions can resolve the real address.
When the actual "store" instruction is executed, the OS has to see if the referenced data page is actually in physical memory. It may be in the swap file and have to get loaded into physical memory. The virtual address being used is translated to a physical address of memory locations.
It's built into the program.
Basically, when a program is compiled into machine language, it becomes a series of instructions. Some instructions have memory addresses built into them, and this is the "end of the chain", so to speak. The compiler decides where each variable will be and burns this information into the executable file. (Remember the compiler is a DIFFERENT program to the program you are writing; just concentrate on how your own program works for the moment.)
For example,
ADD [1A56], 15
might add 15 to the value at location 1A56. (This instruction would be encoded using some code that the processor understands, but I won't explain that.)
Now, other instructions let you use a "variable" memory address - a memory address that was itself loaded from some location. This is the basis of pointers in C. You certainly can't have an infinite chain of these, otherwise you would run out of memory.
I hope that clears things up.
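A tiny C illustration of that "variable memory address" idea (the compiler bakes the address of c into the instructions, while p holds an address as ordinary data):

#include <stdio.h>

int main(void) {
    int c = 5;      // the address of c is fixed into the generated instructions
    int *p = &c;    // p stores that address as data, so it can change at run time
    printf("c = %d, stored at %p, read back through p: %d\n",
           c, (void *)&c, *p);
    return 0;
}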
I'm going to phrase my response in very basic terminology. Please don't be insulted, I'm just not sure how proficient you already are and want to provide an answer acceptable to someone who could be a total beginner.
You aren't actually that far off in your assumption. The program you run your code through, usually called a compiler (or interpreter, depending on the language), keeps track of all the variables you use. You can think of your variables as a series of bins, and the individual pieces of data are kept inside these bins. The bins have labels on them, and when you build your source code into a program you can run, all of the labels are carried forward. The compiler takes care of this for you, so when you run the program, the proper things are fetched from their respective bin.
The variables you use are just another layer of labels. This makes things easier for you to keep track of. The way the variables are stored internally may have very complex or cryptic labels on them, but all you need to worry about is how you are referring to them in your code. Stay consistent, use good variable names, and keep track of what you're doing with your variables and the compiler/interpreter takes care of handling the low level tasks associated with that. This is a very simple, basic case of variable usage with memory.
You should study pointers.
http://home.netcom.com/~tjensen/ptr/ch1x.htm
Reduced to the bare metal, a variable lookup either reduces to an address that is some statically known offset to a base pointer held in a register (the stack pointer), or it is a constant address (global variable).
In an interpreted language, one register is often reserved to hold a pointer to a data structure (the "environment") that associates variable names with their current values.
Computers ultimately only understand on and off, which we conveniently abstract to binary. This is the lowest level and is called machine language. I'm not sure if this is folklore, but some programmers used to (or maybe still do) program directly in machine language. Typing or reading in binary would be very cumbersome, which is why hexadecimal is often used to abbreviate the actual binary.
Because most of us are not savants, machine language is abstracted into assembly language. Assembly is a very primitive language that directly controls memory. There are a very limited number of commands (push/pop/add/goto), but these ultimately accomplish everything that is programmed. Different machine architectures have different versions of assembly, but the gist is that there are a few dozen key memory registers (physically in the CPU) - in an x86 architecture they are EAX, EBX, ECX, EDX, ... These contain data or pointers that the CPU uses to figure out what to do next. The CPU can only do one thing at a time and it uses these registers to figure out what to do next. Computers seem to be able to do lots of things simultaneously because the CPU can process these instructions very quickly (millions/billions of instructions per second). Of course, multi-core processors complicate things, but let's not go there...
Because most of us are not smart or accurate enough to program in assembly, where you can easily crash the system, assembly is further abstracted into a third-generation language (3GL) - this is your C/C++/C#/Java etc... When you tell one of these languages to put the integer value 5 in a variable, your instructions are stored as text; the compiler turns your text into an executable; when the program is executed, its instructions are queued by the CPU, and when it is show time for that specific line of code, it gets read into the CPU's registers and processed.
The 'not smart enough' comments about the languages are a bit tongue-in-cheek. Theoretically, the further you get away from zeros and ones to plain human language, the more quickly and efficiently you should be able to produce code.
There is an important mistake that a few people make here, which is assuming that all variables are stored in memory. Well, unless you count the CPU registers as memory, that won't be completely right. Some compilers will optimize the generated code, and if they can keep a variable stored in a register, they will make use of this!
Then, of course, there's the complex matter of heap and stack memory. Local variables can be located in both! The preferred location is the stack, which is accessed far more often than the heap. This is the case for almost all local variables. Global variables are often part of the data segment of the final executable and tend to become part of the heap, although you can't release these global memory areas. The heap is also used for on-the-fly allocations of new memory blocks.
But with Global variables, the code will know exactly where they are and thus write their exact location in the code. (Well, their location from the beginning of the data segment anyways.) Register variables are located in the CPU and the compiler knows exactly which register, which is also just told to the code. Stack variables are located at an offset from the current stack pointer. This stack pointer will increase and decrease all the time, depending on the number of levels of procedures calling other procedures.
Only heap values are complex. When the application needs to store data on the heap, it needs a second variable to store its address, otherwise it could lose track of it. This second variable is called a pointer and is located in global data or as part of the stack. (Or, on rare occasions, in the CPU registers.)
Oh, it's even a bit more complex than this, but already I can see some eyes rolling due to this information overkill. :-)
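As a tiny C illustration of that last point (a heap block whose address is kept in a stack variable):

#include <stdio.h>
#include <stdlib.h>

int main(void) {
    int *value = malloc(sizeof *value);   // 'value' itself lives on the stack;
    if (value == NULL) return 1;          // the int it points to lives on the heap
    *value = 5;
    printf("pointer on the stack: %p, int on the heap: %p (= %d)\n",
           (void *)&value, (void *)value, *value);
    free(value);                          // without this the heap block would leak
    return 0;
}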
Think of memory as a drawer that you decide how to divide up according to your needs as they arise.
When you declare a variable of type integer or any other type, the compiler or interpreter (whichever applies) allocates a memory address in its data segment (the DS register in assembler) and reserves a certain number of following addresses depending on your type's length in bits.
As per your question, an integer is 32 bits long, so, from one given address, let's say D003F8AC, the 32 bits following this address will be reserved for your declared integer.
At compile time, wherever you reference your variable, the generated assembler code will replace it with its DS address. So, when you get the value of your variable c, the processor queries the address D003F8AC and retrieves it.
Hope this helps, since you already have many answers. :-)