The variable address in Objective C language - objective-c

I know that if the computer is 64-bit, then an int variable occupies 64/8 = 8 bytes. I tried to test this with the following code.
int c = 16;
int f = 11;
NSLog(@"&c = %p &f = %p", &c, &f);
and the output is:
&c = 0x7fff570c8a4c &f = 0x7fff570c8a48
The difference is 4. Does this mean the variable occupies 4 bytes? Where are the int variable and a pointer to it stored, on the stack or on the heap? Are pointers and variables stored in different places?
I want to understand why the difference between the two addresses is 4.

I know that if the computer is 64-bit, then an int variable occupies 64/8 = 8 bytes.
This is not true in general. For example, 64-bit Windows systems use 32-bit ints.
Regardless, the locations of variables on the stack are somewhat arbitrary, and cannot be relied upon to determine anything about those variables' size. (It is true in this case that these variables occupy four bytes, but the same will not be true in all circumstances.) If you need to determine the "bit-ness" of a system within a program, consider taking the size of a pointer, e.g.
sizeof(void *)
This will always return 4 for systems which use 32-bit addresses, and 8 for systems which use 64-bit addresses.

Yes, it does mean the variable occupies 4 bytes. And the pointers you asked about are stored on the stack for the call to NSLog().

Related

What is the difference between integerValue and intValue in Objective-C?

Hi, I am new to Objective-C and I am assigning a text field's value to an int variable PaidLeaves as below.
Because a text field returns a string value, I have to convert it to an int value, so I use the following code. For example:
PaidLeaves = txtPaidLeaves.text.intValue;
and
PaidLeaves = txtPaidLeaves.text.integerValue;
Above, I am assigning a text field's value to an int variable. Both work, but what is the difference between the two expressions?
Thank you.
intValue returns an int number.
integerValue returns a NSInteger number.
The difference between them is the number of bits, or in simpler terms, the range of values they can store. As said in an answer to a different question:
int is always 32-bits.
long long is always 64-bits.
NSInteger and long are always pointer-sized. That means they're
32-bits on 32-bit systems, and 64 bits on 64-bit systems.
Reference: https://stackoverflow.com/a/4445467/4370893
Consider that Apple has only made 64-bit systems since Mac OS X 10.7 (Lion), which was released in 2011, so I'm going to treat NSInteger as a 64-bit integer.
So what does that mean?
The first bit of a signed integer number, like NSInteger and int, is used to indicate whether it is positive or negative. The conclusion is that a signed integer number goes from -2^(number of bits-1) to 2^(number of bits-1)-1, so...
int: - 2,147,483,648 (- 2^31) to 2,147,483,647 (2^31-1)
NSInteger: - 9,223,372,036,854,775,808 (- 2^63) to 9,223,372,036,854,775,807 (2^63-1)

Why does the C standard provide unsized types (int, long long, char vs. int32_t, int64_t, uint8_t etc.)?

Why weren't the contents of stdint.h made the standard when it was included in the standard (no int, no short, no float, but int32_t, int16_t, float32_t, etc.)? What advantage did/do ambiguous type sizes provide?
In objective-C, why was it decided that CGFloat, NSInteger, NSUInteger have different sizes on different platforms?
When C was designed, there were computers with different word sizes. Not just multiples of 8, but other sizes like the 18-bit word size on the PDP-7. So sometimes an int was 16 bits, but maybe it was 18 bits, or 32 bits, or some other size entirely. On a Cray-1 an int was 64 bits. As a result, int meant "whatever is convenient for this computer, but at least 16 bits".
That was about forty years ago. Computers have changed, so it certainly looks odd now.
NSInteger is used to denote the computer's word size, since it makes no sense to ask for the 5 billionth element of an array on a 32-bit system, but it makes perfect sense on a 64-bit system.
I can't speak for why CGFloat is a double on 64-bit system. That baffles me.
C is meant to be portable from embedded devices, over your phone, to desktops, mainframes and beyond. These don't have the same base types; e.g., the latter may have uint128_t where others don't. Writing code with fixed-width types would severely restrict portability in some cases.
This is why, by preference, you should use neither uintX_t nor int, long, etc., but the semantic typedefs such as size_t and ptrdiff_t. These are really the ones that make your code portable.

Memory addresses, pointers, variables, values - what goes on behind the scenes

This is going to be a pretty loaded question but ever since I started learning about pointers I've been very curious about what happens behind the scenes when a program is run.
As far as I know, computer memory is commonly thought of as a long strip of memory divided evenly into individual bytes. Certainly pictures such as the following evoke such a metaphor:
One thing I've been wondering, what do the memory addresses themselves represent? I'm sure it's no coincidence that memory addresses appear as 8 digit hexadecimal values (eg/ 00EB5748). Why is this?
Furthermore, when I declare a variable x, what is happening at the memory level? Is the compiler simply reserving a random address (+however many consecutive addresses it needs for the variable type) for data storage?
Now suppose x is an unsigned int that occupies 2 bytes of memory (i.e. values ranging from 0 to 65535). When I declare x = 12, what is happening? What is it that I'm making equal to 12? When I draw conceptual diagrams, I usually have a box for an address (say &x) pointing to a variable (x) that occupies seemingly nothing, and I'm sure that can't be a fully accurate picture of what's going on.
And what's happening at the binary level? Is the address 00EB5748 treated as 111010110101011101001000 and storing a value of 12 somewhere, or 1100?
Mostly my confusion & curiosity stems from the relationship between memory addresses and actual values being declared (eg/ 12, 'a', -355.2). As another example, suppose our address 00EB5748 is pointing to a char 's' whose value is 115 according to ASCII charts. Is the address describing a position that stores the value 115 in 1 byte, by flipping the appropriate 1s and 0s at that position in memory?
Just open any book. You will see pages. Every page has a number, and consecutive pages are numbered consecutively. Do you have any confusion with numbered pages? I think not. Then you should not be confused by computer memory.
Books were the main storage devices before the computer era, and computer memory derives its basic concept from them: a book has pages -> computer memory has memory cells; a book has page numbers -> computer memory has memory addresses.
One thing I've been wondering, what do the memory addresses themselves represent?
Numbers. Every memory cell has a number, like every page in a book.
Furthermore, when I declare a variable x, what is happening at the memory level? Is the compiler simply reserving a random address (+however many consecutive addresses it needs for the variable type) for data storage?
The memory manager marks some memory cells as occupied and gives the address of the first reserved cell to the compiler. The compiler associates the variable's name and type with this address. (This picture is from my head; it may be inaccurate.)
When I declare x = 12, what is happening?
When you declared the variable x, memory cells were reserved for it. Now you write 12 into those cells. Note that 12 is binary-coded in some way, depending on the type of x. If x is an unsigned int which occupies 2 memory cells, then one cell will contain 0 and the other will contain 12, because the binary integer representation of 12 is
0000 0000 0000 1100
|_______| |_______|
cell cell
If 12 were a floating-point number, it would be encoded differently.
A memory address is simply the position of a given byte in memory. The zeroth byte is at 0x00000000. The tenth at 0x0000000A. The 65535th at 0x0000FFFF. And so on.
Local variables live on the stack*. When compiling a block of code, the compiler counts how many bytes are needed to hold all the local variables, and then adjusts the stack pointer so that all the variables fit below it (along with some other stuff like frame pointers and return addresses). Then it just remembers that, for example, local variable x is at an offset -2 from the stack pointer, foo is at an offset -4, and so on, and uses those addresses whenever those variables are referenced in the following code.
Since the compiler knows that x is at address (stack pointer - 2), that's the location that is set to the value 12 when you do x = 12.
Not entirely sure if I understand this question, but say you want to read the memory at address 0x00EB5748. The control unit in the CPU reads the instruction, sees that it is a load instruction, and passes the address (in binary of course) to the load/store unit, along with some other junk like how many bytes to read. Then the LSU sends that address to some memory (probably L1 cache), and after a certain time gets the value 12 back. Then this data is available to, say, put in a register, or send to the ALU to do arithmetic, or whatever.
That seems to be accurate, yes. Going back to the first question, an address simply means "byte number 0xWHATEVER in memory".
Hope this clarified things a bit at least.
*I should probably explain the stack as well. A stack is a portion of memory reserved for local variables (and some other stuff). It starts at a fixed location in memory and stops at the memory address contained in a special register called the stack pointer. To begin with, the stack is empty, so the stack pointer just contains the start of the stack. As you put more data on the stack, the SP is moved along (on many real architectures the stack actually grows downward, so "moving along" means decrementing). This means you can always put more data on it simply by putting it at the address in the SP and then moving the SP, so that once again anything past that address is free memory.

need a 24 bits type in objc

I need a variable which holds a 24-bit value; what should I use?
Also, do you know a list of all available types in Objc?
Thanks a lot.
You could use an int. It will hold 24 bits. (32, actually)
Objective-C has exactly the same types as plain C. All object references and the id type are technically pointers.
The size of the integer data types (char … long long) is not defined, but their relative order and minimum sizes are.
The smallest integer data type guaranteed to hold 24 bits is long int, which must be at least 32 bits.
int may be 16 bits on some systems.
3 chars will be at least 24 bits, since a char must have 8 bits or more.
An array of 3 unsigned chars will be 24 bits (on most systems).

Is there a practical limit to the size of bit masks?

There's a common way to store multiple values in one variable, by using a bitmask. For example, if a user has read, write and execute privileges on an item, that can be converted to a single number by saying read = 4 (2^2), write = 2 (2^1), execute = 1 (2^0) and then add them together to get 7.
I use this technique in several web applications, where I'd usually store the variable into a field and give it a type of MEDIUMINT or whatever, depending on the number of different values.
What I'm interested in, is whether or not there is a practical limit to the number of values you can store like this? For example, if the number was over 64, you couldn't use (64 bit) integers any more. If this was the case, what would you use? How would it affect your program logic (ie: could you still use bitwise comparisons)?
I know that once you start getting really large sets of values, a different method would be the optimal solution, but I'm interested in the boundaries of this method.
Off the top of my head, I'd write a set_bit and get_bit function that could take an array of bytes and a bit offset in the array, and use some bit-twiddling to set/get the appropriate bit in the array. Something like this (in C, but hopefully you get the idea):
// sets the n-th bit in |bytes|. num_bytes is the number of bytes in the array
// result is 0 on success, non-zero on failure (offset out-of-bounds)
int set_bit(unsigned char* bytes, unsigned long num_bytes, unsigned long offset)
{
    // make sure offset is valid (offset is unsigned, so only the upper bound needs checking)
    if (offset >= (num_bytes << 3)) { return -1; }

    // set the right bit
    bytes[offset >> 3] |= (unsigned char)(1 << (offset & 0x7));
    return 0; // success
}

// gets the n-th bit in |bytes|. num_bytes is the number of bytes in the array
// returns -1 on error, 0 if the bit is "off", a positive number if "on"
int get_bit(const unsigned char* bytes, unsigned long num_bytes, unsigned long offset)
{
    // make sure offset is valid
    if (offset >= (num_bytes << 3)) { return -1; }

    // get the right bit
    return bytes[offset >> 3] & (1 << (offset & 0x7));
}
I've used bit masks in filesystem code where the bit mask is many times bigger than a machine word. Think of it like an "array of booleans" (journalling masks in flash memory, if you want to know).
Many compilers know how to do this for you. Add a bit of OO code to have types that operate sensibly, and then your code starts expressing its intent, not some bit-banging.
My 2 cents.
With a 64-bit integer, you can store values up to 2^64-1; 64 is only 2^6. So yes, there is a limit, but if you need more than 64 bits' worth of flags, I'd be very interested to know what they were all doing :)
How many states do you need to keep track of? If you have 64 potential flags, the number of combinations they can take on is the full range of a 64-bit integer.
If you need to worry about 128 flags, then a pair of bit vectors would suffice (2^64 * 2).
Addition: in Programming Pearls, there is an extended discussion of using a bit array of length 10^7, implemented in integers (for recording which 7-digit numbers are in use) - it's very fast, and very appropriate for the task described in that chapter.
Some languages (I believe Perl does, not sure) permit bitwise arithmetic on strings, giving you a much greater effective range: (string length × 8-bit chars) combinations.
However, I wouldn't use a single value for superimposing more than one type of data. The basic r/w/x triplet of 3-bit ints is probably the upper "practical" limit, not for space-efficiency reasons, but for practical development reasons.
(PHP uses this system to control its error messages, and I have already found it a bit over the top when you have to define values where PHP's constants are not available and you have to generate the integer by hand; to be honest, if chmod didn't support the 'ugo+rwx' style syntax, I'd never want to use it, because I can never remember the magic numbers.)
The instant you have to crack open a constants table to debug code, you know you've gone too far.
Old thread, but it's worth mentioning that there are cases requiring bloated bit masks, e.g., molecular fingerprints, which are often generated as 1024-bit arrays that we have packed into 32 bigint fields (SQL Server not supporting UInt32). Bitwise operations work fine - until your table starts to grow and you notice the sluggishness of the separate function calls. The binary data type would work, were it not for T-SQL's ban on bitwise operators taking two binary operands.
For example, .NET uses an array of integers as the internal storage for its BitArray class.
Practically, there's no other way around it.
That being said, in SQL you will need more than one column (or BLOBs) to store all the states.
You tagged this question SQL, so I think you need to consult with the documentation for your database to find the size of an integer. Then subtract one bit for the sign, just to be safe.
Edit: Your comment says you're using MySQL. The documentation for MySQL 5.0 Numeric Types states that the maximum size of a NUMERIC is 64 or 65 digits. That's 212 bits for 64 digits.
Remember that your language of choice has to be able to work with those digits, so you may be limited to a 64-bit integer anyway.