Need a 24-bit type in Objective-C - objective-c

I need a variable that holds a 24-bit value; what should I use?
Also, do you know of a list of all the available types in Objective-C?
Thanks a lot.

You could use an int. It will hold 24 bits. (32, actually)

Objective-C has exactly the same types as plain C. All object references and the id type are technically pointers.
The sizes of the integer data types (char … long long) are not fixed, but their relative ordering and minimum sizes are.
The smallest integer data type guaranteed to hold 24 bits is long int, which must be at least 32 bits.
int may be only 16 bits on some systems.
Three chars will be at least 24 bits, since a char must be 8 bits or more.

An array of 3 unsigned chars will be 24 bits (on most systems).
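If you go the three-unsigned-chars route, here is a minimal sketch (plain C, usable as-is in Objective-C) of packing a 24-bit value into such an array and reading it back; the variable names are purely illustrative:

#import <Foundation/Foundation.h>
#include <stdint.h>

// Sketch: pack a 24-bit value into 3 unsigned chars and restore it.
uint32_t value = 0xABCDEF;            // must fit in 24 bits (0 ... 0xFFFFFF)
unsigned char bytes[3];
bytes[0] = (value >> 16) & 0xFF;      // most significant byte
bytes[1] = (value >> 8) & 0xFF;
bytes[2] = value & 0xFF;              // least significant byte

uint32_t restored = ((uint32_t)bytes[0] << 16)
                  | ((uint32_t)bytes[1] << 8)
                  |  (uint32_t)bytes[2];
NSLog(@"0x%X", restored);             // 0xABCDEF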

What is the difference between integerValue and intValue in Objective-C?

Hi, I am new to Objective-C and I am assigning a text field value to an int variable PaidLeaves as below.
Because the text field returns a string value, I have to convert it to an int value, so I use the following code:
For example,
PaidLeaves = txtPaidLeaves.text.intValue;
and
PaidLeaves = txtPaidLeaves.text.integerValue;
Above, I am assigning a text field value to an int variable.
Both work, but what is the difference between the two expressions?
Please tell me.
Thank you.
intValue returns an int.
integerValue returns an NSInteger.
The difference between them is their number of bits, or in simpler terms, the range of values that they can store. As said in an answer to a different question:
int is always 32-bits.
long long is always 64-bits.
NSInteger and long are always pointer-sized. That means they're
32-bits on 32-bit systems, and 64 bits on 64-bit systems.
Reference: https://stackoverflow.com/a/4445467/4370893
Consider that Apple has only made 64-bit systems since Mac OS X 10.7 (Lion), which was released in 2011, so I'm going to treat NSInteger as a 64-bit integer.
So what does that mean?
The first bit of a signed integer, such as NSInteger or int, is used to indicate whether the number is positive or negative. The conclusion is that a signed integer ranges from -2^(number of bits - 1) to 2^(number of bits - 1) - 1, so...
int: -2,147,483,648 (-2^31) to 2,147,483,647 (2^31 - 1)
NSInteger: -9,223,372,036,854,775,808 (-2^63) to 9,223,372,036,854,775,807 (2^63 - 1)
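A minimal sketch of where that difference shows up in practice on a 64-bit system; the string value is just an illustration of a number that fits in an NSInteger but not in an int:

#import <Foundation/Foundation.h>

// Sketch: 3,000,000,000 exceeds the 32-bit int range but fits in a 64-bit NSInteger.
NSString *text = @"3000000000";
int asInt = text.intValue;               // cannot hold the full value in 32 bits
NSInteger asInteger = text.integerValue; // 3000000000 on a 64-bit system
NSLog(@"intValue: %d, integerValue: %ld", asInt, (long)asInteger);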

How to save a more than 20 digit number in an integer in Objective-C (Xcode), any solution?

I am developing a game in Xcode and my game score can grow past 20 digits, like this (1000000000000000000000000). How do I manage this? I want to store a 20+ digit number in an integer and increase and decrease its value in Objective-C.
Apple provides the NSDecimal value type, and an object-wrapped version of it, NSDecimalNumber. This is a decimal floating-point type with a precision of 38 decimal digits, so it can easily hold your 20-digit integer and do arithmetic on it.
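A minimal sketch of keeping such a score in an NSDecimalNumber; the variable names are illustrative only:

#import <Foundation/Foundation.h>

// Sketch: a 25-digit score stored and adjusted with NSDecimalNumber.
NSDecimalNumber *score = [NSDecimalNumber decimalNumberWithString:@"1000000000000000000000000"];
NSDecimalNumber *delta = [NSDecimalNumber decimalNumberWithString:@"500"];

score = [score decimalNumberByAdding:delta];      // increase
score = [score decimalNumberBySubtracting:delta]; // decrease
NSLog(@"score = %@", score);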
If you'd rather stick with "pure" integers then a 128-bit integer type will more than suffice: it can represent 38-digit decimal numbers. Clang (and therefore Xcode) provides this as the extension types __int128 and unsigned __int128, but there is no built-in way to convert them to/from strings for I/O. Converting an integer to a string is, however, a simple algorithm you can implement yourself.
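For example, one possible sketch of such a conversion, turning an unsigned __int128 into a decimal string by repeated division (nothing here is a built-in API; the function name is made up):

#include <stdio.h>

// Sketch: convert an unsigned __int128 to a decimal string.
// 'out' must have room for at least 40 characters (39 digits + terminator).
static void u128_to_string(unsigned __int128 value, char *out) {
    char reversed[40];
    int count = 0;
    do {
        reversed[count++] = '0' + (char)(value % 10); // least significant digit first
        value /= 10;
    } while (value != 0);
    for (int i = 0; i < count; i++) {
        out[i] = reversed[count - 1 - i];             // put digits back in order
    }
    out[count] = '\0';
}

// Usage:
// unsigned __int128 score = (unsigned __int128)1000000000000ULL * 1000000000000ULL;
// char text[40];
// u128_to_string(score, text);
// printf("%s\n", text); // 1000000000000000000000000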
Another integer option is hidden inside CFNumber, which internally uses 128-bit integers. Apple releases this as open source (CFNumber.c), and there are functions in there for addition, negation and conversion to text.
HTH

Difference between Objective-C primitive numbers

What is the difference between Objective-C's C primitive number types? I know what they are and how to use them (somewhat), but I'm not sure what the capabilities and uses of each one are. Could anyone clear up which ones are best for some scenarios and not others?
int
float
double
long
short
What can I store with each one? I know that some can store more precise numbers and some can only store whole numbers. Say, for example, I wanted to store a latitude (possibly retrieved from a CLLocation object); which one should I use to avoid losing any data?
I also noticed that there are unsigned variants of each one. What does that mean and how is it different from a primitive number that is not unsigned?
Apple has some interesting documentation on this; however, it doesn't fully answer my question.
Well, first off types like int, float, double, long, and short are C primitives, not Objective-C. As you may be aware, Objective-C is sort of a superset of C. The Objective-C NSNumber is a wrapper class for all of these types.
So I'll answer your question with respect to these C primitives, and how Objective-C interprets them. Basically, each numeric type can be placed in one of two categories: Integer Types and Floating-Point Types.
Integer Types
short
int
long
long long
These can only store, well, integers (whole numbers), and are characterized by two traits: size and signedness.
Size means how much physical memory in the computer a type requires for storage, that is, how many bytes. Technically, the exact memory allocated for each type is implementation-dependent, but there are a few guarantees: (1) char will always be 1 byte; (2) sizeof(short) <= sizeof(int) <= sizeof(long) <= sizeof(long long).
Signedness simply means whether or not the type can represent negative values. So a signed integer, or int, can represent a range of negative and positive numbers (traditionally -2,147,483,648 to 2,147,483,647), while an unsigned integer, or unsigned int, can represent the same count of values, but all non-negative (0 to 4,294,967,295).
Floating-Point Types
float
double
long double
These are used to store decimal values (aka fractions) and are also categorized by size. Again the only real guarantee you have is that sizeof(float) <= sizeof(double) <= sizeof (long double). Floating-point types are stored using a rather peculiar memory model that can be difficult to understand, and that I won't go into, but there is an excellent guide here.
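A quick sketch you could run to see the actual sizes on your own machine (the exact numbers are implementation-dependent, per the guarantees above; the comments show typical 64-bit Apple results):

#import <Foundation/Foundation.h>

// Sketch: print the storage size of each primitive on the current platform.
NSLog(@"short: %zu, int: %zu, long: %zu, long long: %zu",
      sizeof(short), sizeof(int), sizeof(long), sizeof(long long));
// e.g. short: 2, int: 4, long: 8, long long: 8
NSLog(@"float: %zu, double: %zu, long double: %zu",
      sizeof(float), sizeof(double), sizeof(long double));
// e.g. float: 4, double: 8, long double: 8 or 16 depending on the CPU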
There's a fantastic blog post about C primitives in an Objective-C context over at RyPress. Lots of intro CS textbooks also have good resources.
Firstly I would like to explain the difference between an unsigned int and an int. Say that you have a very high number, and that you write a loop iterating with an unsigned int:
for(unsigned int i=0; i< N; i++)
{ ... }
If N is a number defined with #define, it may be higher than the maximum value storable in an int but not in an unsigned int. If i overflows it will start again from zero and you'll end up in an infinite loop; that's why I prefer to use an int for loops.
The same happens if by mistake you iterate with an int while comparing it to a long. If N is a long you should iterate with a long, but if N is an int you can still safely iterate with a long.
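A small sketch of the wraparound behavior that produces such an endless loop (unsigned arithmetic wraps around; signed overflow is undefined behavior):

#import <Foundation/Foundation.h>
#include <limits.h>

// Sketch: unsigned wraparound in both directions.
unsigned int u = 0;
u--;                        // wraps around to UINT_MAX
NSLog(@"%u", u);            // 4294967295 on platforms with 32-bit unsigned int

unsigned int v = UINT_MAX;
v++;                        // wraps back to 0
NSLog(@"%u", v);            // 0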
Another pitfall that may occur is when using the shift operator with an integer constant and then assigning the result to an int or long. Maybe you also log sizeof(long), notice that it returns 8, and you don't care about portability, so you think you won't lose precision here:
long i= 1 << 34;
But 1 isn't a long, so the shift overflows, and by the time the result is converted to a long the precision has already been lost. Instead you should write:
long i= 1l << 34;
Newer compilers will warn you about this.
Taken from this question: Converting Long 64-bit Decimal to Binary.
About float and double there is one thing to consider: they use a mantissa and an exponent to represent the number. It's something like:
value = mantissa * 2^exponent
So the higher the exponent, the less exactly the floating-point number can be represented. It may also happen that a number is so large that its representation is inaccurate enough that, surprisingly, printing it gives you a different number:
float f = 9876543219124567;
NSLog(@"%.0f", f); // On my machine it prints 9876543585124352
If I use a double it prints 9876543219124568, and if I use a long double with the %.0Lf format it prints the correct value. Always be careful when using floating-point numbers; unexpected things may happen.
For example, two floating-point numbers may have almost the same value, so that you expect them to be equal, yet there is a subtle difference and the equality comparison fails. But this has been treated hundreds of times on Stack Overflow, so I will just post this link: What is the most effective way for float and double comparison?
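A minimal sketch of the usual workaround, comparing against a small tolerance instead of using ==; the tolerance value is only an illustration:

#import <Foundation/Foundation.h>
#include <math.h>

// Sketch: 0.1 + 0.2 is not exactly 0.3 in binary floating point.
double a = 0.1 + 0.2;
double b = 0.3;
NSLog(@"exact ==:         %d", a == b);              // 0 (false)
NSLog(@"within tolerance: %d", fabs(a - b) < 1e-9);  // 1 (true)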

List of Scalar Data Types

I'm looking for a list of all the scalar data types in Objective-C, complete with their ranges (max/min values etc.).
Sorry for the simple question; I'm just really struggling to find anything like this.
int: an integer value between -2,147,483,648 and 2,147,483,647.
unsigned int: an integer value between 0 and 4,294,967,295.
float: a 32-bit floating-point value; it represents integers exactly only up to +/- 16,777,216 (2^24).
double: a 64-bit floating-point value; it represents integers exactly only up to +/- 9,007,199,254,740,992 (2^53).
long: an integer value varying in size from 32 bits to 64 bits depending on the architecture.
long long: a 64-bit integer.
char: a single character; technically a 1-byte integer type (character literals in C actually have type int).
BOOL: a Boolean value; can be either YES or NO.
NSInteger: when compiling for a 32-bit architecture, the same as an int; when compiling for a 64-bit architecture, a 64-bit integer between -9,223,372,036,854,775,808 and 9,223,372,036,854,775,807.
NSUInteger: when compiling for a 32-bit architecture, the same as an unsigned int; when compiling for a 64-bit architecture, a value between 0 and 2^64 - 1.
Source.
char: a character, 1 byte
int: an integer (a whole number), 4 bytes
float: single-precision floating-point number, 4 bytes
double: double-precision floating-point number, 8 bytes
short: a short integer, 2 bytes
long: a long integer, 4 bytes (8 bytes on 64-bit Apple platforms)
long long: a long long integer, 8 bytes
BOOL: Boolean (signed char), 1 byte
For more on sizes check this post
Integer types are signed 2's complement or unsigned, and the standard C variations are provided (char, short, int, long, long long and unsigned variants of these; see C types on Wikipedia). Sizes may vary between 32-bit and 64-bit environments; see 64-bit computing.
BOOL is an Objective-C special and is defined as signed char; while it can take any value a signed char can, the constants NO and YES are defined for use. The C99 type _Bool (aka bool) is also provided.
float & double are IEEE 32-bit & 64-bit floating point - see Wikipedia for ranges.
Standard macro constants are provided for the minimum and maximum of all the types, e.g. INT_MAX for int; again see C types on Wikipedia for these.
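For instance, a small sketch that prints a few of those limits (the macros come from <limits.h> and <float.h>):

#import <Foundation/Foundation.h>
#include <limits.h>
#include <float.h>

// Sketch: print the range of a few scalar types on the current platform.
NSLog(@"int:          %d ... %d", INT_MIN, INT_MAX);
NSLog(@"unsigned int: 0 ... %u", UINT_MAX);
NSLog(@"long:         %ld ... %ld", LONG_MIN, LONG_MAX);
NSLog(@"float max:    %e", FLT_MAX);
NSLog(@"double max:   %e", DBL_MAX);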

Ints to Bytes: Endianness a Concern?

Do I have to worry about endianness in this case (integers MUST be 0-127):
int a = 120;
int b = 100;
int c = 50;
char theBytes[] = {a, b, c};
I think that, since each integer sits in its own byte, I don't have to worry about endianness when passing the byte array between systems. This has also worked out empirically. Am I missing something?
Endianness only affects the ordering of bytes within an individual value. Individual bytes are not subject to endian issues, and arrays are always sequential, so byte arrays are the same on big- and little-endian architectures.
Note that this doesn't necessarily mean that only using chars will make datatypes 100% byte-portable. Structs may still include architecture-dependent padding, for example, and one system may have unsigned chars while another uses signed (though I see you sidestep this by only allowing 0-127).
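A small sketch of the contrast: writing values byte-by-byte is endian-independent, while copying the raw bytes of a multi-byte integer is not (the 0x01020304 constant is just an illustration):

#include <stdint.h>
#include <string.h>

int a = 120, b = 100, c = 50;

// Endian-independent: each value is explicitly placed in one byte.
char theBytes[] = { a, b, c };         // {120, 100, 50} on any architecture

// Endian-dependent: copying the raw bytes of a multi-byte integer.
uint32_t value = 0x01020304;
unsigned char raw[4];
memcpy(raw, &value, sizeof value);
// raw is {0x04, 0x03, 0x02, 0x01} on little-endian machines
// and {0x01, 0x02, 0x03, 0x04} on big-endian machines.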
No, you don't need to worry; the compiler produces code that performs the conversions and assignments correctly.