What is the difference between integerValue and intValue in Objective-C?

Hi, I am new to Objective-C and I am assigning a text field value to an int variable, PaidLeaves, as below.
Because the text field returns a string value, I have to cast it to an int value, so I use the following code, for example:
PaidLeaves = txtPaidLeaves.text.intValue;
and
PaidLeaves = txtPaidLeaves.text.integerValue;
Above, I am assigning a text field value to an int value.
Both work, but what is the difference between the two expressions?
Thank you.

intValue returns an int.
integerValue returns an NSInteger.
The difference between them is their number of bits or, in simpler terms, the range of values they can store. As said in an answer to a different question:
int is always 32 bits.
long long is always 64 bits.
NSInteger and long are always pointer-sized. That means they're
32 bits on 32-bit systems, and 64 bits on 64-bit systems.
Reference: https://stackoverflow.com/a/4445467/4370893
Consider that Apple has only made 64-bit systems since Mac OS X 10.7 (Lion), which was released in 2011, so I'm going to treat NSInteger as a 64-bit integer here.
So what does that mean?
The first bit of a signed integer number, like NSInteger and int, is used to indicate whether it is positive or negative. The conclusion is that a signed integer number goes from -2^(number of bits - 1) to 2^(number of bits - 1) - 1, so...
int: -2,147,483,648 (-2^31) to 2,147,483,647 (2^31 - 1)
NSInteger: -9,223,372,036,854,775,808 (-2^63) to 9,223,372,036,854,775,807 (2^63 - 1)
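To see the difference in practice, here is a minimal sketch (the string "9999999999" is just an arbitrary value too big for 32 bits; per Apple's documentation, intValue clamps to INT_MAX or INT_MIN on overflow, and integerValue to NSIntegerMax or NSIntegerMin):
NSString *s = @"9999999999";
int i = s.intValue;           // overflows 32 bits: clamps to INT_MAX, 2,147,483,647
NSInteger n = s.integerValue; // fits in 64 bits: 9,999,999,999
NSLog(@"%d vs %ld", i, (long)n);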

Related

What does "Implicit conversion loses integer precision: 'time_t'" mean in Objective-C and how do I fix it?

I'm doing an exercise from a textbook and the book is outdated, so I'm sort of figuring out how it fits into the new system as I go along. I've got the exact text, and it's returning
'Implicit conversion loses integer precision: 'time_t' (aka 'long') to 'unsigned int''.
The book is "Cocoa Programming for Mac OS X" by Aaron Hillegass, third edition and the code is:
#import "Foo.h"
#implementation Foo
-(IBAction)generate:(id)sender
{
// Generate a number between 1 and 100 inclusive
int generated;
generated = (random() % 100) + 1;
NSLog(#"generated = %d", generated);
// Ask the text field to change what it is displaying
[textField setIntValue:generated];
}
- (IBAction)seed:(id)sender
{
// Seed the randm number generator with time
srandom(time(NULL));
[textField setStringValue:#"Generator Seeded"];
}
#end
It's on the srandom(time(NULL)); line.
If I replace time with time_t, it comes up with another error message:
Unexpected type name 'time_t': unexpected expression.
I don't have a clue what either of them mean. A question I read with the same error was apparently something to do with 64- and 32- bit integers but, heh, I don't know what that means either. Or how to fix it.
Well you really need to do some more reading so you understand what these things mean, but here are a few pointers.
When you (as in a human) count you normally use decimal numbers. In decimal you have 10 digits, 0 through 9. If you think of a counter, like on an electric meter or a car odometer, it has a fixed number of digits. So you might have a counter which can read from 000000 to 999999, this is a six-digit counter.
A computer represents numbers in binary, which has two digits, 0 and 1. A Binary digIT is called a BIT. So, thinking about the counter example above, a 32-bit number has 32 binary digits and a 64-bit one has 64 binary digits.
Now if you have a 64-bit number and chop off the top 32-bits you may change its value - if the value was just 1 then it will still be 1, but if it takes more than 32 bits then the result will be a different number - just as truncating the decimal 9001 to 01 changes the value.
Your error:
Implicit conversion loses integer precision: 'time_t' (aka 'long') to 'unsigned int'
is saying you are doing just this: truncating a large number - long is a 64-bit signed integer type on your computer (not on every computer) - to a smaller one - unsigned int is a 32-bit unsigned (no negative values) integer type on your computer.
In your case the loss of precision doesn't really matter as you are using the number in the statement:
srandom(time(NULL));
This line is setting the "seed" - a number used to make sure each run of your program gets different random numbers. It is using the time as the seed; truncating it won't make any difference - it will still be a varying value. You can silence the warning by making the conversion explicit with a cast:
srandom((unsigned int)time(NULL));
But remember, if the value of an expression is important such casts can produce mathematically incorrect results unless the value is known to be in range of the target type.
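For instance, here is a small sketch of such a loss (the value is hypothetical, chosen to need more than 32 bits; a 64-bit long is assumed):
long big = 4294967338L;                 // 2^32 + 42
unsigned int small = (unsigned int)big; // the top 32 bits are dropped
NSLog(@"%ld -> %u", big, small);        // prints 4294967338 -> 42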
Now go read some more!
HTH
It's just a warning: you are assigning a long to an unsigned int.
The solution is simple. Click the yellow warning icon in the left gutter of the particular line where you are assigning that value; it will offer a fix-it. Double-click the fix-it and it will apply the change automatically.
It will insert a cast so the types in the expression match. But next time, try to keep in mind that the types you are assigning should be the same. Hope this helps.

Objective-C / C constant scalar value declarations

I do not see how adding something like L to a number makes a difference. For example, if I took the number 23.54 and made it 23.54L, what difference does that actually make, and when should I use the L (or other suffixes like it) and when not? Doesn't Objective-C already know 23.54 is a long, so why would I make it 23.54L in any case?
If someone could explain, that'd be great. Thanks!
Actually, when a number has a decimal point, like 23.54, the default interpretation is that it's a double, and it's encoded as a 64-bit floating point number. If you put an f at the end, 23.54f, then it's encoded as a 32-bit floating point number. Putting an L at the end declares that the number is a long double, which is stored in 16 bytes (on Intel Macs this is the 80-bit extended-precision format padded to 128 bits).
In most cases, you don't need to add a suffix to a number because the compiler will determine the correct size based on context. For example, in the line
float x = 23.54;
the compiler will interpret 23.54 as a 64-bit double, but in the process of assigning that number to x, the compiler will automatically demote the number to a 32-bit float.
Here's some code to play around with:
NSLog( @"%lu %lu %lu", sizeof(typeof(25.43f)), sizeof(typeof(25.43)), sizeof(typeof(25.43L)) );
int x = 100;
float y = x / 200;
NSLog( @"%f", y );
y = x / 200.0;
NSLog( @"%f", y );
The first NSLog displays the number of bytes for the various types of numeric constants. The second NSLog should print 0.000000, since the number 200 is interpreted as an integer and integer division truncates to an integer. The last NSLog should print 0.500000, since 200.0 is interpreted as a double.
It's a way to force the compiler to treat a constant as a specific type.
23.54 is a double, 23.54L is a long double, and 23.54f is a float.
Use a suffix when you need to specify the type of a constant, or create a variable of a specific type: float foo = 23.54;. Most of the time you don't need a suffix.
This is all plain C.
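As a quick sketch of the common suffixes (the variable names are arbitrary, and the sizeof values shown are typical for 64-bit Apple platforms):
float f = 23.54f;        // f suffix: float constant
double d = 23.54;        // no suffix: a decimal literal defaults to double
long double ld = 23.54L; // L suffix: long double constant
long n = 23L;            // integer suffixes work the same way
unsigned long u = 23UL;
NSLog(@"%lu %lu %lu", sizeof(f), sizeof(d), sizeof(ld)); // typically 4 8 16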

Difference between Objective-C primitive numbers

What is the difference between the C primitive number types used in Objective-C? I know what they are and how to use them (somewhat), but I'm not sure what the capabilities and uses of each one are. Could anyone clear up which ones are best for some scenarios and not others?
int
float
double
long
short
What can I store with each one? I know that some can store more precise numbers and some can only store whole numbers. Say, for example, I wanted to store a latitude (possibly retrieved from a CLLocation object); which one should I use to avoid losing any data?
I also noticed that there are unsigned variants of each one. What does that mean and how is it different from a primitive number that is not unsigned?
Apple has some interesting documentation on this, however it doesn't fully satisfy my question.
Well, first off types like int, float, double, long, and short are C primitives, not Objective-C. As you may be aware, Objective-C is sort of a superset of C. The Objective-C NSNumber is a wrapper class for all of these types.
So I'll answer your question with respect to these C primitives, and how Objective-C interprets them. Basically, each numeric type can be placed in one of two categories: Integer Types and Floating-Point Types.
Integer Types
short
int
long
long long
These can only store, well, integers (whole numbers), and are characterized by two traits: size and signedness.
Size means how much physical memory in the computer a type requires for storage, that is, how many bytes. Technically, the exact memory allocated for each type is implementation-dependent, but there are a few guarantees: (1) char will always be 1 byte, and (2) sizeof(short) <= sizeof(int) <= sizeof(long) <= sizeof(long long).
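A quick way to check the sizes on your own machine is a sketch like this (the output shown is typical for 64-bit Apple platforms, not a guarantee):
NSLog(@"char: %lu, short: %lu, int: %lu, long: %lu, long long: %lu",
      sizeof(char), sizeof(short), sizeof(int), sizeof(long), sizeof(long long));
// Typically prints: char: 1, short: 2, int: 4, long: 8, long long: 8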
Signedness means, simply, whether or not the type can represent negative values. So a signed integer, or int, can represent a certain range of negative and positive numbers (traditionally -2,147,483,648 to 2,147,483,647), and an unsigned integer, or unsigned int, can represent the same range of numbers, but all non-negative (0 to 4,294,967,295).
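To see signedness in action, a minimal sketch (unsigned arithmetic is defined to wrap around):
unsigned int u = 0u - 1u; // wraps around to UINT_MAX: 4,294,967,295
int s = -1;               // signed: simply -1
NSLog(@"%u vs %d", u, s);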
Floating-Point Types
float
double
long double
These are used to store decimal values (aka fractions) and are also categorized by size. Again the only real guarantee you have is that sizeof(float) <= sizeof(double) <= sizeof (long double). Floating-point types are stored using a rather peculiar memory model that can be difficult to understand, and that I won't go into, but there is an excellent guide here.
There's a fantastic blog post about C primitives in an Objective-C context over at RyPress. Lots of intro CS textbooks also have good resources.
Firstly, I would like to explain the difference between an unsigned int and an int. Say that you have a very high number, and that you write a loop iterating with an unsigned int:
for (unsigned int i = 0; i < N; i++)
{ ... }
If N is a number defined with #define, it may be higher than the maximum value storable in an int, even though it fits in an unsigned int. If i overflows, it will start again from zero and you'll go into an infinite loop; that's why I prefer to use an int for loops. (See the sketch below.)
The same happens if by mistake you iterate with an int while comparing it to a long. If N is a long, you should iterate with a long; but if N is an int, you can still safely iterate with a long.
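To make the first pitfall concrete, here is a sketch (N is a hypothetical constant chosen to exceed UINT_MAX, and a 64-bit long is assumed):
#define N 5000000000L
for (unsigned int i = 0; i < N; i++) {
    // i wraps from 4,294,967,295 back to 0 before ever reaching N,
    // so the condition stays true and the loop never terminates
}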
Another pitfall that may occur is when using the shift operator with an integer constant and then assigning the result to an int or long. Maybe you also log sizeof(long), notice that it returns 8, and, not caring about portability, you think you wouldn't lose precision here:
long i = 1 << 34;
But instead, 1 isn't a long, so the shift overflows, and by the time the result is converted to a long you have already lost precision. Instead you should type:
long i = 1L << 34;
Newer compilers will warn you about this.
Taken from this question: Converting Long 64-bit Decimal to Binary.
About float and double, there is a thing to consider: they use a mantissa and an exponent to represent the number. It's something like:
value = mantissa * 2^exponent
So the higher the exponent, the less exact the floating point representation. It may also happen that a number is so large that its representation is inaccurate enough that, surprisingly, printing it gives you a different number:
float f = 9876543219124567;
NSLog(@"%.0f", f); // On my machine it prints 9876543585124352
If I use a double, it prints 9876543219124568, and if I use a long double with the %.0Lf format, it prints the correct value. Always be careful when using floating point numbers; unexpected things may happen.
For example, it may happen that two floating point numbers have almost the same value - you expect them to be equal, but there is a subtle difference, so the equality comparison fails. This has been treated hundreds of times on Stack Overflow, so I will just post this link: What is the most effective way for float and double comparison?
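As a classic illustration, a minimal sketch (the fixed tolerance 1e-9 is a simplification; see the linked question for more robust approaches):
#include <math.h>
double a = 0.1 + 0.2; // actually 0.30000000000000004
double b = 0.3;
BOOL equal = (a == b);                 // NO, surprisingly
BOOL nearlyEqual = fabs(a - b) < 1e-9; // YES: compare with a tolerance instead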

List of Scalar Data Types

I'm looking for a list of all the scalar data types in Objective-C, complete with their ranges (max/min values, etc.).
Sorry for the simple question; I'm just really struggling to find anything like this.
int An integer value between -2,147,483,648 and 2,147,483,647.
unsigned int An integer value between 0 and 4,294,967,295.
float A 32-bit floating point value. Integers with magnitude up to 16,777,216 (2^24) are represented exactly; beyond that, precision is lost.
double A 64-bit floating point value. Integers with magnitude up to 9,007,199,254,740,992 (2^53) are represented exactly.
long An integer value varying in size from 32 bits to 64 bits depending on architecture.
long long A 64-bit integer.
char A single character. Technically it's represented as an integer type.
BOOL A boolean value, which can be either YES or NO.
NSInteger When compiling for a 32-bit architecture, the same as an int; when compiling for a 64-bit architecture, a 64-bit integer between -2^63 and 2^63 - 1.
NSUInteger When compiling for a 32-bit architecture, the same as an unsigned int; when compiling for a 64-bit architecture, a value between 0 and 2^64 - 1.
Source.
char : A character, 1 byte
int : An integer - a whole number, 4 bytes
float : Single precision floating point number, 4 bytes
double : Double precision floating point number, 8 bytes
short : A short integer, 2 bytes
long : A long integer, 4 bytes on 32-bit systems and 8 bytes on 64-bit systems
long long : A long long integer, 8 bytes
BOOL : Boolean (signed char), 1 byte
For more on sizes, check this post
Integer types are signed 2's complement or unsigned, and the standard C variations are provided (char, short, int, long, long long, and unsigned variants of these; see C types on Wikipedia). Sizes may vary between 32-bit and 64-bit environments - see 64-bit computing.
BOOL is an Objective-C special and is defined as signed char; while it can take any value a signed char can, the constants NO and YES are defined for use. The C99 type _Bool (aka bool) is also provided.
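A small sketch of both in use (assuming the traditional signed char definition of BOOL; on some newer Apple platforms BOOL is actually the C bool):
#include <stdbool.h>
BOOL objcFlag = YES; // under the hood, an integer value of 1
_Bool cFlag = true;  // the C99 type mentioned above
NSLog(@"%d %d", objcFlag, cFlag);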
float & double are IEEE 32-bit & 64-bit floating point - see Wikipedia for ranges.
Standard macro constants are provided for the minimum and maximum of all the types, e.g. INT_MAX for int - again, see C types on Wikipedia for these.
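For example, a quick sketch that prints a few of them (limits.h and float.h provide the macros):
#include <limits.h>
#include <float.h>
NSLog(@"int: %d to %d", INT_MIN, INT_MAX);
NSLog(@"long: %ld to %ld", LONG_MIN, LONG_MAX);
NSLog(@"float max: %g, double max: %g", FLT_MAX, DBL_MAX);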

Need a 24-bit type in Objective-C

I need a variable which holds a 24-bit value; what should I use?
Also, do you know of a list of all available types in Objective-C?
Thanks a lot.
You could use an int. It will hold 24 bits. (32, actually)
Objective-C has exactly the same types as plain C. All object references and the id type are technically pointers.
The size of the integer data types (char … long long) is not defined, but their relation and minimum sizes are.
The smallest integer data type guaranteed to hold 24 bits is long int, which must be at least 32 bits.
int may be 16 bits on some systems.
3 chars will be at least 24 bits, since a char must have 8 bits or more.
An array of 3 unsigned chars will be 24 bits (on most systems).
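If you genuinely need a value limited to exactly 24 bits (say, a packed RGB color), a struct bit-field is one option - a sketch, with the caveat that bit-field layout and the struct's overall size are implementation-defined:
struct RGB {
    unsigned int value : 24; // exactly 24 bits: 0 to 16,777,215
};
struct RGB c;
c.value = 0xFFFFFF; // the maximum 24-bit value
c.value += 1;       // wraps to 0, as a true 24-bit field would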