How can I get the actual value of the char instead of the int value - objective-c

So at first I was trying to use characterAtIndex:, then convert it to an NSNumber, and then get the int value, but for 9 I got a value of 57. So I knew what was going wrong: I was getting the int value of the character itself.
So I read a little and found atoi, but I get an error that doesn't crash my app, just pauses it.
My code is:
int current = atoi([startSquares characterAtIndex: i]);
Now startSquares is a big string full of numbers, and this above line is in a for loop where i goes from 0 to 99.

57 is the ASCII code for the digit '9'. Assuming that by the "value of the char" you mean "the numeric value of the digit the char represents", you can use a simple trick that works in ASCII (here c is the character you read):
int digit = c - '0';
This trick works because the digits are encoded consecutively, starting with '0' (ASCII code 48). So when you subtract '0' (which is another way to write 48) from 57, you get 9, the value of the digit '9'.

This is bad design; you should use an int array to hold your squares.
But if you absolutely insist on sticking with your approach, dasblinkenlight's way is the way to go: just subtract the int value of the char '0' from the char that you read.

If you have a character c containing '9' and you want the numeric value 9, you can use c - '0'. It isn't clear that this is what you want, though.
If you have an array of char that contains a series of numbers (of more than one digit), you need to advance a pointer through that array; you could then call atoi with that pointer when it points at a digit (see isdigit), or use sscanf, or put it in an NSString and get the next number using intValue. But that would give you an int, not an NSNumber. I don't think you really want an NSNumber, since you can't directly take the square of one.

Related

Using CAST with SUBSTR to limit decimal places

I am using the statement below to convert the stored length in inches to feet, but I'm getting 15 decimal places in the process. I'm not sure whether CAST will work with this statement or not, but either way I can't figure it out. Help?
SUBSTR(LL_WW_HH_INCHES,1,3)/12 AS "LENGTH IN FT"
FYI: LL_WW_HH_INCHES returns 9 digits that look something like this 123456789. I only need the first three digits and want to divide that by 12.

How to use hex counting in filenames for programmatic loading?

I have several directories of 12 .caf files and am loading them programmatically:
NSString *soundToPlay = [NSString stringWithFormat:@"sounds/%d/%d_%d.caf", type, note, timbre];
If I want to, say, increment from 9 to 10 in one of those values, suddenly my string is an extra character long, which makes it harder to manipulate later with something like NSMakeRange. I'd like to keep all these %ds to a single character.
What I'd like to do is name my files using the digits 0-9 but then continue with A, B, C instead of 10, 11, 12. This would keep everything single-character. I'm hoping there's an easy way to do this kind of thing, still allowing stuff like increment, +/-, and modulo. If so, what is it?
%X is the hexadecimal format specifier:
[NSString stringWithFormat:@"sounds/%X/%X_%X.caf", type, note, timbre]
Alternatively, you could always use two-digit numbers. That would also let you select from more than 16 cases:
[NSString stringWithFormat:@"sounds/%02d/%02d_%02d.caf", type, note, timbre]

Trouble converting Dec to Hex (Int to Char)

I know this seems to be a stupid question, but I'm really having trouble here.
I'm working on a project where I have some functions I can't modify. That is, I have some C functions (not really my specialty) inside the Objective-C code that I can modify.
So here it is, to explain a little of what I have.
I'm receiving an NSData like "\xce\x2d\x1e\x08\x08\xff\x7f". I have to put each hex byte in a char array, like this:
cArray[1]=ce;
cArray[2]=2d;
cArray[3]=1e;
cArray[4]=08;
etc., etc. Of course not LIKE THIS, but just so you understand. My initial move was to separate the NSData with subdataWithRange: and fill an array with all the "subdata". The next move would be passing each position of that array to a char array, and that's where I got stuck.
I'm using something like this (I don't have my code right now):
for(int i=0 ; i<=64 ; i++) {
[[arrayOfSubData objectAtIndex:i] getBytes:&charArray[i]];
}
To fill the char array with the hex from my array of subData. That works almost perfectly. Almost.
Taking that example of cArray, my NSLog(@"pos%i: %x", i, charArray[i]) would show me:
pos1: ce
pos2: 2d
pos3: 1e
pos4: 8
And all the "left zeros" are suppressed in the same way. My workaround for the moment (and I'm not sure it is the best practice here) is to take my subDataArray and initWithFormat: a string with it. With that I can transform the string to an int with NSScanner's scanHexInt:, but then I'm stuck again when converting my decimal int back to a hexadecimal CHAR. What would be the best approach to fill my char array that way?
Any help or some "tough love" will be greatly appreciated. Thanks.
According to the normal rules of printf formatting (which NSLog follows also) you want the following:
NSLog(@"pos%i: %02x", i, charArray[i]);
The '0' is a flag that says to pad on the left with zeros. The '2' is a minimum field width: it ensures the output for that field is at least two characters, padded on the left with '0's to fill the space.

simple question about assigning float to int

This is probably something very simple, but I'm not getting the results I'm expecting. I apologise if it's a stupid question; I just don't know what to google for.
Easiest way to explain is with some code:
int var = 2.0*4.0;
NSLog(@"%d", 2.0*4.0);//1
NSLog(@"%d", var);//2
if ((2.0*4.0)!=0) {//3
NSLog(#"true");
}
if (var!=0) {//4
NSLog(#"true");
}
This produces the following output:
0 //1
8 //2
true //3
true //4
The one that I don't understand is line //1. Why are all the others converting (I'm assuming the correct word is "casting", please correct me if I'm wrong) the float into an int, but inside NSLog it's not happening. Does this have something to do with the string formatting %d parameter and it being fussy (for lack of a better word)?
You're telling NSLog that you're passing it an integer with the @"%d" format specifier, but you're not actually giving it an integer; you're giving it a double-precision floating-point value (8.0, as it happens). When you lie to NSLog, its behavior is undefined, and you get unexpected results like this.
Don't lie to NSLog. If you want to convert the result of 2.0*4.0 to an integer before printing, you need to do that explicitly:
NSLog(@"%d", (int)(2.0*4.0));
If, instead, you want to print the result of 2.0*4.0 as a double-precision floating-point number, you need to use a different format specifier:
NSLog(@"%g", 2.0*4.0);
More broadly, this is true of any function that takes a variable number of arguments and some format string to tell it how to interpret them. It's up to you to make sure that the data you pass it matches the corresponding format specifiers; implicit conversions will not happen for you.
First, you never used floats in your program. They are doubles.
Second, the arguments of NSLog, printf and the like are not automatically converted to what you specify using %d or %f. They follow the standard promotion rules for untyped arguments; see the ISO C specification, sections 6.5.2.2.6 and 6.5.2.2.7. Note the rule that inside these calls,
a float is automatically promoted to double,
and any integer smaller than an int is promoted to int (see 6.3.1.1.2).
So, strictly speaking, the %f specifier is not showing a float but a double. See the same document, section 7.19.6.1.8.
Note also that in your cases 1 and 3, the promotion is to double.
In examples 2, 3 and 4, the float is either being assigned to an int (which converts it) or compared with an int (which also converts it). In 1, however, you're passing the float as an argument to a function. The printf function allows all the arguments after the initial format string to be of any type, so this is valid. But since the compiler doesn't know you mean for it to be an int (remember, you haven't done anything to let the compiler know), the float is passed along as a floating-point value. When printf sees the %d formatting specifier, it pops enough bytes for an int from the argument list and interprets those bytes as an int. Those bytes happen to look like an integer 0.
The format specifier %d expects a decimal number, meaning a base-10 integer, not a floating-point value. What you want there is %f if you're trying to get it to print 8.0.
The first parameter to NSLog is a format string; the second (and subsequent) parameters can be of any type. The compiler doesn't know what the types should be at compile time, so it doesn't try to convert them to anything. At run time NSLog assumes the second (and subsequent) parameters are as specified in the format string. If there's a mismatch, unexpected and generally unhappy things happen.
Summary: make sure you pass variables of the right type in the second (and subsequent) parameters.

String type versus char in abap

What are the drawbacks of the String type in abap? When to use it, when not?
An example : I have a text field that should save values ranging from 0 to 12 chars, better to use a string or a Char(12)?
Thanks!
A string is stored as a dynamic array of characters while a char is statically allocated.
Some of the downsides of strings include:
Overhead - because they are dynamic the length must be stored in addition to the actual string.
The substring and offset operators don't work with strings.
Strings cannot be turned into translatable text elements.
So to answer your question, strings should only be used for fairly long values with a wide range of lengths where the additional overhead is negligible relative to the potential wasted space of a static char(x) variable.
I think CHAR is best, because you are 100% sure that the field only has to hold 0-12 characters.
STRING is a variable-length data type, while for CHAR you have to define the length.
A type C field (text field, alphanumeric characters) or a type X (hexadecimal) field has a fixed initial value (blanks, or X'0 … 0'); to avoid the initial-value padding and work with the actual length, the variable-length types are used.
Strings are good when:
The text length will be variable.
Spaces are part of the string (trailing spaces in CHAR fields are lost)
You pass them around a lot (when STRING variable metadata is less than char field size)
You need to get the STRING length often. It is more optimal than with CHAR fields.
CHAR fields are good:
If they are small, they are fast (less than around 32 chars on unicode systems)
CHAR field literals using (') quotes instead of (`) can be made into translatable texts.
Things to remember:
All variables have metadata, but strings also have an internal pointer to the string data, which can add up to 64 bytes to memory consumption. Something to keep in mind.
When assigning a literal text to a variable, try to match the literal type to the variable type. Use 'test' for CHAR and `test` for STRING. This is usually slightly faster.
String variable:
A string is a variable-length data type used to store data of any length. Variable-length fields are used because they save space.
A string can store any number of characters; the memory is allocated at runtime (dynamic memory allocation) according to the size of the string. Strings cannot be declared using PARAMETERS, because the memory allocation is dynamic.
But in your case you already know the maximum length of the field (0-12 characters), so the CHAR type is the best choice. The STRING type is generally used for variable-length data or long values.