Composing unicode char format for NSString - objective-c

I have a list of Unicode char "codes" that I'd like to print using the \u escape sequence (e.g. \ue415), but as soon as I try to compose it with something like this:
// charCode comes as NSString object from PList
NSString *str = [NSString stringWithFormat:@"\u%@", charCode];
the compiler warns me about incomplete character code. Can anyone help me with this trivial task?

I think you can't do that the way you're trying - the \uXXXX escape sequence indicates that a constant is a Unicode character, and that conversion is processed at compile time.
What you need is to convert your charCode to an integer and use that value as the format parameter:
unichar codeValue = (unichar)strtol([charCode UTF8String], NULL, 16);
NSString *str = [NSString stringWithFormat:@"%C", codeValue];
NSLog(@"Character with code \\u%@ is %C", charCode, codeValue);
Sorry, that might not be the best way to get an int value from a hex representation, but it's the first that came to mind.
Edit: It appears that the NSScanner class can scan an NSString for a number in hex representation:
unsigned int codeValue; // scanHexInt: expects an unsigned int *, not a unichar *
[[NSScanner scannerWithString:charCode] scanHexInt:&codeValue];
...

Beware of characters getting mangled when they take a round trip through C strings and 8-bit encodings. I had a bug yesterday where some Korean characters were failing to be encoded properly.
My solution was to change the format string from %s to %@ and avoid the re-encoding issue, although this may not work for you.
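For illustration, a sketch of the difference (the sample text and calls here are my own, not from the original bug):
NSString *korean = @"한국어";
// %@ formats the NSString directly - no intermediate C string, no re-encoding
NSLog(@"%@", korean);
// %s takes a C string; any round trip through an 8-bit encoding is where
// characters can get mangled or dropped
NSLog(@"%s", [korean UTF8String]);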

Based on the code from @Vladimir, this works for me:
unsigned int codeValue; // scanHexInt: takes an unsigned int *, not NSUInteger *
[[NSScanner scannerWithString:@"0xf8ff"] scanHexInt:&codeValue];
NSLog(@"%C", (unichar)codeValue);
No leading "\u" or "\\u" is needed; from the API doc:
The hexadecimal integer representation may optionally be preceded
by 0x or 0X. Skips past excess digits in the case of overflow,
so the receiver’s position is past the entire hexadecimal representation.

Related

Objective-C / C Convert UTF8 Literally to Real string

I'm wondering how to convert
NSString = "\xC4"; ....
to a real NSString represented in normal format.
This is fundamentally related to Xcode UTF-8 literals. Of course, it is ambiguous what you actually mean by "\xC4" - without an encoding specified, it means nothing.
If you mean the character whose Unicode code point is 0x00C4, then I would think (though I haven't tested) that this will do what you want:
NSString *s = @"\u00C4";
First are you sure you have \xC4 in your string? Consider:
NSString *one = @"\xC4\x80";
NSString *two = @"\\xC4\\x80";
NSLog(@"%@ | %@", one, two);
This will output:
Ā | \xC4\x80
If you are certain your string contains the four characters \xC4, are you sure it is UTF-8 encoded as ASCII? Above you will see I added \x80; this is because \xC4 is not valid UTF-8 on its own - it is the first byte of a two-byte sequence. Maybe you have only shown a sample of your input and the second byte is present; if not, you do not have UTF-8 encoded as ASCII.
If you are certain it is UTF-8 encoded as ASCII you will have to convert it yourself. It might seem the Cocoa string encoding methods would handle it, especially as what you appear to have is a string as it might be written in Objective-C source code. Unfortunately, the obvious encoding, NSNonLossyASCIIStringEncoding, only handles octal and Unicode escapes, not the hexadecimal escapes in your string.
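To illustrate what that encoding does handle, a small sketch (my own example, not from the original answer) - a \u escape decodes, while the \xHH sequences above are not recognised:
// NSNonLossyASCIIStringEncoding understands \ddd octal and \uXXXX escapes
NSString *escaped = @"\\u0100";
NSData *ascii = [escaped dataUsingEncoding:NSASCIIStringEncoding];
NSString *decoded = [[NSString alloc] initWithData:ascii
                                          encoding:NSNonLossyASCIIStringEncoding];
NSLog(@"%@", decoded); // Ā (U+0100)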
You can use any algorithm you like to convert it. One choice would be a simple finite state machine which scans the input one byte at a time and recognises the four-byte sequence: \, x, hex-digit, hex-digit; and combines the two hex digits into a single byte. NSString is not the best choice for byte-at-a-time string processing, so you may be better off converting to C strings, e.g.:
// sample input, all characters should be ASCII
NSString *input = @"\\xC4\\x80";
// obtain a C string containing the ASCII characters
const char *cInput = [input cStringUsingEncoding:NSASCIIStringEncoding];
// allocate a buffer of the correct length for the result
char cOutput[strlen(cInput) + 1];
// call your function to decode the hexadecimal escapes
convertAsciiEncodedUTF8(cInput, cOutput);
// create an NSString from the result
NSString *output = [NSString stringWithCString:cOutput encoding:NSUTF8StringEncoding];
You just need to write the finite state machine, or other algorithm, for convertAsciiEncodedUTF8.
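As a starting point only, here is one possible sketch of that state machine (it assumes well-formed \xHH escapes and ASCII input; real code needs error handling):
#include <ctype.h>
#include <stdlib.h>

// Decodes ASCII "\xHH" escapes in src into raw bytes in dst.
// dst must be at least as large as src; the output never grows.
static void convertAsciiEncodedUTF8(const char *src, char *dst)
{
    while (*src) {
        if (src[0] == '\\' && src[1] == 'x'
            && isxdigit((unsigned char)src[2])
            && isxdigit((unsigned char)src[3])) {
            char hex[3] = { src[2], src[3], '\0' };
            *dst++ = (char)strtol(hex, NULL, 16); // combine the two hex digits
            src += 4;                             // consume \, x, digit, digit
        } else {
            *dst++ = *src++;                      // copy ordinary bytes through
        }
    }
    *dst = '\0';
}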
(If you adapt this sketch, or write your own algorithm, and it fails, ask another question showing your code; somebody will probably help you.)
HTH

What is the right way to replace a given unicode char in an NSString instance?

I have an NSString instance (let's call it myString) containing the following UTF-8 Unicode character: \xc2\x96 (that is the long dash seen in, e.g., MS Word).
When printing the NSString to the console using NSLog and the %@ format specifier, the character is replaced by an upside-down question mark indicating that something is wrong - and when using it as text in a table cell, the Unicode character simply appears as a blank space (not the empty string - a blank space).
To solve this, I would like to replace the \xc2\x96 Unicode character with a "normal" dash - at first I thought this would be a 10-second task, but after some research I have not yet found the "right way" to do this, and this is where I would like your help.
What I have tried:
When I print myString in hex like this, NSLog(@"%x", myString), I get the hex value 96 for the character represented by \xc2\x96.
Using this information I have made the following implementation to replace it with its "normal" dash equivalent:
for (int index = 0; index < [myString length]; index++)
{
    NSLog(@"Hex:'%x' Char:'%c'", [myString characterAtIndex:index], [myString characterAtIndex:index]);
    if ([[NSString stringWithFormat:@"%x", [myString characterAtIndex:index]] isEqualToString:@"96"])
        myString = [myString stringByReplacingCharactersInRange:NSMakeRange(index, 1) withString:@"-"];
}
... it works, but my eyes don't like it, and I would like to know if this can be done in a much cleaner, "right" way - e.g. like C#'s String.Replace(char, char), which supports Unicode characters.
So to wrap up:
I'm looking for the "right way" to replace Unicode chars in a string - I have done some research, but apparently there are only methods available that replace occurrences of a given NSString with another NSString.
I have read the following:
https://stackoverflow.com/a/5223737/700926
https://stackoverflow.com/a/5217703/700926
https://stackoverflow.com/a/714009/700926
https://stackoverflow.com/a/668254/700926
https://stackoverflow.com/a/2039396/700926
... but all of them explain how to replace a given NSString with another NSString and do not cover how specific Unicode characters (in particular double-byte ones) can be replaced.
You can make your string mutable (i.e. use an NSMutableString instead of an NSString). Also, the call to [[NSString stringWithFormat:@"%x", character] isEqualToString:@"96"] is as inefficient as possible - why not simply if (character == 0x96)? All in all, try
NSString *longDash = @"\xc2\x96";
[string replaceOccurrencesOfString:longDash withString:@"-"
                           options:0 range:NSMakeRange(0, [string length])];
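In context, that looks like the following minimal sketch (assuming myString from the question):
NSMutableString *buffer = [myString mutableCopy];
[buffer replaceOccurrencesOfString:@"\xc2\x96"
                        withString:@"-"
                           options:0
                             range:NSMakeRange(0, [buffer length])];
myString = [buffer copy]; // back to an immutable NSString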

NSString to char[] in Objective-c

I have to convert an NSString to a null-terminated char[].
The way I'm doing it is like this:
NSString *regC = @"1234";
char REG[[regC length] + 1];
BOOL result = [regC getCString:REG maxLength:32 encoding:NSUTF8StringEncoding];
and I don't know if it is the correct way.
REG has to be a 5-char array, with a null in the last one.
Thanks.
const char *REG = [@"test" UTF8String];
You can't size a plain char[] at runtime unless you use a C99 variable-length array (as your question does) or malloc/free, but with the above approach you won't need to manage memory at all.
This is the correct way to convert as long as you use only ASCII characters in your NSString; they are represented as single-byte characters in UTF-8 encoding.
If you use some non-ASCII characters, you should make your REG buffer bigger.
Upd: one more thing: you should pass your actual buffer size as the maxLength: parameter in order to avoid a buffer overrun - not 32, but [regC length] + 1.
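Putting those fixes together, a corrected sketch of the original snippet (the sizing still assumes ASCII-only content, as noted above):
NSString *regC = @"1234";
// +1 for the terminating NUL; sufficient only while the content is ASCII
char REG[[regC length] + 1];
BOOL ok = [regC getCString:REG
                 maxLength:sizeof(REG) // the real buffer size, not a hard-coded 32
                  encoding:NSUTF8StringEncoding];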

NSString Unicode display

I want to look at the display of a number of Unicode characters using a for loop. However, the compiler doesn't like "%x" or "%d" in the string used to build a Unicode escape. Is there a workaround?
for (int k = 0; k < 16; k++) {
    lbl.text = [NSString stringWithFormat:@"\u00B%x", k]; // <-- incomplete universal character name \u00B
}
thanks
I'm not sure I fully understand what you're trying to achieve. For this answer, I assume that you want to generate the Unicode characters in the range between B0 and BF.
Your code doesn't work due to the \u escape sequence (and not because of the %x or %d format specifiers). Just read the error message carefully. The code assumes that the %x specifier will be substituted with a number first and that the escape sequence will be evaluated second. However, it happens the other way round: first the \u sequence is evaluated by the compiler, and an error is thrown because it is invalid.
A better (and simpler) approach is the following code:
for (unichar ch = 0xB0; ch <= 0xBF; ch++) {
    lbl.text = [NSString stringWithFormat:@"%C", ch];
}
This code directly puts a Unicode character into the string.
Use this method instead:
+[NSString stringWithUTF8String:]
From the documentation, see the section on Strings and Non-ASCII Characters under "Formatting Strings".
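For example (a small sketch; the byte sequence is the UTF-8 encoding of U+0100):
const char *utf8 = "\xC4\x80";                      // UTF-8 bytes for Ā
NSString *s = [NSString stringWithUTF8String:utf8]; // decodes them at runtime
NSLog(@"%@", s);                                    // Ā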

method of obtaining the number of bytes

NSString *str = xxxxx;
[str length];
This gives the number of characters.
I want to get the number of bytes.
Use -[NSString lengthOfBytesUsingEncoding:].
NSString is a Unicode string. Thus, there is no such thing as byte length without specifying an encoding for the Unicode code points of each letter in the string. As others have pointed out, once you choose an encoding,
-[NSString lengthOfBytesUsingEncoding:]
is what you need.
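For example (the byte count depends entirely on the encoding you choose):
NSString *str = @"日本語";        // 3 characters
NSUInteger chars = [str length]; // 3 (UTF-16 code units)
NSUInteger bytes = [str lengthOfBytesUsingEncoding:NSUTF8StringEncoding]; // 9
NSLog(@"%lu characters, %lu bytes in UTF-8",
      (unsigned long)chars, (unsigned long)bytes);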
You might find this what-you-need-to-know tutorial on Unicode helpful.