I have some NSData that may or may not contain invalid UTF-8, but any parts of it that are valid UTF-8 should be interpreted as such. If I use [[NSString alloc] initWithData:data encoding:NSUTF8StringEncoding], it just returns nil if the data has invalid UTF-8 anywhere. How can I get it to replace the invalid UTF-8 with �, like TextEncoder does?
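As far as I know, Foundation offers no built-in lossy UTF-8 decode, so one approach (a sketch, not a tested implementation) is to sanitize the bytes yourself, substituting U+FFFD (EF BF BD in UTF-8) for each invalid sequence, and then hand the cleaned buffer to Foundation:

```c
#include <assert.h>
#include <stdlib.h>
#include <string.h>

// Sketch: copy `in` (len bytes) into a freshly malloc'd NUL-terminated buffer,
// replacing each invalid UTF-8 sequence with U+FFFD (EF BF BD).
// Worst case every input byte is invalid, so allocate 3x + 1.
// Note: this simplified check does not reject overlong or surrogate
// encodings; a full validator would.
static char *lossyUTF8(const unsigned char *in, size_t len) {
    char *out = malloc(3 * len + 1);
    size_t o = 0;
    for (size_t i = 0; i < len; ) {
        unsigned char b = in[i];
        size_t n;                                   // expected sequence length
        if      (b < 0x80)                          n = 1;
        else if ((b & 0xE0) == 0xC0 && b >= 0xC2)   n = 2;
        else if ((b & 0xF0) == 0xE0)                n = 3;
        else if ((b & 0xF8) == 0xF0 && b <= 0xF4)   n = 4;
        else                                        n = 0;  // invalid lead byte
        int ok = (n > 0) && (i + n <= len);
        for (size_t k = 1; ok && k < n; k++)
            ok = (in[i + k] & 0xC0) == 0x80;        // continuation bytes
        if (ok) {
            memcpy(out + o, in + i, n); o += n; i += n;
        } else {
            memcpy(out + o, "\xEF\xBF\xBD", 3); o += 3; i += 1;
        }
    }
    out[o] = '\0';
    return out;
}
```

Hypothetically, on the Cocoa side: `char *clean = lossyUTF8(data.bytes, data.length); NSString *s = [NSString stringWithUTF8String:clean]; free(clean);`.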
Related
I'm wondering how to convert
NSString = "\xC4"; ....
to a real NSString represented in normal format.
This is fundamentally a question about Xcode UTF-8 literals. Of course, it is ambiguous what you actually mean by "\xC4": without an encoding specified, it means nothing.
If you mean the character whose Unicode code point is 0x00C4 then I would think (though I haven't tested) that this will do what you want.
NSString *s = @"\u00C4";
First are you sure you have \xC4 in your string? Consider:
NSString *one = @"\xC4\x80";
NSString *two = @"\\xC4\\x80";
NSLog(@"%@ | %@", one, two);
This will output:
Ā | \xC4\x80
If you are certain your string contains the four characters \xC4, are you sure it is UTF-8 encoded as ASCII? Above you will see I added \x80; this is because \xC4 alone is not valid UTF-8, it is only the first byte of a two-byte sequence. Maybe you have only shown a sample of your input and the second byte is present; if not, you do not have UTF-8 encoded as ASCII.
If you are certain it is UTF-8 encoded as ASCII, you will have to convert it yourself. It might seem the Cocoa string encoding methods would handle it, especially as what you appear to have is a string as it might be written in Objective-C source code. Unfortunately the obvious candidate, NSNonLossyASCIIStringEncoding, only handles octal and Unicode escapes, not the hexadecimal escapes in your string.
You can use any algorithm you like to convert it. One choice would be a simple finite state machine which scans the input a byte at a time and recognises the four-byte sequence \, x, hex-digit, hex-digit, combining the two hex digits into a single byte. NSString is not the best choice for byte-at-a-time processing; you may be better off converting to C strings, e.g.:
// sample input, all characters should be ASCII
NSString *input = @"\\xC4\\x80";
// obtain a C string containing the ASCII characters
const char *cInput = [input cStringUsingEncoding:NSASCIIStringEncoding];
// allocate a buffer of the correct length for the result
char cOutput[strlen(cInput)+1];
// call your function to decode the hexadecimal escapes
convertAsciiEncodedUTF8(cInput, cOutput);
// create a NSString from the result
NSString *output = [NSString stringWithCString:cOutput encoding:NSUTF8StringEncoding];
You just need to write the finite state machine, or other algorithm, for convertAsciiEncodedUTF8.
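For illustration only, here is one minimal shape such a scanner might take in plain C. The function name matches the sketch above; the body is an assumption, collapsing the state machine into a lookahead check:

```c
#include <assert.h>
#include <ctype.h>
#include <stdlib.h>
#include <string.h>

// Sketch: decode "\xHH" escapes in an ASCII C string into raw bytes,
// copying every other character through unchanged.
// The output buffer must be at least strlen(input)+1 bytes.
static void convertAsciiEncodedUTF8(const char *input, char *output) {
    const char *p = input;
    char *out = output;
    while (*p) {
        if (p[0] == '\\' && p[1] == 'x' &&
            isxdigit((unsigned char)p[2]) && isxdigit((unsigned char)p[3])) {
            char hex[3] = { p[2], p[3], 0 };
            *out++ = (char)strtol(hex, NULL, 16);  // combine the two hex digits
            p += 4;
        } else {
            *out++ = *p++;                         // pass through, incl. lone '\'
        }
    }
    *out = '\0';
}
```

Feeding it the ASCII characters \xC4\x80 should yield the two raw bytes 0xC4 0x80, which are valid UTF-8 for Ā.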
(If you write an algorithm and it fails ask another question showing your code, somebody will probably help you. But don't expect someone to write it for you.)
HTH
I'm facing a problem with composing Unicode characters in Obj-C, described by the next example code, that tries to combine 'e' with acute accent:
NSLog(@"Composing with Unicode literal: '%@'\nComposing with UTF-8 literal: '%@'",
[[NSString stringWithUTF8String:"e\u0301"]
precomposedStringWithCanonicalMapping],
[[NSString stringWithUTF8String:"e\xc2\xb4"] // "\xc2\xb4" is UTF-8 rep of "\u0301"
precomposedStringWithCanonicalMapping]);
The output is:
Composing with Unicode literal: 'é'
Composing with UTF-8 literal: 'e´'
So the code yields the correct result only when the acute is specified as \u literal, while using UTF-8 representation appears to produce wrong result. My question: Is there a way to use UTF-8 nevertheless?
You have the wrong UTF-8 encoding for the combining accent.
Change \xc2\xb4 to \xcc\x81; this change will give you the expected result.
The accent you were using is the non-combining accent, U+00B4 ACUTE ACCENT.
You are using the wrong acute accent for combining:
NSString *utf = [[NSString stringWithUTF8String:"e\xcc\x81"] precomposedStringWithCanonicalMapping]; // "\xcc\x81" is the UTF-8 rep of "\u0301"
NSLog(@"utf: %@", utf);
Output:
utf: é
See COMBINING ACUTE ACCENT
In my application, I am getting some string values from a server, but I'm not ending up with the right string.
بسيط is the string from the server side, but what I am getting is بسÙØ·.
I tried to test the response string in an online decoder:
http://www.cafewebmaster.com/online_tools/utf8_encode
It is UTF-8 encoded, but I couldn't decode the string on the iPhone side.
I took a look at these Stack Overflow links as reference
Converting escaped UTF8 characters back to their original form
unicode escapes in objective-c
utf8_decode for objective-c
but none of them helped.
I don't understand from your question the following points:
Do you have access to the server side (I mean the programming of it)?
How do you send and receive data to the server?
For the first question I will assume that the server is programmed to send you text in UTF-8 encoding.
Now on the iPhone if you are sending to the server using sockets use the following:
NSString *messageToSend = @"The text in the language you like";
const uint8_t *str = (uint8_t *) [messageToSend cStringUsingEncoding:NSUTF8StringEncoding];
[self writeToServer:str];
Where the function writeToServer is your function that will send the data to the server.
If you are willing to put the data in a SQLite3 database use:
sqlite3_bind_text(statement, 2, [#"The text in the language you like" UTF8String], -1, NULL);
If you are receiving the data from the server (again using sockets) do the following:
[rowData appendBytes:(const void *)buf length:len];
NSString *strRowData = [[NSString alloc] initWithData:rowData encoding:NSUTF8StringEncoding];
I hope this covers all the cases you need.
Without any source it is hard to say anything conclusive, but at some point you are interpreting a UTF-8 encoded string as ISO-8859-1, and (wrongfully) converting it to UTF-8:
Analysis for string 'بسيط':
raw length: 8
logical length: 4
raw bytes: 0xD8 0xA8 0xD8 0xB3 0xD9 0x8A 0xD8 0xB7
interpreted as ISO-8859-1 (بسÙØ·): 0xC3 0x98 0xC2 0xA8 0xC3 0x98 0xC2 0xB3 0xC3 0x99 0xC2 0x8A 0xC3 0x98 0xC2 0xB7
So at some point you should probably find some reference to ISO-8859-1 in your code. Find it and remove it.
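For concreteness, the faulty step can be reproduced byte for byte in plain C. This is a sketch: ISO-8859-1 maps bytes 0x00 to 0xFF straight onto the same Unicode code points, so UTF-8-encoding them expands every byte above 0x7F into two:

```c
#include <assert.h>
#include <stddef.h>

// Sketch of the corruption described above: treat each byte of an already
// UTF-8-encoded string as if it were an ISO-8859-1 character and encode
// *that* as UTF-8. Every byte >= 0x80 becomes a two-byte C2/C3 sequence.
// Returns the number of output bytes written.
static size_t latin1ToUTF8(const unsigned char *in, size_t len,
                           unsigned char *out) {
    size_t o = 0;
    for (size_t i = 0; i < len; i++) {
        if (in[i] < 0x80) {
            out[o++] = in[i];
        } else {
            out[o++] = 0xC0 | (in[i] >> 6);    // lead byte (0xC2 or 0xC3)
            out[o++] = 0x80 | (in[i] & 0x3F);  // continuation byte
        }
    }
    return o;
}
```

Running the raw bytes of بسيط (0xD8 0xA8 ...) through this reproduces exactly the 0xC3 0x98 0xC2 0xA8 ... sequence in the analysis above; the repair is the inverse mapping.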
SOLVED: I fixed the issue using this link:
Different kind of UTF8 decoding in NSString
NSString *string = @"بسÙØ·";
I tried this method:
[NSString stringWithUTF8String:(char *)[string cStringUsingEncoding:NSISOLatin1StringEncoding]]
Thank you.
I have a list of unicode char "codes" that I'd like to print using \u escape sequence (e.g. \ue415), as soon as I try to compose it with something like this:
// charCode comes as NSString object from PList
NSString *str = [NSString stringWithFormat:@"\u%@", charCode];
the compiler warns me about incomplete character code. Can anyone help me with this trivial task?
I think you can't do it the way you're trying: the \uxxxx escape sequence indicates that a constant is a Unicode character, and that conversion is processed at compile time.
What you need is to convert your charCode to an integer number and use that value as format parameter:
unichar codeValue = (unichar) strtol([charCode UTF8String], NULL, 16);
NSString *str = [NSString stringWithFormat:@"%C", codeValue];
NSLog(@"Character with code \\u%@ is %C", charCode, codeValue);
Sorry, that may not be the best way to get an int value from a hex representation, but it's the first that came to mind.
Edit: It appears that NSScanner class can scan NSString for number in hex representation:
unsigned int codeValue; // scanHexInt: takes an unsigned int *
[[NSScanner scannerWithString:charCode] scanHexInt:&codeValue];
...
Beware that this kind of re-encoding can go wrong. I had a bug yesterday where some Korean characters were failing to be encoded in UTF-8 properly.
My solution was to change the format string from %s to %@ and avoid the re-encoding issue, although this may not work for you.
Based on codes from #Vladimir, this works for me:
unsigned int codeValue; // scanHexInt: takes an unsigned int *
[[NSScanner scannerWithString:@"0xf8ff"] scanHexInt:&codeValue];
NSLog(@"%C", (unichar)codeValue);
No leading "\u" or "\\u" is needed; from the API doc:
The hexadecimal integer representation may optionally be preceded
by 0x or 0X. Skips past excess digits in the case of overflow,
so the receiver’s position is past the entire hexadecimal representation.
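Outside Foundation, the same two steps (scan the hex string, then emit the character) can be sketched in plain C; hexCodeToUTF8 is a hypothetical helper, covering BMP code points only:

```c
#include <assert.h>
#include <stdlib.h>

// Sketch: parse a hex string such as "e415" or "0xf8ff" into a code point,
// then encode it as a NUL-terminated UTF-8 sequence (out needs >= 4 bytes).
// BMP only; surrogate code points (U+D800..U+DFFF) are not rejected here.
static void hexCodeToUTF8(const char *hex, char *out) {
    unsigned code = (unsigned)strtol(hex, NULL, 16); // accepts a 0x prefix too
    if (code < 0x80) {                               // 1-byte sequence
        out[0] = (char)code;
        out[1] = 0;
    } else if (code < 0x800) {                       // 2-byte sequence
        out[0] = (char)(0xC0 | (code >> 6));
        out[1] = (char)(0x80 | (code & 0x3F));
        out[2] = 0;
    } else {                                         // 3-byte sequence
        out[0] = (char)(0xE0 | (code >> 12));
        out[1] = (char)(0x80 | ((code >> 6) & 0x3F));
        out[2] = (char)(0x80 | (code & 0x3F));
        out[3] = 0;
    }
}
```

For "00C4" this produces the bytes 0xC3 0x84, the UTF-8 form of Ä, mirroring what %C with the scanned value prints.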
This string is a Base64-encoded string:
NSString *string = @"ë§ë ë¼ì´";
This does not show the original string:
NSLog(@"String is %@", [string cStringUsingEncoding:NSMacOSRomanStringEncoding]);
That's not a Base64-encoded string. There are a couple other things going on with your code, too:
You can't include literal non-ASCII characters inside a string constant; rather, you have to use the bytes that make up the character, prefixed with \x; or in the case of Unicode, you can use the Unicode code point, prefixed with \u. So your string should look something like NSString *string = @"\x91\xa4\x91 \x91\x93";. But...
The characters ¼ and ´ aren't part of the MacRoman encoding, so you'll have trouble using them. Are you sure you want a MacRoman string, rather than a Unicode string? Not many applications use MacRoman anymore, anyway.
cStringUsingEncoding: returns a C string, which should be printed with %s, not %@, since it's not an Objective-C object.
That said, your code will sort of work with:
// Using MacRoman encoding in string constant
NSString *s = @"\x91\xa4\x91 \x91\x93";
NSLog(@"%s", [s cStringUsingEncoding:NSMacOSRomanStringEncoding]);
I say "sort of work" because, again, you can't represent that code in MacRoman.
That would be because Mac OS Roman is nothing like Base64 encoding. Base64 encoding is a further encoding applied to the bytes that represent the original string. If you want to see the original string, you will first need to Base64-decode the byte string and then figure out the original string encoding in order to interpret it.
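That order of operations (decode the Base64 transport layer first, interpret the text encoding second) can be sketched in plain C. This is illustrative only, with no padding validation; real code should use a proper Base64 API such as NSData's:

```c
#include <assert.h>
#include <stddef.h>
#include <string.h>

// Minimal Base64 decode sketch: maps alphabet characters back to 6-bit
// values, accumulates them, and emits a byte for every full 8 bits.
// Skips characters outside the alphabet; stops at '='. No error checking.
static size_t base64Decode(const char *in, unsigned char *out) {
    static const char tbl[] =
        "ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789+/";
    size_t o = 0;
    unsigned acc = 0;
    int bits = 0;
    for (const char *p = in; *p && *p != '='; p++) {
        const char *q = strchr(tbl, *p);
        if (!q) continue;                    // ignore non-alphabet characters
        acc = (acc << 6) | (unsigned)(q - tbl);
        bits += 6;
        if (bits >= 8) {
            bits -= 8;
            out[o++] = (unsigned char)(acc >> bits);
        }
    }
    return o;
}
```

Only after this step do the resulting bytes have a text encoding at all; that is the point at which choosing UTF-8 versus MacRoman matters.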