NSString Decoding Problem - objective-c

This string is a Base64-encoded string:
NSString *string=@"ë§ë ë¼ì´";
This does not show the original string:
NSLog(@"String is %@",[string cStringUsingEncoding:NSMacOSRomanStringEncoding]);

That's not a Base64-encoded string. There are a couple other things going on with your code, too:
You can't include literal non-ASCII characters inside a string constant; rather, you have to use the bytes that make up the character, prefixed with \x; or in the case of Unicode, you can use the Unicode code point, prefixed with \u. So your string should look something like NSString *string = @"\x91\xa4\x91 \x91\x93";. But...
The characters ¼ and ´ aren't part of the MacRoman encoding, so you'll have trouble using them. Are you sure you want a MacRoman string, rather than a Unicode string? Not many applications use MacRoman anymore, anyway.
cStringUsingEncoding: returns a C string, which should be printed with %s, not %@, since it's not an Objective-C object.
That said, your code will sort of work with:
// Using MacRoman encoding in string constant
NSString *s = @"\x91\xa4\x91 \x91\x93";
NSLog(@"%s", [s cStringUsingEncoding:NSMacOSRomanStringEncoding]);
I say "sort of work" because, again, you can't represent that code in MacRoman.

That would be because Mac OS Roman is nothing like base-64 encoding. Base-64 encoding is a further encoding applied to the bytes that represent the original string. If you want to see the original string, you will first need to base-64 decode the byte string and then figure out the original string encoding in order to interpret it.
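To make the two steps explicit, here is a minimal sketch. It assumes OS X 10.9 / iOS 7 or later for the built-in Base64 support, and it guesses UTF-8 as the original encoding; substitute whatever encoding the bytes actually use.
// Step 1: undo the Base64 transport encoding to recover the raw bytes.
NSString *base64 = @"..."; // whatever Base64 text you were given
NSData *bytes = [[NSData alloc] initWithBase64EncodedString:base64
                                                    options:NSDataBase64DecodingIgnoreUnknownCharacters];
// Step 2: interpret those bytes in the encoding they were originally written in.
// UTF-8 is only a guess here; use the real encoding if you know it.
NSString *original = [[NSString alloc] initWithData:bytes encoding:NSUTF8StringEncoding];
NSLog(@"%@", original);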

Related

Objective-C / C Convert UTF8 Literally to Real string

I'm wondering how to convert
NSString = "\xC4"; ....
to a real NSString represented in normal format.
Fundamentally related to xcode UTF-8 literals. Of course, it is ambiguous what you actually mean by "\xC4" - without an encoding specified, it means nothing.
If you mean the character whose Unicode code point is 0x00C4 then I would think (though I haven't tested) that this will do what you want.
NSString *s = @"\u00C4";
First are you sure you have \xC4 in your string? Consider:
NSString *one = @"\xC4\x80";
NSString *two = @"\\xC4\\x80";
NSLog(@"%@ | %@", one, two);
This will output:
Ā | \xC4\x80
If you are certain your string contains the four characters \xC4, are you sure it is UTF-8 encoded as ASCII? Above you will see I added \x80; this is because \xC4 is not valid UTF-8 on its own, it is the first byte of a two-byte sequence. Maybe you have only shown a sample of your input and the second byte is present; if not, you do not have UTF-8 encoded as ASCII.
If you are certain it is UTF-8 encoded as ASCII you will have to convert it yourself. It might seem the Cocoa string encoding methods would handle it, especially as what you appear to have is a string as it might be written in Objective-C source code. Unfortunately the obvious encoding, NSNonLossyASCIIStringEncoding, only handles octal and unicode escapes, not the hexadecimal escapes in your string.
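For comparison, a small sketch of what NSNonLossyASCIIStringEncoding does handle (backslash-u escapes), which is why it will not help with the \xC4-style input above:
// NSNonLossyASCIIStringEncoding decodes \uXXXX escapes, but not \xNN escapes.
NSString *escaped = @"\\u0100";              // six ASCII characters: \ u 0 1 0 0
NSData *ascii = [escaped dataUsingEncoding:NSASCIIStringEncoding];
NSString *decoded = [[NSString alloc] initWithData:ascii
                                          encoding:NSNonLossyASCIIStringEncoding];
NSLog(@"%@", decoded);                       // Ā (U+0100)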
You can use any algorithm you like to convert it. One choice would be a simple finite state machine which scans the input a byte at a time and recognises the four byte sequence: \, x, hex-digit, hex-digit; and combines the two hex-digits into a single byte. NSString is not the best choice for byte-at-time string processing, you may be better off converting to C strings, e.g.:
// sample input, all characters should be ASCII
NSString *input = @"\\xC4\\x80";
// obtain a C string containing the ASCII characters
const char *cInput = [input cStringUsingEncoding:NSASCIIStringEncoding];
// allocate a buffer of the correct length for the result
char cOutput[strlen(cInput)+1];
// call your function to decode the hexadecimal escapes
convertAsciiEncodedUTF8(cInput, cOutput);
// create a NSString from the result
NSString *output = [NSString stringWithCString:cOutput encoding:NSUTF8StringEncoding];
You just need to write the finite state machine, or other algorithm, for convertAsciiEncodedUTF8.
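By way of illustration only, here is one possible shape such a function could take; treat it as a sketch under the assumptions above, not the definitive implementation (and note it does no error handling for malformed input):
#include <ctype.h>
#include <stdlib.h>

// Copy input to output, replacing every four-character sequence \xHH
// with the single byte 0xHH; everything else passes through unchanged.
static void convertAsciiEncodedUTF8(const char *in, char *out) {
    while (*in) {
        if (in[0] == '\\' && in[1] == 'x'
            && isxdigit((unsigned char)in[2]) && isxdigit((unsigned char)in[3])) {
            char hex[3] = { in[2], in[3], '\0' };
            *out++ = (char)strtol(hex, NULL, 16);
            in += 4;
        } else {
            *out++ = *in++;
        }
    }
    *out = '\0';
}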
(If you write an algorithm and it fails ask another question showing your code, somebody will probably help you. But don't expect someone to write it for you.)
HTH

objective c UTF8String not working with japanese

I would like to show the NSString below on my UILabel:
NSString *strValue=#"你好";
but I cannot show it on my UILabel; I get strange characters!
I use this code to show the text:
[NSString stringWithCString:[strValue UTF8String] encoding:NSUTF8StringEncoding];
I tried [NSString stringWithCString:[strValue cStringUsingEncoding:NSISOLatin1StringEncoding] encoding:NSUTF8StringEncoding] and it worked,
but I cannot show emoticons with cStringUsingEncoding:NSISOLatin1StringEncoding, so I have to use UTF8String.
Any help appreciated.
Your source file is in UTF-8, but the compiler you are using thinks it's ISO-Latin 1. What you think is the string @"你好" is actually the string @"ä½ å¥½" (the same bytes misread as Latin-1). But when you ask NSString to give you this back as ISO-Latin 1, and treat it as UTF-8, you've reversed the process the compiler took and you end up with the original string.
One solution that you can use here is to tell your compiler what encoding your source file is in. There is a compiler flag (for GCC it's -finput-charset=UTF-8, not sure about clang) that will tell the compiler what encoding to use. Curiously, UTF-8 should be the default already, but perhaps you're overriding this with a locale.
A more portable solution is to use only ASCII in your source file. You can accomplish this by replacing the non-ASCII chars with a string escape using \u1234 or \U12345678. In your case, you'd use
NSString *strValue=#"\u4F60\u597D";
Of course, once you get your string constant to be correct, you can ditch the whole encoding stuff and just use strValue directly.

Cocoa base64 decoding. And NSString initWithData:encoding: return nil

I have MIME header:
Subject: =?ISO-2022-JP?B?GyRCJzEnYidWJ1UnWSdRJ1wnURsoQg==?=
=?ISO-2022-JP?B?GyRCJ1kbKEIgGyRCLWIbKEIxNzUzNTk=?=
=?ISO-2022-JP?B?IBskQidjGyhCIBskQidjJ1EnWydkJ1EbKEI=?=
=?ISO-2022-JP?B?IBskQidXGyhCLRskQideJ2AnUidaJ10nbhsoQg==?=
When I try to decode the first string GyRCJzEnYidWJ1UnWSdRJ1wnURsoQg== (Base64-decode and then NSString initWithData:encoding:), everything works. My code works fine for hundreds of different MIME headers except the following...
...When I try to decode the second string GyRCJ1kbKEIgGyRCLWIbKEIxNzUzNTk=, NSString initWithData:encoding: returns nil.
For example, http://2cyr.com/decode/?lang=en decodes all the strings correctly (don't forget to decode the strings from Base64 before using that site).
This isn't a base64 problem, it's an ISO-2022-JP problem. Actually it's a JIS-X-0208 problem. If you look at the base64-decoded (but still ISO-2022-JP encoded) string, you'll see that it contains the sequence ESC $ B - b (bytes 9 through 13). The first three are the ISO-2022-JP shift sequence to shift into JIS-X-0208-1983 (see RFC 1468 for details), and the next two are supposed to be a 2-byte encoding of a character, but if you work it out it's on line 13 of the kuten grid, which isn't defined.
tl;dr: That's not a valid character.
Maybe you are missing a final = in your string?
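A minimal sketch of the decode path described above (assuming OS X 10.9 / iOS 7 or later for the built-in Base64 support); with the second header fragment, this is exactly where initWithData:encoding: comes back nil:
// Base64-decode the encoded-word payload, then interpret it as ISO-2022-JP.
NSString *payload = @"GyRCJ1kbKEIgGyRCLWIbKEIxNzUzNTk=";  // second fragment from the question
NSData *raw = [[NSData alloc] initWithBase64EncodedString:payload options:0];
NSString *text = [[NSString alloc] initWithData:raw
                                       encoding:NSISO2022JPStringEncoding];
if (text == nil) {
    // The bytes shift into JIS X 0208 and then use an undefined code point,
    // so the decode fails even though the Base64 step succeeded.
    NSLog(@"Not valid ISO-2022-JP");
}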

utf8_decode for objective-c [duplicate]

Possible Duplicate:
unicode escapes in objective-c
I have a LATIN1 string.
Artîsté
When I json_encode it, it escapes some chars and converts it to single byte UTF8.
Art\u00eest\u00e9
If I just json_decode it, I believe it is decoding in UTF8
ArtÃ®stÃ©
In order to get my original string back, I have to call utf8_decode
Artîsté
Is there a way to handle this conversion in objective-c?
You might be looking for this:
NSString *string = (some string with non-ASCII characters in it);
char const *string_as_latin1 = [string cStringUsingEncoding:NSISOLatin1StringEncoding];
or possibly this:
NSData *data_latin1 = [string dataUsingEncoding:NSISOLatin1StringEncoding allowLossyConversion:YES];
I have a LATIN1 string.
I don't think you do. Assuming you are talking about PHP, json_encode() only accepts UTF-8 strings, and bails out if it hits a non-UTF-8 high-byte sequence:
json_encode("Art\xeest\xe9")
"Art"
json_encode("Art\xc3\xaest\xc3\xa9")
"Art\u00eest\u00e9"
I think you had a proper UTF-8 string to start with, then you encoded and decoded it to get the exact same UTF-8 string back. But then you're displaying it or processing it in another step you haven't shown us, that treats your string as if it were Latin-1.
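To see that effect in Objective-C terms, here is a rough sketch (the literal is just an illustration): misreading the UTF-8 bytes as Latin-1 produces the mojibake, and the dataUsingEncoding: call from the first answer plays the role of utf8_decode().
NSString *proper = @"Art\u00eest\u00e9";        // "Artîsté" as a proper Unicode string

// Misreading its UTF-8 bytes as Latin-1 is what produces the mojibake.
NSData *utf8Bytes = [proper dataUsingEncoding:NSUTF8StringEncoding];
NSString *mojibake = [[NSString alloc] initWithData:utf8Bytes
                                           encoding:NSISOLatin1StringEncoding];
NSLog(@"%@", mojibake);                          // ArtÃ®stÃ©

// The utf8_decode() equivalent: the Latin-1 bytes of the proper string.
NSData *latin1Bytes = [proper dataUsingEncoding:NSISOLatin1StringEncoding
                           allowLossyConversion:YES];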

method of obtaining the number of bytes

NSString *str = xxxxx;
[str length];
This code gives the number of characters.
I want to get the number of bytes.
Use -[NSString lengthOfBytesUsingEncoding:].
NSString is a unicode string. Thus, there is no such thing as byte length without specifying an encoding for the unicode code points of each letter in the string. As others have pointed out, once you choose an encoding,
-[NSString lengthOfBytesUsingEncoding:]
is what you need.
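For example (the string here is purely illustrative; the byte count depends on the encoding you pick):
NSString *str = @"abc\u00e9";   // four characters, one of them non-ASCII
NSLog(@"characters: %lu", (unsigned long)[str length]);                                           // 4
NSLog(@"UTF-8 bytes: %lu", (unsigned long)[str lengthOfBytesUsingEncoding:NSUTF8StringEncoding]);  // 5
NSLog(@"Latin-1 bytes: %lu", (unsigned long)[str lengthOfBytesUsingEncoding:NSISOLatin1StringEncoding]); // 4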
You might find this what-you-need-to-know tutorial on Unicode helpful.