I was wondering how to get a random Unicode character so that every time this is run it generates a different value. This is what I have so far to get a character:
NSFont *arialUnicode = [[NSFontManager sharedFontManager]
fontWithFamily:#"Arial Unicode MS"
traits:0
weight:5
size:dirtyRect.size.height*0.6];
NSGlyph *glyphs = (NSGlyph *)malloc(sizeof(NSGlyph) * 1);
CTFontGetGlyphsForCharacters((CTFontRef)arialUnicode, (const UniChar *)L"\u2668", (CGGlyph *)glyphs, 1);
It is adapted from a drawing tutorial my friend sent me: http://cocoawithlove.com/2011/01/advanced-drawing-using-appkit.html
Quite a nice tutorial actually :)
But I want to know how to use any character as opposed to just the floral heart. I have found that by changing the value of 2668 to some other value in the line:
CTFontGetGlyphsForCharacters((CTFontRef)arialUnicode, (const UniChar *)L"\u2668", (CGGlyph *)glyphs, 1);
I can make it show a different character, but I want to automate this so that it automatically chooses different characters.
Thank you
If you truly want a random character, something like
UniChar uc = arc4random() % (0xffffu + 1);
CTFontGetGlyphsForCharacters((CTFontRef)arialUnicode, &uc, (CGGlyph *)glyphs, 1);
But depending on what you are trying to do, a few things are worth considering:
- There are much easier ways to display text in Cocoa, particularly NSTextField.
- There are so many characters in Unicode that it's highly unlikely a single font will contain glyphs for all of them.
- Do you really want a random Unicode code point, or do you want to select from a subset of the available characters? See http://www.unicode.org/charts/ to get an idea of just how much Unicode covers. A sketch of the subset approach follows this list.
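For the subset case, a minimal sketch (assuming arialUnicode is the NSFont from the question; the range U+2600–U+26FF, Miscellaneous Symbols, is purely an example, so substitute whatever subset you like). It also checks that the font actually has a glyph before using the character:
// Pick a random code point from a chosen BMP range and only use it
// if the font actually provides a glyph for it.
UniChar lower = 0x2600;
UniChar upper = 0x26FF;
UniChar uc = lower + arc4random_uniform(upper - lower + 1);

CGGlyph glyph = 0;
BOOL hasGlyph = CTFontGetGlyphsForCharacters((CTFontRef)arialUnicode, &uc, &glyph, 1);
if (!hasGlyph) {
    // No glyph in this font for that code point; pick again or fall back.
}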
I know how to represent a number as a bitmap, for instance:
17 = 010001
11 = 001011
This is about numbers, but what about letters? Is there a way to do this? For example:
w = ??
[ = ??
Everything on your computer is represented as a sequence of bits, which is what you are calling a “bitmap”. So the answer to your question is yes, characters have a binary representation, along with integers, floating-point numbers, machine instructions, etc.
Different languages use different binary encodings for characters, Objective-C uses Unicode. See the section Understanding Characters in the NSString documentation.
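For example, here's a small sketch (my own illustration, not from the docs) that prints the bits of the letter 'w'; the character is stored as the Unicode code point 0x0077, which is just an integer:
unichar c = [@"w" characterAtIndex:0];      // 0x0077 = 119
NSMutableString *bits = [NSMutableString string];
for (int i = 15; i >= 0; i--) {
    [bits appendFormat:@"%d", (c >> i) & 1];
}
NSLog(@"w = %@", bits);                     // 0000000001110111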
HTH
You might as well jump in at the deep end and visit www.unicode.org.
Letters are complicated. And anyway, representing them as a bitmap is a bit pointless, isn't it?
I have several directories of 12 .caf files and am loading them programmatically:
NSString *soundToPlay = [NSString stringWithFormat:@"sounds/%d/%d_%d.caf", type, note, timbre];
If I want to, say, increment from 9 to 10 in one of those values, suddenly my string is an extra character long, which makes it harder to manipulate later with something like NSMakeRange. I'd like to keep all these %ds to a single character.
What I'd like to do is name my files using the digits 0-9 but then continue with A, B, C instead of 10, 11, 12. This would keep everything single-character. I'm hoping there's an easy way to do this kind of thing, still allowing stuff like increment, +/-, and modulo. If so, what is it?
%X is the hexadecimal format specifier:
[NSString stringWithFormat:#"sounds/%X/%X_%X.caf", type, note, timbre]
Alternatively you could always use two-digit numbers. That would allow you to select from more than 16 cases:
[NSString stringWithFormat:#"sounds/%02d/%02d_%02d.caf", type, note, timbre]
I know this seems to be a stupid question, but I'm really having trouble here.
I'm working on a project where I have some functions I can't modify. That is, I have some C functions (not really my speciality) inside my Objective-C code, which I can modify.
So here it is... to explain a little of what I have.
I'm receiving an NSData like "\xce\x2d\x1e\x08\x08\xff\x7f". I have to put each hex byte into a char array, like this:
cArray[1]=ce;
cArray[2]=2d;
cArray[3]=1e;
cArray[4]=08;
etc., etc... of course not LIKE THIS, but just so you understand. My initial move was to separate the NSData with subdataWithRange: and fill an array with all the "subdata". The next move would be passing each position of that array to a char array, and that's where I got stuck.
I'm using something like this (I don't have my code right now):
for(int i=0 ; i<=64 ; i++) {
[[arrayOfSubData objectAtIndex:i] getBytes:&charArray[i]];
}
To fill the char array with the hex from my array of subData. That works almost perfectly. Almost.
Taking that example of cArray, my NSLog(@"pos%i: %x", i, charArray[i]) would show me:
pos1: ce
pos2: 2d
pos3: 1e
pos4: 8
And all the "left zeros" are supressed in that same way. My workaround for the moment (and i´m not sure if it is the best practice here) is to take my subDataArray and initWithFormat: a string with it. With that i can transform the string to an int with NSScanner scanHexInt:, but then i´m stucked again when converting back my decimal int to a hexadecimal CHAR. What would be the best approach to fill my char array that way?
Any help or some "tough love" will be greatly appreciated. Thanks
According to the normal rules of printf formatting (which NSLog also follows), you want the following:
NSLog(#"pos%i: %02x", i, charArray[i]);
The '0' is a flag that says to pad with zeros; the '2' is a minimum field width, ensuring the output for that field is at least two characters. Together they guarantee at least two characters are output, padded on the left with '0's to fill the space.
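As an aside, here is a sketch (assuming data is the NSData from the question) showing that you can skip the subdata/NSScanner detour entirely: copy the bytes straight into the char array, then log each one zero-padded:
unsigned char charArray[64] = {0};
NSUInteger count = MIN((NSUInteger)64, [data length]);
[data getBytes:charArray length:count];

for (NSUInteger i = 0; i < count; i++) {
    NSLog(@"pos%lu: %02x", (unsigned long)i, charArray[i]);
}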
I want to get the Unicode code point for a given Unicode character in Objective-C. The NSString documentation says it internally uses UTF-16 encoding and states:
The NSString class has two primitive methods—length and characterAtIndex:—that provide the basis for all other methods in its interface. The length method returns the total number of Unicode characters in the string. characterAtIndex: gives access to each character in the string by index, with index values starting at 0.
That seems to assume the characterAtIndex: method is Unicode aware. However, it returns a unichar, which is a 16-bit unsigned integer type.
- (unichar)characterAtIndex:(NSUInteger)index
The questions are:
Q1: How does it represent Unicode code points above U+FFFF?
Q2: If Q1 makes sense, is there a method to get the Unicode code point for a given Unicode character in Objective-C?
Thx.
The short answer to "Q1: How does it represent Unicode code points above U+FFFF?" is: you need to be UTF16 aware and correctly handle Surrogate Code Points. The info and links below should give you pointers and example code that allow you to do this.
The NSString documentation is correct. However, while you said the NSString documentation says it internally uses UTF-16 encoding, it's more accurate to say that the public / abstract interface for NSString is UTF16 based. The difference is that this leaves the internal representation of a string a private implementation detail, but the public methods such as characterAtIndex: and length always work in terms of UTF16.
The reason for this is it tends to strike the best balance between older ASCII-centric and Unicode aware strings, largely due to the fact that Unicode is a strict superset of ASCII (ASCII uses 7 bits, for 128 characters, which are mapped to the first 128 Unicode Code Points).
To represent Unicode Code Points that are > U+FFFF, which obviously exceeds what can be represented in a single UTF16 Code Unit, UTF16 uses special Surrogate Code Points to form a Surrogate Pair, which when combined together form a Unicode Code Point > U+FFFF. You can find details about this at:
Unicode UTF FAQ - What are surrogates?
Unicode UTF FAQ - What’s the algorithm to convert from UTF-16 to character codes?
Although the official Unicode UTF FAQ - How do I write a UTF converter? now recommends the use of International Components for Unicode, it used to recommend some code officially sanctioned and maintained by Unicode. Although no longer directly available from Unicode.org, you can still find copies of the "no longer official" example code in various open-source projects: ConvertUTF.c and ConvertUTF.h. If you need to roll your own, I'd strongly recommend examining this code first, as it is well tested.
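For example, here is a sketch of the standard UTF-16 decoding step described in the FAQ above, applied to an NSString (the G clef character is just an example):
NSString *s = @"\U0001D11E";                     // MUSICAL SYMBOL G CLEF, stored as two unichars
unichar high = [s characterAtIndex:0];           // 0xD834
unichar low  = [s characterAtIndex:1];           // 0xDD1E

UTF32Char codePoint;
if (CFStringIsSurrogateHighCharacter(high) && CFStringIsSurrogateLowCharacter(low)) {
    codePoint = 0x10000 + ((high - 0xD800) << 10) + (low - 0xDC00);
} else {
    codePoint = high;                            // ordinary BMP character
}
NSLog(@"U+%04X", (unsigned int)codePoint);       // U+1D11E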
From the documentation of length:
The number returned includes the individual characters of composed character sequences, so you cannot use this method to determine if a string will be visible when printed or how long it will appear.
From this, I would infer that any characters above U+FFFF would be counted as two characters and would be encoded as a Surrogate Pair (see the relevant entry at http://unicode.org/glossary/).
If you have a UTF-32 encoded string with the character you wish to convert, you could create a new NSString with initWithBytesNoCopy:length:encoding:freeWhenDone: and use the result of that to determine how the character is encoded in UTF-16, but if you're going to be doing much heavy Unicode processing, your best bet is probably to get familiar with ICU (http://site.icu-project.org/).
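Going the other direction (getting the code point out of an existing NSString), one possible sketch is to ask for the character's UTF-32 representation and read the value out of the bytes (using NSUTF32LittleEndianStringEncoding plus a byte swap to host order; this is just one way to do it):
NSString *s = @"\U0001D11E";
NSRange r = [s rangeOfComposedCharacterSequenceAtIndex:0];
NSData *utf32 = [[s substringWithRange:r]
                 dataUsingEncoding:NSUTF32LittleEndianStringEncoding];

UTF32Char codePoint = 0;
[utf32 getBytes:&codePoint length:sizeof(codePoint)];
codePoint = CFSwapInt32LittleToHost(codePoint);
NSLog(@"U+%04X", (unsigned int)codePoint);       // U+1D11E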
I have an application (an IM client) in which I wish to set up custom formatting symbols, similar to mIRC, rather than relying on rich text. I will accomplish this by pairing a control character (UniChar 0x03) with a number 0-15 to encode colors and other things. The only problem is that when these characters are inserted they are invisible, so it is difficult for the end user to delete them when needed. Is there a way to make NSTextField show squares for specific invisible characters?
You could replace them with a visible character in the text field, and when the user is done, replace them back:
NSString *visibleFormatCharacters = [stringWithInvisibleCharacters stringByReplacingOccurrencesOfString:[NSString stringWithFormat:@"%c", 0x03] withString:@"§"];
When the user is done, do it the other way around:
NSString *invisibleFormatCharacters = [visibleFormatCharacters stringByReplacingOccurrencesOfString:@"§" withString:[NSString stringWithFormat:@"%c", 0x03]];
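A quick usage sketch of the round trip (this assumes "§" never occurs in the user's own text, otherwise it would be converted as well):
NSString *raw = [NSString stringWithFormat:@"hello %c4world", 0x03];   // 0x03 followed by color number 4
NSString *shown = [raw stringByReplacingOccurrencesOfString:[NSString stringWithFormat:@"%c", 0x03]
                                                 withString:@"§"];
// ...the user edits 'shown' in the NSTextField...
NSString *back = [shown stringByReplacingOccurrencesOfString:@"§"
                                                  withString:[NSString stringWithFormat:@"%c", 0x03]];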