Possible Duplicate:
What does the \0 symbol mean in a C string?
I am new to iPhone development. I want to know what '\0' means in C, and what the equivalent is in Objective-C.
The null character '\0' (also called the null terminator), abbreviated NUL, is a control character with the value zero. It's the same in C and Objective-C.
The character has special significance in C, where it serves as the reserved character that marks the end of a string; such a string is often called a null-terminated string.
The length of a C string (an array containing the characters and terminated with a '\0' character) is found by searching for the (first) NUL byte.
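For example, a hand-rolled length function (a minimal C sketch, standard library only) does exactly that scan:
#include <stdio.h>

/* Counts characters until the first '\0', just like strlen(). */
size_t my_strlen(const char *s) {
    size_t n = 0;
    while (s[n] != '\0')
        n++;
    return n;
}

int main(void) {
    printf("%zu\n", my_strlen("hello"));   /* prints 5 */
    return 0;
}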
In C, \0 denotes a character with value zero. The following are identical:
char a = 0;
char b = '\0';
The utility of this escape sequence is greater inside string literals, which are arrays of characters:
char arr[] = "abc\0def\0ghi\0";
(Note that this array has two zero characters at the end, since string literals include a hidden, implicit terminal zero.)
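To see the effect of those embedded zero characters, here is a small self-contained check (a sketch in plain C):
#include <stdio.h>
#include <string.h>

int main(void) {
    char arr[] = "abc\0def\0ghi\0";
    printf("%zu\n", strlen(arr));   /* 3: strlen() stops at the first '\0' */
    printf("%zu\n", sizeof(arr));   /* 13: every byte, including the implicit terminator */
    return 0;
}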
The '\0' inside character literals and string literals stands for the character with code zero. The meaning in C and in Objective-C is identical.
To illustrate, you can use \0 in an array initializer to construct an array equivalent to a null-terminated string:
char str1[] = "Hello";
char str2[] = {'H', 'e', 'l', 'l', 'o', '\0'};
In general, you can use \ooo to represent an ASCII character in octal notation, where ooo stands for up to three octal digits.
To the C language, '\0' means exactly the same thing as the integer constant 0 (same value zero, same type int).
To someone reading the code, writing '\0' suggests that you're planning to use this particular zero as a character.
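For example (a plain C sketch), these two loops do exactly the same thing; the first simply reads more clearly as "until the end of the string":
const char *p = "abc";
while (*p != '\0')   /* reads as: until the terminator */
    p++;

const char *q = "abc";
while (*q != 0)      /* identical behavior, but less obviously about characters */
    q++;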
\0 is the zero character. In C it is mostly used to mark the end of a character string. Of course it is a regular character and may be used as such, but that is rarely the case.
The simpler versions of the built-in string manipulation functions in C require that your string is null-terminated (that is, that it ends with \0).
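For instance (a minimal C sketch), if you build a string by hand you must add the terminator yourself before passing it to those functions:
#include <stdio.h>

int main(void) {
    char buf[3];
    buf[0] = 'H';
    buf[1] = 'i';
    buf[2] = '\0';          /* without this, printf("%s", buf) would read past the array */
    printf("%s\n", buf);    /* prints "Hi" */
    return 0;
}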
In C, '\0' is a character constant (which has the type int) representing the character whose value is 0.
Since Objective-C is a strict superset of C, the same constant is available there.
It means '\0' is the NUL character in C; I don't know Objective-C as well, but it's probably the same there.
Related
Swift's String.Index is defined in the docs as
A position of a character or code unit in a string.
The endIndex is
A string’s “past the end” position—that is, the position one greater
than the last valid subscript argument.
and startIndex is
The position of the first character in a nonempty string.
Is it correct to think of Swift characters in the same terms as C chars?
My understanding in C is that indexing into a string gives you the memory location of a character, and that the string ends with a null character.
So for Swift, is the endIndex the null character, and is the reason we use String.Index instead of subscripting a String with an Int (something like string[0], as in, say, JavaScript) that we are dealing with the memory location of the character?
If this is the right way to think about it, is it because Swift runs on top of the Objective-C runtime?
Is there a way to use scientific notation in Objective-C and have it display only three significant digits? What I am currently using is:
string = [NSString stringWithFormat:@"%e", floatNumber];
// floatNumber = 1000000; string = 1.000000e+06
I just want string = 1.00e+06
Use a precision of 2 in the format specifier, as follows:
string = [NSString stringWithFormat:@"%.2e", floatNumber];
From Apple's documentation:
The format specifiers supported by the NSString formatting methods and CFString formatting functions follow the IEEE printf specification...
And from the IEEE printf specification, if you read under the Description section, you will find:
e, E
The double argument shall be converted in the style "[-]d.ddde±dd", where there is one digit before the radix character (which is non-zero if the argument is non-zero) and the number of digits after it is equal to the precision; if the precision is missing, it shall be taken as 6; if the precision is zero and no '#' flag is present, no radix character shall appear. The low-order digit shall be rounded in an implementation-defined manner. The E conversion specifier shall produce a number with 'E' instead of 'e' introducing the exponent. The exponent shall always contain at least two digits. If the value is zero, the exponent shall be zero.
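The same rules apply to plain C printf, since NSString's format specifiers follow that same specification, so a minimal C check (a sketch) looks like this:
#include <stdio.h>

int main(void) {
    double f = 1000000;
    printf("%e\n", f);     /* 1.000000e+06: default precision is 6 */
    printf("%.2e\n", f);   /* 1.00e+06: precision of 2 */
    return 0;
}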
How can I convert a char datatype into its utf-8 int representation in Processing?
So if I had an array ['a', 'b', 'c'] I'd like to obtain another array [61, 62, 63].
After posting my original answer I figured out a much easier and more direct way of getting the numbers you want. What you want for 'a' is 61 rather than 97, and so forth; that is not hard, seeing that 61 is the hexadecimal representation of the decimal 97. So all you need to do is feed your char into the appropriate method, like so:
Integer.toHexString((int)'a');
If you have an array of chars like so:
char[] c = {'a', 'b', 'c', 'd'};
Then you can use the above thusly:
Integer.toHexString((int)c[0]);
and so on and so forth.
EDIT
As per v.k.'s example in the comments below, you can do the following in Processing:
char c = 'a';
println(hex(c));
The above will give you the hex representation of the character as a String.
// to save the hex representation as an int you need to parse it since hex() returns a String
int hexNum = PApplet.parseInt(hex(c));
// OR
int hexNum = int(c);
For the benefit of the OP and the commenter below: you will get 97 for 'a' even with my earlier suggestion, because 97 is the decimal representation of hexadecimal 61. Seeing that UTF-8 matches the first 128 ASCII entries value for value, I don't see why one would expect anything different. As for the UnsupportedEncodingException, a simple fix would be to wrap the statements in a try/catch block. However, that is not necessary, since the approach above answers the question directly in a much simpler way.
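In C terms it is the same value, just printed in two different bases; a minimal check:
#include <stdio.h>

int main(void) {
    printf("%d\n", 'a');   /* 97: decimal */
    printf("%x\n", 'a');   /* 61: hexadecimal */
    return 0;
}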
What do you mean by "utf-8 int"? UTF-8 is a multi-byte encoding scheme for characters (technically, Unicode code points). In your example you use trivial letters from the ASCII set, but that set has very little to do with a real Unicode/UTF-8 question.
For simple letters, you can literally just int cast:
print((int)'a') -> 97
print((int)'A') -> 65
But you can't do that with characters outside the 16-bit char range. print((int)'二') works (giving 20108, or 4E8C in hex), but print((int)'𠄢') will give a compile error because the character code for 𠄢 does not fit in 16 bits (it's supposed to be 131362, or 20122 in hex, which gets encoded as the four-byte UTF-8 sequence 240, 160, 132, 162).
So for Unicode characters with a code higher than 0xFFFF you can't use int casting, and you'll actually have to think hard about what you're decoding. If you want true Unicode code point values, you'll have to decode the bytes yourself, but the Processing IDE doesn't really let you do that; it will tell you that "𠄢".length() is 1, when in real Java it's actually 2 (a surrogate pair of UTF-16 code units). There is, in current Processing, no way to get the Unicode value for any character with a code higher than 0xFFFF.
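To see the actual byte sequence, here is a small sketch in C rather than Processing (assuming a C11 compiler that accepts u8 string literals):
#include <stdio.h>

int main(void) {
    /* U+20122 encodes to four UTF-8 bytes. */
    const char *s = u8"\U00020122";
    for (int i = 0; s[i] != '\0'; i++)
        printf("%u ", (unsigned char)s[i]);   /* prints: 240 160 132 162 */
    printf("\n");
    return 0;
}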
update
Someone mentioned you actually wanted hex strings. If so, use the built-in hex function.
println(hex((int)'a')) -> 00000061
and if you only want 2, 4, or 6 characters, just use substring:
println(hex((int)'a').substring(4)) -> 0061
I'm trying to do some stuff with characterAtIndex: and I'm stumped. if ([myString characterAtIndex:0] == 0) works fine if I'm looking for the number zero, but if I'm looking for a decimal point, if ([myString characterAtIndex:0] == .) just gives me an error. Is there another way to do this?
Check again. [myString characterAtIndex:0] == 0 will compile, but it won't do what you expect. That condition tests if the first character of your string is the character with the ASCII value of 0, which isn't the numeral 0: it's the NUL character.
-characterAtIndex: returns a value that you can compare with a character literal, which is a character enclosed in single quotes: '0' or '.', for example.
Perhaps something got lost in translation, but looking at your current question, I think this is what the solution would look like (compiles and executes as expected for me):
if([myString characterAtIndex:0] == '.') {
// ...
}
Note that you must use single quotes (apostrophes), as these are C-style char constants (technically ints) and not C-style strings (which would use double quotes and technically be an array/pointer).
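In plain C terms, the difference looks like this (a minimal sketch):
char c = '.';          /* a character constant (technically an int in C) */
const char *s = ".";   /* a string literal: '.' followed by a '\0' terminator */

if (c == '.') {
    /* comparing two character values: fine */
}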
characterAtIndex: returns a unichar, so compare against the character literal '.' instead,
i.e. if ([myString characterAtIndex:0] == '.')
I'm looking at inherited code and I found this in a vb.net windows form:
New System.Drawing.SizeF(6.0!, 13.0!)
My question is, what is the significance of the ! (exclamation) operator here? Most of my searching for the exclamation operator ends up returning recordset format or the ! gets ignored in the search and I get hundreds of unrelated items.
It's to force the literal to be a Single.
Visual Basic supports Type Characters:
In addition to specifying a data type in a declaration statement, you can force the data type of some programming elements with a type character. The type character must immediately follow the element, with no intervening characters of any kind.
and:
Literals can also use the identifier type characters (%, &, @, !, #, $), as can variables, constants, and expressions. However, the literal type characters (S, I, L, D, F, R, C) can be used only with literals.
In this case, the ! stands for Single:
Type Characters. Appending the literal type character F to a literal forces it to the Single data type. Appending the identifier type character ! to any identifier forces it to Single.
(emphasis mine)
It is a type character. It means that 6.0 is a Single data type.
http://msdn.microsoft.com/en-us/library/s9cz43ek.aspx shows the type characters.