How to convert a System::Char to a C++ char - c++-cli

I need to convert a System::Char that comes from the .NET world into a char in the C++/CLI world (a native C++ type).
I am confident that the incoming character is a regular ASCII character, just in the .NET Unicode world, so I will never have to deal with a character that doesn't have a direct mapping to char.

How about converting it directly?
System::Char neta = System::Char('b');
char c = neta;
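This direct assignment does compile: in C++/CLI a System::Char converts to wchar_t, and the assignment then narrows it to char. A minimal sketch, assuming the input really is ASCII, that makes the narrowing explicit and guards the range (the '?' fallback is just an illustrative choice):

System::Char neta = System::Char('b');
wchar_t w = neta;                                  // System::Char maps to wchar_t
char c = (w <= 127) ? static_cast<char>(w) : '?';  // keep ASCII, flag everything else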

I have some code working, but it is officially... very ugly. I'm hoping someone else will add an answer to this question that has a better solution than this!
// I am given neta externally
System::Char neta = System::Char('b');
// Here is the conversion code - is there a better way?
// requires: using namespace System::Runtime::InteropServices;
System::String ^str = gcnew System::String(neta, 1);      // wrap the single Char in a String
auto trans = Marshal::StringToHGlobalAnsi(str);           // marshal to an unmanaged ANSI buffer
auto cstr = static_cast<const char*>(trans.ToPointer());
char c = *cstr;                                           // grab the first (and only) narrow char
Marshal::FreeHGlobal(trans);                              // release the unmanaged buffer
// c contains the expected answer 'b'.
Many thanks!

Related

Why does neither C nor Objective-C have a format specifier for Boolean values?

The app I'm working on has a credit response object with a Boolean "approved" field. I'm trying to log this value in Objective-C, but since there is no format specifier for Booleans, I have to resort to the following:
NSLog("%s", [response approved] ? #"TRUE" : #"FALSE");
What I would prefer, though, is to be able to do something like the following:
NSLog("%b", [response approved]);
...where "%b" is the format specifier for a boolean value.
After doing some research, it seems the unanimous consensus is that neither C nor Objective-C has the equivalent of a "%b" specifier, and most devs end up rolling their own (something like option #1 above).
Obviously Dennis Ritchie & Co. knew what they were doing when they wrote C, and I doubt this missing format specifier was an accident. I'm curious to know the rationale behind this decision, so I can explain it to my team (who are also curious).
EDIT:
Some answers below have suggested it might be a localization issue, i.e. "TRUE" and "FALSE" are too English-specific. But wouldn't this be a dilemma that all languages face, not just C and Objective-C? Java and Ruby, among others, are able to implement "true" and "false" boolean values. Not sure why the authors of these languages didn't similarly punt on this choice.
In addition, if localization were the problem, I would expect it to affect other facets of the language as well. Take reserved keywords, for instance. C uses English keywords like "include", "define", "return", "void", etc., and these keywords are arguably more difficult for non-English speakers to parse than keywords like "true" or "false".
Pure C (back to the K&R variety) doesn't actually have a boolean type, which is the fundamental reason why native printf and cousins don't have a native boolean format specifier. Expressions evaluate to zero or nonzero integral values, which is what if statements interpret as false or true, respectively, in C. (Understanding this is the key to understanding the semantics of the delightful !! "bang-bang" operator.)
(C99 did add a _Bool type, though unless you're using purest C you're unlikely to need it; derived languages and common platforms already have common boolean types or typedefs.)
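As a quick sketch of that !! idiom (plain C, and the same in C++ and Objective-C): the first ! maps any nonzero value to 0, and the second flips it back, so the result is always exactly 0 or 1.

#include <stdio.h>

int main(void) {
    int x = 42;
    printf("%d\n", !!x); // prints 1: any nonzero value collapses to 1
    printf("%d\n", !!0); // prints 0
    return 0;
}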
The BOOL type is an ObjC construct, and -[NSString stringWithFormat:] (and NSLog) simply doesn't have an additional format specifier that does anything special with it. It certainly could (in addition to %@), choosing some reasonable string to drop in there; I don't know whether such a thing was ever considered, but it strikes me as different in kind from all the other format specifiers. How would you know how to appropriately localize or capitalize the string representations of "yes" or "no" (or "true" or "false"), etc.? No other format specifier results in the library making decisions like that; all the others are precisely numeric or insert the string result of another method call. It may seem onerous, but making you choose the text you actually want in there is probably the most elegant solution.
What should the formatter display? 0 & 1? TRUE & FALSE? YES & NO? -1 and 1? What about other languages?
There's no good consistently right answer so they punted it to the app developer, for whom it'll be a clearer (and still simple) choice.
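In practice that choice is usually the ternary idiom from the question; a minimal sketch in plain C (NSLog accepts the same %s conversion):

#include <stdio.h>

int main(void) {
    int approved = 1;
    printf("approved: %s\n", approved ? "TRUE" : "FALSE"); // you pick the exact text
    return 0;
}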
In C's early days there was no numeric printf() specifier for char or short either, as there was little need for one: any type narrower than int/unsigned was promoted. Now there are "%hhd" and "%hd".
Today, in C, a _Bool may be printed with "%d".
#include <stdio.h>

int main(void) {
    _Bool some_bool = 2;
    printf("%d\n", some_bool); // prints 1 (a _Bool stores any nonzero value as 1; 0 when false)
    return 0;
}
The missing link in C is its lack of a format specifier to scan into a _Bool. This leads to workarounds like the following, which are not satisfactory with input like "T" or "false".
_Bool some_bool;
int temp;
scanf("%d", &temp); // no _Bool specifier, so read an int first
some_bool = temp;   // assignment to _Bool normalizes any nonzero value to 1
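One hedged sketch of a sturdier workaround: read a token and compare it against whatever spellings you decide to accept. The helper name and the accepted token list here are assumptions for illustration, not anything standard.

#include <stdbool.h>
#include <stdio.h>
#include <string.h>

// Hypothetical helper: returns true on success and stores the parsed value in *out.
bool scan_bool(bool *out) {
    char buf[8];
    if (scanf("%7s", buf) != 1)
        return false;
    if (strcmp(buf, "true") == 0 || strcmp(buf, "T") == 0 || strcmp(buf, "1") == 0) {
        *out = true;
        return true;
    }
    if (strcmp(buf, "false") == 0 || strcmp(buf, "F") == 0 || strcmp(buf, "0") == 0) {
        *out = false;
        return true;
    }
    return false; // unrecognized token
}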

How to use FOURCC formats in SlimDX?

Looking through the defined surface formats of SlimDX, some seem to be missing. I'm interested in using the NV12 format.
Since the formats are defined as an enum, I can't pass in a FOURCC format as I would be able to using unmanaged code.
Is there any way to work around this?
SlimDX enums are defined as ints, so you can cast any int to them.
int nvformat = 12345; // replace this placeholder number with the actual FOURCC value
SlimDX.Direct3D9.Format fmt = (SlimDX.Direct3D9.Format)nvformat;
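For reference, a FOURCC is just the four ASCII characters packed into a little-endian 32-bit integer (this is what the DirectX headers' MAKEFOURCC macro does). A minimal sketch of the packing; the resulting int is the value to cast to the SlimDX enum:

#include <stdint.h>
#include <stdio.h>

#define MAKEFOURCC(a, b, c, d)                                \
    ((uint32_t)(uint8_t)(a) | ((uint32_t)(uint8_t)(b) << 8) | \
     ((uint32_t)(uint8_t)(c) << 16) | ((uint32_t)(uint8_t)(d) << 24))

int main(void) {
    uint32_t nv12 = MAKEFOURCC('N', 'V', '1', '2');
    printf("0x%08X\n", nv12); // 0x3231564E - the int for the enum cast
    return 0;
}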

Objective-C float string formatting: Formatting the whole numbers

There are some great answers here about how to format the decimal places of a float for a string:
float myFloatA = 2.123456f;
NSLog(@"myFloatA: [%.2f]", myFloatA);
// returns:
// myFloatA: [2.12]
But what I'm looking for is how to format the whole numbers of the same float. This can be done with the same sort of trick on an integer:
int myInt = 2;
NSLog(@"myInt: [%5d]", myInt);
// returns:
// myInt: [    2]
So I was hoping something like a %5.2f would be the answer to formatting both before and after the decimal. But the 5 turns out to be the total field width, not the width of the whole-number part:
float myFloatA = 2.123456f;
NSLog(@"myFloatA: [%5.2f]", myFloatA);
// returns:
// myFloatA: [ 2.12]
Any thoughts on this one?
Using the print specifiers is all very well for NSLogs, but think about this another way.
Usually, you want a string representation of a number when you are displaying it in something like a text field, in which case you might as well use an NSNumberFormatter, which does most of the heavy lifting for you.
Also confirmed that this works. If you're worried you're not getting 5 character width, try forcing zero padding with:
NSLog(#"myFloatA: [%05.2f]", myFloatA);

Trouble converting Dec to Hex (Int to Char)

I know this seems to be a stupid question, but I'm really having trouble here.
I'm working on a project where I have some functions I can't modify. That is, I've got some C functions (not really my specialty) inside my Objective-C code, which I can modify.
So here it is... to explain a little of what I have.
I'm receiving an NSData like "\xce\x2d\x1e\x08\x08\xff\x7f". I have to put each hex byte into a char array, like this:
cArray[1]=ce;
cArray[2]=2d;
cArray[3]=1e;
cArray[4]=08;
etc., etc... of course not LIKE THIS, but just so you understand. My initial move was to separate the NSData with subdataWithRange: and fill an array with all the "subdata". The next move would be passing each position of that array to a char array, and that's where I got stuck.
I'm using something like the following (I don't have my code right now)
for (int i = 0; i < 64; i++) {
    [[arrayOfSubData objectAtIndex:i] getBytes:&charArray[i]];
}
to fill the char array with the bytes from my array of subdata. That works almost perfectly. Almost.
Taking that example of cArray, my NSLog(@"pos%i: %x", i, charArray[i]) would show me:
pos1: ce
pos2: 2d
pos3: 1e
pos4: 8
And all the "left zeros" are supressed in that same way. My workaround for the moment (and i´m not sure if it is the best practice here) is to take my subDataArray and initWithFormat: a string with it. With that i can transform the string to an int with NSScanner scanHexInt:, but then i´m stucked again when converting back my decimal int to a hexadecimal CHAR. What would be the best approach to fill my char array that way?
Any help or some "tough love" will be greatly appreciated. Thanks
According to the normal rules of printf formatting (which NSLog also follows), you want the following:
NSLog(#"pos%i: %02x", i, charArray[i]);
The '0' is a flag that says to left-pad with zeros, and the '2' says the output for that field should be at least two characters. Together they ensure at least two characters are output, padded on the left with '0's to fill the space.
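A minimal sketch with the bytes from the question, using plain printf (NSLog mirrors these conversions):

#include <stdio.h>

int main(void) {
    const unsigned char bytes[] = { 0xce, 0x2d, 0x1e, 0x08, 0x08, 0xff, 0x7f };
    for (int i = 0; i < (int)sizeof bytes; i++) {
        printf("pos%d: %02x\n", i, bytes[i]); // 0x08 prints as "08", not "8"
    }
    return 0;
}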

Convert to absolute value in Objective-C

How do I convert a negative number to an absolute value in Objective-C?
i.e.
-10
becomes
10?
Depending on the type of your variable, one of abs(int), labs(long), llabs(long long), imaxabs(intmax_t), fabsf(float), fabs(double), or fabsl(long double).
Those functions are all part of the C standard library, and so are present both in Objective-C and plain C (and are generally available in C++ programs too.)
(Alas, there is no habs(short) function. Or scabs(signed char) for that matter...)
Apple's and GNU's Objective-C headers also include an ABS() macro which is type-agnostic. I don't recommend using ABS() however as it is not guaranteed to be side-effect-safe. For instance, ABS(a++) will have an undefined result.
If you're using C++ or Objective-C++, you can bring in the <cmath> header and use std::abs(), which is overloaded for all the standard integer and floating-point types.
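A quick sketch of those overloads (the integer ones live in <cstdlib>, the floating-point ones in <cmath>):

#include <cmath>
#include <cstdio>
#include <cstdlib>

int main() {
    std::printf("%d\n", std::abs(-10));    // int overload: 10
    std::printf("%f\n", std::abs(-10.5));  // double overload: 10.500000
    std::printf("%ld\n", std::labs(-10L)); // long variant from the C library
    return 0;
}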
You can use this function to get the absolute value:
+ (NSNumber *)absoluteValue:(NSNumber *)input {
    return [NSNumber numberWithDouble:fabs([input doubleValue])];
}
If you are curious as to what they are doing inside a branchless abs() function:
char abs(char i) {
    char mask = i >> 7;       // 0 for non-negative values, all ones for negative (assumes an arithmetic shift)
    return (i ^ mask) - mask; // flips the bits and adds 1 when negative, i.e. negates
}