So what I want is, for example, to convert the letter 'a' into 97 (its value in the ASCII table), and then to convert 97 back into 'a'.
I actually perform a load of mathematics and stuff on the letter, treating it as a binary number, so the conversion is necessary.
However, for special characters it is not working nicely.
char c = 'ÿ';
int i = int(c);
wchar_t wTemp = static_cast<wchar_t>(i);
wchar_t* w = &wTemp;
String^ newI = gcnew String(w);
That symbol is just a random one I found in an image (the type of character that will need to be read). It just comes out as a completely different symbol. I have no idea why, or what to do about it.
Characters above 0x7f (127) are probably converting to negative integer values. Maybe change c to unsigned:
unsigned char c = 'ÿ';
int i = c;
Your code doesn't look quite right to me, though I didn't run it. Here is a good example from MSDN of how to convert to and from wchar_t:
http://msdn.microsoft.com/en-us/library/ms235631(v=vs.80).aspx
I don't believe there is anything special about 'special' characters.
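Not from the question or the MSDN sample, but here is a minimal plain-C sketch of the signedness issue described above; the exact values depend on whether char is signed on your platform:

#include <stdio.h>

int main(void)
{
    char sc = (char)0xFF;       /* the byte for 'ÿ' in Latin-1; char is often signed */
    unsigned char uc = 0xFF;    /* the same byte, explicitly unsigned */

    int fromSigned   = sc;      /* typically -1 where char is signed */
    int fromUnsigned = uc;      /* always 255 */

    printf("signed char   -> %d\n", fromSigned);
    printf("unsigned char -> %d\n", fromUnsigned);
    return 0;
}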
Friends,
I am doing a SAS to COBOL conversion. I am stuck with the declaration and conversion below, and I am getting an S0C7 (data exception) abend in the COBOL run. Please provide some solution.
IP in SAS - PD3.5
OP in SAS - z6.5
My COBOL declarations are below.
IP s9.9(5);
OP .9(5);
Please suggest a solution.
Thanks a lot!!
Packed Decimal is stored one digit per nibble, which is two digits per byte, with the last nibble storing the sign. The sign nibbles C, A, F, and E are treated as positive; the sign nibbles B and D are treated as negative. Sign nibbles C and D are referred to as the "preferred sign". A sign nibble of F is considered "unsigned," meaning it is neither positive nor negative, though pragmatically you can think of it as positive for arithmetic purposes. For example, +123 is stored in two bytes as x'123C', and -456 is stored as x'456D'.
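Purely as an illustration (my own C sketch, not part of the original answer), here is how that nibble layout can be produced, using the preferred sign codes C and D:

#include <stdio.h>

/* Illustrative helper: pack 'value' into 'len' bytes of packed decimal
   (two digits per byte, last nibble is the sign: C positive, D negative). */
static void pack_decimal(long value, unsigned char *out, int len)
{
    unsigned char sign = (value < 0) ? 0xD : 0xC;
    unsigned long mag = (value < 0) ? -(unsigned long)value : (unsigned long)value;

    /* Rightmost byte: last digit in the high nibble, sign in the low nibble. */
    out[len - 1] = (unsigned char)(((mag % 10) << 4) | sign);
    mag /= 10;

    /* Remaining bytes: two digits each, filled from the right. */
    for (int i = len - 2; i >= 0; i--) {
        out[i]  = (unsigned char)(mag % 10);
        mag /= 10;
        out[i] |= (unsigned char)((mag % 10) << 4);
        mag /= 10;
    }
}

int main(void)
{
    unsigned char buf[2];

    pack_decimal(123, buf, 2);              /* expect x'123C' */
    printf("+123 -> x'%02X%02X'\n", buf[0], buf[1]);

    pack_decimal(-456, buf, 2);             /* expect x'456D' */
    printf("-456 -> x'%02X%02X'\n", buf[0], buf[1]);

    return 0;
}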
The SAS PD informat specifies PDw.d where w is the width of the field in bytes and d is the number of decimal places to the right within the field. So PD3.5 is a 3 byte field (which would store 5 digits and a sign) with all 5 digits to the right of the decimal point.
To obtain the COBOL declaration for a SAS PDw.d informat:

a = (w * 2) - 1
b = a - d

if b = 0
    PIC SV9(d) Packed-Decimal
else
    PIC S9(b)V9(d) Packed-Decimal
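Applying that to the PD3.5 input above: a = (3 * 2) - 1 = 5 and b = 5 - 5 = 0, so the first branch applies and gives PIC SV9(5) (i.e. SV99999) Packed-Decimal, which is the IP declaration shown below.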
The SAS Z format specifies Zw.d, where w is the width of the field in bytes and d is the number of decimal places to the right within the field. The field is padded with zeroes on the left to make it w bytes wide. So Z6.5 specifies a 6 byte output field with 5 digits to the right of the decimal point. One byte is taken by the decimal point itself, and unfortunately there is no room for the sign, which may be a bug or may be intentional (perhaps all the data is known to be positive).
IP PIC SV99999 Packed-Decimal.
OP PIC .99999.
When you MOVE IP TO OP the conversion from Packed Decimal to Zoned Decimal will be done for you by COBOL.
I'm Cesare from Italy (please excuse my English); this is my first question posted on StackOverflow and I'm pretty new to Objective-C... I hope I won't make a mess of my first try.
I would like to "combine" two integers that I already have to create a new float (or a double).
By "combine", I mean that I'd like to have the first int before the point and the second int after the point, I'm not trying to convert from int to float. Maybe an example could explain better what I'm trying to do:
First int: 7
Second int: 92
The float I'm trying to get: 7.92
I looked for a previous question like mine but I haven't found anything, maybe because what I'm trying to do is pretty dumb. (I have a UIPickerView with two components, each containing hundreds of integers, and I'm trying to create a float or double variable that has the selection of the first component before the point and the selection of the second component after the point.)
Thanks in advance for your help,
Cesare
Just think about what the definition and/or the purpose of the decimal point is. It separates the part of the number which is less than one from the part greater than or equal to one.
So, keep dividing the part after the decimal point until it's less than 1:
int firstPart = 7;
int secondPart = 92; // or whatever
float f = secondPart;
while (f >= 1) {
    f /= 10;
}
f += firstPart;
I know this is late, but I came across a similar situation. Maybe this is more efficient.
Take the second number, 92, and divide it by 100. That gives you .92. Add that to the first number and you get 7.92. However, since you're adding integers that you want converted to a float, you'll need to cast the numbers when adding them, like this:
int firstPart = 7;
int secondPart = 92;
float afterDecimalPlace = (float)secondPart/100.0;
float numberAsFloat = (float)firstPart + afterDecimalPlace;
Essentially, that is:
92/100 = .92
7 + .92 = 7.92
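For illustration only (this sketch is mine, not from either answer), here is the same idea in plain C, generalized so the divisor matches however many digits the second number has:

#include <stdio.h>

/* Illustrative helper: combine 'whole' and 'frac' into one value,
   e.g. 7 and 92 -> 7.92. The divisor is the smallest power of ten
   greater than 'frac'. */
static double combine(int whole, int frac)
{
    double divisor = 1.0;
    while (divisor <= frac)      /* 92 needs a divisor of 100 */
        divisor *= 10.0;
    return whole + frac / divisor;
}

int main(void)
{
    printf("%.2f\n", combine(7, 92));   /* prints 7.92 */
    return 0;
}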
I was following A Tour of Go on http://tour.golang.org.
Table 15 has some code that I cannot understand. It defines two constants with the following syntax:
const (
    Big   = 1 << 100
    Small = Big >> 99
)
And it's not clear at all to me what it means. I tried to modify the code and run it with different values to see how the result changed, but I was not able to understand what is going on there.
Then, it uses that operator again on table 24. It defines a variable with the following syntax:
MaxInt uint64 = 1<<64 - 1
And when it prints the variable, it prints:
uint64(18446744073709551615)
Where uint64 is the type. But I can't understand where 18446744073709551615 comes from.
They are Go's bitwise shift operators.
Here's a good explanation of how they work for C (they work in the same way in several languages).
Basically, 1<<64 - 1 corresponds to 2^64 - 1, which is 18446744073709551615.
Think of it this way. In decimal, if you start from 001 (which is 10^0) and then shift the 1 to the left, you end up with 010, which is 10^1. If you shift it again, you end up with 100, which is 10^2. So shifting to the left is equivalent to multiplying by 10 once for each position you shift.
In binary it's the same thing, but in base 2, so 1<<64 means multiplying by 2 64 times (i.e. 2 ^ 64).
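To make the analogy concrete, here is a tiny C sketch of my own (the answer above links to a C explanation, and the operators behave the same way). Note that in C the minus binds tighter than <<, unlike in Go, so the parentheses below are deliberate:

#include <stdint.h>
#include <stdio.h>

int main(void)
{
    /* Shifting left by n multiplies by 2^n. */
    printf("%d\n", 1 << 3);     /* 8    (2^3)  */
    printf("%d\n", 1 << 10);    /* 1024 (2^10) */

    /* 2^64 - 1, the largest uint64_t value. A 64-bit type cannot be
       shifted by 64, so build it as (2^63 - 1) * 2 + 1 instead. */
    uint64_t maxInt = (((uint64_t)1 << 63) - 1) * 2 + 1;
    printf("%llu\n", (unsigned long long)maxInt);   /* 18446744073709551615 */

    return 0;
}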
That's the same as in all languages of the C family: a bit shift.
See http://en.wikipedia.org/wiki/Bitwise_operation#Bit_shifts
This operation is commonly used to multiply or divide an unsigned integer by powers of 2:
b := a >> 1 // divides by 2
1<<100 is simply 2^100 (that's Big).
1<<64 - 1 is 2^64 - 1, the biggest integer you can represent in 64 bits (by the way, you can't represent 1<<64 as a 64-bit int; the point of table 15 is to demonstrate that Go's numeric constants can hold it anyway).
The >> and << are shift operations (logical shifts on unsigned operands, arithmetic shifts on signed ones). You can see more about those here:
http://en.wikipedia.org/wiki/Logical_shift
Also, you can check all the Go operators on their webpage.
It's a logical shift:
every bit in the operand is simply moved a given number of bit
positions, and the vacant bit-positions are filled in, usually with
zeros
Go Operators:
<<   left shift    integer << unsigned integer
>>   right shift   integer >> unsigned integer
For example, I have a float 55.2f and want to round it down such that the result can be divided by two without a remainder.
So 55.2 would become 54, as that is the nearest smaller "step" that can be divided by two. Is there a function for this, or must I write my own algorithm?
If your result must remain a float, you can do:
float f=55.2f;
f=floorf(f/2.f)*2.f;
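Just as a quick check (my own example, not from the answer), the same two lines wrapped in a complete program:

#include <math.h>
#include <stdio.h>

int main(void)
{
    float f = 55.2f;
    f = floorf(f / 2.f) * 2.f;   /* 55.2 -> 27.6 -> 27.0 -> 54.0 */
    printf("%f\n", f);           /* prints 54.000000 */
    return 0;
}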
First convert to an integral type, such as int or long, and then clear the lowest bit.
float f = 55.2f;
int i = (int)f & ~1;
Explanation
~ means the bitwise inverse, i.e. all the 0 bits become 1, and vice versa.
So, if 1 has the bit pattern
0...0001
then ~1 is
1...1110
(Here I'm using ... to represent all the in-between bits depending on how big an integer is on your platform.)
When you & (bitwise AND) your integer with 1...1110, you are preserving the value of each bit apart from the lowest bit, which is forced to 0. See this description of the bitwise AND operator if you still don't get it.
By forcing the lowest bit to be 0, you are rounding the number down to the nearest even number.
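To see it with the question's value: (int)55.2f is 55, which is 110111 in binary; AND-ing it with ...11110 clears the lowest bit, giving 110110, which is 54.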
You can write your own algorithm, for example with bitwise operators.
The following code works by clearing the last bit of your number; an even number indeed has its last bit not set.
int f(float x)
{
    return (int)x & ~1;
}
How about long int f = 2 * lrintf(x / 2);, where x is your float? (Be aware that lrintf rounds to the nearest integer rather than down, so 55.2 becomes 56; combine it with floorf, or truncate, if you strictly need rounding down.)
You could also just say int f = 2 * (int)(x / 2);, but some people have argued that that's more expensive, because the cast uses the specific rounding mode the C standard mandates (truncation toward zero), which may or may not be native to the CPU. The lrintf function, on the other hand, uses the CPU's native rounding mode. You need to #include <math.h>.
I am making a binary to decimal number converter on iPhone. I'm having some problems when I try to take each single digit from a number and do calculations with it. I tried char and characterAtIndex:, but they all failed to do the calculation, or I got the syntax completely wrong. Can anyone show me how to do such a cast, or is there an easier approach?
Your problem is getting numbers from strings?
The easiest way to get an integer from a character is to use the ASCII table, like this:
NSString *stringOfNums = @"15";
char c;
int num;
for (int i = 0; i < [stringOfNums length]; ++i) {
    c = [stringOfNums characterAtIndex:i];
    num = c - 48; // '0' is 48 in the ASCII table
    printf("\nchar is %c and num is %d", c, num);
}
The advantage of this method is that you can validate on a char-by-char basis that each character falls in the range 48 through 57, the ASCII digits.
Or you could do the conversion in one step using NSNumberFormatter, as described here: How to convert an NSString into an NSNumber
As for the binary-decimal conversion, does your formula work on paper? Get that right first.
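Once the per-character extraction works, the binary-to-decimal step itself can be sketched in plain C, which compiles unchanged in an Objective-C file; this is only an illustration of mine, not code from the question:

#include <stdio.h>
#include <stdlib.h>

int main(void)
{
    const char *binary = "101101";   /* example input */

    /* Manual conversion: double the running value and add each digit. */
    long value = 0;
    for (const char *p = binary; *p == '0' || *p == '1'; ++p)
        value = value * 2 + (*p - '0');
    printf("%s in decimal is %ld\n", binary, value);

    /* Or let the library do it: strtol with base 2. */
    printf("strtol says %ld\n", strtol(binary, NULL, 2));
    return 0;
}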