Mantissa and Exponent - Negative number with decimal (beyond .5) - exponent

Here is my question. I am getting two different answers for the same problem: the online calculator I used to check my work disagrees with the answer I am supposed to get, and I need to know which one is correct.
The problem is: -6.25
I worked this out for 6.25 and then took the twos complement.
6.25 --> 0110.001
Mantissa --> 0.11000100000, Exponent --> 0011
My Answer: Two's Complement --> 1.00111100000, Exponent --> 0011
The answer I should be getting says: Mantissa --> 1.11000100000 Exponent --> 0011
It doesn't seem to make sense that all you do is put a 1 in front of the positive mantissa. I know that a sign bit of 0 means a positive number and a 1 means a negative number. Could you please let me know which of these is correct, if either? Thanks. I just want to make sure I am doing it right before I continue.

I'm not sure the number you converted is correct.
In my opinion:
6.25 --> 110.010 (fixed point), whereas
6.125 --> 110.001 (fixed point)
Then you can transform the fixed-point form into exponent form: the two's complement of -6.125 is 1_001.111, which in exponent form is 1.001111 × 2^3.
So I think your answer is correct; the other reference answer is just the true (sign-magnitude) form of a negative binary number.
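A quick way to check this by machine is to do the fixed-point arithmetic with plain integers. Here is a minimal C sketch; the 8-bit width and the 2^3 scale factor are my own choices for illustration:

#include <stdio.h>
#include <stdint.h>

int main(void) {
    /* 6.125 in 8-bit two's-complement fixed point with 3 fraction bits:
       6.125 * 2^3 = 49 = binary 00110.001 */
    int8_t pos = (int8_t)(6.125 * 8);   /* 49 */
    int8_t neg = (int8_t)-pos;          /* two's complement negation: -49 */
    uint8_t bits = (uint8_t)neg;        /* view the raw bit pattern */
    for (int i = 7; i >= 0; i--)
        printf("%u", (bits >> i) & 1u);
    printf("\n");                       /* prints 11001111, i.e. 11001.111 */
    return 0;
}

Sign-extending 1001.111 to eight bits gives 11001.111, the same value (-6.125), so this agrees with the 1_001.111 above.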

Related

Why is there one more negative int than positive int?

The upper limit for any int data type (excluding tinyint) is always one less than the absolute value of the lower limit.
For example, the upper limit for an int is 2,147,483,647 and ABS(lower limit) = 2,147,483,648.
Is there a reason why there is always one more negative int than positive int?
EDIT: Changed since question isn't directly related to DB's
The types you listed are signed integers. Let's look at a one-byte (8-bit) example. With 1 byte you have 2^8 bit patterns, which gives you 256 possible numbers to store.
Now suppose you want the same count of positive and negative numbers (128 in each group).
The point is that 0 has no +0 and -0; there is only one 0.
So you end up with the range -128 .. -1, 0, 1 .. 127.
The same logic works for 16/32/64-bit types.
EDIT:
Why is the range -128 to 127?
It depends on how you represent signed integers:
Signed magnitude representation
Ones' complement
Two's complement
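For two's complement, which is what the fixed-width C types use, a small sketch (my own illustration) confirms the counting argument above:

#include <stdio.h>
#include <stdint.h>

int main(void) {
    /* int8_t is two's complement: 2^8 = 256 patterns split as
       128 negative values, one zero and 127 positive values */
    printf("int8_t:  %d .. %d\n", INT8_MIN, INT8_MAX);
    printf("int32_t: %ld .. %ld\n", (long)INT32_MIN, (long)INT32_MAX);
    return 0;       /* prints -128 .. 127 and -2147483648 .. 2147483647 */
}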
This question isn't really related to databases.
As lad2025 points out, there is an even number of values. So, once 0 takes one of them, one side must end up with one more value than the other. The question you are really asking is: "Why is there one more negative value than positive value?"
Basically, the reason is the sign bit. One possible representation of negative numbers is to use n - 1 bits for the absolute value and the remaining bit for the sign. The problem with this approach is that it permits both +0 and -0. That is not desirable.
To fix this, computer scientists devised the two's-complement representation for signed integers. (Wikipedia explains this in more detail.) This representation keeps a sign bit that can be tested, but changes how the magnitude is encoded. If +1 is represented as 001, then -1 is represented as 111. In general, the negative of a value is produced by subtracting 1 and then taking the bit-wise complement.
The issue is then the value 100 (followed by any number of zeros). The sign bit is set, so it is negative. However, when you subtract 1 and invert, it becomes itself again (100 - 1 = 011, inverted = 100). There is an argument for calling this "infinity" or "not a number"; instead, it is assigned the smallest possible negative number.
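Both observations are easy to reproduce. A minimal C sketch (8-bit width chosen for readability; unsigned arithmetic is used so the wrap-around is well defined):

#include <stdio.h>
#include <stdint.h>

/* Negate by "subtract 1, then complement" - equivalent to ~x + 1. */
static uint8_t negate8(uint8_t x) {
    return (uint8_t)~(uint8_t)(x - 1u);
}

int main(void) {
    printf("0x%02X\n", negate8(0x01));  /* 0xFF: -1 */
    printf("0x%02X\n", negate8(0x80));  /* 0x80: -128 negates to itself */
    return 0;
}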
Let's say you have a 4-byte (32-bit) integer. The range defined by C++ is -2^31 to 2^31 - 1.
So we end up with the range -2^31 .. 0 .. 2^31 - 1.
We can think of this as 2^31 non-negative integers (note that 0 is included) and 2^31 negative integers.

What is the decimal value of the sum of the following 5-bit two's complement numbers?

Can someone explain this question?
What is the decimal value of the sum of the following 5-bit two's complement numbers? 10010 + 10101
Two's complement numbers are added together by doing binary arithmetic.
10010 +
10101 =
00111
As with ordinary addition, you carry a 1 into the next column whenever two ones meet in the same column.
To interpret two's complement numbers, note that the rightmost bit represents 2^0, the next 2^1, then 2^2, then 2^3. This pattern extends naturally to 32- and 64-bit numbers. The final (leftmost) bit in 5-bit two's complement, however, represents -2^4.
Multiplying these weights by the bits we came up with, we have:
0*(-2^4) + 0*2^3 + 1*2^2 + 1*2^1 + 1*2^0
This value is 4 + 2 + 1 = 7. If we look at the decimal value of 10010, we see it's equal to -2^4 + 2^1 = -16 + 2 = -14. Likewise, 10101 comes out to -11.
So the computer is saying that the sum (-14) + (-11) is 7. This is overflow: we ignored the fact that the final column produced a carry out of the 5-bit word. Given a finite representation, this is the best we can do.
The overflow is characterized by a bunch of neat properties, since the representation is an abelian group (arithmetic modulo 2^5), a mathematical construct. It's outside the scope of the question, but you should certainly understand these properties. Just google overflow.
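If you want to experiment, here is a small C sketch that reproduces the arithmetic above (the helper name from5 is my own):

#include <stdio.h>

#define MASK5 0x1F   /* keep only the low 5 bits */

/* Interpret a 5-bit pattern as two's complement: the top bit weighs -2^4. */
static int from5(unsigned x) {
    x &= MASK5;
    return (x & 0x10) ? (int)x - 32 : (int)x;
}

int main(void) {
    unsigned a = 0x12, b = 0x15;       /* 10010 and 10101 */
    unsigned sum = (a + b) & MASK5;    /* the carry out of bit 4 is discarded */
    printf("%d + %d = %d\n", from5(a), from5(b), from5(sum));
    /* prints: -14 + -11 = 7 -- the true sum -25 is outside the 5-bit range */
    return 0;
}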
Also, most answers are going to be curt since it's a basic topic that google could solve and StackOverflow gets enough questions as is. Make sure to check google and search StackOverflow before asking questions!
Interpreting the patterns as unsigned binary instead:
10010 = 2^4 + 2^1 = 16 + 2 = 18
10101 = 2^4 + 2^2 + 2^0 = 21
21 + 18 = 39
(^ = power)

Floating point serialization, lexicographical comparison == floating point comparison

I'm looking for a way to serialize floating-point values so that a lexicographical comparison of the serialized form gives the same order as a floating-point comparison. I think it is possible by storing them in the form:
| sign bit (1 for positive) | exponent | significand |
The exponent and the significand would be serialized big-endian, and the complement would be taken for negative numbers.
Would this work? I don't mind if it breaks for NaN, but having INF comparison working would be nice.
The format of IEEE numbers is specifically designed so that "plain" integer comparison can be used. However, this only applies when two numbers of the same sign are compared.
Your suggestion to complement the numbers when they are negative is sound, so this will work.
This will work for ±Inf and for subnormal numbers. NaNs, however, will not work; or rather, they will be considered "larger" than Inf.
The only problematic case is "-Zero" (i.e. sign = 1, exponent = 0, and mantissa = 0). According to IEEE, Zero == -Zero. You have to decide whether to emit -Zero as Zero, treat them as different, or add special code to the comparison routine.
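The scheme is straightforward to implement by bit-twiddling the raw IEEE-754 encoding. A minimal C sketch (the helper name float_sort_key is my own): complement all bits of a negative number, and flip only the sign bit of a non-negative one, so the resulting unsigned keys, written out big-endian, compare in numeric order:

#include <stdint.h>
#include <string.h>

static uint32_t float_sort_key(float f) {
    uint32_t u;
    memcpy(&u, &f, sizeof u);      /* get the raw IEEE-754 bits safely */
    if (u & 0x80000000u)
        return ~u;                 /* negative: complement everything */
    return u | 0x80000000u;        /* non-negative: just set the sign bit */
}

Note that this maps -0.0f to a key just below +0.0f, which is one way of resolving the -Zero question above.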

How can I use SYNCSORT to format a Packed Decimal field with a specific sign value?

I want to use SYNCSORT to force all Packed Decimal fields to a negative sign value. The critical requirement is the 2nd nibble must be Hex 'D'. I have a method that works but it seems much too complex. In keeping with the KISS principle, I'm hoping someone has a better method. Perhaps using a bit mask on the last 4 bits? Here is the code I have come up with. Is there a better way?
*
* This sort logic is intended to force all Packed Decimal amounts to
* have a negative sign with a B'....1101' value (Hex 'xD').
*
SORT FIELDS=COPY
OUTFIL FILES=1,
INCLUDE=(8,1,BI,NE,B'....1..1',OR, * POSITIVE PACKED DECIMAL
8,1,BI,EQ,B'....1111'), * UNSIGNED PACKED DECIMAL
OUTREC=(1:1,7, * INCLUDING +0
8:(-1,MUL,8,1,PD),PD,LENGTH=1,
9:9,72)
OUTFIL FILES=2,
INCLUDE=(8,1,BI,EQ,B'....1..1',AND, * NEGATIVE PACKED DECIMAL
8,1,BI,NE,B'....1111'), * NOT UNSIGNED PACKED DECIMAL
OUTREC=(1:1,7, * INCLUDING -0
8:(+1,MUL,8,1,PD),PD,LENGTH=1,
9:9,72)
In the code that processes the VSAM file, can you change the read logic to GET with KEY GTEQ and check for < 0 on the result instead of doing a specific keyed read?
If you did that, you could accept all three negative packed values xA, xB and xD.
Have you considered writing an E15 user exit? The E15 user exit lets you manipulate records as they are input to the sort process. In this case you would have a REXX, COBOL or other LE-compatible subroutine patch the packed-decimal sign field as each record enters the sort. There is no need to split into multiple files to be merged later on.
Here is a link to example JCL for invoking an E15 exit from DFSORT (the JCL is the same for SYNCSORT). Chapter 4 of this reference describes how to develop user exit routines; again, this is a DFSORT manual, but I believe SyncSort is fully compatible in this respect. Writing a user exit is no different from writing any other subroutine - get the linkage right and the rest is easy.
This is a very general outline, but I hope it helps.
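Whatever language the exit is written in, the actual patch is tiny. A hedged C sketch of the core operation (field and len are hypothetical names; a real E15 exit must also follow the exit's record-passing linkage conventions):

#include <stddef.h>
#include <stdint.h>

/* Force the sign nibble (low half of the last byte) of a
   packed-decimal field to X'D', i.e. negative. */
static void force_negative_sign(uint8_t *field, size_t len) {
    field[len - 1] = (uint8_t)((field[len - 1] & 0xF0u) | 0x0Du);
}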
Okay, it took some digging but NEALB's suggestion to seek help on MVSFORUMS.COM paid off... here is the final result. The OUTREC logic used with SORT/MERGE replaces OUTFIL and takes advantage of new capabilities (IFTHEN, WHEN and OVERLAY) in Syncsort 1.3 that I didn't realize existed. It pays to have current documentation available!
*
* This MERGE logic is intended to assert that the Packed Decimal
* field has a negative sign with a B'....1101' value (Hex X'.D').
*
*
MERGE FIELDS=(27,5.4,BI,A),EQUALS
SUM FIELDS=NONE
OUTREC IFTHEN=(WHEN=(32,1,BI,NE,B'....1..1',OR,
32,1,BI,EQ,B'....1111'),
OVERLAY=(32:(-1,MUL,32,1,PD),PD,LENGTH=1)),
IFTHEN=(WHEN=(32,1,BI,EQ,B'....1..1',AND,
32,1,BI,NE,B'....1111'),
OVERLAY=(32:(+1,MUL,32,1,PD),PD,LENGTH=1))
Looking at the last byte of a packed field is possible. You want to turn positive/unsigned values into negative ones, so if the field is greater than -1, subtract it from zero.
From a short-lived answer by MikeC, it is now known that the data contains non-preferred signs (that is, the low-order half-byte can contain A through F, whereas a preferred sign would be C (positive) or D (negative); F is unsigned, treated as positive).
This is tested with DFSORT. It should work with SyncSORT. It turns out that DFSORT can understand a negative packed-decimal zero, but it will not create one (it will, however, allow a negative zoned-decimal zero to be created from a negative packed-decimal zero).
The idea is that a non-preferred sign is valid and will be interpreted with the correct sign as input to a decimal machine instruction, but the result will always carry a preferred sign and will be correct. So by adding zero first, the field is normalised to a preferred sign, after which the sign tests behave predictably. With data in the sign nybble of packed-decimal fields, SORT has some specific and documented behaviours, which just don't happen to help here.
Since, after the normalisation of signs, X'0C' is the only value that still needs to become a negative zero, a simple test replaces it with the constant X'0D'. And since a negative zero cannot be produced arithmetically, the second test is changed from the original minus one to zero.
With non-preferred signs in the data:
SORT FIELDS=COPY
INREC IFTHEN=(WHEN=INIT,
OVERLAY=(32:+0,ADD,32,1,PD,TO=PD,LENGTH=1)),
IFTHEN=(WHEN=(32,1,CH,EQ,X'0C'),
OVERLAY=(32:X'0D')),
IFTHEN=(WHEN=(32,1,PD,GT,0),
OVERLAY=(32:+0,SUB,32,1,PD,TO=PD,LENGTH=1))
With preferred signs in the data:
SORT FIELDS=COPY
INREC IFTHEN=(WHEN=(32,1,CH,EQ,X'0C'),
OVERLAY=(32:X'0D')),
IFTHEN=(WHEN=(32,1,PD,GT,0),
OVERLAY=(32:+0,SUB,32,1,PD,TO=PD,LENGTH=1))
Note: If non-preferred signs are stuffed through a COBOL program not using compiler option NUMPROC(NOPFD) then results will be "interesting".

Trouble with floats in Objective-C

I have a small problem and I can't find a solution!
My code is (this is only sample code, but my original code does something like this):
float x = [@"2.45" floatValue];
for (int i = 0; i < 100; i++)
    x += 0.22;
NSLog(@"%f", x);
The output is 52.450001 and not 52.450000!
I don't know why this happens!
Thanks for any help!
~SOLVED~
Thanks to everybody! Yes, I solved it by switching to the double type!
Floats are a number representation with limited precision. Not every value can be represented in this format. See here as well.
You can easily see why this must be the case: there are infinitely many numbers just in the interval (-1..1), but a float has only a limited number of bits to represent all numbers in (-MAXFLOAT..MAXFLOAT).
More aptly put: a 32-bit integer representation covers a countable set of integers, but there are uncountably many real values, which therefore cannot all be represented in a limited encoding of 32 or 64 bits. Hence there is not only a limit on the highest and lowest representable real value, but also on the accuracy.
So why is a number with only a couple of digits after the decimal point affected? Because the representation is binary rather than decimal, so numbers that are short in decimal are not necessarily short in binary.
See http://en.wikipedia.org/wiki/Floating_point#Accuracy_problems
Floating point numbers cannot always be represented exactly by computers. This leads to inaccuracy in some digits.
It's like me asking you what 1/3 is in decimal. No matter how hard you try, you're not going to be able to tell me what it is because decimal can't accurately describe that number.
Floats can't accurately describe some decimal numbers.
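To see both effects in one place, here is a minimal C sketch (the loop mirrors the code in the question; the exact digits printed can vary by platform):

#include <stdio.h>

int main(void) {
    /* 0.22 has no finite binary expansion, so float stores the
       nearest representable value instead. */
    float f = 0.22f;
    printf("%.10f\n", f);        /* 0.2199999988..., not 0.2200000000 */

    float  xf = 2.45f;
    double xd = 2.45;
    for (int i = 0; i < 100; i++) {
        xf += 0.22f;             /* each addition rounds to 24 mantissa bits */
        xd += 0.22;              /* double rounds too, but to 53 bits */
    }
    printf("float:  %f\n", xf);  /* the last digits drift from the exact sum */
    printf("double: %f\n", xd);  /* error is far too small to show at %f */
    return 0;
}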