Type of inline declaration with calculation - ABAP

I am declaring a variable with an inline declaration: 50 * ( 2 / 5 ). The problem is that the output is 0 instead of the expected 20.
DATA(exact_result) = 50 * ( 2 / 5 ).
cl_demo_output=>display( exact_result ).
Can anyone suggest why the result is zero, whereas 50 * (2/5) = 20?
regards,
Umar Abdullah

An inline declaration assigns a data type depending on the type of the right-hand side (RHS) expression. With an arithmetic expression, the compiler determines the data type based on the overall calculation type.
First, 2 and 5 are considered to be of type I (4-byte integer), so the result of the subexpression is also of type I, even though the operator is a division (an integer division in this precise case).
Then, 50 is also considered to be of type I, and because it's used with another I-type data object (the result of the subexpression 2 / 5, which is of type I), the overall result is also of type I.
So, in your example, EXACT_RESULT is assigned the type I.
At run time, because both LHS and RHS data objects are of type I, the calculation type is I too. Consequently, 2 / 5 equals 0.4, which is rounded to 0 because it's an integer division and the default ABAP rounding is "half up" (0.4 rounds to 0, but 0.5 rounds to 1).
The workaround is to explicitly give EXACT_RESULT a data type with digits after the decimal point (DECFLOAT16, DECFLOAT34, a P type with decimals, F, or even C, because then the calculation type is P!). The type of the LHS has a higher priority than the type of the RHS (I), so the calculation type will be deduced from the LHS variable.
DATA(exact_result) = CONV decfloat16( 50 * ( 2 / 5 ) ).
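The same idea works with an explicit declaration instead of the inline one (a minimal sketch):
DATA exact_result TYPE decfloat16.
exact_result = 50 * ( 2 / 5 ). " 20, the calculation type is deduced from the LHS
cl_demo_output=>display( exact_result ).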
Be careful with this next solution: as I said, C leads to a calculation type of P with many decimals, so one could think this example is a good solution:
DATA(exact_result) = '50' * ( 2 / 5 ). " equals 20
But with inline declarations, a P calculation type leads to a data object of type P with 0 digits after the decimal point, so with other numbers the result is rounded (e.g. 8 instead of 50 here):
DATA(exact_result) = '8' * ( 2 / 5 ). " rounded! (3 instead of 3.2)

Related

Explanation of % in SQL

Could someone explain the different uses of % in SQL?
I understand that % is a wildcard that lets you match results with LIKE, e.g. a% for words starting with a, but I am confused about why the same symbol can be used as % 2 = 0 to query for even numbers.
I saw an explanation that said % can be used as divide but I thought / was divide.
In a % 2 = 0, % is the modulus arithmetic operator.
Syntax: dividend % divisor
Sample: SELECT 15 % 2 AS Remainder returns 1.
When used outside of a string, the percentage symbol % is the modulus operator, i.e. an operator which returns the remainder following division of the number preceding the operator by that following it.
Therefore, in your example, the expression % 2 = 0 holds when the number preceding the percentage symbol is even, e.g. 12 % 2 = 0 returns True.
Whereas, when used in the pattern argument of a like expression, the percentage symbol represents a wildcard operator matching any sequence of characters (or no characters at all).
Let's understand with an example:
I have created a table named c, which contains two attributes, name and num.
When num % 10 is calculated, e.g. 55 % 10, it gives 5.
If the result is 2 or 7, the row is not printed.
Otherwise (num % 10 is neither 2 nor 7), the row is printed:
SELECT * FROM c WHERE num % 10 NOT IN (2, 7);
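For contrast, a minimal sketch showing both meanings of % side by side (assuming a table t with columns name and num):
-- wildcard inside a LIKE pattern: names starting with 'a'
SELECT name FROM t WHERE name LIKE 'a%';
-- modulus outside a string: even values of num
SELECT num FROM t WHERE num % 2 = 0;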

Short Rounds Up? [duplicate]

Does anyone know why integer division in C# returns an integer and not a float?
What is the idea behind it? (Is it only a legacy of C/C++?)
In C#:
float x = 13 / 4;
// == operator is overridden here to use epsilon compare
if (x == 3.0)
    Console.WriteLine("Hello world");
Result of this code would be:
'Hello world'
Strictly speaking, there is no such thing as integer division (division, by definition, is an operation that produces a rational number, and integers are only a small subset of the rationals).
While it is common for new programmers to make the mistake of performing integer division when they actually meant to use floating-point division, in actual practice integer division is a very common operation. If you assume that people rarely use it, and that every time you divide you'll need to remember to cast to floating point, you are mistaken.
First off, integer division is quite a bit faster, so if you only need a whole-number result, you would want to use the more efficient operation.
Secondly, there are a number of algorithms that use integer division, and if the result of division were always a floating-point number you would be forced to round the result every time. One example off the top of my head is changing the base of a number: calculating each digit involves the integer division of a number along with the remainder, rather than floating-point division.
Because of these (and other related) reasons, integer division results in an integer. If you want to get the floating point division of two integers you'll just need to remember to cast one to a double/float/decimal.
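To make the base-change example concrete, here is a minimal C# sketch (the helper name ToBase is mine; it assumes a non-negative input and a radix between 2 and 10):
static string ToBase(int value, int radix)
{
    if (value == 0) return "0";
    var digits = new System.Text.StringBuilder();
    while (value > 0)
    {
        digits.Insert(0, (char)('0' + value % radix)); // the remainder is the next digit
        value /= radix;                                // integer division shifts it out
    }
    return digits.ToString();
}
// ToBase(13, 2) == "1101"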
See the C# specification. There are three types of division operators:
Integer division
Floating-point division
Decimal division
In your case we have integer division, with the following rules applied:
The division rounds the result towards zero, and the absolute value of the result is the largest possible integer that is less than the absolute value of the quotient of the two operands. The result is zero or positive when the two operands have the same sign and zero or negative when the two operands have opposite signs.
I think the reason why C# uses this type of division for integers (some languages return a floating-point result) is hardware: integer division is faster and simpler.
Each data type is capable of overloading each operator. If both the numerator and the denominator are integers, the integer type will perform the division operation, and it will return an integer type. If you want floating-point division, you must cast one or more of the numbers to a floating-point type before dividing them. For instance:
int x = 13;
int y = 4;
float z = (float)x / (float)y;
or, if you are using literals:
float x = 13f / 4f;
Keep in mind, floating points are not precise. If you care about precision, use something like the decimal type, instead.
Since you don't use any suffix, the literals 13 and 4 are interpreted as integer:
Manual:
If the literal has no suffix, it has the first of these types in which its value can be represented: int, uint, long, ulong.
Thus, since you declare 13 as integer, integer division will be performed:
Manual:
For an operation of the form x / y, binary operator overload resolution is applied to select a specific operator implementation. The operands are converted to the parameter types of the selected operator, and the type of the result is the return type of the operator.
The predefined division operators are listed below. The operators all compute the quotient of x and y.
Integer division:
int operator /(int x, int y);
uint operator /(uint x, uint y);
long operator /(long x, long y);
ulong operator /(ulong x, ulong y);
And so the result is rounded towards zero:
The division rounds the result towards zero, and the absolute value of the result is the largest possible integer that is less than the absolute value of the quotient of the two operands. The result is zero or positive when the two operands have the same sign and zero or negative when the two operands have opposite signs.
If you do the following:
int x = 13f / 4f;
You'll receive a compiler error, since a floating-point division (the / operator applied to 13f) results in a float, which cannot be implicitly cast to int.
If you want the division to be a floating-point division, you'll have to make the result a float:
float x = 13 / 4;
Notice that you'll still divide integers, which will implicitly be cast to float: the result will be 3.0. To explicitly declare the operands as float, use the f suffix (13f, 4f).
Might be useful:
double a = 5.0/2.0;
Console.WriteLine (a); // 2.5
double b = 5/2;
Console.WriteLine (b); // 2
int c = 5/2;
Console.WriteLine (c); // 2
double d = 5f/2f;
Console.WriteLine (d); // 2.5
It's just a basic operation.
Remember when you learned to divide: in the beginning, we solved 9/6 = 1 with remainder 3.
9 / 6 == 1 //true
9 % 6 == 3 // true
The / operator in combination with the % operator is used to retrieve those values.
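Incidentally, .NET can hand back both values in a single call via Math.DivRem (a small sketch):
int quotient = Math.DivRem(9, 6, out int remainder);
Console.WriteLine($"{quotient} remainder {remainder}"); // prints "1 remainder 3"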
The result will always be of the type that has the greater range of the numerator and the denominator. The exceptions are byte and short, whose division produces int (Int32).
var a = (byte)5 / (byte)2; // 2 (Int32)
var b = (short)5 / (byte)2; // 2 (Int32)
var c = 5 / 2; // 2 (Int32)
var d = 5 / 2U; // 2 (UInt32)
var e = 5L / 2U; // 2 (Int64)
var f = 5L / 2UL; // 2 (UInt64)
var g = 5F / 2UL; // 2.5 (Single/float)
var h = 5F / 2D; // 2.5 (Double)
var i = 5.0 / 2F; // 2.5 (Double)
var j = 5M / 2; // 2.5 (Decimal)
var k = 5M / 2F; // Not allowed
There is no implicit conversion between floating-point types and the decimal type, so division between them is not allowed. You have to explicitly cast and decide which one you want (Decimal has more precision and a smaller range compared to floating-point types).
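For instance, explicitly casting the float operand to decimal makes the disallowed line above compile (a sketch):
var k = 5M / (decimal)2F; // 2.5 (Decimal), thanks to the explicit cast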
As a little trick to know what you are getting, you can use var, and the compiler will tell you the type to expect:
int a = 1;
int b = 2;
var result = a/b;
The compiler will tell you that result is of type int here.

Why is precision lost if type resolution is left to the compiler?

What is the reason for the loss of precision when the type of the variable is left to be determined by the compiler? Is this documented anywhere?
DATA: gv_1 TYPE p LENGTH 15 DECIMALS 2 VALUE '56555.31'.
DATA: gv_2 TYPE p LENGTH 15 DECIMALS 2 VALUE '56555.31'.
DATA: gv_3 TYPE p LENGTH 15 DECIMALS 2 VALUE '56555.34'.
DATA(gv_sum) = gv_1 + gv_2 + gv_3. "data type left to be resolved by the compiler
WRITE / gv_sum.
DATA: gv_sum_exp TYPE p LENGTH 15 DECIMALS 2. "explicit type declaration
gv_sum_exp = gv_1 + gv_2 + gv_3.
WRITE / gv_sum_exp.
The first sum results in
169666
The second one in
169665.96
As we know, the ABAP compiler brings all the operands of an arithmetic expression to the so-called calculation type. And we also know that the data type with the largest value range determines the whole calculation type. But you are probably not aware of some changes that were introduced to this process with the release of inline declarations in ABAP. Here they are:
If operands are specified as generically typed field symbols or formal parameters and an inline declaration DATA(var) is used as the target field of an assignment, the generic types contribute to the statically detectable calculation type (used to determine the data type of the declaration) as follows: ...csequence, clike, c, n, and p like p. If no type with a higher priority is involved, the type p with length 8 (no decimal places) is used for the declaration....
That is exactly what we see in the debugger during execution of your code: the inline-declared gv_sum gets type p with length 8 and no decimal places, so the sum 169665.96 is rounded to 169666.
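If you want to keep the inline declaration, a possible workaround (a sketch along the lines of the CONV trick shown in the first answer above; the variable name is mine) is to force a decimal calculation type explicitly:
DATA(gv_sum2) = CONV decfloat34( gv_1 + gv_2 + gv_3 ).
WRITE / gv_sum2. "169665.96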

How to see what is being compared in an if statement

I'm having a problem with some VBA code.
I have an if statement that doesn't treat the same content equally.
E.g. 0.1 equals 0.1, but on a re-run 0.1 does not equal 0.1 (these are the values shown by VBA).
The code is long, so before posting it I would like to know if it's possible to see the machine's perspective in an if statement (hex, ASCII...). This is because, although debugging tells me they are the same (through MsgBox, VarType, etc.), the if statement is not triggered.
Pseudo code:
x = 0.0000001 * 1 * 10^6 (which equals 0.1)
y = 0.0001 * 1 * 10^3 (which also equals 0.1)
In the if statement, x doesn't enter the branch, but y does.
This is because the floating-point implementation may not be able to represent those numbers accurately, due to the fact that they are encoded in a base-2 representation.
If you want to compare them, I would suggest using CDec (which converts to Decimal, a VBA custom base-10 floating point):
Debug.Print (0.0000001 * 1 * 10 ^ 6) = (0.0001 * 1 * 10 ^ 3) ' False
Debug.Print CDec(0.0000001 * 1 * 10 ^ 6) = CDec(0.0001 * 1 * 10 ^ 3) ' True
While they both display 0.1, the floating-point value of 0.0000001 * 1 * 10 ^ 6 is in fact 0x3FB9999999999999, whereas 0.0001 * 1 * 10 ^ 3 returns 0x3FB999999999999A.
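Another common approach (a sketch; the tolerance value is mine and would need tuning to your data) is to compare against a small epsilon instead of testing exact equality:
Dim x As Double, y As Double
x = 0.0000001 * 1 * 10 ^ 6
y = 0.0001 * 1 * 10 ^ 3
' Treat the values as equal when they differ by less than the tolerance
If Abs(x - y) < 0.000000001 Then Debug.Print "equal enough"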
I'd recommend reading What Every Computer Scientist Should Know About Floating-Point Arithmetic

Divide int into 2 other ints

I need to divide one int into 2 other ints. The first int is not constant, so one problem is what to do with odd numbers, because I only want whole numbers. For example, if int = 5, then int(2) will = 2 and int(3) will = 3. Any help will be greatly appreciated.
Supposing you want to express x = a + b, where a and b are as close to x/2 as possible:
a = ceiling(x / 2.0);
b = floor(x / 2.0);
That's pseudo code; you have to find out the actual functions for floor and ceiling in your library. Make sure the division is performed on floating-point numbers, as in the sketch below.
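In C (which Objective-C code can call directly), a minimal sketch of that idea using the math.h functions:
#include <math.h>

int x = 5;
int a = (int)ceil(x / 2.0);  /* 3: the larger half  */
int b = (int)floor(x / 2.0); /* 2: the smaller half */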
As pure integers:
a = x / 2 + (x % 2 == 0 ? 0 : 1);
b = x / 2;
(This may be a bit fishy for negative numbers, because it'll depend on the behaviour of division and modulo for negative numbers.)
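One way to sidestep that negative-number caveat (a sketch of mine, not from the answer above) is to derive the second part by subtraction, which guarantees a + b == x for any sign:
int b = x / 2;  /* truncates towards zero */
int a = x - b;  /* the rest, so a + b == x always holds */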
You can try the ceil and floor functions from the math library to produce results like 2 and 3 for odd inputs:
int(2) = ceil(x / 2.0);  // will produce 3 for input 5
int(3) = floor(x / 2.0); // will produce 2 for input 5
Well, my answer is not in Objective-C, but I guess you could translate this easily.
My idea is:
part1 = source_number div 2
part2 = source_number div 2 + (source_number mod 2)
This way the second number will be bigger if the starting number is an odd number.
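A direct translation of that idea into C (a sketch; it behaves as described for non-negative inputs):
int part1 = source_number / 2;
int part2 = source_number / 2 + (source_number % 2);
/* source_number = 5 gives part1 = 2, part2 = 3 */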