How do you calculate a modulo operation with real numbers in SystemVerilog?

For example:
real a = 10.2917541278;
real modout;
assign modout = (a % 3.142);
Currently this is not supported; I get an error saying the operands need to be integers.
I don't want this code to be synthesized. This is only in the testbench.

The concept of a modulus in real-number math is a bit odd, since the division of two real numbers yields another real number (ignoring division by zero). If, however, you want something like fmod in C/C++, you can implement it like so:
real x, d, r;
assign r = x - d * $floor(x / d); // Implements fmod(x, d) or "x % d" for real x, d
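As a quick sanity check outside the testbench, the same identity can be compared against C's fmod; a minimal standalone C sketch, reusing the values from the question:

#include <math.h>
#include <stdio.h>

int main(void) {
    double x = 10.2917541278;
    double d = 3.142;
    double r = x - d * floor(x / d);          // same identity as the SV snippet
    printf("identity: %.10f\n", r);           // ~0.8657541278
    printf("fmod:     %.10f\n", fmod(x, d));  // matches for positive x and d
    return 0;
}

For negative operands the two differ: fmod truncates towards zero, while the floor-based identity always returns a result with the sign of d.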

Related

Unnormalizing in Knuth's Algorithm D

I'm trying to implement Algorithm D from Knuth's "The Art of Computer Programming, Vol 2" in Rust, although I'm having trouble understanding how to implement the very last step of unnormalizing. My natural numbers are a class where each number is a vector of u64, in base u64::MAX. Addition, subtraction, and multiplication have been implemented.
Knuth's Algorithm D is a Euclidean division algorithm which takes two natural numbers x and y and returns (q, r) where q = x / y (integer division) and r = x % y, the remainder. The algorithm depends on an approximation method which only works if the first digit of y is greater than b/2, where b is the base you're representing the numbers in. Since not all numbers are of this form, it uses a "normalizing trick": for example (if we were in base 10), instead of doing 200 / 23 we calculate a normalizer d and do (200 * d) / (23 * d), so that 23 * d has a first digit greater than b/2.
So when we use the approximation method, we end up with the desired q, but the remainder is multiplied by a factor of d. The last step is therefore to divide r by d so that we get the q and r we want. My problem is that I'm a bit confused about how we're supposed to do this last step, as it requires division, and the method it's part of is the one trying to implement division.
(Maybe helpful?):
The way that d is calculated is just by taking the integer floor of b-1 divided by the first digit of y. However, Knuth suggests that it's possible to make d a power of 2, as long as d * the first digit of y is greater than b / 2. I think he makes this suggestion so that instead of dividing, we can just do a binary shift for this last step, although I don't think I can do that given that my numbers are represented as vectors of u64 values rather than bits.
Any suggestions?
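For what it's worth, if d is a power of two, dividing the remainder by d in a base-2^64 limb representation is just a right shift that carries bits between neighbouring limbs; a rough C sketch (the little-endian limb layout and the function name are assumptions, not something from the question):

#include <stdint.h>
#include <stddef.h>

// Divide a little-endian multi-limb number in place by 2^k (0 < k < 64),
// i.e. shift right by k bits, pulling the low bits of the next limb down.
void shr_limbs(uint64_t *limbs, size_t n, unsigned k) {
    for (size_t i = 0; i < n; i++) {
        uint64_t lo = limbs[i] >> k;
        uint64_t hi = (i + 1 < n) ? (limbs[i + 1] << (64 - k)) : 0;
        limbs[i] = lo | hi;
    }
}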

Given no modulus or even/odd function, how would one check for an odd or even number?

I recently sat a computing exam at university in which we had never been taught the modulus function or any other odd/even check, and we had no access to external documentation except our previous lecture notes. Is it possible to do this without them, and how?
Bitwise AND (&)
Extract the last bit of the number using the bitwise AND operator. If the last bit is 1, then it's odd, else it's even. This is the simplest and most efficient way of testing it. Examples in some languages:
C / C++ / C#
bool is_even(int value) {
    return (value & 1) == 0;
}
Java
public static boolean is_even(int value) {
    return (value & 1) == 0;
}
Python
def is_even(value):
    return (value & 1) == 0
I assume this is only for integer numbers as the concept of odd/even eludes me for floating point values.
For these integer numbers, checking the least significant bit (LSB) as proposed by Rotem is the most straightforward method, but there are many other ways to accomplish it.
For example, you could use integer division as a test. It is one of the most basic operations and is implemented on virtually every platform. The result of an integer division is always another integer. For example:
>> x = int64( 13 ) ;
>> x / 2
ans =
7
Here I cast the value 13 to an int64 to make sure MATLAB treats the number as an integer instead of its default double data type.
Also note that the result here is rounded up, towards infinity, to the next integral value. This is MATLAB-specific behavior; other platforms might round down instead, but it does not matter for us, since the only behavior we rely on is the rounding itself, whichever way it goes. The rounding allows us to define the following behavior:
If a number is even: Dividing it by 2 will produce an exact result, such that if we multiply this result by 2, we obtain the original number.
If a number is odd: Dividing it by 2 will result in a rounded result, such that multiplying it by 2 will yield a different number than the original input.
Now that the logic is worked out, the code is pretty straightforward:
%% sample input
x = int64(42) ;
y = int64(43) ;
%% define the checking function
% uses only multiplication and division operator, no high level function
is_even = @(x) int64(x) == (int64(x)/2)*2 ;
And obviously, this will yield:
>> is_even(x)
ans =
1
>> is_even(y)
ans =
0
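The same test also ports directly to C, where integer division truncates towards zero; a minimal sketch:

#include <stdio.h>

// Even iff halving and doubling reproduces the original value.
int is_even(long long x) {
    return (x / 2) * 2 == x;
}

int main(void) {
    printf("%d %d\n", is_even(42), is_even(43));   // prints "1 0"
    return 0;
}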
I found out from a fellow student how to solve this simply with maths instead of functions.
Using (-1)^n :
If n is odd then the outcome is -1
If n is even then the outcome is 1
This is some pretty out-of-the-box thinking, but it would be the only way to solve this without previous knowledge of complex functions including mod.
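If you did want to express that trick in code, a tiny C sketch would be the following (using pow from math.h; only the sign of the result matters, which is robust even with floating-point rounding, and the function name is just illustrative):

#include <math.h>
#include <stdio.h>

// Parity via the sign of (-1)^n: positive means even, negative means odd.
int is_even_pow(int n) {
    return pow(-1.0, (double)n) > 0.0;
}

int main(void) {
    printf("%d %d\n", is_even_pow(4), is_even_pow(7));   // prints "1 0"
    return 0;
}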

I want to write a function that rewrites a float as a continued fraction

I am trying to make a recursive function that can rewrite a float as a continued fraction. I am getting an error message that I don't understand.
It seems like it can't store certain numbers exactly in binary, which breaks the comparison. That's my current theory.
condition 'cfa_reg != -1' not met
let rec float2cfrac (x : float) : int list =
    if x - floor x = 0.0 then
        [int x]
    else
        [int x] @ float2cfrac (1.0 / (x - floor x))
printfn "%A" (float2cfrac 3.245) // list
When I run your code, I get a stack overflow.
That means your condition x - floor x = 0.0 is never met.
Equality with floating-point numbers is tricky, as there is always a small precision error involved in calculations. Never test for exact equality; instead, stop when the difference is smaller than an acceptable error:
abs(x - floor x) < 0.0000000001
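For illustration, here is the same idea with a tolerance instead of exact equality, written as a small C sketch rather than F# (the tolerance 1e-10 and the term limit are arbitrary choices):

#include <math.h>
#include <stdio.h>

#define EPS 1e-10

// Writes the continued-fraction terms of x into terms[] and returns how many were produced.
int float2cfrac(double x, int terms[], int max_terms) {
    int n = 0;
    while (n < max_terms) {
        double ip = floor(x);
        terms[n++] = (int)ip;
        double frac = x - ip;
        if (fabs(frac) < EPS)        // "close enough" instead of == 0.0
            break;
        x = 1.0 / frac;
    }
    return n;
}

int main(void) {
    int terms[32];
    int n = float2cfrac(3.245, terms, 32);
    for (int i = 0; i < n; i++)
        printf("%d ", terms[i]);     // 3.245 expands to 3 4 12 4, modulo rounding noise
    printf("\n");
    return 0;
}

The term limit also acts as a safety net against the runaway recursion seen in the question.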

Calculating Collision Times Between Two Circles

I keep stumbling into game/simulation solutions for finding distance while time is running, and it's not what I'm looking for.
I'm looking for an O(1) formula to calculate the (0, 1, or 2) clock time(s) at which two circles are exactly r1+r2 apart. Negative time is possible. It's possible the two circles never collide, and their paths may not even intersect (as in two cars "clipping" each other while driving too close to the middle of the road in opposite directions), which is messing up all my mx+b solutions.
Technically, a single point collision should be possible.
I'm about 100 lines of code deep, and I feel sure there must be a better way, and I'm not even sure whether my test cases are correct or not. My initial setup was:
dist( x1+dx1*t, y1+dy1*t, x2+dx2*t, y2+dy2*t ) == r1+r2
Assuming the distance at any time t can be calculated with Pythagoras, I want the two points in time at which the distance between the centers is precisely the sum of the radii. I solved for a, b, and c and applied the quadratic formula, and I believe that if the objects were phantoms this would give me the first and final moments of collision, with overlap at every moment in between.
I'm working under the precondition that it's impossible for 2 objects to be overlapping at t0, which means infinite collision of "stuck inside each other" is not possible. I'm also filtering out and using special handling for when the slope is 0 or infinite, which is working.
I tried calculating, at the moment object 1 is at the paths' intersection point, its distance from object 2, and likewise when object 2 is at the intersection point, but this did not work, as a collision can occur when neither is at the intersection.
I'm also having problems when the slopes are equal but the magnitudes differ.
Is there a simple physics/math formula for this already?
Programming language doesn't matter; pseudocode would be great, or any math formula that doesn't have complex symbols (I'm not a math/physics person)... but nothing higher order (I assume Python probably has a collide(p1, p2) method somewhere already).
There is a simple(-ish) solution. You already mentioned using the quadratic formula which is a good start.
First, define your problem in a form where the quadratic formula can be useful; in this case, the distance between the two centers over time.
Let's define our time as t
Because we are using two dimensions we can call our dimensions x & y
First let's define the two center points at t = 0 of our circles as a & b
Let's also define our velocity at t = 0 of a & b as u & v respectively.
Finally, let's assume constant accelerations for a & b, called o & p respectively.
The equation for position along any one dimension (which we'll call i) with respect to time t is: i(t) = 1/2 * o * t^2 + u * t + i0, where o is the constant acceleration, u the initial velocity, and i0 the initial position along dimension i (that is for circle a; use p and v for circle b).
We know the distance between two 2D points at any time t is the square root of ((a.x(t) - b.x(t))^2 + (a.y(t) - b.y(t))^2)
Using the position formula for each dimension, we can substitute so that the distance equation is in terms of just t and the constants we defined earlier. For shorthand, we will call this function d(t).
Finally using that equation, we will know that the t values where d(t) = a.radius + b.radius are where collision starts or ends.
To put this in quadratic-formula terms, we move the radii to the left-hand side so we get d(t) - (a.radius + b.radius) = 0.
We can then expand and simplify the resulting equation so that everything is in terms of t and the constant values we were given, and solve for both roots with the quadratic formula.
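For the common special case of constant velocities (zero acceleration), writing P = a - b for the relative position at t = 0 and V = u - v for the relative velocity, the expansion comes out to:

a_coef = V.x^2 + V.y^2
b_coef = 2 * (P.x * V.x + P.y * V.y)
c_coef = P.x^2 + P.y^2 - (a.radius + b.radius)^2

and the collision times are the real roots of a_coef * t^2 + b_coef * t + c_coef = 0 (no real roots means no collision).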
This handles the no-collision case as well: if the two objects never collide, the discriminant is negative and the roots are imaginary.
You should be able to translate the rest into code fairly easily. I'm running out of time atm and will write out a simple solution when I can.
Following up on TinFoilPancakes' answer and heavily using WolframAlpha to simplify the formulae, I've come up with the following pseudocode, well, C# code actually, which I've commented somewhat:
The Ball class has the following properties:
public double X;
public double Y;
public double Xvel;
public double Yvel;
public double Radius;
The algorithm:
public double TimeToCollision(Ball other)
{
double distance = (Radius + other.Radius) * (Radius + other.Radius); // Square of the collision distance (sum of the radii)
double a = (Xvel - other.Xvel) * (Xvel - other.Xvel) + (Yvel - other.Yvel) * (Yvel - other.Yvel);
double b = 2 * ((X - other.X) * (Xvel - other.Xvel) + (Y - other.Y) * (Yvel - other.Yvel));
double c = (X - other.X) * (X - other.X) + (Y - other.Y) * (Y - other.Y) - distance;
double d = b * b - 4 * a * c;
// Ignore glancing collisions that may not cause a response due to limited precision and lead to an infinite loop
if (b > -1e-6 || d <= 0)
return double.NaN;
double e = Math.Sqrt(d);
double t1 = (-b - e) / (2 * a); // Collision time, +ve or -ve
double t2 = (-b + e) / (2 * a); // Exit time, +ve or -ve
// b < 0 => Getting closer
// If we are overlapping and moving closer, collide now
if (t1 < 0 && t2 > 0 && b <= -1e-6)
return 0;
return t1;
}
The method returns the time at which the Balls collide, which can be +ve, -ve or NaN; NaN means they won't (and didn't) collide.
Further points to note: we can check the discriminant against zero to bail out early, which will be the case most of the time, and avoid the Sqrt. Also, since I'm using this in a continuous collision detection system, I ignore glancing collisions that will have little or no impact, since it's possible the response to such a collision won't change the velocities, leading to the same situation being checked infinitely and freezing the simulation.
The 'b' variable can be used for this check since, luckily, it's essentially a relative-position/relative-velocity dot product. If b > -1e-6, i.e. they're not moving closer fast enough, we return NaN, i.e. they don't collide. You can tweak this value to avoid freezes: smaller values allow closer glancing collisions but increase the chance of a freeze when they happen, such as when a bunch of circles are packed tightly together. Likewise, to avoid Balls moving through each other, we signal an immediate collision if they're already overlapping and moving closer.

Weird Objective-C Mod Behavior for Negative Numbers

So I thought that negative numbers, when mod'ed, should be mapped into positive space... I can't get this to happen in Objective-C.
I expect this:
-1 % 3 = 2
0 % 3 = 0
1 % 3 = 1
2 % 3 = 2
But get this
-1 % 3 = -1
0 % 3 = 0
1 % 3 = 1
2 % 3 = 2
Why is this and is there a workaround?
result = n % 3;
if( result < 0 ) result += 3;
Don't perform extra mod operations as suggested in the other answers. They are very expensive and unnecessary.
In C and Objective-C, the division and modulus operators perform truncation towards zero. a / b is floor(a / b) if a / b > 0, otherwise it is ceiling(a / b) if a / b < 0. It is always the case that a == (a / b) * b + (a % b), unless of course b is 0. As a consequence, positive % positive == positive, positive % negative == positive, negative % positive == negative, and negative % negative == negative (you can work out the logic for all 4 cases, although it's a little tricky).
If n has a limited range, then you can get the result you want simply by adding a known constant multiple of 3 that is greater than the absolute value of the minimum.
For example, if n is limited to -1000..2000, then you can use the expression:
result = (n+1002) % 3;
Make sure the maximum plus your constant will not overflow when summed.
We have a problem of language:
math-er-says: i take this number plus that number mod other-number
code-er-hears: I add two numbers and then divide the result by other-number
code-er-says: what about negative numbers?
math-er-says: WHAT? fields mod other-number don't have a concept of negative numbers?
code-er-says: field what? ...
The math person in this conversation is talking about doing math on a circular number line: if you subtract off the bottom, you wrap around to the top.
The code person is talking about an operator that calculates a remainder.
In this case you want the mathematician's mod operator but only have the remainder operator at your disposal. You can convert the remainder operator into the mathematician's mod operator by checking whether you fell off the bottom each time you subtract.
If you know the result of m % n = r will be negative, just follow it with r = n + r. If you're unsure what will happen, follow that with r = r % n as well.
Edit: To sum up, use r = ( n + ( m % n ) ) % n
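As a small C sketch of that summed-up expression (the helper name is just illustrative):

// Always-non-negative modulo for n > 0, even when m is negative.
int mod_floor(int m, int n) {
    return ((m % n) + n) % n;
}

// mod_floor(-1, 3) == 2, mod_floor(4, 3) == 1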
I would have expected a positive number, as well, but I found this, from ISO/IEC 14882:2003 : Programming languages -- C++, 5.6.4 (found in the Wikipedia article on the modulus operation):
The binary % operator yields the remainder from the division of the first expression by the second. .... If both operands are nonnegative then the remainder is nonnegative; if not, the sign of the remainder is implementation-defined
JavaScript does this, too. I've been caught by it a couple times. Think of it as a reflection around zero rather than a continuation.
Why: because that is the way the mod operator is specified in the C standard (remember that Objective-C is an extension of C). It confuses most people I know (me included) because it is surprising and you just have to remember it.
As to a workaround: I would use uncleo's.
UncleO's answer is probably more robust, but if you want to do it on a single line, and you're certain the negative value will not be more negative than a single iteration of the mod (for example if you're only ever subtracting at most the mod value at any time) you can simplify it to a single expression:
int result = (n + 3) % 3;
Since you're doing the mod anyway, adding 3 to the initial value has no effect unless n is negative (but not less than -3) in which case it causes result to be the expected positive modulus.
There are two choices for the remainder, and the sign depends on the language. ANSI C chooses the sign of the dividend. I would suspect this is why you see Objective-C doing so also. See the wikipedia entry as well.
Not only JavaScript; almost all languages behave this way.
What coneybeare said is correct: when we take a mod we want the remainder, and the remainder is simply what remains after the division, which mathematically should be non-negative.
If you check the number line you can see why.
I ran into the same issue in VB as well, and it forced me to add an extra check: if the result is negative, add the divisor to it.
Instead of a%b
Use: a-b*floor((float)a/(float)b)
You're expecting remainder and are using modulo. In math they are the same thing; in C they are different. GNU C has Rem() and Mod(); Objective-C only has mod(), so you will have to use the code above to simulate the rem function (which is the same as mod in the math world, but not in the programming world, for most languages at least).
Also note you could define an easy to use macro for this.
#define rem(a,b) ((int)((a) - (b) * floor((float)(a) / (float)(b))))
Then you could just use rem(-1,3) in your code and it should work fine.