I want to make a VHDL divider - variables

I'm using a Spartan-3 FPGA,
and I want to calculate 'result', which is given by the formula below.
The 'result' should be returned as an integer, so I declared all the variables as integers, but it doesn't work.
result <= ((a*b*7894*7)/(w*temp_constant));
I've declared a, b, w and temp_constant as variables:
variable a : integer range 0 to 99;
variable b : integer range 0 to 9999;
variable w : integer range 0 to 200;
variable temp_constant : integer range 0 to 99;
but the operator '/' doesn't work in synthesis. The error message was:
"Operator '/' must have constant operands or first operand must be power of 2"

The error message is almost (see the note below) 100% clear: divisions are not supported by your synthesis tool, except with constant operands (the result is computed by the synthesizer in the constant propagation phase) or with divisors that are powers of 2 (the division is a simple right shift).
One possible reason for this limitation of your synthesis tool is that there are many ways to compute integer divisions in hardware, and typing just / in VHDL code is not enough to choose among them. There may be other reasons.
In your case, where the operands are not constants and the divisor is not a power of 2, you must design this divider yourself at a lower level. If you have no idea about hardware implementations of integer dividers, you will have to search a bit. This is a very classical topic, so it should be easy to find good resources. Just a hint: pre-computing all inverses in a fixed-point representation, storing them in a read-only memory and using multiplications instead of divisions is an option.
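To make that last hint concrete, here is a small Python model of the arithmetic (not synthesizable VHDL): every possible divisor value gets a pre-computed reciprocal round(2**K / d) stored in a "ROM", and the division becomes one multiplication plus a right shift. The scaling factor K, the ROM size and the sample operand values are assumptions chosen only to illustrate the idea; the result is approximate, and making it bit-exact needs a wider K and careful rounding.

K = 24                                                     # fixed-point scaling factor (an assumption to tune)
# "ROM" of pre-computed reciprocals for every legal divisor (w*temp_constant < 200*100, never 0)
recip_rom = {d: round((1 << K) / d) for d in range(1, 200 * 100)}

def approx_div(x, d):
    # One multiply plus one right shift instead of a divider
    return (x * recip_rom[d]) >> K

x = 99 * 9999 * 7894 * 7                                   # largest value of a*b*7894*7
d = 150 * 42                                               # a sample w*temp_constant
print(approx_div(x, d), x // d)                            # close, but not bit-exact with this K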
Note: I find the error message you got (first operand must be power of 2) a bit surprising. Unless the term first operand is supposed to designate the divisor, which is not that common, it is probably a bug and the correct error message should be: second operand must be power of 2. Or, even better: divisor must be power of 2.

Related

Kotlin: Why do these two implementations of log base 10 give different results on specific inputs?

println(log(it.toDouble(), 10.0).toInt()+1) // n1
println(log10(it.toDouble()).toInt() + 1) // n2
I had to count the "length" of a number in base n (for reasons unrelated to this question) and stumbled upon a bug (or rather unexpected behaviour): for it == 1000 these two functions give different results.
n1(1000) = 3,
n2(1000) = 4.
Checking values before conversion to int resulted in:
n1_double(1000) = 3.9999999999999996,
n2_double(1000) = 4.0
I understand that some floating-point arithmetic magic is involved, but what is especially weird to me is that for 100, 10000 and other inputs that I checked, n1 == n2.
What is special about it == 1000? How do I ensure that log gives me the intended result (4, not 3.99..)? Right now I can't even figure out which cases I need to double-check, since it is not just powers of 10, it is 1000 (and probably some other numbers) specifically.
I looked into implementation of log() and log10() and log is implemented as
if (base <= 0.0 || base == 1.0) return Double.NaN
return nativeMath.log(x) / nativeMath.log(base) //log() here is a natural logarithm
while log10 is implemented as
return nativeMath.log10(x)
I suspect this division in the first case is the reason of an error, but I can't figure out why it causes an error only in specific cases.
I also found this question:
Python math.log and math.log10 giving different results
But I already know that one is more precise than the other. However, there is no analogue of log10 for an arbitrary base n, so I'm curious about the reason WHY it is specifically 1000 that goes wrong.
PS: I understand there are methods of calculating the length of a number without floating-point arithmetic and base-n logs, but at this point it is scientific curiosity.
but I can't figure out why it causes an error only in specific cases.
return nativeMath.log(x) / nativeMath.log(base)
//log() here is a natural logarithm
Consider x = 1000 and nativeMath.log(x). The natural logarithm is not exactly representable. It is near
6.90775527898213_681... (Double answer)
6.90775527898213_705... (closer answer)
Consider base = 10 and nativeMath.log(base). The natural logarithm is not exactly representable. It is near
2.302585092994045_901... (Double)
2.302585092994045_684... (closer answer)
The only exactly correct nativeMath.log(x) for a finite x is when x == 1.0.
The quotient of the division of 6.90775527898213681... / 2.302585092994045901... is not exactly representable. It is near 2.9999999999999995559...
The conversion of the quotient to text is not exact.
So we have 4 computation errors with the system giving us a close (rounded) result instead at each step.
Sometimes these rounding errors cancel out in a way we find acceptable and the value of "3.0" is reported. Sometimes not.
Performed with higher-precision math, it is easy to see that log(1000) came out below the higher-precision answer while log(10) came out above it. These two round-off errors in opposite directions for a / pushed the quotient further off (low), by 1 ULP more than hoped.
When log(x, 10) is computed for another x that is a power of 10, and log(x) comes out slightly above the higher-precision answer instead, I'd expect the quotient to end up with a 1-ULP error less often. Perhaps it will be 50/50 over all powers of 10.
log10(x) is designed to compute the logarithm in a different fashion, exploiting the fact that the base is 10.0, and it is certainly exact for powers of 10.
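The same effect is easy to reproduce outside Kotlin. Here is a small Python sketch (Python's math.log and math.log10 are the functions the linked question compares), showing the divided form landing 1 ULP low for x = 1000 on a typical IEEE-754 double:

import math

x = 1000.0
via_division = math.log(x) / math.log(10.0)   # what Kotlin's log(x, 10.0) does internally
direct       = math.log10(x)                  # what Kotlin's log10(x) does

print(via_division)                           # 2.9999999999999996 on a typical IEEE-754 platform
print(direct)                                 # 3.0
print(int(via_division) + 1, int(direct) + 1) # 3 vs 4, matching n1 and n2 in the question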

When must/should I declare a variable with the range keyword in VHDL?

I am new to VHDL and have perhaps a basic question, but here goes:
When declaring a variable, say an integer, what is the benefit of
variable count_baud : integer range 0 to clk_freq/baud_rate - 1 := 0;
vs.
variable count_baud : integer := 0;
Is the point of using range (only) to limit the size of the synthesized real estate in the CPLD/FPGA?
There are two very good reasons:
Debugging. If you know that your integer shall stay in the [min..max] range, tell the simulator with a proper range declaration. If there is a bug in your code and you try to assign an out-of-range value, the simulator will let you know with a very useful message, whereas if you just declared an integer, the error could surface long after the bogus assignment.
Synthesis quality. A logic synthesizer will, by default, allocate 32 bits for an integer. Depending on the surrounding logic it may discover that fewer bits are sufficient... or not. So telling the synthesizer what the real range is frequently saves hardware and power and increases the final performance (speed), especially if the real range can be represented with far fewer than 32 bits.
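To put a number on that: the width needed for a bounded natural range is just ceil(log2(max + 1)) bits. A quick back-of-the-envelope check in Python (the 50 MHz clock and 115200 baud figures are made-up example values, not from the question):

import math

def bits_for(max_value):
    # Minimum number of bits to represent the range 0 .. max_value
    return max(1, math.ceil(math.log2(max_value + 1)))

clk_freq, baud_rate = 50_000_000, 115_200
print(bits_for(clk_freq // baud_rate - 1))   # 9 -- with the range declaration
# versus the 32 bits the synthesizer reserves for an unconstrained integer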

Why is BigFloat.to_s not precise enough?

I am not sure if this is a bug, but I've been playing with big and I can't understand why this code works this way:
https://carc.in/#/r/2w96
Code
require "big"
x = BigInt.new(1<<30) * (1<<30) * (1<<30)
puts "BigInt: #{x}"
x = BigFloat.new(1<<30) * (1<<30) * (1<<30)
puts "BigFloat: #{x}"
puts "BigInt from BigFloat: #{x.to_big_i}"
Output
BigInt: 1237940039285380274899124224
BigFloat: 1237940039285380274900000000
BigInt from BigFloat: 1237940039285380274899124224
First I thought that BigFloat required changing BigFloat.default_precision to work with bigger numbers. But from this code it looks like it only matters when outputting the #to_s value.
The same code with the BigFloat precision set to 1024 (https://carc.in/#/r/2w98):
Output
BigInt: 1237940039285380274899124224
BigFloat: 1237940039285380274899124224
BigInt from BigFloat: 1237940039285380274899124224
BigFloat.to_s uses LibGMP.mpf_get_str(nil, out expptr, 10, 0, self), where the GMP documentation says:
mpf_get_str (char *str, mp_exp_t *expptr, int base, size_t n_digits, const mpf_t op)
Convert op to a string of digits in base base. The base argument may vary from 2 to 62 or from -2 to -36. Up to n_digits digits will be generated. Trailing zeros are not returned. No more digits than can be accurately represented by op are ever generated. If n_digits is 0 then that accurate maximum number of digits are generated.
Thanks.
In GMP (it applies to all languages not just Crystal), integers (C mpz_t, Crystal BigInt) and floats (C mpf_t, Crystal BigFloat) have separate default precision.
Also, note that using an explicit precision is better than setting a default one, because the default precision might not be reentrant (it depends on a configure-time switch). Also, if someone reads only a part of your code, they may skip the part with setting the default precision and assume a wrong one. Although I do not know the Crystal binding well, I assume that such functionality is exposed somewhere.
The zero n_digits parameter passed to mpf_get_str means the number of digits is derived from the precision. The number of significant digits is proportional to, and close to, precision / log2(10). Floating-point numbers have finite precision. In this case, it was not the mpf_get_str call that made the last digits zero - it was the internal representation that did not keep that data. It looks like your (default) precision is too small to store all the necessary digits.
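As a rough check of that rule of thumb, here is a tiny Python sketch (ballpark figures, not GMP's exact digit counts):

import math

# Approximate decimal digits carried by `prec` bits of binary mantissa:
# digits ≈ prec / log2(10) = prec * log10(2)
for prec in (53, 64, 128, 1024):
    print(prec, "bits ->", math.floor(prec * math.log10(2)), "decimal digits (approx.)")

The value in the question has 28 significant digits, which lines up with the default precision rounding it after roughly 20 of them, while 1024 bits covers all 28 with room to spare.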
To summarize, there are two solutions:
Set a global default precision. Although this approach will work, it requires either changing the default precision frequently or using a single precision for the whole program. Either way, the approach with the default precision is a form of procrastination which is going to have its vengeance later.
Set the precision on a per-variable basis. This is a better solution than the former. Although it requires more code (1-2 extra lines per variable initialization), it is going to pay off later. For example, in a space-object tracking system, the physics calculations have to be super precise, but other subsystems could use lower-precision numbers for speed and memory savings.
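Crystal aside, the same per-computation idea looks like this in Python's decimal module (only an analogue: decimal is base 10 while BigFloat is binary, and the precision of 40 digits is an arbitrary choice that comfortably covers the 28-digit result):

from decimal import Decimal, localcontext

with localcontext() as ctx:
    ctx.prec = 40                            # enough digits for this particular computation
    x = Decimal(1 << 30) * (1 << 30) * (1 << 30)
    print(x)                                 # 1237940039285380274899124224, nothing lost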
I am still unsure what made the conversion BigFloat --> BigInt yield the missing digits.

What does 'Implicit conversion loses integer precision: 'time_t'' mean in Objective C and how do I fix it?

I'm doing an exercise from a textbook, and the book is outdated, so I'm sort of figuring out how it fits into the new system as I go along. I've typed in the exact text, and it's returning:
'Implicit conversion loses integer precision: 'time_t' (aka 'long') to 'unsigned int''.
The book is "Cocoa Programming for Mac OS X" by Aaron Hillegass, third edition and the code is:
#import "Foo.h"
#implementation Foo
-(IBAction)generate:(id)sender
{
// Generate a number between 1 and 100 inclusive
int generated;
generated = (random() % 100) + 1;
NSLog(#"generated = %d", generated);
// Ask the text field to change what it is displaying
[textField setIntValue:generated];
}
- (IBAction)seed:(id)sender
{
// Seed the randm number generator with time
srandom(time(NULL));
[textField setStringValue:#"Generator Seeded"];
}
#end
It's on the srandom(time(NULL)); line.
If I replace time with time_t, it comes up with another error message:
Unexpected type name 'time_t': unexpected expression.
I don't have a clue what either of them mean. A question I read with the same error was apparently something to do with 64- and 32- bit integers but, heh, I don't know what that means either. Or how to fix it.
I don't have a clue what either of them mean. A question I read with the same error was apparently something to do with 64- and 32- bit integers but, heh, I don't know what that means either. Or how to fix it.
Well you really need to do some more reading so you understand what these things mean, but here are a few pointers.
When you (as in a human) count, you normally use decimal numbers. In decimal you have 10 digits, 0 through 9. If you think of a counter, like on an electric meter or a car odometer, it has a fixed number of digits. So you might have a counter which can read from 000000 to 999999; this is a six-digit counter.
A computer represents numbers in binary, which has two digits, 0 and 1. A Binary digIT is called a BIT. So, thinking about the counter example above, a 32-bit number has 32 binary digits and a 64-bit one has 64 binary digits.
Now if you have a 64-bit number and chop off the top 32 bits you may change its value - if the value was just 1 then it will still be 1, but if it takes more than 32 bits to represent then the result will be a different number - just as truncating the decimal 9001 to 01 changes the value.
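You can model exactly this with a quick sketch (Python here just to show the arithmetic; since Python integers are unbounded, the mask plays the role of the implicit conversion):

# Keep only the low 32 bits, which is what the implicit long -> unsigned int conversion does
big = 5_000_000_000                 # needs more than 32 bits
print(big & 0xFFFFFFFF)             # 705032704 -- a different number after truncation
print(1 & 0xFFFFFFFF)               # 1 -- small values pass through unchanged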
Your error:
Implicit conversion loses integer precision: 'time_t' (aka 'long') to 'unsigned int'
is saying that you are doing just this: truncating a large number - long is a 64-bit signed integer type on your computer (not on every computer) - to a smaller one - unsigned int is a 32-bit unsigned (no negative values) integer type on your computer.
In your case the loss of precision doesn't really matter as you are using the number in the statement:
srandom(time(NULL));
This line is setting the "seed" - a value used to make sure each run of your program gets a different sequence of random numbers. It is using the current time as the seed, and truncating it won't make any difference here - it still varies from run to run. You can silence the warning by making the conversion explicit with a cast:
srandom((unsigned int)time(NULL));
But remember, if the value of an expression is important such casts can produce mathematically incorrect results unless the value is known to be in range of the target type.
Now go read some more!
HTH
It's just a notification: you are assigning a 'long' to an 'unsigned int'.
The solution is simple. Just click the yellow notification icon in the left gutter of the particular line where you are assigning that value; it will show a suggested fix. Double-click the fix and it will be applied automatically.
It will insert a cast so the types match. But next time try to keep in mind that the types you are assigning should be the same. Hope this helps.

How do different programming languages handle division by 0?

Perhaps this is the wrong sort of question to ask here but I am curious. I know that many languages will simply explode and fail when asked to divide by 0, but are there any programming languages that can intelligently handle this impossible sum - and if so, what do they do? Do they keep processing, treating 350/0 as 350, or stop execution, or what?
The little-known Java programming language gives the special constant Double.POSITIVE_INFINITY or Double.NEGATIVE_INFINITY (depending on the numerator) when you divide by zero in an IEEE floating-point context. Integer division by zero, on the other hand, results in an ArithmeticException being thrown, which is quite different from your scenario of "explosion and failure".
The INTERCAL standard library returns #0 on divide by zero
From Wikipedia:
The infinities of the extended real number line can be represented in IEEE floating point datatypes, just like ordinary floating point values like 1, 1.5 etc. They are not error values in any way, though they are often (but not always, as it depends on the rounding) used as replacement values when there is an overflow. Upon a divide by zero exception, a positive or negative infinity is returned as an exact result.
In Java, division by zero in a floating-point context produces the special value Double.POSITIVE_INFINITY or Double.NEGATIVE_INFINITY.
I'd be surprised if any language returns 350 when you do 350/0. Just two examples: Java throws an exception that can be caught, and C/C++ just crash (I think it raises a signal that can probably be caught).
In Delphi, it either throws a compile-time error (if you divide by a zero-valued constant) or a catchable runtime error if it happens at runtime.
It's the same for C and C++.
In PHP you will get a warning:
Warning: Division by zero in
<file.php> on line X
So, in PHP, for something like:
$i = 123 / 0;
$i will be set to nothing. BUT $i is not === NULL and isset($i) returns true and is_string($i) returns false.
Python (at least version 2, I don't have 3) throws a ZeroDivisionError, which can be caught.
num = 42
try:
    for divisor in (1, 0):
        ans = num / divisor
        print ans
except ZeroDivisionError:
    print "Trying to divide by 0!"
prints out:
42
Trying to divide by 0!
Most SQL implementations raise a "division by zero" error, but MySQL just returns NULL
Floating-point numbers, as defined by IEEE 754, have special constants such as NaN; any further operation involving such a value propagates it unchanged to the end of the computation. Integers (whole numbers) are different, with exceptions being thrown... in Java, at least...
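To illustrate the propagation point (sketched in Python; note that plain Python raises ZeroDivisionError even for float division by zero, so the NaN is constructed directly here):

import math

x = math.nan
print(x + 1, x * 0, math.sqrt(x))   # nan nan nan -- NaN propagates through further operations
print(x == x)                       # False -- NaN compares unequal even to itself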
In Pony, division by 0 is 0, but I have yet to find a language where 0/0 is 1.
I'm working with polyhedra and trying to choose a language that is happy with inf.
The total number of edges for a polyhedron {a,b}, where a is the number of edges per polygon and b is the number of edges per corner, is
E = 1/(1/a + 1/b - 1/2)
If E is negative the curvature is negative, but if E is infinite (1/0) the figure tiles the plane. Examples: {3,6}, {4,4}.
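For what it's worth, one way to sidestep the 1/0 entirely is to compute the denominator exactly as a rational and special-case zero; a small Python sketch of the formula above (the sample symbols are just illustrative):

from fractions import Fraction
import math

def edges(a, b):
    # E = 1 / (1/a + 1/b - 1/2) for the polyhedron {a, b}
    denom = Fraction(1, a) + Fraction(1, b) - Fraction(1, 2)
    if denom == 0:
        return math.inf              # {a, b} tiles the plane
    return 1 / denom                 # negative => negative curvature

print(edges(4, 3))   # 12  -- the cube
print(edges(3, 6))   # inf -- triangular tiling of the plane
print(edges(3, 7))   # -42 -- negative curvature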