I am trying to force the value of a net through vpi_put_value (using the C interface of the VPI), but the simulation doesn't keep the forced value. During simulation it re-evaluates the net and I see in GTKWave a different value than the one I forced.
I need a method to force the value for specific times (a range of simulation time) that does not depend on the simulator (CVC, Icarus, etc.).
Is this achievable?
Use the vpiForceFlag flag to force the value through VPI, and release it with vpiReleaseFlag.
vpi_put_value(sys, &return_val, NULL, vpiForceFlag);
Refer to the documentation from any Verilog/SystemVerilog LRM:
IEEE Std 1364-1995 § 23.23 vpi_put_value()
IEEE Std 1364-2001 § 27.32 vpi_put_value()
IEEE Std 1364-2005 § 27.32 vpi_put_value()
IEEE Std 1800-2012 § 38.34 vpi_put_value()
You can get the same effect with the Verilog keywords force and release
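To restrict the force to a specific window of simulation time without depending on the simulator, you can schedule both the force and the release from cbAfterDelay callbacks, which are part of the same VPI standard. Below is a minimal sketch assuming a net at the hierarchical path top.dut.my_net and a force window from time 100 to time 200; the path, times, and value are placeholders, and error checking is omitted.

#include <vpi_user.h>

/* Force top.dut.my_net to 1; the value sticks until an explicit release. */
static PLI_INT32 force_cb(p_cb_data cb)
{
    vpiHandle net = vpi_handle_by_name((PLI_BYTE8 *)"top.dut.my_net", NULL);
    s_vpi_value val;
    val.format = vpiIntVal;
    val.value.integer = 1;
    vpi_put_value(net, &val, NULL, vpiForceFlag);
    return 0;
}

/* Release the net so the simulator drives it normally again. */
static PLI_INT32 release_cb(p_cb_data cb)
{
    vpiHandle net = vpi_handle_by_name((PLI_BYTE8 *)"top.dut.my_net", NULL);
    s_vpi_value val;
    val.format = vpiIntVal;
    vpi_put_value(net, &val, NULL, vpiReleaseFlag);
    return 0;
}

/* Register a one-shot callback that fires 'delay' time units from now. */
static void schedule(PLI_INT32 (*rtn)(p_cb_data), PLI_UINT32 delay)
{
    s_vpi_time t  = { vpiSimTime, 0, delay, 0.0 };   /* type, high, low, real */
    s_cb_data  cb = { cbAfterDelay, rtn, NULL, &t, NULL, 0, NULL };
    vpi_register_cb(&cb);
}

/* Call this from a start-of-simulation callback or a registered system task. */
static void setup_force_window(void)
{
    schedule(force_cb, 100);    /* apply the force at t = 100 */
    schedule(release_cb, 200);  /* release it at t = 200 */
}

The key point is that the forced value survives until vpiReleaseFlag is applied, exactly like the Verilog force/release keywords; if your simulator still overwrites it, its VPI implementation may not fully support vpiForceFlag.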
Looking at the PDF Reference ver 1.7 about how objects of type number
are written according to valid syntax, it informs:
Note: PDF does not support the PostScript syntax for numbers with
nondecimal radices (such as 16#FFFE ) or in exponential format (such
as 6.02E23 ).
However it also does not mandate a maximum range the numbers should be in. This seems to suggest it would be correct to write
1.00E10 as 10000000000
or
1.00E-50 as 0.00000000000000000000000000000000000000000000000001
This question hence has two aspects:
a) is the notation correct (as provided in the examples)?
b) does the PDF format expect implementations to use (or at least fall back
to) some bigint/bigfloat handling of numbers, as it seems not to provide
any range for them?
First of all, for normative information on PDF you should refer to the appropriate ISO standards, in particular ISO 32000. Yes, Part 1 (ISO 32000-1) in particular is derived from the PDF reference 1.7 without that many changes, but not without changes either. (Ok, in some situations one has to consult the old PDF reference, too, to understand some of these changes.)
Adobe has published a copy thereof (with "ISO" in the page headers removed) on its web site: https://www.adobe.com/content/dam/acom/en/devnet/pdf/pdfs/PDF32000_2008.pdf
Now to your question:
According to ISO 32000, both part 1 and 2:
An integer shall be written as one or more decimal digits optionally preceded by a sign. [...]
A real value shall be written as one or more decimal digits with an optional sign and a leading, trailing, or embedded PERIOD (2Eh) (decimal point).
(section 7.3.3 "Numeric Objects")
Thus, concerning your question a)
is the notation correct (as provided in the examples)?
Yes, 10000000000 is an integer valued numeric object, 0.00000000000000000000000000000000000000000000000001 is a real valued numeric object.
Concerning your question b)
does the PDF format expect implementations to use (or at least fall back to) some bigint/bigfloat handling of numbers, as it seems not to provide any range for them?
No, in the same section as quoted above you also find
The range and precision of numbers may be limited by the internal representations used in the computer on which the conforming reader is running; Annex C gives these limits for typical implementations.
and Annex C recommends at least the following limits:
integer   2,147,483,647     Largest integer value; equal to 2^31 − 1.
integer   -2,147,483,648    Smallest integer value; equal to −2^31.
real      ±3.403 × 10^38    Largest and smallest real values (approximate).
real      ±1.175 × 10^−38   Nonzero real values closest to 0 (approximate). Values closer than these are automatically converted to 0.
real      5                 Number of significant decimal digits of precision in fractional part (approximate).
(ISO 32000-1)
Integers
Integer values (such as object numbers) can often be expressed within 32 bits.
Real numbers
Modern computers often represent and process real numbers using IEEE Standard for Floating-Point Arithmetic (IEEE 754) single or double precision.
(ISO 32000-2)
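So in practice a conforming writer simply emits plain decimal digits and keeps the fractional precision modest. As a rough illustration (not something mandated by the standard), formatting a real value for a PDF file in C could look like the sketch below; the buffer size and the choice of 5 fractional digits are arbitrary:

#include <stdio.h>
#include <string.h>

/* Format v as a PDF real: plain decimal digits, no exponent notation. */
static void pdf_write_real(char *buf, size_t len, double v)
{
    snprintf(buf, len, "%.5f", v);       /* "%f" never produces an exponent */
    char *end = buf + strlen(buf) - 1;
    while (end > buf && *end == '0')     /* trim trailing zeros */
        *end-- = '\0';
    if (end > buf && *end == '.')        /* and a dangling decimal point */
        *end = '\0';
}

int main(void)
{
    char buf[64];
    pdf_write_real(buf, sizeof buf, 1.00E10);   /* -> "10000000000" */
    printf("%s\n", buf);
    pdf_write_real(buf, sizeof buf, 1.00E-50);  /* -> "0": below the Annex C limit,
                                                   so it collapses to zero here too */
    printf("%s\n", buf);
    return 0;
}

Integer values are simpler still, since PDF's integer syntax matches ordinary decimal formatting such as printf("%d", ...).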
I'm using MPFR multiple precision library, and in particular the implementation from here.
Is there any way to compile the code in such a way that all operations are carried out using the standard types (e.g. double)? E.g. a compilation flag that would turn all "software operations" into "hardware operations" normally implemented in standard types?
In practice, the code is slow even when I'm using 64 bits, I profiled that the culprit is the mpfr/gmp, and I would like to measure how much I gain by changing to double (without having to re-write all the code).
This is not possible in the MPFR library for several reasons. First the formats are different. In particular, MPFR has a different exponent range, no subnormals, a single NaN... Moreover it provides correct rounding in 5 rounding modes, while processors only have 4 rounding modes, and for the native types, most operations are not correctly rounded.
You might want to write wrappers, C++ classes or whatever to do what you want, but this is not necessarily interesting as you may get many conversions between both formats.
EDIT: If you don't care about the exact behavior, perhaps what you want is something based on C++ templates. You probably need to look at another C++ MPFR interface such as MPFRCPP or mpfr::real class.
As far as I understand, the implementation you mention (MPFR C++ from Pavel Holoborodko) uses operator overloading to make MPFR calls look like standard C float operations, from the site:
// MPFR C - version
void mpfr_schwefel(mpfr_t y, mpfr_t x)
{
    mpfr_t t;
    mpfr_init(t);
    mpfr_abs(t, x, GMP_RNDN);
    mpfr_sqrt(t, t, GMP_RNDN);
    mpfr_sin(t, t, GMP_RNDN);
    mpfr_mul(t, t, x, GMP_RNDN);
    mpfr_set_str(y, "418.9829", 10, GMP_RNDN);
    mpfr_sub(y, y, t, GMP_RNDN);
    mpfr_clear(t);
}
can be written like this:
// MPFR C++ - version
mpreal mpfr_schwefel(mpreal& x)
{
return "418.9829"-x*sin(sqrt(abs(x)));
}
which is cool by the way, so you just have to make slight changes like replacing "418.9829" by 418.9829, and comment out the MPFR include in your code.
If your code still has remaining mpfr_... calls you can get native double-like behaviour by setting the MPFR precision to 53 bits in variable initialization or using, say, specific functions like mpfr_set_prec, but note that (as another answer points out), results won't be exactly the same:
In particular, with a precision of 53 bits and in any of the four standard rounding modes, MPFR is able to exactly reproduce all computations with double-precision machine floating-point numbers (e.g., double type in C, with a C implementation that rigorously follows Annex F of the ISO C99 standard and FP_CONTRACT pragma set to OFF) on the four arithmetic operations and the square root, except the default exponent range is much wider and subnormal numbers are not implemented (but can be emulated).
This might be just good enough for you to have a rough idea of how much MPFR performance differs from native floats.
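For illustration, limiting MPFR to double-like precision might look like the following sketch (the function names are standard MPFR; the surrounding code is made up):

#include <mpfr.h>

void use_double_like_precision(void)
{
    mpfr_set_default_prec(53);  /* 53 bits = significand of an IEEE 754 double */

    mpfr_t x;
    mpfr_init(x);               /* picks up the 53-bit default precision */
    /* ... computations ... */
    mpfr_clear(x);

    /* existing variables can be switched with mpfr_set_prec(x, 53),
       but that discards their current value */
}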
If that isn't precise enough though, you can place a temporary include into your main file after including MPFR, with defines that override the MPFR functions you use, more or less like so:
typedef double mpfr_t;
#define mpfr_add(a,b,c,r) {a=b+c;}
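Expanding on that idea, a self-contained sketch of such an override header covering the calls used in the schwefel example above could look like this; note that the rounding-mode argument is simply dropped (so GMP_RNDN never even needs to be defined), and results will not match MPFR bit for bit:

/* mpfr_override.h -- rough double-based stand-ins for a few MPFR calls. */
#include <math.h>
#include <stdlib.h>

typedef double mpfr_t;

#define mpfr_init(x)                  ((void)0)   /* no allocation needed for double */
#define mpfr_clear(x)                 ((void)0)
#define mpfr_abs(r, a, rnd)           ((r) = fabs(a))
#define mpfr_sqrt(r, a, rnd)          ((r) = sqrt(a))
#define mpfr_sin(r, a, rnd)           ((r) = sin(a))
#define mpfr_add(r, a, b, rnd)        ((r) = (a) + (b))
#define mpfr_sub(r, a, b, rnd)        ((r) = (a) - (b))
#define mpfr_mul(r, a, b, rnd)        ((r) = (a) * (b))
#define mpfr_set_str(r, s, base, rnd) ((r) = strtod((s), NULL))  /* assumes base 10 */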
Is there a method of using the exponent properties of LabView units for carrying custom units? For example I would find it convenient to use milli-Amperes instead of Amperes in my data wires.
My first attempt at doing so looks like this, but trying to get the value out at the end gives me nothing.
I would find it convenient to use milli-Amperes instead of Amperes in my data wires
For a wire it's not possible, and it's not a problem; here's why:
I'm afraid what you want makes little sense, since your "milli-Amperes instead of Amperes" refers to representing your data, while a wire is just raw data. Adding the milli- prefix to a floating-point number changes the exponent, not the mantissa, so there's no loss or gain of precision in the value that your number carries.
Now if we talk about an indicator which is technically a display of the wire value, you change the unit from "A" to "mA" to have the display you want.
Finally, in your attempt with "set numeric info", the -3 factor added next to Amperes means the unit is A^-3, not mA.
You can use data that doesn't use units, however then you will lose the automatic checking of the units.
For display properties you can tweak the display format to show different outputs:
This format string is constructed as follows:
% numeric
^ engineering notation, exponents in multiples of three
# no trailing zeros
_6 six significant digits
e scientific notation (1e1 for instance)
The prefix is the best way to affect the presentation of the value on a specific front panel.
When passing data from VI to VI, the prefix is not passed, and the data uses the base ( Amps, Volts, etc...)
In my example below, the unitless value 3 is assigned units of Amp in mA.vi. The front panel indicator is set to show units of mA.
In Watts.vi I multiply the Amps OUT of mA.vi by a constant of 9V and the result is wired to the indicator x*y.
x*y has units of W and I changed the prefix to k for presentation.
The NI forums have several threads that report certain functions (square and square root specifically) can cause unit errors or broken wires. Most folks don't even know the units capability exists, and most that do have tried and abandoned them. :)
When using an iPhone Objective-C method that accepts CGFloats, e.g. [UIColor colorWithRed:green:blue:], is it important to append an f to constant arguments to specify them explicitly as floats, e.g. should I always type 0.1f rather than 0.1 in such cases? Or does the compiler automatically cast 0.1 (which is a double in general) to 0.1f (which is a float) at compile time? I don't want these casts to happen at run time because they would unnecessarily hog performance.
Thanks in advance
MrMage
It's not important; it won't break anything to use a double-precision constant where a single-precision constant is expected.
However, if you have turned on the warning about implicit 64-bit-to-32-bit conversions and are building for 32-bit architectures (which I believe includes the iPhone), then you'll want to use single-precision constants simply to avoid getting that warning.
(Alternatively, you could set that setting to explicitly off, with an architecture condition turning it on for 64-bit architectures. But that currently only matters if you're also using some of your code in a Mac application.)
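For what it's worth, here is a tiny plain-C illustration of the same point (setMyFloat is a made-up stand-in for any parameter typed float or CGFloat); the conversion of a constant like 0.1 to float is folded by the compiler, so there is no run-time cost to worry about:

#include <stdio.h>

static void setMyFloat(float f)      /* stand-in for a float/CGFloat parameter */
{
    printf("%.9f\n", f);
}

int main(void)
{
    setMyFloat(0.1f);  /* single-precision literal: no conversion needed */
    setMyFloat(0.1);   /* double literal: converted to float by the compiler;
                          this is what the 64-to-32-bit warning flags */
    return 0;
}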
Possible Duplicate:
Most efficient implementation of a large number class
Suppose I needed to calculate 2^150000. Obviously that number is going to exceed the size of an int, float, or double. How can I make a data type that allows normal math functions but exceeds the basic number types?
If this is a "depends which language you use" kind of deal, I will say C#.
See
Most efficient implementation of a large number class
for some leads.
If C# is not cast in stone, and you want something that just works out of the box, then there are several options. The one I know best is Python, but I think that languages like Scheme and Ruby support large numbers, too.
Python: 2**150000. Prints the result after about 1 second.
If you want free mathematics software, look at Maxima or Sage.
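If dropping down to C (or C++) is ever an option, the GMP library gives much the same out-of-the-box feel; a minimal sketch, assuming GMP is installed and the program is linked with -lgmp:

#include <gmp.h>
#include <stdio.h>

int main(void)
{
    mpz_t n;
    mpz_init(n);
    mpz_ui_pow_ui(n, 2, 150000);            /* n = 2^150000 */
    printf("2^150000 has about %zu decimal digits\n",
           mpz_sizeinbase(n, 10));          /* exact or one more than exact */
    /* gmp_printf("%Zd\n", n);                 uncomment to print all digits */
    mpz_clear(n);
    return 0;
}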
You might also consider using Frink, which is a language with the native capability of dealing with measurement units.
It computes 2^150000 without difficulty, deals with fractions (e.g. 1/3+2/5 --> 11/15), computes 3 meters + 2 inch --> 3.0508 m and is a full programming language.
Frink - Copyright 2000-2008 Alan Eliasen, eliasen#mindspring.com
http://futureboy.us/frinkdocs/
Several languages have built in support for arbitrary large numbers. You could use Mathematica, for example. I tried your example in Mathematica, and the result has 45,155 digits. I tried the same example with bc on a Unix machine. bc supports extended precision, but not that extended; it bombed on the example.
Lisp is your friend. Default biginteger numbers.
I find it very frustrating to use a language without arbitrarily large numbers: it seems nonsensical to be able to use ordinary operators like addition on most numbers, but to have to switch to method calls on a BigInt instance simply because of its size.
A whole bunch of languages have more complete numeric towers, and seamlessly coerce when needed; e.g., Allegro Common Lisp evaluates and prints all 45,155 digits of (expt 2 150000) in 1ms.
cl-user(2): (time (expt 2 150000))
; cpu time (non-gc) 0 msec user, 0 msec system
; cpu time (gc) 0 msec user, 0 msec system
; cpu time (total) 0 msec user, 0 msec system
; real time 1 msec
; space allocation:
; 2 cons cells, 18,784 other bytes, 0 static bytes
There is a product in C called calc which is an arbitrary precision calculator. I used it once when working as a researcher and found it fairly straightforward to use...
http://sourceforge.net/projects/calc/
It can be programmed for difficult or long calculations and can accept arguments from the command line. In interactive mode, it accepts one command at a time, and displays the answer.
Ordinarily the commands are simply expressions such as:
3 * (4 + 1)
and calc will print:
15
Calc does the arithmetic operators +, -, /, * as well as ^ (exponentiation), % (modulus) and // (integer divide).
For example:
3 * 19 ^ 43 - 1
will produce:
29075426613099201338473141505176993450849249622191102976
Calc values can be VERY large. For example:
2 ^ 23209 - 1
will print:
402874115778988778181873329071 ... loads of digits ... 3779264511
Hope this helps...
I don't know C#, but I do know the Ruby programming language has the BigDecimal class, which seems to allow numbers of unlimited size.
Python has a bignum library. If you need to implement a bignum library in another language you can at least use the Python one as reference for validating your work. Note that bignums have a few implementation gotchas that aren't immediately obvious if you don't know what you're looking for.