I would like to write Verilog that can be synthesized either using Yosys (preferable) or the Lattice Radiant toolchain using Synplify (needed for encrypted IP from Lattice, for example).
Most of the hard cells like the PLL have different names between the two tools.
Is there a Verilog library that allows one to choose either synthesis tool with a single `define, for example?
Unfortunately not, the open source iCE40 flow was developed before Radiant existed, so it uses the original iCEcube primitive library (which is still the only option for devices pre-UltraPlus). For reference, this is documented at http://www.latticesemi.com/~/media/LatticeSemi/Documents/TechnicalBriefs/SBTICETechnologyLibrary201608.pdf - imo it is Lattice who are at fault for failing to provide backwards compatibility with their own library...
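If you end up rolling your own compatibility shim, a minimal sketch might look like this. SB_PLL40_CORE and its parameters/ports are from the iCEcube library documented above; the SYNTH_RADIANT branch and its contents are placeholders you would need to fill in from Radiant's own primitive library guide:

// Hypothetical wrapper: one `define selects the primitive library.
module pll_wrap (
    input  ref_clk,
    output clk_out,
    output locked
);
`ifdef SYNTH_RADIANT
    // Placeholder: instantiate the Radiant PLL primitive here, with the
    // name and ports taken from the Radiant FPGA libraries reference.
`else
    // iCEcube/yosys primitive (see the PDF linked above)
    SB_PLL40_CORE #(
        .FEEDBACK_PATH ("SIMPLE"),
        .DIVR          (4'b0000),
        .DIVF          (7'b0111111),
        .DIVQ          (3'b100),
        .FILTER_RANGE  (3'b001)
    ) u_pll (
        .REFERENCECLK (ref_clk),
        .PLLOUTCORE   (clk_out),
        .LOCK         (locked),
        .RESETB       (1'b1),
        .BYPASS       (1'b0)
    );
`endif
endmodule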
It's failed my google-fu, so I'm asking here.
For a whack-a-doodle reason, I want to link a specific object inside a library to my target.
For example, I want to link foo.o within bar.a to foobar.so.
Is there some syntax in CMake that makes this possible?
Edit: OK, a bit more about my problem.
We're making a modular signal processing system with various 'levels' of implementation:
Python reference model
C/C++ floating point
C/C++ fixed point
C/C++ DSP optimized version
A separate .a file gets made for each C/C++ implementation. They all support a fixed point interface and a floating point interface, though each only implements one of the interfaces and does a translate/trampoline to the other.
In other words, the floating point library contains a floating point implementation of the algorithm plus a fixed point entry point that translates all the input to float before calling the float-based API.
A DSP optimized implementation implements the fixed point entry points, and provides a float 'trampoline' to convert from float to fixed before calling the actual implementation.
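For instance, the float library's two entry points might look like this (a sketch; all names and the Q15 scaling are mine):

// module_float.a: native float algorithm plus a fixed point trampoline
#include <cstdint>
#include <vector>

void process_f32(std::vector<float>& x);      // the real implementation

void process_q15(std::vector<int16_t>& x) {   // trampoline entry point
    std::vector<float> tmp(x.size());
    for (size_t i = 0; i < x.size(); ++i)
        tmp[i] = x[i] / 32768.0f;             // Q15 -> float
    process_f32(tmp);
    for (size_t i = 0; i < x.size(); ++i)
        x[i] = static_cast<int16_t>(tmp[i] * 32767.0f);  // float -> Q15
}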
All of this stuff is for allowing us to mix/match implementations so we can start with the ideal floating point model and piecemeal develop an optimized DSP version. It all works dandy in a C/C++ project where you just link the desired implementation of the module and it 'just works'.
The kicker is our initial model is python and we want to be able to call into the C/C++ code from python with pybind11 bindings.
The link time of pybind11 shared objects is really slow and produces really big objects (even with the recommended settings I'm ending up with 3 MB DLLs), and we're going to have a lot of modules, so I'm looking for a way to cut down on the number of .so files we make by combining the fixed and float entry points in the same .so, by saying something like:
module_pybind.so = module_fixed.a(fixed.o) + module_float.a(float.o)
and module_fixed.a(float.o) and module_float.a(fixed.o) don't get included because they're just trampoline functions.
I know, it's all a bit whacky and convoluted and I'm torturing things in ways that are way outside the norm, but I'm hoping this might work.
If not, I can play more tricks with trampolines and have a pybind specific entry point that's only there for the implemented model.
A .a file is an archive of .o files. You can unpack the archive with the ar x library.a command, with proper dependencies handled via add_custom_command + add_custom_target. Then just add_library(... SHARED the_unpacked_object.o).
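A sketch for the question's module_pybind case (target and member names are taken from the question; paths and the bindings source are assumptions to adapt):

# extract only fixed.o from module_fixed.a
set(FIXED_OBJ ${CMAKE_CURRENT_BINARY_DIR}/fixed.o)
add_custom_command(
  OUTPUT ${FIXED_OBJ}
  COMMAND ${CMAKE_AR} x $<TARGET_FILE:module_fixed> fixed.o
  WORKING_DIRECTORY ${CMAKE_CURRENT_BINARY_DIR}
  DEPENDS module_fixed
  COMMENT "Unpacking fixed.o from module_fixed.a")
add_custom_target(unpack_fixed DEPENDS ${FIXED_OBJ})

# treat the extracted file as a prebuilt object and link it into the .so
set_source_files_properties(${FIXED_OBJ} PROPERTIES
  EXTERNAL_OBJECT TRUE GENERATED TRUE)
add_library(module_pybind SHARED bindings.cpp ${FIXED_OBJ})
add_dependencies(module_pybind unpack_fixed)

Repeat the same pattern for float.o out of module_float.a.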
I am combining two designs into a single chip design. The RTL code is written in SystemVerilog for synthesis. Unfortunately, the two designs contain a number of modules with identical names but slightly different logic.
Is there a namespace or library capability in SystemVerilog that would allow me to specify different modules with the same name? In other words is there a lib1::module1, lib2::module1 syntax I could use to specify which module I want? How is this sort of module namespace pollution best handled?
Thanks
Look into config and library. See IEEE Std 1800-2017 § 33, "Configuring the contents of a design".
library maps source files to target libraries based on file paths (IEEE Std 1800-2017 § 33.3, "Libraries").
config maps which library to use for a particular module (globally, per instance, or per subscope) (IEEE Std 1800-2017 § 33.4, "Configurations").
Examples are provided in § 33.8.
Note: some simulators want -libmap <configfile> on the command line. Refer to your simulator's manual.
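A minimal sketch along the lines of the § 33.8 examples (file paths, library names, and instance names here are mine):

// lib.map -- passed via -libmap; maps files to libraries by path (§ 33.3)
library lib1 "design_a/*.sv";
library lib2 "design_b/*.sv";

// choose which library's module1 each scope sees (§ 33.4)
config top_cfg;
    design work.top;
    default liblist work lib1;      // most of the tree resolves from lib1
    instance top.u_b liblist lib2;  // this subtree gets lib2's modules
endconfig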
Unfortunately, neither Verilog nor SystemVerilog provides a comprehensive solution to the namespace problem for design elements (which include modules). V2K libraries and config statements (yes, they were introduced in Verilog-2001) can partially solve this issue, for modules only, and only if you plan for it in advance and use the correct methodology to implement it. Not many people try to use V2K libs to solve it.
There are other facets of this problem which you might discover: other design elements, macro names, file names, package names, and so on. SystemVerilog makes it even worse by introducing global scopes.
So, depending on the complexity of your design, you might be able to fix it with V2K libs. But in general, the solution always lies in the methodology and in having those names uniquified upfront. Some companies even try on-the-fly uniquification, automatically rewriting Verilog code to make those names unique.
You might also be able to solve some of these issues using compilation units, as defined in the SystemVerilog standard and implemented by at least the major tool vendors.
On the official GObject website, we can read:
GObject, and its lower-level type system, GType, are used by GTK+ and most GNOME libraries to provide:
object-oriented C-based APIs and
automatic transparent API bindings to other compiled or interpreted languages
The first part seems clear to me but not the second one.
Indeed, when talking about GObject and bindings, the concept usually introduced is gobject-introspection, but as far as I understand, gobject-introspection can be used to create .gir and .typelib files for any documented C library, not only for GObject-based libraries.
Therefore I wonder what makes GObject particularly binding-friendly.
as far as I understand, gobject-introspection can be used to create .gir and .typelib files for any documented C library, not only for GObject-based libraries.
That's not really true in practice. You can do some very basic stuff, but you have to write the GIR by hand (instead of just running a program which scans the source code). The only ones I'm aware of are those distributed with gobject-introspection (the *.gir files; the *.c files there exist to avoid cyclical dependencies), and even those generally cover only a fairly small subset of the C API.
As for other features, almost everything in GObject helps… the basic idea is that bindings often need RTTI. There are types like GValue (a simple box to store a value + type information), GClosure (for callbacks), properties and signals describe themselves with GTypes, etc. If you use GObjects (instead of creating a new fundamental type) you get run-time data about inheritance and interfaces, and GObject's odd construction scheme even allows other languages to subclass types declared in C.
The reason g-ir-scanner can't really do much on non-GObject libraries is that all that information is missing. After scanning the source code looking for annotations, g-ir-scanner will actually load the compiled module and use GObject's API to grab this information (which makes cross-compiling painful). In other words, GObject-Introspection is a much smaller project than you think… a huge percentage of the data it needs it gets from the GObject API.
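For a feel of the run-time data involved, here is a small sketch using only standard GObject calls (the dump function itself is mine; the GType passed in must be a classed GObject type):

/* print the inheritance and property metadata a binding would query */
#include <glib-object.h>

static void dump_type_info(GType type) {
    g_print("type: %s, parent: %s\n",
            g_type_name(type), g_type_name(g_type_parent(type)));

    GObjectClass *klass = G_OBJECT_CLASS(g_type_class_ref(type));
    guint n = 0;
    GParamSpec **props = g_object_class_list_properties(klass, &n);
    for (guint i = 0; i < n; i++)
        g_print("  property: %s (%s)\n",
                props[i]->name, g_type_name(props[i]->value_type));
    g_free(props);
    g_type_class_unref(klass);
}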
I am new to Java, and I wanted to know: what is the need to create the .class file in Java?
Can't we just pass the source code to every machine, so that each machine can compile it according to its OS and hardware?
I believe it's mostly for efficiency reasons.
From Wikipedia (http://en.wikipedia.org/wiki/Bytecode):
Bytecode, also known as p-code (portable code), is a form of instruction set designed for efficient execution by a software interpreter. Unlike human-readable source code, bytecodes are compact numeric codes, constants, and references (normally numeric addresses) which encode the result of parsing and semantic analysis of things like type, scope, and nesting depths of program objects. They therefore allow much better performance than direct interpretation of source code.
(my emphasis)
And, as others have mentioned, it provides a possible weak obfuscation of the source code.
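To make the "compile once, run anywhere" point concrete (file and class names are mine):

// Hello.java -- compiled once with: javac Hello.java
// The resulting Hello.class runs unchanged on any OS with a JVM:
//     java Hello
// Inspect the compact bytecode the quote describes with: javap -c Hello
public class Hello {
    public static void main(String[] args) {
        System.out.println("Hello from precompiled bytecode");
    }
}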
The main reason for the compilation is that the virtual machines which host Java classes and run them only understand bytecode.
And since compiling a class each time into the language the virtual machine understands would be expensive, the source code is compiled into bytecode once, up front.
There are also compilers which compile source code directly into machine code, but that's a different story which I don't know much about.
To support multiple platforms in C/C++, one would use the preprocessor to enable conditional compiles. E.g.,
#ifdef _WIN32
#include <windows.h>
#endif
How can you do this in Ada? Does Ada have a preprocessor?
The answer to your question is no, Ada does not have a preprocessor that is built into the language. That means each compiler may or may not have one, and there is no "uniform" syntax for preprocessing and things like conditional compilation. This was intentional: it's considered "harmful" to the Ada ethos.
There are almost always ways around the lack of a preprocessor, but often the solution can be a little cumbersome. For example, you can declare the platform-specific subprograms as 'separate' and then use build tools to compile the correct one (either a project system, pragma body replacement, or a very simple directory system: put all the Windows files in /windows/ and all the Linux files in /linux/, and include the appropriate directory for the platform).
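As a sketch of that 'separate' approach (all names here are made up): declare the body as a subunit, keep one subunit per platform directory, and let the build pick the directory:

--  platform.ads
package Platform is
   procedure Sleep (Milliseconds : Natural);
end Platform;

--  platform.adb (common): the OS-specific part is a subunit
package body Platform is
   procedure Sleep (Milliseconds : Natural) is separate;
end Platform;

--  windows/platform-sleep.adb (with a linux/ twin of the same profile)
separate (Platform)
procedure Sleep (Milliseconds : Natural) is
begin
   null;  --  call the Win32 sleep routine here
end Sleep;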
All that being said, GNAT realized that sometimes you need a preprocessor and created gnatprep. It should work regardless of the compiler (but you will need to insert it into your build process). Similarly, for simple things (like conditional compilation) you can probably just use the C preprocessor, or even roll your own very simple one.
AdaCore provides the gnatprep preprocessor, which is specialized for Ada. They state that gnatprep "does not depend on any special GNAT features", so it sounds as though it should work with non-GNAT Ada compilers. Their User Guide also provides some conditional compilation advice.
I have been on a project where m4 was used as well, with the Ada spec and body files suffixed as ".m4s" and ".m4b", respectively.
My preference is really to avoid preprocessing altogether, and just use specialized bodies, setting up CM and the build process to manage them.
No, but the C preprocessor or m4 can be called on any file from the command line, or by using a build tool like make or ant. I suggest calling your .ada file something else. I have done this for some time with Java files: I call the Java file .m4 and use a make rule to create the .java, then build it in the normal way.
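The rule is roughly (the pattern and the m4 define are mine):

# generate Foo.java from Foo.m4 before the normal compile step
%.java: %.m4
	m4 -DPLATFORM=linux $< > $@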
I hope that helps.
Yes, it has.
If you are using the GNAT compiler, you can use gnatprep for the preprocessing, or if you use GNAT Programming Studio you can configure your project file to define conditional compilation switches like
#if SOMESWITCH then
-- Your code here is executed only if the switch SOMESWITCH is active in your build configuration
#end if;
In this case you can use gnatmake or gprbuild so you don't have to run gnatprep by hand.
That's very useful, for example, when you need to compile the same code for several different OS's using even different cross-compilers.
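For instance, a project file can hand the symbol to GNAT's integrated preprocessing via the -gnateD switch (project and symbol names here are mine):

--  demo.gpr: SOMESWITCH is defined for every Ada compilation
project Demo is
   package Compiler is
      for Default_Switches ("Ada") use ("-gnateDSOMESWITCH=True");
   end Compiler;
end Demo;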
Some old Ada 83-era compilers have a package called a.app that utilized a #-prefixed subset of Ada (interpreted at build time) as a preprocessing language for generating Ada (to be then translated to machine code at compile time). Rational's Verdix Ada Development System (VADS) appears to be the progenitor of a.app among several Ada compilers. Sun Microsystems, for example, derived its Ada SPARCompiler from VADS and thus also had a.app. This is not unlike the use of PL/I as the preprocessor of PL/I, which IBM did.
Chapter 2 of this document shows what a.app looks like: http://dlc.sun.com/pdf/802-3641/802-3641.pdf
No, it does not.
If you really want one, there are ways to get one (use C's, use a stand-alone preprocessor, etc.). However, I'd argue against it. It was a purposeful design decision not to have one. The whole idea of a preprocessor is very un-Ada.
Most of what C's preprocessor is used for can be accomplished in Ada in other more reliable ways. The only major exception is in making minor changes to a source file for cross-platform support. Given how much this gets abused in a typical cross-platform C program, I'm still happy there's no support for it in Ada. Very few C/C++ developers can control themselves enough to keep the changes "minor". The result may work, but is often nearly impossible for a human to read.
The typical Ada way to accomplish this would be to put the different code in different files and use your build system to somehow choose between them at compile time. Make is plenty powerful enough to help you do this.
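A sketch of that (the directory layout and names are mine): keep one body per platform directory and point the compiler's source search path at the right one.

# pick the specialized bodies at build time
PLATFORM ?= linux
main:
	gnatmake -aI$(PLATFORM) -aIcommon main.adb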