I have a top-level Verilog file that instantiates multiple modules defined in different files. If I put all these files in one directory and run the read_verilog command only on the top file, will the files containing the other modules be read by this command in the correct order?
If your Verilog files are named after the modules they contain:
read_verilog top.v
hierarchy -top top -libdir .
otherwise, you will have to read in all the modules:
read_verilog *.v
hierarchy -top top
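For illustration, a complete minimal flow, assuming the file names match the module names (top.v defines module top, alu.v defines alu, regfile.v defines regfile; these names are just examples):
# read only the top-level file
read_verilog top.v
# hierarchy now loads the missing modules alu and regfile from alu.v and regfile.v
hierarchy -top top -libdir .
# continue with the usual flow
synth -top top
write_verilog netlist.v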
Let's assume I have a script that generates a set of source files forming a target I want to link against in a CMakeLists.txt. If the file names are known to that CMakeLists.txt, then the usual add_custom_target() and add_custom_command() commands make it possible to use the generated files as target sources.
Let's assume, though, that only the generator script knows the file names and locations. How can a target library be generated so that the parent CMakeLists.txt can link against it without its knowing the actual file names?
Note that the dependency topic isn't in this question's scope as the script knows itself when to regenerate or not. It's not the finest use of CMake, but it's sufficient in this use case.
Idea #1
The script also generates a generated.cmake file that the parent one pulls in with include(generated.cmake). Problem: CMake can't find generated.cmake because it doesn't exist at configuration time.
Idea #2
Similar to idea #1, but the script is called with execute_process() so that generated.cmake is present at configuration time. Problem: the script is no longer called on subsequent builds, so possible changes to its input are ignored.
Idea #3
The script passes back a list of targets and files that is somehow considered by the parent CMakeLists.txt. So far I couldn't find a way to do so.
The solution I eventually came up with is a mixture of all three ideas.
Solution to idea #1's problem
execute_process() actually ensures that generated_targets.cmake is present at configure time.
Solution to idea #2's and #3's problems
As stated in this answer to "Add dependency to the CMake-generated build-system itself", the CMAKE_CONFIGURE_DEPENDS directory property can be edited to add files whose touching re-triggers the configure step.
The key success factor is that this property can be set after the initial execute_process() call: the script identifies and lists its input dependencies in an output file, and those are then added to CMAKE_CONFIGURE_DEPENDS, which also solves the input-dependency problem.
Resulting pseudo code
# The script generates:
# - <output_dir>/cmake/input_files
# - <output_dir>/cmake/generated_targets.cmake
execute_process(
COMMAND myScript
--output-dir ${CMAKE_CURRENT_BINARY_DIR}/generated
)
# Mark the input files as configure-step dependencies so that the configure step
# (and with it execute_process) is re-run whenever an input file changes.
file(STRINGS ${CMAKE_CURRENT_BINARY_DIR}/generated/cmake/input_files _input_files)
set_property(
DIRECTORY APPEND PROPERTY CMAKE_CONFIGURE_DEPENDS
${_input_files}
)
# Add the generated CMake targets.
include(${CMAKE_CURRENT_BINARY_DIR}/generated/cmake/generated_targets.cmake)
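For completeness, the generated generated_targets.cmake could then look roughly like this (a hedged sketch; the library name and source files are invented, since only the script knows the real ones):
# <output_dir>/cmake/generated_targets.cmake (illustrative content)
add_library(generated_lib STATIC
    ${CMAKE_CURRENT_BINARY_DIR}/generated/src/foo.cpp
    ${CMAKE_CURRENT_BINARY_DIR}/generated/src/bar.cpp
)
target_include_directories(generated_lib PUBLIC
    ${CMAKE_CURRENT_BINARY_DIR}/generated/include
)
The parent CMakeLists.txt can then link with target_link_libraries(myapp PRIVATE generated_lib) without ever knowing the generated file names.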
Is there some way to pass parameters (or command line arguments) to a Yosys script?
I see in this question (Can we have variables in a Yosys script?) that you can run the Yosys script within a TCL interpreter. Is there some way to pass in an argument?
The reason I am doing this is that I have a script, and I want to be able to call the script with a parameterized path to a Verilog file. Surely this is a common need, and there must be some easy way to do this, but I'm not seeing it.
The only way to do that at the moment is using environment variables and TCL scripts. For example, you can write a TCL script test.tcl:
yosys read_verilog $::env(VLOG_FILE_NAME)
yosys synth -top $::env(TOP_MODULE)
yosys write_verilog output.v
And then call it with VLOG_FILE_NAME and TOP_MODULE set in the environment:
VLOG_FILE_NAME=tests/simple/fiedler-cooley.v TOP_MODULE=up3down5 yosys test.tcl
If you are running Yosys from a shell script, you can also simply run something like export VLOG_FILE_NAME=... at the top of your script. Similarly, you can use the export statement when you are running Yosys from a Makefile.
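For example, a Makefile along those lines might look like this (a minimal sketch, reusing the file and module names from the example above):
export VLOG_FILE_NAME = tests/simple/fiedler-cooley.v
export TOP_MODULE = up3down5

output.v: $(VLOG_FILE_NAME) test.tcl
	yosys test.tcl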
I was facing a similar case. This question showed up while I was working on a solution. I ended up with a different approach though:
I'm creating a wrapper around my top module, written in the m4 language. It's very simple: it overrides the parameter values and then includes my top module definition.
Then in the Makefile, I process the wrapper.m4 file to create the resulting wrapper.v file, which becomes the input to yosys.
I have detailed the solution here.
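As a rough idea of the approach (a hypothetical sketch; wrapper.m4, top.v, the WRAPPER_WIDTH macro and the DATA_WIDTH parameter are invented names, the linked write-up has the real details):
dnl wrapper.m4 -- expanded by m4 into wrapper.v
dnl WRAPPER_WIDTH is set on the m4 command line, e.g. m4 -DWRAPPER_WIDTH=16
module top_wrapper (input clk, input rst);
  top #(.DATA_WIDTH(WRAPPER_WIDTH)) u_top (.clk(clk), .rst(rst));
endmodule
include(`top.v')dnl
with a corresponding Makefile rule:
wrapper.v: wrapper.m4 top.v
	m4 -DWRAPPER_WIDTH=16 wrapper.m4 > wrapper.v
One caveat: Verilog makes heavy use of the backtick, which is also m4's default opening quote character, so a changequote at the top of the wrapper may be needed in practice.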
Currently, Spoon's output directory structure follows the package path declared in each *.java file. In practice, there are many files, even *.java files, whose real file paths differ from their package paths.
As a result, my Spoon output folder ended up disordered.
In short, the answer to this question is: no.
Spoon uses standard Java organization to process output files, meaning: each Java file is output in its package hierarchy as it should be done for source files (see: https://docs.oracle.com/javase/tutorial/java/package/managingfiles.html).
However, if your problem is related to files created because of inner classes, you could solve it by using the following option with the value "compilationunits":
[--output-type ]
States how to print the processed source code:
nooutput|classes|compilationunits (default: classes)
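For instance, when running Spoon from the command line, something along these lines should work (a hedged sketch; the jar name and the input/output paths are placeholders):
java -classpath spoon-core-jar-with-dependencies.jar spoon.Launcher \
  -i src/main/java \
  -o spooned \
  --output-type compilationunits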
Finally, if it's really an issue for you, don't hesitate to propose a new feature through a pull request on the GitHub repository!
Currently I am having problems with a makefile due to some unexpected recursion and the necessary collection of file names. I want to call a Makefile in the root folder of my project, and that one should recursively go through every possible subfolder (and their subfolders...) with the goal of collecting all files and writing them to a variable to be used as targets or dependent files.
For example: /Makefile goes through /Source, /Source/Boot and finds /Source/Boot/Boot.s (so one target is /Source/Boot/Boot.o), then it goes on to /Source/Kernel and finds /Source/Kernel/Foo.c (so a second target is /Source/Kernel/Foo.o). I can compile these files in the Makefiles in the subfolders, but I need to link them when my root Makefile returns to the root.
So the question is: how can I adequately pass the paths to these object files to the root Makefile so it can link them?
Recursively called makefiles can't pass info back to their caller (unless you resort to a hack, like using external files to collect the object file names).
Have a look at the paper Mark linked to. It shows a way of organising your project to do what you want, in a maintainable way.
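If you want to avoid recursion altogether, a single top-level Makefile can collect the object list itself, so it is available for the final link. A minimal sketch, assuming the layout from the question and an invented kernel.elf link target:
ASM_SRCS := $(shell find Source -name '*.s')
C_SRCS   := $(shell find Source -name '*.c')
OBJS     := $(ASM_SRCS:.s=.o) $(C_SRCS:.c=.o)

# the root Makefile now knows every object file and can link them directly
kernel.elf: $(OBJS)
	$(LD) -o $@ $^

%.o: %.c
	$(CC) $(CFLAGS) -c -o $@ $<

%.o: %.s
	$(AS) -o $@ $<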
There are struct definitions in the .h file that my library creates after I build it, but I cannot find these in the corresponding .h.in. Can somebody tell me how all this works and where it gets the extra information from?
To be specific: I am building pth, the userspace threading library. It has pth_p.h.in, which doesn't contain the struct definition I am looking for, yet when I build the library, a pth_p.h appears and it has the definition I need.
In fact, I have searched every single file in the library before it is built and cannot find where it is generating the struct definition.
Pth uses GNU Autoconf, Automake, and Libtool. By running ./configure you'll be running a shell script which eventually runs m4 to detect the presence of a whole bunch of different system attributes and make changes to a number of files.
It looks like it boils down to ./configure generating Makefile from Makefile.in and then running something via make that triggers the shtool subcommand scpp:
pth_p.h: $(S)pth_p.h.in
$(SHTOOL) scpp -o pth_p.h -t $(S)pth_p.h.in -Dcpp -Cintern -M '==#==' $(HSRCS)
Obscure link, but here's the shtool-scpp manpage, which describes it as:
This command is an additional ANSI C source file pre-processor for sharing cpp(1) code segments, internal variables and internal functions. The intention for this comes from writing libraries in ANSI C. Here a common shared internal header file is usually used for sharing information between the library source files.
The operation is to parse special constructs in files, generate a few things out of these constructs and insert them at position mark in tfile by writing the output to ofile. Additionally the files are never touched or modified. Instead the constructs are removed later by the cpp(1) phase of the build process. The only prerequisite is that every file has a ``#include "ofile"'' at the top.
The .h.in file is probably processed by the configure script (generated from configure.ac); look for
AC_CONFIG_FILES([thatfile.h])
It replaces variables of the form @VAR@ in the .in file with their values.
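As a toy illustration of that mechanism (a hedged sketch; the project and variable names are invented, pth's own setup is more involved), a configure.ac containing
AC_INIT([mylib], [1.0])
AC_SUBST([MYLIB_API_VERSION], [3])
AC_CONFIG_FILES([mylib_config.h])
AC_OUTPUT
turns a mylib_config.h.in line such as
#define MYLIB_API_VERSION @MYLIB_API_VERSION@
into #define MYLIB_API_VERSION 3 in the generated mylib_config.h when ./configure runs.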
Edit: I just noticed that, if I'm right, you should retag your question.