In the Yosys manual I read
C.108 read – load HDL designs
-sv2005 -sv2009 -sv2012 load the specified Verilog/SystemVerilog files. (Full SystemVerilog support is only available via Verific.)
C.113 read_verilog – read modules from Verilog file
-sv enable support for SystemVerilog features. (only a small subset of SystemVerilog is supported)
Is there a concise spec anywhere about this? If not, a guideline? Which Verilog and which SystemVerilog?
What is Verific?
In "A Free and Open Source Verilog-to-Bitstream Flow for iCE40 FPGAs", Clifford Wolf says that "Verilog is pretty much everything from Verilog 2005". What from Verilog 2005 is not supported? Has anything changed since this late-2015 lecture?
Verific is a reference to a commercial front-end provided by Verific Design Automation. There's a commercial version of Yosys sold as part of the Symbiotic EDA Suite that comes with this Verific front end installed. The Verific front end provides full VHDL+SV support as you see above.
The open source version of Yosys officially supports only Verilog 2005. Unofficially, several SV features have been added to it (enums, typedefs, etc.), and there is beta VHDL support provided by GHDL-synth, which is a work in progress.
It looks like the folks at Antmicro were kind enough to provide a frontend that allows reading SystemVerilog directly. Check the following blog post: https://antmicro.com/blog/2022/02/simplifying-open-source-sv-synthesis-with-the-yosys-uhdm-plugin/
Related
I'm researching a virus and I'm faced with the task of deobfuscating its virtual machine. I chose to do this through LLVM, and I have a question: where can I see a simple example of lifting instructions to the LLVM IR level? For example, where can I look at code that just translates one pop rsp instruction to LLVM IR? I haven't found anything like that.
Maybe someone has articles where this is described, or can someone suggest an example?
Here is a list of similar tools you could try:
McSema relies on IDA Pro to disassemble a binary file and produce a control flow graph. Then it can convert the control flow graph into LLVM IR.
llvm-mctoll is easy to use, but SIMD instructions such as SSE, AVX, and Neon cannot be raised.
retdec is a retargetable machine-code decompiler.
reopt is a general-purpose decompilation and recompilation tool supporting x86-64 Linux programs.
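None of these tools ship a literal one-instruction tutorial that I know of, but the underlying idea is simple enough to sketch. Here is a minimal, purely illustrative C model (not the API of McSema, retdec, or any other tool) of the semantics a lifter encodes for pop rsp: the CPU is represented as an explicit state struct plus a flat memory view, and the LLVM IR such tools emit corresponds roughly to the loads and stores in this function.

    #include <stdint.h>
    #include <string.h>

    /* Hypothetical machine-state model, for illustration only; real lifters
       use much richer structures covering all registers and flags. */
    typedef struct {
        uint64_t rsp;
        /* ... other general-purpose registers, flags, ... */
    } CpuState;

    /* Semantics of `pop rsp`: read 8 bytes at the current top of stack, then
       write that value into RSP itself (the usual +8 adjustment is
       immediately overwritten by the popped value). */
    static void sem_pop_rsp(CpuState *state, uint8_t *memory)
    {
        uint64_t top = state->rsp;                   /* old stack pointer  */
        uint64_t value;
        memcpy(&value, memory + top, sizeof value);  /* load qword [rsp]   */
        state->rsp = value;                          /* destination is RSP */
    }

A lifter emits essentially these loads and stores as LLVM IR instructions against its state structure; once you can read the C form, the generated IR from the tools above becomes much easier to follow.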
I'm trying to use Yosys formal verification capabilities along with Verific parser.
What are the supported capabilities of Yosys with Verific for formal verification, compared to the "read_verilog -formal" command?
For example, a quick compilation of formal code that works with read_verilog gave me an error for "assume property" syntax:
"The sva directive is not sensitive to a clock. Unclocked directives are not supported"
I'm not sure if I should modify the Verific library flags in any way to make it support more capabilities, or it's something that is not supported.
Currently Yosys only has very limited support for SVA with or without Verific. However, we plan on vastly extending the Yosys support for SVA via Verific in the near future. The goal is to provide pretty much complete support for everything Verific can parse.
Regarding the "The sva directive is not sensitive to a clock. Unclocked directives are not supported" error message: That's a Verific error message and I don't think there is a Verific library flag to bypass it. (But I'm not sure.) Technically unclocked properties are not part of the SystemVerilog standard afaik. (The syntax would allow it, but the standard text does not define a semantic for it.)
Yosys supports unclocked SVA properties. (But only trivial expression properties.)
Both Verific and Yosys do support immediate assertions and assumptions. (That is, assertions and assumptions in always blocks.) Right now this is what I recommend for most cases where people write new properties, also because most simulators have better support for immediate assertions (or it would be easier to add if the support is missing so far).
Right now I'd say the biggest advantage of using Verific with Yosys is support for non-SVA SystemVerilog (and VHDL) code. In a few months we will hopefully have support for many more SVA constructs via Verific, but that isn't implemented yet.
Edit/Update: SVA support via Verific is slowly improving now. See this directory for examples that can be processed via Verific. New examples are added as new features are added to the Verific bindings. Currently counter.sv is the most advanced example there.
Can you advise on these points related to AUTOSAR, taking into consideration that I am a software developer who can write software in C?
Say I develop some functionality in C that has to read some ECU-specific data, process it, and update some ECU-specific data (which can be a variable or an I/O signal).
How will I be using the AUTOSAR RTE and the virtual functional bus?
What is their use to a software developer?
Also, AUTOSAR talks about "standardization of interfaces"; what does that mean? Does it mean that if someone else anywhere in the world is developing the same functionality (in C, like me), we will both be using the same API names for those I/O signals?
How will the RTE help me in unit testing? What is the RTE really doing from a software developer's point of view?
http://www.autosar.org/gfx/AUTOSAR_TechnicalOverview_b.jpg
I have read a lot of technical terms, but as a software developer these points are important for me to know. Can you explain them a bit to me?
Your reply will be appreciated.
I don't think it is going to be that easy...
I believe that you are developing Autosar SWC (software component).
I would recommend that you develop a portable C module that has very clear inputs, outputs, and requirements on execution (check AUTOSAR runnables).
Remember that an AUTOSAR ECU includes an RTOS, so your module will be part of an OS task.
When and if you come to the point of building an AUTOSAR ECU, you will be able to wrap the module and connect its inputs/outputs to AUTOSAR virtual functional bus signals. For that you will need an AUTOSAR framework and probably configuration tools. These are complex and expensive.
Unit test the module the usual way you test a C module.
Good luck.
P.S. The RTE is just the "glue" code generated automatically by configuration tools according to the configuration of the ECU BSW and the System Extract for that ECU. You will worry about it during wrapping.
The idea behind dividing functionality into AUTOSAR SWCs and Basic Software is to make application SW development independent of any platform. To answer your questions:
The RTE gives the application a signal-based interface: you expect the other SW components (inter-ECU or intra-ECU) to provide the required data in the form of signals, and you don't care about the platform or the type of communication medium (see the sketch after this list).
Yes; by standardizing the interfaces (all kinds of interactions), a software component or any Basic Software module can be fitted into the SW architecture. Read more about the different types of AUTOSAR interfaces.
Refer to point 1.
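To make point 1 concrete, here is a hedged sketch of what a runnable inside an SWC looks like against the RTE. The SWC name (CoolingControl) and the port/data element names (EngineSpeed, FanRequest) are made up for illustration; the actual Rte_Read_<Port>_<Element> / Rte_Write_<Port>_<Element> prototypes are generated from your ARXML descriptions, so the exact names depend on your configuration.

    /* Hypothetical SWC runnable. Rte_CoolingControl.h would be generated by
       the RTE generator from the software component description (ARXML). */
    #include "Rte_CoolingControl.h"

    void CoolingControl_MainRunnable(void)
    {
        uint16  engineSpeed = 0u;   /* AUTOSAR platform types come with the generated headers */
        boolean fanRequest;

        /* Read an input signal through the generated, standardized RTE API.
           Whether the data comes from another SWC on this ECU or from a bus
           signal sent by another ECU is decided by the configuration, not here. */
        (void)Rte_Read_EngineSpeed_Value(&engineSpeed);

        fanRequest = (engineSpeed > 3000u) ? TRUE : FALSE;

        /* Write an output signal; the RTE routes it to whatever is connected
           to this port in the system description. */
        (void)Rte_Write_FanRequest_Value(fanRequest);
    }

This is also where the RTE helps with unit testing: the runnable can be compiled against a stub Rte_CoolingControl.h that records the calls, with no BSW or hardware present.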
RTE is there as a layer to 'abstract' the inner components of the system. For example, if you need to get access to the system's installed flash memory, you have to use the RTE-related memory functions.
You are correct. You only need to read the specifications and use the corresponding functions to get your desired result in an AUTOSAR system.
The RTE makes sure that the software component developers and the middle-layer developers can work with minimal interaction between them. SWC developers just need to read the AUTOSAR standard and follow it to ensure compatibility with the middle-layer systems, since it is expected that the middle-layer developers follow that same standard in providing functionality on their side. It also helps developers with the portability of their software.
I think all your questions can be answered by reading the AUTOSAR standard documents on the AUTOSAR website; that is where I got most of my (limited) knowledge of AUTOSAR development, having started reading about it close to a month ago.
I am a software developer who developed a console application tool for the AUTOSAR RTE and test case generation for the RTE, and who wrote unit testing scripts for that tool.
I developed these using C# and the NUnit framework. The same can be developed using C, Java, or any other language. The ultimate goal is to generate AUTOSAR modules (.c and .h files) based on the requirements.
1. Software Developer Scope
As a software developer, I had the task of implementing a complete RTE and test applications for the implemented RTE code.
Inputs and Outputs:
Basically, our inputs were software component files and the ECU extract, both in ARXML format, and our outputs were RTE and test application source and header files (.c and .h) created based on the requirements.
Tasks as a developer:
Here, as a developer, we need to perform input parsing from ARXML into our own data structures, schema validation, model validation, file generation, etc.
2. Standardization
Yes, the AUTOSAR architecture provides standardized interfaces. Irrespective of the implementation strategy, the API structure remains the same, which eases usage. It acts as a generalized library: you can use an already developed module, or you can implement the module your own way by following the API specification. All you need to do is follow the specifications provided for every module you use.
Requirements vary from company to company, but the way of using the APIs remains the same.
3. Unit Testing
Unit testing has nothing to do with the RTE or AUTOSAR modules. You will be testing the units of your own code. When I say your code, I mean the code you used to develop a particular module (e.g. Rte.c), not the generated module itself. You will be testing the source code you developed to generate that specific module. Your source code is not part of the RTE or any other module implementation; it is the tool that generates the module implementation.
Overview:
A software developer's scope in generating AUTOSAR modules varies depending on the requirements.
You can develop a tool which will generate AUTOSAR modules.
You can develop an editor that is used to edit/create AUTOSAR XML files (e.g. Artop).
Developing might sound complex as we do not get direct resources other than specifications. Once you are in, you will learn a lot.
To answer your question:
If you go through the layered architecture of AUTOSAR, you will see that this architecture is followed to minimize the dependency of each module (layer) on the lower layers.
Again, the RTE is like a wrapper that separates out the lower-layer dependencies, which makes it possible to work on each layer independently. Most of the virtual buses are mapped through the RTE; in my experience I have worked on the IOC, which can be mapped to the RTE and which communicates with other SWCs across memory and core boundaries. To an OS developer it is the route to the application layer and the mapped software partitions.
The standard is used to maintain uniformity across all software layers; developers may have different implementations and designs to meet the requirements, but the APIs and requirements are universal.
This is useful for standardized interfacing too.
For unit testing of the developer's OS design and implementation, the RTE works as an abstraction module.
Reading the specs for the different modules will resolve most doubts.
Suppose I have some software and I want to make cross-platform plugins. You compile the plugin for a virtual machine, and any platform running my software would be able to run this code.
I am wondering if it is possible to use the LLVM interpreter and bytecode for this purpose. Also, I am wondering whether it makes sense to use LLVM for this instead of something else, i.e. is this what LLVM was made for?
I'm not sure that LLVM was designed for it. However, I doubt there is anything that hasn't been done using LLVM. [1]
Other virtual-machine-based script engines are specifically created for the job:
Lua is very popular
Wikipedia lists some other Extension/embeddable languages under the Scripting language entry
If you're looking for embeddable virtual machines:
IKVM supports embedding JVM and CLR in a bridged mode (interoperable)
Parrot supports embedding (and includes a Python interpreter; mind you, you can just run python bytecode images)
Perl has similar architecture and supports embedding
JavaScript supports embedding (not sure about the architecture of V8, but I guess it would use a virtual machine)
Mono's CLR engine supports embedding: http://www.mono-project.com/Embedding_Mono
[1] Including compiling C++ to JavaScript to run in your browser...
There is VMIR (https://github.com/andoma/vmir) which is a LLVM bitcode interpreter / JIT engine that's intended to be embedded into other apps.
Disclaimer: I'm the author of it, and it's still a work in progress, but it works reasonably well.
In theory, there exists a limited subset of LLVM IR which can be portable across various platforms. You shall not specify alignments, you shall not bitcast pointers to integral types, you must avoid intrinsics, etc. This means you cannot immediately use code generated by a stock C compiler (llvm-gcc, Clang, whatever) unless you specify a limited target for it and implement sanitising LLVM passes. Another issue is that the bitcode format from different LLVM versions is not guaranteed to be compatible.
In practice, I would not go there. Mono is a reasonably small, embeddable, fast VM, and the whole .NET stack of tools is available for it. The VM itself is pretty low-level (as long as you do not care about verifiability).
LLVM includes an interpreter, so if you can build this interpreter for your target platforms, you can then evaluate LLVM bitcode on the fly.
It's apparently not so fast though.
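As a rough sketch of what embedding it could look like (assuming the LLVM-C API of reasonably recent LLVM releases; plugin.bc and plugin_entry are made-up names, and error handling and cleanup are kept minimal), a host application can load a bitcode file and run a function through the interpreter like this:

    /* Minimal host-side sketch using the LLVM-C API. Build/link e.g. with
       `llvm-config --cflags --ldflags --libs core bitreader executionengine interpreter`. */
    #include <llvm-c/Core.h>
    #include <llvm-c/BitReader.h>
    #include <llvm-c/ExecutionEngine.h>
    #include <stdio.h>

    int main(void)
    {
        char *error = NULL;

        LLVMLinkInInterpreter();                 /* force the interpreter library in */
        LLVMContextRef ctx = LLVMContextCreate();

        LLVMMemoryBufferRef buf = NULL;
        if (LLVMCreateMemoryBufferWithContentsOfFile("plugin.bc", &buf, &error)) {
            fprintf(stderr, "cannot read plugin.bc: %s\n", error);
            LLVMDisposeMessage(error);
            return 1;
        }

        LLVMModuleRef module = NULL;
        if (LLVMParseBitcodeInContext2(ctx, buf, &module)) {
            fprintf(stderr, "plugin.bc is not valid bitcode\n");
            return 1;
        }

        LLVMExecutionEngineRef engine = NULL;
        if (LLVMCreateInterpreterForModule(&engine, module, &error)) {
            fprintf(stderr, "cannot create interpreter: %s\n", error);
            LLVMDisposeMessage(error);
            return 1;
        }

        /* Look up the plugin's entry point and call it with no arguments. */
        LLVMValueRef entry = LLVMGetNamedFunction(module, "plugin_entry");
        if (entry != NULL) {
            LLVMGenericValueRef result = LLVMRunFunction(engine, entry, 0, NULL);
            printf("plugin_entry returned %llu\n",
                   (unsigned long long)LLVMGenericValueToInt(result, 0));
        }

        LLVMDisposeExecutionEngine(engine);      /* also frees the module it owns */
        LLVMContextDispose(ctx);
        return 0;
    }

The same ExecutionEngine API can hand you a JIT instead of the interpreter on platforms where that is available, which addresses the speed concern above at the cost of a bigger dependency.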
In their classic discussion about LLVM vs. libJIT (which you do not want to miss if you're a fan of open source, LLVM, or compilers), which happened long before LLVM became famous and established, the author of libJIT, Rhys Weatherley, raised this particular issue: he stated that LLVM is not suitable for embedding, while Chris Lattner, the author of LLVM, stated otherwise: that it is modular and you can use it in any possible fashion, including embedding only the parts you need.
I've been making my way through The Little Schemer and I was wondering what environment, IDE or interpreter would be best to use in order to test any of the Scheme code I jot down for myself.
Racket (formerly Dr Scheme) has a nice editor, several different Scheme dialects, an attempt at visual debugging, lots of libraries, and can run on most platforms. It even has some modes specifically geared around learning the language.
I would highly recommend both Chicken and Gauche for scheme.
PLT Scheme (DrScheme) is one of the best IDEs out there, especially for Scheme. The package you get when downloading it contains all you need for developing Scheme code - libraries, documentation, examples, and so on. Highly recommended.
If you just want to test your scheme code, I would recommend PLT Scheme. It offers a very complete environment, with debugger, help, etc., and works on most platforms.
But if you also want to get an idea of how the interpreter behind the scenes works, and have Visual Studio, I would recommend Tachy. It is a very lightweight scheme interpreter written in c#. It allows you to debug just your scheme code, or also step through the c# interpreter behind the scenes to see what is going on.
Just for the record I have to mention IronScheme.
IronScheme will aim to be a R6RS conforming Scheme implementation based on the Microsoft DLR.
Version 1.0 Beta 1 was just released. I think this should be good implementation for someone that is already using .NET framework.
EDIT
Current version is 1.0 RC 1 from Oct 23 2009
Google for the book's authors (Daniel Friedman and Matthias Felleisen). See whether either of them is involved with a popular, free, existing Scheme implementation.
It doesn't matter, as long as you subscribe to the mailing list (wiki/IRC/online community site) for the associated community. It's probably worth taking a look at the list description and archives to be sure you are in the right one.
Most of these are friendly and welcoming to newcomers, so don't be afraid to ask.
It's also worth searching the archives of their mailing list (or FAQ or whatever they use) when you have a question, just in case it is a frequent question.
Good Luck!
Guile running under Geiser within Emacs provides a nice, lightweight implementation for doing the exercises. Racket will also run under Geiser and Emacs, though I personally prefer Guile and Chez Scheme a bit more.
Obviously installation of each will depend on your OS. I would recommend using Emacs version 24 and later since this allows you to use Melpa or Marmalade to install Geiser and other Emacs extensions.
The current version of Geiser also works quite nicely with Chicken Scheme, Chez Scheme, MIT Scheme and Chibi Scheme.
LispMe works on a Palm Pilot; take it anywhere and Scheme on the go. A GREAT way to learn Scheme.
I've used PLT as mentioned in some of the other posts and it works quite nicely. One that I have read about but have not used is Allegro Common LISP Express. I read a stellar review about their database app called Allegro Cache and found that they are heavy into LISP. Like I said, I don't know if it's any good, but it might be worth a try.
I am currently working through the Little Schemer as well and use Emacs as my environment, along Quack, which adds additional support and utilities for scheme-mode within Emacs.
If you are planning on experimenting with other Lisps (e.g. Common Lisp), Emacs has excellent support for those dialects as well (Emacs itself can be customized with its own dialect of Lisp, appropriately named Emacs Lisp).
As far as Scheme implementations go, I am currently using Petit Chez Scheme, which is an interpreted, freely distributable version of Chez Scheme (which uses a compiler and costs money to obtain a license).