What is the difference between "source" and "code" in programming? - module

Excerpt From: Robert C. Martin. “Clean Architecture: A Craftsman's Guide to Software Structure and Design (Robert C. Martin Series).”
“Now, what do we mean by the word “module”? The simplest definition is just a source file. Most of the time that definition works fine. Some languages and development environments, though, don’t use source files to contain their code. In those cases a module is just a cohesive set of functions and data structures.”
I got confused about "source" and "code" here. What does he mean when he writes "don't use source files to contain their code"?
Thanks for your explanation.

"code" can be any statement or expression. "Source code" and "code" can be used interchangeably.
A file system file (e.g., helloworld.c) can contain any number of statements. However, some development environments do not use files to store their code. Conceptually, the code could live on the web and be stored in a database, or it could exist in memory only, and so not in a file.
In this particular passage, Mr. Martin (also known as Uncle Bob) is trying to illustrate that a module can mean different things depending upon the language or development environment. It is better to concentrate on the last line you quoted: "a module is just a cohesive set of functions and data structures." That is a sufficient definition, without getting confused about the storage location of the code.
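As a rough illustration of that definition: in C, a "module" is usually just one source file that groups a data structure together with the functions that operate on it. The names below are invented purely for the example.

    /* stack.c -- one "module": a cohesive set of data structures and functions */
    #include <stddef.h>

    #define STACK_CAPACITY 16

    typedef struct {
        double items[STACK_CAPACITY];
        size_t count;
    } Stack;

    void stack_init(Stack *s)
    {
        s->count = 0;
    }

    int stack_push(Stack *s, double value)
    {
        if (s->count >= STACK_CAPACITY)
            return 0;                     /* stack is full */
        s->items[s->count++] = value;
        return 1;                         /* pushed successfully */
    }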

Related

SystemVerilog module namespaces

I am combining two designs into a single chip design. The RTL code is written in SystemVerilog for synthesis. Unfortunately, the two designs contain a number of modules with identical names but slightly different logic.
Is there a namespace or library capability in SystemVerilog that would allow me to specify different modules with the same name? In other words is there a lib1::module1, lib2::module1 syntax I could use to specify which module I want? How is this sort of module namespace pollution best handled?
Thanks
Look into config and library. See IEEE Std 1800-2017 § 33, "Configuring the contents of a design".
library maps source files to target libraries based on file paths (IEEE Std 1800-2017 § 33.3, "Libraries").
config selects which library to use for a particular module (globally, per instance, or per subscope) (IEEE Std 1800-2017 § 33.4, "Configurations").
Examples are provided in § 33.8.
Note: some simulators want -libmap <configfile> on the command line. Refer to your simulator's manual.
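A minimal sketch of what that looks like, with made-up library, path, and module names (how the map file is passed to the tool is simulator-specific, as noted above):

    // lib.map -- library map file: assign source files to libraries by file path
    library lib1 designA/*.v;
    library lib2 designB/*.v;

    // configuration: choose which library each instance resolves its cells from
    config cfg_top;
        design lib1.top;
        default liblist lib1;              // by default, use modules from lib1
        instance top.u_b liblist lib2;     // this instance uses lib2's version instead
    endconfig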
Unfortunately, neither Verilog nor SystemVerilog provides a comprehensive solution to the namespace problem for design elements (which include modules). V2K libraries and config statements (yes, they were introduced in Verilog-2001) can partially help you solve this issue, but only for modules, and only if you plan for it in advance and use the correct methodology to implement it. Not many people try to use V2K libraries to solve it.
There are other aspects of this problem as well, which you might discover: other design elements, macro names, file names, package names, and so on. SystemVerilog makes it even worse by introducing global scopes.
So, depending on the complexity of your design, you might be able to fix it with V2K libraries. But in general, the solution always lies in the methodology and in having those names uniquified up front. Some companies even use on-the-fly uniquification, automatically rewriting Verilog code to make the names unique.
You might also be able to solve some of these issues using compilation units, as defined in the SystemVerilog standard and implemented by at least the major tool vendors.

Step-by-step development of an AUTOSAR software component

I'm new to AUTOSAR and already understand the AUTOSAR architecture at a summary level. I have read AUTOSAR_TR_Methodology.pdf as my starting point for developing AUTOSAR software components (SWCs). For additional context: I will get the "system extract" from the main organization and will add my SWCs to it. In that document, the tasks required to develop an SWC are described one by one as a whole process, but not in sequence. So my question is: after I get the system extract, what tasks are required to create the SWCs? It would be great if the tools were mentioned as well.
The system extract usually contains software-components, albeit usually in the form of so-called compositions (in AUTOSAR lingo: CompositionSwComponentType). These compositions come with defined PortPrototypes, which in turn are typed by PortInterfaces.
The task of the designer of an application software-component (technically speaking: an ApplicationSwComponentType) is to connect to the PortPrototypes defined at the composition level and then specify the internal behavior (SwcInternalBehavior) that formally defines the internal structure of the software-component. On this basis, the function of the software-component can be implemented.
A software-component itself consists of the formal specification (serialized in the ARXML format) and the corresponding C code that implements the actual function of the software-component.
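As a hedged sketch of what the C side can look like: the component, port, and data-element names below are invented, and the actual Rte_* prototypes are generated from your ARXML by the RTE generator, so the real names in your project will differ.

    /* SpeedMonitor.c -- a hypothetical ApplicationSwComponentType implementation */
    #include "Rte_SpeedMonitor.h"   /* application header generated for this SWC */

    /* Runnable entity, mapped to a periodic trigger in the SwcInternalBehavior */
    void SpeedMonitor_MainFunction(void)
    {
        uint16 speed = 0u;

        /* Read from a require-port, evaluate, write to a provide-port */
        (void)Rte_Read_VehicleSpeed_Value(&speed);
        if (speed > 120u)
        {
            (void)Rte_Write_OverspeedWarning_Flag(TRUE);
        }
    }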
There are tons of tools out there for developing AUTOSAR software-components. Most of them are commercial and require a license. On top of that, the toolchain to be applied in a given project is in many cases predefined, and you may not be able to select your tools freely.
If you seriously want to dive into AUTOSAR, I'd strongly advise taking a class offered by one of the various tool vendors, preferably a class held by the tool vendor selected for your actual ECU project.

Autodocumentation type functionality for Fortran?

In the past I've used Doxygen for C and C++, but now I've been thrown onto a Fortran project and I would like to get a quick, all-encompassing look at the architecture.
In the past I've found reverse engineering tools to be useful where no documentation of the architecture exists.
So, is there a tool out there that will reverse engineer Fortran code?
I tried to use Doxygen, but didn't have any luck. I will be working with two different projects - one is in Fortran 90 and the other, I think, is in Fortran 77.
Thanks for any insights and feedback.
Tools which may help with reverse engineering:
SciTools Understand
Link with some more tools (search "fortran")
Also, maybe some of these unit testing frameworks will be helpful (I haven't used them, so I cannot comment on the pros and cons of any of them):
FUnit
FRUIT
Ftnunit
(These links point to the Fortran Wiki, where you can find a tidbit on each of them, along with links to their home sites.)
Doxygen 1.6.1 will generate documentation, call graphs, etc. for Fortran source code in free-format (F90) format. You are out of luck for auto-documenting fixed-format (F77) code with doxygen.
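For the free-format case, the Doxyfile settings involved are roughly the following (a minimal sketch; the project name and input path are placeholders):

    # Doxyfile fragment for a free-format Fortran project
    PROJECT_NAME         = MyFortranProject
    INPUT                = src
    FILE_PATTERNS        = *.f90 *.F90
    OPTIMIZE_FOR_FORTRAN = YES    # tune the generated output for Fortran sources
    EXTRACT_ALL          = YES    # document everything, even without doc comments
    HAVE_DOT             = YES    # requires Graphviz
    CALL_GRAPH           = YES    # generate call graphs
    CALLER_GRAPH         = YES    # and caller graphs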
All is not lost, however. The conversion from fixed to free format is straightforward and can be automated to a great degree: change comment characters to '!', change continuation characters to '&', and append '&' to lines to be continued. In fact, if the appended continuation character is placed in column 73, it should be ignored by standard F77 compilers (which still only recognize code in columns 1 through 72) but will be recognized by F9x/F2003/F2008 compilers. This allows the same code to be accepted in both fixed and free format, which lets you gracefully migrate from one format to the other.
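For illustration, here is the conversion on a trivial, made-up fragment (column positions are only approximate in this sketch):

    C     Fixed form (F77): 'C' comment in column 1, continuation mark in column 6
          TOTAL = A(1) + A(2)
         &      + A(3)

    !     Free form (F90): '!' comments, trailing '&' on the line being continued
          TOTAL = A(1) + A(2) &
                + A(3)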
Conveniently, there are about a thousand small programs that will do this format adjustment to some degree or another. Realistically, if you're going to be maintaining the code, you might as well move it away from the 1928 spec for Hollerith (IBM) punched cards. :)

Batch source-code aware spell check

What is a tool or technique that can be used to perform spell checks upon a whole source code base and its associated resource files?
The spell check should be source-code aware, meaning that it would stick to checking string literals in the code and not the code itself. Bonus points if the spell checker understands common resource file formats, for example text files containing name-value pairs (only check the values). Super-bonus points if you can tell it which parts of an XML DTD or Schema should be checked and which should be ignored.
Many IDEs can do this for the file you are currently working with. The difference in what I am looking for is something that can operate upon a whole source code base at once.
Something like a Findbugs or PMD type tool for mis-spellings would be ideal.
As you mentioned, many IDEs have this functionality already, and one such IDE is Eclipse. However, unlike many other IDEs, Eclipse is:
A) open source
B) designed to be programmable
For instance, here's an article on using Eclipse's code formatting functionality from the command line:
http://www.peterfriese.de/formatting-your-code-using-the-eclipse-code-formatter/
In theory, you should be able to do something similar with its spell-checking mechanism. I know this isn't exactly what you're looking for, and if there is a program dedicated to spell-checking code then obviously that would be better, but if not, Eclipse may be the next best thing.
This one seems a little old, but it appears to do a good job:
Source Code Spell Checker

Refactoring disassembled code

You write a function and, looking at the resulting assembly, you see it can be improved.
You would like to keep the function you wrote, for readability, but you would like to substitute your own assembly for the compiler's. Is there any way to establish a relationship between your high-level language function and the new assembly?
If you are looking at the assembly, then it's fair to assume that you have a good understanding of how code gets compiled down. If you have this knowledge, then it's sometimes possible to 'reverse engineer' the changes back up into the original language, but it's often better not to bother.
The optimisations that you make are likely to be very small in comparison to the time and effort required to make these changes in the first place. I would suggest that you leave this kind of work to the compiler and go have a cup of tea. If the changes are significant and the performance is critical (as, say, in the embedded world), then you might want to mix the normal code with assembler in some fashion; however, on most computers and chips the performance is usually sufficient to avoid this headache.
If you really need more performance, then optimise the code not the assembly.
None, I suppose. You've rejected the compiler's work in favor of your own. You might as well throw out the function you wrote in the compiled language, because now all you have is your assembler on that platform.
I would highly advise against engaging in this kind of optimization unless you're sure, via profiling and analysis, that you truly are making a difference.
It depends on the language you wrote your function in. Some languages like C are very low-level, translating each function call or statement to specific assembly statements. If you did use C, you can replace your function with inline assembly to improve performance.
Other high-level languages may convert each statement into macro routines or other more complex calls on the assembly side. Certain optimizations (like tail recursion, loop unrolling, etc) can be implemented easily on the source side, but others (like making more efficient use of the register file) may be impossible (again, depending on the language and the compiler you're using).
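To illustrate the C case mentioned above: with GCC or Clang on x86, a trivial C helper can be swapped for extended inline assembly like this (a minimal sketch; the function is invented for illustration):

    /* increment.c -- replacing a trivial C helper with GCC/Clang inline assembly */
    #include <stdio.h>

    static inline int increment(int value)
    {
        /* "+r": 'value' is both input and output, held in a register */
        __asm__("addl $1, %0" : "+r"(value));
        return value;
    }

    int main(void)
    {
        printf("%d\n", increment(41));   /* prints 42 */
        return 0;
    }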
It's tough to say there is any relationship between the modified assembly and the source which generated the unmodified version. It will certainly confuse debugging tools: register contents will no longer match the source variables they were supposed to correspond to.
There are a number of places in packet processing code where I've examined the generated assembly and gone back to change the original source code in order to improve the result. Re-arranging source can reduce the number of branches, __attribute__ and compiler arguments can align branch points and functions to reduce I$ misses. In desperate cases a little inline assembly can be used, so that the binary can still be compiled from source.
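A hedged example of the kind of source-level hints meant here, using GCC/Clang attributes and builtins (the function and its names are invented):

    #include <stddef.h>

    /* Mark the function hot so the compiler favours it for placement and optimisation */
    __attribute__((hot)) size_t count_nonzero(const unsigned char *p, size_t n)
    {
        size_t hits = 0;
        for (size_t i = 0; i < n; i++) {
            /* Tell the optimiser which way this branch usually goes */
            if (__builtin_expect(p[i] != 0, 1))
                hits++;
        }
        return hits;
    }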
Something you could try is to separate your original function into its own file, and provide a make rule to build the assembler from there. Then update the assembler file with your improved version, and provide a make rule to build an object file from the assembler file. Then change your link rules to include that object file.
If you only ever change the assembler file, that will keep on being used. If you ever change the original higher-level language file, the assembler file will be rebuilt and the object file built from the new (unimproved) version.
This gives you a relationship between the two; you probably want to add a warning comment at the top of the higher-level language file to warn about the behaviour. Using some form of VCS will give you the ability to recover the improved assembler file if you make a mistake here.
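A sketch of those rules in make syntax (file names and flags are hypothetical; the recipe lines must begin with a literal tab character, shown here with extra indentation for readability):

    # Regenerate the assembler file only when the original C source changes
    hot_function.s: hot_function.c
    	$(CC) $(CFLAGS) -S -o $@ $<

    # Build the object file from the (possibly hand-improved) assembler file
    hot_function.o: hot_function.s
    	$(CC) -c -o $@ $<

    app: main.o hot_function.o
    	$(CC) -o $@ $^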
If you're writing a native compiled app in Visual C++, there are two methods:
Use the __asm { } block and write your assembler in there.
Write your functions in MASM assembler, assemble to .obj, and link it as a static library. In your C/C++ code, declare the function with an extern "C" declaration.
Other C/C++ compilers have similar approaches.
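As a quick sketch of the first approach (note that the __asm block is only available in the 32-bit x86 MSVC compiler; x64 builds have to use the separate MASM route), with an invented function:

    /* x86 MSVC only: inline assembler via an __asm block */
    int add_one(int x)
    {
        int result;
        __asm {
            mov eax, x        // load the argument
            add eax, 1        // do the work in assembly
            mov result, eax   // hand the value back to C
        }
        return result;
    }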
In this situation, you generally have two options: optimize the code or rewrite the compiler. I can't see where breaking the link between source and op is ever going to be the correct solution.