We have a file with inline assembly for a DSP. Cppcheck thinks there are a load of "variable assigned but not used" lines in the assembly.
Is there any way to tell it to skip checking the inline assembly sections? I couldn't see anything obvious in the manual, and it is a bit tedious to have to suppress each line in turn.
Here's an example of some of the offending lines. It's a context save routine.
inline assembly void save_ctx()
{
asm_begin
.undef global data saved_ctx;
.undef global data p_ctx;
asm_text
...
st XM[p0++], r0;
st XM[p0++], r1;
st XM[p0++], r2;
st XM[p0++], r3;
st XM[p0++], r4;
st XM[p0++], r5;
st XM[p0++], r6;
...
I can turn off the messages with
// cppcheck-suppress unreadVariable
before each line, but it would be better to just tell cppcheck to skip the whole inline assembly section.
Is there any way I can do this, or will we just have to accept lots of repeated comments?
Somewhat counter-intuitive, but thanks to @DavidWohlferd for pointing me in the right direction.
-D__CPPCHECK__ doesn't do the right thing. It tells cppcheck to only check blocks with __CPPCHECK__ or nothing defined, i.e. it completely turns off the combinatorial checking. However, there is a simple but counter-intuitive solution using -U.
Wrap the block with
#define EXCLUDE_CPPCHECK
#ifdef EXCLUDE_CPPCHECK
...
#endif // EXCLUDE_CPPCHECK
Now if you call cppcheck with -UEXCLUDE_CPPCHECK it will skip that block (even though the #define is just before it!) but still do all the other combinations of #define which are used in #if.
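To make that concrete, a minimal sketch (the file name is made up; the asm markers follow the DSP example above):

// save_ctx.c -- hypothetical file name
#define EXCLUDE_CPPCHECK
#ifdef EXCLUDE_CPPCHECK
asm_begin
asm_text
st XM[p0++], r0;
st XM[p0++], r1;
#endif // EXCLUDE_CPPCHECK

cppcheck -UEXCLUDE_CPPCHECK save_ctx.c

The real compiler sees EXCLUDE_CPPCHECK defined and keeps the block, while cppcheck, told to treat it as undefined, skips the whole section.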
Thank you David and Drew.
According to the man page (I didn't try it myself) you can add these command line options:
--suppress=<spec>
Suppress a specific warning. The format of <spec> is: [error id]:[filename]:[line]. The [filename] and [line] are optional. [error id] may be * to suppress all warnings (for a specified file or files). [filename] may contain the wildcard characters * or ?.
--suppressions-list=<file>
Suppress warnings listed in the file. Each suppression is in the format of <spec> above.
I.e. in your case: --suppress=unreadVariable:all_dsp_asm_*.cpp to switch it off completely for those particular files. That is IMO usable, as you can put all the DSP inline asm into separate files, so it will not affect your ordinary cppcheck runs.
Or in the worst case use the suppressions-list file, where you can list particular lines ad absurdum, I guess, to cover the whole inline parts.
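For example, a suppressions file might look like this (the paths and line number are made up):

unreadVariable:dsp/save_ctx.cpp
unreadVariable:dsp/restore_ctx.cpp:42

and is passed with cppcheck --suppressions-list=suppressions.txt src/.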
I don't see how to inline it in the source; the suppression comment looks like it affects only a single line.
Checking a probably more up-to-date version of the manual here, you can also exclude a whole file with -i<filename> (second page).
The options above are on page 11.
What options does Bazel provide for creating new targets, or extending existing ones, that call C/C++ code checkers such as qac, cppcheck, or iwyu?
Do I need to use a genrule or is there some other target rule for that?
Is https://bazel.build/versions/master/docs/be/extra-actions.html my only viable choice here?
In security-critical software industries, such as aviation and automotive, it is very common to use the results of these calls to collect so-called "metric reports".
In these cases, calls to such linters must have outputs that are further processed by the build actions of these metric report collectors, and I cannot find a useful way of reusing Bazel's "extra-actions" for that. Any ideas?
I've written something which uses extra actions to generate a compile_commands.json file used by clang-tidy and other tools, and I'd like to do the same kind of thing for iwyu when I get around to it. I haven't used those other tools, but I assume they fit the same pattern too.
The basic idea is to run an extra action which generates some output for each file (aka C/C++ compilation command), and then find all the output files afterwards (outside of Bazel) and aggregate them. A reasonably complete example is here for reference. Basically, the action listener (written in Python) decodes the extra action proto and extracts the source files, compiler options, etc:
from sys import argv

import extra_actions_base_pb2  # generated from Bazel's extra_actions_base.proto

# Parse the serialized ExtraActionInfo proto that Bazel hands to the listener.
action = extra_actions_base_pb2.ExtraActionInfo()
with open(argv[1], 'rb') as f:
    action.MergeFromString(f.read())

# Pull the C++ compile details out of the proto extension:
# the compiler binary, its flags, and the input/output files.
cpp_compile_info = action.Extensions[extra_actions_base_pb2.CppCompileInfo.cpp_compile_info]
compiler = cpp_compile_info.tool
options = ' '.join(cpp_compile_info.compiler_option)
source = cpp_compile_info.source_file
output = cpp_compile_info.output_file
print('%s %s -c %s -o %s' % (compiler, options, source, output))
If you give the extra action an output template, then it can write that output to a file. If you give the output files distinctive names, you can find them all in the output tree and merge them together however you want.
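For reference, the BUILD wiring for such an extra action looks roughly like this (all labels and file names here are made up; check the Bazel extra-actions docs for the exact variable expansions):

py_binary(
    name = "dump_compile_command",
    srcs = ["dump_compile_command.py"],
)

extra_action(
    name = "dump_compile_command_action",
    cmd = "$(location :dump_compile_command) $(EXTRA_ACTION_FILE) " +
          "$(output $(ACTION_ID).compile_command.txt)",
    out_templates = ["$(ACTION_ID).compile_command.txt"],
    tools = [":dump_compile_command"],
)

action_listener(
    name = "compile_command_listener",
    mnemonics = ["CppCompile"],
    extra_actions = [":dump_compile_command_action"],
)

You then enable it with something like bazel build --experimental_action_listener=//tools:compile_command_listener //your:target.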
A more sophisticated option is to use bazel query --output=proto and write code to calculate the extra action output filenames of the targets you're interested in from there. That requires writing more code, but you don't have problems with old output files in the output tree that are accidentally included when aggregating.
FWIW, Aspects are another possibility. However, I think extra actions work acceptably for this.
I'm having some preprocessing mishap while compiling a 3rd party library with g++.
I can see in -E output that a certain header wrapped with #ifndef SYMBOL is being bypassed. Apparently, that symbol has been defined somewhere else.
But I cannot see where because processed directives are not present in the -E output.
Is there a way to include them (as comments, probably)?
No, there is no standard way to get preprocessed directives as comments.
However, you could use g++ -C -E and rely on the line markers (output in lines starting with #) together with the comments, which -C copies through to the preprocessed output.
And you might also use the -H option to get the list of included files.
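For example (main.cpp is a placeholder for your own translation unit):

g++ -C -E -H main.cpp > main.ii

-H prints each header to stderr as it is opened, with nested includes indented by dots, while main.ii keeps the comments (thanks to -C) and the # line markers for navigation.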
The closest thing I found is the -d<chars> family of options:
-dM dumps all the macros that are defined
-dD shows where they are defined (dumps #define directives)
-dU shows where they are used (in place of #if(n)def, it outputs #define or #undef depending on whether the macro was defined)
Adding I to any of these also dumps #include directives.
The downside is that only one of the three can be used at a time; also, -dM suppresses the normal preprocessed output entirely, while -dD and -dU interleave the directives with it.
Another, less understandable downside is that -dD and -dU do not include predefined macros.
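Still, for tracking down where SYMBOL got defined, something along these lines should work (file and macro names are placeholders for your own):

// probe.cpp -- hypothetical stand-in for the problematic translation unit
#include "third_party.h"   // suspected of defining SYMBOL somewhere

#ifndef SYMBOL
#include "bypassed.h"      // the header that mysteriously gets skipped
#endif

Then:

g++ -E -dD probe.cpp > probe.ii
grep -n 'define SYMBOL' probe.ii

and scan upward from the match to the nearest # <line> "<file>" marker to see which header the definition came from.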
I am working on a project where our verification test scripts need to locate symbol addresses within the build of software being tested. This might be used for setting breakpoints or reading static data from memory. What I am after is to create a map file containing symbol names, base address in memory, and size. Our build outputs an ELF file which has the information I want. I've been trying to use the readelf, nm, and objdump tools to try and to gain the symbol addresses I need.
I originally tried readelf -s file.elf and that found some symbols, particularly those which were written in assembler. However, many of the symbols that I wanted were not in there, specifically those that originated within our Ada code.
I used readelf --debug-dump file.elf to dump all debug information. From that I do see all the symbols, including those that were in the Ada code; however, the output is in the DWARF format. Does anyone know why these symbols are not output by readelf when I ask it to list the symbolic information? Perhaps there is simply an option I am missing.
Now I could go to the trouble of writing a custom DWARF parser to get the information, but if I can get it using one of the binutils (nm, readelf, objdump) then I'd really prefer a standard solution.
DWARF is the debug information and tries to reflect the structure of the original source code. Take the following code as an example:
static int one() {
    // something
    return 1;
}

int main(int ac, char **av) {
    return one();
}
After you compile it with gcc -O3 -g, the static function one will be inlined into main, so when you use readelf -s you will never see the symbol one. However, when you use readelf --debug-dump, you can see that one is a function which has been inlined.
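For instance (assuming the snippet above is saved as example.c; exact output varies by toolchain):

gcc -O3 -g example.c -o example
readelf -s example | grep one            # nothing: "one" has no symbol table entry
readelf --debug-dump=info example        # ...but the DWARF tree still describes "one" as inlined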
Note that the compiler does not prohibit combining optimization with -g, so you can still debug the executable: even though the function has been optimized away and inlined, gdb can still use the DWARF information to recover the function and the source/line of the current code block inside the inlined function.
The above is just one case of compiler optimization. There can be plenty of reasons for mismatched symbol addresses between readelf -s and the DWARF information.
Noob build question.
When I change this;
#define NOTIFICATION_PLAYBACK_STATE_CHANGED @"SC_NOTIFICATION_PLAYBACK_STATE_CHANGED"
to this;
NSString * const NOTIFICATION_PLAYBACK_STATE_CHANGED = @"SC_NOTIFICATION_PLAYBACK_STATE_CHANGED";
I get this:
ld: 752 duplicate symbols for architecture armv7
clang: error: linker command failed with exit code 1 (use -v to see invocation)
Sample of the 752 duplicates:
duplicate symbol _NOTIFICATION_PLAYBACK_STATE_CHANGED in:
/Users/myname/Library/Developer/Xcode/DerivedData/MyApp-hazegevzmypmbtbnalpiwebrhaea/Build/Intermediates/MyApp.build/Debug-iphoneos/MyApp.build/Objects-normal/armv7/SCRemoteRecordManager.o
/Users/myname/Library/Developer/Xcode/DerivedData/MyApp-hazegevzmypmbtbnalpiwebrhaea/Build/Intermediates/MyApp.build/Debug-iphoneos/MyApp.build/Objects-normal/armv7/SCRegisterAcceptTermsViewController.o
duplicate symbol _NOTIFICATION_PLAYBACK_STATE_CHANGED in:
/Users/myname/Library/Developer/Xcode/DerivedData/MyApp-hazegevzmypmbtbnalpiwebrhaea/Build/Intermediates/MyApp.build/Debug-iphoneos/MyApp.build/Objects-normal/armv7/SCRemoteRecordManager.o
/Users/myname/Library/Developer/Xcode/DerivedData/MyApp-hazegevzmypmbtbnalpiwebrhaea/Build/Intermediates/MyApp.build/Debug-iphoneos/MyApp.build/Objects-normal/armv7/SCStreamingVideoViewController.o
(A search for this particular duplicate symbol returns nothing outside of the class's own .h and .m files.)
There are many other places in the code where I have replaced such a #define with a constant without objections during the build.
Can someone take a guess at what's happening here (or advise me what information I would need to post for a guess to be possible)?
Is trawling through the code replacing #defines where they have been used to create constants (leaving stuff like debug/release defs untouched) a dumb thing to do, i.e. should I be doing this differently (if at all)?
You seem to have these constants defined in a header file. The header file is imported into multiple other files; the definition is thus repeated across all those files. Multiple definitions using the same name are not allowed.
What you want to do instead is to declare the constant in the header:
extern NSString * const NOTIFICATION_PLAYBACK_STATE_CHANGED;
extern indicates to the compiler "this is a name I'm going to use, but the storage and definition for it is elsewhere; let the linker handle that".
Then, in a file that imports the header, but is not itself imported anywhere, you define the string:
NSString * const NOTIFICATION_PLAYBACK_STATE_CHANGED = @"SC_NOTIFICATION_PLAYBACK_STATE_CHANGED";
The linker will find this definition, and all the copies of the extern declarations, and tie them together to be the same thing.
(It may interest you to see what errors you get if you omit each of these pieces in turn. You'll get a compiler error in one case, and a linker error in the other.)
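Putting the two pieces together, with hypothetical file names:

// SCNotificationNames.h -- imported by every file that uses the constant
extern NSString * const NOTIFICATION_PLAYBACK_STATE_CHANGED;

// SCNotificationNames.m -- compiled once, imported nowhere
#import "SCNotificationNames.h"
NSString * const NOTIFICATION_PLAYBACK_STATE_CHANGED = @"SC_NOTIFICATION_PLAYBACK_STATE_CHANGED";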
I have a program which performs a useful task. Now I want to produce the plain-text source code when the compiled executable runs, in addition to performing the original task. This is not a quine, but is probably related.
This capability would be useful in general, but my specific program is written in Fortran 90 and uses Mako Templates. When compiled it has access to the original source code files, but I want to be able to ensure that the source exists when a user runs the executable.
Is this possible to accomplish?
Here is an example of a simple Fortran 90 which does a simple task.
program exampl
implicit none
write(*,*) 'this is my useful output'
end program exampl
Can this program be modified such that it performs the same task (outputs a string when run) and also outputs a Fortran 90 text file containing its own source?
Thanks in advance
It's been so long since I have touched Fortran (and I've never dealt with Fortran 90) that I'm not certain, but I see a basic approach that should work as long as the language supports string literals in the code.
Include your entire program inside itself as a block of string literals. Obviously you can't include the literals within that block itself; instead you need some sort of token that tells your program where to emit the block of literals.
Obviously this means you have two copies of the source, one inside the other. As this is ugly, I wouldn't do it that way, but rather store your source with the include_me token in it and run it through a program that builds the nested file before you compile it. Note that this program will share a decent amount of code with the routine that recreates the code from the block of literals. If you're going to go this route, I would also make the program spit out its own source, so whoever is trying to modify the files doesn't need to deal with the two copies.
My original program (see question) is edited to add an include statement.
Call this file exampl.f90:
program exampl
   implicit none
   write(*,*) "this is my useful output"
   ! reproduce our own source: the generated include file is a list of
   ! write(2,*) statements, one per line of exampl.f90
   open(unit=2,file="exampl_out.f90")
   include "exampl_source.f90"
   close(2)
end program exampl
Then another program (written in Python in this case) reads that source and generates exampl_source.f90:
import os

# read exampl.f90 and turn each line into:  write(2,*) '<line>'
with open('exampl.f90') as f, open('exampl_source.f90', 'w') as g:
    for line in f:
        g.write("write(2,*) '" + line.rstrip() + "'\n")

# then compile exampl.f90 (which includes exampl_source.f90)
os.system('gfortran exampl.f90')
os.system('/bin/rm exampl_source.f90')
Running this Python script produces an executable. When the executable is run, it performs the original task AND writes out its own source code.
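As a quick check (the Python script name is hypothetical):

python embed_source.py    # writes exampl_source.f90, compiles, cleans up
./a.out                   # prints the useful output and writes exampl_out.f90
cat exampl_out.f90        # a reproduction of the source, modulo the leading
                          # blank that list-directed output adds to each line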