Why would a generated gmon.out file contain no data? - gprof

I've compiled a program with the -pg switch and linked it with -pg as well. When my program is executed, a file "gmon.out" is produced. However, after running gprof on it, there is no data other than the standard explanatory text describing each field.
Why would there be nothing in the gmon.out file? The program is obviously compiled and linked correctly, since the new "gmon.out" file is generated; it just has no data.

This is a known problem with recent versions of the GNU C compiler, which build position-independent executables (PIE) by default; with PIE enabled, the -pg instrumentation produces a gmon.out with no usable data.
You can use the -no-pie option as a workaround:
gcc -no-pie
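For example, a minimal end-to-end sketch of the workaround (myprog.c and the output names are placeholders):
gcc -pg -no-pie -o myprog myprog.c      # compile and link with profiling, PIE disabled
./myprog                                # running the program writes gmon.out in the current directory
gprof ./myprog gmon.out > analysis.txt  # gprof should now show a flat profile and call graph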

CMake: Source file depends on source file

I am embedding a source file into another source file using inline assembly and .incbin, which is exactly what I want. I don't want to use the standard objcopy method, which works but is (imho) the lesser approach. xxd is also an option, but really only for very small includes. I have a static site builder that takes a lot of resources and packs them into a single program, which is very quick with .incbin.
Unfortunately, adding the JS file to the list of sources is not enough:
ninja explain: output CMakeFiles/jsapp.dir/static_site.c.o older than most recent input static_site.c (1629797306094133842 vs 1629797311521966739)
ninja explain: CMakeFiles/jsapp.dir/static_site.c.o is dirty
ninja explain: jsapp is dirty
[2/2] Linking C executable jsapp
The main C file that embeds the JS is not being rebuilt, but the static site source, which is unrelated here, is rebuilt because its timestamp changed.
How can I tell CMake that source.c now depends on some_file.js?
As per @arrowd's idea:
set_source_files_properties(main.c PROPERTIES
    OBJECT_DEPENDS ${CMAKE_SOURCE_DIR}/my.js
)
Worked beautifully.
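A quick way to check that the dependency took effect (just a sketch; the build directory and the file/target names are assumed from the question):
cmake -G Ninja -S . -B build      # regenerate the build files after editing CMakeLists.txt
touch my.js                       # make the JS newer than the cached object file
ninja -C build -d explain jsapp   # main.c should now be recompiled because my.js changed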

What does the -specs argument do in arm-none-eabi-gcc?

I was having trouble with the linker for the embedded ARM GCC toolchain, and I found a tutorial somewhere online saying that I could fix my linker errors in arm-none-eabi-gcc by adding the argument -specs=nosys.specs. That worked for me, and my code compiled.
My chip is an ATSAM7SE256 microcontroller, which to my understanding is an ARM7TDMI processor using the ARMv4T and Thumb instruction sets, and I've been compiling my code using:
arm-none-eabi-gcc -march=armv4t -mtune=arm7tdmi -specs=nosys.specs -o <exe_name>.elf <input_files>
And the code compiles with no issue, but I have no idea if it's doing what I think it's doing.
What is the significance of a spec file? What other values can you set with -specs=, and in what situations would you want to? Is nosys.specs the value I want for a completely embedded arm microcontroller?
It is documented at: https://gcc.gnu.org/onlinedocs/gcc-11.1.0/gcc/Overall-Options.html#Overall-Options
It is a file containing switches to override standard defaults for various build components such as the compiler, assembler and linker. For example it can be used to replace the default C library.
I have never seen it used; typically, bare-metal embedded builds explicitly specify -nostdlib and then explicitly link the required libraries. It could be used to link other environment-specific default code, such as an RTOS, I guess. Personally I'd rather make all that explicit on the command line than hide it in a file somewhere.
Essentially it applies the switches specified in the file as if they were defaults, so can be used to define defaults for specific build and execution environments.
The format of the specs file is documented at https://gcc.gnu.org/onlinedocs/gcc-11.1.0/gcc/Spec-Files.html#Spec-Files
Without seeing both the linker errors and the content of the nosys.specs file in this case it is difficult to say how or why it solved your linker problem. The alternative solution of course would be to apply whatever switches are in the specs file directly.
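If you want to see for yourself what a given specs file changes, a couple of suggestions (paths and file names here are assumptions):
arm-none-eabi-gcc -dumpspecs > default.specs    # dump the compiler's built-in spec strings for comparison
# -v prints the driver's subcommands, so you can see what the spec file adds to the link step:
arm-none-eabi-gcc -v -specs=nosys.specs -march=armv4t -mtune=arm7tdmi -o app.elf main.o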

How do I compile a Perl 6 program to generate bytecode?

I am trying to understand Perl 6 and how it differs from Perl 5. I have learned that Perl 6 is a compiled language, but I don't see how: it doesn't appear to generate any intermediate code (a directly executable binary or JVM bytecode).
I can't find an option to do this. How is it done?
Currently I am able to execute my code directly:
$ perl6-j hello.p6
Hello world
I am following https://github.com/rakudo/rakudo
You can use --target= on the perl6 command line to see a human-readable trace of each stage of the compiler. On the JVM, if you wish to have a "compiled" bytecode output you can use --target=jar and then take a look inside it. But ultimately Perl 6 compiles on the fly unless asked otherwise.
It leaves a bytecode representation of each "CompUnit" cached in library path directories so that the compile step is faster next time. This can be seen in .precomp directories. The precomp cache is very tricky to use by hand due to how Perl 6 hashes and indexes all comp units; this is so that libraries with the same name but different version and author can sit side by side.
On MoarVM there is no equivalent to --target=jar, but in the .precomp directory you can see the raw bytecode files that can be directly executed by moar if you link the runtime setting.
Updating the answer, as this is now supported.
To generate the bytecode for a Perl 6 program, run perl6 --target=<backend> --output=foo foo.pl6. You can use mbc, jvm, or js as your target backend. The bytecode will be written to the file foo.
Writing bytecode to a file, both for modules and for programs, is not officially supported yet, hence the lack of documentation for --target.
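For example, based on the options described above (hello.p6 and the output names are placeholders; exact support varies by Rakudo version and backend):
perl6 --target=ast hello.p6                         # inspect one compiler stage in human-readable form
perl6 --target=mbc --output=hello.moarvm hello.p6   # write MoarVM bytecode to hello.moarvm
perl6-j --target=jvm --output=hello.jar hello.p6    # JVM backend equivalent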

What is the g++ -I option (capital i)?

Trying to do this, I stumbled upon the -I option here:
$ g++ -o version version.cpp -I/usr/local/qt4/include/QtCore -I/usr/local/qt4/include -L/usr/local/qt4/lib -lQtCore
I can't find any information about it.
If you're looking for what -I does:
-I[/path/to/header-files]
Add search path to header files (.h) or (.hpp).
From https://caiorss.github.io/C-Cpp-Notes/compiler-flags-options.html
This pretty much just means that for any #include of headers from an external library (in your case Qt), you have to tell g++ where to look for them.
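A hypothetical example (layout invented, not from the question): say your own headers live in ./include and a third-party library is installed under /opt/mylib:
# -I tells the preprocessor where to find #include'd headers; -L/-l tell the linker where to find the library itself
g++ -Iinclude -I/opt/mylib/include -o app src/main.cpp -L/opt/mylib/lib -lmylib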
If my understanding is correct, the question is about -I, not -L. I hope this helps:
-Idir Append directory dir to the list of directories searched for include files.
From this link: http://www.cs.virginia.edu/helpnet/Software_Development/compilers/g.html
g++ - GNU project C++ Compiler (v2 preliminary)
g++ [option | filename] ...
Capabilities
The C and C++ compilers are integrated. Both process input files through one or more of four stages: preprocessing, compilation, assembly, and linking.
C++ source files use one of the suffixes `.C', `.cc', or `.cxx'.
Options
There are many command-line options, including options to control details of optimization, warnings, and code generation, which are common to both gcc and g++. For full information on all options, see gcc(1).
Options must be separate: `-dr' is quite different from `-d -r'.
-c Compile or assemble the source files, but do not link. The compiler output is an object file corresponding to each source file.
-Dmacro Define macro macro with the string `1' as its definition.
-Dmacro=defn Define macro as defn
-E Stop after the preprocessing stage; do not run the compiler proper. The output is preprocessed source code, which is sent to the standard output.
-g Produce debugging information in the operating system's native format (for DBX or SDB or DWARF). GDB also can work with this debugging information. On most systems that use DBX format, `-g' enables use of extra debugging information that only GDB can use.
Unlike most other C compilers, GNU CC allows you to use `-g' with `-O'. The shortcuts taken by optimized code may occasionally produce surprising results: some variables you declared may not exist at all; flow of control may briefly move where you did not expect it; some statements may not be executed because they compute constant results or their values were already at hand; some statements may execute in different places because they were moved out of loops.
Nevertheless it proves possible to debug optimized output. This makes it reasonable to use the optimizer for programs that might have bugs.
-Idir Append directory dir to the list of directories searched for include files.
-llibrary Use the library named library when linking. (C++ programs often require `-lg++' for successful linking.)
-O Optimize. Optimizing compilation takes somewhat more time, and a lot more memory for a large function.
Without `-O', the compiler's goal is to reduce the cost of compilation and to make debugging produce the expected results. Statements are independent: if you stop the program with a breakpoint between statements, you can then assign a new value to any variable or change the program counter to any other statement in the function and get exactly the results you would expect from the source code.
Without `-O', only variables declared register are allocated in registers. The resulting compiled code is a little worse than produced by PCC without `-O'.
With `-O', the compiler tries to reduce code size and execution time.
-o file Place output in file file.

.h generated from .h.in?

There are struct definitions in the .h file that my library creates after I build it, but I cannot find these in the corresponding .h.in. Can somebody tell me how all this works and where the extra info comes from?
To be specific: I am building pth, the userspace threading library. It has pth_p.h.in, which doesn't contain the struct definition I am looking for, yet when I build the library, a pth_p.h appears and it has the definition I need.
In fact, I have searched every single file in the library before it is built and cannot find where it is generating the struct definition.
Pth uses GNU Autoconf, Automake, and Libtool. By running ./configure you'll be running a shell script which eventually runs m4 to detect the presence of a whole bunch of different system attributes and make changes to a number of files.
It looks like it boils down to ./configure generating Makefile from Makefile.in and then running something via make that triggers the shtool subcommand scpp:
pth_p.h: $(S)pth_p.h.in
$(SHTOOL) scpp -o pth_p.h -t $(S)pth_p.h.in -Dcpp -Cintern -M '==#==' $(HSRCS)
Obscure link, but here's a shtool-scpp manpage, which describes it as:
This command is an additional ANSI C source file pre-processor for sharing cpp(1) code segments, internal variables and internal functions. The intention for this comes from writing libraries in ANSI C. Here a common shared internal header file is usually used for sharing information between the library source files.
The operation is to parse special constructs in files, generate a few things out of these constructs and insert them at position mark in tfile by writing the output to ofile. Additionally the files are never touched or modified. Instead the constructs are removed later by the cpp(1) phase of the build process. The only prerequisite is that every file has a `#include "ofile"' at the top.
The .h.in is probably processed by the configure script (generated from configure.ac); look out for
AC_CONFIG_FILES([thatfile.h])
It replaces variables of the form @VAR@ in the .in file with their values.
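A hypothetical illustration (file and variable names invented): if a template header contains an @VAR@ placeholder, running configure fills it in:
grep VERSION config.h.in    # -> #define MY_VERSION "@PACKAGE_VERSION@"
./configure
grep VERSION config.h       # -> #define MY_VERSION "1.2.3"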
Edit: Just noticed that if I'm right, you should retag your question.