What is the difference in gcc between -flto and -ffat-lto-objects?

I have tried to compile my source code to assembly with the following flags:
1. -flto
2. -flto -ffat-lto-objects
3. -flto -fno-fat-lto-objects
The third one produces the optimized slim LTO output described in the documentation, but I don't see any difference in the output assembly file between the first and the second. Why?
OS: linux
Compiler: GCC 4.7

The difference between fat and non-fat object files is that fat object files contain both the intermediate language and the normally compiled code. At link time, if you invoke the compiler without -flto, fat objects are handled as normal object files (and the LTO information is discarded), while slim objects will invoke the LTO optimizers, because there is no way to handle them without it.
If you both compile and link with -flto, fat and slim objects ought to give you the same binary; slim objects will just be smaller and faster to build, because the redundant code generation is avoided.
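As a concrete illustration (a hedged sketch; main.c and util.c are placeholder file names, and the default mode depends on the GCC version):

gcc -c -flto -ffat-lto-objects main.c util.c     # fat objects: LTO intermediate language plus normal machine code
gcc -c -flto -fno-fat-lto-objects main.c util.c  # slim objects: LTO intermediate language only
gcc -flto -o app main.o util.o                   # LTO link: same final binary from either kind of object
gcc -o app main.o util.o                         # non-LTO link: fat objects behave as ordinary objects (LTO info discarded); slim objects still go through the LTO machinery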

This may be helpful to someone.
Here it says the following:
The current implementation only produces “fat” objects, effectively doubling compilation time and increasing file sizes up to 5x the original size
So I think that's the main reason.

Related

What does the -specs argument do in arm-none-eabi-gcc?

I was having trouble with the linker for the embedded ARM GCC compiler. I found a tutorial somewhere online saying that I could fix my linker errors in arm-none-eabi-gcc by including the argument -specs=nosys.specs, which worked for me: my code compiled.
My chip is an ATSAM7SE256 microcontroller, which to my understanding is an arm7tdmi processor using the armv4t and thumb instruction sets, and I've been compiling my code using:
arm-none-eabi-gcc -march=armv4t -mtune=arm7tdmi -specs=nosys.specs -o <exe_name>.elf <input_files>
And the code compiles with no issue, but I have no idea if it's doing what I think it's doing.
What is the significance of a spec file? What other values can you set with -specs=, and in what situations would you want to? Is nosys.specs the value I want for a completely embedded arm microcontroller?
It is documented at: https://gcc.gnu.org/onlinedocs/gcc-11.1.0/gcc/Overall-Options.html#Overall-Options
It is a file containing switches to override standard defaults for various build components such as the compiler, assembler and linker. For example it can be used to replace the default C library.
I have never seen it used; typically bare-metal embedded system builds explicitly specify -nostdlib and then explicitly link the required libraries. It could be used in environment-specific builds to link other default code such as an RTOS, I guess. Personally I'd rather make all that explicit on the command line than hide it in a file somewhere.
Essentially it applies the switches specified in the file as if they were defaults, so it can be used to define defaults for specific build and execution environments.
The format of the specs file is documented at https://gcc.gnu.org/onlinedocs/gcc-11.1.0/gcc/Spec-Files.html#Spec-Files
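To give a feel for that format, a spec file is plain text made up of named spec strings. The fragment below is a hedged sketch adapted from the small example in that documentation (the -leval1 library is just the documentation's placeholder), not the actual contents of nosys.specs:

%rename lib old_lib

*lib:
--start-group -lgcc -lc -leval1 --end-group %(old_lib)

This renames the built-in lib spec and then redefines it so that extra libraries are linked in by default; as far as I recall, nosys.specs uses a similar trick to pull the -lnosys stub system-call library into the default link.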
Without seeing both the linker errors and the content of the nosys.specs file in this case it is difficult to say how or why it solved your linker problem. The alternative solution of course would be to apply whatever switches are in the specs file directly.

How to compile a perl6 program to generate bytecode?

I am trying to understand Perl 6 and how it differs from Perl 5. I have learned that Perl 6 is a compiled language, but I don't see how: it doesn't seem to generate any intermediate code (a directly executable file or JVM bytecode).
I can't find any option to do this. How is it done?
Currently I can only execute my code directly:
$ perl6-j hello.p6
Hello world
I am following https://github.com/rakudo/rakudo
You can use --target= on the perl6 command line to see a human-readable trace of each stage of the compiler. On the JVM, if you wish to have a "compiled" bytecode output you can use --target=jar and then take a look inside it. But ultimately Perl 6 compiles on the fly unless asked otherwise.
It leaves a bytecode representation of each "CompUnit" cached in library path directories, so that the compile step is faster next time. This can be seen in .precomp directories. The precomp cache is very tricky to use by hand because of how Perl 6 hashes and indexes all comp units; this is so libraries with the same name but different version and author can sit side by side.
On MoarVM there is no equivalent to --target=jar, but in the .precomp directory you can see the raw bytecode files, which can be directly executed by moar if you link the runtime setting.
Updating the answer, as this is now supported.
To generate the bytecode for a perl6 program, run perl6 --target=<backend> --output=foo foo.pl6. You can use mbc, jvm, or js as your target backend. The bytecode will be written to the file foo.
Writing bytecode to a file, both for modules and for programs, is not officially supported yet, hence the lack of documentation for --target.
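For example, to get MoarVM bytecode from the hello.p6 program above (a hedged illustration; the output file name is arbitrary):

perl6 --target=mbc --output=hello.moarvm hello.p6

The jvm and js backends work the same way, just with a different value for --target.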

What is the g++ -I option (capital i)?

I was trying to do this and stumbled upon the -I option here:
$ g++ -o version version.cpp -I/usr/local/qt4/include/QtCore -I/usr/local/qt4/include -L/usr/local/qt4/lib -lQtCore
I can't find any information about it.
If you're looking for what -I does:
-I[/path/to/header-files]
Adds a search path for header files (.h or .hpp).
From https://caiorss.github.io/C-Cpp-Notes/compiler-flags-options.html
This pretty much just means that for any #include of an external library's headers (in your case Qt), g++ has to be told where to look for them.
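As a hedged sketch tied to the command above (the source contents are made up; only the include paths come from the question):

// version.cpp
#include <QString>   // resolved via -I/usr/local/qt4/include/QtCore
int main() { QString s("hello"); return 0; }

$ g++ -o version version.cpp -I/usr/local/qt4/include/QtCore -I/usr/local/qt4/include -L/usr/local/qt4/lib -lQtCore

Without the -I options, the #include line fails with something like "QString: No such file or directory", because those Qt headers are not on the compiler's default search path.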
If my understanding is correct, the question is about -I, not -L. I hope this helps:
-Idir Append directory dir to the list of directories searched for include files.
From this link:
http://www.cs.virginia.edu/helpnet/Software_Development/compilers/g.html
g++ - GNU project C++ Compiler (v2 preliminary)
g++ [option | filename] ...
Capabilities
The C and C++ compilers are integrated. Both process input files through one or more of four stages: preprocessing, compilation, assembly, and linking.
C++ source files use one of the suffixes `.C', `.cc', or `.cxx'.
Options
There are many command-line options, including options to control details of optimization, warnings, and code generation, which are common to both gcc and g++. For full information on all options, see gcc(1).
Options must be separate: `-dr' is quite different from `-d -r'.
-c Compile or assemble the source files, but do not link. The compiler output is an object file corresponding to each source file.
-Dmacro Define macro macro with the string `1' as its definition.
-Dmacro=defn Define macro macro as defn.
-E Stop after the preprocessing stage; do not run the compiler proper. The output is preprocessed source code, which is sent to the standard output.
-g Produce debugging information in the operating system's native format (for DBX or SDB or DWARF). GDB also can work with this debugging information. On most systems that use DBX format, `-g' enables use of extra debugging information that only GDB can use.
Unlike most other C compilers, GNU CC allows you to use `-g' with `-O'. The shortcuts taken by optimized code may occasionally produce surprising results: some variables you declared may not exist at all; flow of control may briefly move where you did not expect it; some statements may not be executed because they compute constant results or their values were already at hand; some statements may execute in different places because they were moved out of loops.
Nevertheless it proves possible to debug optimized output. This makes it reasonable to use the optimizer for programs that might have bugs.
-Idir Append directory dir to the list of directories searched for include files.
-llibrary Use the library named library when linking. (C++ programs often require `-lg++' for successful linking.)
-O Optimize. Optimizing compilation takes somewhat more time, and a lot more memory for a large function.
Without `-O', the compiler's goal is to reduce the cost of compilation and to make debugging produce the expected results. Statements are independent: if you stop the program with a breakpoint between statements, you can then assign a new value to any variable or change the program counter to any other statement in the function and get exactly the results you would expect from the source code.
Without `-O', only variables declared register are allocated in registers. The resulting compiled code is a little worse than produced by PCC without `-O'.
With `-O', the compiler tries to reduce code size and execution time.
-o file Place output in file file.
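Putting several of those options together (a hedged example; the directory, file, macro, and library names are all made up):

g++ -c -Iinclude -DDEBUG=1 -O -g src/main.cpp -o main.o
g++ -o app main.o -Llib -lmylib

The first command stops after producing an object file (-c), searching include/ for headers (-I) and defining the macro DEBUG as 1 (-D), with optimization and debug information enabled (-O, -g); the second links main.o against libmylib found via the -L path and places the output in the executable app (-o).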

How to reuse Fortran modules without copying source or creating libraries

I'm having trouble understanding if/how to share code among several Fortran projects without building libraries or duplicating source code.
I am using Eclipse/Photran with the Intel compiler (ifort) on a linux system, but I believe I'm having a bigger conceptual problem with modules than with the specific tools.
Here's a simple example: In ~/workspace/cow I have a source directory (src) containing cow.f90 (the PROGRAM) and two modules m_graze and m_moo in m_graze.f90 and m_moo.f90, respectively. This project builds and links properly to create the executable 'cow'. The executable and modules (m_graze.mod and m_moo.mod) are stored in ~/workspace/cow/Debug and object files are stored under ~/workspace/cow/Debug/src
Later, I create ~/workspace/sheep and have src/sheep.f90 as the program and src/m_baa.f90 as the module m_baa. I want to 'use m_graze, only: ruminate' in sheep.f90 to get access to the ruminate() subroutine. I could just copy m_graze.f90, but that could lead to code getting out of sync and doesn't take into account any dependencies m_graze might have. For these reasons, I'd rather leave m_graze in the cow project and compile and link sheep.f90 against it.
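For concreteness, the relevant pieces look roughly like this (a trimmed-down sketch; the real files contain more):

! ~/workspace/cow/src/m_graze.f90
module m_graze
contains
  subroutine ruminate()
    print *, 'chewing the cud'
  end subroutine ruminate
end module m_graze

! ~/workspace/sheep/src/sheep.f90
program sheep
  use m_graze, only: ruminate
  call ruminate()
end program sheep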
If I try to compile the sheep project, I'll get an error like:
error #7002: Error in opening the compiled module file. Check INCLUDE paths. [M_GRAZE]
Under Properties:Project References for sheep, I can select the cow project. Under Properties:Fortran Build:Settings:Intel Compiler:Preprocessor I can add ~/workspace/cow/Debug (location of the module files) to the list of include directories so the compiler now finds the cow modules and compiles sheep.f90. However the linker dies with something like:
Building target: sheep
Invoking: Intel(R) Fortran Linker
ifort -L/home/me/workspace/cow/Debug -o "sheep" ./src/sheep.o
./src/sheep.o: In function `sheep':
/home/me/workspace/sheep/src/sheep.f90:11: undefined reference to `m_graze_mp_ruminate_'
This would normally be solved by adding libraries and library paths to the linker settings except there are no appropriate libraries to link to (this is Fortran, not C.)
The cow project was perfectly capable of compiling and linking together cow.f90, m_graze.f90 and m_moo.f90 into an executable. Yet while the sheep project can compile sheep.f90 and m_baa.f90 and can find the module m_graze.mod, it can't seem to find the symbols for m_graze even though all the requisite information is present on the system for it to do so.
It would seem to be an easy matter of configuration to get the linker portion of ifort to find the missing pieces and put them together but I have no idea what magic words need to be entered where in the Photran UI to make this happen.
I confess an utter lack of interest and competence in C and the C build process and I'd rather avoid the diversion of creating libraries (.a or .so) unless that's the only way to make this work.
Ultimately, I'm looking for a pure Fortran solution to this problem so I can keep a single copy of the source code and don't have to manually maintain a pile of custom Makefiles.
So can this be done?
Apologies if this has already been documented somewhere; Google is only showing me simple build examples, how to create modules, and how to link with existing libraries. There don't seem to be (m)any examples of code reuse with modules that don't involve duplicating source code.
Edit
As respondents have pointed out, the .mod files are necessary but not sufficient; either object code (in the form of m_graze.o) or static or shared libraries must be specified during the linking phase. The .mod files describe the interface to the object code/library but both are necessary to build the final executable.
For an oversimplified toy problem such as this, that's sufficient to answer the question as posed.
In a larger project with more complex dependencies (in my case, 80+KLOC of F90 linking to the MKL version of LAPACK95), the IDE or toolchain may lack sufficient automatic or user-interface facilities to make sharing a single canonical set of source files a viable strategy. The choice seems to be between risking duplicate source files getting out of sync, giving up many of the benefits of an IDE (i.e. avoiding manual creation of make/CMake/SCons files), or, in all likelihood, both. While a revision control system and good code organization can help, it's clear that sharing a single canonical set of source files among projects is far from easy given the current state of Eclipse.
Some background which I suspect you already know: Typically (including ifort) compiling the source code for a Fortran module results in two outputs - a "mod" file that contains a description of the Fortran entities that the module defines that the compiler needs to find whenever it sees a USE statement for the module, and object code for the linker that implements the procedures and variable storage, etc., that the module defines.
Your first error (the one you solved) is because the compiler couldn't find the mod file.
The second error is because the linker hasn't been told about the object code that implements the stuff that was in the source file with the module. I'm not an Eclipse user by any means, but a brute force way of specifying that is just to add the object file (xxxxx/Debug/m_graze.o) as an additional linker option (Fortran Build > Settings, under Intel Fortran Linker > Command Line). (Other tool chains have explicit "additional object file" properties for their link stage - there may well be a better way of doing this for the Intel chain.)
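In this example that would amount to a link line along these lines (a sketch; the exact object path depends on how Eclipse lays out the Debug directory, but the question says the object files live under ~/workspace/cow/Debug/src):

ifort -L/home/me/workspace/cow/Debug -o "sheep" ./src/sheep.o /home/me/workspace/cow/Debug/src/m_graze.o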
For more involved examples you would typically create a library out of the shared code. That's not really C specific; the only Fortran aspect is that the library's archive of object code needs to be provided alongside the mod files that the Fortran compiler generates.
Yes, the object code must be provided. E.g., when you install libnetcdf-dev on Debian (apt-get install libnetcdf-dev), a /usr/include/netcdf.mod file is included.
You can now use all netcdf routines in your Fortran code. E.g.,
program main
use netcdf
...
end
but you'll have to link to the netcdf shared (or static) library, i.e.,
gfortran -I/usr/include/ main.f90 -lnetcdff
However, as user MSB mentioned, the mod file can only be used by the gfortran that comes with the distribution (apt-get install gfortran). If you want to use any other compiler (even a different version you may have installed yourself), then you'll have to build netcdf yourself using that particular compiler.
So creating a library is not a bad solution.

Static library symbols missing in linked executable

I am trying to link a statically created .a library with another piece of C code.
However, in the final executable, several symbols (function names) are found to be missing when viewed with the nm command. This is because the linker (gcc being called) is stripping the symbols which are not referenced in the other piece of C code that is being linked with the library. The function symbol that I am trying to find with the nm command is visible in the .a library.
How can I make the linker not strip the symbols omitted this way?
Compile in gcc with -dynamic to force the compiler to include all symbols. But make sure that's what you really want, since it's wasteful.
Might be useful for some static factory patterns.
Generally, the linker does strip out unreferenced symbols, mainly to:
1. Reduce the final size of the executable
2. Speed up the execution of the program
There are two trains of thought here:
When you use the option -O as part of the gcc command line, that is optimizing the code and thus all debugging information gets stripped out, and hence the linker will automatically do the same.
When you use the option -g as part of the gcc command line, that includes all debugging information so that the executable can be loaded under the debugger with symbols intact.
In essence those two pull in opposite directions (although GCC does allow `-g' to be combined with `-O').
So it depends on which switches you used for this to happen. Usually, the -g switch is for internal debugging and testing prior to a public release. The opposite would be something like -O2, which makes the compiler smart enough to generate an executable that would be considered optimized, doing things such as removing dead variables and unrolling loops.
Hope this helps and gives you a hint.
Normally you need to call some registration function in your application to generate such a reference. Of course if you don't have access to the code of the first library, you can only use the -g option as described by tommieb75.
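As a sketch of what that registration-function approach looks like in practice (the file and function names here are made up for illustration):

/* plugin.c -- compiled into the static library libplugin.a */
void register_plugin(void) { /* body omitted */ }

/* main.c -- the application */
void register_plugin(void);
int main(void) { register_plugin(); return 0; }

gcc main.c libplugin.a -o app

Because main() references register_plugin(), the linker extracts the archive member that defines it, and its symbols then appear in the executable under nm; archive members that nothing references are simply never pulled in.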