My question is related to symbols in an ELF file. As we know, an ELF file's symbol table holds the information needed to locate and relocate a program's symbolic definitions and references.
Can we differentiate between a library symbol and a user-defined symbol (if both are global)? Consider the scenario in which no source code is available and you have only the ELF file.
A static library is just an archive of unlinked object files (.o), with an index to speed up the linker's search for symbols in it. When you link against such a library, the linker takes each unresolved symbol and tries to find it there. If it finds it, it extracts the corresponding object file and adds it to the collection to link. So no, you can't tell whether a symbol comes from a static library.
If you have another instance of the library that is sufficiently close to what the executable was linked against, you could look at which symbols it defines and then assume that all those symbols, plus any symbols they depend on, come from the library.
It is of course possible to identify symbols defined in a shared library, because that remains a separate file.
But there is another point: it is most likely illegal to provide a Linux binary, without sources, that is statically linked against libc. That is, it is definitely illegal if that libc is the GNU libc, because it is distributed under the terms of the LGPL, and the LGPL requires providing (on request) the sources of all derived code, excepting code that is linked to it dynamically. It may be different if it uses another libc, like sourceware's newlib or Bionic libc (Android) (I can't find any others); I am not, however, sure how well such code would work on a GNU libc-based system.
Related
I read that the Windows Portable Executable (PE) format contains a symbol table. I understand why a symbol table would be required during the semantic analysis phase of compilation and also during code generation. But I don't understand why the final executable itself should contain a symbol table, since the addresses are mapped into the assembly code by this stage. What am I missing?
I can't really speak specifically for PE, but I'd imagine it's similar to the situation for ELF, where there are two different symbol tables to speak of:
The "ordinary" symbol table (the one one would normally refer to as "the symbol table"), is optional in the final executable. If it's present, it's used by debuggers and other programs that inspect a program with symbolic information. It is normally generated by the linker, but can, and often is, stripped away afterwards to reduce the file size.
The dynamic symbol table is used for linking against DSOs at runtime, and as such needs to be present for executables that use dynamic linking. It only lists the external symbols that the executable needs (or wants to publicize, which is also possible), however; not every symbol that was present inside it during linking.
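As a rough illustration of that second point (my own sketch, not part of the original answer): a program can look up an exported symbol by name at run time, and that lookup is served entirely by the dynamic symbol table, which is why stripping leaves it in place. Assuming a Linux system with libm available:

#include <dlfcn.h>
#include <stdio.h>

int main(void)
{
    void *handle = dlopen("libm.so.6", RTLD_NOW);   /* load a shared object */
    if (!handle) {
        fprintf(stderr, "dlopen failed: %s\n", dlerror());
        return 1;
    }
    /* Resolve the exported symbol "cos" by name via the dynamic symbol table. */
    double (*my_cos)(double) = (double (*)(double))dlsym(handle, "cos");
    if (my_cos)
        printf("cos(0.0) = %f\n", my_cos(0.0));
    dlclose(handle);
    return 0;
}

Compile with something like gcc lookup.c -ldl (older glibc needs -ldl; newer versions have dlopen in libc itself).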
I'm having trouble understanding if/how to share code among several Fortran projects without building libraries or duplicating source code.
I am using Eclipse/Photran with the Intel compiler (ifort) on a linux system, but I believe I'm having a bigger conceptual problem with modules than with the specific tools.
Here's a simple example: In ~/workspace/cow I have a source directory (src) containing cow.f90 (the PROGRAM) and two modules m_graze and m_moo in m_graze.f90 and m_moo.f90, respectively. This project builds and links properly to create the executable 'cow'. The executable and modules (m_graze.mod and m_moo.mod) are stored in ~/workspace/cow/Debug and object files are stored under ~/workspace/cow/Debug/src
Later, I create ~/workspace/sheep and have src/sheep.f90 as the program and src/m_baa.f90 as the module m_baa. I want to 'use m_graze, only: ruminate' in sheep.f90 to get access to the ruminate() subroutine. I could just copy m_graze.f90, but that could lead to code getting out of sync and doesn't take into account any dependencies m_graze might have. For these reasons, I'd rather leave m_graze in the cow project and compile and link sheep.f90 against it.
If I try to compile the sheep project, I'll get an error like:
error #7002: Error in opening the compiled module file. Check INCLUDE paths. [M_GRAZE]
Under Properties:Project References for sheep, I can select the cow project. Under Properties:Fortran Build:Settings:Intel Compiler:Preprocessor I can add ~/workspace/cow/Debug (location of the module files) to the list of include directories so the compiler now finds the cow modules and compiles sheep.f90. However the linker dies with something like:
Building target: sheep
Invoking: Intel(R) Fortran Linker
ifort -L/home/me/workspace/cow/Debug -o "sheep" ./src/sheep.o
./src/sheep.o: In function `sheep':
/home/me/workspace/sheep/src/sheep.f90:11: undefined reference to `m_graze_mp_ruminate_'
This would normally be solved by adding libraries and library paths to the linker settings except there are no appropriate libraries to link to (this is Fortran, not C.)
The cow project was perfectly capable of compiling and linking together cow.f90, m_graze.f90 and m_moo.f90 into an executable. Yet while the sheep project can compile sheep.f90 and m_baa.f90 and can find the module m_graze.mod, it can't seem to find the symbols for m_graze even though all the requisite information is present on the system for it to do so.
It would seem to be an easy matter of configuration to get the linker portion of ifort to find the missing pieces and put them together but I have no idea what magic words need to be entered where in the Photran UI to make this happen.
I confess an utter lack of interest and competence in C and the C build process and I'd rather avoid the diversion of creating libraries (.a or .so) unless that's the only way to make this work.
Ultimately, I'm looking for a pure Fortran solution to this problem so I can keep a single copy of the source code and don't have to manually maintain a pile of custom Makefiles.
So can this be done?
Apologies if this has already been documented somewhere; Google is only showing me simple build examples, how to create modules, and how to link with existing libraries. There don't seem to be (m)any examples of code reuse with modules that don't involve duplicating source code.
Edit
As respondents have pointed out, the .mod files are necessary but not sufficient; either object code (in the form of m_graze.o) or static or shared libraries must be specified during the linking phase. The .mod files describe the interface to the object code/library but both are necessary to build the final executable.
For an oversimplified toy problem such as this, that's sufficient to answer the question as posed.
In a larger project with more complex dependencies (in my case, 80+KLOC of F90 linking to the MKL version of LAPACK95), the IDE or toolchain may lack sufficient automatic or user-interface facilities to make sharing a single canonical set of source files a viable strategy. The choice seems to be between risking duplicate source files getting out of sync, giving up many of the benefits of an IDE (i.e. avoiding manual creation of make/CMake/SCons files), or, in all likelihood, both. While a revision control system and good code organization can help, it's clear that sharing a single canonical set of source files among projects is far from easy given the current state of Eclipse.
Some background which I suspect you already know: Typically (including ifort) compiling the source code for a Fortran module results in two outputs - a "mod" file that contains a description of the Fortran entities that the module defines that the compiler needs to find whenever it sees a USE statement for the module, and object code for the linker that implements the procedures and variable storage, etc., that the module defines.
Your first error (the one you solved) is because the compiler couldn't find the mod file.
The second error is because the linker hasn't been told about the object code that implements the stuff that was in the source file with the module. I'm not an Eclipse user by any means, but a brute force way of specifying that is just to add the object file (xxxxx/Debug/m_graze.o) as an additional linker option (Fortran Build > Settings, under Intel Fortran Linker > Command Line). (Other tool chains have explicit "additional object file" properties for their link stage - there may well be a better way of doing this for the Intel chain.)
For more involved examples you would typically create a library out of the shared code. That's not really C-specific; the only Fortran aspect is that the library's archive of object code needs to be provided alongside the mod files that the Fortran compiler generates.
Yes, the object code must be provided. E.g., when you install libnetcdf-dev in Debian (apt-get install libnetcdf-dev), a /usr/include/netcdf.mod file is included.
You can now use all netcdf routines in your Fortran code. E.g.,
program main
use netcdf
...
end
but you'll have to link to the netcdf shared (or static) library, i.e.,
gfortran -I/usr/include/ main.f90 -lnetcdff
However, as user MSB mentioned the mod file can only be used by gfortran that comes with the distribution (apt-get install gfortran). If you want to use any other compiler (even a different version that you may have installed yourself) then you'll have to build netcdf yourself using that particular compiler.
So creating a library is not a bad solution.
Does a small embedded system without an RTOS/OS use dynamic/shared libraries? My understanding is that it is very tough to use them and would not be productive.
If we call an API multiple times that is present in a static library, will the API's code be placed at every call location, like a macro expansion, or will the code/text be common to all calls? I think the code/text will be common.
If I have made a static library from .c files containing multiple APIs, and I statically link it with my main file, but only one API is called in the main file, is the whole library included in the final .bin, or only that particular API's code?
From the above questions you can tell that I am missing the fundamentals themselves, so can anyone please provide related links to brush these up?
Regards
[edit]
I have tried the following things:
addition.c module
int addition(int a,int b)
{
int result;
result = a + b;
return result;
}
size addition.o
23 0 0 23 17 addition.o
multiplication.c module
int multiplication(int a, int b)
{
int result;
result = a * b;
return result;
}
size multiplication.o
21 0 0 21 15 multiplication.o
Created object files of both and put them in an archive:
ar cr libarith.a addition.o multiplication.o
Then statically linked it with my main application.
example.c module
`#include "header.h"`
`#include <stdio.h>`
`1:int main()`
`2:{`
`3:int result;`
`4:result = addition(1,2);`
`5:printf("addition result is : %d\n",result);`
`6:result = multiplication(3,2);`
`7:printf("multiplication result is : %d\n",result);`
`8:return 0;`
`9:}`
gcc -static example.c -L. -larith -o example
size of example
511141 1928 7052 520121 7efb9 example
Commented out line 6 of example.c and linked again:
gcc -static example.c -L. -larith -o example
size of example
511109 1928 7052 520089 7ef99 example
There is a difference of 32 bytes between the two, which means multiplication.o is not included in example.
Merged both modules, addition.c and multiplication.c, into addmult.c as below:
int addition(int a,int b)
{
int result;
result = a + b;
return result;
}
int multiplication(int a, int b)
{
int result;
result = a * b;
return result;
}
Created an object file and put it in an archive (before doing that, I deleted the previous archive):
ar cr libarith.a addmult.o
Now commented out line 6 of example.c:
gcc -static example.c -L. -larith -o example
size example
511093 1928 7052 520073 7ef89 example
Uncommented line 6 of example.c:
size example
511141 1928 7052 520121 7efb9 example
My question is: in both cases, if both functions are called, the final text size is the same; but if only one function is called, there is a difference of 16.
However, multiplication.o's text size is only 21, so it has definitely not been included; so how do we justify the 16?
Or am I missing some fundamental here?
To dynamically load and link a library at runtime requires code to perform the load/link operation. That capability is normally part of an operating system. Moreover, in a system without mass storage of some kind, dynamic linking would have no benefit, since the dynamically linked code would have to exist in memory in any case and so may as well have been statically linked.
To answer the second part of your question, a static library is simply a collection of object files in an archive. The linker will only extract and link the object code necessary to resolve symbols referenced in the executable as a whole. Some smart linkers can discard unused functions from within an object file, but you should not rely on that.
So by linking a static library you are not including all the unused code in the library. You can probably tell that by comparing the size of all your library files with the size of the executable binary - you will probably see that your executable is far smaller than the sum of the sizes of the libraries linked. Also your linker will have an option to create a map file which will tell you exactly what code has been included, and if it has a cross-reference output facility, what code references or is referenced by what.
If you are building your own static libraries, or even your own non-library code, it will pay to ensure good granularity at the object file level. For example if an object file contains two functions, one used and one unused, most linkers will have no choice but to include both, whereas if the functions are defined in separate compilation units (source files), then they will be in separate object files (even when collated into a library) and can be separately linked.
If you really have an embedded system without any operating system, then your hardware essentially has fixed software, which you can change only by physical means (e.g. a soldering iron, or plugging something in, etc.). In that case, that software runs on the "bare iron" and does somehow what an OS would provide (it manages the physical resources and interacts directly with the I/O ports using the appropriate machine instructions).
In particular, an embedded system without any OS cannot have any kind of dynamic libraries, because by definition these libraries need to be inside some files (on the embedded processor), and to have files you need an operating system.
The exact definition of an operating system is debatable and fuzzy; I believe that providing a file system is one of the roles of most current OSes.
Since shared libraries (or static libraries) are libraries sitting inside some files, you cannot have them without an OS. Something which provides files is by definition an operating system.
Perhaps you are using a cross-development toolchain to develop your embedded software. If you want to get something which runs on the bare metal, your chain ultimately has to give you a single binary image which you can flash into a ROM, then solder or plug that ROM (or transfer it somehow physically) into your embedded hardware (some tools enable you to flash an entire self-contained processor).
I believe you might be confused, and you should read more about operating systems, kernels, the linux kernel, file systems, syscalls, RTOS, linkers & loaders, cross-compilers, microcontrollers, shared libraries, dynamic linkers ....
As Clifford suggested in comments, you could have an embedded system with some file system and some dynamic linker; in my view that would make an embryonic operating system, but it is a debatable matter of definition.
Notice that making a dynamic linker might not be an easy task (you'll need to do relocation); you could either make a generic ELF dynamic loader, or you could restrict the form of the dynamically loaded modules, and perhaps use your specific ld script to generate them.
You already have all the fundamentals you need. Without an operating system, mass storage (disc, filesystem, etc.) and multiple/many different programs that can take advantage of the shared library, it doesn't make any sense. You don't save anything, and it would probably cost you a little more if you were to fake it enough to use a shared library in a fixed bare-metal environment.
You mentioned having CodeSourcery; how do you learn these things? You disassemble your binaries and see what the compiler did. Does it link the entire gcc library because you used one divide? Does it link the entire C library because you used one function (does it even work to try to link a C library function? Many have system calls to an operating system which you have to resolve)? Start by using a simple divide in a very simple function (it needs to be generic):
unsigned int fun ( unsigned int a, unsigned int b )
{
return(a/b);
}
DO NOT call that function with fixed constants, and do not call it from the same .c file; the best thing would be to simply add that function as is and do nothing else with it, just have it sit there. You may hit problems even trying to compile it; once you do, disassemble and see what the compiler did with it, and see whether the entire gcc library was added or just the code for that one function.
You can't trust any old web page or resource, as it may not cover the same tools you are using and may be outdated; the compiler you are using right now is the one that matters, no other. And the answers are all right there in front of you.
No, they don't use dynamic libraries; the functions needed are linked in as needed. The optimizer may choose to inline some code, but in general the code for each function is in one place and each call to it is a real call; it is not like a macro. Again, the optimizer may choose otherwise for performance reasons (functions small enough that they don't consume too much memory, and small enough that the code required to make a function call is excessive compared to the function itself). Also, that function needs to be in the same optimization space; for gcc this is the same .c file, for llvm this could be any code in the project.
I have some examples, cortex-m and others, bare metal, at http://github.com/dwelch67; you may find some that help answer your questions. Examine, for example, how the compiler will implement a public function like the one above AND inline it when used. If you declare the function as static, then the optimizer, if it inlines it, doesn't need to implement the function in the binary (see the sketch after this example). If you make a call to a function like that in the same .c file, for example
c = fun(10,5);
there is a good chance that the optimizer if used, will replace that code with
c = 2;
and not perform the divide at all.
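To make the static-function point above concrete, here is a small sketch of my own (the function names are invented; exactly what happens depends on the compiler and optimization level):

static unsigned int div_static ( unsigned int a, unsigned int b )
{
    /* With optimization on, this is typically inlined into its callers and no
       stand-alone copy of it needs to appear in the binary at all. */
    return(a/b);
}

unsigned int div_public ( unsigned int a, unsigned int b )
{
    /* A public (non-static) function keeps one out-of-line copy in the object
       file, because other compilation units may call it, even if local calls
       to it also get inlined. */
    return(a/b);
}

unsigned int use_them ( unsigned int x )
{
    return(div_static(x,3)+div_public(x,5));
}

Disassemble the object file built both ways and you can see the difference for yourself.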
I have a project that needs to incorporate two third-party libraries, libA and libB. I have little, if any, influence over the third-party libraries. The problem being is that both libA and libB include different versions of a common library, ASIHTTPRequest. As a result, I'm getting errors like:
-[ASIFormDataRequest setNumberOfTimesToRetryOnTimeout:]: unrecognized selector sent to instance 0x3b4170
, which I can only assume are because libA is referring to libB's implementation of ASIHTTPRequest (or the other way around).
I've tried playing around with strip -s <symbol file> -u <library> to isolate the libraries' symbols from each other, but that results in XCode's linker spitting out thousands of warnings and doesn't actually fix the main problem outlined above.
ld: warning: can't add line info to anonymous symbol anon-func-0x0 from ...
In general, how can/should one isolate libraries from each other?
There is absolutely no way to do so. One Objective-C application can have only one meaning for one symbol at a time. If you load two different versions of one library the last one will overwrite the first one.
Two workarounds:
convince the developer to use a recent version
run both libraries in separate processes
If they used the same linker symbol name for different routines, the only way out (short of hacking their object files), is to link them into different executables somehow.
On platforms that support dynamic linking (eg: DLLs) you could build one or both into a separate DLL. If they aren't part of the exported interface the symbols shouldn't clash then.
Otherwise you would be stuck putting them into entirely separate processes and using IPC to pass data between them.
What is the standard or "most popular" naming convention for MSVC library builds?
For example, on the following platforms the library foo has these conventions:
Linux/gcc:
shared: libfoo.so
import: ---
static: libfoo.a
Cygwin/gcc:
shared: cygfoo.dll
import: libfoo.dll.a
static: libfoo.a
Windows/MinGW:
shared: libfoo.dll
import: libfoo.dll.a
static: libfoo.a
What should be used for MSVC builds? As far as I know, the names are usually foo.dll and foo.lib, but how do you usually distinguish between the import library and the static one?
Note: I ask because CMake creates a quite unpleasant collision between them, naming both the import and the static library foo.lib. See the bug report. The answer would help me convince the developers to fix this bug.
You distinguish between a library and a .dll by the extension. But you distinguish between an import library and a static library by the filename, not the extension.
There will be no case where an import library exists for a set of code that was built to be a static library, or where a static library exists for a dll. These are two different things.
There is no single MSVC standard filename convention. As a rule, a library name that ends in "D" is often a debug build of the library code (msvcrtd.dll vs. msvcrt.dll), but other than that, there are no standards.
As mentioned by others, there are no standards, but there are popular conventions. I'm unsure how to unambiguously judge which is the most popular convention. In addition to the nomenclature for static vs. import libraries, which you asked about, there is an analogous distinction in the naming of release libraries vs. debug libraries, especially on Windows.
Both cases (i.e. static vs. import, and debug vs. release) can be handled in one of two ways: different names, or different directory locations. I usually choose to use different names, because I feel it minimizes the chance of mistaking the library type later, especially after installation or other file moving activities.
I usually use foo.dll and foo.lib for the shared library on Windows, and foo_static.lib for the static library, when I wish to have both shared and static versions. I have seen others use this convention, so it might be the "most popular".
So I would recommend the following addition to your table:
Windows/MSVC:
shared: foo.dll
import: foo.lib
static: foo_static.lib
Then in cmake, you could either
add_library(foo_static STATIC foo.cpp)
or
add_library(FooStatic STATIC foo.cpp)
set_target_properties(FooStatic PROPERTIES OUTPUT_NAME "foo_static")
if for some reason you don't wish to use "foo_static" as the symbolic library name.
There is no standard naming convention for libraries. Traditional library names are prefixed with lib. Many linkers have options to prepend lib to a library name on the command line.
The static and dynamic libraries are usually identified by their file extension; although this is not required. So libmath.a would be a static library whereas libmath.so or libmath.dll would be a dynamic library.
A common naming convention is to append the category of the library to the name. For example, a debug static math library would be 'libmathd.a' or in Windows, 'lib_math_debug'. Some shops also add Unicode as a filename attribute.
If you want, you can append _msvc to the library name to indicate the library requires or was created by MSVC (to differentiate from GCC and other tools). A popular convention when working with multiple platforms, is to place the objects and libraries in platform specific folders. For example a ./linux/ folder would contain objects and libraries for Linux and similarly ./msw/ for Microsoft Windows platform.
This is a style issue. Style issues are often treated like religious issues: none of them are wrong, there is no universal style, and they are an individual preference. What ever system you choose, just be consistent.
As far as I know, there's no real 'standard', at least no standard most software would conform to.
My convention is to name my dynamic and static .lib equally, but place them in different directories if a project happens to support both static and dynamic linkage. For example:
foo-static/
    foo.lib
foo/
    foo.lib
    foo.dll
The library to link against depends on the choice of the library directories, so it's almost totally decoupled from the rest of the build process (it won't appear in-source if you use MSVC's #pragma comment(lib,"foo.lib") facility, and it doesn't appear in the list of import libraries for the linker).
I've seen this quite a few times. Also, I think that MSVC/Windows based projects tend to stick more often with a single, official linkage type - either static, or dynamic. But that's just my personal observation.
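As a small sketch of the #pragma comment(lib, ...) facility mentioned above (the foo.h/foo.lib names are just the ones from the example layout):

/* foo.h: public header of the (hypothetical) foo library. Any MSVC
   translation unit that includes this header automatically asks the linker
   for foo.lib; whether that resolves to the import library or the static
   library is decided purely by which of the two directories above is on the
   linker's library path. */
#pragma once

#ifdef _MSC_VER
#pragma comment(lib, "foo.lib")
#endif

/* ... foo's declarations ... */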
In short:
Windows/MSVC
shared: foo.dll
import: foo.lib
static: foo.lib
You should be able to use this directory-based pattern with CMAKE (never used it). Also, I don't think it's a 'bug'. It's merely lack of standardization. CMAKE does (imho) the right thing not to establish a pseudo-standard if everyone likes it differently.
As the others have said, there is no single standard for file naming on Windows.
For our complete product base, which covers hundreds of exes, DLLs, and static libs, we have used the following successfully for many years now, and it has saved a lot of confusion. It's basically a mix of several methods I've seen used throughout the years.
In a nutshell, all our files have both a prefix and a suffix (not including the extension itself). They all start with "om" (based on our company name) and then have a 1- or 2-character combination that roughly identifies the area of code.
The suffix explains what type of built file it is and includes up to three letters used in combination depending on the build: Unicode, Static, Debug (DLL builds are the default and have no explicit suffix identifier). When we started this system Unicode was not so prevalent and we had to support both Unicode and non-Unicode builds (pre-Windows 2000 OSes); now everything is built exclusively Unicode, but we still use the same nomenclature.
So a typical .lib "set" of files might look like
omfThreadud.lib (Unicode/Debug/Dll)
omfThreadusd.lib (Unicode/Static/Debug)
omfThreadu.lib (Unicode/Release/Dll)
omfThreadus.lib (Unicode/static)
All files are built into a common bin folder, which eliminates a lot of DLL-hell issues for developers and also makes it simpler to adjust compiler/linker settings: they all point to the same location using relative paths, and there is never any need for manual (or automatic) copying of the libraries a project needs. Having these suffixes also eliminates any confusion as to what type of file you may have, and guarantees you can't have a mixed scenario where you put the debug DLL on a release kit or vice versa. All exes also use a similar suffix (Unicode/Debug) and build into the same bin folder.
There is likewise one single "include" folder; each library has one header file in the include folder that matches the name of the library/DLL (for example, omfthread.h). That file itself #includes all the other items that are exposed by that library. This keeps things simpler: if you want functionality that is in foo.dll, you just #include "foo.h". Our libraries are highly segmented by areas of functionality; effectively we don't have any "swiss-army knife" DLLs, so including a library's entire functionality makes sense. (Each of these headers also includes other prerequisite headers, whether they be our internal libraries or other vendor SDKs.)
Each of these include files internally uses macros that use #pragmas to add the appropriate library name to the linker line, so individual projects don't need to be concerned with that. Most of our libraries can be built statically or as a DLL, and #define OM_LINK_STATIC (if defined) is used to determine which the individual project wants (we usually use the DLLs, but in some cases static libraries built into the .exe make more sense for deployment or other reasons):
#if defined(OM_LINK_STATIC)
#pragma comment (lib, OMLIBNAMESTATIC("OMFTHREAD"))
#else
#pragma comment (lib, OMLIBNAME("OMFTHREAD"))
#endif
These macros (OMLIBNAMESTATIC and OMLIBNAME) use _DEBUG to determine what type of build it is and generate the proper library name to add to the linker line.
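The answer doesn't show those macros, but a minimal sketch of how they might look, following the suffix scheme above (the details here are my assumption, not the poster's actual code):

/* Hypothetical sketch only. Builds the library file name from the suffix
   scheme above: "u" = Unicode DLL build, "us" = Unicode static build, plus
   "d" appended for debug builds. */
#if defined(_DEBUG)
#define OM_DEBUG_SUFFIX "d"
#else
#define OM_DEBUG_SUFFIX ""
#endif

#define OMLIBNAME(base)       base "u"  OM_DEBUG_SUFFIX ".lib"   /* DLL import library */
#define OMLIBNAMESTATIC(base) base "us" OM_DEBUG_SUFFIX ".lib"   /* static library */

With that, OMLIBNAME("OMFTHREAD") expands to "OMFTHREADud.lib" in a debug build, matching the omfThreadud.lib naming shown above (Windows file names are case-insensitive), and MSVC concatenates the adjacent string literals inside #pragma comment(lib, ...).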
We use a common define in the static and DLL versions of a library to control proper exporting of the classes/functions in DLL builds. Each class or function exported from the library is decorated with this macro (the name of which matches the base name of the library, though that is largely unimportant):
class OMFTHREAD_DECLARE CThread : public CThreadBase
In the DLL version of the project settings we define OMFTHREAD_DECLARE=__declspec(dllexport); in the static library version we define OMFTHREAD_DECLARE as empty.
In the library's header file we define it based on how the client is trying to link to it:
#if defined(OM_LINK_STATIC)
#define OMFTHREAD_DECLARE
#else
#define OMFTHREAD_DECLARE __declspec(dllimport)
#endif
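Putting the pieces together, the header-side logic ends up looking something like this (a sketch of my own; as described above, the build-side definitions come from the DLL and static library projects' settings, so the header only has to cover clients that haven't defined the macro yet):

/* Sketch only: the guard around the client-side defaults is an assumption. */
#if !defined(OMFTHREAD_DECLARE)
#if defined(OM_LINK_STATIC)
#define OMFTHREAD_DECLARE                        /* static client: no decoration */
#else
#define OMFTHREAD_DECLARE __declspec(dllimport)  /* DLL client: import the symbols */
#endif
#endif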
A typical project that wants to use one of our internal libraries just adds the appropriate include to its stdafx.h (typically) and it just works. If it needs to link against the static version, it just adds OM_LINK_STATIC to its compiler settings (or defines it in stdafx.h) and, again, it just works.
As far as I know there still aren't any conventions with regards to this. Here's an example of how I do it:
{Project}_{SubModule}_{Platform}_{Architecture}_{CompilerRuntime}_{BuildType}.lib/.dll
The full filename shall be lowercase only and shall only contain alphanumerics with predesignated underscores. The submodule field, including its leading underscore, is optional.
Project: holds the project name/identifier. Preferably as short as possible, e.g. "dna".
SubModule: optional. Holds the module name. Preferably as short as possible, e.g. "dna_audio".
Platform: identifies the platform the binary is compiled for, e.g. "win32" (Windows), "winrt", "xbox", "android".
Architecture: describes the architecture the binary is compiled for, e.g. "x86", "x64", "arm". Where architecture names are the same for various bitnesses, use the name followed by the bitness, e.g. "name16", "name32", "name64".
CompilerRuntime: optional. Not all binaries link to a compiler runtime, but if they do, it's included here, e.g. "vc90" (Visual Studio 2008), "gcc". Where applicable, the threading model/apartment can be included, e.g. "vc90mt".
BuildType: optional. This can hold letters (in any order desired), each of which tells something about the build specifics: d = debug (omitted if release), t = static (omitted if dynamic), a = ANSI (omitted if Unicode).
Examples (assuming a project named "DNA"):
dna_win32_x86_vc90.lib/dll
dna_win32_x64_vc90_d.lib/dll
dna_win32_x86_vc90_sd.lib
dna_audio_win32_x64_vc90.lib/dll
dna_audio_winrt_x64_vc110.lib/dll