CC=g++
CFLAGS=-O3 -c -Wall
DFLAGS=-g -Wall
LDFLAGS= -lz -lm -lpthread
KSWSOURCE=ksw.c
ALGNSOURCES=main.cpp aligner.cpp graph.cpp readfl.cpp hash.cpp form.cpp btree.cpp conLSH.cpp
INDSOURCES=whash.cpp genhash.cpp formh.cpp conLSH.cpp
INDOBJECTS=$(INDSOURCES:.cpp=.o) $(KSWSOURCE:.c=.o)
ALGNOBJECTS=$(ALGNSOURCES:.cpp=.o) $(KSWSOURCE:.c=.o)
INDEXER=conLSH-indexer
ALIGNER=conLSH-aligner
all: $(INDSOURCES) $(ALGNSOURCES) $(KSWSOURCE) $(ALIGNER) $(INDEXER)
$(ALIGNER): $(ALGNOBJECTS)
$(CC) $(ALGNOBJECTS) -o $@ $(LDFLAGS)
$(INDEXER): $(INDOBJECTS)
$(CC) $(INDOBJECTS) readfl.o -o $@ $(LDFLAGS)
debug:
$(CC) $(DFLAGS) $(ALGNSOURCES) $(KSWSOURCE) $(LDFLAGS)
.cpp.o:
$(CC) $(CFLAGS) $< -o $@
.c.o:
$(CC) $(CFLAGS) $< -o $@
clean:
rm -rf *.o $(ALIGNER) $(INDEXER) a.out
I have the above makefile, but I am getting this error:
/usr/lib/gcc/i686-linux-gnu/4.8/include/emmintrin.h:31:3: error: #error "SSE2 instruction set not enabled"
# error "SSE2 instruction set not enabled"
From what I understand and have googled, this is a flag for parallel (SIMD) computation.
Following other posts with the same problem, I tried to include either:
CXXFLAGS=-O3 -c -Wall -mfpmath=sse
OR:
set(CMAKE_CXX_FLAGS "${CMAKE_CXX_FLAGS} -msse -msse2 -msse3")
but without any success. Can you help?
I am not sure a CXXFLAGS change is what's needed, because a lot of (probably cascading) errors are shown in ksw.c, like:
ksw.c:49:2: error: ‘__m128i’ does not name a type
__m128i *qp, *H0, *H1, *E, *Hmax;
-msse2 is the specific option, so passing that to GCC will work, if you get your build scripts set up to actually do that. https://gcc.gnu.org/onlinedocs/gcc/x86-Options.html#x86-Options
Or better, use -march=native to enable everything your CPU has, if you're building for local use, not for distributing a binary that might have to work on an old-but-not-ancient CPU. (Of course, if you care about performance, it's weird to be building for 32-bit mode. SSE2 is baseline for x86-64. Unless your CPU is too old to support SSE2, e.g. a Pentium III. Or for example, there are embedded x86 CPUs without SSE, like AMD Geode. In that case, a binary built (successfully) with -msse2 will probably crash with an illegal instruction on such a CPU.)
-mfpmath=sse just tells GCC to use SSE for scalar FP math assuming that SSE is available; unrelated to telling GCC to assume the target CPU does support SSE2. It can be good to use it as well for performance, but it's not going to matter in getting your code to compile.
And yes, SSE1/2 intrinsic types like __m128i will only get defined when SSE is enabled, so error: ‘__m128i’ does not name a type is a clear sign that -msse wasn't enabled.
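For the makefile in the question, one minimal fix (a sketch, assuming a local build) is to add the flag to the compile flags:
CFLAGS=-O3 -c -Wall -msse2
or, if the binary only needs to run on the machine that builds it:
CFLAGS=-O3 -c -Wall -march=native
(and likewise for DFLAGS if you use the debug target).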
If using autoconf or something, maybe use this:
./configure CXXFLAGS="-O3 -march=native -fno-math-errno"
If you have .c files as well as .cpp, set CFLAGS as well as CXXFLAGS. More options like -flto can be helpful for optimization (cross-file inlining at link time), if you get them added to your LD options as well. The same goes for any other optimization options like -ffast-math, if you want to use it. Or at least -fno-trapping-math helps some, and GCC already did optimizations that violated the semantics trapping-math was supposed to provide. See this Q&A re: -fno-trapping-math -fno-math-errno being safe to use basically everywhere, even in code that depends on strict FP like Kahan summation.
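For example, a tree with both .c and .cpp files that also wants LTO might be configured along these lines (a sketch, not specific to any particular project):
./configure CFLAGS="-O3 -march=native -fno-math-errno -flto" CXXFLAGS="-O3 -march=native -fno-math-errno -flto" LDFLAGS="-flto"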
This worked for me also:
./configure CPPFLAGS="-march=native"
If I compile in two stages, using a particular language standard:
g++ -std=c++2a -c file1.cpp #compile source files
g++ -std=c++2a -c file2.cpp
g++ -std=c++2a file1.o file2.o -o program #link 'em
...can I leave the -std=c++2a out of the link command, or is it sometimes needed?
Version is gcc 10.
I guess you are compiling on Linux with a recent GCC. Be sure to read more about C++ and about your particular compiler (e.g. GCC 9 is not the same as GCC 10). Check what it is with g++ --version.
In practice you want to compile with warnings and debug information (in DWARF for GDB inside your ELF object files and executables), so use
g++ -std=c++2a -Wall -Wextra -g -c file1.cpp
and likewise for file2.cpp
Later (once your program is correct enough, e.g. has few bugs) you might want to ask the compiler to optimize it. So you could use
g++ -std=c++2a -Wall -Wextra -O3 -g -c file1.cpp
Practically speaking, you'll configure your build automation tool (e.g. GNU make or ninja) to run your compilation commands.
In rare cases, you might want to use link-time optimization. Then you need to both compile and link with g++ -std=c++2a -Wall -Wextra -O3 -g -flto and perhaps other options.
Be aware that link-time optimization can almost double your build time.
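For the two files from the question, that amounts to something like:
g++ -std=c++2a -Wall -Wextra -O3 -g -flto -c file1.cpp
g++ -std=c++2a -Wall -Wextra -O3 -g -flto -c file2.cpp
g++ -std=c++2a -Wall -Wextra -O3 -g -flto file1.o file2.o -o program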
You could also be interested in the static analysis options of GCC 10 (or even in writing your own static analysis using GCC plugins).
From the research I have done, the problem seems to be with clang. If that is the case, how would I fix this on a Mac? Would switching to Ubuntu/Linux be a better option?
I'm not sure if it is relevant, but my professor is having us write C-style code, compiled with g++ and saved as '.cpp' files, before we dive into C++.
Warning:
clang: warning: argument unused during compilation: '-ansi'
[-Wunused-command-line-argument]
Makefile:
CC = g++
calendar: main.o calendar.o appt.o day.o time.o
$(CC) main.o calendar.o appt.o day.o time.o -g -ansi -Wall -o calendar.out
%.o: %.cpp
$(CC) -Wall -c $<
You are correct in believing that this warning is issued by clang++ in these circumstances and not by g++, and that you see it on your Mac because g++ is really clang++.
The GCC option -ansi is meaningful for compilation and not meaningful for linkage. Clang is warning you because you are passing it in your linkage recipe:
$(CC) main.o calendar.o appt.o day.o time.o -g -ansi -Wall -o calendar.out
where it is ineffective, and not passing it to your compilation recipe:
$(CC) -Wall -c $<
The wording of the diagnostic is misleading, since it is provoked here precisely by the absence of compilation. Nevertheless, it does draw attention to a mistake on your part. Remove -ansi from your linkage recipe and add it to your compilation recipe.
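With that change, the linkage and compilation recipes from your makefile would look something like:
$(CC) main.o calendar.o appt.o day.o time.o -g -Wall -o calendar.out
$(CC) -ansi -Wall -c $<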
I'm building an SDL2/C++ program that needs to be portable to Windows, Mac, and Linux machines which may not have SDL installed.
I've read that static linking is the solution, but I'm not very good with compiling and don't know how to static link.
My program relies only on SDL2, GLU, and OpenGL. I'm compiling C++ with either MinGW (on Windows 8.1) or gcc (on Ubuntu 14.04) -- both of these OS's have SDL installed natively.
Here is my current makefile, derived from a sample makefile given to me by a professor of mine:
# Executable/file name
EXE=experiment
# MinGW
ifeq "$(OS)" "Windows_NT"
CFLG=-O3 -Wall -DUSEGLEW
LIBS= -lSDL2 -lglu32 -lopengl32
CLEAN=del *.exe *.o *.a
else
# OSX
ifeq "$(shell uname)" "Darwin"
CFLG=-O3 -Wall -Wno-deprecated-declarations
LIBS=-framework SDL2 -framework OpenGL
# Linux\Unix\Solaris
else
CFLG=-O3 -Wall
LIBS= `sdl2-config --cflags --libs` -lGLU -lGL -lm
endif
# OSX\Linux\Unix\Solaris
CLEAN=rm -f $(EXE) *.o *.a
endif
# Dependencies
$(EXE).o: $(EXE).cpp FORCE
.c.o:
gcc -c -o $@ $(CFLG) $<
.cpp.o:
g++ -std=c++11 -c -o $@ $(CFLG) $<
# Link
$(EXE):$(EXE).o
g++ -std=c++11 -O3 -o $@ $^ $(LIBS)
# Clean
clean:
$(CLEAN)
# Force
FORCE:
To link with a static library, you either specify the path to the library file
gcc -o out_bin your_object_files.o path/to/lib.a -lfoo
or ask the linker to use the static version with the -Bstatic linker flag. Usually you'll want to switch linking back to dynamic for the rest of the libraries, e.g. for static SDL2 and GLU but dynamic GL:
gcc -o out_bin your_object_files -Wl,-Bstatic -lSDL2 -lGLU -Wl,-Bdynamic -lGL
That of course implies that static versions of the libraries are present in the library search path (.a libraries for gcc on all the platforms you mention, whereas MSVC uses .lib for static libraries).
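Applied to the Linux branch of the makefile above, that could look something like this (a sketch; a static libSDL2.a usually pulls in additional system libraries, which sdl2-config --static-libs can list):
LIBS= `sdl2-config --cflags` -Wl,-Bstatic -lSDL2 -Wl,-Bdynamic -lGLU -lGL -lm -ldl -lpthread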
However, you usually don't really want to do that at all. It is common practice for software to either depend on system libraries (widespread on Linux, with packages and dependency lists) or to bring the required libraries with it. You can just distribute the SDL dynamic library with your program and load it with LD_LIBRARY_PATH or a relative rpath.
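For example, assuming you ship libSDL2.so in a libs directory next to the executable (the directory name is just an illustration), you can either point the loader at it at run time or bake a relative rpath into the binary at link time:
LD_LIBRARY_PATH=./libs ./experiment
g++ -std=c++11 -O3 -o experiment experiment.o -L./libs -lSDL2 -lGLU -lGL -lm -Wl,-rpath,'$ORIGIN/libs'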
Please also note that newer SDL2 versions implement dynamic loading of functions, which provides a way to override SDL with a user-specified dynamic library even when it is linked statically.
In the end it wasn't directly related to static linking. When linking statically, I had to include all of SDL's dependency libraries. It turned out that having -mwindows is what caused console communication to fail.
I have a GNU build system with autoconf-2.69, automake-1.14.1, libtool-2.4.2. I've configured with --host=i686-linux on a x86_64 RHEL6 host OS to build a 32-bit program. The libtool command seems to be:
/bin/sh ../libtool --tag=CXX --mode=link g++ -I/home/STools/RLX/boost/include/boost-1_44 -m32 -g3 -Wall -static -o engine engine-main.o ../components/librlxvm.la /home/STools/RLX/boost/include/boost-1_44/../../lib/libboost_program_options-gcc42-mt-1_44.a -lz -lpthread -ldl -lrt -ldl -lz -lm
But the actual command searches the 64-bit libraries, not the 32-bit libraries, as shown below:
libtool: link: g++ -I/home/STools/RLX/boost/include/boost-1_44 -m32 -g3 -Wall -o engine engine-main.o -L/home/robert_bu/src/gcc/gcc-4.2.2/build-x86_64/x86_64-unknown-linux-gnu/libstdc++-v3/src -L/home/robert_bu/src/gcc/gcc-4.2.2/build-x86_64/x86_64-unknown-linux-gnu/libstdc++-v3/src/.libs -L/home/robert_bu/src/gcc/gcc-4.2.2/build-x86_64/./gcc ../components/.libs/librlxvm.a /home/STools/RLX/boost/include/boost-1_44/../../lib/libboost_program_options-gcc42-mt-1_44.a /home/STools/RLX/gcc-4.2.2-x86_64/lib/../lib64/libstdc++.so -L/lib/../lib64 -L/usr/lib/../lib64 -lc -lgcc_s -lrt -ldl -lz -lm -pthread -Wl,-rpath -Wl,/home/STools/RLX/gcc-4.2.2-x86_64/lib/../lib64 -Wl,-rpath -Wl,/home/STools/RLX/gcc-4.2.2-x86_64/lib/../lib64
The --host setting seems to have no effect. Is there any way to tell libtool that the 32-bit libraries are what we want?
It seems that libtool uses CC and CXX to determine the library search path. After I set CC to "gcc -m32" and CXX to "g++ -m32", it works. So libtool does not add -m32 automatically, even when building a 32-bit program on a 64-bit system with --host set.
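In autoconf terms, that amounts to passing the compilers on the configure line, for example:
./configure --host=i686-linux CC="gcc -m32" CXX="g++ -m32"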
You're being hit by the problem of libtool .la file expansion. In particular, libstdc++.la is being expanded for you to a full path rather than a simple -lstdc++.
My suggestion is to remove the .la files from the SDK you're using (/home/STools). This way libtool can't assume things for you. Usually the ones you have in the system are fine, because those libraries are already in the search path, so libtool does not need to use -rpath or the full path to the .so file.
Depending on how well the SDK was crafted, this might or might not work correctly, so take it with a grain of salt.
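If you decide to go that route, something like this lists the .la files that the SDK ships, so you can remove or rename them:
find /home/STools -name '*.la'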
When compiling, how can you determine what compiler flags are set? I'm dealing with a weird issue where, if I don't have any environmental variables set:
$ env | grep FLAG
$
then gfortran uses all these flags:
-Wall -arch i686 -arch x86_64 -Wall -undefined dynamic_lookup -bundle
Whereas, in an environment where these are set
$ env | grep FLAG
LDFLAGS=
CCFLAGS=
CXXFLAGS=
CFLAGS=
FFLAGS=
the only flag is: -Wall
I'm just lost as to how to ensure a consistent build environment when distributing code.
EDIT: Further investigation hints that this magic may happen in numpy.distutils.fcompiler, but I don't know!
Well, I'm not at all sure about NumPy, but distutils uses distutils.sysconfig.customize_compiler to set the flags.
By default this uses the flags that were set in the Makefile when your interpreter was built, but they can be overridden or added to by environment variables such as CC, CFLAGS, and LDFLAGS.
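For instance, you can print the flags that were baked in when the interpreter was built, and then pin them per build through the environment (a sketch; the exact set of variables distutils consults varies by version):
python -c "from distutils import sysconfig; print(sysconfig.get_config_var('CFLAGS'))"
CFLAGS="-O3 -Wall" LDFLAGS="" python setup.py build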