mingw/msys2 build and link to dll without version number

I'm building pjsip with mingw/msys2 and it keeps building dlls with .2 after them (.dll.2 files) as well as .dll files. If I delete the .dll.2 files that are built and try to build my program, it will STILL link against the .dll.2 versions and complain that they don't exist.
Command I run to build pjsip:
./configure CFLAGS="${MAKEFLAGS}" CXXFLAGS="${MAKEFLAGS}" \
--build=${MINGW_CHOST} \
--host=${MINGW_CHOST} \
--target=${MINGW_CHOST} \
--prefix="${OUT_PREFIX}" \
--disable-openh264 \
--disable-v4l2 \
--disable-ffmpeg \
--enable-libsamplerate \
--disable-video \
--enable-shared \
--disable-static \
--disable-libyuv \
--with-external-speex \
--with-gnutls
I can see in the build output that it builds the .dll.2 files and then symlinks them:
ln -sf libpjsua2.dll.2 ../lib/libpjsua2.dll
How can I make my program depend only on the .dll and not the .dll.2?

You are going to have to go (grep, perhaps) through the project's makefiles, find the rule for libpjsua2, and modify it to remove the .2 'extension'. My guess is that the library extension and the integer 'extension' will not be hard-coded, so just search for libpjsua2. You can also remove the ln -sf bit at that point. Save any changes you make to a copy (or a diff) outside of the source/build directories so that you can reapply them if you ever download and unpack the source again.
The reason you are running into this issue is that, at link time, the symbolic link is resolved and the actual name of the library is used. No amount of removing libraries is going to change this. Based only on the information you have given, it seems you might be misunderstanding what is actually being built: libpjsua2.dll is not a library in and of itself, but rather a link to libpjsua2.dll.2. When you delete libpjsua2.dll.2, you are deleting the actual library; libpjsua2.dll then points nowhere, and you end up with a "not found" error.
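For what it's worth, a rough way to dig into this from the pjsip source tree (a sketch only; the makefile patterns and the executable name myprogram.exe are placeholders, not pjsip specifics):
# find every makefile that mentions the versioned library name
grep -rn 'libpjsua2' . --include='Makefile*' --include='*.mak'
# list which DLL names your built program actually imports
objdump -p myprogram.exe | grep 'DLL Name'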

Related

How to run sanitizers on whole project

I'm trying to get familiar with sanitizers such as ASan, LSan, etc. and have already got a lot of useful information from here: https://developers.redhat.com/blog/2021/05/05/memory-error-checking-in-c-and-c-comparing-sanitizers-and-valgrind
I am able to run all sorts of sanitizers on specific files, as shown on the site, like this:
clang -g -fsanitize=address -fno-omit-frame-pointer -g ../TestFiles/ASAN_TestFile.c
ASAN_SYMBOLIZER_PATH=/usr/local/bin/llvm-symbolizer ./a.out >../Logs/ASAN_C.log 2>&1
which generates a log with the issues found. Now I would like to extend this to run when building the project with CMake. This is the command to build it at the moment:
cmake -S . -B build
cd build
make
Is there any way I can use this build with the sanitizers added, without having to alter the CMakeLists.txt file?
For instance something like this:
cmake -S . -B build
cd build
make -fsanitize=address
./a.out >../Logs/ASAN_C.log 2>&1
The reason is that I want to be able to build the project multiple times with different sanitizers (since they cannot all be used together) and have a log created, without altering the CMakeLists.txt file (I just want to be able to quickly test the whole project for memory issues instead of doing it for each file).
You can add additional compiler flags from the command line during build configuration:
cmake -D CMAKE_CXX_FLAGS="-fsanitize=address" -D CMAKE_C_FLAGS="-fsanitize=address" /path/to/source
If your CMakeLists.txt is configured properly, the above should work. If it does not, try passing the flags as environment variables instead:
cmake -E env CXXFLAGS="-fsanitize=address" CFLAGS="-fsanitize=address" cmake /path/to/source
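For example, to keep each sanitizer in its own build tree (a minimal sketch; the build directory names and the a.out/Logs paths are assumptions based on the question):
# ASan build in its own directory
cmake -S . -B build-asan -D CMAKE_C_FLAGS="-fsanitize=address -fno-omit-frame-pointer -g" -D CMAKE_CXX_FLAGS="-fsanitize=address -fno-omit-frame-pointer -g"
cmake --build build-asan
./build-asan/a.out > Logs/ASAN.log 2>&1
# repeat with -fsanitize=undefined etc. in a separate directory such as build-ubsan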

From CMake, set up 'make' to use the '-j' option by default

I want my CMake project to be built with make -j N whenever I call make from the terminal. I don't want to set the -j option manually every time.
For that, I set the CMAKE_MAKE_PROGRAM variable to the specific command line. I use the ProcessorCount() function, which gives the number of processors for performing the build in parallel.
When I do make, I do not see any speed-up. However, if I do make -j N, then it is definitely built faster.
Would you please help me with this issue? (I am developing on Linux.)
Here is the snippet that I use in CMakeLists.txt:
include(ProcessorCount)
ProcessorCount(N)
message("number of processors: " ${N})
if(NOT N EQUAL 0)
    set(CTEST_BUILD_FLAGS -j${N})
    set(ctest_test_args ${ctest_test_args} PARALLEL_LEVEL ${N})
    set(CMAKE_MAKE_PROGRAM "${CMAKE_MAKE_PROGRAM} -j ${N}")
endif()
message("cmake make program" ${CMAKE_MAKE_PROGRAM})
Thank you very much.
If you want to speed up the build, you can run multiple make processes in parallel; it is make that you parallelize, not cmake.
To have every build use a predefined number of parallel processes, you can define this in MAKEFLAGS.
Set MAKEFLAGS in your environment script, e.g. ~/.bashrc as you want:
export MAKEFLAGS=-j8
On Linux, the following sets MAKEFLAGS to the number of CPUs minus 1 (keeping one CPU free for other tasks while building) and is useful in environments with dynamic resources, e.g. VMware:
export MAKEFLAGS=-j$(($(grep -c "^processor" /proc/cpuinfo) - 1))
New from cmake v3.12 on:
The command line has a new option --parallel <JOBS>.
Example:
cmake --build build_arm --parallel 4 --target all
Example with the number of CPUs minus 1, using nproc:
cmake --build build_arm --parallel $(($(nproc) - 1)) --target all
By setting the CMAKE_MAKE_PROGRAM variable you want to affect the build process. But:
This variable affects only builds done via cmake --build, not direct calls to the native tool (make):
The CMAKE_MAKE_PROGRAM variable is set for use by project code. The value is also used by the cmake(1) --build and ctest(1) --build-and-test tools to launch the native build process.
This variable should be a CACHE variable. That is how the make-like generators use it:
These generators store CMAKE_MAKE_PROGRAM in the CMake cache so that it may be edited by the user.
That is, you need to set this variable with
set(CMAKE_MAKE_PROGRAM <program> CACHE PATH "Path to build tool" FORCE)
This variable should refer to the executable itself, not to a program with arguments:
The value may be the full path to an executable or just the tool name if it is expected to be in the PATH.
That is, the value "make -j 2" cannot be used for that variable (splitting the arguments as a list,
set(CMAKE_MAKE_PROGRAM make -j 2 CACHE PATH "Path to build tool" FORCE)
wouldn't help either).
In summary, you may redefine the behavior of cmake --build calls by setting the CMAKE_MAKE_PROGRAM variable to a script which calls make with parallel options. But you cannot affect the behavior of direct make calls.
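If you do go that route, a minimal sketch of such a wrapper might look like this (the path ~/bin/make-parallel is an assumption, and nproc is assumed to be available):
#!/bin/sh
# make-parallel: forward all arguments to make, adding a parallel-jobs flag
exec make -j"$(nproc)" "$@"
You would then point CMake at it during configuration, e.g. cmake -D CMAKE_MAKE_PROGRAM=$HOME/bin/make-parallel -S . -B build, after which cmake --build build runs make through the wrapper; a plain make in the build directory remains unaffected.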
You may set the env variable MAKEFLAGS using this command
export MAKEFLAGS=-j$(nproc)
My solution is to have a small script which runs make and includes all sorts of other features, not just the number of CPUs.
I call my script mk and I do a chmod 755 mk so I can run it with ./mk in the root of my project. I also have a few flags to be able to run various things with a simple command line. For example, while working on the code, when I get many errors, I like to pipe the output to less. I can do that with ./mk -l without having to retype all the heavy-duty Unix stuff...
As you can see, I have the -j4 in a couple of places where it makes sense. For the -l option, I don't want it because in this case it would eventually cause multiple errors to be printed at the same time (I tried that before!)
#!/bin/sh -e
#
# Execute make
case "$1" in
"-l")
    make -C ../BUILD/Debug 2>&1 | less -R
    ;;
"-r")
    make -j4 -C ../BUILD/Release
    ;;
"-d")
    rm -rf ../BUILD/Debug/doc/lpp-doc-?.*.tar.gz \
           ../BUILD/Debug/doc/lpp-doc-?.*
    make -C ../BUILD/Debug
    ;;
"-t")
    make -C ../BUILD/Debug
    ../BUILD/Debug/src/lpp tests/suite/syntax-print.logo
    g++ -std=c++14 -I rt l.cpp rt/*.cpp
    ;;
*)
    make -j4 -C ../BUILD/Debug
    ;;
esac
# From the https://github.com/m2osw/lpp project
With CMake, it wouldn't work unless, as Tsyvarev mentioned, you create your own script. But I personally don't think it's sensible to have make called through such a wrapper script. Plus it could break a build process that does not expect that strange script. Finally, my script, as I mentioned, allows me to vary the options depending on the situation.
I usually use an alias in Linux to set cm equal to cmake .. && make -j12, or write a shell script that handles the make and clean steps...
alias cm='cmake .. && make -j12'
Then use cm to configure and build in a single command.

make does not rebuild target on header file modification

I have a makefile of this kind:
program: \
		a/a.o \
		b/b.o
	$(CXX) $(CXXFLAGS) -o program \
		a/a.o \
		b/b.o
a.o: \
		a/a.cpp \
		a/a.h
	$(CXX) $(CXXFLAGS) -c a/a.cpp
b.o: \
		b/b.cpp \
		b/b.h
	$(CXX) $(CXXFLAGS) -c b/b.cpp
So in the directory of the makefile I have two subdirectories a and b
that contain respectively a.h, a.cpp and b.h, b.cpp.
The problem is that if I modify a .cpp file, issuing make rebuilds the target program,
but if I modify a .h file, make does not rebuild anything and just says
make: `program' is up to date.
I can't understand why, because the .h files are in the prerequisites line
along with the .cpp files.
Interestingly, if I instead issue a make on an object file target like
$ make a.o
the modifications to a/a.h
are detected and the target a/a.o is rebuilt.
Where is the problem?
The subdirectories that you added to the question later are indeed causing the problem. The target program depends on a/a.o and b/b.o, but there are no explicit rules to make those two .o files -- only the targets a.o and b.o are present, and those are not in the subdirectories.
Therefore, make will look for implicit rules to build a/a.o and b/b.o. Such a rule does exist; you will see it being found when you run make -d. That implicit rule depends only on a/a.cpp, not on a/a.h. Therefore, changing a/a.cpp will make a/a.o out of date according to that implicit rule, whereas changing a/a.h will not.
For your reference, the make User's Manual has a section called Catalogue of Built-In Rules. It also explains that you can use the --no-builtin-rules option to disable that implicit behavior. If you use it, you will see that make cannot find any rules to make a/a.o and b/b.o.
Finally, running make a.o will run the recipe for the target a.o as defined in your makefile. That target does have a/a.h as its prerequisite so any change to that file will result in a recompile. But essentially, that has nothing to do with the target program, which has different prerequisites.
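A minimal sketch of that fix, based on the makefile from the question (recipe lines must be indented with a tab): name the object targets with their directory paths, so the explicit rules, including the header prerequisites, are the ones that get used:
program: a/a.o b/b.o
	$(CXX) $(CXXFLAGS) -o program a/a.o b/b.o
a/a.o: a/a.cpp a/a.h
	$(CXX) $(CXXFLAGS) -c a/a.cpp -o a/a.o
b/b.o: b/b.cpp b/b.h
	$(CXX) $(CXXFLAGS) -c b/b.cpp -o b/b.o
With those paths in place, touching a/a.h makes a/a.o, and therefore program, out of date.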

NetBeans doesn't recognize Makefile.am

I'm coming from a Objective-C/Xcode background.
I'm used to working with C projects already imported into XCode, but now I want to analyse an existing implementation of an algorithm I'm interested in integrating with my project.
Only that this project is written completely in C and has nothing to do with Objective-C/Xcode etc.
I'm not sure what the best way is to view a purely C project on a Mac, so I installed NetBeans for C/C++.
The problem is that when I try to create a New Project in NetBeans and select C/C++ Project with Existing Sources, it complains that
no make files or configure scripts were found
in the root directory... although it clearly has a Makefile.am.
I know that the Balsa project is written for Linux, but I'm not interested in building the binary; I just want to look at the source code in an IDE kind of way (i.e. so I can click on a function call and see where it's implemented, etc.).
So, in short, my question is: why isn't NetBeans recognising my Makefile.am?
And just for reference, here is the content of the Makefile.am:
#intl dir needed for tarball --disable-nls build.
DISTCHECK_CONFIGURE_FLAGS=--disable-extra-mimeicons --without-gnome --without-html-widget
SUBDIRS = po sounds images doc libbalsa libinit_balsa src
# set tar in case it is not set by automake or make
man_MANS=balsa.1
pixmapdir = $(datadir)/pixmaps
pixmap_DATA = gnome-balsa2.png
desktopdir = $(datadir)/applications
desktop_in_files = balsa.desktop.in balsa-mailto-handler.desktop.in
desktop_DATA = balsa.desktop balsa-mailto-handler.desktop
@INTLTOOL_DESKTOP_RULE@
balsa_extra_dist = \
GNOME_Balsa.server.in \
HACKING \
balsa-mail-style.xml \
balsa-mail.lang \
balsa.1.in \
balsa.spec.in \
bootstrap.sh \
docs/mh-mail-HOWTO \
docs/pine2vcard \
docs/vconvert.awk \
$(desktop_in_files) \
gnome-balsa2.png \
intltool-extract.in \
intltool-merge.in \
intltool-update.in \
mkinstalldirs
if BUILD_WITH_G_D_U
balsa_g_d_u_extra_dist = gnome-doc-utils.make
endif
if !BUILD_WITH_UNIQUE
serverdir = $(libdir)/bonobo/servers
server_in_files = GNOME_Balsa.server
server_DATA = $(server_in_files:.server.in=.server)
$(server_in_files): $(server_in_files).in
sed -e "s|\@bindir\@|$(bindir)|" $< > $@
endif
EXTRA_DIST = \
$(balsa_extra_dist) \
$(balsa_g_d_u_extra_dist)
if BUILD_WITH_GTKSOURCEVIEW2
gtksourceviewdir = $(BALSA_DATA_PREFIX)/gtksourceview-2.0
gtksourceview_DATA = balsa-mail.lang \
balsa-mail-style.xml
endif
DISTCLEANFILES = $(desktop_DATA) $(server_DATA) \
intltool-extract intltool-merge intltool-update \
gnome-doc-utils.make
dist-hook: balsa.spec
cp balsa.spec $(distdir)
@MAINT@RPM: balsa.spec
@MAINT@ rm -f *.rpm
@MAINT@ $(MAKE) distdir="$(PACKAGE)-@BALSA_VERSION@" dist
@MAINT@ cp $(top_srcdir)/rpm-po.patch $(top_builddir)/rpm-po.patch
@MAINT@ rpm -ta "./$(PACKAGE)-@BALSA_VERSION@.tar.gz"
@MAINT@ rm $(top_builddir)/rpm-po.patch
@MAINT@ -test -f "/usr/src/redhat/SRPMS/$(PACKAGE)-@VERSION@-@BALSA_RELEASE@.src.rpm" \
@MAINT@ && cp -f "/usr/src/redhat/SRPMS/$(PACKAGE)-@VERSION@-@BALSA_RELEASE@.src.rpm" .
@MAINT@ -for ping in /usr/src/redhat/RPMS/* ; do \
@MAINT@ if test -d $$ping ; then \
@MAINT@ arch=`echo $$ping |sed -e 's,/.*/\([^/][^/]*\),\1,'` ; \
@MAINT@ f="$$ping/$(PACKAGE)-@VERSION@-@BALSA_RELEASE@.$$arch.rpm" ; \
@MAINT@ test -f $$f && cp -f $$f . ; \
@MAINT@ fi ; \
@MAINT@ done
@MAINT@snapshot:
@MAINT@ $(MAKE) distdir=$(PACKAGE)-`date +"%y%m%d"` dist
@MAINT@balsa-dcheck:
@MAINT@ $(MAKE) BALSA_DISTCHECK_HACK=yes distcheck
## to automatically rebuild aclocal.m4 if any of the macros in
## `macros/' change
bzdist: distdir
@test -n "$(AMTAR)" || { echo "AMTAR undefined. Run make bzdist AMTAR=gtar"; false; }
-chmod -R a+r $(distdir)
$(AMTAR) chojf $(distdir).tar.bz2 $(distdir)
-rm -rf $(distdir)
# macros are not used any more by current configure.in, see also
# post by Ildar Mulyukov to balsa-list, 2006.06.27
# ACLOCAL_AMFLAGS = -I macros
UPDATE
I tried this answer, but I got the following:
autoreconf --install
configure.in:250: warning: macro `AM_GLIB_GNU_GETTEXT' not found in library
glibtoolize: putting auxiliary files in `.'.
glibtoolize: copying file `./ltmain.sh'
glibtoolize: putting macros in AC_CONFIG_MACRO_DIR, `m4'.
glibtoolize: copying file `m4/libtool.m4'
glibtoolize: copying file `m4/ltoptions.m4'
glibtoolize: copying file `m4/ltsugar.m4'
glibtoolize: copying file `m4/ltversion.m4'
glibtoolize: copying file `m4/lt~obsolete.m4'
glibtoolize: Remember to add `LT_INIT' to configure.in.
glibtoolize: Consider adding `-I m4' to ACLOCAL_AMFLAGS in Makefile.am.
glibtoolize: `AC_PROG_RANLIB' is rendered obsolete by `LT_INIT'
configure.in:250: warning: macro `AM_GLIB_GNU_GETTEXT' not found in library
configure.in:249: error: possibly undefined macro: AC_PROG_INTLTOOL
If this token and others are legitimate, please use m4_pattern_allow.
See the Autoconf documentation.
configure.in:250: error: possibly undefined macro: AM_GLIB_GNU_GETTEXT
configure.in:301: error: possibly undefined macro: AC_MSG_ERROR
autoreconf: /usr/bin/autoconf failed with exit status: 1
I'm looking into using the suggestions in the output..
Interesting. I just tried downloading balsa and noticed that they distribute the Makefile.am and configure.in files instead of a ready-to-run configure script. You could let the package maintainers know they aren't doing anyone any favors by not pre-generating the configure script from their own autotools sources.
Makefile.am is not a real Makefile. It is the automake input that generates Makefile.in, which in turn gets translated into a real Makefile by the configure script.
Try the following steps:
Download the sources to balsa again clean. Then from the command prompt type the following:
autoreconf --install
(If you don't have autoreconf, you likely need to install the autotools packages - ughh...)
That should generate the configure script. Then type:
./configure
It complained about some missing GMime dependencies, so I didn't see it actually generate a Makefile. Once you get to the point at which a Makefile is generated, you should be able to point NetBeans at "open project from existing sources".
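In other words, the intended flow is roughly this (a sketch; the Homebrew package names are assumptions and depend on what you already have installed):
# install the autotools (plus intltool/gettext, which the errors above point at), e.g. with Homebrew
brew install autoconf automake libtool intltool gettext
autoreconf --install    # Makefile.am + configure.in -> configure and Makefile.in
./configure             # Makefile.in -> Makefile (may still stop on missing dependencies such as GMime)
make                    # optional; NetBeans only needs the generated Makefile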
As per abbood's request...
NetBeans is not very good for C development. One approach would be to build an Xcode project around the source base. The maintainers of the project may even accept the Xcode project as a contribution.

Build kernel module into a specific directory

Is there a way to set an output directory for building kernel modules inside my makefile?
I want to keep my source directory clean of build files.
KBUILD_OUTPUT and O= did not work for me and were failing to find the kernel headers when building externally.
My solution is to symlink the source files into the bin directory and dynamically generate a new Makefile in the bin directory. This allows all build files to be cleaned up easily, since the dynamic Makefile can always just be recreated.
INCLUDE=include
SOURCE=src
TARGET=mymodule
OUTPUT=bin
EXPORT=package
SOURCES=$(wildcard $(SOURCE)/*.c)
# Depends on bin/include bin/*.c and bin/Makefile
all: $(OUTPUT)/$(INCLUDE) $(subst $(SOURCE),$(OUTPUT),$(SOURCES)) $(OUTPUT)/Makefile
	make -C /lib/modules/$(shell uname -r)/build M=$(PWD)/$(OUTPUT) modules
# Create a symlink from src to bin
$(OUTPUT)/%: $(SOURCE)/%
	ln -s ../$< $@
# Generate a Makefile with the needed obj-m and mymodule-objs set
$(OUTPUT)/Makefile:
	echo "obj-m += $(TARGET).o\n$(TARGET)-objs := $(subst $(TARGET).o,, $(subst .c,.o,$(subst $(SOURCE)/,,$(SOURCES))))" > $@
clean:
	rm -rf $(OUTPUT)
	mkdir $(OUTPUT)
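With that in place, a typical session would look something like this (assuming the module name mymodule and the src/bin layout above):
make          # symlinks the sources into bin/, generates bin/Makefile, builds the module under bin/
make clean    # removes bin/ entirely and recreates it empty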
If you are building inside the kernel tree you can use the O variable:
make O=/path/to/mydir
If you are compiling outside the kernel tree (a module, or any other kind of program) you need to change your Makefile to output into a different directory. Here is a little example of a Makefile rule which outputs into the MY_DIR directory:
$(MY_DIR)/test: test.c
	gcc -o $@ $<
and then write:
$ make MY_DIR=/path/to/build/directory
Same here, but I used a workaround that worked for me:
Create a sub-directory for every arch name (e.g. "debug_64").
Under "debug_64", create symbolic links to all the .c and .h files, keeping the same structure.
Copy the makefile to "debug_64" and set the right flags for a 64-bit Debug build, e.g.
ccflags-y := -DCRONO_DEBUG_ENABLED
ccflags-y += -I$(src)/../../../lib/include
KBUILD_AFLAGS += -march=x86_64
Remember to adjust the relative directory paths one level down, e.g. ../inc becomes ../../inc.
Repeat the same for every arch/profile.
Now we have one source tree, different folders, and different makefiles.
By the way, creating profiles inside makefiles for a kernel module build is not an easy job, so I preferred to create a copy of the makefile for every arch.
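For reference, a minimal sketch of what such a per-arch makefile might contain (the module name is assumed; the flags are the ones from the example above; recipe lines are tab-indented):
obj-m := mymodule.o
ccflags-y := -DCRONO_DEBUG_ENABLED
ccflags-y += -I$(src)/../../../lib/include
all:
	$(MAKE) -C /lib/modules/$(shell uname -r)/build M=$(CURDIR) modules
clean:
	$(MAKE) -C /lib/modules/$(shell uname -r)/build M=$(CURDIR) clean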