syntax error near unexpected token `AX_VALGRIND_CHECK' - valgrind

I am trying to integrate valgrind into my unit test framework by using the AX_VALGRIND_CHECK m4 macro described at https://www.gnu.org/software/autoconf-archive/ax_valgrind_check.html. In my configure.ac I have:
AC_CONFIG_MACRO_DIR([m4])
...
AX_VALGRIND_DFLT()
AX_VALGRIND_CHECK
I have placed the provided .m4 file in both ./m4 and /usr/share/aclocal. To generate the configure script etc., I run the following:
aclocal && autoconf && autoreconf --no-recursive --install && \
autoheader && libtoolize --force && automake --force-missing \
--add-missing
However, when I go and run ./configure I get the following error:
./configure: line 12914: syntax error near unexpected token `AX_VALGRIND_CHECK'
./configure: line 12914: `AX_VALGRIND_CHECK'
What do I need to do to get my configure script to work with the macros provided by the .m4 script above? I am not sure what other information to provide.
Below is my configure.ac. I will try to find at which point things break using this configure.ac vs. the one generated by autoreconf -i as posted by @Kusalananda.
AC_INIT([binary_balanced], [0.1], [mehoggan#gmail.com])
AM_INIT_AUTOMAKE([-Wall -Werror foreign subdir-objects])
AC_CONFIG_SRCDIR([./src/])
AC_CONFIG_HEADERS([config.h])
AC_CONFIG_MACRO_DIR([m4])
AC_PROG_CC
AM_PROG_AR
AM_PATH_CHECK
LT_INIT
# Checks for programs.
AC_PROG_CC
# Checks for libraries.
AX_VALGRIND_DFLT()
AX_VALGRIND_CHECK
# Checks for header files.
# Checks for typedefs, structures, and compiler characteristics.
# Checks for library functions.
AC_CONFIG_FILES([Makefile
src/Makefile
tests/Makefile])
AC_OUTPUT

I cannot re-create your problem.
I also very seldom run anything other than autoreconf -i. This will re-run the other autotools as needed.
I put the ax_valgrind_check.m4 into a ./m4 directory and created a stub configure.ac:
AC_PREREQ([2.69])
AC_INIT([test],[0.0.0-dev])
AM_INIT_AUTOMAKE([foreign])
AC_CONFIG_MACRO_DIR([m4])
AX_VALGRIND_DFLT()
AX_VALGRIND_CHECK
Running autoreconf -i creates a configure script that does the following:
$ ./configure
checking for a BSD-compatible install... /Users/kk/sw/bin/ginstall -c
checking whether build environment is sane... yes
checking for a thread-safe mkdir -p... /Users/kk/sw/bin/gmkdir -p
checking for gawk... gawk
checking whether make sets $(MAKE)... yes
checking whether make supports nested variables... yes
checking for valgrind... no
So the macros are picked up (which they weren't in your case).
So, run autoreconf -i to see if that sorts things out for you.
If you can't get this to work, try installing the autoconf-archive package for whatever Unix you're on. It will also contain this macro.
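For reference, a typical sequence on a Debian/Ubuntu-style system looks like the sketch below (the package names are an assumption for that platform; the --enable-valgrind option and check-valgrind target are what the macro's documentation linked above describes). Note that, per that documentation, the test suite's Makefile.am also needs a @VALGRIND_CHECK_RULES@ line so that the check-valgrind target is generated.
# Install the macro collection and valgrind, then regenerate and build.
sudo apt-get install autoconf-archive valgrind
autoreconf -i
./configure --enable-valgrind
make check-valgrind   # runs the test suite under the available valgrind tools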

Related

How to run sanitizers on whole project

I'm trying to get familiar with sanitizers such as ASan, LSan, etc. and have already found a lot of useful information here: https://developers.redhat.com/blog/2021/05/05/memory-error-checking-in-c-and-c-comparing-sanitizers-and-valgrind
I am able to run all sorts of sanitizers on specific files, as shown on that site, like this:
clang -g -fsanitize=address -fno-omit-frame-pointer -g ../TestFiles/ASAN_TestFile.c
ASAN_SYMBOLIZER_PATH=/usr/local/bin/llvm-symbolizer ./a.out >../Logs/ASAN_C.log 2>&1
which generates a log with the issues found. Now I would like to extend this to run when building the project with CMake. This is the command used to build it at the moment:
cmake -S . -B build
cd build
make
Is there any way I can add the sanitizers to this build, without having to alter the CMakeLists.txt file?
For instance something like this:
cmake -S . -B build
cd build
make -fsanitize=address
./a.out >../Logs/ASAN_C.log 2>&1
The reason is that I want to be able to build the project multiple times with different sanitizers (since they cannot be used together) and have a log created without altering the CMakeLists.txt file (I just want to be able to quickly test the whole project for memory issues instead of doing it file by file).
You can add additional compiler flags from the command line during the build configuration:
cmake -D CMAKE_CXX_FLAGS="-fsanitize=address" -D CMAKE_C_FLAGS="-fsanitize=address" /path/to/CMakeLists.txt
If your CMakeLists.txt is configured properly, the above should work. If it does not, try passing the flags as environment variables:
cmake -E env CXXFLAGS="-fsanitize=address" CFLAGS="-fsanitize=address" cmake /path/to/CMakeLists.txt
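If the goal is one build and one log per sanitizer, a small wrapper script keeps CMakeLists.txt untouched. Below is a minimal sketch, assuming the project produces a single a.out binary at the top of the build tree and that a Logs directory exists next to the sources (both taken from the question; adjust names and paths to your project):
#!/bin/sh
# Configure, build and run once per sanitizer, each in its own build tree.
set -e
for SAN in address undefined; do
    cmake -S . -B "build-$SAN" \
        -D CMAKE_C_FLAGS="-fsanitize=$SAN -fno-omit-frame-pointer -g" \
        -D CMAKE_CXX_FLAGS="-fsanitize=$SAN -fno-omit-frame-pointer -g"
    cmake --build "build-$SAN"
    "./build-$SAN/a.out" > "Logs/$SAN.log" 2>&1 || true   # keep going even if a run reports errors
done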

CMake incremental compilation through toolchain upgrade

I am trying to find a way to enable incremental compilation with CMake through a toolchain upgrade. Here is the problematic scenario:
Branch main uses g++-9 (using CMAKE_CXX_COMPILER=g++-9)
A new branch uses g++-10 (using CMAKE_CXX_COMPILER=g++-10)
Commits are happening on both branches
Incremental builds on one branch work fine
Switching to the other branch and explicitly invoking CMake fails
My question is the following: I'm looking for the proper way to make the invocation of CMake succeed and rebuild the whole project from scratch when a toolchain change happens.
Here is a script that will make it quick and easy to reproduce the problem. This script requires Docker. It will create folders Sources and Build at the location where it is executed to avoid littering your filesystem. It then creates Dockerfiles to build docker containers with both g++ and cmake. It then creates a dummy Hello World C++ CMake project. Finally, it creates a folder for build artifacts and then executes the build with g++-9 and then g++-10. The second build fails because CMake generates an error.
#!/bin/bash
set -e
mkdir -p Sources
mkdir -p Build
# Creates a script that will be executed inside the docker container to perform builds
cat << EOF > Sources/Compile.sh
cd /Build \
&& cmake /Sources \
&& make \
&& ./IncrementalBuild
EOF
# Creates a Dockerfile that will be used to have both gcc-9 and cmake
cat << EOF > Sources/Dockerfile-gcc9
FROM gcc:9
RUN apt-get update && apt-get install -y cmake
RUN ln -s /usr/local/bin/g++ /usr/local/bin/g++-9
ADD Compile.sh /Compile.sh
RUN chmod +x /Compile.sh
ENTRYPOINT /Compile.sh
EOF
# Creates a Dockerfile that will be used to have both gcc-10 and cmake
cat << EOF > Sources/Dockerfile-gcc10
FROM gcc:10
RUN apt-get update && apt-get install -y cmake
RUN ln -s /usr/local/bin/g++ /usr/local/bin/g++-10
ADD Compile.sh /Compile.sh
RUN chmod +x /Compile.sh
ENTRYPOINT /Compile.sh
EOF
# Creates a dummy C++ program that will be compiled
cat << EOF > Sources/main.cpp
#include <iostream>
int main()
{
std::cout << "Hello World!\n";
}
EOF
# Creates CMakeLists.txt that will be used to compile the dummy C++ program
cat << EOF > Sources/CMakeLists.txt
cmake_minimum_required(VERSION 3.9)
project(IncrementalBuild CXX)
add_executable(IncrementalBuild main.cpp)
set_target_properties(IncrementalBuild PROPERTIES CXX_STANDARD 17)
EOF
# Build the docker images with both Dockerfiles created earlier
docker build -t cmake-gcc:9 -f Sources/Dockerfile-gcc9 Sources
docker build -t cmake-gcc:10 -f Sources/Dockerfile-gcc10 Sources
# Run a build with g++-9
echo ""
echo "### Compiling with g++-9 and then running the result..."
docker run --rm --user $(id -u):$(id -g) -v $(pwd)/Sources:/Sources -v $(pwd)/Build:/Build -e CXX=g++-9 cmake-gcc:9
echo ""
# Run a build with g++-10
echo "### Compiling with g++-10 and then running the result..."
docker run --rm --user $(id -u):$(id -g) -v $(pwd)/Sources:/Sources -v $(pwd)/Build:/Build -e CXX=g++-10 cmake-gcc:10
echo ""
# Print success if we reach this point
echo "SUCCESS!"
I'm looking for the proper way to make the invocation of CMake succeed and rebuild the whole project from scratch when a toolchain change happens.
The proper way is to use a fresh binary directory. Either remove the binary directory when changing toolchains and let it be recreated, or just use a separate directory for each toolchain.
Use a Build/gcc10 binary directory for gcc10 builds and Build/gcc9 for gcc9 builds.
There is no need to mkdir and cd Build with current CMake - use cmake -S . -B Build. Also, do not invoke make directly - prefer cmake --build Build so that you can switch generators later.
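A minimal sketch of that layout, assuming the compiler is selected via the CXX environment variable as in the question's Docker setup:
# One binary directory per toolchain; each keeps its own CMake cache.
CXX=g++-9  cmake -S Sources -B Build/gcc9  && cmake --build Build/gcc9
CXX=g++-10 cmake -S Sources -B Build/gcc10 && cmake --build Build/gcc10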
"If you change the toolchain, you should start with a fresh build. There are too many things that assume the toolchain doesn’t change and while you may be able to find workarounds which appear to work, I recommend you always use a fresh build tree for a different toolchain. This same logic also applies if you update the existing toolchain in-place (e.g. you update to a newer version of GCC on Linux, a newer version of Xcode on macOS, etc.). CMake queries compiler capabilities and caches the results. If you change the toolchain in a way that CMake can’t catch, then you end up with stale cached capabilities being used for the new/updated toolchain. Please don’t do that." - Craig Scott
So essentially I don't think it's possible. You just need to blow away your build. The best thing you can do is alert users if CMake isn't doing it for you.
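One way to automate that is a tiny wrapper which remembers the compiler that configured the tree and wipes it on a change. This is only a sketch, assuming the build lives in Build, the compiler comes from the CXX environment variable, and a small marker file (Build/.toolchain, invented here) records the last compiler used:
#!/bin/sh
# Re-create the build tree whenever the requested compiler changes.
set -e
WANT="${CXX:-g++}"
LAST="$(cat Build/.toolchain 2>/dev/null || true)"
if [ "$WANT" != "$LAST" ]; then
    echo "Toolchain changed ($LAST -> $WANT), starting with a fresh build tree"
    rm -rf Build
    mkdir -p Build
    printf '%s\n' "$WANT" > Build/.toolchain
fi
cmake -S Sources -B Build
cmake --build Build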
Perhaps also reply on this thread:
https://discourse.cmake.org/t/how-to-change-toolchain-without-breaking-developer-workflows/1166
Or start another Discourse topic.

View commands before building code in CMake [duplicate]

I'm trying to debug a compilation problem, but I cannot seem to get GCC (or maybe it is make??) to show me the actual compiler and linker commands it is executing.
Here is the output I am seeing:
CCLD libvirt_parthelper
libvirt_parthelper-parthelper.o: In function `main':
/root/qemu-build/libvirt-0.9.0/src/storage/parthelper.c:102: undefined reference to `ped_device_get'
/root/qemu-build/libvirt-0.9.0/src/storage/parthelper.c:116: undefined reference to `ped_disk_new'
/root/qemu-build/libvirt-0.9.0/src/storage/parthelper.c:122: undefined reference to `ped_disk_next_partition'
/root/qemu-build/libvirt-0.9.0/src/storage/parthelper.c:172: undefined reference to `ped_disk_next_partition'
/root/qemu-build/libvirt-0.9.0/src/storage/parthelper.c:172: undefined reference to `ped_disk_next_partition'
collect2: ld returned 1 exit status
make[3]: *** [libvirt_parthelper] Error 1
What I want to see should be similar to this:
$ make
gcc -Wall -c -o main.o main.c
gcc -Wall -c -o hello_fn.o hello_fn.c
gcc main.o hello_fn.o -o main
Notice how this example has the complete gcc command displayed. The above example merely shows things like "CCLD libvirt_parthelper". I'm not sure how to control this behavior.
To invoke a dry run:
make -n
This will show what make is attempting to do.
Build system independent method
make SHELL='sh -x'
is another option. Sample Makefile:
a:
    @echo a
Output:
+ echo a
a
This sets the special SHELL variable for make, and -x tells sh to print the expanded line before executing it.
One advantage over -n is that it actually runs the commands. I have found that for some projects (e.g. the Linux kernel) -n may stop much earlier than a normal build, probably because of dependency problems.
One downside of this method is that you have to ensure that the shell used is sh, which is the default one used by Make because it is the POSIX shell, but it could be changed with the SHELL make variable.
Doing sh -v would be cool as well, but Dash 0.5.7 (Ubuntu 14.04's sh) ignores it for -c commands (which seems to be how make invokes the shell), so it doesn't do anything.
make -p may also interest you; it prints the values of the variables that are set.
CMake-generated Makefiles always support VERBOSE=1
As in:
mkdir build
cd build
cmake ..
make VERBOSE=1
Dedicated question at: Using CMake with GNU Make: How can I see the exact commands?
Makefiles generated by autotools (the ./configure you have to issue) often have a verbose option, so basically, using make VERBOSE=1 or make V=1 should give you the full commands.
But this depends on how the Makefile was generated.
The -d option might help, but it will give you an extremely long output.
Since GNU Make version 4.0, the --trace argument is a nice way to see what a makefile does and why, outputting lines like:
makefile:8: target 'foo.o' does not exist
or
makefile:12: update target 'foo' due to: bar
Use make V=1
Other suggestions here:
make VERBOSE=1 - did not work, at least in my trials.
make -n - displays only the logical operation, not the command line being executed, e.g. CC source.cpp
make --debug=j - works as well, but might also enable multi-threaded building, causing extra output.
I like to use:
make --debug=j
https://linux.die.net/man/1/make
--debug[=FLAGS]
Print debugging information in addition to normal processing. If the FLAGS are omitted, then the behavior is the same as if -d was specified. FLAGS may be a for all debugging output (same as using -d), b for basic debugging, v for more verbose basic debugging, i for showing implicit rules, j for details on invocation of commands, and m for debugging while remaking makefiles.
Depending on your automake version, you can also use this:
make AM_DEFAULT_VERBOSITY=1
Reference: AM_DEFAULT_VERBOSITY
Note: I added this answer since V=1 did not work for me.
In case you want to see all the commands (including the compile commands) of the default target, run:
make --always-make --dry-run
make -Bn
To show the commands that will be executed on the next run of make:
make --dry-run
make -n
You are free to choose a target other than the default in this example.

CMake setup 'make' to use '-j' option by default

I want my CMake project to be built by make -j N whenever I call make from the terminal. I don't want to set the -j option manually every time.
For that, I set the CMAKE_MAKE_PROGRAM variable to the specific command line. I use the ProcessorCount() function, which gives the number of processors to use for a parallel build.
When I run make, I do not see any speed-up. However, if I run make -j N, then it definitely builds faster.
Would you please help me on this issue? (I am developing this on Linux.)
Here is the snippet of the code that I use in CMakeLists.txt:
include(ProcessorCount)
ProcessorCount(N)
message("number of processors: " ${N})
if(NOT N EQUAL 0)
set(CTEST_BUILD_FLAGS -j${N})
set(ctest_test_args ${ctest_test_args} PARALLEL_LEVEL ${N})
set(CMAKE_MAKE_PROGRAM "${CMAKE_MAKE_PROGRAM} -j ${N}")
endif()
message("cmake make program" ${CMAKE_MAKE_PROGRAM})
Thank you very much.
If you want to speed up the build, you can run multiple make processes in parallel, but not cmake itself.
To perform every build with a predefined number of parallel processes, you can define this in MAKEFLAGS.
Set MAKEFLAGS in your environment script, e.g. ~/.bashrc as you want:
export MAKEFLAGS=-j8
On Linux the following sets MAKEFLAGS to the number of CPUs minus 1 (keeping one CPU free for other tasks during the build), which is useful in environments with dynamic resources, e.g. VMware:
export MAKEFLAGS=-j$(($(grep -c "^processor" /proc/cpuinfo) - 1))
New from CMake v3.12 onwards:
The command line has a new option --parallel <JOBS>.
Example:
cmake --build build_arm --parallel 4 --target all
Example with the number of CPUs minus 1, using nproc:
cmake --build build_arm --parallel $(($(nproc) - 1)) --target all
By setting the CMAKE_MAKE_PROGRAM variable you want to affect the build process. But:
This variable only affects builds done via cmake --build, not direct calls to the native tool (make):
The CMAKE_MAKE_PROGRAM variable is set for use by project code. The value is also used by the cmake(1) --build and ctest(1) --build-and-test tools to launch the native build process.
This variable should be a CACHEd one. It is used that way by the make-like generators:
These generators store CMAKE_MAKE_PROGRAM in the CMake cache so that it may be edited by the user.
That is, you need to set this variable with
set(CMAKE_MAKE_PROGRAM <program> CACHE PATH "Path to build tool" FORCE)
This variable should refer to the executable itself, not to a program with arguments:
The value may be the full path to an executable or just the tool name if it is expected to be in the PATH.
That is, the value "make -j 2" cannot be used for that variable (splitting the arguments into a list, as in
set(CMAKE_MAKE_PROGRAM make -j 2 CACHE PATH "Path to build tool" FORCE)
wouldn't help either).
In summary, you may redefine the behavior of cmake --build calls by setting the CMAKE_MAKE_PROGRAM variable to a script which calls make with parallel options. But you cannot affect the behavior of direct make calls.
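So if you really want cmake --build to parallelize by default, the variable has to point at a wrapper executable rather than at "make -j N". A hypothetical sketch (the wrapper name and install path are made up for illustration):
#!/bin/sh
# Hypothetical wrapper, e.g. installed as /usr/local/bin/parallel-make and then cached via
#   set(CMAKE_MAKE_PROGRAM /usr/local/bin/parallel-make CACHE PATH "Path to build tool" FORCE)
# Forwards all arguments to make with a job count equal to the CPU count.
exec make -j"$(nproc)" "$@"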
You may set the env variable MAKEFLAGS using this command
export MAKEFLAGS=-j$(nproc)
My solution is to have a small script which runs make and includes all sorts of other features, not just the number of CPUs.
I call my script mk and I do a chmod 755 mk so I can run it with ./mk in the root of my project. I also have a few flags so I can run various things with a simple command line. For example, while working on the code, when I get many errors I like to pipe the output to less. I can do that with ./mk -l without having to retype all the heavy-duty Unix stuff...
As you can see, I have the -j4 in a couple of places where it makes sense. For the -l option, I don't want it because in that case it would cause multiple errors to be printed at the same time (I tried that before!).
#!/bin/sh -e
#
# Execute make
case "$1" in
"-l")
    make -C ../BUILD/Debug 2>&1 | less -R
    ;;
"-r")
    make -j4 -C ../BUILD/Release
    ;;
"-d")
    rm -rf ../BUILD/Debug/doc/lpp-doc-?.*.tar.gz \
           ../BUILD/Debug/doc/lpp-doc-?.*
    make -C ../BUILD/Debug
    ;;
"-t")
    make -C ../BUILD/Debug
    ../BUILD/Debug/src/lpp tests/suite/syntax-print.logo
    g++ -std=c++14 -I rt l.cpp rt/*.cpp
    ;;
*)
    make -j4 -C ../BUILD/Debug
    ;;
esac
# From the https://github.com/m2osw/lpp project
With CMake, it wouldn't work unless, as Tsyvarev mentioned, you create your own script. But I personally don't think it's sensible to have CMake call make through such a wrapper. Plus it could break a build process which does not expect that strange script. Finally, my script, as I mentioned, allows me to vary the options depending on the situation.
I usually use an alias in Linux to set cm equal to cmake .. && make -j12. Or write a shell script that runs the make and clean steps...
alias cm='cmake .. && make -j12'
Then use cm to configure and build in a single command.

Installing pdfgrep on windows server

I am trying to install the pdfgrep command-line utility on Windows Server 2008 R2 from these sources. Unfortunately, I had no idea what to do with those source files after downloading them, so I searched Stack Overflow for related problems and found a post saying to install Cygwin. I have done that, but when I run ./configure it doesn't work. Here is the output I get:
checking for a BSD-compatible install... /usr/bin/install -c
checking whether build environment is sane... yes
checking for a thread-safe mkdir -p... /usr/bin/mkdir -p
checking for gawk... gawk
checking whether make sets $(MAKE)... no
checking for g++... no
checking for c++... no
checking for gpp... no
checking for aCC... no
checking for CC... no
checking for cxx... no
checking for cc++... no
checking for cl.exe... no
checking for FCC... no
checking for KCC... no
checking for RCC... no
checking for xlC_r... no
checking for xlC... no
checking whether the C++ compiler works... no
configure: error: in `/home/Administrateur/pdfgrep-1.2':
configure: error: C++ compiler cannot create executables
See `config.log' for more details.
So what do I still have to install for this to work?
I know the user found a solution, but I didn't have the option of installing Cygwin (for reasons...) and I found a simple solution.
Note: I am running Windows 10, so this might not work for Windows versions below 7.
All you need to do is extract the folder and then go to a DOS/command prompt.
There are two ways to go about it now.
1. Go to the folder itself and run it from there:
cd C:\to\where\the\folder\is
pdfgrep -i "keyword1|keyword2|etc" C:\wherever\the\file\is.pdf
OR
2. Run it directly from wherever you are in the command prompt:
C:\to\where\the\folder\is\pdfgrep -i "keyword1|keyword2|etc" C:\wherever\the\file\is.pdf
The -i means ignore case (one of pdfgrep's options).
Hopefully this helps out other people.
Sorry for the useless question; I found the answer myself. I just reran the Cygwin installer and selected a C++ compiler before reinstalling.