CMake execute_process empty OUTPUT_VARIABLE

I have a Docker container with a Linux image that executes a CMake script. While running an external program via execute_process, we noticed that the output was missing from OUTPUT_VARIABLE. The problem can be narrowed down to the following:
execute_process(COMMAND /bin/echo TestMessage
                OUTPUT_VARIABLE o
                ERROR_VARIABLE e)
This works perfectly fine with docker-ce on Linux and Docker Desktop on Windows.
One user (on Docker Toolbox) reported that ${o} and ${e} are empty.
There's also a chance that the docker run was executed from Cygwin or git-bash (MSYS2).
The same command executed from the command line:
bash> /bin/echo TestMessage
produces the expected output (TestMessage), but CMake does not (both variables are empty).
What could be going wrong here?
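One way to narrow this down (a diagnostic sketch, not a fix) is to also capture the exit status and echo the command line, which tells you whether the process started at all:
execute_process(COMMAND /bin/echo TestMessage
                OUTPUT_VARIABLE o
                ERROR_VARIABLE e
                RESULT_VARIABLE r       # exit code, or an error string if the process failed to start
                COMMAND_ECHO STDOUT)    # echoes the command being run; needs CMake >= 3.15
message(STATUS "result='${r}' out='${o}' err='${e}'")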

How to forward output from CMake execute_process to CMake's logs?

In my CMake file, I set up a Python test environment:
execute_process(
    COMMAND pip install -U -r ${REQUIREMENTS}
    RESULT_VARIABLE STATUS
)
The issue is that I usually don't need its verbose output, so I want to be able to hide it. This is what I've done:
if(SHOW_PIP_LOGS)
    execute_process(...)
else()
    execute_process(... OUTPUT_QUIET)
endif()
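(As a side note, the duplicated call can be avoided by building the option list conditionally; a sketch, relying on the fact that an empty unquoted variable expands to no arguments, and OUTPUT_QUIET/ERROR_QUIET are standard execute_process options:)
set(_quiet_args "")
if(NOT SHOW_PIP_LOGS)
    list(APPEND _quiet_args OUTPUT_QUIET ERROR_QUIET)
endif()
execute_process(
    COMMAND pip install -U -r ${REQUIREMENTS}
    RESULT_VARIABLE STATUS
    ${_quiet_args}
)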
The thing is, there is already a way to control which logs are shown in CMake: --log-level coupled with message(). That way I don't need to manage any logging-related variables. But the command outputs directly to stdout, without going through CMake's log system.
Can I somehow forward the output of a command invocation to CMake's logs?
The output must be printed live, without buffering everything into a variable first, so that if pip takes a long time installing packages, I can see what's going on.
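Worth noting, beyond the original snippet: since CMake 3.18, execute_process accepts ECHO_OUTPUT_VARIABLE and ECHO_ERROR_VARIABLE, which duplicate the child's output to CMake's own stdout/stderr while still capturing it into the variable. This does not route the text through message() and --log-level, but it does avoid silently buffering everything. A sketch:
execute_process(
    COMMAND pip install -U -r ${REQUIREMENTS}
    RESULT_VARIABLE STATUS
    OUTPUT_VARIABLE PIP_OUTPUT
    ECHO_OUTPUT_VARIABLE    # CMake >= 3.18: tee the output instead of hiding it
)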

CMake incremental compilation through toolchain upgrade

I am trying to find a way to keep incremental compilation working with CMake across a toolchain upgrade. Here is the problematic scenario:
Branch main uses g++-9 (using CMAKE_CXX_COMPILER=g++-9)
A new branch uses g++-10 (using CMAKE_CXX_COMPILER=g++-10)
Commits are happening on both branches
Incremental builds on one branch work fine
Switching to the other branch and explicitly invoking CMake fails
My question is the following: what is the proper way to make the invocation of CMake succeed and rebuild the whole project from scratch when a toolchain change happens?
Here is a script that makes the problem quick and easy to reproduce. It requires Docker. It creates folders Sources and Build at the location where it is executed, to avoid littering your filesystem. It then writes Dockerfiles to build Docker images with both g++ and cmake, creates a dummy Hello World C++ CMake project, and finally creates a folder for build artifacts and executes the build with g++-9 and then with g++-10. The second build fails because CMake generates an error.
#!/bin/bash
set -e
mkdir -p Sources
mkdir -p Build
# Creates a script that will be executed inside the docker container to perform builds
cat << EOF > Sources/Compile.sh
cd /Build \
&& cmake /Sources \
&& make \
&& ./IncrementalBuild
EOF
# Creates a Dockerfile that will be used to have both gcc-9 and cmake
cat << EOF > Sources/Dockerfile-gcc9
FROM gcc:9
RUN apt-get update && apt-get install -y cmake
RUN ln -s /usr/local/bin/g++ /usr/local/bin/g++-9
ADD Compile.sh /Compile.sh
RUN chmod +x /Compile.sh
ENTRYPOINT /Compile.sh
EOF
# Creates a Dockerfile that will be used to have both gcc-10 and cmake
cat << EOF > Sources/Dockerfile-gcc10
FROM gcc:10
RUN apt-get update && apt-get install -y cmake
RUN ln -s /usr/local/bin/g++ /usr/local/bin/g++-10
ADD Compile.sh /Compile.sh
RUN chmod +x /Compile.sh
ENTRYPOINT /Compile.sh
EOF
# Creates a dummy C++ program that will be compiled
cat << EOF > Sources/main.cpp
#include <iostream>
int main()
{
std::cout << "Hello World!\n";
}
EOF
# Creates CMakeLists.txt that will be used to compile the dummy C++ program
cat << EOF > Sources/CMakeLists.txt
cmake_minimum_required(VERSION 3.9)
project(IncrementalBuild CXX)
add_executable(IncrementalBuild main.cpp)
set_target_properties(IncrementalBuild PROPERTIES CXX_STANDARD 17)
EOF
# Build the docker images with both Dockerfiles created earlier
docker build -t cmake-gcc:9 -f Sources/Dockerfile-gcc9 Sources
docker build -t cmake-gcc:10 -f Sources/Dockerfile-gcc10 Sources
# Run a build with g++-9
echo ""
echo "### Compiling with g++-9 and then running the result..."
docker run --rm --user $(id -u):$(id -g) -v $(pwd)/Sources:/Sources -v $(pwd)/Build:/Build -e CXX=g++-9 cmake-gcc:9
echo ""
# Run a build with g++-10
echo "### Compiling with g++-10 and then running the result..."
docker run --rm --user $(id -u):$(id -g) -v $(pwd)/Sources:/Sources -v $(pwd)/Build:/Build -e CXX=g++-10 cmake-gcc:10
echo ""
# Print success if we reach this point
echo "SUCCESS!"
"I'm looking for the proper way to make the invocation of CMake succeed and rebuild the whole project from scratch when a toolchain change happens."
The proper way is to use a fresh binary directory. Either remove the binary directory when changing and let it recreate or just use a separate different directory for each toolchain.
Use Build/gcc10 binary directory for gcc10 build and Build/gcc9 for gcc9 builds.
With modern CMake there is no need to cd Build and mkdir: use cmake -S . -B Build. Also, do not call make directly; prefer cmake --build Build so that you can switch generators later.
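Following that advice, the Compile.sh heredoc in the reproduction script could become something like this (a sketch; the \$ escapes keep ${CXX} unexpanded in the generated file, so the container shell resolves it at run time from the -e CXX=... passed to docker run):
cat << EOF > Sources/Compile.sh
cmake -S /Sources -B /Build/\${CXX} \
&& cmake --build /Build/\${CXX} \
&& /Build/\${CXX}/IncrementalBuild
EOF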
"If you change the toolchain, you should start with a fresh build. There are too many things that assume the toolchain doesn’t change and while you may be able to find workarounds which appear to work, I recommend you always use a fresh build tree for a different toolchain. This same logic also applies if you update the existing toolchain in-place (e.g. you update to a newer version of GCC on Linux, a newer version of Xcode on macOS, etc.). CMake queries compiler capabilities and caches the results. If you change the toolchain in a way that CMake can’t catch, then you end up with stale cached capabilities being used for the new/updated toolchain. Please don’t do that." - Craig Scott
So essentially I don't think it's possible. You just need to blow away your build. The best thing you can do is alert users if CMake isn't doing it for you.
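One way to implement such an alert (a sketch; _USED_CXX_COMPILER is an illustrative name, and depending on the CMake version the configure step may already stop with its own error before this check runs) is to cache the compiler path on the first configure and fail loudly when a later configure sees a different one:
# In CMakeLists.txt, after project():
if(DEFINED _USED_CXX_COMPILER AND NOT _USED_CXX_COMPILER STREQUAL CMAKE_CXX_COMPILER)
    message(FATAL_ERROR "Toolchain changed from ${_USED_CXX_COMPILER} to "
                        "${CMAKE_CXX_COMPILER}; please use a fresh build directory.")
endif()
set(_USED_CXX_COMPILER "${CMAKE_CXX_COMPILER}" CACHE INTERNAL "compiler recorded at first configure")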
Perhaps also reply on this thread:
https://discourse.cmake.org/t/how-to-change-toolchain-without-breaking-developer-workflows/1166
Or start another Discourse topic.

CMake do something when command fails

I use CMake. A custom build step saves error output when it fails.
FIND_PROGRAM (BASH bash HINTS /bin)
SET (BASH_CMD "my-script input.file >output.file 2>error.file")
ADD_CUSTOM_COMMAND (
    OUTPUT output.file
    COMMAND ${CMAKE_COMMAND} -E env ${BASH} -c "${BASH_CMD}"
    ...)
This works. If my-script fails for the given input.file, its stderr is saved in error.file. However, when I run make and the target fails to build, the normal output does not make the location of error.file obvious. (The actual path of this file is generated in a convoluted way.)
I don't want the noisy stderr to show in the terminal during make. I would like something like
MESSAGE ("input.file failed, see error.file")
(ideally coloured red or similar) to be executed when the command for output.file fails.
Can I express this behaviour in CMakeLists.txt recipes?
I'm not sure about the highlighting, but you could create a CMake script file that executes the command via execute_process, checks its exit code, and prints a custom message if there's an issue. The following example runs on Windows, not on Linux, but it should be sufficient for demonstration.
Some command that fails: script.bat
echo "some long message" 1>&2
exit 1
CMake script: execute_script_bat.cmake
execute_process(COMMAND script.bat RESULT_VARIABLE _EXIT_CODE ERROR_FILE error.log)
if(NOT _EXIT_CODE EQUAL 0)
    message(FATAL_ERROR "command failed; see ${CMAKE_SOURCE_DIR}/error.log")
endif()
CMakeLists.txt
add_custom_command(
    OUTPUT output.file
    COMMAND "${CMAKE_COMMAND}" -P "${CMAKE_CURRENT_SOURCE_DIR}/execute_script_bat.cmake")
Additional info can be passed to the script by adding -D "SOME_VARIABLE=some value" arguments after "${CMAKE_COMMAND}"; note that the -D arguments must come before -P.
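For example (a sketch; the ERROR_LOG variable name is illustrative), the log location could be chosen in CMakeLists.txt and handed to the script, so the message no longer hard-codes the path:
add_custom_command(
    OUTPUT output.file
    COMMAND "${CMAKE_COMMAND}"
            -D "ERROR_LOG=${CMAKE_CURRENT_BINARY_DIR}/error.log"
            -P "${CMAKE_CURRENT_SOURCE_DIR}/execute_script_bat.cmake")
And in execute_script_bat.cmake:
execute_process(COMMAND script.bat RESULT_VARIABLE _EXIT_CODE ERROR_FILE "${ERROR_LOG}")
if(NOT _EXIT_CODE EQUAL 0)
    message(FATAL_ERROR "command failed; see ${ERROR_LOG}")
endif()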

Why is $PATH different when executing commands via SSH and libssh?

I'm trying to run a command on a remote host via libssh2 as wrapped by the ssh2 Rust crate.
So I would like to run the command cargo build, but when I try to run it via libssh, I get the error:
cargo: command not found
However, when I ssh into the server manually from the command line everything works fine.
I have also noticed that $PATH is different between running ssh from the command line and running via libssh:
for instance, when I echo $PATH,
ssh gives me:
/home/<user>/.cargo/bin:/usr/share/swift/usr/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bi
while libssh gives me:
/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games
So it looks like the modifications made to $PATH inside .bashrc and .profile are not applied when running via libssh.
I also get the same behavior if I run /bin/bash -c "echo ${PATH}"
Why would this be the case, and is there any way to get the same behavior in both these cases?
Please take a look at that question.
TL;DR A login shell first reads /etc/profile and then ~/.bash_profile. A non-login shell reads from /etc/bash.bashrc and then ~/.bashrc.
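A common workaround (a sketch, not from the answer above) is to wrap the remote command in an explicit login shell so those profile files get sourced, for example passing this as the command string to the libssh2 channel:
bash -lc 'cargo build'
Alternatively, sidestep $PATH entirely and use the absolute path, e.g. ~/.cargo/bin/cargo build, matching the .cargo/bin entry visible in the interactive $PATH above.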

CTest with multiple commands

I'm building some tests using CTest. Usually, I can set up a test with a single line:
ADD_TEST(Test_Name executable args)
However, I've run into a problem: some tests require two commands to be run in order to work. Is there any way to run two programs within a single CTest test, or am I required to create a new test for each?
Thank you.
The add_test command only accepts one executable, but you can run an executable that is really a script. To do this in a cross-platform way, write the script in CMake itself. CMake has the -P option for running arbitrary chunks of CMake scripting language when you run make or make test, rather than at Makefile generation time.
Sadly, you can't pass arguments to such a script, but you can set variables to values, which is just as good.
You can call this script runtests.cmake; it runs the commands CMD1 and CMD2, checks each for a non-zero return code, and exits CMake itself with an error if that happens:
macro(EXEC_CHECK CMD)
    execute_process(COMMAND ${CMD} RESULT_VARIABLE CMD_RESULT)
    if(CMD_RESULT)
        message(FATAL_ERROR "Error running ${CMD}")
    endif()
endmacro()
exec_check(${CMD1})
exec_check(${CMD2})
... and then add your test cases like so:
add_executable(test1 test1.c)
add_executable(test2 test2.c)
add_test(NAME test
         COMMAND ${CMAKE_COMMAND}
                 -DCMD1=$<TARGET_FILE:test1>
                 -DCMD2=$<TARGET_FILE:test2>
                 -P ${CMAKE_CURRENT_SOURCE_DIR}/runtests.cmake)
$<TARGET_FILE:test1> gets expanded to the full path of the executable at build-file generation time. When you run make test or equivalent, this runs "cmake -P runtests.cmake" with the CMD1 and CMD2 variables set to the appropriate test programs. The script then executes your two programs in sequence. If either of the test programs fails, the whole test fails. If you need to see the output of the test, you can run make test ARGS=-V.
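Concretely (the paths here are illustrative), the generated test ends up invoking something like:
cmake -DCMD1=/path/to/build/test1 -DCMD2=/path/to/build/test2 -P /path/to/source/runtests.cmake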
There is a simple, although not cross-platform, way to achieve this.
On Linux you can use bash to execute multiple commands:
add_test(
    NAME TestName
    COMMAND bash -c "COMMAND1 && \
                     COMMAND2 && \
                     ${CMAKE_CURRENT_BINARY_DIR}/testExecutable"
)
(Using && rather than ; makes the test fail as soon as any command fails; with ; only the last command's exit status would determine the result.)