meson build set verbosity - meson-build

Is there a way to decrease the amount of trace output when the project is configured with meson?
e.g.
meson build .....
$ meson build --reconfigure
The Meson build system
Version: 0.59.1
......
Executing subproject mylib
mylib| Project name: mylib
mylib| Project version: 1.0.0
mylib| C compiler for the host machine: cc (gcc 8.5.0 "cc (GCC) 8.5.0 20210514 (Red Hat 8.5.0-4)")
mylib| C linker for the host machine: cc ld.bfd 2.30-108
mylib| Build targets in project: 1
mylib| Subproject mylib finished.
Dependency mylib from subproject subprojects/mylib found:
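As a generic workaround (independent of any meson option), the configure output can be captured and only shown when configuration fails. A small shell sketch, assuming a build directory named build:

meson setup --reconfigure build > meson-configure.log 2>&1 || cat meson-configure.log

On success nothing is printed and the full trace remains in meson-configure.log; on failure the log is dumped so the error is still visible.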

Related

Cross Compiling ImGui with SDL2 and Vulkan from Ubuntu 22.04 LTS WSL on Raspberry Pi 4 aarch64 with cmake and aarch64-linux-gnu-gcc/g++

I'm trying to cross compile from Win10 to Raspbian OS on the Raspberry Pi 4. For this reason I use the Ubuntu 22.04 LTS WSL. I installed gcc/g++, CMake, Ninja, SDL2, the Vulkan SDK, and aarch64-linux-gnu-gcc/g++.
In the CMakeLists.txt I call find_package(SDL2 REQUIRED) and find_package(Vulkan REQUIRED).
When I compile with the normal gcc and g++ there is no problem, but when I try aarch64-linux-gnu-gcc and aarch64-linux-gnu-g++ with the following toolchain settings, configuration fails:
set(CMAKE_SYSTEM_NAME Linux)
set(CMAKE_SYSTEM_PROCESSOR aarch64)
set(CMAKE_CROSSCOMPILING TRUE)
set(CMAKE_C_COMPILER aarch64-linux-gnu-gcc)
set(CMAKE_CXX_COMPILER aarch64-linux-gnu-g++)
Error message:
CMake Error at CMakeLists.txt:66 (find_package):
By not providing "FindSDL2.cmake" in CMAKE_MODULE_PATH this project has
asked CMake to find a package configuration file provided by "SDL2", but
CMake did not find one.
Could not find a package configuration file provided by "SDL2" with any of
the following names:
SDL2Config.cmake
sdl2-config.cmake
Add the installation prefix of "SDL2" to CMAKE_PREFIX_PATH or set
"SDL2_DIR" to a directory containing one of the above files. If "SDL2"
provides a separate development package or SDK, be sure it has been
installed.
The libs can be found in the folder /usr/lib/x86_64-linux-gnu
Why are they not in the folder /usr/lib/aarch64-linux-gnu?
How can I use or install them for the aarch64-linux-gnu-gcc/g++?
Thank you in advance!
In this link, at the bottom, aarch64-linux-gnu files are mentioned, but none are installed in my /usr/lib/aarch64-linux-gnu folder:
https://ubuntu.pkgs.org/20.04/ubuntu-universe-arm64/libsdl2-dev_2.0.10+dfsg1-3_arm64.deb.html
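The libraries in /usr/lib/x86_64-linux-gnu are the host (x86_64) builds installed by the regular libsdl2-dev package; the cross compiler needs arm64 copies in a sysroot, and CMake has to be told to search there instead of the host paths. A minimal toolchain-file sketch, assuming the arm64 SDL2/Vulkan development files have been unpacked into a sysroot directory (the file name and paths below are placeholders):

# toolchain-aarch64.cmake (hypothetical file name)
set(CMAKE_SYSTEM_NAME Linux)
set(CMAKE_SYSTEM_PROCESSOR aarch64)
set(CMAKE_C_COMPILER aarch64-linux-gnu-gcc)
set(CMAKE_CXX_COMPILER aarch64-linux-gnu-g++)
# Directories that contain the target (arm64) headers and libraries.
set(CMAKE_FIND_ROOT_PATH /usr/aarch64-linux-gnu /path/to/aarch64-sysroot)
# Use host tools, but only find libraries/headers/packages in the sysroot.
set(CMAKE_FIND_ROOT_PATH_MODE_PROGRAM NEVER)
set(CMAKE_FIND_ROOT_PATH_MODE_LIBRARY ONLY)
set(CMAKE_FIND_ROOT_PATH_MODE_INCLUDE ONLY)
set(CMAKE_FIND_ROOT_PATH_MODE_PACKAGE ONLY)

It would then be used with something like cmake -B build -DCMAKE_TOOLCHAIN_FILE=toolchain-aarch64.cmake .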

How to check if a library (libssh) is installed with CMake before adding an executable

I have an executable in my CMake project that depends on libssh being installed on the system.
I use this to install it:
sudo apt-get install -y libssh-dev
This is my CMakeLists.txt:
cmake_minimum_required(VERSION 3.5.0)
project(validateTensor VERSION 0.0.1)
find_package(gflags QUIET)
add_executable(myapplication
"myapplication.cpp"
)
target_link_libraries(myapplication gflags teamApplication -lssh)
add_dependencies(myapplication teamApplication)
My question is: how can I use CMake to check whether libssh is installed on the system before adding the executable? If it is not installed, I want to exclude the executable from the build, but not have the build fail.
With find_library.
find_library(HAVE_SSH NAMES ssh)  # full path to libssh, or HAVE_SSH-NOTFOUND

add_executable(myapplication
  myapplication.cpp
)
target_link_libraries(myapplication gflags teamApplication)

if (HAVE_SSH)
  target_link_libraries(myapplication ${HAVE_SSH})
endif()
No need for add_dependencies(myapplication teamApplication); target_link_libraries already "does that".
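If the goal is to skip building the executable entirely when libssh is missing (as the question asks), the same find_library result can guard the whole target. A minimal sketch along those lines:

find_library(HAVE_SSH NAMES ssh)

if (HAVE_SSH)
  add_executable(myapplication myapplication.cpp)
  target_link_libraries(myapplication gflags teamApplication ${HAVE_SSH})
else()
  message(STATUS "libssh not found, skipping myapplication")
endif()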

Generating aarch64 RPM package from Yocto SDK

I am running Ubuntu 18.04 x86_64 in Docker.
I have copied and sourced the SDK I produced with Yocto:
source /sdk/environment-setup-aarch64-poky-linux
I am compiling my library
cmake .. -DCMAKE_BUILD_TYPE=Release
cmake --build .
Everything is fine so far; when I check the architecture of myLib.so with file, it says aarch64:
myLib.so: ELF 64-bit LSB shared object, ARM aarch64, version 1 (GNU/Linux), dynamically linked, BuildID[sha1]=5e01090be56b47a2dd2edd7c44e9861709f3090a, with debug_info, not stripped
Now I want to generate an RPM package using cpack -G "RPM":
-- Toolchain file defaulted to '/sdk/sysroots/x86_64-pokysdk-linux/usr/share/cmake/OEToolchainConfig.cmake'
CPack: Create package using RPM
CPack: Install projects
CPack: - Run preinstall target for: myLib
CPack: - Install project: myLib
CPack: Create package
-- CPackRPM:Debug: Using CPACK_RPM_ROOTDIR=/myLib/build/_CPack_Packages/Linux/RPM
CPackRPM: Will use GENERATED spec file: /myLib/build/_CPack_Packages/Linux/RPM/SPECS/myLib.spec
CPack: - package: /myLib/build/myLib.rpm generated.
When I check the resulting rpm file with rpm -qi, the result is:
Name : myLib
Version : 1.1.1
Release : 1
Architecture: x86_64
...
Why is the architecture of RPM file x86_64?
What am I missing for the cpack to produce aarch64 RPM file?
The variable CPACK_RPM_PACKAGE_ARCHITECTURE defaults to the output of uname -m, which reports the architecture of the build machine, not the target. You can set this variable manually to override the package architecture:
cpack -G "RPM" -D CPACK_RPM_PACKAGE_ARCHITECTURE=aarch64
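Alternatively, the architecture can be fixed in the project itself so that every cpack run picks it up. A small sketch, assuming this is placed in CMakeLists.txt before CPack is included:

# force the RPM architecture to match the cross-compiled binaries
set(CPACK_RPM_PACKAGE_ARCHITECTURE "aarch64")
include(CPack)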

Generating libLLVM is not supported on MSVC

I'm trying to build and install LLVM 9.0.0 on my Windows 10 machine.
I have CMake and Visual Studio 2019 with C++ tools installed, and using the following strategy to build (and install) LLVM using CMake:
> cmake . -DLLVM_BUILD_LLVM_DYLIB=ON -DLLVM_INCLUDE_TESTS=OFF
> cmake --build . --config Release --target INSTALL -j8
The following error occurs on the first command:
Generating libLLVM is not supported on MSVC
Note that I'm required to build LLVM dynamically, otherwise it won't link. Any ideas?

Tensorflow bazel build failing - not generating bazel-bin directory

I'm trying to install TensorFlow from source using the following configuration:
NVIDIA GTX 1070
Ubuntu 16.04
CUDA 8.0
cuDNN v5.0
I have followed the following steps from here:
installed bazel
installed dependencies
installed CUDA support
./configure with CUDA 8.0 support
bazel build -c opt --config=cuda //tensorflow/tools/pip_package:build_pip_package
After this step, to my knowledge, there should be a bazel-bin directory, so that I can subsequently execute
$ bazel-bin/tensorflow/tools/pip_package/build_pip_package /tmp/tensorflow_pkg
# The name of the .whl file will depend on your platform.
$ sudo pip install /tmp/tensorflow_pkg/tensorflow-0.10.0rc0-py2-none-any.whl
However, there is no such directory.
I have a feeling this error message might have something to do with it?
ERROR: /usr/local/lib/python2.7/dist-packages/tensorflow_clone/tensorflow/contrib/rnn/BUILD:45:1: error while parsing .d file: /home/volcart/.cache/bazel/_bazel_volcart/62dff5ffffc63bcd8a9350984645e0be/execroot/tensorflow_clone/bazel-out/local_linux-opt/bin/tensorflow/contrib/rnn/_objs/python/ops/_lstm_ops_gpu/tensorflow/contrib/rnn/kernels/lstm_ops_gpu.cu.pic.d (No such file or directory).
nvcc warning : option '--relaxed-constexpr' has been deprecated and replaced by option '--expt-relaxed-constexpr'.
In file included from third_party/gpus/cuda/include/cuda_runtime.h:78:0,
from <command-line>:0:
third_party/gpus/cuda/include/host_config.h:115:2: error: #error -- unsupported GNU version! gcc versions later than 5.3 are not supported!
#error -- unsupported GNU version! gcc versions later than 5.3 are not supported!
Upon re-executing bazel build ... I found this...
WARNING: /usr/local/lib/python2.7/dist-packages/tensorflow/util/python/BUILD:11:16: in includes attribute of cc_library rule //util/python:python_headers: 'python_include' resolves to 'util/python/python_include' not in 'third_party'. This will be an error in the future.
I should also add this...
$ bazel version
Build label: 0.3.1
Build target: bazel-out/local-fastbuild/bin/src/main/java/com/google/devtools/build/lib/bazel/BazelServer_deploy.jar
Build time: Fri Jul 29 09:09:52 2016 (1469783392)
Build timestamp: 1469783392
Build timestamp as int: 1469783392
The command
bazel build -c opt --config=cuda //tensorflow/tools/pip_package:build_pip_package
caused a permissions issue, so I added sudo:
sudo bazel build -c opt --config=cuda //tensorflow/tools/pip_package:build_pip_package
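Separately from the permissions issue, the "#error -- unsupported GNU version! gcc versions later than 5.3 are not supported!" message means the host gcc is too new for CUDA 8.0. A common workaround at the time was to install an older gcc and select it before re-running ./configure; a sketch, assuming a gcc 4.9 package is available for this Ubuntu release:

# install an older host compiler (package availability is an assumption)
sudo apt-get install -y gcc-4.9 g++-4.9
# make it the default gcc/g++ via update-alternatives
sudo update-alternatives --install /usr/bin/gcc gcc /usr/bin/gcc-4.9 50
sudo update-alternatives --install /usr/bin/g++ g++ /usr/bin/g++-4.9 50
# then re-run ./configure and the bazel build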