Build Sundials with MKL - cmake

I need to build Sundials (as a dependency for another C/C++ library) on a Linux cluster which provides only MKL as BLAS and LAPACK support.
As far as I know, unlike most other BLAS/LAPACK implementations, the MKL BLAS and LAPACK wrappers are not self-contained at link time but require linking against the MKL core and/or other MKL libraries as well.
So how (if at all) can I tell CMake what to include in order to successfully build Sundials? Is it possible and safe to use flags like
$ cmake (...) -DBLAS_LIBRARIES=/path/to/mkl/<several files grouped together>
and what would be the correct syntax?
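For what it's worth, CMake accepts several libraries in one cache variable as a semicolon-separated list (quoted so the shell does not interpret the semicolons). A sketch, assuming the sequential LP64 layer of MKL and a typical install prefix (both assumptions; your cluster's paths will differ):

$ cmake (...) \
    -DBLAS_LIBRARIES="/opt/intel/mkl/lib/intel64/libmkl_intel_lp64.so;/opt/intel/mkl/lib/intel64/libmkl_sequential.so;/opt/intel/mkl/lib/intel64/libmkl_core.so"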

Related

As a library author using CMake, should I be cognizant of pkg-config?

Suppose that:
I'm writing a C or C++ library.
I intend my library to be usable on multiple Unix-like platforms (and perhaps also on Windows).
I use CMake for build configuration.
I have some dependencies on other libraries.
Given this - should I be cognizant of the pkg-config mechanism? Versed in its use? Or should I just ignore it? I'm asking both about its use when configuring my library's build, and about whether the installation commands should generate and install a .pc file for my library.
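For concreteness, generating and installing a .pc file from CMake is usually just a template plus an install rule. A minimal sketch, assuming a hypothetical library called mylib with a template mylib.pc.in next to the CMakeLists.txt:

# mylib.pc.in -- only @VAR@ placeholders are expanded, thanks to @ONLY below
prefix=@CMAKE_INSTALL_PREFIX@
libdir=${prefix}/@CMAKE_INSTALL_LIBDIR@
includedir=${prefix}/@CMAKE_INSTALL_INCLUDEDIR@

Name: mylib
Description: Example library
Version: @PROJECT_VERSION@
Libs: -L${libdir} -lmylib
Cflags: -I${includedir}

# In CMakeLists.txt: fill in the template and install it where pkg-config looks
include(GNUInstallDirs)
configure_file(mylib.pc.in ${CMAKE_CURRENT_BINARY_DIR}/mylib.pc @ONLY)
install(FILES ${CMAKE_CURRENT_BINARY_DIR}/mylib.pc
        DESTINATION ${CMAKE_INSTALL_LIBDIR}/pkgconfig)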

Creating a build environment to build Tensorflow apps with cmake

I am wondering if there is a definitive recipe for using cmake to build Tensorflow and Tensorflow apps. I followed the instructions at https://github.com/cjweeks/tensorflow-cmake without much success and ended up having to build Eigen and Protobuf by hand and then copy the relevant header files into the header file tree created by the Bazel build of Tensorflow.
I just built TF with CMake, VS2017, and CUDA 9.2, but had to make two manual changes:
Patch Half.h in Eigen
Change CUDA version from "9.0" to "9.2" in the main CMakeLists.txt.
Build has to be single threaded, otherwise VS runs out of heap (on my 16GB laptop). It takes a while and one project fails, but builds enough libraries to run all the examples I wanted.
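For reference, the single-threaded build can be forced from the CMake command line by passing an MSBuild switch through to the generator (this assumes the Visual Studio generator):

$ cmake --build . --config Release -- /maxcpucount:1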
Another problem with the CMake build, compared to Bazel, is that the former rebuilds a bunch of projects (involving protobuf-generated files) even when nothing there has changed. Bazel is smarter and only compiles the changed code, then statically links all object files into a single executable, and is still faster than the CMake build.

Compile Tensorflow programs with custom compiler

I'm trying to compile a very simple Tensorflow program (which only prints the Tensorflow version) with my company's C compiler, but the libtensorflow.so I downloaded from Tensorflow's official website is incompatible with our C compiler.
My company's C compiler is pretty much just a standard gcc, yet gcc can compile the program and our custom compiler cannot.
My colleague told me I have two options: (1) replace Bazel's compiler with our compiler and use Bazel to compile the program, or (2) compile the program with Bazel first, then compile it with our compiler, including the pb.h files generated by Bazel (because those files can only be generated by Bazel).
I'm not sure how to do (1), but I tried (2). The problem with (2) is that I got errors saying the protobuf files were generated by an older version of protoc, and I'm not sure how to switch to the right version.
Some additional information: (1) The OS is Linux, (2) I do not have the privilege to use sudo commands, (3) I cannot access system directories (e.g. /usr/local)
Is there any hope I can make this work? You may ask why I don't just build the program with Bazel: our company's program needs to be run by our company's simulator, and the simulator only accepts programs generated by our company's compiler.
Your only option is to build tensorflow with Bazel and tell Bazel to use your C/C++ compiler. The easiest way is to set the CC and CXX environment variables to point to your compiler's executables. If it is really a drop-in replacement for GCC, then it should work, and after building you should get a tensorflow binary compiled with your custom compiler.
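A minimal sketch, with hypothetical paths for your company's compiler:

$ export CC=/opt/company/bin/company-gcc
$ export CXX=/opt/company/bin/company-g++
$ bazel build //tensorflow:libtensorflow.so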
If special flags are needed, you should create a custom toolchain in Bazel to tell it how to use your compiler; it is a bit involved, but not overly so. Instructions for that are at https://github.com/bazelbuild/bazel/wiki/Building-with-a-custom-toolchain

Loadable modules messages under Cygwin

When building LLVM using cmake, a few components involving "Loadable modules" are not built, and warning messages such as the following are issued:
-- LLVMHello ignored -- Loadable modules not supported on this platform.
...
-- BugpointPasses ignored -- Loadable modules not supported on this platform.
...
-- SampleAnalyzerPlugin ignored -- Loadable modules not supported on this platform.
-- PrintFunctionNames ignored -- Loadable modules not supported on this platform.
But loadable modules are supported under Cygwin, and the handy opt tool can readily be used with them. Building with ./configure produces no such messages, and the components are built. Why do these messages occur? Is there a way to build using cmake and still have these components built?
Loadable modules are not supported on Windows because its dynamic linking does not let a plugin resolve symbols against the host executable at load time. The plugins definitely should be disabled in the autoconf build as well.
The only way to use loadable modules on Windows is to build the whole of LLVM into one big .DLL.
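If you want to experiment with that route, recent LLVM releases expose it as CMake options (whether this actually works under Cygwin is something you would have to verify):

$ cmake -DLLVM_BUILD_LLVM_DYLIB=ON -DLLVM_LINK_LLVM_DYLIB=ON ../llvm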

Bitbake vs. cmake for x86 and arm project

I have a layered cmake project with a hierarchy of libraries and applications. Each of these libraries and applications has its own CMakeLists.txt, plus there is a top level CMakeLists.txt that includes the sub-cmake files.
Right now we are developing and testing entirely on an x86 Linux platform but at some point we will want to start pulling the code into a Yocto build and target arm. We want to maintain being able to build for both x86 and arm.
I've seen some Yocto guides on building for x86, but these appear to build the entire world (the toolchain, the Linux kernel, all libraries, etc.) and run the image via QEMU. For our desktop use this is quite a bit of overkill, since our machines have compilers and we can just run the applications directly; but it would be very helpful to have bitbake build the libraries we depend on and install them into a 'virtual root'.
How can I use bitbake for native x86 projects (in place of or in addition to cmake) and still be able to leverage the recipe files for Yocto later on?
I don't have much experience with Yocto, but I'm using another embedded Linux build system with a similar concept: Buildroot. Buildroot creates a toolchain file (output/host/usr/share/buildroot/toolchainfile.cmake) for the currently selected toolchain.
You create two output folders for your project:
build-x86
build-arm
In the first folder you just execute:
cmake ../path-to-your-source
In the second one:
cmake ../path-to-your-source -DCMAKE_TOOLCHAIN_FILE=../path-to-buildroot/output/host/usr/share/buildroot/toolchainfile.cmake
If Yocto provides a toolchain file, you can use it directly. If not, you can create one yourself. See this wiki.
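A hand-written toolchain file can be quite small. A sketch for a hypothetical ARM cross toolchain (the compiler names and sysroot path are assumptions to adapt):

# arm-toolchain.cmake
set(CMAKE_SYSTEM_NAME Linux)
set(CMAKE_SYSTEM_PROCESSOR arm)
set(CMAKE_C_COMPILER arm-linux-gnueabihf-gcc)
set(CMAKE_CXX_COMPILER arm-linux-gnueabihf-g++)
# search headers and libraries only in the target sysroot, never on the host
set(CMAKE_FIND_ROOT_PATH /path/to/sysroot)
set(CMAKE_FIND_ROOT_PATH_MODE_PROGRAM NEVER)
set(CMAKE_FIND_ROOT_PATH_MODE_LIBRARY ONLY)
set(CMAKE_FIND_ROOT_PATH_MODE_INCLUDE ONLY)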
Update:
The Buildroot manual has a section explaining how you can add your software to Buildroot as a package, and it also describes the source folder override mechanism.
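For reference, wrapping an existing CMake project as a Buildroot package is short thanks to its cmake-package infrastructure. A minimal sketch with a hypothetical package name and source path:

# package/mylib/mylib.mk
MYLIB_VERSION = 1.0
# take the sources from a local directory instead of downloading a tarball
MYLIB_SITE = /path/to/your/mylib
MYLIB_SITE_METHOD = local
# install headers/libs into staging so other packages can build against them
MYLIB_INSTALL_STAGING = YES
$(eval $(cmake-package))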