I want to build a Linux kernel driver that uses another library which can be compiled for use with kernel code.
I tried using a Makefile and adding all the sources/headers, with no luck.
The recommended way is to use CMake, but I didn't find any good tutorial on how to use CMake with a Linux kernel module.
Are there some basic rules?
A hello world example?
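For what it's worth, the pattern I have seen is to let kbuild do the actual compilation and have CMake merely drive it through a custom target. The sketch below assumes a module named hello built against the running kernel's headers; the names and paths are illustrative, not a verified recipe:

# CMakeLists.txt (sketch) -- CMake never compiles kernel code itself; a Kbuild
# file next to the sources still lists the objects, e.g. "obj-m := hello.o".
cmake_minimum_required(VERSION 3.10)
project(hello_module LANGUAGES NONE)

# Assumption: build against the headers of the currently running kernel.
execute_process(COMMAND uname -r
                OUTPUT_VARIABLE KERNEL_RELEASE
                OUTPUT_STRIP_TRAILING_WHITESPACE)
set(KERNEL_DIR "/lib/modules/${KERNEL_RELEASE}/build" CACHE PATH "Kernel build tree")

add_custom_target(module ALL
    COMMAND make -C "${KERNEL_DIR}" M=${CMAKE_CURRENT_SOURCE_DIR} modules
    COMMENT "Building the module via kbuild"
    VERBATIM)

add_custom_target(module-clean
    COMMAND make -C "${KERNEL_DIR}" M=${CMAKE_CURRENT_SOURCE_DIR} clean
    VERBATIM)

Any extra library that can be built for kernel space would then have its sources listed in the Kbuild file (for example hello-objs := main.o lib/foo.o) rather than in CMake, since only kbuild knows the kernel's compiler flags.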
Suppose that:
I'm writing a C or C++ library.
I intend my library to be usable on multiple Unix-like platforms (and perhaps also on Windows).
I use CMake for build configuration.
I have some dependencies on other libraries.
Given this, should I be cognizant of the pkg-config mechanism? Versed in its use? Or should I just ignore it? I'm asking both about using it when configuring my library's build, and about whether to make sure the installation commands generate and install a .pc file for my library.
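For concreteness, a minimal sketch of what each side looks like in CMake, assuming a library target mylib, one dependency libfoo that ships a .pc file, and a hand-written template mylib.pc.in (all of these names are placeholders):

cmake_minimum_required(VERSION 3.12)
project(mylib C)
include(GNUInstallDirs)

add_library(mylib src/mylib.c)

# Consuming a dependency through pkg-config
find_package(PkgConfig REQUIRED)
pkg_check_modules(FOO REQUIRED IMPORTED_TARGET libfoo)
target_link_libraries(mylib PRIVATE PkgConfig::FOO)

# Generating and installing a .pc file describing mylib
configure_file(mylib.pc.in "${CMAKE_CURRENT_BINARY_DIR}/mylib.pc" @ONLY)
install(FILES "${CMAKE_CURRENT_BINARY_DIR}/mylib.pc"
        DESTINATION "${CMAKE_INSTALL_LIBDIR}/pkgconfig")

Neither half is mandatory; the sketch only shows the mechanics of consuming and producing .pc files so the trade-off is easier to judge.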
I have an embedded project for an ARM platform, specifically aarch64.
Up until now I was using Make. I recently set up CMake with no particular issues.
I moved to CMake because I was under the impression it was a more modern build tool that would have allowed a smarter configuration.
For example, I can compile my project using different toolchains (aarch64-elf-gcc-linaro, aarch64-linux-gnu-gcc, ...), and I would like CMake to check whether any of those are installed on the system and use whichever is found first by default.
Is this possible (or even intended to be)? I'd expect it to be an easy feat for the tool, but after searching for a while I can't seem to find the right track.
Yes, you can make your CMake project search for the available toolchains installed on your OS, choose one, and compile your project. I also wrote a CMake setup for an ARM embedded project, because it is now portable between operating systems, Windows and Unix: on Linux there is an ARM toolchain installed, and on Windows there is Keil MDK. If you have different toolchains to choose between, you can write a CMake script that finds their paths with a command like find_path() and then includes the correct "toolchainxx.cmake" script with the right compiler flags for the chosen compiler.
For your particular problem, just use find_path() commands and use the hits to locate the compilers installed in known, pre-set paths.
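A minimal sketch of that idea, using find_program() (a close relative of find_path() meant for executables) and assuming the candidate compilers are on the PATH; the variable name and compiler list are illustrative:

# Try the candidate cross compilers in priority order; the first one found wins.
find_program(AARCH64_CC NAMES aarch64-elf-gcc aarch64-linux-gnu-gcc)

if(NOT AARCH64_CC)
    message(FATAL_ERROR "No aarch64 cross compiler found on this system")
endif()

message(STATUS "Using cross compiler: ${AARCH64_CC}")
set(CMAKE_C_COMPILER "${AARCH64_CC}")

Note that the compiler must be set before the project() call, or better yet inside a toolchain file passed with -DCMAKE_TOOLCHAIN_FILE=..., because CMake does not support switching compilers once the project has been configured.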
I just installed msys2 and mingw64, with their development packages. I really need perl-Gtk3. Perl is an msys2 package compiled with gcc 4.9.x; Gtk and friends are mingw packages compiled with gcc 5.
Perl complains "Glib.c: loadable library and perl binaries are mismatched (got handshake key 0xde00080, needed 0xdd80080)" when building Glib. Should this work?
Thanks.
PS ... mingw-w64-x86_64-perl is simply unable to compile. And yes, I'm careful to use a mingw shell vs an msys shell.
Are you still having this problem? I have been able to build a Perl dev environment in MinGW64, current as of this time.
I have been able to build Perl Gtk2 / Gtk3 applications in that environment and the GUIs work. (Both Gtk2 and Gtk3 based). These applications are used in a production environment with several thousand desktop users. The application runs on OSX, Windows, and Linux, and can be packed into a binary for release as an "executable" for those operating systems. The details here are for the Windows version.
I do this by first installing the requisite system packages with pacman, and then, as necessary, rebuilding from source (using makepkg-mingw) whatever system library packages I may have modified.
Then I build the requisite Perl modules using the CPAN shell, and the "look" command.
I use pkg-config to detect what library and header files are needed.
I then build (at minimum) the Perl Glib, Pango, Cairo, Gtk2, and Gtk3 modules using the perl Makefile.PL command.
The LIBS and INC options need to be added to that command to create a Makefile that includes the correct header files and links to the correct libraries. The EXTRALIBS and LDLOADLIBS sections of the Makefile need to be correct.
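As a rough illustration (the pkg-config module names are only examples; the exact list depends on the Perl module being built), the Glib step might look something like:

perl Makefile.PL INC="$(pkg-config --cflags glib-2.0 gobject-2.0)" LIBS="$(pkg-config --libs glib-2.0 gobject-2.0)"
make
make install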
Also, ExtUtils::MM_Win32.pm and ExtUtils::Liblist::Kid.pm needed to be edited due to the different archname reported by the MinGW64 perl.
I am only giving a general answer, because I was thinking of doing a YouTube video on this. If this is a desired topic, I will.
I realize this may be a really dumb question. Please humor me:
True or False: The only way to compile a program to run on a VxWorks platform is to purchase a development environment like Tornado or Workbench from WindRiver.
(I'm looking for an free/open-source solution to compile for a VxWorks platform.)
Outside of an academic license (which would use a VxWorks installation anyway), there is no way to legally compile your code for a VxWorks platform.
Technically, you CAN obtain the GNU toolchain used to compile code for VxWorks.
The issue you will run into is that you won't have access to the header files necessary for compiling your code or the libraries to link against.
One can use a generic GNU cross-compiler to generate ELF files and load them onto a running VxWorks system using the ld command. However, I don't recommend it for anything beyond proof-of-concept or initial experimentation; the VxWorks libraries, and Wind River's superb documentation of them, are both necessary.
On the development host:
powerpc-elf-eabi-gcc -c foo.c
Then, on the target-resident shell that has mounted a filesystem from the development host (for example, over NFS):
-> ld < foo.o
-> main()
(Where the function main() comes from foo.c)
Since VxWorks is proprietary, Wind River made it so you need their tools (Workbench or Tornado), which they supply, in order to develop for their OS.
I have a layered cmake project with a hierarchy of libraries and applications. Each of these libraries and applications has a CMakeLists.txt and a top level CMakeLists.txt that includes the sub-cmake files.
Right now we are developing and testing entirely on an x86 Linux platform but at some point we will want to start pulling the code into a Yocto build and target arm. We want to maintain being able to build for both x86 and arm.
I've seen some Yocto guides on building for x86, but these appear to build the entire world (the toolchain, Linux kernel, all libraries, etc.) and run the image via qemu. For our desktop use this is quite a bit of overkill, since our machines have compilers and we can just run the applications directly, but it would be very helpful to have bitbake build some libraries that we have dependencies on and that need to be installed to a 'virtual root'.
How can I use bitbake for native x86 projects (in place of, or in addition to, cmake) and be able to leverage the recipe files for Yocto later on?
I don't have much experience with Yocto, but I'm using another embedded Linux distribution with a similar concept: Buildroot. Buildroot creates a toolchain file (output/host/usr/share/buildroot/toolchainfile.cmake) for the currently selected toolchain.
You create two output folders for your project:
build-x86
build-arm
In the first folder you just execute:
cmake ../path-to-your-source
In the second one:
cmake ../path-to-your-source -DCMAKE_TOOLCHAIN_FILE=../path-to-buildroot/output/host/usr/share/buildroot/toolchainfile.cmake
If Yocto provides a toolchain file, you can use it directly. If not, you can create one yourself. See this wiki.
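A hand-written toolchain file for an aarch64 Linux target might look roughly like the sketch below; the compiler names and the sysroot path are assumptions to adapt to your SDK:

# toolchain-aarch64.cmake (sketch)
set(CMAKE_SYSTEM_NAME Linux)
set(CMAKE_SYSTEM_PROCESSOR aarch64)

set(CMAKE_C_COMPILER   aarch64-linux-gnu-gcc)
set(CMAKE_CXX_COMPILER aarch64-linux-gnu-g++)

# The target's headers and libraries (your 'virtual root')
set(CMAKE_SYSROOT /path/to/target/sysroot)

# Look for headers/libraries only in the sysroot, but run build tools from the host
set(CMAKE_FIND_ROOT_PATH_MODE_PROGRAM NEVER)
set(CMAKE_FIND_ROOT_PATH_MODE_LIBRARY ONLY)
set(CMAKE_FIND_ROOT_PATH_MODE_INCLUDE ONLY)

It is then passed to cmake exactly like the Buildroot one above, via -DCMAKE_TOOLCHAIN_FILE=.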
Update:
This section explains how you can add your own software to Buildroot as a package. Here the source folder override mechanism is described.