Building kernel modules for a different Linux version

I am new to writing kernel modules, so I am facing a few non-technical problems.
Building a kernel module for a specific kernel version (say 3.0.0-10, where 10 is the patch number) requires the headers of that same kernel version, so the straightforward approach is to install those headers and start development there.
But kernel headers for a patched kernel version are not always available.
For example, I have a guest kernel vmlinuz-3.0.0-10 running on the machine, and when I try to download the matching kernel headers, they are not found.
Another approach is to get the source of that specific kernel, but the problem is the same: the source of the patched kernel is not always available (it is not guaranteed that you can get the source of linux-kernel-3.0.0-10, or even linux-kernel-3.0.0 plus the 10th patch). In some situations it is possible to get the source of the running kernel, but not always.
Yet another approach is to build a kernel other than the running one and install the built kernel on the machine. But that requires building all the modules of that kernel, which is a time- and space-consuming process.
So the intention of asking this is to learn what kernel driver developers prefer. Are there other alternatives?
Is it possible to compile a kernel module against one version and run it on another? (Normally this gives an error, but are there any workarounds?)

So, building a new kernel is not a good option, as it would require:
building the kernel
building modules and firmware
building headers
moving all of the above to the appropriate locations (if your build machine is not the same one on which you are going to develop the module)
So if you have the kernel headers for the running system, you don't need to download the source code of any kernel version; to build your module, use
make -C /lib/modules/$(uname -r)/build M=$(pwd) modules
and your module will be ready.
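
For reference, here is a minimal sketch of a module that can be built with the command above (the file name hello.c and the messages are made up for illustration; alongside it you need a one-line Kbuild or Makefile entry, obj-m += hello.o):

/* hello.c - minimal out-of-tree kernel module (illustrative sketch) */
#include <linux/init.h>
#include <linux/kernel.h>
#include <linux/module.h>

static int __init hello_init(void)
{
        printk(KERN_INFO "hello: loaded\n");
        return 0;
}

static void __exit hello_exit(void)
{
        printk(KERN_INFO "hello: unloaded\n");
}

module_init(hello_init);
module_exit(hello_exit);
MODULE_LICENSE("GPL");

The resulting hello.ko can then be loaded with insmod and removed with rmmod.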
If better answers come along, I will not hesitate to accept one of them.

I know it's been a long time since this question was asked. I am new to kernel development and encountered the same error, but now I am able to load my module on a kernel different from the one I built it against. This is the solution:
Download the kernel-devel package related to the image you are running. Its version should be as close as possible to that of the running kernel.
Check that the functions you use in the module are covered by the header files in that kernel-devel package.
Change the UTS_RELEASE value in include/generated/utsrelease.h to the version of the kernel image running on your hardware (illustrated below).
Compile the module using this kernel tree.
Now you can insert your module into the kernel.
Note: it may cause some unwanted events to happen, as Shahbaz mentions in another answer. But if you are doing this just for experiments, I think it's good to go. :)
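
To illustrate the utsrelease.h tweak above: after the edit, include/generated/utsrelease.h should define the release string of the kernel actually running on the target (check it with uname -r), e.g. for the version from the question:

#define UTS_RELEASE "3.0.0-10"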

There is a way to build a module on one kernel and insert it in another. It is by turning off a certain configuration. I am not telling you which configuration it is because this is ABSOLUTELY DANGEROUS. The reason is that there may be changes between the kernels that could cause your module to behave differently, often resulting in a total freeze.
What you should do is to build the module against an already-built kernel (or at least a configured one). If you have a patched kernel, the best thing you can do is to build that kernel and boot your OS with that.
I know this is time-consuming. I have done it many, many times and I know how boring it can get, but once you do it right, it makes your life much easier. Kernel compilation takes about two hours or so, but you can parallelize it if you have a multi-core CPU. You can also let it compile before you leave the office (or, if at home, before going to bed) and let it work overnight.
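For the parallel build mentioned above, just pass make the number of jobs with -j, e.g. on an 8-core machine:
make -j8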
In short, I strongly recommend that you build the kernel you are interested in yourself.

Is Skiko right now only available for JVM awt?

Using this: https://github.com/JetBrains/skiko/
I was able to get the SkiaAwtSample to work, and it shows a window with a grid of animating clocks. It shows that the backend is OpenGL (I'm using Linux Mint 21 and have the NVIDIA proprietary drivers installed). My first impression is that the performance seems average at best. I suspect that if I tried to replicate this using plain old Java2D, I'd get similar performance; I also suspect that Java2D's performance is often downplayed. But it is not performance that I am after.
I want to stop investing in UI and graphics technologies that aren't portable.
The samples directory shows these 4 subdirectories:
SkiaAndroidSample SkiaAwtSample SkiaJsSample SkiaMultiplatformSample
When I try to use the build target in the SkiaJsSample directory, I get a long Maven error report that amounts to an unmet dependency: it wants org.jetbrains.skiko:skiko:0.0.0-SNAPSHOT with the attribute 'org.jetbrains.kotlin.platform.type' set to 'js'.
The DEVELOPMENT.md file only mentions building and making the library available in the local Maven repo using :skiko:publishToMavenLocal.
Digging further, I tried :skiko-js-wasm-runtime:publishToMavenLocal, but no such target exists.
It seems only the AWT stuff is included in the GitHub repository. Isn't the whole thing open source? I can find wasm-related entries in online Maven repos, but why can't we build it locally and publish to our local Maven repos?

Cannot disable CUDA build and the process stops

Trying to build the ArrayFire examples, everything goes well until I get to the CUDA ones. They are supposed to be skipped, since I have an AMD processor/GPU; however, the CUDA section is built anyway, failing for obvious reasons and interrupting the rest of the process.
I could manually change the CMakeLists.txt files, but is there a higher-level way to let the build system (CMake) know that I do not have a CUDA-compatible GPU?
It looks like the ArrayFire_CUDA_FOUND and CUDA_FOUND macros are erroneously defined on my system.
The ArrayFire CMake build provides a flag to disable the CUDA backend. Simply set AF_BUILD_CUDA to NO via -DAF_BUILD_CUDA=NO at the command line to disable CUDA.
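For example, assuming a typical out-of-source build from a build directory next to the sources:
cmake -DAF_BUILD_CUDA=NO ..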

Cross-compiling vs virtualization

I want my app to run on Windows and Ubuntu, in both 32- and 64-bit modes. So I must compile four times and also test it four times. The question is whether it's best to cross-compile or to compile in a virtual machine (VM) like VirtualBox.
I know cross-compiling is hard the first time, but this way I can keep the VMs used for testing "clean", with no development tools installed that might mask missing files on the end user's PC. On the other hand, compiling directly in a VM is much simpler.
So I ask:
What are other pros/cons for each method?
Which is the right way?
Which is the most used way and why?
TL;DR: In this case, skip cross-compilation. Build and test on each target platform directly.
Details: If you need to ship your software on these 4 platforms, you will need either physical or virtual manifestations of them, regardless of whether you cross-compile or compile natively on the target platform.
Why? Because you will want to run tests on every target platform, not just one.
Why? Because your cross-compiler could have bugs on one platform but not another, and because 32-bit vs. 64-bit as well as Linux vs. Windows are sufficiently different. For example, if you only run tests on Ubuntu-32, but cross-compile to Windows-64 and ship the software, you might find a problem only once it reaches the customer.
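As a small illustration of how different these targets really are, even fundamental C type sizes vary between them: 64-bit Linux is LP64 (long is 8 bytes) while 64-bit Windows is LLP64 (long is 4 bytes), so even this trivial program behaves differently on each of the four targets:

#include <stdio.h>

int main(void)
{
    /* 32-bit (both OSes):    long = 4, void* = 4
     * 64-bit Linux (LP64):   long = 8, void* = 8
     * 64-bit Windows (LLP64): long = 4, void* = 8 */
    printf("sizeof(long) = %zu, sizeof(void *) = %zu\n",
           sizeof(long), sizeof(void *));
    return 0;
}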
Cross-compiling is hard to set up and hard to maintain. Given that you're going to want to test the code, the installation, etc. on every one of these platforms, you might as well skip the cross-compilation and just run builds and tests on each target platform directly.
Speaking of keeping VM state "clean": don't set it up manually; create it from scratch every time. Use tools like Packer and Vagrant to automate the builds, and use clean VMs every time to ensure the process is reproducible and hermetic.

Cygwin & OCaml: OPAM + Batteries

I use Cygwin extensively in a Windows 8 environment (I do not want to go ahead and boot/load Linux directly on the machine). I use the OCamlIDE plug-in for Eclipse and have experienced practically no problems with this workflow setup.
However, I would like to use Batteries so that I can make use of its dynamic arrays, among a few other interesting features that will speed up my development process.
I have tried this method: http://ocaml.org/install.html, but I get the following error:
$ sh ./opam_installer.sh /usr/local/bin
No file yet for i686:CYGWIN_NT-6.2-WOW64
What am I missing, and how do I configure Cygwin so that it can accept the OPAM installer? When I tried yet another way of building OPAM, I got:
'i686-w64-mingw32-gcc' is not recognized as an internal or external command,
as a Makefile error and the reason for the build failure. Something seems to be wrong related to mingw32-gcc; what do I need to install and/or configure in Cygwin to get it to compile/build things properly? I have wget and curl installed as well.
My overall question: what is the best way to get Batteries installed on my system with a minimum of time spent tracing all of its dependencies by hand? Is there a way I can just build the library module, such as BatDynArray, and the includes:
include BatEnum.Enumerable
include BatInterfaces.Mappable
That way I can just call them directly in my code with open ...;; and/or include ...;;.
OCaml works beautifully on Windows with WODI, which is a Cygwin-based distribution that includes Batteries and tons of other useful packages (which are a pain to install manually on Windows).
I urge you to take a shot at WODI, which I believe to be an indispensable tool for the rest of us, the forgotten souls, who have to deal with Windows.
First of all, include does not do what you think it does. open Batteries should be exactly what you're looking for. OPAM is not yet solid on Windows (maybe Thomas could give an update on where things stand).
Frankly, I would recommend installing Linux in a VM; you should then be able to get started with OPAM instantly. Otherwise, take a look at this package manager for OCaml, which focuses on cross-platform support: http://yypkg.forge.ocamlcore.org/. I've never tried it myself, however. The last package manager you could try is GODI; I'm not sure about its Windows support, though.
Finally, if none of these options works, it should be possible to install Batteries from source. All you need is OCaml and make. And if there are problems with this approach, you should definitely follow up on them either here or on the bug tracker, because Batteries does intend to support Windows, AFAIK.

Can you freeze a C/C++ process and continue it on a different host?

I was wondering if it is possible to generate a "core" file, copy it to another machine, and then continue execution from that core file on the other machine.
I have seen the gcore utility, which will make a core file from a running process, but I do not think gdb can continue execution based on a core file.
Is there any way to just dump the heap/stack and restore those at a later point?
It's called process migration.
MOSIX and openMosix used to be able to do that. Nowadays it's easiest to migrate a whole VM.
On modern systems, you can't do it from a core file. For freezing and restoring an individual process on Linux, CryoPID and the newer kernel-based checkpoint-and-restart support are in the works, but their abilities are currently quite limited. OpenVZ and other virtualization-like software can freeze and restore an entire system.
Also check out the Condor project. Condor can do that with parallel jobs as well. Condor also includes monitors that can automatically migrate your process when, for example, someone starts using their workstation again. It's really designed for utilizing spare cycles in networked environments.
This won't, in general, be sufficient to let an arbitrary process continue on another machine. In addition to the heap and stack state, there may also be open I/O handles, allocated hardware resources, etc.
Your options are either to explicitly write your software in a way that lets it dump its state on a signal and later resume from the dumped state, or to run your software in a virtual machine and migrate that to the alternate host; Xen and VMware both support freeze/restore as well as live migration.
That said, CryoPID attempts to do precisely this and occasionally succeeds.
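
As a rough sketch of the first option (dump state on a signal and resume from it later): all names below are hypothetical, only the program's own plain data is saved, and open files, sockets, and other OS resources would have to be re-established by hand, as noted above.

#include <signal.h>
#include <stdio.h>
#include <stdlib.h>

/* Hypothetical application state: plain data only; no pointers,
 * file descriptors, or other OS resources, which cannot be dumped
 * this way. Assumes the same architecture/ABI on both hosts. */
struct app_state {
    long iterations_done;
    double partial_result;
};

static struct app_state state;
static volatile sig_atomic_t dump_requested = 0;

static void on_sigusr1(int sig)
{
    (void)sig;
    dump_requested = 1;   /* just set a flag in the handler */
}

static int save_state(const char *path)
{
    FILE *f = fopen(path, "wb");
    if (!f)
        return 0;
    int ok = fwrite(&state, sizeof state, 1, f) == 1;
    fclose(f);
    return ok;
}

static int restore_state(const char *path)
{
    FILE *f = fopen(path, "rb");
    if (!f)
        return 0;
    int ok = fread(&state, sizeof state, 1, f) == 1;
    fclose(f);
    return ok;
}

int main(void)
{
    signal(SIGUSR1, on_sigusr1);
    if (!restore_state("state.bin"))   /* resume from a previous dump if present */
        state.iterations_done = 0;

    for (;;) {
        state.iterations_done++;       /* ... one unit of real work here ... */
        if (dump_requested) {          /* SIGUSR1 received: dump and exit */
            save_state("state.bin");
            exit(0);
        }
    }
}

The state file can then be copied to another machine and the program restarted there, picking up where it left off.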
As of February 2017, there is a fairly stable and mature tool called CRIU, which depends on changes added to the Linux kernel in version 3.11 (released in September 2013, so most modern distros should have them in their kernels).
It can be installed via apt by simply calling sudo apt-get install criu.
Instructions on how to use it are available on the CRIU website.
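The basic usage is roughly as follows (the PID and image directory are placeholders; --shell-job is needed for processes started from a terminal):
criu dump -t 1234 -D /tmp/checkpoint --shell-job
criu restore -D /tmp/checkpoint --shell-job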
In some cases, this can be done. For example, part of the Emacs build process is to load up all the Lisp libraries and then dump the memory image on disk for quick loading. Some other language interpreters do that too (I'm thinking of Lisp and Scheme implementations, mostly). However, they're specially designed for that kind of use, so I don't know what special things they have to do to allow that to work.
I think this would be very hard to do for an arbitrary program, but if you wrote a framework in which all objects supported serialisation/deserialisation, you could serialise all the objects used by your program, ship them elsewhere, and deserialise them at the other end.
The other answers about virtualisation are spot on, too.
It depends on the machine. It's very doable in a very small embedded system, for instance. I think something like it is also implemented in Beowulf clusters and other supercomputer-esque applications.
There are lots of reasons you can't do what you want very easily. For example, when you restore the core file on the other machine, how do you resolve the file descriptors that your process had open? What about sockets, named pipes, semaphores, or any other OS-level resource? Basically, unless your system is specifically designed to handle such an operation, you can't naively dump a core file and move it to another machine.
I don't believe this is possible. However, you might want to look into virtualization software, e.g. Xen, which makes it possible to freeze and move entire system images from one machine to another.