Perl 6 NativeCall and C source files - raku

What is the best strategy for releasing a Perl 6 binding for a C library using NativeCall, for both Windows and Linux?
Does the developer need to compile both the .dll and .so files and upload them with the Perl 6 code to GitHub? Or is there an option in Perl 6, as in Perl 5, to bundle the C source files with the Perl 6 code so that the C compiler runs as part of make and make install?

The libraries do not need to be compiled first (although they could be). To accomplish this, you'll first need a Build.pm file in the root of your distribution:
class Builder {
    method build($dist-path) {
        # do build stuff to your module
        # which is located at $dist-path
    }

    # Only needed for panda compatibility
    method isa($what) {
        return True if $what.^name eq 'Panda::Builder';
        callsame;
    }
}
Then you'll want to use a module like LibraryMake. Here we use its make routine in the build method:
use LibraryMake;

class Builder {
    method build($dist-path) {
        make($dist-path, "$dist-path/resources");
        # or you could do the appropriate `shell` calls
        # yourself and have no extra dependencies
    }
    ...
This method is supported by the package managers zef and panda, and it also allows the build to be run manually via perl6 -I. -MBuild -e 'Builder.new.build($*CWD)'
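LibraryMake's make routine looks for a Makefile.in template in the given directory, substitutes platform-specific values (compiler, linker, file extensions, destination directory) for its placeholders, writes out a Makefile, and runs it. A rough sketch of such a template for a single C source file (libfoo and src/foo.c are made-up names, and placeholder names like %DESTDIR%, %CC% and %SO% should be checked against the LibraryMake documentation for your version):

all: %DESTDIR%/libfoo%SO%

%DESTDIR%/libfoo%SO%: foo%O%
	%LD% %LDSHARED% %LDFLAGS% %LDOUT%%DESTDIR%/libfoo%SO% foo%O%

foo%O%: src/foo.c
	%CC% -c %CCSHARED% %CCFLAGS% %CCOUT%foo%O% src/foo.c

Because build passes "$dist-path/resources" as the destination, the compiled library ends up under resources/, where the distribution's NativeCall code can find it after installation.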
Here is a working example

Related

Is there a provision to invoke jamfile and jamrule from cmake and vice versa?

I am trying to migrate a legacy codebase whose build system is jam to CMake.
To divide and conquer it, I am checking whether there is a provision to invoke a Jamfile and Jamrule from CMake, and vice versa.
One option would be to add a custom target invoking the jam program.
Is it also possible to use a Jamrule defined in a Jamfile / .jam file?
Disclaimer: There are several jam flavors. The answer applies to Perforce Jam and some of its compatible descendants.
As you've already mentioned yourself, invoking jam from cmake can be done with add_custom_target/add_custom_command, so that answers the first part of your question.
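A minimal sketch of such a custom target (the target name, jam invocation and directory are made up for illustration):

# Drive the legacy jam build as part of the CMake build
add_custom_target(legacy_jam ALL
    COMMAND jam
    WORKING_DIRECTORY ${CMAKE_CURRENT_SOURCE_DIR}/legacy
    COMMENT "Building legacy jam targets"
    VERBATIM)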
Since jam rules (or rather actions) can invoke arbitrary commands, the other direction is certainly possible as well. Note, however, that cmake itself is usually not the tool you invoke to build a target; depending on your generator, you would actually want to call make, ninja, etc.
In your question you're not very concrete regarding your migration approach, so let's assume you start out with a jam build system with multiple library and executable targets that span a dependency graph, and that you want to migrate the build system component by component. If you start bottom-up with a library without dependencies (whose sources hopefully live in their own subdirectory), you would replace the rule invocation that builds the library -- e.g. Library libfoo : foo.c bar.c ; -- with a rule invocation that calls e.g. make -- like Make libfoo$(SUFLIB) ;. The rule could be defined (e.g. in Jamrules) as:
rule Make
{
    # tell jam where the target will be created
    MakeLocate $(1) : $(LOCATE_TARGET) ;
    # always invoke the actions, since we don't let jam check the target's dependencies
    Always $(1) ;
    # we need the source dir in the actions
    SOURCE_DIR on $(1) = $(SUBDIR) ;
}

actions Make
{
    # get absolute source dir path
    sourceDir=`cd $(SOURCE_DIR) && pwd`
    # cd into the output directory
    cd $(LOCATE)
    # generate Makefile, if not done yet
    if [ ! -e Makefile ]; then
        cmake -G "Unix Makefiles" $sourceDir
    fi
    # make the target
    make `basename $(1)`
}
If you need other information from jam to be passed to cmake (like the build type or certain build options), define respective on-target variables (like SOURCE_DIR in the example) to have them available in the actions.
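For example (a sketch only; the variable name BUILD_TYPE and the value Release are made up), you would add one line to the rule and use the variable in the actions:

# in rule Make: record the build type on the target
BUILD_TYPE on $(1) = Release ;
# in actions Make: forward it to cmake when generating the Makefile
cmake -G "Unix Makefiles" -DCMAKE_BUILD_TYPE=$(BUILD_TYPE) $sourceDir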

Meson build: Add dependency path for executable manually

What I'd like to do is rather easy: compile a project using the Meson build system while manually including a dependency.
However, there's one dependency, which I do not want to install to /usr/lib, due to System Integrity Protection on Mac. (I know I can turn this off; I don't want to.)
So basically I wanna do:
g++ -L[path_to_lib] [files...] but use meson instead of g++.
However, this seems to be super complicated. After doing some research and unsuccessfully adding
cc = meson.get_compiler('c')
dep = cc.find_library('granite', dirs : [path_to_dep])
to my meson.build file (which doesn't work, as it handles libraries, not dependencies)
I'm left feeling rather dumb.
Please help!
I know I could just add the relevant path to $PATH, but that is more than overkill and I refuse to believe that there isn't another nice quick way to do so. (As is with the ancient c compiler...)
You should be able to solve your problem without modifying the meson.build file (i.e. leave granite as an ordinary dependency). meson uses pkg-config to search for dependencies, so if you add your non-standard path containing the granite package config file to PKG_CONFIG_PATH, meson will find it. In that case the granite package config file should of course be correct, i.e. contain the right library and header paths, which it should be if you configure the installation of granite with something like:
# Configure:
$ cmake -DCMAKE_INSTALL_PREFIX=/some/path...
# Build:
$ make
# Install (need sudo?):
$ make install
$ export PKG_CONFIG_PATH=/some/path...:$PKG_CONFIG_PATH
granite_dep = dependency('granite')
my_app = executable('my_app',
    dependencies : [granite_dep]
    ...
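For reference, what pkg-config needs to find is a plain-text granite.pc file on PKG_CONFIG_PATH; a rough sketch of its typical shape (paths and version are illustrative, the real file is generated when granite is built and installed):

prefix=/some/path
libdir=${prefix}/lib
includedir=${prefix}/include

Name: granite
Description: Granite widget library
Version: 0.4.0
Libs: -L${libdir} -lgranite
Cflags: -I${includedir}/granite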
However, note that in the case of find_library(), according to the reference manual:
The result object can be used just like the return value of dependency
So, this should also work:
granite_dep = cc.find_library('granite', dirs : [path])
executable(..., dependencies : granite_dep)
But I recommend the standard way that utilizes pkg-config, because granite can also have dependencies of its own that you would not be able to pick up automatically this way.

Does the main kernel "make" command also make modules internally?

I am learning how to write kernel drivers and this is my first attempt to build one. I have created a folder drivers/naveen/ for my module files - hello.c, Kconfig and Makefile. These are the contents of those files:
Kconfig
config HELLO_WORLD
	tristate "Hello World support"
	default m
	---help---
	  This option enables printing hello world
Makefile
obj-$(CONFIG_HELLO_WORLD) += hello.o
hello.c
#include <linux/module.h>
#include <linux/kernel.h>
#include <linux/init.h>

static int __init hello_init(void)
{
	printk(KERN_ERR "This is NAVEEN module\n");
	return 0;
}

static void __exit hello_exit(void)
{
	printk(KERN_ERR "NAVEEN exiting module\n");
}

module_init(hello_init);
module_exit(hello_exit);

MODULE_AUTHOR("Naveen");
MODULE_LICENSE("GPL");
Also, I have added the following line in drivers/Makefile :
obj-$(CONFIG_HELLO_WORLD) += naveen/
and the following line in drivers/Kconfig :
source "drivers/naveen/Kconfig"
My generated .config contains CONFIG_HELLO_WORLD=m.
I did make ARCH=x86_64 -j16 and I can see hello.ko generated. Why? I was expecting it to be generated only when I ran make modules, since it is set as modular (m) in the .config, and not to be compiled with just make. Can someone please explain this behaviour to me, or what I am doing wrong?
Does that mean that make also does make modules? I can see from make help that make actually means make all, and hence it should run make modules internally as well, so there should be no need to do make modules once make is successful.
The Linux kernel consists of two parts: the core kernel and the modules. When we simply do make, it means make all, which amounts to make vmlinux && make modules. Hence, if we have done make, we need not do make modules again and can simply run make modules_install without doing make modules.
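In other words, on an already configured tree a typical sequence is simply:

$ make -j16                  # builds vmlinux and every module set to m
$ sudo make modules_install  # installs the modules
$ sudo make install          # installs the kernel image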
You are doing nothing wrong. The modules target has been a dependency of the all target since kernel version 2.6.0 (actually since kernel version 2.5.60 I think).
The way you are adding your module is to add it to the kernel source tree. It is also possible to build custom modules outside the kernel source tree - so called "out-of-tree" kernel modules. Typically, those don't need a Kconfig file and the obj-$(CONFIG_HELLO_WORLD) would be replaced with obj-m in the Makefile.
Here is a Makefile for an "out-of-tree" version of your "hello" module:
ifneq ($(KERNELRELEASE),)
# Kbuild part of Makefile
obj-m += hello.o
else
# Normal part of Makefile
#
# Kernel build directory specified by KDIR variable
# Default to running kernel's build directory if KDIR not set externally
KDIR ?= "/lib/modules/`uname -r`/build"

all:
	$(MAKE) -C "$(KDIR)" M=`pwd` modules

clean:
	$(MAKE) -C "$(KDIR)" M=`pwd` clean
endif
This Makefile uses a common trick so that the same Makefile can be invoked as a "normal" Makefile and as a "kbuild" Makefile. The "normal" part between the else and endif lines invokes $(MAKE) on the kernel's Makefile (the location of which is specified by the KDIR variable), telling it to build the modules target in the current directory (specified by M=`pwd`). The "kbuild" part is between the ifneq($(KERNELRELEASE),) and else lines and is in the normal "kbuild" format for building parts of the kernel.
That trick depends on the KERNELRELEASE variable being initially unset or empty. It will be set to a non-empty value by the kernel's Makefile rules.
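With that Makefile in place, a typical out-of-tree build and test cycle looks like this (building against the running kernel; loading and unloading require root):

$ make                   # builds hello.ko
$ sudo insmod hello.ko   # load the module; the init message appears in the kernel log
$ dmesg | tail
$ sudo rmmod hello       # unload the module
$ make clean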

Correctly Building Fortran Libraries And Using Them To Build Applications

I found a few previous questions regarding this, but was unable to find specific advice on correctly associating libraries and module files (*.mod) in a Makefile.
I have a project directory named project where all source files for a library are in project/src, all compiled *.mod files are placed in project/include, and static libraries are created into the directory project/lib using the following:
ar rc myLibrary.a module1.o module2.o module3.o
Following this, I create application code (a Fortran program that uses these libraries) in the directory project/applications. I have now, at the root level (that is, inside project), created a simple shell script that builds the application. This is the part where I cannot get the process to work.
Here is what I am doing:
INCLUDELIB='./include'
LINKLIB='./lib'
INCLUDEOTHER=<include directories for other math libraries>
LINKOTHER=<link directories and link flags for other math libraries>
COMPILER='ifort'
COMPOPTS=<compiler flags, currently I use none>
# building the application:
$COMPILER $COMPOPTS -c ./applications/application.f90 -I$INCLUDELIB $INCLUDEOTHER -L$LINKLIB $LINKOTHER
$COMPILER $COMPOPTS application.o -I$INCLUDELIB $INCLUDEOTHER -L$LINKLIB $LINKOTHER -o application.out
This procedure does not work, and it gives Error in opening the compiled module file. Check INCLUDE paths.
I tried a few variants of the above from my readings on the web about this, and I hope that it is not some minor/silly error that I am overlooking that is leading to this.
Any help or advice will be much appreciated.
This is the message you get when things were not done right with the library (it's not your fault!).
*.mod files are compiler-specific, but *.o files are not: *.mod files from gfortran are not compatible with *.mod files from ifort. Therefore, when you build a library, you should put all your API functions and subroutines outside of the modules. For example:
don't do this:
module x
    ...
contains
    subroutine sub_x
        ...
    end subroutine sub_x
end module
but do this instead:
module x
    ...
end module

subroutine sub_x
    use x
    ...
end subroutine sub_x
In this way you don't require the users to use mod files, and you can distribute your library as a .a or a .so archive.
In your case, the library you use was almost surely compiled with gfortran, so you are stuck with gfortran. The solution is to write another library as a wrapper around the original library. For example, do this
for each function/subroutine you need:
subroutine wrapped_sub_x(arguments)
    use x
    call sub_x(arguments)
end
Then you compile your wrapper library with gfortran into a .a archive and link it to your project with ifort. In your project, don't forget to call the wrapper routines instead of the original library routines.
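A minimal sketch of that last step, assuming the wrapper source is wrapper.f90 and the original gfortran-built library is libx.a (file names are illustrative; the gfortran runtime library may also be needed at link time):

gfortran -c wrapper.f90 -o wrapper.o
ar rc libwrapper.a wrapper.o
ifort ./applications/application.f90 -L. -lwrapper -lx -lgfortran -o application.out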

Create Custom Builds of an Xcode Project

I am going to build a Mac application written in Obj-C with Xcode. For argument's sake let's say it will have 10 optional features. I need a way to enable or disable those features to create custom builds of the application. These builds would be automated (most likely through the Mac OS X Terminal) so I would need a way to state which of these features are enabled/disabled at build time (a configuration file or CLI arguments would be ideal.)
So what is the best way to accomplish this? I'm trying to plan this out before I start coding so that there is proper separation in my code base to allow for these features to come and go. Ideally the custom build would only contain compiled code for the features it should have. In other words I don't want to always compile all the features and condition them out at runtime.
You can use Xcode configurations for this purpose; for each configuration you could include a different prefix header, for example. Then you can trigger builds from the command line via xcodebuild.
If you'd prefer the config file approach, you can use a .xcconfig file instead to define any of the Xcode build settings.
The Xcode Build System Guide describes both of these approaches.
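As a sketch of the .xcconfig route (the file name, feature macros and target name are made up for illustration), you could keep one .xcconfig per custom build that defines the feature macros, then point xcodebuild at it:

// FeaturesAB.xcconfig -- enable features A and B for this build
GCC_PREPROCESSOR_DEFINITIONS = $(inherited) FEATURE_A=1 FEATURE_B=1

$ xcodebuild -project MyApp.xcodeproj -target MyApp -configuration Release -xcconfig FeaturesAB.xcconfig

Build settings can also be overridden directly on the xcodebuild command line as NAME=VALUE arguments, which may be simpler for fully scripted builds.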
Use #ifdef and the -D flag under the compiler flags to control whether stuff is compiled in or out. You can set up lots of different configs this way if you want, and have them map nicely onto Xcode build configurations.
#include <stdio.h>

int
main (void)
{
#ifdef TEST
  printf ("Test mode\n");
#endif
  printf ("Running...\n");
  return 0;
}
output 1:
$ gcc -Wall -DTEST dtest.c
$ ./a.out
Test mode
Running...
output 2:
$ gcc -Wall dtest.c
$ ./a.out
Running...
source: http://www.network-theory.co.uk/docs/gccintro/gccintro_34.html