Use collections with cargo without stdlib - embedded

I am currently trying to set up an embedded Rust project. For that it would be nice if I could use the collections crate (and, by extension, the alloc crate, since collections requires it). Is there an easy way to achieve this? I currently have the following dependencies in Cargo.toml:
[build-dependencies]
gcc = "0.3"
[dependencies]
rust-libcore = "*"
[dependencies.rlibc]
git = "https://github.com/hackndev/rlibc"
branch = "zinc"
And use them as follows:
#![no_std]
#![crate_type="staticlib"]
#![feature(lang_items)]
#![feature(start)]
// This is not found when building with Cargo
extern crate collections;
//#[cfg(target_os = "none")]
extern crate rlibc;
#[start]
pub fn main(_argc: isize, _argv: *const *const u8) -> isize {
    // or some call like this
    collections::vec::Vec::<u8>::new();
    0
}
Is there an easy way to include the collections crate?

One possible solution is to compile it yourself. This requires having a checkout of the Rust source. I don't have a working environment to test this in, so take this suggestion with a pinch of salt. Conceptually, you would do something like this:
cd $RUST_SRC_DIR
rustc --version --verbose | grep commit-hash # Grab the hash
git checkout $RUSTC_HASH
mkdir cross-compiled-libraries
rustc --target=arm-whatever-whatever -O src/libcollections/lib.rs \
--out-dir cross-compiled-libraries
Repeat the last step for whatever libraries you need. A lot of this is taken from the ideas in Embedded Rust Right Now!.
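You can then point your own build at the resulting rlibs with -L. As a sketch (the target triple and source file name are illustrative):
rustc --target=arm-whatever-whatever -L cross-compiled-libraries your_code.rs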
A big concern with this solution is that libcollections requires an allocator. Generally, that is jemalloc or the system allocator. I don't know if either is available on the target you are compiling for...
This doesn't quite get you all the way to something easy to use with Cargo, since the libraries inside the Rust source tree aren't Cargo-ified yet. You could create a new Cargo project and add something like this to its Cargo.toml:
[lib]
path = "/path/to/rust/src/libcollections/lib.rs"
That would then allow you to rely on Cargo more.
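A fuller sketch of such a wrapper project's Cargo.toml might look like this (the package name is made up, and the path depends on where your Rust checkout lives):
[package]
name = "local-collections"
version = "0.0.1"
authors = []

[lib]
name = "collections"
path = "/path/to/rust/src/libcollections/lib.rs"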

Related

how to customize nix package builder script

The root problem is that nix uses autoconf to build libxml2-2.9.14 instead of cmake, and a consequence of this is that the cmake configuration is missing (details like the version number and platform-specific dependencies like ws2_32, etc., which are needed by my project's cmake scripts). libxml2-2.9.14 already comes with a cmake configuration and works nicely, except that nix does not use it (I guess they have their own reasons).
Therefore I would like to reuse the libxml2-2.9.14 nix package and override the builder script with my own (which is a trivial cmake dance).
Here is my attempt:
defaultPackage = forAllSystems (system:
  let
    pkgs = nixpkgsFor.${system};
    cmakeLibxml = pkgs.libxml2.overrideAttrs (o: rec {
      PROJECT_ROOT = builtins.getEnv "PWD";
      builder = "${PROJECT_ROOT}/nix-libxml2-builder.sh";
    });
  in
Where nix-libxml2-builder.sh is my script calling cmake with all the options I need. It fails like this:
last 1 log lines:
> bash: /nix-libxml2-builder.sh: No such file or directory
For full logs, run 'nix log /nix/store/andvld0jy9zxrscxyk96psal631awp01-libxml2-2.9.14.drv'.
As you can see, the issue is that PROJECT_ROOT does not get set (it is ignored), and I do not know how else to feed in my builder script.
What am I doing wrong?
Guessing from the use of defaultPackage in your snippet, you use flakes. Flakes are evaluated in pure evaluation mode, which means there is no way to influence the build from outside. Hence, getEnv always returns an empty string (unfortunately, this is not properly documented).
There is no need to refer to the builder script via $PWD. The whole flake is copied to the nix store so you can use your files directly. For example:
builder = ./nix-libxml2-builder.sh;
That said, the build will probably still fail, because cmake will not be available in the build environment. You would have to override the nativeBuildInputs attribute to add cmake there.
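A hedged sketch of both changes together (untested; your builder script still has to drive the whole build itself):
cmakeLibxml = pkgs.libxml2.overrideAttrs (o: {
  # The flake is copied to the nix store, so a relative path works.
  builder = ./nix-libxml2-builder.sh;
  # Make cmake available inside the build environment.
  nativeBuildInputs = (o.nativeBuildInputs or [ ]) ++ [ pkgs.cmake ];
});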

Using extended classes in gst (GNU smalltalk)?

This is a bit of a follow-up question to this one.
Say I've managed to extend the Integer class with a new method 'square'. Now I want to use it.
Calling the new method from within the file is easy:
Integer extend [
    square [
        | r |
        r := self * self.
        ^r
    ]
]
x := 5 square.
x printNl.
Here, I can just call $ gst myprogram.st in bash and it'll print 25. But what if I want to use the method from inside an interactive GNU Smalltalk session? Like this:
$ gst
st> 5 square
25
st>
This may have to do with images, I'm not sure. This tutorial says I can edit the ~/.st/kernel/Builtins.st file to change which files are loaded into the kernel, but I have no such file.
I would not edit what's loaded into the kernel. To elaborate on my comment: one way of loading previously created files into the GNU Smalltalk environment, outside of using image files, is to use packages.
A sample package.xml file, which defines the package per the documentation, would look like:
<package>
  <name>MyPackage</name>
  <!-- Include any prerequisite packages here, if you need them -->
  <prereq>PrerequisitePackageName</prereq>
  <filein>Foo.st</filein>
  <filein>Bar.st</filein>
</package>
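For the question's example, Foo.st could simply contain the Integer extension from above:
"Foo.st: the Integer extension from the question"
Integer extend [
    square [
        | r |
        r := self * self.
        ^r
    ]
]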
A sample Makefile for building the package might look like:
# MyPackage makefile
#
PACKAGE_DIR = ~/.st
PACKAGE_SPEC = package.xml
PACKAGE_FILE = $(PACKAGE_DIR)/MyPackage.star
PACKAGE_SRC = \
	Foo.st \
	Bar.st

# Note: the recipe line must be indented with a tab.
$(PACKAGE_FILE): $(PACKAGE_SRC) $(PACKAGE_SPEC)
	gst-package -t ~/.st $(PACKAGE_SPEC)
With the above files in your working directory alongside Foo.st and Bar.st, you can run make and it will build the .star package file and put it in ~/.st (the first place gst looks for packages). When you run your environment, you can then use PackageLoader to load it in:
$ gst
GNU Smalltalk ready
st> PackageLoader fileInPackage: 'MyPackage'
Loading package PrerequisitePackageName
Loading package MyPackage
PackageLoader
st>
Then you're ready to rock and roll... :)

rpm spec file skeleton to real spec file

The aim is to have a skeleton spec file, fun.spec.skel, which contains placeholders for Version, Release, and that kind of thing.
For the sake of simplicity, I am trying to make a build target which fills in those variables, transforming fun.spec.skel into a fun.spec that I can then commit to my GitHub repo. This is done so that rpmbuild -ta fun.tar works nicely and no manual modifications of fun.spec.skel are required (people tend to forget to bump the version in the spec file, but not in the build system).
Assuming the implied question is "How would I do this?", the common answer is to put placeholders like ##VERSION## in the file and then sed the file, or to get more complicated and have autotools do it.
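A minimal sketch of the sed approach, using the file names from the question (the ##VERSION## marker and the VERSION shell variable are illustrative):
sed -e "s/##VERSION##/${VERSION}/g" fun.spec.skel > fun.spec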
We place a version.mk file in our project directories which defines environment variables. Sample content includes:
RELPKG=foopackage
RELFULLVERS=1.0.0
As part of a script which builds the RPM, we can source this file:
#!/bin/bash
. $(pwd)/version.mk
export RELPKG RELFULLVERS
if [ -z "${RELPKG}" ]; then exit 1; fi
if [ -z "${RELFULLVERS}" ]; then exit 1; fi
This leaves us a couple of options to access the values which were set:
We can define macros on the rpmbuild command line:
% rpmbuild -ba --define "relpkg ${RELPKG}" --define "relfullvers ${RELFULLVERS}" foopackage.spec
We can access the environment variables using %{getenv:...} in the spec file itself (though this makes error handling harder...):
%define relpkg %{getenv:RELPKG}
%define relfullvers %{getenv:RELFULLVERS}
From here, you simply use the macros in your spec file:
Name: %{relpkg}
Version: %{relfullvers}
We have similar values (environment variables populated through Jenkins) that supply the build number, which plugs into the Release tag.
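For example (relbuildnum is an illustrative macro name; BUILD_NUMBER is Jenkins' standard build-number variable):
% rpmbuild -ba --define "relbuildnum ${BUILD_NUMBER}" foopackage.spec
with, in the spec file:
Release: %{relbuildnum}%{?dist}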
I found two ways:
a) use something like
Version: %(./waf version)
where version is a custom waf target
from waflib.Context import Context

# VERSION is assumed to be defined at the top of the wscript
def version_fun(ctx):
    print(VERSION)

class version(Context):
    """Printout the version and only the version"""
    cmd = 'version'
    fun = 'version_fun'
This checks the version at rpm build time.
b) create a target that modifies the specfile itself
from waflib.Context import Context
from waflib import Logs
import re

# VERSION is assumed to be defined at the top of the wscript
def bumprpmver_fun(ctx):
    spec = ctx.path.find_node('oregano.spec')
    data = None
    with open(spec.abspath()) as f:
        data = f.read()
    if data:
        data = re.sub(r'^(\s*Version\s*:\s*)[\w.]+\s*',
                      r'\g<1>{0}\n'.format(VERSION),
                      data, flags=re.MULTILINE)
        with open(spec.abspath(), 'w') as f:
            f.write(data)
    else:
        Logs.warn("Didn't find that spec file: '{0}'".format(spec.abspath()))

class bumprpmver(Context):
    """Bump version"""
    cmd = 'bumprpmver'
    fun = 'bumprpmver_fun'
The latter is used in my pet project oregano on GitHub.

How can you use autoconf to check if a member of an httpd.h typedef struct exists

How do I make a GNU autoconf script test for typedef'd struct members, using APXS as the compiler?
I have defined the following tests but the results are not what I'm expecting...
AC_CHECK_MEMBER(struct conn_rec.remote_ip, define 'USE_CON_REC_REMOTE_IP',,[#include "httpd.h"]);
AC_CHECK_MEMBER(struct conn_rec.client_ip, define 'USE_CON_REC_CLIENT_IP',,[#include "httpd.h"]);
AC_CHECK_MEMBER(struct conn_rec.remote_addr, define 'USE_CON_REC_REMOTE_ADDR',,[#include "httpd.h"]);
All of these tests return "no" even though I know that the first and last tests should return "yes". I suspect this may be because these are typedefs instead of structs, and/or because autoconf isn't using APXS to run the tests.
The full code is at https://github.com/rritoch/PikeVM/blob/master/root/boot/system-1.1/apache/configure.ac
I am hoping there is a preexisting solution that doesn't require making custom test scripts.
AC_CHECK_MEMBER is for the C/C++ compiler. There are apxs macros to help set up compilation with apxs. It shouldn't be too hard to translate AC_CHECK_MEMBER into a macro suitable for apxs.
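As a hedged, untested sketch: point the preprocessor at Apache's headers via apxs and let the ordinary compiler run the check. The member spec can name the conn_rec typedef directly instead of struct conn_rec (you will likely also need APR's include flags for httpd.h to compile):
# Locate Apache's headers through apxs.
CPPFLAGS="$CPPFLAGS -I`apxs -q INCLUDEDIR`"

AC_CHECK_MEMBER([conn_rec.remote_ip],
                [AC_DEFINE([USE_CON_REC_REMOTE_IP], [1],
                           [conn_rec has a remote_ip member])],
                [],
                [[#include "httpd.h"]])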

SCONS: making a special script builder depend on output of another builder

I hope the title clarifies what I want to ask because it is a bit tricky.
I have a SCons SConscript for every subdir, as follows (doing it on Linux, if it matters):
src_dir
    compiler
        SConscript
        yacc srcs
    scripts
        legacy_script
    data
        SConscript
        data files for the yacc
I use a variant_dir without copy, for example:
SConscript('src_dir/compiler/SConscript', variant_dir = 'obj_dir', duplicate = 0)
The resulting obj_dir after building the yacc is:
obj_dir
    compiler
        compiler_compiler.exe
Now here is the deal.
I have another SConscript in the data dir that needs to do 2 things:
1. Compile the data with the yacc-built compiler.
2. Take the output of the compiler and run it through the legacy_script I can't change (the legacy_script takes the compiled data output and builds some .h files for another piece of software to depend on).
Number 1 is achieved easily:
linux_env.Command(['output1', 'output2'], 'data/data_files', 'compiler_compiler.exe data_files output1 output2')
My problem is number 2: how do I make the script runner depend on the outputs of another target?
And just to clarify, I need to make SCons run the following (and only if compiler_output changes):
src_dir/script/legacy_script obj_dir/data/compiler_output obj_dir/some_dir/script_output
(the script's usage is: legacy_script input_file output_file)
I hope I made myself clear, feel free to ask some more questions...
I had a similar problem recently, when I needed to compile Cheetah templates first, which were then used by another builder to generate HTML files from different sources.
If you define the build output of the first builder as the source of the second builder, SCons will run them in the correct order and rerun the second step only if the intermediate files have changed.
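A minimal sketch using the names from the question (paths and file names are assumptions):
# First builder: compile the data with the yacc-built compiler.
compiled = linux_env.Command(
    ['output1', 'output2'],
    'data/data_files',
    'compiler_compiler.exe data_files output1 output2')

# Second builder: its source is the first builder's target, so SCons
# orders the two steps and reruns the script only when that file changes.
linux_env.Command(
    'script_output',
    compiled[0],
    'src_dir/scripts/legacy_script $SOURCE $TARGET')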
Wolfgang