I have installed the "mipsel tuxbox" toolchain for cross-compiling.
The host system is x86_64 Slackware.
The target is a 32-bit mipsel "vuduo+".
For example, when I want to compile a program, I use this script:
make clean
export TOOLCHAIN=/opt/mipsel-tuxbox-linux-gnu
export PATH="$TOOLCHAIN/bin:$PATH"
export CC=/opt/mipsel-tuxbox-linux-gnu/mipsel-tuxbox-linux-gnu/bin/gcc
export RANLIB=/opt/mipsel-tuxbox-linux-gnu/mipsel-tuxbox-linux-gnu/bin/ranlib
make
It compiles, but the executable is... x86_64!
If I use this line instead, it gives me a lot of errors about includes not being found:
make CC=/opt/mipsel-tuxbox-linux-gnu/mipsel-tuxbox-linux-gnu/bin/gcc STRIP=/opt/mipsel-tuxbox-linux-gnu/mipsel-tuxbox-linux-gnu/bin/strip CPPFLAGS="-I/opt/mipsel-tuxbox-linux-gnu/mipsel-tuxbox-linux-gnu/sysroot/usr/include/linux/ -I/opt/mipsel-tuxbox-linux-gnu/mipsel-tuxbox-linux-gnu/sysroot/usr/include/sys/"
What's wrong?
You should also take a look at http://code.vuplus.com/index.php?action=repo
These systems are based on https://github.com/openembedded, which uses the https://github.com/openembedded/bitbake build system.
Besides the original VU+ repository above, there are many others, such as
https://github.com/OpenPLi
https://github.com/oe-alliance
https://www.vuplus-support.org/wbb4/vtisoftware/
that let you build and integrate anything from simple applications to full-blown system images with consistent dependencies (a generic BitBake sketch follows the list of alternatives below).
Other options are:
use gcc/llvm/etc. and make (https://www.linux-mips.org/wiki/Toolchains).
create a Go cross-compiler with crossdev (https://wiki.gentoo.org/wiki/Crossdev), or build and manage your software with crossdev.
use Rust (https://www.rust-lang.org/tools/install) with the mipsel-unknown-linux-gnu target.
build an Erlang runtime with https://github.com/joaohf/meta-erlang and the OpenEmbedded build system mentioned above, then use it directly on your box.
cross-compile D with GDC or LDC (https://wiki.dlang.org/Compilers).
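For orientation, setting up a plain OpenEmbedded/BitBake build usually looks roughly like the lines below. This is only a generic sketch: the clone URLs, layer set, MACHINE value and image name all depend on which of the above distributions you pick, and the VU+/OpenPLi/OE-Alliance trees ship their own setup scripts.
git clone git://git.openembedded.org/openembedded-core oe-core
cd oe-core
git clone git://git.openembedded.org/bitbake
. ./oe-init-build-env                 # creates build/ and puts bitbake on PATH
# add your BSP/distro layers to conf/bblayers.conf and set MACHINE in conf/local.conf
bitbake core-image-minimal            # or a single package recipe instead of a whole image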
Solution found: a script like the following compiles fine.
make clean
export TOOLCHAIN=/opt/mipsel-tuxbox-linux-gnu
export PATH=$PATH:/opt/mipsel-tuxbox-linux-gnu/libexec/gcc/mipsel-tuxbox-linux-gnu/4.8.1/:/opt/mipsel-tuxbox-linux-gnu/bin
export LDFLAGS=-L/opt/mipsel-tuxbox-linux-gnu/lib
export LD_LIBRARY_PATH=/opt/mipsel-tuxbox-linux-gnu/lib
make CC=mipsel-tuxbox-linux-gnu-gcc LD=mipsel-tuxbox-linux-gnu-ld
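To confirm that the cross toolchain is actually the one being used, you can query the compiler and check the produced binary with file (myprog here is just a placeholder for whatever your Makefile builds):
mipsel-tuxbox-linux-gnu-gcc -dumpmachine    # should print the mipsel target triplet, not x86_64-...
file ./myprog                               # should report a 32-bit LSB MIPS ELF executable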
MediaPipe exists for Python and JS, but I would like to use it from an unsupported native language. Usually I use a DLL and exported functions.
With the current MediaPipe source, is it possible to generate a DLL?
I tried to generate a DLL from the Python module but I got an error; I don't know if I need to do this for all the MediaPipe files:
python -m nuitka --module my_mediapipe_module.py
I have identified one module to use, selfie_segmentation, and only two methods: SelfieSegmentation (__init__) and process.
That doesn't look too difficult, but I don't really know where to start.
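To make it concrete, the wrapper module I have in mind would be something like the sketch below (the file name matches the Nuitka command above, the model_selection value is just an example, and I'm assuming the regular mediapipe Python package API):
# my_mediapipe_module.py - thin wrapper around MediaPipe selfie segmentation (sketch)
import cv2
import mediapipe as mp

_segmenter = None

def init(model_selection=0):
    # wraps SelfieSegmentation.__init__
    global _segmenter
    _segmenter = mp.solutions.selfie_segmentation.SelfieSegmentation(
        model_selection=model_selection)

def process(image_path):
    # wraps SelfieSegmentation.process and returns the segmentation mask (a NumPy array)
    image = cv2.cvtColor(cv2.imread(image_path), cv2.COLOR_BGR2RGB)
    return _segmenter.process(image).segmentation_mask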
Thanks
So I'm trying to use stb_image in my Kotlin/Native project and I'm having trouble including it. It's a header-only library, but konan seems to expect a compiled object file anyway, so I was wondering if there is any way of just generating the cstubs and then using the header for linking, unless I have to compile a basic translation unit. stb_image only requires that one translation unit defines STB_IMAGE_IMPLEMENTATION, and I have that defined in my compilerOpts as -GSTB_IMAGE_IMPLEMENTATION. Would it be easier to just compile a translation unit, create the static object, and then link against it, or does K/N have some way of doing that for me?
I am using the Gradle multiplatform plugin, so if there is some Gradle script I can run, please let me know.
My -GSTB_IMAGE_IMPLEMENTATION was supposed to be -DSTB_IMAGE_IMPLEMENTATION, and I needed to put my -I switch in my compilerOpts, not linkerOpts.
I recommend actually creating a translation unit, but it's not required.
You can just give the header file with the compilerOpts, as you've done, and that should work.
You can look at this as a reference. I'm working on a wrapper in my free time.
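For reference, the cinterop definition file for this kind of header-only setup can look roughly like the sketch below; the include path and package name are placeholders, and the -D flag in compilerOpts is what causes the implementation to be compiled into the generated stubs, so no extra object file or linkerOpts entry should be needed:
headers = stb_image.h
package = stb.image
compilerOpts = -DSTB_IMAGE_IMPLEMENTATION -I/path/to/stb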
As far as I understand, using Bazel I can only produce libtensorflow_cc.so and libtensorflow_framework.so.
I need to produce static libs that are position-independent (-fPIC) because I'll link them into a dynamic lib of my own later.
I found this answer, which suggests using a Makefile included in the project.
I successfully used it to replace libtensorflow_cc.so, but what can I do to replace libtensorflow_framework.so?
Not an actual answer, but too long for a comment.
I managed to do something like what you mention using Bazel on Windows. In particular, I wanted to make a single wrapper DLL with one or two headers (limited in functionality) that I could move around easily. I'll write a summary of what I did; it's rather convoluted and customized for our needs, but maybe you'll find something useful.
I pass --config=monolithic to the bazel build command (besides any other options that you need). That will avoid modularizing the library and thus remove the dependency on libtensorflow_framework.so (see tools/bazel.rc).
The goal that I build is not any of the ones in the TensorFlow repository. Instead, I add a very small program that uses my wrapper as a new Bazel target (a C++ file plus my headers and a BUILD file). So all of TensorFlow has to be compiled beforehand in order to compile this final dummy program.
When I get that done, I take advantage of the fact that Bazel already compiles every subgoal as a static library. I check a file under the bazel-bin directory generated for my dummy program goal with a name ending in .params; there I find the paths of all the static libraries that were used to compile it.
I copy all of these intermediate static libraries somewhere else. Also, I copy a bunch of headers I will need to compile my final wrapper (TensorFlow's own, but also Eigen, Protobuf and now Nsync too). I put all of this in a build area I have prepared beforehand.
I use an NMake Makefile to produce my custom DLL, using the static libraries, the copied headers and my own thin wrapper.
And that's about it, I think. I have an ugly Bash script I run on MSYS2 that does everything for me. Usually with every new release I need to tweak one or two things (some option in the configure script, some additional headers I need to copy, etc.), but I do get it to work in the end. It's quite a lot of fiddling though, so I'm not necessarily saying you should use the same approach (but feel free to ask for details about any step if you want).
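For reference, the Bazel side of this boils down to something like the lines below; //myproject:dummy_wrapper is a hypothetical target standing in for the small dummy program described above:
bazel build -s --config=monolithic //myproject:dummy_wrapper   # -s prints the exact compile/link commands
find bazel-bin -name "*.params"                                # these files list the objects/static libs passed to the linker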
Using the -2.params files @jdehesa mentioned and Bazel's verbose output (the -s switch), you can even create a link command to eventually statically link these intermediate static libraries. I automated this process for Windows/Linux/macOS and included it in the vcpkg package manager. To use it, just run vcpkg install tensorflow:x64-windows-static. If you're interested in the sources, you'll find them here.
I am working on a project and I have a plan to separate certain sections out into separate dlls/ndlls in the final program. The main reason I want to do this is to support plugins and add-ons, so that more functionality can be added if needed, but the core app can still be used if that's the only requirement.
I have done something similar in C# (albeit through an IDE, so I never had to write any linker/compiler commands), so I know the general process, but I can't seem to find a way to write Haxe code and then have it compile into an ndll.
I found http://old.haxe.org/doc/cpp/ffi?lang=en, which shows how to compile C++ code into an ndll using hxcpp and g++. I would think there should be a way to use Lime or HXCPP to create a build file that lets me do it in one step, instead of having to make a "fake" main function to compile the Haxe to C++ or C#.
If anyone knows of a project that does this and has a build.hxml or build.xml file that describes it, or a tutorial or guide that talks about this, I would love to see it.
Try this:
lime create extension TestExt
lime rebuild TestExt windows
Replace "windows" with "mac" or "linux" as appropriate. Assuming it works, the ndll will show up in a subfolder of TestExt/ndll/.
As for tutorials, I wrote this one. It's targeted at OpenFL programmers, but the "Writing code for iOS" section covers what you'll need to know. (You can also just model your code on the template.)
In case it helps, I've made a tool to generate some of the boilerplate code that Lime requires.
I'm writing a custom check for installed libraries in autoconf:
AC_DEFUN([AC_GHC_PKG_CHECK],[
...
GHC_PKG_RESULT=$($PYTHON autotools/check-ghc-version-range ....)
...
])
where my Python script that actually performs the check resides in the autotools/ sub-directory of the project.
However, this is not portable; for example, make distcheck fails because the autoconf tools are then called from a different directory. How can I reference the absolute path to my Python script so that it gets called properly no matter what the current directory is?
ac_top_srcdir or ac_abs_top_srcdir should work in this case:
AC_DEFUN([AC_GHC_PKG_CHECK],[
...
GHC_PKG_RESULT=$($PYTHON $ac_top_srcdir/autotools/check-ghc-version-range ....)
...
])
EDIT: I don't think this approach will work -- it seems that $ac_top_srcdir isn't evaluated until later (AC_OUTPUT?).
What I think might work in this instance is to do something similar to what the runtime C tests do: blast a configuration test into a temporary file (conftest.py instead of conftest.c in this case) and run it. Unfortunately, there are (as yet) no builtin macros or other automake/autoconf tools that directly assist with this task.
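A minimal sketch of that idea inside the macro could look like this; the embedded script is only a stand-in for the real check in autotools/check-ghc-version-range:
AC_DEFUN([AC_GHC_PKG_CHECK],[
cat > conftest.py <<'EOF'
# stand-in for the real version-range check
import sys
sys.exit(0)
EOF
GHC_PKG_RESULT=$($PYTHON conftest.py ....)
rm -f conftest.py
])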
Fortunately, it seems that a clever person has written at least a couple of different ways to do this. The first is GNU pyconfigure, which seems to have facilities for writing Python test code as I described above. The second is more of an ad hoc macro collection that he used for his project.
You can use $srcdir.
It's not necessarily an absolute path, but it's a path that points from the top of the build tree to the top of the source tree.
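Using the macro from the question, that would look something like:
AC_DEFUN([AC_GHC_PKG_CHECK],[
...
GHC_PKG_RESULT=$($PYTHON "$srcdir/autotools/check-ghc-version-range" ....)
...
])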