Using the mathutils module from Blender in an independent project

I have a plugin built for Blender which I want to decouple from Blender and run independently. The only dependency is the mathutils module. Is there a way to use the mathutils module from Blender in an independent project?

There is a Python package called mathutils, which can be found at https://gitlab.com/ideasman42/blender-mathutils It has all the submodules except kdtree. To install it, simply run
sudo pip install mathutils
Note that it requires Python 3.
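Once installed, the package behaves like the mathutils module inside Blender. A minimal sketch of typical usage (Vector, Matrix, dot, cross, and Matrix.Rotation are part of the standard mathutils API; note that older releases use * instead of @ for matrix-vector multiplication):

import math
from mathutils import Vector, Matrix

a = Vector((1.0, 0.0, 0.0))
b = Vector((0.0, 1.0, 0.0))

print(a.dot(b))    # 0.0
print(a.cross(b))  # Vector((0.0, 0.0, 1.0))

# A 3x3 rotation of 90 degrees around the Z axis
rot = Matrix.Rotation(math.radians(90.0), 3, 'Z')
print(rot @ a)     # on older versions: rot * a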

The headers and sources are written in C++ and compiled into the Blender executable, so I guess you could download the source files and have a look, but you might not be able to isolate and recompile what you need, in particular because of interdependencies and the way the C++ classes are mapped into Python objects.

Related

Best way to write a setup script for a multi-language project package that includes Anaconda, Atom, Node.js, etc.?

I am designing an environment for productive research, i.e. writing, data analysis, publication, etc.
In order to share the final results with others, I need to find a way to package this and to set up the local installation.
The project depends on Anaconda, so conda as a package manager is available.
It also includes
Pandoc and some Pandoc packages; some will have to be fetched from GitHub directly because some versions are not available via conda-forge (doable in conda)
Atom and Atom packages; they should be installed and configured by my script (this works on the CLI via the apm package manager)
Node.js and Mermaid and a few other JS packages, which require npm calls
Some file-system-level operations, like deleting parts of packages I only need a portion of, creating symlinks and aliases, etc.
Maybe some Python code for modifying YAML/JSON/INI files or reading from them.
The main project will reside in a Github repository. It will be fine for users to clone it from there and start a build script locally.
My idea is to write a Bash shell script that
creates a conda environment based on requirements.yaml for everything that can be done this way
installs other parts using CLI commands (wget/curl etc.)
does all necessary modifications using CLI commands, maybe using a few short Python scripts, e.g. for changing or reading JSON or YAML files (see the sketch below).
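For the last point, a minimal sketch of such a helper script (the file name config.json and the keys used are made up for illustration; YAML would work the same way with PyYAML's safe_load/safe_dump):

import json

# Read a JSON config, change one setting, write it back.
with open("config.json") as f:
    config = json.load(f)

# "paths" and "data_dir" are hypothetical keys.
config.setdefault("paths", {})["data_dir"] = "/opt/shared/data"

with open("config.json", "w") as f:
    json.dump(config, f, indent=2)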
My local usage will be on macOS Big Sur; Linux should be supported, and Windows compatibility would be nice to have.
Before I start:
Is this approach viable? I think it will be pretty transparent, but of course also a bit proprietary.
Docker is likely overkill for my purpose, and I have also read that execution will be slow on macOS.
The same environment will likely be installed multiple times on the same user's machine, so it is important that I can control, e.g., the reuse of existing packages and files via aliases or symlinks. It is not important that the multiple installations be decoupled for the non-Python/non-conda parts (e.g. Atom, Node.js, and Mermaid could be the same binaries for all installations; just the set of Python packages might vary by installation).
Thanks for your expertise!

How can I try a Cygwin package that I just created?

I'm the maintainer of a program that I might like to propose for inclusion in the Cygwin distribution.
We use CMake, so there is a packager available, and it's easy to create a .bz2 package.
Once I've created the package, how can I try it locally? On Linux this is easy to do, but is there a way to use the Cygwin package installer so that it picks up a local package?
I've read the package contribution documentation and related pages but can't find an answer.
The CMake Cygwin package generator seems extremely out of date; Cygwin hasn't used .bz2 for some years. This is from a Cygwin mailing list answer by Adam Dinwoodie:
Cygwin packages generally use Cygport to define the build process and so forth. It's more-or-less the equivalent of rpmbuild for RPM packages, and similar tools for other distribution systems. The documentation for Cygport is at http://cygwinports.github.io/cygport/; if you're using make in a reasonably standard way, most things should Just Work™. In particular, if you're using Cygport, it'll automatically do things like creating setup.hint files for you.

For testing locally, I find it's simplest to just do tar -xaC/ -f <tarball> on the compiled tarballs that Cygport generates. That doesn't test the dependency management or anything that requires post-install scripts, but it's fine for checking that the installation itself works.
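If you want to script that test step instead of calling tar by hand, the same extraction can be done with Python's tarfile module. A minimal sketch (the tarball name is a placeholder; extracting into / should of course only be done in a disposable Cygwin installation):

import tarfile

# Equivalent of `tar -xaC/ -f <tarball>`: mode "r:*" auto-detects
# the compression, much like tar's -a flag.
with tarfile.open("mypackage-1.0-1.tar.xz", "r:*") as tar:
    tar.extractall(path="/")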

Can a library that uses CMake also be built with SCons?

I want to use KDL (Kinematics and Dynamics Library) in a robot control box. But the control box uses SCons as its build system, while KDL uses CMake.
It turned out that the control box doesn't have CMake installed. Should I install CMake on the control box, or write an SCons file for compiling KDL?
====================================================
My question is ambiguous. Sorry for that. And unfortunately, I cannot share a link to the control box; it's not public. Here is the link to the KDL installation manual:
http://www.orocos.org/kdl/installation-manual
Let me make it more clear.
Forget the previous question and everything about the control box and KDL. Let's say that you want to use a library, but according to its installation manual it can only be built using CMake. Your PC doesn't have CMake installed, but it has SCons, and unfortunately you must not install CMake on your PC.
If you can only use SCons, what can you do?
I know this situation is not usual; I want to know your opinion.
To answer your initial question: yes, you should always try to install CMake if that is a build requirement for your library and you need to build the library from source.
To answer your later question: replacing or rewriting the build system scripts is a major effort and not advisable. In general there is no script to convert build systems; at best, such a script might help with a manual transformation. If you look at LLVM's effort to replace Autotools with CMake, or Boost replacing its own build system with CMake, you will find that it takes several people several years, and still not everybody is satisfied.
Often you don't need to build the library yourself. Either there are already-built packages from the project directly or from your distribution (Debian etc. packages), or from third-party packagers like MacPorts or NuGet.
In your case KDL provides Debian/Ubuntu packages.
Additionally, KDL is part of ROS, which is experimental in Homebrew for OS X.
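If you go the prebuilt-package route, your SCons project only needs to link against the installed library, and SConstruct files are plain Python anyway. A minimal sketch (the library name orocos-kdl and the include/lib paths are assumptions based on the usual Debian packaging; check the actual names on your system):

# SConstruct
env = Environment(CPPPATH=['/usr/include'],
                  LIBPATH=['/usr/lib'],
                  CXXFLAGS=['-std=c++11'])

# Link a demo program against the preinstalled KDL.
env.Program('kdl_demo', ['kdl_demo.cpp'], LIBS=['orocos-kdl'])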

Is there a way for CMake to automatically extend the system PATH variable to include compiled executables?

Can CMake configuration files also be used to automatically extend the system PATH variable to include the directories of all the installed executable applications? If this is possible (and a standard practice), how can I do it?
This way, as soon as I configure all the CMakeLists.txt files and everything compiles (and hopefully runs) nicely, I can start using the applications, and the path configuration would be packaged together with the build process. I am working on Linux and my code is written in C++, but since CMake is cross-platform, the question extends to other systems as well.
I'm unaware of any capability in CMake to do this. However, we based what we do on what Cantera does. They recently switched to SCons from their old build system, but the idea still applies.
Anyway, there's a script that CMake configures with the paths during the configure step and then installs somewhere. So once built on Linux, one would run make install and then source ~/setup_cantera, and it sets up all the variables needed.
We do the same thing for our libraries built with CMake. It's possible to detect which shell the user is running and configure an appropriate template script (a sketch of that idea follows).
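As an illustration of that last idea, here is a minimal sketch (not Cantera's actual script; the install prefix and file names are made up) that detects the user's shell and writes a matching setup script extending PATH. In a real build, CMake would substitute the prefix during the configure step, e.g. via configure_file:

import os

# Hypothetical install prefix; CMake would normally fill this in.
prefix = "/opt/myproject"

shell = os.path.basename(os.environ.get("SHELL", "/bin/sh"))

if shell in ("csh", "tcsh"):
    script, line = "setup.csh", 'setenv PATH "{}/bin:$PATH"\n'
else:
    script, line = "setup.sh", 'export PATH="{}/bin:$PATH"\n'

with open(script, "w") as f:
    f.write(line.format(prefix))

print("Run `source {}` to put {}/bin on your PATH.".format(script, prefix))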

How to develop and package with ActivePython?

I have been developing a (somewhat complicated) app in core Python (2.6) and have also managed to use PyInstaller to create an executable for deployment in measurements or for distribution to my colleagues. I work on Ubuntu.
What has troubled me is upgrading the versions of numpy or scipy. Some features I need are in 0.9, and I'm still on 0.7. The process of upgrading them, or matplotlib for that matter, is not elegant. The way I've upgraded on my local machine was to delete the folders of these libraries and then manually install the newer versions.
However, this does not work on machines where I do not have root access. While trying to find a workaround, I found ActivePython. I gave it a quick try, and it seems to use PyPM to download the newest scipy and numpy to its custom install location. Excellent! I don't need root access and can use the latest versions of the libraries.
QUESTION:
If there are libraries not available in the PyPM index with ActivePython, how can I directly use the source code of those libraries (for example, wxPython) to include them in this installation?
How can I use PyInstaller to build an executable using only the libraries in the ActivePython installation?