Problems adding DKMS support to kernel module - dynamic

I'm trying to add DKMS support to a kernel module I'm working on.
I have placed the kernel module source, along with a static library to link against, in the following directory:
/usr/src/dpx/1.0
With the following files:
dkms.conf
Makefile
dpxmtt.c
lib.a
The dkms.conf file looks like this:
MAKE="make"
CLEAN="make clean"
BUILT_MODULE_NAME=dpx
BUILT_MODULE_LOCATION=src/
DEST_MODULE_LOCATION=/kernel/drivers/input/touchscreen
PACKAGE_NAME=dpxm
PACKAGE_VERSION=1.0
REMAKE_INITRD=yes
And the Makefile looks like this:
EXTRA_CFLAGS+=-DLINUX_DRIVER -mhard-float
obj-m += dpx.o
dpx-objs:= dpxmtt.o ../source/lib.a
all:
make -C /lib/modules/$(shell uname -r)/build M=$(PWD) modules
clean:
make -C /lib/modules/$(shell uname -r)/build M=$(PWD) clean
The ../source/lib.a path is a hack: when the Makefile is invoked by the DKMS build system, it complained that lib.a could not be found in the build directory, but since the file does get copied to the source directory, I reference it relative to that.
When I call
sudo dkms build -m dpx -v 1.0
The result is almost perfect:
santos@NS-PC:~$ sudo dkms build -m dpx -v 1.0
Kernel preparation unnecessary for this kernel. Skipping...
Building module:
cleaning build area....
make KERNELRELEASE=3.0.0-14-generic....
ERROR (dkms apport): binary package for dpx: 1.0 not found
Error! Build of dpx.ko failed for: 3.0.0-14-generic (i686)
Consult the make.log in the build directory
/var/lib/dkms/dpx/1.0/build/ for more information.
nsantos@NS-PC:~$
And the content of the log file is:
DKMS make.log for dpx-1.0 for kernel 3.0.0-14-generic (i686)
Thu Jan 19 11:07:54 WET 2012
make -C /lib/modules/3.0.0-14-generic/build M=/var/lib/dkms/dpx/1.0/build modules
make[1]: Entering directory `/usr/src/linux-headers-3.0.0-14-generic'
CC [M] /var/lib/dkms/dpx/1.0/build/dpxmtt.o
LD [M] /var/lib/dkms/dpx/1.0/build/dpx.o
Building modules, stage 2.
MODPOST 1 modules
CC /var/lib/dkms/dpx/1.0/build/dpx.mod.o
LD [M] /var/lib/dkms/dpx/1.0/build/dpx.ko
make[1]: Leaving directory `/usr/src/linux-headers-3.0.0-14-generic'
The module was built correctly but it ends with the error:
ERROR (dkms apport): binary package for dpx: 1.0 not found
Error! Build of dpx.ko failed for: 3.0.0-14-generic (i686)
And I don't know what it means. Does anybody know?

Using:
$(shell uname -r)
in the Makefile might also be wrong. "shell uname -r" refers to the currently running kernel, but the main reason to use DKMS is that it offers an automated way to recompile out-of-tree kernel modules for every newly installed kernel. In other words, the Makefile might refer to a different kernel than the one DKMS is building the module for.
Use:
${kernelver} instead.
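DKMS expands ${kernelver} when it reads dkms.conf, so one way to wire the target kernel through is to pass it down to the Makefile as a variable. A minimal sketch: the variable name KVERSION and the uname -r fallback are my own choices, and the lib.a handling is left out for brevity (recipe lines must be indented with a tab):
# dkms.conf: let DKMS hand the target kernel version to make
MAKE="make KVERSION=${kernelver}"
CLEAN="make clean"

# Makefile: fall back to the running kernel only for manual builds
KVERSION ?= $(shell uname -r)
obj-m += dpx.o
dpx-objs := dpxmtt.o

all:
	make -C /lib/modules/$(KVERSION)/build M=$(PWD) modules
clean:
	make -C /lib/modules/$(KVERSION)/build M=$(PWD) clean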

I had a similar problem. I think your BUILT_MODULE_LOCATION is incorrectly set to the src directory. In your example it should point to the current directory, or you can simply omit the variable and DKMS will default to the current directory.
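For reference, a dkms.conf along those lines (BUILT_MODULE_LOCATION simply dropped, everything else kept from the question) would look roughly like this; treat it as a sketch rather than a verified configuration:
PACKAGE_NAME=dpxm
PACKAGE_VERSION=1.0
MAKE="make"
CLEAN="make clean"
BUILT_MODULE_NAME=dpx
# no BUILT_MODULE_LOCATION: DKMS then looks for dpx.ko in the build directory itself
DEST_MODULE_LOCATION=/kernel/drivers/input/touchscreen
REMAKE_INITRD=yes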

Related

Setting up TensorFlow with OpenCV, SciPy, scikit-learn on MacBook Pro M1

I think I have read most of the guides on setting up tensorflow, tensorflow-hub, and object detection on a Mac M1 running BigSur v11.6. I managed to work through most of the errors after more than 2 weeks, but I am stuck on the OpenCV setup. I tried to compile it from source, but it seems it cannot find the modules from its core package, so the make step constantly fails after a successful cmake configuration. It fails at different stages, complaining about different libraries even though they are there; the furthest it got was 31%, after multiple cmake runs and deletions of the build folder or the CMake cache file. So I am not sure what to do to get the build to complete.
I git-cloned and unzipped opencv-4.5.0 and opencv_contrib-4.5.0 in my miniforge3 directory. Then I created a "build" folder inside opencv-4.5.0, and the cmake command I use in it is (my miniforge conda environment is called silicon, and I made sure I am using arch arm64 in the bash environment):
cmake -DCMAKE_SYSTEM_PROCESSOR=arm64 -DCMAKE_OSX_ARCHITECTURES=arm64 -DWITH_OPENJPEG=OFF -DWITH_IPP=OFF -D CMAKE_BUILD_TYPE=RELEASE -D CMAKE_INSTALL_PREFIX=/usr/local -D OPENCV_EXTRA_MODULES_PATH=/Users/adi/miniforge3/opencv_contrib-4.5.0/modules -D PYTHON3_EXECUTABLE=/Users/adi/miniforge3/envs/silicon/bin/python3.8 -D BUILD_opencv_python2=OFF -D BUILD_opencv_python3=ON -D INSTALL_PYTHON_EXAMPLES=ON -D INSTALL_C_EXAMPLES=OFF -D OPENCV_ENABLE_NONFREE=ON -D BUILD_EXAMPLES=ON /Users/adi/miniforge3/opencv-4.5.0
It then fails like this:
[ 20%] Linking CXX shared library ../../lib/libopencv_core.dylib
[ 20%] Built target opencv_core
make: *** [all] Error 2
In other attempts it initially asked for calib3d or dnn, but those modules are present in the main opencv-4.5.0 folder.
The other way I tried to install OpenCV is with conda:
conda install opencv
But then when I test with
python -c "import cv2; cv2.__version__"
it seems to look for ffmpeg via homebrew (I didn't install any of these via homebrew, only with conda). So it complained:
Traceback (most recent call last):
File "<string>", line 1, in <module>
File "/Users/adi/miniforge3/envs/silicon/lib/python3.8/site-packages/cv2/__init__.py", line 5, in <module>
from .cv2 import *
ImportError: dlopen(/Users/adi/miniforge3/envs/silicon/lib/python3.8/site-packages/cv2/cv2.cpython-38-darwin.so, 2): Library not loaded: /opt/homebrew/opt/ffmpeg/lib/libavcodec.58.dylib
Referenced from: /Users/adi/miniforge3/envs/silicon/lib/python3.8/site-packages/cv2/cv2.cpython-38-darwin.so
Reason: image not found
Though I do have these libs; when I searched with find /usr/ -name 'libavcodec.58.dylib' I found several locations:
find: /usr//sbin/authserver: Permission denied
find: /usr//local/mysql-8.0.22-macos10.15-x86_64/keyring: Permission denied
find: /usr//local/mysql-8.0.22-macos10.15-x86_64/data: Permission denied
find: /usr//local/hw_mp_userdata/Internet_Manager/OnlineUpdate: Permission denied
/usr//local/lib/libavcodec.58.dylib
/usr//local/Cellar/ffmpeg/4.4_2/lib/libavcodec.58.dylib
(silicon) MacBook-Pro:opencv-4.5.0 adi$ ln -s /usr/local/Cellar/ffmpeg/4.4_2/lib/libavcodec.58.dylib /opt/homebrew/opt/ffmpeg/lib/libavcodec.58.dylib
ln: /opt/homebrew/opt/ffmpeg/lib/libavcodec.58.dylib: No such file or directory
One of the guides said to install homebrew also in arm64 env, so I did it with:
/bin/bash -c "$(curl -fsSL https://raw.githubusercontent.com/Homebrew/install/HEAD/install.sh)"
export PATH="/opt/homebrew/bin:/usr/local/bin:$PATH"
alias ibrew='arch -x86_64 /usr/local/bin/brew' # create brew for intel (ibrew) and arm/ silicon
Not sure whether that affects anything, but it seems it did not change anything, because it still uses /opt/homebrew/ instead of /usr/local/.
Any help would be highly appreciated to get any of these approaches working. Ultimately I want to use the TensorFlow Model Zoo object detection models. All the other dependencies seem fine (for now); either OpenCV does not work, or, if it does work via conda install, then scipy and scikit-learn don't.
In my case I also had a lot of trouble trying to install both modules. I finally managed to do so, but to be honest I'm not really sure how or why. I leave the requirements below in case you want to recreate the environment that worked in my case. You should have conda Miniforge 3 installed:
# This file may be used to create an environment using:
# $ conda create --name <env> --file <this file>
# platform: osx-arm64
absl-py=1.0.0=pypi_0
astunparse=1.6.3=pypi_0
autocfg=0.0.8=pypi_0
blas=2.113=openblas
blas-devel=3.9.0=13_osxarm64_openblas
boto3=1.22.10=pypi_0
botocore=1.25.10=pypi_0
c-ares=1.18.1=h1a28f6b_0
ca-certificates=2022.2.1=hca03da5_0
cachetools=5.0.0=pypi_0
certifi=2021.10.8=py39hca03da5_2
charset-normalizer=2.0.12=pypi_0
cycler=0.11.0=pypi_0
expat=2.4.4=hc377ac9_0
flatbuffers=2.0=pypi_0
fonttools=4.31.1=pypi_0
gast=0.5.3=pypi_0
gluoncv=0.10.5=pypi_0
google-auth=2.6.0=pypi_0
google-auth-oauthlib=0.4.6=pypi_0
google-pasta=0.2.0=pypi_0
grpcio=1.42.0=py39h95c9599_0
h5py=3.6.0=py39h7fe8675_0
hdf5=1.12.1=h5aa262f_1
idna=3.3=pypi_0
importlib-metadata=4.11.3=pypi_0
jmespath=1.0.0=pypi_0
keras=2.8.0=pypi_0
keras-preprocessing=1.1.2=pypi_0
kiwisolver=1.4.0=pypi_0
krb5=1.19.2=h3b8d789_0
libblas=3.9.0=13_osxarm64_openblas
libcblas=3.9.0=13_osxarm64_openblas
libclang=13.0.0=pypi_0
libcurl=7.80.0=hc6d1d07_0
libcxx=12.0.0=hf6beb65_1
libedit=3.1.20210910=h1a28f6b_0
libev=4.33=h1a28f6b_1
libffi=3.4.2=hc377ac9_2
libgfortran=5.0.0=11_1_0_h6a59814_26
libgfortran5=11.1.0=h6a59814_26
libiconv=1.16=h1a28f6b_1
liblapack=3.9.0=13_osxarm64_openblas
liblapacke=3.9.0=13_osxarm64_openblas
libnghttp2=1.46.0=h95c9599_0
libopenblas=0.3.18=openmp_h5dd58f0_0
libssh2=1.9.0=hf27765b_1
llvm-openmp=12.0.0=haf9daa7_1
markdown=3.3.6=pypi_0
matplotlib=3.5.1=pypi_0
mxnet=1.6.0=pypi_0
ncurses=6.3=h1a28f6b_2
numpy=1.21.2=py39hb38b75b_0
numpy-base=1.21.2=py39h6269429_0
oauthlib=3.2.0=pypi_0
openblas=0.3.18=openmp_h3b88efd_0
opencv-python=4.5.5.64=pypi_0
openssl=1.1.1m=h1a28f6b_0
opt-einsum=3.3.0=pypi_0
packaging=21.3=pypi_0
pandas=1.4.1=pypi_0
pillow=9.0.1=pypi_0
pip=22.0.4=pypi_0
portalocker=2.4.0=pypi_0
protobuf=3.19.4=pypi_0
pyasn1=0.4.8=pypi_0
pyasn1-modules=0.2.8=pypi_0
pydot=1.4.2=pypi_0
pyparsing=3.0.7=pypi_0
python=3.9.7=hc70090a_1
python-dateutil=2.8.2=pypi_0
python-graphviz=0.8.4=pypi_0
pytz=2022.1=pypi_0
pyyaml=6.0=pypi_0
readline=8.1.2=h1a28f6b_1
requests=2.27.1=pypi_0
requests-oauthlib=1.3.1=pypi_0
rsa=4.8=pypi_0
s3transfer=0.5.2=pypi_0
scipy=1.8.0=pypi_0
setuptools=58.0.4=py39hca03da5_1
six=1.16.0=pyhd3eb1b0_1
sqlite=3.38.0=h1058600_0
tensorboard=2.8.0=pypi_0
tensorboard-data-server=0.6.1=pypi_0
tensorboard-plugin-wit=1.8.1=pypi_0
tensorflow-deps=2.8.0=0
tensorflow-macos=2.8.0=pypi_0
termcolor=1.1.0=pypi_0
tf-estimator-nightly=2.8.0.dev2021122109=pypi_0
tk=8.6.11=hb8d0fd4_0
tqdm=4.63.1=pypi_0
typing-extensions=4.1.1=pypi_0
tzdata=2021e=hda174b7_0
urllib3=1.26.9=pypi_0
werkzeug=2.0.3=pypi_0
wheel=0.37.1=pyhd3eb1b0_0
wrapt=1.14.0=pypi_0
xz=5.2.5=h1a28f6b_0
yacs=0.1.8=pypi_0
zipp=3.7.0=pypi_0
zlib=1.2.11=h5a0b063_4
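To recreate the environment, the listing can be fed back to conda exactly as its own header suggests; the environment name silicon is just an example:
# save the listing above as requirements.txt, then:
conda create --name silicon --file requirements.txt
conda activate silicon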

The libgit2 examples are not building properly

I have built libgit2 on my Ubuntu 16.04 machine, and everything seemed fine. I ran make in the /examples directory and when I try to run ./log I get the following:
./log: error while loading shared libraries: libgit2.so.26: cannot open shared object file: No such file or directory
But, in the /build folder I indeed have both libgit2.so and libgit2.so.26 so I am not really sure what I am missing. I can post more info if it is needed. I am using cmake version 3.5.1.
The Makefile in the examples will provide guidelines for usage, which should be suitable when you have actually installed libgit2 into system library locations.
To build the examples within the source directory, use cmake to drive the build. Given a fresh configuration:
$ mkdir build
$ cd build
$ cmake .. -DBUILD_EXAMPLES=ON
$ cmake --build .
...truncated...
$ examples/log
commit 8ac8c78c35905f7f9cc37f240c3d633a7cc5a5e3
Merge: 34ec6f3 4955125
Author: Edward Thomson <ethomson@edwardthomson.com>
Date: Mon Oct 9 15:15:08 2017 +0100
Merge pull request #4356 from pks-t/pks/static-clar
cmake: use static dependencies when building static libgit2
...truncated...
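Alternatively, if you do want to run a binary built by the in-tree examples Makefile before installing libgit2, pointing the dynamic loader at your build directory is a common workaround. A minimal sketch, assuming libgit2.so.26 ended up in ../build relative to the examples directory:
# run from the examples directory; adjust the path to wherever libgit2.so.26 lives
LD_LIBRARY_PATH=../build:$LD_LIBRARY_PATH ./log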

Bazel build fails with "Executing genrule @six_archive//:copy_six failed" error while building syntaxnet

I'm trying to follow the instructions at syntaxnet's github page to build syntaxnet parser models.
My system is a Debian Wheezy. Shouldn't be very different from Ubuntu 14.04 LTS or 15.05. I have compiled bazel 0.2.2 (as opposed to 0.2.2b) from source and it appears to work correctly.
Whenever I launch the bazel test syntaxnet/... util/utf8/... command, no tests are executed (all skipped) with some quite cryptic error messages. Here's an example:
root@host:~/tensorflow_syntaxnet/models/syntaxnet# ../../bazel/output/bazel test syntaxnet/... util/utf8/...
Extracting Bazel installation...
.............
INFO: Found 65 targets and 12 test targets...
ERROR: /root/.cache/bazel/_bazel_root/74c6bab7a21f28ad02405b720243d086/external/six_archive/BUILD:1:1: Executing genrule @six_archive//:copy_six failed: namespace-sandbox failed: error executing command /root/.cache/bazel/_bazel_root/74c6bab7a21f28ad02405b720243d086/syntaxnet/_bin/namespace-sandbox ... (remaining 5 argument(s) skipped).
unshare failed with EINVAL even after 101 tries, giving up.
INFO: Elapsed time: 95.469s, Critical Path: 22.46s
//syntaxnet:arc_standard_transitions_test NO STATUS
//syntaxnet:beam_reader_ops_test NO STATUS
//syntaxnet:graph_builder_test NO STATUS
//syntaxnet:lexicon_builder_test NO STATUS
//syntaxnet:parser_features_test NO STATUS
//syntaxnet:parser_trainer_test NO STATUS
//syntaxnet:reader_ops_test NO STATUS
//syntaxnet:sentence_features_test NO STATUS
//syntaxnet:shared_store_test NO STATUS
//syntaxnet:tagger_transitions_test NO STATUS
//syntaxnet:text_formats_test NO STATUS
//util/utf8:unicodetext_unittest NO STATUS
Executed 0 out of 12 tests: 12 were skipped.
I'm using Oracle Java 8 JDK as recommended, and my compiler is:
~/tensorflow_syntaxnet/models/syntaxnet# gcc --version
gcc (Debian 4.7.2-5) 4.7.2
Copyright (C) 2012 Free Software Foundation, Inc.
This is free software; see the source for copying conditions. There is NO
warranty; not even for MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.
Tried looking into the namespace-sandbox binary that's mentioned in the error message, but before I dive deep into this, I thought I'd ask here.
~/tensorflow_syntaxnet/models/syntaxnet# ls -l /root/.cache/bazel/_bazel_root/74c6bab7a21f28ad02405b720243d086/syntaxnet/_bin/namespace-sandbox
lrwxrwxrwx 1 root root 108 May 13 14:52 /root/.cache/bazel/_bazel_root/74c6bab7a21f28ad02405b720243d086/syntaxnet/_bin/namespace-sandbox -> /root/.cache/bazel/_bazel_root/install/ca381eaad1c931167a6355cb8a2b98cf/_embedded_binaries/namespace-sandbox
~/tensorflow_syntaxnet/models/syntaxnet# readlink /root/.cache/bazel/_bazel_root/74c6bab7a21f28ad02405b720243d086/syntaxnet/_bin/namespace-sandbox
/root/.cache/bazel/_bazel_root/install/ca381eaad1c931167a6355cb8a2b98cf/_embedded_binaries/namespace-sandbox
Command seems to work fine though:
~/tensorflow_syntaxnet/models/syntaxnet# file $(readlink /root/.cache/bazel/_bazel_root/74c6bab7a21f28ad02405b720243d086/syntaxnet/_bin/namespace-sandbox)
/root/.cache/bazel/_bazel_root/install/ca381eaad1c931167a6355cb8a2b98cf/_embedded_binaries/namespace-sandbox: ELF 64-bit LSB executable, x86-64, version 1 (SYSV), dynamically linked (uses shared libs), for GNU/Linux 2.6.26, BuildID[md5/uuid]=0xecfd97b6a6b9a193b045be13654bd55b, not stripped
~/tensorflow_syntaxnet/models/syntaxnet# /root/.cache/bazel/_bazel_root/install/ca381eaad1c931167a6355cb8a2b98cf/_embedded_binaries/namespace-sandbox
No command specified.
Usage: /root/.cache/bazel/_bazel_root/install/ca381eaad1c931167a6355cb8a2b98cf/_embedded_binaries/namespace-sandbox [-S sandbox-root] -- command arg1
provided: /root/.cache/bazel/_bazel_root/install/ca381eaad1c931167a6355cb8a2b98cf/_embedded_binaries/namespace-sandbox
Mandatory arguments:
-S <sandbox-root> directory which will become the root of the sandbox
-- command to run inside sandbox, followed by arguments
Optional arguments:
-W <working-dir> working directory
-T <timeout> timeout after which the child process will be terminated with SIGTERM
-t <timeout> in case timeout occurs, how long to wait before killing the child with SIGKILL
-d <dir> create an empty directory in the sandbox
-M/-m <source/target> system directory to mount inside the sandbox
Multiple directories can be specified and each of them will be mounted readonly.
The -M option specifies which directory to mount, the -m option specifies where to
mount it in the sandbox.
-n if set, a new network namespace will be created
-r if set, make the uid/gid be root, otherwise use nobody
-D if set, debug info will be printed
-l <file> redirect stdout to a file
-L <file> redirect stderr to a file
@FILE read newline-separated arguments from FILE
Any idea?
UPDATE: I have done exactly the same steps on an Ubuntu 14.04 LTS machine (my small workstation, as opposed to the production server running Debian) and everything works well there, with all tests passing. I wonder what the difference is.
Apparently some permission errors happen when setting up the sandbox. A quick workaround is to deactivate the sandbox by using --genrule_strategy=standalone --spawn_strategy=standalone (note that the second one is already specified in the TensorFlow rc file).
You can set those flags in your ~/.bazelrc:
echo "build --genrule_strategy=standalone --spawn_strategy=standalone" >>~/.bazelrc

Building an out-of-tree module on BeagleBone Black

Machine Details :
Linux beaglebone 3.8.13-bone47 armv7l GNU/Linux
Problem Details:
In an attempt to write out-of-tree modules on the BeagleBone Black (as in-tree modules would require me to compile and flash again and again), I have logged in to a BeagleBone Black rev C through an SSH client, which gives me a command-line interface via PuTTY. As in normal out-of-tree module development, I tried to compile the module with the following Makefile:
ifneq ($(KERNELRELEASE),)
# kbuild part of makefile
obj-m := module.o
#module-objs := module.o
else
# normal makefile
KDIR ?= /lib/modules/$(shell uname -r)/build
PWD := $(shell pwd)
default:
$(MAKE) -C $(KDIR) M=$(PWD) modules
endif
resulting in an error:
root@beaglebone:~/lddgeek# make
make -C /lib/modules/3.8.13-bone47/build M=/root/lddgeek modules
make: *** /lib/modules/3.8.13-bone47/build: No such file or directory. Stop.
make: *** [default] Error 2
But when I navigate to the KDIR path, there is no build folder like the one found on a normal Ubuntu installation on x86.
If I have to develop drivers/modules out-of-tree on a BeagleBone, how can I do that?
The reason I was not able to compile was that I was missing the kbuild environment; I needed to install the kernel headers, which provide the ability to compile out-of-tree/external modules:
#wget https://raw.github.com/gkaindl/beaglebone-ubuntu-scripts/master/bb-get-rcn-kernel-source.sh
#chmod +x bb-get-rcn-kernel-source.sh
#./bb-get-rcn-kernel-source.sh
The above steps solved the errors I faced, and afterwards I was able to insert and remove the hello-world module that I was trying to build.
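As a quick sanity check after running the script, you can confirm that the kbuild directory now exists and then build and load the module; module.ko is simply the name produced by the Makefile above, and the dmesg check is just a convenience:
ls /lib/modules/$(uname -r)/build     # should now exist and contain the headers
cd ~/lddgeek && make                  # builds module.ko against those headers
insmod module.ko                      # load the module
dmesg | tail                          # see its init output
rmmod module                          # unload it again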

How to use cmake's 'make install' from a pbuilder env debian/rules script?

This is to compile and link a static library (so only a build-time dependency) whose source is fetched from a repository (just like the source of the main program) on an Ubuntu Launchpad build bot.
Currently I am doing:
#!/usr/bin/make -f
export PREFIX=/usr
export CFLAGS= -O3 -fomit-frame-pointer -flto -fwhole-program
export CXXFLAGS= -O3 -fomit-frame-pointer -flto -fwhole-program
%:
dh $@
override_dh_auto_configure:
cd src/munt;cmake -DCMAKE_CXX_FLAGS="-O3 -fomit-frame-pointer -flto" mt32emu;make;make install
#...compile of the program that depends on mt32emu...
But it fails with:
Install the project...
-- Install configuration: ""
-- Installing: /usr/local/lib/libmt32emu.a
CMake Error at cmake_install.cmake:36 (FILE):
file INSTALL cannot copy file
"/tmp/buildd/dosbox-0.74+20121225/src/munt/libmt32emu.a" to
"/usr/local/lib/libmt32emu.a".
make[2]: *** [install] Error 1
make[2]: Leaving directory `/tmp/buildd/dosbox-0.74+20121225/src/munt'
make[1]: *** [override_dh_auto_configure] Error 2
make[1]: Leaving directory `/tmp/buildd/dosbox-0.74+20121225'
make: *** [build] Error 2
dpkg-buildpackage: error: debian/rules build gave error exit status 2
E: Failed autobuilding of package
I: unmounting /var/cache/pbuilder/ccache filesystem
I: unmounting dev/pts filesystem
I: unmounting proc filesystem
I: cleaning the build env
I: removing directory /var/cache/pbuilder/build//2751 and its subdirectories
The idea is to install a static library dependency that is not packaged in the Ubuntu repositories into the Launchpad pbuilder env, so it can be used as if it were already a system dependency.
If I try to do 'sudo make install' (and add sudo to the build-deps in debian/control), it asks me for the 'pbuilder' password when testing locally, which I assume will hang the machine on the Ubuntu buildbots.
Edit: it actually fails on the buildbots because 'no tty present and no askpass program specified'.
There are several things you can do to clean up your rules file, especially when you are using dh.
In the % target, the dh command takes a --builddirectory parameter, which specifies the directory you are building in. This tells the builder to cd into that directory and then run the build commands (make, cmake, etc.).
In addition, you should just let dh install the files for you. This is done automatically. You shouldn't have to call make install manually.
Here's a slightly easier-to-read rules file:
#!/usr/bin/make -f
export PREFIX=/usr
export CFLAGS= -O3 -fomit-frame-pointer -flto -fwhole-program
export CXXFLAGS= -O3 -fomit-frame-pointer -flto -fwhole-program
%:
dh $@ --builddirectory=src/munt
override_dh_auto_configure:
cd src/munt && cmake -DCMAKE_CXX_FLAGS="-O3 -fomit-frame-pointer -flto" mt32emu
#...compile of the program that depends on mt32emu...
Is this just a permissions issue? (i.e. -- must use 'sudo' to install to '/usr/local'?)
Must you install it to '/usr/local'?
If it's just a static library, purely needed for the build of the "the program that depends on mt32emu" then you could put it anywhere, and just tell the dependent program where it is.
To install somewhere else, use -DCMAKE_INSTALL_PREFIX=/directory/where/you/have/write/privileges. Or use DESTDIR= with the make install.
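As a concrete sketch of that suggestion, the library can be installed into a staging prefix inside the build tree and the dependent program pointed at it; the directory name debian/stage and the CPPFLAGS/LDFLAGS plumbing are my own placeholders, not taken from the original packaging:
override_dh_auto_configure:
	cd src/munt && \
	cmake -DCMAKE_CXX_FLAGS="-O3 -fomit-frame-pointer -flto" \
	      -DCMAKE_INSTALL_PREFIX=$(CURDIR)/debian/stage mt32emu && \
	make && make install
	# then build the dependent program against the staged copy, e.g.:
	# CPPFLAGS=-I$(CURDIR)/debian/stage/include LDFLAGS=-L$(CURDIR)/debian/stage/lib ./configure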
I eventually 'solved' this by depending on Launchpad repository dependencies; that is, I built a whole package for the library, built that on Launchpad, and then imported the archive where it was placed into my other builds. That made it explicit, I guess.