How to include the .so of custom ops in the pip wheel and organize the sources of custom ops? - tensorflow

Following the documentation, I put my_op.cc and my_op.cu.cc under tensorflow/core/user_ops, and created tensorflow/core/user_ops/BUILD, which contains:

load("//tensorflow:tensorflow.bzl", "tf_custom_op_library")

tf_custom_op_library(
    name = "my_op.so",
    srcs = ["my_op.cc"],
    gpu_srcs = ["my_op.cu.cc"],
)
Then I run the following commands under the root of tensorflow:
bazel build -c opt //tensorflow/core/user_ops:all
bazel-bin/tensorflow/tools/pip_package/build_pip_package /tmp/tensorflow_pkg
After building and installing the pip wheel, I want to use my_op in the project my_project.
I think I should create something like my_project/tf_op/__init__.py and my_project/tf_op/my_op.py, which calls tf.load_op_library like the example cuda_op.py. However, my_op.so is not included in the installed pip wheel. How can I generate the input (the path of my_op.so) for tf.load_op_library?
Is there any better way to organize my_op.cc, my_op.cu.cc, and my_op.py within my_project?

You can preserve the directory structure of your project and create a setup.py that also includes the .so files. You can add other non-Python files of your project the same way.
Example directory structure:
my_package
|- my_project
|  |- __init__.py
|- setup.py
You can install the 'my_project' package from inside the my_package directory with:
pip install . --user
(Omit the --user flag if you install packages with root access.)
Example setup.py:

from setuptools import setup, find_packages

setup(name='my_project',
      version='1.0',
      description='Project Details',
      packages=find_packages(),
      include_package_data=True,
      package_data={
          '': ['*.so', '*.txt', '*.csv'],
      },
      zip_safe=False)
Don't forget to add an __init__.py in every folder containing Python modules you want to import.
Reference: https://docs.python.org/2/distutils/setupscript.html#installing-package-data
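To address the original load-path question: once the .so ships inside the package, you can resolve it relative to the module at import time. A minimal sketch of my_project/tf_op/my_op.py (the file layout and the op name my_op are assumptions based on the question):

import os
import tensorflow as tf

# Resolve my_op.so next to this file, wherever pip installed the package.
_lib_path = os.path.join(os.path.dirname(os.path.abspath(__file__)), 'my_op.so')
_module = tf.load_op_library(_lib_path)

# Re-export the generated op wrapper; the attribute name is derived from
# the name used in REGISTER_OP on the C++ side.
my_op = _module.my_op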

Related

conan doesn't upload export/conanfile.py to remote

I have created customized OpenCV package using conan package manager and uploaded it to a remote storage.
Workflow:
Create the package:
cd c:\path\to\conanfile.py
conan create . smart/4.26 --profile ue4

Export with:
conan export . opencv-ue4/3.4.0@smart/4.26
Result:
c:\path\> conan export . opencv-ue4/3.4.0@smart/4.26
[HOOK - attribute_checker.py] pre_export(): WARN: Conanfile doesn't have 'url'. It is recommended to add it as attribute
Exporting package recipe
opencv-ue4/3.4.0@smart/4.26 exports_sources: Copied 3 '.patch' files: cmakes.patch, check_function.patch, typedefs.patch
opencv-ue4/3.4.0@smart/4.26: The stored package has not changed
opencv-ue4/3.4.0@smart/4.26: Exported revision: ceee251590f4bf50c4ff48f6dc27c2ed
I upload everything to the remote:
c:\path> conan upload -r bart opencv-ue4/3.4.0@rs7-smart/4.26 --all
Uploading to remote 'bart':
Uploading opencv-ue4/3.4.0@smart/4.26 to remote 'bart'
Recipe is up to date, upload skipped
Uploading package 1/1: 1d79899922d252aec6da136ce61bff640124c1c4 to 'bart'
Uploaded conan_package.tgz -> opencv-ue4/3.4.0@smart/4.26:1d79 [23667.97k]
Uploaded conaninfo.txt -> opencv-ue4/3.4.0@smart/4.26:1d79 [0.75k]
Uploaded conanmanifest.txt -> opencv-ue4/3.4.0@smart/4.26:1d79 [11.81k]
Our remote storage runs on Artifactory, and I can see in a browser that conanfile.py is not listed anywhere.
I can also verify that directory C:\Users\user\.conan\data\opencv-ue4\3.4.0\smart\4.26\export on my Windows PC does contain both conanfile.py and conanmanifest.txt
I am using Windows PC for doing all above.
Now I'm trying to consume that package on another machine, running Ubuntu Linux.
Here is my conanfile.txt
[requires]
opencv-ue4/3.4.0@smart/4.26
[generators]
json
Command and results
> conan install -g json . opencv-ue4/3.4.0@smart/4.26
Configuration:
[settings]
arch=x86_64
arch_build=x86_64
build_type=Release
compiler=gcc
compiler.libcxx=libstdc++
compiler.version=9
os=Linux
os_build=Linux
[options]
[build_requires]
[env]
opencv-ue4/3.4.0@smart/4.26: Not found in local cache, looking in remotes...
opencv-ue4/3.4.0@smart/4.26: Trying with 'bart'...
Downloading conanmanifest.txt completed [0.33k]
opencv-ue4/3.4.0@smart/4.26: Downloaded recipe revision 0
ERROR: opencv-ue4/3.4.0@smart/4.26: Cannot load recipe.
Error loading conanfile at '/home/user/.conan/data/opencv-ue4/3.4.0/smart/4.26/export/conanfile.py': /home/user/.conan/data/opencv-ue4/3.4.0/smart/4.26/export/conanfile.py not found!
Running ls -la /home/user/.conan/data/opencv-ue4/3.4.0/smart/4.26/export/ shows that the directory indeed contains only file conanmanifest.txt
Below is the relevant part of the conanfile.py that I've used to build the package
from conans import ConanFile, CMake, tools

class OpenCVUE4Conan(ConanFile):
    name = "opencv-ue4"
    version = "3.4.0"
    url = ""
    description = "OpenCV custom build for UE4"
    license = "BSD"
    settings = "os", "compiler", "build_type", "arch"
    generators = "cmake"
    exports_sources = 'patches/cmakes.patch', 'patches/check_function.patch', 'patches/typedefs.patch'

    def requirements(self):
        self.requires("ue4util/ue4@adamrehn/profile")
        self.requires("zlib/ue4@adamrehn/{}".format(self.channel))
        self.requires("UElibPNG/ue4@adamrehn/{}".format(self.channel))

    def cmake_flags(self):
        flags = [
            "-DOPENCV_ENABLE_NONFREE=OFF",
            # cut
        ]
        return flags

    def source(self):
        self.run("git clone --depth=1 https://github.com/opencv/opencv.git -b {}".format(self.version))
        self.run("git clone --depth=1 https://github.com/opencv/opencv_contrib.git -b {}".format(self.version))

    def build(self):
        # Patch OpenCV to avoid build errors
        for p in self.exports_sources:
            if p.endswith(".patch"):
                tools.patch(base_path='opencv', patch_file=p, fuzz=True)
        cmake = CMake(self)
        cmake.configure(source_folder="opencv", args=self.cmake_flags())
        cmake.build()
        cmake.install()

    def package_info(self):
        self.cpp_info.libs = tools.collect_libs(self)
Conan version both in Windows and in Linux is 1.54.0
How do I correctly upload and consume the package?
Update.
After a conversation with @drodri in the comments, I removed conanfile.py from exports_sources, deleted all conan-generated files on all PCs, and removed the uploaded files from the Artifactory.
Then I've rebuilt the package, re-exported and re-uploaded it.
The issue was a restriction of our Artifactory: admins had forbidden uploading .py files.
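For anyone debugging a similar situation, a quick way to see whether the recipe file made it into the export and onto the remote is to inspect the manifests (a sketch, assuming Conan 1.x; conanfile.py must be listed in conanmanifest.txt):

# list the files Conan exported locally for the recipe
conan get opencv-ue4/3.4.0@smart/4.26 .
# print the local manifest; conanfile.py should appear here
conan get opencv-ue4/3.4.0@smart/4.26 conanmanifest.txt
# compare with what the remote actually stores
conan get -r bart opencv-ue4/3.4.0@smart/4.26 conanmanifest.txt

If the local manifest lists conanfile.py but the remote copy does not, the upload is being filtered on the server side, as happened here.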

setup.py sdist creates an archive without the package

I have a project called Alexandria that I want to upload on PyPi as a package. To do so, I have a top folder called alexandria-python in which I put the package and all the elements required to create a package archive with setup.py. The folder alexandria-python has the following structure:
|- setup.py
|- README.md
|- alexandria (root folder for the package)
|  |- __init__.py
|  |- many sub-packages
Then, following many tutorials to create an uploadable archive, I open a terminal, cd to alexandria-python, and use the command:
python setup.py sdist
This creates additional folders, so the structure of alexandria-python is now:
|- setup.py
|- README.md
|- alexandria (root folder for the package)
|  |- __init__.py
|  |- many sub-packages
|- alexandria.egg-info
|- dist
Everything looks fine, and from my understanding the package should now be archived in the dist folder. But when I open the dist folder and extract the alexandria-0.0.2.tar.gz archive that was created, it contains everything except the most important element: the alexandria package itself.
Consequently, when I upload the project to test-PyPI and then pip install it, any attempt to import a module from the toolbox results in a ModuleNotFoundError. How is it that my package does not get included in the archive? Am I doing something very silly?
Note: in case it can help, this is the structure of my setup.py file:
from setuptools import setup

# set up the package
setup(
    name = "alexandria",
    license = "Other/Proprietary License",
    version = "0.0.2",
    author = "Romain Legrand",
    author_email = "alexandria.toolbox@gmail.com",
    description = "a software for Bayesian time-series econometrics applications",
    python_requires = ">=3.6",
    keywords = ["python", "Bayesian", "time-series", "econometrics"])
Your setup.py has neither py_modules nor packages; it must have one of them. In your case alexandria is a package, so:
setup(
    …
    packages = ['alexandria'],
    …
)
or
from setuptools import find_packages, setup
…
packages = find_packages('.')
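Putting it together, a corrected setup.py could look like this (a sketch reusing the metadata from the question):

from setuptools import setup, find_packages

# set up the package, this time declaring the packages to include
setup(
    name = "alexandria",
    license = "Other/Proprietary License",
    version = "0.0.2",
    author = "Romain Legrand",
    author_email = "alexandria.toolbox@gmail.com",
    description = "a software for Bayesian time-series econometrics applications",
    python_requires = ">=3.6",
    keywords = ["python", "Bayesian", "time-series", "econometrics"],
    packages = find_packages())  # picks up 'alexandria' and its sub-packages

You can verify the fix before uploading by listing the archive contents with tar tzf dist/alexandria-0.0.2.tar.gz and checking that the alexandria/ directory is present.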

Setting up on MacBook Pro M1 TensorFlow with OpenCV, SciPy, scikit-learn

I think I have read most of the guides on setting up tensorflow, tensorflow-hub, and object detection on a Mac M1 running Big Sur v11.6, and after more than 2 weeks I managed to work through most of the errors. But I am stuck on the OpenCV setup. I tried to compile it from source, but make keeps failing after a successful cmake configuration, apparently unable to find modules from the core package. It fails at different stages, complaining about different libraries even though they are there; the furthest it got was 31%, after multiple cmake runs and deletions of the build folder or the CMake cache file. So I am not sure what to do to get make to succeed.
I git cloned and unzipped opencv-4.5.0 and opencv_contrib-4.5.0 into my miniforge3 directory. Then I created a "build" folder inside the opencv-4.5.0 folder; the cmake command I use in it is (my miniforge conda environment is called silicon, and I made sure I am using arch arm64 in the bash environment):
cmake -DCMAKE_SYSTEM_PROCESSOR=arm64 -DCMAKE_OSX_ARCHITECTURES=arm64 -DWITH_OPENJPEG=OFF -DWITH_IPP=OFF -D CMAKE_BUILD_TYPE=RELEASE -D CMAKE_INSTALL_PREFIX=/usr/local -D OPENCV_EXTRA_MODULES_PATH=/Users/adi/miniforge3/opencv_contrib-4.5.0/modules -D PYTHON3_EXECUTABLE=/Users/adi/miniforge3/envs/silicon/bin/python3.8 -D BUILD_opencv_python2=OFF -D BUILD_opencv_python3=ON -D INSTALL_PYTHON_EXAMPLES=ON -D INSTALL_C_EXAMPLES=OFF -D OPENCV_ENABLE_NONFREE=ON -D BUILD_EXAMPLES=ON /Users/adi/miniforge3/opencv-4.5.0
The build then fails like this:
[ 20%] Linking CXX shared library ../../lib/libopencv_core.dylib
[ 20%] Built target opencv_core
make: *** [all] Error 2
In other attempts it initially failed on calib3d or dnn, even though those modules are there in the main opencv-4.5.0 folder.
The other way I tried to install OpenCV is with conda:
conda install opencv
But then when I test with
python -c "import cv2; cv2.__version__"
it seems like it searches for ffmpeg via homebrew (I didn't install any of these via homebrew but with conda). So it complained:
Traceback (most recent call last):
File "<string>", line 1, in <module>
File "/Users/adi/miniforge3/envs/silicon/lib/python3.8/site-packages/cv2/__init__.py", line 5, in <module>
from .cv2 import *
ImportError: dlopen(/Users/adi/miniforge3/envs/silicon/lib/python3.8/site-packages/cv2/cv2.cpython-38-darwin.so, 2): Library not loaded: /opt/homebrew/opt/ffmpeg/lib/libavcodec.58.dylib
Referenced from: /Users/adi/miniforge3/envs/silicon/lib/python3.8/site-packages/cv2/cv2.cpython-38-darwin.so
Reason: image not found
I do have these libs, though: searching with find /usr/ -name 'libavcodec.58.dylib' turns up several locations:
find: /usr//sbin/authserver: Permission denied
find: /usr//local/mysql-8.0.22-macos10.15-x86_64/keyring: Permission denied
find: /usr//local/mysql-8.0.22-macos10.15-x86_64/data: Permission denied
find: /usr//local/hw_mp_userdata/Internet_Manager/OnlineUpdate: Permission denied
/usr//local/lib/libavcodec.58.dylib
/usr//local/Cellar/ffmpeg/4.4_2/lib/libavcodec.58.dylib
So I tried to symlink the Homebrew path to the existing library, but the target directory does not exist:
(silicon) MacBook-Pro:opencv-4.5.0 adi$ ln -s /usr/local/Cellar/ffmpeg/4.4_2/lib/libavcodec.58.dylib /opt/homebrew/opt/ffmpeg/lib/libavcodec.58.dylib
ln: /opt/homebrew/opt/ffmpeg/lib/libavcodec.58.dylib: No such file or directory
One of the guides said to also install Homebrew in the arm64 environment, so I did that with:
/bin/bash -c "$(curl -fsSL https://raw.githubusercontent.com/Homebrew/install/HEAD/install.sh)"
export PATH="/opt/homebrew/bin:/usr/local/bin:$PATH"
alias ibrew='arch -x86_64 /usr/local/bin/brew' # create brew for intel (ibrew) and arm/ silicon
Not sure if that affects anything, but it seems like it didn't do anything, because it still uses /opt/homebrew/ instead of /usr/local/.
Any help making any of these approaches work would be highly appreciated. Ultimately I want to use the TensorFlow Model Zoo object detection models. All the other dependencies seem fine (for now); the problem is that either OpenCV doesn't work, or, when it does work via conda install, scipy and scikit-learn don't.
In my case I also had a lot of trouble trying to install both modules. I finally managed to do so, though to be honest I am not really sure how and why. I leave below the requirements in case you want to recreate the environment that worked in my case. You should have conda Miniforge 3 installed:
# This file may be used to create an environment using:
# $ conda create --name <env> --file <this file>
# platform: osx-arm64
absl-py=1.0.0=pypi_0
astunparse=1.6.3=pypi_0
autocfg=0.0.8=pypi_0
blas=2.113=openblas
blas-devel=3.9.0=13_osxarm64_openblas
boto3=1.22.10=pypi_0
botocore=1.25.10=pypi_0
c-ares=1.18.1=h1a28f6b_0
ca-certificates=2022.2.1=hca03da5_0
cachetools=5.0.0=pypi_0
certifi=2021.10.8=py39hca03da5_2
charset-normalizer=2.0.12=pypi_0
cycler=0.11.0=pypi_0
expat=2.4.4=hc377ac9_0
flatbuffers=2.0=pypi_0
fonttools=4.31.1=pypi_0
gast=0.5.3=pypi_0
gluoncv=0.10.5=pypi_0
google-auth=2.6.0=pypi_0
google-auth-oauthlib=0.4.6=pypi_0
google-pasta=0.2.0=pypi_0
grpcio=1.42.0=py39h95c9599_0
h5py=3.6.0=py39h7fe8675_0
hdf5=1.12.1=h5aa262f_1
idna=3.3=pypi_0
importlib-metadata=4.11.3=pypi_0
jmespath=1.0.0=pypi_0
keras=2.8.0=pypi_0
keras-preprocessing=1.1.2=pypi_0
kiwisolver=1.4.0=pypi_0
krb5=1.19.2=h3b8d789_0
libblas=3.9.0=13_osxarm64_openblas
libcblas=3.9.0=13_osxarm64_openblas
libclang=13.0.0=pypi_0
libcurl=7.80.0=hc6d1d07_0
libcxx=12.0.0=hf6beb65_1
libedit=3.1.20210910=h1a28f6b_0
libev=4.33=h1a28f6b_1
libffi=3.4.2=hc377ac9_2
libgfortran=5.0.0=11_1_0_h6a59814_26
libgfortran5=11.1.0=h6a59814_26
libiconv=1.16=h1a28f6b_1
liblapack=3.9.0=13_osxarm64_openblas
liblapacke=3.9.0=13_osxarm64_openblas
libnghttp2=1.46.0=h95c9599_0
libopenblas=0.3.18=openmp_h5dd58f0_0
libssh2=1.9.0=hf27765b_1
llvm-openmp=12.0.0=haf9daa7_1
markdown=3.3.6=pypi_0
matplotlib=3.5.1=pypi_0
mxnet=1.6.0=pypi_0
ncurses=6.3=h1a28f6b_2
numpy=1.21.2=py39hb38b75b_0
numpy-base=1.21.2=py39h6269429_0
oauthlib=3.2.0=pypi_0
openblas=0.3.18=openmp_h3b88efd_0
opencv-python=4.5.5.64=pypi_0
openssl=1.1.1m=h1a28f6b_0
opt-einsum=3.3.0=pypi_0
packaging=21.3=pypi_0
pandas=1.4.1=pypi_0
pillow=9.0.1=pypi_0
pip=22.0.4=pypi_0
portalocker=2.4.0=pypi_0
protobuf=3.19.4=pypi_0
pyasn1=0.4.8=pypi_0
pyasn1-modules=0.2.8=pypi_0
pydot=1.4.2=pypi_0
pyparsing=3.0.7=pypi_0
python=3.9.7=hc70090a_1
python-dateutil=2.8.2=pypi_0
python-graphviz=0.8.4=pypi_0
pytz=2022.1=pypi_0
pyyaml=6.0=pypi_0
readline=8.1.2=h1a28f6b_1
requests=2.27.1=pypi_0
requests-oauthlib=1.3.1=pypi_0
rsa=4.8=pypi_0
s3transfer=0.5.2=pypi_0
scipy=1.8.0=pypi_0
setuptools=58.0.4=py39hca03da5_1
six=1.16.0=pyhd3eb1b0_1
sqlite=3.38.0=h1058600_0
tensorboard=2.8.0=pypi_0
tensorboard-data-server=0.6.1=pypi_0
tensorboard-plugin-wit=1.8.1=pypi_0
tensorflow-deps=2.8.0=0
tensorflow-macos=2.8.0=pypi_0
termcolor=1.1.0=pypi_0
tf-estimator-nightly=2.8.0.dev2021122109=pypi_0
tk=8.6.11=hb8d0fd4_0
tqdm=4.63.1=pypi_0
typing-extensions=4.1.1=pypi_0
tzdata=2021e=hda174b7_0
urllib3=1.26.9=pypi_0
werkzeug=2.0.3=pypi_0
wheel=0.37.1=pyhd3eb1b0_0
wrapt=1.14.0=pypi_0
xz=5.2.5=h1a28f6b_0
yacs=0.1.8=pypi_0
zipp=3.7.0=pypi_0
zlib=1.2.11=h5a0b063_4
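Assuming the listing above is saved as requirements.txt, recreating the environment and smoke-testing the troublesome modules could look like this (a sketch; note the pypi_0 entries came from pip, so conda may not restore every one of them verbatim):

conda create --name silicon --file requirements.txt
conda activate silicon
# quick smoke test: these are the imports that failed originally
python -c "import cv2, scipy, tensorflow as tf; print(cv2.__version__, scipy.__version__, tf.__version__)"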

Using conan to package multiple configurations of preexisting binaries

I have a set of third-party binaries that I am trying to put into a conan package. The binaries are in folders named after the build configuration: Linux32, Win32, Win64.
I have been able to produce a conan package for the Win64 configuration using the following conanfile.py:
from conans import ConanFile

class LibNameConan(ConanFile):
    name = "LibName"
    version = "1.1.1"
    settings = "os", "compiler", "build_type", "arch"
    description = "Package for LibName"
    url = "None"
    license = "None"

    def package(self):
        self.copy("*", dst="lib", src="lib")
        self.copy("*.c", dst="include", src="include", keep_path=False)

    def package_info(self):
        self.cpp_info.libs = self.collect_libs()
I run the following commands in powershell:
conan install
mkdir pkg
cd pkg
conan package .. --build_folder=../
cd ..
conan export name/testing
conan package_files libname/1.1.1@name/testing
For the Win64 this works as expected. When I repeat the steps with Win32 binaries I do not get a different hash for the package.
I have tried running:
conan install -s arch=x86
However, this still results in the package having the same hash as the x86_64 configuration.
How is the configuration supposed to be set for generating a package from preexisting binaries?
If you are just packaging pre-built binaries, you are fine without the package() method; that is only relevant when building from the recipe:
from conans import ConanFile

class LibNameConan(ConanFile):
    name = "LibName"
    version = "1.1.1"
    settings = "os", "compiler", "build_type", "arch"
    description = "Package for LibName"
    url = "None"
    license = "None"

    def package_info(self):
        self.cpp_info.libs = self.collect_libs()
Unless there is some important reason, you probably don't want to package the sources too. Do you want consumers to be able to debug your dependencies? In that case, condition it on the build_type.
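A hypothetical sketch of that conditioning, reusing the copy() calls from the question:

def package(self):
    self.copy("*", dst="lib", src="lib")
    if self.settings.build_type == "Debug":
        # ship the sources only in debug packages, so consumers can step into them
        self.copy("*.c", dst="include", src="include", keep_path=False)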
However, this could be mostly irrelevant to your question. As your package doesn't have dependencies and you are not using any generator either, you don't need conan install, and the settings you pass there have no effect.
You have to specify the settings for your binary configuration when you run package_files:
$ conan package_files libname/1.1.1@name/testing # using your default config
$ conan package_files libname/1.1.1@name/testing -s arch=x86 # 32 bits instead of 64
...
Probably the recommended way is to use profiles:
$ conan package_files libname/1.1.1@name/testing # using your default profile
$ conan package_files libname/1.1.1@name/testing -pr=myprofile2
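For reference, a profile is just a text file in ~/.conan/profiles (or passed by path). A hypothetical myprofile2 for the 32-bit Windows binaries might look like:

[settings]
os=Windows
arch=x86
compiler=Visual Studio
compiler.version=14
build_type=Release

Each distinct settings combination yields a different package ID, which is why the 32-bit and 64-bit binaries end up under different hashes once the settings actually differ.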
The documentation recently got a rewrite; you might want to check: https://docs.conan.io/en/latest/creating_packages/existing_binaries.html

Using CMake with setup.py

For a project I build a C library and implicit Python bindings (via GObject introspection) with CMake. I also want to distribute some Python helper modules using distutils. I am able to build and install the module with this CMakeLists.txt:
find_program(PYTHON "python")

if (PYTHON)
    set(SETUP_PY_IN "${CMAKE_CURRENT_SOURCE_DIR}/setup.py.in")
    set(SETUP_PY    "${CMAKE_CURRENT_BINARY_DIR}/setup.py")
    set(DEPS        "${CMAKE_CURRENT_SOURCE_DIR}/module/__init__.py")
    set(OUTPUT      "${CMAKE_CURRENT_BINARY_DIR}/build")

    configure_file(${SETUP_PY_IN} ${SETUP_PY})

    add_custom_command(OUTPUT ${OUTPUT}
                       COMMAND ${PYTHON}
                       ARGS setup.py build
                       DEPENDS ${DEPS})

    add_custom_target(target ALL DEPENDS ${OUTPUT})

    install(CODE "execute_process(COMMAND ${PYTHON} ${SETUP_PY} install)")
endif()
and the following setup.py.in:
from distutils.core import setup, Extension

if __name__ == '__main__':
    setup(name='foo',
          version='${PACKAGE_VERSION}',
          package_dir={ '': '${CMAKE_CURRENT_SOURCE_DIR}' },
          packages=['module'])
Unfortunately, the build step is executed every time I run make. I guess the problem is related to the output of the custom command being a directory rather than a file. Now, is there any way to tell CMake to run python setup.py build only when setup.py.in or one of the sources has changed?
Only files, not directories, can be reliably used as OUTPUT and DEPENDS. You could modify your custom command to also produce a timestamp file, something like this:
add_custom_command(
    OUTPUT ${OUTPUT}/timestamp
    COMMAND ${PYTHON} setup.py build
    COMMAND ${CMAKE_COMMAND} -E touch ${OUTPUT}/timestamp
    DEPENDS ${DEPS}
)

add_custom_target(target ALL DEPENDS ${OUTPUT}/timestamp)
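To also rebuild when setup.py.in changes, you can add the generated setup.py to the dependencies: configure_file reruns at configure time and refreshes the generated file's timestamp, which then retriggers the custom command. A sketch extending the snippet above:

add_custom_command(
    OUTPUT ${OUTPUT}/timestamp
    COMMAND ${PYTHON} ${SETUP_PY} build
    COMMAND ${CMAKE_COMMAND} -E touch ${OUTPUT}/timestamp
    # depend on the package sources and on the generated setup.py
    DEPENDS ${DEPS} ${SETUP_PY}
)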