conan doesn't upload export/conanfile.py to remote

I have created a customized OpenCV package using the conan package manager and uploaded it to remote storage.
Workflow:
create package
cd c:\path\to\conanfile.py
conan create . smart/4.26 --profile ue4
Export with conan export . opencv-ue4/3.4.0@smart/4.26
Result:
c:\path\> conan export . opencv-ue4/3.4.0@smart/4.26
[HOOK - attribute_checker.py] pre_export(): WARN: Conanfile doesn't have 'url'. It is recommended to add it as attribute
Exporting package recipe
opencv-ue4/3.4.0@smart/4.26 exports_sources: Copied 3 '.patch' files: cmakes.patch, check_function.patch, typedefs.patch
opencv-ue4/3.4.0@smart/4.26: The stored package has not changed
opencv-ue4/3.4.0@smart/4.26: Exported revision: ceee251590f4bf50c4ff48f6dc27c2ed
I upload everything to the remote:
c:\path> conan upload -r bart opencv-ue4/3.4.0@smart/4.26 --all
Uploading to remote 'bart':
Uploading opencv-ue4/3.4.0@smart/4.26 to remote 'bart'
Recipe is up to date, upload skipped
Uploading package 1/1: 1d79899922d252aec6da136ce61bff640124c1c4 to 'bart'
Uploaded conan_package.tgz -> opencv-ue4/3.4.0@smart/4.26:1d79 [23667.97k]
Uploaded conaninfo.txt -> opencv-ue4/3.4.0@smart/4.26:1d79 [0.75k]
Uploaded conanmanifest.txt -> opencv-ue4/3.4.0@smart/4.26:1d79 [11.81k]
Our remote storage runs on Artifactory, and I can see in a browser that conanfile.py is not listed anywhere.
I can also verify that the directory C:\Users\user\.conan\data\opencv-ue4\3.4.0\smart\4.26\export on my Windows PC does contain both conanfile.py and conanmanifest.txt.
I am using a Windows PC for all of the above.
Now I'm trying to consume that package on another machine, running Ubuntu Linux.
Here is my conanfile.txt
[requires]
opencv-ue4/3.4.0@smart/4.26
[generators]
json
Command and results
> conan install -g json . opencv-ue4/3.4.0@smart/4.26
Configuration:
[settings]
arch=x86_64
arch_build=x86_64
build_type=Release
compiler=gcc
compiler.libcxx=libstdc++
compiler.version=9
os=Linux
os_build=Linux
[options]
[build_requires]
[env]
opencv-ue4/3.4.0@smart/4.26: Not found in local cache, looking in remotes...
opencv-ue4/3.4.0@smart/4.26: Trying with 'bart'...
Downloading conanmanifest.txt completed [0.33k]
opencv-ue4/3.4.0@smart/4.26: Downloaded recipe revision 0
ERROR: opencv-ue4/3.4.0@smart/4.26: Cannot load recipe.
Error loading conanfile at '/home/user/.conan/data/opencv-ue4/3.4.0/smart/4.26/export/conanfile.py': /home/user/.conan/data/opencv-ue4/3.4.0/smart/4.26/export/conanfile.py not found!
Running ls -la /home/user/.conan/data/opencv-ue4/3.4.0/smart/4.26/export/ shows that the directory indeed contains only the file conanmanifest.txt.
Below is the relevant part of the conanfile.py that I used to build the package:
from conans import ConanFile, CMake, tools
class OpenCVUE4Conan(ConanFile):
name = "opencv-ue4"
version = "3.4.0"
url = ""
description = "OpenCV custom build for UE4"
license = "BSD"
settings = "os", "compiler", "build_type", "arch"
generators = "cmake"
exports_sources = 'patches/cmakes.patch', 'patches/check_function.patch', 'patches/typedefs.patch'
def requirements(self):
self.requires("ue4util/ue4#adamrehn/profile")
self.requires("zlib/ue4#adamrehn/{}".format(self.channel))
self.requires("UElibPNG/ue4#adamrehn/{}".format(self.channel))
def cmake_flags(self):
flags = [
"-DOPENCV_ENABLE_NONFREE=OFF",
# cut
]
return flags
def source(self):
self.run("git clone --depth=1 https://github.com/opencv/opencv.git -b {}".format(self.version))
self.run("git clone --depth=1 https://github.com/opencv/opencv_contrib.git -b {}".format(self.version))
def build(self):
# Patch OpenCV to avoid build errors
for p in self.exports_sources:
if p.endswith(".patch"):
tools.patch(base_path='opencv', patch_file=p, fuzz=True)
cmake = CMake(self)
cmake.configure(source_folder="opencv", args=self.cmake_flags())
cmake.build()
cmake.install()
def package_info(self):
self.cpp_info.libs = tools.collect_libs(self)
The Conan version on both Windows and Linux is 1.54.0.
How do I correctly upload and consume the package?
Update.
After a conversation with @drodri in the comments I removed conanfile.py from exports_sources, deleted all conan-generated files on all PCs and removed the uploaded files from Artifactory.
Then I rebuilt the package, re-exported it and re-uploaded it.

The issue was a restriction in our Artifactory: the admins had forbidden uploading .py files.
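For anyone checking the same thing: a quick way to verify that the recipe itself reached the remote is to wipe the reference from the local cache and pull only the recipe back. A minimal sketch, assuming Conan 1.x and the remote name used above:
conan remove opencv-ue4/3.4.0@smart/4.26 -f
conan download opencv-ue4/3.4.0@smart/4.26 -r bart -re
After this, the export folder in the local cache should contain conanfile.py again; if it does not, the recipe never made it to (or back from) the remote.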

Related

How to copy a file in Meson to a subdirectory

My application uses a Glade file and also cached data in a JSON file. When I do the following, everything works okay as long as the user installs the application with ninja install
#Install cached JSON file
install_data(
join_paths('data', 'dataCache.json'),
install_dir: join_paths('myapp', 'resources')
)
#Install the user interface glade file
install_data(
join_paths('src', 'MainWindow.glade'),
install_dir: join_paths('myapp', 'resources')
)
The downside is that the user needs to install the application. I want the user to be able to just build the application with ninja and run it without installing it on their system. The problem is that when I do
#Copy the cached JSON file to the build output directory
configure_file(input : join_paths('data', 'dataCache.json'),
output : join_paths('myapp', 'resources', 'dataCache.json'),
copy: true
)
#Copy the Glade file to the build output directory
configure_file(input : join_paths('src', 'MainWindow.glade'),
output : join_paths('myapp', 'resources', 'MainWindow.glade'),
copy: true
)
I get ERROR: Output file name must not contain a subdirectory.
Is there a way to have ninja create the myapp/resources directories in the build folder and copy the Glade and JSON files there to be used as resources, so that the user can run the application without having to do ninja install?
You can do it by writing a script and calling it from Meson.
For example, in a file copy.py that takes the relative input and output paths as arguments:
#!/usr/bin/env python3
import os, sys, shutil
# get absolute input and output paths
input_path = os.path.join(
os.getenv('MESON_SOURCE_ROOT'),
os.getenv('MESON_SUBDIR'),
sys.argv[1])
output_path = os.path.join(
os.getenv('MESON_BUILD_ROOT'),
os.getenv('MESON_SUBDIR'),
sys.argv[2])
# make sure destination directory exists
os.makedirs(os.path.dirname(output_path), exist_ok=True)
# and finally copy the file
shutil.copyfile(input_path, output_path)
and then in your meson.build file:
copy = find_program('copy.py')
run_command(
copy,
join_paths('data', 'dataCache.json'),
join_paths('myapp', 'resources', 'dataCache.json')
)
run_command(
copy,
join_paths('src', 'MainWindow.glade'),
join_paths('myapp', 'resources', 'MainWindow.glade')
)

Building libjpeg-turbo with conan fails on windows

I am trying to build the libjpeg-turbo package with conan on Windows:
conan install libjpeg-turbo/1.5.2@bincrafters/stable
But it fails with:
libjpeg-turbo/1.5.2@bincrafters/stable: Not found in local cache, looking in remotes...
libjpeg-turbo/1.5.2@bincrafters/stable: Trying with 'conan-center'...
Downloading conanmanifest.txt
Downloading conanfile.py
Downloading conan_export.tgz
....
ERROR: libjpeg-turbo/1.5.2@bincrafters/stable: Error in configure() method, line 43
if self.settings.os == "Emscripten":
ConanException: Invalid setting 'Emscripten' is not a valid 'settings.os' value.
Possible values are ['Android', 'Arduino', 'FreeBSD', 'Linux', 'Macos', 'SunOS', 'Windows', 'WindowsStore', 'iOS', 'tvOS', 'watchOS']
Read "http://docs.conan.io/en/latest/faq/troubleshooting.html#error-invalid-setting"
The same command on Linux works fine.
On both systems I have conan version 1.21.0.
I cannot find any clue about this error.
EDIT
Here is the full output of the libjpeg-turbo 2.0.2 installation:
>conan install -r conan-center libjpeg-turbo/2.0.2@
Configuration:
[settings]
arch=x86
arch_build=x86
build_type=Release
compiler=Visual Studio
compiler.runtime=MD
compiler.version=15
os=Windows
os_build=Windows
[options]
[build_requires]
[env]
ERROR: libjpeg-turbo/2.0.2: Error in configure() method, line 49
if self.settings.os == "Emscripten":
ConanException: Invalid setting 'Emscripten' is not a valid 'settings.os' value.
Possible values are ['Android', 'Arduino', 'FreeBSD', 'Linux', 'Macos', 'SunOS', 'Windows', 'WindowsStore', 'iOS', 'tvOS', 'watchOS']
Read "http://docs.conan.io/en/latest/faq/troubleshooting.html#error-invalid-setting"
The Conan package libjpeg-turbo/1.5.2@bincrafters/stable is obsolete and has been replaced by libjpeg-turbo/2.0.2@. You can obtain that package from Conan Center as well:
conan install -r conan-center libjpeg-turbo/2.0.2@
Now about your error:
ConanException: Invalid setting 'Emscripten' is not a valid 'settings.os' value.
As the message shows, the recipe checks settings.os against Emscripten, a value that is not accepted by your current settings, so the comparison fails. As the FAQ link indicates, you should adjust your settings; you can try:
conan install -r conan-center libjpeg-turbo/2.0.2@ -s os=Windows
Thus, you should:
Update your current package to libjpeg-turbo/2.0.2@ (it requires Conan >=1.18)
Update your current profile to Windows:
conan profile update settings.os=Windows default
If you really need Emscripten, open an issue on Conan Center Index requesting that feature.
Regards!
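In case it is useful: you can check which os values your local Conan 1.x installation actually accepts by reading its settings file. A minimal sketch, assuming the default ~/.conan/settings.yml location (Conan itself already depends on PyYAML):
import os
import yaml

# Default Conan 1.x settings file; adjust if CONAN_USER_HOME points elsewhere
settings_path = os.path.expanduser("~/.conan/settings.yml")
with open(settings_path) as f:
    settings = yaml.safe_load(f)

# The 'os' entry maps every allowed operating system to its sub-settings
print(sorted(settings["os"]))
If Emscripten is missing from that list, the settings file is older than the recipe expects and needs to be updated, for example via conan config install from a configuration repository.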

Using conan to package multiple configurations of preexisting binaries

I have a set of third-party binaries that I am trying to put into a conan package. The binaries are in folders named for the build configuration: Linux32, Win32, Win64.
I have been able to produce a conan package for the Win64 configuration using the following conanfile.py:
from conans import ConanFile
class LibNameConan(ConanFile):
name = "LibName"
version = "1.1.1"
settings = "os", "compiler", "build_type", "arch"
description = "Package for LibName"
url = "None"
license = "None"
def package(self):
self.copy("*", dst="lib", src="lib")
self.copy("*.c", dst="include", src="include", keep_path=False)
def package_info(self):
self.cpp_info.libs = self.collect_libs()
I run the following commands in PowerShell:
conan install
mkdir pkg
cd pkg
conan package .. --build_folder=../
cd ..
conan export name/testing
conan package_files libname/1.1.1@name/testing
For Win64 this works as expected. When I repeat the steps with the Win32 binaries, I do not get a different hash for the package.
I have tried running:
conan install -s arch=x86
However, this still results in the package having the same hash as the x86_64 configuration.
How is the configuration supposed to be set for generating a package from preexisting binaries?
If you are just packaging pre-built binaries, you are fine without the package() method; it is only relevant when building from the recipe:
from conans import ConanFile
class LibNameConan(ConanFile):
name = "LibName"
version = "1.1.1"
settings = "os", "compiler", "build_type", "arch"
description = "Package for LibName"
url = "None"
license = "None"
def package_info(self):
self.cpp_info.libs = self.collect_libs()
Unless there is some important reason you want to package the sources too (do you want consumers to be able to debug your dependencies?), you don't need it; in that case, please condition it on the build_type.
However, this is mostly irrelevant to your question. As your package doesn't have dependencies and you are not using any generator either, you don't need conan install, and the settings you pass there have no effect.
You have to specify the settings for your binary configuration when you run package_files:
$ conan package_files libname/1.1.1@name/testing # using your default config
$ conan package_files libname/1.1.1@name/testing -s arch=x86 # 32 bits instead of 64
...
Probably the recommended way is to use profiles:
$ conan package_files libname/1.1.1@name/testing # using your default profile
$ conan package_files libname/1.1.1@name/testing -pr=myprofile2
The documentation recently got a rewrite; you might want to check: https://docs.conan.io/en/latest/creating_packages/existing_binaries.html
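As a side note, in current Conan 1.x the conan package_files command has been replaced by conan export-pkg, which accepts the same kind of settings and profile arguments and can package a prebuilt folder directly. A rough sketch, where the folder names are only illustrative:
conan export-pkg . libname/1.1.1@name/testing -s arch=x86_64 -pf=Win64
conan export-pkg . libname/1.1.1@name/testing -s arch=x86 -pf=Win32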

How to include the .so of custom ops in the pip wheel and organize the sources of custom ops?

Following the documentation, I put my_op.cc and my_op.cu.cc under tensorflow/core/user_ops, and created tensorflow/core/user_ops/BUILD which contains
load("//tensorflow:tensorflow.bzl", "tf_custom_op_library")
tf_custom_op_library(
name = "my_op.so",
srcs = ["my_op.cc"],
gpu_srcs = ["my_op.cu.cc"],
)
Then I run the following commands under the root of tensorflow:
bazel build -c opt //tensorflow/core/user_ops:all
bazel-bin/tensorflow/tools/pip_package/build_pip_package /tmp/tensorflow_pkg
After building and installing the pip wheel, I want to use my_op in the project my_project.
I think I should create something like my_project/tf_op/__init__.py and my_project/tf_op/my_op.py, which calls tf.load_op_library like the example cuda_op.py. However, the my_op.so is not included in the installed pip wheel. How can I generate the input (the path of my_op.so) for tf.load_op_library?
Is there any better way to organize my_op.cc, my_op.cu.cc and my_op.py within my_project?
You can preserve the directory structure of your project and create setup.py such that it also includes the .so files. You can add other non-Python files of your project the same way.
Example directory structure:
my_package
    my_project
        __init__.py
    setup.py
You can install the 'my_project' package while in the my_package directory using the command:
pip install . --user (avoid the --user argument if you install packages with root access)
from setuptools import setup, find_packages
setup(name='my_project',
version='1.0',
description='Project Details',
packages=find_packages(),
include_package_data=True,
package_data = {
'': ['*.so', '*.txt', '*.csv'],
},
zip_safe=False)
Don't forget to add __init__.py in all folders containing Python modules you want to import.
Reference: https://docs.python.org/2/distutils/setupscript.html#installing-package-data
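To cover the tf.load_op_library part of the question: once the .so is shipped inside the installed package, its path can be resolved relative to the module that loads it. A minimal sketch, assuming the layout from the question with my_op.so placed next to the loader module (the file and op names are the question's placeholders):
# my_project/tf_op/my_op.py
import os
import tensorflow as tf

# Resolve the shared library relative to this file, wherever the package is installed
_so_path = os.path.join(os.path.dirname(os.path.abspath(__file__)), "my_op.so")
_module = tf.load_op_library(_so_path)

# The Python name of the op is the snake_case form of the registered op name
my_op = _module.my_op
With the '*.so' pattern in package_data as shown in the setup.py above, the shared library ends up inside the wheel next to this loader module.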

cx_Oracle running with mod_wsgi environment

I installed cx_Oracle on CentOS 6.2. When I import the library from the shell, it works fine, but when I launch it through WSGI, I get the error:
ImportError: libclntsh.so.10.1: cannot open shared object file: No such file or directory
This is an environment variable problem: cx_Oracle does not find the path to the lib.
I have tried the solutions provided here
I have added a link to libclntsh.so.10.1 (with ln) in the /usr/lib directory
I have edited the Apache configuration and added:
ORACLE_HOME=/usr/lib/oracle/11.2/client64/lib
LD_LIBRARY_PATH=$ORACLE_HOME/
PATH=$ORACLE_HOME/bin:$PATH
I edited /etc/ld.so.conf and added:
/usr/lib/oracle/11.2/client64/lib
and then ran ldconfig
I tried to set it from Python with:
os.environ['ORACLE_HOME'] = '/usr/lib/oracle/11.2/client64/lib'
I edited the .bashrc with:
export ORACLE_HOME=/usr/lib/oracle/11.2/client64/lib
export LD_LIBRARY_PATH=$ORACLE_HOME/
export PATH=$ORACLE_HOME/bin:$PATH
I also edited apachectl with:
ORACLE_HOME=/usr/lib/oracle/11.2/client64/lib
export ORACLE_HOME
LD_LIBRARY_PATH=$ORACLE_HOME/
export LD_LIBRARY_PATH
PATH=$ORACLE_HOME/bin:$PATH
export PATH
I am running out of ideas. Any suggestions?
When you compile the Python module for Oracle, set:
LD_RUN_PATH=/usr/lib/oracle/11.2/client64/lib
user environment variable and export it. This will cause that directory to be embedded in the Python extension module's .so file, so it will know where to find the Oracle libraries at run time without needing to set the LD_LIBRARY_PATH environment variable.
For a standard Apache distribution (Linux distros are often a bit different), the file to set extra environment variables in is called 'envvars' and is in the same directory as 'httpd'. For Linux distros it often needs to be done in a special init.d startup script.
So, look up what LD_RUN_PATH is all about.
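For example, something along these lines when building cx_Oracle from source (the path is taken from the question; LD_RUN_PATH is only read by the linker while the extension is being built):
LD_RUN_PATH=/usr/lib/oracle/11.2/client64/lib python setup.py build
python setup.py install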
Instead of using yum install on the cx_Oracle rpm, I downloaded the source of the library and ran setup.py build.
I got an error that pointed me to the function that was trying to locate the Instant Client SDK libraries in:
possibleIncludeDirs = ["rdbms/demo", "rdbms/public", "network/public", "sdk/include"]
Browsing the ORACLE_HOME folder, I discovered that the SDK files were installed in the lib folder (I had installed the SDK using yum install on the rpm from Oracle) and not in the possibleIncludeDirs or in an include folder, as suggested in setup.py:
if not includeDirs:
path = os.path.join(oracleLibDir, "include")
if os.path.isdir(path):
includeDirs.append(path)
if not includeDirs:
path = re.sub("lib(64)?", "include", oracleHome)
if os.path.isdir(path):
includeDirs.append(path)
I downloaded the Instant Client SDK (the zip file this time) and unzipped it into the lib folder.
There was then an sdk folder inside the lib folder (/usr/lib/oracle/11.2/client64/lib).
I then ran setup.py build and setup.py install, and it worked.