setup.py sdist creates an archive without the package - pypi

I have a project called Alexandria that I want to upload to PyPI as a package. To do so, I have a top folder called alexandria-python in which I put the package and all the elements required to create a package archive with setup.py. The folder alexandria-python has the following structure:
|- setup.py
|- README.md
|- alexandria   (root folder for the package)
   |- __init__.py
   |- many sub-packages
Then, following many tutorials to create an uploadable archive, I open a terminal, cd to alexandria-python, and use the command:
python setup.py sdist
This creates additional folders, so the structure of alexandria-python is now:
|- setup.py
|- README.md
|- alexandria   (root folder for the package)
   |- __init__.py
   |- many sub-packages
|- alexandria.egg-info
|- dist
Everything looks fine, and from my understanding the package should now be archived in the dist folder. But when I open the dist folder and extract the alexandria-0.0.2.tar.gz archive that has been created, it does not contain the alexandria package. Everything else seems to be there, except the most important element: the package itself.
Then, when I upload the project to test-PyPI and pip install it, any attempt to import a module from the toolbox results in a ModuleNotFoundError. How is it that my package does not end up in the archive? Am I doing something very silly?
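For reference, the contents of the generated archive can also be listed without extracting it (the archive name matches the version in my setup.py):
tar tzf dist/alexandria-0.0.2.tar.gz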
Note: in case it can help, this is the structure of my setup.py file:
from setuptools import setup

# set up the package
setup(
    name = "alexandria",
    license = "Other/Proprietary License",
    version = "0.0.2",
    author = "Romain Legrand",
    author_email = "alexandria.toolbox@gmail.com",
    description = "a software for Bayesian time-series econometrics applications",
    python_requires = ">=3.6",
    keywords = ["python", "Bayesian", "time-series", "econometrics"])

Your setup.py has neither py_modules nor packages; it must have one of them. In your case alexandria is a package, so either list it explicitly:
setup(
    …
    packages = ['alexandria'],
    …
)
or
from setuptools import find_packages, setup
…
packages = find_packages('.')
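Put together, a minimal sketch of a corrected setup.py for the layout above (everything else is taken from the question's file; find_packages() will pick up alexandria and its sub-packages, provided each sub-package has an __init__.py):

from setuptools import find_packages, setup

# set up the package, this time telling setuptools which packages to include
setup(
    name = "alexandria",
    license = "Other/Proprietary License",
    version = "0.0.2",
    author = "Romain Legrand",
    author_email = "alexandria.toolbox@gmail.com",
    description = "a software for Bayesian time-series econometrics applications",
    python_requires = ">=3.6",
    keywords = ["python", "Bayesian", "time-series", "econometrics"],
    packages = find_packages())   # finds 'alexandria' and its sub-packages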

Related

conan doesn't upload export/conanfile.py to remote

I have created a customized OpenCV package using the Conan package manager and uploaded it to a remote storage.
Workflow:
Create the package:
cd c:\path\to\conanfile.py
conan create . smart/4.26 --profile ue4
Export it with:
conan export . opencv-ue4/3.4.0@smart/4.26
Result:
c:\path\> conan export . opencv-ue4/3.4.0@smart/4.26
[HOOK - attribute_checker.py] pre_export(): WARN: Conanfile doesn't have 'url'. It is recommended to add it as attribute
Exporting package recipe
opencv-ue4/3.4.0@smart/4.26 exports_sources: Copied 3 '.patch' files: cmakes.patch, check_function.patch, typedefs.patch
opencv-ue4/3.4.0@smart/4.26: The stored package has not changed
opencv-ue4/3.4.0@smart/4.26: Exported revision: ceee251590f4bf50c4ff48f6dc27c2ed
I upload everything to the remote:
c:\path> conan upload -r bart opencv-ue4/3.4.0@rs7-smart/4.26 --all
Uploading to remote 'bart':
Uploading opencv-ue4/3.4.0@smart/4.26 to remote 'bart'
Recipe is up to date, upload skipped
Uploading package 1/1: 1d79899922d252aec6da136ce61bff640124c1c4 to 'bart'
Uploaded conan_package.tgz -> opencv-ue4/3.4.0@smart/4.26:1d79 [23667.97k]
Uploaded conaninfo.txt -> opencv-ue4/3.4.0@smart/4.26:1d79 [0.75k]
Uploaded conanmanifest.txt -> opencv-ue4/3.4.0@smart/4.26:1d79 [11.81k]
Our remote storage runs on Artifactory, and I can see in a browser that conanfile.py is not listed anywhere.
I can also verify that directory C:\Users\user\.conan\data\opencv-ue4\3.4.0\smart\4.26\export on my Windows PC does contain both conanfile.py and conanmanifest.txt
I am using a Windows PC for all of the above.
Now I'm trying to consume that package on another machine, running Ubuntu Linux.
Here is my conanfile.txt
[requires]
opencv-ue4/3.4.0@smart/4.26

[generators]
json
Command and results
> conan install -g json . opencv-ue4/3.4.0@smart/4.26
Configuration:
[settings]
arch=x86_64
arch_build=x86_64
build_type=Release
compiler=gcc
compiler.libcxx=libstdc++
compiler.version=9
os=Linux
os_build=Linux
[options]
[build_requires]
[env]
opencv-ue4/3.4.0@smart/4.26: Not found in local cache, looking in remotes...
opencv-ue4/3.4.0@smart/4.26: Trying with 'bart'...
Downloading conanmanifest.txt completed [0.33k]
opencv-ue4/3.4.0@smart/4.26: Downloaded recipe revision 0
ERROR: opencv-ue4/3.4.0@smart/4.26: Cannot load recipe.
Error loading conanfile at '/home/user/.conan/data/opencv-ue4/3.4.0/smart/4.26/export/conanfile.py': /home/user/.conan/data/opencv-ue4/3.4.0/smart/4.26/export/conanfile.py not found!
Running ls -la /home/user/.conan/data/opencv-ue4/3.4.0/smart/4.26/export/ shows that the directory indeed contains only the file conanmanifest.txt.
Below is the relevant part of the conanfile.py that I've used to build the package
from conans import ConanFile, CMake, tools


class OpenCVUE4Conan(ConanFile):
    name = "opencv-ue4"
    version = "3.4.0"
    url = ""
    description = "OpenCV custom build for UE4"
    license = "BSD"
    settings = "os", "compiler", "build_type", "arch"
    generators = "cmake"
    exports_sources = 'patches/cmakes.patch', 'patches/check_function.patch', 'patches/typedefs.patch'

    def requirements(self):
        self.requires("ue4util/ue4@adamrehn/profile")
        self.requires("zlib/ue4@adamrehn/{}".format(self.channel))
        self.requires("UElibPNG/ue4@adamrehn/{}".format(self.channel))

    def cmake_flags(self):
        flags = [
            "-DOPENCV_ENABLE_NONFREE=OFF",
            # cut
        ]
        return flags

    def source(self):
        self.run("git clone --depth=1 https://github.com/opencv/opencv.git -b {}".format(self.version))
        self.run("git clone --depth=1 https://github.com/opencv/opencv_contrib.git -b {}".format(self.version))

    def build(self):
        # Patch OpenCV to avoid build errors
        for p in self.exports_sources:
            if p.endswith(".patch"):
                tools.patch(base_path='opencv', patch_file=p, fuzz=True)
        cmake = CMake(self)
        cmake.configure(source_folder="opencv", args=self.cmake_flags())
        cmake.build()
        cmake.install()

    def package_info(self):
        self.cpp_info.libs = tools.collect_libs(self)
The Conan version on both Windows and Linux is 1.54.0.
How do I correctly upload and consume the package?
Update.
After a conversation with @drodri in the comments, I removed conanfile.py from exports_sources, deleted all Conan-generated files on all PCs, and removed the uploaded files from Artifactory.
Then I rebuilt the package, re-exported it, and re-uploaded it.
The issue was a restriction on our Artifactory: the admins had forbidden uploading .py files.

pkg_check_modules dependency fails because version is "Uncontrolled"

The Problem
I've got a CMakeLists.txt file with this content:
pkg_check_modules(FOO REQUIRED foo>=0.1.0.1)
When I run CMake v3.17.2 with cmake3 -G Ninja . in that directory, I get:
Checking for module 'foo>=0.1.0.1'
Requested 'foo >= 0.1.0.1' but version of foo is Uncontrolled
Details
This is running inside RHEL7
yum info foo | grep Version returns Version : 0.1.0.1.20200417git602d018
The foo module is created by the team I'm on
The Question
How can I tell CMake what version my foo library is so that it isn't "Uncontrolled"?
In the output of the foo project, inside the generated lib64 directory, there is a pkgconfig directory which contains foo.pc.
Inside that file, the version info is as follows:
Version: HEAD
Change this to the intended version. In my case this was automated by the build process of foo, so what was required was to add a git tag for the current version and rebuild.
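For illustration, a sketch of what foo.pc might look like once the version is set correctly (the paths, description, and version number here are placeholders, not taken from the actual project):

prefix=/usr/local
libdir=${prefix}/lib64
includedir=${prefix}/include

Name: foo
Description: example library
Version: 0.1.0.1
Libs: -L${libdir} -lfoo
Cflags: -I${includedir}

With a real version in the Version field, pkg_check_modules(FOO REQUIRED foo>=0.1.0.1) can compare against it instead of reporting "Uncontrolled".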

How to run eslint in different folders

I ran into an ESLint problem.
Normally, the src folder sits alongside package.json, like this:
|- projectName
   |- src
   |- .eslintrc.js
   |- package.json
   |- vue.config.js
Then I wanted to split the build scripts and the business code into separate folders, and to keep both linted I put a '.eslintrc.js' file in each, like this:
|- projectName
   |- runtime
      |- babel.config.js
      |- .eslintrc.js
      |- package.json
      |- vue.config.js
   |- business
      |- .eslintrc.js
      |- src
         |- main.js
         |- index.html
         |- ...
Then I found that ESLint descends into chaos: when I run 'npm run build', ESLint first lints the files in the 'runtime' folder according to the '.eslintrc.js' in it, and then vue-cli-service enters the build stage, which calls ESLint again according to the '.eslintrc.js' in the 'template' folder. But in the end vue-cli-service still finishes the build even though ESLint throws errors, which should have stopped the build.
The output was like this:
✖ 7 problems (4 errors, 3 warnings)
3 errors and 3 warnings potentially fixable with the `--fix` option.
# ../template/src/store/index.js 14:0-34 18:10-14
# ../template/src/main.js
# multi ../template/src/main.js
...
File                                                           Size        Gzipped
../template/build/apps/local/0/js/chunk-vendors.9db3f582.js   174.42 KiB  58.47 KiB
...
DONE Build complete. The ../template/build directory is ready to be deployed.
INFO Check out deployment instructions at https://cli.vuejs.org/guide/deployment.html
I don't know why this happens. Is there something wrong with the ESLint config? I just put two '.eslintrc.js' files generated by the 'vue create' command in two different folders, and configured the right paths in vue.config.js, like this:
// vue.config.js
...
outputDir: path.resolve(__dirname, '../template/build'),
pages: {
    index: {
        entry: path.resolve(__dirname, '../template/src/main.js')
    }
}
...

How to include the .so of custom ops in the pip wheel and organize the sources of custom ops?

Following the documentation, I put my_op.cc and my_op.cu.cc under tensorflow/core/user_ops, and created tensorflow/core/user_ops/BUILD which contains
load("//tensorflow:tensorflow.bzl", "tf_custom_op_library")
tf_custom_op_library(
name = "my_op.so",
srcs = ["my_op.cc"],
gpu_srcs = ["my_op.cu.cc"],
)
Then I run the following commands under the root of tensorflow:
bazel build -c opt //tensorflow/core/user_ops:all
bazel-bin/tensorflow/tools/pip_package/build_pip_package /tmp/tensorflow_pkg
After building and installing the pip wheel, I want to use my_op in the project my_project.
I think I should create something like my_project/tf_op/__init__.py and my_project/tf_op/my_op.py, which calls tf.load_op_library like the example cuda_op.py. However, the my_op.so is not included in the installed pip wheel. How can I generate the input (the path of my_op.so) for tf.load_op_library?
Is there any better way to organize my_op.cc, my_op.cu.cc, and my_op.py within my_project?
You can preserve the directory structure of your project and create setup.py so that it also includes the .so files. You can add other non-Python files of your project the same way.
Example Directory Structure:
my_package
    my_project
        __init__.py
    setup.py
You can install the 'my_project' package while in the my_package directory using the command:
pip install . --user    (avoid the --user argument if you install packages with root access)
from setuptools import setup, find_packages

setup(name='my_project',
      version='1.0',
      description='Project Details',
      packages=find_packages(),
      include_package_data=True,
      package_data={
          '': ['*.so', '*.txt', '*.csv'],
      },
      zip_safe=False)
Don't forget to add __init__.py in all folders containing python modules you want to import.
Reference: https://docs.python.org/2/distutils/setupscript.html#installing-package-data
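To then load the op from the installed package, a minimal sketch of the loading module could look like this (the my_project/tf_op/my_op.py path follows the question's naming; the registered op name my_op is an assumption and depends on how the op is defined in my_op.cc):

# my_project/tf_op/my_op.py -- minimal sketch; file layout and op name are assumptions
import os
import tensorflow as tf

# Resolve my_op.so relative to this module, assuming the .so was installed
# alongside it via package_data as shown above.
_so_path = os.path.join(os.path.dirname(os.path.abspath(__file__)), "my_op.so")
_my_op_module = tf.load_op_library(_so_path)

# Re-export the generated op for callers.
my_op = _my_op_module.my_op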

Correct way to configure interdependent projects (e.g. tensorflow) in bazel build system so proto imports work as is?

As the title suggests, I'm running into an issue where proto import statements do not seem to be relative to the correct path. For concreteness, consider the directory structure in a dir (let's call it ~/base):
>> tree -L 1
├── models
├── my-lib
|   ├── nlp
|   |   ├── BUILD
|   |   └── nlp_parser.cc
|   └── WORKSPACE
├── serving
└── tensorflow
For those not familiar, models (as in https://github.com/tensorflow/models/) has tensorflow (https://github.com/tensorflow/tensorflow) as a git submodule, as does serving. Because the tensorflow submodules were on different commits and sometimes incompatible, I have removed the submodules from the projects and symlinked them to the tensorflow repo in the top-most directory, so that I manage only one tensorflow repo instead of three. That is, I have done the following:
`cd models/syntaxnet; rm -rf tensorflow; ln -s ../../tensorflow/ .; cd -`
`cd serving; rm -rf tensorflow tf_models; ln -s ../tensorflow/ .; ln -s ../models .`
Now I want to build a target within my-lib that depends on serving, tensorflow, and models. I added these as local repositories in my WORKSPACE as follows (cat my-lib/WORKSPACE):
workspace(name = "myworkspace")

local_repository(
    name = "org_tensorflow",
    path = __workspace_dir__ + "/../tensorflow",
)

local_repository(
    name = "syntaxnet",
    path = __workspace_dir__ + "/../models/syntaxnet",
)

local_repository(
    name = "tf_serving",
    path = __workspace_dir__ + "/../serving",
)

load('@org_tensorflow//tensorflow:workspace.bzl', 'tf_workspace')

tf_workspace("~/base/tensorflow", "@org_tensorflow")

# ===== gRPC dependencies =====
bind(
    name = "libssl",
    actual = "@boringssl_git//:ssl",
)

bind(
    name = "zlib",
    actual = "@zlib_archive//:zlib",
)
Here is my BUILD file (cat my-lib/nlp/BUILD):
load("#tf_serving//tensorflow_serving:serving.bzl", "serving_proto_library")
cc_binary(
name = "nlp_parser",
srcs = [ "nlp_parser.cc" ],
linkopts = ["-lm"],
deps = [
"#org_tensorflow//tensorflow/core:core_cpu",
"#org_tensorflow//tensorflow/core:framework",
"#org_tensorflow//tensorflow/core:lib",
"#org_tensorflow//tensorflow/core:protos_all_cc",
"#org_tensorflow//tensorflow/core:tensorflow",
"#syntaxnet//syntaxnet:parser_ops_cc",
"#syntaxnet//syntaxnet:sentence_proto",
"#tf_serving//tensorflow_serving/servables/tensorflow:session_bundle_config_proto",
"#tf_serving//tensorflow_serving/servables/tensorflow:session_bundle_factory",
"#org_tensorflow//tensorflow/contrib/session_bundle",
"#org_tensorflow//tensorflow/contrib/session_bundle:signature",
],
)
Lastly, here is the output of the build (cd my-lib; bazel build nlp/nlp_parser --verbose_failures):
INFO: Found 1 target...
ERROR: /home/blah/blah/external/org_tensorflow/tensorflow/core/debug/BUILD:33:1: null failed: linux-sandbox failed: error executing command
(cd /home/blah/blah/execroot/my-lib && \
exec env - \
/home/blah/blah/execroot/my-lib/_bin/linux-sandbox @/home/blah/blah/execroot/my-lib/bazel-sandbox/c65fa6b6-9b7d-4710-b19c-4d42a3e6a667-31.params -- bazel-out/host/bin/external/protobuf/protoc '--cpp_out=bazel-out/local-fastbuild/genfiles/external/org_tensorflow' '--plugin=protoc-gen-grpc=bazel-out/host/bin/external/grpc/grpc_cpp_plugin' '--grpc_out=bazel-out/local-fastbuild/genfiles/external/org_tensorflow' -Iexternal/org_tensorflow -Ibazel-out/local-fastbuild/genfiles/external/org_tensorflow -Iexternal/protobuf/src -Ibazel-out/local-fastbuild/genfiles/external/protobuf/src external/org_tensorflow/tensorflow/core/debug/debug_service.proto).
bazel-out/local-fastbuild/genfiles/external/protobuf/src: warning: directory does not exist.
tensorflow/core/util/event.proto: File not found.
tensorflow/core/debug/debug_service.proto: Import "tensorflow/core/util/event.proto" was not found or had errors.
tensorflow/core/debug/debug_service.proto:38:25: "Event" is not defined.
Target //nlp:nlp_parser failed to build
INFO: Elapsed time: 0.776s, Critical Path: 0.42s
What is the correct way to add the modules as local_repository in WORKSPACE so that the proto imports work?
I was having a similar problem when trying to build a project of mine that depends on tensorflow on Ubuntu, after getting it to build on OS X. What ended up working for me was disabling sandboxing with --spawn_strategy=standalone.
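With the build command from the question, that would be something like (the only addition is the --spawn_strategy flag):
cd my-lib; bazel build nlp/nlp_parser --verbose_failures --spawn_strategy=standalone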