I am trying to put my logs in a "logs" folder, but when I deploy to Scrapinghub I get No such file or directory: '/scrapinghub/sfb/logs/random_log.log'. I think I am declaring it correctly in the setup.py file. What am I doing wrong here?
File structure:
sfbu
-- bin
   -- sfbitemcomparer.py
-- requirements.txt
-- scrapy.cfg
-- setup.py
-- sfb
   -- logs
      -- random_log.log
   -- __init__.py
   -- items.py
   -- middlewares.py
   -- pipelines.py
   -- settings.py
   -- spiders
      -- spider.py
setup.py:
from setuptools import find_packages, setup

setup(
    name = 'sfb',
    version = '1.0',
    packages = find_packages(),
    scripts = ['bin/sfbitemcomparer.py'],
    package_data = {
        'sfb': ['sfb/logs/*.log']
    },
    entry_points = {'scrapy': ['settings = sfb.settings']},
)
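For what it's worth, package_data globs are resolved relative to the package directory itself, not the project root, so 'sfb/logs/*.log' under the 'sfb' key would look for sfb/sfb/logs/*.log. A minimal sketch of the likely fix (assuming the log files really should ship inside the sfb package):

from setuptools import find_packages, setup

setup(
    name = 'sfb',
    version = '1.0',
    packages = find_packages(),
    scripts = ['bin/sfbitemcomparer.py'],
    # globs here are relative to the sfb package directory,
    # so 'logs/*.log' matches sfb/logs/*.log on disk
    package_data = {
        'sfb': ['logs/*.log']
    },
    entry_points = {'scrapy': ['settings = sfb.settings']},
)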
Related
I have a project called Alexandria that I want to upload to PyPI as a package. To do so, I have a top folder called alexandria-python in which I put the package and all the elements required to create a package archive with setup.py. The folder alexandria-python has the following structure:
|- setup.py
|- README.md
|- alexandria (root folder for the package)
   |- __init__.py
   |- many sub-packages
Then, following many tutorials to create an uploadable archive, I open a terminal, cd to alexandria-python, and use the command:
python setup.py sdist
This creates additional folders, so the structure of alexandria-python is now:
|- setup.py
|- README.md
|- alexandria (root folder for the package)
   |- __init__.py
   |- many sub-packages
|- alexandria.egg-info
|- dist
Everything looks fine, and from my understanding the package should now be archived in the dist folder. But when I open the dist folder and extract the alexandria-0.0.2.tar.gz archive that was created, it does not contain the alexandria package. Everything else seems to be there, except the most important element: the package itself.
Then, when I upload the project to test-PyPI and pip install it, any attempt to import a module from the toolbox results in a ModuleNotFoundError. How is it that my package does not make it into the archive? Am I doing something very silly?
Note: in case it can help, this is the structure of my setup.py file:
from setuptools import setup

# set up the package
setup(
    name = "alexandria",
    license = "Other/Proprietary License",
    version = "0.0.2",
    author = "Romain Legrand",
    author_email = "alexandria.toolbox@gmail.com",
    description = "a software for Bayesian time-series econometrics applications",
    python_requires = ">=3.6",
    keywords = ["python", "Bayesian", "time-series", "econometrics"])
Your setup.py has neither py_modules nor packages; it must have one of them. In your case alexandria is a package, so:
setup(
    …
    packages = ['alexandria'],
    …
)
or
from setuptools import find_packages, setup
…
packages = find_packages('.')
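To confirm the fix, you can list what actually went into the rebuilt sdist. A quick check with the standard library (assuming the archive name alexandria-0.0.2.tar.gz from above):

import tarfile

# after adding packages=..., entries under alexandria/
# (e.g. alexandria/__init__.py) should show up here
with tarfile.open('dist/alexandria-0.0.2.tar.gz') as sdist:
    for name in sdist.getnames():
        print(name)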
My application uses a Glade file and also some cached data in a JSON file. When I do the following, everything works fine as long as the user installs the application with ninja install:
# Install cached JSON file
install_data(
    join_paths('data', 'dataCache.json'),
    install_dir: join_paths('myapp', 'resources')
)

# Install the user interface Glade file
install_data(
    join_paths('src', 'MainWindow.glade'),
    install_dir: join_paths('myapp', 'resources')
)
The downside is that the user needs to install the application. I want the user to be able to just build the application with ninja and run it without having to install it on their system. The problem is that when I do
# Copy the cached JSON file to the build output directory
configure_file(input : join_paths('data', 'dataCache.json'),
    output : join_paths('myapp', 'resources', 'dataCache.json'),
    copy: true
)

# Copy the Glade file to the build output directory
configure_file(input : join_paths('src', 'MainWindow.glade'),
    output : join_paths('myapp', 'resources', 'MainWindow.glade'),
    copy: true
)
I get ERROR: Output file name must not contain a subdirectory.
Is there a way to have ninja create the myapp/resources directories in the build folder and copy the Glade and JSON files there to be used as resources, so that the user can run the application without having to do ninja install?
You can do it by making a script and calling it from Meson.
For example, in a file copy.py that takes the relative input and output paths as arguments:
#!/usr/bin/env python3
import os, sys, shutil

# get absolute input and output paths
input_path = os.path.join(
    os.getenv('MESON_SOURCE_ROOT'),
    os.getenv('MESON_SUBDIR'),
    sys.argv[1])
output_path = os.path.join(
    os.getenv('MESON_BUILD_ROOT'),
    os.getenv('MESON_SUBDIR'),
    sys.argv[2])

# make sure the destination directory exists
os.makedirs(os.path.dirname(output_path), exist_ok=True)

# and finally copy the file
shutil.copyfile(input_path, output_path)
and then in your meson.build file:
copy = find_program('copy.py')

run_command(
    copy,
    join_paths('src', 'dataCache.json'),
    join_paths('myapp', 'resources', 'dataCache.json')
)
run_command(
    copy,
    join_paths('src', 'MainWindow.glade'),
    join_paths('myapp', 'resources', 'MainWindow.glade')
)
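One caveat worth noting: run_command executes while Meson configures the project, not when ninja runs, so the files are copied when the build directory is generated. If the source files change afterwards, you need to reconfigure (or touch meson.build) for the copies to be refreshed.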
I am new to PyPI. I tried to create and upload a package with the following directory structure:
mypackage
-- README.rst
-- LICENSE.txt
-- MANIFEST.in
-- setup.py
-- mypackage
   -- file1.py
   -- __init__.py
setup.py is as follows:
from setuptools import setup, find_packages
from codecs import open
from os import path

here = path.abspath(path.dirname(__file__))

# Get the long description from the README file
with open(path.join(here, 'README.rst'), encoding='utf-8') as f:
    long_description = f.read()

setup(
    name='mypackage',
    version='0.0.1',
    description='test package',
    long_description=long_description,
    url='https://github.com/xxxxx/test',
    author='xxx',
    author_email='xxx@gmail.com',
    license='MIT',
    classifiers=[
        'Development Status :: 3 - Alpha',
        'Intended Audience :: Developers',
        'License :: OSI Approved :: MIT License',
        'Programming Language :: Python :: 3',
    ],
)
I run:
python setup.py sdist bdist_wheel
twine upload dist/*
I can see that mypackage is on PyPI, and there are:
mypackage-0.0.1-py3-none-any.whl (md5)
mypackage-0.0.1.tar.gz (md5)
Then I run pip install mypackage on my machine. It reports "Successfully installed mypackage-0.0.1", but under python_directory\Lib\site-packages there is only a "mypackage-0.0.1.dist-info" directory, no "mypackage" directory.
Can anyone tell me what is wrong here? Thanks!
You've forgotten to include any Python code in setup(); you need either py_modules or packages:
…
packages=['mypackage'],
…
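In context, a sketch of the amended setup() call (keeping all your other arguments; find_packages() works equally well here):

from setuptools import setup, find_packages

setup(
    name='mypackage',
    version='0.0.1',
    # ... description, classifiers, etc. as before ...
    # without this line the wheel contains only metadata, which is why
    # site-packages ended up with just the .dist-info directory
    packages=find_packages(),
)

After rebuilding with python setup.py sdist bdist_wheel and reinstalling, a mypackage directory should appear in site-packages.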
Following the documentation, I put my_op.cc and my_op.cu.cc under tensorflow/core/user_ops, and created tensorflow/core/user_ops/BUILD, which contains:
load("//tensorflow:tensorflow.bzl", "tf_custom_op_library")
tf_custom_op_library(
name = "my_op.so",
srcs = ["my_op.cc"],
gpu_srcs = ["my_op.cu.cc"],
)
Then I run the following commands under the root of tensorflow:
bazel build -c opt //tensorflow/core/user_ops:all
bazel-bin/tensorflow/tools/pip_package/build_pip_package /tmp/tensorflow_pkg
After building and installing the pip wheel, I want to use my_op in the project my_project.
I think I should create something like my_project/tf_op/__init__.py and my_project/tf_op/my_op.py, which calls tf.load_op_library like the example cuda_op.py. However, my_op.so is not included in the installed pip wheel. How can I generate the input (the path of my_op.so) for tf.load_op_library?
Is there any better way to organize my_op.cc, my_op.cu.cc, and my_op.py within my_project?
You can preserve the directory structure of your project and write setup.py so that it also includes the .so files. You can add other non-Python files of your project the same way.
Example Directory Structure:
my_package
    my_project
        __init__.py
    setup.py
You can install the 'my_project' package from the my_package directory with:
pip install . --user (omit --user if you install packages with root access)
from setuptools import setup, find_packages

setup(name='my_project',
      version='1.0',
      description='Project Details',
      packages=find_packages(),
      include_package_data=True,
      package_data={
          '': ['*.so', '*.txt', '*.csv'],
      },
      zip_safe=False)
Don't forget to add an __init__.py to every folder containing Python modules you want to import.
Reference: https://docs.python.org/2/distutils/setupscript.html#installing-package-data
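Once my_op.so ships inside the package this way, the Python wrapper can locate it relative to its own file. A minimal sketch of my_project/my_op.py (assuming my_op.so ends up in the same directory as the module, and that the registered op is called MyOp, which TensorFlow exposes as the snake_case attribute my_op):

import os
import tensorflow as tf

# find my_op.so next to this module, wherever the package was installed
_lib_path = os.path.join(os.path.dirname(os.path.abspath(__file__)), 'my_op.so')

# load the custom op library; its ops become attributes of the returned module
_my_op_module = tf.load_op_library(_lib_path)
my_op = _my_op_module.my_op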
As the title suggests, I'm running into an issue where proto import statements do not seem to be resolved relative to the correct path. For concreteness, consider the directory structure in a dir (let's call it ~/base):
`>> tree -L 1
├── models
├── my-lib
| ├── nlp
| ├── BUILD
| └── nlp_parser.cc
| └── WORKSPACE
├── serving
└── tensorflow
For those not familiar, models (as in https://github.com/tensorflow/models/) has tensorflow (https://github.com/tensorflow/tensorflow) as a git submodule, as does serving. Because of this, coupled with the fact that the tensorflow git submodules were on different commits and sometimes incompatible, I have removed the submodules from the projects and symlinked them to the tensorflow repo in the top-most directory, so that I can manage only one tensorflow repo instead of three. That is, I have done the following:
cd models/syntaxnet; rm -rf tensorflow; ln -s ../../tensorflow/ .; cd -
cd serving; rm -rf tensorflow tf_models; ln -s ../tensorflow/ .; ln -s ../models .
Now I want to build a target within my-lib that depends on serving, tensorflow, and models. I added these as local repositories in my WORKSPACE as follows (cat my-lib/WORKSPACE):
workspace(name = "myworkspace")

local_repository(
    name = "org_tensorflow",
    path = __workspace_dir__ + "/../tensorflow",
)

local_repository(
    name = "syntaxnet",
    path = __workspace_dir__ + "/../models/syntaxnet",
)

local_repository(
    name = "tf_serving",
    path = __workspace_dir__ + "/../serving",
)

load('@org_tensorflow//tensorflow:workspace.bzl', 'tf_workspace')
tf_workspace("~/base/tensorflow", "@org_tensorflow")

# ===== gRPC dependencies =====

bind(
    name = "libssl",
    actual = "@boringssl_git//:ssl",
)

bind(
    name = "zlib",
    actual = "@zlib_archive//:zlib",
)
Here is my BUILD file (cat my-lib/nlp/BUILD):
load("#tf_serving//tensorflow_serving:serving.bzl", "serving_proto_library")
cc_binary(
name = "nlp_parser",
srcs = [ "nlp_parser.cc" ],
linkopts = ["-lm"],
deps = [
"#org_tensorflow//tensorflow/core:core_cpu",
"#org_tensorflow//tensorflow/core:framework",
"#org_tensorflow//tensorflow/core:lib",
"#org_tensorflow//tensorflow/core:protos_all_cc",
"#org_tensorflow//tensorflow/core:tensorflow",
"#syntaxnet//syntaxnet:parser_ops_cc",
"#syntaxnet//syntaxnet:sentence_proto",
"#tf_serving//tensorflow_serving/servables/tensorflow:session_bundle_config_proto",
"#tf_serving//tensorflow_serving/servables/tensorflow:session_bundle_factory",
"#org_tensorflow//tensorflow/contrib/session_bundle",
"#org_tensorflow//tensorflow/contrib/session_bundle:signature",
],
)
Lastly, here is the output of the build (cd my-lib; bazel build nlp/nlp_parser --verbose_failures):
INFO: Found 1 target...
ERROR: /home/blah/blah/external/org_tensorflow/tensorflow/core/debug/BUILD:33:1: null failed: linux-sandbox failed: error executing command
(cd /home/blah/blah/execroot/my-lib && \
exec env - \
/home/blah/blah/execroot/my-lib/_bin/linux-sandbox @/home/blah/blah/execroot/my-lib/bazel-sandbox/c65fa6b6-9b7d-4710-b19c-4d42a3e6a667-31.params -- bazel-out/host/bin/external/protobuf/protoc '--cpp_out=bazel-out/local-fastbuild/genfiles/external/org_tensorflow' '--plugin=protoc-gen-grpc=bazel-out/host/bin/external/grpc/grpc_cpp_plugin' '--grpc_out=bazel-out/local-fastbuild/genfiles/external/org_tensorflow' -Iexternal/org_tensorflow -Ibazel-out/local-fastbuild/genfiles/external/org_tensorflow -Iexternal/protobuf/src -Ibazel-out/local-fastbuild/genfiles/external/protobuf/src external/org_tensorflow/tensorflow/core/debug/debug_service.proto).
bazel-out/local-fastbuild/genfiles/external/protobuf/src: warning: directory does not exist.
tensorflow/core/util/event.proto: File not found.
tensorflow/core/debug/debug_service.proto: Import "tensorflow/core/util/event.proto" was not found or had errors.
tensorflow/core/debug/debug_service.proto:38:25: "Event" is not defined.
Target //nlp:nlp_parser failed to build
INFO: Elapsed time: 0.776s, Critical Path: 0.42s
What is the correct way to add the modules as local_repository in WORKSPACE so that the proto imports work?
I was having a similar problem when trying to build a project of mine that depends on tensorflow on Ubuntu, after first getting it to build on OS X. What ended up working for me was disabling sandboxing with --spawn_strategy=standalone.
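For example, with the build invocation from the question (the flag placement is the only change; this simply turns off the linux-sandbox wrapper shown in the error):
bazel build nlp/nlp_parser --verbose_failures --spawn_strategy=standalone
To make it the default for the workspace, the same flag can go in a .bazelrc file as a "build --spawn_strategy=standalone" line.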