I am hosting a Django 1.5 app on OpenShift. I need the django-registration module, which I have specified in my requirements.txt file.
The problem is that OpenShift cannot find the latest version, django-registration 1.0, but only django-registration 0.8, which is not compatible with Django 1.5. Any idea how to resolve this, or how to add a manual link to the latest version in requirements.txt?
I don't understand why it can't find the package when it is available on PyPI.
remote: Searching for django-registration==1.0
remote: Reading http://mirror1.ops.rhcloud.com/mirror/python/web/simple/django-registration/
remote: Reading http://www.bitbucket.org/ubernostrum/django-registration/wiki/
remote: Reading <some other link>
remote: Reading <some other link>
remote: Reading <Some Other link>
remote: No local packages or download links found for django-registration==1.0
remote: Best match: None
I made it work using setuptools by specifying a dependency link, though it is still not clear to me why the PyPI package is not found.
from setuptools import setup, find_packages

setup(
    ...
    ...
    packages=find_packages(),
    include_package_data=True,
    install_requires=['django-registration==1.0'],
    dependency_links=[
        "http://pypi.python.org/pypi/django-registration"
    ],
)
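Note that dependency_links are honored by setuptools' own installer (for example python setup.py develop or install), while modern pip ignores them by default and recent releases dropped support entirely. So a quick local check of the setup above could be:

python setup.py develop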
How about directly installing the package by logging into the application gear via SSH and running:
source ~/python-2.6/virtenv/bin/activate
pip install --log $OPENSHIFT_DATA_DIR/inst.log https://URL_TO_CUSTOM_PACKAGE
OR
source ~/python-2.6/virtenv/bin/activate
pip install --log $OPENSHIFT_DATA_DIR/inst.log -E $VIRTUAL_ENV $path_to/package
Since the issue is still alive (argh!) and I couldn't install the latest security release for Django, I had to find a workaround for this problem.
Inserting the following line at the top of requirements.txt magically solved the problem:
--index-url https://pypi.python.org/simple
It just sets the base URL for finding packages.
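For example, a requirements.txt using this workaround might look like the following (the pins below are only illustrative):

--index-url https://pypi.python.org/simple
Django==1.5.1
django-registration==1.0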
I know that the question is a bit old, but I had a similar problem with OpenShift. On PyPI the latest version of the wagtail package was 1.4.1, but on OpenShift only 1.3.1 was found. After git push, the output showed a URL that seemed to point to a mirror instead of pypi.python.org.
I logged in to the app and ran:
env | grep -i pypi
OPENSHIFT_PYPI_MIRROR_URL=http://mirror1.ops.rhcloud.com/mirror/python/web/simple
It seems that OpenShift by default uses its own mirror for Python packages, and that mirror is a bit out of date. I don't know why. I can't really say whether it is better to do as tomako suggests, whether the change could be made to the environment variable OPENSHIFT_PYPI_MIRROR_URL, or how often the mirror is updated.
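If you want to experiment with the variable itself, something like the following with the rhc client tools might be worth a try, though I have not verified that a user-set value actually overrides the system-provided one:

rhc env set OPENSHIFT_PYPI_MIRROR_URL=https://pypi.python.org/simple -a yourapp
rhc app restart -a yourapp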
I'm starting this thread with an answer, not a question. The questions are stated at the end:
I tried to add pip package 'tfx' to Apache Airflow using my own Dockerfile and docker-compose.yaml. I added my own DAG to Airflow and that failed to load with this error message:
doc_controls has no attribute 'inheritable_header'
It cost me only a day to find the cause. When you add this to your Dockerfile..
pip install tfx
..pip will install tfx, tensorflow-2.6.0, and tensorflow-estimator-2.7.0. The latter apparently depends on not-yet-released code in the GitHub repo tensorflow/docs, which contains doc_controls.
So instead, add this to keep tensorflow-estimator in line with the packages that pip can actually find:
RUN pip install --no-cache-dir --user \
tfx==1.3.1 \
tensorflow==2.6.0 \
tensorflow-estimator==2.6.0
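To double-check which versions actually ended up in the image, something like this can be run against the built image (the image name is just a placeholder):

docker run --rm my-airflow-image pip show tfx tensorflow tensorflow-estimator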
I'm losing a lot of time solving problems with dependencies between pip packages, and between pip packages and the underlying C/C++ libraries. Am I the only one?
Here are my questions:
Am I correct to assume that pip is supposed to figure out which versions of the dependencies of tfx to install? Should I normally be able to rely on pip to do this correctly, or will pip simply install the latest version of all dependencies without regard to their mutual compatibility?
On the internet there are many Dockerfiles around that do not specify any version numbers for the apt/pip packages they install. Such a Dockerfile is like a box of chocolates, right? If you build the Dockerfile at time t1 and again at time t2, the resulting images can differ in terms of versions, right?
In general: given a Docker image, why can one not recover the Dockerfile that was used to construct it?
Regards,
Chris
I also keep running into these dependency issues recently. I came across another post which might be of interest: Resolving new pip backtracking runtime issue. Based on this, I think pip does try to figure out which versions of packages to install to avoid conflicts, but I guess it sometimes struggles. I tried one of the tools, pipreqs, but I didn't find it useful for my particular problem. In fact it broke things even more.
Also thanks for the solution to this one, I had the same problem.
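One pattern that has helped me keep builds reproducible and cut down on resolver backtracking is to resolve once in a scratch environment and then pin everything with a constraints file (a sketch; the file names are arbitrary):

pip install tfx==1.3.1                              # resolve once in a throwaway environment
pip freeze > constraints.txt                        # record the exact versions that worked
pip install -r requirements.txt -c constraints.txt  # reuse those pins in the image build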
I am trying to install graphviz on my RHEL VM. When I run
$sudo yum install graphviz
I get this:
This system is not registered with an entitlement server. You can use subscription-manager to register.
No package graphviz available.
Error: Nothing to do
I later found out that I get this same problem with all packages.
I have tried several solutions I have found online such as:
saving the .repo file found here (this link will download the file)
then running
#from dir containing graphviz-rhel.repo
$sudo yum-config-manager --add-repo graphviz-rhel.repo
the output was
This system is not registered with an entitlement server. You can use subscription-manager to register.
adding repo from: graphviz-rhel.repo
grabbing file graphviz-rhel.repo to /etc/yum.repos.d/graphviz-rhel.repo
repo saved to /etc/yum.repos.d/graphviz-rhel.repo
Then I ran
$sudo yum-config-manager --enable graphviz-rhel
This gives no output, and $yum-config-manager list all does not list anything related to graphviz as a repo (enabled or disabled).
I tried the solution here: failed to install 'graphviz*' packages with yum command on my RHEL server
except I found the rpm file here.
When I ran the rpm command I got an error because I was missing a couple dozen dependencies, so I don't think tracking down each of them this way is a reasonable solution.
If someone can either tell me why one of these approaches didn't work, or let me know how to get yum install <package> to work, I would greatly appreciate it.
As posted in the comments, in order to use yum on a RHEL system you need an active subscription.
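A minimal sketch of registering the system and retrying (the username is a placeholder; the attach options depend on your subscription):

sudo subscription-manager register --username <your_redhat_username>
sudo subscription-manager attach --auto
sudo yum install graphviz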
I updated my Mac to High Sierra, and now I can't install pycurl. It fails with this message: Curl is configured to use SSL, but we have not been able to determine which SSL backend it is using. Please see PycURL documentation for how to specify the SSL backend manually.
I searched the documentation and the web and found some solutions, but they did not fix my problem. The most popular is this one:
pip uninstall pycurl
export PYCURL_SSL_LIBRARY=openssl
pip install pycurl
Here is the complete error:
A solution similar to the one you found worked for me when issued from within my virtualenv. I use Homebrew as a package manager on macOS High Sierra, and Pipenv to manage my project dependencies and virtualenv. The error emerged after adding the PyVimeo API Library, which has PycURL as a dependency, to my project.
The generated errors were, first,
src/pycurl.c:137:4: warning: #warning "libcurl was compiled with SSL
support, but configure could not determine which library was used;
thus no SSL crypto locking callbacks will be set, which may cause
random crashes on SSL requests" [-Wcpp]
then,
ImportError: pycurl: libcurl link-time ssl backend (openssl) is
different from compile-time ssl backend (none/other)
As mentioned in the PycURL docs, the solution was to "tell [PycURL's] setup.py what SSL backend is used." Setting the environment variables recommended in the output of brew info openssl, alone, did not solve the problem.
Then I found a tangentially related Github issue comment and tried the following from within my project's virtualenv:
(env)$ pip uninstall pycurl
(env)$ pip install --upgrade pip
(env)$ export LDFLAGS=-L/usr/local/opt/openssl/lib
(env)$ export CPPFLAGS=-I/usr/local/opt/openssl/include
(env)$ export PYCURL_SSL_LIBRARY=openssl
(env)$ pip install pycurl
The install command gave this output:
Collecting pycurl
  Using cached https://files.pythonhosted.org/packages/e8/e4/0dbb8735407189f00b33d84122b9be52c790c7c3b25286826f4e1bdb7bde/pycurl-7.43.0.2.tar.gz
Building wheels for collected packages: pycurl
  Running setup.py bdist_wheel for pycurl ... done
  Stored in directory: /Users/me/Library/Caches/pip/wheels/d2/85/ae/ebf5ff0f1378a69d082b4863df492bf54c661bf6306a2bd
Successfully built pycurl
tuspy 0.2.1 has requirement pycurl==7.43.0, but you'll have pycurl 7.43.0.2 which is incompatible.
Installing collected packages: pycurl
Successfully installed pycurl-7.43.0.2
I noted the (somewhat petty?) tuspy error and trudged on. This time, my script ran without PycURL complaining.
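To confirm which SSL backend the rebuilt PycURL is linked against, the version string can be inspected; it includes the libcurl build info, which names the SSL library:

python -c "import pycurl; print(pycurl.version)"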
When executing this from the command line within my package:
python setup.py sdist bdist_egg upload
I get:
Server response (403): Must access using HTTPS instead of HTTP
This used to work many times until now. Searching for the error message didn't give me any helpful information. Does anyone have a clue what's going on?
Update: Use twine for uploading distributions to PyPI.
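A minimal sketch of that workflow, assuming a setup.py-based project:

python setup.py sdist bdist_wheel
pip install twine
twine upload dist/*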
Are you using a .pypirc file?
If you are, maybe change the URLs to point to the HTTPS links?
[distutils]
index-servers =
pypi
pypitest
[pypi]
repository=https://pypi.python.org/pypi
username=your_username
password=your_password
[pypitest]
repository=https://testpypi.python.org/pypi
username=your_username
password=your_password
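The pypi.python.org upload endpoints were eventually retired, so if the 403 persists with the configuration above, pointing the repository at the current upload endpoint (the one twine uses by default) is worth a try:

[pypi]
repository=https://upload.pypi.org/legacy/
username=your_username
password=your_password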
Updating setuptools makes the error disappear:
pip install setuptools -U
Then running the upload command ends with:
Submitting dist/my.packagename-1.3.tar.gz to https://upload.pypi.org/legacy/
error: None
But still, no new version is available on PyPI.
I'm installing GNU Radio, following the instructions here.
But every time I try to do sudo yum install gnuradio, it says:
Loaded plugins: fastestmirror, refresh-packagekit, security
Loading mirror speeds from cached hostfile
* base: centos.mirror.cdnetworks.com
* extras: centos.mirror.cdnetworks.com
* updates: centos.mirror.cdnetworks.com
Setting up Install Process
No package gnuradio available.
Error: Nothing to do
It's a freshly installed CentOS 6.5 and I've never edited the CentOS yum repository configuration. What's wrong with gnuradio? Have they removed the package from the yum repository?
On their website, they provide several ways to install it, including PyBOMBS, but I prefer yum. Building from source is a bit of a bother, so it's the last thing I will try.
By default, CentOS does not include all the repositories needed by gnuradio and its dependencies.
You additionally need to configure/add at least RPMForge and EPEL for your CentOS.
References:
http://wiki.centos.org/AdditionalResources/Repositories/RPMForge#head-f0c3ecee3dbb407e4eed79a56ec0ae92d1398e01
http://www.rackspace.com/knowledge_center/article/installing-rhel-epel-repo-on-centos-5x-or-6x
This is what I was told, but I have not yet tested it, so I cannot say for sure that it is correct.
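For what it's worth, a minimal sketch of adding EPEL on CentOS 6 and retrying would look like this (the release RPM location may have changed since, and RPMForge needs its own .repo/RPM as described in the first link above):

sudo yum install epel-release
# or, if epel-release is not available from the extras repo:
sudo rpm -Uvh https://dl.fedoraproject.org/pub/epel/epel-release-latest-6.noarch.rpm
sudo yum install gnuradio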