I have installed virtualenvwrapper in a 3.5.0b1 virtualenv, called setupenv, to be able to generate new python 3.5 test environments easily.
Looking over the list of installed packages, I did see argparse version 1.3.0 installed. This (latest) version of argparse has not been tested with 3.5.
Is this dangerous?
As far as I know 3.2+ comes with its own argparse. Could this install break other packages relying on argparse? Why is this installed at all?
This is probably not dangerous. If you run:
python3.5 -c "import argparse; print(argparse.__file__)"
you can see that the argparse.py installed with the interpreter takes precedence over the superfluously installed argparse package.
A bit of digging (or using the pipdeptree package) will show you that stevedore depends on argparse. This is just sloppy programming (or disregard for possible bandwidth issues).
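For example, pipdeptree can print the reverse dependency tree to show what pulls argparse in (a quick sketch; run it inside the setupenv virtualenv):
pip install pipdeptree
pipdeptree --reverse --packages argparse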
In a package's setup.py you can easily test if you are running python < 2.7 or 3.0 <= python < 3.2 and only install argparse for those cases.
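A minimal sketch of such a setup.py (the package name example-pkg is hypothetical):
import sys
from setuptools import setup

install_requires = []
# argparse entered the standard library in 2.7 and 3.2;
# only older interpreters need the PyPI backport
if sys.version_info < (2, 7) or (3, 0) <= sys.version_info < (3, 2):
    install_requires.append('argparse')

setup(
    name='example-pkg',
    version='0.1',
    install_requires=install_requires,
)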
I would just uninstall argparse from your setupenv virtualenv (pip uninstall -y argparse); in my experience virtualenvwrapper is not affected by the removal.
This is actually a bug in stevedore: it uses the pbr package, which supports specifying the Python version using environment markers,
but stevedore is not using that. The irony is that pbr's own example for this feature is argparse, specified in requirements.txt as:
argparse; python_version=='2.6'
A bug report against stevedore was filed, but although the fix was trivial, it was not implemented for several releases. Finally the issue was
set to "won't fix", probably because dropping support for 2.6 removed the
need for argparse altogether.
I have just installed the stable version of TensorFlow 2.0 (released on October 1st 2019) in PyCharm.
The problem is that the keras package is unavailable.
The actual error is:
"cannot import name 'keras' from tensorflow"
I installed the CPU version via pip install tensorflow==2.0.0, then uninstalled it and installed the GPU version via pip install tensorflow-gpu==2.0.0.
Neither of the above versions of TensorFlow worked properly (I could not import keras or other packages via from tensorflow.package_X import Y).
If I revert TensorFlow to version 2.0.0b1, keras is available as a package (PyCharm recognises it) and everything runs smoothly.
Is there a way to solve this problem? Am I making a mistake in the installation process?
UPDATE --- Importing from the Python Console works without any error.
For PyCharm Users
For those who use PyCharm: install the upcoming (EAP) release 2019.3, EAP build 193.3793.14, from here. With that, you will be able to use autocomplete for the current stable release of TensorFlow (i.e. 2.0). I have tried it and it works :).
For other IDEs
For users of other IDEs, this will be resolved only after the stable version is released, which is anyway the case now; even so, a fix might take some more time. See the comment here. I assume it is wise to wait and keep using version 2.0.0b1. On the other hand, avoid imports from tensorflow_core if you do not want to refactor your code in the future.
Note: for autocomplete to work, use the import statement as below
# autocomplete works with this form:
import tensorflow.keras as tk
# but not with this one:
# from tensorflow import keras as tk
Autocomplete works for the TensorFlow 2.0.0 CPU version, but not for the GPU version.
SOLVED --- See the answers to this problem below.
SOLUTION 1 (best solution)
This is the accepted answer provided above. It works on the EAP version; I tested it on several Windows machines.
SOLUTION 2
Although PyCharm does not recognise the modules, running the .py file works. I still do not know whether this is a problem with TensorFlow or with PyCharm, but many people have run into this problem, and this is the workaround I have found.
SOLUTION 3
Import the modules from tensorflow_core instead of tensorflow
Example: from tensorflow_core.python.keras.preprocessing.image import ImageDataGenerator
However, as mentioned by @Nagabhushan S N in the comment below and in the accepted answer above:
On the other hand avoid imports from tensorflow_core if you do not
want to refactor your code in the future.
I'm running a Windows computer with just a CPU (no GPU). When I run pip install tensorflow -vvv in order to see what pip is doing, it lists a lot of links, but for all of them, it says "Skipping link ... it is not compatible with this Python."
Does tensorflow support Python 3.6.4 on Windows? If so, what binary URL should I use to install it?
(I previously installed with this version due to reading this, but ran into this error without the DLL load failed message, so I'm wondering if there's a better version I should use.)
Also, I'm aware that Tensorflow says they support Python 3.x, but right now it hasn't been working for me.
You have probably installed the 32-bit version of Python; you need the 64-bit version.
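A quick way to check which one you have, using only the standard library:
import platform
import struct

# reports '64bit' or '32bit' for the running interpreter
print(platform.architecture()[0])
# pointer size in bits: 64 on a 64-bit Python, 32 on a 32-bit one
print(struct.calcsize("P") * 8)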
Upon trying to install TensorFlow for a conda environment, I encountered the following error message, without any progress:
tensorflow-1.1.0-cp35-cp35m-win_amd64.whl is not a supported wheel on this platform
Have you tried uninstalling and re-installing TensorFlow using pip within your Conda environment? I.e.:
pip uninstall tensorflow
Followed by:
pip install tensorflow
If it doesn't work, the issue may be with your Python installation. TensorFlow only supports 64-bit Python 3.5+ on Windows (see more info here).
Perhaps you have the default Python installation, which comes in a 32-bit version. If that's the case, you can download 64-bit Python 3.5 or later from here to use in your conda environment, and then you should be able to install/run TensorFlow without any issues.
Make sure that the Python version installed in the environment is 3.5, not 3.6. Since 3.6 was released, conda automatically sets that version as the default for Python 3; however, it is still not supported by TensorFlow.
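For example, you can pin the interpreter when creating the environment (a sketch; the environment name tf35 is arbitrary, and activate is the Windows form of the command):
conda create --name tf35 python=3.5
activate tf35
pip install tensorflow==1.1.0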
You can work with the tensorflow library, along with other essential libraries, using a Dockerfile. Using Docker for the environment is a good way to run experiments in a reproducible manner, as in this blog.
You can also try using datmo to set up the environment and track machine learning projects for reproducibility, using the datmo CLI tool.
I'm trying to understand why the easy_install of pyicu works and pip install doesn't (see below). I'm also trying to understand: what is the difference between a PyPI project with a universal wheel and one without? Will installs be "easier"? If so, will this merge request solve the problem of polyglot not installing on an Anaconda machine?
I need help/advice/solutions on how best to resolve a Python project install issue that is tied to underlying dependencies. I have two local fixes in GitHub Gists, but would like to know the best way to get this fix "out there" so people like me can find it. What is the normal Python community approach? The problem centers around three projects:
polyglot - a python multilingual NLP toolkit
pyicu - Python extension wrapping IBM's International Components for Unicode C++ library (ICU).
pycld2 - CLD (Compact Language Detection) library as maintained by Dick Sites
The goal:
Install polyglot on a MacOSX computer running Python Anaconda Distribution
Make the fix I found available to everyone; lots of issues have been published about this problem.
The Problems (lots of them):
The core polyglot dependency, pyicu, does not install properly when you use pip install. I discovered you must use easy_install for it to build properly and work on Mac OS X; if you don't use easy_install, the install fails.
polyglot requires icu 54.1.1 to run in Anaconda, but Homebrew, the Mac OS X tool for installing icu, only installs version 58.1, and that version is too new. Old Stack Overflow answers advise brew install icu4c to fix the problem, but Homebrew's evolution has made that advice obsolete.
pyicu does not have a universal wheel, but I created a merge request to add one to pyicu. The only way to fix the icu problem is with this channel's icu, https://anaconda.org/ccordoba12/icu; a plain conda install icu will not work, even though that is the normal conda way of doing things.
pycld2 - CLD (Compact Language Detection) becomes a problem because, after I build the wheel file locally, I have to download the project and run setup.py install locally. There has to be a better way to do this, right?
What I've Done to Solve the Problem (should I do more? what should I do next?)
Created two Gists that can successfully install polyglot on a Mac running Anaconda for Python 2.7 or Python 3.5
Python 2.7 fix
Python 3.5 fix
Created the merge request for pyicu
Both Gist fixes work. But is this install error tied to the wheel? If I install pyicu with easy_install, the install works, but with pip it doesn't.
What are the steps to take in the Python community to fix it so people can find the solution or just pip install with no problems?
I did a test, and if the wheel file is built first, pip installs it with no issues.
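For reference, pip can build and install a wheel directly, instead of downloading the project and running setup.py install by hand (a sketch; the ./wheels output directory is arbitrary):
pip wheel pycld2 -w ./wheels
pip install ./wheels/pycld2-*.whl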
My department has numpy version < 1.4 installed in /usr/lib/somewhere/numpy. Since I don't have permission to replace it with a new version, I installed numpy 1.5 in my home directory. However, later when I installed scipy, it complained that the numpy in /usr/lib/somewhere/numpy has version < 1.4. How can I solve this problem?
Change sys.path so that your numpy directory comes in front of the global numpy directory.
That way your version should be imported instead of the other one. If you really want to make sure that the other version isn't used, then you can use virtualenv to get your own private environment with all of your own libraries.
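A minimal sketch of the sys.path approach (the site-packages path is hypothetical; point it at wherever you installed numpy 1.5 under your home directory):
import sys

# put the local numpy 1.5 install ahead of /usr/lib/somewhere/numpy
sys.path.insert(0, "/home/you/local/lib/python2.6/site-packages")

import numpy
print(numpy.__version__)  # should now report 1.5.x
Setting the PYTHONPATH environment variable to the same directory achieves the same effect without editing code.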
You should use virtualenv with the --no-site-packages option to create an environment isolated from the system packages and avoid any conflicts with them. You can then install numpy with pip or easy_install, specifying the version you want. There are many tutorials out there about how to use virtualenv.
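For example (a sketch; the environment name myenv is arbitrary):
virtualenv --no-site-packages myenv
source myenv/bin/activate
pip install numpy==1.5.1 scipy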