I'm starting this thread with an answer, not a question. The questions are stated at the end:
I tried to add the pip package 'tfx' to Apache Airflow using my own Dockerfile and docker-compose.yaml. I added my own DAG to Airflow, and it failed to load with this error message:
doc_controls has no attribute 'inheritable_header'
It took me a day to find the cause. When you add this to your Dockerfile...
pip install tfx
...pip will install tfx, tensorflow-2.6.0 and tensorflow-estimator-2.7.0. The latter apparently depends on not-yet-released code in the GitHub repo tensorflow/docs, which contains doc_controls.
So instead add this to keep tensorflow-estimator in line with packages that pip can find:
RUN pip install --no-cache-dir --user \
    tfx==1.3.1 \
    tensorflow==2.6.0 \
    tensorflow-estimator==2.6.0
I'm losing a lot of time solving problems with dependencies between pip packages, and between pip packages and the underlying C/C++ libraries. Am I the only one?
Here are my questions:
Am I correct in assuming that pip is supposed to figure out which versions of tfx's dependencies to install? Can I normally rely on pip to do this correctly, or will pip simply install the latest version of every dependency without regard to their mutual compatibility?
On the internet there are many Dockerfiles that do not specify any version numbers for the apt/pip packages they install. Such a Dockerfile is like a box of chocolates, right? If you build it at time t1 and again at time t2, the resulting images can differ in the package versions they contain, right?
In general: given a Docker image, why can one not recover the Dockerfile that was used to construct it?
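As far as I understand it, the closest one can get is listing the commands recorded in each image layer, which approximates but does not reconstruct the original Dockerfile:
docker history --no-trunc <image>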
Regards,
Chris
I have also been running into these dependency issues recently. I came across another post that might be of interest: Resolving new pip backtracking runtime issue. Based on that, I think pip does try to figure out which package versions to install to avoid conflicts, but it sometimes struggles. I tried one of the suggested tools, pipreqs, but I didn't find it useful for my particular problem; in fact it broke things even more.
Also, thanks for the solution to this one; I had the same problem.
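One approach that sometimes helps (a sketch only, reusing the version pins from the answer above; I haven't verified it against this exact tfx setup) is to put the known-good pins in a constraints file and point pip at it, so the resolver cannot drift to newer releases:
# constraints.txt
tensorflow==2.6.0
tensorflow-estimator==2.6.0
# in the Dockerfile
COPY constraints.txt .
RUN pip install --no-cache-dir --user --constraint constraints.txt tfx==1.3.1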
Related
I am trying to install tensorflow-text through Miniconda in Spyder. I have managed to install other modules in Spyder such as tensorflow itself, pandas, scikit-learn, etc. However, using the same command as for all the other installations (with the package name replaced by tensorflow-text),
conda install spyder-kernels tensorflow-text -y
I continue to get the same error whenever I try to install tensorflow-text:
PackagesNotFoundError: The following packages are not available from current channels:
- tensorflow-text
followed by a suggestion to search for the package on anaconda.org. As such, I searched for the tensorflow-text package on the anaconda site and found one, albeit for linux, by rocketce. Attempting to run the commands listed under the tensorflow-text installation instructions on that webpage also yielded the same error.
At first, I tried to install tensorflow-text through pip and was able to successfully run the command
pip install -U tensorflow-text==2.10.0
which seemed to install tensorflow-text. But I could not figure out how to access it, or whether it was correctly installed. Specifically, I am looking to use tensorflow-text in the Spyder IDE. I was able to get tensorflow working in the IDE, but not tensorflow-text specifically.
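A quick way to check where (and whether) it actually landed, assuming Spyder's kernel runs in the same environment the pip install went into, is to import it by its module name, tensorflow_text, from that environment's Python:
python -c "import tensorflow_text; print(tensorflow_text.__file__)"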
I am using a Windows 10 system; I could not find anything on the anaconda site for Windows 10. I am rather inexperienced (if you could not already tell from the nature and description of the problem), so patience and clear explanations are appreciated. Thanks in advance!
My main Python environment on OSX Big Sur is the base env from Anaconda.
Now, I regularly need to install packages that are not available via conda or conda-forge, or are not available there in the latest version.
I understand that there are several different approaches:
Using the pip package installed by conda and following the preferred way of using pip within conda, i.e. adding those packages at the end of an environment.yaml file and deleting and re-generating the env whenever a new package needs to be added, in order to avoid conflicts from the interplay between conda and pip.
Forking the latest available recipe as described here, updating the version number and details, and using my local version of the recipe.
Simply installing the packages via pip, npm (the Node.js package manager), or apm (the Atom package manager) from within the activated conda environment.
Which approach is best? Is it safe to use pip with conda in the way described in #1?
#2 is more effort than the other two. But I am looking for a generic approach that will not mess up my installation.
Previously, I installed tensorflow 1.13 on my machine.
Some projects depend on different versions of tensorflow, and I do not want to mix those versions up.
So I created an env called tf2.0 and used pip to install tensorflow 2.0.0b1 in that specific virtual environment.
However, after I ran pip install tensorflow-gpu==2.0.0b1 in that "tf2.0" conda environment, I found that it took effect globally, which means I am stuck with tensorflow-gpu 2.0.0b1 even when the virtual env "tf2.0" is deactivated.
I want to keep using tensorflow 1.13 when the virtual env is deactivated.
It's hard to troubleshoot the described conditions without more details (exact commands run, the PATH before and after activation, etc.). Nevertheless, you can try switching to the most recent recommendations for mixing Conda and Pip. Namely, avoid installing things ad hoc, which is prone to using the wrong pip and clobbering packages; instead, define a YAML file and always create the whole env in one go.
As a minimal example:
my_env.yaml
name: my_env
channels:
  - defaults
dependencies:
  - python
  - pip
  - pip:
    - tensorflow-gpu==2.0.0b1
which can be created with conda env create -f my_env.yaml. Typically, it is best to include everything possible in the "non-pip" section of dependencies.
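For example (the package list here is purely illustrative), anything Conda can provide goes under the regular dependencies, and only the truly pip-only packages stay under pip:
name: my_env
channels:
  - defaults
dependencies:
  - python=3.7
  - numpy
  - pandas
  - pip
  - pip:
    - tensorflow-gpu==2.0.0b1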
Most likely you used the wrong pip. To make sure you are using the correct pip, it is usually good practice to do
python -m pip install --user PACKAGE_NAME
Given that you have conda, pip should be the last resort.
The conda-forge channel most likely has the latest version of the package you are looking for.
conda install -c conda-forge PACKAGE_NAME
If you have to use pip, make sure you are in an environment and that environment has its own pip.
conda create -n test python=3.7
conda activate test
python -m pip install PACKAGE_NAME
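As a quick sanity check (not specific to any particular package), you can confirm which interpreter and pip the activated environment resolves to:
which python              # should point inside .../envs/test/
python -m pip --version   # the reported path should also be inside the test env
On Windows, use where python instead of which python.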
From the problem you describe, I would guess that the environment in which you are trying to install tensorflow 2.0 is not activated.
Please make sure to activate the environment after making it.
So after creating the environment, do this:
conda activate tf2.0
Make sure you see this prompt prefix:
(tf2.0) C:\Users\XYZ>
And then install tensorflow.
I'm currently trying to install scrapy, and I encountered my first error:
ERROR conda.core.link:_execute_actions(337): An error occurred while installing package 'conda-forge::automat-0.7.0-py_1'.
CondaError: Cannot link a source that does not exist. D:\ProgramFiles\Python\Scripts\conda.exe
Running conda clean --packages may resolve your problem.
Attempting to roll back.
CondaError: Cannot link a source that does not exist. D:\ProgramFiles\Python\Scripts\conda.exe
Running conda clean --packages may resolve your problem.
I researched this error and followed the advice on this link:
My issues were largely similar to his until I reached the comment which advised me to run conda update -n base conda.
When I ran this code, I encountered my next error:
CondaEnvironmentNotFoundError: Could not find environment: base .
You can list all discoverable environments with conda info --envs.
Kindly advise whether my steps were appropriate and how I can fix this issue.
The weird thing is that I had installed scrapy before, and these errors only occurred after I recently re-installed Anaconda.
I'm not sure what other info you might require to better understand the situation. Do let me know and I will assist promptly.
Thank You
Try installing scrapy from the default channel (plain conda install scrapy) instead of from conda-forge.
To understand the difference between these two channels, please read the answer to the following question: Should conda, or conda-forge be used for Python environments?
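As a sketch of a sequence that may be worth trying (the first command is simply what the error message itself suggests):
conda clean --packages
conda install scrapy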
I've been trying to install tensorflow with GPU support using these steps:
http://www.nvidia.com/object/gpu-accelerated-applications-tensorflow-installation.html
and also using:
http://thelazylog.com/install-tensorflow-with-gpu-support-on-sandbox-redhat/
This is the error message that I'm getting when I try to run the bazel build command for building the tensorflow pip package (with the --config=cuda flag set):
The specified --crosstool_top '//third_party/gpus/crosstool:crosstool' is not a valid cc_toolchain_suite rule.
What's strange is that if I remove the --config=cuda flag, I don't get the error message while building, and I'm able to install tensorflow successfully, but without GPU support.
I experienced the same issue using the nvidia instructions. What I did was drop the git reset line from the instructions, and it works.
Details (starting from the point where the error appears):
Close, reopen terminal
Run git clone (again), and cd tensorflow
Run ./configure
Run the bazel build, etc. (a rough sketch of the commands follows below)
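For reference, the build-and-install sequence from the TensorFlow documentation of that era looked roughly like this (exact paths and flags may differ for your checkout):
bazel build --config=opt --config=cuda //tensorflow/tools/pip_package:build_pip_package
bazel-bin/tensorflow/tools/pip_package/build_pip_package /tmp/tensorflow_pkg
pip install /tmp/tensorflow_pkg/tensorflow-*.whl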
This may be unrelated, but I also ran into an issue with the .whl line; the error message was that the wheel could not be found, or something along those lines. This is the "And finally install the TensorFlow pip package" section. To resolve it in my case, I typed in the terminal everything up to "..._pkg/tensorflow" and then pressed Tab for auto-completion. The file name that popped up was significantly longer than the one in the guide, but it worked. Also, if anyone gets a "numpy not installed" message when following the nvidia instructions, replace python-pip and dev with python-numpy and run that line again to install.
Configuration: Fresh Ubuntu 16.04, GTX970M, running driver 367.48 (from CUDA installation), CUDA 8.0, CuDNN 5.1
Full setup path:
Fresh Ubuntu, with downloads and 3rd party apps selected during installation.
Control panel => Software and updates => Other Software => Canonical ticked
Install CUDA using nvidia instructions in CUDA documentation, .deb format
CuDNN 5.1 installed, the rest from the nvidia link.
I hope everything works out for you!
(I'm sorry for the poor formatting)
I was going through the same problem and recently found the solution. The problem is with the installation of Bazel, which leads to this kind of error.
After installing bazel from the installer, make sure you add bazel's bin directory to your PATH in ~/.bashrc and then reload it with
source ~/.bashrc
Please change the git source version slightly, as shown below:
$ git clone https://github.com/tensorflow/tensorflow
$ cd tensorflow
// $ git reset --hard 70de76e
$ git reset --hard 287db3a
And please refer to the issue below:
https://github.com/tensorflow/tensorflow/issues/4944
Also, zlib has been updated since this TF build. You need to check http://www.zlib.net/ to get the latest version and SHA-256, then update tensorflow/workspace.bzl with that information (lines 254-266 in this build). At this time, the correct version info would include the following:
url = "http://zlib.net/zlib-1.2.11.tar.gz",
sha256 = "c3e5e9fdd5004dcb542feda5ee4f0ff0744628baf8ed2dd5d66f8ca1197cb1a1",
strip_prefix = "zlib-1.2.11",