BigQuery error: AttributeError: 'ClientOptions' object has no attribute 'api_audience' - google-bigquery

I keep receiving the error AttributeError: 'ClientOptions' object has no attribute 'api_audience' when I call to_dataframe() on a query result from BigQuery. This worked fine before in the same virtual env, and I'm not sure what's happening now.
query.result() didn't raise any errors, but query.to_dataframe() raised the error.
These are the packages I have:
google-cloud==0.34.0
google-cloud-bigquery==2.34.3
google-cloud-bigquery-storage==2.16.2
google-cloud-core==2.3.2
google-cloud-storage==2.7.0

I was able to replicate your error after downgrading my google-api-core package to 2.7.3.
This can be resolved by upgrading google-api-core to 2.8.0 or later. As a general best practice, keep your packages up to date. You can upgrade to the latest version by running the command below.
pip install google-api-core --upgrade
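To confirm which versions are actually installed inside the virtual env, a small check like the following can help. This is a minimal sketch using only the standard library; the package name and the 2.8.0 threshold come from the answer above, and the version parsing assumes plain numeric X.Y.Z version strings.

```python
# Minimal sketch: check an installed package's version (Python 3.8+ stdlib).
from importlib import metadata

def version_tuple(v):
    """Turn '2.8.0' into (2, 8, 0); assumes purely numeric dot-separated parts."""
    return tuple(int(part) for part in v.split("."))

def is_at_least(package, minimum):
    """True if `package` is installed at version `minimum` or newer."""
    try:
        installed = metadata.version(package)
    except metadata.PackageNotFoundError:
        return False
    return version_tuple(installed) >= version_tuple(minimum)

# e.g. is_at_least("google-api-core", "2.8.0")
```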

The error may be related to a version incompatibility between the google-cloud-core package and other Google Cloud packages.
There are two ways to solve that error. One is:
pip install --upgrade google-cloud-core
and the other is:
pip install google-cloud-bigquery==2.29.0

Related

OSError: libcudart.so.10.2: cannot open shared object file: No such file or directory

For some reason, I am getting this error on Colab, even though I don't use the GPU... Any help would be greatly appreciated! Thanks! The error message is as follows:
OSError: libcudart.so.10.2: cannot open shared object file: No such file or directory
The reason is a mismatch of CUDA versions. I ran into this issue because the preinstalled version of PyTorch did not match the default version I installed using %pip install torchaudio (CUDA 10.2). print(torch.__version__) gives 1.10.0+cu111 (CUDA 11.1).
So I reinstalled pytorch, torchaudio, and torchvision with the command stated on the PyTorch website:
%pip install torch==1.10.0+cu113 torchvision==0.11.1+cu113 torchaudio==0.10.0+cu113 -f https://download.pytorch.org/whl/cu113/torch_stable.html
After restarting the environment, it should work.
This method uninstalls PyTorch and reinstalls another version; it would be faster to just install the matching version of torchaudio, in my case:
%pip install -q torchaudio==0.10.0+cu111 -f https://download.pytorch.org/whl/cu111/torch_stable.html
I don't know if it would be better to install the cu113 variant.
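To see whether torch and torchaudio were built against the same CUDA version, you can compare the local build tags in their version strings. This is a minimal sketch; the +cuXXX tag format is the one visible in the versions quoted above.

```python
# Sketch: extract the CUDA build tag (e.g. "cu111") from a wheel-style
# version string such as "1.10.0+cu111".
def cuda_tag(version_string):
    """Return the local tag after '+', or None for an untagged (e.g. CPU) build."""
    if "+" in version_string:
        return version_string.split("+", 1)[1]
    return None

# In a live environment you would compare, e.g.:
#   cuda_tag(torch.__version__) == cuda_tag(torchaudio.__version__)
```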
Also, I would suggest checking the error logs to find out which Python package causes the error. In my case, it was generated in torch-cluster, and it was resolved simply by downgrading torch-cluster to 1.5.9 (the recent version, 1.6.0, was released just a couple of weeks ago and was installed by default).
I've solved it by replacing the version of torchaudio installed by pip with the one from conda.
pip uninstall torchaudio
conda install torchaudio -c pytorch
Note conda's message; it installs a version with a bundled CUDA lib:
The following NEW packages will be INSTALLED:
torchaudio pytorch/linux-64::torchaudio-0.11.0-py38_cu113

How to solve "ERROR: No matching distribution found for tensorflow==1.12.0"

I am trying to install tensorflow 1.12.0. This is the command that I am using: pip install tensorflow==1.12.0. I got this command from this link. This is the error that I am getting:
ERROR: Could not find a version that satisfies the requirement
tensorflow==1.12.0 (from versions: 2.5.0rc0, 2.5.0rc1, 2.5.0rc2,
2.5.0rc3, 2.5.0) ERROR: No matching distribution found for tensorflow==1.12.0
What am I doing wrong?
You can install previous versions of TensorFlow directly from the GitHub releases page. For example, version 1.12.0 can be downloaded from https://github.com/tensorflow/tensorflow/releases/tag/v1.12.0.
My Python version was 3.9. Installing Python 3.6 solved the problem. I installed it in a virtual environment with conda.
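The underlying issue is that pip only shows wheels published for your interpreter, so on Python 3.9 the old 1.x wheels are invisible. A quick pre-flight check can make this explicit. This is a sketch; the (3, 6) cutoff is an assumption taken from the answer above, so check the wheel tags on PyPI for the exact supported versions.

```python
# Sketch: would pip running under this Python even see a wheel that was
# only published up to a given Python version?
import sys

def wheel_visible(py_version=None, max_supported=(3, 6)):
    """True if `py_version` (default: the running interpreter) is at or below
    `max_supported`, the newest Python the wheel was built for (assumption)."""
    if py_version is None:
        py_version = sys.version_info[:2]
    return tuple(py_version[:2]) <= tuple(max_supported)
```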

numpy installation into pypy3 virtual env : `undefined symbol: cblas_sgemm`

I am trying to install numpy into a pypy3 virtualenv, but I'm stuck with this error (at import time):
venv_pypy/site-packages/numpy-1.16.0.dev0+1d38e41-py3.5-linux-x86_64.egg/numpy/core/_multiarray_umath.pypy3-60-x86_64-linux-gnu.so: undefined symbol: cblas_sgemm
I am on an up to date archlinux, numpy works fine with CPython, but I have a project using pandas (which depends on numpy) that I need to test on pypy.
I first tried the recommended method (pip install numpy in the venv), but it didn't work (the install is fine, but I still get the same error at execution).
Then I tried what is suggested in https://stackoverflow.com/a/14391693/1745291 (linked from Numpy multiarray.so: undefined symbol: cblas_sgemm), building with OpenBLAS instead, since I hadn't installed ATLAS (an AUR package on Arch that I don't want to install). But it is still not working (same error, and the method may be outdated since it's from 2013).
...And finally, I tried a build with no accelerations (at least, that is what is claimed), following https://docs.scipy.org/doc/numpy-1.15.0/user/building.html#disabling-atlas-and-other-accelerated-libraries
...But still the same result...
What am I doing wrong?
You can try uninstalling it with pip and installing it from apt (if you are using Ubuntu, etc.).
This approach solved my problem:
pip3 uninstall numpy
sudo apt-get install python3-numpy
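Since the undefined-symbol error is specific to the interpreter ABI, it is worth confirming which implementation the venv is actually running before debugging the BLAS linkage. A minimal sketch using only the standard library:

```python
# Sketch: report which Python implementation and version a venv is running,
# since a numpy build for CPython will not load under PyPy and vice versa.
import sys

def interpreter_info():
    """Return (implementation, 'major.minor'), e.g. ('pypy', '3.5')."""
    return sys.implementation.name, "%d.%d" % sys.version_info[:2]
```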

ObjectDetectionAPI TypeError: __new__() got an unexpected keyword argument 'serialized_options'

I did everything it says at https://github.com/tensorflow/models/blob/master/research/object_detection/g3doc/installation.md and lastly updated my protoc. When I enter $ protoc --version, it prints libprotoc 3.5.1 on the terminal. But when I try to run $ python object_detection/builders/model_builder_test.py, it throws the error TypeError: __new__() got an unexpected keyword argument 'serialized_options'. What am I doing wrong?
Updating protobuf to 3.6 works for me.
pip install -U protobuf
Based on this thread in the TensorFlow repository, you should downgrade your protobuf to 3.4.0.
Updating protobuf to 3.8 works for me.
pip install -U protobuf
On Python 3, none of the above solutions worked, so I uninstalled the existing installation using pip and then installed it again with pip3 install protobuf. Then it worked.
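Because the answers above suggest different protobuf versions (3.4.0, 3.6, 3.8), it helps to confirm which version the interpreter actually imports before reinstalling. A generic sketch; any module name other than google.protobuf here is just for illustration:

```python
# Sketch: report a module's version if it is importable, else None.
import importlib

def module_version(name):
    """Return `name`'s __version__ if it imports and defines one, else None."""
    try:
        mod = importlib.import_module(name)
    except ImportError:
        return None
    return getattr(mod, "__version__", None)

# e.g. module_version("google.protobuf") -> the installed version, or None
```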

libgeos_c.so undefined symbol: GEOSisClosed

I was trying to set up CKAN with a couple of extensions. The main extension is spatial (https://github.com/ckan/ckanext-spatial). During the tests the server returns code 500.
The log is:
AttributeError: /usr/lib/libgeos_c.so.1: undefined symbol: GEOSisClosed
I couldn't find a similar issue. Has anyone faced a similar error?
There appears to be an issue with the recent versions of Shapely, according to this: https://github.com/Toblerity/Shapely/issues/176
It looks like this became a problem for installers during September, because ckanext-spatial's pip-requirements.txt will get you the latest version ("Shapely>=1.2.13"). It sounds like the latest version, 1.4.3 (released 1 Oct 2014), is fixed though, so try that, or failing that, an earlier one (e.g. 1.3.3).
(pyenv) $ pip install 'Shapely>=1.4.3'
Check if you installed the GEOS package
sudo apt-get install libgeos-c1
If still no luck try installing the development libraries:
sudo apt-get install libgeos-dev
I have shapely 1.5.9 installed on Ubuntu and I was receiving a similar error...
AttributeError: /usr/lib/libgeos_c.so.1: undefined symbol: GEOSCovers
I had to revert to a previous version of Shapely to get this to work. Try this command.
sudo pip install 'Shapely==1.4.3'
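All of these GEOS errors boil down to a symbol missing from the libgeos_c build that Shapely loaded. You can check a library for a specific symbol directly with ctypes. This is a sketch; the library path and symbol names are the ones from the error messages above, and passing None inspects the symbols already loaded into the current process.

```python
# Sketch: test whether a shared library exports a given symbol.
import ctypes

def has_symbol(library, symbol):
    """True if `symbol` resolves in `library` (a path to a shared library,
    or None for the current process's global symbol table)."""
    try:
        lib = ctypes.CDLL(library)
        getattr(lib, symbol)
        return True
    except (OSError, AttributeError):
        return False

# e.g. has_symbol("/usr/lib/libgeos_c.so.1", "GEOSisClosed")
```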