I am currently running Debian 8 Jessie with Python 2.7 and the current google-cloud-speech and google-cloud-storage libraries (installed with pip --upgrade today). When I attempt to build the config, it fails with:
ValueError: Protocol message RecognitionConfig has no "enable_automatic_punctuation" field.
from this call:
config = speech.types.RecognitionConfig(
    encoding=speech.enums.RecognitionConfig.AudioEncoding.LINEAR16,
    sample_rate_hertz=8000,
    language_code='en-US',
    # Enable automatic punctuation
    enable_automatic_punctuation=True)
The call was copied and pasted directly from "https://cloud.google.com/speech-to-text/docs/automatic-punctuation#speech-enhanced-model-python".
Huh?
enable_automatic_punctuation is only available if you import speech_v1p1beta1 instead of speech_v1. Compare the documentation for RecognitionConfig for the beta and non-beta versions.
Also, in the very same example that you have linked, if you click on View on Github, you can see the following import:
from google.cloud import speech_v1p1beta1 as speech
Also, related to this topic.
EDIT:
Also, that sample code is written for Python 3, and you are using Python 2.7; be aware of that.
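To make the fix concrete, here is the asker's snippet with only the import line changed (a minimal sketch, assuming the 1.x client library, where types and enums still exist):
from google.cloud import speech_v1p1beta1 as speech

config = speech.types.RecognitionConfig(
    encoding=speech.enums.RecognitionConfig.AudioEncoding.LINEAR16,
    sample_rate_hertz=8000,
    language_code='en-US',
    # This field exists in v1p1beta1
    enable_automatic_punctuation=True)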
I am getting the following error with plotnine==0.9 and matplotlib==3.6.
File "D:\Python\Python310\lib\site-packages\plotnine\stats\stat_density_2d.py", line 3, in <module>
import matplotlib._contour as _contour
ModuleNotFoundError: No module named 'matplotlib._contour'
If I downgrade matplotlib==3.5, the problem goes away.
It's discussed here, and it's already fixed here. Note that the fix has already been merged to main.
It was due to an internal matplotlib call that is no longer supported and has been replaced.
So I guess you could choose between:
downgrading to matplotlib 3.5.3
installing plotnine#main
until the next plotnine release (both commands are sketched below).
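For reference, both options are one-line pip installs; the plotnine repository lives at github.com/has2k1/plotnine, and the exact 3.5.3 pin follows the answer above:
pip install matplotlib==3.5.3
pip install git+https://github.com/has2k1/plotnine.git@main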
Carlos's answer is correct. However, if anybody else, like me, is uncertain of how to install plotnine#main, you can apply the fix rather easily:
Find the site-packages folder your Python script uses. It is usually a subdirectory of the Python version you are using, and it can be located reliably by reinstalling matplotlib or any other package you know you have access to and checking the logs in the console, e.g. using python -m pip install matplotlib (or see the one-liner after these steps).
Go down into the site-packages/plotnine/stats directory and open the stat_density_2d.py file in your editor of choice.
Apply and save the modifications made in the fix. Alternatively, overwrite the file with the one from the GitHub repository.
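If digging through pip install logs feels roundabout, this standard two-liner prints the installed location directly (it assumes nothing beyond plotnine being importable):
import plotnine
print(plotnine.__file__)  # e.g. ...\site-packages\plotnine\__init__.py; stats/ sits next to it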
ModuleNotFoundError: No module named 'matplotlib._contour'
This is an issue between matplotlib 3.6.1 and plotnine 0.9.0.
K.I.S.S.
in terminal:
pip show matplotlib # check the installed version
pip install matplotlib==3.5 # revert, and the problem is resolved for now
No more:
ModuleNotFoundError: No module named 'matplotlib._contour'
This resolves, for now, an issue that would otherwise stop progress.
I am trying to use the Google Speech API to recognize speech, on Windows with Colab.
Here is the error:
ImportError: cannot import name 'enums' from 'google.cloud.speech_v1'
Does anybody know how to solve this?
It looks like they have removed enums in the new version. Check this link; if you want enums, then you have to switch to an older version.
As mentioned in #addno1's answer, enums and types have been removed in the 2.x versions of the library. It seems that you are using a 2.x version of the library, hence the error.
If your code is using the 1.x version of the library and you would like to upgrade to the latest version, refer to this migration guide (the same one mentioned in the other answer). You can refer to this quickstart for setup instructions; updated client library code is given below.
# Imports the Google Cloud client library
from google.cloud import speech

# Instantiates a client
client = speech.SpeechClient()

# The name of the audio file to transcribe
gcs_uri = "gs://cloud-samples-data/speech/brooklyn_bridge.raw"

audio = speech.RecognitionAudio(uri=gcs_uri)

config = speech.RecognitionConfig(
    encoding=speech.RecognitionConfig.AudioEncoding.LINEAR16,
    sample_rate_hertz=16000,
    language_code="en-US",
)

# Detects speech in the audio file
response = client.recognize(config=config, audio=audio)

for result in response.results:
    print("Transcript: {}".format(result.alternatives[0].transcript))
If you want to use the older code, you will have to downgrade the library to version 1.3.2 (the last 1.x version) by running the pip command
pip install google-cloud-speech==1.3.2
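For contrast, the same transcription in the pre-2.0 style looked roughly like this (a sketch of the 1.x API, where enums and types still exist and recognize takes positional arguments; not an exact copy of any official sample):
from google.cloud import speech_v1 as speech

client = speech.SpeechClient()

audio = speech.types.RecognitionAudio(
    uri="gs://cloud-samples-data/speech/brooklyn_bridge.raw")
config = speech.types.RecognitionConfig(
    encoding=speech.enums.RecognitionConfig.AudioEncoding.LINEAR16,
    sample_rate_hertz=16000,
    language_code="en-US",
)

# In 1.x, config and audio were passed positionally
response = client.recognize(config, audio)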
Below is the error I get while importing the pandas library in a Power BI Python script.
Details: "ADO.NET: Python script error.
C:\USERS\YADAVP\ANACONDA3\lib\site-packages\numpy\__init__.py:140: UserWarning: mkl-service package failed to import, therefore Intel(R) MKL initialization ensuring its correct out-of-the box operation under condition when Gnu OpenMP had already been loaded by Python process is not assured. Please install mkl-service package, see http://github.com/IntelPython/mkl-service
from . import _distributor_init
Traceback (most recent call last):
File "PythonScriptWrapper.PY", line 2, in <module>
import os, pandas, matplotlib
File "C:\USERS\YADAVP\ANACONDA3\lib\site-packages\pandas\__init__.py", line 17, in <module>
"Unable to import required dependencies:\n" + "\n".join(missing_dependencies)
ImportError: Unable to import required dependencies:
numpy:
IMPORTANT: PLEASE READ THIS FOR ADVICE ON HOW TO SOLVE THIS ISSUE!
Importing the numpy c-extensions failed.
- Try uninstalling and reinstalling numpy.
- If you have already done that, then:
1. Check that you expected to use Python3.7 from "C:\USERS\YADAVP\ANACONDA3\python.exe",
and that you have no directories in your PATH or PYTHONPATH that can
interfere with the Python and numpy version "1.18.1" you're trying to use.
2. If (1) looks fine, you can open a new issue at
https://github.com/numpy/numpy/issues. Please include details on:
- how you installed Python
- how you installed numpy
- your operating system
- whether or not you have multiple versions of Python installed
- if you built from source, your compiler versions and ideally a build log
- If you're working with a numpy git repository, try `git clean -xdf`
(removes all files not under version control) and rebuild numpy.
Note: this error has many possible causes, so please don't comment on
an existing issue about this - open a new one instead.
Original error was: DLL load failed: The specified module could not be found.
What is the resolution to sort this kind of error in Power BI?
Forget Anaconda and use WinPython.
I tried Anaconda for days, with all the workarounds available on Stack Overflow and other forums, and they took me nowhere.
Then I tried WinPython, and it worked immediately. Of course, you will need to change the PowerBI options accordingly.
To install WinPython: https://github.com/winpython/winpython
To change the detected Python home directory: https://learn.microsoft.com/en-us/power-bi/connect-data/desktop-python-scripts#enable-python-scripting
If you consider my answer, you won't need to downgrade Python, PBI, or anything else.
I had the same error. Unfortunately, Power BI won't work with the Jupyter Notebook Python.
So you have to install a "normal" Python: https://www.python.org/downloads/
Then configure the Python you want to use in Power BI, and install the Python libraries you need via pip.
Edit: please use Python 3.8, because 3.9 doesn't support NumPy for now.
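For example, from a command prompt (assuming the newly installed interpreter is the one on your PATH and the one Power BI is pointed at):
python -m pip install pandas matplotlib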
I saw the first rocket (Django's default welcome page).
from django.http import HttpResponse

def index(request):
    return HttpResponse("Hello, world.")
'Hello, world.' shows up: it's OK!
'python manage.py migrate' is also OK.
But when I import pandas...
(import only, it is not used at all.)
from django.http import HttpResponse
import pandas as pd

def index(request):
    return HttpResponse("Hello, world.")
The browser keeps loading forever.
How can I fix this issue?
my environment
OS: CentOS Linux release 7.7.1908 (Core)
Pandas: 0.19.2
Apache: 2.4.6 (CentOS)
Django: 2.1
Browser: Google Chrome 76.0.3809.132
# python --version
Python 3.6.0 :: Anaconda 4.3.1 (64-bit)
Is my Anaconda simply too old?
Please let me know if there is any missing information.
Progress after that: I tried updating pandas.
# conda install pandas
The endless browser loading on import cleared; now I get an error instead.
I will re-install the newest Anaconda.
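For anyone following along, the usual update commands are along these lines (standard conda invocations; the exact steps are mine, not from the post itself):
conda update conda
conda update pandas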
(_mysql_exceptions.OperationalError) (2006, "Can't connect to local MySQL server through socket '/tmp/mysql.sock' (2)") (Background on this error at: sqlalche.me/e/e3q8)
Currently, outside of Django, I can execute SQL against MySQL from a .py file.
The situation has improved. It hasn't been resolved yet, but the error changed this morning. I will try again when work is over.
Settled! I think my Anaconda was just old. Come to think of it, I built this VPS about two years ago, and Anaconda was still the version from that time.
I'm spark-submitting a Python file that imports numpy, but I'm getting a "no module named numpy" error.
$ spark-submit --py-files projects/other_requirements.egg projects/jobs/my_numpy_als.py
Traceback (most recent call last):
File "/usr/local/www/my_numpy_als.py", line 13, in <module>
from pyspark.mllib.recommendation import ALS
File "/usr/lib/spark/python/pyspark/mllib/__init__.py", line 24, in <module>
import numpy
ImportError: No module named numpy
I was thinking I would pull in an egg for numpy via --py-files, but I'm having trouble figuring out how to build that egg. But then it occurred to me that pyspark itself uses numpy. It would be silly to pull in my own version of numpy.
Any idea on the appropriate thing to do here?
It looks like Spark is using a version of Python that does not have numpy installed. It could be because you are working inside a virtual environment.
Try this:
# The following specifies a Python version for PySpark. Here we use the
# currently running Python version. This is handy when we are using a
# virtualenv, for example, because otherwise Spark would choose the
# default system Python version.
import os
import sys

# Must be set before the SparkContext/SparkSession is created.
os.environ['PYSPARK_PYTHON'] = sys.executable
I got this to work by installing numpy on all the EMR nodes, by configuring a small bootstrap script that contains the following (among other things).
#!/bin/bash -xe
sudo yum install python-numpy python-scipy -y
Then configure the bootstrap script to be executed when you start your cluster by adding the following option to the aws emr command (the following example gives an argument to the bootstrap script)
--bootstrap-actions Path=s3://some-bucket/keylocation/bootstrap.sh,Name=setup_dependencies,Args=[s3://some-bucket]
This can be used when setting up a cluster automatically from DataPipeline as well.
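For example, a minimal launch command might look like this (the cluster name, release label, and instance settings are placeholders of mine, not from the answer above):
aws emr create-cluster \
    --name "numpy-cluster" \
    --release-label emr-5.30.0 \
    --applications Name=Spark \
    --use-default-roles \
    --instance-type m5.xlarge \
    --instance-count 3 \
    --bootstrap-actions Path=s3://some-bucket/keylocation/bootstrap.sh,Name=setup_dependencies,Args=[s3://some-bucket]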
Sometimes, when you import certain libraries, your namespace is polluted with numpy functions. Functions such as min, max and sum are especially prone to this pollution. Whenever in doubt, locate calls to these functions and replace them with __builtin__.sum etc.; doing so will sometimes be faster than locating the pollution source. A short demonstration follows.
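To see what that pollution looks like (a sketch; __builtin__ is the Python 2 name, builtins on Python 3):
import builtins          # `import __builtin__` on Python 2
from numpy import *      # a star-import like this shadows built-in sum, min, max

data = [1, 2]
print(builtins.sum(data, 3))  # built-in sum with start=3 -> prints 6
# sum(data, 3) would now call numpy's sum, which treats 3 as an axis
# argument and raises an AxisError for 1-D input.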
Make sure your spark-env.sh has PYSPARK_PYTHON pointing to the correct Python executable. Add export PYSPARK_PYTHON=/your_python_exe_path to the conf/spark-env.sh file.