Import does not work when run from a .bat file but works in Spyder - numpy

I am still not the most sophisticated Python user, but I cannot overcome this probably simple problem. I have a script that works perfectly in the Spyder interface. I would like to make it a recurring task by creating a .bat file. The .bat file, which in turn opens a cmd window, fails to import pandas_datareader, and the script gets stuck and aborts.
import pandas_datareader.data as web
The line above produces the lengthy error below (a diagnostic sketch follows the error text).
File "C:\Users\myself\anaconda3\lib\site-packages\pandas_datareader\__init__.py", line 2, in <module>
from .data import ( File "C:\Users\myself\anaconda3\lib\site-packages\pandas_datareader\data.py", line 9, in <module>
from pandas.util._decorators import deprecate_kwarg File "C:\Users\myself\anaconda3\lib\site-packages\pandas\__init__.py", line 17, in <module>
"Unable to import required dependencies:\n" + "\n".join(missing_dependencies) ImportError: Unable to import required dependencies: numpy:
IMPORTANT: PLEASE READ THIS FOR ADVICE ON HOW TO SOLVE THIS ISSUE!
Importing the numpy c-extensions failed.
- Try uninstalling and reinstalling numpy.
- If you have already done that, then:
1. Check that you expected to use Python3.7 from "C:\Users\myself\anaconda3\python.exe",
and that you have no directories in your PATH or PYTHONPATH that can
interfere with the Python and numpy version "1.17.0" you're trying to use.
2. If (1) looks fine, you can open a new issue at
https://github.com/numpy/numpy/issues. Please include details on:
- how you installed Python
- how you installed numpy
- your operating system
- whether or not you have multiple versions of Python installed
- if you built from source, your compiler versions and ideally a build log
- If you're working with a numpy git repository, try `git clean -xdf`
(removes all files not under version control) and rebuild numpy.
Note: this error has many possible causes, so please don't comment on
an existing issue about this - open a new one instead.
Original error was: DLL load failed: The specified module could not be found.
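The "DLL load failed" line usually means the cmd session started by the .bat file is not running inside the Anaconda environment, so NumPy's compiled extensions cannot find their DLLs. A minimal diagnostic sketch (the file name diag.py is just an example) that can be called from the same .bat file to confirm which interpreter and PATH are actually in use:

# diag.py - hypothetical helper: run it from the .bat file the same way the real script is run.
import os
import sys

print("Interpreter :", sys.executable)   # should point at ...\anaconda3\python.exe
print("Version     :", sys.version)
print("PATH entries:")
for entry in os.environ.get("PATH", "").split(os.pathsep):
    print("   ", entry)                  # Anaconda's Library\bin should appear here

try:
    import numpy
    print("numpy", numpy.__version__, "imported OK")
except ImportError as exc:
    print("numpy import failed:", exc)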

Related

Run Python, Anaconda, Pandas, Numpy offline on a Server getting dependency error

After installing Anaconda on a virtual machine, I run a script that works on my local machine but not on the virtual machine.
I'm getting the following error message:
C:\Users\...\python>"C:\ProgramData\Anaconda3\python.exe" "C:\Users\...\reporting.py"
C:\ProgramData\Anaconda3\lib\site-packages\numpy\__init__.py:140: UserWarning: mkl-service package failed to import, therefore Intel(R) MKL initialization ensuring its correct out-of-the box operation under condition when Gnu OpenMP had already been loaded by Python process is not assured. Please install mkl-service package, see http://github.com/IntelPython/mkl-service
from . import _distributor_init
Traceback (most recent call last):
File "C:\Users\...\reporting.py", line 1, in <module>
import pandas as pd
File "C:\ProgramData\Anaconda3\lib\site-packages\pandas\__init__.py", line 16, in <module>
raise ImportError(
ImportError: Unable to import required dependencies:
numpy:
IMPORTANT: PLEASE READ THIS FOR ADVICE ON HOW TO SOLVE THIS ISSUE!
Importing the numpy C-extensions failed. This error can happen for
many reasons, often due to issues with your setup or how NumPy was
installed.
We have compiled some common reasons and troubleshooting tips at:
https://numpy.org/devdocs/user/troubleshooting-importerror.html
Please note and check the following:
* The Python version is: Python3.8 from "C:\ProgramData\Anaconda3\python.exe"
* The NumPy version is: "1.18.5"
and make sure that they are the versions you expect.
Please carefully study the documentation linked above for further help.
Original error was: DLL load failed while importing _multiarray_umath: The specified module could not be found.
The VM has no Internet. Is there any way to install all the required libs and frameworks?
You can package your script and the libraries it uses with something like PyInstaller. If you use a virtualenv, keep in mind that every package installed in it when PyInstaller runs gets bundled, which may lead to a huge .exe.
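As an illustration only, PyInstaller can also be driven from a short Python script rather than the command line; this is a sketch built around the reporting.py name from the traceback above, with flags chosen as an example:

# Sketch: build reporting.py into a single self-contained executable on a
# machine with internet access, then copy the result from dist\ to the VM.
import PyInstaller.__main__

PyInstaller.__main__.run([
    "reporting.py",   # script name taken from the traceback above
    "--onefile",      # bundle everything into one .exe
    "--clean",        # clear PyInstaller's temporary build cache first
])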

Error in Power BI while importing the pandas library in a Python script

Below is the error raised while importing the pandas library in a Python script in Power BI.
Details: "ADO.NET: Python script error.
C:\USERS\YADAVP\ANACONDA3\lib\site-packages\numpy\__init__.py:140: UserWarning: mkl-service package failed to import, therefore Intel(R) MKL initialization ensuring its correct out-of-the box operation under condition when Gnu OpenMP had already been loaded by Python process is not assured. Please install mkl-service package, see http://github.com/IntelPython/mkl-service
from . import _distributor_init
Traceback (most recent call last):
File "PythonScriptWrapper.PY", line 2, in <module>
import os, pandas, matplotlib
File "C:\USERS\YADAVP\ANACONDA3\lib\site-packages\pandas\__init__.py", line 17, in <module>
"Unable to import required dependencies:\n" + "\n".join(missing_dependencies)
ImportError: Unable to import required dependencies:
numpy:
IMPORTANT: PLEASE READ THIS FOR ADVICE ON HOW TO SOLVE THIS ISSUE!
Importing the numpy c-extensions failed.
- Try uninstalling and reinstalling numpy.
- If you have already done that, then:
1. Check that you expected to use Python3.7 from "C:\USERS\YADAVP\ANACONDA3\python.exe",
and that you have no directories in your PATH or PYTHONPATH that can
interfere with the Python and numpy version "1.18.1" you're trying to use.
2. If (1) looks fine, you can open a new issue at
https://github.com/numpy/numpy/issues. Please include details on:
- how you installed Python
- how you installed numpy
- your operating system
- whether or not you have multiple versions of Python installed
- if you built from source, your compiler versions and ideally a build log
- If you're working with a numpy git repository, try `git clean -xdf`
(removes all files not under version control) and rebuild numpy.
Note: this error has many possible causes, so please don't comment on
an existing issue about this - open a new one instead.
Original error was: DLL load failed: The specified module could not be found.
How can this kind of error be resolved in Power BI?
Forget Anaconda and use WinPython.
I tried Anaconda for days with all the workarounds available on Stack Overflow and other forums, and they took me nowhere.
Then I tried WinPython, and it worked immediately. Of course, you will need to change the Power BI options accordingly.
To install WinPython: https://github.com/winpython/winpython
To change the detected Python home directory: https://learn.microsoft.com/en-us/power-bi/connect-data/desktop-python-scripts#enable-python-scripting
If you follow this approach, you won't need to downgrade Python, Power BI, or anything else.
I had the same error. Unfortunately, Power BI won't work with the Jupyter Notebook Python.
So you have to install a "normal" Python: https://www.python.org/downloads/
Then configure the Python you want to use in Power BI and install the libraries you need via pip.
Edit: Please use Python 3.8, because 3.9 doesn't support NumPy for now.
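Whichever distribution Power BI ends up pointing at, a short script run from Power BI's Python script editor can confirm which interpreter and library versions are actually in use. This is only a sketch (the DataFrame name env_check is arbitrary); it relies on Power BI surfacing pandas DataFrames defined in the script as tables:

import sys
import numpy
import pandas

# Power BI lists DataFrames created in the script; inspect this one to see
# which Python installation and package versions the script host loaded.
env_check = pandas.DataFrame({
    "interpreter":    [sys.executable],
    "numpy_version":  [numpy.__version__],
    "pandas_version": [pandas.__version__],
})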

IntelliJ IDEA issue: xarray & pyparsing exception on import

This has something to do with the IntelliJ IDEA 2017.1.1 IDE. I do not get the following issue when executing my code via the command line.
===========================================================================
Python version: 3.6.1
xarray version: 0.9.6
pandas version: 0.20.3
numpy version: 1.12.1
I would like to use xarray for the first time.
I imported the module (no problem there) and then, without even using it, ran my code. For example:
import xarray as xr

def something():
    print("doing something...")

something()
This immediately throws an exception when I run it:
Exception ignored in: <generator object <genexpr> at 0x05A287B0>
Traceback (most recent call last):
  File "C:\Program Files (x86)\Python36-32\lib\site-packages\pyparsing.py", line 160, in <module>
    _generatorType = type((y for y in range(1)))
SystemError: error return without exception set
If I delete the import xarray as xr and rerun the code, I get no exception.
From the exception message, it looks like something is calling into pyparsing.py.
Any ideas?
pyparsing is probably installed as a dependency from some other package. I have run the pyparsing unit tests on both Python 3.6.1 and 3.6.2 (as well as most other popular Python versions back to 2.6) without any error.
I suspect that something in your environment is defining range to be something other than the normal builtin range method, and this is then causing the pyparsing code to fail.
I will fix this in pyparsing, to replace range(1) with just an empty list, which should give the same results for pyparsing, but without the susceptibility to being overwritten by a monkeypatch to range.
In the meantime, try explicitly importing pyparsing before importing xarray, or anything else for that matter. A simple import pyparsing should do.
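Applied to the snippet from the question, the suggested workaround is just a matter of import order (a sketch):

import pyparsing  # imported first, only for its side effect, as suggested above
import xarray as xr

def something():
    print("doing something...")

something()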

No module named numpy when spark-submitting

I’m spark-submitting a python file that imports numpy but I’m getting a no module named numpy error.
$ spark-submit --py-files projects/other_requirements.egg projects/jobs/my_numpy_als.py
Traceback (most recent call last):
File "/usr/local/www/my_numpy_als.py", line 13, in <module>
from pyspark.mllib.recommendation import ALS
File "/usr/lib/spark/python/pyspark/mllib/__init__.py", line 24, in <module>
import numpy
ImportError: No module named numpy
I was thinking I would pull in an egg for numpy via --py-files, but I'm having trouble figuring out how to build that egg. But then it occurred to me that pyspark itself uses numpy, so it would be silly to pull in my own version.
Any idea on the appropriate thing to do here?
It looks like Spark is using a version of Python that does not have numpy installed. It could be because you are working inside a virtual environment.
Try this:
import os
import sys

# The following is for specifying a Python version for PySpark. Here we
# use the currently calling Python version.
# This is handy for when we are using a virtualenv, for example, because
# otherwise Spark would choose the default system Python version.
os.environ['PYSPARK_PYTHON'] = sys.executable
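For context, the assignment only has a chance to take effect if it happens before the SparkContext is created; a minimal sketch of the ordering (assuming Spark 2.x or later, where SparkSession is available):

import os
import sys

# Point PySpark's workers at the same interpreter as the driver
# *before* any SparkContext/SparkSession exists.
os.environ["PYSPARK_PYTHON"] = sys.executable

from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("numpy-check").getOrCreate()

# Quick check that the executors can import numpy.
versions = spark.sparkContext.parallelize([0]).map(
    lambda _: __import__("numpy").__version__
).collect()
print(versions)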
I got this to work by installing numpy on all the EMR nodes via a small bootstrap script that contains the following (among other things).
#!/bin/bash -xe
sudo yum install python-numpy python-scipy -y
Then configure the bootstrap script to be executed when you start your cluster by adding the following option to the aws emr command (this example passes an argument to the bootstrap script):
--bootstrap-actions Path=s3://some-bucket/keylocation/bootstrap.sh,Name=setup_dependencies,Args=[s3://some-bucket]
This can be used when setting up a cluster automatically from DataPipeline as well.
Sometimes, when you import certain libraries, your namespace is polluted with numpy functions. Functions such as min, max and sum are especially prone to this pollution. Whenever in doubt, locate calls to these functions and replace these calls with __builtin__.sum etc. Doing so will sometimes be faster than locating the pollution source.
Make sure your spark-env.sh has PYSPARK_PATH pointing to the correct Python release. Add export PYSPARK_PATH=/your_python_exe_path to /conf/spark-env.sh file.

OMP warning when numpy 1.8.0 is packaged with py2exe

import numpy
When I package the one-line script above as a single-executable windowed application using py2exe, I get the following warnings on launch.
OMP: Warning #178: Function GetModuleHandleEx failed:
OMP: System error #126: The specified module could not be found.
This warning happens only when I build a single executable (i.e., only when bundle_files=1). Here's my setup.py:
from distutils.core import setup
import py2exe
setup(
    options={'py2exe': {'bundle_files': 1}},
    windows=['testnumpy.py'],
    zipfile=None,
)
This problem started with numpy 1.8.0. When I revert back to 1.6.2, the warnings don't show up.
Usually a single executable packaged by py2exe will catch warnings and tracebacks and save them to a log file. But somehow these warnings are not captured, and the app opens a console window to show them. I want to suppress this additional console window.
How can I fix this warning problem?
What I tried (nothing worked):
- Redirecting sys.stderr.
- Searching the numpy source on GitHub for OpenMP, assuming that is what OMP stands for, as mentioned here. Nothing useful came out.
- Copying libiomp5md.dll to the same folder as setup.py.
- filterwarnings.
- sys.excepthook.
As I wrote in the comment, installing numpy 1.8.1rc1 from SourceForge did fix the issue, although I don't really know what the difference is...
I had this issue with numpy 1.13.1+mkl and scipy 1.19.1. Reverting to numpy 1.8.1rc1 is not an acceptable solution.
I tracked this issue to the scipy.integrate subpackage. The warning message pops up when this package is imported. It seems that perhaps libraries that use MKL don't like being invoked from library.zip, which is where py2exe places packages when using bundle option 2.
The solution is to exclude scipy and numpy in the py2exe setup script, copy their entire package folders into the distribution directory, and add that directory to the system path at the top of the main Python script.
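A rough sketch of that setup, under the assumption that the copied package folders end up next to the generated .exe (the excludes list and the path handling are illustrative, not a tested recipe):

# setup.py sketch: keep numpy and scipy out of library.zip so their compiled
# extensions and MKL DLLs are loaded from ordinary folders instead.
from distutils.core import setup
import py2exe

setup(
    options={'py2exe': {
        'bundle_files': 1,
        'excludes': ['numpy', 'scipy'],  # copy these package folders into dist\ by hand
    }},
    windows=['testnumpy.py'],
    zipfile=None,
)

And at the very top of the main script (testnumpy.py here), before numpy or scipy are imported:

import os, sys
sys.path.insert(0, os.path.dirname(sys.executable))  # the dist\ folder holding the copied packages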