I am doing some time series forecasting, and as part of it I am trying to import auto_arima from pyramid, but it throws a ModuleNotFoundError: "No module named 'pyramid.arima'"
from pyramid.arima import auto_arima
I also tried importing auto_arima from pmdarima :
from pmdarima.arima import auto_arima
but this throws the following error:
"type object 'pmdarima.arima._arima.array' has no attribute '__reduce_cython__'"
What am I doing wrong?
I'm using the pmdarima package without any issues, and your error is most probably related to your numpy version. I would recommend upgrading it (in case you use pip):
pip install --upgrade numpy
You can also try importing the numpy package before importing auto_arima (some people have reported strange behavior otherwise).
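A minimal sketch of that import-order workaround, assuming pmdarima is installed:
import numpy as np  # imported first on purpose; some setups need this
from pmdarima.arima import auto_arima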
You can follow the discussion in the GitHub issues - https://github.com/tgsmith61591/pmdarima/issues/91 (there are several similar reports). You're definitely not the first one with this issue.
If that doesn't help, please paste your pmdarima and numpy versions.
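A quick way to check both (these version attributes exist in both packages):
import pmdarima, numpy
print(pmdarima.__version__, numpy.__version__)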
Related
I'm trying to plot a subset of a field from a grib file on Google Colab. The issue is that, because Colab uses an older version of Python, I can't get enough libraries to work together to 1.) read a field from the grib file, 2.) extract a subset of that field by lat/lon, and then 3.) plot it with matplotlib/cartopy.
I've been able to do each of the above steps on my own PC, and there are numerous answers on this forum that work outside Colab, so the issue is specifically making this work in the Colab environment, which uses Python 3.7.
For simplicity, here are some assumptions that anybody who wants to help can make.
1.) Use this file, since it's the one I have been trying to use:
https://noaa-hrrr-bdp-pds.s3.amazonaws.com/hrrr.20221113/conus/hrrr.t18z.wrfnatf00.grib2
2.) You could use any field, but I've been extracting this one (output from pygrib):
14:Temperature:K (instant):lambert:hybrid:level 1:fcst time 0 hrs:from 202211131800
3.) You can get this data in zarr format from AWS, but the grib files are uploaded to the AWS bucket sooner, so I need to use the grib format.
Here are some notes on what I've tried:
Downloading the data isn't a problem; extracting the data (by lat/lon) is the main issue. I've tried using condacolab or pip to install pygrib, pupygrib, pinio, or cfgrib, which I can then use to open the data above.
I could never get pupygrib or pinio to even install correctly. I was able to get cfgrib to work with conda, but then xarray fails when trying to extract fields due to a library conflict. Pygrib worked best: I was able to extract fields from the grib file. However, the call grb.data(lat1=30, lat2=40, lon1=-100, lon2=-90) fails. It dumps the data into 1d arrays instead of the 2d arrays it is supposed to return per the documentation found here: https://jswhit.github.io/pygrib/api.html#example-usage
Here is some code I used for the pygrib setup, in case that is useful:
!pip install pyproj
!pip install pygrib
# Uninstall existing shapely
!pip uninstall --yes shapely
!apt-get install -qq libgdal-dev libgeos-dev
!pip install shapely --no-binary shapely
!pip install cartopy==0.19.0.post1
!pip install metpy
!pip install wget
!pip install s3fs
import time
from matplotlib import pyplot as plt
import numpy as np
import scipy
import pygrib
import fsspec
import xarray as xr
import metpy.calc as mpcalc
from metpy.interpolate import cross_section
from metpy.units import units
from metpy.plots import USCOUNTIES
import cartopy.crs as ccrs
import cartopy.feature as cfeature
!wget https://noaa-hrrr-bdp-pds.s3.amazonaws.com/hrrr.20221113/conus/hrrr.t18z.wrfnatf00.grib2
grbs = pygrib.open('/content/hrrr.t18z.wrfnatf00.grib2')
grb2 = grbs.message(1)
data, lats, lons = grb2.data(lat1=30,lat2=40,lon1=-100,lon2=-90)
data.shape
This outputs 1d arrays for data, lats, and lons. That is as far as I can get, because existing options like np.meshgrid don't work on big datasets (I tried it).
The other option is to get data this way:
grb_t = grbs.select(name='Temperature')[0]
This is plottable, but I don't know of a way to extract a subset of the data from here using lat/lons.
If you can help, feel free to ask; I can add more details, but since I've tried roughly ten different approaches there's probably no sense in listing every failure. Really, I am open to any way to accomplish this task. Thank you.
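One possible workaround for the subsetting step, sketched under the assumption that the 1d result comes from the HRRR Lambert grid not being rectangular in lat/lon: pull the full 2d field with grb.values and grb.latlons() (both standard pygrib calls) and mask it with numpy instead of calling grb.data():
import numpy as np
import pygrib

grbs = pygrib.open('/content/hrrr.t18z.wrfnatf00.grib2')
grb_t = grbs.select(name='Temperature')[0]

data = grb_t.values           # 2d array on the native Lambert grid
lats, lons = grb_t.latlons()  # 2d lat/lon arrays, same shape as data

# HRRR longitudes may come back in 0..360; shift to -180..180 first.
lons = np.where(lons > 180, lons - 360, lons)

# A lat/lon box is not rectangular on a Lambert grid, so keep the 2d
# shape and mask the out-of-box points instead of dropping them.
box = (lats >= 30) & (lats <= 40) & (lons >= -100) & (lons <= -90)
data_sub = np.ma.masked_where(~box, data)
Since data_sub stays 2d, it can go straight into pcolormesh with a cartopy PlateCarree transform; the masked points are simply left blank.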
I am currently learning Python for data science, and the problem I am running into is this:
import pandas as pd
help(pd.read())
When I try to run this code it says: AttributeError: module 'pandas' has no attribute 'read'
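A likely explanation: pandas has no generic pd.read; its readers are format-specific, and help() takes the function object rather than the result of a call. For example:
import pandas as pd

# Use a concrete reader such as read_csv, and pass the function itself:
help(pd.read_csv)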
I saw there is a way to read XML files directly with pandas, so I followed along and used this package. However, I keep getting errors.
https://pypi.org/project/pandas-read-xml/
import pandas as pd
import pandas_read_xml as pdx
from pandas.io.json import json_normalize
The error was generated by the last line, and it reads:
ImportError: cannot import name 'json_normalize'
I am using a Python 3 kernel; can anyone tell me what is wrong?
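A likely fix, assuming a recent pandas: json_normalize was moved out of pandas.io.json and has been a top-level function since pandas 1.0, so call it as pd.json_normalize instead:
import pandas as pd

# Flatten nested records with the top-level function:
df = pd.json_normalize([{'a': 1, 'b': {'c': 2}}])
print(df)  # columns: a, b.c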
When I use mxnet-cu101mkl = {version = "==1.5.0", sys_platform = "== 'linux'"}, I get an error that I can no longer import ndarray or nd:
ImportError: cannot import name 'ndarray'
I have no problem with this when using the same code with mxnet-cu101 (no mkl).
Is this just a bug or is this subpackage no longer supported?
I can confirm that mxnet-cu100mkl (version 1.5.0) works fine. There's a slight CUDA version difference from yours, but the package structure shouldn't change. I think you might be importing a different mxnet here, for example a local folder called mxnet. Check the following:
import mxnet as mx
print(mx.__file__)
It should show the path to mxnet within site-packages for your Python environment, e.g.
/home/ec2-user/anaconda3/envs/mxnet_p36/lib/python3.6/site-packages/mxnet/__init__.py
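If the path points to a local folder named mxnet rather than site-packages, renaming that folder (or running Python from a different directory) should fix the import. You can also confirm which distribution is actually installed:
pip show mxnet-cu101mkl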
I am trying to use Google Colab for my project, for which I have to upload a few Python files because I need those class files. But while executing the main function, it constantly throws the error 'module' object has no attribute. Is there some memory issue with Colab, or what? Help would be much appreciated.
import numpy as np
import time
import tensorflow as tf
import NN
import Option
import Log
import getData
import Quantize
AttributeError: 'module' object has no attribute 'NN'
I uploaded all the files using the following code:
from google.colab import files
src = list(files.upload().values())[0]
open('Option.py','wb').write(src)
import Option
But it always gives me an error on one or another of the files I am importing.
The updated version (as of a few weeks ago) saves the files without you having to call open(fname, 'wb').write(src).
So you only have to upload your 5 files: NN.py, Option.py, Log.py, getData.py, and Quantize.py (plus any other dependencies and data), then try importing each one, e.g. import NN, to see if there's any error.
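A minimal sketch of that flow, assuming the current google.colab API where files.upload() writes the chosen files into the working directory:
from google.colab import files

uploaded = files.upload()      # select NN.py, Option.py, Log.py, getData.py, Quantize.py
print(list(uploaded.keys()))   # confirm the filenames that were saved

import NN  # should now import cleanly from the working directory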