I am pre-processing a few GeoTIFF files and want to obtain the minimum value of my raster image. My OS is Linux and my Rasterio version is 1.3.3. When I run my script in a conda environment, the minimum value is reported as zero, which is incorrect.
I am using the following code:
import rasterio

with rasterio.open(file) as tif_input_obj:
    # Read all bands into a NumPy array
    tif_data = tif_input_obj.read()

print(tif_data.min())
I tried running the same lines of code interactively, by typing python in my conda environment, and this time the minimum value came out correctly. I have also tried GDAL and ran into the same problem.
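The GDAL check was roughly the following (using the standard osgeo.gdal bindings; the exact calls may have differed slightly):

from osgeo import gdal

ds = gdal.Open(file)             # same GeoTIFF as above
band = ds.GetRasterBand(1)
print(band.ReadAsArray().min())  # shows the same incorrect 0 when run from the script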
Has anyone faced this issue before?
Thank you for your help.
I use IntelliJ premium. I updated the whole app and was met with a very annoying new output style for its Jupyter notebooks. I then reinstalled an older version, but the Jupyter output is still in the new format, so I suspect it comes from a Jupyter package update rather than the IDE itself.
How can I get the old-style table format back?
The new style shows only 10 rows, and for every run you have to change that 10 to a higher number, which becomes annoying after a couple of minutes. It has also gotten really slow.
I have a netCDF3 dataset that I need to get into a data frame (preferably pandas) for further manipulation, but I am unable to do so. Since the data is multi-hierarchical, I first read it with xarray and then convert it to pandas using the to_dataframe method. This has worked well on other data, but in this case it just kills the kernel. I have tried converting it to a netCDF4 file using ncks, but I still cannot open it. I have also tried accessing a single row or column of the xarray data structure just to view it, and that similarly kills the kernel. I produced this data myself, so it is probably not corrupted. Does anyone have any experience with this? The file size is about 890 MB, if that's relevant.
I am running Python 2.7 on a Mac.
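For reference, the workflow described above is essentially the following (a minimal sketch; the file name is a placeholder):

import xarray as xr

# Open the netCDF file; xarray picks an appropriate backend automatically.
ds = xr.open_dataset("model_output.nc")  # placeholder file name

# This is the step that kills the kernel on the 890 MB file.
df = ds.to_dataframe()
print(df.head())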
I have downloaded several HDF files from the MODIS database.
According to the documentation, the layers have to be multiplied by 0.1 to obtain the real values.
I get an error when I put the name of the HDF layer directly into the Raster Calculator; it does work, however, when I first export it as a new raster. But even after multiplying by 0.1, I do not get a continuous-scale image, only black and white areas. I excluded the seven highest values as indicated in the documentation, but that made no difference.
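To make explicit what I am trying to reproduce, the scaling step amounts to something like this (a rough Python sketch with rasterio; the file names and the fill-value threshold are placeholders based on my reading of the documentation):

import numpy as np
import rasterio

with rasterio.open("MOD16A2_ET_layer.tif") as src:   # the exported HDF layer, placeholder name
    raw = src.read(1).astype("float32")
    profile = src.profile

# Mask the documented fill values before applying the 0.1 scale factor;
# 32760 is a placeholder threshold, check the product documentation.
scaled = np.where(raw > 32760, np.nan, raw * 0.1).astype("float32")

profile.update(dtype="float32", nodata=np.nan)
with rasterio.open("MOD16A2_ET_scaled.tif", "w", **profile) as dst:
    dst.write(scaled, 1)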
Another way of getting the MODIS files is to use the respective toolbox. Data imported with this tool does show up correctly, but I cannot import most of it even though it is available under the link indicated above:
Failed to execute (CreateCustomGeoTransformation)
Failed to execute (ImportEvapotranspiration)
Has anyone experienced something similar?
I'm using IPython's Qt console with the default printing settings.
It works well for polynomials, but does not work for Matrix objects:
from sympy import init_printing, Matrix

init_printing()

a = Matrix([1, 2])
a   # displaying the matrix raises the error below
The error is:
ValueError:
\left[\begin{smallmatrix}1\\2\end{smallmatrix}\right]
^
Expected "\right" (at char 6), (line:1, col:7)
I have tried http://www.codecogs.com/latex/eqneditor.php and the LaTeX code seems to be correct.
I have tried the dev version of SymPy, and it still doesn't work. I have not tried the dev version of matplotlib yet, because it is only available as source.
TL;DR: It is a known issue that has yet to be solved. You need to use a proper LaTeX installation.
Your problem might be related to this. The problem is due to matplotlib's very limited understanding of LaTeX: in this case the \begin{...} construct cannot be interpreted by matplotlib's mathtext, although it is valid LaTeX.
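As a stop-gap in the Qt console, one option (a sketch based on the init_printing options I'm aware of) is to fall back to unicode pretty printing, so matplotlib's mathtext is never asked to parse the matrix:

from sympy import init_printing, Matrix

# Skip LaTeX rendering entirely and use unicode pretty printing instead,
# which does not go through matplotlib's mathtext parser.
init_printing(use_latex=False, use_unicode=True)

a = Matrix([1, 2])
a   # now renders as a text-drawn column vector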
I am trying to load a .csv file using Pandas' read_csv method. The file has 29872046 rows and its total size is 2.2 GB.
I notice that most of the loaded rows are missing their values in a large number of columns. When I browse the CSV file from the shell, those values are there...
Are there any limits on the files that can be loaded? If not, how could this be debugged?
Thanks
#d1337,
I wonder if you have memory issues. There is a hint of this here.
Possibly this is relevant or this.
If I were attempting to debug it, I would do the simple thing: cut the file in half and see what happens. If it's OK, go up 50%; if not, go down 50%, until you can identify the point where it starts happening. You might even want to start with 20 lines just to make sure it is size-related.
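Something along these lines (a sketch; the file name is a placeholder) would let you bisect without physically splitting the file, since read_csv takes an nrows argument:

import pandas as pd

FILENAME = "data.csv"   # placeholder for your file

# Load progressively larger slices and count how many values come back missing.
for nrows in (20, 1000000, 5000000, 15000000, 29872046):
    df = pd.read_csv(FILENAME, nrows=nrows)
    print(nrows, df.isnull().sum().sum(), "missing values")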
I'd also add OS and memory information, plus the version of Pandas you're using, to your post in case it's relevant (I'm running pandas 0.11.0, Python 3.2, and Linux Mint x64 with 16 GB of RAM, say, so I'd expect no issues). Also, you might post a link to your data so that someone else can test it.
Hope that helps.