Is there a location where I can configure and store Natural Earth shapefiles that all users can use? My air-gapped system can't reach the internet to download files, so I need to point cartopy at a local store of shapefiles and configure it to look there, rather than trying to download, when it can't find them in the ~/.local/share/cartopy/shapefiles/ directory structure. Even the simplest tests of my install fail for this reason. I've found a few references on how to place shapefiles within that structure, so I could mimic it somewhere else. I suppose it would also be possible to symlink each user's .local/share/cartopy/shapefiles directory to a central location, but that seems like a kludge. Is there a better way that I'm missing?
EDIT: (hope this is the stackoverflow way--I am the original submitter)
OK, I'm back on this. I downloaded the ne_110m_yada_yada shapefiles, unzipped them, and put them in my .local/share/cartopy/shapefiles/natural_earth/{cultural,physical} directories, mirroring what I found on a connected laptop after running this test program:
import cartopy
import matplotlib.pyplot as plt

def main():
    ax = plt.axes(projection=cartopy.crs.PlateCarree())
    ax.add_feature(cartopy.feature.LAND)
    ax.add_feature(cartopy.feature.OCEAN)
    ax.add_feature(cartopy.feature.COASTLINE)
    ax.add_feature(cartopy.feature.BORDERS, linestyle=':')
    ax.add_feature(cartopy.feature.LAKES, alpha=0.5)
    ax.add_feature(cartopy.feature.RIVERS)
    ax.set_extent([-20, 60, -40, 40])
    plt.show()

if __name__ == '__main__':
    main()
That worked as expected. I then hid the files in my .local cache and had my sysadmin hack cartopy.config to point at /data/cartopy/, where I hoped it would find /data/cartopy/shapefiles/natural_earth/{cultural,physical}, where the zipped data ultimately resided. Is it expecting zipped files there, or is that only when it downloads them itself? Cartopy did not find the zip files there and tried to go to the internet. What are the rules for putting data in pre_existing_data_dir? Will cartopy find the zip files if they're placed in /data/cartopy/ and do the right thing, or do I need the shapefiles/natural_earth subdirectories? I'd like to point it only as far as /data/cartopy and have it find any shapefile or raster data I have parked in that hierarchy, whatever the source; Natural Earth is merely the test case. I'm not ruling out that my sysadmin messed up, but he is extremely competent, and I may have misled him.
The links and documentation provided in the initial answer were helpful, but not helpful enough for cartopy-challenged me.
Duplicate of Location of stored offline data for cartopy
In summary, a read-only cache of data can be configured with pre_existing_data_dir in cartopy.config. For a writeable cache (that cartopy can extend) data_dir is the config item. Cartopy will check the pre_existing_data_dir cache before checking the data_dir when determining whether or not it needs to fetch the resource on your behalf.
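For example, a minimal sketch (the paths are placeholders for your own layout; both keys exist in the cartopy.config dictionary):

import cartopy

# Read-only, centrally managed store that cartopy checks first.
cartopy.config['pre_existing_data_dir'] = '/data/cartopy'
# Writeable per-user cache that cartopy may extend with downloads.
cartopy.config['data_dir'] = '/home/me/.local/share/cartopy'

As far as I understand, both locations are expected to contain the same unzipped shapefiles/natural_earth/... layout that cartopy builds in its own cache, not the original zip archives.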
HTH
References:
Take a look at http://scitools.org.uk/cartopy/docs/latest/cartopy.html#cartopy.config for how to configure cartopy.
You may also be interested in similar questions at:
https://github.com/SciTools/cartopy/issues/734
Download data from Natural Earth and OpenStreetMap for Cartopy
I am trying to read the view angles from a Sentinel-2 image (L1C SAFE compact format) in order to run an atmospheric correction algorithm. I can get those values by parsing the file MTD_TL.xml, but I am not able to get them through rasterio.
I have tried to access those data using the xml:SENTINEL2 and xml:VRT metadata domains, but I can only reach the values from the file MTD_MSIL1C.xml (the main metadata file).
The whole point of using rasterio is being able to use GDAL's virtual file system, as the images will be read from S3 buckets. Any alternative for easily reading MTD_TL.xml through the virtual file system would also be valid (and really appreciated).
Thank you!!
Answering my own question:
I could not find a way to get the values I require through those metadata domains but, according to https://gdal.org/user/virtual_file_systems.html, the function VSIFOpenL can be used to open the file. After that, manual parsing will do the trick :)
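For instance, a minimal sketch (the bucket path is hypothetical; any /vsis3/ path to the granule's MTD_TL.xml should work the same way):

from osgeo import gdal
import xml.etree.ElementTree as ET

# Hypothetical S3 location of the tile metadata inside the SAFE product.
path = '/vsis3/my-bucket/S2A_MSIL1C_.../GRANULE/.../MTD_TL.xml'

stat = gdal.VSIStatL(path)       # stat through the virtual file system
fp = gdal.VSIFOpenL(path, 'rb')  # open it like a regular file
try:
    xml_bytes = gdal.VSIFReadL(1, stat.size, fp)
finally:
    gdal.VSIFCloseL(fp)

# From here, ordinary XML parsing recovers the view angles.
root = ET.fromstring(xml_bytes)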
PS: I must read the documentation more slowly.
I can already see three ways, but none is quick (not compared to, say, accessing a raw file on GitHub):
Fork/download (requires registration)
Follow instructions here (i.e. download, open up in jupyter/ipython notebook)
Copy the code blocks manually, one by one (bad for long notebooks)
Is there an easier way? (Ideally, I'm hoping to add raw to the URL somewhere, just like on GitHub.)
In case it's useful to others: put the notebook URL here to extract the raw code.
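If you already have the .ipynb file on disk, a minimal sketch for pulling out just the code cells (a notebook is plain JSON, so only the standard library is needed; assumes nbformat 4):

import json

with open('notebook.ipynb') as f:
    nb = json.load(f)

# Each code cell stores its source as a list of lines.
for cell in nb['cells']:
    if cell['cell_type'] == 'code':
        print(''.join(cell['source']), end='\n\n')

The command-line equivalent is jupyter nbconvert --to script notebook.ipynb.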
I would like to check which files are already in Cartopy's data repository, in order to find their names and use them in code.
I looked for "Cartopy" in the Anaconda3 directory, and also looked for "*.zip" files.
The line where I want to use this information is the following:
ax1.add_feature(cartopy.feature.NaturalEarthFeature('cultural', 'admin_0_boundary_lines_land', '10m'),
                edgecolor='black',
                facecolor='none')
I would like to know names of the features available and location on disk to add more.
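For what it's worth, one way to discover where a given Natural Earth shapefile lives on disk is to ask cartopy's shapereader to resolve it (a sketch; note this will download the file first if it isn't already cached):

import cartopy.io.shapereader as shpreader

# Resolve the on-disk path of one Natural Earth shapefile; its sibling
# files are the other features cartopy has already downloaded.
path = shpreader.natural_earth(resolution='10m',
                               category='cultural',
                               name='admin_0_boundary_lines_land')
print(path)

The valid category/name strings follow the dataset names on the Natural Earth download pages.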
I have been reading and working on SO questions related to the Street View House Numbers (SVHN) datasets. The files are available at 2 different locations:
Stanford:
The Street View House Numbers (SVHN) Dataset
kaggle:
Street View House Numbers (SVHN) | Kaggle
My question is related to the format of the digitStruct.mat files for each image set (train, test, and extras). These define the name, label, and bounding box dimensions for each image. As I understand, the mat file is written as a Matlab structure in HDF5 format (that can be read with h5py).
I have been able to access and read the digitStruct.mat files from kaggle with h5py. I cannot open the same files from the Stanford site with h5py (or with HDFView). Some SO posts I've read indicate the Stanford files are an older Matlab format and should be read with scipy.io.loadmat.
Are the files at Stanford and kaggle the same?
If not, what are the differences?
Should I be able to open the Stanford digitStruct.mat files with h5py?
If so, what method should I use to download and extract the Stanford tar.gz files? (FYI, I'm on Win-7 and have been using HTTP download and WinZip to extract.)
I am adding additional info to document different behavior observed with different .mat files. It may help with diagnosis.
I can open and operate on .mat files from kaggle with this call:
h5f = h5py.File('digitStruct.mat', 'r')
For files from Stanford, I get different errors depending on the file and function used to open.
The command below executes without an error message, which leads me to believe it is not a Matlab v7.3 file (and hence not one h5py can open).
mat = scipy.io.loadmat('./Stanford/test_32x32.mat')
Neither of these calls works (brief error messages provided):
mat = scipy.io.loadmat('./test/digitStruct.mat')
Traceback...
NotImplementedError: Please use HDF reader for matlab v7.3 files
h5f = h5py.File('./test/digitStruct.mat', 'r')
Traceback...
OSError: Unable to open file (file signature not found)
In addition, I cannot open test/digitStruct.mat with HDFView. My conclusion for the Stanford digitStruct.mat files: they might be Matlab v7.3 files, but they were corrupted when I downloaded them. However, I'm not sure what I did wrong (since I can download and read the kaggle files without problems).
With some Linux detective work, I figured out the problem.
As I suspected, the digitStruct.mat files extracted from the *.tar.gz files on the Stanford site are HDF5 (Matlab v7.3) files, and mine were corrupted during the Windows download/extraction step.
To confirm, I downloaded the 3 tar.gz files with a browser on a Linux system, then used the tar command to extract them, and successfully opened with h5py on Linux. I then transferred them to my Windows system, and each worked as expected with h5py.
This is a little surprising, as I have used WinZip to extract tarball files in the past. Apparently there's something special about these that caused the corruption.
Hopefully this saves someone the same headache in the future.
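If you want to rule out the archive tool entirely, one alternative is to download in binary mode and extract with Python's standard library (a sketch; substitute whichever tarball you fetched):

import tarfile

# Extract the SVHN tarball without any tool-specific munging of its contents.
with tarfile.open('test.tar.gz', 'r:gz') as tar:
    tar.extractall(path='.')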
Note: the 3 xxxx_32x32.mat files are an older Matlab format and must be read with scipy.io.loadmat().
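A short sketch of reading both kinds of files (the digitStruct layout with name and bbox entries matches what other SVHN posts describe; treat the field names as assumptions):

import h5py
import scipy.io

# digitStruct.mat is MATLAB v7.3 (HDF5): open it with h5py.
with h5py.File('digitStruct.mat', 'r') as f:
    digit_struct = f['digitStruct']
    names = digit_struct['name']   # object references to per-image filenames
    bboxes = digit_struct['bbox']  # object references to per-image bounding boxes

# The older-format xxxx_32x32.mat files need scipy.io.loadmat instead.
mat = scipy.io.loadmat('train_32x32.mat')
X, y = mat['X'], mat['y']          # images (32x32x3xN) and labels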
I am using pandas to do some data analysis. Others in my company are wanting to process data in a similar fashion, but won't want to use a programming language to do it. After significant googling, I found Orange, which has the perfect interface for what I'm wanting people to do. However, the widgets don't do the types of tasks we're looking at. So, I decided to see if I could write my own widgets for Orange to do the tasks.
I'm trying to use Orange3; this seems like the best bet given that I'm using WinPython. I must say that the widget-creation documentation (for Orange2) and the code of the Orange3 widgets are rather impressive: very nicely written, and easy to use to implement what I want to do.
After writing a couple of widgets, how do I get them into Orange3? The widget-creation tutorial is for Orange2 (in Python 2.7), and I haven't gotten it to work with Orange3.
My project is at the moment rather small:
dir/
    orangepandas/
        __init__.py
        owPandasFile.py
        pandasQtTable.py
    setup.py
setup.py currently contains the following:
from setuptools import setup

setup(name='orangepandas',
      version='0.1',
      packages=['orangepandas'],
      entry_points={'Orange.widgets': 'orangepandas = orangepandas'})
When I run python setup.py install on this and then try opening Orange3 canvas, I don't see my shiny new widget in its new group.
After tracing through how Orange3 imports external libraries, it seems that Orange relies on the actual widget file existing on disk, rather than inside an egg (zipped) file. Adding
zip_safe=False
to the setup options allowed Orange3 to import the widgets correctly. Orange3 uses os.path.exists in cache_can_ignore in canvas/registry/discovery.py to check whether the path exists at all; if it doesn't, Orange3 doesn't try to import the widget. Using zip_safe=False ensures the add-on stays uncompressed, so the individual files remain accessible.
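For reference, the working setup.py is then the same file as above with the one extra option:

from setuptools import setup

setup(name='orangepandas',
      version='0.1',
      packages=['orangepandas'],
      entry_points={'Orange.widgets': 'orangepandas = orangepandas'},
      # Keep the package unzipped so Orange3 can find the widget files on disk.
      zip_safe=False)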
(For the next person who tries to do what I was doing.)