I created a Sentinel-2 image collection (s7) covering three months of data and applied ee.Reducer.percentile:
S2_PU_perc = s7.reduce(ee.Reducer.percentile(ee.List([0, 25, 50, 75, 100])))
I now need to extract from this image (S2_PU_perc) 20 random pixel values per polygon from my shapefile (which contains 200 polygons) and get the output as a CSV, with pixels in the rows and bands in the columns (every band will have 5 percentile values). I need this code to run in Colab.
Thanks very much; I will appreciate the help.
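A minimal sketch of one way to do this with the Earth Engine Python API in Colab, assuming the shapefile has already been uploaded as an Earth Engine table asset (the asset ID, the 10 m scale, the seed, and the polygon_id property name below are all assumptions):

import ee
ee.Authenticate()
ee.Initialize()

polygons = ee.FeatureCollection('users/your_name/polygons')  # hypothetical asset ID

def sample_polygon(feature):
    # Draw up to 20 random pixels from this polygon at Sentinel-2's 10 m scale.
    samples = S2_PU_perc.sample(
        region=feature.geometry(),
        scale=10,
        numPixels=20,
        seed=42,
        geometries=True)
    # Tag each sampled pixel with the polygon it came from.
    return samples.map(
        lambda s: ee.Feature(s).set('polygon_id', feature.get('system:index')))

all_samples = polygons.map(sample_polygon).flatten()

# Export one CSV: one row per sampled pixel, one column per band/percentile.
task = ee.batch.Export.table.toDrive(
    collection=all_samples,
    description='S2_PU_perc_samples',
    fileFormat='CSV')
task.start()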
I am trying to apply histogram matching to Sentinel-2 data with OpenCV and scikit-image, similar to https://www.geeksforgeeks.org/histogram-matching-with-opencv-scikit-image-and-python.
Sentinel-2 bands have a value range between 0 and 10000, and they also have coordinates encoded. It looks like OpenCV and scikit-image only support a value range up to 255, as my resulting images are all black.
Is there any library that supports the value range of Sentinel-2 images without losing the geo information of the image?
Not sure if this helps, but are you working with the L2A BOA imagery?
From the documentation, I understand that the meaningful reflectance values go from 1 to 65535 (UINT), and 0 is reserved for NO_DATA.
As of processing baseline 04.00 (deployed 25 January 2022), you also have to use the BOA_ADD_OFFSET for L2 or the RADIO_ADD_OFFSET for L1 to adjust values if you wish to compare them with pre-04.00 images. Currently the offset for all bands appears to be set to 1000, so you just subtract this value to get pre-04.00 values.
There is also QUANTIFICATION_VALUE, which is used to scale down the values for each band; for L2A reflectance it is 10000, so dividing by it brings the pixel values into a 0-1 range.
See the"Sentinel-2 Products Specification Document" at https://sentinel.esa.int/documents/247904/685211/S2-PDGS-TAS-DI-PSD-V14.9.pdf/3d3b6c9c-4334-dcc4-3aa7-f7c0deffbaf7?t=1643013091529
for more details
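A minimal sketch of how this could look in practice, assuming rasterio for I/O (which preserves the georeferencing) and scikit-image's match_histograms, which works on float arrays of any range; the file paths are hypothetical and the 1000/10000 constants follow the offset and quantification value described above:

import numpy as np
import rasterio
from skimage.exposure import match_histograms

def read_reflectance(path):
    # Read one band and convert DN to 0-1 reflectance (post-04.00 offset).
    with rasterio.open(path) as src:
        dn = src.read(1).astype(np.float32)
        return (dn - 1000.0) / 10000.0, src.profile

image, profile = read_reflectance('image_B04.jp2')        # hypothetical paths
reference, _ = read_reflectance('reference_B04.jp2')

matched = match_histograms(image, reference)

# Write the result with the original profile so the geo information survives.
profile.update(dtype='float32', driver='GTiff', count=1)
with rasterio.open('matched_B04.tif', 'w', **profile) as dst:
    dst.write(matched.astype(np.float32), 1)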
I have spectrogram data from an audio analysis which looks like this:
On one axis I have frequencies in Hz, and on the other, times in seconds. I added the grid over the map to show the actual data points. Due to the nature of the frequency analysis used, the best results never give evenly spaced time and frequency values.
To allow comparison of data from multiple sources, I would like to normalize this data. For this reason, I would like to calculate the peak values (maximum and minimum values) for specified areas in the map.
The second visualization shows the areas where I would like to calculate the peak values. I marked an area with a green rectangle to visualize this.
While for the time values I would like to use equally spaced ranges (e.g. 0.0-10.0, 10.0-20.0, 20.0-30.0), the frequency ranges are unevenly distributed. At higher frequencies they will be something like 450-550, 550-1500, 1500-2500, ...
You can download an example data-set here: data.zip. You can unpack the datasets like this:
import numpy as np

with np.load(DATA_PATH) as data:
    frequency_labels = data['frequency_labels']
    time_labels = data['time_labels']
    spectrogram_data = data['data']
DATA_PATH has to point to the path of the .npz data file.
As input, I would provide an array of frequency and time ranges. The result should be another 2d NumPy ndarray with either the maximum or the minimum values. As the amount of data is huge, I would like to rely on NumPy as much as possible to speed up the calculations.
How do I calculate the maximum/minimum values of defined areas from a 2d data map?
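A minimal sketch of one way to do this, assuming frequency_labels and time_labels are sorted 1-D arrays and spectrogram_data has shape (len(frequency_labels), len(time_labels)); np.searchsorted maps each range's edges to index positions in O(log n), so every area reduces to a cheap NumPy slice:

import numpy as np

def peak_values(data, freq_labels, time_labels, freq_ranges, time_ranges, mode='max'):
    # One output cell per (frequency range, time range) pair; NaN if empty.
    reducer = np.max if mode == 'max' else np.min
    out = np.full((len(freq_ranges), len(time_ranges)), np.nan)
    for i, (f_lo, f_hi) in enumerate(freq_ranges):
        f0, f1 = np.searchsorted(freq_labels, [f_lo, f_hi])
        for j, (t_lo, t_hi) in enumerate(time_ranges):
            t0, t1 = np.searchsorted(time_labels, [t_lo, t_hi])
            block = data[f0:f1, t0:t1]
            if block.size:
                out[i, j] = reducer(block)
    return out

# Usage with the ranges from the question:
peaks = peak_values(spectrogram_data, frequency_labels, time_labels,
                    freq_ranges=[(450, 550), (550, 1500), (1500, 2500)],
                    time_ranges=[(0.0, 10.0), (10.0, 20.0), (20.0, 30.0)])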
I am looking for suggestions on the fastest way to select a time series for a given latitude and longitude from an xarray dataset. The xarray dataset I am working with is 3-dimensional, with shape [400, 2000, 7200], where the first dimension is time (400), then latitude (2000), then longitude (7200). I simply need to read individual time series for each of the grid cells within a given rectangle, so I am reading time series one by one for each grid cell within that rectangle.
For this selection I am using .sel:
XR.sel(latitude=Y, longitude=X)
where XR is an xarray dataset, and Y and X are the given latitude and longitude.
This works well but turns out to be very slow when repeated many times. Is there a faster option?
Thank you for your help!
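A minimal sketch of a likely faster pattern, assuming the coordinates are sorted: slice the whole rectangle once, load it into memory, and then pull the per-cell time series out of the in-memory array (the variable name XR follows the question; the rectangle bounds are assumed):

import xarray as xr

# One label-based slice for the whole rectangle instead of one .sel per cell.
box = XR.sel(latitude=slice(y_min, y_max), longitude=slice(x_min, x_max))
box = box.load()  # read from disk once; everything below is in-memory NumPy

# Per-cell time series are now cheap positional lookups.
for i in range(box.sizes['latitude']):
    for j in range(box.sizes['longitude']):
        series = box.isel(latitude=i, longitude=j)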
I have a large DICOM MRI dataset for several patients. For each patient there is a folder containing many 2-D slices as .dcm files, and each patient's data has different dimensions. For example:
patient1: PixelSpacing = 0.8 mm, 0.8 mm; SliceThickness = 2 mm; SpacingBetweenSlices = 1 mm; 400×400 pixels
patient2: PixelSpacing = 0.625 mm, 0.625 mm; SliceThickness = 2.4 mm; SpacingBetweenSlices = 1 mm; 512×512 pixels
So my question is: how can I resample all of them to PixelSpacing = 1 mm, 1 mm and SliceThickness = 1 mm?
Thanks.
These are two different questions:
About harmonizing positions and pixel spacing, these links will be helpful:
Finding the coordinates (mm) of identical slice locations for two MR datasets acquired in the same scanning session
Interpolation between two images with different pixelsize
http://nipy.org/nibabel/dicom/dicom_orientation.html
Basically, you want to build your target volume and interpolate each of its pixels from the nearest neighbors in the source volumes.
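A minimal sketch of that interpolation step, using SimpleITK as one possible library (the folder path is hypothetical; linear interpolation between neighbors, with a 1 mm isotropic target spacing as in the question):

import SimpleITK as sitk

# Read one patient's folder as a 3-D volume.
reader = sitk.ImageSeriesReader()
reader.SetFileNames(reader.GetGDCMSeriesFileNames('/path/to/patient1'))
image = reader.Execute()

new_spacing = (1.0, 1.0, 1.0)
old_size = image.GetSize()
old_spacing = image.GetSpacing()
# Keep the same physical extent: size scales with the spacing ratio.
new_size = [int(round(osz * osp / nsp))
            for osz, osp, nsp in zip(old_size, old_spacing, new_spacing)]

resampled = sitk.Resample(
    image,
    new_size,
    sitk.Transform(),            # identity transform
    sitk.sitkLinear,             # interpolate from neighboring voxels
    image.GetOrigin(),
    new_spacing,
    image.GetDirection(),
    0,                           # default value outside the source volume
    image.GetPixelID())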
About modifying the slice thickness: if you really want to modify the slice thickness rather than the slice distance, I do not see any chance to do this correctly with the source data you have. This is because the thickness specifies which width of the raw data was used to calculate the values for a slice in your stack (e.g. by averaging or calculating an integral). With a slice thickness of 2 or 2.4 mm in the source volumes, you will not be able to reconstruct the gray values with a thickness of 1 mm. If your question was referring to slice distance rather than slice thickness, the first part of this answer applies.
I'm in a little over my head on this one. I have approximately 300 shapefiles containing about 7000 polygons in total, to each of which I'm trying to clip a systematic grid of points. Each shapefile has a unique number of polygons (buffers around a point location), and I need the grid points assigned to each polygon so that they can be recognized as discrete sets later on.
For example, polygon 1 in shapefile 1 will have a set of grid points associated with it. Polygon 2 in shapefile 1 will have another set of grid points, including many that may be the same as those in polygon 1. I would need an attribute field that identifies those points as belonging to that polygon. If it helps, this is for a discrete choice model being applied to resource selection. Any help is greatly appreciated!
Image: Polygons with grid points.
Image: Single shapefile containing polygons
Using Intersect should connect the two layers in a new feature class with one attribute table.
You can also try Spatial Join, which will add the table of one layer to the table of the other according to location.
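A minimal sketch of how the Intersect approach could be batched over all the shapefiles with arcpy (the paths, the output geodatabase, and the naming scheme are assumptions):

import arcpy
import os

polygon_dir = r'C:\data\buffers'          # folder with the ~300 shapefiles
grid_points = r'C:\data\grid_points.shp'  # the systematic grid of points
out_gdb = r'C:\data\results.gdb'

arcpy.env.workspace = polygon_dir
for shp in arcpy.ListFeatureClasses('*.shp', 'Polygon'):
    out_fc = os.path.join(out_gdb, os.path.splitext(shp)[0] + '_pts')
    # Intersect writes one point record per overlapping polygon, so a point
    # shared by several buffers is duplicated, each copy carrying the
    # attributes (and hence the ID) of the polygon it belongs to.
    arcpy.analysis.Intersect([grid_points, shp], out_fc)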