pyqtgraph : how to allow a plot to only increase its range - pyqt5

I have an xy-plot that updates with new data.
On the first iteration, the x-axis range is automatically set to display the data. On the next iteration, new data is generated, with a potentially smaller min value and a potentially bigger max value. By default, pyqtgraph will adjust the range to display the new data again.
What I want is to keep the 'biggest' range, that is, keep the smallest min limit and the biggest max limit seen across both datasets.
Here is a simple example inspired by the 'AutoRange' example from the docs:
# from the auto-range example : from pyqtgraph import examples; examples.run()
import time
import numpy as np
import pyqtgraph as pg
from pyqtgraph.Qt import QtCore
app = pg.mkQApp("Plot Auto Range Example")
win = pg.GraphicsLayoutWidget(show=True, title="Plot auto-range examples")
win.resize(800,600)
win.setWindowTitle('pyqtgraph example: PlotAutoRange')
xs = np.random.normal(size=100)
ys = np.random.normal(size=100)
p2 = win.addPlot(title="Auto Pan Only")
# AutoPan is not what I want : p2.setAutoPan(y=True)
# maybe something else here like
# p2.setRangeRule(x='onlyExtend')
# --> so that the x limits will only grow based on
# new data
curve = p2.plot(xs, ys)
t0 = time.time()
def update():
    xs = np.random.normal(size=100)
    ys = np.random.normal(size=100)
    # other possibility : manually check the old and new limits, and keep
    # the min/max I want, and then set it to the axes....
    old_box = p2.viewRect()
    old_xmin = old_box.left()
    old_xmax = old_box.right()
    # update the curve to new data (this is always wanted)
    curve.setData(xs, ys)
    new_box = p2.viewRect()
    new_xmin = new_box.left()
    new_xmax = new_box.right()
    # .... this causes the xlimits to grow indefinitely
    # p2.setRange(xRange=(min(old_xmin, new_xmin), max(old_xmax, new_xmax)))

timer = QtCore.QTimer()
timer.timeout.connect(update)
timer.start(50)

if __name__ == '__main__':
    pg.exec()
As you can see, I tried to manually get/check/set the limits, but ended up with endlessly growing x-limits (far bigger than needed just to display the data).
I guess it comes down to the difference between the limits, the range, the view box, the rectangles, and so on. To be fair, I am not familiar with the QGraphicsView framework.
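One idea (my own sketch, not a built-in pyqtgraph option): track the extremes of the data itself instead of reading them back from viewRect(), so the padding that setRange() adds is never fed back into the next comparison:
# Sketch only: keep the global x extents of all data seen so far.
x_min, x_max = None, None

def update():
    global x_min, x_max
    xs = np.random.normal(size=100)
    ys = np.random.normal(size=100)
    curve.setData(xs, ys)
    # compare against the raw data, not the (padded) view rectangle
    x_min = xs.min() if x_min is None else min(x_min, xs.min())
    x_max = xs.max() if x_max is None else max(x_max, xs.max())
    # setRange disables auto-range for x; its padding is applied to the
    # data extents each call, so it no longer accumulates
    p2.setRange(xRange=(x_min, x_max))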

Related

Csv file search speedup

I need to build a relief profile graph by coordinates. I have a CSV file with 12,000,000 lines; searching it for one height takes about 2 - 2.5 seconds. I rewrote the CSV to Parquet, which saved some time: it now takes about 1 - 1.7 seconds to find one height. However, I need to build a profile for 500 - 2000 values, which makes the total time very long. In the future the file may have to grow, which will slow this process down even more. With that in mind, my question is: is it possible to somehow reduce the processing time of the values?
Code example:
import dask.dataframe as dk
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import time
filename = 'n46_e032_1arc_v3.csv'
df = dk.read_csv(filename)
df.to_parquet('n46_e032_1arc_v3_parquet')
Latitude1y, Longitude1x = 46.6276, 32.5942
Latitude2y, Longitude2x = 46.6451, 32.6781
sec, steps, k = 0.00027778, 1, 11.73
Latitude, Longitude = [Latitude1y], [Longitude1x]
sin, cos = Latitude2y - Latitude1y, Longitude2x - Longitude1x
y, x = Latitude1y, Longitude1x
while Latitude[-1] < Latitude2y and Longitude[-1] < Longitude2x:
    y, x, steps = y + sec * k * sin, x + sec * k * cos, steps + 1
    Latitude.append(y)
    Longitude.append(x)
time_start = time.time()
long, elevation_data = [], []
df2 = dk.read_parquet('n46_e032_1arc_v3_parquet')
for i in range(steps + 1):
    elevation_line = df2[(Longitude[i] <= df2['x']) & (df2['x'] <= Longitude[i] + sec) &
                         (Latitude[i] <= df2['y']) & (df2['y'] <= Latitude[i] + sec)].compute()
    elevation = np.asarray(elevation_line.z.tolist())
    if elevation[-1] < 0:
        elevation_data.append(0)
    else:
        elevation_data.append(elevation[-1])
    long.append(30 * i)
plt.bar(long, elevation_data, width=30)
plt.show()
print(time.time() - time_start)
Here's one way to solve this problem using KD trees. A KD tree is a data structure for doing fast nearest-neighbor searches.
import scipy.spatial
tree = scipy.spatial.KDTree(df[['x', 'y']].values)
elevations = df['z'].values
long, elevation_data = [], []
for i in range(steps):
    lon, lat = Longitude[i], Latitude[i]
    dist, idx = tree.query([lon, lat])
    elevation = elevations[idx]
    if elevation < 0:
        elevation = 0
    elevation_data.append(elevation)
    long.append(30 * i)
Note: if you can make assumptions about the data, like "all of the points in the CSV are equally spaced," faster algorithms are possible.
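For example, a minimal sketch under that assumption (my own illustration; it presumes the CSV really is a complete, regularly spaced grid and that df has been loaded into memory as a pandas DataFrame with columns x, y, z):
x0, y0 = df['x'].min(), df['y'].min()
dx = dy = sec  # grid spacing, as defined in the question
nx = int(round((df['x'].max() - x0) / dx)) + 1  # number of columns

# flatten the grid row-major so that y varies slowest
z = df.sort_values(['y', 'x'])['z'].values

def elevation_at(lon, lat):
    # direct index computation: O(1) per query, no searching at all
    i = int(round((lat - y0) / dy))  # row index
    j = int(round((lon - x0) / dx))  # column index
    return z[i * nx + j]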
It looks like your data might be on a regular grid. If (and only if) every combination of x and y exist in your data, then it probably makes sense to turn this into a labeled 2D array of points, after which querying the correct position will be very fast.
For this, I'll use xarray, which is essentially pandas for N-dimensional data, and integrates well with dask:
import xarray as xr

# bring the dataframe into memory
df = dk.read_parquet('n46_e032_1arc_v3_parquet').compute()
da = df.set_index(["y", "x"]).z.to_xarray()
# now you can query the nearest points:
desired_lats = xr.DataArray([46.6276, 46.6451], dims=["point"])
desired_lons = xr.DataArray([32.5942, 32.6781], dims=["point"])
subset = da.sel(y=desired_lats, x=desired_lons, method="nearest")
# if you'd like, you can return to pandas:
subset_s = subset.to_series()
# you could do this only once, and save the reshaped array as a zarr store:
ds = da.to_dataset(name="elevation")
ds.to_zarr("n46_e032_1arc_v3.zarr")
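Later runs can then open the zarr store directly instead of re-reading the CSV/parquet. A short sketch of how that might look:
import xarray as xr
da = xr.open_zarr("n46_e032_1arc_v3.zarr")["elevation"]
profile = da.sel(y=desired_lats, x=desired_lons, method="nearest").to_series()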

how to make a memory efficient multiple dimension groupby/stack using xarray?

I have a large time series of np.float64 with a 5-min frequency (size is ~2,500,000, roughly 24 years).
I'm using Xarray to represent it in memory, and the time dimension is named 'time'.
I want to group by 'time.hour' and then 'time.dayofyear' (or vice versa) and remove both of their means from the time series.
In order to do that efficiently, I need to reorder the time series into a new xr.DataArray with the dimensions ['hour', 'dayofyear', 'rest'].
I wrote a function that plays with the GroupBy objects of Xarray and manages to do just that, although it takes a lot of memory to do it...
I have a machine with 32 GB RAM and I still get a MemoryError from numpy.
I know the code works because I used it on an hourly resampled version of my original time series. So here's the code:
def time_series_stack(time_da, time_dim='time', grp1='hour', grp2='dayofyear'):
    """Takes a time-series xr.DataArray objects and reshapes it using
    grp1 and grp2. output is a xr.Dataset that includes the reshaped DataArray
    , its datetime-series and the grps."""
    import xarray as xr
    import numpy as np
    import pandas as pd
    # try to infer the freq and put it into attrs for later reconstruction:
    freq = pd.infer_freq(time_da[time_dim].values)
    name = time_da.name
    time_da.attrs['freq'] = freq
    attrs = time_da.attrs
    # drop all NaNs:
    time_da = time_da.dropna(time_dim)
    # group grp1 and concat:
    grp_obj1 = time_da.groupby(time_dim + '.' + grp1)
    s_list = []
    for grp_name, grp_inds in grp_obj1.groups.items():
        da = time_da.isel({time_dim: grp_inds})
        s_list.append(da)
    grps1 = [x for x in grp_obj1.groups.keys()]
    stacked_da = xr.concat(s_list, dim=grp1)
    stacked_da[grp1] = grps1
    # group over the concatenated da and concat again:
    grp_obj2 = stacked_da.groupby(time_dim + '.' + grp2)
    s_list = []
    for grp_name, grp_inds in grp_obj2.groups.items():
        da = stacked_da.isel({time_dim: grp_inds})
        s_list.append(da)
    grps2 = [x for x in grp_obj2.groups.keys()]
    stacked_da = xr.concat(s_list, dim=grp2)
    stacked_da[grp2] = grps2
    # numpy part:
    # first, loop over both dims and drop NaNs, append values and datetimes:
    vals = []
    dts = []
    for i, grp1_val in enumerate(stacked_da[grp1]):
        da = stacked_da.sel({grp1: grp1_val})
        for j, grp2_val in enumerate(da[grp2]):
            val = da.sel({grp2: grp2_val}).dropna(time_dim)
            vals.append(val.values)
            dts.append(val[time_dim].values)
    # second, we get the max of the vals after the second groupby:
    max_size = max([len(x) for x in vals])
    # we fill NaNs and NaT for the remainder of them:
    concat_sizes = [max_size - len(x) for x in vals]
    concat_arrys = [np.empty((x)) * np.nan for x in concat_sizes]
    concat_vals = [np.concatenate(x) for x in list(zip(vals, concat_arrys))]
    # 1970-01-01 is the NaT for this time-series:
    concat_arrys = [np.zeros((x), dtype='datetime64[ns]')
                    for x in concat_sizes]
    concat_dts = [np.concatenate(x) for x in list(zip(dts, concat_arrys))]
    concat_vals = np.array(concat_vals)
    concat_dts = np.array(concat_dts)
    # finally, we reshape them:
    concat_vals = concat_vals.reshape((stacked_da[grp1].shape[0],
                                       stacked_da[grp2].shape[0],
                                       max_size))
    concat_dts = concat_dts.reshape((stacked_da[grp1].shape[0],
                                     stacked_da[grp2].shape[0],
                                     max_size))
    # create a Dataset and DataArrays for them:
    sda = xr.Dataset()
    sda.attrs = attrs
    sda[name] = xr.DataArray(concat_vals, dims=[grp1, grp2, 'rest'])
    sda[time_dim] = xr.DataArray(concat_dts, dims=[grp1, grp2, 'rest'])
    sda[grp1] = grps1
    sda[grp2] = grps2
    sda['rest'] = range(max_size)
    return sda
So for the 2,500,000-item time series, numpy throws the MemoryError, so I'm guessing this is my memory bottleneck. What can I do to solve this?
Would Dask help me? And if so, how can I implement it?
Like you, I ran it without issue when inputting a small time series (10,000 long). However, when inputting a 100,000-long time series xr.DataArray, the grp_obj2 for loop ran away and used all the memory of the system.
This is what I used to generate the time series xr.DataArray:
n = 10**5
times = np.datetime64('2000-01-01') + np.arange(n) * np.timedelta64(5,'m')
data = np.random.randn(n)
time_da = xr.DataArray(data, name='rand_data', dims=('time'), coords={'time': times})
# time_da.to_netcdf('rand_time_series.nc')
As you point out, Dask would be a way to solve it but I can't see a clear path at the moment...
Typically, the way to approach this kind of problem with Dask would be to:
Make the input a dataset from a file (like NetCDF). This will not load the file into memory but will allow Dask to pull data from disk one chunk at a time.
Define all calculations with dask.delayed or dask.futures methods for the entire body of code, up until writing the output. This is what allows Dask to read a small chunk of data and then write it.
Calculate one chunk of work and immediately write the output to a new dataset file. Effectively you end up streaming one chunk of input to one chunk of output at a time (but also threaded/parallelized).
I tried importing Dask and breaking the input time_da xr.DataArray into chunks for Dask to work on but it didn't help. From what I can tell, the line stacked_da = xr.concat(s_list, dim=grp1) forces Dask to make a full copy of stacked_da in memory and much more...
One workaround to this is to write stacked_da to disk then immediately read it again:
##For group1
xr.concat(s_list, dim=grp1).to_netcdf('stacked_da1.nc')
stacked_da = xr.load_dataset('stacked_da1.nc')
stacked_da[grp1] = grps1
##For group2
xr.concat(s_list, dim=grp2).to_netcdf('stacked_da2.nc')
stacked_da = xr.load_dataset('stacked_da2.nc')
stacked_da[grp2] = grps2
However, the file size for stacked_da1.nc is 19MB and stacked_da2.nc gets huge at 6.5GB. This is for time_da with 100,000 elements... so there's clearly something amiss...
Originally, it sounded like you want to subtract the mean of the groups from the time-series data. It looks like the Xarray docs have an example for that: http://xarray.pydata.org/en/stable/groupby.html#grouped-arithmetic
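For reference, a minimal sketch of that grouped-arithmetic pattern (using the synthetic time_da from above; this is my illustration of the docs example, not the code I ended up with):
# remove the mean of each hour-of-day, then the mean of each day-of-year
hourly_anom = time_da.groupby('time.hour') - time_da.groupby('time.hour').mean('time')
deseasoned = hourly_anom.groupby('time.dayofyear') - hourly_anom.groupby('time.dayofyear').mean('time')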
The key is to group once and loop over the groups and then group again on each of the groups and append it to list.
Next I concat and use pd.MultiIndex.from_product for the groups.
No memory problems and no Dask needed, and it only takes a few seconds to run.
Here's the code, enjoy:
def time_series_stack(time_da, time_dim='time', grp1='hour', grp2='month',
                      plot=True):
    """Takes a time-series xr.DataArray objects and reshapes it using
    grp1 and grp2. output is a xr.Dataset that includes the reshaped DataArray
    , its datetime-series and the grps. plots the mean also"""
    import xarray as xr
    import pandas as pd
    # try to infer the freq and put it into attrs for later reconstruction:
    freq = pd.infer_freq(time_da[time_dim].values)
    name = time_da.name
    time_da.attrs['freq'] = freq
    attrs = time_da.attrs
    # drop all NaNs:
    time_da = time_da.dropna(time_dim)
    # first grouping:
    grp_obj1 = time_da.groupby(time_dim + '.' + grp1)
    da_list = []
    t_list = []
    for grp1_name, grp1_inds in grp_obj1.groups.items():
        da = time_da.isel({time_dim: grp1_inds})
        # second grouping:
        grp_obj2 = da.groupby(time_dim + '.' + grp2)
        for grp2_name, grp2_inds in grp_obj2.groups.items():
            da2 = da.isel({time_dim: grp2_inds})
            # extract datetimes and rewrite time coord to 'rest':
            times = da2[time_dim]
            times = times.rename({time_dim: 'rest'})
            times.coords['rest'] = range(len(times))
            t_list.append(times)
            da2 = da2.rename({time_dim: 'rest'})
            da2.coords['rest'] = range(len(da2))
            da_list.append(da2)
    # get group keys:
    grps1 = [x for x in grp_obj1.groups.keys()]
    grps2 = [x for x in grp_obj2.groups.keys()]
    # concat and convert to dataset:
    stacked_ds = xr.concat(da_list, dim='all').to_dataset(name=name)
    stacked_ds[time_dim] = xr.concat(t_list, 'all')
    # create a multiindex for the groups:
    mindex = pd.MultiIndex.from_product([grps1, grps2], names=[grp1, grp2])
    stacked_ds.coords['all'] = mindex
    # unstack:
    ds = stacked_ds.unstack('all')
    ds.attrs = attrs
    return ds
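A brief usage sketch (assuming the synthetic time_da generated earlier in this thread):
ds = time_series_stack(time_da, time_dim='time', grp1='hour', grp2='dayofyear')
# the mean over 'rest' is the hour x dayofyear climatology:
climatology = ds['rand_data'].mean('rest')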

Stimuli changes with every frame being displayed.

I have a bit of code (displayed below) that is supposed to display the stimulus for 10 frames. We need pretty exact display times, so using a number of frames is a must instead of core.wait(xx), as the display time won't be as precise.
Instead of the stimulus being drawn once and left up for another 9 frames, the stimulus is re-drawn on every frame.
# Import what is needed
import numpy as np
from psychopy import visual, event, core, logging
from math import sin, cos
import random, math
win = visual.Window(size=(1366, 768), fullscr=True, screen=0, allowGUI=False, allowStencil=False,
monitor='testMonitor', color=[0,0,0], colorSpace='rgb',
blendMode='avg', useFBO=True,
units='deg')
### Definitions of libraries
'''Parameters :
numpy - python package of numerical computations
visual - where all visual stimulus live
event - code to deal with mouse + keyboard input
core - general function for timing & closing the program
logging - provides function for logging error and other messages to one file
random - options for creating arrays of random numbers
sin & cos - for geometry and trigonometry
math - mathematical operations '''
# this is supposed to record all frames
win.setRecordFrameIntervals(True)
win._refreshThreshold=1/65.0+0.004 #i've got 65Hz monitor and want to allow 4ms tolerance
#set the log module to report warnings to the std output window (default is errors only)
logging.console.setLevel(logging.WARNING)
nIntervals=5
# Create space variables and a window
lineSpaceX = 0.55
lineSpaceY = 0.55
patch_orientation = 45 # zero is vertical, going anti-clockwise
surround_orientation = 90
#Jitter values
g_posJitter = 0.05 #gaussian positional jitter
r_posJitter = 0.05 #random positional jitter
g_oriJitter = 5 #gaussian orientation jitter
r_oriJitter = 5 #random orientation jitter
#create a 1-Dimentional array
line = np.array(range(38)) #with values from (0-37) #possibly not needed 01/04/16 DK
#Region where the rectangular patch would appear
#x_rand=random.randint(1,22) #random.randint(Return random integers from low (inclusive) to high (exclusive).
#y_rand=random.randint(1,25)
x_rand=random.randint(6,13) #random.randint(Return random integers from low (inclusive) to high (inclusive).
y_rand=random.randint(6,16)
#rectangular patch dimensions
width=15
height=12
message = visual.TextStim(win,pos=(0.0,-12.0),text='...Press SPACE to continue...')
fixation = visual.TextStim(win, pos=(0.0,0.0), text='X')
# Initialize clock to record response time
rt_clock = core.Clock()
#Nested loop to draw anti-aliased lines on grid
#create a function for this
def myStim():
    for x in xrange(1,33): #32x32 grid. When x is 33 will not execute loop - will stop
        for y in xrange(1,33): #When y is 33 will not execute loop - will stop
            ##Define x & y value (Gaussian distribution-positional jitter)
            x_pos = (x-32/2-1/2 )*lineSpaceX + random.gauss(0,g_posJitter) #random.gauss(mean,s.d); -1/2 is to center even-numbered stimuli; 32x32 grid
            y_pos = (y-32/2-1/2 )*lineSpaceY + random.gauss(0,g_posJitter)
            if (x >= x_rand and x < x_rand+width) and (y >= y_rand and y < y_rand+height): # note only "=" on one side
                Line_Orientation = random.gauss(patch_orientation,g_oriJitter) #random.gauss(mean,s.d) - Gaussian func.
            else:
                Line_Orientation = random.gauss(surround_orientation,g_oriJitter) #random.gauss(mean,s.d) - Gaussian func.
            #Line_Orientation = random.gauss(Line_Orientation,g_oriJitter) #random.gauss(mean,s.d) - Gaussian func.
            #stimOri = random.uniform(xOri - r_oriJitter, xOri + r_oriJitter) #random.uniform(A,B) - Uniform func.
            visual.Line(win, units="deg", start=(0,0), end=(0.0,0.35), pos=(x_pos,y_pos), ori=Line_Orientation, autoLog=False).draw() #Gaussian func.

for frameN in range(10):
    myStim()
    win.flip()

print x_rand, y_rand
print keys, rt #display response and reaction time on screen output window
I have tried to use the following code to keep it displayed (by not clearing the buffer). But it just draws over it several times.
for frameN in range(10):
    myStim()
    win.flip(clearBuffer=False)
I realize that the problem could be because I have .draw() in the function that I have defined def myStim():. However, if I don't include the .draw() within the function - I won't be able to display the stimuli.
Thanks in advance for any help.
If I understand correctly, the problem you are facing is that you have to re-draw the stimulus on every flip, but your current drawing function also recreates the entire (random) stimulus, so:
the stimulus changes on each draw between flips, although you need it to stay constant, and
you get a (on some systems quite massive) performance penalty by re-creating the entire stimulus over and over again.
What you want instead is: create the stimulus once, in its entirety, before presentation; and then have this pre-generated stimulus drawn on every flip.
Since your stimulus consists of a fairly large number of visual elements, I would suggest using a class to store the stimulus in one place.
Essentially, you would replace your myStim() function with this class (note that I stripped out most comments, re-aligned the code a bit, and simplified the if statement):
class MyStim(object):
    def __init__(self):
        self.lines = []
        for x in xrange(1, 33):
            for y in xrange(1, 33):
                x_pos = ((x - 32 / 2 - 1 / 2) * lineSpaceX +
                         random.gauss(0, g_posJitter))
                y_pos = ((y - 32 / 2 - 1 / 2) * lineSpaceY +
                         random.gauss(0, g_posJitter))
                if ((x_rand <= x < x_rand + width) and
                        (y_rand <= y < y_rand + height)):
                    Line_Orientation = random.gauss(patch_orientation,
                                                    g_oriJitter)
                else:
                    Line_Orientation = random.gauss(surround_orientation,
                                                    g_oriJitter)
                current_line = visual.Line(
                    win, units="deg", start=(0, 0), end=(0.0, 0.35),
                    pos=(x_pos, y_pos), ori=Line_Orientation,
                    autoLog=False
                )
                self.lines.append(current_line)

    def draw(self):
        [line.draw() for line in self.lines]
What this code does on instantiation is in principle identical to your myStim() function: it creates a set of (random) lines. But instead of drawing them onto the screen right away, they are all collected in the list self.lines, and will remain there until we actually need them.
The draw() method traverses through this list, element by element (that is, line by line), and calls every line's draw() method. Note that the stimuli do not have to be re-created every time we want to draw the whole set, but instead we just draw the already pre-created lines!
To get this working in practice, you first need to instantiate the MyStim class:
myStim = MyStim()
Then, whenever you want to present the stimulus, all you have to do is
myStim.draw()
win.flip()
Here is the entire, modified code that should get you started:
import numpy as np
from psychopy import visual, event, core, logging
from math import sin, cos
import random, math

win = visual.Window(size=(1366, 768), fullscr=True, screen=0, allowGUI=False, allowStencil=False,
                    monitor='testMonitor', color=[0,0,0], colorSpace='rgb',
                    blendMode='avg', useFBO=True,
                    units='deg')

# this is supposed to record all frames
win.setRecordFrameIntervals(True)
win._refreshThreshold = 1/65.0 + 0.004 #i've got 65Hz monitor and want to allow 4ms tolerance
#set the log module to report warnings to the std output window (default is errors only)
logging.console.setLevel(logging.WARNING)

nIntervals = 5
# Create space variables and a window
lineSpaceX = 0.55
lineSpaceY = 0.55
patch_orientation = 45 # zero is vertical, going anti-clockwise
surround_orientation = 90

#Jitter values
g_posJitter = 0.05 #gaussian positional jitter
r_posJitter = 0.05 #random positional jitter
g_oriJitter = 5 #gaussian orientation jitter
r_oriJitter = 5 #random orientation jitter

x_rand = random.randint(6,13) #random.randint(Return random integers from low (inclusive) to high (inclusive).
y_rand = random.randint(6,16)

#rectangular patch dimensions
width = 15
height = 12

message = visual.TextStim(win, pos=(0.0,-12.0), text='...Press SPACE to continue...')
fixation = visual.TextStim(win, pos=(0.0,0.0), text='X')

# Initialize clock to record response time
rt_clock = core.Clock()

class MyStim(object):
    def __init__(self):
        self.lines = []
        for x in xrange(1, 33):
            for y in xrange(1, 33):
                x_pos = ((x - 32 / 2 - 1 / 2) * lineSpaceX +
                         random.gauss(0, g_posJitter))
                y_pos = ((y - 32 / 2 - 1 / 2) * lineSpaceY +
                         random.gauss(0, g_posJitter))
                if ((x_rand <= x < x_rand + width) and
                        (y_rand <= y < y_rand + height)):
                    Line_Orientation = random.gauss(patch_orientation,
                                                    g_oriJitter)
                else:
                    Line_Orientation = random.gauss(surround_orientation,
                                                    g_oriJitter)
                current_line = visual.Line(
                    win, units="deg", start=(0, 0), end=(0.0, 0.35),
                    pos=(x_pos, y_pos), ori=Line_Orientation,
                    autoLog=False
                )
                self.lines.append(current_line)

    def draw(self):
        [line.draw() for line in self.lines]

myStim = MyStim()

for frameN in range(10):
    myStim.draw()
    win.flip()

# Clear the screen
win.flip()

print x_rand, y_rand
core.quit()
Please do note that even with this approach, I am dropping frames on a 3-year-old laptop with a relatively weak integrated graphics chip. But I suspect a modern, fast GPU would be able to handle this number of visual objects just fine. In the worst case, you could pre-create a large set of stimuli, save them as bitmap files via win.saveMovieFrames(), and present them as pre-loaded SimpleImageStims during your actual study.
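A rough sketch of that worst-case fallback (my assumption of how it could look; 'stim_frame.png' is just an illustrative file name):
# render the pre-generated stimulus once and save the back buffer to disk
myStim.draw()
win.getMovieFrame(buffer='back')
win.saveMovieFrames('stim_frame.png')
win.clearBuffer()

# later: present the saved bitmap for 10 frames
bitmap = visual.SimpleImageStim(win, image='stim_frame.png')
for frameN in range(10):
    bitmap.draw()
    win.flip()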

Exporting a 3D numpy to a VTK file for viewing in Paraview/Mayavi

For those that want to export a simple 3D numpy array (along with axes) to a .vtk (or .vtr) file for post-processing and display in Paraview or Mayavi, there's a little module called PyEVTK that does exactly that. The module supports structured and unstructured data, etc.
Unfortunately, even though the code works fine on Unix-based systems, I couldn't make it work (it keeps crashing) on any Windows installation, which simply complicates things. I've contacted the developer, but his suggestions did not work.
Therefore my question is:
How can one use numpy_support (from vtk.util import numpy_support) to export a 3D array to a .vtk file, given that it doesn't support 3D arrays directly? Is there a simple way to do it without creating vtkDatasets, etc.?
Thanks a lot!
It's been forever and I had entirely forgotten asking this question but I ended up figuring it out. I've written a post about it in my blog (PyScience) providing a tutorial on how to convert between NumPy and VTK. Do take a look if interested:
pyscience.wordpress.com/2014/09/06/numpy-to-vtk-converting-your-numpy-arrays-to-vtk-arrays-and-files/
It's not a direct answer to your question, but if you have tvtk (if you have mayavi, you should have it), you can use it to write your data to vtk format. (See: http://code.enthought.com/projects/files/ETS3_API/enthought.tvtk.misc.html )
It doesn't use PyEVTK, and it supports a broad range of data sources (more than just structured and unstructured grids), so it will probably work where other things won't.
As a quick example (Mayavi's mlab interface can make this much less verbose, especially if you're already using it.):
import numpy as np
from enthought.tvtk.api import tvtk, write_data

data = np.random.random((10,10,10))

grid = tvtk.ImageData(spacing=(10, 5, -10), origin=(100, 350, 200),
                      dimensions=data.shape)
grid.point_data.scalars = np.ravel(data, order='F')
grid.point_data.scalars.name = 'Test Data'

# Writes legacy ".vtk" format if filename ends with "vtk", otherwise
# this will write data using the newer xml-based format.
write_data(grid, 'test.vtk')
And a portion of the output file:
# vtk DataFile Version 3.0
vtk output
ASCII
DATASET STRUCTURED_POINTS
DIMENSIONS 10 10 10
SPACING 10 5 -10
ORIGIN 100 350 200
POINT_DATA 1000
SCALARS Test%20Data double
LOOKUP_TABLE default
0.598189 0.228948 0.346975 0.948916 0.0109774 0.30281 0.643976 0.17398 0.374673
0.295613 0.664072 0.307974 0.802966 0.836823 0.827732 0.895217 0.104437 0.292796
0.604939 0.96141 0.0837524 0.498616 0.608173 0.446545 0.364019 0.222914 0.514992
...
...
Mayavi's TVTK has a beautiful way of writing VTK files. Here is a test example I have written for myself following @Joe and the tvtk documentation. The advantage it has over evtk is the support for both ASCII and XML output. Hope it will help other people.
from tvtk.api import tvtk, write_data
import numpy as np
#data = np.random.random((3, 3, 3))
#
#i = tvtk.ImageData(spacing=(1, 1, 1), origin=(0, 0, 0))
#i.point_data.scalars = data.ravel()
#i.point_data.scalars.name = 'scalars'
#i.dimensions = data.shape
#
#w = tvtk.XMLImageDataWriter(input=i, file_name='spoints3d.vti')
#w.write()
points = np.array([[0,0,0], [1,0,0], [1,1,0], [0,1,0]], 'f')
(n1, n2) = points.shape
poly_edge = np.array([[0,1,2,3]])
print n1, n2
## Scalar Data
#temperature = np.array([10., 20., 30., 40.])
#pressure = np.random.rand(n1)
#
## Vector Data
#velocity = np.random.rand(n1,n2)
#force = np.random.rand(n1,n2)
#
##Tensor Data with
comp = 5
stress = np.random.rand(n1,comp)
#
#print stress.shape
## The TVTK dataset.
mesh = tvtk.PolyData(points=points, polys=poly_edge)
#
## Data 0 # scalar data
#mesh.point_data.scalars = temperature
#mesh.point_data.scalars.name = 'Temperature'
#
## Data 1 # additional scalar data
#mesh.point_data.add_array(pressure)
#mesh.point_data.get_array(1).name = 'Pressure'
#mesh.update()
#
## Data 2 # Vector data
#mesh.point_data.vectors = velocity
#mesh.point_data.vectors.name = 'Velocity'
#mesh.update()
#
## Data 3 additional vector data
#mesh.point_data.add_array( force)
#mesh.point_data.get_array(3).name = 'Force'
#mesh.update()
mesh.point_data.tensors = stress
mesh.point_data.tensors.name = 'Stress'
# Data 4 additional tensor Data
#mesh.point_data.add_array(stress)
#mesh.point_data.get_array(4).name = 'Stress'
#mesh.update()
write_data(mesh, 'polydata.vtk')
# XML format
# Method 1
#write_data(mesh, 'polydata')
# Method 2
#w = tvtk.XMLPolyDataWriter(input=mesh, file_name='polydata.vtk')
#w.write()
I know it is a bit late, and I do love your tutorials, @somada141. This should work too.
import vtk

def numpy2VTK(img, spacing=[1.0, 1.0, 1.0]):
    # evolved from code from Stou S.,
    # on http://www.siafoo.net/snippet/314
    # This function, as the name suggests, converts numpy array to VTK
    importer = vtk.vtkImageImport()
    img_data = img.astype('uint8')
    img_string = img_data.tostring()  # type short
    dim = img.shape
    importer.CopyImportVoidPointer(img_string, len(img_string))
    importer.SetDataScalarType(vtk.VTK_UNSIGNED_CHAR)
    importer.SetNumberOfScalarComponents(1)
    extent = importer.GetDataExtent()
    importer.SetDataExtent(extent[0], extent[0] + dim[2] - 1,
                           extent[2], extent[2] + dim[1] - 1,
                           extent[4], extent[4] + dim[0] - 1)
    importer.SetWholeExtent(extent[0], extent[0] + dim[2] - 1,
                            extent[2], extent[2] + dim[1] - 1,
                            extent[4], extent[4] + dim[0] - 1)
    importer.SetDataSpacing(spacing[0], spacing[1], spacing[2])
    importer.SetDataOrigin(0, 0, 0)
    return importer
Hope it helps!
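To actually get a file on disk from that importer, one possibility (my sketch, not part of the answer above) is to connect it to one of VTK's legacy writers:
import numpy as np
import vtk

img = np.random.randint(0, 255, (50, 60, 70)).astype('uint8')
importer = numpy2VTK(img)

# write the imported image data as a legacy .vtk file
writer = vtk.vtkStructuredPointsWriter()
writer.SetInputConnection(importer.GetOutputPort())
writer.SetFileName('imported.vtk')
writer.Write()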
Here's a SimpleITK version with the function load_itk taken from here:
import sys
import SimpleITK as sitk
import numpy as np

if len(sys.argv) < 3:
    print('Wrong number of arguments.', file=sys.stderr)
    print('Usage: ' + __file__ + ' input_sitk_file' + ' output_sitk_file', file=sys.stderr)
    sys.exit(1)

def quick_read(filename):
    # Read image information without reading the bulk data.
    file_reader = sitk.ImageFileReader()
    file_reader.SetFileName(filename)
    file_reader.ReadImageInformation()
    print('image size: {0}\nimage spacing: {1}'.format(file_reader.GetSize(), file_reader.GetSpacing()))
    # Some files have a rich meta-data dictionary (e.g. DICOM)
    for key in file_reader.GetMetaDataKeys():
        print(key + ': ' + file_reader.GetMetaData(key))

def load_itk(filename):
    # Reads the image using SimpleITK
    itkimage = sitk.ReadImage(filename)
    # Convert the image to a numpy array first and then shuffle the dimensions to get axis in the order z,y,x
    data = sitk.GetArrayFromImage(itkimage)
    # Read the origin of the ct_scan, will be used to convert the coordinates from world to voxel and vice versa.
    origin = np.array(list(reversed(itkimage.GetOrigin())))
    # Read the spacing along each dimension
    spacing = np.array(list(reversed(itkimage.GetSpacing())))
    return data, origin, spacing

def convert(data, output_filename):
    image = sitk.GetImageFromArray(data)
    writer = sitk.ImageFileWriter()
    writer.SetFileName(output_filename)
    writer.Execute(image)

def wait():
    print('Press Enter to load & convert or exit using Ctrl+C')
    input()

quick_read(sys.argv[1])
print('-' * 20)
wait()
data, origin, spacing = load_itk(sys.argv[1])
convert(data, sys.argv[2])

Numpy: regrid by averaging?

I'm trying to regrid a numpy array onto a new grid. In this specific case, I'm trying to regrid a power spectrum onto a logarithmic grid so that the data are evenly spaced logarithmically for plotting purposes.
Doing this with straight interpolation using np.interp results in some of the original data being ignored entirely. Using digitize gets the result I want, but I have to use some ugly loops to get it to work:
import numpy as np
import matplotlib.pyplot as plt

xfreq = np.fft.fftfreq(100)[1:50]  # only positive, nonzero freqs
psw = np.arange(xfreq.size)        # dummy array for MWE

# new logarithmic grid
logfreq = np.logspace(np.log10(np.min(xfreq)), np.log10(np.max(xfreq)), 100)
inds = np.digitize(xfreq, logfreq)

# interpolation: ignores data *but* populates all points
logpsw = np.interp(logfreq, xfreq, psw)
# so average down where available...
logpsw[np.unique(inds)] = [psw[inds==i].mean() for i in np.unique(inds)]

# the new plot
plt.loglog(logfreq, logpsw, linewidth=0.5, color='k')
Is there a nicer way to accomplish this in numpy? I'd be satisfied with just a replacement of the inline loop step.
You can use bincount() twice to calculate the average value of each bin:
logpsw2 = np.interp(logfreq, xfreq, psw)
counts = np.bincount(inds)
mask = counts != 0
logpsw2[mask] = np.bincount(inds, psw)[mask] / counts[mask]
or use unique(inds, return_inverse=True) and bincount() twice:
logpsw4 = np.interp(logfreq, xfreq, psw)
uinds, inv_index = np.unique(inds, return_inverse=True)
logpsw4[uinds] = np.bincount(inv_index, psw) / np.bincount(inv_index)
Or if you use Pandas:
import pandas as pd
logpsw4 = np.interp(logfreq, xfreq, psw)
s = pd.Series(psw).groupby(inds).mean()
logpsw4[s.index] = s.values
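For completeness, scipy.stats.binned_statistic gives a similar per-bin mean in a single call (my addition, not part of the answers above; note it treats logfreq as bin edges, so it returns one value per bin rather than per edge):
from scipy import stats

# mean of psw within each logfreq bin; empty bins come back as NaN
means, edges, binnumber = stats.binned_statistic(xfreq, psw, statistic='mean', bins=logfreq)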