I have spectrogram data from an audio analysis which looks like this:
On one axis I have frequencies in Hz and on the other times in seconds. I added the grid over the map to show the actual data points. Due to the nature of the frequency analysis used, the best results never give evenly spaced time and frequency values.
To allow comparison of data from multiple sources, I would like to normalize this data. For this reason, I would like to calculate the peak values (maximum and minimum values) for specified areas in the map.
The second visualization shows the areas where I would like to calculate the peak values. I marked an area with a green rectangle to visualize this.
While for the time values I would like to use equally spaced ranges (e.g. 0.0-10.0, 10.0-20.0, 20.0-30.0), the frequency ranges are unevenly distributed. At higher frequencies, they will be something like 450-550, 550-1500, 1500-2500, ...
You can download an example data set here: data.zip. After unzipping, you can load a data set like this:
with np.load(DATA_PATH) as data:
    frequency_labels = data['frequency_labels']
    time_labels = data['time_labels']
    spectrogram_data = data['data']
DATA_PATH has to point to the path of the .npz data file.
As input, I would provide an array of frequency and time ranges. The result should be another 2d NumPy ndarray with either the maximum or the minimum values. As the amount of data is huge, I would like to rely on NumPy as much as possible to speed up the calculations.
How do I calculate the maximum/minimum values of defined areas from a 2d data map?
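For what it's worth, here is a rough sketch of the kind of approach I have in mind, assuming the label arrays are sorted in ascending order, the data is indexed as [frequency, time], and the areas are given as arrays of boundary values (all of those are assumptions on my part, not facts about the real data):

import numpy as np

# Sketch only: DATA_PATH points to the .npz file as described above
with np.load(DATA_PATH) as data:
    frequency_labels = data['frequency_labels']
    time_labels = data['time_labels']
    spectrogram_data = data['data']

time_edges = np.array([0.0, 10.0, 20.0, 30.0])          # example time boundaries
freq_edges = np.array([450.0, 550.0, 1500.0, 2500.0])   # example frequency boundaries

# translate the boundary values into row/column indices of the data map
t_idx = np.searchsorted(time_labels, time_edges)
f_idx = np.searchsorted(frequency_labels, freq_edges)

area_max = np.empty((len(freq_edges) - 1, len(time_edges) - 1))
area_min = np.empty_like(area_max)
for i in range(len(freq_edges) - 1):
    for j in range(len(time_edges) - 1):
        block = spectrogram_data[f_idx[i]:f_idx[i + 1], t_idx[j]:t_idx[j + 1]]
        area_max[i, j] = block.max()   # will fail if an area contains no points
        area_min[i, j] = block.min()

The loops only run once per area, so the bulk of the work stays inside NumPy; a fully vectorized variant could presumably use np.maximum.reduceat along each axis.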
I've got the following task: there are two outputs from a DAQ, namely speed and the raw data acquired along with this speed. I'd like to use speed as a parameter to define a certain number of bins, and sort the raw data corresponding to each speed into the appropriate bin. I am not sure how to do this in LabVIEW, because when I check the histogram function, it seems that it only requires one input (a 1D array of values).
Many thanks, any help is much appreciated. Aileen
The Histogram VI takes an array of data and the number of bins you want, and determines the boundaries of the bins automatically. It sounds like that's the one you're looking at.
The General Histogram VI allows you to specify the bins yourself. If you can't find it, perhaps you only have the LabVIEW Base Package development system, as it's only present in the Full Development System and above.
If you don't have General Histogram and you need to create a histogram using your own bin boundaries, it wouldn't be too hard to build one yourself. Without writing the code for you, you could do something like:
Create a 1D array containing your bin boundaries in ascending order.
Use a For loop to index through the array of bin boundaries
In the loop, use (e.g.) >, <=, and And functions to get a Boolean array which contains True for each value in the data array that should be in the current bin
Use Boolean to (0,1) and Add Array Elements to count the number of True values.
If any of that's unclear, please edit your question with more details and perhaps an example of some input data and what you want the output to be.
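A LabVIEW block diagram can't really be pasted as text, but purely as a reference, the counting logic described above might look roughly like this in Python/NumPy (the sample values and the lower-exclusive/upper-inclusive bin convention are just assumptions for the sketch):

import numpy as np

data = np.array([0.3, 1.2, 2.7, 2.9, 4.5, 5.1])         # made-up sample values
boundaries = np.array([0.0, 1.0, 2.0, 3.0, 4.0, 6.0])   # your bin boundaries, ascending

counts = np.zeros(len(boundaries) - 1, dtype=int)
for i in range(len(boundaries) - 1):
    # equivalent of the >, <= and And functions: True for each value in the
    # data array that should be in the current bin
    in_bin = (data > boundaries[i]) & (data <= boundaries[i + 1])
    # equivalent of Boolean to (0,1) followed by Add Array Elements
    counts[i] = in_bin.sum()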
This is an implementation of nekomatic's description.
The first SubVI just creates the 1D array containing your bin boundaries.
X_in and Y_in are the independent and dependent input data sets. Both have to be of equal length but do not need to be sorted. The inner For loop checks whether each X_in value falls into the current bin. If so, X_in and the corresponding Y_in value are stored in temporary arrays, which are averaged afterwards.
It may not be the most efficient code, but at least it does not seem to be slower than the General Histogram VI.
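Again just for reference rather than as LabVIEW code, the per-bin averaging described above could be sketched in Python/NumPy like this (the variable names mirror X_in/Y_in, and the sample data is made up):

import numpy as np

x_in = np.array([0.2, 1.5, 1.7, 3.1, 3.3, 4.9])         # e.g. speed values
y_in = np.array([10.0, 12.0, 13.0, 20.0, 21.0, 30.0])   # corresponding raw data
boundaries = np.array([0.0, 1.0, 2.0, 3.0, 4.0, 5.0])   # bin boundaries

y_mean = np.full(len(boundaries) - 1, np.nan)
for i in range(len(boundaries) - 1):
    # check which X_in values fit into the current bin
    in_bin = (x_in >= boundaries[i]) & (x_in < boundaries[i + 1])
    if in_bin.any():
        # average the corresponding Y_in values for this bin
        y_mean[i] = y_in[in_bin].mean()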
I am wondering if there is a method to approach this problem.
The reason I need this is because for a certain trend of data I need to use a specific formula and for the next trend of the data I need to use a different formula.
Also, the data is not simple; there are two distinct slopes.
All data points are in Excel cells. I haven't started the code yet. I am thinking about taking data points (0,1,2,3,4), finding the slope, moving the window forward by one point (1,2,3,4,5), then somehow calculating the difference between the two slopes, and calling the point where the difference becomes significant the transition point.
You may be able to reduce the problem to finding inflection points. These can be defined as points where the data flattens briefly before either resuming a trend, changing it (but in the same direction), or reversing it. You can find them by looking for small time clusters with a slope of zero. A better idea might be to divide your y data into horizontal bins: if the number of data points in a bin reaches a certain threshold, a change in trend is in progress. You can vary the inflection sensitivity by varying the bin size and/or the minimum number of points in a bin.
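Purely as an illustration of the horizontal-binning idea (the data, the bin size, and the threshold below are made-up placeholders to be tuned), a quick prototype in Python could look like this before translating it into spreadsheet formulas or a macro:

import numpy as np

# made-up data: an upward trend, a flat plateau, then a downward trend
y = np.concatenate([np.linspace(0.0, 10.0, 50),
                    np.full(20, 10.0),
                    np.linspace(10.0, 2.0, 40)])

bin_size = 0.5     # height of each horizontal bin (controls sensitivity)
min_points = 15    # points a bin must contain before we call the trend "changing"

edges = np.arange(y.min(), y.max() + bin_size, bin_size)
counts, edges = np.histogram(y, bins=edges)

# y-levels where the data lingers long enough to suggest a trend change
flat_levels = edges[:-1][counts >= min_points]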
I use Paraview to visualize a 2D distance map.
Below is what I obtain, where geodesics are represented with different colors.
I use the VTK file format RECTILINEAR_GRID.
I would like to add a dimension z where the height would depend on the scalar field value u, without having to rewrite another file.
An example can be found here.
Thanks to lib's comment, the Warp By Scalar filter indeed answers my question.
It is available in the menu Filters -> Alphabetical -> Warp By Scalar.
Just leaving the default values gave me what I needed.
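In case it is useful for scripted pipelines: the same filter should also be reachable from pvpython, roughly along these lines (this is only a sketch; the file name and the scalar array name 'u' are placeholders, and the exact reader depends on your file):

from paraview.simple import LegacyVTKReader, WarpByScalar, Show, Render

reader = LegacyVTKReader(FileNames=['distance_map.vtk'])  # placeholder path
warp = WarpByScalar(Input=reader)
warp.Scalars = ['POINTS', 'u']   # assumed name of the scalar field
warp.ScaleFactor = 1.0           # default value, as in the GUI

Show(warp)
Render()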
I have some irregularly spaced data and need to analyze it. I can successfully interpolate this data onto a regular grid using mlab.griddata (or rather, the natgrid implementation of it). This allows me to use pcolormesh and contour to generate plots, extract levels, etc. Using plt.contour, I then extract a certain level using get_paths from the contour object's CS.collections.
Now, what I'd like to do is then, with my original irregularly spaced data, interpolate some quantities onto this specific contour line (i.e., NOT onto a regular grid). The similarly named griddata function from Scipy allows for this behavior, and it almost works. However, I find that as I increase the number of original points, I can get odd erratic behavior in the interpolation. I'm wondering if there's a way around this, i.e., another way to interpolate irregularly spaced (or regularly spaced data for that matter, since I can use my regularly spaced data from mlab.griddata) onto a specific line.
Let me show some numerical examples of what I'm talking about. Take a look at this figure:
The top left shows my data as points, and the line shows an extracted level of level=0 from some data D that I have at those points (x,y) [note, I have data 'D', 'Energy', and 'Pressure', all defined in this (x,y) space]. Once I have this curve, I can plot the interpolated quantities of D, Energy, and Pressure onto my specific line. First, note the plot of D (middle, right). It should be zero at all points, but it's not quite zero at all points. The likely cause of this is that the line that corresponds to the 0 level is generated from a uniform set of points that came from mlab.griddata, whereas the plot of 'D' is generated from my ORIGINAL data interpolated onto that level curve. You can also see some unphysical wiggles in 'Energy' and 'Pressure'.
Okay, seems easy enough, right? Maybe I should just get more original data points along my level=0 curve. Getting some more of these points, I then generate the following plots:
First look at the top left. You can see that I've sampled the hell out of the (x,y) space in the vicinity of my level=0 curve. Furthermore, you can see that my new "D" plot (middle, right) now correctly interpolates to zero in the region that it originally didn't. But now I get some wiggles at the start of the curve, as well as getting some other wiggles in the 'Energy' and 'Pressure' in this space! It is far from obvious to me that this should occur, since my original data points are still there and I've only supplemented additional points. Furthermore, some regions where my interpolation is going bad aren't even near the points that I added in the second run -- they are exclusively neighbored by my original points.
So this brings me to my original question. I'm worried that the interpolation that produces the 'Energy', 'D', and 'Pressure' curves is not working correctly (this is scipy's griddata, imported as scigrid in the code below). Mlab's griddata only interpolates to a regular grid, whereas I want to interpolate to the specific line shown in the top left plot. What's another way for me to do this?
Thanks for your time!
After posting this, I decided to try scipy.interpolate.SmoothBivariateSpline, which produced the following result:
You can now see that my line is smoothed, so it seems like this will work. I'll mark this as the answer unless someone posts something soon that hints that there may be an even better solution.
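Another option that might be worth a try, sketched here only under the assumption that the contour vertices are available as lineX/lineY (see the code below), is to build a CloughTocher2DInterpolator from the scattered points once and evaluate it along the level curve; it fits a C1-smooth surface rather than the piecewise-linear one used by griddata:

import numpy as np
from scipy.interpolate import CloughTocher2DInterpolator

# x, y are the original scattered coordinates; 'energy' stands in for one of
# the quantities (Energy, Pressure, ...) defined at those points
points = np.column_stack((x, y))
interp = CloughTocher2DInterpolator(points, energy)
energy_on_line = interp(lineX, lineY)   # evaluated at the level=0 curve vertices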
Edit: As requested, below is some of the code used to generate these plots. I don't have a minimally working example, and the above plots were generated in a larger framework of code, but I'll write the important parts schematically below with comments.
import numpy as np
import matplotlib.pyplot as plt
from matplotlib.mlab import griddata                  # mlab's griddata (natgrid backend)
from scipy.interpolate import griddata as scigrid     # scipy's griddata
from scipy.interpolate import SmoothBivariateSpline

# x,y,z are lists of data where the first point is x[0],y[0],z[0], and so on
minx=min(x)
maxx=max(x)
miny=min(y)
maxy=max(y)
# convert to numpy arrays
x=np.array(x)
y=np.array(y)
z=np.array(z)
# here we are creating a fine grid to interpolate the data onto
xi=np.linspace(minx,maxx,100)
yi=np.linspace(miny,maxy,100)
# here we interpolate our data from the original x,y,z unstructured grid to the new
# fine, regular grid in xi,yi, returning the values zi
zi=griddata(x,y,z,xi,yi)
# now let's do some plotting
plt.figure()
# returns the CS contour object, from which we'll be able to get the path for the
# level=0 curve
CS=plt.contour(xi,yi,zi,levels=[0])
# can plot the original data if we want
plt.scatter(x,y,alpha=0.5,marker='x')
# now let's get the level=0 curve
for c in CS.collections:
    data=c.get_paths()[0].vertices
# lineX,lineY are simply the x,y coordinates for our level=0 curve, expressed as arrays
lineX=data[:,0]
lineY=data[:,1]
# so it's easy to plot this too
plt.plot(lineX,lineY)
# now what to do if we want to interpolate some other data we have, say z2
# (also at our original x,y positions), onto
# this level=0 curve?
# well, first I tried using scipy.interpolate.griddata == scigrid like so
origdata=np.transpose(np.vstack((x,y))) # just organizing this data like the
# scigrid routine expects
lineZ2=scigrid(origdata,z2,data,method='linear')
# plotting the above curve (as plt.plot(lineZ2)) gave me really bad results, so
# trying a spline approach
Z2spline=SmoothBivariateSpline(x,y,z2)
# the above creates a spline object on our original data. notice we haven't EVALUATED
# it anywhere yet (we'll want to evaluate it on our level curve)
Z2Line=[]
# here we evaluate the spline along all our points on the level curve, and store the
# result as a new list
for i in range(0,len(lineX)):
    # the [0][0] is just to get the value, which is otherwise enclosed in
    # some array structure for some reason
    Z2Line.append(Z2spline(lineX[i],lineY[i])[0][0])
# you can then easily plot this
plt.plot(Z2Line)
Hope this helps someone!