How to augment a time series dataset using random windows? - tensorflow

I have a dataset of multiple snippets of a time series. Is there an easy way in TensorFlow 2.9 to augment each of these snippets using some kind of sliding window (I know this exists), but then make the size and position of this window random? So that every snippet gets a bunch of random variations. The only thing I care about is that the data inside a window remains in order. Any ideas?
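One possible approach with tf.data (a minimal sketch, not a definitive recipe): repeat each snippet a few times, then cut a crop of random length at a random offset. Here `ds` is assumed to be a tf.data.Dataset whose elements are single [time, features] tensors, and MIN_LEN, MAX_LEN, and COPIES are placeholder values.

import tensorflow as tf

MIN_LEN, MAX_LEN, COPIES = 32, 128, 8   # assumed values; tune for your data

def random_window(snippet):
    # snippet: [time, features] tensor; returns one crop with random length
    # and random position (the order inside the window is preserved);
    # assumes every snippet is at least MIN_LEN steps long
    n = tf.shape(snippet)[0]
    win = tf.random.uniform([], MIN_LEN, tf.minimum(MAX_LEN, n) + 1, dtype=tf.int32)
    start = tf.random.uniform([], 0, n - win + 1, dtype=tf.int32)
    return snippet[start:start + win]

# every snippet becomes COPIES random variations
ds = ds.flat_map(lambda s: tf.data.Dataset.from_tensors(s).repeat(COPIES))
ds = ds.map(random_window, num_parallel_calls=tf.data.AUTOTUNE)

Note that the resulting elements have variable length, so downstream batching would need padded_batch or ragged tensors.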

Related

Peak detect and hold in Labview

I've inherited a LabVIEW "circuit" that integrates g's to output IPS. The problem is that the output text window (a double), at full speed, has numbers scrolling so fast you can't read them. I only need to see the largest number detected. I'm not too well versed in LabVIEW - can anyone help me with a function that will display the largest number output to the text window for a duration of 1/2 second? I'm basically looking for a peak detect-and-hold function. I'd prefer to work with the double-precision value that is constantly updated if possible, rather than the array feeding my integrator. I tried looking through the Functions > Signal Processing menu and saw one peak detector, but I'm not sure that's the right utility.
Thanks!
It's easier to use the Array Max & Min PtByPt.vi, which can be found in the Signal Processing > Point By Point palette. Below is a VI snippet showing how it works.
It will update the maximum value every 10 points. A waveform chart showing the values is also attached.
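For readers outside LabVIEW, the point-by-point maximum that this VI computes can be sketched in a few lines of Python (the window of 10 mirrors the description above; for a 1/2-second hold, set the window to half a second's worth of samples):

from collections import deque

class PtByPtMax:
    # rolling maximum over the most recent `window` samples,
    # mimicking the Array Max & Min PtByPt behavior described above
    def __init__(self, window=10):
        self.buf = deque(maxlen=window)

    def update(self, sample):
        self.buf.append(sample)
        return max(self.buf)    # the held peak to display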

Oxyplot: IsValidPoint on realtime LineSerie

I've been using OxyPlot for a month now and I'm pretty happy with what it delivers. I'm getting data from an oscilloscope and, after some fast processing, I'm plotting it in real time to a graph.
However, if I compare my application's CPU usage to that of the application provided by the oscilloscope manufacturer, I'm loading the CPU a lot more. Maybe they're using some GPU-based plotter, but I think I can reduce my CPU usage with some modifications.
I'm capturing 10,000 samples per second and adding them to a LineSeries. I'm not plotting all that data; I'm decimating it to a constant number of points, say 80 points for a 20-second measurement, so I have 4 points/sec while fully zoomed out and a bit more detail if I zoom in to a specific range.
With the aid of ReSharper, I've noticed that the application is calling the IsValidPoint method a huge number of times (something like 400,000,000 calls across my 6 different plots), which is taking a lot of time.
I think the problem is that, when I add new points to the series, it checks every point for validity instead of only the newly added values.
It also spends a lot of time in the MeasureText/DrawText methods.
My question is: is there a way to override those methods and adapt them to my needs? I'm adding 10,000 new values each second, but the earlier ones remain the same, so there's no need to re-validate them. Likewise, the text shown doesn't change.
Thank you in advance for any advice you can give me. Have a good day!
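No answer is recorded here, but the incremental validation the asker is after can be sketched generically (Python for brevity; `ValidatedSeries` is a hypothetical stand-in, not an OxyPlot type): validate each point once, when it arrives, and never re-check it on redraw.

import math

class ValidatedSeries:
    # hypothetical sketch: each sample is checked exactly once, on append,
    # instead of re-validating the whole series on every redraw
    def __init__(self):
        self.points = []

    def append(self, x, y):
        if math.isfinite(x) and math.isfinite(y):   # validity check runs once per point
            self.points.append((x, y))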

Given a pair of images, how to automatically create an animation sequence morphing one image into the other?

Is there a programmatic way to convert two images into an animation sequence (e.g., an animated GIF) like the following example?
This image sequence, taken from a http://memrise.com course, doesn't seem to have manually edited frames; it appears to have been transformed automatically using some kind of shape-morphing algorithm. Is there a common term used to describe such an animation or algorithm? Is there a feature in ImageMagick or Photoshop/GIMP that generates such animations, given a pair of images?
Ideally the technique could be scriptable so I could create animations for several pairs of start-end images.
Edit: I have just been told about GIMP's tool under Filters -> Animation -> Blend, which appears to do the same thing as a jQuery morph: each frame i is start + (finish - start)/N*i. In other words, each pixel transitions independently from its start value to its finish value, without any shape morphing. The example above is more complicated, as it modifies the contours of both images to achieve its compelling effect.
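That per-pixel blend is straightforward to script; below is a minimal Python sketch using Pillow and NumPy (file names and frame count are placeholders, and both images are assumed to have the same dimensions):

import numpy as np
from PIL import Image

def crossfade(start_path, end_path, n_frames=10, out_path="blend.gif"):
    # frame i = start + (finish - start) / N * i  (per-pixel, no shape morphing)
    a = np.asarray(Image.open(start_path).convert("RGB"), dtype=float)
    b = np.asarray(Image.open(end_path).convert("RGB"), dtype=float)
    frames = [Image.fromarray((a + (b - a) * i / (n_frames - 1)).astype(np.uint8))
              for i in range(n_frames)]
    frames[0].save(out_path, save_all=True, append_images=frames[1:],
                   duration=100, loop=0)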
Other examples:
http://static.memrise.com/uploads/mems/32000121024054535.gif
http://static.memrise.com/uploads/mems/225428000121109232837.gif
I have written a tool that doesn't require setting manual keypoints and is not restricted to a particular domain (like faces). That said, the images have to be similar (e.g. two faces, or two cars from the same perspective).
https://github.com/kallaballa/Poppy
There is also a web version created with Emscripten.
I generated the above animation using the following command line:
poppy flame.png glyph.png flame.png
Although this is an old question, since ImageMagick is mentioned, for anyone who comes here from Google it may be worth looking at the ImageMagick plugin called shapemorph.
GIMP can't do that directly, but over the years a series of (now poorly maintained) plug-ins to do that were released by third parties. The keyword to search for is "morph" - you should find a bunch of stand-alone programs to do that as well, ranging from "gratis" to full-fledged Free Software, such as xmorph.
Given pairs of vector files (.wmf extension), it is possible to use linear interpolation of shape nodes in Visual Basic for Applications to create frames for GIF animations, though this would take a long time to explain. For some examples see
http://www.giless.co.uk/animatorMorphGIFs.htm (it is like a slideshow)
I have made some improvements since then, as well!
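The VBA itself isn't shown, but the core of the technique, linear interpolation of shape nodes, is just a per-node blend; here is a Python sketch of the idea (the (N, 2) array layout and matching node counts are assumptions):

import numpy as np

def interpolate_nodes(src, dst, t):
    # src and dst: (N, 2) arrays of shape nodes with matching node counts;
    # each node moves linearly between the two shapes as t goes 0.0 -> 1.0
    src, dst = np.asarray(src, float), np.asarray(dst, float)
    return src + (dst - src) * t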

Visualizing a large data series

I have a seemingly simple problem, but an easy solution is eluding me. I have a very large series (tens or hundreds of thousands of points), and I just need to visualize it at different zoom levels, but generally zoomed well out. Basically, I want to plot it in a tool like MATLAB or pyplot, but knowing that each pixel can't represent the potentially many hundreds of points that map to it, I'd like to see both the min and the max of all the array entries that map to a pixel, so that I can generally understand what's going on. Is there a simple way of doing this?
Try hexbin. By setting the reduce_C_function I think you can get what you want. Ex:
import matplotlib.pyplot as plt
import numpy as np

x, y = np.random.rand(2, 100000)        # example data; substitute your own
C = np.sin(10 * x) * np.cos(10 * y)     # C = f(x, y)
plt.hexbin(x, y, C=C, reduce_C_function=np.max)
plt.show()
would give you a hexagonal heatmap where the color of each hexagon is the maximum value in that bin.
If you only want to bin in one direction, see this method.
The first option you may want to try is Gephi: https://gephi.org/
Here is another option, though I'm not quite sure it will work. It's hard to say without seeing the data.
Try going to this link: http://bl.ocks.org/3887118. Do you see, toward the bottom of the page, data.tsv with all of the values? If you can save your data to resemble this, then the HTML code on that page should be able to build your data into the scatter plot example shown at that link.
Otherwise, try visiting this link to fashion your data to a more appropriate web page.
There is a set of research tools called TimeSearcher 1-3 that provide some examples of how to deal with large time-series datasets. Below are some example images from TimeSearcher 2 and 3.
I realized that a simple plot() in MATLAB actually gives me more or less what I want. When zoomed out, it renders all of the data points that map to a pixel column as a vertical line segment from the minimum to the maximum within that set, so as not to obscure the function's actual behavior. I used area() to increase the contrast.
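The same min/max rendering can also be reproduced explicitly in matplotlib; a rough sketch, with `n_columns` standing in for the plot's pixel width and the series assumed to be much longer than `n_columns`:

import numpy as np
import matplotlib.pyplot as plt

def envelope_plot(y, n_columns=800):
    # one (min, max) pair per screen column, like MATLAB's zoomed-out plot()
    chunks = np.array_split(np.asarray(y), n_columns)
    lo = np.array([c.min() for c in chunks])
    hi = np.array([c.max() for c in chunks])
    plt.fill_between(np.arange(n_columns), lo, hi)   # shaded band, as with area()
    plt.show()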

How can I speed up this 3D grid-based rendering system?

I have recently been developing an isometric rendering system to map out 3D grids in JavaScript. All of the items on the grid are cubes of equal dimensions; the only difference between them is the texture representing the value at that coordinate. My application requires large grids to be graphed, even though only a small portion is visible in the viewport at once.
Because I am using Canvas, which is slow to draw thousands of shapes per frame, I set my script to loop through each block but only draw its faces if they are (1) next to an empty grid space and (2) inside the viewport. This system works fine for smaller grids, but as my application will need considerably larger ones (1000+ x 1000+ x 128), I will need to add some performance improvements for the final product.
Does anyone who has worked with rendering systems know any way I can further optimize my engine? One thing I guess may be effective is to avoid looping through every grid value, even those that are not being drawn. However, I do not know an efficient way to decide whether to visit a grid value at all (I am currently going through EVERY value, then calculating whether it should be drawn).
If I have been too vague, please tell me and I will be happy to elaborate. Thank you for your time and expertise; I am a student and any help will greatly aid my learning.
A few pointers: you might want to have a look at classic culling algorithms using structures like octrees (or quadtrees, in your case).
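Before reaching for trees, a cheap first step on a uniform grid is to derive the visible index range directly from the viewport, so the render loop never visits off-screen cells at all; a minimal Python sketch of the idea (all names are hypothetical):

def visible_range(view_min, view_max, cell_size, grid_len):
    # map viewport bounds to grid indices; the loop then only
    # visits on-screen cells instead of scanning the whole grid
    lo = max(0, int(view_min // cell_size))
    hi = min(grid_len, int(view_max // cell_size) + 1)
    return range(lo, hi)

# e.g. iterate only the visible sub-box of the grid:
# for gx in visible_range(view.left, view.right, CELL, GRID_W):
#     for gy in visible_range(view.top, view.bottom, CELL, GRID_H):
#         draw_cell(gx, gy)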