I wonder how to find a certain keyword (potentially occurring multiple times) in a long audio file (say 1-2 hours), together with the corresponding start and end timestamps of each occurrence.
Let's say we do it with TensorFlow as described here. The problems I see are the following: in reality the keyword we train on can be a bit longer or shorter (for example you can say "no" or "nooooo", so its duration might range from 0.1 s to maybe 3 s). In the link they use padding for that, so the input for training and inference always has the same shape.
So what is, in practice, the best way to deal with:
Different input lengths of the audio snippets used for training? Padding and cutting might destroy important information or add nonsensical "emptiness".
Running inference over the long audio file? A moving window stepping sample by sample at 16 kHz would be the obvious way, but that can't be efficient (see the sketch after this list).
How to then get the timestamps?
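To make the windowing idea concrete, here is a minimal sketch (assuming a hypothetical Keras classifier `model` trained on fixed 1-second, 16 kHz snippets, and omitting whatever feature extraction that model actually expects): slide the window with a coarse hop such as 250 ms instead of one sample, and merge consecutive positive windows into start/end timestamps.

```python
import numpy as np

def find_keyword(audio, model, sr=16000, win_s=1.0, hop_s=0.25, threshold=0.5):
    """Slide a fixed 1-second window over `audio` with a coarse hop and
    return merged (start_s, end_s) spans flagged as containing the keyword.
    `model` is a stand-in for a trained binary classifier; feed it whatever
    features (spectrogram, MFCC, ...) it expects instead of the raw window."""
    win, hop = int(win_s * sr), int(hop_s * sr)
    hits = []
    for start in range(0, max(1, len(audio) - win + 1), hop):
        window = audio[start:start + win]
        score = float(model.predict(window[np.newaxis, :], verbose=0).ravel()[0])
        if score >= threshold:
            hits.append((start / sr, (start + win) / sr))

    # Merge overlapping/adjacent positive windows into single detections.
    merged = []
    for s, e in hits:
        if merged and s <= merged[-1][1]:
            merged[-1] = (merged[-1][0], e)
        else:
            merged.append((s, e))
    return merged
```

Stepping by 250 ms instead of a single sample cuts the number of forward passes by a factor of 4000, and the timestamp resolution then equals the hop size, which is usually acceptable for keyword search.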
Thanks!
I'm kinda new to Neural Networks and just started to learn coding them by trying some examples.
Two weeks ago I was searching for an interesting challenge and I found one. But I'm about to give up because it seems to be too hard for me... Still, I was curious to know whether any of you are able to solve this.
The Problem: Assume there are ".htm" files that contain tables about the same topic, but the table structure isn't the same for every file. For example: we have a lot of ".htm" files containing information about teacher substitutions per day per school. Because the structure of those ".htm" files isn't the same for every file, it would be hard to program a parser that could extract the data from those tables. So my thought was that this is a task for a Neural Network.
First Question: Is this a task a Neural Network can/should handle, or am I mistaken about that?
Because a Neural Network seemed to me to fit this kind of challenge, I tried to think of an input. I came up with two options:
First Input Option: Take the HTML code (only from the body tag) as a string and convert it into a tensor.
Second Input Option: Convert the HTML Tables into Images (via Canvas maybe) and feed this input to the DNN through Conv2D-Layers.
Second Question: Are those Options any good? Do you have any better solution to this?
After that I wanted to figure out how I would make a DNN output this heavily dynamic data for me. My thought was to convert my desired JSON output into tensors and feed them to the DNN while training, and for every prediction I would expect the DNN to return a tensor that is convertible back into a JSON output...
Third Question: Is it even possible to get such a detailed Output from a DNN? And if Yes: Do you think the Output would be suitable for this task?
Last Question: Assuming all my assumptions are correct - wouldn't training this DNN take forever? Let's say you have an RTX 2080 Ti for it. What would you guess?
I guess that's it. I hope I can learn a lot from you guys!
(I'm sorry about my bad English - it's not my native language)
Addition:
Here is a more in-depth example. Let's say we have a ".htm" file that looks like this:
The task would be to get all the relevant information from this table. For example:
All Students from Class "9c" don't have lessons in their 6th hour due to cancellation.
1) This is not a particularly suitable problem for a Neural Network, as your domain is structured data with clear dependencies inside it. Tree-based ML algorithms tend to show much better results on such problems.
2) Both of your choices of input are very unstructured. Learning from such data would be nearly impossible. There are clear ways to give more knowledge to the model. For example, you have the same data in different formats; the difference is only the structure. That means a model needs to learn a mapping from one structure to another; it doesn't need to know anything about the data itself. Hence, words can be tokenized with unique identifiers to remove unnecessary information. The .htm data can be parsed into a tree, and so can the JSON. Then there are different ways to represent graph structures, which can be used in an ML model (a small sketch of the tokenization step follows after this list).
3) It seems that the only adequate option for the output is a sequence of identifiers pointing to unique entities from the text. The whole problem is then similar to a Seq2Seq task, best solved by RNNs with an encoder-decoder architecture.
I believe that, if there is enough data and the .htm files don't have a huge amount of noise, the task can be completed. Training time hugely depends on the selected model and its complexity, as well as the diversity of the initial data.
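To illustrate point 2, here is a minimal sketch (assuming BeautifulSoup and a hypothetical substitution-plan file) of parsing the first table of a ".htm" file into a tree and tokenizing the cell text into integer identifiers; these id sequences, together with structural markers, are the kind of input a sequence model could consume.

```python
from bs4 import BeautifulSoup

def table_to_token_ids(html, vocab):
    """Parse the first <table> in `html` and return one list of integer
    token ids per row. `vocab` maps lower-cased words to ids and is grown
    on the fly, so other files reuse the ids of words already seen."""
    soup = BeautifulSoup(html, "html.parser")
    rows = []
    for tr in soup.find("table").find_all("tr"):
        row = []
        for cell in tr.find_all(["td", "th"]):
            for word in cell.get_text(strip=True).split():
                row.append(vocab.setdefault(word.lower(), len(vocab)))
        rows.append(row)
    return rows

# Hypothetical usage on one substitution plan:
# vocab = {}
# with open("substitutions.htm", encoding="utf-8") as f:
#     print(table_to_token_ids(f.read(), vocab))
```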
When you have variable-sized input images and use the ImageDeserializer to resize the images, how are you supposed to deal with a mean file? Computing the mean file is easy when the input images are all the same size. Wouldn't it be better if the ImageDeserializer were capable of computing the means?
The order in which image pre-processing steps are executed by default is:
Take a crop that has the desired image size
Subtract the mean
Hence, your mean file only has to contain a mean for the desired input size.
For computing the mean yourself, you will have to repeat these steps, at least to some degree. If you're on .NET, then you may want to have a look at this post where .NET image pre-processing is discussed.
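For illustration, here is a minimal sketch of that (not CNTK-specific; it assumes Pillow and NumPy and a hypothetical list of training image paths), repeating the scale-and-crop step and averaging the results:

```python
import numpy as np
from PIL import Image

def compute_mean_image(image_paths, size=(224, 224)):
    """Scale each image so it covers the target size, center-crop it, and
    average the results. Save the returned array and subtract it from both
    training and test images so the same mean is used everywhere."""
    width, height = size
    acc = np.zeros((height, width, 3), dtype=np.float64)
    for path in image_paths:
        img = Image.open(path).convert("RGB")
        # Scale the shorter side up to the target, then take a center crop.
        scale = max(width / img.width, height / img.height)
        img = img.resize((round(img.width * scale), round(img.height * scale)))
        left = (img.width - width) // 2
        top = (img.height - height) // 2
        img = img.crop((left, top, left + width, top + height))
        acc += np.asarray(img, dtype=np.float64)
    return acc / len(image_paths)
```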
I agree that it would be helpful if there was some tool to compute the mean file. I can understand though why the image deserializer does not do it automatically: You need to transform your training and test data via the same mean file. If you subtract mean from training data automatically, you'll have no way of repeating the same operation on the test data. Plus there is randomization that could make it messy in some cases, etc.
I've been looking for an answer everywhere and I was only able to find some bits and pieces. What I want to do is to load multiple mp3 files (kind of temporarily merge them) and then cut them into pieces using silence detection.
My understanding is that I can use Mp3FileReader for this but the questions are:
1. How do I read, say, 20 seconds of audio from an mp3 file? Do I need to read 20 × reader.WaveFormat.AverageBytesPerSecond bytes? Or maybe keep reading frames until the sum of Mp3Frame.SampleCount / Mp3Frame.SampleRate exceeds 20 seconds? (A sketch of that arithmetic follows after this list.)
2. How do I actually detect the silence? I would look at an appropriate number of consecutive samples to check if they are all below some threshold. But how do I access the samples regardless of whether they are 8- or 16-bit, mono or stereo, etc.? Can I directly decode an MP3 frame?
3. After I have detected silence at say sample 10465, how do I map it back to the mp3 frame index to perform the cutting without re-encoding?
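Regarding question 1, here is a tiny sketch of the frame-accumulation arithmetic (Python only to show the logic; `frames` stands in for the sequence of decoded Mp3Frame objects, represented here as (sample_count, sample_rate) pairs):

```python
def frames_for_duration(frames, target_seconds=20.0):
    """Count how many MP3 frames are needed to cover `target_seconds`,
    summing each frame's duration (sample_count / sample_rate)."""
    elapsed = 0.0
    for i, (sample_count, sample_rate) in enumerate(frames):
        elapsed += sample_count / sample_rate
        if elapsed >= target_seconds:
            return i + 1
    return len(frames)  # the file is shorter than the requested duration
```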
Here's the approach I'd recommend (which does involve re-encoding)
Use AudioFileReader to get your MP3 as floating point samples directly in the Read method
Find an open source noise gate algorithm, port it to C#, and use that to detect silence (i.e. when the noise gate is closed, you have silence; you'll want to tweak the threshold and attack/release times - see the sketch after this list)
Create a derived ISampleProvider that uses the noise gate, and in its Read method, does not return samples that are in silence
Either: pass the output into WaveFileWriter to create a WAV file and then encode the WAV file to MP3
Or: use NAudio.Lame to encode directly without a WAV step. You'll probably need to go from the sample provider back down to a 16-bit wave provider first
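Not NAudio code, but to make the noise-gate idea concrete, here is a minimal language-agnostic sketch in Python (a smoothed envelope of the absolute sample value with separate attack and release speeds, dropping samples while the envelope sits below the threshold); the same logic would live inside the derived ISampleProvider's Read method.

```python
import numpy as np

def remove_silence(samples, sample_rate, threshold=0.02,
                   attack_s=0.010, release_s=0.200):
    """Very simple noise gate over floating-point samples in [-1, 1]:
    keep only the samples where the smoothed envelope exceeds `threshold`."""
    attack = np.exp(-1.0 / (attack_s * sample_rate))
    release = np.exp(-1.0 / (release_s * sample_rate))
    env = 0.0
    keep = np.zeros(len(samples), dtype=bool)
    for i, s in enumerate(samples):
        level = abs(s)
        # Rise quickly (attack) when the signal gets louder, fall slowly (release).
        coeff = attack if level > env else release
        env = coeff * env + (1.0 - coeff) * level
        keep[i] = env >= threshold
    return np.asarray(samples)[keep]
```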
BEFORE READING BELOW: Mark's answer is far easier to implement, and you'll almost certainly be happy with the results. This answer is for those who are willing to spend an inordinate amount of time on it.
So with that said, cutting an MP3 file based on silence without re-encoding or fully decoding is actually possible... Basically, you can look at each frame's side info and each granule's gain & Huffman data to "estimate" the silence.
Find the silence
Copy all the frames from before the silence to a new file
now it gets tricky...
Pull the audio data from the frames after the silence, keeping track of which frame header goes with what audio data.
Start writing the second new file, but as you write out the frames, update the main_data_begin field so the bit reservoir is in sync with where the audio data really is.
MP3 is a compressed audio format. You can't just cut bits out and expect the remainder to still be a valid MP3 file. In fact, since it's a DCT-based transform, the bits are in the frequency domain instead of the time domain. There simply are no bits for sample 10465. There's a frame which contains sample 10465, and there's a set of bits describing all frequencies in that frame.
Simply cutting the audio at sample 10465 and continuing with some random other sample probably causes a discontinuity, which means the number of frequencies present in the resulting frame skyrockets. So that definitely means a full recode. The better way is to smooth the transition, but that's not a trivial operation, and the result is of course slightly different from the input, so it still means a recode.
I don't understand why you'd want to read 20 seconds of audio anyway. Where's that number coming from? You usually want to read everything.
Sound is a wave; it's entirely expected that it crosses zero, so being close to zero isn't special by itself. For a 20 Hz wave (the threshold of hearing), zero crossings happen 40 times per second, and each time you'll have multiple samples near zero. So you basically need multiple samples that are all close to zero, on both sides. Sample values like 5, 6, 7 aren't much for 16-bit audio, but they might very well be part of a wave that will have a maximum around 10000. You really should check an interval of at least 0.05 seconds to catch those 20 Hz sounds.
Since you detected silence over a 50 millisecond interval, you have a "position" that is well over a thousand samples wide at typical MP3 sample rates. With any bit of luck, there's a frame boundary in there; cut there. Otherwise it's time for re-encoding.
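As a quick illustration of that frame arithmetic (assuming MPEG-1 Layer III, where each frame decodes to 1152 samples per channel):

```python
SAMPLES_PER_FRAME = 1152  # MPEG-1 Layer III

def frame_containing(sample_index, samples_per_frame=SAMPLES_PER_FRAME):
    """Map a decoded PCM sample index to the frame it comes from (ignoring
    the encoder/decoder delay, which shifts this by a few hundred samples)."""
    return sample_index // samples_per_frame

print(frame_containing(10465))  # -> 9, i.e. the 10th frame
```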
I have a sequence of GPS values, each containing: timestamp, latitude, longitude, n_sats, gps_speed, gps_direction, ... (some subset of NMEA data). I'm not sure what quality the direction and speed values are. Further, I cannot expect the sequence to be evenly spaced w.r.t. the timestamp. I want to get a smooth trajectory at an even time step.
I've read that the Kalman filter is the tool of choice for such tasks. Is this indeed the case?
I've found some implementations of the Kalman Filter for Python:
http://www.scipy.org/Cookbook/KalmanFiltering
http://ascratchpad.blogspot.de/2010/03/kalman-filter-in-python.html
These, however, appear to assume regularly spaced data, i.e. fixed iteration steps.
What would it take to integrate support of irregularly spaced observations?
One thing I could imagine is to repeat/adapt the prediction step to a time-based model. Can you recommend such a model for this application? Would it need to take into account the NMEA speed values?
Having looked all over for an understandable resource on Kalman filters, I'd highly recommend this one: https://github.com/rlabbe/Kalman-and-Bayesian-Filters-in-Python
To your particular question regarding irregularly spaced observations: Look at Chapter 8 in the reference above, and under the heading "Nonstationary Processes". To summarize, you'll need to use a different state transition function and process noise covariance for each iteration. Those are the only things you'll need to change at each iteration, since they're the only components dependent on delta t.
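As a minimal sketch of that, here is a 1-D constant-velocity filter using FilterPy (the library that accompanies the book linked above); the assumption is that you run it once per coordinate axis and rebuild F and Q from each step's delta t.

```python
import numpy as np
from filterpy.kalman import KalmanFilter
from filterpy.common import Q_discrete_white_noise

def smooth_track(times, positions, meas_var=25.0, process_var=0.5):
    """Constant-velocity Kalman filter over irregularly spaced 1-D
    positions (e.g. projected x or y). F and Q are rebuilt from each
    step's dt, which is all that changes between iterations."""
    kf = KalmanFilter(dim_x=2, dim_z=1)   # state: [position, velocity]
    kf.x = np.array([positions[0], 0.0])
    kf.P *= 1000.0                        # vague initial uncertainty
    kf.H = np.array([[1.0, 0.0]])         # we only observe position
    kf.R *= meas_var

    estimates = [kf.x[0]]
    for i in range(1, len(times)):
        dt = times[i] - times[i - 1]
        kf.F = np.array([[1.0, dt],
                         [0.0, 1.0]])
        kf.Q = Q_discrete_white_noise(dim=2, dt=dt, var=process_var)
        kf.predict()
        kf.update(np.array([positions[i]]))
        estimates.append(kf.x[0])
    return np.array(estimates)
```

To incorporate the NMEA speed values you could extend the measurement vector to include velocity, but position-only updates as above are a reasonable starting point.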
You could also try kinematic interpolation to see if the results fit what you expect.
Here's a Python implementation of one of these algorithms: https://gist.github.com/talespaiva/128980e3608f9bc5083b
I'm trying to figure out if there is a good way to merge two HMMs into one, when the underlying states are the same, but the observations aren't temporally linked.
I have two independent observation streams describing the same hidden state space. The underlying order of each observation stream remains the same, but they are not emitted at the same time.
For instance, say I have audio recordings of two separate speakers reading aloud the same passage of text, where the hidden state space is the letters in the text, while the stream of phonemes from each recording comprises the observation space. Each speaker records the audio separately and uses a different cadence when reading.
I can clearly make a prediction of the text using each speaker independently, and try and reconcile the results after the fact... but I sense that combining the observation streams into a single HMM may produce a better result.
Does anyone know a good way to reconcile this?
Merging the states would require aligning these streams first... i.e. some kind of log-likelihood optimization.
But it's possible to use statistics from multiple streams to predict the "observations" - modern data compressors basically do just that.
Eg. see http://www.mattmahoney.net/dc/dce.html#Section_432
I am not sure if there are methods to merge two HMM's after they have each been fitted to different observation sequences.
But there exists an algorithm to train one hidden Markov model on multiple independent observation sequences.
It is covered, for example, in the paper "A tutorial on hidden Markov models and selected applications in speech recognition" by Rabiner.
Unfortunately, I haven't yet found an implementation of this algorithm.
Here is my corresponding question on stackexchange: https://stats.stackexchange.com/questions/53256/two-sequences-one-hmm
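For what it's worth (I can't say whether it matches Rabiner's multiple-sequence formulation exactly), the Python library hmmlearn will fit a single HMM to several independent observation sequences if you concatenate them and pass their individual lengths; a minimal sketch with made-up continuous observations:

```python
import numpy as np
from hmmlearn.hmm import GaussianHMM

rng = np.random.default_rng(0)
# Two independent observation sequences assumed to come from the same
# hidden process (e.g. two speakers reading the same text).
seq_a = rng.normal(size=(120, 1))
seq_b = rng.normal(size=(80, 1))

# hmmlearn runs Baum-Welch over both sequences at once when you
# concatenate them and supply the per-sequence lengths.
X = np.concatenate([seq_a, seq_b])
model = GaussianHMM(n_components=4, n_iter=50, random_state=0)
model.fit(X, lengths=[len(seq_a), len(seq_b)])

print(model.transmat_)
```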