Converting an executable file into an analog waveform signal - voice-recognition

I have been trying to convert a digital binary file (.exe) into a waveform so I can listen to the resulting audio. I have been looking for any software or open-source code that could help me achieve this, but with no luck.
My ultimate goal is to represent the .exe file as a spectrogram to analyse the behaviour of the frequencies in the executable file. My understanding is that I have to identify the range of frequencies first, which could be done by plotting the waveform.
Any reference would be appreciated.
Edit:
I have a collection of binary files and I need to classify them according to the statistical features of their "sound" (frequency behaviour). My plan was to get the waveform of the actual binary file (by treating each byte as one signed sample), then convert the waveform into a spectrogram image and apply the kind of deep-learning analysis used for voice recognition.
So the depth of each sample will be 8 bits, and the sampling rate will be either 8 kHz or 16 kHz. But I am confused about how to determine the frequencies present in the executable file.
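To make the plan concrete, here is a rough sketch of what I have in mind (Python with numpy/scipy/matplotlib assumed; the file name and the 8 kHz rate are placeholders, since the file has no inherent sample rate and the frequency axis only has meaning relative to whatever rate is assigned):

    # Treat each byte of the executable as one signed 8-bit sample, assign a
    # nominal sample rate, and render a spectrogram of the resulting "audio".
    import numpy as np
    import matplotlib.pyplot as plt
    from scipy.signal import spectrogram

    fs = 8000                                          # assumed sample rate in Hz
    raw = np.fromfile("sample.exe", dtype=np.int8)     # one signed byte per sample
    samples = raw.astype(np.float32) / 128.0           # normalise to roughly [-1, 1)

    f, t, Sxx = spectrogram(samples, fs=fs, nperseg=256, noverlap=128)
    plt.pcolormesh(t, f, 10 * np.log10(Sxx + 1e-12), shading="gouraud")
    plt.xlabel("Time [s]")
    plt.ylabel("Frequency [Hz]")
    plt.savefig("sample_spectrogram.png")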

Related

Handling Dynamic Sample Rates when Saving Audio File

So, I am recording a WAVE file using 16-bit PCM samples that are received from a widget that streams them over in real time. Pretty basic stuff, right? Except the problem is that the widget might dynamically change the sample rate of the audio data that it is sending.
It might start out at a nice 44.1 kHz stream but then might change to a 23 kHz sample rate, or vice-versa. My understanding is that conventional WAVE files do not handle varying sample rates like this, so I am trying to determine the best way to handle the situation.
One approach I came up with was to put something like a ResamplerDmoStream in front of the WaveFileWriter, locking the WaveFileWriter to a sample rate of 44.1 kHz, and just resampling all incoming data to 44.1 kHz.
Another idea that might work is to find a supported output file format that may have native support for varying sample rates, write all the data to that file and then perform a post-process resampling step to create a conventional 44.1 kHz WAVE file.
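To make the first idea concrete, here is a rough sketch of resampling every incoming block to a fixed 44.1 kHz before writing (shown in Python with scipy/numpy rather than NAudio, purely for illustration; process_block() is a hypothetical stand-in for wherever the widget's blocks arrive):

    from fractions import Fraction
    import numpy as np
    from scipy.signal import resample_poly
    from scipy.io import wavfile

    OUT_RATE = 44100
    chunks = []

    def process_block(samples, in_rate):
        """Resample one block of 16-bit PCM samples to OUT_RATE and buffer it."""
        ratio = Fraction(OUT_RATE, in_rate)
        # Resampling block by block introduces small artefacts at block edges;
        # a real implementation would use a streaming resampler instead.
        chunks.append(resample_poly(samples.astype(np.float64),
                                    ratio.numerator, ratio.denominator))

    def finish(path="output.wav"):
        """Write everything collected so far as a single fixed-rate WAV file."""
        audio = np.concatenate(chunks)
        wavfile.write(path, OUT_RATE, np.clip(audio, -32768, 32767).astype(np.int16))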
Anyone else out there had to deal with this kind of situation and has a better idea?
Thanks!
Peace!

extracting motion-compensated frames during HEVC encoding

I am trying to analyze H.265 coding performance. Is there a way to export the predicted frames during H.265/HEVC encoding? Specifically, how can I obtain the reconstructed frames after motion compensation has been applied but before the residual is added? Is there a way to do this with ffmpeg, or any other codec analysis tool?
Yes, you can do it with the HM decoder.
What you need to do is find the exact line of code in the TDecCu.cpp file where the two pointers piResi and piPred are added together to reconstruct the block. There, you can print piPred on its own.

How to save Gnuradio Waterfall Plot?

I want to measure the spectrum occupancy of any one of the GSM bands over 24 hours using GNU Radio and a USRP.
Is there any way to save the GNU Radio waterfall plot to an image file or any other format?
If not, is there any other way to show the spectrum occupancy over a certain amount of time in one image or graph?
Is there any way to save the GNU Radio waterfall plot to an image file or any other format?
Middle-mouse-button -> Save.
If not, is there any other way to show the spectrum occupancy over a certain amount of time in one image or graph?
This is a typical case for "offline processing and visualization". I'd recommend you build a GNU Radio flow graph that takes the samples from the USRP, applies decimating band-pass filters (ideally matched to the GSM pulse shape), calculates the power of the resulting sample streams (complex_to_mag_squared), and then saves these power vectors.
Then you could later easily visualize them with e.g. numpy/matplotlib, or whatever tool you prefer.
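As a rough sketch of that offline step, assuming the power stream was saved by a File Sink as raw 32-bit floats (the file name, stream rate and averaging length below are placeholders to adjust):

    import numpy as np
    import matplotlib.pyplot as plt

    fs = 200e3         # assumed rate of the decimated power stream (samples/s)
    avg_len = 20000    # average ~0.1 s of samples per plotted point

    power = np.fromfile("channel_power.f32", dtype=np.float32)
    n = (len(power) // avg_len) * avg_len
    power = power[:n].reshape(-1, avg_len).mean(axis=1)

    t = np.arange(len(power)) * avg_len / fs / 3600.0   # elapsed time in hours
    plt.plot(t, 10 * np.log10(power + 1e-20))
    plt.xlabel("Time [hours]")
    plt.ylabel("Channel power [dB]")
    plt.savefig("occupancy.png")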
The problem really is that GSM spectrum access happens on the order of microseconds, and you want to observe for 24 hours; no visualization in this world can both represent accurately what's happening and still be compact. You will need to come up with some intelligent measure built on top of the raw occupancy information.

How to detect silence and cut mp3 file without re-encoding using NAudio and .NET

I've been looking for an answer everywhere and I was only able to find some bits and pieces. What I want to do is to load multiple mp3 files (kind of temporarily merge them) and then cut them into pieces using silence detection.
My understanding is that I can use Mp3FileReader for this but the questions are:
1. How do I read, say, 20 seconds of audio from an MP3 file? Do I need to read 20 times reader.WaveFormat.AverageBytesPerSecond bytes? Or maybe keep reading frames until the sum of Mp3Frame.SampleCount / Mp3Frame.SampleRate exceeds 20 seconds?
2. How do I actually detect the silence? I would look at an appropriate number of consecutive samples to check whether they are all below some threshold. But how do I access the samples regardless of whether they are 8- or 16-bit, mono or stereo, etc.? Can I decode an MP3 frame directly?
3. After I have detected silence at, say, sample 10465, how do I map it back to the MP3 frame index to perform the cutting without re-encoding?
Here's the approach I'd recommend (which does involve re-encoding)
Use AudioFileReader to get your MP3 as floating point samples directly in the Read method
Find an open source noise gate algorithm, port it to C#, and use that to detect silence (i.e. when the noise gate is closed, you have silence; you'll want to tweak the threshold and attack/release times). See the sketch after this list.
Create a derived ISampleProvider that uses the noise gate, and in its Read method, does not return samples that are in silence
Either: pass the output into WaveFileWriter to create a WAV file and then encode the WAV file to MP3
Or: use NAudio.Lame to encode directly without a WAV step. You'll probably need to go from the sample provider back down to a 16-bit wave provider first
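A language-agnostic sketch of the gate logic to be ported (Python here; the threshold and hold length are placeholder values, and a simple hold counter stands in for full attack/release smoothing):

    def gate_silence_mask(samples, threshold=0.01, hold_samples=4410):
        """Return a list of booleans, True where the gate considers the signal silent.

        The gate "opens" as soon as |sample| exceeds the threshold and only marks
        silence again after hold_samples consecutive samples have stayed below it.
        """
        mask = []
        below = hold_samples   # start in the silent state until signal appears
        for s in samples:
            below = 0 if abs(s) > threshold else below + 1
            mask.append(below >= hold_samples)
        return mask

In the C# port the same loop would live inside the derived ISampleProvider's Read method, which then skips the samples the gate marks as silent.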
BEFORE READING BELOW: Mark's answer is far easier to implement, and you'll almost certainly be happy with the results. This answer is for those who are willing to spend an inordinate amount of time on it.
So with that said, cutting an MP3 file based on silence without re-encoding or full decoding is actually possible... Basically, you can look at each frame's side info and each granule's gain & Huffman data to "estimate" the silence.
Find the silence
Copy all the frames from before the silence to a new file
now it gets tricky...
Pull the audio data from the frames after the silence, keeping track of which frame header goes with what audio data.
Start writing the second new file, but as you write out the frames, update the main_data_begin field so the bit reservoir is in sync with where the audio data really is.
MP3 is a compressed audio format. You can't just cut bits out and expect the remainder to still be a valid MP3 file. In fact, since it's a DCT-based transform, the bits are in the frequency domain instead of the time domain. There simply are no bits for sample 10465. There's a frame which contains sample 10465, and there's a set of bits describing all frequencies in that frame.
Simply cutting the audio at sample 10465 and continuing with some random other sample will probably cause a discontinuity, which means the number of frequencies present in the resulting frame skyrockets. So that definitely means a full recode. The better way is to smooth the transition, but that's not a trivial operation, and the result is of course slightly different from the input, so it still means a recode.
I don't understand why you'd want to read 20 seconds of audio anyway. Where's that number coming from? You usually want to read everything.
Sound is a wave; it's entirely expected that it crosses zero, so being close to zero isn't special by itself. For a 20 Hz wave (the threshold of hearing), zero crossings happen 40 times per second, but each time you'll have multiple samples near zero. So you basically need multiple consecutive samples that are all close to zero, on both sides of zero. Sample values of 5, 6, 7 aren't much for 16-bit audio, but they might very well be part of a wave that has a maximum at 10000. You really should check at least 0.05 seconds to catch those 20 Hz sounds.
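To make that concrete, a small sketch of the windowed check (Python; 16-bit mono samples in a numpy array assumed, and the threshold of 200 is an arbitrary starting point to tune):

    import numpy as np

    def silent_windows(samples, sample_rate, threshold=200, win_sec=0.05):
        """Yield (start, end) indices of 50 ms windows whose samples all stay below the threshold."""
        win = int(win_sec * sample_rate)
        quiet = np.abs(samples.astype(np.int32)) < threshold
        for start in range(0, len(samples) - win + 1, win):
            if quiet[start:start + win].all():
                yield start, start + win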
Since you detected silence in a 50 millisecond interval, you have a "position" that's approximately several hundred samples wide. With a bit of luck, there's a frame boundary in there: cut there. Otherwise it's time for re-encoding.

Transform discrete data to continuous

I'm writing a program that has, as one facet, a wave filtration/resolution routine. The more data I collect, the bigger the files stored to the device get. I'm collecting data at discrete time steps, and in the interest of accuracy I'm doing this pretty frequently. However, I've noticed that the overall waveform tends to be wide enough that I could collect data at about half the current rate and still draw a waveform over the data that is accurate enough for my purposes.
So the question: is there a way to create a continuous mathematical description of the curve from this data? I haven't been able to find anything. My data is floats wrapped in NSNumbers contained in an NSArray.
The two things I would like to be able to do are find intersection points with a threshold and find local maxima. The ability to do either one of these would be sufficient.
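To show what I mean, here is a rough sketch of both operations on the raw samples (Python/numpy rather than Objective-C, purely for illustration; linear interpolation between neighbouring samples is assumed to be accurate enough):

    import numpy as np

    def threshold_crossings(y, dt, threshold):
        """Return interpolated times at which the sampled signal crosses the threshold."""
        d = y - threshold
        idx = np.where(np.sign(d[:-1]) * np.sign(d[1:]) < 0)[0]   # sign change between i and i+1
        frac = d[idx] / (d[idx] - d[idx + 1])                     # linear interpolation
        return (idx + frac) * dt

    def local_maxima(y):
        """Return indices of samples strictly greater than both neighbours."""
        return np.where((y[1:-1] > y[:-2]) & (y[1:-1] > y[2:]))[0] + 1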
-EDIT-
If anyone knows a good Objective-C FFT method for 1-dimensional real arrays, I would love to hear it.
Apple includes an FFT in the Accelerate framework.
Using Fourier Transforms
Example: FFT Sample
Also: Using the Apple FFT and Accelerate Framework