I'm hell-bent on making this work with NAudio, so please tell me if there's a way around this. I have streaming raw audio coming in from a serial device, which I'm trying to play through WaveOut.
Attempt 1:
' Constants: SampleRate = 8000, Channels = 1, AverageBPS = 8000 * 1, BlockAlign = 1, BitsPerSample = 8
Dim CustomWaveOutFormat = WaveFormat.CreateCustomFormat(WaveFormatEncoding.Pcm, SampleRate, Channels, AverageBPS, BlockAlign, BitsPerSample)
Dim rawStream = New RawSourceWaveStream(VoicePort.BaseStream, CustomWaveOutFormat)
'Run in background
Dim waveOut = New WaveOut(WaveCallbackInfo.FunctionCallback())
'Play stream
waveOut.Init(rawStream)
waveOut.Play()
This code works, but there's a tiny problem - the actual audio stream isn't raw PCM, it's raw mu-law, so playing the still-companded data sounds like Beethoven's 5th performed on a cheese grater. If I change the WaveFormat to WaveFormatEncoding.MuLaw, I get a bad format exception because it's raw audio and there are no RIFF headers.
So I moved over to converting it to PCM:
Attempt 2:
Dim reader = New MuLawWaveStream(VoicePort.BaseStream, SampleRate, Channels)
Dim pcmStream = WaveFormatConversionStream.CreatePcmStream(reader)
Dim waveOutStream = New BlockAlignReductionStream(pcmStream)
waveOut.Init(waveOutStream)
Here, CreatePcmStream tries to get the length of the stream (even though CanSeek = false) and fails.
Attempt 3:
waveOutStream = New BufferedWaveProvider(WaveFormat.CreateMuLawFormat(SampleRate, Channels))
*add samples in OnDataReceived()*
It, too, seems to suffer from the lack of a header.
I'm hoping there's something minor I missed in all of this. The device only streams audio when in use, and no data is received otherwise - a case which is handled by (1).
To make attempt (1) work, your RawSourceWaveStream should specify the format the data is really in. Then just use WaveFormatConversionStream.CreatePcmStream, taking that stream as its input:
Dim muLawStream = New RawSourceWaveStream(VoicePort.BaseStream, WaveFormat.CreateMuLawFormat(SampleRate, Channels))
Dim pcmStream = WaveFormatConversionStream.CreatePcmStream(muLawStream)
Attempt (2) is actually very close to working. You just need to make MuLawStream.Length return 0; you don't need it for what you are doing. BlockAlignReductionStream is also irrelevant here, since mu-law's block align is 1.
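For reference, a minimal sketch of what that custom stream could look like with the fix applied (only the class name and constructor signature come from the question; the body is an assumption):

Imports System.IO
Imports NAudio.Wave

Public Class MuLawWaveStream
    Inherits WaveStream

    Private ReadOnly source As Stream
    Private ReadOnly muLawFormat As WaveFormat

    Public Sub New(source As Stream, sampleRate As Integer, channels As Integer)
        Me.source = source
        muLawFormat = WaveFormat.CreateMuLawFormat(sampleRate, channels)
    End Sub

    Public Overrides ReadOnly Property WaveFormat As WaveFormat
        Get
            Return muLawFormat
        End Get
    End Property

    ' The fix: report a zero length so CreatePcmStream never tries to seek
    Public Overrides ReadOnly Property Length As Long
        Get
            Return 0
        End Get
    End Property

    Public Overrides Property Position As Long
        Get
            Return 0
        End Get
        Set(value As Long)
            ' ignored - a live stream cannot be repositioned
        End Set
    End Property

    Public Overrides Function Read(buffer() As Byte, offset As Integer, count As Integer) As Integer
        Return source.Read(buffer, offset, count)
    End Function
End Class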
Attempt (3) should work. I don't know what you mean by a lack of a header.
In NAudio you are building a pipeline of audio data. Each stage in the pipeline can have a different format. Your audio starts off in mu-law, then gets converted to PCM, and then it can be played. A BufferedWaveProvider is used when you want playback to continue even though your device has stopped providing audio data.
Edit: I should add that the IWaveProvider interface in NAudio is a simplified WaveStream. It has only a format and a Read method, and is useful for situations where Length is unknown and repositioning is not possible.
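To make that concrete, here is a minimal sketch of attempt (3) as a complete pipeline, reusing the SampleRate, Channels and VoicePort names from the question (everything else is an assumption; the conversion step is only needed if the output device refuses to open a mu-law format directly):

Imports System.IO.Ports
Imports NAudio.Wave

Private bufferedProvider As BufferedWaveProvider
Private waveOut As WaveOut

Private Sub StartPlayback()
    bufferedProvider = New BufferedWaveProvider(WaveFormat.CreateMuLawFormat(SampleRate, Channels))
    ' Convert mu-law to 16-bit PCM in case the device cannot play mu-law directly
    Dim pcmProvider = New WaveFormatConversionProvider(New WaveFormat(SampleRate, 16, Channels), bufferedProvider)
    waveOut = New WaveOut(WaveCallbackInfo.FunctionCallback())
    waveOut.Init(pcmProvider)
    waveOut.Play() ' plays silence until samples arrive, keeps running when they stop
End Sub

Private Sub OnDataReceived(sender As Object, e As SerialDataReceivedEventArgs)
    ' Append whatever just arrived; WaveOut pulls it from the buffer
    Dim count = VoicePort.BytesToRead
    Dim buffer(count - 1) As Byte
    VoicePort.Read(buffer, 0, count)
    bufferedProvider.AddSamples(buffer, 0, count)
End Sub

BufferedWaveProvider returns silence when its buffer is empty, so playback simply resumes when the device starts sending again.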
I am trying to record audio using a 12-bit resolution ADC, take the sample buffer and send it through CAN FD to another device, which takes the samples of this audio, creates a .wav and plays it. The problem is that I can see the microphone data being sent through CAN FD to the other device, but I am not able to transform this data into a .wav file properly and hear what I say through the microphone. I only hear beeps.
I'm creating a new .wav file every 4 CAN FD messages in order to get some kind of real-time communication and reduce the delay, but I'm not sure this is possible or whether I'm thinking about it the right way.
In this thread I take the message sent over CAN FD and concatenate it into a buffer in order to write it into a .wav file. I have tried bigger buffers, but that doesn't change the outcome.
How could I be able to take the data from the CAN FD and hear it?
Clarification: I know using CAN FD to transmit audio isn't the proper way to do it, but it is for a master's project.
struct canfd_frame frame;
CAN_MSG msg;
int trama_can[72];
int nbytes;

while (status_libreria == 0)
    ;

unsigned char buffer[256];
// FILE *fPtr;
int i = 0, x = 0;
// fPtr = fopen("Test.txt", "w");

while (1) {
    do {
        nbytes = read(s, &frame, sizeof(struct canfd_frame));
    } while (nbytes == 0);

    msg.id.ext = frame.can_id;
    msg.dlc = frame.len;
    if (msg.dlc > 8)
        msg.dlc = 8; // Protection until AC3LIB is adapted to CAN FD
    Numas_memcpy(&(msg.data.bdata), &(frame.data), msg.dlc);
    can_frame_2_ac3lib(&msg, BUS_VERTICAL);

    for (x = 0; x < 64; x++)
        buffer[i * 64 + x] = frame.data[x];
    printf("%d \r\n", frame.data[x]);
    printf("i:%d \r\n", i);

    // Copy the data to the .wav file and play it at the same time
    if (i == 3) {
        printf("Datos IN\r\n");
        write_wav("prueba.wav", 256, (short int *)buffer, 16000);
        // fwrite(buffer, 1, sizeof(buffer), fPtr);
        // fclose(fPtr);
        system("aplay prueba.wav -f cd");
        i = 0;
        system("rm prueba.wav");
    }
    i++;
}
(Image: the first 32 bytes of the audio file being recorded)
As you can see in the picture, the data is being recorded, and it is the same data as in the ADC; but when I play it, I only hear noise.
Simplify the problem first. Make sure you can transmit known data from one end to the other at low rates before anything else. I'm sure the suggestions below will sound far too trivial, but until you are absolutely confident you understand it all, I predict you will have many struggles.
Slowly - one frame per second, or even slower.
Learn to send one 0x55 byte from one end to the other and verify at the receiver.
Learn to send a few 0x55 in one frame and verify.
Learn to send 0x12345678 - verify it ends up with the bytes in the right order at the other end.
Learn to send a counter. Check it at the receiver and make sure you do not drop any data (see the sketch after this list).
Now do it all again but 10x faster.
Continue until you can send a counter at 10x the rate you need to for the audio without dropping any frames at all, for minutes and then hours.
Stress the rest of the system to make sure it still works under stress.
Only now, can you start to learn about sending audio.
Trust me, you will learn a lot!
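As an illustration of the counter step, the receiver-side check could be as small as the sketch below (VB.NET here only for consistency with the rest of this page; the few lines port directly to the embedded C receiver, and all names are assumptions):

Private expected As Long = 0
Private droppedTotal As Long = 0

' Call this with the counter value extracted from each received frame
Sub CheckCounter(received As Long)
    If received <> expected Then
        droppedTotal += received - expected
        Console.WriteLine($"Expected {expected}, got {received}: {received - expected} frame(s) lost")
    End If
    expected = received + 1
End Sub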
WebCodecs was released in Chrome 86, but there's no real code example of how to use it yet. Given a video URL, how can I extract video frames as ImageData using WebCodecs?
What you describe is the entire complex process of acquiring raw bitmap-like data (e.g. something you can dump onto a canvas) from a formatted file or a stream of data chunks.
In the case of files (including the case where your URL points to a complete file, such as an .mp4 file), this generally consists of 2 steps:
Parsing the container file into individual chunks of encoded video and/or audio
Decoding these chunks of encoded video/audio
WebCodecs only facilitates step 2 of this process, i.e. what is called decoding. The reasoning behind this decision was that parsing the container is computationally trivial, so you can already do it efficiently with the existing File APIs, but you still need to implement the container parsing/processing yourself.
Luckily, plenty of libraries exist already, many of which ironically existed long before the emergence of the WebCodecs API.
MP4Box is one example, helping you acquire encoded video and audio chunks, which you can then feed into a VideoDecoder or AudioDecoder.
With MP4Box, the key piece of your code will be centered around the onSamples callback you provide, and it'll look something like this:
mp4BoxFile.onSamples = (trackId, user, chunks) =>
{
    for (let i = 0; i < chunks.length; i++)
    {
        let chunk = chunks[i];
        let encodedChunk = new EncodedVideoChunk({
            // you'll need to deep-inspect chunk to figure these out
            type: "key", // or "delta"
            timestamp: ...
            duration: ...
            data: chunk.data
        });
        // pass encodedChunk to a VideoDecoder instance's decode method
    }
};
This is just a rough sketch of how your code will probably look; it probably won't work without more inspection, and it'll take a lot of trial and error, because this is very low-level stuff.
WebCodecs is not the silver bullet you probably expected, but it can help you build one.
Do WAV files allow any arbitrary number of bitsPerSample?
I have failed to get it to work with anything less than 8. For one thing, I am not sure how to define the blockAlign.
Dim ss As New Speech.Synthesis.SpeechSynthesizer
Dim info As New Speech.AudioFormat.SpeechAudioFormatInfo(AudioFormat.EncodingFormat.Pcm, 5000, 4, 1, 2500, 1, Nothing) ' FAILS
ss.SetOutputToWaveFile("TEST4bit.wav", info)
ss.Speak("I am 4 bit.")
My.Computer.Audio.Play("TEST4bit.wav")
AFAIK, no: a 4-bit PCM format is undefined. It wouldn't make much sense to have only 16 volume levels of audio; the quality would be horrible.
While it is technically possible, I know of no decent software (e.g. WaveLab) that supports it, though your very own player could.
Formula: blockAlign = channels * (bitsPerSample / 8)
So for a mono 4-bit it would be : blockAlign = 1 * ((double)4 / 8) = 0.5
Note that the use of double is necessary so as not to end up with 0.
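For illustration, the same calculation as a tiny sketch (the function name is an assumption):

Function BlockAlignBytes(channels As Integer, bitsPerSample As Integer) As Double
    Return channels * (bitsPerSample / 8.0)
End Function

' BlockAlignBytes(1, 4) = 0.5 -> half a byte, which the integer wBlockAlign
' field of a WAV header cannot express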
But if you look at the block-align definition below, it really does not make much sense to have an alignment of 0.5 bytes: one would have to work at the bit level, which is painful, and useless anyway, since non-compressed PCM at this quality would just sound horrible:
wBlockAlign
The block alignment (in bytes) of the waveform data. Playback software needs to process a multiple of wBlockAlign bytes of data at a time, so the value of wBlockAlign can be used for buffer alignment.
Reference: http://www-mmsp.ece.mcgill.ca/Documents/AudioFormats/WAVE/Docs/riffmci.pdf, page 59
Workaround:
If you really need 4-bit audio, switch to an ADPCM format.
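If you go that route, here is a sketch using NAudio rather than System.Speech (assuming a 16-bit PCM source file; the file names are placeholders):

Imports NAudio.Wave

' Convert a 16-bit PCM WAV into 4-bit MS ADPCM
Using reader As New WaveFileReader("TEST16bit.wav")
    Dim adpcmFormat As New AdpcmWaveFormat(reader.WaveFormat.SampleRate, reader.WaveFormat.Channels)
    Using converter As New WaveFormatConversionStream(adpcmFormat, reader)
        WaveFileWriter.CreateWaveFile("TEST4bitAdpcm.wav", converter)
    End Using
End Using

You could let the SpeechSynthesizer write a normal 16-bit PCM file first, then run it through this conversion.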
I am dealing with raw PCM audio data (the audio data of a PCM file without the header).
This data is provided to me in the form of a vector of double.
I would like to pass this data to another function, and this function expects the audio data in the form of a byte vector.
I tried
Dim nBytes() As Byte = nDoubles.SelectMany(Function(d) BitConverter.GetBytes(d)).ToArray()
but that didn't give the expected results.
I guess I have to deal with the conversion manually, but I am unsure how this should be done.
Can anybody help?
Thank you.
Since the required format for the other function is 16-bit, 48 kHz, which is the same as your source data, it's a simple case of converting the source to an array of Short, then serializing this as a Byte array.
The problem with the code you suggested is that the first step is missing, so it essentially serializes the Double array directly. However, you can re-use that code for the second step. So you can do something like:
' Step 1: convert each Double sample to a 16-bit value
' (Convert.ToInt16 rounds, and throws if a value is outside the Short range)
Dim nShorts() As Short = New Short(nDoubles.Length - 1) {}
For i = 0 To nDoubles.Length - 1
    nShorts(i) = Convert.ToInt16(nDoubles(i))
Next
' Step 2: serialize the Shorts as little-endian bytes
Dim nBytes() As Byte = nShorts.SelectMany(Function(s) BitConverter.GetBytes(s)).ToArray()
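An alternative sketch for the serialization step, letting Buffer.BlockCopy do it in one call (as above, this assumes the doubles already hold 16-bit sample values; if they are normalized to ±1.0, scale by 32767 before converting):

Dim nShorts() As Short = nDoubles.Select(Function(d) Convert.ToInt16(d)).ToArray()
Dim nBytes(nShorts.Length * 2 - 1) As Byte
Buffer.BlockCopy(nShorts, 0, nBytes, 0, nBytes.Length)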
I have a streamed WCF service. In one operation, I receive a file for upload purposes.
If I try to do something like this
request.FileContent.Length
Then I receive an OperationNotSupported exception. That's OK.
But how could I get the file size without actually transferring it entirely?
I know I could send this information along with the call, as a Header, but I don't want to go this way.
If WCF is able to limit the request size through maxReceiveMessageSize, how can I use the same information to check the message/stream size?
In general, you can't know the size of a byte stream without reading it all and counting the bytes, unless there is data at the start of the stream which tells you how many bytes there are in the entire stream, or some other out-of-band way to communicate the length of the stream, such as in the WCF message headers. You will have to go with the Header approach if you want to know the size without reading the stream.
The WCF maxReceiveMessageSize works by counting the bytes as they are received and throwing an exception if the limit is exceeded... it doesn't know the stream length either, and can't pre-emptively prevent the message being received without first reading the maximum allowed number of bytes.
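For reference, a sketch of where that limit is configured on a streamed binding (the values are placeholders, and note that the actual property is named MaxReceivedMessageSize):

Imports System.ServiceModel

' Reject requests larger than 64 MB (placeholder value)
Dim binding As New BasicHttpBinding() With {
    .TransferMode = TransferMode.Streamed,
    .MaxReceivedMessageSize = 64L * 1024 * 1024
}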
But how could I get the file size without actually transferring it entirely? I know I could send this information along with the call, as a Header, but I don't want to go this way.
You're going to have to send the size of the byte stream down the pipe first; there is no other way. (If there were some built-in way, that's all it would be doing anyway.)
It doesn't add much complexity to prepend it to the stream:
var bytes = File.ReadAllBytes("somefile.txt");
// prepend a 4-byte little-endian length before the payload
stream.Write(BitConverter.GetBytes((Int32)bytes.Length), 0, 4);
stream.Write(bytes, 0, bytes.Length);
and then on the other side when reading the stream:
byte[] fileLengthBytes = new byte[4];
stream.Read(fileLengthBytes, 0, 4);
int length = BitConverter.ToInt32(fileLengthBytes, 0);
// you know the size of the file now, log it or show the user
var fileBytes = new byte[length];
// Stream.Read may return fewer bytes than requested, so loop until all arrive
int totalRead = 0;
while (totalRead < length)
{
    int read = stream.Read(fileBytes, totalRead, length - totalRead);
    if (read == 0) break; // the stream ended early
    totalRead += read;
}
this is only an example - you may not want to create a byte[] buffer if your stream is large.