Android: MediaPlayer.setDataSource(FileDescriptor fd) vs MediaPlayer.setDataSource(FileDescriptor fd, long offset, long length)

I have created 3 AssetFileDescriptors for 3 sounds (placed in res/raw):
AssetFileDescriptor afd1 = mContext.getResources().openRawResourceFd(R.raw.mp3_file_1);
AssetFileDescriptor afd2 = mContext.getResources().openRawResourceFd(R.raw.mp3_file_2);
AssetFileDescriptor afd3 = mContext.getResources().openRawResourceFd(R.raw.mp3_file_3);
Then I put them into an array:
array.add(afd1);
array.add(afd2);
array.add(afd3);
Then I create an instance of MediaPlayer and have it play only the first sound in the array:
MediaPlayer mp = new MediaPlayer();
mp.setDataSource(array.get(0).getFileDescriptor());
mp.prepare();
mp.start();
However, all 3 sounds in the array are played.
Then I tried setDataSource(FileDescriptor, long, long) instead of setDataSource(FileDescriptor), and the MediaPlayer plays ONLY the 1st sound in the array, as I wanted.
AssetFileDescriptor afd = array.get(0);
mp.setDataSource(afd.getFileDescriptor(), afd.getStartOffset(), afd.getLength());
My question is: what is the difference between the TWO setDataSource methods above? And with the code I included here, why does setDataSource(array.get(0).getFileDescriptor()) play all 3 sounds in the array?
Many thanks.

The second overload is told the offset and length, while the first one plays for as long as the file descriptor keeps returning data. Raw resources are usually stored together in one archive (the APK), so all three sounds can sit in the same underlying file; reading from the bare file descriptor continues past the first song, where it finds the second song and then the third.
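For illustration, a minimal sketch (reusing the names from the question) of the three-argument overload, which limits the player to one resource's byte range inside the packaged file:
// Play only mp3_file_1 by restricting the player to that resource's byte range.
AssetFileDescriptor afd = mContext.getResources().openRawResourceFd(R.raw.mp3_file_1);
MediaPlayer mp = new MediaPlayer();
mp.setDataSource(afd.getFileDescriptor(), afd.getStartOffset(), afd.getLength());
afd.close();   // safe once setDataSource() has returned
mp.prepare();
mp.start();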

Related

Audio through CAN FD into headphones

I am trying to record audio using a 12-bit ADC, take the sample buffer, and send it through CAN FD to another device, which takes the audio samples, creates a .wav, and plays it. The problem is that I can see the microphone data being sent over CAN FD to the other device, but I am not able to turn this data into a proper .wav file and hear what I say into the microphone; I only hear beeps.
I'm creating a new .wav every 4 CAN FD messages in order to get some kind of real-time communication and reduce the delay, but I am not sure whether this is possible or whether I am approaching it the right way.
In this thread I take the message sent over CAN FD and append it to a buffer so I can write it into a .wav file. I have tried bigger buffers, but it doesn't change the outcome.
How can I take the data from CAN FD and hear it?
Clarification: I know using CAN FD to transmit audio isn't the proper way to do this, but it is for a master's project.
struct canfd_frame frame;
CAN_MSG msg;
int trama_can[72];
int nbytes;
while (status_libreria == 0)
    ;
unsigned char buffer[256];
// FILE * fPtr;
int i = 0, x = 0;
// fPtr = fopen("Test.txt", "w");
while (1) {
    do {
        nbytes = read(s, &frame, sizeof(struct canfd_frame));
    } while (nbytes == 0);
    msg.id.ext = frame.can_id;
    msg.dlc = frame.len;
    if (msg.dlc > 8)
        msg.dlc = 8; // Protection until AC3LIB is adapted to CAN FD
    Numas_memcpy(&(msg.data.bdata), &(frame.data), msg.dlc);
    can_frame_2_ac3lib(&msg, BUS_VERTICAL);
    for (x = 0; x < 64; x++) buffer[i * 64 + x] = frame.data[x];
    printf("%d \r\n", frame.data[x]);
    printf("i:%d \r\n", i);
    // Copy the data into a .wav file and play it at the same time
    if (i == 3) {
        printf("Datos IN\r\n");
        write_wav("prueba.wav", 256, (short int *)buffer, 16000);
        // fwrite(buffer, 1, sizeof(buffer), fPtr);
        // fclose(fPtr);
        system("aplay prueba.wav -f cd");
        i = 0;
        system("rm prueba.wav");
    }
    i++;
}
(Image: the first 32 bytes of the audio file being recorded.)
As you can see in the picture, the data is being recorded; moreover, it is the same data as in the ADC, but when I play it I only hear noise.
Simplify the problem first. Make sure you can transmit known data from one end to the other at low rates. I'm sure the suggestions below will sound far too trivial, but until you are absolutely confident you understand it all, I predict you will have many struggles.
- Go slowly - one frame per second, or even slower.
- Learn to send one 0x55 byte from one end to the other and verify it at the receiver.
- Learn to send a few 0x55 bytes in one frame and verify them.
- Learn to send 0x12345678 and verify that the bytes end up in the right order at the other end.
- Learn to send a counter. Check it at the receiver and make sure you do not drop any data (a sketch of such a check appears after this list).
- Now do it all again, but 10x faster.
- Continue until you can send a counter at 10x the rate you need for the audio without dropping any frames at all, for minutes and then hours.
- Stress the rest of the system to make sure it still works under stress.
- Only now can you start to learn about sending audio.
Trust me, you will learn a lot!
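For the counter step, here is a rough receiver-side sketch. It is written in Java and deliberately transport-agnostic: how each received CAN FD payload reaches check() depends on your existing code, and the big-endian 32-bit counter in the first 4 bytes is an assumed convention for the test sender.
class FrameCounterCheck {
    private long expected = -1;   // -1 until the first frame arrives
    private long dropped = 0;

    // Call once per received frame with its payload bytes.
    void check(byte[] payload) {
        long counter = ((payload[0] & 0xFFL) << 24)
                     | ((payload[1] & 0xFFL) << 16)
                     | ((payload[2] & 0xFFL) << 8)
                     |  (payload[3] & 0xFFL);
        if (expected >= 0 && counter != expected) {
            dropped += counter - expected;
            System.out.println("gap: expected " + expected + ", got " + counter);
        }
        expected = counter + 1;
    }

    long droppedFrames() { return dropped; }
}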

Failing to send a byte array through a serial port (Excel VBA)

I am trying to get information from an Excel worksheet and send it through a serial port as a byte array, using Windows API. This is just a small part of it:
lngSize = UBound(byteData) - LBound(byteData)
WriteFile(udtPorts(intPortID).lngHandle, byteData, lngSize, lngWrSize, udtCommOverlap)
My current problem: when I send a byte array of length 1 (just one byte), I receive it correctly (I am using a hyperterminal to check what I'm sending), but when I send an array of length > 1, the problem appears. The array is filled like this:
letter = 65
For i = 0 To 5
    dataToSend(i) = letter
    letter = letter + 1
Next
Instead of receiving it like this:
(Image: what I should receive.)
what I get is this:
(Image: what I receive.)
I really cannot figure out what could be the problem and I would be grateful if someone had a clue. Thank you!
First, the correct number of elements in an array is:
lngSize = UBound(byteData) - LBound(byteData) + 1 ' <-- add 1
More importantly, your code is not respecting the calling convention of the WriteFile API. Namely, the second parameter should be an LPCVOID pointer to the first Byte to transfer. Passing the array's name byteData to the function won't achieve that, because the array is a complex COM data structure, not a C-style array. What you should do is:
- First get the address of the array's data structure, using VarPtrArray.
- Then add 12 to it to get the address of the first byte.
Private Declare Function VarPtrArray Lib "VBE7" Alias "VarPtr" (var () As Any) As Long
...
WriteFile(udtPorts(intPortID).lngHandle, VarPtrArray(byteData()) + 12, lngSize, lngWrSize, udtCommOverlap)
' ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
For information about handling arrays' data and their pointers, excellent examples can be found on this page: https://www.codeproject.com/Articles/729235/VB-and-VBA-Array-Buffering
Also make sure that you declared your array as a Byte array, like:
Redim byteData(someSize) As Byte
' ^^^^^^^
There might be other errors in the parts of code you didn't show (possibly the settings of udtCommOverlap), but hopefully these corrections will put you on the right track.

How to retrieve and iterate over sound data chunks in iOS?

Given an audio file 'coolsound.aif', how might I approach the task of retrieving the sound data chunks (SSND chunks) and iterating over them to do some arbitrary processing? I hope to be able to achieve something like the following:
/*
 * Pseudocode of what I'd like to do
 */
// get SSND chunks out of the audio file somehow
Array soundDataChunks = getSSNDChunksFromSoundFile("coolsound.aif");
// iterate over each chunk
foreach(soundDataChunks as chunk){
    // Now iterate over each element in the waveform data array
    foreach(chunk.waveForm as w){
        // Just log it to the debug console for now
        Log(w);
    }
}
Other info:
- My aim is to use the waveform data to visualize the audio file graphically.
- The audio file was recorded using AudioToolbox in this manner.
- The SSND chunk has the following structure, as it appears in this source:
typedef struct {
    ID            chunkID;
    long          chunkSize;
    unsigned long offset;
    unsigned long blockSize;
    unsigned char WaveformData[];
} SoundDataChunk;
There are a few different APIs you can use; it all depends on how much control you want and what you plan on doing with the audio data. The ExtendedAudioFile API is made for basic operations like getting the audio data and drawing it.
It may seem like a lot of code to make this happen, since you have to create an AudioStreamBasicDescription object and configure it just right, but it allows you to read an entire audio file very quickly and access all the samples for drawing.
GW
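As a supplement (not the ExtendedAudioFile approach from the answer above), here is a rough sketch that walks an AIFF file's top-level chunks and stops at SSND, following the SoundDataChunk layout from the question. It is written in Java simply to make the byte-level structure explicit, and it assumes a plain big-endian AIFF container; files recorded with AudioToolbox may instead be AIFF-C or CAF, in which case this does not apply directly.
import java.io.*;

public class SsndChunkFinder {
    public static void main(String[] args) throws IOException {
        DataInputStream in = new DataInputStream(
                new BufferedInputStream(new FileInputStream("coolsound.aif")));
        byte[] four = new byte[4];
        in.readFully(four);                       // container ID, should be "FORM"
        int formSize = in.readInt();              // big-endian, per the AIFF spec
        in.readFully(four);                       // form type, should be "AIFF"
        long consumed = 4;                        // the form type counts toward formSize
        while (consumed < formSize) {
            in.readFully(four);
            String chunkId = new String(four, "US-ASCII");
            int chunkSize = in.readInt();
            if (chunkId.equals("SSND")) {
                int offset = in.readInt();        // 'offset' field of SoundDataChunk
                int blockSize = in.readInt();     // 'blockSize' field
                System.out.println("SSND found: offset=" + offset
                        + " blockSize=" + blockSize
                        + " waveformBytes=" + (chunkSize - 8 - offset));
                // The remaining bytes are WaveformData; read them here instead of
                // skipping if you want to process or visualize the samples.
                in.skipBytes(chunkSize - 8);
            } else {
                in.skipBytes(chunkSize);          // skip COMM, MARK, etc.
            }
            if ((chunkSize & 1) == 1) in.skipBytes(1);   // chunks are padded to an even size
            consumed += 8 + chunkSize + (chunkSize & 1);
        }
        in.close();
    }
}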

Streaming non-PCM raw audio using NAudio

I'm hell-bent on making this work with NAudio, so please tell me if there's a way around this. I have streaming raw audio coming in from a serial device, which I'm trying to play through WaveOut.
Attempt 1:
'Constants 8000, 1, 8000 * 1, 1, 8
Dim CustomWaveOutFormat = WaveFormat.CreateCustomFormat(WaveFormatEncoding.Pcm, SampleRate, Channels, AverageBPS, BlockAlign, BitsPerSample)
Dim rawStream = New RawSourceWaveStream(VoicePort.BaseStream, CustomWaveOutFormat)
'Run in background
Dim waveOut = New WaveOut(WaveCallbackInfo.FunctionCallback())
'Play stream
waveOut.Init(rawStream)
waveOut.Play()
This code works, but there's a tiny problem: the actual audio stream isn't raw PCM, it's raw mu-law. It plays the companded audio like Beethoven's 5th on a cheese-grater. If I change the WaveFormat to WaveFormatEncoding.MuLaw, I get a bad format exception because it's raw audio and there are no RIFF headers.
So I moved over to converting it to PCM:
Attempt 2:
Dim reader = New MuLawWaveStream(VoicePort.BaseStream, SampleRate, Channels)
Dim pcmStream = WaveFormatConversionStream.CreatePcmStream(reader)
Dim waveOutStream = New BlockAlignReductionStream(pcmStream)
waveOut.Init(waveOutStream)
Here, CreatePcmStream tries to get the length of the stream (even though CanSeek = false) and fails.
Attempt 3
waveOutStream = New BufferedWaveProvider(WaveFormat.CreateMuLawFormat(SampleRate, Channels))
' add samples in OnDataReceived()
It, too, seems to suffer from the lack of a header.
I'm hoping there's something minor I missed in all of this. The device only streams audio when in use, and no data is received otherwise - a case which is handled by (1).
To make attempt (1) work, your RawSourceWaveStream should specify the format that the data really is in. Then just use WaveFormatConversionStream.CreatePcmStream, taking that stream as its input:
Dim muLawStream = New RawSourceWaveStream(VoicePort.BaseStream, WaveFormat.CreateMuLawFormat(SampleRate, Channels))
Dim pcmStream = WaveFormatConversionStream.CreatePcmStream(muLawStream)
Attempt (2) is actually very close to working. You just need to make MuLawWaveStream.Length return 0; you don't need it for what you are doing. BlockAlignReductionStream is irrelevant to mu-law as well, since mu-law's block align is 1.
Attempt (3) should work. I'm not sure what you mean by the lack of a header.
In NAudio you are building a pipeline of audio data. Each stage in the pipeline can have a different format. Your audio starts off as mu-law, then gets converted to PCM, and can then be played. A BufferedWaveProvider is used when you want playback to continue even though your device has stopped providing audio data.
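NAudio's conversion stream does that mu-law-to-PCM step for you, but purely to illustrate what that stage of the pipeline does, here is a minimal G.711 mu-law decode sketch (in Java, since it is just the arithmetic; each 8-bit mu-law byte expands to one signed 16-bit PCM sample):
// Decode one G.711 mu-law byte to a signed 16-bit linear PCM sample.
static short muLawToPcm(byte muLawByte) {
    int u = ~muLawByte & 0xFF;              // mu-law bytes are stored complemented
    int sign = u & 0x80;
    int exponent = (u >> 4) & 0x07;
    int mantissa = u & 0x0F;
    int magnitude = (((mantissa << 3) + 0x84) << exponent) - 0x84;   // 0x84 is the bias
    return (short) (sign != 0 ? -magnitude : magnitude);
}

// Expanding a buffer doubles its size: 8 bits in, 16 bits out per sample.
static short[] decodeBuffer(byte[] muLaw) {
    short[] pcm = new short[muLaw.length];
    for (int i = 0; i < muLaw.length; i++) {
        pcm[i] = muLawToPcm(muLaw[i]);
    }
    return pcm;
}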
Edit: I should add that the IWaveProvider interface in NAudio is a simplified WaveStream. It has only a format and a Read method, and it is useful for situations where Length is unknown and repositioning is not possible.

InputStream.available() is returning 0 / no file size

Please consider the following j2me code segment:
1. FileConnection fc = (FileConnection) Connector.open("file:///root1/photos/2.png");
2. InputStream is = fc.openInputStream();
3. System.out.println(is.available());
4. byte[] fileBytes = new byte[is.available()];
5. int sizef = is.read(fileBytes);
6. System.out.println("filesize:"+sizef);
In this case, lines 3 and 6 both output 0 as the file size. But when I put an is.read(anyByteArray) call right after line 2, it shows the proper file size. Why is this happening? I think I don't understand these classes very well. Any pointers for a better understanding?
Thanks for your help.
I don't know about J2ME, but the Java 6 javadoc for InputStream.available() says this:
Note that while some implementations of InputStream will return the total number of bytes in the stream, many will not. It is never correct to use the return value of this method to allocate a buffer intended to hold all data in this stream.
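A minimal sketch of the usual workaround, assuming the same FileConnection setup as lines 1-2 above: read the stream in a loop instead of sizing the buffer with available(). (If you are on JSR-75, FileConnection.fileSize() is also a more direct way to get the size.)
ByteArrayOutputStream out = new ByteArrayOutputStream();
byte[] chunk = new byte[4096];
int n;
// Keep reading until end of stream; never use available() to size the buffer.
while ((n = is.read(chunk)) != -1) {
    out.write(chunk, 0, n);
}
byte[] fileBytes = out.toByteArray();
System.out.println("filesize:" + fileBytes.length);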