change pitch of .3gp file while playing - android-mediaplayer

In my project I have recorded sound and saved it as a .3gp file, but now I want to play it back with an audio effect: fast-forwarding or changing the pitch while it is playing. I tried MediaPlayer, but it does not work for this. Then I tried AudioTrack, but AudioTrack only takes a raw byte stream as input. I just want to play the .3gp file and change the pitch during playback. My attempt is below.
Help me... thanks in advance...
public void play() {
    // NOTE: getExternalStorageDirectory() already resolves to /sdcard,
    // so only the subdirectory is appended here.
    File path = new File(
            Environment.getExternalStorageDirectory().getAbsolutePath()
                    + "/meditest/");
    File[] f = path.listFiles();
    isPlaying = true;
    int bufferSize = AudioTrack.getMinBufferSize(outfrequency,
            channelConfigurationout, audioEncoding);
    short[] audiodata = new short[bufferSize];
    try {
        DataInputStream dis = new DataInputStream(
                new BufferedInputStream(new FileInputStream(f[0])));
        audioTrack = new AudioTrack(
                AudioManager.STREAM_MUSIC, outfrequency,
                channelConfigurationout, audioEncoding, bufferSize,
                AudioTrack.MODE_STREAM);
        audioTrack.setPlaybackRate((int) (frequency * 1.5));
        // Set the volume of played media to maximum.
        audioTrack.setStereoVolume(1.0f, 1.0f);
        Log.d("Clapper", "player start");
        audioTrack.play();
        while (isPlaying && dis.available() > 0) {
            int i = 0;
            while (dis.available() > 0 && i < audiodata.length) {
                audiodata[i] = dis.readShort();
                i++;
                if (i % 50 == 0)
                    Log.d("Clapper", "playing now " + i);
            }
            audioTrack.write(audiodata, 0, audiodata.length);
        }
        Log.d("Clapper", "AUDIO LENGTH: " + audiodata.length);
        dis.close();
        audioTrack.stop();
    } catch (Throwable t) {
        Log.e("AudioTrack", "Playback failed", t);
    }
    Log.d("Clapper", "AUDIO state: " + audioTrack.getPlayState());
    talkAnimation.stop();
    if (audioTrack.getPlayState() != AudioTrack.PLAYSTATE_PLAYING) {
        runOnUiThread(new Runnable() {
            public void run() {
                imgtalk.setBackgroundResource(R.drawable.talk1);
            }
        });
    }
}

I tried a library called Sonic. It's aimed primarily at speech, as it uses the PSOLA algorithm to change pitch and tempo.
Sonic Library

I see your problem. MediaPlayer does not support changing the pitch of audio during playback.

Consider using a SoundPool:
http://developer.android.com/reference/android/media/SoundPool.html
It supports changing the pitch in real time while playing. From the documentation:
The playback rate can also be changed. A playback rate of 1.0 causes the sound to play at its original frequency (resampled, if necessary, to the hardware output frequency). A playback rate of 2.0 causes the sound to play at twice its original frequency, and a playback rate of 0.5 causes it to play at half its original frequency. The playback rate range is 0.5 to 2.0.
Once the sounds are loaded and play has started, the application can trigger sounds by calling SoundPool.play(). Playing streams can be paused or resumed, and the application can also alter the pitch by adjusting the playback rate in real-time for doppler or synthesis effects.
http://developer.android.com/reference/android/media/SoundPool.html#setRate(int, float)
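
As a minimal sketch of that approach (the file path is a placeholder, and keep in mind SoundPool is meant for short clips that are fully decoded into memory):

SoundPool soundPool = new SoundPool(1, AudioManager.STREAM_MUSIC, 0);
// Loading is asynchronous, so wait for the load-complete callback before playing.
soundPool.setOnLoadCompleteListener(new SoundPool.OnLoadCompleteListener() {
    @Override
    public void onLoadComplete(SoundPool pool, int sampleId, int status) {
        if (status == 0) {
            // play(soundID, leftVol, rightVol, priority, loop, rate); rate 1.0f = original pitch
            int streamId = pool.play(sampleId, 1.0f, 1.0f, 1, 0, 1.0f);
            // Shift the pitch (and speed) up by 50% while the stream is playing; range is 0.5-2.0.
            pool.setRate(streamId, 1.5f);
        }
    }
});
soundPool.load("/sdcard/meditest/recording.3gp", 1);

Note that setRate resamples, so pitch and tempo change together; to change pitch independently of speed you still need a time-stretching library such as Sonic.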

If you want to change the pitch while playing a sound, you have to use SoundPool; this is the best way to do it. You can fast-forward the playback by some amount and you will hear that the pitch has changed.

Related

vlcj - How to change the volume of audio before playing it?

I tried this
val player: MediaPlayer = MediaPlayerFactory("-vvv").mediaPlayers().newMediaPlayer()
val result0: Boolean = player.audio().setVolume(50) // result0: true
player.media().play("/path/to/audio.ogg")
val result1: Boolean = player.audio().setVolume(50) // result1: false
and this
val player: MediaPlayer = MediaPlayerFactory("-vvv").mediaPlayers().newMediaPlayer()
val result0 = player.audio().setVolume(50) // result0: true
player.media().prepare("/path/to/audio.ogg")
val result1: Boolean = player.audio().setVolume(50) // result1: false
player.controls().play()
val result2: Boolean = player.audio().setVolume(50) // result2: false
but the volume remains at 100%.
The only way I found is to make something like this
val player: MediaPlayer = MediaPlayerFactory("-vvv").mediaPlayers().newMediaPlayer()
player.events().addMediaPlayerEventListener(object : MediaPlayerEventAdapter() {
    override fun mediaPlayerReady(mediaPlayer: MediaPlayer) {
        mediaPlayer.submit {
            mediaPlayer.audio().setVolume(50)
        }
    }
})
player.media().play("/path/to/audio.ogg")
But the solution is a bit far from ideal, because it starts to play, plays a bit, and then, whoosh, the volume has changed.
I tried vlcj 4.4.0 and 4.5.2, VLC 3.0.8 and 3.0.10, jdk8 and 14, but it works in the same way.
This is something that unfortunately does not work in VLC 3.x, but does work in the upcoming VLC 4.x (at the time of writing this answer, VLC 4 is still in development).
The following code works for me using the latest VLC 4 built from source, and the latest vlcj-5 snapshot:
import uk.co.caprica.vlcj.player.component.AudioPlayerComponent;
import uk.co.caprica.vlcj.test.VlcjTest;

public class AudioMediaPlayerComponentTest extends VlcjTest {

    public static void main(String[] args) throws Exception {
        String mrl = "/home/music/some-cool-synthwave-tune.mp3";
        AudioPlayerComponent audioMediaPlayerComponent = new AudioPlayerComponent();
        audioMediaPlayerComponent.mediaPlayer().audio().setVolume(5);
        audioMediaPlayerComponent.mediaPlayer().media().play(mrl);
        Thread.currentThread().join();
    }
}
The initial volume for the media player comes from the OS volume settings, and in fact the OS volume setting is linked both ways to the media player. Changing the volume in one place is reflected in the other.
Volume handling through LibVLC generally just seems much better in VLC 4.
If you're stuck on VLC 3, which is reasonable at the present time, then unfortunately you're also stuck with some sort of compromise solution like using the "ready" event that you've already found.
All the ready event does is to wait for the first position-changed event, and that event was created specifically as a compromise for purposes like this.
I tested all the native event callbacks available for the media player, and nothing worked to set the volume before playback had actually started.
This leaves you with the following, as you already found:
import uk.co.caprica.vlcj.player.base.MediaPlayer;
import uk.co.caprica.vlcj.player.component.AudioPlayerComponent;
import uk.co.caprica.vlcj.test.VlcjTest;

public class AudioMediaPlayerComponentTest extends VlcjTest {

    public static void main(String[] args) throws Exception {
        String mrl = "/home/music/some-cool-synthwave-tune.mp3";
        AudioPlayerComponent audioMediaPlayerComponent = new AudioPlayerComponent() {
            @Override
            public void mediaPlayerReady(MediaPlayer mediaPlayer) {
                mediaPlayer.audio().setVolume(30);
            }
        };
        audioMediaPlayerComponent.mediaPlayer().media().play(mrl);
        Thread.currentThread().join();
    }
}
A completely sideways alternative might be to play the shortest possible silent media as a kind of pre-roll - when that media is finished (there's a finished or stopped event you can listen for) you should then be able to set the volume and play your actual media. I did not try this.
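
Untested, but a sketch of that pre-roll idea might look like the following, reusing the vlcj 4 event API from the snippets above (the silent clip path is a placeholder, and the guard flag stops the real media from being replayed when it finishes):

import uk.co.caprica.vlcj.factory.MediaPlayerFactory;
import uk.co.caprica.vlcj.player.base.MediaPlayer;
import uk.co.caprica.vlcj.player.base.MediaPlayerEventAdapter;

public class PreRollVolumeTest {

    public static void main(String[] args) throws Exception {
        MediaPlayer player = new MediaPlayerFactory().mediaPlayers().newMediaPlayer();
        player.events().addMediaPlayerEventListener(new MediaPlayerEventAdapter() {

            private boolean preRollDone = false;

            @Override
            public void finished(MediaPlayer mediaPlayer) {
                if (preRollDone) {
                    return; // the real media finished; nothing more to do
                }
                preRollDone = true;
                // The silent pre-roll ended: set the volume, then start the real media.
                mediaPlayer.submit(() -> {
                    mediaPlayer.audio().setVolume(50);
                    mediaPlayer.media().play("/path/to/audio.ogg");
                });
            }
        });
        player.media().play("/path/to/silence.mp3"); // shortest possible silent clip
        Thread.currentThread().join();
    }
}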

What image format should be employed for real time processing using Camera2 in android?

I am developing an Android application which processes camera preview frames and displays the processed frames on a texture. At first I tested with the camera1 api; it works fine for real-time image processing.
private class CameraPreviewCallback implements Camera.PreviewCallback {
    @Override
    public void onPreviewFrame(byte[] data, Camera camera) {
        processingRunnable.setNextFrame(data, camera);
    }
}
Then I changed my code to use the camera2 api. To get preview frames, I set the ImageFormat to YUV_420_888:
mImageReaderPreview = ImageReader.newInstance(mPreviewSize.getWidth(), mPreviewSize.getHeight(), ImageFormat.YUV_420_888, 3);
mImageReaderPreview.setOnImageAvailableListener(mOnPreviewAvailableListener, mBackgroundHandler);
private final ImageReader.OnImageAvailableListener mOnPreviewAvailableListener = new ImageReader.OnImageAvailableListener() {
    @Override
    public void onImageAvailable(ImageReader reader) {
        Image mImage = reader.acquireLatestImage();
        if (mImage == null) {
            return;
        }
        processingRunnable.setNextFrame(convertYUV420888ToNV21(mImage));
        mImage.close();
    }
};
However, it works slower than camera1. Maybe that is because of the extra conversion from YUV_420_888 to NV21, since camera1 provides NV21 frames directly.
Conversion could be expensive, depending on how you implement it and what the layout of the YUV_420_888 data is on a given device.
Certainly, if it's written in pure Java it's probably going to be slow.
That said, if the device you're using is at the LEGACY hardware level, camera2 has to run in a legacy mode that can be slow for receiving YUV information. For those devices, staying on API1 may be preferable for your use case.
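
For reference, here is a minimal sketch of what a convertYUV420888ToNV21 helper like the one referenced above might look like in pure Java (an assumption about the asker's helper, not their actual code; it repacks the three planes into NV21's Y-then-interleaved-VU layout and handles both chroma pixel strides):

import android.media.Image;
import java.nio.ByteBuffer;

final class YuvUtil {

    static byte[] convertYUV420888ToNV21(Image image) {
        int width = image.getWidth();
        int height = image.getHeight();
        int ySize = width * height;
        byte[] nv21 = new byte[ySize + ySize / 2];

        // Plane 0 is Y; copy it row by row in case rowStride > width.
        ByteBuffer yBuf = image.getPlanes()[0].getBuffer();
        int yRowStride = image.getPlanes()[0].getRowStride();
        int pos = 0;
        for (int row = 0; row < height; row++) {
            yBuf.position(row * yRowStride);
            yBuf.get(nv21, pos, width);
            pos += width;
        }

        // Planes 1 and 2 are U (Cb) and V (Cr); NV21 wants them interleaved as VU.
        // pixelStride is 1 for planar layouts and 2 for semi-planar ones.
        ByteBuffer uBuf = image.getPlanes()[1].getBuffer();
        ByteBuffer vBuf = image.getPlanes()[2].getBuffer();
        int uvRowStride = image.getPlanes()[2].getRowStride();
        int uvPixelStride = image.getPlanes()[2].getPixelStride();
        for (int row = 0; row < height / 2; row++) {
            for (int col = 0; col < width / 2; col++) {
                int uvIndex = row * uvRowStride + col * uvPixelStride;
                nv21[pos++] = vBuf.get(uvIndex);
                nv21[pos++] = uBuf.get(uvIndex);
            }
        }
        return nv21;
    }
}

Even this simple repack touches every pixel of every frame, which is why the answer above suggests moving it to native code, or avoiding the conversion entirely, if the Java version shows up in profiling.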

C# Audio File is played in a loop although it is stopped

I have an older implementation using NAudio 1.6 to play a ring tone signalling an incoming call in an application. As soon as the user accepts the call, I stop the playback.
Basically the following is done:
1. As soon as I get an event that a call must be signalled, a timer is started
2. Inside this timer, Play() is called on the player
3. When the timer fires again, a check is performed to see whether the file is still playing, by comparing the CurrentTime property against the TotalTime property of the WaveStream
4. When the user accepts the call, Stop() is called on the player and the timer is stopped as well
The point is that we sometimes run into cases where the playback keeps repeating although the timer is stopped and Stop() was called on the player.
In the following link I read that the classes BufferedWaveProvider and WaveChannel32, which are used in the code, always pad the buffer with zeroes:
http://mark-dot-net.blogspot.com/2011/05/naudio-and-playbackstopped-problem.html
Is it possible that the non-stopping playback is due to the usage of the classes BufferedWaveProvider and WaveChannel32?
NAudio 1.7 introduces the AudioFileReader class. Does this class also pad with zeroes? I did not find a property like PadWithZeroes on it. Does it make sense to use AudioFileReader in this case of looped playback?
Below is the code of the current implementation of the TimerElapsed:
void TimerElapsed(object sender, ElapsedEventArgs e)
{
    try
    {
        WaveStream stream = _audioStream as WaveStream;
        if (stream != null && stream.CurrentTime >= stream.TotalTime)
        {
            StartPlayback();
        }
    }
    catch (Exception ex)
    {
        //do some actions here
    }
}
The following code creates the input stream:
private WaveStream CreateWavInputStream(string path)
{
    WaveStream readerStream = new WaveFileReader(path);
    if (readerStream.WaveFormat.Encoding != WaveFormatEncoding.Pcm)
    {
        readerStream = WaveFormatConversionStream.CreatePcmStream(readerStream);
        readerStream = new BlockAlignReductionStream(readerStream);
    }
    if (readerStream.WaveFormat.BitsPerSample != 16)
    {
        var format = new WaveFormat(readerStream.WaveFormat.SampleRate, 16, readerStream.WaveFormat.Channels);
        readerStream = new WaveFormatConversionStream(format, readerStream);
    }
    WaveChannel32 inputStream = new WaveChannel32(readerStream);
    return inputStream;
}

Using Kinect to calculate distance traveled

I'm trying to develop what seems to be a simple program that uses the Kinect for Xbox 360 to calculate the distance traveled by a person. The room that the Kinect will be pointed at will be 10 x 10. After the user presses the button, the subject will move about this space. Once the subject reaches their final destination in the area, the user will press the button again. The Kinect will then output how far the subject traveled between the two button presses. Having never developed for the Kinect before, I have found it pretty daunting to get started. My issue is that I'm not entirely sure what I should be using to measure the distance. In my research, I've found ways to calculate the distance an object is FROM the Kinect, but that's about it.
What you have here is a simple question of dealing with a Cartesian plane. The Kinect tracks 20 joints that exist in XYZ space, and distance is measured in meters. In order to access these joints, you have these statements inside a "Tracker" class (this is C#... not sure if you're using C# or C++ with the SDK):
public Tracker(KinectSensor sn, MainWindow win, string fileName)
{
    window = win;
    sensor = sn;
    try
    {
        sensor.Start();
    }
    catch (IOException)
    {
        sensor = null;
        MessageBox.Show("No Kinect sensor found. Please connect one and restart the application", "*****ERROR*****");
        return;
    }
    sensor.SkeletonFrameReady += SensorSkeletonFrameReady; //Frame handlers
    sensor.ColorFrameReady += SensorColorFrameReady;
    sensor.SkeletonStream.Enable();
    sensor.ColorStream.Enable();
}
These access the color and skeleton streams from the Kinect. The skeleton stream contains the joints, so you focus on that with these statements:
//Start sending skeleton stream
private void SensorSkeletonFrameReady(object sender, SkeletonFrameReadyEventArgs e)
{
    //Access the skeleton frame
    using (SkeletonFrame skeletonFrame = e.OpenSkeletonFrame())
    {
        if (skeletonFrame != null)
        {
            //Check to see if there is any data in the skeleton
            if (this.skeletons == null)
                //Allocate array of skeletons
                this.skeletons = new Skeleton[skeletonFrame.SkeletonArrayLength];
            //Copy skeletons from this frame
            skeletonFrame.CopySkeletonDataTo(this.skeletons);
            //Find first tracked skeleton, if any
            Skeleton skeleton = this.skeletons.Where(s => s.TrackingState == SkeletonTrackingState.Tracked).FirstOrDefault();
            if (skeleton != null)
            {
                //Joints to be displayed, projected, recorded, etc.
                Joint leftFoot = skeleton.Joints[JointType.FootLeft];
            }
        }
    }
}
So, at the beginning of your program, you want to pick a joint (there are 20... choose one that will ALWAYS be facing towards the Kinect when you are executing the program) and get its location with something like the following statements:
if (skeleton.Joints[JointType.FootLeft].TrackingState == JointTrackingState.Tracked)
{
    double xPosition = skeleton.Joints[JointType.FootLeft].Position.X;
    double yPosition = skeleton.Joints[JointType.FootLeft].Position.Y;
    double zPosition = skeleton.Joints[JointType.FootLeft].Position.Z;
}
At the end, you'll want to have a slight delay before you stop the stream... some time between the click and when you shut off the stream from the Kinect. You will then do the math you need to do to get the distance between the two points, as sketched below. If you don't have the delay, you won't be able to get your final Cartesian point.
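
To make that last step concrete, here is a minimal sketch of the displacement math (written in Java for consistency with the other sketches on this page; the C# arithmetic is identical, and the class and method names are made up for illustration). The joint coordinates are in meters, so the result is in meters too:

public final class DistanceTracker {

    private double startX, startY, startZ;

    // Call on the first button press with the tracked joint's position.
    public void start(double x, double y, double z) {
        startX = x;
        startY = y;
        startZ = z;
    }

    // Call on the second button press; returns the straight-line displacement in meters.
    public double metersTraveled(double x, double y, double z) {
        double dx = x - startX;
        double dy = y - startY;
        double dz = z - startZ;
        return Math.sqrt(dx * dx + dy * dy + dz * dz);
    }
}

Note this gives the straight-line distance between the two points, not the length of the path walked; if you need the latter, accumulate the same per-frame deltas between consecutive skeleton frames instead.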

Objective-C - Passing Streamed Data to Audio Queue

I am currently developing an app on iOS that reads IMA-ADPCM audio data in over a TCP socket, converts it to PCM, and then plays the stream. At this stage, I have completed the class that pulls (or should I say reacts to pushes of) the data from the stream and decodes it to PCM. I have also set up the Audio Queue class and have it playing a test tone. Where I need assistance is the best way to pass the data into the Audio Queue.
The audio data comes out of the ADPCM decoder as 8 kHz 16-bit LPCM in 640-byte chunks (it originates as 160 bytes of ADPCM data but decompresses to 640). It comes into the function as a uint8_t array and passes out as an NSData object. The stream is a 'push' stream, so every time audio is sent it will create/flush the object.
-(NSData*)convertADPCM:(uint8_t[]) adpcmdata {
The Audio Queue callback, of course, is a pull function that goes looking for data on each pass of the run loop; on each pass it runs:
-(OSStatus) fillBuffer: (AudioQueueBufferRef) buffer {
I've been working on this for a few days; the PCM conversion was quite taxing, and I am having a little trouble assembling in my head the best way to bridge the data between the two. It's not as if I am creating the data, in which case I could simply incorporate data creation into the fillBuffer routine; rather, the data is being pushed.
I did set up a circular buffer of 0.5 seconds in a uint16_t[], but I think I have worn my brain out and couldn't work out a neat way to push and pull from the buffer, so I ended up with snap, crackle, pop.
I have completed the project mostly on Android, but found AudioTrack a very different beast to Core Audio Queues.
At this stage I will also say I picked up a copy of Learning Core Audio by Adamson and Avila and found it an excellent resource for anyone looking to demystify Core Audio.
UPDATE:
Here is the buffer management code:
-(OSStatus) fillBuffer: (AudioQueueBufferRef) buffer {
    int frame = 0;
    double frameCount = bufferSize / self.streamFormat.mBytesPerFrame;
    // bufferSize = 8000 bytes, so frameCount = 8000 / 2 = 4000 frames
    //
    // incoming buffer uint16_t[] convAudio holds 64400 bytes (big I know - 100 x 644 bytes)
    // playHead is set by the function to say where in the buffer the
    // next starting point should be
    if (playHead > 99) {
        playHead = 0;
    }
    // playStep scales playHead to get the starting position
    int playStep = playHead * 644;
    // filling the buffer
    for (frame = 0; frame < frameCount; ++frame) // frameCount = 4000
    {
        // pointer to buffer
        SInt16 *data = (SInt16 *)buffer->mAudioData;
        // load data from uint16_t[] convAudio array into frame
        data[frame] = convAudio[frame + playStep];
    }
    // set buffer size
    buffer->mAudioDataByteSize = bufferSize;
    // return noErr; an OSStatus would carry the error otherwise if there is one. (I think)
    return noErr;
}
As I said, my brain was fuzzy when I wrote this, and there's probably something glaringly obvious I am missing.
The above code is called by the callback:
static void MyAQOutputCallback(void *inUserData, AudioQueueRef inAQ, AudioQueueBufferRef inCompleteAQBuffer)
{
    soundHandler *sHandler = (__bridge soundHandler*)inUserData;
    CheckError([sHandler fillBuffer: inCompleteAQBuffer],
               "can't refill buffer",
               "buffer refilled");
    CheckError(AudioQueueEnqueueBuffer(inAQ,
                                       inCompleteAQBuffer,
                                       0,
                                       NULL),
               "Couldn't enqueue buffer (refill)",
               "buffer enqueued (refill)");
}
On the convAudio array side of things, I have dumped it to the log and it is getting filled and refilled in a circular fashion, so I know that at least that bit is working.
The hard part is managing rates, and what to do if they don't match. At first, try using a huge circular buffer (many, many seconds) and mostly fill it before starting the Audio Queue to pull from it. Then monitor the buffer level to see how big a rate-matching problem you have; a sketch of such a buffer follows.
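
As a rough illustration of that suggestion (written in Java for consistency with the other sketches on this page; the C version used inside the Audio Queue callback would be structurally the same, and all names here are made up), a single-producer/single-consumer ring buffer whose fill level you can monitor might look like:

final class RingBuffer {

    private final short[] data;
    private int writeIdx = 0, readIdx = 0, count = 0;

    RingBuffer(int capacitySamples) {
        data = new short[capacitySamples]; // size this to many seconds of audio
    }

    // Socket side: called for each decoded chunk (e.g. 320 samples per 640-byte block).
    synchronized void push(short[] samples) {
        for (short s : samples) {
            data[writeIdx] = s;
            writeIdx = (writeIdx + 1) % data.length;
            if (count < data.length) {
                count++;
            } else {
                readIdx = (readIdx + 1) % data.length; // overflow: drop the oldest sample
            }
        }
    }

    // Audio side: the fillBuffer equivalent; pads with silence on underrun.
    synchronized int pop(short[] out) {
        int n = Math.min(out.length, count);
        for (int i = 0; i < n; i++) {
            out[i] = data[readIdx];
            readIdx = (readIdx + 1) % data.length;
        }
        for (int i = n; i < out.length; i++) {
            out[i] = 0; // silence instead of stale data
        }
        count -= n;
        return n;
    }

    // Watch this over time: a steady drift up or down reveals a rate mismatch.
    synchronized int level() {
        return count;
    }
}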