SoundJS MP3 playback: when playing at a higher speed, the pitch becomes higher (CreateJS)

The SoundJS plugin is used in the project. When the MP3 is played in fast forward, the pitch becomes higher.
Is it possible to speed up playback without raising the pitch?
demo:
https://codesandbox.io/s/fervent-grass-g9j248?file=/index.html
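
One hedged workaround: SoundJS's Web Audio playback resamples when the rate changes (which shifts pitch, as observed), but a plain HTML5 audio element can time-stretch for you via the standard preservesPitch property of HTMLMediaElement. A minimal sketch outside of SoundJS, with a placeholder file URL:
// Not a SoundJS API, just a plain HTML5 audio element: its playbackRate
// is pitch-corrected when preservesPitch is enabled, which is the
// default in modern browsers. "sound.mp3" is a placeholder URL.
const audio = new Audio("sound.mp3");
audio.preservesPitch = true;       // standard property
audio.mozPreservesPitch = true;    // older Firefox
audio.webkitPreservesPitch = true; // older Safari/Chrome
audio.playbackRate = 2.0;          // double speed, same pitch
audio.play();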

Related

Decreasing speed decreases sound quality

Decreasing the playback speed of AudioPlayer severely decreases the quality of the audio being played; the audio becomes very "noisy".
Is there any way to fix this or is it an issue with the just_audio implementation?
Reproduce:
final AudioPlayer player = AudioPlayer(); // Create audio player
player.setAsset("..."); // Load audio file
player.setSpeed(0.5); // Halve speed
player.play(); // Start playing
Just to preface this answer, time stretching is a difficult thing to do in real time because it has to stretch time without stretching the sound waves (stretching the sound waves would lower the frequency and hence the pitch, so it has to stretch time while filling the gaps with fabricated extensions to the existing sound waves). As a result, even the very best real-time algorithm will still introduce artifacts and distortions.
Now to answer your question: just_audio doesn't provide any options to change the time-stretching algorithm, but it does use the best generally available algorithm on each platform. The Android implementation uses Sonic, which is better quality than Android's own built-in algorithm. On iOS/macOS, AVAudioTimePitchAlgorithmTimeDomain is used, which seems to produce the least distortion at speeds below 1.0 out of the different algorithms Apple provides, although newer iPhones/iOS versions may produce higher-quality output. On web browsers, it uses whatever algorithm the browser provides.
If you need to try out alternatives, you would need to make a copy of just_audio and edit the code that selects the algorithm. You are unlikely to find better options for Android and web, but you might like to experiment with the different iOS/macOS algorithms by searching for AVAudioTimePitchAlgorithmTimeDomain in the code and changing it to one of the other options listed in Apple's documentation. You may find one of the other algorithms works better if you have a specialised use case.

How to playback multiple audio files synchronously in Expo-av?

In my app users record themselves singing over a backing track, and then later play back the recorded audio and the backing track at the same time. I use expo-av for my audio system. The problem is that at the playback stage the audio is often out of sync, because Expo only really supports asynchronous audio. Does anyone have any advice on how to approach this problem at a high level?
A few of my ideas:
Mix the two audio files into a single file for playback. This almost works except for the fact that the recording and backing track are also out of sync. If I knew exactly how much they were offset, I could just add that amount of silence to one of the files when mixing. However, I haven't found a way to accurately calculate this offset.
Reduce the time it takes for recording and playback to start, so that the latency is not noticeable. Some things I've found that help here are recording at lower quality and using smaller audio files (see the sketch after this list). Any other tips here would be appreciated.
Use a different audio library than expo-av. Is there one that comes to mind that better supports synchronous audio? Ideally it would also be supported by Expo or at least React Native.
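
For the first two ideas, one hedged starting point (assuming expo-av's Audio.Sound API, with placeholder URIs) is to load both sounds fully up front and only then trigger both play calls in the same tick, so neither player spends its startup time buffering:
import { Audio } from "expo-av";

// A rough sketch, not sample-accurate sync: preload both sounds fully,
// then fire both play calls in the same tick to minimise the offset.
async function playTogether(recordingUri, backingUri) {
  const { sound: vocals } = await Audio.Sound.createAsync(
    { uri: recordingUri },
    { shouldPlay: false } // load now, start later
  );
  const { sound: backing } = await Audio.Sound.createAsync(
    { uri: backingUri },
    { shouldPlay: false }
  );
  // Start back to back without awaiting in between; any remaining offset
  // comes from the native audio layer, not from JS scheduling.
  await Promise.all([vocals.playAsync(), backing.playAsync()]);
}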

How does playbackRate affect bandwidth consumption with videojs?

I'm using videojs to playback videos stored on AWS. My users will often play back the video at 4x, 8x, or 16x speed. I can control the playback speed using:
videojs('my-player', {playbackRates: [1, 4, 8, 16]})
How does this impact bandwidth usage? Does a video played at 4x speed consume 1/4 of the bandwidth?
Are there other web video frameworks that would be better suited to minimizing data transfer out when playback speed is increased?
Most (if not all) HTML5 video player libraries are just wrappers around the native HTML5 video element, so buffering and the like are handled by the browser itself; HLS/DASH features, on the other hand, require a custom implementation by the player. Note that a higher playback rate does not shrink the download: the same encoded data is fetched either way, just within a shorter wall-clock window, so 4x playback needs roughly 4x the sustained bandwidth rather than consuming 1/4 of the data.
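A back-of-envelope sketch with assumed example numbers makes the arithmetic concrete:
// Faster playback moves the same bytes in less wall-clock time, so the
// total transfer is unchanged and the sustained throughput goes up.
const bitrateMbps = 5;  // assumed average encoding bitrate
const durationMin = 60; // assumed video length
const totalMB = (bitrateMbps * durationMin * 60) / 8; // same at any speed

for (const speed of [1, 4, 8, 16]) {
  console.log(
    `${speed}x: ${totalMB} MB in ${durationMin / speed} min, ` +
    `needs ~${bitrateMbps * speed} Mbps sustained`
  );
}
The one real saving comes from adaptive bitrate: at a speed the network can't sustain, an HLS/DASH player may switch to a lower-bitrate rendition, which does reduce total transfer.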

control speed of sound xcode

I'm wondering whether it's possible to slow down a sound in Xcode. I mean, I'll add an .mp3 file to my supporting files in Xcode and create an app which will be able to speed it up or slow it down, for example with a slider. Is that even possible? If yes, could anyone help me with some ideas? Thanks
AVAudioPlayer has a rate property which should be able to help you accomplish your goal.
http://developer.apple.com/library/IOS/#documentation/AVFoundation/Reference/AVAudioPlayerClassReference/Reference/Reference.html
@property float rate
The audio player's playback rate.
Discussion
This property's default value of 1.0 provides normal playback rate. The available range is from 0.5 for half-speed playback through 2.0 for double-speed playback.
To set an audio player's playback rate, you must first enable rate adjustment as described in the enableRate property description.
I also found a good SO post on the AVAudioPlayer's rate:
AVAudioPlayer rate
It seems, as you mentioned, that you could set up a slider with values from 0.5 to 2.0 and, on its value-changed event, modify the audio player's rate:
- (IBAction)changeValue:(UISlider *)sender
{
    // Assumes an AVAudioPlayer ivar named _audioPlayer; enableRate must be
    // set before playback starts for rate changes to take effect.
    if ([_audioPlayer respondsToSelector:@selector(setEnableRate:)])
        _audioPlayer.enableRate = YES;
    if ([_audioPlayer respondsToSelector:@selector(setRate:)])
        _audioPlayer.rate = sender.value; // rate is a plain float, not an NSNumber
}
Playing PCM audio at a rate other than its native sample rate changes its pitch and also introduces considerable artefacts. If you're OK with this, the approach would be to decode the MP3 into PCM audio and then use a direct digital synthesis (DDS) oscillator to control the playback rate.
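To make the DDS idea concrete, here is an illustrative sketch operating on an already-decoded Float32Array of mono PCM: a phase accumulator steps through the input by "rate" samples per output sample, with linear interpolation between neighbours.
// DDS-style variable-rate playback: rate > 1 is faster (and higher-
// pitched), rate < 1 is slower (and lower-pitched).
function resample(input, rate) {
  const outLength = Math.floor((input.length - 1) / rate);
  const output = new Float32Array(outLength);
  let phase = 0; // fractional read position into input
  for (let i = 0; i < outLength; i++) {
    const idx = Math.floor(phase);
    const frac = phase - idx;
    // Linear interpolation between input[idx] and input[idx + 1].
    output[i] = input[idx] * (1 - frac) + input[idx + 1] * frac;
    phase += rate; // the DDS step
  }
  return output;
}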
If you want to maintain pitch but change speed, you need an audio time-stretching algorithm.
Dirac3 from DSP Dimension is a commercial product that can do this, and it is available for licensing for use in iOS applications. Other commercial solutions exist.
DSP Dimension's blog provides a helpful tutorial on the basics of how to implement pitch-shifting using an FFT. Time stretching is essentially the same process. However, there's a fair bit of secret sauce in the DIRAC plug-in that they don't tell you about.
Be warned that unless you're an electronics engineering, physics, or maths graduate, you'll probably find it tough going to fill in the blanks.
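For a feel of what the simplest time-domain variant looks like, here is a toy overlap-add (OLA) stretch. This is not DIRAC's algorithm and omits the waveform-alignment tricks, so it will sound grainy, but it shows how duration can change while pitch stays put:
// Copy fixed-size grains from the input at a stride scaled by `stretch`
// and cross-fade them into the output at a constant stride.
// Assumes input.length > grain; stretch 2.0 doubles the duration.
function timeStretchOLA(input, stretch, grain = 2048, overlap = 512) {
  const hopOut = grain - overlap;                          // output stride
  const hopIn = Math.max(1, Math.round(hopOut / stretch)); // input stride
  const numGrains = Math.floor((input.length - grain) / hopIn);
  const output = new Float32Array((numGrains - 1) * hopOut + grain);
  for (let g = 0; g < numGrains; g++) {
    const inPos = g * hopIn;
    const outPos = g * hopOut;
    for (let i = 0; i < grain; i++) {
      // Linear cross-fade in the overlap regions; the fade-out of one
      // grain and the fade-in of the next sum to unity.
      const fadeIn = i < overlap ? i / overlap : 1;
      const fadeOut = i >= grain - overlap ? (grain - i) / overlap : 1;
      output[outPos + i] += input[inPos + i] * fadeIn * fadeOut;
    }
  }
  return output;
}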

Is OpenAL the right audio library to use for cross platform audio processing?

I am making an application that will do things like pitch shifting and time stretching to audio files, and play them back in real time. Is OpenAL the right library for this? Or is there something that could do this better, and would be easy to reuse for different platforms?
OpenAL can't do pitch shifting or time stretching. For that, you'll need a 3rd party library such as SoundTouch.
As well, OpenAL doesn't support realtime audio processing. You can kind of fake it using buffer queues, but it's a bit hokey because you'd need to keep polling to see when a buffer has finished playing and then queue the next processed buffer, and you'd need to keep your buffers very small or risk laggy audio response. However, small queued buffers can potentially lead to performance, timing, and clicking issues.