Creating a seamless dynamic playlist with NetStream in AS3

I need to create a playlist dynamically, with near-seamless transitions, in AS3.
I have tried the play2() command with the APPEND transition, and it does work in a non-dynamic setting.
But my situation is this: at the launch of the application I know what the first video is; then, before that video ends, I learn what the next video will be, and so on until I receive the message that the last video has played.
So at the beginning I know neither how many videos there will be nor the order in which the files will play.
If I try to add a video with APPEND while the stream is already playing, it seems to replace the currently playing video instead of buffering and starting only when the current video ends.
I also cannot use appendBytes, as the video files have to be in H.264 format (appendBytes only accepts an FLV byte stream).
Any help would be greatly appreciated, as I no longer know in which direction to look. I can give more details if necessary.
Thank you very much.

This is a bit of an off-the-cuff answer, but the logic is sound & should give you another direction to pursue.
Firstly, the concept: with Flash video you have 2 completely separate processes occurring simultaneously:
buffering / loading
the video playing
Thus, playing & streaming can & do occur simultaneously but separately, & that separation is where the logic should be hooked in.
So, on to the implementation: the idea is to have a primary player, and a secondary (shadow) player/loader. The primary player is responsible for loading the initial video & playing it.
[& here comes the magic]
Once buffering in the primary player is complete (signalled by the NetStream.Buffer.Flush NetStatusEvent on the NetStream object), begin buffering the following video in the shadow player: initialise the connection, then call NetStream.pause() so it buffers but does not play while the primary player plays out.
When playing is complete in the primary player (signalled by the NetStream.Play.Stop event), you can pass the variables (NetConnection, NetStream & Video, all passed by reference) from the shadow player over to the primary player, & it should continue practically seamlessly. Then clear the values from the shadow player & repeat the above process, waiting for buffering to complete before loading the next video; ad infinitum.
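A minimal, untested AS3 sketch of that hand-off (names such as primaryStream, shadowStream, nextVideoUrl and video are my own assumptions, and metadata/error handlers are omitted for brevity):

```actionscript
// Hypothetical sketch of the primary/shadow hand-off described above.
private function onNetStatus(e:NetStatusEvent):void {
    switch (e.info.code) {
        case "NetStream.Buffer.Flush":
            // Primary has finished streaming into its buffer:
            // start pre-buffering the next clip in the shadow player.
            shadowConnection = new NetConnection();
            shadowConnection.connect(null); // progressive download
            shadowStream = new NetStream(shadowConnection);
            shadowStream.addEventListener(NetStatusEvent.NET_STATUS, onShadowStatus);
            shadowStream.play(nextVideoUrl);
            shadowStream.pause(); // buffer, but do not play
            break;
        case "NetStream.Play.Stop":
            // Primary finished playing: promote the shadow player.
            primaryStream.close();
            primaryStream = shadowStream;
            video.attachNetStream(primaryStream);
            primaryStream.resume();
            shadowStream = null; // clear the shadow slot for the next clip
            break;
    }
}
```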
Alternatively, you can have a more balanced approach - although in my mind this will be more resource intensive (as you'll have 2 video players continually active) - and have a primary & secondary player, where they alternate. As soon as one buffer is complete, you begin buffering the next, as soon as playing is complete, you switch from one player to the other.
This will be pretty fiddly to assemble (hence the lack of a full example; it is complicated and, in essence, your job ;), as you'll be dealing with 2 sets of NetConnections, NetStreams & Videos (which are complicated to begin with) and lots of events that require handling...
But I don't think play2() is your answer here; it is used primarily to reawaken broken/closed NetConnections. The problem facing you here is the seamless hand-off between 2 separate NetConnections & NetStreams.
Ping me if you still need assistance/explanation here, this is a bit of an old Q & I don't want to write a few hundred lines of code if you've already moved on...
Best, a.)

Related

Prevent cordova-plugin-playlist from buffering all tracks at once

I've been trying to solve this issue for a few weeks now. The issue revolves around a Cordova plugin called cordova-plugin-playlist that utilizes AVQueuePlayer.
The issue is that when a large number of tracks (30+) are added, several of the tracks time out when attempting to buffer. Because of this, AVQueuePlayer is only able to play some of the tracks that I'm attempting to load (it just skips the error tracks when attempting to play them). The tracks that time out are always random. Attempting to add only the first 15 or so tracks from the same list succeeds, so it appears to be directly related to the number of tracks being added.
What I've figured out by logging the requests to my server is that AVQueuePlayer is attempting to buffer all of the tracks at once, rather than buffering only the current and maybe the next track. When there are 20 or fewer tracks, all of the tracks load and play fine, but when there are 30 or more, the combined load seems to be too much to handle, and requests begin to time out before some of the tracks can load.
All of the tracks are added via AVQueuePlayer's insertItem method. Is there something about this method that causes a track to immediately begin buffering as soon as it is added? Is there a way to prevent this behavior? I would like only the current and next tracks to buffer. Or am I fundamentally misunderstanding something? Thanks in advance for all your help!
I've solved the issue. In case it helps anyone: it wasn't an issue with AVQueuePlayer itself, but rather with the subclass AVBidirectionalQueuePlayer included with the cordova-plugin-playlist plugin. The issue lies within the overridden insertItem method in AVBidirectionalQueuePlayer.m (line 217 in my case).
if (CMTIME_IS_NUMERIC(item.duration)) {
    NSLog(@"duration: %5.2f", (double) CMTimeGetSeconds(item.duration));
    if (CMTimeCompare(_estimatedDuration, kCMTimeZero) == 0)
        _estimatedDuration = item.duration;
    else
        _estimatedDuration = CMTimeAdd(_estimatedDuration, item.duration);
}
The item.duration call triggers the track to load (and it's called each time a track is added, so it triggers ALL of the tracks to load), which is fine for a smaller number of tracks, but with 30+ tracks, some an hour or longer, my server was overloaded and the requests were timing out.
In my particular instance, it seems that item.duration is never NUMERIC here anyway, so my solution was to comment out the if statement entirely.
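For reference, the patched region might then look like the sketch below (the method signature and the call through to super are assumptions on my part; the point is simply that the duration bookkeeping is disabled, so inserting an item no longer forces it to load):

```objectivec
// Sketch of the patched insertItem: in AVBidirectionalQueuePlayer.m.
- (void)insertItem:(AVPlayerItem *)item afterItem:(AVPlayerItem *)afterItem {
    // Disabled: reading item.duration here forced every inserted track
    // to start loading immediately.
    // if (CMTIME_IS_NUMERIC(item.duration)) {
    //     if (CMTimeCompare(_estimatedDuration, kCMTimeZero) == 0)
    //         _estimatedDuration = item.duration;
    //     else
    //         _estimatedDuration = CMTimeAdd(_estimatedDuration, item.duration);
    // }
    [super insertItem:item afterItem:afterItem];
}
```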

Set AudioAttributes Volume

After searching for a very long time for a way to play notification sounds only through the headphones (when plugged in), on a stream separate from STREAM_MUSIC, and in a way that could interrupt and remain completely audible over any background music, Android finally came out with the AudioAttributes API. Using the following code, I'm able to achieve exactly what I want for notifications, at least on API 21 or higher (STREAM_MUSIC is the best option I've found for lower versions):
AudioAttributes audioAttributes = new AudioAttributes.Builder()
        .setUsage(AudioAttributes.USAGE_ASSISTANCE_SONIFICATION)
        .setContentType(AudioAttributes.CONTENT_TYPE_SONIFICATION)
        .build();
Unfortunately, there doesn't appear to be any way to adjust the volume of the sonification in my app's settings. I currently use the AudioManager in the following way, but it only allows volume adjustments to streams, and none of STREAM_ALARM, STREAM_NOTIFICATION, STREAM_RING, or STREAM_MUSIC applies to whatever routing strategy is used for the sonification:
audioManager.setStreamVolume(AudioManager.STREAM_NOTIFICATION, originalVolume, 0);
Does anyone have any suggestion on how to set the volume corresponding to the AudioAttributes output? Keep in mind that the audio is actually played in a BroadcastReceiver that's used for the actual notification, and the audio setting would be specified in just some settings Activity.
Well, it appears that I missed a critical table in the API documentation:
https://source.android.com/devices/audio/attributes.html
It seems that STREAM_SYSTEM is the equivalent of what I was attempting to do with AudioAttributes. Basically, the code I have above is sufficient for API 21 and later, and using STREAM_SYSTEM with the AudioManager covers everything necessary on APIs prior to 21.
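Put together, a hypothetical Android sketch of both halves (the helper class, soundUri and volume parameters are my own; this pairs the API 21+ AudioAttributes path with the STREAM_SYSTEM fallback and volume control described above):

```java
import android.content.Context;
import android.media.AudioAttributes;
import android.media.AudioManager;
import android.media.MediaPlayer;
import android.net.Uri;
import android.os.Build;
import java.io.IOException;

// Hypothetical helper: play a notification sound audibly over music,
// with its volume controlled via STREAM_SYSTEM.
public final class SonificationPlayer {
    public static void play(Context context, Uri soundUri, int volume) throws IOException {
        AudioManager am = (AudioManager) context.getSystemService(Context.AUDIO_SERVICE);
        // Per the stream/attribute table in the Android audio docs,
        // STREAM_SYSTEM corresponds to USAGE_ASSISTANCE_SONIFICATION.
        am.setStreamVolume(AudioManager.STREAM_SYSTEM, volume, 0);

        MediaPlayer player = new MediaPlayer();
        if (Build.VERSION.SDK_INT >= 21) {
            player.setAudioAttributes(new AudioAttributes.Builder()
                    .setUsage(AudioAttributes.USAGE_ASSISTANCE_SONIFICATION)
                    .setContentType(AudioAttributes.CONTENT_TYPE_SONIFICATION)
                    .build());
        } else {
            player.setAudioStreamType(AudioManager.STREAM_SYSTEM);
        }
        player.setDataSource(context, soundUri);
        player.setOnCompletionListener(MediaPlayer::release);
        player.prepare();
        player.start();
    }
}
```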

NAudio: Recording Audio-Card's Actual Output

I successfully use WasapiLoopbackCapture() to record the audio played on the system, but I'm looking for a way to record what the user would actually hear through the speakers.
I'll explain: if a certain application plays music, WASAPI loopback will capture the music samples even if the main Windows volume control is set to 0, meaning even if no sound is actually heard through the audio card's output jack (speakers/headphones/etc.).
I'd like to intercept the audio actually "reaching" the output jack (after ALL the mixers on the audio path have done their job).
Is this possible using NAudio (or other infrastructure)?
A code-sample or a link to a such could come in handy.
Thanks much.
No, this is not directly possible. The loopback capture provided by WASAPI is the stream of data being sent to the audio hardware. It is the hardware that controls the actual output sound, and this is where the volume level is applied to change the output signal strength. Apart from some hardware- and driver-specific options - or some interesting hardware solutions like loopback cables or external ADC - there is no direct method to get the true output data.
One option is to get the volume level from the mixer and apply it as a scaling factor on any data you receive from the loopback stream. This is not a perfect solution, but possibly the best you can do without specific hardware support.
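As a language-agnostic illustration of that workaround (how you obtain the volume value and the float sample layout are assumptions here), scaling the captured samples by the session volume looks like:

```python
def scale_samples(samples, volume):
    """Scale loopback-captured float samples by the mixer volume (0.0-1.0),
    approximating the signal that actually reaches the output jack."""
    if not 0.0 <= volume <= 1.0:
        raise ValueError("volume must be in [0.0, 1.0]")
    return [s * volume for s in samples]
```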

Circular Buffer Audio Recording iOS: Possible?

A client of mine wants to record audio continually and, when he clicks submit, to submit only the last 10 seconds. So he wants a continuous recording that keeps only the last x seconds.
I would think this requires something like a circular buffer, but (as somewhat of a newbie to iOS) it looks like AVAudioRecorder can only write to a file.
Are there any options for me to implement this?
Can anyone give a pointer?
I would use Audio Queue Services, which will let you isolate certain parts of the buffer. Here is the guide to it: http://developer.apple.com/library/ios/#documentation/MusicAudio/Conceptual/AudioQueueProgrammingGuide/AQRecord/RecordingAudio.html#//apple_ref/doc/uid/TP40005343-CH4-SW1
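Whichever capture API feeds it, the keep-only-the-last-N-seconds logic itself is just a ring buffer; a minimal language-agnostic sketch (mono samples and a known sample rate are assumed):

```python
from collections import deque

class LastNSecondsBuffer:
    """Ring buffer that retains only the most recent `seconds` of audio."""
    def __init__(self, seconds, sample_rate):
        self.max_samples = int(seconds * sample_rate)
        self._buf = deque(maxlen=self.max_samples)  # oldest samples fall off

    def write(self, samples):
        """Append newly captured samples; the deque drops the oldest itself."""
        self._buf.extend(samples)

    def snapshot(self):
        """Return the retained samples, oldest first (what gets submitted)."""
        return list(self._buf)
```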

Symbian/S60 audio playback rate

I would like to control the playback rate of a song while it is playing. Basically I want to make it play a little faster or slower, when I tell it to do so.
Also, is it possible to play back two different tracks at the same time? Imagine a recording with the instruments in one track and the vocals in a different track; one of these tracks should then be able to change its playback rate in "realtime".
Is this possible on Symbian/S60?
It's possible, but you would have to:
Convert the audio data into PCM, if it is not already in this format
Process this PCM stream in the application, in order to change its playback rate
Render the audio via CMdaAudioOutputStream or CMMFDevSound (or QAudioOutput, if you are using Qt)
In other words, the platform itself does not provide any APIs for changing the audio playback rate - your application would need to process the audio stream directly.
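Step 2, changing the playback rate of a PCM stream, can be sketched as linear-interpolation resampling. This is shown in Python purely to illustrate the idea (a real S60 app would do it in Symbian C++), and it handles only a mono block:

```python
def change_rate(samples, rate):
    """Resample a mono PCM block so it plays `rate` times faster.
    rate > 1.0 speeds playback up (fewer output samples); rate < 1.0 slows it."""
    if rate <= 0:
        raise ValueError("rate must be positive")
    out = []
    pos = 0.0
    while pos < len(samples) - 1:
        i = int(pos)
        frac = pos - i
        # Linear interpolation between neighbouring input samples.
        out.append(samples[i] * (1.0 - frac) + samples[i + 1] * frac)
        pos += rate
    return out
```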
As for playing multiple tracks together: depending on the device, the audio subsystem may let you play two or more streams simultaneously using either of the above APIs. The problem is that they are unlikely to be synchronised; your app would probably therefore have to mix all of the individual tracks into one stream before rendering.
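Mixing into one stream before rendering can likewise be sketched as sample-wise summation with clipping (equal-length mono tracks and a 16-bit signed sample range are assumed):

```python
def mix(tracks):
    """Mix equal-length 16-bit PCM tracks by summing samples and clipping."""
    if not tracks:
        return []
    length = len(tracks[0])
    if any(len(t) != length for t in tracks):
        raise ValueError("tracks must all be the same length")
    mixed = []
    for frame in zip(*tracks):
        s = sum(frame)
        mixed.append(max(-32768, min(32767, s)))  # clip to the 16-bit range
    return mixed
```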