Automatically delete / unlist YouTube video after Live Stream ends - youtube-livestreaming-api

As far as I remember, the enableDvr flag used to work like this: if it was not set, the video would not be archived. That is how we used it in our implementation. However, we recently got feedback that videos with the flag disabled remain on YouTube after the live stream ends.
Has there been a change in the API and how this flag is used?
Also, the documentation states:
Important: You must set the value to true and also set the enableArchive property's value to true if you want to make playback available immediately after the broadcast ends.
On the other hand, the API changelog states that the enableArchive flag was removed back in 2013... Why is it still in the documentation then?
In addition, when setting up an event in YouTube Creators Studio there is an option to automatically unlist the video after the live stream ends. I can't find this option in the LiveBroadcast object. Is it available via the API or would I need to unlist or delete the video manually when the broadcast ends?

The documentation is out of date. According to the March 27, 2013 entry in the revision history, the enableArchive property was renamed to recordFromStart.
https://developers.google.com/youtube/v3/live/docs/liveBroadcasts#recordFromStart
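To illustrate, here is a minimal sketch of turning recording off through the current API. The shape of the request body follows the liveBroadcasts resource; BROADCAST_ID is a placeholder, and the commented-out call assumes the googleapis Node client:

```javascript
// Hypothetical sketch: request body for liveBroadcasts.update that turns
// recording off. recordFromStart is the property formerly named enableArchive.
const body = {
  id: 'BROADCAST_ID', // placeholder for your broadcast id
  contentDetails: {
    recordFromStart: false, // renamed from enableArchive in the 2013 revision
    enableDvr: false
  }
};

// e.g. with an authenticated googleapis client (assumed):
// await youtube.liveBroadcasts.update({ part: 'id,contentDetails', requestBody: body });
console.log(JSON.stringify(body.contentDetails));
```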

Related

Audiobook chapters that don't start at beginning of file

We've implemented a SMAPI service and are attempting to serve up an audiobook. We can select the audiobook and start playback, but we run into issues when we want to move between chapters because our audio files are not split by chapter. Each audiobook is divided into roughly equal-length parts, and we have information on which part and how far into the part each chapter starts.
So we've run into an issue: our getMetadata response returns the chapters of the audiobook, because that's how we'd like a user to navigate the book, but our getMediaURI responses for each chapter return URLs for the parts the audio files are divided into, and we seem to be unable to start playback at a specific position within those files.
Our first attempt to resolve the issue was to include positionInformation in our getMediaURI response. That would still leave us with an issue of ending a chapter at the appropriate place, but might allow us to start at the appropriate place. But according to the Sonos docs, you're not meant to include position information for individual audiobook chapters, and it seems to be ignored.
Our second thought, and possibly a better solution, was to use the httpHeaders section of the getMediaURI response to set a Range header for only the section of the file that corresponds to the chapter. But Sonos appears to have issues with us setting a Range header, and seems to either ignore our header or break when we try to play a chapter. We assume this is because Sonos is trying to set its own Range headers.
Our current thought is that we might be able to pass the media URLs through some sort of proxy, adjusting the Sonos Range header by adding an offset to the start and end values based on where the chapter starts in the audio file.
So right now we return <fileUrl> from getMediaURI and Sonos sends a request like this:
<fileUrl>
Range: bytes=100-200
Instead we would return <proxyUrl>?url=<urlEncodedFileUrl>&offset=3000 from getMediaURI. Sonos would send something like this:
<proxyUrl>?url=<urlEncodedFileUrl>&offset=3000
Range: bytes=100-200
And the proxy would redirect to something like this:
<fileUrl>
Range: bytes=3100-3200
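The header-shifting step above could be sketched like this (shiftRange is an invented name, and this is only the arithmetic, not a full proxy). One caveat worth noting: a plain HTTP redirect would not change the Range header the client re-sends, so the proxy would likely have to fetch the target itself with the shifted header and stream the body back rather than redirect:

```javascript
// Shift a Range header by a byte offset, e.g. for a chapter that starts
// `offset` bytes into the part file.
// shiftRange('bytes=100-200', 3000) -> 'bytes=3100-3200'
function shiftRange(rangeHeader, offset) {
  const m = /^bytes=(\d+)-(\d*)$/.exec(rangeHeader || '');
  if (!m) return null; // treat multi-part or unrecognized ranges separately
  const start = Number(m[1]) + offset;
  const end = m[2] === '' ? '' : String(Number(m[2]) + offset);
  return `bytes=${start}-${end}`;
}

console.log(shiftRange('bytes=100-200', 3000)); // bytes=3100-3200
console.log(shiftRange('bytes=500-', 3000));    // bytes=3500- (open-ended range)
```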
Has anyone else dealt with audio files that don't match up one-to-one with their chapters? How did you deal with it?
The simple answer is that Sonos players respect the duration of the file, not the duration expressed in the metadata. You can't get around this with positionInformation or Cloud Queues.
However, the note that you shouldn't use positionInformation for chapters in an audiobook seems incorrect, so I removed it. The Saving and Resuming documentation states that you should include it if a user is resuming listening. You could use this to start playback at a specific position in the audio file. Did you receive an error when you attempted to do this?
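For reference, a getMediaURI response carrying positionInformation might look roughly like the sketch below. The element values are placeholders and the exact schema should be checked against the current SMAPI reference:

```
<getMediaURIResponse xmlns="http://www.sonos.com/Services/1.1">
  <getMediaURIResult>https://example.com/audiobook/part2.mp3</getMediaURIResult>
  <positionInformation>
    <id>chapter-7</id>
    <index>0</index>
    <offsetMillis>30000</offsetMillis>
  </positionInformation>
</getMediaURIResponse>
```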
Note that you would not be able to stop playback within the file (for example, if a chapter ended before the file ended). The player would play the entire file before stopping. The metadata would also not change until the end of the file. So, for example, if the metadata for the file is "Chapter 2" and chapter 2 ends before the end of the file, the Sonos app would still display "Chapter 2" until the end of the file.
Also note that the reporting APIs have been deprecated. See Add Reporting for the new reporting endpoint that your service should host.

Getting metadata (track info, artist name etc.) for radio stream

I have already checked the following links, but they weren't much help (in parentheses I've explained why their suggested answers didn't work in my case):
Streams - hasOutOfBandMetadata and getStreamingMetadata (our content is already HLS)
Sonos player not calling GetStreamingMetadata (getMetadata is not called; only getMediaMetadata is called, since a radio stream has a unique id and is not a collection)
The Sonos API documentation mentions that hasOutOfBandMetadata is deprecated and recommends embedding metadata inline with the content. However, due to some limitations that can't be achieved in our service, so I have to go with the old way (whatever it is).
I assume that, ideally, getStreamingMetadata should be called after hasOutOfBandMetadata is set to true, but that is not happening.
Secondly, for testing purposes I set secondsRemaining and secondsToNextShow to different values and found that the description is displayed for exactly that interval (if I set secondsRemaining/secondsToNextShow to 20, the description is displayed for 20 seconds; if I set it to 200, then for 200 seconds, and so on). After the time elapses, the information inside description disappears. So I guess some call must go out to refresh the metadata after the time elapses, but I couldn't figure out which call.
Please explain the proper way to get metadata for a continuous radio stream. On TuneIn radio you can find Radio Paradise, whose metadata is updated as the track changes. Even if they embed metadata inline with their content, there must be some way to achieve this.
Can you please post the calls and the responses that you are sending? This would help with troubleshooting the issue. Also, what mimeType are you trying to use?
At this time, the only fully supported method for getting metadata for a continuous radio stream on Sonos that is guaranteed to work in future releases is to embed the metadata inline.

Does setting currentTime always trigger canplay?

I can't find an exact answer online. The closest I get is a description of the canplay event.
The user agent can resume playback of the media data, but estimates that if playback were to be started now, the media resource could not be rendered at the current playback rate up to its end without having to stop for further buffering of content.
Here is my code
// FYI: at this point I have just loaded the video from an <input>; currentTime is 0.
var posterFrame = 3;
var longEnough = vid.duration >= posterFrame;
vid.oncanplay = function () { getThumb.apply(self); };
vid.currentTime = longEnough ? posterFrame : 0;
It works just fine for me but I am concerned that sometimes setting currentTime won't trigger oncanplay and the whole thing will just stop.
While writing my own HTML5 video player, I came across this same question. Further searching led me to this bug report:
https://bugzilla.mozilla.org/show_bug.cgi?id=773885
which marks the behavior as a duplicate of bug 664842. While 664842 is specifically about the related 'canplaythrough' event, it contains relevant discussion by the Mozilla developers:
The spec says: canplaythrough fires when "readyState is newly equal to HAVE_ENOUGH_DATA.".
Elsewhere it says "When the ready state of a media element whose networkState is not NETWORK_EMPTY changes, the user agent must follow the steps given below: [...] If the new ready state is HAVE_ENOUGH_DATA [...] the user agent must finally queue a task to fire a simple event named canplaythrough."
There is a follow-up request to make canplay and canplaythrough one-time events here:
https://www.w3.org/Bugs/Public/show_bug.cgi?id=12982
which says the behavior won't be changed.
Firefox interprets the spec as allowing canplay/canplaythrough to be fired multiple times as readyState changes allow. This interpretation is useful because canplaythrough is fired optimistically based on estimated transfer completion time. This estimate may become invalid if, for example, network conditions or the bitrate of the media changes. It is useful to signal this change in state by moving to and from HAVE_ENOUGH_DATA (and firing the related events) as conditions change.
Seeking to a new location in the media may result in a previous canplaythrough estimate becoming invalid if the seek location is in an unbuffered segment of the media that requires a new network request to transfer or significant decoding work. It then becomes more consistent (and therefore easier to code against) if these events are also fired when seeking to a range that is already buffered.
Although that discussion is from 2011, the behavior still seems to be implemented, so I think it's safe to say that yes, setting currentTime should always trigger canplay, as long as you don't pass it anything odd like null media.
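If you'd rather not hinge the thumbnail capture on canplay re-firing, the 'seeked' event fires specifically when a seek completes, which matches this use case more directly. A small sketch under that assumption (posterTime is an invented helper; vid and getThumb are from the question and only appear in comments here):

```javascript
// Clamp the desired poster frame to the clip's duration.
function posterTime(duration, posterFrame) {
  return duration >= posterFrame ? posterFrame : 0;
}

// Browser wiring (sketch): react when the seek itself completes,
// instead of relying on canplay being dispatched again.
// vid.addEventListener('seeked', function () { getThumb.apply(self); }, { once: true });
// vid.currentTime = posterTime(vid.duration, 3);

console.log(posterTime(10, 3)); // 3 (clip is long enough)
console.log(posterTime(2, 3));  // 0 (fall back to the first frame)
```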

WebRTC Changing Media Streams on the Go

Now that device enumeration is present in Chrome, I know I can select a device during getUserMedia negotiation. I was also wondering whether I could switch devices in the middle of a call (queue up a local track and switch tracks, or do I have to renegotiate the stream)? I am not sure if this is something that is still blocked or is now allowed.
I have tried to make a new track, but I can't figure out how to switch tracks on the fly. I know this was previously impossible, but is it possible now?
I have the same requirement: I have to record video using MediaRecorder, and for this I am using navigator.getUserMedia with audio and video constraints. You can switch video or audio tracks dynamically by getting the available devices from navigator.mediaDevices.enumerateDevices(), attaching the chosen device to the constraints, and calling navigator.getUserMedia again with the new constraints. The point to note when doing this is that you have to kill the existing tracks using the track.stop() method.
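A sketch of that stop-then-reacquire flow, using the promise-based navigator.mediaDevices.getUserMedia (switchToDevice and videoInputs are invented names for illustration):

```javascript
// List candidate cameras from an enumerateDevices() result.
function videoInputs(devices) {
  return devices.filter(d => d.kind === 'videoinput');
}

// Stop the old tracks first (or the previous device may stay locked),
// then request the newly selected device. Browser-only.
async function switchToDevice(currentStream, deviceId) {
  currentStream.getTracks().forEach(track => track.stop());
  return navigator.mediaDevices.getUserMedia({
    audio: true,
    video: { deviceId: { exact: deviceId } }
  });
}

// Pure helper demo with a mocked device list:
const mock = [{ kind: 'videoinput' }, { kind: 'audioinput' }, { kind: 'videoinput' }];
console.log(videoInputs(mock).length); // 2
```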
You can see my example here.
StreamTrack's readyState is getting changed to ended, just before playing the stream (MediaStream - MediaStreamTrack - WebRTC)
In Firefox, you can use the RTPSender object to call replaceTrack() to replace a track on the fly (with no renegotiation). This should eventually be supported by other browsers as part of the spec.
Without replaceTrack(), you can remove the old stream, add a new one, handle onnegotiationneeded, and let the client process the change in streams.
See the replaceTrack() documentation on MDN: https://developer.mozilla.org/en-US/docs/Web/API/RTCRtpSender/replaceTrack
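A sketch of the replaceTrack() path, assuming an existing RTCPeerConnection pc (switchCamera and findVideoSender are invented names):

```javascript
// Find the sender carrying the outgoing video track.
function findVideoSender(senders) {
  return senders.find(s => s.track && s.track.kind === 'video') || null;
}

// Swap the camera mid-call without renegotiation (browser-only sketch).
async function switchCamera(pc, deviceId) {
  const stream = await navigator.mediaDevices.getUserMedia({
    video: { deviceId: { exact: deviceId } }
  });
  const [newTrack] = stream.getVideoTracks();
  const sender = findVideoSender(pc.getSenders());
  const oldTrack = sender.track;
  await sender.replaceTrack(newTrack); // no onnegotiationneeded fires
  oldTrack.stop();                     // release the previous camera
}

// Pure helper demo with mocked senders:
const picked = findVideoSender([{ track: { kind: 'audio' } }, { track: { kind: 'video' } }]);
console.log(picked.track.kind); // video
```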
Have you tried calling getUserMedia() when you want to change to a different device?
There's an applyConstraints() method in the Media Capture and Streams spec that makes it possible to change constraints on the fly, but it hasn't been implemented yet:
dev.w3.org/2011/webrtc/editor/getusermedia.html#the-model-sources-sinks-constraints-and-states
dev.w3.org/2011/webrtc/editor/getusermedia.html#methods-1

ALAsset invalid after Camera Roll changes?

I write some photos to the photo library using UIImageWriteToSavedPhotosAlbum() and at the same time I display the contents of this asset group (ALAssetsGroupSavedPhotos) using enumerateAssetsUsingBlock: and friends. Sometimes the assets returned by enumerating the group become sort of “invalid”, meaning that the defaultRepresentation call returns nil, although the asset is still in memory.
I noticed that this seems to happen after the photo library gets modified by the UIImageWriteToSavedPhotosAlbum() call. Is this a documented behaviour? How can I prevent it? Reloading the assets is not a feasible option, as the user might already be somewhere deeper in the UI working with the asset.
This is an unfortunate but documented behavior. For reference:
"ALAssetsLibraryChangedNotification: Sent when the contents of the assets library have changed from under the app that is using the data. When you receive this notification, you should discard any cached information and query the assets library again. You should consider invalid any ALAsset, ALAssetsGroup, or ALAssetRepresentation objects you are referencing after finishing processing the notification."
So what you have to do is register an observer for ALAssetsLibraryChangedNotification. (And there is a bug regarding this notification on iOS 5.x; see Open Radar.)
When you receive the notification, you have to re-enumerate all groups and assets; there is currently no other way. This is very unfortunate from a GUI perspective, and we can only hope Apple improves this mechanism in the future.
Cheers,
Hendrik