How to add a squeezeback ad to a MediaLive HLS stream output?

I'm using AWS MediaLive to build and deliver an HLS stream of a video file stored in S3. I would like to know how to include squeezeback ads in this HLS playout.

Thanks for your post. AWS Elemental MediaLive does not perform DVE (digital video effects), so it cannot do squeezebacks. MediaLive does have a keying layer, which enables moving graphics overlays on the channel output. This would let you put up an animated L-bar graphic, but it would cover a region of the video rather than scale the video down. If you want a true video squeezeback, it needs to be done upstream of MediaLive.
More information on the motion graphics overlay keying can be found at:
https://docs.aws.amazon.com/medialive/latest/ug/feature-mgi.html
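For reference, a minimal sketch of activating a motion graphics overlay on a running channel through the schedule API, using the AWS SDK for JavaScript v3 (the channel ID and overlay URL are placeholders, and motion graphics must first be enabled in the channel's encoder settings):

import {
  MediaLiveClient,
  BatchUpdateScheduleCommand,
} from "@aws-sdk/client-medialive";

const client = new MediaLiveClient({ region: "us-east-1" });

await client.send(
  new BatchUpdateScheduleCommand({
    ChannelId: "1234567", // placeholder channel ID
    Creates: {
      ScheduleActions: [
        {
          ActionName: "lbar-overlay-on",
          // Start the overlay immediately rather than at a fixed time.
          ScheduleActionStartSettings: {
            ImmediateModeScheduleActionStartSettings: {},
          },
          ScheduleActionSettings: {
            MotionGraphicsImageActivateSettings: {
              // Placeholder URL of the HTML5 motion graphics asset.
              Url: "https://example.com/lbar-overlay.html",
            },
          },
        },
      ],
    },
  })
);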

Related

capture MediaStream from a local video file in react native

The purpose is to play a local video on the host side and stream it on the participants' side. The video should pause and seek for everyone when the host does it.
For this, I want to capture a stream (of MediaStream type) of a local video file while it is playing and pass it into WebRTC.
Just as on the web, where we have a captureStream() method to capture a stream from a video or canvas element, do we have anything similar in react-native? Or any other way to achieve the same goal?
I could not find a relevant solution with the RTCView of react-native-webrtc or with react-native-video. Any kind of solution or suggestion would be helpful.
Thank you in advance.
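For context, here is a minimal sketch of the browser-only approach the question refers to; captureStream() is a web API and react-native-webrtc exposes no direct equivalent, so this illustrates the goal rather than a React Native solution:

// Browser-only: capture a MediaStream from a playing <video> element and
// feed its tracks into an RTCPeerConnection. Pausing or seeking the
// source video then affects what every remote peer sees.
const video = document.querySelector("video") as HTMLVideoElement;
// captureStream() ships in Chromium; older Firefox exposes it as
// mozCaptureStream().
const stream: MediaStream = (video as any).captureStream();

const pc = new RTCPeerConnection();
for (const track of stream.getTracks()) {
  pc.addTrack(track, stream);
}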

OpenTok TokBox: Video in vertical presentation looks like horizontal presentation after archiving

Our aim is to show portrait video (vertical orientation in TokBox terms) without black areas on the right and left sides after archiving. At the moment it looks like landscape video with black areas on the right and left.
We are using a PHP server and an Android client for streaming.
Our steps to convert the live stream into video on demand through archiving are:
start session
update the stream with the parameter layoutClassList = verticalPresentation (PHP library)
start archiving
live stream is on -> create a subscriber and watch the stream. IMPORTANT! The stream has no black areas and has the CORRECT presentation on the subscriber side!
stop archiving
wait for TokBox to upload the archive file to the Amazon S3 bucket -> the file ALREADY contains black areas on the right and left. WRONG! (please watch the video at this link for a better understanding: https://s3-us-west-1.amazonaws.com/edtv-dev1-input/46176492/9f26ef23-aee6-42f2-8c51-d8e2685abcc9/archive.mp4 )
process the file
Are the above steps correct for achieving the goal of getting a video file without black areas (in portrait orientation)? Are we missing anything?
Is the archiving process on TokBox sensitive to horizontal/vertical presentation? Is it possible to archive the video in vertical orientation?
UPDATE: What we wanted was not composed but INDIVIDUAL stream archiving! TokBox creates a zip file, but Amazon AWS was able to transcode it and produce the correct result in both portrait and landscape orientations.
NOTE: The default result file on Amazon AWS after individual stream archiving is a *.zip (JSON + video file inside). The transcoder we used gave us video without sound, so we added a Lambda that unzips the file first. Now everything is OK, but it took a lot of time and headache.
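For reference, a rough sketch of the unzip Lambda described in the note above, using the AWS SDK for JavaScript v3 and the adm-zip package (the bucket layout, key naming, and the adm-zip dependency are assumptions, not the poster's actual code):

import { S3Client, GetObjectCommand, PutObjectCommand } from "@aws-sdk/client-s3";
import AdmZip from "adm-zip";
import type { S3Event } from "aws-lambda";

const s3 = new S3Client({});

// Triggered by the S3 upload: unzip the individual-stream archive and
// write the contained files back to S3 for the transcoder to pick up.
export const handler = async (event: S3Event): Promise<void> => {
  for (const record of event.Records) {
    const Bucket = record.s3.bucket.name;
    const Key = decodeURIComponent(record.s3.object.key.replace(/\+/g, " "));
    if (!Key.endsWith(".zip")) continue;

    const obj = await s3.send(new GetObjectCommand({ Bucket, Key }));
    const zipBuffer = Buffer.from(await obj.Body!.transformToByteArray());

    // Extract every entry (the JSON manifest and the video file) and
    // upload each one next to the original zip.
    for (const entry of new AdmZip(zipBuffer).getEntries()) {
      await s3.send(new PutObjectCommand({
        Bucket,
        Key: `${Key.replace(/\.zip$/, "")}/${entry.entryName}`,
        Body: entry.getData(),
      }));
    }
  }
};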
Tokbox developer here
For composed archiving, the only two output resolutions currently available are 640x480 and 1280x720. Trying to fit a portrait video into a canvas at one of those resolutions produces the video you are seeing.
Possible solutions:
Use the custom layout control [1]: you can override the "object-fit" property to "cover". This may not give exactly what you want, since the output resolution will still be 640x480 or 1280x720, but the video will occupy the whole canvas at the expense of cropping the top and bottom. See [2].
The best solution in my opinion is to use "individual stream archiving", where the original resolution is kept and you get one file per stream. Please check [3]. A sketch of both REST calls follows the links below.
[1] https://tokbox.com/developer/guides/archiving/layout-control.html
[2] https://developer.mozilla.org/en-US/docs/Web/CSS/object-fit
[3] https://tokbox.com/developer/rest/#start_archive
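A sketch of both options via the REST API referenced in [1] and [3]; the API key, JWT, and IDs are placeholders, and generating the project JWT is omitted:

const API_KEY = "12345";        // placeholder project API key
const JWT = "<project JWT>";    // placeholder; see the TokBox docs
const sessionId = "<session ID>";
const archiveId = "<archive ID>";
const BASE = `https://api.opentok.com/v2/project/${API_KEY}`;

// Option 1: composed archive with a custom layout that crops instead of
// letterboxing by overriding object-fit (see [1] and [2]).
await fetch(`${BASE}/archive/${archiveId}/layout`, {
  method: "PUT",
  headers: { "X-OPENTOK-AUTH": JWT, "Content-Type": "application/json" },
  body: JSON.stringify({
    type: "custom",
    stylesheet: "stream { object-fit: cover; }",
  }),
});

// Option 2 (recommended): individual stream archiving, which keeps each
// stream at its original resolution (see [3]).
await fetch(`${BASE}/archive`, {
  method: "POST",
  headers: { "X-OPENTOK-AUTH": JWT, "Content-Type": "application/json" },
  body: JSON.stringify({ sessionId, outputMode: "individual" }),
});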
How can we get the URL of the video inside the zip created by OpenTok that was uploaded to S3?

How to tell YouTube that a livestream is a 360 video?

We are successfully streaming video to YouTube already. However, we don't know how to create the live streams for a 360 video via the API.
My guess is that we are missing some documentation about how to tell YouTube that a video stream needs to be played back in a 360 video player. We are using this code snippet to generate the liveStreams resource:
NSDictionary *stream = @{@"snippet": @{@"title": broadcast.title ? broadcast.title : @"mimoLive Livestream"},
                         @"cdn": @{@"resolution": resolution,
                                   @"frameRate": framerate,
                                   @"ingestionType": @"rtmp"}};
Is there a (un)documented key we need to add here?
Referring to the YouTube API:
https://developers.google.com/youtube/v3/live/docs/liveStreams
(BTW: Facebook recently added the option is_spherical to their API to make this work)
You need to set the contentDetails.projection field to "360" when creating a new broadcast object. It is set to "rectangular" by default. This is documented at https://developers.google.com/youtube/v3/live/docs/liveBroadcasts
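For illustration, the same field set from Node with the googleapis client (assumed here instead of the Objective-C client the question uses; the title, start time, and privacy status are placeholders):

import { google } from "googleapis";

// Assumes an already-authorized OAuth2 client; credential setup omitted.
const auth = new google.auth.OAuth2(/* clientId, clientSecret, redirectUri */);

const youtube = google.youtube({ version: "v3", auth });

await youtube.liveBroadcasts.insert({
  part: ["snippet", "contentDetails", "status"],
  requestBody: {
    snippet: {
      title: "mimoLive Livestream", // placeholder
      scheduledStartTime: new Date().toISOString(),
    },
    // The key part: mark the broadcast as 360; default is "rectangular".
    contentDetails: { projection: "360" },
    status: { privacyStatus: "unlisted" }, // placeholder
  },
});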

Webrtc stream local video file

How would one stream a local media file (a video file) to peers? (I am using janus-gateway with the videoroom plugin for this.)
For audio there is Web Audio, but what about video?
Thanks!
Update: Maybe someone has an example? Or a small code snippet? Maybe a link to some lib?
Render the local video on a canvas and create a stream object from the canvas element.
Then you can add the stream to the PeerConnection.
The stream will then be sent to the remote peer (Janus/browser/any server).
Demo: https://webrtc.github.io/samples/src/content/capture/canvas-pc/
Source: https://github.com/webrtc/samples/blob/gh-pages/src/content/capture/canvas-pc/js/main.js#L45
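A condensed sketch of what the linked sample does, assuming a <video id="localVideo"> element that is already playing the local file:

const video = document.getElementById("localVideo") as HTMLVideoElement;
const canvas = document.createElement("canvas");
const ctx = canvas.getContext("2d")!;

// Copy each video frame onto the canvas while the video plays.
video.addEventListener("play", () => {
  canvas.width = video.videoWidth;
  canvas.height = video.videoHeight;
  const draw = () => {
    ctx.drawImage(video, 0, 0, canvas.width, canvas.height);
    if (!video.paused && !video.ended) requestAnimationFrame(draw);
  };
  draw();
});

// Capture the canvas as a MediaStream and add it to the PeerConnection
// (for Janus, the one negotiated by the videoroom plugin).
const stream = canvas.captureStream(30); // 30 fps
const pc = new RTCPeerConnection();
stream.getTracks().forEach((track) => pc.addTrack(track, stream));
// ...then create the offer and hand the SDP to Janus as usual.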

DSC-HX400 RAW image data & Movie Recording

I am currently testing a DSC-HX400. While I am able to do almost everything I need to with the camera, there are a couple of items that are not exposed via the API and have frustrated my efforts.
1) The camera does not seem to offer an option, via the API or on the camera itself, to capture images in RAW format. It does offer standard & fine JPEG formats, but both of those leave artifacts in the image that become extremely noticeable when you zoom in with an image editor. Is there a way to get the camera to capture RAW images? I do not need the SDK to return the data, just to save it to the card. If getting the RAW data is impossible, has anyone found an inventive way to clean up the artifacts?
2) The camera supports both still-shoot and movie mode, but the API will only expose the mode that I am currently in. That makes it impossible to transition from still to movie mode (to allow recording) via the API, yet I can make that same transition by pressing a single button on the camera. Once I am recording a movie, the API will allow me to transition back to still mode (by cancelling recording). Are there plans to support triggering a movie recording via the API while in still capture mode (seeing that the firmware already supports this functionality)?
Answers to your questions:
If the camera cannot capture RAW images, the API will not be able to either. I do not know of a way to capture RAW images, but I can only comment with regard to the API, as I am not an expert on usage of the camera itself.
You can change between still and movie mode by using the "setShootMode" API.
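For illustration, a sketch of calling setShootMode through the Camera Remote API's JSON-RPC endpoint (the camera address below is a placeholder; the real service URL comes from the device description the camera advertises over SSDP):

const endpoint = "http://192.168.122.1:8080/sony/camera"; // placeholder

async function setShootMode(mode: "still" | "movie"): Promise<void> {
  const res = await fetch(endpoint, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({
      method: "setShootMode",
      params: [mode],
      id: 1,
      version: "1.0",
    }),
  });
  const json = await res.json();
  if (json.error) {
    throw new Error(`setShootMode failed: ${JSON.stringify(json.error)}`);
  }
}

// Switch to movie mode before starting a recording:
await setShootMode("movie");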