Extract an image of every frame of a video using react-native-ffmpeg

I have looked all over the internet for a way to extract an image of every frame of a video using react-native-ffmpeg. I am making a mobile app and I want to show all the per-frame images on the video timeline. I want to do this natively on mobile so that I can use the device's hardware, which is why I am looking for a library like react-native-ffmpeg. Am I going in the right direction? This is what I am trying to use: npmjs.com/package/react-native-ffmpeg. I need to know the command to do the job.

To calculate the frame rate of the video, follow this link:
https://askubuntu.com/questions/110264/how-to-find-frames-per-second-of-any-video-file
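For example, with ffprobe (which ships with ffmpeg), the following prints the frame rate of the first video stream as a fraction such as 24/1:
ffprobe -v 0 -of csv=p=0 -select_streams v:0 -show_entries stream=r_frame_rate input.mov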
After finding the frame rate, you can extract each frame. For example, to extract all frames from a 24 fps movie using ffmpeg:
ffmpeg -i input.mov -r 24/1 out%03d.jpg
The %03d dictates that the ordinal number of each output image will be formatted using 3 digits.
Another resource:
https://gist.github.com/loretoparisi/a9277b2eb4425809066c380fed395ab3
Also refer to the .execute() method in react-native-ffmpeg.
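For example, here is a minimal sketch of running that extraction command through the library's .execute() method, following the pattern shown in the react-native-ffmpeg README; the input and output paths are placeholders you would resolve with a file-system library such as react-native-fs:

import { RNFFmpeg } from 'react-native-ffmpeg';

// Extract one JPEG per frame of a 24 fps video.
// inputPath and outputDir are placeholders for real paths on the device.
const extractFrames = (inputPath: string, outputDir: string) =>
  RNFFmpeg.execute(`-i ${inputPath} -r 24/1 ${outputDir}/out%03d.jpg`)
    .then(result => console.log('FFmpeg process exited with rc ' + result.rc));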

Related

How to send a texture with Agora Video SDK for Unity

I'm using the package Agora Video SDK for Unity and I have followed these two tutorials:
https://www.agora.io/en/blog/agora-video-sdk-for-unity-quick-start-programming-guide/
https://docs.agora.io/en/Video/screensharing_unity?platform=Unity
Up to here, it is working fine. The problem is that instead of sharing my screen, I want to send a texture. To do so, I'm loading a PNG picture and trying to set it to the mTexture you find in the second link. It seems to work on my computer, but it never arrives at the target computer.
How can I send a texture properly?
Thanks
Did you copy every line of the code from the example as is? You may not want to do the ReadPixels part, since that reads the screen. You could instead read the raw data from your input texture and send it with PushVideoFrame on every update.

OpenTok TokBox: Video in vertical presentation looks like in horizontal presentation after archiving

Our aim is to show portrait video (vertical orientation in terms of TokBox) without black areas on the right and left side after archiving. Right now it looks like landscape video with black areas on the right and left.
We are using a PHP server and an Android client for streaming.
Our steps to convert the live stream into video-on-demand through archiving are:
start the session
update the stream with the parameter layoutClassList = verticalPresentation (PHP library)
start archiving
live stream is on -> create a subscriber and watch the stream. IMPORTANT! The stream has no black areas and has the CORRECT presentation on the subscriber side!
stop archiving
wait for TokBox to upload the archive file to the Amazon S3 bucket -> the file ALREADY contains black areas on the right and left side. WRONG! (please watch the video at this link for better understanding: https://s3-us-west-1.amazonaws.com/edtv-dev1-input/46176492/9f26ef23-aee6-42f2-8c51-d8e2685abcc9/archive.mp4 )
process the file
Are the steps above correct for achieving the goal - getting a video file without black areas (in portrait orientation)? Are we missing anything?
Is the archiving process on TokBox sensitive to horizontal/vertical presentation? Is it possible to archive the video in vertical orientation?
UPDATE: What we wanted was not a composed but an INDIVIDUAL stream! TokBox creates a zip file, but Amazon AWS was able to transcode it and get the correct result in both portrait and landscape orientations.
NOTE: The default result file on Amazon AWS after individual stream archiving is a *.zip (a JSON manifest plus the video file). The transcoder we used gave us video without sound, so we added a Lambda that unzips the file, as sketched below. Now everything is OK, but it took a lot of time and headache.
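A rough sketch of such an unzipping Lambda, assuming a Node.js runtime with the adm-zip package bundled; the event shape is the standard S3 trigger, and the output key prefix is illustrative:

import { S3 } from 'aws-sdk';
import AdmZip from 'adm-zip';

const s3 = new S3();

// Triggered by an S3 "object created" event for the archive zip.
export const handler = async (event: any) => {
  const record = event.Records[0].s3;
  const bucket = record.bucket.name;
  const key = decodeURIComponent(record.object.key.replace(/\+/g, ' '));

  const zipped = await s3.getObject({ Bucket: bucket, Key: key }).promise();
  const zip = new AdmZip(zipped.Body as Buffer);

  // Write each entry (the JSON manifest and the per-stream video files)
  // back to the bucket so the transcoder can pick them up.
  for (const entry of zip.getEntries()) {
    if (entry.isDirectory) continue;
    await s3.putObject({
      Bucket: bucket,
      Key: `unzipped/${entry.entryName}`,
      Body: entry.getData(),
    }).promise();
  }
};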
Tokbox developer here
For composed archiving, the only two options currently available for output resolution are 640x480 and 1280x720. Trying to fit a portrait video into a canvas of the available resolutions will result in the video you are seeing.
Possible solutions:
Use the custom layout control [1]: you can set the "object-fit" property to "cover". This may not result in exactly what you want, since the output resolution will still be 640x480 or 1280x720, but the video will occupy the whole canvas, at the expense of cropping the top and bottom. See [2].
The best solution in my opinion is to use "individual stream archiving", where the resolution will be kept as the original, and you get a file per stream. Please check [3]
[1] https://tokbox.com/developer/guides/archiving/layout-control.html
[2] https://developer.mozilla.org/en-US/docs/Web/CSS/object-fit
[3] https://tokbox.com/developer/rest/#start_archive
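For reference, a minimal sketch of starting an individual-stream archive through the REST endpoint in [3]; the API key, session ID, and JWT are placeholders, and the JWT is a project token you would generate server-side:

// Start an archive in individual output mode via the OpenTok REST API.
const API_KEY = 'your-api-key';        // placeholder
const SESSION_ID = 'your-session-id';  // placeholder
const JWT = 'your-project-jwt';        // placeholder, generated server-side

const startIndividualArchive = async () => {
  const response = await fetch(`https://api.opentok.com/v2/project/${API_KEY}/archive`, {
    method: 'POST',
    headers: {
      'X-OPENTOK-AUTH': JWT,
      'Content-Type': 'application/json',
    },
    body: JSON.stringify({
      sessionId: SESSION_ID,
      outputMode: 'individual', // one file per stream, original resolution kept
    }),
  });
  return response.json(); // archive metadata, including the archive id
};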
How can we get the URL of the file within the zip created by OpenTok that was uploaded to S3?

Projekktor: supporting multiple video sizes

I am using Projekktor to display video, but if someone is using, say, an iPhone, I want to send a smaller video than the full 1080p that might be sent to a desktop browser.
Is there a built-in way to do this, or do I need to do a user-agent check and create a playlist based on the device manually?
You can configure Projekktor to fetch a specific video file depending on the dimensions of the video display.
To do so, you need to provide multiple video files with different resolutions for each format you want to deliver and set a "quality" property on each of them.
To alter the dimensions-to-quality mapping, set the "playbackQualities" config option.
The whole logic is described in detail over here.
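To illustrate the idea only (the exact option shape is documented by Projekktor; the field names below are assumptions based on its docs, not a verified configuration):

// Rough sketch: map display sizes to quality keys, then tag each source
// with a quality so the player can pick the matching file.
projekktor('#player', {
  playbackQualities: [
    { key: 'low',  minHeight: 0,   minWidth: 0 },    // assumed field names
    { key: 'high', minHeight: 720, minWidth: 1280 }, // assumed field names
  ],
  playlist: [{
    0: { src: 'video-360p.mp4',  type: 'video/mp4', quality: 'low' },
    1: { src: 'video-1080p.mp4', type: 'video/mp4', quality: 'high' },
  }],
});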

How to create thumbnail from an uploaded video on dmcloud.net

I am using the Python SDK to upload videos to dmcloud.net.
Now I want to give the users of my application the ability to choose a thumbnail for the video they uploaded. How can I generate different thumbnails from an uploaded video on dmcloud.net?
Thanks.
Through the SDK, you can update the video thumbnail using the set_thumbnail method of the Media API. You have two options:
Specify a URL to a new thumbnail
Specify a timecode, which is a time offset within the video
For your end-users, provide a way to let them specify a URL or an offset (verify it's an offset that's appropriate for the video's duration). If you're using a custom player, you can probably let them choose an offset visually by pausing the player on their frame of choice and then extracting a time offset from your player for the dmcloud API call.
If you choose to set a thumbnail by timecode (offset), the SDK call would look like this:
cloudkey.media.set_thumbnail(id, timecode)
SDK Link: http://www.dmcloud.net/doc/api/python-sdk.html#media-object
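On the client side, a rough sketch of capturing that offset from an HTML5 video element; '/api/set-thumbnail' is a hypothetical endpoint in your own app that would make the set_thumbnail call server-side:

// Read the paused position of the player and hand it to the backend,
// which then calls cloudkey.media.set_thumbnail(id, timecode).
// '/api/set-thumbnail' is a made-up endpoint for illustration.
const chooseThumbnail = async (mediaId: string) => {
  const video = document.querySelector('video') as HTMLVideoElement;
  const timecode = Math.floor(video.currentTime); // seconds into the video
  await fetch('/api/set-thumbnail', {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({ id: mediaId, timecode }),
  });
};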

Code to play AVI file in xcode

I am creating an iPhone application which can play many media formats. I am not able to play the AVI file format, whereas I am able to play other formats (e.g. MP3, MP4, MOV, etc.). When I try to play an AVI, it shows a black screen and the display is hidden. Any suggestions on how to fix this issue?
Take a look at this tutorial on playing video with ffmpeg and SDL, both available for the iPhone.
An ffmpeg and SDL Tutorial or How to Write a Video Player in Less Than 1000 Lines
It's written in C, but consider it pseudocode and adapt it to Objective-C accordingly.
.avi is a container format, not a specific audio/video codec. Depending on the contents of the .avi container, it may or may not be possible to decode the video on an iOS device (due to hardware limitations). If it is possible to decode the video in real time, you may have some luck using the libav (aka ffmpeg) library to decode it.