Pipe PhantomJS output to FFmpeg - selenium

I am running PhantomJS using Selenium WebDriver (C#).
I'm trying to record my browser session to video using FFmpeg, as shown in these two tutorials:
https://mindthecode.com/recording-a-website-with-phantomjs-and-ffmpeg/
https://gist.github.com/phanan/e03f75082e6eb114a35c
As explained in the tutorials, I need to pipe the output of the PhantomJS process to FFmpeg.
I know how to add arguments to phantomjs, like the following:
var svz = PhantomJSDriverService.CreateDefaultService();
svz.AddArgument("ffmpeg -y -c:v png -f image2pipe -r 24 -t 10 -i - -c:v libx264 -pix_fmt yuv420p -movflags +faststart output.mp4");
var driver = new PhantomJSDriver(svz);
However, I don't know how to add the argument as a pipe. I have tried adding the pipe symbol before the actual argument, but that doesn't seem to work.
So, my question is, how do I pipe the output of PhantomJS using selenium webdriver?
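For context, in both linked tutorials the piping is done by the shell rather than by a PhantomJS argument: a PhantomJS script writes PNG frames to standard output and ffmpeg reads them from standard input, so the pipe is not something AddArgument can express. A minimal sketch of that pipeline, assuming a hypothetical render.js frame-writing script (render.js is not from the question):
# render.js is assumed to repeatedly render the page to stdout as PNG frames;
# ffmpeg consumes them through "-i -" (read from stdin).
phantomjs render.js | ffmpeg -y -c:v png -f image2pipe -r 24 -t 10 -i - -c:v libx264 -pix_fmt yuv420p -movflags +faststart output.mp4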

Related

How can I stabilize a video using FFMPEG in Android

I have built and successfully imported the FFmpeg library in Android Studio. How can I develop a program to stabilize a video using it?
You should build the FFmpeg library with vid.stab enabled, for example using
https://github.com/tanersener/mobile-ffmpeg
You should grant read/write storage permissions.
You should execute two ffmpeg commands:
-y -i $VIDEO -vf vidstabdetect=shakiness=10:accuracy=15:result=${VIDEO}.trf -f null -
-y -i $VIDEO -vf vidstabtransform=smoothing=30:input=${VIDEO}.trf -c:v mpeg4 /storage/emulated/0/Android/output.mp4
where VIDEO is the path to your video file and ${VIDEO}.trf is the transform data written by the first pass and read by the second.
It takes a long time because FFmpeg cannot use the phone's GPU; rendering 6 seconds of video took about 2 minutes.
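For reference, a desktop-shell sketch of the same two-pass flow (the paths are placeholders; on Android roughly the same argument strings, minus the ffmpeg binary name, would be passed to the library's execute call):
VIDEO=/path/to/input.mp4
# Pass 1: analyse camera shake and write the transform data to a .trf file.
ffmpeg -y -i "$VIDEO" -vf vidstabdetect=shakiness=10:accuracy=15:result="${VIDEO}.trf" -f null -
# Pass 2: apply the transforms and encode the stabilised output.
ffmpeg -y -i "$VIDEO" -vf vidstabtransform=smoothing=30:input="${VIDEO}.trf" -c:v mpeg4 output.mp4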

video recording multiple instances of chrome driver

I use xvfb to open a display and run Selenium tests in parallel using chromedriver, so multiple chromedrivers can be running at a time. How can I add video recording so that I can record each session separately? Is there a plugin I can use to add video recording for each Selenium session?
You can use ffmpeg or avconv (a fork of ffmpeg) for it.
avconv \
  -f x11grab \
  -r 15 \
  -s 400x300 \
  -i :0.0 \
  -vcodec libx264 \
  /tmp/output-file.mp4
where:
-f x11grab - input format for grabbing from X11
-r 15 - frame rate
-s 400x300 - width and height of the area to capture
-i :0.0 - the display number you created with xvfb
-vcodec libx264 - the video codec
/tmp/output-file.mp4 - the output file
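To record each parallel session separately, one option is to give every chromedriver its own Xvfb display and attach one capture process per display. A rough sketch, assuming ffmpeg with x11grab and two hypothetical test-runner scripts (display numbers, sizes and script names are placeholders):
# One virtual display per session.
Xvfb :99 -screen 0 1280x1024x24 &
Xvfb :100 -screen 0 1280x1024x24 &
# Run each test session against its own display (placeholder scripts).
DISPLAY=:99 ./run_session_one.sh &
DISPLAY=:100 ./run_session_two.sh &
# One recorder per display, writing separate files.
ffmpeg -f x11grab -r 15 -s 1280x1024 -i :99 -vcodec libx264 /tmp/session1.mp4 &
ffmpeg -f x11grab -r 15 -s 1280x1024 -i :100 -vcodec libx264 /tmp/session2.mp4 &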

resize one video in 2 sizes in single command

When a user uploads a video, I create two sizes of it. Earlier, I was doing this in two steps, as follows.
First Size:
ffmpeg -i in.mp4 -filter:v "scale=iw*min(1170/iw\,300/ih):ih*min(1170/iw\,300/ih), pad=1170:300:(1170-iw*min(1170/iw\,300/ih))/2:(300-ih*min(1170/iw\,300/ih))/2" out.mp4
Second Size:
ffmpeg -i in.mp4 -filter:v "scale=iw*min(365/iw\,172/ih):ih*min(365/iw\,172/ih), pad=365:172:(365-iw*min(365/iw\,172/ih))/2:(172-ih*min(365/iw\,172/ih))/2" out1.mp4
But now, to reduce processing time, I want to combine these two steps into one. I have read https://trac.ffmpeg.org/wiki/Creating%20multiple%20outputs and put together the following command:
ffmpeg -i in.mp4 -filter:v "scale=iw*min(1170/iw\,300/ih):ih*min(1170/iw\,300/ih), pad=1170:300:(1170-iw*min(1170/iw\,300/ih))/2:(300-ih*min(1170/iw\,300/ih))/2" bigVideo.mp4 \ -filter:v "scale=iw*min(365/iw\,172/ih):ih*min(365/iw\,172/ih), pad=365:172:(365-iw*min(365/iw\,172/ih))/2:(172-ih*min(365/iw\,172/ih))/2" smallVideo.mp4
But it gives the following error:
[NULL @ 0xaee5440] Unable to find a suitable output format for ' -filter:v'
-filter:v: Invalid argument
So can anyone suggest how I can solve it?
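The error appears to come from the stray backslash-space in the middle of the one-line command: the shell turns it into a literal ' -filter:v' argument, which ffmpeg then tries to open as an output file. A sketch of how the combined command could look with one -filter:v option per output (same filters as above, line breaks only for readability):
ffmpeg -i in.mp4 \
  -filter:v "scale=iw*min(1170/iw\,300/ih):ih*min(1170/iw\,300/ih), pad=1170:300:(1170-iw*min(1170/iw\,300/ih))/2:(300-ih*min(1170/iw\,300/ih))/2" bigVideo.mp4 \
  -filter:v "scale=iw*min(365/iw\,172/ih):ih*min(365/iw\,172/ih), pad=365:172:(365-iw*min(365/iw\,172/ih))/2:(172-ih*min(365/iw\,172/ih))/2" smallVideo.mp4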
I tried to run both commands in parallel using the following script:
#!/bin/bash
for cmd in "$@"; do {
  echo "Process \"$cmd\" started";
  $cmd & pid=$!
  PID_LIST+=" $pid";
} done
trap "kill $PID_LIST" SIGINT
echo "Parallel processes have started";
wait $PID_LIST
echo
echo "All processes have completed";
You can save it as filename.sh and make it executable. After that, you need to pass two or more commands as arguments; for example, I ran it as:
./filename.sh "ffmpeg -i input.mp4 -s 720x480 output1.mp4" "ffmpeg -i input.mp4 -s 1170x480 output2.mp4"
Your command was a bit complicated for me, so I tried the parallel script with simpler commands.
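For reference, the wiki page linked in the question also shows a single-process approach: decode once, split the video with -filter_complex, and filter each branch. A sketch using the question's own filter expressions (untested; -map 0:a? carries the audio over if present):
ffmpeg -i in.mp4 -filter_complex "[0:v]split=2[v1][v2];[v1]scale=iw*min(1170/iw\,300/ih):ih*min(1170/iw\,300/ih),pad=1170:300:(1170-iw*min(1170/iw\,300/ih))/2:(300-ih*min(1170/iw\,300/ih))/2[big];[v2]scale=iw*min(365/iw\,172/ih):ih*min(365/iw\,172/ih),pad=365:172:(365-iw*min(365/iw\,172/ih))/2:(172-ih*min(365/iw\,172/ih))/2[small]" -map "[big]" -map 0:a? bigVideo.mp4 -map "[small]" -map 0:a? smallVideo.mp4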

How to capture screen and audio input and push to rtmp server?

I use avconv on Ubuntu. I found this command
avconv -f alsa -i pulse -f x11grab -r 25 -s 1280x720 -i :0.0+0,0 -acodec libfaac -vcodec libx264 -pre:0 lossless_ultrafast -threads 0 video.mkv
to save as a file, and this command
avconv -i ./test.m4v -re -c copy -f flv "rtmp://localhost/livestream"
to push live stream.
How can I combine them together?
Firstly, you should ask such questions on video.stackexchange.com and not here.
Secondly, let's take apart the two commands that you have found:
-f alsa - format for the input is alsa
-i pulse - you are reading pulse (the pulseaudio driver)
-f x11grab - planning to read from the screen on x11
-r 25 -s 1280x720 - rate and size of the incoming video stream
-i :0.0+0,0 - this selects where the incoming video comes from
-acodec libfaac - here the output options start; you're setting the audio codec to libfaac, or at least trying to... since this option was deprecated a long time ago, currently -c:a would be used
-vcodec libx264 - setting the video codec, except that you should be using -c:v
-pre:0 lossless_ultrafast -threads 0 - parameters controlling how the encoding is done
video.mkv - this is the output file
And the second one
-i ./test.m4v - the file you're reading
-re - "Read input at native frame rate"
-c copy - do not reencode, but simply pipe as is
-f flv - the container format
"rtmp://localhost/livestream" - where you're planning to write all that.
When you understand that, it should be clear that what you are planning to do is to use the input and encoding part from the first command, and the format and output from the second one.
I didn't have time to check that everything you found works; you should do that yourself.
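Putting that together, a sketch of the combined command (untested, as noted above; -c:a aac and -preset ultrafast are swapped in for the deprecated libfaac and -pre:0 options):
# Older builds may need "-strict experimental" for the native aac encoder.
avconv -f alsa -i pulse -f x11grab -r 25 -s 1280x720 -i :0.0+0,0 -c:a aac -c:v libx264 -preset ultrafast -threads 0 -f flv "rtmp://localhost/livestream"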

which commands line are used to take a screenshot on android device (except screencap)

I want to take a screenshot of my rooted Samsung device using the command line.
I have a time constraint: it shouldn't exceed 2 s.
So my question is: how can I take a screenshot using the command line?
The following commands can be used to take a screenshot.
adb shell /system/bin/screencap -p /sdcard/screenshot.png
adb pull /sdcard/screenshot.png screenshot.png
Update: After some research I've noticed a similar question with an answer that might help you:
If the slow part is the raw-to-PNG conversion (time adb shell screencap -p /sdcard/x.png is considerably slower than time adb shell screencap /sdcard/nonpng.raw, as it is in my case with games), this shell script by max_plenert is a better approach:
adb shell screencap /sdcard/mytmp/rock.raw
adb pull /sdcard/mytmp/rock.raw
adb shell rm /sdcard/mytmp/rock.raw
# remove the 12-byte header
tail -c +13 rock.raw > rock.rgba
# extract width, height and pixel format from the header
hexdump -e '/4 "%d"' -s 0 -n 4 rock.raw
hexdump -e '/4 "%d"' -s 4 -n 4 rock.raw
hexdump -e '/4 "%d"' -s 8 -n 4 rock.raw
# convert the raw RGBA data to PNG (480x800 are the width and height printed above)
convert -size 480x800 -depth 8 rock.rgba rock.png
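For completeness, a sketch that glues those steps together and reads the width and height from the header instead of hard-coding them (file names are placeholders; the rgba: prefix tells ImageMagick's convert the raw pixel format explicitly):
#!/bin/sh
# Capture the raw framebuffer dump and pull it to the host.
adb shell screencap /sdcard/shot.raw
adb pull /sdcard/shot.raw shot.raw
adb shell rm /sdcard/shot.raw
# Read width and height from the 12-byte header.
W=$(hexdump -e '/4 "%d"' -s 0 -n 4 shot.raw)
H=$(hexdump -e '/4 "%d"' -s 4 -n 4 shot.raw)
# Strip the header and convert the RGBA pixels to PNG.
tail -c +13 shot.raw > shot.rgba
convert -size "${W}x${H}" -depth 8 rgba:shot.rgba shot.png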