How to capture screen and audio input and push to rtmp server? - rtmp

I use avconv on Ubuntu, and I found this command
avconv -f alsa -i pulse -f x11grab -r 25 -s 1280x720 -i :0.0+0,0 -acodec libfaac -vcodec libx264 -pre:0 lossless_ultrafast -threads 0 video.mkv
to save as a file, and this command
avconv -i ./test.m4v -re -c copy -f flv "rtmp://localhost/livestream"
to push live stream.
How can I combine them together?

Firstly, you should ask such questions on video.stackexchange.com and not here.
Secondly, let's take apart the two commands that you have found:
-f alsa - format for the input is alsa
-i pulse - you are reading pulse (the pulseaudio driver)
-f x11grab - planning to read from the screen on x11
-r 25 -s 1280x720 - rate and size of the incoming video stream
-i :0.0+0,0 - this selects where the incoming video comes from
-acodec libfaac - here the output options start; you're setting the audio codec to libfaac, or at least trying to... this option has been deprecated for a long time, so nowadays -c:a would be used
-vcodec libx264 - setting the video codec, except that you should be using -c:v
-pre:0 lossless_ultrafast -threads 0 - preset and threading options controlling how the encoding should be done
video.mkv - this is the output file
And the second one
-i ./test.m4v - the file you're reading
-re - "Read input at native frame rate"
-c copy - do not reencode, but simply pipe as is
-f flv - the container format
"rtmp://localhost/livestream" - where you're planning to write all that.
When you understand that, it should be clear that what you are planning to do is to use the input and encoding part from the first command, and the format and output from the second one.
I didn't have time to check that everything you found still works; you should verify that yourself.
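Putting those two pieces together, the combined command would look roughly like this (untested; as noted above, your avconv build may want the newer -c:a/-c:v option names instead of the deprecated ones):
avconv -f alsa -i pulse -f x11grab -r 25 -s 1280x720 -i :0.0+0,0 -acodec libfaac -vcodec libx264 -pre:0 lossless_ultrafast -threads 0 -f flv "rtmp://localhost/livestream"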

Related

ffmpeg Error: Too many packets buffered for output stream

I am working on an Electron app that uses FFmpeg. I am developing on a Windows 10 machine, so I am using Command Prompt, and I have installed the npm package 'ffmpeg-ffprobe-static'. I can run ffmpeg commands in the terminal by calling the package like so:
C:\Users\martin\myproject\node_modules\ffmpeg-ffprobe-static>ffmpeg.exe -h
ffmpeg version 4.3 Copyright (c) 2000-2020 the FFmpeg developers
built with gcc 9.3.1 (GCC) 20200621
....
I have an ffmpeg command to combine two flac files into a single mp3 file that has been working fine until I encountered this error:
[mjpeg # 0000022537ace640] bits 85 is invalid
Error while decoding stream #0:1: Invalid data found when processing input
Too many packets buffered for output stream 0:0.
[libmp3lame # 0000022537ac3480] 3 frames left in the queue on closing
Conversion failed!
The same command works for other FLAC files, so there is something about these Billy Martin songs: they play perfectly fine in VLC but cause ffmpeg to crash:
//running this command:
ffmpeg.exe -i "G:\RenderTune broken files\broken flac example\05 - Billy Martin - Phillie Dog.flac" -i "G:\RenderTune broken files\broken flac example\08 - Billy Martin - Stax.flac" -y -filter_complex concat=n=2:v=0:a=1 -c:a libmp3lame -b:a 320k "G:\RenderTune broken files\broken flac example\COMBINED_FILES.mp3"
//results in this output:
[mjpeg # 0000022537ace640] bits 85 is invalid
Error while decoding stream #0:1: Invalid data found when processing input
Too many packets buffered for output stream 0:0.
[libmp3lame # 0000022537ac3480] 3 frames left in the queue on closing
Conversion failed!
I uploaded the broken flac files here: https://www.mediafire.com/folder/0v9hbfrap727y/broken+flac
If I run this same command with other flac files it works fine:
ffmpeg.exe -i "G:\RenderTune broken files\working flac example\5. Gossip.flac" -i "G:\RenderTune broken files\working flac example\6. Let The Children Play.flac" -y -filter_complex concat=n=2:v=0:a=1 -c:a libmp3lame -b:a 320k "G:\RenderTune broken files\working flac example\COMBINED_FILES.mp3"
I have tried adding -max_muxing_queue_size 9999 to my ffmpeg command as many posts suggest, but that does not fix it. Does anybody know how to prevent this error?
[edit]
I tried one of the posted solutions:
ffmpeg.exe -y -i "G:\RenderTune broken files\working flac example\5. Gossip.flac" -i "G:\RenderTune broken files\working flac example\6. Let The Children Play.flac" -filter_complex "[0:a][1:a]concat=n=2:v=0:a=1[a]" -map "[a]" -c:a libmp3lame -b:a 320k "G:\RenderTune broken files\working flac example\COMBINED_FILES.mp3"
which crashed with a different error:
[libmp3lame # 000002453725f3c0] Queue input is backward in time.9x
... lots of these [mp3 # ..] messages
[mp3 # 0000024537344400] Application provided invalid, non monotonically increasing dts to muxer in stream 0: 2449071 >= 2445999
[mp3 # 0000024537344400] Application provided invalid, non monotonically increasing dts to muxer in stream 0: 2449071 >= 2447151
[mp3 # 0000024537344400] Application provided invalid, non monotonically increasing dts to muxer in stream 0: 2449071 >= 2448303
[flac # 00000245372d4140] invalid residual315.7kbits/s speed=59.7x
[flac # 00000245372d4140] decode_frame() failed
Error while decoding stream #1:0: Invalid data found when processing input
size= 24871kB time=00:10:37.09 bitrate= 319.8kbits/s speed=60.5x
video:0kB audio:24869kB subtitle:0kB other streams:0kB global headers:0kB muxing overhead: 0.004673%
Something is wrong involving the album art image. Ignore it by adding an output label to your concat filter output and only mapping the concatenated audio:
ffmpeg.exe -y -i "G:\RenderTune broken files\working flac example\5. Gossip.flac" -i "G:\RenderTune broken files\working flac example\6. Let The Children Play.flac" -filter_complex "[0:a][1:a]concat=n=2:v=0:a=1[a]" -map "[a]" -c:a libmp3lame -b:a 320k "G:\RenderTune broken files\working flac example\COMBINED_FILES.mp3"
Otherwise the default stream selection will choose the filter output plus the broken image, causing the error shown in your question.
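If you want to confirm that diagnosis first, a quick sanity check is to list the streams with ffprobe and look for an embedded mjpeg video stream (the cover art), e.g. on one of the files from your command:
ffprobe -hide_banner "G:\RenderTune broken files\broken flac example\05 - Billy Martin - Phillie Dog.flac"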
This solved the problem for me (afterwards it still complains about broken flags, but that is easily fixed by re-encoding again or by playing with the settings).
Linux 21.04, ffmpeg
ffmpeg version 4.2.3
File example: here (available for a few days)
ffmpeg -fflags +genpts -i peta_alien.ogv -map 0 -c:v copy -c:a aac -max_muxing_queue_size 4000 peta_alien.avi

ffmpeg image sequence specify input framerate

I am trying to set the input framerate of a sequence of images (many folders):
If I am working with a single image sequence, everything works properly:
ffmpeg -framerate 30 -i folder01/img%05d.jpeg -filter:v "crop=640:360" -r 30 outfilm.mp4
Then, because I have more folders (and I was unable to get -i concat:filesequence1|filesequence2 working), I tried to use:
ffmpeg -framerate 30 -f concat -safe 0 -i filelist.txt -filter:v "crop=640:360" -r 30 outfilm.mp4
but I receive an error:
Option framerate not found.
Then, if I omit the -framerate 30, everything runs smoothly, but ffmpeg defaults to 25 fps for the input image sequences.
Any ideas on how to fix this?
Use
ffmpeg -f concat -safe 0 -r 30 -i filelist.txt -filter:v "crop=640:360" -r 30 outfilm.mp4
When -r is used as an input option, it generates new timestamps at the given rate and sets that as the input framerate.
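For reference, the filelist.txt read by the concat demuxer is just a list of file directives; a minimal sketch (the folder and frame names below are only placeholders matching the pattern in the question) could look like:
file 'folder01/img00001.jpeg'
file 'folder01/img00002.jpeg'
file 'folder02/img00001.jpeg'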

Which command-line commands can be used to take a screenshot on an Android device (other than screencap)?

I want to take a screenshot of my rooted Samsung device using the command line.
I have a time constraint: it shouldn't exceed 2 s.
So my question is: how can I take a screenshot using the command line?
The following commands can be used to take a screenshot.
adb shell /system/bin/screencap -p /sdcard/screenshot.png
adb pull /sdcard/screenshot.png screenshot.png
Source
Update: After some research I've noticed a similar question that has an answer that might help you:
If the slow part is the raw-to-PNG conversion (i.e. time adb shell screencap -p /sdcard/x.png is considerably slower than time adb shell screencap /sdcard/nonpng.raw, as it is for me in games), then this shell script by max_plenert is a better example:
adb shell screencap /sdcard/mytmp/rock.raw
adb pull /sdcard/mytmp/rock.raw
adb shell rm /sdcard/mytmp/rock.raw
# remove the 12-byte header
tail -c +13 rock.raw > rock.rgba
# extract width, height and pixel format from the header:
hexdump -e '/4 "%d"' -s 0 -n 4 rock.raw   # width
hexdump -e '/4 "%d"' -s 4 -n 4 rock.raw   # height
hexdump -e '/4 "%d"' -s 8 -n 4 rock.raw   # pixel format
convert -size 480x800 -depth 8 rock.rgba rock.png
Source

Wrong frame rate when saving camera feed to a file using FFMPEG

I'm trying to save the live feed from an IP camera to a file but the resulting file always plays much faster than the original speed.
I have tried with the following commands:
ffmpeg -i http://171.22.3.47/image -vcodec copy -an -t 900 c:\output.mp4
ffmpeg -i http://171.22.3.47/image -c:v libx264 -an c:\output.mp4
Does anybody know what I'm missing? Both commands create the file and I can use Windows Media Player to play them, but they run much faster.
Try forcing the output frame rate by adding the -r option:
ffmpeg -i http://171.22.3.47/image -c:v libx264 -an -r 30 c:\output.mp4
Alternatively, you can slow down the resulting video. This will make output.mp4 two times slower:
ffmpeg -i output.mp4 -filter:v "setpts=2.0*PTS" -c:v libx264 -an output-slow.mp4

avconv: getting aac to work. -strict experimental doesn't work

I am trying to get the following screencast command to work:
avconv -f alsa -ar 44100 -ac 2 -i default -acodec aac -strict experimental -ab 320k -f x11grab -s 1024x600 -r 24 -i :0.0 -vcodec rawvideo screencast.mp4
But I still get the following error:
encoder 'aac' is experimental and might produce bad results.
Add '-strict experimental' if you want to use it
Other sites suggest making sure that the -strict experimental appears immediately after the aac parameter, which I have done, to no effect.
Move both the -acodec aac and -strict experimental to somewhere after the last -i parameter in the command line, before the output file name.
Parameters to avconv are parsed as "avconv [input1 options] -i input1 [input2 options] -i input2 [output options] outputfile", so when you added these parameters before the second -i they were interpreted as options to the second input, not to the output.
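So the reordered command would look something like this (untested, keeping the rest of your options unchanged):
avconv -f alsa -ar 44100 -ac 2 -i default -f x11grab -s 1024x600 -r 24 -i :0.0 -acodec aac -strict experimental -ab 320k -vcodec rawvideo screencast.mp4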