Resize one video into 2 sizes in a single command - ffmpeg-php

When a user uploads a video, I create two sizes of it. Earlier, I was doing this in two steps, like the following.
First Size:
ffmpeg -i in.mp4 -filter:v "scale=iw*min(1170/iw\,300/ih):ih*min(1170/iw\,300/ih), pad=1170:300:(1170-iw*min(1170/iw\,300/ih))/2:(300-ih*min(1170/iw\,300/ih))/2" out.mp4
Second Size:
ffmpeg -i in.mp4 -filter:v "scale=iw*min(365/iw\,172/ih):ih*min(365/iw\,172/ih), pad=365:172:(365-iw*min(365/iw\,172/ih))/2:(172-ih*min(365/iw\,172/ih))/2" out1.mp4
But now, to reduce processing time, I want to combine these 2 steps into one. I have read https://trac.ffmpeg.org/wiki/Creating%20multiple%20outputs and made the following command:
ffmpeg -i in.mp4 -filter:v "scale=iw*min(1170/iw\,300/ih):ih*min(1170/iw\,300/ih), pad=1170:300:(1170-iw*min(1170/iw\,300/ih))/2:(300-ih*min(1170/iw\,300/ih))/2" bigVideo.mp4 \ -filter:v "scale=iw*min(365/iw\,172/ih):ih*min(365/iw\,172/ih), pad=365:172:(365-iw*min(365/iw\,172/ih))/2:(172-ih*min(365/iw\,172/ih))/2" smallVideo.mp4
But it is giving following error
[NULL # 0xaee5440] Unable to find a suitable output format for ' -filter:v'
-filter:v: Invalid argument
So can anyone suggest how I can solve this?
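For what it's worth, the error message points at the stray backslash: on a single line, "\ " escapes the space, so the shell passes a literal argument " -filter:v", which ffmpeg then tries to open as an output file. A sketch of the corrected command, with the same filters but each backslash genuinely at the end of its line so the shell joins the lines into one command:
ffmpeg -i in.mp4 \
  -filter:v "scale=iw*min(1170/iw\,300/ih):ih*min(1170/iw\,300/ih), pad=1170:300:(1170-iw*min(1170/iw\,300/ih))/2:(300-ih*min(1170/iw\,300/ih))/2" bigVideo.mp4 \
  -filter:v "scale=iw*min(365/iw\,172/ih):ih*min(365/iw\,172/ih), pad=365:172:(365-iw*min(365/iw\,172/ih))/2:(172-ih*min(365/iw\,172/ih))/2" smallVideo.mp4
This decodes the input once and filters it per output, which is exactly the "one input, several outputs, filtered" pattern from the wiki page linked above.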

I tried to run both commands using the following script:
#!/bin/bash
# Run each argument as a background job and collect the PIDs.
for cmd in "$@"; do {    # "$@" (all arguments), not "$#" (argument count)
  echo "Process \"$cmd\" started"
  $cmd & pid=$!
  PID_LIST+=" $pid"
} done
trap "kill $PID_LIST" SIGINT
echo "Parallel processes have started"
wait $PID_LIST
echo
echo "All processes have completed"
You can save it as filename.sh and make it executable. After that, you need to pass two or more commands as arguments; for example, I ran:
./filename.sh "ffmpeg -i input.mp4 -s 720x480 output1.mp4" "ffmpeg -i input.mp4 -s 1170x480 output2.mp4"
Your command was a bit complicated for me, so I tried simpler commands with the parallel script.

Related

I need help creating a shell script to tell me if a process is up and running so I can monitor that process in PRTG

I'm using a monitoring system called PRTG to monitor our environment. The PRTG KB suggested monitoring processes with a sensor type called SSH Script. The script needs to be stored in /var/prtg/scripts.
I found a script that someone used for PRTG:
#!/bin/sh
pgrep wrapper $1 2>&1 1>/dev/null
if [ $? -ne 0 ]; then
  echo "1:$?:$1 Down"
else
  echo "0:$?:OK"
fi
However, PRTG is returning the following error code within the Web GUI:
Response not well-formed: "pgrep: only one pattern can be provided Try `pgrep --help' for more information. 1:0:Wrapper Down "
However, when I run the script on the Linux server it prints out:
0:1:OK
So my question would be what would be the best script to use to tell PRTG that a process is "Down" or "UP"?
###################
Editing for further clarification:
I changed the script and it works great on the command line... but it appears that the issue is with how PRTG is reading the output; apparently it's not in the correct format. So here's my script:
#!/bin/bash
SERVICE="wrapper"
if pgrep -x "$SERVICE" >/dev/null
then
  echo "$SERVICE is running"
else
  echo "$SERVICE stopped"
fi
This is what PRTG is erroring out with:
Response not well-formed: "wrapper is running "
So... PRTG is saying that the sensor I'm using wants the script output in this format:
The returned data for standard SSH Script sensors must be in the following
format:
returncode:value:message
Value has to be a 64-bit integer or float. It is used as the resulting value
for this sensor (for example, bytes, milliseconds) and stored in the
database. The message can be any string (maximum length: 2000 characters).
The SSH script returncode has to be one of the following values:
VALUE  DESCRIPTION
0      OK
1      WARNING
2      System Error
3      Protocol Error
4      Content Error
So I guess the question now is: how do I get that script to output what PRTG wants to see?
With the changes you made, the script should look like the version below, called like this example: servicecheck.sh sshd
#!/bin/sh
# Silence both stdout and stderr (the original 2>&1 1>/dev/null order
# still let pgrep's errors reach stdout), and capture pgrep's exit
# status before the test overwrites $?.
pgrep -x "$1" >/dev/null 2>&1
status=$?
if [ $status -ne 0 ]; then
  echo "1:$status:$1 Down"
else
  echo "0:$status:OK"
fi

GNU Parallel -q option causing BCP "unknown option" errors (different string quotes on local vs remote hosts)

I'm seeing very strange behavior when using GNU Parallel to distribute export jobs that run bcp from mssql-tools. It appears that when using the -q option for parallel, strings are interpreted differently on the local host than on remote hosts.
Running as a plain loop through the files on the local host, the bcp processes throw no errors.
However, when distributing the file exports with parallel, the bcp processes executing on the local host throw
/opt/mssql-tools/bin/bcp: unknown option
errors, while those executing on remote hosts (via a --sshloginfile param) finish successfully. The basic code being run looks like...
# setting some vars to pass
TO_SERVER_ODBCDSN="-D -S MyMSSQLServer"
TO_SERVER_IP="-S 172.18.54.22"
DB="$dest_db" #TODO: enforce being more careful with this value
TABLE="$tablename" # MUST exist beforehand, case matters
USER=$(tail -n+1 $source_home/mssql-creds.txt | head -1)
PASSWORD=$(tail -n+2 $source_home/mssql-creds.txt | head -1)
DATAFILES="/some/path/to/files/"
TARGET_GLOB="*.tsv"
RECOMMEDED_IMPORT_MODE='-c' # makes a HUGE difference, see https://stackoverflow.com/a/16310219/8236733
DELIMITER="\\\t" # (currently not used) DO NOT use format like "'\t'", nested quotes seem to cause hard-to-catch error, want "\t" literal
....
bcpexport() {
filename=$1
TO_SERVER_ODBCDSN=$2
DB=$3
TABLE=$4 # MUST exist beforehand, case matters
USER=$5
PASSWORD=$6
RECOMMEDED_IMPORT_MODE=$7 # makes a HUGE difference, see https://stackoverflow.com/a/16310219/8236733
DELIMITER=$8 # not currently used
WORKDIR=$9
LOGDIR=${10}
....
/opt/mssql-tools/bin/bcp "$TABLE" in "$localfile" \
$TO_SERVER_ODBCDSN \
-U $USER -P $PASSWORD \
-d $DB \
$RECOMMEDED_IMPORT_MODE
-t "\t" \
-e ${localfile}.bcperror.log
}
export -f bcpexport
parallelization_pernode=5
parallel -q -j $parallelization_pernode \
--sshloginfile $source_home/parallel-nodes.txt \
--env bcpexport \
bcpexport {} "$TO_SERVER_ODBCDSN" $DB $TABLE $USER $PASSWORD $RECOMMEDED_IMPORT_MODE $DELIMITER $workingdir $logdir \
::: $DATAFILES/$TARGET_GLOB #from hdfs nfs gateway
Looking at the bash interpretation of the processes (by running ps -aux | grep bcp on the hosts that parallel is given in the --sshloginfile), for the remote hosts we see...
/bin/bash -c bcpexport() { ... /opt/mssql-tools/bin/bcp "$TABLE" in "$localfile" $TO_SERVER_ODBCDSN -U $USER -P $PASSWORD -d $DB $RECOMMEDED_IMPORT_MODE; -t "\t" -e ${localfile}.bcperror.log; ...
for the local host, the bash interpretation is...
/bin/bash -c bcpexport() { ... /opt/mssql-tools/bin/bcp "$TABLE" in "$localfile" $TO_SERVER_ODBCDSN -U $USER -P $PASSWORD -d $DB $RECOMMEDED_IMPORT_MODE; -t "\t" -e ${localfile}.bcperror.log; ...
that is, they look the same.
My current thought is that the "\t" in the bcp command is being interpreted in a problematic way. Debugging parallel without vs. with the -q option, we see...
$ parallel -j 5 --sshloginfile ./parallel-nodes.txt echo "Number {}: Running on \`hostname\`: \t" ::: 1 2 3 4 5
Number 4: Running on HW04.ucera.local: t
Number 1: Running on HW04.ucera.local: t
Number 2: Running on HW03.ucera.local: t
Number 5: Running on HW03.ucera.local: t
Number 3: Running on HW02.ucera.local: t
$ parallel -q -j 5 --sshloginfile ./parallel-nodes.txt echo "Number {}: Running on \`hostname\`: \t" ::: 1 2 3 4 5
Number 1: Running on `hostname`:
Number 4: Running on `hostname`:
Number 3: Running on `hostname`: \t
Number 2: Running on `hostname`: \t
Number 5: Running on `hostname`: \t
The bcp command needs the "\t" literal, not the "t" literal, and I suspect several other similar string corruptions. (I also believe \t is the default field terminator for bcp anyway, but this is just an example and I want to keep \t for code clarity.)
Basically, I need the strings to be exactly the same on both local and remote hosts, even if the strings contain spaces or escape characters. (Note: I think this was not always the case; I have older scripts on other machines that don't have this problem.)
I'm not sure whether this counts more as a parallel problem or a bcp problem (currently I think something is going wrong with parallel's -q option, but I'm not sure). Does anyone have debugging suggestions or fixes? Any ideas of what could be happening?
Firstly, the reason why hostname is not expanded is -q: it quotes the backtick so that it is not expanded.
Secondly, I think what you see is the difference in behaviour between the shell built-in echo and /bin/echo. The built-in echo depends on the shell. Here I compare echo \\\\t in different shells:
$ parallel --onall --tag -S sh#lo,bash#lo,csh#lo,tcsh#lo,ksh#lo,zsh#lo echo \\\\t ::: a
bash#lo \t a
tcsh#lo a
sh#lo a
ksh#lo \t a
zsh#lo a
csh#lo \t a
That does not, however, get you closer to a solution. If I were you, I would use env_parallel to copy the environment variables. And if the login shell on the remote systems is not the same as your shell, then set PARALLEL_SHELL to force using that shell.
So:
#!/bin/bash
env_parallel --session
# setting some vars to pass
TO_SERVER_ODBCDSN="-D -S MyMSSQLServer"
:
:
PARALLEL_SHELL=bash env_parallel -q -j $parallelization_pernode ...
(There is no need for either --env or 'export -f' when using 'env_parallel --session'.)
# Cleanup (not needed if this is the last line in the script)
env_parallel --end-session

How to make ffmpeg exit when Input is broken

I have written a bash script to keep an ffmpeg command up and running:
#!/bin/bash
while :
do
    echo `ffmpeg -re -i http://domain.com/index400.m3u8 -vcodec copy -acodec copy -f mpegts udp://127.0.0.1:10000?pkt_size=1316`
done
The problem is that sometimes the input breaks, yet ffmpeg does not exit when that happens, so it never gets restarted by the script above. Instead, the same process keeps running even though it is not transferring any packets to the UDP address (output), and I have to manually go into the terminal and kill it (kill -9 #processID).
I need a way to make ffmpeg kill its own process whenever the input is broken.
Appreciate your help.
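One approach worth trying (a sketch, assuming your ffmpeg build supports the generic rw_timeout protocol option; check with ffmpeg -h protocol=http) is to give the HTTP input an I/O timeout, so a stalled input makes ffmpeg exit on its own and the loop restarts it:
#!/bin/bash
while :
do
    # rw_timeout is in microseconds; 10000000 = give up after 10 s
    # without any read/write activity on the input, so ffmpeg exits
    # and the loop restarts it.
    ffmpeg -rw_timeout 10000000 -re -i http://domain.com/index400.m3u8 \
        -vcodec copy -acodec copy -f mpegts \
        "udp://127.0.0.1:10000?pkt_size=1316"
    sleep 1   # brief pause so a hard failure does not spin the loop
done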

How to capture screen and audio input and push to rtmp server?

I use avconv on Ubuntu. I found this command
avconv -f alsa -i pulse -f x11grab -r 25 -s 1280x720 -i :0.0+0,0 -acodec libfaac -vcodec libx264 -pre:0 lossless_ultrafast -threads 0 video.mkv
to save as a file, and this command
avconv -i ./test.m4v -re -c copy -f flv "rtmp://localhost/livestream"
to push live stream.
How can I combine them?
Firstly, you should ask such questions on video.stackexchange.com and not here.
Secondly, let's take apart the two commands that you have found:
-f alsa - format for the input is alsa
-i pulse - you are reading pulse (the pulseaudio driver)
-f x11grab - planning to read from the screen on x11
-r 25 -s 1280x720 - rate and size of the incoming video stream
-i :0.0+0,0 - this selects where the incoming video comes from
-acodec libfaac - here the output options start; you're setting the audio codec to libfaac, or at least trying to... since this option was deprecated a long time ago, currently -c:a would be used
-vcodec libx264 - setting the video codec, except that you should be using -c:v
-pre:0 lossless_ultrafast -threads 0 - some parameters about how the encoding should be done
video.mkv - this is the output file
And the second one
-i ./test.m4v - the file you're reading
-re - "Read input at native frame rate"
-c copy - do not reencode, but simply pipe as is
-f flv - the container format
"rtmp://localhost/livestream" - where you're planning to write all that.
When you understand that, it should be clear that what you are planning to do is to use the input and encoding part from the first command, and the format and output from the second one.
I didn't have time to check that everything you found works; you should do that yourself.
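Putting that together, a sketch of the combined command (it keeps the encoder names and preset from the question's own commands, which, as noted above, may need updating to the modern -c:a/-c:v spellings on a current build):
avconv -f alsa -i pulse \
       -f x11grab -r 25 -s 1280x720 -i :0.0+0,0 \
       -acodec libfaac -vcodec libx264 -pre:0 lossless_ultrafast -threads 0 \
       -f flv "rtmp://localhost/livestream"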

Run RapSearch-Program with Torque PBS and qsub

My problem is that I have a cluster server with Torque PBS and want to use it to run a sequence comparison with the program rapsearch.
The normal RapSearch command is:
./rapsearch -q protein.fasta -d database -o output -e 0.001 -v 10 -x t -z 32
Now I want to run it with 2 nodes on the cluster-server.
I've tried echo "./rapsearch -q protein.fasta -d database -o output -e 0.001 -v 10 -x t -z 32" | qsub -l nodes=2, but nothing happened.
Do you have any suggestions? Where am I going wrong?
Standard output (and error output) files are placed in your home directory by default; take a look. You are looking for a file named STDIN.e[numbers]; it will contain the error message.
However, I see that you're using ./rapsearch but are not being explicit about what directory you're in. Your problem is therefore probably a matter of changing into the directory that you submitted from. With your terminal in the directory of the rapsearch executable, try echo "cd \$PBS_O_WORKDIR && ./rapsearch [arguments]" | qsub [arguments] to submit your job to the cluster.
Other tips:
You could add rapsearch to your path if you use it often. Then you can use it like a regular command anywhere. It's a matter of adding the line export PATH=/full/path/to/rapsearch/bin:$PATH to your .bashrc file.
Create a submission script for use with qsub. Here is a good example.
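For instance, a minimal sketch of such a submission script (the directives are standard Torque/PBS, but the job name and resource values are placeholders to adapt; note that unless rapsearch can run across machines via MPI, nodes=2 will not speed up a single run, since -z only controls threads on one node):
#!/bin/bash
#PBS -N rapsearch          # job name (placeholder)
#PBS -l nodes=1:ppn=32     # one node, 32 cores to match -z 32
#PBS -l walltime=24:00:00  # adjust to your expected runtime
#PBS -j oe                 # merge stdout and stderr into one file

cd "$PBS_O_WORKDIR"        # run from the directory qsub was called in
./rapsearch -q protein.fasta -d database -o output -e 0.001 -v 10 -x t -z 32
Save it as, say, rapsearch.pbs and submit with qsub rapsearch.pbs.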