How to automate audio track replacement in a video?

I would like to automate audio enhancement for my self-created videos.
The videos all have one thing in common:
the mic is a classic laptop mic (mid quality with lots of white noise)
I have a white noise profile created with Audacity, and if I manually apply the noise reduction to the audio track, the result sounds nearly studio-quality.
Now I want to run an app or script to:
apply the noise reduction with a given noise profile and write the new audio file as MP3 to disk
apply the new audio track to the given video (replace the existing voice track with the new one)
save the new video as a new file to disk
Is anyone around who can help with this? I need to decide whether to build or buy...
And I do not even know whether tools exist to automate these steps...
My development requirements:
a platform-neutral language or platform, preferably Java
or,
if applications exist (e.g. under Linux, so nearly platform neutral),
the app or packages with a brief description of how to handle them (e.g. Audacity and ffmpeg, but I did not find anything helpful to get started)

Requirements:
sox
ffmpeg
Create a longer sample recording where you don't speak at all but capture the common noise from your mic and surroundings.
Convert it to wav, name it noisesample.wav and save it to your video folder. Open a command line, navigate to the folder and execute:
sox noisesample.wav -n noiseprof noise_profile_file
Create folder mp3
mkdir mp3
Convert every video to mp3 for later processing
for f in *.mp4; do ffmpeg -i "$f" -c:a libmp3lame "mp3/${f%.mp4}.mp3"; done
Change directory to mp3 and apply sox noise reduction (like Audacity's) to every mp3 file
cd mp3
for f in *.mp3; do sox "$f" "${f%.mp3}_new.mp3" noisered noise_profile_file 0.26; done
Change to the parent directory again, create an out directory and combine the new mp3 files with the mp4 files
cd ..
mkdir out
for f in *.mp4; do ffmpeg -i "$f" -i "mp3/${f%.mp4}_new.mp3" -map 0:v -map 1:a -c:v copy -shortest "out/${f%.mp4}.mp4"; done
This should do the trick you asked for.
As long as your noise profile remains the same, you can reuse it for all your videos.
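If you want to run the whole chain as one step, here is a minimal wrapper sketch built from the commands above (the noisesample.wav name and the 0.26 sensitivity come from this answer; adjust to taste):
#!/bin/bash
# One-shot version of the steps above; run it from the folder containing
# your .mp4 files and a noisesample.wav recording.
set -e

# 1. Build the noise profile from the silent sample recording
sox noisesample.wav -n noiseprof noise_profile_file

mkdir -p mp3 out

for f in *.mp4; do
  # 2. Extract the audio track as mp3
  ffmpeg -i "$f" -c:a libmp3lame "mp3/${f%.mp4}.mp3"
  # 3. Apply the noise reduction with the profile created above
  sox "mp3/${f%.mp4}.mp3" "mp3/${f%.mp4}_new.mp3" noisered noise_profile_file 0.26
  # 4. Remux: keep the original video stream, replace the audio
  ffmpeg -i "$f" -i "mp3/${f%.mp4}_new.mp3" -map 0:v -map 1:a -c:v copy -shortest "out/${f%.mp4}.mp4"
done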

Related

AWS MediaConvert Throws Error 1076 for Safari Recorded Videos

I'm getting this error
Demuxer: [ReadPacketData File read failed - end of file hit at length [13155941]. Is file truncated?] while trying to process the video with AWS MediaConvert.
The video is being recorded from the iOS Safari/Chrome browsers with the MIME type video/mp4.
I'm using the npm module aws-sdk.
It works fine for all the videos (video/mp4 and other formats as well) selected using the file input (i.e. from my device).
Just as an update: using AWS Elastic Transcoder works with Safari-recorded videos.
That error is most likely because the source file contained variable size track fragment offsets, which is a characteristic of MediaRecorder outputs. MediaConvert was enhanced with the ability to handle these types of inputs as of November 11th, 2021, so I would recommend testing the assets again.
If you continue to have issues, you can try remuxing the source file in ffmpeg with a command such as:
ffmpeg -i source.mp4 -c copy remuxed_source.mp4
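If you have a whole folder of such recordings to reprocess, the same remux can be looped in the shell; this is only a sketch and assumes the recordings are local .mp4 files:
# write a remuxed copy of every recording next to the source file
for f in *.mp4; do ffmpeg -i "$f" -c copy "remuxed_$f"; done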

How to capture frames continuously during camera streaming

I'm using the https://github.com/pedroSG94/RTSP-Server library module for streaming and I want to capture frames continuously.
ffmpeg works well for recording streams:
ffmpeg -i "rtsp://xx:yy#x.x.x.x:554/cam/realmonitor?channel=1&subtype=0" -vcodec copy -acodec copy -y x.mp4
Here:
-i: The RTSP URL that needs to be recorded
-vcodec copy: Copy the original video stream without re-encoding
-acodec copy: Copy the original audio stream without re-encoding
-y: Overwrite the output file if it already exists
x.mp4: The name of the file to be created.
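The command above records the stream to a file; if you literally need continuous frame captures, ffmpeg can also write periodic still images with its fps filter. A sketch (the 1 frame per second rate and JPEG output are just example choices):
# save one JPEG per second from the RTSP stream as frame_0001.jpg, frame_0002.jpg, ...
ffmpeg -i "rtsp://xx:yy#x.x.x.x:554/cam/realmonitor?channel=1&subtype=0" -vf fps=1 -q:v 2 frame_%04d.jpg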

ffmpeg - fragmented mp4 takes a long time to start playing on Chrome

When using ffmpeg to stream the output file to S3, it is required to use "-movflags frag_keyframe", and the generated file will be a fragmented mp4.
When I try to play this file from S3 it takes a long time to start playing. I have tried the recommended movflags from the MDN docs and all the possible combinations.
ffmpeg -i https://notreal-bucket.s3-us-west-1.amazonaws.com/video/video.mov -f mp4 -movflags frag_keyframe+empty_moov pipe:1 | aws s3 cp - s3://notreal-bucket/video/output.mp4
Link of the file: https://happy-testing.s3-eu-west-1.amazonaws.com/stack-overflow/help.mp4
Edit:
I think it's not possible.
1) According to https://www.adobe.com/devnet/video/articles/mp4_movie_atom.html, the moov atom should be at the beginning of the file: "If the planned delivery method is progressive download or streaming (RTMP or HTTP), the moov atom will have to be moved to the beginning of the file. This ensures that the required movie information is downloaded first, enabling playback to start right away."
2) With ffmpeg writing directly to s3, the moov atom at the beginning will be empty.
3) With a fragmented video with empty moov atom, Chrome downloads the moof of each fragment before starting to play.
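A possible workaround, if you can afford a local temporary file instead of piping straight to S3, is to write a non-fragmented mp4 with the moov atom moved to the front (+faststart needs a seekable output, which is exactly why it cannot be combined with pipe:1) and upload it afterwards. A sketch, reusing the example bucket names from above:
# second pass moves the moov atom to the beginning, then upload the seekable file
ffmpeg -i https://notreal-bucket.s3-us-west-1.amazonaws.com/video/video.mov -movflags +faststart /tmp/output.mp4
aws s3 cp /tmp/output.mp4 s3://notreal-bucket/video/output.mp4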

Using GNU Radio audio sink in a Windows environment

I have just downloaded GNU Radio's installer for the Windows environment and am having trouble with simple audio playback from a wav file.
The audio file will play but is 'choppy' and seems to be playing back at the wrong sample rate (difficult to tell for sure due to the intermittent audio). I am using the correct sample rate from the file (11.025 kHz). I have also tried adding a throttle block between the file and the audio sink, although I know that is not recommended. I'm not sure if there are 'issues' with the port to Windows or if there is some additional hardware config needed (above what is typically done in Linux). Attached is a screen grab of the GNU Radio flowgraph.
Note: the same 'choppy' audio is heard when the wav file source block is replaced by a signal source block.

Batch OCRing PDFs that haven't already been OCR'd

If I have 10,000 PDFs, some of which have been OCRed, some of which have 1 page that has been OCRed but the rest of the pages have not, how can I go through all the PDFs and only OCR the pages that haven't already been done?
This is exactly what I was looking for: I have thousands of scanned PDF files, where some were already OCR'ed and some were not.
So I combined information I found on forums and Stack Overflow, and made my own solution that does EXACTLY that, which I have summarized for you here:
scan through all subdirectories recursively for PDF files;
check if the PDF was already OCR'ed, and if not, process the PDF with OCR with high quality, in the language(s) you can specify;
save the OCR'ed PDF in place, as PDF/A, overwriting the old (not-OCR'ed) one.
I am on Windows 10, and could not find the definitive answer. I tried doing this with Acrobat Pro, but that gave me many errors, and Acrobat's batch processing stops on every error or password-protected file. I also tried many other batch-OCR tools on Windows, but none worked well.
I spent countless hours manually checking which files already had a text-layer "under" the image.
UNTIL! Microsoft announced that it was now very easy to run Linux under Windows, on the same machine, on the same filesystem.
There are many more tools and utilities available on Linux than Windows, so I thought I would give that a try.
So, here it is, step by step:
Enable the Windows subsystem for Linux in the Windows Control Panel; there are many guides. Google it. It's a couple of minutes.
Install Linux from the Windows Store. Open the Windows Store, search for Ubuntu, and install. Takes around 5 minutes.
Now you have the "Ubuntu app". Run it. It gives you the Linux bash, with file access to your Windows files through /mnt/c. It's magic!
You need some Linux "apps", namely pdffonts and ocrmypdf, which you can install with sudo apt install poppler-utils (which provides pdffonts) and sudo apt install ocrmypdf. We will use these apps to check if there is an embedded font in a PDF, and if not, OCR the PDF (see note below).
Install the very small bash script (below) to your home directory ~.
Go to (cd) the directory where all your PDF's are saved. For example: /mnt/c/Users/name/OneDrive/Documents.
Run the command: find . -type f -name "*.pdf" -exec /your/homedir/pdf-ocr.sh '{}' \;
Done!
Running this might, of course, take a long time, depending on how many PDF's you have, and how many of those are not OCR'ed yet.
Here is the sh-script. You should save it somewhere in your home folder so that it is easy to call from anywhere. Like so:
type cd ~. This will bring you to your home folder.
type pico pdf-ocr.sh. This will bring up an editor. Paste the below script code. Then press Ctrl+X, and press Y. Your file is now saved.
type sudo chmod +x pdf-ocr.sh. This will give the script permission to be run.
#!/bin/bash
# List the fonts on the first 5 pages; no embedded fonts usually means no text layer yet
MYFONTS=$(pdffonts -l 5 "$1" | tail -n +3 | cut -d' ' -f1 | sort | uniq)
if [ "$MYFONTS" = '' ] || [ "$MYFONTS" = '[none]' ]; then
    echo "Not yet OCR'ed: $1 -------- Processing...."
    echo " "
    # OCR the file in place; -s skips pages that already contain text
    ocrmypdf -l eng+deu+nld -s "$1" "$1"
    echo " "
else
    echo "Already OCR'ed: $1"
    echo " "
fi
What does this do?
Well, the find command looks up all PDF files in the current directory including subdirectories. It then "sends" these files to the script, in which pdffonts checks if there are embedded fonts. If so, skip the file and try the next one. If no embedded fonts are found, use ocrmypdf to do the OCR-ing.
I found the quality of OCR from ocrmypdf VERY good, even better than Acrobat's. You can of course tweak the settings. I can imagine for example that you might want to use other languages for OCR than eng+deu+nld. You can look up all options here: https://ocrmypdf.readthedocs.io/en/latest/
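For example, a tweaked call might look like the line below; --deskew, --rotate-pages and --output-type pdfa are ocrmypdf options, but check the linked docs for your version:
# English-only OCR, straighten and auto-rotate scanned pages, write PDF/A output
ocrmypdf -l eng --deskew --rotate-pages --output-type pdfa input.pdf output.pdf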
Note: I am making the assumption here that if a PDF file has no embedded fonts (so it's basically an image (scan) in a PDF file), it has not been OCR'ed. I know that this might not always be accurate and/or true, but for me it is enough to determine which files to put through OCR, so that it is not necessary to re-do hundreds or thousands of PDF files....
I know that it is a bit more hassle to install Linux under Windows, but it is very easy to do if you have basic Linux skills. For me it was worth the effort because I have now made a "one click" batch processor that works. I could not find a solution for that with Windows tools.
I hope someone finds this and finds this useful. If anyone has improvements, please post them here.
Thanks.
Jos Jonkeren
Why don't you re-OCR everything? The amount of time you spend agonizing over repeated work probably exceeds the time taken for the work itself.
If by OCRed you mean that they contain the text in machine-readable form, you could use a library like Apache PDFBox to try to extract the text from the second page of the document. If it throws an error or returns garbage, it's most likely not OCRed.
Unburying this thread.
You can know which PDF files have already been OCRed by testing them with pdffonts. If there are embedded fonts, it's very probable that the PDF is already OCRed.
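A quick shell check along those lines (pdffonts prints a two-line header, so any remaining output means at least one embedded font):
if [ -n "$(pdffonts file.pdf | tail -n +3)" ]; then echo "probably already OCR'ed"; else echo "needs OCR"; fi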
As for the batch processing, I wrote a little script that can batch OCR to pdf/word/excel/csv output format.
You may find it at https://github.com/deajan/pmOCR
pmOCR (poor man's OCR) is a wrapper for the Abbyy OCR CLI for Linux or the open-source Tesseract 3 solution.