When using ffmpeg to stream the output file directly to S3, you have to use "-movflags frag_keyframe", so the generated file is a fragmented MP4.
When I try to play these files from S3 it takes a long time to start playing. I have tried the movflags recommended in the MDN docs and all the possible combinations.
ffmpeg -i https://notreal-bucket.s3-us-west-1.amazonaws.com/video/video.mov -f mp4 -movflags frag_keyframe+empty_moov pipe:1 | aws s3 cp - s3://notreal-bucket/video/output.mp4
Link of the file: https://happy-testing.s3-eu-west-1.amazonaws.com/stack-overflow/help.mp4
Edit:
I think it's not possible:
1) According to https://www.adobe.com/devnet/video/articles/mp4_movie_atom.html, the moov atom should be at the beginning of the file: "If the planned delivery method is progressive download or streaming (RTMP or HTTP), the moov atom will have to be moved to the beginning of the file. This ensures that the required movie information is downloaded first, enabling playback to start right away."
2) With ffmpeg writing directly to S3, the moov atom at the beginning of the file will be empty.
3) With a fragmented video with empty moov atom, Chrome downloads the moof of each fragment before starting to play.
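For comparison, when the output can be written to a seekable local file first instead of being piped, ffmpeg can move the moov atom to the front with the faststart flag. A sketch, reusing the placeholder bucket names from above:

# faststart needs a second pass over a seekable file, which is why it
# cannot be combined with pipe:1; write locally first, then upload.
ffmpeg -i https://notreal-bucket.s3-us-west-1.amazonaws.com/video/video.mov \
  -c copy -movflags +faststart /tmp/output.mp4
aws s3 cp /tmp/output.mp4 s3://notreal-bucket/video/output.mp4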
Related
I'm getting this error while trying to process the video with AWS MediaConvert:
Demuxer: [ReadPacketData File read failed - end of file hit at length [13155941]. Is file truncated?]
The video is being recorded from the iOS Safari/Chrome browsers with the MIME type video/mp4.
I'm using the npm module aws-sdk.
It works fine for all videos (video/mp4 and other formats as well) selected using a file input (i.e., from my device).
Just as an update: using AWS Elastic Transcoder works with the Safari-recorded videos.
That error is most likely because the source file contains variable-size track fragment offsets, which is a characteristic of MediaRecorder outputs. MediaConvert was enhanced to handle these types of inputs as of November 11th, 2021, so I would recommend testing the assets again.
If you continue to have issues, you can try remuxing the source file in ffmpeg with a command such as:
ffmpeg -i source.mp4 -c copy remuxed_source.mp4
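Before re-submitting a job, it can also help to verify that the source decodes cleanly end to end. A quick diagnostic sketch (not part of the original answer):

# Decode the entire file without writing any output; demux/decode
# errors (including truncation) are printed to stderr.
ffmpeg -v error -i source.mp4 -f null -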
I'm using the https://github.com/pedroSG94/RTSP-Server library for streaming, and I want to capture frames continuously.
ffmpeg is well suited to recording streams:
ffmpeg -i "rtsp://xx:yy#x.x.x.x:554/cam/realmonitor?channel=1&subtype=0" -vcodec copy -acodec copy -y x.mp4
Here:
-i: the RTSP URL to record
-vcodec copy: copy the video stream as-is, without re-encoding
-acodec copy: copy the audio stream as-is, without re-encoding
-y: overwrite the output file if it already exists
x.mp4: the name of the file to be created
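Since the goal is to capture frames continuously, the same stream can also be dumped as still images. A sketch, assuming one frame per second is enough (the fps value and output pattern are placeholders):

# Grab one frame per second from the stream and save numbered JPEGs.
mkdir -p frames
ffmpeg -i "rtsp://xx:yy@x.x.x.x:554/cam/realmonitor?channel=1&subtype=0" \
  -vf fps=1 "frames/frame_%05d.jpg"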
I would like to automate audio enhancement for self-created videos.
The videos all have one thing in common:
the mic is a typical laptop mic (mid quality with lots of white noise)
I have a white noise profile created with Audacity, and if I manually apply the noise reduction to the audio track, the result sounds nearly like it was recorded in a studio.
Now I want to run an app or script to:
apply the noise reduction with a given noise profile and write the new audio file as MP3 to disk
apply the new audio track to the given video (replace existing voice track with the new track)
save the new video as a new file to the disk
Anyone around who can help with this? I need to decide whether to build or buy...
And I do not know if tools even exist to automate these steps...
My development requirements:
a platform-neutral language or platform, preferably Java
or
if applications exist (e.g. under Linux, so nearly platform neutral):
the apps or packages, with a brief description of how to use them (e.g. Audacity and ffmpeg, but I did not find anything helpful to get started)
Requirements:
sox
ffmpeg
Create a longer sample recording where you don't speak at all but capture the common noises of your mic and surroundings.
Convert it to WAV, name it noisesample.wav and save it to your video folder. Open a command line, navigate to the folder and execute:
sox noisesample.wav -n noiseprof noise_profile_file
Create folder mp3
mkdir mp3
Convert every video to mp3 for later processing
for f in *.mp4; do ffmpeg -i "$f" -c:a libmp3lame "mp3/${f%.mp4}.mp3"; done
Change directory to mp3 and apply sox noise reduction (like Audacity's) to every mp3 file:
cd mp3
for f in *.mp3; do sox "$f" "${f%.mp3}_new.mp3" noisered noise_profile_file 0.26; done
Change to the parent directory again, create an out directory, and combine the new mp3 files with the mp4 files:
cd ..
mkdir out
for f in *.mp4; do ffmpeg -i "$f" -i "mp3/${f%.mp4}_new.mp3" -map 0:v -map 1:a -c:v copy -shortest "out/${f%.mp4}.mp4"; done
This should do the trick you asked for.
As long as your noise profile remains the same, you can reuse it for all your videos.
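A minimal sketch that wraps the steps above into a single reusable script (same file and folder names as above; run it from the video folder containing noisesample.wav):

#!/bin/bash
# Build the noise profile once, then denoise and remux every mp4
# in the current directory, mirroring the manual steps above.
set -e
sox noisesample.wav -n noiseprof noise_profile_file
mkdir -p mp3 out
for f in *.mp4; do
  ffmpeg -i "$f" -c:a libmp3lame "mp3/${f%.mp4}.mp3"
  sox "mp3/${f%.mp4}.mp3" "mp3/${f%.mp4}_new.mp3" noisered noise_profile_file 0.26
  ffmpeg -i "$f" -i "mp3/${f%.mp4}_new.mp3" -map 0:v -map 1:a -c:v copy -shortest "out/${f%.mp4}.mp4"
done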
I have just downloaded GNU Radio's installer for the windows environment and am having trouble with simple audio playback from a wav file.
The audio file plays, but it is 'choppy' and seems to be playing back at the wrong sample rate (difficult to tell for sure due to the intermittent audio). I am using the correct sample rate from the file (11.025 kHz). I have also tried adding a throttle block between the file and the audio sink, although I know that is not recommended. I'm not sure if there are issues with the Windows port or if some additional hardware configuration is needed (beyond what is typically done on Linux). Attached is a screen grab of the GNU Radio flowgraph.
Note: the same 'choppy' audio is heard when the WAV file source block is replaced by a signal source block.
I have a 27GB file that I am trying to move from an AWS Linux EC2 instance to S3. I've tried both the 's3put' command and the 's3cmd put' command. Both work with a test file. Neither works with the large file. No errors are given; the command returns immediately, but nothing happens.
s3cmd put bigfile.tsv s3://bucket/bigfile.tsv
Though you can upload objects to S3 with sizes up to 5TB, S3 has a size limit of 5GB for an individual PUT operation.
In order to load files larger than 5GB (or even files larger than 100MB) you are going to want to use the multipart upload feature of S3.
http://docs.amazonwebservices.com/AmazonS3/latest/dev/UploadingObjects.html
http://aws.typepad.com/aws/2010/11/amazon-s3-multipart-upload.html
(Ignore the outdated description of a 5GB object limit in the above blog post. The current limit is 5TB.)
The boto library for Python supports multipart upload, and the latest boto software includes an "s3multiput" command line tool that takes care of the complexities for you and even parallelizes part uploads.
https://github.com/boto/boto
The file did not exist, doh. I realised this after running the s3 commands in verbose mode by adding the -v flag:
s3cmd put -v bigfile.tsv s3://bucket/bigfile.tsv
s3cmd version 1.1.0 supports multipart upload as part of the "put" command, but it's still in beta (currently).
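For reference, newer s3cmd releases perform multipart uploads automatically for large files, and the part size can be tuned (flag name per current s3cmd documentation; availability may vary by version):

# Upload in 100 MB parts; s3cmd splits and reassembles automatically.
s3cmd put --multipart-chunk-size-mb=100 bigfile.tsv s3://bucket/bigfile.tsv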