Premiere export settings for background video

I'm not sure if it's allowed to ask these questions here, but it seems important for us web developers (even a bad dev like me :p ).
The question is about export settings for videos in Premiere. I'm looking to make a ~30 s background video like Airbnb's or PayPal's. Yesterday I checked the size of PayPal's video and it's only 10-15 MB for more than a minute of footage. How did they do it?

Obviously you want a low average bit rate. Things that can help with that are: keep the resolution low (you can scale it up a bit on the client); use H.264 High Profile (for the H.264 version); use 2-pass encoding; use variable bit rate. You can try increasing the GOP length too.
I assume there's no audio, so that shouldn't be an issue. (Can't remember if Adobe has an option for no sound track, but you can set the audio to a very low bit rate, or post-process it with ffmpeg or something to remove the audio track.)
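If you go the ffmpeg route anyway, here's a rough sketch of all of the above in one place (the bit rate, GOP length, and file names are illustrative, not known-good values):

ffmpeg -y -i in.mp4 -an -c:v libx264 -profile:v high -b:v 800k -g 250 -pass 1 -f mp4 /dev/null
ffmpeg -i in.mp4 -an -c:v libx264 -profile:v high -b:v 800k -g 250 -pass 2 out.mp4

-an strips the audio track, -profile:v high selects High Profile, -b:v sets the average bit rate, -g lengthens the GOP, and the two -pass runs give you 2-pass encoding.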
If you have any control over the video content, you can try to keep it compressible. For example, avoid video with lots of detail or rapid motion. You might be able to selectively blur parts in a way that doesn't look bad. If it doesn't move too fast, you might be able to decrease the frame rate.
If you really want to optimize, you'll probably need to experiment a lot.


Decreasing speed decreases sound quality

Decreasing the playback speed of AudioPlayer severely decreases the quality of the audio being played; the audio becomes very "noisy".
Is there any way to fix this or is it an issue with the just_audio implementation?
Reproduce:
final AudioPlayer player = AudioPlayer(); // Create the audio player
await player.setAsset("..."); // Load the audio file (should complete before playback)
await player.setSpeed(0.5); // Halve the playback speed
player.play(); // Start playing
Just to preface this answer, time stretching is a difficult thing to do in real-time because it has to stretch time without stretching the sound waves (stretching the sound waves would lower the frequency and hence the pitch, so it has to stretch time while filling the gaps with fabricated extensions to the existing sound waves). As a result, the very best real time algorithm will still introduce artifacts and distortions.
Now to answer your question, just_audio doesn't provide any options to change the time stretching algorithm, but it does use the best available algorithms for each platform, for general purpose usage. The Android implementation uses Sonic which is better quality than Android's own built-in algorithm. On iOS/macOS, AVAudioTimePitchAlgorithmTimeDomain is used which seems to produce the least distortion at speeds below 1.0 out of the different algorithms Apple provides, although newer iPhones/iOS versions may produce higher quality output. On web browsers, it uses whatever algorithm that web browser provides.
If you need to try out alternatives, you would need to make a copy of just_audio and edit the code that selects the algorithm. You are unlikely to find better options for Android and web, but you might like to experiment with the different iOS/macOS algorithms by searching for AVAudioTimePitchAlgorithmTimeDomain in the code and changing it to one of the other options listed in Apple's documentation, such as AVAudioTimePitchAlgorithmSpectral or AVAudioTimePitchAlgorithmVarispeed. You may find one of the other algorithms works better if you have a specialised use case.

Frame by frame decode using Media Source Extension

I've been digging through the Media Source Extension examples on the internet and haven't quite figured out a way to adapt them to my needs.
I'm looking to take a locally cached MP4/WebM video (with 100% keyframes and a 1:1 ratio of clusters/atoms to keyframes) and decode/display its frames non-sequentially (i.e. frame 10, 400, 2, 100, etc.), rendering these non-sequential frames on demand at rates from 0 to 60 fps. The simple non-MSE approach using the currentTime property fails due to the latency between setting this property and getting a frame displayed.
I realize this is totally outside normal expectations for video playback, but my application requires this type of non-sequential high speed playback. Ideally I can do this with h264 for GPU acceleration but I realize there could be some platform specific GPU buffers to contend with, though it seems that a zero frame buffer should be possible (see here). I am hoping that MSE can accomplish this non-sequential high framerate low latency playback, but I know I'm asking for a lot.
Questions:
Will appendBuffer accept a single WebM cluster / MP4 Atom made up of a single keyframe, and also be able to decode at a high frequency (60fps)?
Do you think what I'm trying to do is possible in the browser?
Any help, insight, or code suggestions/examples would be much appreciated.
Thanks!
Update 4/5/16
I was able to get MSE mostly working with single frame MP4 fragments in Firefox, Edge, and Chrome. However, Chrome seems to be running into the frame buffer issue linked above and I haven't found a way to pre-process a MP4 to invoke this "low delay" mode. Anyone have any clues if it's possible to create such a file with an existing tool like MP4Box?
Firefox and Edge decode/display the individual frames immediately with very little latency, but of course something breaks once I load this video into a Three.js WebGL project (no video output, no errors). I'm ignoring this for now as I'd much rather have things working on Chrome as I'll be targeting Android as well.
I was able to get this working pretty well. The key was getting Chrome to enter its "low delay" mode by muxing a specially crafted MP4 file using modified mp4box sources. I added one line in movie_fragments.c so it read:
if (movie->moov->mvex->mehd && movie->moov->mvex->mehd->fragment_duration) {
    trex->track->Header->duration = 0;
    Media_SetDuration(trex->track);
    movie->moov->mvex->mehd->fragment_duration = 0; /* the added line: zero out the MEHD fragment duration */
}
Now every MP4 created will have the MEHD fragment duration set to 0, which causes Chrome to process it as a live stream.
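For reference, the playback side is then just an append loop along these lines (a sketch rather than my exact code; the codec string and the pre-fetched fragment arrays are placeholders you'd adapt):

function playFrames(
  video: HTMLVideoElement,
  initSegment: Uint8Array, // the ftyp+moov init segment, fetched ahead of time
  frames: Uint8Array[],    // one single-keyframe moof+mdat fragment per frame
  order: number[],         // the display order you want, e.g. [10, 400, 2, 100]
): void {
  const ms = new MediaSource();
  video.src = URL.createObjectURL(ms);
  ms.addEventListener('sourceopen', () => {
    const sb = ms.addSourceBuffer('video/mp4; codecs="avc1.42E01E"'); // must match your mux
    sb.mode = 'sequence'; // the browser re-stamps fragments in append order, so any frame order works
    let next = 0;
    sb.addEventListener('updateend', () => {
      if (next < order.length) sb.appendBuffer(frames[order[next++]]);
    });
    sb.appendBuffer(initSegment); // its updateend kicks off the frame appends
    video.play();
  });
}

Sequence mode is what lets the fragments go in out of display order, since the browser assigns timestamps as they are appended.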
I still have one remaining issue related to the timestampOffset property, which in combination with the FPS set in the media fragments controls the playback speed. Since I'm looking to control the FPS directly, I don't want any added delay from the MSE playback engine. I'll post a separate question here to address that.
Thanks,
Dustin

WebRTC - Peerconnection constraints

I've been working on a WebRTC videoconferencing app which is working great, taking into account the current state of WebRTC.
However, I have been exploring the possibilities of adding constraints to the video and audio streams being sent over by PeerConnection,
more specifically to improve the performance of the video.
When videoconferencing on old (slow) laptops, we noticed that the quality of the image is really high but the frames per second are low. The stream is choppy.
As for audio quality, we'd give Chrome an 8.5 but Firefox only a 5.5 to 6.
I am not really interested in applying constraints to getUserMedia, since this stream is shown to the user as well and we don't want to change anything about this local output (unless there isn't another way).
I have found a lot of information in the W3C drafts about MediaStreams and WebRTC itself.
These define certain constraints like default fps, minfps, minwidth, and minheight of the image. On webrtc.org there is also a lot of information available, like choosing a codec, etc.
But these settings can only be made "under the hood". It seems they cannot be addressed from the RTCPeerConnection API level?
Certain examples on the net manipulate the SDP strings in the offer/answer part of the WebRTC handshake; is this the way to go?
TL;DR: How do I apply (and what is the best way to apply) constraints on WebRTC like minfps, maxfps, default fps, minwidth, maxwidth, image DPI, video and audio bandwidth, audio sample rate (kHz), and anything else that improves the performance or quality of the stream(s)?
Big thanks in advance!
Right now, most of those can't be set in Firefox or Chrome. A few can be adjusted (with care/pain) in the SDP, but even if there's an SDP option defined for something it doesn't mean that the browsers look at it.
Both Mozilla and Google are looking to improve CPU overload detection and reaction (reduce frame size dynamically, etc). Right now, this effectively isn't being done. Upcoming releases of FF (FF24) will adapt to the capture resolution (as a maximum), but we don't have constraints for that yet, just about:config prefs (see media.*). That would allow you to set a different default resolution for Firefox.
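For the SDP route, the usual trick is to munge the offer/answer before handing it to setLocalDescription. A hedged sketch of one of the few knobs that does work today, capping video bandwidth with a b=AS line (the cap value is illustrative, and not every browser honors it):

// Insert "b=AS:<kbps>" (an application-specific bandwidth cap) into the video
// media section. Per the SDP grammar, b= lines go right after the c= line.
function capVideoBandwidth(sdp: string, kbps: number): string {
  const out: string[] = [];
  let inVideo = false;
  for (const line of sdp.split('\r\n')) {
    if (line.startsWith('m=')) inVideo = line.startsWith('m=video');
    out.push(line);
    if (inVideo && line.startsWith('c=')) {
      out.push(`b=AS:${kbps}`);
      inVideo = false; // only one b= line per video section
    }
  }
  return out.join('\r\n');
}

// e.g. offer.sdp = capVideoBandwidth(offer.sdp, 256); before pc.setLocalDescription(offer)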

control speed of sound xcode

I'm wondering whether it's possible to slow down a sound in Xcode. I mean I'll add some .mp3 file to my supporting files in Xcode and create an app which will be able to speed it up or slow it down, for example with a slider. Is it even possible? If yes, could anyone help me with some idea? Thanks
AVAudioPlayer has a rate property which should be able to help you accomplish your goal.
http://developer.apple.com/library/IOS/#documentation/AVFoundation/Reference/AVAudioPlayerClassReference/Reference/Reference.html
@property float rate
Discussion
The audio player's playback rate. This property's default value of 1.0 provides normal playback rate. The available range is from 0.5 for half-speed playback through 2.0 for double-speed playback.
To set an audio player's playback rate, you must first enable rate adjustment as described in the enableRate property description.
I also found a good SO post on the AVAudioPlayer's rate:
AVAudioPlayer rate
Seems like, as you mentioned, you could set a slider with values from 0.5 to 2.0 and on valueChanged modify the audio player's rate by using
- (IBAction)changeValue:(UISlider *)sender
{
    // _audioPlayer is an assumed AVAudioPlayer ivar
    if ([_audioPlayer respondsToSelector:@selector(setEnableRate:)])
        _audioPlayer.enableRate = YES;    // rate changes are ignored unless this is enabled
    if ([_audioPlayer respondsToSelector:@selector(setRate:)])
        _audioPlayer.rate = sender.value; // rate is a plain float, not an NSNumber
}
Playing PCM audio at a faster or slower rate than its sample rate changes its pitch and also introduces considerable artefacts. If you're OK with this, the approach you would use is to decode the MP3 into PCM audio and then use a direct digital synthesis (DDS) oscillator to control the playback rate.
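The core of that approach is just a phase accumulator stepping through the decoded samples. A minimal sketch of the idea (shown in TypeScript for brevity, with linear interpolation assumed; not an iOS implementation):

// A DDS-style variable-rate reader over decoded PCM: a fractional "phase"
// steps through the source at the requested rate, interpolating between
// neighbouring samples. Note this changes pitch along with speed.
function resample(pcm: Float32Array, rate: number): Float32Array {
  const out = new Float32Array(Math.floor(pcm.length / rate));
  let phase = 0; // fractional read position into the source
  for (let i = 0; i < out.length; i++) {
    const i0 = Math.floor(phase);
    const frac = phase - i0;
    const s0 = pcm[i0];
    const s1 = i0 + 1 < pcm.length ? pcm[i0 + 1] : s0;
    out[i] = s0 + (s1 - s0) * frac; // linear interpolation
    phase += rate; // rate > 1 speeds up (higher pitch); rate < 1 slows down
  }
  return out;
}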
If you want to maintain pitch but change speed, you need an audio time-stretching algorithm.
Dirac3 from DSP Dimension is a commercial product that can do it, and it is available for licensing for use in iOS applications. Other commercial solutions exist.
DSP Dimension's blog provides a helpful tutorial on the basics of how to implement pitch-shifting
using an FFT. Time stretching is essentially the same process. However, there's a fair bit of secret sauce in the DIRAC plug-in that they don't tell you about.
Be warned that unless you're an electronics engineering, physics, or maths graduate, you'll probably find it tough going to fill in the blanks.

Webcam as magnifier

My father has very poor sight and needs a magnifier if he wants to read anything. He needs a really big zoom, up to 100 times. He wanted to buy a special reader, such as this. But, unfortunately, he can't afford it, since it costs more than 2000 euros. I was thinking of trying to make something myself, probably with a webcam. Do you think it would be possible? Is there any webcam that can zoom that well? Maybe a normal (HD) camera? He has a huge TV, so I was thinking of making a holder for the cam and connecting it to the TV. He could then sit on the couch and read.
Any thoughts? I'm looking for the best and cheapest possible solution. Any help would be great.
You would need to retrofit the webcam with a closeup lens. Make sure you get a high resolution webcam, 720p or better.
I'm less sure about the software, but surely there is something out there that will let you use a webcam as a simple viewer.
Of note: I am posting this from my iPad. With the GoodReader software, you can pretty much zoom as large as you want, and the iPad is much less expensive than the device you described.