HTML5 Video File Size

I'm wondering what file size is ideal to use with the HTML5 video tag. I'm building a site for a friend and they have given me a file which is 80 MB. Is this too large? What is a good size to aim for?
Thanks in advance.

A broadband speed of 2.5 Mbit/s or more is recommended for streaming movies, for example to an Apple TV, Google TV, or a Sony TV Blu-ray Disc Player, and 10 Mbit/s for high-definition content.
One hour of video encoded at 300 kbit/s (a typical broadband video as of 2005, usually encoded at a 320 × 240 pixel window size) works out to:
(3,600 s × 300,000 bit/s) / (8 × 1024 × 1024) ≈ 129 MB of storage.
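That back-of-the-envelope calculation can be sketched in code (the numbers are the 2005-era example from above; the function name is just for illustration):

```python
# Estimate video file size (in MiB) from duration and bitrate.
def video_size_mib(duration_s: float, bitrate_bps: float) -> float:
    # bits -> bytes (divide by 8), bytes -> MiB (divide by 1024*1024)
    return duration_s * bitrate_bps / (8 * 1024 * 1024)

# One hour of video at 300 kbit/s:
print(round(video_size_mib(3600, 300_000)))  # ~129 MiB
```

The same function shows the original question's 80 MB file is nothing unusual: at 300 kbit/s that is roughly 37 minutes of video.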
Hope this helps. :)
I do NOT think that 80 MB is too big. Note that this depends largely on the viewer's internet speed (the average is about 2.5 Mbit/s). Mine is 3 Mbit/s, for example, and most web streaming videos play quite nicely.
It depends on whether your video is HD or not. The file size doesn't always matter by itself, because the video buffers while it is playing; what mainly matters is the video's resolution and bitrate. As said above, if the video is SD it will work well on most internet connections (the average is 2.5 Mbit/s, and some people have 3 Mbit/s like me), while HD video requires a much faster connection, more like 10 Mbit/s (yeah, that's a lot!). Note that emerging technologies like the fiber optics behind Google's new product, Google Fiber, allow much higher-definition videos to play very fluidly (Google Fiber's connection speed is about 1 Gbit/s).
Tell me if this helps. :)

Related

WebRTC: Bad Encoding Performance for Screensharing via CGDisplayStream (h264/vp8/vp9)

I am using the Objective-C framework for WebRTC to build a screen-sharing app. The video is captured using CGDisplayStream. I have a working demo, but at 2580x1080 I get only 3-4 fps. My googAvgEncodeMs is around 100-300 ms (ideally it should be under 10 ms), which explains why the screen sharing is far from fluid (30+ fps). I also switched between codecs (h264/vp8/vp9), but with all of them I get the same slow experience. The contentType in WebRTC is set to screen (values: [screen, realtime]).
The CPU usage of my Mac is then between 80-100%. My guess is that there is some major optimisation (qpMax, hardware acceleration, etc.) in the C++ code of the codecs that I have missed. Unfortunately, my knowledge of codecs is limited.
Also interesting: even when I lower the resolution to 320x240, googAvgEncodeMs is still in the range of 30-60 ms.
I am running this on a MacBook Pro 15 inch from 2018. When running a random WebRTC demo inside Chrome/Firefox etc., I get smoother results than with the vanilla WebRTC framework.
WebRTC uses software encoding, and that is the real culprit. Encoding 2580 x 1080 in software is not going to be practical. Try halving the horizontal and vertical resolution; performance will improve, with some loss in quality. Also, if you are doing screen sharing and smooth video is not critical, you can drop the frame rate to 10 frames per second. The logical long-term solution is to figure out how to incorporate hardware acceleration.
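The suggestions above can be quantified with a rough sketch, assuming encode cost scales roughly linearly with pixels per second (which is only approximately true for real codecs):

```python
# Rough pixel-rate comparison: software encode cost scales roughly
# with the number of pixels the encoder must process per second.
def pixel_rate(width: int, height: int, fps: int) -> int:
    return width * height * fps

full = pixel_rate(2580, 1080, 30)  # the original target resolution at 30 fps
half = pixel_rate(1290, 540, 30)   # both dimensions halved
low  = pixel_rate(1290, 540, 10)   # ...and frame rate dropped to 10 fps

print(full // half)  # 4  -> halving H and V cuts the work to a quarter
print(full // low)   # 12 -> dropping to 10 fps cuts it to one-twelfth
```

This is why halving each dimension helps so much more than it sounds: the encoder sees a quarter of the pixels per second, not half.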

Any follow-up to videoHD Stitching Project by Luke Yeager?

Has any progress been made in video stitching since the last answer by Luke Yeager?
I plan to develop a 360° surround view for my car.
Since the StitcHD project by Luke is already 5 years old,
I expect some progress to have been announced in the technology: faster GPU live video processing and better depth-map matching.
https://github.com/lukeyeager/StitcHD
I would prefer WebRTC video tools, but I didn't get any answer on how to connect 4 USB webcams and get 4 live video streams for stitching.
If you want to enumerate all devices using WebRTC,
https://webrtc.github.io/samples/src/content/devices/input-output/
shows how to do that and then open a specific device. You can open multiple devices at the same time but mind you that this is going to be taxing for the USB bus.
The canvas sample which shows how to grab a frame from the video stream and convert it into an image might be useful too.

WebRTC : Video black screen bitrate

Is the bit rate of black screen shown when video is muted same as the original video's bit rate or is it significantly less because it is just a black screen?
It is significantly less, since essentially no video information is being sent to the remote party. How much less depends on many factors (connection quality, etc.).
I just did a quick test, and the outgoing bit rate at 640x480 @ 27 fps was around 900 kbps to 1 Mbps. Disabling the video track of the stream resulted in an outgoing bitrate of 30 kbps.
Please keep in mind that this was only a simple test. You can gather this kind of information yourself by evaluating the reports from peerConnection.getStats().
Some documentation and libraries for getStats
https://www.w3.org/TR/webrtc-stats
https://github.com/muaz-khan/getStats
https://www.callstats.io/2015/07/06/basics-webrtc-getstats-api/
I came across chrome://webrtc-internals, which has built-in tracking for bit rates along with other useful features.
As seen in the graph, the bit rate before the video was muted was ~150 kbps, which dropped to ~30 kbps on muting the video.
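As a rough illustration of what those measured bitrates mean in practice, here is a sketch of the data transferred over a one-hour call, using the ~900 kbps (video on) and ~30 kbps (video muted) figures from the quick test above:

```python
# Data transferred (in MiB) for a call at a given average bitrate.
def data_mib(bitrate_kbps: float, seconds: int) -> float:
    return bitrate_kbps * 1000 * seconds / (8 * 1024 * 1024)

HOUR = 3600
print(round(data_mib(900, HOUR)))  # ~386 MiB with video enabled
print(round(data_mib(30, HOUR)))   # ~13 MiB with video muted
```

So muting video cuts the data used by roughly 30x, which is consistent with "significantly less" above.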

What characteristics should a .wav file produced by a TTS engine have to sound high quality?

I'm trying to generate a high-quality voice-over using the Microsoft Speech API. What kind of values should I pass to this constructor to guarantee high-quality audio?
The .wav file will later be fed to FFmpeg, so the audio will be re-encoded into a more compact form. My main goal is to keep the voice as clear as I can, but I really don't know which values guarantee the best quality as perceived by humans.
First of all, just to let you know, I haven't used this Speech API; I'll give you an answer based on my audio processing work.
You can choose EncodingFormat.Pcm for Pulse Code Modulation.
samplesPerSecond is the sampling frequency. Because it is voice, 16000 Hz will certainly cover it; if you are a real perfectionist you can go with 22050, for example. The higher the value, the larger the audio file. If file size isn't a problem you can even go with 32000 or 44100, but there won't be much noticeable difference.
bitsPerSample: go with 16 if possible.
channels: 1 or 2, mono or stereo; it won't affect the quality of the sound.
averageBytesPerSecond: this is samplesPerSecond × blockAlign (for example 22050 × 2 = 44100 for 16-bit mono).
blockAlign: this is bytesPerSample × numberOfChannels (for example, with 16-bit PCM mono audio, 16 bits are 2 bytes and mono is 1 channel, so blockAlign is 2 × 1 = 2).
As for the last argument, the byte array doesn't say much for itself; I'm not sure what it is for. I believe the first six arguments are enough for the audio to be generated.
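A minimal sketch of how those parameters fit together, using Python's standard wave module rather than the Speech API itself (it writes one second of 16-bit mono PCM silence at 22050 Hz; the parameter arithmetic is the point):

```python
import wave

samples_per_second = 22050  # sampling frequency: 16000-22050 Hz is plenty for voice
bits_per_sample = 16
channels = 1                # mono; stereo adds nothing for a single TTS voice

bytes_per_sample = bits_per_sample // 8                      # 2
block_align = bytes_per_sample * channels                    # 2 for 16-bit mono
average_bytes_per_second = samples_per_second * block_align  # 44100

# Write one second of silence with exactly these format parameters.
with wave.open("silence.wav", "wb") as wav:
    wav.setnchannels(channels)
    wav.setsampwidth(bytes_per_sample)
    wav.setframerate(samples_per_second)
    wav.writeframes(b"\x00\x00" * samples_per_second)  # one 2-byte frame per sample

print(block_align, average_bytes_per_second)  # 2 44100
```

Any WAV-aware tool (including FFmpeg, as mentioned in the question) will read the format fields straight out of the resulting header.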
I hope this was helpful
Cheers

iPhone app with audio files is just too big. How do I reduce the size?

I have a BlackBerry app that I am about to port to the iPhone. The app contains MP3 files, which makes the BlackBerry version about 10 MB in size (even after I reduced the quality of the files to 92 kbps). 10 MB won't do for the iPhone. Does anyone know of any best practices when it comes to including audio files in an iPhone app? I'm interested in suggested format(s), quality, channels (mono/stereo), etc. I will also need to play more than one file at a time (very important).
Thanks.
You could consider downloading (some of) the MP3 files after your app is installed. For low bitrates, though, you're better off recompressing with AAC (perhaps at 48-64 kbps); it provides better quality than MP3 at the same size. Also consider mono instead of stereo if it makes no audible difference.
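A quick sketch of the expected savings. The running time is a hypothetical figure chosen so the 92 kbps total matches the ~10 MB mentioned in the question:

```python
# Estimate compressed audio size (in MiB) from duration and bitrate.
def audio_size_mib(duration_s: float, bitrate_kbps: float) -> float:
    return duration_s * bitrate_kbps * 1000 / (8 * 1024 * 1024)

fifteen_min = 15 * 60  # hypothetical total running time of the audio clips
print(round(audio_size_mib(fifteen_min, 92), 1))  # ~9.9 MiB at 92 kbps MP3
print(round(audio_size_mib(fifteen_min, 56), 1))  # ~6.0 MiB at 56 kbps mono AAC
```

So re-encoding at 56 kbps AAC would cut the payload by roughly 40%, and AAC at that rate typically sounds at least as good as the 92 kbps MP3 originals.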
Why won't 10 MB for the iPhone work?
Applications on the iPhone can be as large as 2 GB; apps larger than 10 MB simply have to be downloaded over Wi-Fi or through iTunes.