Increasing Speed of Transcoding Process in AWS Elastic Transcoder When Multiple Outputs Are Produced?

I've been testing Elastic Transcoder for a while. I currently use Zencoder and plan to replace it with Elastic Transcoder. I have an issue with transcoding time when producing multiple outputs, and I'm trying to find a way to reduce it, if there is any way to achieve that.
I upload an input video file, which can be any format and any resolution. I want Elastic Transcoder to encode it into two output formats, mp4 and webm, at a resolution of 640x360, with one request. I've defined two presets for this.
The first one is for mp4 files, like this:
Codec: H.264
Codec Options: InterlacedMode:Progressive,MaxReferenceFrames:3,Level:3,ColorSpaceConversionMode:None,Profile:baseline
Maximum Number of Frames Between Keyframes: 90
Fixed Number of Frames Between Keyframes: false
Bit Rate: 720
Frame Rate: 29.97
Video Max Frame Rate:
Max Width: 640
Max Height: 360
Sizing Policy: Fill
Padding Policy: NoPad
Display Aspect Ratio: auto
The second one is for webm, like this:
Codec Options:
Maximum Number of Frames Between Keyframes: 90
Fixed Number of Frames Between Keyframes: false
Bit Rate: 600
Frame Rate: 30
Video Max Frame Rate:
Max Width: 640
Max Height: 360
Sizing Policy: Fill
Padding Policy: NoPad
Display Aspect Ratio: auto
In Zencoder, the encoding processes for mp4 and webm start concurrently. So, for example, if the input video duration is 13 seconds, the encoding takes approximately 13 seconds for the two outputs, mp4 and webm.
In AWS Elastic Transcoder, the same encoding takes approximately 26 seconds. I think that's because it doesn't encode the two outputs at the same time. This is a problem; I need to reduce this time.
Can I configure Elastic Transcoder to process the two outputs at the same time?
Or
Do I need to send two requests at the same time, one per output format, to reduce the transcoding time?

I've gone deeper into the details and found the solution.
Actually, AWS does process the two outputs concurrently; the problem is that the webm output takes much longer. That is because the VP9 codec was configured in the webm preset, and it is much slower than the VP8 codec. Changing it to VP8 solved my problem.
Both codecs have pros and cons, but nothing is more important than speed in my situation.
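For reference, both outputs already go into a single job. A minimal sketch with the AWS SDK for JavaScript (v2); the pipeline and preset IDs are hypothetical placeholders:
```typescript
import AWS from "aws-sdk";

const transcoder = new AWS.ElasticTranscoder({ region: "us-east-1" });

// Hypothetical IDs -- substitute your own pipeline and preset IDs.
const PIPELINE_ID = "1111111111111-abcde1";
const MP4_PRESET_ID = "0000000000000-000001"; // the H.264 preset above
const WEBM_PRESET_ID = "0000000000000-000002"; // the webm (VP8) preset

async function createTwoOutputJob(inputKey: string): Promise<void> {
  // One job with two Outputs entries; Elastic Transcoder works on both
  // outputs of the job concurrently, so no second request is needed.
  const result = await transcoder
    .createJob({
      PipelineId: PIPELINE_ID,
      Input: { Key: inputKey },
      Outputs: [
        { Key: `${inputKey}-640x360.mp4`, PresetId: MP4_PRESET_ID },
        { Key: `${inputKey}-640x360.webm`, PresetId: WEBM_PRESET_ID },
      ],
    })
    .promise();
  console.log("Created job:", result.Job?.Id);
}
```
Since each output carries its own preset, switching the webm output from VP9 to VP8 only means pointing its PresetId at a VP8 preset.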

Related

How to preload all .ts of m3u8 with video.js

I am using video.js to play m3u8 links.
I found that it continuously downloads .ts segments during playback.
I would like to show a loading overlay during a preparation period; when everything is done, it disappears and the user can watch the video just like a local file.
So, is it possible to preload all segments during the loading period before playing?
Update
I found that the m3u8 file contains the .ts links. Is it possible to predownload those blobs and intercept the fetch requests to return the downloaded blobs as responses? (See the sketch after the playlist below.)
#EXTM3U
#EXT-X-PLAYLIST-TYPE:VOD
#EXT-X-TARGETDURATION:60
#EXT-X-VERSION:3
#EXT-X-MEDIA-SEQUENCE:0
#EXT-START-TIME:3516
#EXT-X-PROGRAM-DATE-TIME:2021-02-19T14:55:59+08:00
#EXTINF:2.01,
2758527764_1103365203_1.ts?start=0&end=91931&type=mpegts&resolution=320x240
#EXT-X-PROGRAM-DATE-TIME:2021-02-19T14:56:01+08:00
#EXTINF:1.979,
2758527764_1103365203_1.ts?start=91932&end=171643&type=mpegts&resolution=320x240
#EXT-X-PROGRAM-DATE-TIME:2021-02-19T14:56:02+08:00
#EXTINF:1.932,
2758527764_1103365203_1.ts?start=171644&end=248159&type=mpegts&resolution=320x240
#EXT-X-PROGRAM-DATE-TIME:2021-02-19T14:56:04+08:00
#EXTINF:2.002,
2758527764_1103365203_1.ts?start=248160&end=318659&type=mpegts&resolution=320x240
#EXT-X-PROGRAM-DATE-TIME:2021-02-19T14:56:06+08:00
#EXTINF:2.064,
2758527764_1103365203_1.ts?start=318660&end=393295&type=mpegts&resolution=320x240
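For what it's worth, a minimal sketch of that interception idea using a service worker, which sees segment requests whether the player issues them via fetch or XHR. It assumes the .ts URLs have already been parsed out of the playlist and predownloaded into the Cache API; the cache name is made up:
```typescript
// sw.ts -- register this as a service worker from the player page.
/// <reference lib="webworker" />
declare const self: ServiceWorkerGlobalScope;

const SEGMENT_CACHE = "preloaded-ts-segments"; // hypothetical cache name

self.addEventListener("fetch", (event: FetchEvent) => {
  // Serve .ts segment requests from the cache when available,
  // falling back to the network otherwise.
  if (new URL(event.request.url).pathname.endsWith(".ts")) {
    event.respondWith(
      caches.open(SEGMENT_CACHE).then(async (cache) => {
        const hit = await cache.match(event.request);
        return hit ?? fetch(event.request);
      })
    );
  }
});
```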
I think that what you want to do isn't a use case for adaptive streaming, so you shouldn't use HLS or DASH.
Maybe you could achieve this type of experience using simple mp4 playback.
You can set the videojs.Vhs.GOAL_BUFFER_LENGTH value to a high figure. It's the number of seconds that will be pre-loaded. However, there is a playback issue when too much is buffered: all these buffered segments end up eating RAM. On mid-range mobile devices, more than a few minutes of pre-load makes the video unusable.
I use it with a few thousand students who can pre-load a larger chunk (10 min), so they are not continuously interrupted when on a low-bandwidth connection.
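A minimal sketch of that setting, assuming video.js 7+ where VHS is bundled (recent VHS versions also have a MAX_GOAL_BUFFER_LENGTH cap, which may need raising too; the element id is hypothetical):
```typescript
import videojs from "video.js";

// The cast is only because the stock typings don't declare Vhs;
// on much older versions the namespace was videojs.Hls instead.
const vhs = (videojs as any).Vhs;

// Pre-load up to 10 minutes ahead of the playhead (value is in seconds).
vhs.GOAL_BUFFER_LENGTH = 600;
// Recent VHS versions also cap the buffer; raise the cap to match.
vhs.MAX_GOAL_BUFFER_LENGTH = 600;

const player = videojs("my-video"); // "my-video" is a hypothetical element id
```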

How to limit the frame rate in Vulkan

I know that the present mode of the swap chain can be used to sync the frame rate to the refresh rate of the screen (with VK_PRESENT_MODE_FIFO_KHR for example).
But is there a way of limiting the frame rate to a fraction of the monitor refresh rate? (e.g. I want my application to run at 30 FPS instead of 60.)
In other words, is there a way of emulating what wglSwapIntervalEXT(2) does for OpenGL?
Vulkan is a low-level API. It tries to give you the tools you need to build the functionality you want.
As such, when you present an image, the API assumes that you want the image presented as soon as possible (within the restrictions of the swapchain). If you want to delay presentation, then you delay presentation. That is, you don't present the image until it's near the time to present a new image, based on your own CPU timings.
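To make that concrete, here is a sketch of the pacing logic, written in TypeScript only for consistency with the other examples here; in a real Vulkan renderer this sits in your native render loop, right before vkQueuePresentKHR, and all names are illustrative:
```typescript
// Target half of a 60 Hz refresh rate.
const TARGET_FPS = 30;
const FRAME_TIME_MS = 1000 / TARGET_FPS;

let nextPresentAt = performance.now();

// Delay presentation until the next frame-time boundary, then present.
async function presentPaced(present: () => void): Promise<void> {
  const wait = nextPresentAt - performance.now();
  if (wait > 0) {
    await new Promise((resolve) => setTimeout(resolve, wait));
  }
  present(); // stands in for vkQueuePresentKHR
  // Schedule the next boundary; don't let it drift behind real time.
  nextPresentAt = Math.max(nextPresentAt + FRAME_TIME_MS, performance.now());
}
```
With a FIFO swapchain on a 60 Hz display, presenting on every other refresh boundary like this yields an effective 30 FPS, much like wglSwapIntervalEXT(2).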

picamera mmalobj - Render last n seconds from Ringbuffer

Is there an easy way to play back video data stored in a stream object (http://picamera.readthedocs.io/en/release-1.13/api_streams.html) (e.g. a PiCameraCircularIO with h264 encoding) using one of the PiRenderers?
The mmalobj API (http://picamera.readthedocs.io/en/release-1.13/api_mmalobj.html#id3) seems to support the playback of buffers, though it is hard to understand, and everything I tried with an MMALVideoDecoder, setting its input to the data of a PiCameraCircularIO buffer, failed.
I'm using the circular stream advanced recipe (http://picamera.readthedocs.io/en/release-1.13/recipes2.html#splitting-to-from-a-circular-stream), but rather than saving the last n seconds to a file, I want to play them back.

What are good strategies to transfer an audio over web https?

My Android app is bandwidth constrained. It should work on a connection as slow as 64 kbps.
The user will record voice (max length 120 sec, average 60 sec) and then encode it. The encoder options are:
1. Lossless: FLAC or ALAC?
2. Lossy: MP3?
Say the file is 1024 kB, i.e. 1 MB. I am thinking of sending the file divided into chunks of 32 kB, and:
if a response is received within 1 sec of the request:
    exponentially increase the chunk size
else:
    binary-search for the exact chunk size
3. Is my approach to transferring audio from Android to a server feasible for low-speed connections?
4. Or is it better to push the entire file as multi-part form-data to the server in one https POST call?
Assuming you are doing this:
Record an audio file
Compress file
Upload file
You are uploading over HTTPS, which uses TCP. There is no reason to exponentially increase the chunk size, because TCP already does this internally to share bandwidth fairly; it is called slow start.
There is no reason to split the file into pieces and grow them yourself. Additionally, the maximum TCP packet size is 64 kB.
Just open a stream and send it. Let the underlying network protocols take care of the details.
On your server, you probably have to change server settings to allow large file uploads and increase the timeout settings.
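As a sketch of that single-request approach, here is its shape in TypeScript; the endpoint URL is hypothetical, and fetch stands in for whatever HTTP client the Android app actually uses:
```typescript
// A hypothetical endpoint; the recording arrives as a Blob (or File).
const UPLOAD_URL = "https://example.com/upload";

async function uploadRecording(recording: Blob): Promise<void> {
  const form = new FormData();
  form.append("audio", recording, "recording.mp3");

  // One HTTPS POST for the whole file; TCP's windowing and slow start
  // handle the ramp-up that the manual chunking scheme reimplements.
  const res = await fetch(UPLOAD_URL, { method: "POST", body: form });
  if (!res.ok) {
    throw new Error(`Upload failed with status ${res.status}`);
  }
}
```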

webrtc voice packetization size

I was wondering how I can change the voice packet size in a WebRTC application. I am using Opus and trying to change the packet size from 20 ms to 40 ms. I thought I could achieve it by changing ptime in the SDP. However, when I captured packets with Wireshark there was no difference between packets with ptime=20 and ptime=40, and the difference between consecutive timestamps is always 960. I would expect the difference to be 1920 for a 40 ms ptime. I imagine I am completely wrong in my assumptions; is there any way to actually change packet sizes?
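For context, this is presumably the kind of SDP munging the question refers to, sketched in TypeScript; note that a=ptime is only a hint that the sending stack is free to ignore, which would explain the unchanged 960-sample spacing:
```typescript
// Remove any existing a=ptime line, then append one at the end of the
// audio m-section.
function setPtime(sdp: string, ptimeMs: number): string {
  const lines = sdp
    .trim()
    .split(/\r?\n/)
    .filter((l) => !l.startsWith("a=ptime:"));
  const out: string[] = [];
  let inAudio = false;
  for (const line of lines) {
    if (line.startsWith("m=")) {
      if (inAudio) out.push(`a=ptime:${ptimeMs}`); // close audio section
      inAudio = line.startsWith("m=audio");
    }
    out.push(line);
  }
  if (inAudio) out.push(`a=ptime:${ptimeMs}`);
  return out.join("\r\n") + "\r\n";
}

// Usage before applying the local description (names illustrative):
// const answer = await pc.createAnswer();
// answer.sdp = setPtime(answer.sdp!, 40);
// await pc.setLocalDescription(answer);
```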