Objective :: For a Google Hangouts session, I want to calculate how long 30 FPS quality is maintained.
E.g. :: For a 5-minute Google Hangouts session, using the "googFrameRateReceived" variable in chrome://webrtc-internals/, I need to know what percentage of the time 30 FPS quality is maintained.
I couldn't find where the "googFrameRateReceived" variable is defined in the source code.
The frame rate received can be queried using the getStats() API of WebRTC. See https://webrtc.github.io/samples/src/content/peerconnection/constraints/ for the most authoritative sample. The W3C WebRTC spec also has a related example, though you need to look for a different type of report.
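As a starting point, here is a minimal sketch (browser TypeScript, assuming `pc` is an already-connected RTCPeerConnection receiving video) that polls `getStats()` once per second and reports the percentage of samples at or above 30 FPS. It reads the standards-based `framesPerSecond` field from the `inbound-rtp` report, which is the modern counterpart of the legacy `googFrameRateReceived` shown in chrome://webrtc-internals:

```ts
// Sketch: sample the received video frame rate once per second for `durationMs`
// and return the percentage of samples at or above 30 FPS.
async function fractionAt30Fps(pc: RTCPeerConnection, durationMs: number): Promise<number> {
  let samples = 0;
  let samplesAtTarget = 0;
  const end = Date.now() + durationMs;
  while (Date.now() < end) {
    const stats = await pc.getStats();
    stats.forEach((report: any) => {
      // 'inbound-rtp' video reports expose framesPerSecond on recent browsers.
      if (report.type === "inbound-rtp" && report.kind === "video") {
        samples++;
        if ((report.framesPerSecond ?? 0) >= 30) samplesAtTarget++;
      }
    });
    await new Promise((resolve) => setTimeout(resolve, 1000));
  }
  return samples === 0 ? 0 : (100 * samplesAtTarget) / samples;
}

// Usage: fractionAt30Fps(pc, 5 * 60 * 1000).then((pct) => console.log(`${pct.toFixed(1)}% of samples at >= 30 FPS`));
```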
We are building an app with video upload functionality. We were wondering if we could use a YouTube account to upload all of our users' videos. They should only be accessible via our app... we don't mind if ads show up while viewing them.
If the app grows, we're looking at potentially thousands of uploads per day.
Does YouTube support this? If a few videos get flagged, will the "master" account be shut down?
Finally, if YouTube is not the right choice, do you have any recommendations? We would like to avoid hosting the videos ourselves as much as possible... since streaming large amounts of video is an enormous challenge for a startup.
Thank you!
Some information on the video uploads:
https://developers.google.com/youtube/v3/docs/videos/insert
This method supports media upload. Uploaded files must conform to these constraints:
Maximum file size: 128GB
Accepted Media MIME types: video/*, application/octet-stream
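For reference, a rough upload sketch using the googleapis Node.js client (an assumption on my part; any client that can call videos.insert with a resumable upload works) looks something like this. The file name, metadata, and the `auth` object (OAuth 2.0 credentials with the youtube.upload scope) are placeholders for your own values:

```ts
import fs from "node:fs";
import { google } from "googleapis";

async function uploadUserVideo(auth: any): Promise<void> {
  const youtube = google.youtube({ version: "v3", auth });
  const res = await youtube.videos.insert({
    part: ["snippet", "status"],
    requestBody: {
      snippet: { title: "User upload", description: "Uploaded from our app" }, // placeholder metadata
      status: { privacyStatus: "unlisted" }, // keeps the video out of public search/browse
    },
    media: { body: fs.createReadStream("user-video.mp4") }, // placeholder file
  });
  console.log("Uploaded video id:", res.data.id);
}
```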
You can get the quota information here: https://developers.google.com/youtube/v3/getting-started#quota
Projects that enable the YouTube Data API have a default quota
allocation of 1 million units per day, an amount sufficient for the
overwhelming majority of our API users.
...
Different types of operations have different quota costs.
A simple read operation that only retrieves the ID of each returned
resource has a cost of approximately 1 unit. A write operation has a
cost of approximately 50 units. A video upload has a cost of
approximately 1600 units.
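A quick back-of-the-envelope check on those numbers: at roughly 1,600 units per upload, the default 1,000,000-unit daily quota works out to about 1,000,000 / 1,600 ≈ 625 uploads per day, so "thousands of uploads per day" would only be feasible after requesting a higher quota from Google.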
Yes, YouTube can block API access, not only over flagged videos but at any time, as described here: https://developers.google.com/youtube/terms/api-services-terms-of-service#termination
24.2 Termination by YouTube. Notwithstanding anything to the contrary, YouTube reserves the right to (i) suspend or terminate access to, or
use of, any aspects of the YouTube API Services by you, your API
Client(s) (and those acting on your behalf), and (ii) terminate the
Agreement (or any portion thereof), as applied to any specific user or
API Client, category of users or API Clients, or all users or API
Clients at any time. For example, we may need to exercise such rights
in instances of your breach of this Agreement, court order, when we
believe there to have been misconduct or conduct which may create
potential liability for YouTube or its Affiliates. Although we will
try to give you reasonable notice, we have no obligation to do so.
For the welcome screen of my app, we are trying to serve up a webpage in a webview that consists of a video and some text. (We want to go this route so that we can quickly update the welcome screen and test changes on the fly, versus having to submit and get approval each time.)
The video is only 8.6 MB and is currently played via HTML5, hosted on S3 and served via CloudFront. However, playback still tends to be a bit choppy at times. Does anyone have any recommendations for an optimal way to host and serve the video so that it plays smoothly? Are there any specific S3 or CloudFront settings anyone would recommend that could help?
Thanks in advance for any help anyone can provide.
The most common technique currently is to use ABR in parallel with a CDN to provide smooth playback.
ABR, Adaptive Bit Rate, involves making multiple copies of the video at different bit rates, from low to high and hosting these on the server.
The client receives an index file for the videos, e.g. an m3u8 manifest file, and then chooses the best bit rate for the current conditions to allow smooth playback without buffering.
If network conditions improve the client will 'step up' bit rates and if it gets worse it will 'step down' bit rates.
Typically a low or medium bit rate is chosen as the first one to allow quick and smooth start up.
You can see this effect on services like Netflix as they start up, and you can also see it on YouTube if you right click the video and select 'Stats for Nerds'.
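For illustration, a master m3u8 manifest for such a bit rate ladder looks roughly like the example below. The bandwidths, resolutions, and paths are made up; in practice the transcoder generates this file for you:

```
#EXTM3U
#EXT-X-STREAM-INF:BANDWIDTH=800000,RESOLUTION=640x360
low/index.m3u8
#EXT-X-STREAM-INF:BANDWIDTH=2400000,RESOLUTION=1280x720
medium/index.m3u8
#EXT-X-STREAM-INF:BANDWIDTH=5000000,RESOLUTION=1920x1080
high/index.m3u8
```

The player reads this index first and then requests segments from whichever variant playlist best fits the bandwidth it is currently measuring.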
Some links for ABR in AWS Elastic Transcoder - you can set the bit rates you want; for example, see the note below from their FAQ regarding HLS jobs:
Specify that the transcoding job create a playlist that references the outputs. You should order your bit rates from lowest to highest, with the audio only stream last, since this order will be maintained in the generated playlist file. Once your transcoding job has completed, the output bucket will contain a proper arrangement of your master and individual M3U8 playlists, and MPEG-2 TS media stream fragments.
Take a look at the sample request on the page linked below, which includes two different bit rates (video service providers will generally have more than two, but this gives you a feel for the approach):
http://docs.aws.amazon.com/elastictranscoder/latest/developerguide/create-job.html
Azure Media Services has a built-in "Adaptive Streaming" preset that is content-aware and adjusts the encoding settings to match the incoming content.
See the following - https://learn.microsoft.com/en-us/azure/media-services/media-services-autogen-bitrate-ladder-with-mes
I have developed a site that hosts user videos. I store the video files in AWS S3, deliver them through AWS CloudFront, and use video.js as the site's player, with HTML5 as the default and Flash as the fallback.
Generally the video streaming seems to work fine, but in some cases I receive complaints from users about slow or choppy video playback. I want to create some tests to measure streaming performance so that I can distinguish between problems on the user's side (e.g. a slow connection) and problems with my service.
Are there any best practices or tools to collect video delivery metrics? I'm interested in open source solutions or something that I can implement myself because it's just a personal project, but I don't want to rediscover the wheel.
Testing progressive download means checking the transmission bandwidth and its continuity. For example, with a high transmission rate, the initial client buffer fills faster and playback starts sooner. However, losing that transmission capacity at some later time can cause re-buffering. The total transmission time of the file must be lower than the video's duration.
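As a rough, do-it-yourself check of that rule, here is a small sketch (Node.js 18+ TypeScript with the built-in fetch; the URL and clip duration are placeholders for your own CloudFront asset) that times a full download and compares it with the clip's play length:

```ts
const VIDEO_URL = "https://dxxxxxxxx.cloudfront.net/videos/sample.mp4"; // hypothetical asset URL
const VIDEO_DURATION_S = 120; // play length of the clip, in seconds (hypothetical)

async function checkTransferVsDuration(): Promise<void> {
  const start = Date.now();
  const response = await fetch(VIDEO_URL);
  const bytes = (await response.arrayBuffer()).byteLength;
  const seconds = (Date.now() - start) / 1000;
  const mbps = (bytes * 8) / seconds / 1_000_000;
  console.log(`Downloaded ${bytes} bytes in ${seconds.toFixed(1)} s (${mbps.toFixed(2)} Mbit/s)`);
  // Rule of thumb from above: total transmission time must stay below the video duration,
  // otherwise the player will eventually drain its buffer and stall.
  console.log(seconds < VIDEO_DURATION_S ? "OK: downloads faster than real time" : "WARNING: slower than real time");
}

checkTransferVsDuration().catch(console.error);
```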
To identify potential issues you can start with the S3 bucket logs and the CloudFront cache statistics and access logs.
There's a load-testing tool written in Java called Apache JMeter. It does not execute JavaScript, so it must be configured to request the video files directly by URL.
The disadvantage of using a load test tool in a single location is pretty evident. Different geographical areas and carriers have different characteristics and test results will be different.
There are online, non-open-source tools that can load test from multiple locations, but they are generally paid, though some offer free trials.
Here's another way to look at this.
but in some cases I receive complaints from users for slow or choppy video playback.
If you're using an adaptive HLS stream, and you're using CloudFront, and the video is still choppy for some users, that's probably because of their own internet connection speeds.
In that case, you can encode your video in multiple resolutions (using just one AWS MediaConvert job, btw) - like 1080p, 720p, 360p, 240p, 144p etc.
And then video.js has a stream-switcher plugin that will 1) automatically start playing the highest resolution that's right for the viewer's connection - and no higher - and 2) give the user a "Settings" (gear) icon in the control bar that they can use to switch resolutions manually.
That way, even those with really poor internet connections should be able to watch your video.
Of course, the other alternative is to use progressive-download videos, where the viewer can simply click play, then immediately click pause, wait for the video to buffer, and then play it after it's fully downloaded.
Check out the Videojs Resolution Switcher demo here.
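For orientation, a minimal video.js setup for such an HLS stream might look like the sketch below. The CloudFront URL and element id are placeholders, and the resolution-switching UI comes from the plugin mentioned above, which is loaded separately:

```ts
import videojs from "video.js";

// Assumes an existing <video id="welcome-video" class="video-js"></video> element on the page.
const player = videojs("welcome-video", { controls: true, preload: "auto" });

player.src({
  src: "https://dxxxxxxxx.cloudfront.net/welcome/master.m3u8", // hypothetical CloudFront URL
  type: "application/x-mpegURL", // HLS; video.js 7+ plays this via its bundled VHS engine
});
```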
-- Ravi Jayagopal
I'm new to Windows 8 and I'm particularly interested in Live Tiles.
I was wondering - How frequently can an app update a live tile? For example, is it possible to create a clock with seconds?
A Live Tile update every second is far too frequent.
The MSDN guidelines for Live Tiles do not set explicit limits on how often a tile can be updated, but they do make a few recommendations about update frequency. For an app with very frequently changing content, the highest average frequency expected is approximately once every 15 minutes. A few choice excerpts:
For nonpersonalized content, such as weather updates, we recommend that the tile be updated no more than once every 30 minutes. This allows your tile to feel up-to-date without overwhelming your user.
For example, a busy social media app might update every 15 minutes, a weather app every two hours, a news app a few times a day, a daily offers app once a day, and a magazine app monthly.
Recommendations for clock updates are absent, as that isn't an intended purpose for Live Tile updates. The apps in the store showing the time on their tile use scheduled tile notifications creatively (intended to be used for one-time calendar events), or frequently push notifications to WNS (Windows Push Notifications Service). The former is very tough, if not impossible, to implement correctly even for per-minute precision (see app reviews), and the latter is prone to being flagged as abuse and throttled by WNS.
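To make the "scheduled tile notification" workaround concrete, here is a rough TypeScript/JavaScript sketch of the WinRT tile API as projected into a JavaScript Windows Store app (the Windows.* namespace only exists inside such an app, hence the declare; the template and timing are illustrative, not a recommendation to build a seconds clock):

```ts
declare const Windows: any; // WinRT projection, available only inside a packaged Windows Store app

const notifications = Windows.UI.Notifications;

// Build tile XML from a stock template and put the target time into its text field.
const tileXml = notifications.TileUpdateManager.getTemplateContent(
  notifications.TileTemplateType.tileSquareText04
);
const deliveryTime = new Date(Date.now() + 60 * 1000); // one minute from now
tileXml
  .getElementsByTagName("text")[0]
  .appendChild(tileXml.createTextNode(deliveryTime.toLocaleTimeString()));

// Schedule a single update for that moment; a "clock" tile has to pre-schedule many of these.
const scheduled = new notifications.ScheduledTileNotification(tileXml, deliveryTime);
notifications.TileUpdateManager.createTileUpdaterForApplication().addToSchedule(scheduled);
```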
This post explains the thinking behind live tiles and how they are implemented, which I'm sure you'll find useful.
I haven't seen a documented limit on how frequently a notification can be sent, but in practice, per-second updates are probably unrealistic to expect consistently.
I tried displaying a live clock with running seconds in the tile, but it only updated every 4 seconds. So the minimum interval between tile updates seems to be 4 seconds; it cannot be reduced below that.
Does anyone know the ratio between the number of tweets we get from the Twitter sample API and the total number of tweets the Twitter servers receive? I am doing some analysis based on data read from the sample API and would like to estimate the actual workload handled by the Twitter servers. I've observed that the number of tweets we get from the API varies over time, so I presume it is some kind of percentage-based sample. Any clue is highly appreciated.
Thanks
The sample stream /statuses/sample does return roughly 1% of all tweets. Twitter samples the tweets by delivering only tweets created within a 10-millisecond window out of the 1,000 milliseconds in every second. If you want more details, you can read my blog post: http://blog.falcondai.com/2013/06/666-and-how-twitter-samples-tweets-in.html
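Under that 1% assumption, scaling the sample up gives a rough workload estimate: if you count N tweets from the sample stream over some window, the full volume Twitter handled in that window is on the order of N / 0.01 = 100 × N. Treat this as approximate only, since (as noted below) the streaming volume delivered has not always been held exactly at 1%.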
When Twitter Spritzer (basically the old-fashioned Streaming API) was launched, it was supposedly about 1-2% of all tweets. Based on my use of the current Streaming API, I'd be surprised if it was any more than 1% right now, and possibly less. According to the docs, the "Twitter streaming volume is not constant," but they neglect to mention if the volume outputted by the API is proportional to the rate of actual tweets.
On 2 February 2015 Twitter announced intent to reset the streaming API sample rate to 1% (it had crept higher unintentionally):
The public Streaming API sample endpoints (aka POST statuses/filter and GET statuses/sample) are intended to be levelled at approximately 1% of the public Tweet volumes at any time.
Due to some past inconsistencies in configuration, there have been periods of time where the volumes of Tweets delivered via the Streaming API may have exceeded these parameters.
This notice is to indicate that over the next couple of weeks, we will be making changes to the public Streaming API to rebalance the volume of Tweets at the 1% capacity that was intended.
This plot shows the effect of the reset on a typical tweet stream.
This is something I found at https://brightplanet.com/2013/06/25/twitter-firehose-vs-twitter-api-whats-the-difference-and-why-should-you-care/. I hope you find it useful.
Studies have estimated that using Twitter’s Streaming API users can
expect to receive anywhere from 1% of the tweets to over 40% of tweets
in near real-time.
There are references to the studies they have cited at the bottom of the webpage.