The way the live streaming price structure is presented on the website, with a footnote saying it applies only to audience minutes, is quite confusing, and since I am not able to get any answer from the sales team or contact them, I want to know: if I use the live streaming API on my website and buy 1000 minutes from Agora, then live stream for 20 minutes to 10 people, will I (the host) be left with 800 minutes or 980 minutes after the live stream?
Agora's pricing model is "pay as you go". If you stream for 20 minutes to 10 people (audience) + 1 host, usage will be:
200 min for the audience downstream
20 min for the host upstream
= 220 min of total usage.
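Assuming both host (upstream) and audience (downstream) minutes are drawn from the prepaid balance - worth confirming with Agora, since the footnote you mention only covers audience minutes - the arithmetic for your example looks like this:

```python
# Rough sketch of the usage arithmetic described above.
# Assumption: both host (upstream) and audience (downstream) minutes
# are deducted from the prepaid balance; confirm with Agora's billing docs.

def live_stream_usage(duration_min, audience_count, host_count=1):
    audience_minutes = duration_min * audience_count   # downstream minutes
    host_minutes = duration_min * host_count           # upstream minutes
    return audience_minutes + host_minutes

prepaid = 1000
used = live_stream_usage(duration_min=20, audience_count=10)  # 200 + 20 = 220
print(used, prepaid - used)  # 220 minutes used, 780 minutes remaining
```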
I would recommend taking a look at the pricing doc https://docs.agora.io/en/Interactive%20Broadcast/billing_interactive_broadcast?platform=All%20Platforms and reaching out to the Agora sales team for more details.
I ran into this error in AWS CloudWatch, which does not make sense, as I think/thought we had 0 high-resolution metrics (high resolution only retains data for 3 hours). We typically just do 1-minute interval reporting. How do I find all metrics with high resolution? That way I am hoping I can edit them to not be high resolution.
I searched around a ton in the documentation and looked into the Micrometer code, which seems to default to highResolution = false and a step of 2 minutes (we are using Micrometer). I am trying to figure out next steps for working out why AWS thinks this data is high-resolution data.
I was also thinking "OK, perhaps it would roll up to 1-minute data and then 5-minute data", so in my query I tried 1 minute and 5 minutes, but I still get the error about only 3 hours of data.
The error is thrown because you're using the query syntax (SELECT ...), and that only supports the latest 3 hours of data. The feature is called Metrics Insights; you can see the limits here: https://docs.aws.amazon.com/AmazonCloudWatch/latest/monitoring/cloudwatch-metrics-insights-limits.html
The error is not related to high-resolution metrics. Even if they were high resolution, when you set the period to 5 minutes you would only retrieve datapoints aggregated to 5-minute granularity.
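If you need the older data, one option is to fetch it with a regular metric query instead of a Metrics Insights expression. A minimal boto3 sketch, where the namespace, metric name, and dimension values are placeholders for illustration:

```python
# Minimal boto3 sketch: querying older data with a MetricStat query instead of
# a Metrics Insights expression (SELECT ...). Namespace/metric/dimension values
# below are placeholders.
import boto3
from datetime import datetime, timedelta, timezone

cloudwatch = boto3.client("cloudwatch")

end = datetime.now(timezone.utc)
start = end - timedelta(days=7)  # well beyond the 3-hour Metrics Insights window

response = cloudwatch.get_metric_data(
    MetricDataQueries=[
        {
            "Id": "cpu",
            # A MetricStat query follows the normal retention/roll-up rules,
            # unlike an Expression containing SELECT (Metrics Insights).
            "MetricStat": {
                "Metric": {
                    "Namespace": "AWS/EC2",
                    "MetricName": "CPUUtilization",
                    "Dimensions": [{"Name": "InstanceId", "Value": "i-0123456789abcdef0"}],
                },
                "Period": 300,      # 5-minute granularity
                "Stat": "Average",
            },
            "ReturnData": True,
        }
    ],
    StartTime=start,
    EndTime=end,
)
print(response["MetricDataResults"][0]["Values"][:5])
```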
I am running a Kusto query in Azure Diagnostics, querying logs from the last week, and the query times out after 10 minutes. Is there a way I can increase the timeout limit? If yes, can someone please guide me through the steps? I downloaded Kusto Explorer but couldn't see any easy way of connecting to my Azure cluster. I need help on how to increase this timeout duration from inside the Azure portal for the query I am running.
It seems like 10 minutes is the maximum value for the timeout.
https://learn.microsoft.com/en-us/azure/azure-monitor/service-limits
Query API

Category | Limit | Comments
Maximum records returned in a single query | 500,000 |
Maximum size of data returned | ~104 MB (~100 MiB) | The API returns up to 64 MB of compressed data, which translates to up to 100 MB of raw data.
Maximum query running time | 10 minutes | See Timeouts for details.
Maximum request rate | 200 requests per 30 seconds per Azure AD user or client IP address | See Log queries and language.
https://learn.microsoft.com/en-us/azure/azure-monitor/logs/api/timeouts
Timeouts
Query execution times can vary widely based on:
The complexity of the query
The amount of data being analyzed
The load on the system at the time of the query
The load on the workspace at the time of the query
You may want to customize the timeout for the query.
The default timeout is 3 minutes, and the maximum timeout is 10 minutes.
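According to the timeouts page linked above, the server-side timeout can be raised up to the 10-minute maximum via the Prefer header when calling the query API directly. A minimal sketch against the Log Analytics query REST API, where the workspace ID, bearer token, and query are placeholders:

```python
# Sketch: calling the Log Analytics query REST API with an extended server-side
# timeout via the Prefer header (wait is in seconds; 600 = the 10-minute maximum).
# WORKSPACE_ID, TOKEN, and the query text are placeholders.
import requests

WORKSPACE_ID = "<workspace-guid>"
TOKEN = "<aad-bearer-token>"

resp = requests.post(
    f"https://api.loganalytics.io/v1/workspaces/{WORKSPACE_ID}/query",
    headers={
        "Authorization": f"Bearer {TOKEN}",
        "Prefer": "wait=600",  # request the 10-minute maximum instead of the 3-minute default
    },
    json={
        "query": "AzureDiagnostics | where TimeGenerated > ago(7d) | summarize count() by Category",
        "timespan": "P7D",
    },
    timeout=610,  # client-side timeout slightly above the server-side one
)
resp.raise_for_status()
print(resp.json()["tables"][0]["rows"][:5])
```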
I have to come up with a proposal to use Amazon S3 with CloudFront as a CDN.
One of the important things is to do a cost estimate. I read over the AWS website and forums and used their calculator, but couldn't arrive at a final (approximate) number that I am confident of. Honestly, I got confused between terms like "Data Transfer Out" and "GET and Other Requests", and whether I need to fill in the details for both Amazon S3 and Amazon CloudFront and then sum the totals.
So I need help here to estimate my monthly bill.
I will be using S3 to store files (mostly images).
I will be configuring CloudFront with my S3 bucket to deliver the content.
Most of the client base (almost 95%) is in the US.
Average file size: 500KB
Average number of files stored on S3 monthly: 80,000 (80K)
Approximate number of users requesting files monthly, i.e. total requests to fetch files from CloudFront: 30 million monthly
There will be some invalidation requests per month (let's say 1,000)
It would be great if I could get a better understanding of how my monthly bill will be calculated and approximately what it will be.
Also, with the above data and estimates, any approximation of the monthly bill if I use Akamai or Rackspace instead?
I'll throw another number into the ring.
Using http://calculator.s3.amazonaws.com/calc5.html
CloudFront
Data transfer out: 0.5MB x 30 million = ~15,000GB
Average object size: 500KB
1,000 invalidation requests
95% US
S3
Storage: 80K x 0.5MB = ~40GB
Requests: 30 million
My initial result is $1,413. As #user2240751 noted, a factor of safety of 2 isn't unreasonable, so that's in the $1,500 - $3,000/month range.
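To make the arithmetic behind an estimate like this explicit, here is a rough sketch. The per-GB and per-request rates are illustrative placeholders rather than current AWS prices, so the output will differ somewhat from the calculator's figure:

```python
# Back-of-the-envelope sketch of the estimate above.
# The rates below are illustrative placeholders - always check current AWS pricing.
MB_PER_GB = 1024

requests_per_month = 30_000_000
avg_object_mb = 0.5
stored_objects = 80_000

transfer_out_gb = requests_per_month * avg_object_mb / MB_PER_GB   # ~14,600 GB
storage_gb = stored_objects * avg_object_mb / MB_PER_GB            # ~40 GB

cloudfront_data_rate = 0.085   # $/GB, US edge, first pricing tier (placeholder)
cloudfront_req_rate = 0.0075   # $ per 10,000 requests (placeholder)
s3_storage_rate = 0.023        # $/GB-month (placeholder)

cost = (
    transfer_out_gb * cloudfront_data_rate
    + requests_per_month / 10_000 * cloudfront_req_rate
    + storage_gb * s3_storage_rate
)
print(f"~${cost:,.0f}/month before invalidations, S3 request charges, and safety margin")
```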
I'm used to working with smaller numbers, but the final amount is always more than you might expect because of extra requests and data transfer.
Corrections or suggestions for improvements welcome!
Good luck
The S3 PUT and GET request fields (in your case) should be restricted to the number of times you are likely to call or update the files in S3 from your application only.
To calculate the CloudFront costs, you should work out the rough outbound bandwidth of your page load (number of objects served from CloudFront per page - then double it, to give yourself some headroom) and fill in the rest of the fields.
Rough calc:
500GB data out (guess)
500KB average object size
1,000 invalidation requests
95% to US-based edge locations
5% to Europe-based edge locations
Comes in at $60.80 + your S3 costs.
I think the maths here is wrong: 0.5MB * 30,000,000 is 14,503GB, NOT 1,500GB - that's a factor of 10 out, unless I'm missing something.
Which means your monthly costs are going to be around $2,000, not $200.
Looking at the documentation, I found this information:
Data Extraction API requests are rate limited by number of requests and data download volume per unit of time (hour, day, week, or month). If you exceed the limit, a 403 error occurs.
Could someone tell me more about this limit? How many calls per day/month/year?
Just spoke to Webtrends support. The limit is 600 requests per user per hour or 1 GB per user per hour.
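If you need to pull a lot of reports while staying under that limit, a simple pacing loop is usually enough. A generic sketch, not a WebTrends client; fetch() is a placeholder for your actual Data Extraction API call:

```python
# Generic pacing sketch to stay under a 600-requests-per-hour limit
# (i.e. at most one request every 6 seconds on average).
import time

MAX_PER_HOUR = 600
MIN_INTERVAL = 3600 / MAX_PER_HOUR  # 6 seconds between requests

def fetch(url):
    # Placeholder for the real HTTP call to the Data Extraction API.
    return None

def fetch_all(urls):
    results = []
    for url in urls:
        started = time.monotonic()
        results.append(fetch(url))
        elapsed = time.monotonic() - started
        if elapsed < MIN_INTERVAL:
            time.sleep(MIN_INTERVAL - elapsed)  # pace requests to respect the hourly limit
    return results
```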
I'm using GAPI library (in PHP) for querying Google Analytics API.
I request 2 dimensions (pagePath, date), 2 metrics (pageviews, visits), a time range of the past 365 days, and 2 filters on pagePath. The average time to get data for one query is 25-30 seconds.
When I use only 1 metric (pageviews), average response time is 3 sec.
Why would there be such a difference when using 1 or 2 metrics?
I'm guessing that path/date/pageviews is stored pre-calculated, while path/date/visits needs to be calculated from the data store (be thankful you're not applying complicated segments - then it gets really slow).
There are hints about how this might work in the Google BigTable paper.