I have a basic question about cloud storage pricing. I see many cloud service providers listing charges such as "$1 per 25 GB per month". Let us consider two cases:
If I store 25 GB every day but delete the previous day's data, then on the last day of the month I still only have 25 GB stored. In this case my charge would be $1.
If I store 25 GB every day and delete the previous day's data, I have still written 25 * 30 GB over the whole month. Even though I was cleaning up as I went, the total data written amounts to 750 GB, so my charge should be 750 / 25 * $1 = $30.
What will be my cost at the end of the month?
Storage charges apply only for the duration of time that the data existed, which corresponds to your 1st case.
Put differently, storing one 50 GB object for 10 days costs the same as storing two 25 GB objects for 5 days each (or one 500 GB object for 1 day, for that matter).
See also the second point here about prorated storage charges.
In your 2nd case, "wrote 25 * 30 GB" actually describes network ingress, which is free (see here under "General network usage").
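For a concrete sense of how the proration works out, here is a minimal sketch, assuming a 30-day month and the "$1 per 25 GB per month" rate from the question (real providers bill on finer-grained averages, e.g. per-second):

    # Rough sketch of prorated ("GB-month") storage billing.
    RATE_PER_GB_MONTH = 1 / 25   # $0.04 per GB-month (assumed example rate)
    DAYS_IN_MONTH = 30

    def monthly_cost(gb_days):
        """Cost for a given number of GB-days of storage in one billing month."""
        gb_months = gb_days / DAYS_IN_MONTH
        return gb_months * RATE_PER_GB_MONTH

    # Case 2 from the question: 25 GB written each day, deleted the next day.
    # Each day's data exists for ~1 day, so the month accumulates 25 * 30 GB-days.
    case2 = monthly_cost(25 * 30)        # 750 GB-days == 25 GB-months -> ~$1.00

    # Same math for the equivalence in the answer:
    one_50gb_for_10_days = monthly_cost(50 * 10)     # 500 GB-days
    two_25gb_for_5_days  = monthly_cost(2 * 25 * 5)  # 500 GB-days -> same cost

    print(case2, one_50gb_for_10_days, two_25gb_for_5_days)

In other words, deleting yesterday's data before writing today's keeps the prorated storage at roughly 25 GB-months, even though 750 GB passed through over the month.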
I'd like to estimate the value of each "conversion" starting from a free trial signup all the way to that user becoming a paid user.
Let's say I have an online coding course website that offers a free 30-day trial. After the trial period, the cost is $100 per month.
I bring in 100,000 users/mo to the signup page of my site via Google Ads paid search
I consistently get ~1,000 free trial signups per month (1% conversion rate). All signups are considered free trial users
Of the 1,000 trial users, 500 log in exactly 2x within the first week (let's call these L2W1 for Logged in 2x within Week 1)
Of the 1,000 trial users, 100 log in at least 3x within the first week (L3W1). These users are mutually exclusive from above
Of the 500 L2W1 users, 50 users (10%) sign up for at least 1 course
Of the 100 L3W1 users, 50 users (or 50%) sign up for at least 1 course
On average, I get 35 paid users monthly
10 of 50 L2W1 users become Paid Users
25 of 50 L3W1 users become Paid Users; in other words, 50% of L3W1 users convert to Paid Users
To recap, assume these are the only events that I am currently tracking:
100,000 Site Visitors ---> 1,000 Trial Signups
500 L2W1 ---> 50 Course Signups (CS) ---> 10 Paid Users = $1,000
100 L3W1 ---> 50 Course Signups (CS) ---> 25 Paid Users = $2,500
Total: 35 Paid Users (PU) ---> $3,500
To keep things simple, let's ignore lifetime value (e.g. the average user subscribes for 3.5 months).
QUESTION: Is there a mathematical equation I could use to assign values (or percentages) to each conversion event? As I receive more information, I'd like to provide that to Google as a signal that each conversion type down the funnel is a sign of a more qualified user and therefore more valuable to my business.
I could simply divide the $3,500 by the 1,000 Trial Signups (and ignore all other conversion types), in which case each Trial Signup is worth $3.50. However, only 35 of 1,000 Trial Users convert to a Paid User, so there is valuable information I am leaving on the table and not passing to Google for automated bidding purposes.
I'm thinking something like this is better:
1 Trial Signup = $1.00
1 L2W1 = $2.00
1 L3W1 = $3.00
1 CS = $4.00
1 PU = $90.00
... so a Paid User who goes through the steps above adds up to about $100. I'm not sure if this is a good approach or if the math makes sense. Any pointers or tips would be greatly appreciated.
I've been reading on Bayes Theorem and it seems to be a good model to use for this case but I'm not at all familiar with it enough to know if it's applicable to this situation.
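A rough sketch of that conditional-probability idea: value each event at P(paid | event) times the monthly price, using the funnel counts above (the counts are from the question; the approach itself is just one reasonable option, not the only one):

    # Value each conversion event at P(paid user | event) * monthly price.
    MONTHLY_PRICE = 100.0

    funnel = {
        # event: (users who fired the event, how many of those became paid)
        "trial_signup":  (1000, 35),
        "L2W1":          (500, 10),
        "L3W1":          (100, 25),
        "course_signup": (100, 35),   # 50 + 50 course signups, 10 + 25 paid
        "paid_user":     (35, 35),
    }

    for event, (total, paid) in funnel.items():
        value = (paid / total) * MONTHLY_PRICE
        print(f"{event}: P(paid | event) = {paid/total:.2f} -> ${value:.2f}")

Note that these values overlap when one user fires several events (unlike the incremental $1/$2/$3/$4/$90 scheme), so you would either report only the deepest event per user or subtract the value of the preceding stage.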
We're having trouble understanding the numbers in the daily summary emails we get from Fabric.
A search on SO only shows one somewhat-related question/answer.
Here are the emails from 2 consecutive days:
Our questions are:
Does “Monthly Active” mean over the last 30 days? If so, how can there be a 36% drop in 1 day if the counts went from 101 to 93 (an 8% drop)?
Why does “Daily Active” show a 75% drop if the current day is 1 and the previous day was 0?
Why does “Total Sessions” show a 94% drop if the current day is 1 and the previous day was 0?
Does the “Time in App per User” mean the average for the month or for the prior day? If it's for the month, why would 1 extra session cause the value to change so much? If it's for the day, why does it show “11:33m” even though the Total Sessions was 0?
Sometimes the “Time in App per User” ends in an “m” and sometimes it ends in an “s”. For example, “11:33m” and “0:44s”. Does that mean that “11:33m” is “11 hours and 33 minutes” and “0:44s” is “0 minutes and 44 seconds”? Or does the “11:33m” still mean “11 minutes and 33 seconds” and I should ignore the suffix?
Thanks for reaching out. Todd from Fabric here. The % change is actually % difference vs. what we expected based on the previous behavior of your app. This compensates for day of week etc.
The long session showing up when the session count was zero suggests that the session was either still live or not yet reported to us at UTC midnight. The session gets created at session start, and the duration gets set at the end.
Thanks!
For years I've been using webpage requests like the following to retrieve 20 days at a time of minutewise stock data from Google:
http://www.google.com/finance/getprices?q=.INX&i=60&p=20d&f=d,c,h,l,o,v
= Retrieve, for .INX (the S&P 500 index), 60-second interval data for the last 20 days, in the format Datetime (in Unix format), Close, High, Low, Open, Volume.
The Datetime is in Unix format (seconds since 1/1/1970, prefixed with an "A") for the first entry of each day, and subsequent entries show the intervals that have passed (so 1 = 60 seconds after the opening of the market that day).
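For reference, a minimal sketch of decoding that Datetime column; the sample rows are made up for illustration, and the column order follows the f=d,c,h,l,o,v request above (treat it as an illustration, not an official parser):

    # Rows whose first field starts with "A"/"a" carry an absolute Unix timestamp;
    # rows with a bare integer are offsets in multiples of the interval (60 s here).
    from datetime import datetime, timezone

    INTERVAL = 60  # seconds, matches i=60 in the request

    def parse_rows(lines):
        base = None
        for line in lines:
            fields = line.split(",")
            stamp = fields[0]
            if stamp[:1] in ("a", "A"):            # first row of a trading day
                base = int(stamp[1:])
                ts = base
            elif stamp.lstrip("-").isdigit() and base is not None:
                ts = base + int(stamp) * INTERVAL  # offset from the day's first row
            else:
                continue                           # skip header lines etc.
            close, high, low, open_, volume = map(float, fields[1:6])
            yield datetime.fromtimestamp(ts, tz=timezone.utc), open_, high, low, close, volume

    # Example with two made-up rows:
    for row in parse_rows(["a1505741400,2495.5,2496.0,2495.0,2495.2,1200",
                           "1,2495.8,2496.2,2495.4,2495.5,900"]):
        print(row)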
That worked up until 9/10/2017, but today (9/17) it only returns day-end data (it even reports the "interval" between samples as 86400). Pooey! I can get that anywhere, in bulk.
But if I ask for fewer days, or broader intervals, it seems to return data - but weird data. Asking for data every 120 seconds returns exactly that - but only for every other market day. Weird!
Has anyone got a clue what might have happened?
Whoa! I think I figured it out.
Google still returns minutewise data within roughly the same limits (up to 20 calendar days), but instead of d=10 returning all the market data for the last 10 calendar days, it returns the data for the last 10 market days. Previously, to get the last 10 market days you would ask for d=14 (M-F x 2, plus two weekends). Now Google interprets the d value as market days, and asking for d=20 exceeds the limit on what they will deliver.
It now appears that d=15 is the limit (three weeks of market days). No clue on why I got the very weird every-other-day data for a while... but maybe if you exceed their d-limits the intervals get screwy. Dunno. Don't care. Easy fix.
The documentation for Google Custom Search JSON/ATOM API is unclear regarding pricing for additional queries. It states:
The API provides 100 search queries per day for free. If you need
more, you may sign up for billing in the API Console. Additional
requests cost $5 per 1000 queries, up to 10k queries per day.
For those who use the API in excess of the initial free 100, does the $5/1,000 charge for additional queries reset each day, or does that quota roll over to subsequent days?
For instance, if I have a total number of queries on 3 consecutive days of 110, 120, and 130, will the account be billed $5 each day for the 10, 20, and 30 extra queries? Or will I be billed $5 the first day and by the end of the 3rd day I'll still have a bucket of 940 additional queries left to use for future overages?
In case anyone is still looking for an answer to this question (as I was a day ago), it turns out that for the Google Custom Search API the $5/1,000-query billing is prorated.
Google Cloud Support emailed me the following:
With the Google Custom Search, you have a free 100 search queries per day which resets daily. But for example if you exceeded the daily limit, you will be charged $5 per 1,000 queries. In case that you go over the 100 queries per day and did not reach the 1,000 queries, it will be prorated from the $5. This resets daily as well. Please take note that 10k queries is the maximum daily.
I then clarified with them the following example, which they agreed was correct.
An example of making an average of 180 JSON API queries/day:
100 queries/day are free
80 queries/day are charged at $5 * (80/1000) = $0.40/day
Monthly it would be $12.00 (40 cents * 30)
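A quick sketch of that arithmetic, assuming a 30-day month and the 100 free queries/day and $5 per 1,000 rate quoted above:

    # Prorated daily billing for the Custom Search JSON API (per the quoted docs).
    FREE_PER_DAY = 100
    RATE_PER_1000 = 5.00

    def daily_cost(queries):
        billable = max(0, queries - FREE_PER_DAY)
        return billable * RATE_PER_1000 / 1000

    # The 180-queries/day example from the support email:
    print(daily_cost(180))        # 80 billable queries -> $0.40
    print(daily_cost(180) * 30)   # roughly $12.00 over a 30-day month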
For those of you who use the Amazon Product Advertising API, what experience have you had with running into their throttle? Supposedly, the limit is set at 1 request per second, is that your experience?
I want my site to grow to be nation-wide, but I'm concerned about its capability to make all the Amazon API requests without getting throttled. We cache all the responses for 24 hours, and also throttle our own users who make too many searches within a short period.
Should I be concerned? Any suggestions?
I believe they have changed it. Per this link:
https://forums.aws.amazon.com/message.jspa?messageID=199771
Hourly request limit per account = 2,000 + 500 * [Average associate revenue driven per day over the past 30 days period]/24 to a maximum of 25,000 requests per hour.
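As a quick sketch of that formula (assuming the revenue figure is the average associate revenue driven per day over the trailing 30 days, in USD):

    # Hourly request limit per the 2012-era formula quoted above.
    def hourly_request_limit(avg_daily_revenue):
        limit = 2000 + 500 * avg_daily_revenue / 24
        return min(limit, 25000)

    print(hourly_request_limit(0))     # 2,000 requests/hour baseline
    print(hourly_request_limit(240))   # 2,000 + 500*240/24 = 7,000 requests/hour
    print(hourly_request_limit(5000))  # capped at 25,000 requests/hour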
Here is the latest on request limits that I could find, effective Sept 3rd, 2012.
If your application is trying to submit requests that exceed the
maximum request limit for your account, you may receive error messages
from Product Advertising API. The request limit for each account is
calculated based on revenue performance. Each account used to access
the Product Advertising API is allowed an initial usage limit of 1
request per second. Each account will receive an additional 1 request
per second (up to a maximum of 10 requests per second) for every
$4,600 of shipped item revenue driven per hour in a trailing 30-day
period.
https://affiliate-program.amazon.com/gp/advertising/api/detail/faq.html
They have updated their guidelines; you now get more requests when you sell more items.
Effective 23-Jan-2019, the request limit for each account is calculated based on revenue performance attributed to calls to the
Product Advertising API (PA API) during the last 30 days.
Each account used for Product Advertising API is allowed an initial
usage limit of 8640 requests per day (TPD) subject to a maximum of 1
request per second (TPS). Your account will receive an additional 1
TPD for every 5 cents or 1 TPS (up to a maximum of 10) for every $4320
of shipped item revenue generated via the use of Product Advertising
API for shipments in the last 30 days.
Source: https://docs.aws.amazon.com/AWSECommerceService/latest/DG/TroubleshootingApplications.html
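A rough sketch of the 2019 formula quoted above; the quote does not state a ceiling on TPD, so none is applied here:

    # Requests-per-day (TPD) and requests-per-second (TPS) both scale with
    # shipped revenue attributed to PA API calls over the trailing 30 days (USD).
    def pa_api_limits(shipped_revenue_30d):
        tpd = 8640 + shipped_revenue_30d / 0.05          # +1 TPD per 5 cents
        tps = min(1 + shipped_revenue_30d // 4320, 10)   # +1 TPS per $4,320, capped at 10
        return tpd, tps

    print(pa_api_limits(0))        # (8640.0, 1) -- the initial allowance
    print(pa_api_limits(4320))     # (95040.0, 2)
    print(pa_api_limits(100000))   # TPS capped at 10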
Amazon enforces limits on how many calls you can make per hour and per second.
You can increase the former by following the sanctioned route (increase commission revenue) or by privately petitioning Amazon with a valid reason. When whitelisted, your limit will go up to 25,000 calls per hour, which is more than good enough for the vast majority of projects I can think of.
The latter limit is murkier and enforced depending on the type of query you make. My interpretation is that it is meant to keep serial crawlers who do batch item lookups in check. If you are simply doing keyword searches etc., I would not worry so much about it. Otherwise, the solution is to distribute your calls across multiple IPs.
One other point to keep in mind if you are querying multiple locales is to use separate accounts per locale. Some locales are grouped and count toward the same call quota. The European Amazons, for instance, form such a pool.
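If you want to stay under the 1-request-per-second baseline on the client side, a simple blocking throttle along these lines can help. This is only an illustrative sketch (the ASINs are hypothetical placeholders), not part of any Amazon SDK:

    # Minimal client-side throttle for a fixed requests-per-second ceiling.
    import threading
    import time

    class RateLimiter:
        def __init__(self, max_per_second):
            self.min_interval = 1.0 / max_per_second
            self.lock = threading.Lock()
            self.next_allowed = 0.0

        def wait(self):
            """Block until the next request is allowed."""
            with self.lock:
                now = time.monotonic()
                wait_for = max(0.0, self.next_allowed - now)
                self.next_allowed = max(now, self.next_allowed) + self.min_interval
            if wait_for > 0:
                time.sleep(wait_for)

    limiter = RateLimiter(max_per_second=1)
    for item_id in ["ASIN-1", "ASIN-2", "ASIN-3"]:  # hypothetical item IDs
        limiter.wait()
        # call the Product Advertising API for item_id here
        print("request sent for", item_id, "at", time.monotonic())

Combined with the 24-hour response cache mentioned in the question, this keeps bursts from your own users from translating directly into bursts against Amazon.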