Closed. This question does not meet Stack Overflow guidelines. It is not currently accepting answers.
This question does not appear to be about programming within the scope defined in the help center.
Closed 8 years ago.
I'm building an app on top of Amazon S3. How can I keep my S3 usage under a set budget? Suppose I don't want unexpected traffic to overcharge my AWS account; I'd rather the service become unavailable.
There is no way to set a budget for AWS. But this feature is requested very often, so it will probably be implemented one day.
https://forums.aws.amazon.com/thread.jspa?threadID=58127
AWS has announced the general availability of the functionality to Monitor Estimated Charges Using Billing Alerts via Amazon CloudWatch as of May 10, 2012 (which, according to Daniel Lopez's answer [+1], has been available to AWS premium accounts since the end of 2011):
We regularly estimate the total monthly charge for each AWS service
that you use. When you enable monitoring for your account, we begin
storing the estimates as CloudWatch metrics, where they'll remain
available for the usual 14 day period. [...]
As outlined in the introductory blog post, you can start by using billing alerts to let you know when your AWS bill will be higher than expected; see Monitor Your Estimated Charges Using Amazon CloudWatch for more details regarding this functionality.
This is already pretty useful for many basic needs. However, using the CloudWatch APIs to retrieve the stored metrics yourself (see the GetMetricStatistics API and Getting Statistics for a Metric for usage samples) allows you to drive arbitrary workflows and business logic based on this data.
Regarding the latter, though, the limited scope of this offering is stressed as well:
It is important to note that these are estimates, not predictions. The
estimate approximates the cost of your AWS usage to date within the
current billing cycle and will increase as you continue to consume
resources. [...] It does not take trends or potential changes in your AWS usage pattern
into account. [emphasis mine]
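To make that concrete, here is a minimal sketch of how the stored EstimatedCharges metric could be retrieved and reduced to its latest value. The boto3 call is shown in comments for context (billing metrics are only published to us-east-1, and only after billing alerts are enabled on the account); the helper itself is plain Python, and all names here are illustrative rather than taken from the answer above.

```python
def latest_estimate(datapoints):
    """Return the most recent 'Maximum' value from a list of
    CloudWatch-style datapoints (dicts keyed by 'Timestamp'/'Maximum')."""
    if not datapoints:
        return None
    newest = max(datapoints, key=lambda dp: dp["Timestamp"])
    return newest["Maximum"]

# The actual retrieval would look roughly like this:
#
# from datetime import datetime, timedelta, timezone
# import boto3
#
# cw = boto3.client("cloudwatch", region_name="us-east-1")
# resp = cw.get_metric_statistics(
#     Namespace="AWS/Billing",
#     MetricName="EstimatedCharges",
#     Dimensions=[{"Name": "Currency", "Value": "USD"}],
#     StartTime=datetime.now(timezone.utc) - timedelta(days=1),
#     EndTime=datetime.now(timezone.utc),
#     Period=21600,              # billing estimates update every few hours
#     Statistics=["Maximum"],
# )
# print(latest_estimate(resp["Datapoints"]))
```

Anything from alerting to auto-shutdown can then hang off the returned number.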
It seems there is still no solution provided by Amazon.
Take a look at Amazon Price-Watcher - Monitor your bill and auto-shut down your instances:
So here is a basic script I've put together in Python which will sit and monitor the current price of your instance, and shut it down if it goes over a certain price-limit. (In the future, this can be changed to maybe throttling incoming bandwidth, or emailing the admin).
As of December 2011, if you have an AWS premium account, you can use CloudWatch to monitor your estimated charges, and if they go over a certain limit you can trigger different actions (such as shutting down a machine):
http://blog.bitnami.org/2011/12/monitor-your-estimated-aws-charges-with.html
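For illustration, the alarm described in that post can be expressed as a set of `put_metric_alarm` parameters. This is an assumption-laden sketch in Python: the SNS topic ARN, account ID, and the 50 USD threshold are all made up, and a subscriber to the topic would have to implement the actual shutdown.

```python
# Parameters for a CloudWatch billing alarm that fires once the month's
# estimated charges exceed a chosen limit (USD 50 here, purely illustrative).
alarm = {
    "AlarmName": "estimated-charges-over-50-usd",
    "Namespace": "AWS/Billing",
    "MetricName": "EstimatedCharges",
    "Dimensions": [{"Name": "Currency", "Value": "USD"}],
    "Statistic": "Maximum",
    "Period": 21600,                  # billing metrics update every few hours
    "EvaluationPeriods": 1,
    "Threshold": 50.0,
    "ComparisonOperator": "GreaterThanThreshold",
    # The alarm action is an SNS topic (hypothetical ARN); a subscriber to
    # that topic can then stop instances, page an admin, etc.
    "AlarmActions": ["arn:aws:sns:us-east-1:123456789012:billing-alerts"],
}

# With boto3 this would be submitted as:
# import boto3
# boto3.client("cloudwatch", region_name="us-east-1").put_metric_alarm(**alarm)
```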
Related
Closed. This question is not about programming or software development. It is not currently accepting answers.
This question does not appear to be about a specific programming problem, a software algorithm, or software tools primarily used by programmers. If you believe the question would be on-topic on another Stack Exchange site, you can leave a comment to explain where the question may be able to be answered.
Closed 22 days ago.
I have an S3 bucket in the us-east-2 region, and access is mainly from Nepal. When I use my WiFi it is really slow, but when using mobile data it is fast enough. It is also fast when using a VPN outside my country. What could be the reason behind this? The speed was good enough just a day before; today it started to slow down for no apparent reason. Is it due to my WiFi provider? What should I do in this situation?
Buckets are globally accessible, but they reside in a specific AWS Region. The geographical distance between the client and the bucket contributes to the time it takes for a response to be received.
To decrease the distance between the client and the S3 bucket, consider moving your data into a bucket in another Region that's closer to the client. You can configure cross-Region replication so that data in the source bucket is replicated into the destination bucket in the new Region. As another option, consider migrating the client closer to the S3 bucket.
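As a rough sketch of that cross-Region replication setup (Python; the bucket names and IAM role ARN are hypothetical, and versioning must already be enabled on both buckets):

```python
# Replication configuration for an S3 bucket: every object written to the
# source bucket is copied to a destination bucket in a closer Region.
replication = {
    "Role": "arn:aws:iam::123456789012:role/s3-crr-role",      # hypothetical
    "Rules": [
        {
            "ID": "replicate-everything",
            "Status": "Enabled",
            "Prefix": "",   # empty prefix = replicate all objects
            "Destination": {
                "Bucket": "arn:aws:s3:::my-bucket-ap-south-1",  # hypothetical
                "StorageClass": "STANDARD",
            },
        }
    ],
}

# Applied with boto3:
# import boto3
# boto3.client("s3").put_bucket_replication(
#     Bucket="my-bucket-us-east-2", ReplicationConfiguration=replication)
```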
You can also try S3 Transfer Acceleration, which manages fast, easy, and secure transfers of files over long geographic distances between the client and an S3 bucket. It takes advantage of the globally distributed edge locations in Amazon CloudFront. As the data arrives at an edge location, it is routed to Amazon S3 over an optimized network path. Transfer Acceleration is ideal for transferring gigabytes to terabytes of data regularly across continents. It's also useful for clients that upload to a centralized bucket from all over the world.
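A small sketch of what using Transfer Acceleration looks like in practice (Python; the bucket name is hypothetical). The only client-visible change is the endpoint:

```python
def accelerate_endpoint(bucket):
    """Hostname clients target when S3 Transfer Acceleration is enabled,
    instead of the bucket's regional endpoint."""
    return f"{bucket}.s3-accelerate.amazonaws.com"

# Acceleration is switched on per bucket, then clients opt into the
# accelerate endpoint:
#
# import boto3
# from botocore.config import Config
# boto3.client("s3").put_bucket_accelerate_configuration(
#     Bucket="my-bucket", AccelerateConfiguration={"Status": "Enabled"})
# s3 = boto3.client("s3", config=Config(s3={"use_accelerate_endpoint": True}))
```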
Can S3 bucket be slowed down by Internet Provider?
If you are connecting to S3 over the Internet, the performance of your Internet connection can affect S3 upload and download times. Given the difference in network latency between WiFi and mobile networks, I encourage you to test whether the cause of your issues is your network rather than your AWS setup. Here is a robust guideline on how to troubleshoot slow or inconsistent speeds when downloading from or uploading to Amazon S3.
We have our applications (MapReduce jobs, microservices) running entirely on AWS.
We intend to use a single service for viewing logs (for debugging purposes), monitoring, and alarms (notifications based on a threshold).
Are there any specific benefits to using an external service provider like Sumo Logic over the one provided by AWS itself (CloudWatch in this case)?
In full disclosure, I'm an engineer at Sumo Logic. But here is an analysis done by one of my colleagues a few months ago as to why you would want to use Sumo Logic specifically over AWS Cloudwatch itself:
You can’t easily search across multiple Cloudwatch Log Groups in Cloudwatch. In Sumo, you can define metadata to easily query across log groups, and even log sources outside of AWS within the same query.
Cloudwatch does not include any pre-built apps (out of the box queries, dashboards, and alerts). Sumo Logic provides a wide variety of apps and integrations, all included with your subscription: https://www.sumologic.com/applications/
With Cloudwatch, you pay for dashboards, you pay to query your data, and you pay to use its query API. These are all included in your Sumo Logic subscription (depending on the subscription level you choose).
Sumo provides 30 days of log retention out of the box. Data retention is another a la carte cost when using CloudWatch. Sumo Logic also provides you with the ability to forward your logs off to S3 for long-term storage outside of our platform.
Cloudwatch does not include advanced analytics operators. Sumo Logic includes operators like Outlier, Log Reduce, and Log Compare, which are all part of the Sumo Logic platform.
Regarding search time, Sumo Logic vs AWS CloudWatch Insights (AWS CloudWatch log search): here is a quote from a customer with 100 AWS accounts' worth of CloudTrail logs: "We can search all 100 of our accounts in Sumo in the same amount of time it takes us to search 1 account with AWS's CloudWatch."
Sumo Logic provides Threat Intelligence as part of your subscription as well, to be able to check all of your logs against CrowdStrike's threat database of known IoCs.
Sumo training and certification is included with your subscription.
On a personal note, I can also say that Sumo Logic's post-sales support is top-notch, as we put a huge emphasis on customer success.
Please keep in mind that this analysis by my colleague is a few months old, so some of these items may have been addressed by AWS since then.
I am hosting a small web-based application with Apache Web Server on EC2. On my monthly bill I usually see ~40 GB of data transfer out, which costs about $5 or so a month.
Although this is no big money, I am curious how this data transfer out was generated. I am sure that at midnight there won't be anyone actually visiting the web application, and yet there is some data transfer out at ~50 MB per hour (as I can see from the detailed report from Amazon).
Is there any way to figure out what process actually generates this data-transfer-out activity (even at midnight, when no one uses the web application)?
Thanks!
J.
Have you looked at Boundary? Maybe they can help. They can monitor data going into and out of your EC2 instance (networking). You can see details like which ports the packets are coming from and where they are going to.
You have to install an agent on your machine and sign up for a trial.
Does Google BigQuery ensure a minimum availability of its service?
Given the eventual failure of any component in Google's infrastructure, could I lose some or all of the information I have uploaded?
How can Google ensure data availability even if a failure occurs?
I mean, what happens if a node (server) goes down? What happens to the data that is stored on it? And what if 10 or 100 nodes fail? What would have to happen for the service to become unavailable?
I am researching the availability of this platform and what mechanisms it has to be fault-tolerant.
Thanks
BigQuery has a 99.9% monthly uptime SLA.
Check https://developers.google.com/bigquery/docs/sla for details.
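To put the 99.9% figure in perspective, a quick back-of-the-envelope calculation shows how much downtime that SLA permits (assuming a 30-day month):

```python
# 99.9% monthly uptime leaves 0.1% of the month as permissible downtime.
minutes_per_month = 30 * 24 * 60               # 43,200 minutes in a 30-day month
allowed_downtime = minutes_per_month * 0.001   # ~43.2 minutes per month
```

So the SLA still allows roughly three quarters of an hour of unavailability per month before any remedy applies.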
The whole system is based on a highly replicated fault tolerant architecture, but not all details from it are made public.
If you also need 24x7 fast phone and email support, you can get it. Details at https://cloud.google.com/support/packages.
I am planning to build a dashboard to monitor AWS expenditure. After googling, I realized AWS has no API that developers can hook into to build an app that gets the real-time data. Is there any way to achieve this? Kindly help me out.
I believe you are looking to monitor current AWS usage.
AWS provides options for this through "AWS programmatic billing access".
Once you enable it, AWS will upload a CSV file of your current usage to a specified S3 bucket every few hours.
You need to write a program using your favourite programming language's AWS S3 SDK to download and parse the CSV file and get the (near) real-time data.
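As a minimal sketch of that download-and-parse step (Python; the column names and values below are a made-up extract merely shaped like a detailed billing CSV, and the bucket/key in the comment are hypothetical):

```python
import csv
import io

# A tiny, invented extract in the shape of an AWS detailed billing CSV.
sample = """ProductName,UsageType,Cost
Amazon Simple Storage Service,DataTransfer-Out-Bytes,1.25
Amazon Elastic Compute Cloud,BoxUsage:t1.micro,4.80
"""

def read_billing_rows(text):
    """Parse billing CSV text into a list of row dicts."""
    return list(csv.DictReader(io.StringIO(text)))

rows = read_billing_rows(sample)
print(rows[0]["ProductName"])   # Amazon Simple Storage Service

# In practice the CSV would first be fetched from the billing bucket, e.g.:
# import boto3
# obj = boto3.client("s3").get_object(Bucket="my-billing-bucket", Key="bill.csv")
# rows = read_billing_rows(obj["Body"].read().decode("utf-8"))
```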
Newvem has a very good set of how-to guides available for working with AWS.
One of the guides, http://www.newvem.com/how-to-set-up-programmatic-billing-access-for-your-aws-account/ , talks about enabling programmatic billing access.
Also refer to http://www.newvem.com/how-to-track-costs-of-amazon-s3-cloud-objects/ , which talks about how to track the cost of Amazon S3 objects.
As mentioned by Mike, AWS also provides a way to get billing alerts using CloudWatch.
I hope the above helps.
I recommend referring to Newvem's how-to guides to get more insight into AWS and its offerings.
Thanks,
Taral Shah
If you're looking to monitor actual spending data, @Taral is correct. AWS Programmatic Billing Access is the best tool for recording the data. However, you don't actually have to write a dashboard tool to view it; there are a lot of them out there already.
The company I work for, Cloudability, automatically imports all of your AWS Detailed Billing data and lets you build all of the AWS spending and usage reports you need without ever having to write any code or mess with any spreadsheets.
If you want to learn more, there's a good blog post at http://blog.cloudability.com/introducing-cloudabilitys-aws-cost-analytics-powered-by-aws-detailed-billing-files/
For more information about CloudWatch-enabled billing monitoring, refer to http://aws.amazon.com/about-aws/whats-new/2012/05/10/announcing-aws-billing-alerts/
To learn AWS faster, refer to Newvem's how-to guides at
http://www.newvem.com/amazon-cloud-knowledge-center/how-to-guides/
Regards,
Taral
The first thing is to enable detailed billing export to an S3 bucket (see here).
Then I wrote a simplistic server in Python (BSD-licensed) that retrieves your detailed bill and breaks it down per service type and usage type (see it on this GitHub repo).
Thus you can check at any time what your costs are, which services cost you the most, etc.
If you tag your EC2 instances, S3 buckets, etc., they will also show up on a dedicated line.
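The per-service breakdown boils down to grouping parsed billing rows and summing their costs, roughly like this (Python; the rows are invented for illustration):

```python
from collections import defaultdict

# Hypothetical (ProductName, UsageType, Cost) tuples, as parsed from the
# detailed billing CSV.
rows = [
    ("Amazon Simple Storage Service", "DataTransfer-Out-Bytes", 1.25),
    ("Amazon Simple Storage Service", "TimedStorage-ByteHrs", 0.40),
    ("Amazon Elastic Compute Cloud", "BoxUsage:t1.micro", 4.80),
]

def cost_by_service(rows):
    """Sum the cost column per service name."""
    totals = defaultdict(float)
    for service, _usage, cost in rows:
        totals[service] += cost
    return dict(totals)

for service, total in sorted(cost_by_service(rows).items()):
    print(f"{service}: ${total:.2f}")
```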
CloudWatch has an "estimated billing" API, which will get you most of the way there. See this ServerFault question for more detail: https://serverfault.com/questions/350971/how-can-i-monitor-daily-spending-on-aws
If you are looking for more detail, you will need to download your CSV-formatted bill and parse it. But your question is too generic to provide a specifically useful answer. Even this will not be real-time, though.