How does AWS charge for use of Fargate tasks?

I have a Docker image that is running as a Fargate task. I am curious to know how AWS bills for the use of it. Currently I have a hard limit of 1GB and a soft limit of 512MB. If I bump the hard limit up to 2GB to avoid memory issues in certain cases, will I be charged for 2GB all the time or only for the period that the container needs it? Most of the time my application does not even need 512MB, but occasionally it needs 2GB.

Visit here for pricing details:
https://aws.amazon.com/fargate/pricing/
The smallest configuration is 0.25 vCPU, which supports up to 2 GB of memory. You are charged per second for the vCPU and memory you allocate to the task for as long as it runs, not for what the container actually uses, so a task configured with 2 GB is billed for the full 2 GB for its entire duration.
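To make the arithmetic concrete, here is a minimal Python sketch of the cost calculation; the per-hour rates are assumed example figures for us-east-1 and may be out of date, so check the pricing page above for current numbers.

# Rough Fargate cost estimate (Linux/x86, on-demand).
# The rates below are assumed example figures; check the pricing page for current ones.
VCPU_PER_HOUR = 0.04048   # USD per vCPU-hour (assumed)
GB_PER_HOUR = 0.004445    # USD per GB-hour (assumed)

def fargate_task_cost(vcpu, memory_gb, hours):
    # You pay for the configured task size, not actual usage.
    return (vcpu * VCPU_PER_HOUR + memory_gb * GB_PER_HOUR) * hours

# A 0.25 vCPU task running 24/7 for a 30-day month:
print(fargate_task_cost(0.25, 1, 24 * 30))   # ~ $10.49 with 1 GB configured
print(fargate_task_cost(0.25, 2, 24 * 30))   # ~ $13.69 with 2 GB configured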

Related

Azure Used vs Allocated vs Maximum in elastic pool

I'm currently doing some cleanup on an Azure environment and just wanted to check my calculations. I have about 55 databases in an elastic pool, which is currently sitting at 3.64 TB of the maximum 4 TB. Looking at each of the databases within this pool, I can see that they have their own Used / Allocated / Maximum sizes, each ranging between 0.1% and 80% of their allocated 250 GB.
Is the allocated size of the elastic pool dependent on the maximum sizes of each of the databases within that pool? I.e. if I took a database that is using 1 GB of 250 GB and reduced its maximum size within the elastic pool from the default 250 GB down to 20 GB, would that have any positive or negative implications?
If anyone can suggest good resources for Azure environment maintenance plans it would be greatly appreciated, as I'm coming from an AWS background.
Your allocated space will grow automatically; this is normal and nothing to be concerned about. You'll notice that used space always stays close to allocated space. What matters is when the total of allocated and used storage approaches the maximum storage size.
If the database reaches its maximum size, raise the limit with the following statement, or change it in the Azure portal:
ALTER DATABASE AzureDB2 MODIFY (EDITION = 'Standard', MAXSIZE = 50 GB)
The database also receives a certain amount of log space depending on the tier. When you provision storage in the vCore model, a fixed proportion of that space is reserved for the transaction log; if you pick 1 TB of storage, for example, roughly 300 GB is set aside for logs.
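If you want to check programmatically how close a database is to its MAXSIZE before raising it, here is a minimal Python sketch; it assumes pyodbc and a placeholder connection string, so adjust both to your environment.

import pyodbc

# Hypothetical connection string -- replace with your own server, database and credentials.
CONNECTION_STRING = ("Driver={ODBC Driver 18 for SQL Server};"
                     "Server=tcp:myserver.database.windows.net;Database=AzureDB2;"
                     "Authentication=ActiveDirectoryInteractive")

conn = pyodbc.connect(CONNECTION_STRING)
cur = conn.cursor()

# Used space in the data files: pages in use * 8 KB per page.
cur.execute("""
    SELECT SUM(CAST(FILEPROPERTY(name, 'SpaceUsed') AS bigint)) * 8192
    FROM sys.database_files
    WHERE type_desc = 'ROWS'
""")
used_bytes = cur.fetchone()[0]

# The database's configured maximum size.
cur.execute("SELECT CAST(DATABASEPROPERTYEX(DB_NAME(), 'MaxSizeInBytes') AS bigint)")
max_bytes = cur.fetchone()[0]

print(f"used {used_bytes / 2**30:.1f} GB of {max_bytes / 2**30:.1f} GB "
      f"({100 * used_bytes / max_bytes:.0f}%)")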

Google Pub/Sub + Cloud Run scalability

I have a Python application writing Pub/Sub messages into BigQuery. The Python code uses the google-cloud-bigquery library, and the TableData.insertAll() method quota is 10,000 requests per second per table (see the Quotas documentation).
Cloud Run container auto scaling is set to 100 instances with 1,000 requests per container. So technically I should be able to reach 10,000 requests/sec, right, with the BigQuery insert API being the biggest bottleneck?
I only see a few hundred requests per second at the moment, with multiple services running at the same time.
CPU and RAM are at 50%.
Having confirmed your project structure and the details given in the comments, I would review the Pub/Sub quotas and limits, especially the Quota and Resource limits tables, where you can check these values depending on size; the Throughput quota units section tells you how to calculate quota usage.
I would answer your question with a yes: you should be able to reach 10,000 req/sec. And, as in this question, depending on the byte size you can insert up to 10,000 rows per request, although the recommendation is 500.
The concurrency in Cloud Run can also be adjusted if you need to change it.
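For the row-count side of this, here is a minimal Python sketch of batching streaming inserts with google-cloud-bigquery so that each insertAll call stays at the recommended 500 rows; the table name and row shape are made up for the example.

from google.cloud import bigquery

client = bigquery.Client()
table_id = "my-project.my_dataset.events"   # hypothetical table

def insert_in_batches(rows, batch_size=500):
    # Keep each streaming insert at <= 500 rows (the hard per-request limit is 10,000).
    for i in range(0, len(rows), batch_size):
        errors = client.insert_rows_json(table_id, rows[i:i + batch_size])
        if errors:
            # insert_rows_json returns a list of per-row error dicts on failure
            raise RuntimeError(f"BigQuery insert errors: {errors}")

insert_in_batches([{"event": "click", "ts": "2021-01-01T00:00:00Z"}])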

ImageResizer cache Azure storage quota limits

I'm using ImageResizer as an Azure web app on a service plan with 50 GB of file storage. My settings for DiskCache are:
<diskcache dir="~/imagecache" autoclean="true" hashModifiedDate="true" subfolders="1024" asyncWrites="true" asyncBufferSize="10485760" cacheAccessTimeout="15000" logging="true"/>
But that doesn't seem to stop the imagecache folder from reaching the 50 GB limit quite quickly. I have around 100 GB of images in blob storage (original size); not all will be used on the same day, but the same image can be cached with different parameters multiple times. The cached images are around 200 KB on average.
Is there a way to stop the storage filling up so quickly? Is there a better way of using DiskCache, or should I use something else? The Premium plans with 250 GB and decent CPU/RAM are far too expensive to justify the cost for this.
Thanks
You can't limit the cache by file size, only by a (very) rough file count. Deleting the cache and setting subfolders="256" should keep you under 50 GB, assuming that the 200 KB average holds true.
... However, if your cache fills up "quickly" (as in 1-3 days), then you're probably going to experience serious cache churn and poor performance as your disk write queue skyrockets.
You might consider using a CDN if you can't get storage space for, say, 10 days worth of cached files.
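A quick back-of-the-envelope check of that estimate, in Python; the ~400 files-per-subfolder figure is an assumption about DiskCache's autoclean target rather than a documented guarantee, so swap in your own numbers.

FILES_PER_SUBFOLDER = 400   # assumed autoclean target per folder
AVG_FILE_KB = 200           # average cached image size from the question

def approx_cache_gb(subfolders):
    return subfolders * FILES_PER_SUBFOLDER * AVG_FILE_KB / (1024 ** 2)

print(approx_cache_gb(1024))   # ~78 GB -- overruns a 50 GB plan
print(approx_cache_gb(256))    # ~20 GB -- fits with room to spare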

Amazon Web Services Apache Server

I am trying to get a feel for the costs of running Apache on AWS continually. Assuming that the service is scarcely used, does anyone know how many CPU hours that would eat up in a month just by sitting there and running? I understand that this is slightly impractical, but I am trying to figure out the cost of entry to deploy an application on this platform (as compared to GAE). I suspect it is small, but I would like to know.
Amazon charges for EC2 instances by uptime, not CPU time. The cheapest Linux instance type costs 8.5¢/hour, or about $62/month. You can reduce this by either signing up for a reserved instance that you plan to run for an extended period, or by using a spot-priced instance where you bid the price you're prepared to pay.
You will also incur bandwidth charges for data transfer in and out of the EC2 network, and storage charges if you store any data permanently on AWS. These should be small compared with the cost of running the instance.
You can always get an estimate here:
http://calculator.s3.amazonaws.com/calc5.html
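If you do look at spot instances, you can also check current spot prices programmatically; here is a minimal boto3 sketch, with the region and instance type chosen purely as examples.

from datetime import datetime, timedelta, timezone
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")   # example region
resp = ec2.describe_spot_price_history(
    InstanceTypes=["t3.micro"],                      # example instance type
    ProductDescriptions=["Linux/UNIX"],
    StartTime=datetime.now(timezone.utc) - timedelta(hours=1),
)
for price in resp["SpotPriceHistory"]:
    print(price["AvailabilityZone"], price["InstanceType"], price["SpotPrice"])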

How can I monitor the bandwidth of an object in the Amazon S3 service?

How can I programmatically monitor the bandwidth used by an object in AWS S3? I would like to do this to prevent excessive bandwidth usage by clients who are using our services and costing us more than we can afford. We would like to limit each object to 1 TB of bandwidth.
The detailed usage reports are just per bucket, not per object.
What you could do is enable logging and parse the logs once an hour or so. It's certainly not instant, but it would prevent people from going way over your usage limits.
Also, s3stat is a good option up to a point. Once you start doing more than ~ 50 million requests per month, they have trouble crunching the data.
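To give an idea of the log-parsing approach, here is a rough Python sketch that sums the bytes-sent field per object key from S3 server access logs; the regex follows the documented log layout, but treat it as a starting point rather than production-grade parsing.

import re
from collections import defaultdict

# bucket_owner bucket [time] remote_ip requester request_id operation KEY "request-uri" status error BYTES ...
LINE = re.compile(r'^\S+ \S+ \[[^\]]+\] \S+ \S+ \S+ \S+ (\S+) "[^"]*" \S+ \S+ (\S+)')

def bytes_per_object(log_lines):
    totals = defaultdict(int)
    for line in log_lines:
        m = LINE.match(line)
        if m:
            key, sent = m.groups()
            if sent != "-":              # "-" means no bytes were sent for this request
                totals[key] += int(sent)
    return totals

# e.g. flag any object whose downloads exceeded 1 TB in the parsed period:
# heavy = {k: v for k, v in bytes_per_object(open("access.log")).items() if v > 10**12}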