Can we get the consumption details of a specific container under a specific blob? For example, if we have 3 blobs A, B, C and each blob has 3 containers: A has containers A1, A2 & A3, B has containers B1, B2, B3, and C has containers C1, C2, C3.
Microsoft Azure reports provide consumption for the overall subscription. Can we get the bandwidth, storage, CDN transactions, and data transfer per specific container?
Please suggest.
As Emily stated, blobs are under containers, and containers are under Storage accounts. If you're looking for metrics for a Storage account (e.g. E2E latency, total requests), then yes, you can do that. This is all provided in the Storage Account blade on the Portal. For container-specific metrics, it might be worth checking out Message Analyzer. The following article may help: https://azure.microsoft.com/en-us/documentation/articles/storage-e2e-troubleshooting/
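As a partial workaround for the storage part of the question, one could approximate per-container consumption by summing blob sizes yourself. The sketch below assumes the azure-storage-blob Python SDK; the connection string is a placeholder, and this only covers capacity, not bandwidth or transactions.

```python
# Rough sketch: approximate per-container storage consumption by summing blob
# sizes. The connection string is a placeholder for your own account.
from azure.storage.blob import BlobServiceClient

service = BlobServiceClient.from_connection_string("<your-connection-string>")

for container in service.list_containers():
    container_client = service.get_container_client(container.name)
    total_bytes = sum(blob.size for blob in container_client.list_blobs())
    print(f"{container.name}: {total_bytes / (1024 ** 3):.2f} GiB")
```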
We are setting up an active/active configuration using either Front Door or Traffic Manager as our front end. Our services are located in the Central US and East US 2 paired regions. There is an AKS cluster in each region. The AKS clusters will write data to a storage account located in their region. However, the files in the storage accounts must be the same in each region. The storage accounts must be zone redundant and read/writeable in each region at all times, so none of the Microsoft replication strategies work. This replication must be automatic; we can't have any manual process do the copy. I looked at Data Factory, but it seems to be regional, so I don't think that would work, but it's a possibility... maybe. Does anyone have any suggestions on the best way to accomplish this task?
I have tested this in my environment.
Replication between two storage accounts can be implemented using a Logic App.
In the Logic App, we can create two workflows: one for replicating data from storage account 1 to storage account 2, and the other for replicating data from storage account 2 to storage account 1.
I have tried to replicate blob data between storage accounts in different regions.
The workflow is:
When a blob is added or modified in storage account 1, the blob is copied to storage account 2.
Trigger: When a blob is added or modified (properties only) (V2) (use the connection settings of storage account 1)
Action: Copy blob (V2) (use the connection settings of storage account 2)
In a similar way, we can create another workflow to replicate data from storage account 2 to storage account 1.
Now, the data will be replicated between the two storage accounts.
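If you prefer a small script (for example, in an Azure Function triggered by the same blob event) instead of the Logic App connectors, a minimal sketch of the same copy step with the azure-storage-blob Python SDK might look like this. The connection strings, names, and the SAS token for the source blob are placeholders, not values from the original answer.

```python
# Sketch of the Logic App's "Copy blob" step done in code: a server-side copy
# of one blob from storage account 1 to storage account 2. All credentials
# and names below are placeholders.
from azure.storage.blob import BlobServiceClient

src = BlobServiceClient.from_connection_string("<storage-account-1-connection-string>")
dst = BlobServiceClient.from_connection_string("<storage-account-2-connection-string>")

def replicate_blob(container_name: str, blob_name: str, sas_token: str) -> None:
    # Build a readable URL for the source blob (here via a SAS token) and ask
    # the destination account to copy from it server-side.
    source_url = src.get_blob_client(container_name, blob_name).url + "?" + sas_token
    dst.get_blob_client(container_name, blob_name).start_copy_from_url(source_url)
```

The mirror-image workflow (account 2 to account 1) would be the same code with the two clients swapped.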
According to the S3 FAQ:
"Amazon S3 Standard, S3 Standard-Infrequent Access, and S3 Glacier
storage classes replicate data across a minimum of three AZs to
protect against the loss of one entire AZ. This remains true in
Regions where fewer than three AZs are publicly available."
I'm not clear on what this means. Suppose you store your data in a region with fewer than three AZs that are "publicly available." Does that mean that Amazon will store your data in an AZ within that region that is not publicly available, if necessary? Or that it will store your data in an AZ in another region to make up the difference?
S3 will store your data in an AZ that is not publicly available. The same is also true for DynamoDB, and possibly other services as well.
Source:
I want to say I heard it at a re:Invent session. I’ll try to find a link to some documentation.
This says that even if you use a Region where fewer than three AZs are publicly available, Amazon S3 still makes sure to replicate your data across a total of at least 3 AZs (including public and non-public ones).
I am thinking of storing some of my services' data in Azure Blob storage. I only require data retention for 15 days, and I would access it rarely, about one file a day. I may have at most 5 MB of data per day to store.
I am not sure which access tier (Hot, Cool, Archive) would be the better option price-wise. I am still not clear after reading the docs. Can anyone give me some pointers? Thanks.
I only require data retention for 15 days, and I would access it rarely, about one file a day.
Azure storage offers three storage tiers for Blob object storage so that you can store your data most cost-effectively depending on how you use it.
The Azure hot storage tier is optimized for storing data that is accessed frequently.
The Azure cool storage tier is optimized for storing data that is infrequently accessed and stored for at least 30 days.
The Azure archive storage tier is optimized for storing data that is rarely accessed and stored for at least 180 days with flexible latency requirements (on the order of hours). The archive storage tier is only available at the blob level and not at the storage account level.
So based on your requirements, I suggest you choose the Cool access tier.
For more details, you could refer to this article.
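In case it helps, here is a minimal sketch, assuming the azure-storage-blob Python SDK, of uploading a blob directly into the Cool tier or moving an existing blob there afterwards. The connection string, container, and blob names are placeholders.

```python
# Upload a blob into the Cool tier, or re-tier an existing blob to Cool.
# The connection string and names are placeholders.
from azure.storage.blob import BlobServiceClient, StandardBlobTier

service = BlobServiceClient.from_connection_string("<connection-string>")
blob = service.get_blob_client("mycontainer", "daily-file.json")

# Upload straight into the Cool tier.
with open("daily-file.json", "rb") as data:
    blob.upload_blob(data, overwrite=True, standard_blob_tier=StandardBlobTier.COOL)

# Or change the tier of a blob that already exists.
blob.set_standard_blob_tier(StandardBlobTier.COOL)
```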
I have a query related to pricing for the Azure File service: does pricing depend on the GB of file data actually used, or is it based on the provisioned size of the file share (e.g. 5 TB per share)?
I am referring to this link http://azure.microsoft.com/en-us/pricing/details/storage/ but it does not give me the exact pricing.
My scenario, let's say, is:
1) For two months I would require 4 GB of file data on the file share, and then for another 4 months I would require 5 GB of file data. How will the costing work?
2) Would I require two VMs to maintain availability, and how will the costing work for the usage of two VMs?
Kindly help me on this.
The price is per GB used.
Regarding your questions:
The share has a maximum size of 5 TB, but you are only billed for what you actually use, not the maximum size. In your example, you will be billed for 4 GB for the first 2 months, and 5 GB for the next 4 (a rough calculation is sketched after these points).
You do not require any VMs to maintain availability of the shares themselves - Azure Storage offers SLA on the share uptime. Of course, if you have an app that is running in a VM, and you need uptime guaranteed for the app, you have to look at how many VMs you need, but that is one level above storage - and those needs are not based on storage uptime considerations.
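For illustration only, a toy calculation of that pay-per-GB-month billing might look like the following. The per-GB rate is a made-up number, not the actual Azure Files price; check the pricing page for current rates.

```python
# Toy sketch of pay-per-GB-month billing for the scenario above.
ILLUSTRATIVE_RATE_PER_GB_MONTH = 0.06  # hypothetical $/GB/month, NOT the real price

usage = [(4, 2), (5, 4)]  # (GB used, number of months)
total = sum(gb * months * ILLUSTRATIVE_RATE_PER_GB_MONTH for gb, months in usage)
print(f"Estimated charge: ${total:.2f}")  # (4*2 + 5*4) GB-months * $0.06 = $1.68
```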
I have big files ranging in size from 20 GB to 90 GB. I will download the files with Internet Download Manager (IDM) to my Windows server on an Azure Virtual Machine. I will then need to transfer these files to my Azure Storage account to use later. The total size of the files is about 550 GB.
Will Azure Storage Explorer do the job, or is there a better solution?
My Azure account is a BizSpark one with a $150 limit; should I remove the limit before transferring the files to the storage account?
Any other advice?
Thanks very much in advance.
You should look at the AzCopy tool (http://aka.ms/AzCopy) - it is designed for large transfers of data to and from Azure Storage.
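If you would rather script the transfer from the VM than use AzCopy, a rough sketch with the azure-storage-blob Python SDK could look like this; the connection string, container, and file names are placeholders.

```python
# Upload one large local file to blob storage from the VM. max_concurrency
# uploads the file in parallel blocks, which helps for 20-90 GB files.
from azure.storage.blob import BlobServiceClient

service = BlobServiceClient.from_connection_string("<connection-string>")
blob = service.get_blob_client("bigfiles", "dataset-part1.bin")

with open(r"C:\downloads\dataset-part1.bin", "rb") as data:
    blob.upload_blob(data, overwrite=True, max_concurrency=8)
```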
You will save network egress cost if your storage account is in the same region as the VM where you are uploading from.
As for cost, this depends on everything you are using. You can use the Azure pricing calculator (http://azure.microsoft.com/en-us/pricing/calculator/) to help with estimating, or just use the pricing info directly from the Azure website and calculate your estimated usage to see whether you will fit within your $150 limit.