Many 4-character storage containers being created in my storage account - azure-storage

I have an Azure storage account.
For a while now, something has been creating 4-character empty containers as shown here; there are hundreds of them:
This storage account is used by:
Function Apps
Document Db (Cosmos)
Terraform State
Container Registry for Docker images
It's not a big deal but I don't want millions of empty containers being created by an unknown process.
Note 1: I have looked for any way to find more statistics or history for these containers, but I can't find any.
Note 2: We don't have any custom code that creates storage containers in our release pipelines (i.e., PowerShell or CLI).
thanks
Russ

It seems the containers are used to store Azure Functions logs. I have a storage account used only for an Azure Function and a Web App, and we can see it has containers like yours via Storage Explorer.
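As a quick way to audit how many of these short-named containers exist, here is a minimal sketch. The name pattern (four lowercase alphanumeric characters) is an assumption based on the question's description, and the example list of names is made up; in practice the list would come from a list-containers call such as `BlobServiceClient.list_containers()` in the `azure-storage-blob` Python SDK.

```python
import re

def find_short_containers(container_names, length=4):
    """Return container names that are exactly `length` lowercase
    alphanumeric characters - the pattern described in the question.
    (Assumed pattern; adjust the regex if your names differ.)"""
    pattern = re.compile(rf"^[a-z0-9]{{{length}}}$")
    return sorted(n for n in container_names if pattern.match(n))

# Illustrative names only; replace with the real container listing.
names = ["azure-webjobs-hosts", "ab12", "terraform-state", "x9q2", "images"]
print(find_short_containers(names))  # -> ['ab12', 'x9q2']
```

This at least lets you count the containers and confirm they all match one naming pattern before deciding which service is creating them.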

Related

Any raise in cost post migration of classic cloud storage to ARM

We have a classic cloud storage account in Azure. It does not hold any Tables, Queues, or File Shares, only Containers, and the containers do hold blob data. It does not have any VM disks. I have the following questions:
Would the access keys (both primary and secondary) change post-migration to ARM?
If any SAS tokens were generated with the classic cloud storage account, would they also change after migrating to ARM?
Is there any cost increase for the same set of data stored after the migration to ARM?
Would there be any change in the URLs post-migration?
Roughly how long does the ARM migration take?
I have not tried it yet; I would like to decide based on the replies.

Replication between two storage accounts in different regions, must be read/writeable and zone redundant

We are setting up an active/active configuration using either Front Door or Traffic Manager as our front end. Our services are located in the Central US and East US 2 paired regions, with an AKS cluster in each region. The AKS clusters write data to a storage account located in their own region, but the files in the two storage accounts must be kept identical.
The storage accounts must be zone redundant and read/writeable in each region at all times, so none of the Microsoft replication strategies work. The replication must also be automatic; we can't have any manual process doing the copy. I looked at Data Factory, but it seems to be regional, so I don't think that would work (though it's a possibility). Does anyone have any suggestions on the best way to accomplish this?
I have tested this in my environment.
Replication between two storage accounts can be implemented using a Logic App.
In the Logic App, we can create two workflows: one for replicating data from storage account 1 to storage account 2, and another for replicating data from storage account 2 to storage account 1.
I tried replicating blob data between storage accounts in different regions.
The workflow is:
When a blob is added or modified in storage account 1, the blob is copied to storage account 2.
Trigger: When a blob is added or modified (properties only) (V2) (use the connection settings of storage account 1)
Action: Copy blob (V2) (use the connection settings of storage account 2)
In a similar way, we can create another workflow to replicate data from storage account 2 to storage account 1.
Now the data will be replicated between the two storage accounts.
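One thing to watch with two opposing workflows is that the copy itself can re-trigger the workflow in the other direction. A minimal sketch of a guard for that, assuming you can compare each blob's last-modified timestamps (the function and parameter names here are illustrative, not actual Logic App connector fields):

```python
from datetime import datetime, timezone

def should_copy(source_modified, dest_modified):
    """Copy only when the source is strictly newer than the destination
    (or the destination blob does not exist yet), so a blob that was
    just replicated does not immediately bounce back the other way."""
    if dest_modified is None:
        return True
    return source_modified > dest_modified

older = datetime(2023, 1, 1, tzinfo=timezone.utc)
newer = datetime(2023, 1, 2, tzinfo=timezone.utc)
print(should_copy(newer, older))  # newer source -> True, copy it
print(should_copy(older, newer))  # already replicated -> False, skip
print(should_copy(older, None))   # missing at destination -> True
```

In a Logic App this could be expressed as a condition on the trigger's blob properties before the Copy blob action runs.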

Azure - Storage Account/ARM Issue

Pretty new to Azure and struggling with creating a VM from an existing VHD. I get the following error when executing New-AzureQuickVM -ImageName MyVirtualHD.vhd -Windows -ServiceName test:
CurrentStorageAccountName is not accessible. Ensure that current storage account is accessible and the same location or affinity group as your cloud service.
Select-AzureRMSubscription does not return anything for the CurrentStorageAccount property. Get-AzureRMStorageAccount does list my storage account.
Azure has two deployment models: "Classic" and "Resource Manager" (ARM). You're not seeing your ARM-created storage accounts because you're using classic-mode PowerShell commands to list storage accounts, while your storage accounts were created with the (newer) Resource Manager API; the classic API will only list storage accounts created with the classic management API.
Your example shows you mixing the two types. (Also, don't worry about resource groups in this context; that's not your issue, as resource groups are unrelated here.)
Once you select your subscription (via Select-AzureRmSubscription), and then Get-AzureRmStorageAccount, you should see all of your newly created storage accounts.
Also: Set-AzureSubscription does something different - it's for altering subscription properties. You want Select-... for selecting the default subscription to work with.

Storing Uploaded Files in Azure Web Sites: File System or Azure Storage

When using Azure Web Sites (WAWS), the general opinion seems to be that uploaded content such as photos or files should be stored in Azure Storage blobs and not in the WAWS file system.
Clearly, using Azure Storage is a great idea if you have a lot of data and need scale and redundancy; however, for small or simple sites it seems to add another layer of complexity, and it also means you can't easily use things like ImageResizer without purchasing the Azure-compatible licence, etc.
So, given that products like WordPress from the Azure Gallery use "/site/wwwroot/wp-content/uploads/" to store all uploaded files on WAWS, is there anything wrong with using the WAWS file system for storage, or are there other considerations to take into account when using WAWS?
The major drawback to using the WAWS storage is that your data is now intermingled with the application. By saving all of your plugins/images/blobs externally in a database or blob storage, you retain the flexibility to redeploy your application to a new region/datacenter by just pushing your code to the new website and changing connection strings.
If your plugins/images are stored on disk in WAWS, then you need to make sure that you are backing it up appropriately. If anything happens, you need to restore the site along with all of the data that had been uploaded.
Azure Web Sites uses Azure Storage as its file storage, so essentially the level of complexity you're talking about is abstracted away.
Another great benefit of this approach is that if you scale your web site to multiple instances, all of them will work with exactly the same file content.
Of course, if you want to use pure Azure Storage features like snapshots or sharing specific content with specific users, that is not available as-is. But for web site purposes it's quite good.
Hope that helps

Moving azure storage containers from one blob to another

Hello I have two blobs in my account:
Blob1
Blob2
Blob2 is empty; how can I take all the containers from Blob1 and move them to Blob2?
I am doing this because I would like to use a different subscription to help save some money. It doesn't seem like it's possible any other way.
This is all under the same Windows Live account.
Thank you!
I am glad to hear that Azure Support was able to reassign your subscription. In the future, if you would like to copy Azure Storage blobs from one account to another, you can use the Copy Blob REST API. If you are using Azure Storage Client Library, the corresponding method is ICloudBlob.StartCopyFromBlob. The Blob service copies blobs on a best-effort basis and you can use the value of x-ms-copy-id header to check the status of a specific copy operation.