What are the charges incurred for keeping Azure App Insights in one region and the Log Analytics Workspace in another? - azure-log-analytics

I have my App Insights (Classic) in West US 2. I want to move it to workspace-based mode. My Log Analytics Workspace is in another region (West US). Is it OK to migrate? What are the pricing impacts of the resources being in different regions?

If you already have a workspace in a different region, migrating to it is fine. If, however, you want to move the workspace itself to a different region when it already has instances in it, that is unfortunately not possible on your own; Microsoft can manually move the region for you, and there won't be any extra charge for having the resources in different regions.
Azure Resource Mover came into the picture in September 2020, but unfortunately Application Insights doesn't support it.

Related

Replication between two storage accounts in different regions, must be read/writeable and zone redundant

We are setting up an active/active configuration using either Front Door or Traffic Manager as our front end. Our services are located in the Central US and East US 2 paired regions. There is an AKS cluster in each region. The AKS clusters will write data to a storage account located in their region. However, the files in the storage accounts must be the same in each region. The storage accounts must be zone redundant and read/writeable in each region at all times, so none of the Microsoft replication strategies work. This replication must be automatic; we can't have any manual process to do the copy. I looked at Data Factory, but it seems to be regional, so I don't think that would work, but it's a possibility... maybe. Does anyone have any suggestions on the best way to accomplish this task?
I have tested this in my environment.
Replication between two storage accounts can be implemented using a Logic App.
In the Logic App, we can create two workflows: one for replicating data from storage account 1 to storage account 2, and another for replicating data from storage account 2 to storage account 1.
I have tried replicating blob data between storage accounts in different regions.
The workflow is:
When a blob is added or modified in storage account 1, the blob is copied to storage account 2.
Trigger: When a blob is added or modified (properties only) (V2) (use the connection settings of storage account 1)
Action: Copy blob (V2) (use the connection settings of storage account 2)
In a similar way, we can create another workflow for replicating data from storage account 2 to storage account 1.
Now the data will be replicated between the two storage accounts.
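To make the two-way replication idea concrete, here is a minimal Python sketch of the decision logic each workflow performs. The helper and sample data are hypothetical stand-ins for the blob listings the Logic App trigger would see; real replication would use the Azure Storage connectors or SDK as described above, not this code.

```python
from datetime import datetime, timezone

def blobs_to_copy(source: dict, destination: dict) -> list:
    """Return names of blobs that are new or newer in `source`.

    `source` and `destination` map blob name -> last-modified time,
    standing in for the container listings each workflow compares.
    """
    return sorted(
        name for name, modified in source.items()
        if name not in destination or modified > destination[name]
    )

# Two-way replication = running the comparison in both directions.
account1 = {"a.txt": datetime(2021, 5, 1, tzinfo=timezone.utc),
            "b.txt": datetime(2021, 5, 3, tzinfo=timezone.utc)}
account2 = {"a.txt": datetime(2021, 5, 2, tzinfo=timezone.utc)}

print(blobs_to_copy(account1, account2))  # b.txt is missing in account2
print(blobs_to_copy(account2, account1))  # a.txt is newer in account2
```

Note that running both directions on last-modified timestamps is what makes the scheme eventually consistent rather than conflict-free: if the same blob is updated in both regions at once, the later write wins.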

Is it possible to specify the AWS region for the storage location?

Not sure if this is even the correct place to ask, but I couldn't find any relevant information on this topic (apart from an old forum post that was last answered a year ago).
Like the question says, does anyone know if it's possible to specify an AWS or GCP region that FaunaDB will use?
I saw that on the Pricing page, the Business Plan offers Data locality* and Premium regions*, but they are marked as future features, with no further information such as a roadmap or planned release quarter.
Many of my clients are Canadian or from Europe and they are already asking me about hosting their data in their own country. I know that AWS and Google offer data center locations in Canada (for example), so I'm just looking for any further information on this and if/when it will be possible.
I really, really don't want to have to host my own database on a private server.
Thanks in advance!
It is not possible to specify an AWS region. Fauna database transactional replication involves all deployed regions.
We are working towards the data locality feature, but it is not available yet, nor does it have a finalized definition.
When the data locality feature is closer to completion, we'll be able to say more.
Hi ouairz: as eskwayrd mentions, it's not possible to select individual regions in which to locate your data. We do plan to provide a set of individual replication zones across the globe which you can select to control your data residency needs. For example, there would be an EU zone which you would use for data that must stay resident in EU member states. Other zones may include, for instance, Asia, Australia, North America, etc. We are considering a Canadian zone. Please feel free to reach out to product#fauna.com for more details.

Azure - Storage Account/ARM Issue

Pretty new to Azure and struggling with creating a VM from an existing VHD. I get the following error when executing New-AzureQuickVM -ImageName MyVirtualHD.vhd -Windows -ServiceName test:
CurrentStorageAccountName is not accessible. Ensure that current storage account is accessible and the same location or affinity group as your cloud service.
Select-AzureRMSubscription does not return anything for the CurrentStorageAccount property. Get-AzureRMStorageAccount does list my storage account.
Azure has two deployment models: "Classic" and "Resource Manager" (ARM). You're not seeing your ARM-created storage accounts because you're using classic-mode powershell commands to list storage accounts, and your storage accounts were created with the (newer) Resource Management API (and the classic API will only list storage accounts created with the "classic" management API).
Your example shows you mixing the two types. (Also, don't worry about resource groups in this context; that's not your issue. Resource groups are unrelated here.)
Once you select your subscription (via Select-AzureRmSubscription), and then Get-AzureRmStorageAccount, you should see all of your newly created storage accounts.
Also: Set-AzureSubscription does something different; it's for altering subscription properties. You want Select-... for selecting the default subscription to work with.

Sync data options between Windows 8 and Phone 8

I would like to create an app where the user can add and view data. Either adding at a desktop/tablet or phone and reading from either source. I would like the data store to be synced between any of the user's devices.
I'm starting to play with the trial of Azure, and it looks promising: probably a solid way to sync data through the cloud between a user's devices. Other than syncing between a user's devices, I have no need for cloud services currently.
I've seen some apps that do a 'Backup/Restore' model with the user's SkyDrive account. But this seems to be a manual process. I'd like to see it done seamlessly.
I've looked into Sync services, but that would be more like a hub/spoke solution. Again, I don't need a central database.
What are some options? At this point, I would be fine using just Windows 8 patterns/practices.
Because they are separate devices, you will need some service layer to do the store-and-forward for you. With that you have two basic choices: use the end user's own storage (aka SkyDrive) or use your own storage (aka Windows Azure).
SkyDrive is fully supported through the Live SDKs and provides a secure way to allow a user to store their data and synchronize it across multiple devices. You get security, and there is no cost to you for the server-side storage. The user owns their storage, not you. The limitation is that you may have issues sharing that same data across other devices or users where SkyDrive (or whatever file sync service you use) is not available.
With a service layer, like Azure, you have a lot more flexibility, but you also will be responsible for maintaining (and paying for) that server-side storage and those services. Have you looked at Windows Azure Mobile Services? With your Azure account you get 10 free Azure Mobile Services. You will pay for the SQL data storage on the backend, and that cost will depend on the amount of data you store on the server side. You also need to make sure to architect your application in a way that protects an individual user's data, but it is actually pretty easy to do, well documented, and gives you a lot of options.
Lastly, you may consider what type of data you want to share. SkyDrive is great for "Files". Pics, Songs, Videos, etc. Windows Azure Mobile Services (WAMS) is great for "Data".
Neither model is right or wrong. It just depends on your goals.
Hope that helps you work through the thought process.

How to control azure storage account costs in Azure when using WAD tables

We do not use our Azure storage account for anything except standard Azure infrastructure concerns (i.e. no application data). For example, the only tables we have are the WAD (Windows Azure Diagnostics) ones, and our only blob containers are for vsdeploy, iislogfiles, etc. We do not use queues in the app either.
14 cents per gigabyte isn't breaking the bank yet, but after several months of logging WAD info to these tables, the storage account is quickly nearing 100 GB.
We've found that deleting rows from these tables is painful, with continuation tokens, etc., because some contain millions of rows (we have been logging diagnostics info since June 2011).
One idea I have is to "cycle" storage accounts. Since they contain diagnostic data used by MS to help us debug unexpected exceptions and errors, we could log the WAD info to storage account A for a month, then switch to account B for the following month, then C.
By the time we get to the 3rd month, it's a pretty safe bet that we no longer need the diagnostics data from storage account A, and can safely delete it, or delete the tables themselves rather than individual rows.
Has anyone tried an approach like this? How do you keep WAD storage costs under control?
Account rotation would work, if you don't mind the manual work of updating your configurations and redeploying every month. That would probably be the most cost-effective route, as you wouldn't have to pay for all the transactions to query and delete the logs.
There are some tools that will purge logs for you. Azure Diagnostics Manager from Cerebrata [which is currently showing me an ad to the right :) ] will do it, though it's a manual process too. I think they have some PowerShell cmdlets to do it as well.
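The rotation scheme proposed above is simple enough to sketch. Here is a minimal Python illustration, assuming three storage accounts and a two-month retention window (the account names are made up); the month number picks the account WAD writes to and identifies which account's data is now old enough to delete wholesale:

```python
ACCOUNTS = ["wadlogsa", "wadlogsb", "wadlogsc"]  # hypothetical account names

def active_account(year: int, month: int) -> str:
    """Storage account WAD should log to for the given month."""
    return ACCOUNTS[(year * 12 + month) % len(ACCOUNTS)]

def purgeable_account(year: int, month: int) -> str:
    """Account whose data is two months old and safe to wipe.

    (month - 2) mod 3 == (month + 1) mod 3, so the account about to
    be reused next month is exactly the one that is safe to purge.
    """
    return ACCOUNTS[(year * 12 + month + 1) % len(ACCOUNTS)]

def monthly_cost_usd(gigabytes: float, rate_per_gb: float = 0.14) -> float:
    """Rough storage cost at the 14-cents-per-GB rate mentioned above."""
    return round(gigabytes * rate_per_gb, 2)

print(active_account(2011, 9), purgeable_account(2011, 9))
print(monthly_cost_usd(100))  # roughly $14/month at 100 GB
```

The purge step can then drop the old account's WAD tables outright (or recreate the account), which avoids the row-by-row deletes and continuation tokens described in the question.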