Is it possible to enable Geo-Replication on an already running VM?
How can we make sure that Geo-Replication is working?
Is there any other option for keeping the data safe?
You have to enable geo-replication on the storage account associated with your VM (the Azure storage account you used when creating the VM). You can enable this via the Service Management REST APIs (Create and Update Storage Account) or via the Windows Azure Portal. The current options for geo-replicating data in your storage account are GRS (Geographically Redundant Storage) and RA-GRS (Read-Access Geographically Redundant Storage). For more information, please refer to the blog post here.
I guess you mean Geo-Replication for a Storage Account. You should be able to enable it from the management portal.
You can optionally enable read from secondary, which will give you read access to the data on the secondary.
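If you prefer to script it, here is a rough C# sketch of calling the classic Update Storage Account operation mentioned above. The subscription ID, certificate file, and account name are placeholders, and this toggles the account to GRS (read access to the secondary is a separate setting):

```csharp
// Sketch: enabling geo-replication on a classic storage account via the
// Service Management REST API (Update Storage Account). Assumes a management
// certificate; all identifiers below are placeholders.
using System;
using System.Net.Http;
using System.Security.Cryptography.X509Certificates;
using System.Text;
using System.Threading.Tasks;

class EnableGeoReplication
{
    static async Task Main()
    {
        var handler = new HttpClientHandler();
        handler.ClientCertificates.Add(
            new X509Certificate2("management-cert.pfx", "certPassword")); // hypothetical cert

        using (var client = new HttpClient(handler))
        {
            client.DefaultRequestHeaders.Add("x-ms-version", "2013-03-01");

            // GeoReplicationEnabled=true switches the account to GRS.
            var body = new StringContent(
                "<UpdateStorageServiceInput xmlns=\"http://schemas.microsoft.com/windowsazure\">" +
                "<GeoReplicationEnabled>true</GeoReplicationEnabled>" +
                "</UpdateStorageServiceInput>",
                Encoding.UTF8, "application/xml");

            var response = await client.PutAsync(
                "https://management.core.windows.net/<subscription-id>/services/storageservices/mystorageaccount",
                body);
            Console.WriteLine(response.StatusCode); // 200 OK means the update was accepted
        }
    }
}
```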
I have a situation where my central MySQL DB and file system (S3) run on an EC2 instance.
But one of my applications runs locally at my client's site on a Pi 3 device, and it needs to look up data and files from both the DB and the file system in the cloud. The application in turn generates transactional records, which need to be uploaded to the DB and FS (perhaps at day's end).
The catch is that the cloud may sometimes be unreachable due to connectivity issues (the site is in a remote area).
What could be the best strategies to accommodate this kind of a scenario?
Can AWS Greengrass help in here?
How to keep the Lookup data (DB and FS) in sync with the local devices?
How to update/sync the transactional data generated by the local devices?
And finally, what could be the risks in such a deployment model?
Appreciate some help/suggestions.
How to keep the Lookup data (DB and FS) in sync with the local devices?
You can create a Greengrass Group and include all of the devices in that group. Make the devices subscribe to a topic, e.g. DB/Cloud/update. Once a device receives a message on that topic, trigger an on-demand lambda to download the latest information from the cloud. To make sure a device does not miss any updates while offline, you can use a persistent session; it ensures the device receives all the messages it missed once it is back online. A sketch of the subscription side is below.
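Greengrass lambdas themselves are written in Python, Node.js, or Java, so purely as an illustration of the persistent-session idea, here is a rough C# sketch of the device side using the M2Mqtt client. The core address, client ID, and topic handling are assumptions, and a real deployment would add the TLS certificates Greengrass requires:

```csharp
// Minimal sketch of a device subscribing with a persistent MQTT session.
// Broker host and client ID are placeholders; Greengrass cores additionally
// require mutual TLS, omitted here for brevity.
using System;
using System.Text;
using uPLibrary.Networking.M2Mqtt;
using uPLibrary.Networking.M2Mqtt.Messages;

class UpdateListener
{
    static void Main()
    {
        var client = new MqttClient("greengrass-core.local"); // hypothetical core address

        client.MqttMsgPublishReceived += (sender, e) =>
        {
            // A message on DB/Cloud/update signals fresh lookup data is
            // available; this is where you would pull the latest snapshot.
            Console.WriteLine($"Update notice on {e.Topic}: {Encoding.UTF8.GetString(e.Message)}");
        };

        // cleanSession=false asks the broker for a persistent session, so
        // QoS 1 messages published while this device is offline are queued
        // and delivered on reconnect.
        client.Connect("pi3-device-01", null, null, false, 60);

        client.Subscribe(new[] { "DB/Cloud/update" },
                         new[] { MqttMsgBase.QOS_LEVEL_AT_LEAST_ONCE });

        Console.ReadLine(); // keep the process alive and listening
    }
}
```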
How to update/sync the transactional data generated by the local devices?
You may try Stream Manager: https://docs.aws.amazon.com/greengrass/latest/developerguide/stream-manager.html
Right now, it allows you to add a local lambda to pre-process the data and sync it up with the cloud.
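Stream Manager ships SDKs for Java, Python, and Node.js rather than .NET, so purely to illustrate the same store-and-forward pattern it implements, here is a hedged C# sketch that queues records locally and flushes them to S3 when connectivity returns. The bucket name and file paths are hypothetical:

```csharp
// Generic store-and-forward sketch (not the Stream Manager API itself):
// transactional records are appended to a local file while offline and
// flushed to S3 when the cloud is reachable again.
using System;
using System.IO;
using System.Threading.Tasks;
using Amazon.S3;
using Amazon.S3.Model;

class TransactionUploader
{
    const string LocalQueue = "/var/spool/transactions.log"; // hypothetical path

    public static void Append(string record)
    {
        // Always write locally first so nothing is lost during an outage.
        File.AppendAllText(LocalQueue, record + Environment.NewLine);
    }

    public static async Task FlushAsync(IAmazonS3 s3)
    {
        if (!File.Exists(LocalQueue)) return;
        try
        {
            await s3.PutObjectAsync(new PutObjectRequest
            {
                BucketName = "my-transaction-bucket", // hypothetical bucket
                Key = $"uploads/{DateTime.UtcNow:yyyyMMdd-HHmmss}.log",
                FilePath = LocalQueue
            });
            File.Delete(LocalQueue); // clear the queue only after a successful upload
        }
        catch (AmazonS3Exception)
        {
            // Still offline; keep the queue and retry at the next day-end run.
        }
    }
}
```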
To resolve a few issues we are running into with Docker and running multiple instances of some services, we need to be able to share values between running instances of the same Docker image. The original solution I found was to create a storage account in Azure (where we are running the Kubernetes instance that houses the containers) and a Key Vault in Azure, accessing both via the well-defined APIs that Microsoft has provided for Data Protection (detailed here).
Our architect instead wants to use Kubernetes Persistent Volumes, but he has not provided information on how to accomplish this (he just wants to save money on the Azure subscription by not having an additional storage account or key storage). I'm very new to Kubernetes and have no real idea how to accomplish this, and my searches so far have not turned up much of use.
Is there an extension method that should be used for Persistent Volumes? Would this just act like a shared file location and be accessible with the PersistKeysToFileSystem API for Data Protection? Any resources that you could point me to would be greatly appreciated.
A PersistentVolume with Kubernetes in Azure will not give you exactly the same functionality as Key Vault in Azure; a sketch of wiring up both options follows the comparison below.
PersistentVolume:
Store locally on a mounted volume on a server
Volume can be encrypted
Volume moves with the pod.
If the pod starts on a different server, the volume moves.
Accessing the volume from other pods is not that easy.
You can control performance by assigning guaranteed IOPs to the volume (from the cloud provider)
Key Vault:
Store keys in a centralized location managed by Azure
Data is encrypted at rest and in transit.
You rely on a remote API rather than a local file system.
There might be a performance hit by going to an external service
I assume this is not a major problem within Azure.
Kubernetes pods can access the service from anywhere as long as they have network connectivity to the service.
Less maintenance time, since it's already maintained by Azure.
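To make the comparison concrete, here is a rough sketch of wiring up both options with the ASP.NET Core Data Protection API. The mount path, application name, blob URI, and key identifier are placeholders:

```csharp
// Sketch of both approaches with the ASP.NET Core Data Protection API.
// Paths and identifiers are placeholders.
using System.IO;
using Microsoft.AspNetCore.DataProtection;
using Microsoft.Extensions.DependencyInjection;

public class Startup
{
    public void ConfigureServices(IServiceCollection services)
    {
        // Option 1: PersistentVolume route - point the key ring at a volume
        // mounted into every replica, so all pods share the same keys.
        services.AddDataProtection()
                .PersistKeysToFileSystem(new DirectoryInfo("/mnt/dp-keys"))
                .SetApplicationName("my-shared-app");

        // Option 2: Key Vault route - keys live in blob storage and are
        // wrapped by a Key Vault key (packages:
        // Microsoft.AspNetCore.DataProtection.AzureStorage / .AzureKeyVault).
        // services.AddDataProtection()
        //         .PersistKeysToAzureBlobStorage(new Uri("<blob-sas-uri>"))
        //         .ProtectKeysWithAzureKeyVault(keyVaultClient, "<key-identifier>");
    }
}
```

For the PersistentVolume route to behave like a shared file location across replicas, the volume must support ReadWriteMany access (e.g. Azure Files); a plain Azure Disk volume can only be mounted by one node at a time.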
Is there any difference between Azure Redis Cache and redis.io? If I have an Azure subscription, do I need to purchase a separate plan for Azure Redis?
If there is a difference, when should I use Azure Redis vs. redis.io?
Azure Redis Cache is a managed Azure service, which creates and manages the Redis instance(s) (updates, automatic failover, etc.) on behalf of the customer and provides the customer with TCP endpoint(s) to communicate with. It ultimately consists of one or more instances of the Redis server (as described on redis.io); however, the customer doesn't have access to the underlying VMs, as the service is a managed service. Various management operations are available via the Azure portal or the management interfaces. Details are here: Azure Redis Cache service.
There shouldn't be any difference in Redis functionality between what you see on redis.io and the Azure Redis service.
Once you have an Azure subscription, you pay for the Azure services that you use based on their pricing. The pricing details for the Azure Redis Cache service are here: Azure Redis Cache Pricing
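Since the service exposes a standard Redis endpoint, any redis.io-compatible client works unchanged. A minimal C# sketch with StackExchange.Redis, where the host name and access key are placeholders from the Azure portal:

```csharp
// Minimal sketch: connecting to an Azure Redis Cache endpoint with the
// standard StackExchange.Redis client. Host and access key are placeholders.
using System;
using StackExchange.Redis;

class RedisDemo
{
    static void Main()
    {
        // Azure exposes a plain Redis endpoint; SSL on port 6380 is the default.
        var muxer = ConnectionMultiplexer.Connect(
            "mycache.redis.cache.windows.net:6380,password=<access-key>,ssl=true,abortConnect=false");

        IDatabase db = muxer.GetDatabase();
        db.StringSet("greeting", "hello from Azure Redis"); // ordinary redis.io SET
        Console.WriteLine(db.StringGet("greeting"));        // ordinary redis.io GET
    }
}
```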
I want to deploy Spinnaker for my team, but I have encountered a problem. The Spinnaker documentation says:
Before you can deploy Spinnaker, you must configure it to use one of the supported storage types.
Azure Storage
Google Cloud Storage
Redis
S3
Can Spinnaker use local storage such as a MySQL database?
The Spinnaker microservice responsible for persisting your pipeline configs and application metadata, front50, has support for the storage systems you listed. One could add support for additional systems like MySQL by extending front50, but that support does not exist today.
Some folks have had success configuring front50 to use s3 and pointing it at a minio installation.
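front50 itself is configured through Spinnaker's settings rather than application code, but if you go the minio route, a quick C# sketch like this can confirm the endpoint speaks S3. The endpoint, credentials, and bucket listing are assumptions (minio's documented defaults are shown):

```csharp
// Sketch: verifying that a local minio endpoint speaks the S3 API, the same
// property front50's s3 backend relies on. Endpoint and credentials are
// placeholders (minio's documented defaults).
using System;
using System.Threading.Tasks;
using Amazon.S3;

class MinioCheck
{
    static async Task Main()
    {
        var config = new AmazonS3Config
        {
            ServiceURL = "http://localhost:9000", // minio endpoint instead of AWS
            ForcePathStyle = true                 // minio expects path-style addressing
        };

        using (var s3 = new AmazonS3Client("minioadmin", "minioadmin", config))
        {
            var response = await s3.ListBucketsAsync();
            foreach (var bucket in response.Buckets)
                Console.WriteLine(bucket.BucketName); // front50 would keep pipelines in one of these
        }
    }
}
```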
How do I set up WCF tracing in Azure (production environment) so that all WCF errors are logged?
Can't you use Windows Azure Diagnostics for this purpose? Once it is properly configured, your trace logs will be available in a Windows Azure Storage account that you have specified in your code. More information about Windows Azure Diagnostics can be found here: https://www.windowsazure.com/en-us/develop/net/common-tasks/diagnostics/.
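For example, here is a minimal sketch of the classic diagnostics bootstrap in a role's OnStart, assuming the standard Diagnostics plugin connection string is configured; the transfer period and level filter are example values:

```csharp
// Sketch: classic Windows Azure Diagnostics setup in a role's OnStart,
// shipping trace logs to the storage account named by the standard
// Diagnostics connection string.
using System;
using Microsoft.WindowsAzure.Diagnostics;
using Microsoft.WindowsAzure.ServiceRuntime;

public class WebRole : RoleEntryPoint
{
    public override bool OnStart()
    {
        DiagnosticMonitorConfiguration config =
            DiagnosticMonitor.GetDefaultInitialConfiguration();

        // Push buffered trace entries (including WCF errors routed through
        // System.Diagnostics) to storage every minute, errors and above only.
        config.Logs.ScheduledTransferPeriod = TimeSpan.FromMinutes(1);
        config.Logs.ScheduledTransferLogLevelFilter = LogLevel.Error;

        DiagnosticMonitor.Start(
            "Microsoft.WindowsAzure.Plugins.Diagnostics.ConnectionString", config);

        return base.OnStart();
    }
}
```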
Just like Gaurav said, you can simply use Azure Diagnostics to log all errors to your storage account (there's a good read in MSDN Magazine: Take Control of Logging and Tracing in Windows Azure).
Now, I personally don't like the 'flat' logging when working with WCF; I find it very important to be able to trace through activities. That's why, for all Azure projects where I use WCF, I don't use the normal diagnostics.
Instead, I use a trick documented by Christian Weyer: I log to a classic *.svclog file and have those files shipped to my storage account. Then I use CloudBerry Storage Explorer to view those logs, which include the activities. This works by creating a custom XmlWriterTraceListener that writes to a local resource whose contents are shipped to your storage account.
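Here is a rough sketch of that idea, assuming a local resource named "WcfLogs" has been declared in the service definition (the resource name and file naming scheme are my assumptions, not part of the original write-up):

```csharp
// Sketch: a trace listener that writes *.svclog files into a role's local
// resource so diagnostics can ship them to blob storage. Assumes a local
// resource named "WcfLogs" exists in the service definition.
using System;
using System.Diagnostics;
using System.IO;
using Microsoft.WindowsAzure.ServiceRuntime;

public class LocalResourceTraceListener : XmlWriterTraceListener
{
    public LocalResourceTraceListener()
        : base(Path.Combine(
              RoleEnvironment.GetLocalResource("WcfLogs").RootPath,
              $"trace-{Guid.NewGuid()}.svclog")) // one file per instance start
    {
    }
}
```

You would then register this listener under system.diagnostics in web.config and point a Windows Azure Diagnostics directory transfer at the local resource folder, so the *.svclog files end up in blob storage where CloudBerry Storage Explorer can open them.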