Blob storage folder backups - azure-synapse

We have a lot of pipelines in the Synapse workspace.
We use a serverless SQL pool, which is online.
The dedicated SQL pool is paused, as we do not use it to hold data.
We use a DevOps repository.
The support team will be doing some clean-up in the environment, e.g. running an old Terraform configuration to re-create the environment.
Question:
I understand that in our DevOps repository everything seems to be backed up except the blob storage folders.
How can we make sure that, in case something gets lost or goes wrong during the workspace clean-up, we will be able to get everything back?
Thank you

ADLS Gen2 has its own tools for ensuring that a DR event won't affect you. One of the most powerful is replication, including the geo-replicated storage options.
Data Lake Storage Gen2 already handles 3x replication under the hood to guard against localized hardware failures. Additional replication options, such as ZRS or GZRS, improve HA, while GRS and RA-GRS improve DR. When building a plan for HA, in the event of a service interruption the workload needs access to the latest data as quickly as possible, by switching over to a separately replicated instance locally or in a new region.
In a DR strategy, to prepare for the unlikely event of a catastrophic failure of a region, it is also important to have data replicated to a different region using GRS or RA-GRS replication. You must also consider your requirements for edge cases such as data corruption, where you may want to create periodic snapshots to fall back to. Depending on the importance and size of the data, consider rolling delta snapshots of 1-, 6-, and 24-hour periods, according to your risk tolerances.
For data resiliency with Data Lake Storage Gen2, it is recommended to geo-replicate your data via GRS or RA-GRS, whichever satisfies your HA/DR requirements. You should also consider ways for the application using Data Lake Storage Gen2 to automatically fail over to the secondary region, through monitoring triggers or the length of failed attempts, or at least send a notification to admins for manual intervention. Keep in mind that there is a tradeoff between failing over and waiting for the service to come back online.
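For the data-corruption edge case above, the periodic snapshots can be automated with the `azure-storage-blob` SDK. A minimal sketch, assuming that SDK is installed and that the connection string, container, and blob names (all placeholders here) point at your storage account; the rolling-window check is plain stdlib logic:

```python
from datetime import datetime, timedelta

def snapshot_due(last_snapshot: datetime, now: datetime, interval_hours: int) -> bool:
    """Return True when the rolling snapshot window (e.g. 1, 6, or 24 h) has elapsed."""
    return now - last_snapshot >= timedelta(hours=interval_hours)

def take_blob_snapshot(conn_str: str, container: str, blob_name: str):
    """Create a point-in-time, read-only snapshot of one blob.

    Requires the azure-storage-blob package; conn_str/container/blob_name
    are placeholders for your own values.
    """
    from azure.storage.blob import BlobServiceClient
    service = BlobServiceClient.from_connection_string(conn_str)
    blob = service.get_blob_client(container=container, blob=blob_name)
    return blob.create_snapshot()  # snapshot metadata includes its timestamp
```

A scheduler (e.g. a Synapse pipeline or Azure Function on a timer) would call `snapshot_due` per tier and then `take_blob_snapshot` for the blobs in scope.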
For more details, refer to Best practices for using Azure Data Lake Storage Gen2.
There is also a great article on Azure Synapse disaster recovery architecture.

Related

How to clone BigQuery datasets

We are evaluating BigQuery and Snowflake for our new cloud warehouse. Does BigQuery have a built-in cloning feature? This would let our developers create multiple development environments quickly, and we could also restore to a point in time. Snowflake has a zero-copy clone to minimize the storage footprint. For managing DEV/QA environments in BigQuery, do we need to manually copy the datasets from prod? Please share some insights.
You can use a pre-GA feature, the BigQuery Data Transfer Service, to create copies of datasets; you can also schedule and configure the jobs to run periodically so that the target dataset stays in sync with the source dataset. Restoring to a point in time is available via FOR SYSTEM_TIME AS OF in the FROM clause.
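The point-in-time restore looks like this in practice. A small sketch that builds the time-travel query; the fully qualified table name is a placeholder, and the commented-out client call assumes the `google-cloud-bigquery` package is installed and authenticated:

```python
def time_travel_query(table: str, hours_ago: int) -> str:
    """Build a BigQuery point-in-time query using time travel.

    `table` is a fully qualified project.dataset.table name (placeholder here);
    note that time travel only reaches back over BigQuery's retention window
    (7 days by default).
    """
    return (
        f"SELECT * FROM `{table}` "
        f"FOR SYSTEM_TIME AS OF "
        f"TIMESTAMP_SUB(CURRENT_TIMESTAMP(), INTERVAL {hours_ago} HOUR)"
    )

# With the google-cloud-bigquery client:
# from google.cloud import bigquery
# rows = bigquery.Client().query(time_travel_query("myproj.dev_qa.orders", 1)).result()
```

Pairing this with a scheduled dataset copy gives you both a synced DEV/QA environment and a short-horizon restore path.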
I don't think there is an exact equivalent of Snowflake's clone in BigQuery. What would this mean?
You will be charged for the additional storage, and for data transfer if it is cross-region (priced equivalently to Compute Engine network egress between regions).
Cloning is not instantaneous; for large tables (> 1 TB) you might still have to wait a while before the new copy is created.

Is the horizontal scaling (scale-out) option available in Azure SQL Managed Instance?

Is the horizontal scaling (scale-out) option available in Azure SQL Managed Instance?
Yes, Azure SQL Managed Instance supports scale-out.
You can refer to the document #Peter Bons provided in the comments:
Scale up/down: Dynamically scale database resources with minimal downtime
Azure SQL Database and SQL Managed Instance enable you to dynamically add more resources to your database with minimal downtime; however, there is a switch-over period where connectivity is lost to the database for a short amount of time, which can be mitigated using retry logic.
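That retry logic can be as simple as an exponential backoff around the database call. A minimal sketch; the retriable exception types are an assumption and should be swapped for your driver's transient-error classes (e.g. the ODBC/pyodbc errors raised during the switch-over window):

```python
import time

def with_retry(fn, attempts=5, base_delay=0.5, retriable=(ConnectionError,)):
    """Call fn(), retrying with exponential backoff on transient errors.

    Covers the brief connectivity loss during a scale operation: retry a
    few times, doubling the delay each attempt, and re-raise if the error
    persists past the last attempt.
    """
    for attempt in range(attempts):
        try:
            return fn()
        except retriable:
            if attempt == attempts - 1:
                raise
            time.sleep(base_delay * (2 ** attempt))
```

Usage would be `with_retry(lambda: cursor.execute(sql))` around any statement that may hit the switch-over.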
Scale out: Use read-only replicas to offload read-only query workloads
As part of the High Availability architecture, each single database, elastic pool database, and managed instance in the Premium and Business Critical service tier is automatically provisioned with a primary read-write replica and several secondary read-only replicas. The secondary replicas are provisioned with the same compute size as the primary replica. The read scale-out feature allows you to offload read-only workloads using the compute capacity of one of the read-only replicas, instead of running them on the read-write replica.
HTH.
Yes, the scale-out option is available in the Business Critical (BC) tier. BC utilizes three nodes: one primary and two secondaries, using Always On on the backend. If you need to use a secondary for reporting, just set ApplicationIntent=ReadOnly in the connection string and your application will be routed to one of the secondary nodes.
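Concretely, the routing is driven entirely by that one connection-string keyword. A sketch of building such a string for pyodbc; the server, database, and credential values are placeholders, and the driver name assumes ODBC Driver 18 for SQL Server is installed:

```python
def read_only_conn_str(server: str, database: str, user: str, password: str) -> str:
    """Build an ODBC connection string routed to a readable secondary.

    ApplicationIntent=ReadOnly is what triggers read-only routing to a
    secondary replica; all other values here are placeholders.
    """
    return (
        "Driver={ODBC Driver 18 for SQL Server};"
        f"Server=tcp:{server},1433;Database={database};"
        f"Uid={user};Pwd={password};"
        "Encrypt=yes;ApplicationIntent=ReadOnly;"
    )

# Usage (assuming pyodbc is installed):
# import pyodbc
# conn = pyodbc.connect(read_only_conn_str("myserver.database.windows.net", "mydb", "reporting", "..."))
```

Point your reporting workload at this string and the primary's read-write replica is left untouched.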

Azure SQL Secondary read only copy

We want to have a secondary read-only database for analytics, reporting, monitoring, and exposing to another application. Since we are using Azure SQL Database, DTU usage is increasing because of these reads. So I want a secondary database (read scale-out) whose credentials I can share with those consumers, so that there is no impact on the primary database. Please help me set up the secondary database (read scale-out) in Azure. I have heard about geo-replication, but it is only available in certain regions.
The capability to use a local readable secondary is in preview. It works on larger reservation sizes (Premium and up, though perhaps some of the Standard tiers will work). This is not limited to certain regions today.
Active geo-replication can also be used for read scale-out (but note that it costs money for the DR copy, since it gives you disaster recovery, not just read scale-out).
Instructions for both can be found here:
https://learn.microsoft.com/en-us/azure/sql-database/sql-database-read-scale-out
Intra-database resource governance would be the other way to split mixed workloads; however, this feature does not currently exist in Azure SQL (though it is a roadmap item).

How to increase queries per minute of Google Cloud SQL?

As in the title, I want to increase the number of queries per second against Google Cloud SQL. Currently my application runs on my local machine; it repeatedly sends queries to and receives data back from the Cloud SQL server. More specifically, my location is in Vietnam, and the server (free tier, though) is in Singapore. The maximum QPS I can get is ~80, which is unacceptable. I know I could get better QPS by putting my application on the cloud, in the same location as the SQL server, but that alone requires a lot of configuration and work. Are there any other solutions?
Thank you in advance.
Colocating your application front-end layer with the data persistence layer should be your priority: deploy your code to the cloud as well.
Use persistent connections/connection pooling to cut down on connection-establishment overhead.
Free tier instances for Cloud SQL do not exist. What are you referring to here? f1-micro GCE instances are not free in the Singapore region either.
Depending on the complexity of your queries, read/write pattern, size of your dataset, etc., the performance of your DB could be I/O-bound. Ensuring your instance is provisioned with SSD storage and/or increasing the data disk size can help lift IOPS limits, further improving DB performance.
Side note: don't confuse the commonly used abbreviation GCS (Google Cloud Storage) with Google Cloud SQL.

Reliability of Windows Azure Storage Logging

We are in the process of creating a piece of software to back up a storage account (blobs & tables, no queues), and while researching how to do this we came across the storage logging feature. We would like to use this feature to do smart incremental backups after an initial full backup. However, in the introductory post for this feature, the following caveat is mentioned:
During normal operation all requests are logged; but it is important to note that logging is provided on a best effort basis. This means we do not guarantee that every message will be logged due to the fact that the log data is buffered in memory at the storage front-ends before being written out, and if a role is restarted then its buffer of logs would be lost.
As this is a backup solution, this behavior makes the feature unusable; we can't miss a file. However, I wonder if this has changed in the meantime, as Microsoft has built a number of features on top of it, like blob function triggers and, very recently, the new Azure Event Grid.
My question is whether this behavior has changed in the meantime or are the logs still on a best effort basis and should we stick to our 'scanning' strategy?
The behavior for Azure Storage logs is still the same. For your case, you might be better off using the Event Grid notifications for Blob storage: https://azure.microsoft.com/en-us/blog/introducing-azure-event-grid-an-event-service-for-modern-applications/
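For the incremental-backup use case, the consumer of those Event Grid notifications mostly needs to turn `Microsoft.Storage.BlobCreated` events into container/blob paths. A sketch based on the documented Event Grid event schema, where `subject` has the form `/blobServices/default/containers/<container>/blobs/<path>`; the sample payload values in the test are illustrative:

```python
def blob_created_paths(events):
    """Extract (container, blob) paths from Event Grid blob-created events.

    Skips any event that is not Microsoft.Storage.BlobCreated and any
    subject that does not match the documented blob-event shape.
    """
    prefix = "/blobServices/default/containers/"
    paths = []
    for ev in events:
        if ev.get("eventType") != "Microsoft.Storage.BlobCreated":
            continue
        subject = ev.get("subject", "")
        if subject.startswith(prefix):
            container, sep, blob = subject[len(prefix):].partition("/blobs/")
            if sep:
                paths.append((container, blob))
    return paths
```

Unlike best-effort logs, Event Grid is designed for at-least-once delivery, so a handler like this (plus periodic full scans as a safety net) fits the incremental-backup design better.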