How to find earliest restore point for database on Azure SQL Managed Instance

Azure SQL Database Managed Instance shows, in the Azure portal, the earliest time to which a database can be restored. Is there a way to find it programmatically using PowerShell or Azure CLI?

The earliest restore date can be retrieved programmatically with the Get-AzureRmSqlInstanceDatabase PowerShell cmdlet:
Get-AzureRmSqlInstanceDatabase -InstanceName "jovanpop-test-instance" -ResourceGroupName "my_rg" |
Select-Object Name, CreationDate, EarliestRestorePoint

Name                       CreationDate          EarliestRestorePoint
----                       ------------          --------------------
ValidationDB               1/22/2019 11:00:42 AM 1/22/2019 11:02:35 AM
WideWorldImportersDWFull   7/4/2018 7:55:59 AM   1/15/2019 11:31:00 AM
tpcc1000                   8/27/2018 2:37:23 PM  1/15/2019 11:31:00 AM
WideWorldImportersStandard 7/4/2018 8:00:14 AM   1/15/2019 11:31:00 AM
If you are initiating a point-in-time restore, these dates are the boundary values you can use.
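If you prefer the Azure CLI, the same values should be retrievable through the az sql midb command group; a sketch, reusing the instance and resource group names from above (the earliestRestorePoint property name is an assumption based on the ARM resource):

az sql midb list \
    --managed-instance "jovanpop-test-instance" \
    --resource-group "my_rg" \
    --query "[].{name:name, earliestRestorePoint:earliestRestorePoint}" \
    --output table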

Related

Load data from csv in google cloud storage as bigquery 'in' query

I want to compose a query like this in BigQuery, with my file stored in Google Cloud Storage:
select * from my_table where id in ('gs://bucket_name/file_name.csv')
I get no results. Is it possible, or am I missing something?
Using the CLI or the API, you can run ad-hoc queries against GCS files without creating tables; a full example is covered here: Accessing external (federated) data sources with BigQuery’s data access layer.
The code snippet is:
bq query --external_table_definition=healthwatch::date:DATETIME,bpm:INTEGER,sleep:STRING,type:STRING@CSV=gs://healthwatch2/healthwatchdetail*.csv 'SELECT date,bpm,type FROM healthwatch WHERE type = "elevated" and bpm > 150;'
Waiting on bqjob_r5770d3fba8d81732_00000162ad25a6b8_1 ... (0s)
Current status: DONE
+---------------------+-----+----------+
| date | bpm | type |
+---------------------+-----+----------+
| 2018-02-07T11:14:44 | 186 | elevated |
| 2018-02-07T11:14:49 | 184 | elevated |
+---------------------+-----+----------+
On the other hand, you can create a permanent EXTERNAL table with schema autodetection, so it is usable from the web UI and persists; read more about that in Querying Cloud Storage Data.
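If you go the permanent-table route, a minimal sketch with the bq CLI, reusing the healthwatch files from the snippet above (the dataset and table names are made up for illustration):

# Generate a table definition with schema autodetection, then create
# a permanent external table from it
bq mkdef --autodetect --source_format=CSV \
    "gs://healthwatch2/healthwatchdetail*.csv" > healthwatch_def.json
bq mk --external_table_definition=healthwatch_def.json mydataset.healthwatch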

How to set DTU for Azure Sql Database via SQL when copying?

I know that you can create a new Azure SQL DB by copying an existing one by running the following SQL command in the [master] db of the destination server:
CREATE DATABASE [New_DB_Name] AS COPY OF [Azure_Server_Name].[Existing_DB_Name]
What I want to find out is whether it's possible to change the number of DTUs the copy will have at the time of creating the copy.
As a real-life example, if we're copying a [prod] database to create a new [qa] database, the copy might only need resources to handle a small testing team hitting the QA DB, not a full production audience. Scaling down the assigned DTUs would result in a cheaper DB. At the moment we manually scale after the copy is complete, but this takes just as long as the initial copy (several hours for our larger DBs) because it copies the database yet again. In an ideal world we would like to skip that step and fully automate the copy process.
According to the docs, it is:
CREATE DATABASE database_name
    AS COPY OF [source_server_name.]source_database_name
    [ ( SERVICE_OBJECTIVE =
          { 'basic' | 'S0' | 'S1' | 'S2' | 'S3' | 'S4' | 'S6' | 'S7' | 'S9' | 'S12'
          | 'GP_GEN4_1' | 'GP_GEN4_2' | 'GP_GEN4_4' | 'GP_GEN4_8' | 'GP_GEN4_16' | 'GP_GEN4_24'
          | 'BC_GEN4_1' | 'BC_GEN4_2' | 'BC_GEN4_4' | 'BC_GEN4_8' | 'BC_GEN4_16' | 'BC_GEN4_24'
          | 'GP_GEN5_2' | 'GP_GEN5_4' | 'GP_GEN5_8' | 'GP_GEN5_16' | 'GP_GEN5_24' | 'GP_GEN5_32' | 'GP_GEN5_48' | 'GP_GEN5_80'
          | 'BC_GEN5_2' | 'BC_GEN5_4' | 'BC_GEN5_8' | 'BC_GEN5_16' | 'BC_GEN5_24' | 'BC_GEN5_32' | 'BC_GEN5_48' | 'BC_GEN5_80'
          | { ELASTIC_POOL(name = <elastic_pool_name>) } } )
    ]
[;]
CREATE DATABASE (Azure SQL Database)
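As an illustration, a hypothetical prod-to-QA copy that lands directly on a smaller compute size could look like this (server and database names are placeholders, and the target objective stays within the same service tier):

-- Run in the master database of the destination server;
-- the copy is created at the S2 performance level
CREATE DATABASE [qa]
    AS COPY OF [prod-server].[prod]
    ( SERVICE_OBJECTIVE = 'S2' );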
You can also change the performance level during a copy from the PowerShell API:
New-AzureRmSqlDatabaseCopy
But you can only choose "a different performance level within the same service tier (edition)": Copy an Azure SQL Database.
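A sketch of that PowerShell call, with all resource names as placeholders:

# Copy the database, giving the copy a smaller performance level
# within the same service tier
New-AzureRmSqlDatabaseCopy -ResourceGroupName "prod-rg" `
    -ServerName "prod-server" `
    -DatabaseName "prod" `
    -CopyResourceGroupName "qa-rg" `
    -CopyServerName "qa-server" `
    -CopyDatabaseName "qa" `
    -ServiceObjectiveName "S2"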
You can, however, copy the database into an elastic pool in the same service tier, so you wouldn't be allocating new DTU resources. You might have a single pool for all your dev/test/qa databases and drop the copy there.
If you want to change the service tier, you could use a point-in-time restore instead of a database copy. The database can be restored to any service tier or performance level, using the portal, PowerShell, or the REST API.
Recover an Azure SQL database using automated database backups
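A hedged sketch of that restore path in PowerShell (the point in time and all names are placeholders):

# Look up the source database to get its resource ID
$db = Get-AzureRmSqlDatabase -ResourceGroupName "prod-rg" `
    -ServerName "prod-server" -DatabaseName "prod"

# Restore to a new database at a different tier/performance level
Restore-AzureRmSqlDatabase -FromPointInTimeBackup `
    -PointInTime (Get-Date).AddMinutes(-10) `
    -ResourceGroupName "prod-rg" `
    -ServerName "prod-server" `
    -TargetDatabaseName "qa" `
    -ResourceId $db.ResourceId `
    -Edition "Standard" `
    -ServiceObjectiveName "S2"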

How do I update a database that's in use?

I'm building a web application using ASP.NET MVC with SQL Server, and my development process is going to be:
Make changes in SQL Server locally
Create LINQ-to-SQL classes as necessary
Before committing any change set that touches the database, script out the database so that I can regenerate it if I ever need to
What I'm confused about is how I'm going to update the production database, which will have live data in it.
For example, let's say I have a table like
People
==========================================
Id | FirstName | LastName    | FatherId
------------------------------------------
1  | 'Anakin'  | 'Skywalker' | NULL
2  | 'Luke'    | 'Skywalker' | 1
3  | 'Leah'    | 'Skywalker' | 1
in production and locally and let's say I add an extra column locally
ALTER TABLE People ADD LightsaberColor VARCHAR(16)
and update my LINQ to SQL classes, script it out, test it with sample data, and decide that I want to add that column to production.
As part of a deployment process, how would I do that? Does there exist some tool that could read my database generation file (call it GenerateDb.sql) and figure out that it needs to update the production People table to put default values in the new column, like
People
==========================================================
Id | FirstName | LastName    | FatherId | LightsaberColor
----------------------------------------------------------
1  | 'Anakin'  | 'Skywalker' | NULL     | NULL
2  | 'Luke'    | 'Skywalker' | 1        | NULL
3  | 'Leah'    | 'Skywalker' | 1        | NULL
???
You should have a staging DB that is identical to the production database.
When you make any changes to the database, you should apply them to the staging DB first; you can of course compare the dev and staging DBs to generate a script with the differences.
Visual Studio has a Schema Compare tool that generates a script with the differences between two databases.
There are other tools as well that do the same.
So, you can generate the script, apply it to the staging DB, and if everything goes fine, apply the script to the production DB.
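As an illustration of what such a script boils down to, a hand-written, re-runnable version of the LightsaberColor change might look like this (the existence guard is a common convention, not something Schema Compare requires):

-- Add the new column only if it is not already there, so the script
-- can be applied safely to dev, staging, and production alike
IF NOT EXISTS (
    SELECT 1
    FROM sys.columns
    WHERE object_id = OBJECT_ID(N'dbo.People')
      AND name = N'LightsaberColor'
)
BEGIN
    ALTER TABLE dbo.People ADD LightsaberColor VARCHAR(16) NULL;
END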
That's right, you must have a staging process. Whenever we commit features, we use TFS to promote them from Development to Production; that is the staging step, and you can look up the TFS history for both the database and the solution, if you're using TFS with Visual Studio and MS SQL Server.
I guess you are committing your features directly to your server, which is your production. You could test them on a test server first to see the changes.
Another thing: if you use stored procedures, you can use temporary tables, if that's what you're asking about regarding the script.
I guess this is your first time committing to a live server.

Some databases not getting backed up through the job

I am getting this weird problem with my backup job.
I have more than 1,000 databases on my SQL Server stack. There are weekly and daily backup jobs set up: the weekly job runs every Sunday and does a full backup, and the daily job does a differential backup.
Now I have realized that the last full backup of some of the databases is not up to date.
I checked the job history and the job was successful without any error. But when I look in the backup folder, the timing of the backups seems a bit doubtful, e.g.
26/10/2014 23:34 2,759,240,192 DB1 2014-10-26 23.19.59 Full.bak
26/10/2014 23:36 319,891,968 DB2 2014-10-26 23.34.46 Full.bak
26/10/2014 23:36 160,771,072 DB3 2014-10-26 23.36.17 Full.bak
26/10/2014 23:48 3,505,426,944 DB4 2014-10-26 23.36.59 Full.bak
27/10/2014 00:03 4,322,182,144 DB5 2014-10-26 23.48.12 Full.bak
***27/10/2014 00:25 7,266,648,576 DB6 2014-10-27 00.04.13 Full.bak
27/10/2014 03:31 67,190,649,344 DB7 2014-10-27 00.25.51 Full.bak***
27/10/2014 03:32 270,017,024 DB8 2014-10-27 03.31.19 Full.bak
Some replication jobs run at around 2 o'clock. I thought that might be the problem, so I moved my differential backup job a bit earlier so that it finishes before the replication jobs start.
But the result is the same: the job is successful, yet most of the backups did not get updated, and this time the last backup was taken at 19:36 with no backups after that.
It's really critical. Can someone please help me figure out what the reason for this could be?
Your help is much appreciated.
I am using SQL Server 2012.
Thanks
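Rather than trusting file timestamps, one way to see exactly which databases are falling behind is to ask msdb for the most recent full backup per database; a sketch:

-- Most recent full backup per database according to msdb's history;
-- databases with a stale or NULL finish date are the ones being skipped
SELECT d.name,
       MAX(b.backup_finish_date) AS last_full_backup
FROM sys.databases AS d
LEFT JOIN msdb.dbo.backupset AS b
    ON b.database_name = d.name
   AND b.type = 'D'   -- 'D' = full database backup
GROUP BY d.name
ORDER BY last_full_backup;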

How to setup weekly auto backup in SQL Server 2012?

Please advise how I can set up automated database backups in my SQL Server 2012.
I need an automated weekly backup of all databases in the SQL Server instance (currently it contains only 3), running every Friday at 0100 h (1 AM). The backup files (*.bak) should be placed in the E:\Backups folder.
In Microsoft SQL Server Management Studio, open the Object Explorer and then:
Right-click on Management > Maintenance Plans
Click on New Maintenance Plan...
Give a name to your plan
Create as many subplans as you need for your strategy
Select a subplan and drag'n'drop the appropriate tasks from the Toolbox panel
To backup a database, the appropriate task is Back Up Database Task
For the configuration of the backup schedule, you just need to follow the wizard and define what you want. If you need more information, I suggest you look at the official Microsoft documentation:
Create a Full Database Backup
Hope this helps.
You can either create a SQL Server Agent job or a maintenance plan in SSMS as mentioned, or use a third-party application. I use ApexSQL Backup at the moment, as it offers in-depth scheduling for any created job. You can specify whether you want a daily, weekly, or monthly schedule, and you can always pause or delete these schedules if you don't want to use them for some reason.
Go to MS SQL Server Management Studio → SQL Server Agent → New Job
Under the General tab, enter the backup name
Under the Steps tab:
Type the step name
Select the database you want to back up
Enter the backup query
Note: here is a sample backup query with the date and time in the backup name (e.g. TestBackup_Apr 4 2017 6,00PM.bak):
-- Build a file name containing the current date and time
-- (':' is replaced with ',' because ':' is not allowed in file names)
DECLARE @MyFileName nvarchar(max)
SELECT @MyFileName = N'E:\TestbackupFolder\TestBackup_' + replace(rtrim(convert(char, getdate())), ':', ',') + N'.bak'
BACKUP DATABASE [yourdatabasename] TO DISK = @MyFileName
WITH INIT;
In Schedules → New, create a new schedule and set the dates and times as required.
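If you'd rather script the Agent job than click through SSMS, a sketch using msdb's job procedures (job, step, and schedule names are placeholders):

USE msdb;

-- Create the job and a step that runs the backup
EXEC dbo.sp_add_job @job_name = N'WeeklyBackup';

EXEC dbo.sp_add_jobstep
    @job_name = N'WeeklyBackup',
    @step_name = N'Backup database',
    @subsystem = N'TSQL',
    @database_name = N'master',
    @command = N'BACKUP DATABASE [yourdatabasename]
                 TO DISK = N''E:\Backups\yourdatabasename.bak'' WITH INIT;';

-- Weekly schedule: every Friday at 01:00
EXEC dbo.sp_add_jobschedule
    @job_name = N'WeeklyBackup',
    @name = N'Friday 1 AM',
    @freq_type = 8,              -- weekly
    @freq_interval = 32,         -- Friday
    @freq_recurrence_factor = 1, -- every week
    @active_start_time = 010000; -- 01:00:00

-- Target the local server so the job actually runs
EXEC dbo.sp_add_jobserver @job_name = N'WeeklyBackup';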
Check out this open-source backup Windows service, MFSQLBackupService; it did a great job for me.
I can advise you to try Cloudberry as a backup agent to back up the data you wish. It sets up automated backups the way you want and places the backups where you want.