CockroachDB how to restore a dropped column? - sql

I accidentally dropped a column. I have no backup set up for this single node setup. Does cockroach have any auto backup mechanism or am I screwed?

You can use time-travel queries to restore deleted data within the garbage collection window, before the data is removed forever.
The garbage collection window is determined by the gc.ttlseconds field in the replication zone configuration.
For example:
SELECT name, balance
FROM accounts
AS OF SYSTEM TIME '2016-10-03 12:45:00'
WHERE name = 'Edna Barath';
SELECT * FROM accounts AS OF SYSTEM TIME '-4h';
SELECT * FROM accounts AS OF SYSTEM TIME '-20m';
I noticed that managed CockroachDB runs database backups (incremental or full) hourly, retained for up to 30 days. You may be able to restore the whole database from one of those.
Please note that the restore will make your cluster unavailable for the duration of the operation, and all current data is deleted.
We can manage our own backups, including incremental, database-level, and table-level backups. We need to configure a userfile location or a cloud storage location. This requires billing information.

CockroachDB stores old versions of data at least through its configured gc.ttlseconds window (default one day). There's no simple way that I know of to restore it instantly, but you can run
SELECT * FROM <tablename> AS OF SYSTEM TIME <timestamp before dropping the column>
And then manually reinsert the data from there.
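A minimal sketch of that manual restore, assuming a hypothetical accounts table with primary key id whose balance column was dropped, and a CockroachDB version that accepts AS OF SYSTEM TIME in CREATE TABLE ... AS (if yours does not, export the historical SELECT and re-import it instead):
-- Snapshot the historical values from before the drop into a scratch table.
CREATE TABLE accounts_recovered AS
    SELECT id, balance FROM accounts AS OF SYSTEM TIME '-1h';
-- Re-add the dropped column (it comes back empty).
ALTER TABLE accounts ADD COLUMN balance DECIMAL;
-- Write the old values back, matching rows on the primary key.
INSERT INTO accounts (id, balance)
    SELECT id, balance FROM accounts_recovered
    ON CONFLICT (id) DO UPDATE SET balance = excluded.balance;
The snapshot step matters because a single statement cannot mix a historical read with a current-time write.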

Related

Database copy limit per database reached. The database X cannot have more than 10 concurrent database copies (Azure SQL)

In our application, we have a master database 'X'. For each new client, we will create a new database copy of master database 'X'.
I am using the following SQL command which will be executed against Azure SQL server.
CREATE DATABASE [NEW NAME] AS COPY OF [MASTER DB]
We are using a custom queue tier so that we can create more than one client at a time, in parallel.
I am facing an issue in the following scenario:
I am trying to create 70 clients. Once about 25 clients have been created, I get the error below.
Database copy limit per database reached. The database 'BlankDBClient' cannot have more than 10 concurrent database copies
Can you please share your thoughts on this?
SQL Azure has logic to do various operations online/automatically for you (backups, upgrades, etc.). There are IOs required to do each copy, so there are limits in place because the machine does not have infinite IOPS. (Those limits may change a bit over time as we work to improve the service, get newer hardware, etc.)
In terms of what options you have, you could:
Restore N databases from a database backup (which would still have IO limits but they may be higher for you depending on your reservation size)
Consider models to copy in parallel using a single source to hierarchically create what you need (copy 2 from one, then copy 2 from each of the ones you just copied, etc.) - see the sketch after this list
Stage out the copies over time based on the limits you get back from the system.
Try a larger reservation size for the source and target during the copy to get more IOPS and lower the time to perform the operations.
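A hedged sketch of that hierarchical approach, using hypothetical database names (these statements run against the master database, and each level must finish copying before it can serve as a source):
-- Level 1: copy from the master image.
CREATE DATABASE [Client01] AS COPY OF [MasterDB];
CREATE DATABASE [Client02] AS COPY OF [MasterDB];
-- Level 2: once those complete, each new copy can be a source itself,
-- so the per-source concurrent-copy limit applies to a growing set of sources.
CREATE DATABASE [Client03] AS COPY OF [Client01];
CREATE DATABASE [Client04] AS COPY OF [Client02];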
In addition to Connor's answer, you can consider having a dacpac or bacpac of that master database stored in Azure Storage, and once you have submitted 25 concurrent database copies you can start restoring the dacpac from Azure Storage.
You can also monitor how many database copies are showing COPYING in the state_desc column of the queries below. After sending the first batch of 25 copies, wait until the queries return fewer than 25 rows, then send more copies until reaching the 25 limit again. Keep doing this until the queue of required copies is finished.
SELECT
    [sys].[databases].[name],
    [sys].[databases].[state_desc],
    [sys].[dm_database_copies].[start_date],
    [sys].[dm_database_copies].[modify_date],
    [sys].[dm_database_copies].[percent_complete],
    [sys].[dm_database_copies].[error_code],
    [sys].[dm_database_copies].[error_desc],
    [sys].[dm_database_copies].[error_severity],
    [sys].[dm_database_copies].[error_state]
FROM [sys].[databases]
LEFT OUTER JOIN [sys].[dm_database_copies]
    ON [sys].[databases].[database_id] = [sys].[dm_database_copies].[database_id]
WHERE [sys].[databases].[state_desc] = 'COPYING'
SELECT state_desc, *
FROM sys.databases
WHERE [state_desc] = 'COPYING'
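That waiting step can also be scripted. A minimal polling sketch, assuming it runs against the master database and that a one-minute poll interval is acceptable (the threshold and delay are illustrative, not prescribed):
-- Block until the number of in-flight copies drops below the batch limit;
-- after that it is safe to submit more CREATE DATABASE ... AS COPY OF statements.
DECLARE @copying INT;
SELECT @copying = COUNT(*) FROM sys.databases WHERE state_desc = 'COPYING';
WHILE @copying >= 25
BEGIN
    WAITFOR DELAY '00:01:00';  -- poll once a minute
    SELECT @copying = COUNT(*) FROM sys.databases WHERE state_desc = 'COPYING';
END;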

Copy Azure SQL Database and Change Scale

We are using the command CREATE DATABASE X AS COPY OF Y to copy an Azure SQL database so that we can take a transactionally consistent backup to our local network. The database is running as a P2 so the copy is also created as a P2 and we are therefore incurring double charges due to daily rate charging of the new database sizes.
Is there any way to copy a database with a different scale setting? Or, are there other ways to take the transactionally consistent backup?
As far as I know, currently the way to do transactionally consistent backups is to either use the COPY command, which you are doing, or rely on the point-in-time Backup/Recovery provided by Microsoft. If your goal is simply to have the backup somewhere, you may look at the GeoReplication options (standard and active), which get the data into another region in Azure. If your requirement is definitely to get a local copy, then COPY + Export is pretty much your option.
There is not currently a way to perform a COPY from one database tier level to another; however, in code you can change the tier level for a database, so in theory you could change the copy to a lower tier immediately after the COPY completes (there is a sample on how to do this with PowerShell on MSDN using Set-AzureSqlDatabase). However, SQL Database is billed by the day, so even if you change the tier immediately, you'd get charged for the P2 instance of the copy for that day. If you are doing these COPY-Export operations daily and deleting the copy as soon as you get the export down, then you won't be saving any money. They have announced that hourly billing is coming to SQL Database along with pricing changes and some other things. It looks like the new pricing will go into effect Nov 1st, and while it's not explicit, I'm assuming that means hourly billing then as well. With hourly billing, once the copy completes you can reduce the tier on the copy and only pay for that one hour; then, after you pull down the export, you can delete the copy and save money.
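As a hedged illustration of the change-tier-after-copy idea: besides the PowerShell sample mentioned above, Azure SQL also accepts a T-SQL form (the database name and target objective here are hypothetical):
-- Drop the finished copy down to a cheaper service objective.
ALTER DATABASE [db_copy] MODIFY (SERVICE_OBJECTIVE = 'S0');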
You can set the service objective (pricing tier) of the DB during the copy.
CREATE DATABASE db_copy
AS COPY OF ozabzw7545.db_original ( SERVICE_OBJECTIVE = 'P2' ) ;
https://learn.microsoft.com/en-us/sql/t-sql/statements/create-database-transact-sql?view=azuresqldb-current

SQL Server DB size - why is it so large?

I am building a database which contains about 30 tables:
The largest number of columns in a table is about 15.
For datatypes I am mostly using VarChar(50) for text and Int and SmallInt for numbers.
Identity columns are Uniqueidentifiers.
I have been testing a bit, filling in data and deleting it again. I have now deleted all data, so every table is empty. But if I look at the properties of the database in Management Studio, the size says 221.38 MB!
How can that be? Please help, I am getting notifications from my hosting company that I am exceeding my limits.
Best regards,
:-)
I would suggest that you look first at the recovery model for the database. By default, the recovery model is FULL. This fills the log file with all transactions that you perform, never deleting them until you back up the log.
To change the recovery mode, right click on the database and choose Properties. In the properties list, choose the Options (on the right hand pane). Then change the "Recovery model" to Simple.
You probably also want to shrink your files. To do this, right click on the database and choose Tasks --> Shrink --> Files. You can shrink both the data file and the log file, by changing the "File Type" option in the middle.
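For reference, a minimal T-SQL equivalent of those two GUI steps, with hypothetical database and logical file names (check sys.database_files for your actual logical names):
-- Switch to SIMPLE recovery so the log stops retaining every transaction.
ALTER DATABASE MyDb SET RECOVERY SIMPLE;
-- Then shrink the log file back down (first argument is the logical file name,
-- second is the target size in MB).
USE MyDb;
DBCC SHRINKFILE (MyDb_log, 10);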
Martin's comment is quite interesting. Even if the log file is in auto-truncate mode, you still have the issue of deletes being logged. If you created large-ish tables, the log file will still expand and the space will not be recovered until you truncate the file. You can get around this by using TRUNCATE rather than DELETE:
truncate table <table>
does not log every record being deleted (http://msdn.microsoft.com/en-us/library/ms177570.aspx).
delete from <table>
logs every record.
As you do inserts, updates, deletes, and design changes, a log file records every transaction, along with a whole bunch of other data. This transaction log is a required component of a SQL Server database and thus cannot be disabled in any available settings.
Below is an article from Microsoft on doing backups to shrink the transaction logs generated by SQL Server.
http://msdn.microsoft.com/en-us/library/ms178037(v=sql.105).aspx
Also, are you indexing your columns? Indexes that consist of several columns on tables with a high row count can become unnecessarily large, especially if you are just doing tests. Try just having a single clustered index on only one column per table.
You may also want to learn about table statistics. They help your indexes out and also help you perform queries like SELECT DISTINCT, or SELECT COUNT(*), etc.
http://msdn.microsoft.com/en-us/library/ms190397.aspx
Finally, you will need to upgrade your storage allocation for the SQL Server database. The more you use it, the faster it will want to grow.
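If you want to confirm where the space actually went before acting on any of the above, a quick hedged check (valid on SQL Server 2005 and later; size is reported in 8 KB pages):
-- A large log file points at the recovery model; a large data file points
-- at indexes, heaps, or space not yet reclaimed after deletes.
SELECT name, type_desc, size * 8 / 1024 AS size_mb
FROM sys.database_files;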

SQL Server 2005 Transaction Log too big

I am running SQL Server 2005.
My db backup scheme is:
Recovery model: FULL
Backup Type: Full
Backup component: Database
Backup set will expire: after 0 days
Overwrite media: Back up to the existing media set, Append to the existing backup set
The db is writing to 250GB drive (232GB actual).
My _Data.mdf file is over 55GB and my _Log.ldf is over 148GB.
We ran into a situation where our drive was filled today. I moved our ab_Full.bak and ab_Log.bak files to another drive to make space - about 45GB. Five hours later, free space is at 37GB.
I'm new to managing SQL server; so, I have some basic questions about my backups.
I know I need to update the db to start managing the transaction log size to help prevent this problem in the future. So, assuming I have enough free space, I:
1. right click the db and choose Backup
2. set 'Backup Type' to 'Transaction Log'
3. change 'Backup set will expire' after to 30 days
4. click 'ok'
My understanding is this will move 'closed' transactions from the transaction log to a backup and truncate the transaction log.
Is this plan sound? Will I need to manually resize the log file afterwards?
Thanks for your time.
Are you backing up the transaction log at any time at all?
If you are using the FULL recovery model, then you need to back up the transaction log in addition to backing up the main database, or if you don't want to back up the log (why would you then use the FULL recovery model?) then at least truncate the log at some regular interval.
You should back up the transaction log before every full backup (and keep it as long as you keep the previous full backup) so you can restore to any point in time since the first full backup you've kept. Also, it might be worth backing up the transaction log more often (the total size is the same) in case something bad happens between two full backups.
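A hedged sketch of such a schedule in T-SQL, with hypothetical names and paths (in practice these would typically run as SQL Server Agent jobs):
-- Nightly full backup.
BACKUP DATABASE MyDb TO DISK = 'D:\Backups\MyDb_Full.bak' WITH INIT;
-- Frequent log backups in between; each one truncates the inactive portion
-- of the log, which is what keeps the .ldf file from growing without bound.
BACKUP LOG MyDb TO DISK = 'D:\Backups\MyDb_Log.trn';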
The best procedure is to regularly back up your log file. In the meantime, for 'catastrophic' scenarios like the one you described, you may use this snippet to reduce the size of your log:
http://www.snip2code.com/Snippet/12913/How-to-correctly-Shrink-Log-File-for-SQL
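In case that link goes away, the usual shape of such a snippet is roughly the following (a sketch only; the database name, logical log file name, and target size are hypothetical, and the log backup comes first so the shrink can actually reclaim space):
-- Back up (and thereby truncate) the log, then shrink the physical file.
BACKUP LOG MyDb TO DISK = 'D:\Backups\MyDb_Log.trn';
USE MyDb;
DBCC SHRINKFILE (MyDb_Log, 1024);  -- target size in MB (~1 GB)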

Reducing Size Of SQL Backup?

I am using SQL Express 2005 and do a backup of all DBs every night. I noticed one DB getting larger and larger. I looked at the DB and cannot see why it's getting so big!
Looking for tips on how to find out why it's getting so big when it hasn't got that much data in it, and also how to optimise / reduce the size?
Several things to check:
is your database in "Simple" recovery mode? If so, it'll produce far fewer transaction log entries, and the backup will be smaller. Recommended for development - but not for production
if it's in "FULL" recovery mode - do you do regular transaction log backups? That should limit the growth of the transaction log and thus reduce the overall backup size
have you run a DBCC SHRINKDATABASE(yourdatabasename) on it lately? That may help
do you have any log / logging tables in your database that are just filling up over time? Can you remove some of those entries?
You can find the database's recovery model by going to the Object Explorer, right click on your database, select "Properties", and then select the "Options" tab on the dialog:
Marc
If it is the backup that keeps growing and growing, I had the same problem. It is not a 'problem' of course, this is happening by design - you are just making a backup 'set' that will simply expand until all available space is taken.
To avoid this, you've got to change the overwrite options. In the SQL management studio, right-click your DB, TASKS - BACKUP, then in the window for the backup you'll see it defaults to the 'General' page. Change this to 'Options' and you'll get a different set of choices.
The default option at the top is 'Append to the existing media set'. This is what makes your backup increase in size indefinitely. Change this to 'Overwrite all existing backup sets' and the backup will always be only as big as one entire backup, the latest one.
(If you have a SQL script doing this, change 'NOINIT' to 'INIT')
CAUTION: This means the backup will only be the latest changes - if you made a mistake three days ago but you only have last night's backup, you're stuffed. Only use this method if you have a backup regime that copies your .bak file daily to another location, so you can go back to any one of those files from previous days.
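A hedged script-level equivalent of that dialog setting, with hypothetical names and path (INIT overwrites the existing backup sets on the media where NOINIT appends to them):
-- Keep only the latest backup instead of appending to the media set.
BACKUP DATABASE MyDb
TO DISK = 'D:\Backups\MyDb.bak'
WITH INIT;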
It sounds like you are running with the FULL recovery model and the Transaction Log is growing continuously as the result of no Transaction Log backups being taken.
In order to rectify this you need to:
Take a transaction log backup. (See: BACKUP (Transact-SQL))
Shrink the transaction log file down to an appropriate size for your needs. (See: How to use DBCC SHRINKFILE)
Schedule regular transaction log backups according to data recovery requirements.
I suggest reading the following Microsoft reference in order to ensure that you are managing your database environment appropriately.
Recovery Models and Transaction Log Management
Further Reading: How to stop the transaction log of a SQL Server database from growing unexpectedly
One tip for keeping databases small: at design time, use the smallest data type that you can use.
For example, you may have a status table - does its key really need to be an int, when a smallint or tinyint will do?
Darknight
As you do a daily FULL backup of your database, of course it will get big over time.
So you have to put a plan in place for yourself, such as:
1st day: FULL
2nd day: DIFFERENTIAL
3rd day: DIFFERENTIAL
4th day: DIFFERENTIAL
5th day: DIFFERENTIAL
and then start over.
When you restore your database, if you want to restore the FULL backup you can do it directly, but when you need to restore a DIFFERENTIAL version, you first restore the FULL backup before it WITH NORECOVERY, then the DIFFERENTIAL you need, and then you will have your data back safely.
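A hedged T-SQL sketch of that rotation and of the two-step restore (database name and paths are hypothetical):
-- Day 1: full backup; days 2-5: differentials based on it.
BACKUP DATABASE MyDb TO DISK = 'D:\Backups\MyDb_Full.bak' WITH INIT;
BACKUP DATABASE MyDb TO DISK = 'D:\Backups\MyDb_Diff.bak' WITH DIFFERENTIAL, INIT;
-- Restore: the full first WITH NORECOVERY, then the latest differential.
RESTORE DATABASE MyDb FROM DISK = 'D:\Backups\MyDb_Full.bak' WITH NORECOVERY;
RESTORE DATABASE MyDb FROM DISK = 'D:\Backups\MyDb_Diff.bak' WITH RECOVERY;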
7zip your backup file for archiving. I recently backed up a database to a 178MB .bak file. After archiving it to a .7z file it was only 16MB.
http://www.7-zip.org/
If you need an archive tool that handles larger file sizes more efficiently and faster than 7zip does, I'd recommend taking a look at LZ4 archiving. I have used it for archiving file backups for years with no issues:
http://lz4.github.io/lz4/