Log Shipping - 24 hour difference between source and destination system times - sql

We are trying to get log shipping working between two database servers where the destination server's system time is set 24 hours earlier than the source's. Is it possible to force the destination machine to restore the data, disregarding that the transaction log timestamps are 24 hours ahead?

How about a snapshot at 00:00 (or 23:59:59) daily? Then you could compare the days however you want without a mechanism such as log shipping in play. I'm a big fan of snapshots (just be careful where you put the files on disk if it's a busy DB).
Here's the How to: http://msdn.microsoft.com/en-us/library/ms175876.aspx
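For reference, a minimal sketch of creating such a daily snapshot, assuming a database named MyDB with a single data file whose logical name is MyDB_Data (names and the sparse-file path are placeholders; note that on older SQL Server versions database snapshots are an Enterprise edition feature):
-- Create a point-in-time snapshot of MyDB; schedule this (e.g. via a SQL Server Agent job) at 00:00 daily
CREATE DATABASE MyDB_Snapshot_20240101
ON (NAME = MyDB_Data, FILENAME = N'D:\Snapshots\MyDB_20240101.ss')
AS SNAPSHOT OF MyDB;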

Related

CockroachDB how to restore a dropped column?

I accidentally dropped a column. I have no backup set up for this single-node setup. Does CockroachDB have any automatic backup mechanism, or am I screwed?
You can use time-travel queries to restore deleted data within the garbage collection window, before the data is deleted forever.
The garbage collection window is determined by the gc.ttlseconds field in the replication zone configuration.
Examples are:
SELECT name, balance
FROM accounts
AS OF SYSTEM TIME '2016-10-03 12:45:00'
WHERE name = 'Edna Barath';
SELECT * FROM accounts AS OF SYSTEM TIME '-4h';
SELECT * FROM accounts AS OF SYSTEM TIME '-20m';
I noticed that managed CockroachDB runs database backups (incremental or full) hourly, retained for up to 30 days. You may be able to restore the whole database from one of them.
Please note that the restore will make your cluster unavailable for the duration of the operation, and all current data is deleted.
You can also manage your own backups, including incremental, database-level, and table-level backups. You need to configure a userfile location or a cloud storage location; this requires billing information.
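For illustration, a rough sketch of a self-managed backup and restore to a userfile location, assuming a database named mydb (the exact syntax varies by CockroachDB version; recent versions use BACKUP ... INTO and RESTORE ... FROM LATEST IN):
-- Take a full backup of the database into user-scoped file storage
BACKUP DATABASE mydb INTO 'userfile://defaultdb.public.userfiles_$user/mydb-backups';
-- Restore the most recent backup (the target database must not already exist)
RESTORE DATABASE mydb FROM LATEST IN 'userfile://defaultdb.public.userfiles_$user/mydb-backups';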
CockroachDB stores old versions of data at least through its configured gc.ttlseconds window (default one day). There's no simple way that I know of to instantly restore, but you can do
SELECT * FROM <tablename> AS OF SYSTEM TIME <timestamp before dropping the column>
And then manually reinsert the data from there.
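A rough sketch of that manual route, assuming a hypothetical accounts table with primary key id and a dropped balance column, and a timestamp still inside the gc.ttlseconds window:
-- 1. Read the old values while they are still retained
SELECT id, balance FROM accounts AS OF SYSTEM TIME '-1h';
-- 2. Re-add the column to the current table
ALTER TABLE accounts ADD COLUMN balance DECIMAL;
-- 3. Reinsert the recovered values (AS OF SYSTEM TIME queries are read-only,
--    so the updates must run as ordinary statements against current data)
UPDATE accounts SET balance = 100.50 WHERE id = 1;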

Copy data from one blob storage to another blob storage

My requirement is this: I have two storage accounts, sa01 and sa02. Say sa01 has 10 files and sa02 also has 10 files at 01:00 AM. Now I upload 4 more files to sa01 at 1:15 AM, and my copy activity runs automatically because I have implemented an event trigger, so it copies the 4 files to sa02.
Question: it copies the 4 new files but also rewrites the previous 10 files, so at 01:15 AM all 14 files are copied. The requirement is that if 10 files were already uploaded at 01:00 AM, only the 4 latest files should be copied to sa02.
See the timestamps in the image: I uploaded just one file, yet the modified time changes on all of the files.
Azure Data Share is one good way to accomplish this. It is typically used to sync storage with a partner company. But you can sync in your own subscription. There is no code to write. There is a UI and a sync schedule.
You can use a Get Metadata activity to get the lastModified of the destination folder.
In your Copy activity, put dynamic content in the 'Filter by last modified: Start time' field and choose the lastModified output from the Get Metadata activity.
Only files in the source newer than the destination's lastModified will be copied.
The Get Metadata activity costs tiny fractions of a penny per run.
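For example, the dynamic content for that start-time field might look something like this, assuming the Get Metadata activity is named GetDestLastModified and has the 'Last modified' field selected in its field list (the activity name here is a placeholder):
@activity('GetDestLastModified').output.lastModified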

SQL server database log file increasing enormously

I have 5 SSIS jobs running in SQL Server Agent, and some of them pull transactional data into our database every 4 hours. The problem is that our database's log file is growing rapidly: it eats up 160 GB of disk space in a day. Since our requirements don't need point-in-time recovery, I set the recovery model to SIMPLE, yet even so the log consumes more than 160 GB a day. Because the disk fills up, the scheduled jobs often fail. As a temporary workaround I am detaching the database to clean up the log.
FYI: all the SSIS packages in the jobs use transactions on some tasks, e.g. a Sequence Container.
I want a permanent solution that keeps the log file within a particular size limit. As I said, I don't need the log data for point-in-time recovery, so there is no need to take log backups at all.
One more problem: in our database the transactional table has 10 million records and some master tables have over 1,000 records, but our .mdf file is now about 50 GB. I don't believe 10 million records should take up 50 GB of space. What's the problem here?
Help me on these issues. Thanks in advance.
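A first diagnostic step in a situation like this is usually to ask SQL Server why it cannot reuse the log, since under SIMPLE recovery a log that keeps growing often points to a long-running open transaction, such as one held by an SSIS Sequence Container. A minimal check, with the database name as a placeholder:
SELECT name, recovery_model_desc, log_reuse_wait_desc
FROM sys.databases
WHERE name = N'YourDatabase';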

SQL Server 2005 Transaction Log too big

I am running SQL Server 2005.
My db backup scheme is:
Recovery model: FULL
Backup Type: Full
Backup component: Database
Backup set will expire: after 0 days
Overwrite media: Back up to the existing media set, Append to the existing backup set
The db is writing to a 250GB drive (232GB actual).
My _Data.mdf file is over 55GB and my _Log.ldf is over 148GB.
We ran into a situation where our drive was filled today. I moved our ab_Full.bak and ab_Log.bak files to another drive to make space - about 45GB. Five hours later, free space is at 37GB.
I'm new to managing SQL server; so, I have some basic questions about my backups.
I know I need to update the db to start managing the transaction log size to help prevent this problem in the future. So, assuming I have enough free space, I:
1. right click the db and choose Backup
2. set 'Backup Type' to 'Transaction Log'
3. change 'Backup set will expire' after to 30 days
4. click 'ok'
My understanding is this will move 'closed' transactions from the transaction log to a backup and truncate the transaction log.
Is this plan sound? Will I need to manually resize the log file afterwards?
Thanks for your time.
Are you backing up the transaction log at any time at all?
If you are using the FULL recovery model, then you need to back up the transaction log in addition to backing up the main database, or if you don't want to back up the log (why would you then use the FULL recovery model?) then at least truncate the log at some regular interval.
You should back up the transaction log before every full backup (and keep it as long as you keep the previous full backup) so you can restore to any point in time since the first full backup you've kept. Also, it might be worth backing up the transaction log more often (the total size is the same) in case something bad happens between two full backups.
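A minimal T-SQL sketch of such a log backup, with the database name and path as placeholders (in practice you would schedule this, e.g. as a SQL Server Agent job):
BACKUP LOG [YourDatabase]
TO DISK = N'D:\Backups\YourDatabase_log.trn'
WITH NAME = N'YourDatabase transaction log backup';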
The best procedure is to regularly back up your log file. In the meantime, for 'catastrophic' scenarios like the one you described, you may use this snippet to reduce the size of your log:
http://www.snip2code.com/Snippet/12913/How-to-correctly-Shrink-Log-File-for-SQL

Reducing Size Of SQL Backup?

I am using SQL Express 2005 and back up all DBs every night. I noticed one DB getting larger and larger. I looked at the DB and cannot see why it's getting so big! I was wondering if it's something to do with the log file?
Looking for tips on how to find out why it's getting so big when it hasn't got that much data in it, and also how to optimise / reduce the size.
Several things to check:
is your database in "Simple" recovery mode? If so, it'll produce a lot less transaction log entries, and the backup will be smaller. Recommended for development - but not for production
if it's in "FULL" recovery mode - do you do regular transaction log backups? That should limit the growth of the transaction log and thus reduce the overall backup size
have you run a DBCC SHRINKDATABASE(yourdatabasename) on it lately? That may help
do you have any log / logging tables in your database that are just filling up over time? Can you remove some of those entries?
You can find the database's recovery model by going to the Object Explorer, right-clicking your database, selecting "Properties", and then selecting the "Options" page of the dialog.
Marc
If it is the backup that keeps growing and growing, I had the same problem. It is not a 'problem' of course, this is happening by design - you are just making a backup 'set' that will simply expand until all available space is taken.
To avoid this, you've got to change the overwrite options. In the SQL management studio, right-click your DB, TASKS - BACKUP, then in the window for the backup you'll see it defaults to the 'General' page. Change this to 'Options' and you'll get a different set of choices.
The default option at the top is 'Append to the existing media set'. This is what makes your backup increase in size indefinitely. Change this to 'Overwrite all existing backup sets' and the backup will always be only as big as one entire backup, the latest one.
(If you have a SQL script doing this, turn 'NOINIT' to 'INIT')
CAUTION: This means the backup will only be the latest changes - if you made a mistake three days ago but you only have last night's backup, you're stuffed. Only use this method if you have a backup regime that copies your .bak file daily to another location, so you can go back to any one of those files from previous days.
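For example, in a script the difference is just the INIT option, with names and paths as placeholders:
-- NOINIT (the default) appends to the media set; INIT overwrites the existing backup sets
BACKUP DATABASE [YourDatabase]
TO DISK = N'D:\Backups\YourDatabase.bak'
WITH INIT, NAME = N'YourDatabase full backup';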
It sounds like you are running with the FULL recovery model and the Transaction Log is growing continuously as the result of no Transaction Log backups being taken.
In order to rectify this you need to:
1. Take a transaction log backup. (See: BACKUP (Transact-SQL).)
2. Shrink the transaction log file down to an appropriate size for your needs. (See: How to use DBCC SHRINKFILE.)
3. Schedule regular transaction log backups according to your data recovery requirements.
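A minimal sketch of step 2, assuming the log file's logical name is YourDatabase_Log and a target size of 1024 MB (run it after a log backup such as the one sketched earlier):
-- The logical file names of the current database can be listed with: SELECT name FROM sys.database_files;
DBCC SHRINKFILE (YourDatabase_Log, 1024);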
I suggest reading the following Microsoft reference in order to ensure that you are managing your database environment appropriately.
Recovery Models and Transaction Log Management
Further Reading: How to stop the transaction log of a SQL Server database from growing unexpectedly
One tip for keeping databases small: at design time, use the smallest data type that will do the job.
For example, you may have a status table; does its key column really need to be an int when a smallint or tinyint will do?
Darknight
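To illustrate the data-type tip above, a small sketch with hypothetical table and column names:
-- tinyint (0-255) takes 1 byte per value versus 4 bytes for int,
-- which adds up across rows, indexes and foreign keys
CREATE TABLE OrderStatus (
    StatusId tinyint NOT NULL PRIMARY KEY,
    StatusName varchar(50) NOT NULL
);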
Since you do a daily FULL backup of your database, of course it will get big over time.
So you should set up a plan for yourself, such as:
1st day: FULL
2nd day: DIFFERENTIAL
3rd day: DIFFERENTIAL
4th day: DIFFERENTIAL
5th day: DIFFERENTIAL
and then start over.
When you restore your database, if you want to restore the FULL backup you can do it directly; but when you need to restore a DIFFERENTIAL version, you first restore the FULL backup before it WITH NORECOVERY, then restore the DIFF you need, and then you will have your data back safely.
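A minimal sketch of that rotation and restore in T-SQL, with database names and paths as placeholders:
-- Day 1: full backup
BACKUP DATABASE [YourDatabase] TO DISK = N'D:\Backups\YourDatabase_full.bak' WITH INIT;
-- Days 2-5: differential backups (changes since the last full backup)
BACKUP DATABASE [YourDatabase] TO DISK = N'D:\Backups\YourDatabase_diff.bak' WITH DIFFERENTIAL, INIT;
-- Restore to the latest differential: the full backup first WITH NORECOVERY, then the differential
RESTORE DATABASE [YourDatabase] FROM DISK = N'D:\Backups\YourDatabase_full.bak' WITH NORECOVERY;
RESTORE DATABASE [YourDatabase] FROM DISK = N'D:\Backups\YourDatabase_diff.bak' WITH RECOVERY;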
7zip your backup file for archiving. I recently backed up a database to a 178MB .bak file. After archiving it to a .7z file it was only 16MB.
http://www.7-zip.org/
If you need an archive tool that handles larger file sizes more efficiently and faster than 7zip does, I'd recommend taking a look at LZ4. I have used it for archiving file backups for years with no issues:
http://lz4.github.io/lz4/