We have a weekly maintenance plan that shrinks all user databases and rebuilds their indexes. This had been working fine until we created a read-only database; now each time the plan runs it fails when it starts processing that database, due to its read-only state.
As far as I can see we have two options. We could remove the read-only flag from the database; this is possible, but as the database is only updated once a quarter it makes sense from a performance point of view to make use of the read-only feature. Or we could manually select the databases the plan should run against, i.e. all the user databases apart from the read-only one, but that requires people to remember to add any new databases into the plan.
Does anyone have any suggestions of a better way of doing this?
Thanks
Neil
Why are you shrinking the databases in the first place?
Also, there's no need to maintain read-only DBs like that.
I'd remove the read-only flag if you don't want to customise the maintenance plan.
Why are you shrinking the DBs at all? If a database grows to a given size, then that is probably its natural current size.
Also remember that an index rebuild (as a rule of thumb) requires free space of about 120% of the target table size, e.g. a 500 MB table needs roughly 600 MB of free space.
It's pointless to shrink and then rebuild... and you'll end up with horrendous file fragmentation too.
I suppose you could modify the maintenance plan to start with an 'Execute T-SQL Statement' step that removes the read-only flag (ALTER DATABASE database-name SET READ_WRITE), and add a final step to reset it:
ALTER DATABASE database-name SET READ_ONLY
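For example, assuming the read-only database is called ReportsDB (a placeholder name) and that it is acceptable to kick out any open connections, the two wrapper steps might look something like this:
-- First step of the plan: make the database writable so the maintenance tasks can run
ALTER DATABASE [ReportsDB] SET READ_WRITE WITH ROLLBACK IMMEDIATE
-- ... the shrink / index rebuild tasks run here ...
-- Final step of the plan: put the database back into read-only mode
ALTER DATABASE [ReportsDB] SET READ_ONLY WITH ROLLBACK IMMEDIATE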
I realize everyone wants to just look the other way on this question, so I'd appreciate it if you continue reading. Of course the log grows when adding a field to a large table. Let me explain to the best of my ability:
We have a database upgrade utility that we deploy to our customers. In that utility we manipulate the database with changes that are specific to our version.
Our testing department is seeing varying results between local (physical) machines and virtual machines, and between one virtual machine and another. Some VMs do not show much log growth while others grow by 30 GB. The database is set to the SIMPLE recovery model, so "technically" the transaction log shouldn't be heavily used. I understand that the log is used as a cache until a disk is free enough to accept the requested change. I know there is not much to be done on the SQL side; we are stuck with some sort of shrink to handle the change after the upgrade is complete.
I am curious why physical and virtual machines would act differently, and what to look for in a VM environment to see if this is going to be problematic. Do I look at something on the disk, MSINFO32, CPU? I have checked to make sure there is no compression on the VM. I also ran Profiler traces and looked at fn_dblog to watch the indexes growing on the specific table. I just can't figure out why some logs grow enormously while others barely grow at all. Also, if you know of any shrink-style commands that work at the db_owner permission level, I would appreciate it. Currently we are testing a CHECKPOINT, since a database shrink will not be available to us at that permission level.
-- The table usually has between 10 and 20 million records. It has 10 strictly typed fields.
IF col_length('[dbo].[foo]','Field1') IS NULL
BEGIN
ALTER TABLE [dbo].[foo] ADD [Field1] smalldatetime NOT NULL CONSTRAINT [df_Field1] DEFAULT (GETDATE())
END
-- This causes log growth and can get a bit out of control on VMs. Physical machines are not affected as much.
-- Try 2, hoping GETDATE() was the issue
IF col_length('[dbo].[foo]','Field1') IS NULL
BEGIN
ALTER TABLE [dbo].[foo] ADD [Field1] smalldatetime NULL
END
DECLARE @TheDate smalldatetime = GETDATE()
UPDATE [dbo].[foo] SET [Field1] = @TheDate
-- This was even more problematic for the log and took considerably longer
Doing a CHECKPOINT after each column change would be my first try under the SIMPLE recovery model, honestly. SQL Server doesn't give us much control over how the transaction log behaves beyond the recovery model, backups, growth size, and checkpoints.
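A rough sketch of that idea, assuming the dbo.foo / Field1 names from the question and SIMPLE recovery: populate the new column in batches and issue a CHECKPOINT between batches so the log space can be reused rather than grown.
DECLARE @TheDate smalldatetime = GETDATE()
WHILE 1 = 1
BEGIN
    -- Small batches keep each transaction, and therefore the active log, small
    UPDATE TOP (50000) [dbo].[foo]
    SET [Field1] = @TheDate
    WHERE [Field1] IS NULL

    IF @@ROWCOUNT = 0 BREAK

    -- Under SIMPLE recovery a checkpoint lets the inactive log records be truncated
    CHECKPOINT
END
CHECKPOINT is allowed for members of db_owner, which fits the permission constraint mentioned in the question.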
Check how many VLFs exist and their sizes with sys.dm_db_log_info or DBCC LOGINFO (see the diagnostic sketch after this list).
Check the recovery model - maybe it's either not SIMPLE or not always SIMPLE on some machines?
Check indexes and index fragmentation; that difference might matter, and it will change after each column is added. Make sure you have a clustered index.
Check total table size.
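A minimal diagnostic sketch for those checks, assuming the dbo.foo table from the question; adjust names to your environment:
-- VLF count (sys.dm_db_log_info needs SQL Server 2016 SP2 or later; use DBCC LOGINFO on older builds)
SELECT COUNT(*) AS vlf_count FROM sys.dm_db_log_info(DB_ID())

-- Recovery model of the current database
SELECT name, recovery_model_desc FROM sys.databases WHERE database_id = DB_ID()

-- Index fragmentation for the table being altered
SELECT index_id, avg_fragmentation_in_percent, page_count
FROM sys.dm_db_index_physical_stats(DB_ID(), OBJECT_ID(N'dbo.foo'), NULL, NULL, 'LIMITED')

-- Total table size
EXEC sp_spaceused N'dbo.foo'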
I have a problem when it comes to deleting a large table (170 GB) from my database.
When I delete this large table via right click > Delete, I do not get the storage space freed again. The table is indeed gone from the database, but the space the database occupies does not shrink.
Can anyone tell me what is wrong?
Tables are stored in table spaces. These are allocated to the database, regardless of whether the space is actually used to store tables (or indexes or anything else).
When you delete the table, you have freed space in the table space. The space is available to the database for your next table (or whatever). You need to either drop or shrink the table space to release the space back to the operating system.
A place to start is with dbcc shrinkfile, documented here.
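A hedged example of that, assuming the data file's logical name is MyDb_Data (a placeholder; check sys.database_files for the real names):
-- Find the logical file names and current sizes (size is in 8 KB pages)
SELECT name, type_desc, size * 8 / 1024 AS size_mb FROM sys.database_files

-- Shrink the data file to roughly 50 GB, releasing the rest back to the OS (target size is in MB)
DBCC SHRINKFILE (N'MyDb_Data', 51200)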
Short answer:
Run sp_clean_db_free_space before shrinking. My assumption is that you've already tried shrinking the files; if not, that part has been answered elsewhere.
Parenthetical statement:
You shouldn't shrink databases if you can avoid it.
Long answer: The behavior you see is the result of Ghost Records. To understand more about this at a system level read this article: Inside the Storage Engine: Ghost cleanup in depth.
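A minimal sketch of that order of operations, with MyDb and MyDb_Data as placeholder names:
-- Force cleanup of ghost/residual data on freed pages first
EXEC sp_clean_db_free_space @dbname = N'MyDb'

-- Then shrink the data file, only if you really must reclaim the space
DBCC SHRINKFILE (N'MyDb_Data')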
Bit of a long shot here, but I have a simple query below:
begin transaction
update s
set s.SomeField = null
from someTable s (NOLOCK)
rollback transaction
This runs in ~30 seconds sitting close to the SQL Server box. Are there any tricks I can use to improve the speed? The table has 144,306 rows in it.
thanks.
The single largest component of the performance of a large UPDATE command like this is going to be the speed of your DB log.
For best performance:
Make sure the DB log (LDF file) is on a separate physical spindle from the DB data (MDF file)
Avoid parity RAID for the log volume, such as RAID-5; RAID-1 or RAID-10 are better
Make sure that the DB log file is pre-grown, and that it's physically contiguous on disk (see the one-line sketch after this list)
Make sure your server has enough RAM -- ideally, at least enough to hold all of the DB pages containing the modified rows
Using SSDs for your data drive may also help, because the command will create a large number of dirty buffers, which will be flushed to disk later by the lazy writer; this can make other operations on the DB slow while it's happening.
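For the pre-grow point, a hedged one-liner, assuming the database is MyDb and the log file's logical name is MyDb_Log (both placeholders):
-- Pre-grow the log to its expected working size in one step instead of many small autogrowths
ALTER DATABASE [MyDb] MODIFY FILE (NAME = N'MyDb_Log', SIZE = 8GB)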
If there's no constraint on it, and you really need to set all values of that column to NULL, then I would test dropping the column and re-adding it.
Not sure if that would be faster or not, but I'd investigate it.
Try disabling the index temporarily.
You could change the syntax of your query slightly, but I saw no difference in my testing when doing that. I was using STATISTICS IO and STATISTICS TIME to compare.
You mention the column is indexed. You could disable it / re-enable it as part of your transaction. The t-sql for that is simple, see this - http://blog.sqlauthority.com/2007/05/17/sql-server-disable-index-enable-index-alter-index/
I've had to do that in the past for similar jobs and it has worked out well for me.
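A hedged sketch of that, assuming the index on SomeField is named IX_someTable_SomeField and the table lives in the dbo schema (both assumptions):
BEGIN TRANSACTION

-- Disable the nonclustered index so the UPDATE does not have to maintain it
ALTER INDEX IX_someTable_SomeField ON dbo.someTable DISABLE

UPDATE s
SET s.SomeField = NULL
FROM dbo.someTable s

-- Rebuilding the index re-enables it
ALTER INDEX IX_someTable_SomeField ON dbo.someTable REBUILD

COMMIT TRANSACTION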
Try implementing it like this:
Disable Index
Drop the column
Create the column
Rebuild index
My guess is that it will improve performance.
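Something like the following, assuming there is no constraint or dependency on SomeField, that losing its current values is acceptable, and that the index name and column type are placeholders. Note that an index referencing the column generally has to be dropped rather than merely disabled, or the DROP COLUMN will be blocked:
-- An index that references the column must be dropped first
DROP INDEX IX_someTable_SomeField ON dbo.someTable

-- Dropping and re-adding a nullable column is a metadata-only change, so it avoids rewriting every row
ALTER TABLE dbo.someTable DROP COLUMN SomeField
ALTER TABLE dbo.someTable ADD SomeField int NULL

-- Recreate the index; with every value NULL the build is small
CREATE INDEX IX_someTable_SomeField ON dbo.someTable (SomeField)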
I have an INSERT statement that is eating a hell of a lot of log space, so much so that the hard drive is actually filling up before the statement completes.
The thing is, I really don't need this to be logged as it is only an intermediate data upload step.
For argument's sake, let's say I have:
Table A: Initial upload table (populated using bcp, so no logging problems)
Table B: Populated using INSERT INTO B from A
Is there a way that I can copy between A and B without anything being written to the log?
P.S. I'm using SQL Server 2008 with simple recovery model.
From Louis Davidson, Microsoft MVP:
There is no way to insert without logging at all. SELECT INTO is the best way to minimize logging in T-SQL; using SSIS you can do the same sort of light logging using Bulk Insert.

From your requirements, I would probably use SSIS: drop all constraints, especially unique and primary key ones, load the data in, and add the constraints back. I load about 100 GB in just over an hour like this, with fairly minimal overhead. I am using the BULK_LOGGED recovery model, which just logs the existence of new extents during the load, and then you can remove them later.

The key is to start with barebones tables, and it just screams. Building the index once leaves you with no indexes to maintain, just the one index build per index.
If you don't want to use SSIS, the point still applies: drop all of your constraints and use the BULK_LOGGED recovery model. This greatly reduces the logging done by INSERT INTO statements and thus should solve your issue.
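Since the question's database is already using the SIMPLE recovery model (which also permits minimally logged operations), here is a hedged sketch of the copy itself, using the A and B tables from the question and assuming B can either be created fresh or already exists as an empty heap with no nonclustered indexes:
-- Option 1: SELECT INTO creates B and is minimally logged under SIMPLE or BULK_LOGGED recovery
SELECT * INTO dbo.B FROM dbo.A

-- Option 2: if B already exists as an empty heap with no nonclustered indexes,
-- INSERT ... WITH (TABLOCK) also qualifies for minimal logging
INSERT INTO dbo.B WITH (TABLOCK)
SELECT * FROM dbo.A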
http://msdn.microsoft.com/en-us/library/ms191244.aspx
Upload the data into tempdb instead of your database, and do all the intermediate transformations in tempdb. Then copy only the final data into the destination database. Use batches to minimize individual transaction size. If you still have problems, look into deploying trace flag 610, see The Data Loading Performance Guide and Prerequisites for Minimal Logging in Bulk Import:
Trace Flag 610
SQL Server 2008 introduces trace flag 610, which controls minimally logged inserts into indexed tables.
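A rough sketch of the tempdb staging idea, with hypothetical table names (StagingA, dbo.B) and on the assumption that dbo.B has the same column layout as the staging table; the copy runs in batches so no single transaction gets huge:
-- Stage and transform in tempdb, whose logging overhead is lower (SIMPLE recovery, no redo needed)
SELECT * INTO tempdb.dbo.StagingA FROM dbo.A
-- ... run the intermediate transformations against tempdb.dbo.StagingA here ...

-- Move the finished rows into the destination in batches
WHILE 1 = 1
BEGIN
    DELETE TOP (100000) s
    OUTPUT DELETED.* INTO dbo.B
    FROM tempdb.dbo.StagingA AS s

    IF @@ROWCOUNT = 0 BREAK
END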
I need help writing a TSQL script to modify two columns' data type.
We are changing two columns:
uniqueidentifier -> varchar(36) (this column has a primary key constraint)
xml -> nvarchar(4000)
My main concern is production deployment of the script...
The table is actively used by a public website that gets thousands of hits per hour. Consequently, we need the script to run quickly, without affecting service on the front end. Also, we need to be able to automatically rollback the transaction if an error occurs.
Fortunately, the table only contains about 25 rows, so I am guessing the update will be quick.
This database is SQL Server 2005.
(FYI - the type changes are required because of a 3rd-party tool which is not compatible with SQL Server's xml and uniqueidentifier types. We've already tested the change in dev and there are no functional issues with the change.)
As David said, executing a script against a production database without taking a backup or stopping the site is not the best idea. That said, if you only want to change one table with a small number of rows, you can prepare a script to:
Begin a transaction
Create a new table with the final structure you want
Copy the data from the original table to the new table
Rename the old table to, for example, original_name_old
Rename the new table to original_table_name
Commit the transaction
This leaves you with a table named like the original one but with the new structure you want, and you also keep the original table under a backup name, so if you want to roll back the change you can create a script that simply drops the new table and renames the original one back.
If the table has foreign keys the script will be a little more complicated, but it is still possible without much work.
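A hedged sketch of those steps, using hypothetical names (dbo.MyTable with an Id uniqueidentifier column and a Payload xml column); SET XACT_ABORT ON makes sure an error rolls the whole change back:
SET XACT_ABORT ON
BEGIN TRANSACTION

-- New table with the final structure
CREATE TABLE dbo.MyTable_new
(
    Id      varchar(36)    NOT NULL CONSTRAINT PK_MyTable_new PRIMARY KEY,
    Payload nvarchar(4000) NULL
)

-- Copy the ~25 rows, converting the types on the way in
INSERT INTO dbo.MyTable_new (Id, Payload)
SELECT CONVERT(varchar(36), Id), CONVERT(nvarchar(4000), Payload)
FROM dbo.MyTable

-- Swap the tables; the old one is kept under a backup name
EXEC sp_rename 'dbo.MyTable', 'MyTable_old'
EXEC sp_rename 'dbo.MyTable_new', 'MyTable'

COMMIT TRANSACTION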
Consequently, we need the script to run quickly, without affecting service on the front end.
This is just an opinion, but it's based on experience: That's a bad idea. It's better to have a short, (pre-announced if possible) scheduled downtime than to take the risk.
The only exception is if you really don't care if the data in these tables gets corrupted, and you can be down for an extended period.
In this situation, based on the types of changes you're making and the testing you've already performed, it sounds like the risk is very minimal. Since you've tested the changes, you SHOULD be able to do it safely, but nothing is guaranteed.
First, you need to have a fall-back plan in case something goes wrong. The short version of a MINIMAL reasonable plan would include:
Shut down the website
Make a backup of the database
Run your script
Test the DB for integrity
Bring the website back online
It would be very unwise to attempt to make such an update while the website is live. You run the risk of being down for an extended period if something goes wrong.
A GOOD plan would also have you testing this against a copy of the database and a copy of the website (a test/staging environment) first and then taking the steps outlined above for the live server update. You have already done this. Kudos to you!
There are even better methods for making such an update, but the trade-off of down time for safety is a no-brainer in most cases.
And if you absolutely need to do this in live then you might consider this:
1) Build an offline version of the table with the new datatypes and copied data.
2) Build all the required keys and indexes on the offline tables.
3) Swap the tables out in a transaction. You could rename the old table to something else as an emergency backup.
sp_help 'sp_rename'
But TEST all of this FIRST in a prod-like environment. And make sure your backups are up to date. AND do this when you are least busy.