I have a 1.10 GB database in Microsoft SQL Server 2005, and when I zip the file that contains this database, the resulting archive is only 80 MB.
What is the reason for this large reduction in size?
Is there some mistake in the data allocation in that database, i.e. static allocation?
There was no free backup compression built into SQL Server 2005; a database administrator could get it by using 3rd-party tools.
See this blog: http://sqlblogcasts.com/blogs/davidwimbush/archive/2009/09/24/backup-compression-on-sql-2005.aspx
One such 3rd-party tool is SQL Safe Lite; you can find related information at http://www.idera.com/Products/SQL-toolbox/SQL-safe-lite/
I have not used this software, so try it at your own risk.
Also this: http://www.loudsteve.com/2009/01/21/hack-together-backup-compression-in-sql-2005/
Please note that you must shrink the database before the backup to free unused allocated space.
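For reference, a minimal T-SQL sketch of that sequence, assuming a hypothetical database named MyDatabase and a hypothetical backup path:

-- MyDatabase and the path are placeholders; adjust to your environment.
-- Check how much of the allocated space is actually unused:
EXEC MyDatabase.dbo.sp_spaceused;
-- Release unused allocated space:
DBCC SHRINKDATABASE (MyDatabase);
-- Take a native (uncompressed) backup, then zip it with a 3rd-party tool:
BACKUP DATABASE MyDatabase
TO DISK = N'D:\Backups\MyDatabase.bak'
WITH INIT, CHECKSUM;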
On a normal SQL Server we can tell the database how to grow; the default is 10%, so the database grows by 10% of its current size each time. Do we have any insight into how an Azure SQL database grows, other than that it grows automatically?
Does Azure SQL allow us to configure the database to grow in fixed chunks, e.g. 20 MB?
You can use PowerShell, T-SQL, the CLI, or the portal to increase or decrease the maximum size of a database, but Azure SQL Database does not support setting autogrow. You can vote for this feature to be made available in the future at this URL.
If you run the following query in the database, you will see that growth is set to 2048 KB (the growth column is reported in 8 KB pages, so a value of 256 means 2048 KB):
SELECT name, growth
FROM sys.database_files;
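Although autogrow itself cannot be configured, the size cap can be changed in T-SQL as well; a minimal sketch, with a hypothetical database name and size (MAXSIZE must be a value your service tier supports):

-- Raises the database's maximum size, not its growth increment.
ALTER DATABASE MyAzureDb MODIFY (MAXSIZE = 250 GB);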
I've hit the 4 GB limit in my database, and I need a bit of breathing space before upgrading it to a SQL Server 2008 database. I wondered whether increasing autogrowth would give me more space inside the database beyond the 4 GB limit that SQL Server 2005 Express imposes.
I know the pitfalls of autogrowth with regard to performance, as data will be fragmented across the disk and thus make querying slower, but is there an advantage in granting it, say, 50-100 MB of room for autogrowth while the migration is planned out or an alternative is sought?
Disk space is not an issue and it would only be a temporary measure anyway.
No. Express Edition will not grow, nor attach or restore, a database over its size limit, no matter what the errorlog viewer tells you. Not even temporarily.
But you have an easy solution: SQL Server 2008 R2 Express Edition has raised the limit to 10 GB.
No, it won't. The SQL Server Express edition is for creating database-oriented applications without the need to purchase an official SQL Server license. However, you cannot use it in a production environment, for more reasons than just the file-size limit.
No, once you've reached 10,240 MB (10 × 1,024 MB) you're out of luck (technically not exactly 10 GiB).
I have a database for a piece of proprietary software that resides on a SQL Server 2005 instance shared with some databases for C# apps I developed. I'm having an issue with some of the proprietary software's stored procedures eating up resources. Is there a way for me to limit the CPU usage of a particular database? I've advocated moving the DBs to a different server / instance, but I need a solution that can hold me off until then.
You can use Resource Governor with a classifier function that routes sessions to a workload group based on the database name, and then limit CPU and memory for that workload group's resource pool.
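A minimal sketch of that setup; note that Resource Governor requires SQL Server 2008 or later (Enterprise Edition), and all pool, group, function, and database names here are hypothetical:

-- Run in master. Cap CPU and memory for the pool:
USE master;
GO
CREATE RESOURCE POOL LimitedPool
    WITH (MAX_CPU_PERCENT = 20, MAX_MEMORY_PERCENT = 20);
CREATE WORKLOAD GROUP LimitedGroup USING LimitedPool;
GO
-- Classifier: route sessions that connect to the proprietary app's database.
CREATE FUNCTION dbo.fnClassifier() RETURNS sysname
WITH SCHEMABINDING
AS
BEGIN
    DECLARE @grp sysname = N'default';
    IF ORIGINAL_DB_NAME() = N'ProprietaryDb'
        SET @grp = N'LimitedGroup';
    RETURN @grp;
END;
GO
ALTER RESOURCE GOVERNOR WITH (CLASSIFIER_FUNCTION = dbo.fnClassifier);
ALTER RESOURCE GOVERNOR RECONFIGURE;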
I know the tempdb-files-per-core recommendation applies to SQL Server 2005 onwards, but does it also apply to SQL Server 2000?
Any reference link would also be appreciated.
Anything you read about SQL Server 2000 is probably out of date because of how much technology has moved on since then.
However, this appears to be the best guidance for SQL Server 2000, but not for SQL Server 2005+.
This says SQL Server 2000 is different from SQL Server 2005+ (my bold):
Only one filegroup in TempDB is allowed for data and one filegroup for logs; however, you can configure multiple files. With SQL Server 2000 the recommendation is to have one data file per CPU core; however, with optimisations in SQL Server 2005/2008 it is now recommended to have 1/2 or 1/4 as many files as CPU cores. This is only a guide, and TempDB should be monitored to see if PAGELATCH waits increase or decrease with each change.
Paul Randal
On SQL Server 2000, the recommendation was one tempdb data file for each processor core. On 2005 and 2008, that recommendation persists, but because of some optimizations (see my blog post) you may not need one-to-one - you may be ok with the number of tempdb data files equal to 1/4 to 1/2 the number of processor cores.
SQL Server Engineers: it is interesting that this is internal MS and appears at odds with the first two articles.
Now, I'd go by the first two and decide if you need to actually do anything.
As Paul Randal also says (my bold):
One of the biggest confusion points is that the SQL CAT team recommends 1-to-1, but they're coming from a purely scaling perspective, not from an overall perf perspective, and they're dealing with big customers with top-notch servers and IO subsystems. Most people are not.
Have you demonstrated that:
you require this?
you have a bottleneck?
you have separate disk arrays per file?
you understand TF 1118?
...
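If you do conclude that additional tempdb data files will help, a minimal T-SQL sketch of adding one (the file name, path, and sizes are hypothetical; keep all tempdb data files the same size):

-- Add a second tempdb data file, sized like the existing one:
ALTER DATABASE tempdb
ADD FILE (NAME = N'tempdev2',
          FILENAME = N'T:\TempDB\tempdev2.ndf',
          SIZE = 1024MB,
          FILEGROWTH = 256MB);
-- TF 1118 (uniform extent allocation) can be enabled globally:
DBCC TRACEON (1118, -1);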
I am trying to figure out how SQL Server DBAs are doing their backups and verifies in 2005. I use Idera's free stored procs (which are no longer available to download, by the way) to back up and verify, and have gotten around 65% compression. Is there any other free alternative?
Not sure if this is what Idera's scripts do, but you could script a (native) SQL backup to a temporary location, then call PKZip or 7-Zip or some other command-line compression software to compress the backup to a permanent storage location.
Note that most of these zip utilities have a high CPU cost.
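A minimal sketch of that approach, invoking 7-Zip through xp_cmdshell (which must be enabled via sp_configure; the database name, paths, and 7-Zip location are hypothetical):

-- Native backup to a temporary location, verified, then compressed:
BACKUP DATABASE MyDb
TO DISK = N'D:\Temp\MyDb.bak'
WITH INIT, CHECKSUM;
RESTORE VERIFYONLY FROM DISK = N'D:\Temp\MyDb.bak' WITH CHECKSUM;
EXEC master.dbo.xp_cmdshell
    '"C:\Program Files\7-Zip\7z.exe" a "E:\Backups\MyDb.7z" "D:\Temp\MyDb.bak"';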
See the discussion in the comments of this post:
https://blog.stackoverflow.com/2009/02/our-backup-strategy-inexpensive-nas/
(Edit: or just upgrade to SQL Server 2008 R2, which supports native backup compression.)
Idera's product, which you are using, is a 3rd-party tool; with it you can back up/restore and monitor your servers and databases.
SQL Server has its own native tooling to back up databases to disk, usually via maintenance plans (SSIS packages) or T-SQL, where you can configure full, differential, and log backups, and verify the integrity of each backup after it finishes. As the databases grow, you need to keep an eye on disk capacity, since these backups go to disk. For a big database (say 1 TB), a daily full backup causes a lot of I/O, so you may decide on a weekly full backup with differential backups on the other days. You should also configure cleanup of backups older than however many days you want to keep; that option exists in the same maintenance plan.
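A minimal T-SQL sketch of that weekly-full/daily-differential pattern (database name and paths are hypothetical):

-- Weekly full backup:
BACKUP DATABASE BigDb TO DISK = N'E:\Backups\BigDb_full.bak'
    WITH INIT, CHECKSUM;
-- Daily differential backup (far less I/O than a daily full):
BACKUP DATABASE BigDb TO DISK = N'E:\Backups\BigDb_diff.bak'
    WITH DIFFERENTIAL, INIT, CHECKSUM;
-- Frequent log backups (requires the FULL recovery model):
BACKUP LOG BigDb TO DISK = N'E:\Backups\BigDb.trn'
    WITH INIT, CHECKSUM;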
See, for example: http://bradmcgehee.com/2010/01/13/how-to-use-sql-backup-inside-a-maintenance-plan/
But backup/restore ultimately depends on how well you manage it from your side, knowing the business risk and communicating with the business.
From SQL Server 2008 onwards you have native backup compression, similar to what Idera SQL Safe does.
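For example, a minimal sketch with a hypothetical database name and path (in SQL Server 2008, backup compression requires Enterprise Edition; 2008 R2 extends it to Standard):

BACKUP DATABASE MyDb
TO DISK = N'E:\Backups\MyDb.bak'
WITH COMPRESSION, INIT, CHECKSUM;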
But what gets backed up usually depends on what has been implemented, whether with the native tooling or with a 3rd-party product such as Commvault, Idera, or TDP (which goes to tape); it depends on what has been agreed.
Backup: http://msdn.microsoft.com/en-us/library/ms186865.aspx
Free (good) SQL Server DBA tools are hard to find. = /
Have you considered Windows Backup?
It's definitely not the best thing out there, and it takes up a lot of space, but it is free, it does back up your data, and you already have it installed.