I am facing a serious issue on my production server where tempdb grows exponentially. Is there any way we can recover the tempdb space without restarting the SQL Server service?
Cheers
Kannan.
I would ignore posts advising you to change the recovery model or limit the size of tempDB(!).
You need to track down the actual cause of the growth.
If you have the default trace turned on (it's on by default, out of the box), you can retrospectively find out what caused the growth by running this:
--check if default trace is enabled
if exists (select 1 from sys.configurations where configuration_id = 1568 and value_in_use = 1)
BEGIN
declare @defaultTraceFilepath nvarchar(256)
--get the current trace rollover file
select @defaultTraceFilepath = CONVERT(nvarchar(256), value) from ::fn_trace_getinfo(0)
where property = 2
SELECT ntusername,loginname, objectname, e.category_id, textdata, starttime,spid,hostname, eventclass,databasename, e.name
FROM ::fn_trace_gettable(@defaultTraceFilepath,0)
inner join sys.trace_events e
on eventclass = trace_event_id
INNER JOIN sys.trace_categories AS cat
ON e.category_id = cat.category_id
where
databasename = 'tempDB' and
cat.category_id = 2 and --database category
e.trace_event_id in (92,93) --db file growth
END
Otherwise, you can start a SQL Profiler trace to capture these events. Turn on capturing of Auto Growth events, Sort Warnings and Join Warnings and look for cross joins, hash joins or missing join conditions.
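On SQL Server 2012 or later, an Extended Events session is a lighter-weight alternative to a Profiler trace. The sketch below is only an illustration (the session and file names are made up; verify the event names exist on your build):
-- capture tempdb file growth plus sort/hash warnings, with the offending statement attached
CREATE EVENT SESSION tempdb_growth ON SERVER
ADD EVENT sqlserver.database_file_size_change(
    ACTION (sqlserver.sql_text, sqlserver.session_id, sqlserver.client_hostname)
    WHERE (database_id = 2)),   -- tempdb
ADD EVENT sqlserver.sort_warning(
    ACTION (sqlserver.sql_text, sqlserver.session_id)),
ADD EVENT sqlserver.hash_warning(
    ACTION (sqlserver.sql_text, sqlserver.session_id))
ADD TARGET package0.event_file (SET filename = N'tempdb_growth.xel');
GO
ALTER EVENT SESSION tempdb_growth ON SERVER STATE = START;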
SQL Server exposes a way to identify tempDB space allocations by currently executing queries, using DMVs:
-- This DMV query shows currently executing tasks and tempdb space usage
-- Once you have isolated the task(s) that are generating lots
-- of internal object allocations,
-- you can find out which TSQL statement and its query plan
-- for detailed analysis
select top 10
t1.session_id,
t1.request_id,
t1.task_alloc,
t1.task_dealloc,
(SELECT SUBSTRING(text, t2.statement_start_offset/2 + 1,
(CASE WHEN statement_end_offset = -1
THEN LEN(CONVERT(nvarchar(max),text)) * 2
ELSE statement_end_offset
END - t2.statement_start_offset)/2)
FROM sys.dm_exec_sql_text(sql_handle)) AS query_text,
(SELECT query_plan from sys.dm_exec_query_plan(t2.plan_handle)) as query_plan
from (Select session_id, request_id,
sum(internal_objects_alloc_page_count + user_objects_alloc_page_count) as task_alloc,
sum (internal_objects_dealloc_page_count + user_objects_dealloc_page_count) as task_dealloc
from sys.dm_db_task_space_usage
group by session_id, request_id) as t1,
sys.dm_exec_requests as t2
where t1.session_id = t2.session_id and
(t1.request_id = t2.request_id) and
t1.session_id > 50
order by t1.task_alloc DESC
You can use DBCC SHRINKFILE to shrink the tempdb files and recover some space.
DBCC SHRINKFILE ('tempdev', 1)
DBCC SHRINKFILE ('templog', 1)
The logical file names can be found in sys.database_files (or the legacy sysfiles view).
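For example, run in the context of tempdb, this lists the logical names, current sizes and physical paths:
USE tempdb;
-- logical name, type, current size (MB) and physical location of each tempdb file
SELECT name, type_desc, size * 8 / 1024 AS size_mb, physical_name
FROM sys.database_files;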
You still need to discover the root cause, but this can give you some breathing room until you do. The amount of space you recover will depend on usage and other factors.
Also:
How to shrink the tempdb database in SQL Server
http://support.microsoft.com/kb/307487
In the SIMPLE recovery model, the tempdb log is constantly being truncated and can never be backed up, so check that tempdb is set to SIMPLE.
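A quick way to confirm:
-- tempdb should report SIMPLE here
SELECT name, recovery_model_desc
FROM sys.databases
WHERE name = 'tempdb';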
Related
I have a query that needs to update 2 million records, but there was no space on the disk, so the query is suspended right now. After that, I freed up some space, but the query is still suspended. How should I change the status to runnable, or is there any way to tell SQL Server that there is enough space now and it can run the query?
After that, I free up some space, but the query is still suspended. Is there any way to tell SQL Server that you have enough space right now, and you can run the query?
SQL Server will change the query status from suspended to runnable automatically; it is not managed by you.
Your job here is to check why the query is suspended. The DMV below can help:
select session_id,blocking_session_id,wait_resource,wait_time,
last_wait_type from sys.dm_exec_requests
where session_id=<< your session id>>
There are many reasons why a query gets suspended; some of them include locking/blocking, rollback, and waiting for data to be read from disk.
You will have to check the status using the DMV above, see what the reason is, and troubleshoot accordingly.
Below is a small example that can help you understand what suspended means:
create table t1
(
id int
)
insert into t1
select row_number() over (order by (select null))
from
sys.objects c
cross join
sys.objects c1
Now, in one tab of SSMS, run the query below:
begin tran
update t1
set id=id+1
Open another tab and run the query below:
select * from t1
Now open a third tab and run the query below:
select session_id,blocking_session_id,wait_resource,wait_time,
last_wait_type,status from sys.dm_exec_requests
where session_id=<< your session id of select >>
Or run this query:
select session_id,blocking_session_id,wait_resource,wait_time,
last_wait_type,status from sys.dm_exec_requests
where blocking_session_id>0
You will see the status as suspended due to blocking. Once you clear the blocking (by committing the transaction), SQL Server automatically resumes the suspended query.
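As a follow-up, once you have a blocking_session_id you can also pull the statement text for both sides. This is only a sketch; for an idle blocker the most recent statement is taken from sys.dm_exec_connections:
-- blocked request plus the last statement run by the blocking session
SELECT r.session_id,
       r.blocking_session_id,
       r.status,
       r.wait_type,
       blocked.text AS blocked_statement,
       blocker.text AS blocker_last_statement
FROM sys.dm_exec_requests AS r
OUTER APPLY sys.dm_exec_sql_text(r.sql_handle) AS blocked
LEFT JOIN sys.dm_exec_connections AS c
       ON c.session_id = r.blocking_session_id
OUTER APPLY sys.dm_exec_sql_text(c.most_recent_sql_handle) AS blocker
WHERE r.blocking_session_id > 0;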
I have two large tables with 60 million and 10 million records respectively. I want to join the two tables, but the process runs for 3 hours and then fails with the error message:
the transaction log for database is full due to 'active_transaction'
Autogrowth is unlimited and I have set the database recovery model to SIMPLE.
The size of the log drive is 50 GB
I am using SQL Server 2008 R2.
The SQL query I am using is:
Select * into betdaq.[dbo].temp3 from
(Select XXXXX, XXXXX, XXXXX, XXXXX, XXXXX
from XXX.[dbo].temp1 inner join XXX.[dbo].temp2
on temp1.Date = temp2.[Date] and temp1.cloth = temp2.Cloth and temp1.Time = temp1.Time) a
A single command is a transaction and the transaction does not commit until the end.
So you are filling up the transaction log.
You are going to need to loop and insert something like 100,000 rows at a time.
Start with this just to test the first 100,000.
Then you will need to add a loop (or a cursor) to process the rest; a sketch of such a loop follows the example below.
create table betdaq.[dbo].temp3 ...
insert into betdaq.[dbo].temp3 (a,b,c,d,e)
Select top 100000 with ties XXXXX, XXXXX, XXXXX, XXXXX, XXXXX
from XXX.[dbo].temp1
join XXX.[dbo].temp2
on temp1.Date = temp2.[Date]
and temp1.Time = temp1.Time
and temp1.cloth = temp2.Cloth
order by temp1.Date, temp1.Time
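The loop could look roughly like this. It is only a sketch: the column names, and the assumption that column a uniquely identifies a source row (so already-copied rows can be skipped), are placeholders rather than the real schema:
-- batch the insert so each transaction stays small; under SIMPLE recovery
-- the committed log space can be reused between batches
DECLARE @batch int = 100000, @rows int = 1;
WHILE @rows > 0
BEGIN
    INSERT INTO betdaq.[dbo].temp3 (a, b, c, d, e)
    SELECT TOP (@batch) t1.colA, t1.colB, t2.colC, t2.colD, t2.colE
    FROM XXX.[dbo].temp1 AS t1
    JOIN XXX.[dbo].temp2 AS t2
      ON t1.[Date] = t2.[Date]
     AND t1.cloth = t2.Cloth
    WHERE NOT EXISTS (SELECT 1
                      FROM betdaq.[dbo].temp3 AS t3
                      WHERE t3.a = t1.colA);   -- assumes colA is a unique key
    SET @rows = @@ROWCOUNT;
    CHECKPOINT;   -- allow log truncation between batches in SIMPLE recovery
END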
And why? That is a LOT of data. Could you use a View or a CTE?
If those join columns are indexed, a view will be very efficient.
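A minimal sketch of the view idea, reusing the placeholder names from the query above:
-- the view only stores the join definition; with indexes on the join columns
-- you can query it with filters instead of materializing the full result up front
CREATE VIEW dbo.vw_temp1_temp2
AS
SELECT t1.[Date], t1.[Time], t1.cloth, t1.colA, t2.colB
FROM XXX.[dbo].temp1 AS t1
JOIN XXX.[dbo].temp2 AS t2
  ON t1.[Date] = t2.[Date]
 AND t1.cloth = t2.Cloth;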
The transaction log can fill up even though the database is in the simple recovery model; even though SELECT INTO is a minimally logged operation, the log can still fill up because of other transactions running in parallel.
I would use the queries below to check transaction log space usage while the query is running:
select * from sys.dm_db_log_space_usage
select * from sys.dm_tran_database_transactions
select * from sys.dm_tran_active_transactions
select * from sys.dm_tran_current_transaction
Further, the query at the link below can be used to check the SQL text as well:
https://gallery.technet.microsoft.com/scriptcenter/Transaction-Log-Usage-By-e62ba57d
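If you prefer not to download the script, a rough equivalent (not the exact script at that link) is to join the transaction DMVs to the session and request DMVs:
-- per-session log bytes used/reserved in the current database,
-- with the statement currently running (NULL for idle sessions)
SELECT s.session_id,
       dt.database_transaction_log_bytes_used,
       dt.database_transaction_log_bytes_reserved,
       t.text AS sql_text
FROM sys.dm_tran_database_transactions AS dt
JOIN sys.dm_tran_session_transactions AS st ON st.transaction_id = dt.transaction_id
JOIN sys.dm_exec_sessions AS s ON s.session_id = st.session_id
LEFT JOIN sys.dm_exec_requests AS r ON r.session_id = s.session_id
OUTER APPLY sys.dm_exec_sql_text(r.sql_handle) AS t
WHERE dt.database_id = DB_ID()   -- run this in the database whose log is filling
ORDER BY dt.database_transaction_log_bytes_used DESC;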
I run EXEC sp_who2 78 and I get the following results:
How can I find why its status is suspended?
This process is a heavy INSERT based on an expensive query. A big SELECT that gets data from several tables and write some 3-4 millions rows to a different table.
There are no locks/ blocks.
The wait type it is linked to is CXPACKET, which I can understand because there are 9 rows for SPID 78, as you can see in the picture below.
What concerns me, and what I really would like to know, is why thread number 1 of SPID 78 is suspended.
I understand that when the status of a SPID is suspended it means the process is waiting on a resource and it will resume when it gets its resource.
How can I find more details about this? what resource? why is it not available?
I use the code below a lot, and variations of it, but is there anything else I can do to find out why the SPID is suspended?
select *
from sys.dm_exec_requests r
join sys.dm_os_tasks t on r.session_id = t.session_id
where r.session_id = 78
I have already used sp_whoisactive. The result I get for this particular SPID 78 is as follows (broken into 3 pictures to fit the screen):
SUSPENDED:
It means that the request is currently not active because it is waiting on a resource. The resource can be an I/O for reading a page, it can be communication on the network, or it can be a lock or a latch. The request will become active once the resource it is waiting for becomes available. For example, if the query has posted an I/O request to read the data of an entire table tblStudents, the task will be suspended until the I/O completes. Once the I/O is complete (the data for tblStudents is available in memory), the query moves into the RUNNABLE queue.
So if it is waiting, check the wait_type column to understand what it is waiting for and troubleshoot based on the wait_time.
I have developed the following procedure that helps me with this; it includes the wait type.
use master
go
CREATE PROCEDURE [dbo].[sp_radhe]
AS
BEGIN
SET TRANSACTION ISOLATION LEVEL READ UNCOMMITTED
SELECT es.session_id AS session_id
,COALESCE(es.original_login_name, '') AS login_name
,COALESCE(es.host_name,'') AS hostname
,COALESCE(es.last_request_end_time,es.last_request_start_time) AS last_batch
,es.status
,COALESCE(er.blocking_session_id,0) AS blocked_by
,COALESCE(er.wait_type,'MISCELLANEOUS') AS waittype
,COALESCE(er.wait_time,0) AS waittime
,COALESCE(er.last_wait_type,'MISCELLANEOUS') AS lastwaittype
,COALESCE(er.wait_resource,'') AS waitresource
,coalesce(db_name(er.database_id),'No Info') as dbid
,COALESCE(er.command,'AWAITING COMMAND') AS cmd
,sql_text=st.text
,transaction_isolation =
CASE es.transaction_isolation_level
WHEN 0 THEN 'Unspecified'
WHEN 1 THEN 'Read Uncommitted'
WHEN 2 THEN 'Read Committed'
WHEN 3 THEN 'Repeatable'
WHEN 4 THEN 'Serializable'
WHEN 5 THEN 'Snapshot'
END
,COALESCE(es.cpu_time,0)
+ COALESCE(er.cpu_time,0) AS cpu
,COALESCE(es.reads,0)
+ COALESCE(es.writes,0)
+ COALESCE(er.reads,0)
+ COALESCE(er.writes,0) AS physical_io
,COALESCE(er.open_transaction_count,-1) AS open_tran
,COALESCE(es.program_name,'') AS program_name
,es.login_time
FROM sys.dm_exec_sessions es
LEFT OUTER JOIN sys.dm_exec_connections ec ON es.session_id = ec.session_id
LEFT OUTER JOIN sys.dm_exec_requests er ON es.session_id = er.session_id
LEFT OUTER JOIN sys.server_principals sp ON es.security_id = sp.sid
LEFT OUTER JOIN sys.dm_os_tasks ota ON es.session_id = ota.session_id
LEFT OUTER JOIN sys.dm_os_threads oth ON ota.worker_address = oth.worker_address
CROSS APPLY sys.dm_exec_sql_text(er.sql_handle) AS st
where es.is_user_process = 1
and es.session_id <> @@SPID
ORDER BY es.session_id
end
The query below can also show basic information to help when the SPID is suspended, by showing which resource the SPID is waiting for.
SELECT wt.session_id,
ot.task_state,
wt.wait_type,
wt.wait_duration_ms,
wt.blocking_session_id,
wt.resource_description,
es.[host_name],
es.[program_name]
FROM sys.dm_os_waiting_tasks wt
INNER JOIN sys.dm_os_tasks ot ON ot.task_address = wt.waiting_task_address
INNER JOIN sys.dm_exec_sessions es ON es.session_id = wt.session_id
WHERE es.is_user_process = 1
Please see the picture below as an example:
I use sp_whoIsActive to look at this kind of information as it is a ready made free tool that gives you good information for troubleshooting slow queries:
How to Use sp_WhoIsActive to Find Slow SQL Server Queries
With this, you can get the query text, the plan it is using, the resource the query is waiting on, what is blocking it, what locks it is taking out and a whole lot more.
Much easier than trying to roll your own.
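As an illustration, a typical call (assuming the procedure is installed; these parameter names are from the sp_WhoIsActive documentation) looks like:
-- request query plans and lock information along with the default output
EXEC sp_WhoIsActive @get_plans = 1, @get_locks = 1;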
You can solve it in two ways:
Fix the clustered index.
Use temporary tables to get a portion of the whole table and work with it.
I had the same problem with a table of 400,000,000 rows; I used temporary tables to get a portion of it and then applied my filters and joins, because changing the index was not an option.
An example:
--
-- a temp table is needed here because a table variable (DECLARE @TEMPORAL TABLE ...) does not handle a lot of data well
CREATE TABLE #TEMPORAL
(
ID BIGINT,
ID2 BIGINT,
DATA1 DECIMAL,
DATA2 DECIMAL
);
WITH TABLE1 AS
(
SELECT
L.ID,
L.ID2,
L.DATA
FROM LARGEDATA L
WHERE L.ID = 1
), TABLE2 AS
(
SELECT
L.ID,
L.ID2,
L.DATA
FROM LARGEDATA L
WHERE L.ID = 2
) INSERT INTO #TEMPORAL SELECT
T1.ID,
T2.ID,
T1.DATA,
T2.DATA
FROM TABLE1 T1
INNER JOIN TABLE2 T2
ON T1.ID2 = T2.ID2;
--
-- this query takes a lot of resources and time and ends up with a suspended status; this is why I need the temp table
SELECT
*
FROM #TEMPORAL T
WHERE T.DATA1 < T.DATA2
--
-- IMPORTANT: drop the temp table when finished.
DROP TABLE #TEMPORAL
I need to find the size of a sql server 2008 database. I used the following stored procedure
EXEC sp_spaceused
It gave me back the following data: database name, database size, unallocated space, reserved, data, index_size, unused.
Is there any way I can get the size of the database, excluding certain tables?
I am able to get the reserved size of each database table using this query
DECLARE #LOW int
SELECT #LOW = LOW
FROM [master].[dbo].[spt_values] (NOLOCK)
WHERE [number] = 1 AND [type] = 'E'
SELECT TableName,[Row Count],[Size (KB)] FROM
(
SELECT QUOTENAME(USER_NAME(o.uid)) + '.' +
QUOTENAME(OBJECT_NAME(i.id))
AS TableName
,SUM(i.rowcnt) [Row Count]
,CONVERT(numeric(15,2),
(((CONVERT(numeric(15,2),SUM(i.reserved)) * #LOW) / 1024))) AS [Size (KB)]
FROM sysindexes i (NOLOCK)
INNER JOIN sysobjects o (NOLOCK)
ON i.id = o.id AND
((o.type IN ('U', 'S')) OR o.type = 'U')
WHERE indid IN (0, 1, 255)
GROUP BY
QUOTENAME(USER_NAME(o.uid)) + '.' +
QUOTENAME(OBJECT_NAME(i.id))
HAVING SUM(i.rowcnt) > 0
) AS Z
ORDER BY [Size (KB)] DESC
But this confuses me slightly, as this only gives the reserved size per table. What is the reserved size? If I sum the reserved size for each database table, it does not add up to the database size.
There is a lot more taking up space in a database than just tables. Also keep in mind that the size of a table/database is an ever-changing thing: rows are added and removed, and the log keeps track of what was done so it can be undone or redone if necessary. As this space is used and released, it doesn't typically get released back to the file system, because SQL Server knows it will likely need it again. SQL Server will keep the space for its own future use, but according to the file system that space is still being used by the database.
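One way to see the rest of that picture is to compare each file's size with the space actually allocated inside it; the unused portion is part of the database size that no table accounts for:
-- per-file size versus space actually used inside the file (run in the database in question)
SELECT name,
       size * 8 / 1024 AS file_size_mb,
       FILEPROPERTY(name, 'SpaceUsed') * 8 / 1024 AS space_used_mb,
       (size - FILEPROPERTY(name, 'SpaceUsed')) * 8 / 1024 AS free_in_file_mb
FROM sys.database_files;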
Please stop using backward compatibility views like sysindexes / sysobjects.
Something like this might be better, though indexes/tables alone do not account for everything in a database.
SELECT Size_In_MB = SUM(reserved_page_count)*8/1024.0
FROM sys.dm_db_partition_stats
WHERE OBJECT_NAME([object_id]) NOT IN (N'table1', N'table2');
Also, why are you ignoring non-clustered indexes? Do you think they don't contribute to size? You can add a similar filter here, but I'm not sure why you would:
AND index_id IN (0,1,255);
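If you want to see where the space goes per table before deciding what to exclude, a per-object rollup of the same DMV works (drop the index_id filter to include nonclustered indexes):
-- reserved space and row count per table, heap or clustered index only
SELECT OBJECT_SCHEMA_NAME([object_id]) + N'.' + OBJECT_NAME([object_id]) AS table_name,
       SUM(reserved_page_count) * 8 / 1024.0 AS reserved_mb,
       SUM(row_count) AS [row_count]
FROM sys.dm_db_partition_stats
WHERE index_id IN (0, 1)
GROUP BY [object_id]
ORDER BY reserved_mb DESC;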
Is there a way to find the unused tables which are nothing else but rubbish in database?
The only way I can think of to work out whether a table is being used is sys.dm_db_index_usage_stats. The caveat is that it only records table usage since the SQL Server service was last started.
So bearing that in mind, you can use the following query:
SELECT DISTINCT
OBJECT_SCHEMA_NAME(t.[object_id]) AS 'Schema'
, OBJECT_NAME(t.[object_id]) AS 'Table/View Name'
, CASE WHEN rw.last_read > 0 THEN rw.last_read END AS 'Last Read'
, rw.last_write AS 'Last Write'
, t.[object_id]
FROM sys.tables AS t
LEFT JOIN sys.dm_db_index_usage_stats AS us
ON us.[object_id] = t.[object_id]
AND us.database_id = DB_ID()
LEFT JOIN
( SELECT MAX(up.last_user_read) AS 'last_read'
, MAX(up.last_user_update) AS 'last_write'
, up.[object_id]
FROM (SELECT last_user_seek
, last_user_scan
, last_user_lookup
, [object_id]
, database_id
, last_user_update, COALESCE(last_user_seek, last_user_scan, last_user_lookup,0) AS null_indicator
FROM sys.dm_db_index_usage_stats) AS sus
UNPIVOT(last_user_read FOR read_date IN(last_user_seek, last_user_scan, last_user_lookup, null_indicator)) AS up
WHERE database_id = DB_ID()
GROUP BY up.[object_id]
) AS rw
ON rw.[object_id] = us.[object_id]
ORDER BY [Last Read]
, [Last Write]
, [Table/View Name];
If you use source control, look at the latest database script. It's the easiest way.
I think you might find the database statistics would be the most profitable place to look.
They should be able to tell you which tables are read from most and which ones are updated most.
If you find tables which are neither read from nor written to, they're probably not much used.
I'm not sure what database statistics are available in SQL Server 2000, though.
However, rather than simply looking at which tables are not much used, wouldn't a better approach be to examine what each table holds and what it is for, so you gain a proper understanding of the design? In that case you would then be able to properly judge what is necessary and what is not.
It is a concern that you don't know what source control is, though (it's a way of managing changes to files - usually source code - so you can keep track of who changed what, when and why). Anything larger than a one-man project (and even some one-man projects) should use it.
You can use sp_depends to confirm any dependencies for the suspect tables.
Here is an example:
CREATE TABLE Test (ColA INT)
GO
CREATE PROCEDURE usp_Test AS
BEGIN
SELECT * FROM Test
END
GO
CREATE FUNCTION udf_Test()
RETURNS INT
AS
BEGIN
DECLARE @t INT
SELECT TOP 1 @t = ColA FROM Test
RETURN @t
END
GO
EXEC sp_depends 'Test'
/** Results **/
In the current database, the specified object is referenced by the following:
name type
----- ----------------
dbo.udf_Test scalar function
dbo.usp_Test stored procedure
This approach has some caveats. Also, this won't help with tables that are accessed directly from an application or other software (e.g. Excel, Access, etc.).
To be completely thorough, I would recommend using SQL Profiler in order to monitor your database and see if and when these tables are referenced.
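On newer versions an Extended Events session is a lower-overhead way to do the same monitoring; this is only a sketch (the session and file names are made up), and you would then search the captured sql_text for the suspect table names:
-- capture completed statements with their text, application name and login
CREATE EVENT SESSION table_usage_audit ON SERVER
ADD EVENT sqlserver.sql_statement_completed(
    ACTION (sqlserver.sql_text, sqlserver.client_app_name, sqlserver.username))
ADD TARGET package0.event_file (SET filename = N'table_usage_audit.xel');
GO
ALTER EVENT SESSION table_usage_audit ON SERVER STATE = START;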