Use column from temp table that is populated by stored proc - sql

I know this is wrong and won't work but here's an idea of what I'm trying to do.
I would just join the table that the [GetFinalCountByGroupId] sp is using internally but I don't want to because the table it uses has some large varbinary data. Of course the sp is still querying it so maybe it is just as good performance wise as a join rather than a sp call. Either way I'm curious if I can get this to work first - if not I'll just try a join. Anyway, here's some code:
CREATE PROCEDURE [dbo].[GetFinalRequests]
AS
BEGIN
SET NOCOUNT ON
DECLARE @FinalTable TABLE
(
FinalCount TINYINT
)
INSERT INTO @FinalTable
EXEC [dbo].[GetFinalCountByGroupId] [GroupId]
SELECT [Id]
,[GroupId]
,[SubmitBy]
,[InUse]
FROM [dbo].[Requests]
WHERE [InUse] = 1
AND @FinalTable.FinalCount > 0
END
edit: here is the result of executing this...
Must declare the scalar variable "@FinalTable".

You have a syntax error here:
SELECT [Id]
,[GroupId]
,[SubmitBy]
,[InUse]
FROM [dbo].[Requests]
WHERE [InUse] = 1
AND @FinalTable.FinalCount > 0
You cannot access the FinalCount column like that in SQL.
I don't know the purpose of the temp table but just to make it work:
SELECT [Id]
,[GroupId]
,[SubmitBy]
,[InUse]
FROM [dbo].[Requests]
WHERE [InUse] = 1
AND (select sum(FinalCount) from @FinalTable) > 0
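For what it's worth, a corrected version of the original procedure could combine both fixes. This is only a sketch: it assumes GetFinalCountByGroupId takes a group-id parameter and returns a single-column result set, neither of which is shown in the question.

```sql
CREATE PROCEDURE [dbo].[GetFinalRequests]
    @GroupId INT    -- hypothetical parameter; the question never shows where GroupId comes from
AS
BEGIN
    SET NOCOUNT ON;

    DECLARE @FinalTable TABLE (FinalCount TINYINT);

    -- INSERT ... EXEC captures the stored procedure's result set
    INSERT INTO @FinalTable (FinalCount)
    EXEC [dbo].[GetFinalCountByGroupId] @GroupId;

    SELECT [Id], [GroupId], [SubmitBy], [InUse]
    FROM [dbo].[Requests]
    WHERE [InUse] = 1
      -- a table variable is queried, never referenced as a scalar
      AND EXISTS (SELECT 1 FROM @FinalTable WHERE FinalCount > 0);
END
```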

Ok - I abandoned this and just did a join:
SELECT [Id]
,r.[GroupId]
,[SubmitBy]
,[InUse]
FROM [dbo].[Requests] r
JOIN [dbo].[AgreementDocuments] d
ON r.[GroupId] = d.[GroupId]
WHERE r.[InUse] = 1
AND d.[Final] = 1


Check if a temp table exists when I only know part of the name?

I have a function for checking if certain tables exist in my database, using part of the table name as a key to match (my table naming conventions include unique table name prefixes). It uses a select statement as below, where @TablePrefix is a parameter to the function and contains the first few characters of the table name:
DECLARE @R bit;
SELECT @R = COUNT(X.X)
FROM (
SELECT TOP(1) 1 X FROM sys.tables WHERE [name] LIKE @TablePrefix + '%'
) AS X;
RETURN @R;
My question is, how can I extend this function to work for #temp tables too?
I have tried checking the first char of the name for # then using the same logic to select from tempdb.sys.tables, but this seems to have a fatal flaw - it returns a positive result when any temp table exists with a matching name, even if not created by the current session - and even if created by SPs in a different database. There does not seem to be any straightforward way to narrow the selection down to only those temp tables that exist in the context of the current session.
I cannot use the other method that seems universally to be suggested for checking temp tables - IF OBJECT_ID('tempdb..#temp1') IS NOT NULL - because that requires me to know the full name of the table, not just a prefix.
create table #abc(id bit);
create table #abc_(id bit);
create table #def__(id bit);
create table #xyz___________(id bit);
go
select distinct (left(t.name, n.r)) as tblname
from tempdb.sys.tables as t with(nolock)
cross join (select top(116) row_number() over(order by(select null)) as r from sys.all_objects with(nolock)) as n
where t.name like '#%'
and object_id('tempdb..'+left(t.name, n.r)) is not null;
drop table #abc;
drop table #abc_;
drop table #def__;
drop table #xyz___________;
Try something like this:
DECLARE @TablePrefix VARCHAR(50) = '#temp';
DECLARE @R BIT, @pre VARCHAR(50) = @TablePrefix + '%';
SELECT @R = CASE LEFT ( @pre, 1 )
WHEN '#' THEN (
SELECT CASE WHEN EXISTS ( SELECT * FROM tempdb.sys.tables WHERE [name] LIKE @pre ) THEN 1
ELSE 0
END )
ELSE (
SELECT CASE WHEN EXISTS ( SELECT * FROM sys.tables WHERE [name] LIKE @pre ) THEN 1
ELSE 0
END )
END;
SELECT @R AS TableExists;
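The two answers could also be combined into one session-safe check: filter tempdb.sys.tables by the prefix, but only count a row if OBJECT_ID can resolve some leading substring of the padded name for the current session. An untested sketch:

```sql
DECLARE @TablePrefix sysname = '#temp';
DECLARE @R bit = 0;

IF EXISTS (
    SELECT 1
    FROM tempdb.sys.tables AS t
    -- 116 covers the longest possible padded temp table name
    CROSS JOIN (SELECT TOP (116) ROW_NUMBER() OVER (ORDER BY (SELECT NULL)) AS r
                FROM sys.all_objects) AS n
    WHERE t.[name] LIKE @TablePrefix + '%'
      -- OBJECT_ID only resolves temp names visible to the current session
      AND OBJECT_ID('tempdb..' + LEFT(t.[name], n.r)) IS NOT NULL
)
    SET @R = 1;

SELECT @R AS TableExists;
```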

How to set total number of rows before an OFFSET occurs in stored procedure

I've created a stored procedure that filters and paginates for a DataTable.
Problem: I need to set an OUTPUT variable for @TotalRecord (the total rows found) before the OFFSET occurs; otherwise @TotalRecord just ends up equal to @RecordPerPage.
I've messed around with CTE's and also simply trying this:
SELECT *, @TotalRecord = COUNT(1)
FROM dbo
But that doesn't work either.
Here is my stored procedure, with most of the stuff pulled out:
ALTER PROCEDURE [dbo].[SearchErrorReports]
@FundNumber varchar(50) = null,
@ProfitSelected bit = 0,
@SortColumnName varchar(30) = null,
@SortDirection varchar(10) = null,
@StartIndex int = 0,
@RecordPerPage int = null,
@TotalRecord INT = 0 OUTPUT --NEED TO SET THIS BEFORE OFFSET!
AS
BEGIN
SET NOCOUNT ON;
SELECT *
FROM
(SELECT *
FROM dbo.View
WHERE (@ProfitSelected = 1 AND Profit = 1)) AS ERP
WHERE
((@FundNumber IS NULL OR @FundNumber = '')
OR (ERP.FundNumber LIKE '%' + @FundNumber + '%'))
ORDER BY
CASE
WHEN @SortColumnName = 'FundNumber' AND @SortDirection = 'asc'
THEN ERP.FundNumber
END ASC,
CASE
WHEN @SortColumnName = 'FundNumber' AND @SortDirection = 'desc'
THEN ERP.FundNumber
END DESC
OFFSET @StartIndex ROWS
FETCH NEXT @RecordPerPage ROWS ONLY
Thank you in advance!
You could try something like this:
create a CTE that gets the data you want to return
include a COUNT(*) OVER() in there to get the total count of rows
return just a subset (based on your OFFSET .. FETCH NEXT) from the CTE
So your code would look something along those lines:
-- CTE definition - call it whatever you like
WITH BaseData AS
(
SELECT
-- select all the relevant columns you need
p.ProductID,
p.ProductName,
-- using COUNT(*) OVER() returns the total count over all rows
TotalCount = COUNT(*) OVER()
FROM
dbo.Products p
)
-- now select from the CTE - using OFFSET/FETCH NEXT, get only those rows you
-- want - but the "TotalCount" column still contains the total count - before
-- the OFFSET/FETCH
SELECT *
FROM BaseData
ORDER BY ProductID
OFFSET 20 ROWS FETCH NEXT 15 ROWS ONLY
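To also populate the @TotalRecord OUTPUT parameter inside the question's procedure, one option is to materialize the counted result into a temp table first, read the count back, and then page. A sketch, not tested against the real view:

```sql
-- Sketch: assumes it runs inside SearchErrorReports, so @StartIndex,
-- @RecordPerPage and @TotalRecord are the procedure's parameters and
-- dbo.[View] stands in for the question's filtered source.
SELECT
    ERP.*,
    TotalCount = COUNT(*) OVER ()       -- same total on every row, before paging
INTO #Page
FROM dbo.[View] AS ERP;

-- The count survives even though the next SELECT only returns one page
SELECT @TotalRecord = ISNULL(MAX(TotalCount), 0) FROM #Page;

SELECT *
FROM #Page
ORDER BY FundNumber
OFFSET @StartIndex ROWS
FETCH NEXT @RecordPerPage ROWS ONLY;
```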
As a habit, I prefer non-null entries before possible null. I did not reference those in my response below, and limited a working example to just the two inputs you are most concerned with.
I believe there could be some more clean ways to apply your local variables to filter the query results without having to perform an offset. You could return to a temp table or a permanent usage table that cleans itself up and use IDs that aren't returned as a way to set pages. Smoother, with less fuss.
However, I understand that isn't always feasible, and I become frustrated myself with those attempting to solve your use case for you without attempting to answer the question. Quite often there are multiple ways to tackle any issue. Your job is to decide which one is best in your scenario. Our job is to help you figure out the script.
With that said, here's a potential solution using dynamic SQL.
I'm a huge believer in dynamic SQL, and use it extensively for user based table control and ease of ETL mapping control.
use TestCatalog;
set nocount on;
--Builds a temp table, just for test purposes
drop table if exists ##TestOffset;
create table ##TestOffset
(
Id int identity(1,1)
, RandomNumber decimal (10,7)
);
--Inserts 1000 random numbers between 0 and 100
while (select count(*) from ##TestOffset) < 1000
begin
insert into ##TestOffset
(RandomNumber)
values
(RAND()*100)
end;
set nocount off;
go
create procedure dbo.TestOffsetProc
@StartIndex int = null --I'll reference this like a page number below
, @RecordsPerPage int = null
as
begin
declare @MaxRows int = 30; --your front end will probably manage this, but don't trust it. I personally would store this in a table against each display so it can also be returned dynamically, with less manual intrusion into this procedure.
declare @FirstRow int;
--Quick check to ensure the record count returned doesn't exceed the max allowed.
if @RecordsPerPage is null or @RecordsPerPage > @MaxRows
begin
set @RecordsPerPage = @MaxRows
end;
--Same here, making sure not to return NULL to your dynamic statement. If null is returned from any variable, the entire statement will become null.
if @StartIndex is null
begin
set @StartIndex = 0
end;
set @FirstRow = @StartIndex * @RecordsPerPage
declare @Sql nvarchar(2000) = 'select
tos.*
from ##TestOffset as tos
order by tos.RandomNumber desc
offset ' + convert(nvarchar(10), @FirstRow) + ' rows
fetch next ' + convert(nvarchar(10), @RecordsPerPage) + ' rows only'
exec (@Sql);
end
go
exec dbo.TestOffsetProc;
drop table ##TestOffset;
drop procedure dbo.TestOffsetProc;

If any values in this list exist else give me everything

In a SQL Server proc we may pass in a string list of values. Sometimes this can be an empty string. We break this string (which is a csv string) and store each value in a temp table. Again, it can be blank and so in that case the temp table is empty.
What we're trying to do in the where clause is have it run against the temp table but if there is no data run it against everything. There is a 'trick' that I've never used before and not sure if I fully understand that others at work have used but it's not working for this query (it kills performance and the query never comes back when it should in 4 seconds otherwise).
Below is an example query and the where part is the key:
DECLARE @myList AS NVARCHAR(MAX)
SET @myList = '73'
CREATE TABLE #TempList (data int)
INSERT INTO #TempList
SELECT * FROM BreakCSV(@myList, ',')
DECLARE @Count int
SELECT @Count = COUNT(data) FROM #TempList sl WHERE ISNULL(sl.data, 0) <> 0
SELECT *
FROM MyTable
WHERE MyDateField BETWEEN '1/1/2015' AND '3/1/2015'
AND (MyIdField IN (SELECT data FROM #TempList WHERE data <> 0) OR @Count = 0)
That OR part with the @Count is the trick. If #TempList has no records then @Count equals 0, and 0 = 0 should cause it to pull the entire table. However, even if #TempList has a record in it, the mere fact of having the OR @Count = 0 makes the query never come back. If I comment out the @Count = 0 part, it returns in 4 seconds as expected.
I'm curious if someone could explain the logic in this thought process and if there is a different way to do something like this without an IF statement and duplicating this query with just different where clauses when you want some specific values or you want them all WITHOUT having to specify them all. Also, no dynamic sql.
Using your example code, this would seem to be a better way to optimize. Remember - sometimes, more code is better.
DECLARE @myList AS NVARCHAR(MAX)
DECLARE @Count int
SET @myList = '73'
CREATE TABLE #TempList (data int)
INSERT INTO #TempList
SELECT DISTINCT data
FROM BreakCSV(@myList, ',')
WHERE data != 0
SET @Count = @@ROWCOUNT
CREATE UNIQUE INDEX ix1 ON #TempList(data)
IF @Count = 0
SELECT *
FROM MyTable
WHERE MyDateField BETWEEN '1/1/2015' AND '3/1/2015'
ELSE
SELECT *
FROM MyTable
INNER JOIN #TempList
ON MyIdField = data
WHERE MyDateField BETWEEN '1/1/2015' AND '3/1/2015'
You overengineered this. Get rid of this:
DECLARE @Count int
SELECT @Count = COUNT(data) FROM #TempList sl WHERE ISNULL(sl.data, 0) <> 0
Change this:
AND (MyIdField IN (SELECT data FROM #TempList WHERE data <> 0) OR @Count = 0)
to this:
AND (MyIdField IN (SELECT data FROM #TempList WHERE data <> 0) OR @myList IS NULL OR @myList = '')
This is a common approach when writing stored procedures with optional parameters.
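Separately, part of why the OR @Count = 0 form never comes back is that SQL Server compiles one plan that must serve both branches. Since dynamic SQL and IF/ELSE are both off the table here, OPTION (RECOMPILE) is a commonly used middle ground. A sketch against the question's tables, not verified on this data:

```sql
SELECT *
FROM MyTable
WHERE MyDateField BETWEEN '20150101' AND '20150301'  -- unambiguous date literals
  AND (MyIdField IN (SELECT data FROM #TempList WHERE data <> 0)
       OR @Count = 0)
-- Recompiling per execution lets the optimizer see @Count's actual value
-- and prune the dead branch of the OR.
OPTION (RECOMPILE);
```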

Delete table rows in loop

We are using SQL Server 2000. We have a heavy database with over 100000 images. Currently I'm deleting records with this query:
DELETE FROM T_JBSHEETDATA
WHERE (F_JBREF NOT IN (SELECT JOB_REF_NUMBER
FROM T_JBDTLS))
but unfortunately it only deletes 500 records at a time. If I take more records the server dies (server timeout). How do I create a loop of x rows until it's finished?
Below is a way to do "top N deletes".
A few ideas. You can adjust @TopSize until you find a good "Goldilocks" value: not too big, not too small.
You could also remove the while loop and instead track the row count after the delete statement. If you have client code, return the delete count and keep calling the (stored procedure?) over and over until the delete count is zero.
NOW, I would see if an index could improve performance before resorting to the below.
But I'm trying to answer your question as asked.
/* START TSQL */
if exists (SELECT * FROM information_schema.tables WHERE table_schema = 'dbo' and table_name = 'Television')
BEGIN
DROP TABLE [dbo].[Television]
END
GO
CREATE TABLE [dbo].[Television] (
TelevisionUUID [uniqueidentifier] not null default NEWSEQUENTIALID() ,
TelevisionName varchar(64) not null ,
TelevisionKey int not null ,
IsCheckedOut bit default 0
)
GO
ALTER TABLE dbo.Television ADD CONSTRAINT PK_Television_TelevisionUUID
PRIMARY KEY CLUSTERED (TelevisionUUID)
GO
ALTER TABLE dbo.Television ADD CONSTRAINT CK_Television_TelevisionName_UNIQUE
UNIQUE (TelevisionName)
GO
set nocount on
declare @counter int
select @counter = 11000
declare @currentTVName varchar(24)
declare @TopSize int
select @TopSize = 10
while @counter > 10000 /* this loop counter is ONLY here to generate fake data; do not use this syntax for production code */
begin
select @currentTVName = 'TV: ' + convert(varchar(24), @counter)
INSERT into dbo.Television ( TelevisionName , TelevisionKey ) values ( @currentTVName , @counter )
select @counter = @counter - 1
end
/* Everything above is just setup data, the crux of the code is below this line */
select count(*) as TV_Total_COUNT_Pre from dbo.Television
declare @DeleteLoopCounter int
select @DeleteLoopCounter = 0
while exists ( select top 1 * from dbo.Television )
BEGIN
select @DeleteLoopCounter = @DeleteLoopCounter + 1
;
WITH cte1 AS
( SELECT
TOP (@TopSize)
TelevisionUUID , /* <<Note, the columns here must be available to the output */
IsCheckedOut
FROM
dbo.Television tv
WITH ( UPDLOCK, READPAST , ROWLOCK ) /* <<Optional Hints, but helps with concurrency issues */
/* WHERE conditions can be put there as well */
ORDER BY /* order by is optional, and I would probably remove it for a delete operation */
tv.TelevisionKey DESC
)
/* UPDATE cte1 SET IsCheckedOut = 1 */ /* this code has nothing to do with the delete solution, but shows how you could use this same trick for an "update top N" */
Delete deleteAlias
from dbo.Television deleteAlias
where exists ( select null from cte1 innerAlias where innerAlias.TelevisionUUID = deleteAlias.TelevisionUUID )
;
print '/@DeleteLoopCounter/'
print @DeleteLoopCounter
print ''
select count(*) as TV_Total_COUNT_Post from dbo.Television
END
EDIT
Sql Server 2000 specific info:
NOTE: since you have 2000, you will have to HARD-CODE the @TopSize value, because TOP with a variable is not supported there. But I will leave the code "as is" for future readers. Again, you'll have to remove @TopSize and use a literal value like "1000" or similar, and remove the parentheses around it near the SELECT.
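For SQL Server 2000 specifically, the usual batching idiom is SET ROWCOUNT, which caps how many rows each DELETE touches. An untested sketch using the tables from the question:

```sql
-- Caps every following data-modification statement at 500 rows
SET ROWCOUNT 500

DECLARE @Deleted int
SET @Deleted = 1

WHILE @Deleted > 0
BEGIN
    DELETE FROM T_JBSHEETDATA
    WHERE F_JBREF NOT IN (SELECT JOB_REF_NUMBER FROM T_JBDTLS)

    SET @Deleted = @@ROWCOUNT   -- reaches 0 once nothing qualifies
END

SET ROWCOUNT 0   -- always reset, or later statements stay capped
```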
You might be able to improve the performance of your query instead of trying to figure out how to delete in a loop.
1. Make sure there is an index on table T_JBDTLS.JOB_REF_NUMBER. If it's missing that might be the cause of the slowdown.
2. Change your query to use a join instead of a sub-select. Something like:
DELETE FROM T_JBSHEETDATA
FROM T_JBSHEETDATA D
LEFT JOIN T_JBDTLS J ON D.F_JBREF = J.JOB_REF_NUMBER
WHERE J.JOB_REF_NUMBER IS NULL
3. Are there any triggers on the table that you're deleting from? What about cascade deletes? Triggers on the table that is being cascade-deleted?
You may also want to try a "not exists" clause instead of a "not in".
I believe the below is the correct translation.
Delete deleteAlias
/* select deleteAlias.* */
from dbo.T_JBSHEETDATA deleteAlias
where not exists ( select null from dbo.[T_JBDTLS] innerDets where innerDets.JOB_REF_NUMBER = deleteAlias.F_JBREF )
Here is a generic Northwind version that will delete any Order(s) that do(es) not have any (child) Order-Details.
Use Northwind
GO
Delete deleteAlias
/* select deleteAlias.* */
from dbo.Orders deleteAlias
where not exists ( select null from dbo.[Order Details] innerDets where innerDets.OrderId = deleteAlias.OrderId )

SQL Azure doesn't support 'select into' - Is there another way?

I have a very complicated table I'd like to take a temporary backup of whilst I make some changes. Normally, I'd just do the following:
SELECT *
INTO temp_User
FROM dbo.[User] AS u
Unfortunately I'm using Azure, and it appears this isn't supported:
Msg 40510, Level 16, State 1, Line 2 Statement 'SELECT INTO' is not
supported in this version of SQL Server.
Is there a way to re-create this feature into a function, potentially? I could do this by scripting the table, creating it and then inserting data using a select statement but given how frequently I use Azure, and how many databases I need to work on in this area this is very unwieldy.
Azure requires a clustered index on all tables, therefore SELECT INTO is not supported.
You'll have to:
CREATE TABLE temp_User () --fill in table structure
INSERT INTO temp_User
SELECT *
FROM dbo.[User]
To script table easily you can write your own or use one of the answers to this question:
Script CREATE Table SQL Server
Update: As Jordan B pointed out, V12 will include support for heaps (no clustered index requirement) which means SELECT INTO will work. At the moment V12 Preview is available, Microsoft of course only recommends upgrading with test databases.
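For a concrete picture, a hypothetical dbo.[User] with Id, Name and Email columns would be copied like this (the column names are made up; script the real structure from SSMS):

```sql
-- Hypothetical structure - script the real one from the source table
CREATE TABLE temp_User
(
    Id    int           NOT NULL,
    Name  nvarchar(100) NULL,
    Email nvarchar(256) NULL,
    -- pre-V12 Azure SQL Database requires a clustered index on every table
    CONSTRAINT PK_temp_User PRIMARY KEY CLUSTERED (Id)
);

INSERT INTO temp_User (Id, Name, Email)
SELECT Id, Name, Email
FROM dbo.[User];
```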
The new Azure DB Update preview has this problem resolved:
The V12 preview enables you to create a table that has no clustered
index. This feature is especially helpful for its support of the T-SQL
SELECT...INTO statement which creates a table from a query result.
http://azure.microsoft.com/en-us/documentation/articles/sql-database-preview-whats-new/
Unfortunately it can't be done. Here is how I worked around it:
Open SQL Server Management Studio
Right click on the table
Select Script as ... Create Table
Edit the generated script to change the table name to what you specified in your query
Execute your query
INSERT INTO temp_User
SELECT * FROM dbo.[User]
You can try the above. It's basically a select that is applied to an insert statement
http://blog.sqlauthority.com/2011/08/10/sql-server-use-insert-into-select-instead-of-cursor/
Lets assume you have a table with Id, Column1 and Column2. Then this could be your solution
CREATE TABLE YourTableName_TMP ....
GO
SET IDENTITY_INSERT YourTableName_TMP ON
GO
INSERT INTO YourTableName_TMP
([Id] ,[Column1] ,[Column2])
SELECT [Id] ,[Column1] ,[Column2]
FROM
(
SELECT *
FROM
(
SELECT [Id] ,[Column1] ,[Column2] ,ROW_NUMBER() OVER(ORDER BY Id DESC) AS RowNum
FROM YourTableName
) AS Windowed
WHERE RowNum BETWEEN 0 AND 500000
) AS Batch
GO
SET IDENTITY_INSERT YourTableName_TMP OFF
GO
First you create a temporary table, then you insert the rows in windows. It's a mess, I know. In my experience, executing this from SQL Server Management Studio on a client copies approximately 200,000 rows a minute.
As written above, you need to rewrite your query from using SELECT INTO to CREATE TABLE plus INSERT ... SELECT.
Here is my sample. Before:
select emrID, displayName --select into
into #tTable
from emrs
declare @emrid int
declare @counter int = 1
declare @displayName nvarchar(max)
while exists (select * from #tTable)
begin
-- some business logic
select top 1 @displayName = displayname
from #tTable
group by displayname
update emrs set groupId = @counter where @displayName = displayname
delete #tTable
where @displayName = displayname
set @counter = @counter + 1
end
drop table #tTable
Modified :
CREATE TABLE #tTable ([displayName] nvarchar(max)) --create table
INSERT INTO #tTable -- insert to next select :
select displayName
from emrs
declare @emrid int
declare @counter int = 1
declare @displayName nvarchar(max)
while exists (select * from #tTable)
begin
-- some business logic
select top 1 @displayName = t.displayName
from #tTable as t
group by t.displayname
update emrs set groupId = @counter where @displayName = displayname
delete #tTable
where @displayName = displayname
set @counter = @counter + 1
end
drop table #tTable
Do not forget to drop your temp table.
Also, you can find more simple example with description here :
http://www.dnnsoftware.com/wiki/statement-select-into-is-not-supported-in-this-version-of-sql-server