I have a table variable with about 20 columns. I'd like to reuse a single table variable structure for two different result sets. The two result sets need to live in separate table variables, so I can't reuse one variable for both. I was therefore wondering if there is a way to clone a table variable's structure for reuse. For example, something like this:
DECLARE @MyTableVar1 TABLE(
Col1 INT,
Col2 INT
)
DECLARE @MyTableVar2 TABLE = @MyTableVar1
I'd like to avoid creating duplicate SQL if I can reuse existing SQL.
That is not possible with table variables; use a temp table instead:
if object_id('tempdb..#MyTempTable1') is not null drop table #MyTempTable1
Create TABLE #MyTempTable1 (
Col1 INT,
Col2 INT
)
if object_id('tempdb..#MyTempTable2') is not null drop table #MyTempTable2
select * into #MyTempTable2 from #MyTempTable1
Update:
As suggested by Eric in a comment, if you only want the table schema and not the data inside the first table, then:
select * into #MyTempTable2 from #MyTempTable1 where 1 = 0
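An equivalent schema-only copy, if you prefer it over the WHERE 1 = 0 form, is SELECT TOP 0 (just a variation, not a requirement):

-- copies only the column definitions; reads no rows
select top 0 * into #MyTempTable2 from #MyTempTable1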
You can create a user-defined table type, which is typically meant for passing table-valued parameters to stored procedures. Once the type is created, you can use it to declare any number of table variables, just like built-in types. This comes closest to your requirement.
Ex:
CREATE TYPE MyTableType AS TABLE
( COL1 int
, COL2 int )
DECLARE @MyTableVar1 AS MyTableType
DECLARE @MyTableVar2 AS MyTableType
A few things to note with this solution:
MyTableType becomes a database-level type. It is not local to a specific stored procedure.
If you ever have to change the definition of the table, you have to drop the code/sprocs using the TVP type, then recreate the table type with the new definition and the related sprocs. Typically this is a non-issue, as the code and the type are created/recreated together.
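For completeness, here is a rough sketch of how such a type is typically consumed as a table-valued parameter; the procedure name and the SELECT inside it are illustrative only:

-- hypothetical procedure taking the table type as a read-only TVP
CREATE PROCEDURE dbo.ProcessMyTable
    @Rows MyTableType READONLY
AS
    SELECT COL1, COL2 FROM @Rows
GO

DECLARE @MyTableVar1 AS MyTableType
INSERT INTO @MyTableVar1 (COL1, COL2) VALUES (1, 2)
EXEC dbo.ProcessMyTable @Rows = @MyTableVar1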
You could use a temp table and SELECT INTO... temp tables often perform better because SQL Server keeps statistics on them.
create table #myTable(
Col1 INT null,
Col2 INT null
)
...
select *
into #myTableTwo
from #myTable
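One more benefit of the temp table route: you can index the copy afterwards, which a table variable does not allow in the same way. The index name and key column below are only an example:

-- hypothetical index; pick whatever column you actually filter or join on
create index IX_myTableTwo_Col1 on #myTableTwo (Col1)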
You can create one table variable, add a Type column to it, and use that column in your queries to filter the data.
This way a single table variable holds more than one kind of result set.
Hope this helps.
declare @myTable table(
Col1 INT null,
Col2 INT null,
....
Type INT NULL
)
insert into @myTable(...,Type)
select ......,1
insert into @myTable(...,Type)
select ......,2
select * from @myTable where Type = 1
select * from @myTable where Type = 2
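A concrete version of the sketch above, using just the two columns from the question; the inserted values are purely illustrative:

declare @myTable table(
Col1 INT null,
Col2 INT null,
Type INT NULL
)
-- first result set tagged as Type = 1
insert into @myTable(Col1, Col2, Type)
select 1, 10, 1
-- second result set tagged as Type = 2
insert into @myTable(Col1, Col2, Type)
select 2, 20, 2
select * from @myTable where Type = 1
select * from @myTable where Type = 2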
Related
I am trying to use a table-valued parameter to pass a list of values to an IN clause in SQL Server. To do so I need to declare a table type.
The SQL expression:
DECLARE TYPE myType AS TABLE( myid varchar(50) NULL);
is giving the error:
'myType' is not a recognized CURSOR option.
Replacing DECLARE with CREATE works fine.
But using CREATE requires dropping the type in any subsequent SQL calls, like:
IF TYPE_ID('myType') IS NOT NULL DROP TYPE myType;
But I would like to use the type "myType" within this SQL batch only.
How can I declare a table type without creating it permanently and without having to delete it in every request?
DECLARE requires the following syntax:
DECLARE @LastName NVARCHAR(30), @FirstName NVARCHAR(20), @Country NCHAR(2);
With DECLARE, you can:
Assign a variable name (using @).
Assign a variable type.
Assign a NULL default variable value.
So "DECLARE TYPE" is not a valid statement.
Have a look at temporary tables in SQL.
You can create a temporary table with the following code:
CREATE TABLE #TempTable
(
name VARCHAR(50),
age int,
gender VARCHAR (50)
)
INSERT INTO #TempTable
SELECT name, age, gender
FROM student
WHERE gender = 'Male'
With a temporary table, you can store a subset of data from a standard table for a certain period of time.
Temporary tables are stored inside “tempdb” which is a system database.
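A temp table is dropped automatically when the session (or the stored procedure that created it) ends, but you can also drop it explicitly once you are done with it:

DROP TABLE #TempTable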
Can I do something like this? The column is of type nchar(8), but the string I want to store in the table is longer than that.
The reason I am doing this is that I want to convert from one table to another. Table A's column is nchar(8) and Table B's is nvarchar(100). I want all characters in Table B transferred to Table A without losing a single character.
If the nvarchar(100) contains only Latin characters with a length of up to 16 chars, then you can squeeze the nvarchar(100) into the nchar(8):
declare @t table
(
col100 nvarchar(100),
col8 nchar(8)
);
insert into @t(col100) values('1234567890123456');
update @t
set col8 = cast(cast(col100 as varchar(100)) as varbinary(100))
select *, cast(cast(cast(col8 as varbinary(100)) as varchar(100)) as nvarchar(100)) as from8to100_16charsmax
from @t;
If you cannot modify A, then you cannot use it to store the data. Create another table for the overflow . . . something like:
create table a_overflow (
a_pk int primary key references a(pk),
[column] nvarchar(max) -- why stop at 100?
);
Then, you can construct a view to bring in the data from this table when viewing a:
create view vw_a as
select . . . , -- all the other columns
coalesce(ao.[column], a.[column]) as [column]
from a left join
a_overflow ao
on ao.a_pk = a.pk;
And, if you really want to "hide" the view, you can create an insert trigger on vw_a, which inserts the appropriate values into the two tables.
This is a lot of work. Simply modifying the table is much simpler. That said, this approach is sometimes needed when you need to modify a large table and altering a column would incur too much locking overhead.
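For illustration, a minimal sketch of that INSTEAD OF INSERT trigger, assuming the base column in a is nchar(8), that pk is supplied by the caller, and keeping the placeholder name [column] from the view:

create trigger tr_vw_a_insert on vw_a
instead of insert
as
begin
    -- the first 8 characters always go into the base table
    insert into a (pk, [column])
    select pk, left([column], 8)
    from inserted;

    -- longer values are kept in full in the overflow table
    insert into a_overflow (a_pk, [column])
    select pk, [column]
    from inserted
    where len([column]) > 8;
end;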
I am populating a table variable with data; below is the definition of my table variable.
DECLARE @PurgeFilesList TABLE
(
JobFileID BIGINT,
ClientID INT,
StatusID INT,
IsPurgeSuccessfully BIT,
ReceivedDate DATETIME,
FilePath VARCHAR(2000),
StatementPath VARCHAR(2000)
)
After the insert logic that populates the table variable, I am making an additional join with a table named Account:
SELECT
JobFileID,
PFL.ClientID,
StatusID,
IsPurgeSuccessfully,
ReceivedDate,
CASE
WHEN FilePath IS NULL THEN StatementPath
ELSE FilePath
END AS FilePath
FROM
@PurgeFilesList PFL
INNER JOIN
Account A WITH (NOLOCK) ON ISNULL(PFL.ClientID, 0) = ISNULL(A.ClientID, 0)
AND A.HoldStatementPurge = 0
But this join is taking too much time, even though the total number of rows in the Account table is less than 5000.
Account table schema:
Column_name Type Computed Length
-----------------------------------------------
AccountID bigint no 8
AccountNumber varchar no 32
PrimaryCustomerName varchar no 100
LastName varchar no 100
ClientName varchar no 32
BankID varchar no 32
UpdatedDate datetime no 8
IsPurged bit no 1
PurgeDate datetime no 8
ClientID int no 4
HoldStatementPurge bit no 1
Kindly let me know if any other info is required.
Execution Plan:
Since you are not using any columns from Account, I would use EXISTS:
select fl.JobFileID, fl.ClientID, fl.StatusID,
fl.IsPurgeSuccessfully, fl.ReceivedDate,
isnull(FilePath, StatementPath) as FilePath
from @PurgeFilesList fl
where fl.ClientID is null or
exists (select 1
from Account a
where a.clientid = fl.clientid and a.HoldStatementPurge = 0
);
For performance, an index on Account(ClientID, HoldStatementPurge) would be helpful, and the same goes for the table variable. Just make sure your table variable holds a fairly small amount of data; if that is not the case, you will need to use a temporary table instead and create an appropriate index on it.
Your Account schema is missing the nullable (yes/no) information. Having said that, I assume Account.ClientID is not nullable, so ISNULL(PFL.ClientID, 0) = A.ClientID would do too. Anyway.
My guess is you are missing a couple of well placed indexes here such as:
CREATE INDEX IX_Account_ClientID_HoldStatementPurge ON Account(ClientID, HoldStatementPurge)
Or just
CREATE INDEX IX_Account_ClientID ON Account(ClientID)
I'd say try creating both, checking the query plan each time.
Also, you might want to use a temporary table (CREATE TABLE #TempTable ...) for this scenario instead of a table variable (DECLARE @TempTable TABLE ...) so you can apply an additional index to speed things up:
CREATE TABLE #PurgeFilesList
(
JobFileID BIGINT PRIMARY KEY,
ClientID INT,
StatusID INT,
IsPurgeSuccessfully BIT,
ReceivedDate DATETIME,
FilePath VARCHAR(2000),
StatementPath VARCHAR(2000)
)
CREATE INDEX IX_PurgeFilesList_ClientID ON #PurgeFilesList(ClientID)
The reason for this is that you cannot add non-clustered indexes to a table variable after it is declared; only inline constraints such as a primary key are permitted (SQL Server 2014 and later also allow inline index definitions).
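For reference, a shortened sketch of the declaration from the question with those inline options; the index name is illustrative:

DECLARE @PurgeFilesList TABLE
(
JobFileID BIGINT PRIMARY KEY,      -- inline primary key
ClientID INT,
INDEX IX_ClientID (ClientID)       -- inline index, SQL Server 2014 and later
)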
Please check the record size in table @PurgeFilesList, and try using a temp table instead of a table variable.
This may be a very basic question, but I have been struggling with this.
I have an SSMS query that I'll be using multiple times for a large set of client IDs. It's quite cumbersome to have to amend the parameters in all the WHERE clauses every time I want to run it.
For simplicity, I want to convert a query like the one below:
SELECT
ID,
Description
From TestDb
Where ID in ('1-234908','1-345678','1-12345')
to a query of the format below so that I only need to change my variable field once and it can be applied across my query:
USE TestDb
DECLARE @ixns NVARCHAR(100)
SET @ixns = '''1-234908'',''1-345678'',''1-12345'''
SELECT
ID,
Description
From TestDb
Where ID IN @ixns
However, the above format doesn't work. Can anyone show me how to use a varchar/string variable in my WHERE clause so that I can query multiple IDs at the same time and only have to adjust/set my variable once?
Thanks in advance :D
The most appropriate solution would be to use a table variable:
DECLARE @ixns TABLE (id NVARCHAR(100));
INSERT INTO @ixns(id) VALUES
('1-234908'),
('1-345678'),
('1-12345');
SELECT ID, Description
FROM TestDb
WHERE ID IN (SELECT id FROM @ixns);
You can load the IDs into a table variable and use it in the WHERE condition:
USE TestDb
DECLARE @tmpIDs TABLE
(
id VARCHAR(50)
)
insert into @tmpIDs values ('1-234908')
insert into @tmpIDs values ('1-345678')
insert into @tmpIDs values ('1-12345')
SELECT
ID,
Description
From TestDb
Where ID IN (select id from @tmpIDs)
The most appropriate way is to create a table type, because values of this type can be passed as parameters.
1) Creating the table type with the ID column.
create type MyListID as table
(
Id int not null
)
go
2) Creating the procedure that receives this type as a parameter.
create procedure MyProcedure
(
@MyListID as MyListID readonly
)
as
select
column1,
column2
...
from
MyTable
where
Id in (select Id from @MyListID)
3) This example shows how to fill this type from your application: https://stackoverflow.com/a/25871046/8286724
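Calling the procedure directly from T-SQL looks like this; the ID values are only examples:

declare @ids MyListID
insert into @ids (Id) values (1), (2), (3)
exec MyProcedure @MyListID = @ids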
In SQL, what is the best method to query a filtered data set?
I have imagined two solutions and I would like to know the advantages and drawbacks of each.
Solution 1
I create a single procedure with my filters as parameters:
CREATE PROCEDURE [dbo].[usp_GetByFilter]
(
-- Pagination
@p_Offset int,
@p_FetchNext int,
-- Filters
@p_Param1 nvarchar(255),
@p_param2 uniqueidentifier,
@p_param3 uniqueidentifier
)
Solution 2
I create one procedure per parameter:
CREATE PROCEDURE [dbo].[usp_GetByParam1]
(
-- Pagination
@p_Offset int,
@p_FetchNext int,
-- Filters
@p_Param1 nvarchar(255)
)
CREATE PROCEDURE [dbo].[usp_GetByParam2]
(
-- Pagination
@p_Offset int,
@p_FetchNext int,
-- Filters
@p_param2 uniqueidentifier
)
CREATE PROCEDURE [dbo].[usp_GetByParam3]
(
-- Pagination
@p_Offset int,
@p_FetchNext int,
-- Filters
@p_param3 uniqueidentifier
)
Solution 3
Another way?
I think Solution 1 is the best: it allows you to filter on one or more parameters. You can set a default value for your parameters, or pass NULL when you do not want to filter by a certain parameter. The filter query could then be written this way:
SELECT
--your output
FROM
Table t
WHERE
--some conditions AND
( @p_Param1 is null OR t.column1 = @p_Param1 ) AND
( @p_Param2 is null OR t.column2 = @p_Param2 ) AND
( @p_Param3 is null OR t.column3 = @p_Param3 )
Solution 2 would require a lot of new procedures if you wanted to add more filter options or, for example, filter by parameters 2 and 3 at the same time.
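One caveat with the Solution 1 pattern: a single cached plan has to cover every combination of NULL and non-NULL parameters, so it can behave poorly due to parameter sniffing. A common mitigation is OPTION (RECOMPILE). The sketch below also wires in the pagination parameters from the signature; the table and column names are illustrative:

SELECT
    t.column1,
    t.column2
FROM
    MyTable t
WHERE
    ( @p_Param1 IS NULL OR t.column1 = @p_Param1 ) AND
    ( @p_param2 IS NULL OR t.column2 = @p_param2 ) AND
    ( @p_param3 IS NULL OR t.column3 = @p_param3 )
ORDER BY
    t.column1
OFFSET @p_Offset ROWS FETCH NEXT @p_FetchNext ROWS ONLY
OPTION (RECOMPILE)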