Return multiple tables from a T-SQL function in SQL Server 2008

You can return a single table from a T-SQL Function in SQL Server 2008.
I am wondering if it is possible to return more than one table.
The scenario is that I have three queries that filter 3 different tables. Each table is filtered against 5 filter tables that I would like to return from a function, rather than copying and pasting their creation into each query.
A simplified example of what this looks like with copy and paste:
CREATE FUNCTION GetValuesA (@SomeParameter int) RETURNS @IDs TABLE (ID int) AS
BEGIN
    WITH Filter1 AS ( SELECT id FROM FilterTable1 WHERE Attribute = @SomeParameter )
       , Filter2 AS ( SELECT id FROM FilterTable2 WHERE Attribute = @SomeParameter )
    INSERT INTO @IDs
    SELECT ID FROM ValueTableA
    WHERE ColA IN (SELECT id FROM Filter1)
      AND ColB IN (SELECT id FROM Filter2)
    RETURN
END
-----------------------------------------------------------------------------
CREATE FUNCTION GetValuesB (@SomeParameter int) RETURNS @IDs TABLE (ID int) AS
BEGIN
    WITH Filter1 AS ( SELECT id FROM FilterTable1 WHERE Attribute = @SomeParameter )
       , Filter2 AS ( SELECT id FROM FilterTable2 WHERE Attribute = @SomeParameter )
    INSERT INTO @IDs
    SELECT ID FROM ValueTableB
    WHERE ColA IN (SELECT id FROM Filter1)
      AND ColB IN (SELECT id FROM Filter2)
      AND ColC IN (SELECT id FROM Filter2)
    RETURN
END
So, the only difference between the two queries is the Table being filtered, and HOW (the Where clause).
I would like to know if I could return Filter1 & Filter2 from a function. I am also open to suggestions on different ways to approach this problem.

No.
Conceptually, how would you expect to handle a function that returned a variable number of tables? Would you JOIN on two tables at once? What if the returned fields don't line up?
Is there some reason you can't have a TVF for each filter?
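For example, each filter could be its own inline TVF (a sketch reusing the question's table and column names; dbo.GetFilter1 is an illustrative name):
CREATE FUNCTION dbo.GetFilter1 (@SomeParameter int)
RETURNS TABLE
AS
RETURN
(
    SELECT id FROM FilterTable1 WHERE Attribute = @SomeParameter
);
GO
-- GetValuesA could then use: WHERE ColA IN (SELECT id FROM dbo.GetFilter1(@SomeParameter))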

As others say, NO. A function in TSQL must return exactly one result (although that result can come in the form of a table with numerous values).
There are a couple of ways you could achieve something similar, though. A stored procedure can execute multiple SELECT statements and deliver the results to whatever called it, whether that is an application layer or something like SSMS. Many client libraries require an additional call to reach the later result sets, though; for instance, in pyodbc you need to call cursor.nextset() to access result sets after the first one.
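A minimal sketch of that stored procedure approach, reusing the question's filter tables (the procedure name is illustrative):
CREATE PROCEDURE dbo.GetFilters
    @SomeParameter int
AS
BEGIN
    -- Each SELECT comes back to the caller as a separate result set
    SELECT id FROM FilterTable1 WHERE Attribute = @SomeParameter;
    SELECT id FROM FilterTable2 WHERE Attribute = @SomeParameter;
END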
Also, inside a function you could UNION several result sets together although that would require each result set to have the same columns. One way to achieve that if they have a different column structure is to add in nulls for the missing columns for each select statement. If you needed to know which select statement returned the value, you could also add a column which indicated that. This should work with your simplified example since in each case it is just returning a single ID column, but it could get awkward very quickly if the column names or types are radically different.
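For instance, a single inline TVF could return both filters tagged with a column saying which filter each row came from (a sketch based on the question's tables; the function and column names are illustrative):
CREATE FUNCTION dbo.GetAllFilters (@SomeParameter int)
RETURNS TABLE
AS
RETURN
(
    SELECT 'Filter1' AS FilterSource, id FROM FilterTable1 WHERE Attribute = @SomeParameter
    UNION ALL
    SELECT 'Filter2' AS FilterSource, id FROM FilterTable2 WHERE Attribute = @SomeParameter
);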

Related

Insert Column with same value

I am running a query on the table "performance" and I want to insert a column with the same value for all the rows without using alter, update etc.
I wrote a case statement and it works but is there a more refined way?
Here is a short query:
SELECT id, name, class,
CASE
WHEN id IS NOT NULL THEN 'Actuals'
ELSE 'Forecast'
END AS type
FROM performance
Basically I need all the values to be labeled "Actuals".
There are many other datasets for which I will use different labels and then append all of them.
Just to be clear - I don't need to update the table performance itself.
Use a common table expression (CTE) for your case.
It adds the new column to your existing data, and you can use the result for further processing.
To your point, it does not add or insert anything into your existing database structure.
with CTE as (
SELECT id, name, class,
CASE WHEN id IS NOT NULL THEN 'Actuals' ELSE 'Forecast' END AS type
FROM table_performance
)
select * from CTE -- gives you all the columns from the table plus the extra column you needed
Or, if this condition is fixed, you could create a view for the same purpose.
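A sketch of that view, using the same table and CASE expression as above (the view name is illustrative):
CREATE VIEW vw_performance_labelled AS
SELECT id, name, class,
    CASE WHEN id IS NOT NULL THEN 'Actuals' ELSE 'Forecast' END AS type
FROM table_performance;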

SQL - Split query data stream into 2 separate tables [Theoretical Optimisation]

I am writing some SQL code to be run in MapBasic (MapInfo's Programming language). The best way to describe the question is with an example:
I want to select all records where ShipType="Barge" into a query named Barges and I want all the remaining records to be put in a query OtherShips.
I could simply use the following SQL commands:
select * from ShipsTable where ShipType = "Barge" into Barges
select * from ShipsTable where ShipType <> "Barge" into OtherShips
That's fine and all but I can't help but feel that this is inefficient. Won't SQL be searching through the database twice? Won't it find the rows of data that fit the 2nd Query during the processing of the 1st?
Instead, it would be faster if there was a command like:
select * from ShipsTable where ShipType = "Barge" into Barges ELSE into OtherShips
My question is, can you do this? Is there a command that fits this spec?
Thanks,
You could do this quite easily in SSIS with a conditional split and two different destinations.
But not really in TSQL.
However for "fun" some possibilities are looked at below.
You could create a partitioned view but the requirements that you need to meet for this are quite arduous and the execution plan just loads it all into a spool and then reads the spool twice with two different filters anyway.
CREATE TABLE Barges
(
Id INT,
ShipType VARCHAR(50) NOT NULL CHECK (ShipType = 'Barge'),
PRIMARY KEY (Id, ShipType)
)
CREATE TABLE OtherShips
(
Id INT,
ShipType VARCHAR(50) NOT NULL CHECK (ShipType <> 'Barge'),
PRIMARY KEY (Id, ShipType)
)
CREATE TABLE ShipsTable
(
ShipType VARCHAR(50) NOT NULL
)
go
CREATE VIEW ShipsView
AS
SELECT *
FROM Barges
UNION ALL
SELECT *
FROM OtherShips
GO
INSERT INTO ShipsView(Id, ShipType)
SELECT ROW_NUMBER() OVER(ORDER BY @@SPID), ShipType
FROM ShipsTable
Or you could use the OUTPUT clause and composable DML but that would require inserting both sets of rows into the first table and then cleaning out the unwanted rows afterwards (the second table would only get the correct rows and not need any clean up).
CREATE TABLE Barges2
(
ShipType VARCHAR(50) NOT NULL
)
CREATE TABLE OtherShips2
(
ShipType VARCHAR(50) NOT NULL
)
CREATE TABLE ShipsTable2
(
ShipType VARCHAR(50) NOT NULL
)
INSERT INTO Barges2
SELECT *
FROM
(
INSERT INTO OtherShips2
OUTPUT INSERTED.*
SELECT *
FROM ShipsTable2
) D
WHERE D.ShipType = 'Barge';
DELETE FROM OtherShips2 WHERE ShipType = 'Barge';
MapBasic does provide access to MapInfo's 'Invert Selection', which gives you everything that wasn't selected by your first query (assuming your first query does return results). You can call it by using its menu ID (found in Menu.def), which is 311, or, if you include menu.def at the top of the file, you can reference it through the constant M_QUERY_INVERTSELECT.
eg.
Select * from ShipsTable where ShipType = "Barge" into Barges
Run Menu Command 311
or
Run Menu Command M_QUERY_INVERTSELECT if you have included the menu definitions file.
I believe this would give you better performance than doing a second selection as in your example, but you wouldn't then be able to name the results table with an alias without doing another selection. Whether this is worth using depends on your use case; for a large query that takes quite a while, it could well save some processing time.

How to insert generated id into a results table

I have the following query
SELECT q.pol_id
FROM quot q
,fgn_clm_hist fch
WHERE q.quot_id = fch.quot_id
UNION
SELECT q.pol_id
FROM tdb2wccu.quot q
WHERE q.nr_prr_ls_yr_cov IS NOT NULL
For every row in that result set, I want to create a new row in another table (call it table1) and update pol_id in the quot table (from the above result set) with the generated primary key from the inserted row in table1.
table1 has two columns. id and timestamp.
I'm using db2 10.1.
I've tried numerous things and have been unsuccessful for quite a while. Thanks!
Simple solution: create a new table for the result set of your query, which has an identity column in it. Then, after running your query, update the pol_id field with the newly generated ID in your result table.
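A sketch of that, assuming DB2's identity-column syntax and using the UNION query from the question (result_table is an illustrative name):
CREATE TABLE result_table (
    new_id INTEGER NOT NULL GENERATED ALWAYS AS IDENTITY,
    pol_id INTEGER
);
INSERT INTO result_table (pol_id)
SELECT q.pol_id
FROM quot q, fgn_clm_hist fch
WHERE q.quot_id = fch.quot_id
UNION
SELECT q.pol_id
FROM tdb2wccu.quot q
WHERE q.nr_prr_ls_yr_cov IS NOT NULL;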
Alternatively, you can do it more manually by using the ROW_NUMBER() OLAP function, which I have often found convenient for creating IDs. For this it is convenient to use a stored procedure which does the following:
Get the maximum old ID from Table1 and write it into a variable old_max_id.
After generating the result set, write the row numbers into Table1, perhaps with something like:
INSERT INTO TABLE1
SELECT ROW_NUMBER() OVER (PARTITION BY <primary-key> ORDER BY <whatever-you-want>)
+ OLD_MAX_ID
, CURRENT TIMESTAMP
FROM (<here comes your SQL query>)
Either write the result set into a table or return a cursor to it. Here you should either use the same ROW_NUMBER statement as above or directly use the ID from Table1.

Sqlite - SELECT or INSERT and SELECT in one statement

I'm trying to avoid writing separate SQL queries to achieve the following scenario:
I have a Table called Values:
Values:
id INT (PK)
data TEXT
I would like to check if certain data exists in the table, and if it does, return its id, otherwise insert it and return its id.
The (very) naive way would be:
select id from Values where data = "SOME_DATA";
if id is not null, take it.
if id is null then:
insert into Values(data) values("SOME_DATA");
and then select it again to see its id or use the returned id.
I am trying to make the above functionality in one line.
I think I'm getting close, but I couldn't make it yet:
So far I got this:
select id from Values where data=(COALESCE((select data from Values where data="SOME_DATA"), (insert into Values(data) values("SOME_DATA"));
I'm trying to take advantage of the fact that the second select will return null and then the second argument to COALESCE will be returned. No success so far. What am I missing?
Your command does not work because in SQL, INSERT does not return a value.
If you have a unique constraint/index on the data column, you can use that to prevent duplicates if you blindly insert the value; this uses SQLite's INSERT OR IGNORE extension:
INSERT OR IGNORE INTO "Values"(data) VALUES('SOME_DATA');
SELECT id FROM "Values" WHERE data = 'SOME_DATA';
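The unique index this relies on could be declared like so (a sketch; the index name is illustrative):
CREATE UNIQUE INDEX IF NOT EXISTS idx_values_data ON "Values"(data);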

SQL query select from table and group on other column

I'm phrasing the question title poorly as I'm not sure what to call what I'm trying to do but it really should be simple.
I've a link / join table with two ID columns. I want to run a check before saving new rows to the table.
The user can save attributes through a webpage but I need to check that the same combination doesn't exist before saving it. With one record it's easy as obviously you just check if that attributeId is already in the table, if it is don't allow them to save it again.
However, if the user chooses a combination of that attribute and another one then they should be allowed to save it.
Here's an image of what I mean:
So if a user now tried to save an attribute with ID of 1 it will stop them, but I need it to also stop them if they tried ID's of 1, 10 so long as both 1 and 10 had the same productAttributeId.
I'm confusing this in my explanation but I'm hoping the image will clarify what I need to do.
This should be simple so I presume I'm missing something.
If I understand the question properly, you want to prevent the combination of AttributeId and ProductAttributeId from being reused. If that's the case, simply make them a combined primary key, which is by nature UNIQUE.
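For example (a sketch; the table and column names are assumed from the discussion below):
ALTER TABLE MyJoinTable
    ADD CONSTRAINT PK_MyJoinTable PRIMARY KEY (AttributeId, ProductAttributeId);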
If that's not feasible, create a stored procedure that runs a query against the join for instances of the AttributeId. If the query returns 0 instances, insert the row.
Here's some light code to present the idea (may need to be modified to work with your database):
IF (SELECT COUNT(1) FROM MyJoinTable WHERE AttributeId = @RequestedID) = 0
BEGIN
    INSERT INTO MyJoinTable ...
END
You can control your inserts via a stored procedure. My understanding is that
users can select a combination of Attributes, such as
just 1
1 and 10 together
1,4,5,10 (4 attributes)
These need to enter the table as a single "batch" against a (new?) productAttributeId
So if (1,10) was chosen, this needs to be blocked because 1-2 and 10-2 already exist.
What I suggest
The stored procedure should take the attributes as a single list, e.g. '1,2,3' (comma separated, no spaces, just integers)
You can then use a string splitting UDF or an inline XML trick (as shown below) to break it into rows of a derived table.
Test table
create table attrib (attributeid int, productattributeid int)
insert attrib select 1,1
insert attrib select 1,2
insert attrib select 10,2
Here I use a variable, but you can incorporate as a SP input param
declare @t nvarchar(max) set @t = '1,2,10'
select top(1)
t.productattributeid,
count(t.productattributeid) count_attrib,
count(*) over () count_input
from (select convert(xml,'<a>' + replace(@t,',','</a><a>') + '</a>') x) x
cross apply x.x.nodes('a') n(c)
cross apply (select n.c.value('.','int')) a(attributeid)
left join attrib t on t.attributeid = a.attributeid
group by t.productattributeid
order by count_attrib desc
Output
productattributeid count_attrib count_input
2 2 3
The 1st column gives you the productattributeid that has the most matches
The 2nd column gives you how many attributes were matched using the same productattributeid
The 3rd column is how many attributes exist in the input
If you compare the last 2 columns and the counts:
match - you can use that productattributeid to attach to the product, which already has all these attributes
don't match - you need to do an insert to create a new combination