Assigning a value along with data retrieval - SQL

Is there a way to combine assigning a value to a variable and selecting a column in SQL? I need to compute and select a column in a table based on the variable. The variable's value changes based on another column in the table.
Variable: @BeginValue
Column in table: ReducedBy
My initial begin value is stored in @BeginValue. The table has ReducedBy, which is a factor by which my begin value should be reduced. So when I select, the begin value for the first record would be @BeginValue, and the end value would be the result of @BeginValue = @BeginValue - ReducedBy, which then becomes the begin value for the next record. It continues like this, as many times as the number of records in my table.
Result set must be like this:
@Begin = 10
Begin End ReducedBy
10 8 2
8 6 2
6 5 1
Is there a way I can achieve this without using a cursor or multiple update statements?

You can't assign in a query that returns a result set. The closest you can get is to store the result in a table variable; you can then both do computations against that table and return it as a result set:
-- Store results in table variable
declare @tbl table (id int, col1 int, ...)
insert @tbl
  (id, col1, ...)
select id
  , col1
  , ...
from ... your query here ...
-- Assign variable
select @YourVariable = ... your computation here ...
from @tbl
-- Return result set
select *
from @tbl

If your question is
Can I do..
SELECT @a = field, field2 FROM table
and get a result set and set the value of @a?
Then the answer is no, not in a single statement.
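That said, once the rows are in a table the running reduction itself can be computed without a cursor. This is only a minimal sketch: it assumes SQL Server 2012 or later for the windowed SUM, and a hypothetical id column that defines the row order (the sample data does not show one):
DECLARE @BeginValue int = 10;

-- Running total of ReducedBy, ordered by the assumed id column,
-- yields the Begin and End values for every row in a single pass
SELECT @BeginValue - SUM(ReducedBy) OVER (ORDER BY id ROWS UNBOUNDED PRECEDING) + ReducedBy AS [Begin]
     , @BeginValue - SUM(ReducedBy) OVER (ORDER BY id ROWS UNBOUNDED PRECEDING)             AS [End]
     , ReducedBy
FROM YourTable;   -- hypothetical table name

With the sample data (ReducedBy = 2, 2, 1) this returns Begin/End pairs 10/8, 8/6 and 6/5, matching the expected result set.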

Related

How to return all records using IN clause when parameter is empty?

I'm passing a parameter like '1,2,3' to this statement:
SELECT *
FROM Negative
WHERE IdNegative IN (@IdNegative)
But I want to return all records (for example the records with ids 1,2,3,4,5,6...) when the parameter is empty. Is there a way to do this?
Thank you.
Parameters and variables are not simply pasted into the query text by the compiler; they are inserted as actual compiled values, so in this case the string '1,2,3' is never going to be equal to any of the numbers 1, 2 or 3.
You need to pass the ids as a table variable or a Table-Valued Parameter instead:
DECLARE @IdNegative TABLE (id int PRIMARY KEY);
INSERT @IdNegative VALUES (1),(2),(3);

SELECT *
FROM Negative
WHERE IdNegative IN (SELECT id FROM @IdNegative);
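If the parameter has to stay a single comma-separated string, here is a minimal sketch of the "empty means all rows" behaviour; it assumes SQL Server 2016 or later for STRING_SPLIT and that the ids are integers:
DECLARE @IdNegative nvarchar(100) = N'1,2,3';   -- pass N'' (or NULL) to get all rows

SELECT *
FROM Negative
WHERE ISNULL(@IdNegative, N'') = N''            -- empty parameter: no filtering
   OR IdNegative IN (SELECT TRY_CAST(value AS int)
                     FROM STRING_SPLIT(@IdNegative, ','));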

Checking if field contains multiple string in sql server

I am working on a SQL database which will provide data for a grid. The grid will enable filtering, sorting and paging, but there is also a strict requirement that users can enter free text into a text input above the grid, for example
'Engine 1001 Requi', and the result must contain only rows whose columns, between them, contain all the pieces of the text. So one column may contain Engine, another column may contain 1001, and some other column will contain Requi.
I created a technical column (let's call it myTechnicalColumn) in the table (let's call it myTable) which is updated each time someone inserts or updates a row; it contains the values of all the columns combined, separated by spaces.
Now, to use it with Entity Framework, I decided to use a table-valued function which accepts one parameter @searchText and handles it like this:
CREATE FUNCTION myFunctionName(@searchText NVARCHAR(MAX))
RETURNS @Result TABLE
( ... here come columns )
AS
BEGIN
    DECLARE @searchToken TokenType
    INSERT INTO @searchToken(token) SELECT value FROM STRING_SPLIT(@searchText, ' ')

    DECLARE @searchTextLength INT
    SET @searchTextLength = (SELECT COUNT(*) FROM @searchToken)

    INSERT INTO @Result
    SELECT
        ... here come columns
    FROM myTable
    WHERE (SELECT COUNT(*) FROM @searchToken WHERE CHARINDEX(token, myTechnicalColumn) > 0) = @searchTextLength

    RETURN;
END
Of course the solution works fine, but it's kind of slow. Any hints on how to improve its efficiency?
You can use an inline table-valued function, which should be quite a lot faster.
This would be a direct translation of your current code:
CREATE FUNCTION myFunctionName(@searchText NVARCHAR(MAX))
RETURNS TABLE
AS RETURN
(
    WITH searchText AS (
        SELECT s.token
        FROM STRING_SPLIT(@searchText, ' ') s(token)
    )
    SELECT
        ... here come columns
    FROM myTable t
    WHERE (
        SELECT COUNT(*)
        FROM searchText
        WHERE CHARINDEX(searchText.token, t.myTechnicalColumn) > 0
    ) = (SELECT COUNT(*) FROM searchText)
);
GO
You are using a form of query called Relational Division Without Remainder and there are other ways to cut this cake:
CREATE FUNCTION myFunctionName(@searchText NVARCHAR(MAX))
RETURNS TABLE
AS RETURN
(
    WITH searchText AS (
        SELECT s.token
        FROM STRING_SPLIT(@searchText, ' ') s(token)
    )
    SELECT
        ... here come columns
    FROM myTable t
    WHERE NOT EXISTS (
        SELECT 1
        FROM searchText
        WHERE CHARINDEX(searchText.token, t.myTechnicalColumn) = 0
    )
);
GO
This may be faster or slower depending on a number of factors; you need to test.
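For reference, an inline TVF behaves like a parameterised view, so the usual way to consume it (and the shape Entity Framework maps to) is simply to select from it; a minimal usage sketch:
SELECT *
FROM dbo.myFunctionName(N'Engine 1001 Requi');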
Since there is no data to test with, I am not sure if the following will solve your issue:
-- Replace the last INSERT portion
INSERT INTO @Result
SELECT
    ... here come columns
FROM myTable T
JOIN @searchToken S ON CHARINDEX(S.token, T.myTechnicalColumn) > 0

How does one automatically insert the results of several function calls into a table?

Wasn't sure how to title the question but hopefully this makes sense :)
I have a table (OldTable) with an index and a column of comma-separated lists. I'm trying to split the strings in the list column and create a new table that couples each index with each of the substrings of the string it was connected to in the old table.
Example:
OldTable
index | list
1 | 'a,b,c'
2 | 'd,e,f'
NewTable
index | letter
1 | 'a'
1 | 'b'
1 | 'c'
2 | 'd'
2 | 'e'
2 | 'f'
I have created a function that will split the string and return each substring as a record in a table, like so:
SELECT * FROM Split('a,b,c', ',', 1)
Which will result in:
Result
index | string
1 | 'a'
1 | 'b'
1 | 'c'
I was hoping that I could use this function as so:
SELECT * FROM Split((SELECT * FROM OldTable), ',')
And then use the id and string columns from OldTable in my function (by re-writing it slightly) to create NewTable. But as far as I understand, sending tables into the function doesn't work, as I get: "Subquery returned more than 1 value. ... not permitted ... when the subquery is used as an expression."
One solution I was thinking of would be to run the function, as is, on all the rows of OldTable and insert the result of each call into NewTable. But I'm not sure how to iterate over each row without a function, and I can't send tables into a function to iterate, so I'm back at square one.
I could do it manually, but OldTable contains a fair number of records (1000 or so), so automation would be preferable.
Is there a way to either:
Iterate over OldTable row by row, run each row through Split(), and add the result to NewTable for all rows in OldTable, either by a function or through regular SQL transactions;
Re-write Split() to take a table variable after all; or
Get rid of the function altogether and just do it in SQL transactions?
I'd prefer not to use procedures (I don't know if there is a solution with them either), mostly because I don't want the functionality inside the DB to be exposed to the outside. If, however, that is the "best"/only way to go, I'll have to consider it. I'm quite (read: very) new to SQL, so it might be a needless worry.
Here is my Split() function if it is needed:
CREATE FUNCTION Split (
    @string nvarchar(4000),
    @delimitor nvarchar(10),
    @index int = 0
)
RETURNS @splitTable TABLE (id int, string nvarchar(4000) NOT NULL) AS
BEGIN
    DECLARE @startOfSubString smallint;
    DECLARE @endOfSubString smallint;

    SET @startOfSubString = 1;
    SET @endOfSubString = CHARINDEX(@delimitor, @string, @startOfSubString);

    IF (@endOfSubString <> 0)
        WHILE @endOfSubString > 0
        BEGIN
            INSERT INTO @splitTable
            SELECT @index, SUBSTRING(@string, @startOfSubString, @endOfSubString - @startOfSubString);

            SET @startOfSubString = @endOfSubString + 1;
            SET @endOfSubString = CHARINDEX(@delimitor, @string, @startOfSubString);
        END;

    INSERT INTO @splitTable
    SELECT @index, SUBSTRING(@string, @startOfSubString, LEN(@string) - @startOfSubString + 1);

    RETURN;
END
Hope my problem and attempt were explained clearly enough to understand.
You are looking for CROSS APPLY:
SELECT s.id, s.string
FROM OldTable t CROSS APPLY
     dbo.Split(t.list, ',', t.[index]) s;
(index is a reserved word, hence the brackets, and Split as defined above takes the index as its third parameter.) Inserting into the new table then just requires an INSERT ... SELECT or a SELECT ... INTO clause.
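A minimal sketch of the full statement, assuming NewTable already exists with [index] and letter columns:
-- Split every list in OldTable and write the (index, letter) pairs to NewTable
INSERT INTO NewTable ([index], letter)
SELECT s.id, s.string
FROM OldTable t
CROSS APPLY dbo.Split(t.list, ',', t.[index]) s;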

SQL select multiple rows of data then compare

What would be the best approach in SQL Server 2008 to select something that can return a list of up to 10 values, and then compare that data with a specific value in one of its columns?
So something like this below
SELECT bType FROM WORK_STATION WHERE nFileId = 123456789
This could return anywhere from 1 to 10 values (it will return at least one). Then I want to compare the data we just selected with the SQL statement above to a specific value, something like:
if bType = 1
--DO something
What is the best approach for doing something like this?
declare @table as table(btype int)
declare @btype int

insert into @table
SELECT bType FROM WORK_STATION WHERE nFileId = 123456789

while(exists(select top 1 'x' from @table)) -- as long as @table contains records, continue
begin
    select top 1 @btype = btype from @table

    if(@btype = 10)
        print 'something'

    delete top (1) from @table -- remove the previously processed row; also ensures no infinite loop
end
I think you can use a stored procedure to declare variables and then compare them with the result set; if you know that you have only 10 values, you can use a temp table and insert the 10 values into it.
I hope this is helpful.
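A minimal sketch of that temp-table variant, assuming all you need to know is whether any of the returned rows has bType = 1:
-- Capture the candidate rows once, then test them as a set
CREATE TABLE #btypes (bType int);

INSERT INTO #btypes (bType)
SELECT bType FROM WORK_STATION WHERE nFileId = 123456789;

IF EXISTS (SELECT 1 FROM #btypes WHERE bType = 1)
    PRINT 'do something';

DROP TABLE #btypes;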

Update multiple rows with different values in SQL

I have a table like this:
SKU Size
A 10
B 10
C 10
D 10
E 10
F 10
G 10
I want to change it to:
SKU Size
A 20
B 10
C 30
D 10
E 80
F 10
G 60
I have more than 3000 rows of records to update. How can I do that with the SQL UPDATE command?
UPDATE T
SET Size = CASE SKU
WHEN 'A' THEN 20
WHEN 'B' THEN 10
WHEN 'C' THEN 30
WHEN ...
END
Or there may be a formula for calculating the size, but you've failed to give it in your question (or we may have to switch to a more complex CASE expression, but again, there is too little detail in the question).
Create a table with the mapping of SKU to new size; update the master table from that.
Many dialects of SQL have a notation for doing updates via joined tables. Some do not. This will work where there is no such notation:
CREATE TABLE SKU_Size_Map
(
SKU CHAR(16) NOT NULL,
Size INTEGER NOT NULL
);
...Populate this table with the SKU values to be set...
...You must have such a list...
UPDATE MasterTable
SET Size = (SELECT Size FROM SKU_Size_Map
            WHERE MasterTable.SKU = SKU_Size_Map.SKU)
WHERE SKU IN (SELECT SKU FROM SKU_Size_Map);
The main WHERE condition is needed to avoid setting the size to NULL where there is no matching row.
You can probably also do it with a MERGE statement. But the key insight for any of these notations is that you need a table to do the mapping between SKU and size. You either need a table or you need an algorithm, and the sample data doesn't suggest an algorithm.
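For example, on SQL Server a minimal sketch of the MERGE variant, using the same SKU_Size_Map mapping table as above, could look like this:
-- Update Size for every SKU that has an entry in the mapping table
MERGE MasterTable AS m
USING SKU_Size_Map AS s
    ON m.SKU = s.SKU
WHEN MATCHED THEN
    UPDATE SET Size = s.Size;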
You could also make use of OpenXML to resolve your issue.
Example:
declare @i int
exec sp_xml_preparedocument @i output,
'<mydata>
<test xmlID="3" xmlData="blah blah blah"/>
<test xmlID="1" xmlData="blah"/>
</mydata>'

insert into test
select xmlID, xmlData
from OpenXml(@i, 'mydata/test')
with (xmlID int, xmlData nvarchar(30))
where xmlID not in (select xmlID from test)

update test
set test.xmlData = ox.xmlData
from OpenXml(@i, 'mydata/test')
with (xmlID int, xmlData nvarchar(30)) ox
where test.xmlID = ox.xmlID

exec sp_xml_removedocument @i
Just do...
UPDATE [yourTable] SET Size = 20 WHERE SKU = 'A'
And do this for all values you want to change...
Well, if you don't have a formula to calculate your Sizes, and you don't have a file or an Excel sheet with the data that you can massage into your table, you'll just have to get some luckless intern to type something like
UPDATE <table> SET Size = <value> WHERE SKU = '<key>'
3000 times.
If you are that intern, I'd suggest giving us a little more information...
Since you want to change the whole column, you could drop that particular column using this:
ALTER TABLE table_name
DROP COLUMN column_name;
then create a new column using:
ALTER TABLE table_name
ADD column_name varchar(80);