How to change the CTE column value - SQL

I want to store each column value from the CTE in a variable, perform some operation on it, and finally store the variable values into another table. But there are more than 10 records in the CTE, so I am confused about how to do this.
Declare @LineRead nvarchar(max)
;with cte(ID,RecordLine) as (
select Id,RecordLine from [dbo].[WorkDataImport]
)
select @LineRead = RecordLine + 'TEmp' from cte
print @LineRead
Result is
xyzaaddda Temp
I don't know why I get only one record.

That's because you are using SELECT for variable assignment.
SQL Server supports a nonstandard assignment SELECT statement, which allows querying data and assigning multiple values obtained from the same row to multiple variables in a single statement.
The assignment SELECT has predictable behavior when exactly one row qualifies. However, if the query has more than one qualifying row, the code doesn’t fail. The assignments take place per each qualifying row, and with each row accessed, the values from the current row overwrite the existing values in the variables. When the assignment SELECT finishes, the values in the variables are those from the last row that SQL Server happened to access.
That's why you are getting only one row.
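A minimal sketch of that behavior (assuming a throwaway @demo table variable, not part of the question):

```sql
DECLARE @demo TABLE (Id int, Txt varchar(10));
INSERT INTO @demo VALUES (1, 'a'), (2, 'b'), (3, 'c');

DECLARE @v varchar(10);

-- The assignment runs once per qualifying row; each row overwrites @v,
-- so @v ends up with the value from whichever row was accessed last.
SELECT @v = Txt FROM @demo;
PRINT @v;  -- typically 'c', but which row counts as "last" is not guaranteed
```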
Replace the SELECT with SET and the code will throw an error:
SET @LineRead = RecordLine + 'TEmp'
One way is to save all the rows from the CTE into a temp table and then perform the manipulations:
;with cte(ID,RecordLine) as (
select Id,RecordLine from [dbo].[WorkDataImport]
)
select RecordLine + 'TEmp' as LineRead
into #Temp1
from cte
select * from #Temp1

Try as below:
;with mycte(ID,RecordLine) as (
select Id,RecordLine from [dbo].[WorkDataImport]
)
select RecordLine + 'TEmp' as LineRead into #temp from mycte
Then retrieve all the records from #temp (the temp table):
select * from #temp
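If each row truly needs row-by-row processing before being stored in another table, a cursor can do it; this is only a sketch, and dbo.WorkDataProcessed is a hypothetical target table:

```sql
DECLARE @Id int, @LineRead nvarchar(max);

DECLARE line_cursor CURSOR FOR
    SELECT Id, RecordLine FROM [dbo].[WorkDataImport];

OPEN line_cursor;
FETCH NEXT FROM line_cursor INTO @Id, @LineRead;
WHILE @@FETCH_STATUS = 0
BEGIN
    -- perform the per-row operation here
    SET @LineRead = @LineRead + 'TEmp';

    INSERT INTO dbo.WorkDataProcessed (Id, LineRead)  -- hypothetical target table
    VALUES (@Id, @LineRead);

    FETCH NEXT FROM line_cursor INTO @Id, @LineRead;
END;
CLOSE line_cursor;
DEALLOCATE line_cursor;
```

When the operation can be written as an expression, a set-based INSERT ... SELECT is usually preferable to a cursor.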


Is there a way to convert SQL's OUTPUT INSERTED.ID INTO into a MERGE statement? [duplicate]

This question already has answers here:
Is it possible for the SQL OUTPUT clause to return a column not being inserted? (2 answers)
We need to change the way we populate the two temporary tables so that we use a MERGE instead of an INSERT. With a MERGE we would be able to query both the inserted pseudo-table and the table it comes from, which would allow us to guarantee the ordinals line up correctly.
INSERT dbo.Segment_1
(
Name,
Element
)
OUTPUT
INSERTED.Segment_No
INTO dbo.#Segment_Log_Into_Table
SELECT
@Name,
LEFT(ISNULL(S.Formatted_Value, ''), 500)
FROM dbo.#Segment_Log_Table AS SLT
OUTER APPLY dbo.XYZFunction(SLT.Element, 'C') AS S
ORDER BY
SLT.Ordinal;
Structure:
Table 1 - Segment_1 (Name varchar(500), Element varchar(500), Segment bigint)
Table 2 - #Segment_Log_Into_Table (Ordinal int IDENTITY, Segment_No bigint)
Table 3 - #Segment_Log_Table (Ordinal int IDENTITY, Segment_No bigint)
We store the data in two temporary tables and join them together in another query (below) based on an Ordinal, but in some situations the ordinals are wrong (not every time).
It looks like the way we handle ordinal creation doesn't guarantee the order of the second table matches the first, and the query below ends up with wrong/odd combinations of elements.
INSERT dbo.Segment_2
(
Name,
Element_Ext
)
SELECT
@Name,
SUBSTRING(ISNULL(S.Formatted_Value, ''), 501, LEN(ISNULL(S.Formatted_Value, '')) - 500)
FROM dbo.#Segment_Log_Table AS SLT
JOIN dbo.#Segment_Log_Into_Table AS SLIT
ON SLIT.Ordinal = SLT.Ordinal
OUTER APPLY dbo.XYZFunction(SLT.Element, 'C') AS S
WHERE LEN(SLT.Element) > 500
ORDER BY
SLT.Ordinal;
Above query returns wrong combinations
The following is your first insert rewritten as a quasi-merge (I say quasi because it doesn't actually merge; it only ever inserts):
MERGE dbo.Segment_1 AS t
USING
( SELECT Name = @Name,
Element = LEFT(ISNULL(S.Formatted_Value, ''), 500),
SLT.Ordinal
FROM dbo.#Segment_Log_Table AS SLT
FROM dbo.#Segment_Log_Table AS SLT
OUTER APPLY dbo.XYZFunction(SLT.Element, 'C') AS S
) AS s
ON 1 = 0
WHEN NOT MATCHED THEN
INSERT (Name, Element)
VALUES (s.Name, s.Element)
OUTPUT Inserted.Segment_No, s.Ordinal
INTO dbo.#Segment_Log_Into_Table (Segment_No, Ordinal);
This means that you can capture the newly inserted data and the source ordinal in the output and retain the correct mapping between the two.
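As a quick sanity check (a sketch reusing the question's temp tables), the captured mapping can be inspected by joining the two log tables on the ordinal:

```sql
-- Each captured Segment_No should now line up with the source row it came from
SELECT SLIT.Ordinal, SLIT.Segment_No, SLT.Element
FROM #Segment_Log_Into_Table AS SLIT
JOIN #Segment_Log_Table AS SLT
    ON SLT.Ordinal = SLIT.Ordinal
ORDER BY SLIT.Ordinal;
```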

SQL: Use value from a different column in the same SELECT

I have a question about whether this is possible, which would save me time writing extra code and limit user error. I need to use a value from a column (which has already performed some calculation) in the same SELECT, then do an extra calculation on it.
I encounter this a lot in my job. I will highlight the problem with a small example.
I have the following table created with one row added to it:
DECLARE @info AS TABLE
(
Name VARCHAR(500),
Value_A NUMERIC(8, 2)
)
INSERT INTO @info
VALUES ('Test Name 1', 10.20)
Now the requirement is to produce a SELECT with 2 columns. The first column needs to multiply Value_A by 10, and then the second column needs to add 1 to the first column. Below are the full requirements implemented:
SELECT (I.Value_A * 10) ,
(I.Value_A * 10) + 1
FROM @info AS I
As you can see, I just copied and pasted the first column's code into the second column and added one to it. Is there a way I can just reference the first column and add + 1, instead of copying and pasting?
I can achieve this another way, using an insert block followed by an update block: create a temp table, insert the first column into it, then update the second column. However, this means I have written extra code. I am looking for a solution that needs only one SELECT.
Above is a small example. Normally, the problems I face is bigger select with more calculation or logic.
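For comparison, the insert-then-update workaround described above might look like this sketch (the #staged table and Calc column names are made up here):

```sql
DECLARE @info AS TABLE (Name VARCHAR(500), Value_A NUMERIC(8, 2));
INSERT INTO @info VALUES ('Test Name 1', 10.20);

-- Stage the first calculation in a temp table
SELECT (I.Value_A * 10) AS Calc1,
       CAST(NULL AS NUMERIC(10, 2)) AS Calc2
INTO #staged
FROM @info AS I;

-- Derive the second column from the first
UPDATE #staged
SET Calc2 = Calc1 + 1;

SELECT Calc1, Calc2 FROM #staged;
DROP TABLE #staged;
```

The single-SELECT alternatives avoid the extra temp table entirely.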
You can move the expression to the FROM clause using APPLY:
SELECT v.col1, v.col1 + 1
FROM @info I CROSS APPLY
(VALUES (I.Value_A * 10)) v(col1);
For the example given I would also use Gordon's method, but it's worth knowing other techniques, e.g. a sub-query and a common table expression (very similar to a sub-query), as they may be more appropriate for specific situations.
I find that a straight sub-query helps with understanding what is happening in the other solutions.
SELECT Calc1, Calc1 + 1
FROM (
SELECT (I.Value_A * 10) Calc1
FROM @info AS I
) X;
-- OR
WITH cte AS (
SELECT (I.Value_A * 10) Calc1
FROM @info AS I
)
SELECT Calc1, Calc1 + 1
FROM cte;

SQL Server random using seed

I want to add a column to my table with a random number using seed.
If I use RAND:
select *, RAND(5) as random_id from myTable
I get the same value (0.943597390424144, for example) for all the rows in the random_id column. I want this value to be different for every row, and also that every time I pass the same seed (0.5, for example), I get the same values again (as a seed should work...).
How can I do this?
(
For example, in PostrgreSql I can write
SELECT setseed(0.5);
SELECT t.* , random() as random_id
FROM myTable t
And I will get different values in each row.
)
Edit:
After I saw the comments here, I have managed to work this out somehow, but it's not efficient at all.
If someone has an idea how to improve it, that would be great. If not, I will have to find another way.
I used the basic idea of the example here.
Creating a temporary table with blank seed value:
select * into t_myTable from (
select t.*, -1.00000000000000000 as seed
from myTable t
) as temp
Adding a random number for each seed value, one row at a time (this is the bad part...):
USE CPatterns;
GO
DECLARE @seed float;
DECLARE @id int;
DECLARE VIEW_CURSOR CURSOR FOR
select id
from t_myTable t;
OPEN VIEW_CURSOR;
FETCH NEXT FROM VIEW_CURSOR
into @id;
set @seed = RAND(5);
WHILE @@FETCH_STATUS = 0
BEGIN
set @seed = RAND();
update t_myTable set seed = @seed where id = @id
FETCH NEXT FROM VIEW_CURSOR
into @id;
END;
CLOSE VIEW_CURSOR;
DEALLOCATE VIEW_CURSOR;
GO
Creating the view using the seed value and ordering by it
create view my_view AS
select row_number() OVER (ORDER BY seed, id) AS source_id ,t.*
from t_myTable t
I think the simplest way to get a repeatable random id in a table is to use row_number() or a fixed id on each row. Let me assume that you have a column called id with a different value on each row.
The idea is just to use this as a seed:
select rand(id * 1) as random_id
from mytable;
Note that the seed is an integer and not a floating point number. If you wanted a floating point seed, you could do something with checksum():
select rand(checksum(id*0.5)) as random_id
. . .
If you are doing this for sampling (where you would say random_id < 0.1 for a 10% sample, for instance), then I often use modulo arithmetic on row_number():
with t as (
select t.*, row_number() over (order by id) as seqnum
from mytable t
)
select *
from t
where ((seqnum * 17 + 71) % 101) < 10
This returns about 10% of the rows (okay, really 10/101). You can adjust the sample by fiddling with the constants.
Someone suggested a similar query using newid(), but I'm giving you the solution that works for me.
There's a workaround that involves newid() instead of rand(), but it gets you the same kind of result. You can execute it individually or as a column in a select statement. It will produce a random value per row rather than the same value for every row in the statement.
If you need a random number from 1 to N, just change 100 to the desired number.
SELECT TOP 10 [Flag forca]
,1+ABS(CHECKSUM(NEWID())) % 100 AS RANDOM_NEWID
,RAND() AS RANDOM_RAND
FROM PAGSEGURO_WORK.dbo.jobSTM248_tmp_leitores_iso
So, in case it helps someone someday, here's what I eventually did.
I'm generating the seeded random values on the server side (Java, in my case), and then creating a table with two columns: the id and the generated random_id.
Now I create the view as an inner join between that table and the original data.
The generated SQL looks something like that:
CREATE TABLE SEED_DATA(source_id INT PRIMARY KEY, random_id float NOT NULL);
select Rand(5);
insert into SEED_DATA values(1,Rand());
insert into SEED_DATA values(2, Rand());
insert into SEED_DATA values(3, Rand());
.
.
.
insert into SEED_DATA values(1000000, Rand());
and
CREATE VIEW DATA_VIEW
as
SELECT row_number() OVER (ORDER BY random_id, id) AS source_id,column1,column2,...
FROM
( select * from SEED_DATA tmp
inner join my_table i on tmp.source_id = i.id) TEMP
In addition, I create the random numbers in batches, 10,000 or so in each batch (maybe more), so it will not weigh heavily on the server side, and I insert each batch into the table in a separate execution.
All of that because I couldn't find a good way to do what I want purely in SQL. Updating row after row is really not efficient.
My own conclusion from this story is that SQL Server is sometimes really annoying...
You could derive a random number from the seed:
rand(row_number() over (order by ___, ___, ___))
Then cast that as a varchar and use the last 3 characters as another seed.
That would give you a nice random value:
rand(right(cast(rand(row_number() over (order by ___, ___, ___)) as varchar(15)), 3))

SQL Server Empty Result

I have a valid SQL select which returns an empty result, up and until a specific transaction has taken place in the environment.
Is there something available in SQL itself, that will allow me to return a 0 as opposed to an empty dataset? Similar to isNULL('', 0) functionality. Obviously I tried that and it didn't work.
PS. Sadly I don't have access to the database, or the environment, I have an agent installed that is executing these queries so I'm limited to solving this problem with just SQL.
FYI: take any select and run it where the condition is not fulfilled (where LockCookie = '777777777', for example). If that condition is never met, the result is empty. But at some point the query will succeed, based on a set of operations/tasks that happen. I would like to return 0 up until that event has occurred.
You can store your result in a temp table and check @@rowcount.
select ID
into #T
from YourTable
where SomeColumn = @SomeValue
if @@rowcount = 0
select 0 as ID
else
select ID
from #T
drop table #T
If you want this as one query with no temp table you can wrap your query in an outer apply against a dummy table with only one row.
select isnull(T.ID, D.ID) as ID
from (values(0)) as D(ID)
outer apply
(
select ID
from YourTable
where SomeColumn = @SomeValue
) as T
An alternate way is from code: you can check the row count of the DataSet.
DsData.Tables[0].Rows.Count > 0
Make sure that your query matches your conditions.

Multiple replacements in string in single Update Statement in SQL server 2005

I've a table 'tblRandomString' with following data:
ID ItemValue
1 *Test"
2 ?Test*
I've another table 'tblSearchCharReplacement' with following data
Original Replacement
* `star`
? `quest`
" `quot`
; `semi`
Now, I want to make a replacement in the ItemValues using these replacement.
I tried this:
Update T1
SET ItemValue = REPLACE(ItemValue,[Original],[Replacement])
FROM dbo.tblRandomString T1
JOIN
dbo.tblSearchCharReplacement T2
ON T2.Original IN ('"',';','*','?')
But it doesn't help me, because only one replacement is done per update.
One solution is to use a CTE to perform multiple replacements if they exist.
Is there a simpler way?
Sample data:
declare @RandomString table (ID int not null,ItemValue varchar(500) not null)
insert into @RandomString(ID,ItemValue) values
(1,'*Test"'),
(2,'?Test*')
declare @SearchCharReplacement table (Original varchar(500) not null,Replacement varchar(500) not null)
insert into @SearchCharReplacement(Original,Replacement) values
('*','`star`'),
('?','`quest`'),
('"','`quot`'),
(';','`semi`')
And the UPDATE:
;With Replacements as (
select
ID,ItemValue,0 as RepCount
from
@RandomString
union all
select
ID,SUBSTRING(REPLACE(ItemValue,Original,Replacement),1,500),rs.RepCount+1
from
Replacements rs
inner join
@SearchCharReplacement scr
on
CHARINDEX(scr.Original,rs.ItemValue) > 0
), FinalReplacements as (
select
ID,ItemValue,ROW_NUMBER() OVER (PARTITION BY ID ORDER BY RepCount desc) as rn
from
Replacements
)
update rs
set ItemValue = fr.ItemValue
from
@RandomString rs
inner join
FinalReplacements fr
on
rs.ID = fr.ID and
rn = 1
Which produces:
select * from @RandomString
ID ItemValue
----------- -----------------------
1 `star`Test`quot`
2 `quest`Test`star`
What this does is start with the unaltered texts (the top select in Replacements), then attempt to apply any valid replacements (the second select in Replacements). It will keep applying this second select, based on any results it produces, until no new rows are produced. This is called a recursive common table expression (CTE).
We then use a second CTE (a non-recursive one this time) FinalReplacements to number all of the rows produced by the first CTE, assigning lower row numbers to rows which were produced last. Logically, these are the rows which were the result of applying the last applicable transform, and so will no longer contain any of the original characters to be replaced. So we can use the row number 1 to perform the update back against the original table.
This query does more work than strictly necessary; for small numbers of replacement rows it's not likely to be too inefficient. We could clean it up by defining a single order in which to apply the replacements.
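A sketch of that cleanup: fix one replacement order up front, then apply exactly one replacement per recursion step, so every row is transformed a known number of times (this assumes the sample tables are the table variables @RandomString and @SearchCharReplacement, and that no replacement string contains another original character):

```sql
;WITH Ordered AS (
    -- pin down a single, deterministic order for the replacements
    SELECT Original, Replacement,
           ROW_NUMBER() OVER (ORDER BY Original) AS Seq
    FROM @SearchCharReplacement
), Applied AS (
    SELECT ID, ItemValue, 0 AS Seq
    FROM @RandomString
    UNION ALL
    -- apply exactly one replacement per step, in Seq order
    SELECT a.ID,
           CAST(REPLACE(a.ItemValue, o.Original, o.Replacement) AS varchar(500)),
           a.Seq + 1
    FROM Applied a
    JOIN Ordered o ON o.Seq = a.Seq + 1
)
UPDATE rs
SET ItemValue = a.ItemValue
FROM @RandomString rs
JOIN Applied a ON a.ID = rs.ID
WHERE a.Seq = (SELECT COUNT(*) FROM Ordered);
```

Each row passes through every replacement exactly once, so no row-numbering step is needed to pick the final result.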
Will skipping the join table and nesting REPLACE functions work?
Or do you need to actually get the data from the other table?
-- perform 4 replacements in a single update statement
UPDATE dbo.tblRandomString
SET ItemValue =
    REPLACE(
        REPLACE(
            REPLACE(
                REPLACE(ItemValue, '*', '`star`'),
                '?', '`quest`'),
            '"', '`quot`'),
        ';', '`semi`');
Note: I'm not sure if you need to escape any of the characters you're replacing.