How can I run the second query based on the first query? - SQL

I'm using two queries: the first splits one column into multiple values and inserts them into a table, and the second query (PIVOT) fetches from that inserted table.
1st query:
SELECT A.MDDID, A.DeviceNumber,
       Split.a.value('.', 'VARCHAR(100)') AS MetReading
FROM (
    SELECT MDDID, DeviceNumber,
           CAST('<M>' + REPLACE(Httpstring, ':', '</M><M>') + '</M>' AS XML) AS MetReading
    FROM [IOTDBV1].[dbo].[MDASDatas] E
    WHERE E.MDDID = 49101
) AS A CROSS APPLY MetReading.nodes('/M') AS Split(a);
2nd query:
SELECT * FROM
(
    SELECT ID, MDDID, DeviceNumber, ReceivedDate,
           ROW_NUMBER() OVER(PARTITION BY ID ORDER BY (SELECT 1)) AS ID2,
           SPLT.MR.value('.', 'VARCHAR(MAX)') AS LIST
    FROM (
        SELECT ID, MDDID, DeviceNumber, ReceivedDate,
               CAST('<M>' + REPLACE(MeterReading, ',', '</M><M>') + '</M>' AS XML) AS XML_MR
        FROM [dbo].[PARSEMDASDatas] E
        WHERE E.MeterReading IS NOT NULL
    ) E
    CROSS APPLY E.XML_MR.nodes('/M') AS SPLT(MR)
) A
PIVOT
(
    MAX(LIST) FOR ID2 IN ([1],[2],[3],[4],[5],[6],[7],[8])
) PV
I want the 2nd query to run based on the first query directly, without needing the intermediate table.
Any help would be appreciated.

Your question is not very clear, and it is a very good example of why you should always add an MCVE, including DDL, sample data, your own attempts, the wrong output, and the expected output. This time I did this for you; please try to prepare such an MCVE yourself the next time.
If I get this correctly, your source table includes a CSV column with up to 8 (max?) values. This can be solved much more easily: no need to break this up into two queries, no need for an intermediate table, and not even for PIVOT.
--create a mockup-table to simulate your situation (slightly shortened for brevity)
DECLARE @YourTable TABLE(ID INT, MDDID INT, DeviceNumber VARCHAR(100), MetReading VARCHAR(2000));
INSERT INTO @YourTable VALUES
 (2,49101,'NKLDEVELOPMENT02','DCPL,981115,247484,9409') --the character code and some numbers
,(3,49101,'NKLDEVELOPMENT02','SPPL,,,,,,,,')            --eight empty commas
,(4,49101,'NKLDEVELOPMENT02','BLAH,,,999,,');           --a value somewhere in the middle
--The cte will return the table as is. The only difference is a cast to XML (as you did it too)
WITH Splitted AS
(
    SELECT ID
          ,MDDID
          ,DeviceNumber
          ,CAST('<x>' + REPLACE(MetReading,',','</x><x>') + '</x>' AS XML) AS Casted
    FROM @YourTable t
)
SELECT s.ID
      ,s.MDDID
      ,s.DeviceNumber
      ,s.Casted.value('/x[1]','varchar(100)') AS [1]
      ,s.Casted.value('/x[2]','varchar(100)') AS [2]
      ,s.Casted.value('/x[3]','varchar(100)') AS [3]
      ,s.Casted.value('/x[4]','varchar(100)') AS [4]
      ,s.Casted.value('/x[5]','varchar(100)') AS [5]
      ,s.Casted.value('/x[6]','varchar(100)') AS [6]
      ,s.Casted.value('/x[7]','varchar(100)') AS [7]
      ,s.Casted.value('/x[8]','varchar(100)') AS [8]
FROM Splitted s;
The result:
ID  MDDID  DeviceNumber      1     2       3       4     5     6     7     8
2   49101  NKLDEVELOPMENT02  DCPL  981115  247484  9409  NULL  NULL  NULL  NULL
3   49101  NKLDEVELOPMENT02  SPPL
4   49101  NKLDEVELOPMENT02  BLAH                  999               NULL  NULL
The idea in short:
Each CSV is transformed to an XML similar to this:
<x>DCPL</x>
<x>981115</x>
<x>247484</x>
<x>9409</x>
Using a position predicate in the XPath, we can address the first, the second, the third <x> easily.
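To see the predicate in isolation, here is a minimal standalone sketch (the literal values are made up):
DECLARE @x XML = CAST('<x>DCPL</x><x>981115</x><x>247484</x>' AS XML);
SELECT @x.value('/x[1]', 'varchar(100)') AS [1]  --'DCPL'
      ,@x.value('/x[2]', 'varchar(100)') AS [2]  --'981115'
      ,@x.value('/x[3]', 'varchar(100)') AS [3]; --'247484'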

CTE: WITH common_table_expression is the answer.
You can prepare some data in the first query and use it in the second:
WITH cte_table AS
(
    SELECT *
    FROM sys.objects
)
SELECT *
FROM cte_table
WHERE name LIKE 'PK%'
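Applied to the question above, a sketch (untested, reusing the table and column names from the OP's first query) would look like this; the PIVOT from the second query can then select from the CTE instead of the intermediate table:
WITH FirstQuery AS
(
    SELECT A.MDDID, A.DeviceNumber,
           Split.a.value('.', 'VARCHAR(100)') AS MetReading
    FROM (
        SELECT MDDID, DeviceNumber,
               CAST('<M>' + REPLACE(Httpstring, ':', '</M><M>') + '</M>' AS XML) AS MetReading
        FROM [IOTDBV1].[dbo].[MDASDatas] E
        WHERE E.MDDID = 49101
    ) AS A CROSS APPLY MetReading.nodes('/M') AS Split(a)
)
SELECT * --replace this line with the second query's SELECT ... PIVOT, reading FROM FirstQuery
FROM FirstQuery;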

Related

Extract two values between square brackets and get the results in two different columns?

I have a column with some names, each followed by square brackets with some numbers and letters inside.
How can I extract the two values between the square brackets and get the results in two different columns?
I start from the column 'NAME' with the value 'XCDRT [20.9 kd]':
NAME
XCDRT [20.9 kd]
qwer [12.234 r.t.]
and I would like to get 3 columns, with the values in separate columns:
NAME   NAME 1  NAME 2
XCDRT  20.9    kd
qwer   12.234  r.t.
Is there a function for such a problem?
I tried to split the value but I don't get the results that I need.
As an alternative solution, if you are on a bleeding-edge version of the SQL Server data engine, you can make use of STRING_SPLIT and its (new) ability to return the ordinal position of a value. Then, with some conditional aggregation, you can unpivot the results:
SELECT TRIM(MAX(CASE N.ordinal WHEN 1 THEN N.[value] END)) AS [Name],
       TRIM(MAX(CASE N.ordinal WHEN 2 THEN LEFT(N.[value], CHARINDEX(' ', N.[value] + ' ')) END)) AS [Name1],
       TRIM(MAX(CASE N.ordinal WHEN 2 THEN NULLIF(STUFF(N.[value], 1, CHARINDEX(' ', N.[value] + ' '), ''), '') END)) AS [Name2]
FROM (VALUES('XCDRT [20.9 kd] qwer [12.234 r.t.]'))V([NAME])
CROSS APPLY STRING_SPLIT(V.[NAME], ']', 1) R
CROSS APPLY STRING_SPLIT(R.[value], '[', 1) N
WHERE R.[value] != ''
GROUP BY V.[NAME],
         R.ordinal;
The TRIMs and NULLIF are there to "tidy" the values, as you'd have leading whitespace, and in case you don't have a value for Name2.
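As a side note, here is a tiny sketch (literals made up) of why a space is appended inside CHARINDEX: it guarantees a non-zero position even when the value has no second part:
SELECT LEFT(v, CHARINDEX(' ', v + ' ')) AS Name1,                     --'20.9 ' then '20.9'
       NULLIF(STUFF(v, 1, CHARINDEX(' ', v + ' '), ''), '') AS Name2  --'kd' then NULL
FROM (VALUES ('20.9 kd'), ('20.9')) AS t(v);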
With a bit of JSON and a CROSS APPLY (or two):
Cross Apply B will split/parse the string.
Cross Apply C will create JSON to be consumed.
This will also support N groups of 3.
Example
Declare @YourTable Table ([Name] varchar(50))
Insert Into @YourTable Values
('XCDRT [20.9 kd] qwer [12.234 r.t.]')

Select [Name]  = JSON_VALUE(JS,'$[0]')
      ,[Name1] = JSON_VALUE(JS,'$[1]')
      ,[Name2] = JSON_VALUE(JS,'$[2]')
From @YourTable A
Cross Apply string_split([Name],']') B
Cross Apply ( values ('["'+replace(string_escape(trim(replace(B.Value,'[','')),'json'),' ','","')+'"]') ) C(JS)
Where B.value <> ''
Results
Name   Name1   Name2
XCDRT  20.9    kd
qwer   12.234  r.t.

Join using a LIKE clause is taking too long

Please see the TSQL below:
create table #IDs (id varchar(100))
insert into #IDs values ('123')
insert into #IDs values ('456')
insert into #IDs values ('789')
insert into #IDs values ('1010')
create table #Notes (Note varchar(500))
insert into #Notes values ('Here is a note for 123')
insert into #Notes values ('A note for 789 here')
insert into #Notes values ('456 has a note here')
I want to find all the IDs that are referenced in the #Notes table. This works:
select #IDs.id from #IDs inner join #Notes on #Notes.note like '%' + #IDs.id + '%'
However, there are hundreds of thousands of records in both tables and the query does not complete. I was thinking about FreeText searching, but I don't think it can be applied here. A cursor takes too long to run as well (I think it will take over one month). Is there anything else I can try? I am using SQL Server 2019.
The size of the input is only one aspect of the solution.
By splitting the text into tokens you indeed increase the number of records, but at the same time you enable an equality join, which can be implemented using a Hash Join.
You should get the query results in a few minutes tops, basically the time it takes your system to do a full scan on both tables, plus some processing time.
No need for temp tables.
No need for indexes.
Select id
from #IDS
where id in (select w.value
             from #Notes as n
             cross apply string_split(n.Note, ' ') as w
            )
Fiddle
Per the OP's request,
here is code that handles a more complicated scenario, where an ID can contain various characters (as defined by @token_char) and the separators are potentially all other characters:
declare @token_char varchar(100) = 'abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ0123456789';

with cte_notes as
(
    select Note
          ,replace(translate(Note, @token_char, space(len(@token_char))), ' ', '') as non_token_char
    from #Notes
)
select id
from #IDS
where id in
(
    select w.value
    from cte_notes as n
    cross apply string_split(translate(n.Note, n.non_token_char, space(len(n.non_token_char))), ' ') as w
    where w.value != ''
)
The Fiddle data sample was altered accordingly, to reflect the change
If you are going to do this search often, you may want to explore a wonderful (if underused) feature of SQL Server called Full Text Search. To quote Microsoft:
"A LIKE query against millions of rows of text data can take minutes to return; whereas a full-text query can take only seconds or less against the same data, depending on the number of rows that are returned."
I have seen searches go from minutes to seconds using this feature.
You would need to create a Full Text Search catalog and then create full-text indexes on the tables you want to search. It's not hard and will only take you a few minutes to learn.
This is a good starting point:
This is a good starting point:
https://learn.microsoft.com/en-us/sql/relational-databases/search/get-started-with-full-text-search?view=sql-server-ver15
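As a rough sketch of the moving parts (catalog, table, and key-index names here are hypothetical, and the indexed table needs a single-column unique index to serve as the key):
CREATE FULLTEXT CATALOG NotesCatalog;                                        --hypothetical catalog name
CREATE FULLTEXT INDEX ON dbo.Notes(Note) KEY INDEX PK_Notes ON NotesCatalog; --hypothetical table/index names
SELECT Note FROM dbo.Notes WHERE CONTAINS(Note, '"123"');                    --replaces the slow LIKE '%123%'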
I would apply a CTE with string_split to filter out all alphabetic components, and then join the #IDs table with the result of the CTE on the id column. The query was tested on a sample of 1 million rows.
With CTE As (
    Select T.value As id
    From #Notes Cross Apply String_Split(Note, ' ') As T
    Where Try_Convert(Int, T.value) Is Not Null
)
Select I.id
From #IDs As I Inner Join CTE On (I.id = CTE.id)
If you just want to extract a numeric value from a string, a join is excessive in this case.
Select T.value As id, #Notes.Note
From #Notes Cross Apply String_Split(Note,' ') As T
Where Try_Convert(Int, T.value) Is Not Null And T.value Like '%[0-9]%'
id   Note
123  Here is a note for 123
789  A note for 789 here
456  456 has a note here
No matter what, under the given circumstances, I would use a join to filter out those numbers that are not represented in the #IDs table.
With CTE As (
    Select distinct(id) As id
    From #IDs
)
Select T.value As id, #Notes.Note
From #Notes Cross Apply String_Split(Note, ' ') As T
Inner Join CTE On (T.value = CTE.id)
Where Try_Convert(Int, T.value) Is Not Null
  And T.value Like '%[0-9]%'
If the string contains brackets or parentheses instead of spaces, like this:
"456(this is an id number) has a note here" or "456[01/01/2022]",
then as a last resort (since it degrades performance) you can use TRANSLATE to replace those brackets with spaces, as follows:
With CTE As (
    Select distinct(id) As id
    From #IDs
)
Select T.value As id, #Notes.Note
From #Notes Cross Apply String_Split(TRANSLATE(Note, '[]()', '    '), ' ') As T --four spaces: TRANSLATE needs equal-length argument strings
Inner Join CTE On (T.value = CTE.id)
Where Try_Convert(Int, T.value) Is Not Null
  And T.value Like '%[0-9]%'
db<>fiddle

Append values from 2 different columns in SQL

I have the following table.
I need to get the output "SVGFRAMXPOSLSVG" from the 2 columns.
Is it possible to get these appended values from the 2 columns?
Please try this.
SELECT STUFF((
           SELECT '' + DEPART_AIRPORT_CODE + ARRIVE_AIRPORT_CODE
           FROM #tblName
           FOR XML PATH('')
       ), 1, 0, '')
For example:
Declare @tbl Table(
    id INT,
    DEPART_AIRPORT_CODE Varchar(50),
    ARRIVE_AIRPORT_CODE Varchar(50),
    value varchar(50)
)
INSERT INTO @tbl VALUES(1,'g1','g2',NULL)
INSERT INTO @tbl VALUES(2,'g2','g3',NULL)
INSERT INTO @tbl VALUES(3,'g3','g1',NULL)

SELECT STUFF((
           SELECT '' + DEPART_AIRPORT_CODE + ARRIVE_AIRPORT_CODE
           FROM @tbl
           FOR XML PATH('')
       ), 1, 0, '')
Summary
Use Analytic functions and listagg to get the job done.
Detail
Create two lists of code_id and code values, and match the code_id values for the same airport codes (passengers depart from the same airport they just arrived at). Use lag and lead to grab values from other rows. NULLs will exist for code_id at the start and end of the itinerary, so default the first NULL to 0 and the last NULL to the previous code_id plus 1. This produces a list of codes with a matching index. Merge the lists together and remove duplicates by using a union. Finally, use listagg with no delimiter to aggregate the rows into a single string value.
with codes as
(
    select nvl(lag(t1.id) over (order by t1.id), 0) as code_id,
           t1.depart_airport_code as code
    from table1 t1
    union
    select nvl(lead(t1.id) over (order by t1.id) - 1, lag(t1.id) over (order by t1.id) + 1) as code_id,
           t1.arrive_airport_code as code
    from table1 t1
)
select listagg(c.code, '') WITHIN GROUP (ORDER BY c.code_id) as result
from codes c;
Note: this solution relies on an integer id field being available; otherwise the analytic functions wouldn't have a column to sort by. If id doesn't exist, you would need to manufacture one based on another column, such as a timestamp or another identifier that ensures the rows are in the correct order.
Use row_number() over (order by myorderedidentifier) as id in a subquery or view to achieve this. Don't use rownum; it could give you unpredictable results, since without an ORDER BY clause there is no guarantee that the same query will return the same results each time.
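A minimal sketch of manufacturing such an id (myorderedidentifier stands in for a timestamp or other column that guarantees the row order):
select row_number() over (order by t1.myorderedidentifier) as id, --hypothetical ordering column
       t1.depart_airport_code,
       t1.arrive_airport_code
from table1 t1;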
Output
| RESULT |
|-----------------|
| SVGFRAMXPOSLSVG |

Populate the record links using Recursive CTE

I have a contact table in which each record either holds a link to another contact record or is not linked to anything (null).
As per the example below, id 21 is a parent for contact 1.
I need to populate a temp table using T-SQL (using a recursive CTE) with all the contact links for each and every contact id in the contact table, as below.
As one contact id is associated with multiple contact ids, the Link1, Link2, Link3 columns should be created dynamically if possible.
Could anybody please help me with this script?
Try this (necessary remarks in comments):
--data definition
declare @contactTable table (contactId int, linkContactId int)
insert into @contactTable values
(1,21),
(2,null),
(3,450),
(4,1),
(5,900),
(6,5),
(7,3),
(8,1)

--recursive cte
;with cte as (
    (select 1 n, contactId from @contactTable
     where linkContactId = 1
     union
     select 1, linkContactId from @contactTable
     where contactId = 1)
    union all
    --this part might seem confusing; I tried writing the recursive part similarly to the anchor part,
    --but it needed two joins, which isn't allowed in the recursive part of a cte, so I worked around it
    select n + 1,
           case when cte.n + 1 = t.contactId then t.linkContactId else t.contactId end
    from cte join @contactTable [t] on
         (cte.n + 1 = t.contactId or cte.n + 1 = t.linkContactId)
)
--grouping results by contactId, concatenating all linkContacts
select n [contactId],
       (select distinct cast(contactId as varchar(5)) + ',' from cte where n = c.n for xml path(''), type).value('(.)[1]', 'varchar(100)') [linkContactId]
from cte [c]
group by n
As per your above script I was able to nearly get the results.
As 4 and 8 have already been included in the first row, they should not be shown as separate records.
Can you please adjust your query so that it skips those records?

SQL Server : Columns to Rows

Looking for an elegant (or any) solution to convert columns to rows.
Here is an example: I have a table with the following schema:
[ID] [EntityID] [Indicator1] [Indicator2] [Indicator3] ... [Indicator150]
Here is what I want to get as the result:
[ID] [EntityId] [IndicatorName] [IndicatorValue]
And the result values will be:
1 1 'Indicator1' 'Value of Indicator 1 for entity 1'
2 1 'Indicator2' 'Value of Indicator 2 for entity 1'
3 1 'Indicator3' 'Value of Indicator 3 for entity 1'
4 2 'Indicator1' 'Value of Indicator 1 for entity 2'
And so on..
Does this make sense? Do you have any suggestions on where to look and how to get it done in T-SQL?
You can use the UNPIVOT function to convert the columns into rows:
select id, entityId,
       indicatorname,
       indicatorvalue
from yourtable
unpivot
(
    indicatorvalue
    for indicatorname in (Indicator1, Indicator2, Indicator3)
) unpiv;
Note: the datatypes of the columns you are unpivoting must be the same, so you might have to convert the datatypes prior to applying the unpivot.
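For instance, a sketch (the source datatypes are hypothetical) that casts everything to a common type in a derived table first:
select id, entityId, indicatorname, indicatorvalue
from (
    select id, entityId,
           cast(Indicator1 as varchar(100)) as Indicator1, --e.g. originally int
           cast(Indicator2 as varchar(100)) as Indicator2  --e.g. originally decimal
    from yourtable
) src
unpivot
(
    indicatorvalue
    for indicatorname in (Indicator1, Indicator2)
) unpiv;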
You could also use CROSS APPLY with UNION ALL to convert the columns:
select id, entityid,
       indicatorname,
       indicatorvalue
from yourtable
cross apply
(
    select 'Indicator1', Indicator1 union all
    select 'Indicator2', Indicator2 union all
    select 'Indicator3', Indicator3 union all
    select 'Indicator4', Indicator4
) c (indicatorname, indicatorvalue);
Depending on your version of SQL Server you could even use CROSS APPLY with the VALUES clause:
select id, entityid,
       indicatorname,
       indicatorvalue
from yourtable
cross apply
(
    values
        ('Indicator1', Indicator1),
        ('Indicator2', Indicator2),
        ('Indicator3', Indicator3),
        ('Indicator4', Indicator4)
) c (indicatorname, indicatorvalue);
Finally, if you have 150 columns to unpivot and you don't want to hard-code the entire query, then you could generate the sql statement using dynamic SQL:
DECLARE @colsUnpivot AS NVARCHAR(MAX),
        @query AS NVARCHAR(MAX)

select @colsUnpivot
    = stuff((select ',' + quotename(C.column_name)
             from information_schema.columns as C
             where C.table_name = 'yourtable' and
                   C.column_name like 'Indicator%'
             for xml path('')), 1, 1, '')

set @query
    = 'select id, entityId,
              indicatorname,
              indicatorvalue
       from yourtable
       unpivot
       (
           indicatorvalue
           for indicatorname in (' + @colsUnpivot + ')
       ) u'

exec sp_executesql @query;
Well, if you have 150 columns, then I think that UNPIVOT is not an option, so you could use the XML trick:
;with CTE1 as (
    select ID, EntityID, (select t.* for xml raw('row'), type) as Data
    from temp1 as t
), CTE2 as (
    select C.id, C.EntityID,
           F.C.value('local-name(.)', 'nvarchar(128)') as IndicatorName,
           F.C.value('.', 'nvarchar(max)') as IndicatorValue
    from CTE1 as c
        outer apply c.Data.nodes('row/@*') as F(C)
)
select * from CTE2 where IndicatorName like 'Indicator%'
sql fiddle demo
You could also write dynamic SQL, but I like XML more - for dynamic SQL you have to have permissions to select data directly from the table, and that's not always an option.
UPDATE: As there is a big flame war in the comments, I think I'll add some pros and cons of XML vs dynamic SQL. I'll try to be as objective as I can and not mention elegance or ugliness. If you have any other pros and cons, edit the answer or write them in the comments.
cons
it's not as fast as dynamic SQL; rough tests gave me that XML is about 2.5 times slower than dynamic (it was one query on a ~250000-row table, so this estimate is in no way exact). You could compare it yourself if you want; here's a sqlfiddle example - on 100000 rows it was 29s (XML) vs 14s (dynamic);
it may be harder to understand for people not familiar with XPath;
pros
it's in the same scope as your other queries, and that can be very handy. A few examples come to mind:
you can query the inserted and deleted tables inside your trigger (not possible with dynamic SQL at all);
the user doesn't have to have permission to select directly from the table. What I mean is, if you have a stored-procedure layer and users have permission to run the procedures but not to query the tables directly, you can still use this query inside a stored procedure;
you can query a table variable you have populated in your scope (to pass it into dynamic SQL you would have to either make it a temporary table instead, or create a type and pass it as a parameter into the dynamic SQL);
you can use this query inside a function (scalar or table-valued); it's not possible to use dynamic SQL inside functions;
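To illustrate that last point, here is a sketch (untested; the function name is made up, temp1 is the table from the example above) of the XML approach wrapped in an inline table-valued function, where dynamic SQL would be rejected:
create function dbo.UnpivotIndicators(@id int) --hypothetical function name
returns table
as return
(
    select F.C.value('local-name(.)', 'nvarchar(128)') as IndicatorName,
           F.C.value('.', 'nvarchar(max)') as IndicatorValue
    from (select (select t.* for xml raw('row'), type) as Data
          from temp1 as t
          where t.ID = @id) as c
    outer apply c.Data.nodes('row/@*') as F(C)
);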
Just to help new readers, I've created an example to better understand @bluefeet's answer about UNPIVOT.
SELECT id
      ,entityId
      ,indicatorname
      ,indicatorvalue
FROM (VALUES
      (1, 1, 'Value of Indicator 1 for entity 1', 'Value of Indicator 2 for entity 1', 'Value of Indicator 3 for entity 1'),
      (2, 1, 'Value of Indicator 1 for entity 2', 'Value of Indicator 2 for entity 2', 'Value of Indicator 3 for entity 2'),
      (3, 1, 'Value of Indicator 1 for entity 3', 'Value of Indicator 2 for entity 3', 'Value of Indicator 3 for entity 3'),
      (4, 2, 'Value of Indicator 1 for entity 4', 'Value of Indicator 2 for entity 4', 'Value of Indicator 3 for entity 4')
     ) AS Category(ID, EntityId, Indicator1, Indicator2, Indicator3)
UNPIVOT
(
    indicatorvalue
    FOR indicatorname IN (Indicator1, Indicator2, Indicator3)
) UNPIV;
Just because I did not see it mentioned:
if you are on 2016+, here is yet another option to dynamically unpivot data without actually using dynamic SQL.
Example
Declare @YourTable Table ([ID] varchar(50), [Col1] varchar(50), [Col2] varchar(50))
Insert Into @YourTable Values
 (1,'A','B')
,(2,'R','C')
,(3,'X','D')

Select A.[ID]
      ,Item  = B.[Key]
      ,Value = B.[Value]
From @YourTable A
Cross Apply ( Select *
              From OpenJson((Select A.* For JSON Path,Without_Array_Wrapper ))
              Where [Key] not in ('ID','Other','Columns','ToExclude')
            ) B
Returns
ID Item Value
1 Col1 A
1 Col2 B
2 Col1 R
2 Col2 C
3 Col1 X
3 Col2 D
I needed a solution to convert columns to rows in Microsoft SQL Server, without knowing the column names (used in a trigger) and without dynamic SQL (dynamic SQL is too slow for use in a trigger).
I finally found this solution, which works fine:
SELECT insRowTbl.PK,
       insRowTbl.Username,
       attr.insRow.value('local-name(.)', 'nvarchar(128)') as FieldName,
       attr.insRow.value('.', 'nvarchar(max)') as FieldValue
FROM ( SELECT i.ID as PK,
              i.LastModifiedBy as Username,
              convert(xml, (select i.* for xml raw)) as insRowCol
       FROM inserted as i
     ) as insRowTbl
CROSS APPLY insRowTbl.insRowCol.nodes('/row/@*') as attr(insRow)
As you can see, I convert the row into XML (the subquery select i.* for xml raw converts all columns into one XML column).
Then I CROSS APPLY a function to each XML attribute of this column, so that I get one row per attribute.
Overall, this converts columns into rows, without knowing the column names and without using dynamic SQL. It is fast enough for my purpose.
(Edit: I just saw Roman Pekar's answer above, which does the same.
I used the dynamic SQL trigger with cursors first, which was 10 to 100 times slower than this solution, but maybe that was caused by the cursor, not by the dynamic SQL. Anyway, this solution is very simple and universal, so it's definitely an option.)
I am leaving this comment at this place because I want to reference this explanation in my post about the full audit trigger, which you can find here: https://stackoverflow.com/a/43800286/4160788
DECLARE @TableName varchar(max) = NULL
SELECT @TableName = COALESCE(@TableName + ',', '') + t.TABLE_CATALOG + '.' + t.TABLE_SCHEMA + '.' + o.Name
FROM sysindexes AS i
INNER JOIN sysobjects AS o ON i.id = o.id
INNER JOIN INFORMATION_SCHEMA.TABLES t ON t.TABLE_NAME = o.name
WHERE i.indid < 2
  AND OBJECTPROPERTY(o.id, 'IsMSShipped') = 0
  AND i.rowcnt > 350
  AND o.xtype != 'TF'
ORDER BY o.name ASC

print @TableName
This gives you the list of tables with row counts greater than 350; the solution returns the list of tables as rows.
The opposite of this is to flatten a column into a CSV, e.g.:
SELECT STRING_AGG ([value],',') FROM STRING_SPLIT('Akio,Hiraku,Kazuo', ',')