I have tables as follows
Disease
ID | DiseaseName
1 | Heart
2 | Lungs
3 | ENT
Registration
PatientID | NAME | Disease
1 | abc | 1
2 | asa | 2|3
3 | asd | 1|2|3
I have a function to split |-separated data. Now I want the result as:
PatientID | Name | DiseaseName
1 | abc | Heart
2 | asa | Lungs,ENT
3 | asd | Heart,Lungs,ENT
My split function is
ALTER FUNCTION [dbo].[fnSplit](
    @sInputList VARCHAR(8000) -- List of delimited items
  , @sDelimiter VARCHAR(8000) = '|' -- delimiter that separates items
) RETURNS @List TABLE (item VARCHAR(8000))
BEGIN
    DECLARE @sItem VARCHAR(8000)
    WHILE CHARINDEX(@sDelimiter,@sInputList,0) <> 0
    BEGIN
        SELECT
            @sItem=RTRIM(LTRIM(SUBSTRING(@sInputList,1,CHARINDEX(@sDelimiter,@sInputList,0)-1))),
            @sInputList=RTRIM(LTRIM(SUBSTRING(@sInputList,CHARINDEX(@sDelimiter,@sInputList,0)+LEN(@sDelimiter),LEN(@sInputList))))
        IF LEN(@sItem) > 0
            INSERT INTO @List SELECT @sItem
    END
    IF LEN(@sInputList) > 0
        INSERT INTO @List SELECT @sInputList -- Put the last item in
    RETURN
END
I am not sure how I can get that result, though.
As already mentioned in the comments, it is better to normalize your table structure. That means you should not store a patient's diseases in one VARCHAR column with disease IDs separated by some character; instead, you should store each of a patient's diseases in its own row.
If you keep the setup you have now, your queries will become really cumbersome and performance will suffer. You will also lose the database consistency that foreign keys provide.
I've written this example script, which ends by selecting the output you require. The example uses temporary tables; if you adopt this way of working (and you should), just use the same setup with regular tables (i.e. names not starting with #).
The tables:
#disease: Defines diseases
#patients: Defines patients
#registration: Defines patients' diseases; foreign keys to #disease and #patients for data consistency (make sure the patients and diseases actually exist in the database)
If you're wondering how the FOR XML PATH('') construct in the final query results in a |-separated VARCHAR, read this answer I gave a while ago on this subject.
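(In short, the correlated subquery produces one '|'-prefixed value per row, FOR XML PATH('') glues them into a single string, and STUFF strips the leading delimiter. A tiny standalone illustration, separate from the main script below:)
-- Standalone illustration of FOR XML PATH('') concatenation
SELECT STUFF(
    (SELECT '|' + v.name
     FROM (VALUES ('Heart'),('Lungs'),('ENT')) AS v(name)
     FOR XML PATH('')
    ), 1, 1, '') AS Concatenated; -- Heart|Lungs|ENT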
-- Diseases
CREATE TABLE #disease(
ID INT,
DiseaseName VARCHAR(256),
CONSTRAINT PK_disease PRIMARY KEY(ID)
);
INSERT INTO #disease(ID,DiseaseName)VALUES
(1,'Heart'),(2,'Lungs'),(3,'ENT');
-- Patients
CREATE TABLE #patients(
PatientID INT,
Name VARCHAR(256),
CONSTRAINT PK_patients PRIMARY KEY(PatientID)
);
INSERT INTO #patients(PatientID,Name)VALUES
(1,'abc'),(2,'asa'),(3,'asd'),(4,'zldkzld');
-- Registration for patient's diseases
CREATE TABLE #registration(
PatientID INT,
Disease INT,
CONSTRAINT FK_registration_to_patient FOREIGN KEY(PatientID) REFERENCES #patients(PatientID),
CONSTRAINT FK_registration_to_disease FOREIGN KEY(Disease) REFERENCES #disease(ID)
);
INSERT INTO #registration(PatientID,Disease)VALUES
(1,1), -- patient with ID 1 has one disease: Heart
(2,2),(2,3), -- patient with ID 2 has two diseases: Lungs and ENT
(3,1),(3,2),(3,3); -- patient with ID 3 has three diseases: Heart, Lungs and ENT
-- Select diseases for patients in one |-separated column
SELECT
p.PatientID,p.Name,Diseases=STUFF(dn.diseases,1,1,'')
FROM
#patients AS p
CROSS APPLY ( -- construct a |-separated column with all diseases for the patient
SELECT
'|'+d.DiseaseName
FROM
#registration AS r
INNER JOIN #disease AS d ON
d.ID=r.Disease
WHERE
r.PatientID=p.PatientID
FOR
XML PATH('')
) AS dn(diseases)
WHERE
EXISTS(SELECT 1 FROM #registration AS r WHERE r.PatientID=p.PatientID)
ORDER BY
p.PatientID;
DROP TABLE #disease;DROP TABLE #registration;DROP TABLE #patients;
Results:
+-----------+------+-----------------+
| PatientID | Name | Diseases |
+-----------+------+-----------------+
| 1 | abc | Heart |
| 2 | asa | Lungs|ENT |
| 3 | asd | Heart|Lungs|ENT |
+-----------+------+-----------------+
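For completeness: if you cannot change the schema just yet, the same shape of query can be pointed at your existing tables together with your dbo.fnSplit function. This is only a sketch, assuming the (corrected) table and column names from the question:
-- Hedged sketch against the original, non-normalized tables:
-- split the |-separated IDs per row, then concatenate the matching names.
SELECT
    r.PatientID, r.NAME,
    DiseaseName = STUFF(dn.names, 1, 1, '')
FROM Registration AS r
CROSS APPLY (
    SELECT ',' + d.DiseaseName
    FROM dbo.fnSplit(r.Disease, '|') AS s
    INNER JOIN Disease AS d ON d.ID = CAST(s.item AS INT)
    FOR XML PATH('')
) AS dn(names);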
I have two tables. One holds common data for articles, and the other holds translations for text. Something like this:
Articles Table
id | key | date
Translations Table
id | article_key | lang | title | content
key is a string and is the primary key.
article_key is a foreign key relating it to articles on the key column.
When I add a new row to the Articles, I'd like to be able to use the key that was just inserted and add a new row to the Translations Table.
I've read about OUTPUT INTO, but it doesn't seem like I can add other values to the Translations table. Also, I get an error about the table being on either side of a relationship.
Is my only course of action to INSERT into Articles followed by an INSERT with a SELECT subquery to get the key?
Edit: Expected output would be something like:
Articles
id | key | date
---------------
1 | somekey | 2018-05-31
Article Translations
id | article_key | lang | title | content
-----------------------------------------
1 | somekey | en | lorem | ipsum
Well this could work based on your description:
SET NOCOUNT ON;
DECLARE @Articles TABLE (id INT NOT NULL
    , [key] VARCHAR(50) NOT NULL
    , [date] DATE NOT NULL);
DECLARE @ArticleTranslations TABLE (id INT NOT NULL
    , article_key VARCHAR(50) NOT NULL
    , lang VARCHAR(50) NOT NULL
    , title VARCHAR(50) NOT NULL
    , content VARCHAR(50) NOT NULL);
INSERT @Articles (id, [key], [date]) -- This inserts into @Articles
OUTPUT INSERTED.id, INSERTED.[key], 'en', 'lorem', 'ipsum' -- This inserts into @ArticleTranslations
INTO @ArticleTranslations (id, article_key, lang, title, content) -- This inserts into @ArticleTranslations
VALUES (1, 'somekey', GETDATE()); -- This inserts into @Articles
SELECT *
FROM @Articles;
SELECT *
FROM @ArticleTranslations;
Try it out on Stack Exchange Data Explorer: https://data.stackexchange.com/stackoverflow/query/857925
Maybe your real setup isn't quite that simple, so let me know whether this works or not.
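If your real tables do have the foreign key, note that OUTPUT ... INTO is blocked precisely because the target table sits on one side of that relationship, so the two-statement route you described is the usual fallback. A sketch, assuming hypothetical table names Articles and ArticleTranslations; since [key] is supplied by the caller rather than generated, no SCOPE_IDENTITY() round-trip is needed:
BEGIN TRANSACTION;
-- Insert the article first...
INSERT INTO Articles (id, [key], [date])
VALUES (1, 'somekey', GETDATE());
-- ...then reuse the same key for the translation row.
INSERT INTO ArticleTranslations (id, article_key, lang, title, content)
VALUES (1, 'somekey', 'en', 'lorem', 'ipsum');
COMMIT TRANSACTION;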
We have an old database that we maintain and a new one that we have started using, and we need to periodically transfer data from the old database to the new one. At the moment we need to transfer (or merge, as it might also be called) data from one table, Student, in the old database into two tables (i.e. two targets) in the new one: Person and Student. The catch is that the data from the old source table should be divided between the two tables in the new one. For example (just for the sake of this post),
Old table 'Student'
------------------------------
IdNo | FirstName | LastName |
578 | John | Doe |
645 | Sara | Doe |
New table 'Person'
-----------
Id | IdNo |
11 | 578 |
23 | 645 |
New table 'Student'
--------------------------------------
Id | PersonId | FirstName | LastName |
101| 11 | John | Doe |
102| 23 | Sara | Doe |
And the procedure should take a parameter of the number of rows to merge.
How can this be accomplished?
Update
Perhaps it would be easier for you guys to know what I mean by pseudo code:
MERGE [NewDB].[dbo].[Person] p, [NewDB].[dbo].[Student] ns -- 2 targets, this does not work
USING [OldDB].[dbo].[Student] os -- source table, old student
ON p.IdNo = os.IdNo
WHEN MATCHED THEN -- Update existing rows
UPDATE p
SET p.SomeColumn1 = os.SomeColumn1 -- works. os (old student) is known here
UPDATE ns
SET ns.SomeColumn2 = os.SomeColumn2 -- Does not work. os is not known here
WHEN NOT MATCHED BY TARGET THEN -- Add new rows
INSERT INTO p (IdNo, SomeColumn1)
VALUES (os.IdNo, os.SomeColumn1); -- os (old Student) is known here
INSERT INTO ns (SomeColumn2)
VALUES (os.SomeColumn2); -- Does not work. os is not known here
I hope that makes it somewhat clearer.
May we assume the reason you want to do this in one statement instead of two is that one of the fields in the first table you are inserting into is an identity field (Id in the Person table in your example) that needs to be inserted into the second table?
If so, add an OUTPUT clause in the first merge statement so that you have the relationship and fields you require for the second merge statement.
declare @OldStudent table (IdNo int, FirstName varchar(30), LastName varchar(30))
declare @Person table (Id int identity, IdNo int)
declare @NewStudent table (Id int identity, PersonId int, FirstName varchar(30), LastName varchar(30))
insert @OldStudent (IdNo, FirstName, LastName)
select 578, 'John', 'Doe'
union all select 645, 'Sara', 'Doe'
declare @output table ([Action] varchar(20), PersonId int, IdNo int)
MERGE @Person p
USING @OldStudent os
ON p.IdNo = os.IdNo
WHEN MATCHED THEN -- Update existing rows
UPDATE SET IdNo = os.IdNo
WHEN NOT MATCHED BY TARGET THEN -- Add new rows
INSERT (IdNo) VALUES (os.IdNo)
OUTPUT $action, inserted.Id, inserted.IdNo into @output;
WITH src AS
(
select
o.IdNo, o.PersonId, os.FirstName, os.LastName
from
@output o
inner join @OldStudent os on os.IdNo = o.IdNo
)
MERGE INTO @NewStudent as ns
USING src
ON src.PersonId = ns.PersonId
WHEN MATCHED THEN
UPDATE SET FirstName = src.FirstName, LastName = src.LastName
WHEN NOT MATCHED BY TARGET THEN -- Add new rows
INSERT (PersonId, FirstName, LastName) VALUES (src.PersonId, src.FirstName, src.LastName);
select * from @Person
select * from @NewStudent
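One thing the question asked for that isn't shown above is the parameter for the number of rows to merge. A hedged tweak would be to restrict the source of the first MERGE, for example with a hypothetical @RowsToMerge parameter (not in the original):
-- Hedged sketch: only the USING clause of the first MERGE changes.
DECLARE @RowsToMerge int = 1; -- hypothetical parameter
MERGE @Person p
USING (SELECT TOP (@RowsToMerge) IdNo, FirstName, LastName
       FROM @OldStudent ORDER BY IdNo) AS os
ON p.IdNo = os.IdNo
WHEN MATCHED THEN
    UPDATE SET IdNo = os.IdNo
WHEN NOT MATCHED BY TARGET THEN
    INSERT (IdNo) VALUES (os.IdNo)
OUTPUT $action, inserted.Id, inserted.IdNo INTO @output;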
I'm trying to write some SQL to help transition from one database to another. It's gone well so far, but I ran into a problem I can't wrap my brain around.
Original:
Id (bigint) | ColA (XML) | ColB (XML) | ... | RecordCreation
The XML for each column with XML looks like the following:
<ColA count="3"><int>3</int><int>9</int><int>6</int></ColA>
For any particular row, the "count" is the same for every list (if ColA has 3 items, ColB will also have 3, etc.), but some lists hold strings instead of ints.
In the new database:
Id (bigint) | Index (int) | ColA (int) | ColB (nvarchar(20)) | ... | RecordCreation
So if I start with
5 | <ColA count="3"><int>9</int><int>8</int><int>7</int></ColA> | <ColB count="3"><string>A</string><string>B</string><string>C</string></ColB> | ... | 2014-01-15 ...
I need out:
5 | 1 | 9 | A | ... | 2014-01-15 ...
5 | 2 | 8 | B | ... | 2014-01-15 ...
5 | 3 | 7 | C | ... | 2014-01-15 ...
for each of the rows in the original DB, where Index (the second column) is the position in the XML lists that the values for that row come from.
Any ideas?
Thanks.
Edit:
A colleague showed me a dirty way that looks like it might get me there. This is to transfer some existing data into the new database for testing purposes; it's not production and won't be used often; we're just starving for data to test on.
declare @count int
set @count = 0
create table #T1 (Id bigint, [Index] int, ColA int, ColB nvarchar(20),..., MaxIndex int)
while @count < 12 begin
    insert into #T1
    select Id, @count,
        CAST(CONVERT(nvarchar(max), ColA.query('/ColA/int[sql:variable("@count")]/text()')) as int),
        CONVERT(nvarchar(20), ColB.query('/ColB/string[sql:variable("@count")]/text()')),
        ...,
        CAST(CONVERT(nvarchar(max), ColA.query('data(/ColA/@count)')) as int)
    from mytable
    set @count = @count + 1
end
Then I can insert from the temp table where Index < MaxIndex. There'll never be more than 12 indices, and I think the index is 0-based; easy fix if not. And each row may have a different count in its lists (but all lists of the same row will have the same count); that's why I went with MaxIndex and a temp table. And I may switch to a real table that I drop when I'm done if the performance is too bad.
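(For what it's worth, the final step described above might look something like this, with NewTable standing in as a hypothetical name for the destination table:)
-- Hedged sketch: copy only the populated slots; [Index] + 1 converts the
-- 0-based loop counter to the 1-based Index column in the expected output.
INSERT INTO NewTable (Id, [Index], ColA, ColB /*, ... */)
SELECT Id, [Index] + 1, ColA, ColB /*, ... */
FROM #T1
WHERE [Index] < MaxIndex;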
Try this query:
DECLARE @MyTable TABLE (
    ID INT PRIMARY KEY,
    ColA XML,
    ColB XML
);
INSERT @MyTable (ID, ColA, ColB)
SELECT 5, N'<ColA count="3"><int>9</int><int>8</int><int>7</int></ColA>', N'<ColB count="3"><string>A</string><string>B</string><string>C</string></ColB>';
SELECT x.ID,
    ab.*
FROM @MyTable x
CROSS APPLY (
    SELECT a.IntValue, b.VarcharValue
    FROM
    (
        SELECT ax.XmlCol.value('(text())[1]', 'INT') AS IntValue,
            ROW_NUMBER() OVER(ORDER BY (SELECT NULL)) AS RowNum -- XML columns can't be sorted directly; this relies on document order
        FROM x.ColA.nodes('/ColA/int') ax(XmlCol)
    ) a INNER JOIN
    (
        SELECT bx.XmlCol.value('(text())[1]', 'VARCHAR(50)') AS VarcharValue,
            ROW_NUMBER() OVER(ORDER BY (SELECT NULL)) AS RowNum
        FROM x.ColB.nodes('/ColB/string') bx(XmlCol)
    ) b ON a.RowNum = b.RowNum
) ab;
Output:
/*
ID IntValue VarcharValue
-- -------- ------------
5 9 A
5 8 B
5 7 C
*/
Note: the performance could very well be horrible (even for an ad-hoc task)
Assumption:
For any particular row, the "count" is the same for every list (if ColA has 3 items, ColB will also have 3, etc.), but some lists hold strings instead of ints.
I have two tables in the following structure
Table - MemoType
ID | MemoTypeID | MemoTypeName
1 | 1234 | A
2 | 5678 | B
Table - Memos
ID | MemoTypeID | Memo | ExtRef
1 | 1234 | TextOne | XYZ
2 | 5678 | TextTwo | XYZ
3 | 1234 | TextThree | TUV
We would like to update these tables to reflect the following data
Table - MemoType
ID | MemoTypeID | MemoTypeName
3 | 9999 | NewCombinedMemo
Table - Memos
ID | MemoTypeID | Memo | ExtRef
4 | 9999 | <A> TextOne <B> TextTwo | XYZ
5 | 9999 | <A> TextThree | TUV
The memos table has about 2 million rows with about 200,000 unique values for ExtRef.
My thinking is along the following lines (using .NET): populate a list of all unique ExtRef values from the Memos table; for each unique ExtRef, get a list of all Memo values; concatenate the strings as required; insert a new record for each ExtRef; delete the rest of the records for each ExtRef. The problem is that this would result in a large number of SQL operations.
Please suggest if there are other efficient strategies to achieve this directly in SQL.
This is indeed possible directly in SQL. The following creates table variables to demonstrate/test with the sample data, and it doesn't delete the original data.
The original data could easily be deleted using a clause checking on the memo type ID, but I'd want to hold off on that until I'd performed a manual check on such a large table!
-- setting the scene
DECLARE @MemoType TABLE
(
    Id int,
    MemoTypeId int,
    MemoTypeName varchar(30)
)
DECLARE @Memo TABLE
(
    Id int identity(1,1),
    MemoTypeId int,
    Memo varchar(500),
    ExtRef varchar(1000)
)
INSERT INTO @MemoType VALUES (1,1234,'A');
INSERT INTO @MemoType VALUES (2,5678,'B');
INSERT INTO @MemoType VALUES (3,9999,'NewCombinedMemo');
INSERT INTO @Memo VALUES (1234, 'TextOne', 'XYZ');
INSERT INTO @Memo VALUES (5678, 'TextTwo', 'XYZ');
INSERT INTO @Memo VALUES (1234, 'TextThree', 'TUV');
WITH cte(id, memotype, memotext, ref) as (
    SELECT Id, MemoTypeId, Memo, ExtRef FROM @Memo
)
INSERT INTO @Memo
SELECT 9999, stuff(memos,1,1,''), ref
FROM cte [outer]
CROSS APPLY (
    SELECT ',' + memotext
    FROM cte [inner]
    WHERE [outer].ref = [inner].ref
    FOR XML PATH('')
) n(memos)
GROUP BY ref, memos
select * from @Memo
The CTE logic/description was borrowed from string concatenate in group by function with other aggregate functions - adding in logic to insert and strip out the leading comma.
I placed your original query in a CTE. Then I cross applied with a subquery that gets a comma-delimited set of memos for each reference in the outer query. Since I also selected the memos column, I had to also group by the memos column. An initial comma needed to be stripped out with the STUFF function. Finally, the result is inserted.
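As for deleting the original rows once they've passed the manual check mentioned above, a minimal sketch against this sample data would be:
-- Hedged sketch: keep only the new combined memos (type 9999)
DELETE FROM @Memo
WHERE MemoTypeId <> 9999;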
Person
| p_id | f_name | l_name | address | city | state | zip |
Customer
| p_id | reward_points| balance |
Person_PhoneNum
| ppn_id | p_id | number |
The main issue is that I want to write a "retrieve" stored procedure that can search by any of Person's fields as well as by phone number or p_id, BUT I want it to be able to handle NULL values in the parameters. Here is the stored procedure so far:
CREATE PROCEDURE RetrieveCust(
@p_id AS varchar(50),
@f_name AS varchar(50),
@l_name AS varchar(50),
@address AS varchar(50),
@city AS varchar(50),
@state AS varchar(50),
@zip AS varchar(50),
@number AS varchar(50))
AS
BEGIN
END
I understand that I need to join the tables in order to match results but I don't know what I could do to handle NULL values. Any help would be amazing!
Any NULL in a parameter should match any value in the tables. Wherever you compare a parameter to a table field, OR that comparison together with a test for a NULL parameter:
( @f_name = f_name ) or ( @f_name is null )
then AND all of those comparisons together to make up your retrieval.
When the phone number parameter is NULL, the comparison against the phone number will produce more than one row for anyone with more than one phone number, so SELECT DISTINCT on p_id.
What does Customer have to do with the query? You're not selecting on any field in that table and you don't appear to be returning any values from the procedure.
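Putting that together, the body could look something like this; a minimal sketch assuming the table and column names from the question (Customer is left out for the reason just given, and NULL defaults are added so callers can simply omit filters):
CREATE PROCEDURE RetrieveCust(
    @p_id AS varchar(50) = NULL,
    @f_name AS varchar(50) = NULL,
    @l_name AS varchar(50) = NULL,
    @address AS varchar(50) = NULL,
    @city AS varchar(50) = NULL,
    @state AS varchar(50) = NULL,
    @zip AS varchar(50) = NULL,
    @number AS varchar(50) = NULL)
AS
BEGIN
    -- DISTINCT guards against duplicate persons when several phone numbers match
    SELECT DISTINCT p.p_id, p.f_name, p.l_name, p.address, p.city, p.state, p.zip
    FROM Person AS p
    LEFT JOIN Person_PhoneNum AS ppn ON ppn.p_id = p.p_id
    WHERE (p.p_id = @p_id OR @p_id IS NULL)
      AND (p.f_name = @f_name OR @f_name IS NULL)
      AND (p.l_name = @l_name OR @l_name IS NULL)
      AND (p.address = @address OR @address IS NULL)
      AND (p.city = @city OR @city IS NULL)
      AND (p.state = @state OR @state IS NULL)
      AND (p.zip = @zip OR @zip IS NULL)
      AND (ppn.number = @number OR @number IS NULL)
END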
Your WHERE clause could be something like this:
where (f_name = @f_name or @f_name is null)