I am currently building a data warehouse that processes data (for the sake of this question, let's just say one table) from a table that is updated every 15 minutes. My process stores a snapshot of the table, compares the refreshed version with the snapshot, and stores the difference - or delta - in a separate staging table that is then processed at the end of the day. At the end of the day I want a row describing the name of each column that has changed, with a timestamp, to be used later when creating snapshots at any point in time. It is worth noting that by the end of the day there can be multiple rows for each unique identifier, i.e. a row for every change someone might have actioned during the day.
So, I am stuck on the last part. I found this clever link Return column Names of Changed values with XML but the problem is that it is very inefficient when processing thousands of rows. I would be grateful to anyone who has any ideas on a more appropriate solution (excluding Change Data Capture)?
Thank you.
if OBJECT_ID('tempdb..#TempHistory') is Not Null
drop table #TempHistory
-- I just made up a few columns here; there are LOADS of them in the real query, but for governance reasons of course...
SELECT
distinct
a.ClaimId,
a.FirstName,
a.Surname,
a.Incident,
a.Total,
a.extractDate -- this field is created by the ETL process
into #TempHistory
FROM [Data Mart Test].[Staging].[Claim] a JOIN [Data Mart Test].[dbo].[Claim] b
ON a.ClaimId = b.ClaimId
WHERE
ISNULL(a.ClaimId,0) <> ISNULL(b.ClaimId,0) OR
ISNULL(a.FirstName,'') <> ISNULL(b.FirstName,'') OR
ISNULL(a.Surname,'') <> ISNULL(b.Surname,'') OR
ISNULL(a.Incident,'') <> ISNULL(b.Incident,'') OR
ISNULL(a.Total,0.0) <> ISNULL(b.Total,0.0) -- more column comparisons follow in the real query
if OBJECT_ID('tempdb..#TempHistoryA') is Not Null
drop table #TempHistoryA
select
*
, 1 as Version
into #TempHistoryA
FROM [Data Mart Test].[dbo].[Claim] where ClaimID in (select distinct claimid FROM #TempHistory)
if OBJECT_ID('tempdb..#TempHistoryB') is Not Null
drop table #TempHistoryB
SELECT *
,(RANK() OVER(PARTITION BY [ClaimId] ORDER BY [ExtractDate])) + 1 as Version
into #TempHistoryB
FROM [Data Mart Test].[Staging].[Claim] where ClaimID in (select distinct claimid FROM #TempHistory)
if OBJECT_ID('tempdb..#TempChanges') is Not Null
drop table #TempChanges
DECLARE @x xml
SET @x = (
SELECT
t2.ClaimID AS [@key]
, t2.Version AS [@version]
, ( SELECT t1.* FOR XML PATH('t1'), TYPE ) AS [*]
, ( SELECT t2.* FOR XML PATH('t2'), TYPE ) AS [*]
FROM #TempHistoryA AS t1
INNER JOIN #TempHistoryB AS t2
ON t1.ClaimID = t2.ClaimID
AND t1.Version = t2.Version - 1
FOR XML PATH('row'), ROOT('root')
);
WITH Nodes AS (
SELECT
C.value('../../@key', 'int') AS [Key]
, C.value('../../@version', 'int') AS Version_ID
, C.value('local-name(..)', 'varchar(255)') AS Version_Alias
, C.value('local-name(.)', 'varchar(max)') AS Field
, C.value('.', 'varchar(max)') AS Val
FROM @x.nodes('/root/row/*/*') AS T(C)
)
SELECT
[Key] as ClaimID
, x.ExtractDate
, Field
, Max(CASE Version_Alias WHEN 't1' THEN Val END) AS [Initial Value]
, Max(CASE Version_Alias WHEN 't2' THEN Val END) AS [New Value]
Into #TempChanges
FROM [Nodes] v
inner join [#TempHistoryB] x on x.ClaimId = v.[Key]
and x.Version = v.Version_ID
where Field not in ('ExtractDate','Version')
GROUP BY
[Key],
x.ExtractDate,
Field
HAVING Max(CASE Version_Alias WHEN 't1' THEN Val END) <> Max(CASE Version_Alias WHEN 't2' THEN Val END)
--Find records in [Data Mart Test].dbo.Claim that are not in [Data Mart Test].Staging.Claim
SELECT
*
FROM [Data Mart Test].dbo.Claim
WHERE ClaimId NOT IN (SELECT b.ClaimId FROM [Data Mart Test].Staging.Claim b)
delete from [Data Mart Test].[dbo].[Claim] where ClaimID in (select distinct claimid FROM #TempHistory)
delete from [#TempHistoryB]
where not exists
(
select
*
from
(select
claimid,
max(Version) as LastVersion
FROM #TempHistoryB b
group by ClaimId
) b
where b.ClaimId = [#TempHistoryB].ClaimId
and b.LastVersion = [#TempHistoryB].Version
)
insert into [Data Mart Test].[dbo].[Claim]
select
[ClaimId]
--a whole lot of other columns
from #TempHistoryB
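For reference, a set-based alternative to the XML shredding is to unpivot each pair of row versions with CROSS APPLY (VALUES ...) and keep only the columns whose values actually differ. This is only a sketch against the temp tables built above - the column list and the #TempChangesApply name are illustrative and would need extending to the real table:
SELECT
t2.ClaimId
, t2.ExtractDate
, c.Field
, c.[Initial Value]
, c.[New Value]
INTO #TempChangesApply
FROM #TempHistoryA t1
INNER JOIN #TempHistoryB t2
ON t1.ClaimId = t2.ClaimId
AND t1.Version = t2.Version - 1
CROSS APPLY (VALUES
('FirstName', CAST(t1.FirstName AS varchar(max)), CAST(t2.FirstName AS varchar(max)))
, ('Surname', CAST(t1.Surname AS varchar(max)), CAST(t2.Surname AS varchar(max)))
, ('Incident', CAST(t1.Incident AS varchar(max)), CAST(t2.Incident AS varchar(max)))
, ('Total', CAST(t1.Total AS varchar(max)), CAST(t2.Total AS varchar(max)))
) c (Field, [Initial Value], [New Value])
WHERE ISNULL(c.[Initial Value], '') <> ISNULL(c.[New Value], '')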
Related
I'm trying to copy data from one table to another, while transposing it and combining it into appropriate rows, with different columns in the second table.
First time posting. Yes, this may seem simple to everyone here. I have tried for a couple of hours to solve this. I do not have much support internally, but I have learned a great deal on this forum and managed to get so much accomplished with your other examples. I appreciate any help with this.
Table 1 has the data in this format.
Type Date Value
--------------------
First 2019 1
First 2020 2
Second 2019 3
Second 2020 4
Table 2 already has the Date rows populated and columns created. It is waiting for the Values from Table 1 to be placed in the appropriate column/row.
Date First Second
------------------
2019 1 3
2020 2 4
For an update, I might use two joins:
update t2
set first = tf.value,
second = ts.value
from table2 t2 left join
table1 tf
on t2.date = tf.date and tf.type = 'First' left join
table1 ts
on t2.date = ts.date and ts.type = 'Second'
where tf.date is not null or ts.date is not null;
Use conditional aggregation:
select date,
max(case when type='First' then value end) as First,
max(case when type='Second' then value end) as Second
from t
group by date
You can do conditional aggregation:
select date,
max(case when type = 'first' then value end) as first,
max(case when type = 'Second' then value end) as Second
from table t
group by date;
After that, you can use a CTE:
with cte as (
select date,
max(case when type = 'first' then value end) as first,
max(case when type = 'Second' then value end) as Second
from table t
group by date
)
update t2
set t2.First = t1.First,
t2.Second = t1.Second
from table2 t2 inner join
cte t1
on t1.date = t2.date;
Seems like you're after a PIVOT
DECLARE #Table1 TABLE
(
[Type] NVARCHAR(100)
, [Date] INT
, [Value] INT
);
DECLARE #Table2 TABLE(
[Date] int
,[First] int
,[Second] int
)
INSERT INTO #Table1 (
[Type]
, [Date]
, [Value]
)
VALUES ( 'First', 2019, 1 )
, ( 'First', 2020, 2 )
, ( 'Second', 2019, 3 )
, ( 'Second', 2020, 4 );
INSERT INTO #Table2 (
[Date]
)
VALUES (2019),(2020)
--Show us what's in the tables
SELECT * FROM #Table1
SELECT * FROM #Table2
--How to pivot the data from Table 1
SELECT * FROM #Table1
PIVOT (
MAX([Value]) --Pivot on this Column
FOR [Type] IN ( [First], [Second] ) --Make a column for each [Type] value listed here
) AS [pvt] --Table alias
--which gives
--Date First Second
------------- ----------- -----------
--2019 1 3
--2020 2 4
--Using that we can update #Table2
UPDATE [tbl2]
SET [tbl2].[First] = pvt.[First]
,[tbl2].[Second] = pvt.[Second]
FROM #Table1 tbl1
PIVOT (
MAX([Value]) --Pivot on this Column
FOR [Type] IN ( [First], [Second] ) --Make a column for each [Type] value listed here
) AS [pvt] --Table alias
INNER JOIN #Table2 tbl2 ON [tbl2].[Date] = [pvt].[Date]
--Results from #Table 2 after updated
SELECT * FROM #Table2
--which gives
--Date First Second
------------- ----------- -----------
--2019 1 3
--2020 2 4
My expected result should look like this:
----invoiceNo----
T17080003,INV14080011
But right now, I've only come up with the following query.
SELECT AccountDoc.jobCode,AccountDoc.shipmentSyskey,AccountDoc.docType,
CASE AccountDoc.docType
WHEN 'M' THEN
JobInvoice.invoiceNo
WHEN 'I' THEN
(STUFF((SELECT ', ' + RTRIM(CAST(AccountDoc.docNo AS VARCHAR(20)))
FROM AccountDoc LEFT OUTER JOIN JobInvoice
ON AccountDoc.principalCode = JobInvoice.principalCode AND
AccountDoc.jobCode = JobInvoice.jobCode
WHERE (AccountDoc.isCancelledByCN = 0)
AND (AccountDoc.docType = 'I')
AND (AccountDoc.jobCode = @jobCode)
AND (AccountDoc.shipmentSyskey = @shipmentSyskey)
AND (AccountDoc.principalCode = @principalCode) FOR XML
PATH(''), TYPE).value('.','NVARCHAR(MAX)'),1,2,' '))
END AS invoiceNo
FROM AccountDoc LEFT OUTER JOIN JobInvoice
ON JobInvoice.principalCode = AccountDoc.principalCode AND
JobInvoice.jobCode = AccountDoc.jobCode
WHERE (AccountDoc.jobCode = @jobCode)
AND (AccountDoc.isCancelledByCN = 0)
AND (AccountDoc.shipmentSyskey = @shipmentSyskey)
AND (AccountDoc.principalCode = @principalCode)
OUTPUT:
----invoiceNo----
T17080003
INV14080011
Explanation:
I want to select docNo from table AccountDoc if AccountDoc.docType = I.
Or select invoiceNo from table JobInvoice if AccountDoc.docType = M.
The problem is: if the same jobCode has two docTypes, M and I, how do I display both invoices?
You can achieve this by using a CTE and FOR XML. Below is sample code I created using tables similar to yours:
Create table #AccountDoc (
id int ,
docType char(1),
docNo varchar(10)
)
Create table #JobInvoice (
id int ,
invoiceNo varchar(10)
)
insert into #AccountDoc
select 1 , 'M' ,'M1234'
union all select 2 , 'M' ,'M2345'
union all select 3 , 'M' ,'M3456'
union all select 4 , 'I' ,'I1234'
union all select 5 , 'I' ,'I2345'
union all select 6 , 'I' ,'I3456'
insert into #JobInvoice
select 1 , 'INV1234'
union all select 2 , 'INV2345'
union all select 3 , 'INV3456'
select *
from #AccountDoc t1 left join #JobInvoice t2
on t1.id = t2.id
with cte as
(
select isnull( case t1.docType WHEN 'M' THEN t2.invoiceNo WHEN 'I' then
t1.docNo end ,'') invoiceNo
from #AccountDoc t1 left join #JobInvoice t2
on t1.id = t2.id )
select invoiceNo + ',' from cte For XML PATH ('')
You need to pivot your data if you have situations where there are two rows and you want two columns. Your SQL is a bit messy, particularly the part where you put an entire SELECT statement inside a CASE WHEN in the SELECT list of another query. Those two queries are virtually the same; you should look for a more optimal way of writing them. However, you can wrap your entire SQL in the following:
select
Jobcode, shipmentsyskey, [M],[I]
from(
--YOUR ENTIRE SQL GOES HERE BETWEEN THESE BRACKETS. Do not alter anything else, just paste your entire sql here
) yoursql
pivot(
max(invoiceno)
for docType in([M],[I])
)pvt
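If the single comma-separated invoiceNo from the expected result is still wanted, the two pivoted columns can be concatenated back together. A minimal, self-contained sketch with made-up values (CONCAT needs SQL Server 2012 or later):
SELECT jobCode
, STUFF(CONCAT(',' + [I], ',' + [M]), 1, 1, '') AS invoiceNo
FROM (VALUES ('JOB1', 'I', 'T17080003')
, ('JOB1', 'M', 'INV14080011')) d (jobCode, docType, invoiceNo)
PIVOT (MAX(invoiceNo) FOR docType IN ([M], [I])) pvt
--returns: JOB1    T17080003,INV14080011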
I have a table with a million records. The following is an example of one group of data:
select id,
id_depend,
Item,
[values] as 'Current Values'
from mytable
where id in (685690, 691282, 691297)
order by 1
The first id (685690) corresponds to a first movement, the second one (691282) cancels the first movement, and the third is the correction of the first movement. The id_depend field relates the movements to the original.
I need to show the same data, adding a new column with the values for the last related movement. I mean, sometimes the first movement (for other ids) is right and there are no corrections after it (e.g. id 691371).
This can help if I understand it correctly:
SELECT m.id,m.id_depend,m.Item,m.[values] [Current Values]
,CASE WHEN m.id_depend = 0 AND NOT EXISTS(SELECT 1 FROM mytable cor WHERE cor.id_depend = m.id)
THEN m.[values]
ELSE COALESCE((SELECT SUM(mt.[values]) FROM mytable mt WHERE mt.Item = m.Item AND mt.id < m.id)+m.[values],0)
END [Values Required]
FROM mytable m
There is also my query to play with:
CREATE TABLE #mytable (id BIGINT, id_depend BIGINT, Item VARCHAR(50), [values] DECIMAL(23,10))
INSERT INTO #mytable (id,id_depend,Item,[values])VALUES(685690,0,'1',216),(685690,0,'2',108)
,(691282,685690,'1',-216),(691282,685690,'2',-108)
,(691297,685690,'1',324),(691297,685690,'2',162)
,(691371,0,'1',100),(691371,0,'2',200),(691371,0,'3',300)
SELECT m.id,m.id_depend,m.Item,m.[values] [Current Values]
FROM #mytable m
SELECT m.id,m.id_depend,m.Item,m.[values] [Current Values]
,CASE WHEN m.id_depend = 0 AND NOT EXISTS(SELECT 1 FROM #mytable cor WHERE cor.id_depend = m.id)
THEN m.[values]
ELSE COALESCE((SELECT SUM(mt.[values]) FROM #mytable mt WHERE mt.Item = m.Item AND mt.id < m.id)+m.[values],0)
END [Values Required]
FROM #mytable m
DROP TABLE #mytable
Please let me know if you have any questions.
I was hoping someone perhaps could help. This problem was presented to me recently and I thought it would be easy, but (personally) found it a bit of a struggle. I can do it in Excel and SSRS - but I was curious if I was able to do it in SQL Server...
I would like to create a set of summary statistics (Max, Min) for a dataset. Easy enough... But I wanted to associate the corresponding date with those values.
Here is what my data looks like:
I have yearly data (not exactly - but beside the point) and I produce a pivoted summary like this using a series of CASE WHEN statements. This is fine - the output is seen on the right (above).
Each time I output this data - I like to provide a summary of the all the historic data (I only show the most recent data for sake of brevity). So... The question is how do I take an output like the one shown below (on different dates) and provide a summary data set like the one I have on the right?
So - a little background. I have already managed to join the Min and Max values using a UNION and that bit is fine. The tricky bit (I think) is how to form an INNER JOIN, using a sub query, with the Max or Min result values to return the corresponding Max or Min date, for each Type? Now it is highly likely that I am being a bit of an idiot and missing something obvious....but... Would really appreciate any help from anyone...
Many thanks in advance
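For the "tricky bit" described above, the core pattern is just a join from the per-TYPE MIN/MAX back to the detail rows to pick up the matching date. A minimal sketch (table and column names assumed from the question; ties on RESULT will return more than one date):
SELECT t.TYPE, mx.MAX_RESULT, t.[DATE] AS MAX_DATE
FROM table_name t
INNER JOIN (SELECT TYPE, MAX(RESULT) AS MAX_RESULT
FROM table_name
GROUP BY TYPE) mx
ON mx.TYPE = t.TYPE
AND mx.MAX_RESULT = t.RESULT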
This query will do the job, for all TYPE values:
SELECT
Description, [CAR], [CAT], [MAT], [EAT], [PAR], [MAR], [FAR], [MOT], [LOT], [COT], [ROT]
FROM
(SELECT
unpvt.TYPE
,unpvt.Description
,unpvt.value
FROM (
SELECT
t.TYPE
,CONVERT(sql_variant,MAX(maxResult.MAX_RESULT)) as MAX_RESULT
,CONVERT(sql_variant,MIN(minResult.MIN_RESULT)) as MIN_RESULT
,CONVERT(sql_variant,MAX(CASE WHEN maxResult.MAX_RESULT IS NOT NULL THEN t.DATE ELSE NULL END)) as MAX_DATE
,CONVERT(sql_variant,MIN(CASE WHEN minResult.MIN_RESULT IS NOT NULL THEN t.DATE ELSE NULL END)) as MIN_DATE
FROM
table_name t -- You need to set your table name
LEFT JOIN (SELECT
TYPE
,MIN(RESULT) as MIN_RESULT
FROM
table_name -- You need to set your table name
GROUP BY
TYPE) minResult
on minResult.TYPE = t.TYPE
and minResult.MIN_RESULT = t.RESULT
LEFT JOIN (SELECT
TYPE
,MAX(RESULT) as MAX_RESULT
FROM
table_name -- You need to set your table name
GROUP BY
TYPE) maxResult
on maxResult.TYPE = t.TYPE
and maxResult.MAX_RESULT = t.RESULT
GROUP BY
t.TYPE) U
unpivot (
value
for Description in (MAX_RESULT, MIN_RESULT, MAX_DATE, MIN_DATE)
) unpvt) P
PIVOT
(
MAX(value)
FOR TYPE IN ([CAR], [CAT], [MAT], [EAT], [PAR], [MAR], [FAR], [MOT], [LOT], [COT], [ROT])
)AS PVT
DEMO : SQLFIDDLE
CONVERT(sql_variant, ...) casts the columns to a common data type. This is a requirement of the UNPIVOT operator when you run it against a subquery in the FROM clause.
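As a tiny self-contained illustration of that requirement (made-up values): an INT result and a DATETIME date cannot be unpivoted into one column directly, so both are converted to sql_variant first:
SELECT Description, value
FROM (SELECT CONVERT(sql_variant, 42) AS MAX_RESULT
, CONVERT(sql_variant, CAST('20190101' AS datetime)) AS MAX_DATE) src
UNPIVOT (value FOR Description IN (MAX_RESULT, MAX_DATE)) unpvt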
It is possible to use the PIVOT command if your SQL Server is 2005 or later, but the raw data for the pivot needs to be in a specific format, and the query I came up with is ugly:
WITH minmax AS (
SELECT TYPE, RESULT, [date]
, row_number() OVER (partition BY TYPE ORDER BY TYPE, RESULT) a
, row_number() OVER (partition BY TYPE ORDER BY TYPE, RESULT DESC) d
FROM t)
SELECT info
, cam = CASE charindex('date', info)
WHEN 0 THEN cast(cast(cam AS int) AS varchar(50))
ELSE cast(cam AS varchar(50))
END
, car = CASE charindex('date', info)
WHEN 0 THEN cast(cast(car AS int) AS varchar(50))
ELSE cast(car AS varchar(50))
END
, cat = CASE charindex('date', info)
WHEN 0 THEN cast(cast(cat AS int) AS varchar(50))
ELSE cast(cat AS varchar(50))
END
FROM (SELECT TYPE, 'maxres' info, RESULT value FROM minmax WHERE 1 = d
UNION ALL
SELECT TYPE, 'minres' info, RESULT value FROM minmax WHERE 1 = a
UNION ALL
SELECT TYPE, 'maxdate' info , [date] value FROM minmax WHERE 1 = d
UNION ALL
SELECT TYPE, 'mindate' info , [date] value FROM minmax WHERE 1 = a) DATA
PIVOT
(max(value) FOR TYPE IN ([CAM], [CAR], [CAT])) pvt
It's only a proof of concept, so in SQLFiddle I have used a reduced set of fake data (3 rows for each of 3 Types).
After the data preparation
SELECT TYPE, 'maxres' info, RESULT value FROM minmax WHERE 1 = d
UNION ALL
SELECT TYPE, 'minres' info, RESULT value FROM minmax WHERE 1 = a
UNION ALL
SELECT TYPE, 'maxdate' info , [date] value FROM minmax WHERE 1 = d
UNION ALL
SELECT TYPE, 'mindate' info , [date] value FROM minmax WHERE 1 = a
the value column is implicitly cast to the more complex data type, in this case DATETIME (you cannot have different data types in the same column). To see the data in the intended way an explicit cast is needed, and it is done with the CASE and CAST in
, cam = CASE charindex('date', info)
WHEN 0 THEN cast(cast(cam AS int) AS varchar(50))
ELSE cast(cam AS varchar(50))
END
the CASE checks the data type by looking for the substring 'date' in the info column, then casts the row value back to INT for the minres and maxres rows, and in every case casts the value to varchar(50) so the column has a single data type again.
UPDATE
With sql_variant the CASE/CAST block is not needed (thanks Ryx5):
WITH minmax AS (
SELECT TYPE, RESULT, [date]
, row_number() OVER (partition BY TYPE ORDER BY TYPE, RESULT) a
, row_number() OVER (partition BY TYPE ORDER BY TYPE, RESULT DESC) d
FROM table_name)
SELECT info
, [CAM], [CAR], [CAT]
FROM (SELECT TYPE, 'maxres' info, cast(RESULT as sql_variant) value
FROM minmax WHERE 1 = d
UNION ALL
SELECT TYPE, 'minres' info, cast(RESULT as sql_variant) value
FROM minmax WHERE 1 = a
UNION ALL
SELECT TYPE, 'maxdate' info , cast([date] as sql_variant) value
FROM minmax WHERE 1 = d
UNION ALL
SELECT TYPE, 'mindate' info , cast([date] as sql_variant) value
FROM minmax WHERE 1 = a) DATA
PIVOT
(max(value) FOR TYPE IN ([CAM], [CAR], [CAT])) pvt
I have three address line columns, aline1, aline2, aline3 for a street
address. As staged from inconsistent data, any or all of them can be
blank. I want to move the first non-blank to addrline1, 2nd non-blank
to addrline2, and clear line 3 if there aren't three non blank lines,
else leave it. ("First" means aline1 is first unless it's blank,
aline2 is first if aline1 is blank, aline3 is first if aline1 and 2
are both blank)
The rows in this staging table do not have a key and there could be
duplicate rows. I could add a key.
Not counting a big case statement that enumerates the possible
combination of blank and non blank and moves the fields around, how
can I update the table? (This same problem comes up with a lot more
than 3 lines, so that's why I don't want to use a case statement)
I'm using Microsoft SQL Server 2008
Another alternative. It uses the undocumented %%physloc%% function to work without a key. You would be much better off adding a key to the table.
CREATE TABLE #t
(
aline1 VARCHAR(100),
aline2 VARCHAR(100),
aline3 VARCHAR(100)
)
INSERT INTO #t VALUES(NULL, NULL, 'a1')
INSERT INTO #t VALUES('a2', NULL, 'b2')
;WITH cte
AS (SELECT *,
MAX(CASE WHEN RN=1 THEN value END) OVER (PARTITION BY %%physloc%%) AS new_aline1,
MAX(CASE WHEN RN=2 THEN value END) OVER (PARTITION BY %%physloc%%) AS new_aline2,
MAX(CASE WHEN RN=3 THEN value END) OVER (PARTITION BY %%physloc%%) AS new_aline3
FROM #t
OUTER APPLY (SELECT ROW_NUMBER() OVER (ORDER BY CASE WHEN value IS NULL THEN 1 ELSE 0 END, idx) AS
RN, idx, value
FROM (VALUES(1,aline1),
(2,aline2),
(3,aline3)) t (idx, value)) d)
UPDATE cte
SET aline1 = new_aline1,
aline2 = new_aline2,
aline3 = new_aline3
SELECT *
FROM #t
DROP TABLE #t
Here's an alternative
Sample table for discussion; don't worry about the nonsensical data, it just needs to be NULL or not:
create table taddress (id int,a varchar(10),b varchar(10),c varchar(10));
insert taddress
select 1,1,2,3 union all
select 2,1, null, 3 union all
select 3,null, 1, 2 union all
select 4,null,null,2 union all
select 5,1, null, null union all
select 6,null, 4, null
The query, which really just normalizes the data
;with tmp as (
select *, rn=ROW_NUMBER() over (partition by t.id order by sort)
from taddress t
outer apply
(
select 1, t.a where t.a is not null union all
select 2, t.b where t.b is not null union all
select 3, t.c where t.c is not null
--- EXPAND HERE
) u(sort, line)
)
select t0.id, t1.line, t2.line, t3.line
from taddress t0
left join tmp t1 on t1.id = t0.id and t1.rn=1
left join tmp t2 on t2.id = t0.id and t2.rn=2
left join tmp t3 on t3.id = t0.id and t3.rn=3
--- AND HERE
order by t0.id
EDIT - for the update back into table
;with tmp as (
select *, rn=ROW_NUMBER() over (partition by t.id order by sort)
from taddress t
outer apply
(
select 1, t.a where t.a is not null union all
select 2, t.b where t.b is not null union all
select 3, t.c where t.c is not null
--- EXPAND HERE
) u(sort, line)
)
UPDATE taddress
set a = t1.line,
b = t2.line,
c = t3.line
from taddress t0
left join tmp t1 on t1.id = t0.id and t1.rn=1
left join tmp t2 on t2.id = t0.id and t2.rn=2
left join tmp t3 on t3.id = t0.id and t3.rn=3
Update - Changed statement to an Update statement. Removed Case statement solution
With this solution, you will need a unique key in the staging table.
With Inputs As
(
Select PK, 1 As LineNum, aline1 As Value
From StagingTable
Where aline1 Is Not Null
Union All
Select PK, 2, aline2
From StagingTable
Where aline2 Is Not Null
Union All
Select PK, 3, aline3
From StagingTable
Where aline3 Is Not Null
)
, ResequencedInputs As
(
Select PK, Value
, Row_Number() Over( Partition By PK Order By LineNum ) As LineNum
From Inputs
)
, NewValues As
(
Select S.PK
, Min( Case When R.LineNum = 1 Then R.Value End ) As addrline1
, Min( Case When R.LineNum = 2 Then R.Value End ) As addrline2
, Min( Case When R.LineNum = 3 Then R.Value End ) As addrline3
From StagingTable As S
Left Join ResequencedInputs As R
On R.PK = S.PK
Group By S.PK
)
Update T
Set addrline1 = T2.addrline1
, addrline2 = T2.addrline2
, addrline3 = T2.addrline3
From OtherTable As T
Left Join NewValues As T2
On T2.PK = T.PK
R. A. Cyberkiwi, Thomas, and Martin, thanks very much - these were very generous responses by each of you. All of these answers were the type of spoonfeeding I was looking for. I'd say they all rely on a key-like device and work by dividing addresses into lines, some of which are empty and some of which aren't, excluding the empties. In the case of lines of addresses, in my opinion this is semantically a gimmick to make the problem fit what SQL does well, and it's not a natural way to conceptualize the problem. Address lines are not "really" separate rows in a table that just got denormalized for a report. But that's debatable and whether you agree or not, I (a rank beginner) think each of your alternatives are idiomatic solutions worth elaborating on and studying.
I also get lots of similar cases where there really is normalization to be done - e.g., collatDesc1, collatCode1, collatLastAppraisal1, ... collatLastAppraisal5, with more complex criteria about what to exclude and how to order than with addresses, and I think techniques from your answers will be helpful.
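For that kind of repeating column group, the same CROSS APPLY (VALUES ...) idea from the answers should carry over. A purely hypothetical sketch (the StagingTable, PK, and collat* names are assumptions based on the comment above, showing only the first two groups):
SELECT s.PK, c.collatSeq, c.collatDesc, c.collatCode, c.collatLastAppraisal
FROM StagingTable s
CROSS APPLY (VALUES
(1, s.collatDesc1, s.collatCode1, s.collatLastAppraisal1)
, (2, s.collatDesc2, s.collatCode2, s.collatLastAppraisal2)
-- ...and so on up to the 5th group
) c (collatSeq, collatDesc, collatCode, collatLastAppraisal)
WHERE c.collatDesc IS NOT NULL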
%%physloc%% is fun - since I'm able to create a key in this case I won't use it (as Martin advises). There was other stuff in Martin's answer I wasn't familiar with either, and I'm still tossing it all around.
FWIW, here's the trigger I tried out; I don't know that I'll actually use it for the problem at hand. I think this qualifies as a "bubble sort", with the swapping expressed in a peculiar way.
create trigger fixit on lines
instead of insert as
declare @maybeblank1 as varchar(max)
declare @maybeblank2 as varchar(max)
declare @maybeblank3 as varchar(max)
set @maybeBlank1 = (select line1 from inserted)
set @maybeBlank2 = (select line2 from inserted)
set @maybeBlank3 = (select line3 from inserted)
declare @counter int
set @counter = 0
while @counter < 3
begin
set @counter = @counter + 1
if @maybeBlank2 = ''
begin
set @maybeBlank2 = @maybeblank3
set @maybeBlank3 = ''
end
if @maybeBlank1 = ''
begin
set @maybeBlank1 = @maybeBlank2
set @maybeBlank2 = ''
end
end
select * into #kludge from inserted
update #kludge
set line1 = @maybeBlank1,
line2 = @maybeBlank2,
line3 = @maybeBlank3
insert into lines
select * from #kludge
You could make an insert and update trigger that checks whether the fields are empty and then moves them.
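A minimal sketch of that idea, assuming a table named lines with a key column id and columns line1-line3 (the key column is an assumption; the compaction reuses the VALUES/ROW_NUMBER approach from the earlier answers and relies on the default RECURSIVE_TRIGGERS OFF setting):
create trigger trg_lines_shift on lines
after insert, update
as
begin
set nocount on
-- recompute the three lines for every inserted/updated row,
-- pushing non-blank values up and blanking the rest
update l
set line1 = d.v1,
line2 = d.v2,
line3 = d.v3
from lines l
join inserted i on i.id = l.id
cross apply (
select max(case when rn = 1 then line end) as v1,
max(case when rn = 2 then line end) as v2,
max(case when rn = 3 then line end) as v3
from (select line,
row_number() over (order by idx) as rn
from (values (1, i.line1), (2, i.line2), (3, i.line3)) v(idx, line)
where nullif(ltrim(rtrim(line)), '') is not null) s
) d
end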