Simple concept: we are basically doing some auditing, comparing what came in against what actually happened during processing. I am looking for a better way to write a query that does side-by-side table comparisons on columns that are slightly different in name and potentially in type.
DB Layout:
Tables (* marks the join condition)
Log (the unaltered data record)
- LogID
- RecordID*
- Name
- Date
- Address
- Products
- etc.
Audit (the post-processing record)
- CardID*
- CarName
- DeploymentDate
- ShippingAddress
- Options
- etc.
For example, the following would work if you look past the annoying complexity of writing it and the performance issues.
The query just joins the left and right tables and selects each pair of fields as a single string, showing the fields matched up.
select
    cast(log.RecordID as varchar(40)) + '=' + cast(audit.CardID as varchar(40)),
    log.Name + '=' + audit.CarName,
    cast(log.Date as varchar(40)) + '=' + cast(audit.DeploymentDate as varchar(40)),
    log.Address + '=' + audit.ShippingAddress,
    log.Products + '=' + audit.Options
    --etc
from Audit audit, Log log
where audit.CardID = log.RecordID
Which would output something like:
1=1 Test=TestName 11/09/2009=11/10/2009 null=My Address null=Wheels
This works but is extremely annoying to build. Another thing I thought of was to alias the columns, UNION ALL the two tables, and order the rows so each record pair appears in list form. That would let me see the column comparisons, at the obvious cost of the UNION ALL; a sketch of that query follows the example output below.
ie:
Log 1 Test 11/09/2009 null, null
Audit 1 TestName 11/10/2009 My Address Wheels
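For reference, a minimal sketch of that UNION ALL version against the columns above (the casts are only there because the paired columns may differ in type):
select 'Log' as Source,
       cast(l.RecordID as varchar(40)) as ID,
       l.Name,
       cast(l.Date as varchar(40)) as [Date],
       l.Address,
       l.Products
from Log l
union all
select 'Audit',
       cast(a.CardID as varchar(40)),
       a.CarName,
       cast(a.DeploymentDate as varchar(40)),
       a.ShippingAddress,
       a.Options
from Audit a
order by ID, Source desc   -- keeps each Log row directly above its Audit row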
Any suggestions on a better way to audit this data?
Let me know what other questions you may have.
Additional notes: we are going to want to reduce the unimportant information, so in some cases we might null out the column when the two values are equal (but I know it's too slow):
case when log.[Name]<>audit.[CarName] then (log.[Name] + '!=' + audit.[CarName]) else null end
or, if we are doing it the second way:
nullif(log.[Name], audit.[CarName]) as [Name]
,nullif(audit.[CarName], log.[Name]) as [Name]
I've found the routine given here by Jeff Smith to be helpful for doing table comparisons in the past. It might at least give you a good base to start from. The code given at that link is:
CREATE PROCEDURE CompareTables(@table1 varchar(100),
    @table2 varchar(100), @T1ColumnList varchar(1000),
    @T2ColumnList varchar(1000) = '')
AS
-- Table1, Table2 are the tables or views to compare.
-- T1ColumnList is the list of columns to compare, from table1.
-- Just list them comma-separated, like in a GROUP BY clause.
-- If T2ColumnList is not specified, it is assumed to be the same
-- as T1ColumnList. Otherwise, list the columns of Table2 in
-- the same order as the columns in table1 that you wish to compare.
--
-- The result is all records from either table that do NOT match
-- the other table, along with which table the record is from.
declare @SQL varchar(8000);
IF @T2ColumnList = '' SET @T2ColumnList = @T1ColumnList
set @SQL = 'SELECT ''' + @table1 + ''' AS TableName, ' + @T1ColumnList +
    ' FROM ' + @table1 + ' UNION ALL SELECT ''' + @table2 + ''' As TableName, ' +
    @T2ColumnList + ' FROM ' + @table2
set @SQL = 'SELECT Max(TableName) as TableName, ' + @T1ColumnList +
    ' FROM (' + @SQL + ') A GROUP BY ' + @T1ColumnList +
    ' HAVING COUNT(*) = 1'
exec (@SQL)
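For the tables above, a hypothetical call might look like this (parameters matched up by position, per the comments in the procedure):
EXEC CompareTables 'Log', 'Audit',
    'RecordID, Name, Date, Address, Products',
    'CardID, CarName, DeploymentDate, ShippingAddress, Options'
Note that it matches whole rows via GROUP BY / HAVING COUNT(*) = 1, so it flags unmatched rows rather than pinpointing which individual column differs.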
Would something like this work for you:
select
    (Case when log.RecordID = audit.CardID THEN 1 else 0 END) as RecordIdEqual,
    (Case when log.Name = audit.CarName THEN 1 else 0 END) as NamesEqual,
    (Case when log.Date = audit.DeploymentDate THEN 1 else 0 END) as DatesEqual,
    (Case when log.Address = audit.ShippingAddress THEN 1 else 0 END) as AddressEqual,
    (Case when log.Products = audit.Options THEN 1 else 0 END) as ProductsEqual
    --etc
from Audit audit, Log log
where audit.CardID = log.RecordID
This will give you a breakdown of what's equal, keyed by column name. It seems like it might be easier than doing all the casting and having to interpret the resulting string...
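If you only want to see the rows where something differs, one option (just a sketch, reusing the aliases above) is to wrap that query and filter on the flags:
select *
from (
    select
        log.RecordID,
        (Case when log.RecordID = audit.CardID THEN 1 else 0 END) as RecordIdEqual,
        (Case when log.Name = audit.CarName THEN 1 else 0 END) as NamesEqual
        --etc
    from Audit audit, Log log
    where audit.CardID = log.RecordID
) t
where t.RecordIdEqual = 0 or t.NamesEqual = 0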
Related
I want to build a dynamic SQL query where I can use data from another table as where condition. Let's assume I have two tables: one table with financial data and the other one with conditions. They look something like this:
Table sales
c006    mesocomp    c048         c020    c021
------  ----------  -----------  ------  ------
120     01TA        MICROSOFT    2       239
and a condition table with the following data:
dimension     operator    wert_db
------------  ----------  -----------
sales.c006    <           700
sales.c048    not like    'MIC%'
sales.c021    in          (203,206)
I want to select all data from sales with the conditions stated in the condition table. So I have an SQL Query as follows:
SELECT *
FROM sales
WHERE sales.c006 < 700
AND sales.c048 NOT LIKE 'MIC%'
AND sales.c021 IN (203, 206)
Since you've posted no attempt to solve or research this yourself, I'll point you in a direction to get you started.
Your question already mentions using Dynamic SQL, so I assume you know at least what that is. You're going to populate a string variable, starting with 'SELECT * FROM Sales '.
You can use the STUFF...FOR XML PATH technique to assemble the conditions rows into a WHERE clause.
One change from the linked example: you'll need to concatenate dimension, operator and wert_db into one artificial column in the innermost SELECT. Also, instead of separating with a comma you'll separate with ' AND ', and the STUFF parameters change to strip the length of ' AND ' rather than the length of a comma.
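A minimal sketch of that approach, assuming the condition table is simply named conditions with the columns shown above (the TYPE/.value() step keeps characters like < from being XML-escaped):
DECLARE @where varchar(max)

SELECT @where = STUFF(
    (SELECT ' AND ' + dimension + ' ' + operator + ' ' + wert_db
     FROM conditions
     FOR XML PATH(''), TYPE).value('.', 'varchar(max)'),
    1, 5, '')

EXEC('SELECT * FROM sales WHERE ' + @where)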
DECLARE @tblSales TABLE
(
    c006 VARCHAR(10),
    mesocomp VARCHAR(100),
    c048 VARCHAR(100),
    c020 VARCHAR(100),
    c021 VARCHAR(100)
)
INSERT INTO @tblSales(c006, mesocomp, c048, c020, c021)
VALUES(120,'01Ta','Microsoft','2','239')
SELECT * FROM @tblSales
DECLARE @tblCondition TABLE
(
    Id INT,
    dimension VARCHAR(100),
    operator VARCHAR(10),
    wert_db VARCHAR(100)
)
INSERT INTO @tblCondition(Id, dimension, operator, wert_db) VALUES(1,'sales.c006','<','700')
INSERT INTO @tblCondition(Id, dimension, operator, wert_db) VALUES(1,'sales.c048','not like','''MIC%''')
INSERT INTO @tblCondition(Id, dimension, operator, wert_db) VALUES(1,'sales.c021','in','(203,206)')
DECLARE @whereCondition VARCHAR(400)
SELECT @whereCondition = COALESCE(@whereCondition + ' ', '') + dimension + ' ' + operator + ' ' + wert_db + ' AND '
FROM @tblCondition
SET @whereCondition = SUBSTRING(@whereCondition, 0, LEN(@whereCondition) - 3)
PRINT @whereCondition
DECLARE @sql VARCHAR(4000)
SET @sql = 'SELECT * FROM @tblSales Where ' + @whereCondition
PRINT @sql
EXEC(@sql)
-- note: the conditions reference the alias sales, and a table variable is not visible inside EXEC(),
-- so please use real tables (e.g. your actual sales table) to get the final statement working.
I have a scenario where I need to copy data from one table to another. While copying, I need to store the identity values of table1 and table2.
create procedure procedure1
    (@sourceClientNumber int,
     @destinationClientNumber int,
     @statusID varchar(10))
as
declare @scriptSql nvarchar(max);
declare @Temp_ClientScriptTable table
    (newScriptID int,
     oldScriptID int);
begin
    SET @CreatedBy = 'Initial Creation'
    SET @CreatedByName = 'Initial Creation'
    SET @CreatedDateTime = GETDATE()
    SET @scriptSql = 'Merge into [dbo].[Client_' + @destinationClientNumber + '_Script] using
        (select ScriptID,
                ScriptName,
                ScriptVersion,
                Description,
                FacilityID,
                StatusID,
                Shared,
                ScriptTypeID
         from [dbo].[Client_' + @sourceClientNumber + '_Script]
         where statusID = ' + @statusID + '
        ) scripts on 1 = 0
        When not matched then
            insert ([ScriptName],
                    [ScriptVersion],
                    [CreatedBy],
                    [CreatedByName],
                    [CreatedDateTime],
                    [Description],
                    [FacilityID],
                    [StatusID],
                    [Shared],
                    [ScriptTypeID])
            values (scripts.ScriptName,
                    scripts.ScriptVersion,'
                    + @CreatedBy + ','
                    + @CreatedByName + ','
                    + @CreatedDateTime + ',
                    scripts.Description,
                    scripts.FacilityID,
                    scripts.StatusID,
                    scripts.Shared,
                    scripts.ScriptTypeID)
        output Inserted.ScriptID, scripts.ScriptID
        into' + @Temp_ClientScriptTable + '(newScriptID, oldScriptID)'
    EXECUTE sp_executesql @scriptSql
I am getting an error at @Temp_ClientScriptTable.
Could you please help me with this?
The main problem here is a faulty database design.
The fact that you have multiple tables with the same structure, differing only by a number inside the name, means you are mixing data (the number) with metadata (the table name).
Instead of having a different table for each client, you should add the client number as a column of a single table.
If you can do that, it will also eliminate the need for dynamic SQL everywhere you need to address this table.
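A sketch of what that might look like, reusing the variables and a handful of the columns from the question (names and types here are illustrative):
CREATE TABLE dbo.ClientScript (
    ScriptID      int IDENTITY(1,1) PRIMARY KEY,
    ClientNumber  int NOT NULL,   -- replaces the number embedded in the table name
    ScriptName    varchar(200),
    ScriptVersion varchar(50),
    StatusID      varchar(10)
    -- Description, FacilityID, Shared, ScriptTypeID, CreatedBy, etc. as in the question
)
With that in place, the same MERGE/OUTPUT trick works without any dynamic SQL, so the table variable is no longer a problem:
MERGE INTO dbo.ClientScript AS target
USING (SELECT ScriptID, ScriptName, ScriptVersion, StatusID
       FROM dbo.ClientScript
       WHERE ClientNumber = @sourceClientNumber
         AND StatusID = @statusID) AS scripts
ON 1 = 0
WHEN NOT MATCHED THEN
    INSERT (ClientNumber, ScriptName, ScriptVersion, StatusID)
    VALUES (@destinationClientNumber, scripts.ScriptName, scripts.ScriptVersion, scripts.StatusID)
OUTPUT inserted.ScriptID, scripts.ScriptID
INTO @Temp_ClientScriptTable (newScriptID, oldScriptID);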
The root cause is that @Temp_ClientScriptTable is a table variable and cannot participate in string concatenation.
The last line of your @scriptSql should be
into @Temp_ClientScriptTable(newScriptID, oldScriptID)'
instead of
into' + @Temp_ClientScriptTable + '(newScriptID, oldScriptID)'
Also, you need to add single quotes around varchar variables when using them in string concatenation, and CAST/CONVERT the int and datetime variables before concatenating them. Your final @scriptSql will be:
SET @scriptSql = 'Merge into [dbo].[Client_' + CAST(@destinationClientNumber AS varchar(10)) + '_Script] using
    (select ScriptID,
            ScriptName,
            ScriptVersion,
            Description,
            FacilityID,
            StatusID,
            Shared,
            ScriptTypeID
     from [dbo].[Client_' + CAST(@sourceClientNumber AS varchar(10)) + '_Script]
     where statusID = ''' + @statusID + '''
    ) scripts on 1 = 0
    When not matched then
        insert ([ScriptName],
                [ScriptVersion],
                [CreatedBy],
                [CreatedByName],
                [CreatedDateTime],
                [Description],
                [FacilityID],
                [StatusID],
                [Shared],
                [ScriptTypeID])
        values (scripts.ScriptName,
                scripts.ScriptVersion,'''
                + @CreatedBy + ''','''
                + @CreatedByName + ''','''
                + CONVERT(varchar(23), @CreatedDateTime, 121) + ''',
                scripts.Description,
                scripts.FacilityID,
                scripts.StatusID,
                scripts.Shared,
                scripts.ScriptTypeID)
    output Inserted.ScriptID, scripts.ScriptID
    into @Temp_ClientScriptTable(newScriptID, oldScriptID)'
To debug this kind of issue, always print the concatenated query and check what it will look like:
print @scriptSql
I have inherited a query that seems to be a bit of a mess, or at least when I look at it I think that there must be a better way of doing it. The query:
select distinct 'INSERT INTO table1 SELECT '''+ACCT_NUM +''',* FROM table2 where library=''' + isnull(library,'') + '''' +
CASE
when CO is not NULL then ' and CO=''' +CO + ''''
else isnull(CO,'')
END +
CASE
when ACCTNO is not NULL then CASE WHEN LEN(acctNO)>5 then ' and ACCTNO=''' +ACCTNO + '''' else ' and ACCTNO like ''' +rtrim(ACCTNO) +'%'+ '''' END
else isnull(ACCTNO,'')
END +
CASE
when FILEDN is not NULL then ' and coalesce(nullif(PRMSTE,''''),ACCSTE)=''' +FILEDN + ''''
else isnull(FILEDN,'')
END +
CASE
when LOCNUM is not NULL then ' and LOCNUM=''' +LOCNUM + ''''
else isnull(LOCNUM,'')
END MySQL
INTO #temp
from table3
It generates a table of INSERT statements that are then looped through and executed via sp_executesql. Any help would be appreciated. I've been looking at this query now for over 2 months and I just can't seem to wrap my head around a better way of doing it.
Specifically I would like to see this taken down to one 'INSERT' statement or a 'SELECT ... INTO ...'
I think you need to throw this SQL away completely and start from scratch.
Generating a table of INSERT statements and then executing them? That is a very bad idea.
Insert the VALUES you need into a temporary table (or straight into table1), not INSERT statements; see the sketch below.
I would suggest doing some research on temporary tables - have a look at this article as a start: http://www.codeproject.com/Articles/42553/Quick-Overview-Temporary-Tables-in-SQL-Server
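A minimal sketch of the set-based version, assuming table2 holds the source rows, table3 holds the per-account filter values, and the column names are as in the generated statements above (untested, so treat it as a starting point):
INSERT INTO table1
SELECT t3.ACCT_NUM, t2.*
FROM table3 t3
JOIN table2 t2
    ON  t2.library = ISNULL(t3.library, '')
    AND (t3.CO IS NULL OR t2.CO = t3.CO)
    AND (t3.ACCTNO IS NULL
         OR (LEN(t3.ACCTNO) > 5  AND t2.ACCTNO = t3.ACCTNO)
         OR (LEN(t3.ACCTNO) <= 5 AND t2.ACCTNO LIKE RTRIM(t3.ACCTNO) + '%'))
    AND (t3.FILEDN IS NULL OR COALESCE(NULLIF(t2.PRMSTE, ''), t2.ACCSTE) = t3.FILEDN)
    AND (t3.LOCNUM IS NULL OR t2.LOCNUM = t3.LOCNUM)
Each predicate mirrors one CASE branch of the original generator: the IS NULL checks reproduce the behaviour of simply omitting the clause when the value in table3 is NULL.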
A number of times over the last month I've had to replace NULL fields with 0 in every column returned from a query.
To save a lot of time (some of these queries return a high number of columns) I've been using the following, then pasting the results for the relevant columns into a new query:
select ', isnull(' + COLUMN_NAME + ', 0)' + ' as ' + COLUMN_NAME
from INFORMATION_SCHEMA.COLUMNS
where TABLE_NAME = 'summary_by_scca_sales_category'
and TABLE_SCHEMA = 'property'
Essentially I'm wondering if there's a better way I can do this? Ideally a method where I could automatically apply ISNULL to all columns being returned in a query (without using two queries).
For example:
I want to take a query like:
select *
from tablename
And for every column returned by * replace null results with 0 without having to write an isnull() line for each column.
Edit:
I will accomplish this with a view (doh, should have thought of that). For interest's and education's sake, is there a way to do something like this with code as well?
You could create a VIEW against the tables in question where the ISNULL logic you want is set up. Then queries against the views would return the data you want.
EDIT:
As requested, here is some sample code to create the VIEWs automatically. This is pretty gross, but for something that only has to be run once it will work. Beware of type issues (you stated everything should become 0, so I assume all your columns are of a suitable numeric type):
DECLARE @table_def varchar(max)
SET @table_def = 'CREATE VIEW <tname>_NoNull AS SELECT '

SELECT @table_def = REPLACE(@table_def, '<tname>', t.name) +
    'ISNULL(' + c.name + ', 0) AS ' + c.name + ', '
FROM sys.tables t
INNER JOIN sys.columns c ON t.object_id = c.object_id
WHERE t.name = <<table name>>

-- trim the trailing comma and add the FROM clause
SET @table_def = LEFT(@table_def, LEN(@table_def) - 1) + ' FROM ' + <<table name>>

SELECT @table_def
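Once the generated definition looks right, execute it (for example with EXEC(@table_def)) and point your existing queries at the <tname>_NoNull view instead of the base table.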
In a previous application version we were using a particular field as a primary key, but because the field may represent different identities across various systems we have made it a non-significant field (i.e., not a primary key or part of a composite primary key). However, since we don't have another system yet, users still use that field as their primary method of identification.
The problem is with auditing. Previously I used a single table to do all audits for the database, dumping the data in a new-value/old-value schema using the generic trigger that is floating around. This could still work fine except for one thing: I have moved contact information into a separate table that is tied to the new primary key of the original table. So when changes are made, the unfamiliar and unused primary key shows up in the audit log instead of the now-insignificant foreignSystemID.
I moved to a one-to-one copy method of auditing, so that any changes to any table are now written to a mirror image in a different schema. The problem comes down to showing changes to the users: they are used to seeing a report that shows only the changed values for a particular doctor.
My question is: using SQL queries and Crystal Reports, how could I show only the changed column values between rows in my audit tables? I have looked at the PIVOT command, but I don't think that's really going to help me. I have also looked at the code within the script that compares the columns, determines if they are different, and writes them to the table.
I'm really spinning in the sand here and this is a critical issue for me to solve. Thanks in advance for ANY help.
We are early enough into production that I could change my change-tracking method if need be, but it needs to be soon. Thanks.
EDIT:
My boss and I have worked on this a bit and this is what we have started with. I would like to get further opinions and options as well. Thanks.
CREATE TABLE #TEMP (
DoctorsID bigint,
TableName varchar(50),
FieldName varchar(50),
CurrentFieldValue varchar(255),
PreviousFieldValue varchar(255),
PreviousValueDate datetime
)
DECLARE @sql varchar(MAX)
SELECT
@sql = COALESCE(@sql,'') +
CAST(
'INSERT INTO #TEMP ' +
'SELECT ' +
'o.DoctorsID, ' +
'''' + TABLE_NAME + ''' ,' +
'''' + COLUMN_NAME + ''',' +
'o.' + COLUMN_NAME + ',' +
'a.' + COLUMN_NAME + ',' +
'a.AuditDate' +
' FROM ' +
'dbo.DoctorLicenses o ' +
'INNER JOIN Audit.DoctorLicenses a ON ' +
'o.DoctorsID = a.DoctorsID ' +
'WHERE ' +
'AuditDate BETWEEN ''10/01/2010'' AND ''10/31/2010'' AND ' +
'o.' + COLUMN_NAME + ' <> a.' + COLUMN_NAME +
';'
AS varchar(MAX))
FROM
INFORMATION_SCHEMA.COLUMNS AS [Fields]
WHERE
TABLE_SCHEMA = 'dbo' AND
TABLE_NAME = 'DoctorLicenses'
PRINT @sql
EXEC(@sql)
SELECT * FROM #TEMP
DROP TABLE #TEMP
It sounds to me like there is a design issue, but I have a hard time envisioning what your design is at the moment. Can you be more specific about what your tables look like and what data you're trying to generate the report(s) on?
Also, when you talk about auditing "the changed values", how do you keep track of what has been changed?