SQL Server 2012. Copy row into single column on another table - sql

I am working with SQL Server on the AdventureWorks2012 database, specifically with triggers. I would like to copy any newly inserted row into a single column of another table called AuditTable. Basically, whenever I insert into the Person.Address table, I would like to copy the entire inserted row into the AuditTable.PrevValue column. I know how to insert, etc.; I am just not sure how to write a whole row into one column.
Here is the general idea.
USE [AdventureWorks2012]
ALTER TRIGGER [Person].[sPerson] ON [Person].[Address]
FOR INSERT AS
INSERT INTO AdventureWorks2012.HumanResources.AuditTable (PrevValue)
SELECT AddressID, AddressLine1, AddressLine2, City, StateProvinceID, PostalCode, SpatialLocation, rowguid, ModifiedDate
FROM Inserted
ERROR: The select list for the INSERT statement contains more items than the insert list. The number of SELECT values must match the number of INSERT columns.
Thank you for any assistance. I have searched loads but cannot find the exact solution anywhere.

The error message says it all - you can't insert 9 columns of different types into a single column. Assuming that your destination AuditTable.PrevValue column is NVARCHAR(MAX) (or similarly wide), you could flatten your insert as follows, by concatenating the columns and casting the non-character columns to NVARCHAR:
INSERT INTO AdventureWorks2012.HumanResources.AuditTable (PrevValue)
SELECT
    N'ID: ' + CAST(AddressID AS NVARCHAR(20)) +
    N', Address: ' + AddressLine1 +
    N', ' + AddressLine2 + ....
FROM Inserted
IMO, keeping one long string like this makes the audit table difficult to search, so you might consider adding SourceTable and possibly SourcePK columns, as sketched below.
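For illustration only - the column names below are assumptions, not part of the original schema - a wider audit table might look something like this:

CREATE TABLE HumanResources.AuditTable
(
    AuditID     bigint IDENTITY(1,1) PRIMARY KEY,
    SourceTable sysname       NOT NULL,  -- e.g. N'Person.Address'
    SourcePK    nvarchar(100) NOT NULL,  -- e.g. the AddressID as text
    PrevValue   nvarchar(max) NOT NULL,  -- the flattened row built in the trigger
    AuditedAt   datetime2     NOT NULL DEFAULT SYSUTCDATETIME()
);

With SourceTable and SourcePK populated by each trigger, you can filter the audit rows without having to parse PrevValue.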
You could also consider converting your row to XML and storing it in an XML column, like this:
create table Audit
(
    AuditXml xml
);

alter trigger [Person].[sPerson] ON [Person].[Address] for INSERT AS
begin
    DECLARE @xml XML;
    SET @xml =
    (
        SELECT *
        FROM INSERTED
        FOR XML PATH('Inserted')
    );
    insert into Audit (AuditXml) VALUES (@xml);
end
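As a usage sketch (assuming the Audit table above), individual values can still be pulled back out of the stored XML later with the value() method, for example:

SELECT AuditXml.value('(/Inserted/AddressID)[1]', 'int')               AS AddressID,
       AuditXml.value('(/Inserted/AddressLine1)[1]', 'nvarchar(60)')   AS AddressLine1
FROM Audit;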

Related

How to update an XML column node value with another column's new value in the same update query?

I want to change the values of two columns in one table. One column is varchar and the other is XML. First, I want to replace the value of the RECIPIENT column with a new value, and also replace the value of the node named RecipientNo in the XML column with the new RecipientNo. How can I do these two operations in the same update statement? The query below works. Secondly, the DATARECORD table contains a very large number of records. Does the modify function take too much time to update them? If so, how can I improve the performance of the modify function, or can you suggest an alternative solution? By the way, I cannot add an index to the DATARECORD table. Thanks.
Here is a sample row:
ID  RECIPIENT  RECORDDETAILS
1   1          <?xml version="1.0"?>
<MetaTag xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
         xmlns:xsd="http://www.w3.org/XMLSchema">
  <Code>123</Code>
  <RecipientNo>123</RecipientNo>
  <Name>xyz</Name>
</MetaTag>
CREATE TABLE #TEMPTABLE(
ID bigint,
RECIPIENT nvarchar(max),
RECORDDETAILS xml
)
INSERT INTO #TEMPTABLE
SELECT ID,RECIPIENT,RECORDDETAILS
FROM DATARECORD WITH (NOLOCK)
WHERE cast(RECORDDETAILS as varchar(max)) LIKE '%<Code>123</Code>%' and cast(RECORDDETAILS as varchar(max)) LIKE '%MetaTag%'
UPDATE #TEMPTABLE SET RECIPIENT = CONCAT('["queryType|1","recipientNoIDENTIFICATION|',RECIPIENT,']')
UPDATE #TEMPTABLE SET RECORDDETAILS.modify('replace value of (MetaTag/RecipientNo/text())[1] with sql:column("RECIPIENT")')
UPDATE d
SET d.RECORDDETAILS =Concat('<?xml version="1.0"?>', CAST(t.RECORDDETAILS AS VARCHAR(max))),
d.RECIPIENT = t.RECIPIENT
FROM dbo.DATARECORD as d
Join #TEMPTABLE as t
ON t.ID = d.ID
It's certainly possible to update an SQL column and an XML node in the same update statement, e.g.:
create table DataRecord (
ID bigint not null primary key,
Recipient nvarchar(max) not null,
RecordDetails xml not null
);
insert DataRecord values
(1, N'1', N'<?xml version="1.0"?>
<MetaTag xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
xmlns:xsd="http://www.w3.org/XMLSchema">
<Code>123</Code>
<RecipientNo>123</RecipientNo>
<Name>xyz</Name>
</MetaTag>');
create table #TempTable (
ID bigint not null primary key,
Recipient nvarchar(max) not null,
RecordDetails xml not null
);
insert #TempTable
select ID, Recipient, RecordDetails
from DataRecord with (nolock)
where cast(RecordDetails as varchar(max)) like '%<Code>123</Code>%' and cast(RecordDetails as varchar(max)) like '%MetaTag%'
-- Change an SQL value and an XML node in the one update statement...
update tt set
Recipient = NewRecipient,
RecordDetails.modify('replace value of (/MetaTag/RecipientNo/text())[1] with sql:column("NewRecipient")')
from #TempTable tt
outer apply (
select NewRecipient = concat('["queryType|1","recipientNoIDENTIFICATION|', Recipient, '"]')
) Calc
select * from #TempTable
Which yields:
ID Recipient RecordDetails
1 ["queryType|1","recipientNoIDENTIFICATION|1"] <MetaTag
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
xmlns:xsd="http://www.w3.org/XMLSchema">
<Code>123</Code>
<RecipientNo>["queryType|1","recipientNoIDENTIFICATION|1"]</RecipientNo>
<Name>xyz</Name>
</MetaTag>
There are a couple of things contributing to your performance problem:
Converting XML, which SQL Server essentially stores in UTF-16 encoding, to varchar (twice) is expensive. It will also trash any Unicode characters outside your database's collation.
Performing like matches on the XML (converted to varchar) will be causing TABLE SCAN operations, converting and testing every row in your table.
Some things to consider:
Add XML index(es) to the RecordDetails column and use something like WHERE RecordDetails.exist('/MetaTag/Code[. = "123"]') = 1 to short-list the rows to be updated (see the sketch after this list).
Alternatively, pre-shred your RecordDetails, persist the value of /MetaTag/Code/text() in a table column (e.g.: MetaTagCode), and use something like WHERE MetaTagCode = '123' in your query. Adding an index to that column will allow SQL Server to do a much cheaper index seek when searching for the desired value instead of a table scan.
Since you say you cannot add indexes, you're basically going to have to tolerate table scans and just wait it out.
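A minimal sketch of the XML-index suggestion above, assuming the DataRecord table defined earlier (the index name is illustrative; a primary XML index requires a clustered primary key, which DataRecord has):

-- One-off setup: primary XML index on the XML column
CREATE PRIMARY XML INDEX IX_DataRecord_RecordDetails
ON DataRecord (RecordDetails);

-- exist() lets SQL Server use the XML index instead of converting every row to varchar
SELECT ID, Recipient, RecordDetails
FROM DataRecord
WHERE RecordDetails.exist('/MetaTag/Code[. = "123"]') = 1;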

The column "GeoLocation" cannot be modified because it is either a computed column or is the result of a UNION operator

Here's the insert code.
CREATE TRIGGER InsertedPoint
ON Points
instead of insert
AS
insert into Points(Route_Id,Title,Description,Latitude,Longitude,GeoLocation)
SELECT Route_Id,Title,Description,Latitude,Longitude,
geography::STPointFromText('POINT(' + CAST([Latitude] AS VARCHAR(20)) + ' ' + CAST([Longitude] AS VARCHAR(20)) + ')', 4326)
FROM inserted
The problem is with setting the geography value. When I create the trigger I get the error The column "GeoLocation" cannot be modified because it is either a computed column or is the result of a UNION operator. How can I fix this problem?
You can't insert values into a computed column. The principle of computed columns is that you define a computation rule, and then your RDBMS manages the column automatically, computing values from the other columns as needed.
Without this column, your trigger boils down to:
CREATE TRIGGER InsertedPoint
ON Points
INSTEAD OF INSERT
AS
INSERT INTO Points(Route_Id,Title,Description,Latitude,Longitude)
SELECT Route_Id,Title,Description,Latitude,Longitude
FROM inserted
As it is, this trigger is a no-operation. Unless it does something other than what you have shown us, you would be better off just removing it.
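If the intent is for GeoLocation to always be derived from Latitude and Longitude, one option is to define it as a computed column and drop the trigger entirely. This is only a sketch under that assumption:

-- Assumes GeoLocation should always mirror Latitude/Longitude;
-- redefining it as a computed column removes the need for the trigger.
ALTER TABLE Points DROP COLUMN GeoLocation;

ALTER TABLE Points ADD GeoLocation AS
    geography::Point(Latitude, Longitude, 4326);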

Bulk Insert - How to tell SQLServer to insert empty-string and not null

This seems like a trivial question. And it is. But I have googled for over a day now, and still no answer:
I wish to do a bulk insert where, for a column whose datatype is varchar(100), I want to insert an empty string - not NULL, but empty. For example, for the table:
create table temp(columnName varchar(100))
I wish to insert an empty string as the value:
BULK INSERT sandbox..temp FROM
'file.txt' WITH ( FIELDTERMINATOR = '|#', ROWTERMINATOR = '|:' );
And the file contents would be row1|:row2|:|:|:. So it contains 4 rows, where the last two rows are intended to be empty strings. But they get inserted as NULL.
This question is not the same as the marked duplicate: in a column, I wish to have the capacity to insert both NULL and also an empty string. The answers provided do only one of them, but not both.
Well, instead of inserting the empty string explicitly like this, why not let your table column have a default value of empty string, and in your bulk insert don't pass any values for those columns? Something like:
create table temp(columnName varchar(100) default '')
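As a usage sketch under that assumption (same file and terminators as in the question): with the default in place, empty fields in the data file pick up the default '' because BULK INSERT applies column defaults to empty fields, and you can still load NULLs when you want them by adding the KEEPNULLS option.

-- Empty fields become '' via the column default
BULK INSERT sandbox..temp
FROM 'file.txt'
WITH (FIELDTERMINATOR = '|#', ROWTERMINATOR = '|:');

-- With KEEPNULLS, empty fields are loaded as NULL instead
BULK INSERT sandbox..temp
FROM 'file.txt'
WITH (FIELDTERMINATOR = '|#', ROWTERMINATOR = '|:', KEEPNULLS);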

How to validate/restrict values in a column as per a specific format in a sql Database

I am working on an application that populates values from a SQL database in a format of two numeric characters followed by an alpha character, e.g. 11G, 34H. There is no validation or check for this. I want to put a checkpoint/validation in place at the database end. Is it possible to implement this via a SQL procedure or anything else? Can anyone help me with the code?
Try the query below; it only inserts values that match the two-digits-plus-letter pattern:
DECLARE #strtable TABLE (column1 VARCHAR(50))
INSERT INTO #strtable
VALUES ('11H'),('sda'),('175HH'),('1H1'),('282')
INSERT INTO YourTable (Column1)
SELECT Column1
FROM #strtable
WHERE LEN(column1)=3
AND ISNUMERIC(LEFT(column1,2))=1
AND ISNUMERIC(RIGHT(column1,1))!=1
--Output : 11H
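Since the question asks for a restriction at the database end, a CHECK constraint is another way to enforce the format on every insert and update. This is only a sketch - the table and column names are taken from the example above and may need adjusting:

ALTER TABLE YourTable
ADD CONSTRAINT CK_YourTable_Column1_Format
CHECK (Column1 LIKE '[0-9][0-9][A-Za-z]');

Any attempt to insert or update a value that is not two digits followed by a letter will then fail with a constraint violation.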

Select row just inserted without using IDENTITY column in SQL Server 2012

I have a bigint PK column which is NOT an identity column, because I create the number in a function using different numbers. Anyway, I am trying to save this bigint number in a parameter @InvID, then use this parameter later in the procedure.
SCOPE_IDENTITY() is not working for me; it saved NULL to @InvID, I think because the column is not an identity column. Is there any way to select the record that was just inserted by the procedure without adding an extra ID column to the table?
It would save me a lot of effort and work if there is a direct way to select this record without adding an id column.
insert into Lab_Invoice (iID, iDate, iTotal, iIsPaid, iSource, iCreator, iShiftID, iBalanceAfter, iFileNo, iType)
values (dbo.Get_RI_ID('True'), GETDATE(),
        (select FilePrice from LabSettings), 'False', @source, @user, @shiftID, @b, @fid, 'Open File Invoice');
set @invID = CAST(SCOPE_IDENTITY() AS bigint);
P.S. dbo.Get_RI_ID('True') is a function that returns a bigint.
Why don't you use:
set @invId = dbo.Get_RI_ID('True');
insert into Lab_Invoice (iID, iDate, iTotal, iIsPaid, iSource, iCreator, iShiftID, iBalanceAfter, iFileNo, iType)
values (@invId, GETDATE(), (select FilePrice from LabSettings), 'False', @source, @user, @shiftID, @b, @fid, 'Open File Invoice');
You already know that bigint ID value. Get it before your insert statement, then use it later.
Another way to capture a value from an inserted row is the OUTPUT clause. It is not clear which value you are trying to get, so here is an example with dummy data:
create table #test
(
    id int
)

declare @id table
(
    id int
)

insert into #test
output inserted.id into @id
select 1

select @invID = id from @id
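Applied to the original insert, a sketch might look like this (assuming the Lab_Invoice columns and variables from the question); @invID ends up holding the new iID without relying on SCOPE_IDENTITY():

-- Table variable to receive the generated key from the OUTPUT clause
declare @ids table (iID bigint);

insert into Lab_Invoice (iID, iDate, iTotal, iIsPaid, iSource, iCreator, iShiftID, iBalanceAfter, iFileNo, iType)
output inserted.iID into @ids
values (dbo.Get_RI_ID('True'), GETDATE(), (select FilePrice from LabSettings), 'False', @source, @user, @shiftID, @b, @fid, 'Open File Invoice');

select @invID = iID from @ids;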