Auto-Increment editable Column - sql

I am looking for the best solution to a problem I've run into.
I have several databases that I synchronize quite often so they contain the same data. In some referential tables there are auto-increment ID columns, and in other tables there are columns that match these ID values. Unfortunately, on the source DB the IDs after changes might be (10, 15, 20), while after inserting them into the destination table they become (1, 2, 3). I hope my mock-ups clarify the situation:
Source DB RefTable:
ID_________Name
10_________a
15_________b
20_________c
Destination DB RefTable:
ID_________Name
1__________a
2__________b
3__________c
Source FactTable:
RefTableID___OtherColumns
10__________a
10__________b
15__________c
15__________d
The problem is that after migrating the FactTable to the destination DB, its rows no longer match the records in RefTable. My idea was to add another column that, unlike the ID column, is editable. The issue is that the end user should not see it (it would mean nothing to them), so it should also be auto-incremented. What should I use in this situation:
- a sequence?
- a trigger?
- a post add/save stored procedure?
Or does someone have a better idea how to solve this problem?

In SQL Server, an identity column's value can't be updated once the row has been inserted into the table.
However, its value can be supplied manually at insert time by using SET IDENTITY_INSERT for the specified table.
In each session, only one table at a time can have IDENTITY_INSERT set to ON, so always set it back to OFF once the insert is complete.
The code might look something like this:
set identity_insert RefTable on;

insert into RefTable (id, name)
select id, name
from sourceDb.sourceSchema.RefTable s
where not exists (
    select 1
    from RefTable t
    where t.Id = s.Id
);

set identity_insert RefTable off;
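Once the RefTable IDs are preserved this way, the FactTable rows can usually be copied as-is, since the RefTableID values still line up. A minimal sketch, reusing the mock-up column names from the question (the duplicate check is only illustrative and would need a real key in practice):

insert into FactTable (RefTableID, OtherColumns)
select s.RefTableID, s.OtherColumns
from sourceDb.sourceSchema.FactTable s
where not exists (
    select 1
    from FactTable t
    where t.RefTableID = s.RefTableID
      and t.OtherColumns = s.OtherColumns
);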


Copy a table data from one database to another database SQL

I have had a look at similar problems; however, none of the answers helped in my case.
Just a little bit of background: I have two databases, and both have the same table with the same fields and structure. Data already exists in both tables. I want to overwrite and add to the data in db1.table from db2.table, and the primary ID is causing a problem with the update.
When I use the query:
USE db1;
INSERT INTO db2.table(field_id,field1,field2)
SELECT table.field_id,table.field1,table.field2
FROM table;
It works against a blank table, because none of the primary keys exist yet. As soon as a primary key already exists, it fails.
Would it be easier for me to overwrite the primary keys, or to find the primary key and update the fields related to that field_id? I'm really not sure how to go ahead from here. The data needs to be migrated every 5 minutes, so possibly a stored procedure is required?
First add the new records, then update all existing records. You can create a procedure like the code below:
CREATE OR REPLACE PROCEDURE sync_Data(a IN NUMBER) IS
BEGIN
  -- add records that exist in db1 but not yet in db2
  insert into db2.table
  select *
  from db1.table t
  where t.field_id not in (select tt.field_id from db2.table tt);

  -- then refresh every existing record with the current values from db1
  for t in (select * from db1.table) loop
    update db2.table aa
       set aa.field1 = t.field1,
           aa.field2 = t.field2
     where aa.field_id = t.field_id;
  end loop;
END sync_Data;
Set IsIdentity to No under Identity Specification on the table into which you want to move the data, and after executing your script set it back to Yes.
I ended up just removing the data in the new database and sending it again.
DELETE FROM db2.table WHERE db2.table.field_id != 0;
USE db1;
INSERT INTO db2.table(field_id,field1,field2)
SELECT table.field_id,table.field1,table.field2
FROM table;
It's not very efficient, but it gets the job done. I couldn't figure out the syntax to correctly do an UPDATE or to change the IsIdentity setting within MariaDB, so I'm not sure whether those approaches would work or not.
Deleting and re-inserting non-trivial amounts of data for an entire table carries prohibitive overhead. That said, I'd prefer an update in place (merge) over delete/replace.
USE db1;
INSERT INTO db2.table(field_id,field1,field2)
SELECT t.field_id,t.field1,t.field2
FROM table t
ON DUPLICATE KEY UPDATE field1 = t.field1, field2 = t.field2;
This can be used inside a procedure and called every 5 minutes (not recommended) or you could build a trigger that fires on INSERT and UPDATE to keep the tables in sync.
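If you do go with the every-five-minutes schedule from the question, one MariaDB option is a scheduled event. This is only a sketch under a couple of assumptions: the event scheduler must be enabled, the event name is made up, and `table` is backtick-quoted because it is a reserved word.

-- requires the event scheduler, e.g. SET GLOBAL event_scheduler = ON;
CREATE EVENT IF NOT EXISTS sync_db1_to_db2
ON SCHEDULE EVERY 5 MINUTE
DO
  INSERT INTO db2.`table` (field_id, field1, field2)
  SELECT t.field_id, t.field1, t.field2
  FROM db1.`table` t
  ON DUPLICATE KEY UPDATE field1 = t.field1, field2 = t.field2;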
INSERT INTO database1.tabledata SELECT * FROM database2.tabledata;
But you have to keep the varchar lengths in database1 greater than or equal to those in database2, and keep the same column names.

Adding Row in existing table (SQL Server 2005)

I want to add another row to my existing table, and I'm a bit hesitant about whether I'm doing the right thing because it might skew the database. I have my script below and would like to hear your thoughts about it.
I want to add another row for 'Jane' in the table, which will have 'SKATING' in the ACT column.
Table: [Emp_table].[ACT].[LIST_EMP]
My script is:
INSERT INTO [Emp_table].[ACT].[LIST_EMP]
([ENTITY],[TYPE],[EMP_COD],[DATE],[LINE_NO],[ACT],[NAME])
VALUES
('REG','EMP','45233','2016-06-20 00:00:00:00','2','SKATING','JANE')
Will this do the trick?
Your statement looks ok. If the database has a problem with it (for example, due to a foreign key constraint violation), it will reject the statement.
If any of the fields in your table are numeric (and not varchar or char), just remove the quotes around the corresponding field. For example, if emp_cod and line_no are int, insert the following values instead:
('REG','EMP',45233,'2016-06-20 00:00:00:00',2,'SKATING','JANE')
Inserting records into a database has always been the most common reason I've lost a lot of the hair on my head!
SQL is great when it comes to SELECTs or even UPDATEs, but when it comes to INSERTs it's as if someone from another planet joined the SQL standards committee and managed to get their own way of doing it into the final standard!
If your table does not have a primary key that is generated automatically on every insert, then you have to write the duplicate-avoidance logic yourself.
Start by writing a normal SELECT to see if the record(s) you're going to add don't already exist. But as Robert implied, your table may not have a primary key because it looks like a LOG table to me. So insert away!
If it does need every record to be unique, then I strongly suggest you create a primary key for the table, either an auto-generated one or a combination of your existing columns.
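For example (a hedged sketch only; the constraint name is made up, and the composite option will fail if existing rows already contain duplicates or NULLs in those columns):

-- Option 1: add an auto-generated surrogate key
ALTER TABLE [Emp_table].[ACT].[LIST_EMP]
    ADD ID int IDENTITY(1,1) NOT NULL
    CONSTRAINT PK_LIST_EMP PRIMARY KEY;

-- Option 2 (instead): use a combination of existing columns
ALTER TABLE [Emp_table].[ACT].[LIST_EMP]
    ADD CONSTRAINT PK_LIST_EMP PRIMARY KEY ([ENTITY],[TYPE],[EMP_COD],[DATE],[LINE_NO]);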
Assuming the first five columns combined make a unique key, this SELECT will determine whether the data you're inserting already exists...
SELECT COUNT(*) AS FoundRec
FROM [Emp_table].[ACT].[LIST_EMP]
WHERE [ENTITY] = @wsEntity AND [TYPE] = @wsType AND [EMP_COD] = @wsEmpCod
  AND [DATE] = @wsDate AND [LINE_NO] = @wsLineno
You will have to replace the @wsXXX variables with direct values or DECLARE them earlier in your script.
If you ran this alone and received a value of 1 or more, then the data already exists in your table, at least for those first five columns. A true duplicate test would require you to check EVERY column in your table, but this should give you an idea.
To do it all as one statement, take the values from a SELECT instead of a VALUES clause so the duplicate check can be attached:
INSERT INTO [Emp_table].[ACT].[LIST_EMP]
([ENTITY],[TYPE],[EMP_COD],[DATE],[LINE_NO],[ACT],[NAME])
SELECT 'REG','EMP','45233','2016-06-20 00:00:00:00','2','SKATING','JANE'
WHERE (SELECT COUNT(*) FROM [Emp_table].[ACT].[LIST_EMP]
       WHERE [ENTITY] = @wsEntity AND [TYPE] = @wsType AND
             [EMP_COD] = @wsEmpCod AND [DATE] = @wsDate AND
             [LINE_NO] = @wsLineno) = 0
Just replace the @wsXXX variables with the values you want to insert.
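An equivalent form, sketched under the same assumptions, wraps the insert in IF NOT EXISTS, which some people find easier to read:

IF NOT EXISTS (SELECT 1 FROM [Emp_table].[ACT].[LIST_EMP]
               WHERE [ENTITY] = @wsEntity AND [TYPE] = @wsType AND
                     [EMP_COD] = @wsEmpCod AND [DATE] = @wsDate AND
                     [LINE_NO] = @wsLineno)
BEGIN
    INSERT INTO [Emp_table].[ACT].[LIST_EMP]
    ([ENTITY],[TYPE],[EMP_COD],[DATE],[LINE_NO],[ACT],[NAME])
    VALUES
    ('REG','EMP','45233','2016-06-20 00:00:00:00','2','SKATING','JANE');
END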
I hope that made sense.

SQL Server MERGE with Insert subquery

I'm having trouble getting this compound insert to work in my MERGE statement between two tables (ignore the WHEN MATCHED condition, I know it's bad practice). The issue I'm having is getting the ServerId field in the target table to fill. The Team field fills fine, but all of the rows end up with a NULL value for ServerId. I can't find an example online for this, so I'm hoping someone can help. I don't seem to have any syntax errors, and I know the column 'ServerName' in the Source table is filled for all rows.
MERGE ApplicationTeams AS Target
USING TempApplicationTeams AS Source
ON (Target.ServerId = (SELECT ID from Servers WHERE Name='Source.ServerName') AND Target.Team = Source.Team)
WHEN MATCHED THEN
UPDATE SET Target.Team = Target.Team
WHEN NOT MATCHED BY TARGET THEN
INSERT (ServerId, Team) VALUES((SELECT ID from Servers WHERE Name='Source.ServerName'), Source.Team)
WHEN NOT MATCHED BY SOURCE THEN
DELETE
;
Thanks.
I think you should remove the single quotes in the WHERE clause.
You wrote:
(SELECT ID from Servers WHERE Name='Source.ServerName')
But I think you should write this:
(SELECT ID from Servers WHERE Name=Source.ServerName)
And make sure the SELECT returns only one ID, otherwise the statement will fail.
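Applied to the full statement from the question, that change would look like this (everything else left untouched):

MERGE ApplicationTeams AS Target
USING TempApplicationTeams AS Source
ON (Target.ServerId = (SELECT ID FROM Servers WHERE Name = Source.ServerName)
    AND Target.Team = Source.Team)
WHEN MATCHED THEN
    UPDATE SET Target.Team = Target.Team
WHEN NOT MATCHED BY TARGET THEN
    INSERT (ServerId, Team)
    VALUES ((SELECT ID FROM Servers WHERE Name = Source.ServerName), Source.Team)
WHEN NOT MATCHED BY SOURCE THEN
    DELETE;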
I hope it will be useful.

SQL Server -- Any way to add a column and make it first column in table? [duplicate]

This question already has answers here:
Add a new table column to specific ordinal position in Microsoft SQL Server
I'm altering an existing table to add an Identity column. That I can do, no problem.
But I'm wanting to be sure that people who look at it in the future will see that it has the identity column added, so I really want to make it column 1. I know this is totally inconsequential to the system's operation; it's strictly for human reading.
Does anyone know of a way to do this? I've looked at the TSQL syntax for Alter Table and for column_definition, and don't see anything; but I'm hoping someone knows of a way to make this happen.
FWIW, this is a one-time operation (but on many servers, so it needs to be automated), so I'm not worried whether any "trick" might go away in the future -- as long as it works now. We're using recent versions of SQL Server Express.
Thanks for any suggestions.
Solve this by following these steps:
-- First, add the identity column
alter table mytable
    add id int identity(1, 1) not null;

-- Second, create a new table from the existing one with the correct column order
select id, col1, col2
into newtable
from mytable;
Now you've got newtable with the reordered columns. If you need to, you can drop mytable and rename newtable to mytable:
drop table mytable;

exec sp_rename 'newtable', 'mytable';
It is not possible with an ALTER statement. If you want the columns in a specific order, you will have to create a new table, use INSERT INTO newtable (col_x, col_a, col_b) SELECT col_x, col_a, col_b FROM oldtable to transfer the data from the old table to the new one, delete the old table, and rename the new table to the old table's name.
This is not necessarily recommended because it does not matter which order the columns are in the database table. When you use a SELECT statement, you can name the columns and have them returned to you in the order that you desire.
USING OBJECT EXPLORER
Avoid this step. The SSMS table designer is meant only for light data administration; when you change the structure of a table holding many records it rebuilds the table behind the scenes, and no matter how fast your processor is it can hang while changing the architecture, so you may end up losing some data.
And once data is lost, there is nowhere to get it back from. It happened to me once.
According to Microsoft you can do this only using SQL Server Management Studio.
Check this

Create Data Audit in SQL Server

I've recently been given the task of creating an Audit on a database table so that any changes made to any columns can be tracked.
Let's say I have the following table:
[TableA]
------
ID
ColumnA
ColumnB
ColumnC
For Auditing I've created a table such as:
[TableA.Audit]
------
ID
TableAID
UserID
Date (default value = getdate())
ColumnA
ColumnB
ColumnC
I've then written a script like:
DECLARE @currentColumnA int
       ,@currentColumnB int
       ,@currentColumnC int

SELECT TOP 1 @currentColumnA = ColumnA
            ,@currentColumnB = ColumnB
            ,@currentColumnC = ColumnC
FROM [TableA]
WHERE ID = @TableAID

UPDATE [TableA]
SET ColumnA = @ColumnA
   ,ColumnB = @ColumnB
   ,ColumnC = @ColumnC
WHERE ID = @TableAID

INSERT INTO [TableA.Audit] (TableAID, UserID, ColumnA, ColumnB, ColumnC)
VALUES (@TableAID, @UserID, NULLIF(@ColumnA, @currentColumnA), NULLIF(@ColumnB, @currentColumnB), NULLIF(@ColumnC, @currentColumnC))
The problem with this, is that if I was to add a ColumnD field to TableA I'm going to have to edit my TableA.Audit table as well as the above script.
Therefore is there a better way of doing this?
You are better off writing triggers on the table for AFTER INSERT, AFTER DELETE, and AFTER UPDATE. This way, anything that inserts, updates, or deletes data in the table (an application, Management Studio, etc.) will get logged. You'll have to add a field for the audit action, and in your trigger insert the literal for the action (e.g. 'I' or 'INSERT'). I structure my audit tables in this way:
audit_id: INT IDENTITY
audit_date: DATETIME GETDATE()
audit_action: VARCHAR(16) ... or you can use CHAR(1)
audit_user: VARCHAR(128) SUSER_SNAME()
(the fields from the table being audited)
Since our apps use Active Directory, I can default audit_user to SUSER_SNAME().
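As a rough sketch only (not a drop-in implementation), an AFTER UPDATE trigger against the question's TableA / [TableA.Audit] tables might look like the following. It assumes the audit table has been extended with the audit_action column suggested above, and how UserID is captured is left as a placeholder.

CREATE TRIGGER trg_TableA_AfterUpdate
ON TableA
AFTER UPDATE
AS
BEGIN
    SET NOCOUNT ON;

    INSERT INTO [TableA.Audit] (TableAID, UserID, ColumnA, ColumnB, ColumnC, audit_action)
    SELECT i.ID,
           NULL,         -- placeholder: capture the acting user however your application exposes it
           i.ColumnA,
           i.ColumnB,
           i.ColumnC,
           'UPDATE'
    FROM inserted i;     -- the inserted pseudo-table holds the new values, so multi-row updates are covered
END;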
We use triggers (the only way to go; make sure you write them to handle multiple-record inserts/updates/deletes), and our structure is a bit different.
First we have a table that stores information about the action: the person/application that did it, the date, and the number of affected records. Then we have a table that stores the details. This detail table has an identifier column, column_name, old value, and new value (we use nvarchar(max) for those columns in the audit table). This way, if the audited table gets new columns, we don't have to worry about changing the audit tables. We have one set of audit tables for each table we audit.
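A minimal sketch of that two-table layout might look like this; the names are illustrative only, and one such pair would exist per audited table:

CREATE TABLE TableA_AuditAction (
    AuditActionID  int IDENTITY(1,1) PRIMARY KEY,
    AuditAction    varchar(16)   NOT NULL,                        -- 'INSERT' / 'UPDATE' / 'DELETE'
    AuditUser      nvarchar(128) NOT NULL DEFAULT SUSER_SNAME(),  -- person/application that made the change
    AuditDate      datetime      NOT NULL DEFAULT GETDATE(),
    RowsAffected   int           NOT NULL
);

CREATE TABLE TableA_AuditDetail (
    AuditDetailID  int IDENTITY(1,1) PRIMARY KEY,
    AuditActionID  int NOT NULL REFERENCES TableA_AuditAction (AuditActionID),
    RecordID       int           NOT NULL,   -- identifier of the audited row
    ColumnName     sysname       NOT NULL,
    OldValue       nvarchar(max) NULL,
    NewValue       nvarchar(max) NULL
);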
Newer versions of SQL Server have change tracking, but we don't find it has enough detail for the auditing we need, and it deletes the data too quickly unless you move it to another permanent table.
The problem with this, is that if I was to add a ColumnD field to
TableA I'm going to have to edit my TableA.Audit table as well as the
above script.
Therefore is there a better way of doing this?
Not really. You can make the implementation better via triggers, as HardCode mentions, but you still have to modify the audit table and the related scripts.
I've witnessed attempts to make this "better" so that you don't have to update a trigger or audit table. They always end up trading a minor problem (hey, a column got added and I've got to do some stuff) for much larger ones, usually performance, correctness, and reliability issues.