In our database there is a table that has a little over 80 columns. It has a primary key, and identity insert is turned on. I'm looking for a way to insert every column EXCEPT the primary key column from an identical table in a different database.
Is this possible?
You can do this quite easily actually:
-- Select everything into temp table
Select * Into
#tmpBigTable
From [YourBigTable]
-- Drop the Primary Key Column from the temp table
Alter Table #tmpBigTable Drop Column [PrimaryKeyColumn]
-- Insert that into your other big table
Insert Into [YourOtherBigTable]
Select * From #tmpBigTable
-- Drop the temp table you created
Drop Table #tmpBigTable
Provided IDENTITY_INSERT is OFF for [YourOtherBigTable] (the default) and the remaining columns are absolutely identical, you will be okay: since the temp table no longer carries the key column, the target table generates fresh identity values.
You could query INFORMATION_SCHEMA to get a list of all the columns and programmatically generate the column names for your query. If you're doing this all in T-SQL it would be cumbersome, but it could be done. If you're using some other client language, like C#, to do the operation, it would be a little less cumbersome.
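For the T-SQL route, here is a minimal sketch; [YourBigTable], [YourOtherBigTable] and [PrimaryKeyColumn] are the placeholder names from above, [OtherDb] stands in for the other database, and STRING_AGG assumes SQL Server 2017+ (on older versions, build the list with FOR XML PATH):
DECLARE @cols NVARCHAR(MAX), @sql NVARCHAR(MAX);
-- Build a comma-separated list of every column except the key column
SELECT @cols = STRING_AGG(CAST(QUOTENAME(COLUMN_NAME) AS NVARCHAR(MAX)), ', ')
               WITHIN GROUP (ORDER BY ORDINAL_POSITION)
FROM INFORMATION_SCHEMA.COLUMNS
WHERE TABLE_NAME = 'YourOtherBigTable'
  AND COLUMN_NAME <> 'PrimaryKeyColumn';
-- Insert that column list from the source table in the other database
SET @sql = 'INSERT INTO [YourOtherBigTable] (' + @cols + ')'
         + ' SELECT ' + @cols + ' FROM [OtherDb].[dbo].[YourBigTable]';
EXEC sp_executesql @sql;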
No, that's not possible. You could be tempted to use
INSERT INTO MyLargeTable SELECT * FROM OtherTable
But that would not work, because your identity column would be included in the *.
You could use
SET IDENTITY_INSERT MyLargeTable ON
INSERT INTO MyLargeTable SELECT * FROM OtherTable
SET IDENTITY_INSERT MyLargeTable OFF
First you enable inserting identity values, then you copy the records, then you turn IDENTITY_INSERT back off.
But this won't work either: SQL Server won't accept the * in this case. You have to explicitly include the Id in the column list, like:
SET IDENTITY_INSERT MyLargeTable ON
INSERT INTO MyLargeTable (Id, co1, col2, ...., col80) SELECT Id, co1, col2, ...., col80 FROM OtherTable
SET IDENTITY_INSERT MyLargeTable OFF
So we're back where we started.
The easiest way is to right-click the table in Management Studio, let it generate the INSERT and SELECT scripts, and edit them a little to make them work together.
CREATE TABLE Tests
(
TestID int IDENTITY PRIMARY KEY,
A int,
B int,
C int
)
INSERT INTO dbo.Tests
VALUES (1,2,3)
SELECT * FROM Tests
This works in SQL Server 2012: with no column list specified, the identity column does not need a value, so the three values map to A, B and C.
Why not just create a VIEW of the original data, removing the unwanted fields?
Then 'Select * into' to your heart's desire (a sketch follows the list below).
Localized control within a single view
No need to modify SPROC
Add/change/delete fields easy
No need to query meta-data
No temporary tables
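A minimal sketch, reusing the placeholder names from the first answer (the view's column list is the one place you maintain); with the identity column excluded by the view, both SELECT INTO and a plain INSERT ... SELECT work against it:
-- Hypothetical names; list every column except [PrimaryKeyColumn]
CREATE VIEW dbo.vw_BigTableNoKey AS
SELECT Col1, Col2, Col3 -- ...and so on, through all 80-odd columns
FROM dbo.YourBigTable;
-- Inserts then read straight through the view
INSERT INTO dbo.YourOtherBigTable (Col1, Col2, Col3)
SELECT * FROM dbo.vw_BigTableNoKey;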
Really, honestly, it takes ten seconds or less to pull all of the columns over from the object browser and then delete the identity column from the list. It is a bad idea to use SELECT * for anything but a quick ad hoc query.
In answer to a related question (SELECT * EXCEPT), I point out that the truly relational language Tutorial D allows projection to be expressed in terms of the attributes to be removed instead of the ones to be kept, e.g.
my_relvar { ALL BUT description }
However its INSERT syntax requires tuple value constructors to include attribute name / value pairs e.g.
INSERT P
RELATION
{
TUPLE { PNO PNO ( 'P1' ) , PNAME CHARACTER ( 'Nut' ) },
TUPLE { PNO PNO ( 'P2' ) , PNAME CHARACTER ( 'Bolt' ) }
};
Of course, using this syntax there is no column ordering (because it is truly relational!) e.g. this is semantically equivalent:
INSERT P
RELATION
{
TUPLE { PNO PNO ( 'P1' ) , PNAME CHARACTER ( 'Nut' ) },
TUPLE { PNAME CHARACTER ( 'Bolt' ) , PNO PNO ( 'P2' ) }
};
The alternative would be to rely fully on attribute ordering, which SQL does partially, e.g. this is a close SQL equivalent to the above:
INSERT INTO P ( PNO , PNAME )
VALUES
( PNO ( 'P1' ) , CAST ( 'Nut' AS VARCHAR ( 20 ) ) ) ,
( PNO ( 'P2' ) , CAST ( 'Bolt' AS VARCHAR ( 20 ) ) );
Once the commalist of columns has been specified, the VALUES row constructors have to maintain this order, which is not ideal. But at least the order is specified: your proposal would rely on some default order, which may be non-deterministic.
I am looking to insert or update values in an SQLite database (version > 3.35) while avoiding multiple queries. UPSERT along with RETURNING seems promising:
CREATE TABLE phonebook2(
name TEXT PRIMARY KEY,
phonenumber TEXT,
validDate DATE
);
INSERT INTO phonebook2(name,phonenumber,validDate)
VALUES('Alice','704-555-1212','2018-05-08')
ON CONFLICT(name) DO UPDATE SET
phonenumber=excluded.phonenumber,
validDate=excluded.validDate
WHERE excluded.validDate>phonebook2.validDate RETURNING name;
This helps me track the names of inserted/modified rows. How do I find the rows where phonebook2 values conflict with the values upserted in the above statement, but no insert or update happened because of the WHERE clause?
The RETURNING clause can't be used to get non-affected rows.
What you can do is execute a SELECT statement before the UPSERT:
WITH cte(name, phonenumber, validDate) AS (VALUES
('Alice', '704-555-1212', '2018-05-08'),
('Bob','804-555-1212', '2018-05-09')
)
SELECT *
FROM phonebook2 p
WHERE EXISTS (
SELECT *
FROM cte c
WHERE c.name = p.name AND c.validDate <= p.validDate
);
In the CTE you may include as many tuples as you want.
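A usage sketch, assuming the same tuples feed both statements; running them inside a single transaction keeps the pre-check consistent with the upsert:
BEGIN;
-- 1. Names the upsert below will leave untouched
WITH cte(name, phonenumber, validDate) AS (VALUES
('Alice', '704-555-1212', '2018-05-08')
)
SELECT p.name
FROM phonebook2 p
JOIN cte c ON c.name = p.name AND c.validDate <= p.validDate;
-- 2. The upsert itself, returning inserted/updated names
INSERT INTO phonebook2(name, phonenumber, validDate)
VALUES('Alice','704-555-1212','2018-05-08')
ON CONFLICT(name) DO UPDATE SET
phonenumber=excluded.phonenumber,
validDate=excluded.validDate
WHERE excluded.validDate>phonebook2.validDate
RETURNING name;
COMMIT;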
How can I abbreviate a list so that
WHERE id IN ('8893171511',
'8891227609',
'8884577292',
'886790275X',
...)
becomes
WHERE id IN (name of a group/list)
The list really would have to appear somewhere. From the point of view of your code being maintainable and reusable, you could represent the list in a CTE:
WITH id_list AS (
SELECT '8893171511' AS id UNION ALL
SELECT '8891227609' UNION ALL
SELECT '8884577292' UNION ALL
SELECT '886790275X'
)
SELECT *
FROM yourTable
WHERE id IN (SELECT id FROM id_list);
If you have a persistent need to do this, then maybe the CTE should become a bona fide table somewhere in your database.
Edit: Using the Horse's suggestion, we can tidy up the CTE to the following:
WITH id_list (id) AS (
VALUES
('8893171511'),
('8891227609'),
('8884577292'),
('886790275X')
)
If the list is large, I would create a temporary table and store the list there.
That way you can ANALYZE the temporary table and get accurate estimates.
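A minimal sketch of that approach (PostgreSQL; table and column names are illustrative):
CREATE TEMP TABLE id_list (id text PRIMARY KEY);
INSERT INTO id_list (id) VALUES
('8893171511'),
('8891227609'),
('8884577292'),
('886790275X');
ANALYZE id_list; -- give the planner accurate row estimates
SELECT t.*
FROM yourTable t
WHERE t.id IN (SELECT id FROM id_list);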
The temp table and CTE answers suggested will do.
Just wanted to bring another approach that will work if you use PGAdmin for querying (not sure about Workbench) and represent your data in a "stringy" way.
set setting.my_ids = '8893171511,8891227609';
select current_setting('setting.my_ids');
drop table if exists t;
create table t ( x text);
insert into t select 'some value';
insert into t select '8891227609';
select *
from t
where x = any( string_to_array(current_setting('setting.my_ids'), ',')::text[]);
There is a DB2 database with two tables. The first one, table1, has an autoincrement column ID, which table2 references as a foreign key.
I am writing an HTML generator for SQL queries. With some input parameters it generates a query or multiple queries. It is not connected to the database.
What I need is to get that autoincrement field and use it in subsequent queries.
So basically, the scenario is:
insert into table1;
select autogenerated field ID;
insert into table2 using that ID;
insert into table2 using that ID;
...some more similar inserts...
insert into table2 using that ID;
And all of that SQL should be generated and then used as a single SQL script.
I was thinking about something like this:
SELECT ID FROM FINAL TABLE (INSERT INTO Table1 (t1column1, t1column2, etc.)
VALUES (t1value1, t1value2, etc.))
But I don't know how I can write the result into a variable so I can use it in subsequent queries like this:
INSERT INTO Table2 (foreignKeyCol, t2column1, t2column2, etc.)
VALUES ($ID, t2value1, t2value2, etc.)
I could just paste that select instead of $ID, but the second query can be used several times with the same $ID and different values.
EDIT: DB2 10.5 on Linux.
You can chain several inserts together using CTEs, like so:
WITH idcte (id) as (
SELECT ID FROM FINAL TABLE (
INSERT INTO Table1 (t1column1, t1column2, etc.)
VALUES (t1value1, t1value2, etc.)
)
),
ins1 (id) as (
SELECT foreignKeyCol FROM FINAL TABLE (
INSERT INTO Table2 (foreignKeyCol, t2column1, t2column2, etc.)
SELECT id, t2value1, t2value2, etc.
FROM idcte
)
),
-- more CTEs
SELECT foreignKeyCol FROM FINAL TABLE (
-- your last INSERT ... SELECT FROM
)
Essentially you will have to wrap each INSERT into a SELECT FROM FINAL TABLE for this to work.
Alternatively, you can use a global variable to keep the ID value:
CREATE VARIABLE myNewId INT;
SET myNewId = (SELECT ID FROM FINAL TABLE (
INSERT INTO Table1 (t1column1, t1column2, etc.)
VALUES (t1value1, t1value2, etc.)
));
INSERT INTO Table2 (foreignKeyCol, t2column1, t2column2, etc.)
VALUES (myNewId, t2value1, t2value2, etc.);
DROP VARIABLE myNewId;
This assumes a recent version of Db2 for LUW.
CREATE TABLE object (
object_id serial,
object_attribute_1 integer,
object_attribute_2 VARCHAR(255)
)
-- primary key object_id
-- btree index on object_attribute_1, object_attribute_2
Here is what I currently have:
SELECT * FROM object
WHERE (object_attribute_1=100 AND object_attribute_2='Some String') OR
(object_attribute_1=200 AND object_attribute_2='Some other String') OR
(..another row..) OR
(..another row..)
When the query returns, I check what is missing (i.e., what does not exist in the database).
Then I make a multiple-row insert:
INSERT INTO object (object_attribute_1, object_attribute_2)
VALUES (info, info), (info, info),(info, info)
Then I select what I just inserted:
SELECT ... WHERE (condition) OR (condition) OR ...
And at last, I merge the two result sets on the client side.
Is there a way to combine these 3 queries into one single query, where I provide all the data, INSERT the records that do not already exist, and then do a SELECT at the end?
Your suspicion was well founded. Do it all in a single statement using a data-modifying CTE (Postgres 9.1+):
WITH list(object_attribute_1, object_attribute_2) AS (
VALUES
(100, 'Some String')
, (200, 'Some other String')
, .....
)
, ins AS (
INSERT INTO object (object_attribute_1, object_attribute_2)
SELECT l.*
FROM list l
LEFT JOIN object o1 USING (object_attribute_1, object_attribute_2)
WHERE o1.object_attribute_1 IS NULL
RETURNING *
)
SELECT * FROM ins -- newly inserted rows
UNION ALL -- append pre-existing rows
SELECT o.*
FROM list l
JOIN object o USING (object_attribute_1, object_attribute_2);
Note that there is a tiny window for a race condition, so this might break if many clients try it at the same time. If you are working under heavy concurrent load, consider this related answer, in particular the part on locking or serializable transaction isolation:
Postgresql batch insert or ignore
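For illustration, a minimal sketch of both options (the serializable variant assumes the client retries on serialization failures):
-- Serializable isolation:
BEGIN ISOLATION LEVEL SERIALIZABLE;
-- ... run the data-modifying CTE from above ...
COMMIT;
-- Or block concurrent writers with an explicit lock:
BEGIN;
LOCK TABLE object IN SHARE ROW EXCLUSIVE MODE;
-- ... run the data-modifying CTE from above ...
COMMIT;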
I have a requirement to keep a history of changes made to a specific table when an UPDATE occurs, but we only care about specific columns.
So, I have created a History table:
CREATE TABLE [dbo].[SourceTable_History](
[SourceTable_HistoryID] [int] IDENTITY(1,1) NOT NULL,
[SourceTableID] [int] NOT NULL,
[EventDate] [date] NOT NULL,
[EventUser] VARCHAR(50) NOT NULL,
[ChangedColumn] VARCHAR(50) NOT NULL,
[PreviousValue] VARCHAR(100) NULL,
[NewValue] VARCHAR(100) NULL,
CONSTRAINT pk_SourceTable_History PRIMARY KEY ([SourceTable_HistoryID]),
CONSTRAINT fk_SourceTable_HistoryID_History_Source FOREIGN KEY ([SourceTableID]) REFERENCES SourceTable (SourceTableId)
)
And my plan is to create an UPDATE trigger on SourceTable. The business only cares about changes to certain columns, so, in pseudo code, I was planning to do something like:
If source.Start <> new.start
Insert into history (PrimaryKey, EventDate, EventUser, ColumnName, OldValue, NewValue)
(PK, GETDATE(), updateuser, "StartDate", old.value, new.value)
And there would be a block like that per column we want history on.
We're NOT allowed to use CDC, so we have to roll our own, and this is my plan so far.
Does this seem a suitable plan?
There are 7 tables we need to monitor, with a column count of between 2 and 5 columns per table.
I just need to work out how to get a trigger to first compare the before and after values of a specific column and then write a new row.
I thought it was something as simple as:
CREATE TRIGGER tr_PersonInCareSupportNeeds_History
ON PersonInCareSupportNeeds
FOR UPDATE
AS
BEGIN
IF(inserted.StartDate <> deleted.StartDate)
BEGIN
INSERT INTO [dbo].[PersonInCareSupportNeeds_History]
([PersonInCareSupportNeedsID], [EventDate], [EventUser], [ChangedColumn], [PreviousValue], [NewValue])
VALUES
(inserted.[PersonInCareSupportNeedsID], GETDATE(), [LastUpdateUser], 'StartDate', deleted.[StartDate], inserted.[StartDate])
END
END
We have a trigger-based auditing system; we basically created it by analyzing how a third-party tool for generating audit triggers, ApexSQL Audit, creates triggers and manages storage, and we developed our own system based on that.
I think your solution is generally OK, but you need to think about modifying the storage a bit and planning for scaling.
What if the business decides to keep track of all columns in all tables? What if they decide to track inserts and deletes as well? Will your solution be able to accommodate this?
Storage: use two tables to hold your data. One table holds all the info about each transaction (when, who, application name, table name, schema name, affected rows, etc.). The other holds the actual data (before and after values, primary key, etc.).
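A minimal sketch of that two-table layout (all names are illustrative):
CREATE TABLE AuditTransaction (
AuditTransactionID INT IDENTITY(1,1) PRIMARY KEY,
EventDate DATETIME NOT NULL,
EventUser SYSNAME NOT NULL,
ApplicationName NVARCHAR(128) NULL,
SchemaName SYSNAME NOT NULL,
TableName SYSNAME NOT NULL,
AffectedRows INT NOT NULL
);
CREATE TABLE AuditValue (
AuditValueID INT IDENTITY(1,1) PRIMARY KEY,
AuditTransactionID INT NOT NULL REFERENCES AuditTransaction (AuditTransactionID),
PrimaryKeyValue NVARCHAR(100) NOT NULL,
ColumnName SYSNAME NOT NULL,
ValueBefore NVARCHAR(MAX) NULL,
ValueAfter NVARCHAR(MAX) NULL
);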
Triggers: we ended up with a template for insert, update and delete triggers and a very simple C# app where we enter tables and columns, and the application outputs the DDL. This saved us a lot of time.
Depending on your requirements, I think history tables should mirror the table you want to capture, plus the extra audit details (who, when, why).
That can make it easier to use the same existing logic (SQL, data classes, screens, etc.) to view historical data.
With your design, getting the data in will be OK, but how easy will it be to pull the data out in a usable format?
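A minimal sketch of the mirror approach (names are illustrative; the history table repeats SourceTable's columns and appends the audit details):
CREATE TABLE SourceTable_History (
SourceTable_HistoryID INT IDENTITY(1,1) PRIMARY KEY,
-- mirrored columns from SourceTable:
SourceTableID INT NOT NULL,
StartDate DATE NULL,
-- ...remaining mirrored columns...
-- audit details:
AuditDate DATETIME NOT NULL DEFAULT GETDATE(), -- when
AuditUser SYSNAME NOT NULL,                    -- who
AuditReason VARCHAR(255) NULL                  -- why
);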
Well, I think your idea is not so bad. Actually, I have a similar system in production. I will not give you my complete code (with asynchronous history saving), but I can give you some guidelines.
The main idea is to turn your data from the relational model into the Entity-Attribute-Value model. We also want our triggers to be as general as possible, which means not writing column names explicitly. This can be done in different ways, but the most general approach I know of in SQL Server is to use FOR XML and then select from the XML:
declare @Data xml
select @Data = (select * from Test for xml raw('Data'))
select
T.C.value('../@ID', 'bigint') as ID,
T.C.value('local-name(.)', 'nvarchar(128)') as Name,
T.C.value('.', 'nvarchar(max)') as Value
from @Data.nodes('Data/@*') as T(C)
To get the rows that differ between two tables, you can use EXCEPT (parenthesized so the set operations pair up as intended):
(select * from Test1 except select * from Test2)
union all
(select * from Test2 except select * from Test1)
And, finally, your trigger could be something like this:
create trigger utr_Test_History on Test
after update
as
begin
declare @Data_Inserted xml, @Data_Deleted xml
select @Data_Inserted =
(
select *
from (select * from inserted except select * from deleted) as a
for xml raw('Data')
)
select @Data_Deleted =
(
select *
from (select * from deleted except select * from inserted) as a
for xml raw('Data')
)
;with CTE_Inserted as (
select
T.C.value('../@ID', 'bigint') as ID,
T.C.value('local-name(.)', 'nvarchar(128)') as Name,
T.C.value('.', 'nvarchar(max)') as Value
from @Data_Inserted.nodes('Data/@*') as T(C)
), CTE_Deleted as (
select
T.C.value('../@ID', 'bigint') as ID,
T.C.value('local-name(.)', 'nvarchar(128)') as Name,
T.C.value('.', 'nvarchar(max)') as Value
from @Data_Deleted.nodes('Data/@*') as T(C)
)
)
insert into History (Table_Name, Record_ID, Event_Date, Event_User, Column_Name, Value_Old, Value_New)
select 'Test', isnull(I.ID, D.ID), getdate(), system_user, isnull(D.Name, I.Name), D.Value, I.Value
from CTE_Inserted as I
full outer join CTE_Deleted as D on D.ID = I.ID and D.Name = I.Name
where
not
(
I.Value is null and D.Value is null or
I.Value is not null and D.Value is not null and I.Value = D.Value
)
end