I have a table called flights, which has all information related to flights, and I have a table called users.
I want to create a new table called orders. In this table I want to add the user name from the users table and certain flight information from the flights table.
The thing is, I also want to have a column in my orders table called orderID.
My question is: how do I add a column in a CREATE TABLE ... AS SELECT query?
SELECT *, an_expression AS another_column FROM the_table_or_subquery .... ;
an_expression is just that: an expression that results in a single value.
the_table_or_subquery and another_column are descriptive rather than actual names; change them accordingly.
The new column could come first, e.g. SELECT an_expression AS another_column, * FROM the_table_or_subquery;
Could you please give an example to better understand?
Considering that you have provided scant details, here are examples of creating a new table and inserting some data from expressions and other data from another table (flights):
DROP TABLE IF EXISTS flights;
DROP TABLE IF EXISTS users;
DROP TABLE IF EXISTS `order`;
DROP TABLE IF EXISTS other_order_table;
CREATE TABLE IF NOT EXISTS flights (
id INTEGER PRIMARY KEY,
flight_info TEXT
)
;
CREATE TABLE IF NOT EXISTS users (
userid INTEGER PRIMARY KEY,
user_name TEXT UNIQUE,
user_email TEXT UNIQUE
)
;
INSERT OR IGNORE INTO flights (flight_info)
VALUES
('Flight1'),
('Flight2'),
('Flight3')
;
INSERT OR IGNORE INTO users (user_name,user_email)
VALUES
('Fred','fred@email'),
('Mary','mary@email'),
('Jane','jane@email')
;
DROP TABLE IF EXISTS `order`;
/* >>>>>>>>>> NOT A GOOD IDEA <<<<<<<<<< due to
A table created using CREATE TABLE AS has no PRIMARY KEY and no constraints of any kind.
The default value of each column is NULL.
The default collation sequence for each column of the new table is BINARY.
*/
CREATE TABLE IF NOT EXISTS `order` /* NOTE ORDER is a keyword so has to be enclosed - better to not call it order */
AS SELECT *,null AS orderid /* The new column BUT see above, value will be null*/
FROM flights;
SELECT * FROM `order`;
/* BETTER, as column attributes can be specified;
however the data must be inserted separately
*/
CREATE TABLE IF NOT EXISTS other_order_table (
orderid INTEGER PRIMARY KEY,
order_added TEXT DEFAULT CURRENT_TIMESTAMP,
flight_id,
flight_info
)
;
/*
EXAMPLE 1
uses defaults for columns
in the case of orderid, as it's an alias of the rowid, an auto-generated id
in the case of order_added, the current date and time in YYYY-MM-DD hh:mm:ss format
*/
INSERT INTO other_order_table (flight_id,flight_info) SELECT * FROM flights;
SELECT * FROM other_order_table;
DELETE FROM other_order_table;
/* EXAMPLE 2 */
INSERT INTO other_order_table
SELECT
/* a random id will be inserted into the first column (orderid) */
abs(random()),
/* a random date up to 999 days in the past */
datetime('now','-'||CAST(abs(random()) % 1000 AS INTEGER)||' days'),
/* all the columns from the flights table */
*
FROM flights
;
SELECT * FROM other_order_table;
/* Cleanup Environment */
DROP TABLE IF EXISTS flights;
DROP TABLE IF EXISTS users;
DROP TABLE IF EXISTS `order`;
DROP TABLE IF EXISTS other_order_table;
The 3 results are:
1. (the SELECT from `order`) the 2 columns from the flights table + the new orderID column, set to null. WARNING: see the commentary above re column attributes being stripped.
2. (EXAMPLE 1) the orderid is generated, and the order_added is generated due to the default being CURRENT_TIMESTAMP.
3. (EXAMPLE 2) both the new columns, orderid and order_added, use expressions that return a random but suitable value.
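To tie this back to the original flights/users/orders question, here is a minimal sketch of the second approach (explicit columns plus INSERT ... SELECT) applied to an orders table; the column list and the WHERE clause picking one user and one flight are illustrative assumptions, not taken from the real schema:
CREATE TABLE IF NOT EXISTS orders (
    orderid INTEGER PRIMARY KEY,               -- auto-generated (alias of the rowid)
    order_added TEXT DEFAULT CURRENT_TIMESTAMP,
    user_name TEXT,                            -- copied from users
    flight_info TEXT                           -- copied from flights
);
/* Populate from the two source tables created earlier (i.e. before the cleanup above). */
INSERT INTO orders (user_name, flight_info)
SELECT u.user_name, f.flight_info
FROM users u, flights f
WHERE u.user_name = 'Fred'                     -- illustrative: one user ...
  AND f.flight_info = 'Flight1';               -- ... ordering one flight
SELECT * FROM orders;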
Related
I want to insert a new record if the record is not present in the table.
For that I am using the below query in Teradata:
INSERT INTO sample(id, name) VALUES('12','rao')
WHERE NOT EXISTS (SELECT id FROM sample WHERE id = '12');
When I execute the above query I am getting the below error:
WHERE NOT EXISTS
Failure 3706 Syntax error: expected something between ')' and the 'WHERE' keyword.
Can anyone help with the above issue? It would be very helpful.
You can use INSERT INTO ... SELECT ... as follows:
INSERT INTO sample (id, name)
SELECT '12', 'rao'
WHERE NOT EXISTS (SELECT id FROM sample WHERE id = '12');
You can also create a primary/unique key on the id column to avoid inserting duplicate data into it.
I would advise writing the query as:
INSERT INTO sample (id, name)
SELECT id, name
FROM (SELECT 12 as id, 'rao' as name) x
WHERE NOT EXISTS (SELECT 1 FROM sample s WHERE s.id = x.id);
This means that you do not need to repeat the constant value -- such repetition can be a cause of errors in queries. Note that I removed the single quotes. id looks like a number so treat it as a number.
The uniqueness of ids is usually handled using a unique constraint or index:
alter table sample add constraint unq_sample_id unique (id);
This makes sure that the database ensures uniqueness. Your approach can fail if two inserts are run at the same time with the same id. An attempt to insert a duplicate returns an error (which the exists can then avoid).
In practice, id columns are usually generated automatically by the database. So the create table statement would look more like:
id integer generated by default as identity
And the insert would look like:
insert into sample (name)
values ('rao');
If id is the Primary Index of the table you can use MERGE:
merge into sample as tgt
using VALUES('12','rao') as src (id, name)
on src.id = tgt.id
when not matched
then insert (src.id,src.name)
1. I have a table nodes with node_id (PK), node_name (name), connstr (text), last_snap_id (integer), and this table has 1 row filled with 1, local, dbname = postgres, 0.
2. I have a table indexes_list with node_id (PK), indexrelid (PK), schemaname, indexrelname, which is empty.
3. I have to collect the data from pg_stat_user_indexes; the columns are indexrelid, schemaname, indexrelname.
Question: how do I fetch data from pg_stat_user_indexes and load it into my indexes_list table at the same time? If I use 2 select statements in one I get an error.
Welcome to SO.
First you need to create a SEQUENCE, or alternatively create the column node_id with the type serial ...
CREATE SEQUENCE seq_node_id START WITH 1;
... and then populate your nodes table with an INSERT INTO … (SELECT …):
INSERT INTO nodes (node_id,indexrelid,schemaname,indexrelname)
SELECT nextval('seq_node_id'),indexrelid,schemaname,indexrelname
FROM pg_stat_user_indexes;
If node_id is of type serial, you can simply omit it in the INSERT
INSERT INTO nodes (indexrelid,schemaname,indexrelname)
SELECT indexrelid,schemaname,indexrelname
FROM pg_stat_user_indexes;
EDIT:
These CREATE TABLE and INSERT statements should give you some clarity:
CREATE TABLE nodes2 (
node_id serial, indexrelid text, schemaname text, indexrelname text
);
INSERT INTO nodes2 (indexrelid,schemaname,indexrelname)
SELECT indexrelid,schemaname,indexrelname
FROM pg_stat_user_indexes;
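The question actually asks about loading indexes_list rather than nodes, so as a sketch (using the column names listed in the question, and taking node_id from the single row already in nodes) the same INSERT INTO ... SELECT pattern would look like this:
-- Sketch only: assumes nodes currently holds one row (node_id = 1), as described in the question.
INSERT INTO indexes_list (node_id, indexrelid, schemaname, indexrelname)
SELECT n.node_id, s.indexrelid, s.schemaname, s.indexrelname
FROM pg_stat_user_indexes s
CROSS JOIN nodes n;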
I have a common pattern in the current database that I would like to rip out. I have 3 objects where a single one will suffice: current_table, history_table, combined_view.
current_table and history_table have exactly the same columns and contain data split on a timestamp; that is, history_table contains data up to 2010-01-01 and current_table contains data from 2010-01-01 onwards.
The combined view is (poor man's partitioning)
select * from history_table
UNION ALL
select * from current_table
I would like to have a single table with the same name as the view and do away with the history_table and the view. My algorithm is:
1. Drop constraints on the cutoff time.
2. Move data from history_table into current_table.
3. Rename history_table to history_table_DEPR, rename the view to combined_view_DEPR, rename current_table to combined_view.
I currently achieve (2) above via the following SQL:
INSERT INTO current_table
SELECT * FROM history_table
I imagine (2) is where the bulk of the time is spent. I am worried that the insert above will attempt to write a log for each row inserted and will be slower than it could be. What is the best way to move the data in this case? I do not care about logging these moves.
This will batch:
select 1
while (@@rowcount > 0)
begin
    INSERT INTO current_table
    SELECT top (100000) * FROM history_table ht
    where not exists ( select 1 from current_table ctt
                       where ctt.PK = ht.PK
                     )
end
I wouldn't move the data at all, especially if you're going to have to repeat this exercise. Use some partitioning tricks to shuffle metadata around.
1) Create an intermediate staging table with two partitions based on your separation date.
2) Create your eventual target table, named after your view, without partitions.
3) Switch the data from the existing tables into the partitioned table.
4) Collapse the two partitions into one partition.
5) Switch the remaining partition into your new target table.
6) Drop all the working objects.
7) Repeat as needed.
-- Step 0.
-- Standard issue pre-cleaning.
IF OBJECT_ID('dbo.OldData','U') IS NOT NULL
DROP TABLE dbo.OldData;
IF OBJECT_ID('dbo.NewData','U') IS NOT NULL
DROP TABLE dbo.NewData;
IF OBJECT_ID('dbo.CleanUp','U') IS NOT NULL
DROP TABLE dbo.CleanUp;
IF OBJECT_ID('dbo.AllData','U') IS NOT NULL
DROP TABLE dbo.AllData;
IF EXISTS (SELECT * FROM sys.partition_schemes
WHERE name = 'psCleanUp')
DROP PARTITION SCHEME psCleanUp;
IF EXISTS (SELECT * FROM sys.partition_functions
WHERE name = 'pfCleanUp')
DROP PARTITION FUNCTION pfCleanUp;
-- Mock up your existing situation. Two data tables.
CREATE TABLE dbo.OldData
(
[Dates] DATE NOT NULL
,[OtherStuff] VARCHAR(1) NULL
);
CREATE TABLE dbo.NewData
(
[Dates] DATE NOT NULL
,[OtherStuff] VARCHAR(1) NULL
);
INSERT INTO dbo.OldData
(
Dates
,OtherStuff
)
VALUES
(
'20090101' -- Dates - date
,'' -- OtherStuff - varchar(1)
);
INSERT INTO dbo.NewData
(
Dates
,OtherStuff
)
VALUES
(
'20110101' -- Dates - date
,'' -- OtherStuff - varchar(1)
)
-- Step .5
-- Here's where the solution starts.
-- Add check constraints to your existing tables.
-- The partition switch will require this to be sure
-- the incoming data works with the partition scheme.
ALTER TABLE dbo.OldData
ADD CONSTRAINT ckOld CHECK (Dates < '2010-01-01');
ALTER TABLE dbo.NewData
ADD CONSTRAINT ckNew CHECK (Dates >= '2010-01-01');
-- Step 1.
-- Create your partitioning artifacts and
-- intermediate table.
CREATE PARTITION FUNCTION pfCleanUp (DATE)
AS RANGE RIGHT FOR VALUES ('2010-01-01');
CREATE PARTITION SCHEME psCleanUp
AS PARTITION pfCleanUp
ALL TO ([PRIMARY]);
CREATE TABLE dbo.CleanUp
(
[Dates] DATE NOT NULL
,[OtherStuff] VARCHAR(1) NULL
) ON psCleanUp(Dates);
-- Step 2.
-- Create your new target table.
CREATE TABLE dbo.AllData
(
[Dates] DATE NOT NULL
,[OtherStuff] VARCHAR(1) NULL
);
-- Step 3.
-- Start flopping metadata around.
ALTER TABLE dbo.OldData
SWITCH TO dbo.CleanUp PARTITION 1;
ALTER TABLE dbo.NewData
SWITCH TO dbo.CleanUp PARTITION 2;
-- Step 4.
-- Your old tables should be empty now.
-- Put all of the data into one partition.
ALTER PARTITION FUNCTION pfCleanUp()
MERGE RANGE ('2010-01-01');
-- Step 5.
-- Switch that partition out to your
-- spanky new table.
ALTER TABLE dbo.CleanUp
SWITCH PARTITION 1 TO dbo.AllData;
-- Verify the data's where it belongs.
SELECT *
FROM dbo.AllData;
-- Verify the data's not where it shouldn't be.
SELECT * FROM dbo.OldData;
SELECT * FROM dbo.NewData;
SELECT * FROM dbo.CleanUp ;
-- Step 6.
-- Clean up after yourself.
DROP TABLE dbo.OldData;
DROP TABLE dbo.NewData;
DROP TABLE dbo.CleanUp;
DROP PARTITION SCHEME psCleanUp;
DROP PARTITION FUNCTION pfCleanUp;
-- This one's just here for me.
DROP TABLE dbo.AllData;
I have created a backup for my country table.
create table country_bkp as select * from country;
What SQL should I use to restore the country table to its original state?
I can do
insert into country select * from country_bkp;
but it will just create duplicate entries and will probably fail as the primary keys would be the same.
Is there an SQL command to merge data back?
Last alternative would be
DROP TABLE country;
create table country as select * from country_bkp;
but I want to avoid this as all the grants/permissions would be lost.
Another, cleaner way would be
delete from country ;
insert into country select * from country_bkp;
But I am looking for more of a merge approach, without having to clear the data from the original table.
Instead of dropping the table, which, as you noted, would lose all the permission definitions, you could truncate it to just remove all the data, and then insert-select the old data:
TRUNCATE TABLE country;
INSERT INTO country SELECT * FROM country_bkp;
In my case, INSERT INTO country SELECT * FROM country_bkp; didn't work because:
It wouldn't let me insert into the Primary Key column due to
identity_insert being off by default.
My table had TimeStamp columns.
In that case:
allow identity_insert on the OriginalTable
write an insert query in which you mention all the columns of OriginalTable (excluding TimeStamp columns) and select all the columns from BackupTable (excluding TimeStamp columns)
restrict identity_insert on the OriginalTable at the end.
EXAMPLE:
Set Identity_insert OriginalTable ON
insert into OriginalTable (a,b,c,d,e, ....) --[Exclude TimeStamp Columns here]
Select a,b,c,d,e, .... from BackupTable --[Exclude TimeStamp Columns here]
Set Identity_insert OriginalTable Off
If Identity Insert is ON for the original table, the only solution to recover the data from the backup table is to rename the original table to a throwaway name and then rename the backup table to the original table's name.
For example:
Original table - Invoice
Backup table - Invoice_back
Now rename these tables:
Original table - Invoice_xxx
Backup table - Invoice
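A minimal sketch of that rename approach, assuming SQL Server (sp_rename is SQL Server specific; other engines use ALTER TABLE ... RENAME instead), using the table names from the example:
EXEC sp_rename 'Invoice', 'Invoice_xxx';   -- move the original out of the way
EXEC sp_rename 'Invoice_back', 'Invoice';  -- promote the backup copy
-- Note (assumption): grants, constraints and indexes stay attached to each physical table,
-- so the renamed-in Invoice only has whatever was defined on Invoice_back.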
I am trying to copy a record in a table and change a few values with a stored procedure in SQL Server 2005. This is simple, but I also need to copy relationships in other tables with the new primary keys. As this proc is being used to batch copy records, I've found it difficult to store some relationship between old keys and new keys.
Right now, I am grabbing new keys from the batch insert using OUTPUT INTO.
ex:
INSERT INTO table
(column1, column2,...)
OUTPUT INSERTED.PrimaryKey INTO @TableVariable
SELECT column1, column2,...
Is there a way like this to easily get the old keys inserted at the same time I am inserting new keys (to ensure I have paired up the proper corresponding keys)?
I know cursors are an option, but I have never used them and have only heard them referenced in a horror story fashion. I'd much prefer to use OUTPUT INTO, or something like it.
If you need to track both old and new keys in your temp table, you need to cheat and use MERGE:
Data setup:
create table T (
ID int IDENTITY(5,7) not null,
Col1 varchar(10) not null
);
go
insert into T (Col1) values ('abc'),('def');
And the replacement for your INSERT statement:
declare @TV table (
Old_ID int not null,
New_ID int not null
);
merge into T t1
using (select ID,Col1 from T) t2
on 1 = 0
when not matched then insert (Col1) values (t2.Col1)
output t2.ID,inserted.ID into @TV;
And (actually needs to be in the same batch so that you can access the table variable):
select * from T;
select * from @TV;
Produces:
ID Col1
5 abc
12 def
19 abc
26 def
Old_ID New_ID
5 19
12 26
The reason you have to do this is because of an irritating limitation on the OUTPUT clause when used with INSERT - you can only access the inserted table, not any of the tables that might be part of a SELECT.
Related - More explanation of the MERGE abuse
INSERT statements loading data into tables with an IDENTITY column are guaranteed to generate the values in the same order as the ORDER BY clause in the SELECT.
If you want the IDENTITY values to be assigned in a sequential fashion
that follows the ordering in the ORDER BY clause, create a table that
contains a column with the IDENTITY property and then run an INSERT ..
SELECT … ORDER BY query to populate this table.
From: The behavior of the IDENTITY function when used with SELECT INTO or INSERT .. SELECT queries that contain an ORDER BY clause
You can use this fact to match your old with your new identity values. First collect the list of primary keys that you intend to copy into a temporary table. You can also include your modified column values as well if needed:
select
PrimaryKey,
Col1
--Col2... etc
into #NewRecords
from Table
--where whatever...
Then do your INSERT with the OUTPUT clause to capture your new ids into the table variable:
declare @NewIds table (
New_ID int not null
);
INSERT INTO Table
(Col1 /*,Col2... etc.*/)
OUTPUT INSERTED.PrimaryKey INTO @NewIds
SELECT Col1 /*,Col2... etc.*/
from #NewRecords
order by PrimaryKey
Because of the ORDER BY PrimaryKey statement, you will be guaranteed that your New_ID numbers will be generated in the same order as the PrimaryKey field of the copied records. Now you can match them up by row numbers ordered by the ID values. The following query would give you the pairings:
select PrimaryKey, New_ID
from
(select PrimaryKey,
ROW_NUMBER() over (order by PrimaryKey) OldRow
from #NewRecords
) PrimaryKeys
join
(
select New_ID,
ROW_NUMBER() over (order by New_ID) NewRow
from @NewIds
) New_IDs
on OldRow = NewRow
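Once you have the old/new pairings, copying the dependent rows in the related tables is one more INSERT ... SELECT with a join. A hedged sketch, assuming a hypothetical child table ChildTable with a foreign key column ParentKey, and that the pairing query above has been stored into a temp table #KeyMap (PrimaryKey, New_ID); all of these names are illustrative, not from the original schema:
-- Re-point each copied child row at the new parent key.
INSERT INTO ChildTable (ParentKey, SomeChildColumn)
SELECT km.New_ID, c.SomeChildColumn
FROM ChildTable c
JOIN #KeyMap km
  ON c.ParentKey = km.PrimaryKey;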