SQL Server Insert Into Table containing a column "Timestamp (Rowversion)"

I'm creating a C# Winforms application for recipe management in an industrial environment.
I created a SQL Server table with 130 columns. The table contains a column called CheckData (of datatype Timestamp), which I use to detect changes made to a row.
If I insert a new row into that table, all works fine. The code I use is:
INSERT INTO tablename (Column1, column2, column3, column4)
VALUES (value1, value2, value3, value4)
I only assign values to the main columns; the others get their default values. I do not assign a value to the timestamp column, since it is written by the system.
Additionally, I want to copy a row from this table to the same table (duplicate a data record).
I copy the source row into a temporary table, drop the ID (primary key) and timestamp columns in that temporary table, and then try to insert the single row from the temporary table back into the main table. This fails.
Here's the code:
SELECT *
INTO #temptable
FROM tablename
WHERE Recipe_No = 8;
ALTER TABLE #temptable DROP COLUMN ID, CHECKDATA;
ALTER TABLE #temptable REBUILD;
UPDATE #temptable
SET Recipe_No = 9, Recipe_Name = 'Test'
WHERE Recipe_No = 8;
INSERT INTO tablename
SELECT * FROM #temptable;
I don't understand where the difference is between inserting a new row through INSERT INTO xxx (yyy) VALUES (zzz) and INSERT INTO xxx SELECT * FROM yyy. In both cases I don't try to write the timestamp value in the new row.
Does anybody have an idea what I'm missing here?

I don't understand where the difference is between inserting a new row through INSERT INTO xxx (yyy) VALUES (zzz) and INSERT INTO xxx SELECT * FROM yyy.
With this,
INSERT INTO xxx SELECT * FROM yyy.
you are failing to specify the column mappings from the SELECT to the target table. You should always use
INSERT INTO xxx (Column1, Column2, ...)
SELECT Column1, Column2, ...
FROM yyy
Here's a simplified example of what you're attempting:
drop table if exists t
create table t(id int, a int)
insert into t(id,a) values (1,1)
select * into #t from t where id = 1
alter table #t drop column id
insert into t select * from #t
and it will fail with
Msg 213, Level 16, State 1, Line 12
Column name or number of supplied values does not match table definition.
because the temp table doesn't even have the same number of columns. And even if it did, you wouldn't know for sure that the column mappings were correct.
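For reference, a minimal fix for this simplified example is to name the surviving column explicitly on both sides (the id column then simply takes its default, here NULL):
insert into t(a) select a from #t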

It is failing because essentially your command
INSERT INTO tablename SELECT * FROM #temptable;
is telling SQL Server: "Insert everything into this table from this temp table", matching columns by position rather than by name.
While you can work around this, I would suggest simply inserting into only the columns you need in the target table, with only the values you would like to include. Instead of needing to drop the columns/values afterwards, you just don't select them to begin with.
An alternative: if you can write to a helper table, it may be beneficial to INSERT the values you have INTO that helper table (as opposed to a temp table). Then transform that helper table, and THEN you can do INSERT INTO final_table SELECT * FROM helper. This should give you the results you're looking for.
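A minimal sketch of that helper-table flow, reusing the recipe columns from the question as stand-ins for the full column list (the helper table name is illustrative):
SELECT Recipe_No, Recipe_Name /* ...plus every other column you want copied, but not ID or CheckData */
INTO RecipeHelper
FROM tablename
WHERE Recipe_No = 8;

UPDATE RecipeHelper
SET Recipe_No = 9, Recipe_Name = 'Test';

INSERT INTO tablename (Recipe_No, Recipe_Name /* , ... */)
SELECT Recipe_No, Recipe_Name /* , ... */
FROM RecipeHelper;
Because the final INSERT lists its columns explicitly, the identity and rowversion columns never enter the column mapping.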
I hope this is helpful, and I hope it explains why your current command is failing.

Related

How to Insert new Record into Table if the Record is not Present in the Table in Teradata

I want to insert a new record if the record is not present in the table.
For that I am using the query below in Teradata:
INSERT INTO sample(id, name) VALUES('12','rao')
WHERE NOT EXISTS (SELECT id FROM sample WHERE id = '12');
When I execute the above query I am getting below error.
WHERE NOT EXISTS
Failure 3706 Syntax error: expected something between ')' and the 'WHERE' keyword.
Can anyone help with the above issue? It would be very helpful.
You can use INSERT INTO ... SELECT ... as follows:
INSERT INTO sample(id,name)
select '12','rao'
WHERE NOT EXISTS (SELECT id FROM sample WHERE id = '12');
You can also create a primary/unique key on the id column to avoid inserting duplicate data into it.
I would advise writing the query as:
INSERT INTO sample (id, name)
SELECT id, name
FROM (SELECT 12 as id, 'rao' as name) x
WHERE NOT EXISTS (SELECT 1 FROM sample s WHERE s.id = x.id);
This means that you do not need to repeat the constant value -- such repetition can be a cause of errors in queries. Note that I removed the single quotes. id looks like a number so treat it as a number.
The uniqueness of ids is usually handled using a unique constraint or index:
alter table sample add constraint unq_sample_id unique (id);
This makes sure that the database ensures uniqueness. Your approach can fail if two inserts are run at the same time with the same id. An attempt to insert a duplicate returns an error (which the EXISTS can then avoid).
In practice, id columns are usually generated automatically by the database. So the create table statement would look more like:
id integer generated by default as identity
And the insert would look like:
insert into sample (name)
values ('rao');
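For context, a hedged sketch of the matching table definition (ANSI-style syntax; exact identity options vary by platform and version):
CREATE TABLE sample (
    id INTEGER GENERATED BY DEFAULT AS IDENTITY,
    name VARCHAR(30)
);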
If id is the Primary Index of the table you can use MERGE:
merge into sample as tgt
using VALUES('12','rao') as src (id, name)
on src.id = tgt.id
when not matched
then insert (src.id,src.name)

How to copy some records of a table and change some columns before inserting into the same table in SQL Server?

In SQL Server, I have a table whose PK is a GUID, with lots of records already.
Now I want to add records that only need the COMMON_ID and COMMON_ASSET_TYPE columns of some existing records changed.
select * from My_Table where COMMON_ASSET_TYPE = 'ASSET'
I am writing SQL to copy the above query result, changing the COMMON_ID value to a new GUID and the COMMON_ASSET_TYPE value from 'ASSET' to 'USER', and then inserting the new rows into My_Table.
I do not know how to write this, since it is too much trouble to insert the records manually.
Update:
I have far more columns in the table and most of them are not nullable. I want to keep all of those columns' data for the new records, except for the two columns above. Is there any way to do this without having to write out all the column names in the SQL?
Try using NEWID() if you want to create a new guid:
INSERT INTO dbo.YourTable
(
COMMON_ID,
COMMON_ASSET_TYPE
)
select NEWID(), 'USER' as Common_Asset_Type
from My_Table
where COMMON_ASSET_TYPE = 'ASSET'
UPDATE:
As a good practice, I would suggest writing all the column names explicitly to have a clean and clear insert statement. However, you can use the following construction, though it is not advisable in my opinion:
insert into table_One
select
id
, isnull(name,'Jon')
from table_Two
INSERT INTO My_Table (COMMON_ID,COMMON_LIMIT_IDENTITY, COMMON_CLASS_ID,COMMON_ASSET_TYPE)
SELECT NEWID(), COMMON_LIMIT_IDENTITY, COMMON_CLASS_ID,'USER'
FROM My_Table
WHERE COMMON_ASSET_TYPE = 'ASSET'
If I've understood correctly, you want to take existing records in your table, modify them, and insert them as new records in the same table.
I'll assume the ID column contains the GUID?
I'd first create a temporary table
CREATE TABLE #myTempTable(
ID UNIQUEIDENTIFIER,
Name varchar(max),
... etc
);
Fill this temp table with the records to change, using your SELECT statement.
Change the records in the temp table using an UPDATE statement.
Finally, insert those "new" records back into the primary table with an INSERT INTO ... SELECT statement, as sketched below.
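A hedged sketch of those three steps (the column list is illustrative; a real version would cover every NOT NULL column):
INSERT INTO #myTempTable (ID, Name)
SELECT ID, Name
FROM My_Table
WHERE COMMON_ASSET_TYPE = 'ASSET';

UPDATE #myTempTable
SET ID = NEWID();  -- NEWID() is evaluated per row, so each copy gets a fresh GUID

INSERT INTO My_Table (ID, Name)
SELECT ID, Name
FROM #myTempTable;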
You will probably have to sandwich the INSERT INTO ... SELECT between IDENTITY_INSERT (ON/OFF) statements:
SET IDENTITY_INSERT schema_name.table_name ON
SET IDENTITY_INSERT schema_name.table_name OFF
IDENTITY_INSERT "Allows explicit values to be inserted into the identity column of a table."

Hive insert query like SQL

I am new to Hive, and I want to know if there is any way to insert data into a Hive table like we do in SQL. I want to insert my data into Hive like:
INSERT INTO tablename VALUES (value1,value2..)
I have read that you can load data from a file into a Hive table, or that you can import data from one table into a Hive table, but is there any way to append data as in SQL?
Some of the answers here are out of date as of Hive 0.14
https://cwiki.apache.org/confluence/display/Hive/LanguageManual+DML#LanguageManualDML-InsertingvaluesintotablesfromSQL
It is now possible to insert using syntax such as:
CREATE TABLE students (name VARCHAR(64), age INT, gpa DECIMAL(3, 2));
INSERT INTO TABLE students
VALUES ('fred flintstone', 35, 1.28), ('barney rubble', 32, 2.32);
You can use the table generating function stack to insert literal values into a table.
First you need a dummy table which contains only one line. You can generate it with the help of limit.
CREATE TABLE one AS
SELECT 1 AS one
FROM any_table_in_your_database
LIMIT 1;
Now you can create a new table with literal values like this:
CREATE TABLE my_table AS
SELECT stack(3
, "row1", 1
, "row2", 2
, "row3", 3
) AS (column1, column2)
FROM one
;
The first argument of stack is the number of rows you are generating.
You can also add values to an existing table:
INSERT INTO TABLE my_table
SELECT stack(2
, "row4", 1
, "row5", 2
) AS (column1, column2)
FROM one
;
Slightly better version of the unique2 suggestion is below:
insert overwrite table target_table
select * from
(
select stack(
3,             -- generating new table with 3 records
'John', 80,    -- record_1
'Bill', 61,    -- record_2
'Martha', 101  -- record_3
)
) s;
Which does not require the hack of using an already existing table.
You can use the approach below. With this, you don't need to create a temp table or a txt/csv file for further select and load, respectively.
INSERT INTO TABLE tablename SELECT value1, value2 FROM tempTable_with_atleast_one_records LIMIT 1;
Where tempTable_with_atleast_one_records is any table with at least one record.
But the problem with this approach is that if you have an INSERT statement which inserts multiple rows, like the one below,
INSERT INTO yourTable values (1 , 'value1') , (2 , 'value2') , (3 , 'value3') ;
then you need a separate INSERT Hive statement for each row. See below:
INSERT INTO TABLE yourTable SELECT 1 , 'value1' FROM tempTable_with_atleast_one_records LIMIT 1;
INSERT INTO TABLE yourTable SELECT 2 , 'value2' FROM tempTable_with_atleast_one_records LIMIT 1;
INSERT INTO TABLE yourTable SELECT 3 , 'value3' FROM tempTable_with_atleast_one_records LIMIT 1;
No. The INSERT INTO tablename VALUES (x,y,z) syntax is currently not supported in Hive.
You can definitely append data to an existing table, though. (It is not actually an append at the HDFS level.) Whenever you do a LOAD or INSERT operation on an existing Hive table without the OVERWRITE clause, the new data is added without replacing the old data: a new file is created for the newly inserted data inside the directory corresponding to that table. For example:
I have a file named demo.txt which has 2 lines :
ABC
XYZ
Create a table and load this file into it
hive> create table demo(foo string);
hive> load data inpath '/demo.txt' into table demo;
Now, if I do a SELECT on this table, it'll give me:
hive> select * from demo;
OK
ABC
XYZ
Suppose, I have one more file named demo2.txt which has :
PQR
And I do a LOAD again on this table without using overwrite,
hive> load data inpath '/demo2.txt' into table demo;
Now, if I do a SELECT, it'll give me:
hive> select * from demo;
OK
ABC
XYZ
PQR
HTH
Ways to insert data into a Hive table:
For demonstration, I am using the table names table1 and table2.
create table table2 as select * from table1 where 1=1;
or
create table table2 as select * from table1;
insert overwrite table table2 select * from table1;
--it will insert data from one to another. Note: It will refresh the target.
insert into table table2 select * from table1;
--it will insert data from one to another. Note: It will append into the target.
load data local inpath 'local_path' overwrite into table table1;
--it will load data from local into the target table and also refresh the target table.
load data inpath 'hdfs_path' overwrite into table table1;
--it will load data from an hdfs location and also refresh the target table.
or
create table table2(
col1 string,
col2 string,
col3 string)
row format delimited fields terminated by ','
location 'hdfs_location';
load data local inpath 'local_path' into table table1;
--it will load data from local and also append into the target table.
load data inpath 'hdfs_path' into table table1;
--it will load data from hdfs location and also append into the target table.
insert into table2 values('aa','bb','cc');
--Let's say table2 has 3 columns only.
Multiple insertion into hive table
Yes, you can insert, but not in quite the same way as in SQL.
In SQL we can insert row-level data, but here you insert by fields (columns).
You have to make sure the target table and the query have the same number of columns with matching datatypes.
eg:
CREATE TABLE test(stu_name STRING,stu_id INT,stu_marks INT)
ROW FORMAT DELIMITED
FIELDS TERMINATED BY ','
STORED AS TEXTFILE;
INSERT OVERWRITE TABLE test SELECT lang_name, lang_id, lang_legacy_id FROM export_table;
To insert the entire data of table2 into table1, below is a query:
INSERT INTO TABLE table1 SELECT * FROM table2;
You can't use INSERT INTO to insert a single record; it's not supported by Hive. You may place all the new records that you want to insert in a file, and load that file into a temp table in Hive. Then, using an insert overwrite..select command, insert those rows into a new partition of your main Hive table. The constraint here is that your main table will have to be pre-partitioned. If you don't use a partition, your whole table will be replaced with the new records.
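A hedged sketch of that flow (table names, columns, and the partition column are illustrative):
-- stage the new records from a file
CREATE TABLE staging (id INT, name STRING);
LOAD DATA INPATH '/new_records.txt' INTO TABLE staging;

-- rewrite a single partition of the pre-partitioned main table
INSERT OVERWRITE TABLE main_table PARTITION (batch_date = '2015-01-01')
SELECT id, name FROM staging;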
Enter the following command to insert data into the testlog table with some condition:
INSERT INTO TABLE testlog SELECT * FROM table1 WHERE some condition;
I think in such scenarios you should be using HBase, which facilitates this kind of insertion, though it does not provide any SQL-like query language. You need to use the Java API of HBase, such as the put method, to do this kind of insertion. Moreover, HBase is a column-oriented NoSQL database.
You can still insert into a complex type in Hive - it works
(id is int, colleagues array)
insert into emp (id,colleagues) select 11, array('Alex','Jian') from (select '1') t
You can add values to specific columns as well; just specify the column names for which you'd like to add the corresponding values:
Insert into Table (Col1, Col2, Col4, Col5, Col7) Values ('Val1', 'Val2', 'Val4', 'Val5', 'Val7');
Make sure the columns you skip don't have a NOT NULL constraint.
There are a few properties to set to make a Hive table support ACID properties and to insert values into it as in SQL.
Conditions to create an ACID table in Hive:
The table must be stored as an ORC file. Only the ORC format can support ACID properties for now.
The table must be bucketed.
Properties to set to create an ACID table:
set hive.support.concurrency = true;
set hive.enforce.bucketing = true;
set hive.exec.dynamic.partition.mode = nonstrict;
set hive.compactor.initiator.on = true;
set hive.compactor.worker.threads = 1;
set hive.txn.manager = org.apache.hadoop.hive.ql.lockmgr.DbTxnManager;
Set the property hive.in.test to true in hive-site.xml.
After setting all these properties, the table should be created with the table property 'transactional'='true'. The table must be bucketed and saved as ORC:
CREATE TABLE table_name (col1 int, col2 string, col3 int) CLUSTERED BY (col1) INTO 4
BUCKETS STORED AS orc TBLPROPERTIES ('transactional'='true');
Now it's possible to insert values into the table like a SQL query:
INSERT INTO TABLE table_name VALUES (1,'a',100),(2,'b',200),(3,'c',300);
Yes, we can use an INSERT query in Hive.
hive> create table test (id int, name string);
INSERT: INSERT...VALUES is available starting in version 0.14.
hive> insert into table test values (1,'mytest');
This is going to work for the insert. We have to use the VALUES keyword.
Note: User cannot insert data into a complex datatype column (array, map, struct, union) using the INSERT INTO...VALUES clause.

Can I keep old keys linked to new keys when making a copy in SQL?

I am trying to copy a record in a table and change a few values with a stored procedure in SQL Server 2005. This is simple, but I also need to copy relationships in other tables with the new primary keys. As this proc is being used to batch copy records, I've found it difficult to store some relationship between old keys and new keys.
Right now, I am grabbing new keys from the batch insert using OUTPUT INTO.
ex:
INSERT INTO table
(column1, column2,...)
OUTPUT INSERTED.PrimaryKey INTO #TableVariable
SELECT column1, column2,...
Is there a way like this to easily get the old keys inserted at the same time I am inserting new keys (to ensure I have paired up the proper corresponding keys)?
I know cursors are an option, but I have never used them and have only heard them referenced in a horror story fashion. I'd much prefer to use OUTPUT INTO, or something like it.
If you need to track both old and new keys in your temp table, you need to cheat and use MERGE:
Data setup:
create table T (
ID int IDENTITY(5,7) not null,
Col1 varchar(10) not null
);
go
insert into T (Col1) values ('abc'),('def');
And the replacement for your INSERT statement:
declare @TV table (
Old_ID int not null,
New_ID int not null
);
merge into T t1
using (select ID,Col1 from T) t2
on 1 = 0
when not matched then insert (Col1) values (t2.Col1)
output t2.ID,inserted.ID into @TV;
And (actually needs to be in the same batch so that you can access the table variable):
select * from T;
select * from @TV;
Produces:
ID Col1
5 abc
12 def
19 abc
26 def
Old_ID New_ID
5 19
12 26
The reason you have to do this is because of an irritating limitation on the OUTPUT clause when used with INSERT - you can only access the inserted table, not any of the tables that might be part of a SELECT.
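For contrast, a minimal sketch of that limitation using the same tables (the variable name is illustrative): a plain INSERT can only OUTPUT columns of the inserted pseudo-table, so the old key from the SELECT is unreachable:
declare @NewOnly table (New_ID int not null);

insert into T (Col1)
output inserted.ID into @NewOnly  -- legal: the new keys only
-- "output T.ID, inserted.ID into ..." would be rejected here; source columns are only visible from MERGE
select Col1 from T;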
Related - More explanation of the MERGE abuse
INSERT statements loading data into tables with an IDENTITY column are guaranteed to generate the values in the same order as the ORDER BY clause in the SELECT.
If you want the IDENTITY values to be assigned in a sequential fashion
that follows the ordering in the ORDER BY clause, create a table that
contains a column with the IDENTITY property and then run an INSERT ..
SELECT … ORDER BY query to populate this table.
From: The behavior of the IDENTITY function when used with SELECT INTO or INSERT .. SELECT queries that contain an ORDER BY clause
You can use this fact to match your old with your new identity values. First collect the list of primary keys that you intend to copy into a temporary table. You can also include your modified column values as well if needed:
select
PrimaryKey,
Col1
--Col2... etc
into #NewRecords
from Table
--where whatever...
Then do your INSERT with the OUTPUT clause to capture your new ids into the table variable:
declare @NewIds table (
New_ID int not null
);
INSERT INTO Table
(Col1 /*,Col2... etc.*/)
OUTPUT INSERTED.PrimaryKey INTO @NewIds
SELECT Col1 /*,Col2... etc.*/
from #NewRecords
order by PrimaryKey
Because of the ORDER BY PrimaryKey clause, you are guaranteed that your New_ID numbers will be generated in the same order as the PrimaryKey values of the copied records. Now you can match them up by row number, ordering each set by its ID values. The following query gives you the pairings:
select PrimaryKey, New_ID
from
(select PrimaryKey,
ROW_NUMBER() over (order by PrimaryKey) OldRow
from #NewRecords
) PrimaryKeys
join
(
select New_ID,
ROW_NUMBER() over (order by New_ID) NewRow
from @NewIds
) New_IDs
on OldRow = NewRow

large insert in two tables. First table will feed second table with its generated Id

One question about how to program the following query in T-SQL:
Table 1
I insert 400,000 mobile phone numbers into a table with two columns: the number to insert and an identity id.
Table 2
The second table is called SendList. It is a list with 3 columns: an identity id, a list id, and a phone number id.
Table 3
Is called ListInfo and contains the PK list id, plus info about the list.
My question is how, using T-SQL, I should:
Insert the large list of phone numbers into table 1, and insert the ids generated by that insert into table 2 - in an optimized way. It can't take a long time; that is my problem.
Greatly appreciated if someone could guide me on this one.
Thanks
Sebastian
What version of SQL Server are you using? If you are using 2008 you can use the OUTPUT clause to insert multiple records and output all the identity records to a table variable. Then you can use this to insert to the child tables.
DECLARE @MyTableVar table(MyID int);
INSERT MyTable (field1, field2)
OUTPUT INSERTED.MyID
INTO @MyTableVar
select Field1, Field2 from MyOtherTable where field3 = 'test'
--Use the captured identity values to insert into the child table.
Insert MyChildTable (myID, field1, field2)
Select MyID, 'test', getdate() from @MyTableVar
I've not tried this directly with a bulk insert, but you could always bulk insert into a staging table and then use the process described above. Inserting groups of records is much, much faster than one at a time.
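A hedged sketch of that staging step (the file path, format options, and table names are illustrative):
-- load the raw numbers into a staging table first
CREATE TABLE StagingNumbers (PhoneNumber varchar(20));

BULK INSERT StagingNumbers
FROM 'C:\data\numbers.txt'
WITH (ROWTERMINATOR = '\n');

-- then run the OUTPUT-based insert shown above, selecting from StagingNumbers
-- instead of MyOtherTable, to fill table 1 and capture the generated ids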