SQL: How to insert data into a table with column names

When inserting data into a SQL Server table, is it possible to specify which column you want to insert data to?
For a table with many columns, I know you can use syntax like this:
INSERT INTO MyTable (Name, col4_on, col8_on, col9_on)
VALUES ('myName', 0, 1, 0)
But the above syntax becomes unwieldy when you have lots of columns, especially if they have binary data. It becomes hard to match up which 1 and 0 go with which column. I'm hoping there's a named-parameter like syntax (similar to what C# has) which looks like the following:
INSERT INTO MyTable
VALUES (Name: 'myName', col4_on: 0, col8_on: 1, col9_on: 0)
Thanks

You must specify the column names. However, there is one exception: if you are INSERTing values for exactly the same columns that the target table has, in the same order as they appear in the table, you can use this syntax:
INSERT INTO MyTable
VALUES ('val1A', 'val4A', 'val8A')
Note that this is a fragile way of performing an INSERT, because if the table changes, or if the columns are ordered differently on a different system, the INSERT may fail or, worse, put the wrong data in each column.
When I INSERT a lot of columns, I find the queries easier to read if I can group them somehow. If the column names are long, I may put them on separate lines, like so:
INSERT INTO MyTable
(
MyTable_VeryLongName_Col1,
MyTable_VeryLongName_Col4,
MyTable_VeryLongName_Col8,
-- etc.
)
SELECT
Very_Long_Value_1,
Very_Long_Value_4,
Very_Long_Value_8,
-- etc.
Or you can group 2 columns on a line, or put spaces on every 5, or comment every 10th line, etc. Whatever makes it easier to read.
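For example, a sketch that groups two columns per line, using the column names from the original question, so each bit value sits on the same line as its column:
INSERT INTO MyTable
(
Name,      col4_on,
col8_on,   col9_on
-- etc.
)
VALUES
(
'myName',  0,
1,         0
-- etc.
)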
If you find including column names onerous when INSERTing a lot of rows, then try chaining the data together:
INSERT INTO MyTable (col1, col4, col8)
VALUES ('val1A', 'val4A', 'val8A'),
('val1B', 'val4B', 'val8B'),
-- etc.
Or UNION them together:
INSERT INTO MyTable (col1, col4, col8)
SELECT 'val1A', 'val4A', 'val8A'
UNION ALL SELECT 'val1B', 'val4B', 'val8B'
UNION ALL SELECT ... -- etc.
Or, SELECT them from another table:
INSERT INTO MyTable (col1, col4, col8)
SELECT val1, val4, val8
FROM MyOtherTable
WHERE -- some condition is met

INSERT INTO MyTable (col1, col4, col8)
VALUES ('val1', 'val4', 'val8')
This statement will add the values to the columns named in your INSERT INTO statement. You can write the above query in the following formats; it will not make any difference:
INSERT INTO MyTable (col8, col1, col4)
VALUES ('val8', 'val1', 'val4')
OR
INSERT INTO MyTable (col4, col8, col1)
VALUES ('val4', 'val8', 'val1')
To add multiple rows at a time, you can pass several rows in your VALUES clause, something like this:
INSERT INTO MyTable (col4, col8, col1)
VALUES ('val4', 'val8', 'val1'),
('val4', 'val8', 'val1'),
('val4', 'val8', 'val1'),
('val4', 'val8', 'val1')
The order of the values should match the order of the columns
mentioned in your INSERT INTO statement.
All of the above statements will have the same result.
Keep one thing in mind: once you have named a column, you must provide a value for it, like this:
INSERT INTO MyTable (col1, col4, col8)
VALUES ('val1', null, 'val8')
But you cannot do something like this:
INSERT INTO MyTable (col1, col4, col8)
VALUES ('val1', 'val8')
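If you don't want to supply a value for a column at all, the alternative is to leave it out of the column list entirely; a small sketch, assuming col4 is nullable or has a DEFAULT defined:
INSERT INTO MyTable (col1, col8)
VALUES ('val1', 'val8')
-- col4 is not listed, so it receives its default value,
-- or NULL if no default exists (this fails if col4 is NOT NULL without a default)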

I figured out a way around this, but it's rather hacky and only works for tables that have a column with unique values:
INSERT INTO MyTable (Name)
VALUES ('myName')
UPDATE MyTable
SET col4_on=0, col8_on=1, col9_on=0
WHERE Name = 'myName'
This could be expanded into a multiple row insert as follows:
INSERT INTO MyTable (Name)
VALUES ('row1'), ('row2'), ('row3')
UPDATE MyTable SET col4_on=0, col8_on=1, col9_on=0 WHERE Name = 'row1'
UPDATE MyTable SET col4_on=1, col8_on=0, col9_on=0 WHERE Name = 'row2'
UPDATE MyTable SET col4_on=1, col8_on=1, col9_on=1 WHERE Name = 'row3'
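If you use this trick, it's probably worth wrapping the INSERT and the follow-up UPDATEs in a transaction so a failure halfway through doesn't leave rows with their flag columns unset; a sketch, assuming SQL Server:
BEGIN TRANSACTION;
INSERT INTO MyTable (Name)
VALUES ('row1'), ('row2'), ('row3');
UPDATE MyTable SET col4_on=0, col8_on=1, col9_on=0 WHERE Name = 'row1';
UPDATE MyTable SET col4_on=1, col8_on=0, col9_on=0 WHERE Name = 'row2';
UPDATE MyTable SET col4_on=1, col8_on=1, col9_on=1 WHERE Name = 'row3';
COMMIT TRANSACTION;
-- or issue ROLLBACK TRANSACTION instead of COMMIT if any statement fails
-- (for example, from the CATCH block of a TRY/CATCH)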

No, there is no way to do specifically what you want. The closest thing you can do is rely on the column creation order so that you can avoid listing the column names in the INSERT command. For example:
If you have a table like
tableA ( id, name, phone )
You can insert values into it using
insert into tableA values ( 1, 'Name', '555-9999' );
But be careful: you have to follow the exact order of the fields in your table, otherwise you can get an error or, worse, put the wrong data in the wrong fields.
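For example, if a column is later added to the table (the email column below is hypothetical), the positional insert breaks or silently misplaces data:
-- the table later becomes tableA ( id, name, phone, email )
ALTER TABLE tableA ADD email varchar(100);
-- the old positional insert now fails: it supplies 3 values for 4 columns
insert into tableA values ( 2, 'Name', '555-9999' );
-- and guessing the order wrongly puts the email into phone and the phone into email
insert into tableA values ( 2, 'Name', 'name@example.com', '555-9999' );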

Nope, you cannot do it; the only other alternative for you would be an insert from a select:
insert into MyTable
select 'val1' as col1, 'val4' as col4, 'val8' as col8 --if any extra columns then just do "null as col10"
assuming the column order is the same in the table (note that the aliases are only for readability; without a column list the values are still matched to the table's columns by position)

Related

SQL statement has been terminated if one insert into row is incorrect

I'm trying to insert hundreds of rows into a table using a query like:
Insert INTO tableX (column1, colum2)
VALUES
((SELECT sysID FROM tableY where ID = var1), 1),
((SELECT sysID FROM tableY where ID = var2), 1),
-- et cetera
Now let's say var88 doesn't exist: the subquery returns NULL as sysID, but I can't insert a NULL into column1, so I get an error and the whole INSERT INTO query is terminated. Is there a way to avoid the termination and just skip the rows where sysID is NULL? I'm sure I can do this by first doing a proper SELECT, filtering out the NULL rows, and THEN doing the INSERT INTO, but I'm wondering if there is another way to do this.
You can use the following instead, using a INSERT INTO SELECT:
INSERT INTO tableX (column1, colum2)
SELECT sysID, 1
FROM tableY
WHERE ID IN (var1, var2) AND sysID IS NOT NULL
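If the list of IDs is long, another option is to load the variables into a derived table built with a row constructor and join it to tableY; IDs with no match simply drop out of the join. A sketch, assuming SQL Server 2008 or later for the VALUES row constructor (var1, var2, var88 are the placeholders from the question):
INSERT INTO tableX (column1, colum2)
SELECT y.sysID, 1
FROM (VALUES (var1), (var2), (var88)) AS v (ID)  -- one row per variable
JOIN tableY y ON y.ID = v.ID                     -- IDs with no match are skipped
WHERE y.sysID IS NOT NULL                        -- also skip rows whose sysID is NULL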
Where/how are you getting the var1 (etc) variables for your values?
You can convert this to:
INSERT INTO tableX (column1, colum2)
SELECT sysID, 1
FROM tableY
WHERE ID IN (var1, var2, ...)
AND sysID IS NOT NULL
Or build this into a loop somehow (depending on where/how your var1 etc. are coming from).

insert into fails to insert selected data

I have been working on breaking up a 68GB table into a more normalized structure for the last few weeks, and everything has been going smoothly until today.
I am attempting to move a select few columns from the big table into the new table with this query:
insert into [destination] (col1, col2, col3...)
select col1, col2, col3
From [source]
where companyID = [source].companyID
I receive the message, (60113678 row(s) affected), but the data was not inserted into the destination, and the data in the source table hasn't been altered, so what has been affected, and why wasn't any of the data inserted into the destination?
The code you seem to want to execute is:
update d
set col1 = s.col1,
col2 = s.col2,
col3 = s.col3
from destination d join
source s
on s.companyID = d.companyID;
The code you have written is equivalent to:
insert into [destination] (col1, col2, col3...)
select s.col1, s.col2, s.col3
From [source] s
where s.companyID = s.companyID;
The where is equivalent to s.companyID is not null. Hence, you have inserted all 60,113,678 rows from source into new rows in destination.
Obviously, one moral of the story is to understand the difference between insert and update. More importantly, qualify all column names in a query. If you had done so, your query would have failed at source.CompanyID = destination.CompanyId, and you wouldn't have to figure out how to delete 60,113,678 new rows.
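For illustration, a sketch of what the fully qualified version of the original statement would look like; with aliases in place the mistake can no longer slip through, because the reference to the insert target raises an error instead of turning into a tautology:
insert into [destination] (col1, col2, col3)
select s.col1, s.col2, s.col3
from [source] s
where s.companyID = [destination].companyID;
-- fails to bind [destination].companyID here, since [destination] is only
-- the insert target and is not part of the FROM clause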

Avoid duplicates on import of updated excel-sheets. Unique-Index can only hold 10 fields max

I am facing the following situation:
I import an Excel-Sheet, then some columns are modified (e.g. "comments")
After a while, I would receive an updated Excel-Sheet containing the records from the old Excel-sheet as well as new ones.
I do not want to import the records that already exist in the database.
Step-by-Step:
Initial Excel-sheet
col1 col2 comments
A A
A B
After import, some fields will get manipulated
col1 col2 comments
A A looks good
A B fine with me
Then I receive an excel sheet with updates
col1 col2 comments
A A
A B
A C
After this update-step, the database should look like
col1 col2 comments
A A looks good
A B fine with me
A C
I was planning to simply create a unique index on all fields that won't get manipulated, so only the new records will get imported, something like:
ALTER TABLE tbl ADD CONSTRAINT unique_key UNIQUE (col1, col2)
My problem now is that Access somehow only allows composite indices of max. 10 fields. My tables all have around 11-20 cols...
I could maybe import the updated xls into a temporary table and do something like
INSERT INTO tbl_old SELECT col1,col2, "" FROM tbl_new WHERE (col1,col2) NOT IN (SELECT col1,col2 FROM tbl_old UNION SELECT col1,col2 FROM tbl_new)
But I'm wondering if there isn't a more straightforward way...
Any ideas how I can solve that?
Try the EXISTS condition:
INSERT INTO tbl_old (col1, col2, comments)
SELECT col1, col2, Null
FROM tbl_new
WHERE NOT EXISTS (SELECT col1, col2 FROM tbl_old WHERE tbl_old.col1 = tbl_new.col1 AND tbl_old.col2 = tbl_new.col2);
Assuming you will use a SQL approach:
INSERT INTO table_old (col1, col2)
SELECT col1, col2 FROM table_new
EXCEPT
SELECT col1, col2 FROM table_old
:)
It will insert NULL in the comments column, though. Use this:
INSERT INTO table_old
SELECT * FROM table_new
EXCEPT
SELECT * FROM table_old
to avoid the NULL values. Note that both tables then have to have the same number of columns. For Oracle, go with MINUS instead of EXCEPT. An equivalent query can also be written with a LEFT OUTER JOIN:
INSERT INTO table_old (col1 , col2)
SELECT N.col1, N.col2
FROM table_new N
LEFT OUTER JOIN table_old O ON O.col2 = N.col2
WHERE O.col2 IS NULL
This will also put NULL values in the comments column, as we are inserting only col1 and col2. All inserts were tested on the provided table examples.
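One caveat about the LEFT OUTER JOIN above: it only compares col2. If a row is really identified by col1 and col2 together, a safer variant (a sketch over the same tables) joins on both key columns:
INSERT INTO table_old (col1, col2)
SELECT N.col1, N.col2
FROM table_new N
LEFT OUTER JOIN table_old O
ON O.col1 = N.col1 AND O.col2 = N.col2
WHERE O.col1 IS NULL   -- no matching old row on both columns, so the row is new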
I would just put a PK ID column in those tables.

Insert data in multiple tables at a time with repeated values

I have to insert data into the first and second tables directly, but for the third table I receive the data as an array and have to insert one row per array element.
In my 3rd table the values will be repeated. Example values:
{name=ff, age=45, empid=23, desig=se, offid=1, details=kk, offid=2, details=aa, offid=3, details=bb, offid=4, details=cc}
So the userid from the 2nd table is the same for all the offid rows, but details and the other columns are different for each one.
My issue is that I get a single hit from the application, but I need to iterate over the array for the 3rd table.
with first_insert as (
insert into sample(name,age)
values(?,?)
RETURNING id
),
second_insert as (
insert into sample1(empid,desig)
values((select id from first_insert),?)
RETURNING userid
)
insert into sample2(offid,details)
values((select userid from second_insert),?)
Is this available or possible in PostgreSQL?
Yes, absolutely possible.
You can join rows from CTEs to VALUES expressions to combine them for a new INSERT in a data-modifying CTE. Something like this:
WITH first_insert AS (
INSERT INTO sample(name,age)
VALUES (?,?)
RETURNING id
)
, second_insert AS (
INSERT INTO sample1(empid, desig, colx)
SELECT i1.id, v.desig, v.colx
FROM first_insert i1
, (VALUES(?,?)) AS v(desig, colx)
RETURNING userid
)
INSERT INTO sample2(offid, details, col2, ...)
SELECT i2.userid, v.details, ...
FROM second_insert i2
, (VALUES (?,?, ...)) AS v(details, col2, ...);
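To make that concrete, here is the same chain with literal values from the question in place of the ? placeholders and the column lists trimmed to the columns named in the question; it assumes sample.id and sample1.userid are auto-generated (serial/identity) columns, as the RETURNING clauses imply:
WITH first_insert AS (
INSERT INTO sample (name, age)
VALUES ('ff', 45)
RETURNING id
)
, second_insert AS (
INSERT INTO sample1 (empid, desig)
SELECT i1.id, 'se'
FROM first_insert i1
RETURNING userid
)
INSERT INTO sample2 (offid, details)
SELECT i2.userid, v.details
FROM second_insert i2
, (VALUES ('kk'), ('aa'), ('bb'), ('cc')) AS v (details);  -- one inserted row per details value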

MySQL INSERT with multiple nested SELECTs

Is a query like this possible? MySQL gives me a syntax error. Multiple insert values with nested selects...
INSERT INTO pv_indices_fields (index_id, veld_id)
VALUES
('1', SELECT id FROM pv_fields WHERE col1='76' AND col2='val1'),
('1', SELECT id FROM pv_fields WHERE col1='76' AND col2='val2')
I've just tested the following (which works):
insert into test (id1, id2) values (1, (select max(id) from test2)), (2, (select max(id) from test2));
I imagine the problem is that you haven't got parentheses around your SELECTs, as this query would not work without them.
When you have a subquery like that, it has to return one column and one row only. If your subqueries do return only one row each, then you just need parentheses around them, as #Thor84no noticed.
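In other words, the original query should work once each subquery is wrapped in its own parentheses (and each one matches at most one row in pv_fields):
INSERT INTO pv_indices_fields (index_id, veld_id)
VALUES
('1', (SELECT id FROM pv_fields WHERE col1='76' AND col2='val1')),
('1', (SELECT id FROM pv_fields WHERE col1='76' AND col2='val2'))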
If they return (or could return) more than one row, try this instead:
INSERT INTO pv_indices_fields (index_id, veld_id)
SELECT '1', id
FROM pv_fields
WHERE col1='76'
AND col2 IN ('val1', 'val2')
or if your conditions are very different:
INSERT INTO pv_indices_fields (index_id, veld_id)
( SELECT '1', id FROM pv_fields WHERE col1='76' AND col2='val1' )
UNION ALL
( SELECT '1', id FROM pv_fields WHERE col1='76' AND col2='val2' )