What is the difference between an INSERT statement with INTO and without INTO? - SQL

I have created table #temp with columns id as int identity(1,1) and name as varchar.
Suppose I am writing the following two different statements for inserting rows:
insert into #temp (name) select ('Vikrant') ;
insert #temp (name) select ('Vikrant')
What is the difference between these two types of INSERT statement?
Is there really any difference between these insertions?

From the MSDN documentation:
[INTO]
Is an optional keyword that can be used between INSERT and the target table.
There is no difference between the two statements.


SQL Server: Insert INTO Statement syntax [duplicate]

This question already has answers here:
Why are dot-separated prefixes ignored in the column list for INSERT statements?
(3 answers)
Closed 8 years ago.
Why does the following INSERT statement not give any error?
CREATE TABLE Table1(id INT,name VARCHAR(10))
INSERT INTO Table1(xx.id,yyyy.name) Values (1,'A')
Why does the above statement ignore xx. and yyyy.? What does this imply?
I checked the below query also.
INSERT INTO Table1(xx.xx.xx.xx.xx.xx.xx.id,yy.yy.yy.yy.yy.yy.yy.yy.name)
Values (1,'A')
It also worked. Usually we use aliases for joins, and as far as I know, using an alias for the table name in an INSERT is not allowed in SQL. For the column names, the query uses only the string after the last dot (.).
I conclude that the INSERT query does not care about any string prefixed to a column name and separated from it by a dot (.).
The only thing I can think of is that the database engine ignores the namespace because, with INSERT INTO, the query's scope is limited to the target table. With UPDATE, where multiple tables can be part of the scope, the same kind of statement would fail. I don't know why this happens, but if I were to guess, everything to the left of the last period (.) is simply ignored.
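A minimal sketch of that UPDATE case (the table and alias names here are hypothetical, and the exact error message varies by version):

```sql
CREATE TABLE Table1(id INT, name VARCHAR(10));
CREATE TABLE Table2(id INT, name VARCHAR(10));

-- With multiple tables in scope, the dotted prefix is resolved as an alias,
-- so an unknown prefix now raises a binding error instead of being ignored:
UPDATE t1
SET xx.name = 'A'      -- fails: the multi-part identifier cannot be bound
FROM Table1 t1
JOIN Table2 t2 ON t2.id = t1.id;
```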
If you analyze the execution plan for the query below
CREATE TABLE Table1(id INT,name VARCHAR(10))
INSERT INTO Table1(Table2.dbo.id,...................name) Values (1,'A')
you will see that it is compiled as
INSERT INTO [Table1]([id],[name]) Values(@1,@2)
The dotted prefix implies a namespace.
For example:
SELECT object.id, object.name FROM table object WHERE object.name = 'Foo';
Here, object is the namespace (alias) for the table.
And if that namespace hasn't been created, the query fails.
As far as I know, the syntax you are using generally means table.column.
In other words, you are trying to insert into Table1 while declaring columns from other tables.
You should do something like this
CREATE TABLE Table1(id INT,name VARCHAR(10))
INSERT INTO Table1(id,name) Values (1,'A')

SELECT * FROM NEW TABLE equivalent in Postgres

In DB2 I can do a command that looks like this to retrieve information from the inserted row:
SELECT *
FROM NEW TABLE (
INSERT INTO phone_book
VALUES ( 'Peter Doe','555-2323' )
) AS t
How do I do that in Postgres?
There are ways to retrieve a sequence value, but I need to retrieve arbitrary columns.
My desire to merge a select with the insert is for performance reasons. This way I only need to execute one statement to insert values and select values from the insert. The values that are inserted come from a subselect rather than a values clause. I only need to insert 1 row.
That sample code was lifted from the Wikipedia article on INSERT.
A plain INSERT ... RETURNING ... does the job and delivers best performance.
A CTE is not necessary.
INSERT INTO phone_book (name, number)
VALUES ( 'Peter Doe','555-2323' )
RETURNING * -- or just phonebook_id, if that's all you need
Aside: In most cases it's advisable to add a target list.
The Wikipedia page you quoted already has the same advice:
Using an INSERT statement with RETURNING clause for PostgreSQL (since
8.2). The returned list is identical to the result of a SELECT.
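Since the asker's values come from a subselect rather than a VALUES clause, the same clause works with INSERT ... SELECT. A sketch (the staging_contacts table and its columns are hypothetical):

```sql
INSERT INTO phone_book (name, number)
SELECT name, number
FROM   staging_contacts      -- hypothetical source of the subselect
WHERE  NOT imported
RETURNING *;                 -- returns every inserted row
```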
PostgreSQL supports this kind of behavior through a returning clause in a common table expression. You generally shouldn't assume that something like this will improve performance simply because you're executing one statement instead of two. Use EXPLAIN to measure performance.
create table test (
test_id serial primary key,
col1 integer
);
with inserted_rows as (
insert into test (col1) values (3)
returning *
)
select * from inserted_rows;
test_id | col1
--------+-----
      1 |    3

What are the benefits of using the Row Constructor syntax in a T-Sql insert statement?

In SQL Server 2008, you can use the Row Constructor syntax to insert multiple rows with a single insert statement, e.g.:
insert into MyTable (Col1, Col2) values
('c1v', 0),
('c2v', 1),
('c3v', 2);
Are there benefits to doing this instead of having one insert statement for each record other than readability?
Aye, there is a rather large performance difference between:
declare @numbers table (n int not null primary key clustered);
insert into @numbers (n)
values (0)
, (1)
, (2)
, (3)
, (4);
and
declare @numbers table (n int not null primary key clustered);
insert into @numbers (n) values (0);
insert into @numbers (n) values (1);
insert into @numbers (n) values (2);
insert into @numbers (n) values (3);
insert into @numbers (n) values (4);
The fact that every single insert statement has its own implicit transaction guarantees this. You can prove it to yourself easily by viewing the execution plans for each statement or by timing the executions using set statistics time on;. There is a fixed cost associated with "setting up" and "tearing down" the context for each individual insert and the second query has to pay this penalty five times while the first only pays it once.
Not only is the list method more efficient but you can also use it to build a derived table:
select *
from (values
(0)
, (1)
, (2)
, (3)
, (4)
) as Numbers (n);
This format gets around the 1,000-row limit on the VALUES clause of an INSERT and allows you to join and filter your list before it is inserted. One might also notice that we're not bound to the insert statement at all! As a de facto table, this construct can be used anywhere a table reference would be valid.
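For example, the list can be filtered (or joined) as a derived table before being inserted; the sketch below declares its own table variable to stay self-contained:

```sql
DECLARE @numbers table (n int not null primary key clustered);

-- The VALUES list as a derived table: filter before inserting,
-- which also sidesteps the 1,000-row cap on a direct INSERT ... VALUES.
INSERT INTO @numbers (n)
SELECT v.n
FROM (VALUES (0), (1), (2), (3), (4)) AS v (n)
WHERE v.n % 2 = 0;   -- keep only the even numbers, say
```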
Yes - you will see performance improvements. Especially with large numbers of records.
If you are inserting data from a SELECT in addition to your explicitly typed rows, the table value constructor requires you to spell out each column individually (one scalar subquery per column), whereas a separate INSERT ... SELECT can specify multiple columns at once.
For example:
USE AdventureWorks2008R2;
GO
CREATE TABLE dbo.MyProducts (Name varchar(50), ListPrice money);
GO
-- This statement fails because the third values list contains multiple columns in the subquery.
INSERT INTO dbo.MyProducts (Name, ListPrice)
VALUES ('Helmet', 25.50),
('Wheel', 30.00),
(SELECT Name, ListPrice FROM Production.Product WHERE ProductID = 720);
GO
Would fail; you would have to do it like this:
INSERT INTO dbo.MyProducts (Name, ListPrice)
VALUES ('Helmet', 25.50),
('Wheel', 30.00),
((SELECT Name FROM Production.Product WHERE ProductID = 720),
(SELECT ListPrice FROM Production.Product WHERE ProductID = 720));
GO
see Table Value Constructor Limitations and Restrictions
There is no performance benefit as Abe mentioned.
The column list defines the required order for the values (or the SELECT statement). You can list the columns in any order; the values have to follow that order.
If you accidentally switch columns in the SELECT statement (or VALUES clause), the explicit column list helps you find the problem, and when the data types are incompatible it surfaces immediately as a conversion error.
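A small sketch of both points (the table and values here are hypothetical):

```sql
DECLARE @products table (Name varchar(50), ListPrice money);

-- The column list, not the table definition, dictates the value order:
INSERT INTO @products (ListPrice, Name)
VALUES (25.50, 'Helmet');

-- Swapping the values would fail fast here,
-- because 'Helmet' cannot be converted to money:
-- INSERT INTO @products (ListPrice, Name) VALUES ('Helmet', 25.50);
```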

Large insert into two tables, where the first table feeds the second with its generated ids

A question about how to program the following in T-SQL:
Table 1
I insert 400,000 mobile phone numbers into a table with two columns: the number to insert and an identity id.
Table 2
The second table is called SendList. It is a list with 3 columns: an identity id, a list id, and a phonenumberid.
Table 3
Is called ListInfo and contains the PK list id and info about the list.
My question is how, using T-SQL, I should insert the large list of phone numbers into table 1, then insert the ids generated by that insert into table 2, in an optimized way. It can't take a long time; that is my problem.
Greatly appreciated if someone could guide me on this one.
Thanks
Sebastian
What version of SQL Server are you using? If you are using 2008 you can use the OUTPUT clause to insert multiple records and output all the identity records to a table variable. Then you can use this to insert to the child tables.
DECLARE @MyTableVar table(MyID int);
INSERT MyTable (field1, field2)
OUTPUT INSERTED.MyID
INTO @MyTableVar
select Field1, Field2 from MyOtherTable where field3 = 'test'
--Use the ids captured in the table variable to populate the child table.
Insert MyChildTable (myID, field1, field2)
Select MyID, 'test', getdate() from @MyTableVar
I've not tried this directly with a bulk insert, but you could always bulk insert to a staging table and then use the process described above. Inserting groups of records is much, much faster than one at a time.
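A sketch of that staging approach; all table names, the file path, and the list id below are hypothetical placeholders, not the asker's actual schema:

```sql
-- 1. Bulk load the raw numbers into a staging table.
CREATE TABLE #Staging (PhoneNumber varchar(20));
BULK INSERT #Staging FROM 'C:\data\numbers.txt';   -- hypothetical path

-- 2. One set-based insert, capturing the generated identities.
DECLARE @NewIds table (PhoneNumberId int);
INSERT PhoneNumbers (PhoneNumber)
OUTPUT INSERTED.PhoneNumberId INTO @NewIds
SELECT PhoneNumber FROM #Staging;

-- 3. Feed the child table from the captured ids.
INSERT SendList (ListId, PhoneNumberId)
SELECT 1, PhoneNumberId FROM @NewIds;              -- 1 = hypothetical list id
```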

SQL Pivot table

Is there a way to pivot an Entity-Attribute table?
I want to flip all the rows into columns, regardless of how many different attributes there are.
Here's an example of what I want to accomplish. The example uses two attributes: FirstName, LastName. But in the real database, there are thousands of attributes and I want to flip them into columns for reporting purposes.
I don't want to have to write a CTE for every attribute.
USE TempDB
DECLARE @Attribute TABLE(
AttributeID Int Identity(10,1) PRIMARY KEY,
AttributeName Varchar(MAX))
INSERT INTO @Attribute(AttributeName) VALUES('Firstname')
INSERT INTO @Attribute(AttributeName) VALUES('Lastname')
DECLARE @tbl TABLE(
AttributeID Int,
EntityValue Varchar(MAX)
)
INSERT INTO @tbl(AttributeID,EntityValue) VALUES(10,'John')
INSERT INTO @tbl(AttributeID,EntityValue) VALUES(10,'Paul')
INSERT INTO @tbl(AttributeID,EntityValue) VALUES(10,'George')
INSERT INTO @tbl(AttributeID,EntityValue) VALUES(10,'Ringo')
INSERT INTO @tbl(AttributeID,EntityValue) VALUES(11,'Lennon')
INSERT INTO @tbl(AttributeID,EntityValue) VALUES(11,'McCartney')
INSERT INTO @tbl(AttributeID,EntityValue) VALUES(11,'Harrison')
SELECT A.AttributeID,AttributeName,EntityValue FROM @tbl T
INNER JOIN @Attribute A
ON T.AttributeID=A.AttributeID
DECLARE @Tbl2 Table(
FirstName Varchar(MAX),
LastName Varchar(MAX)
)
INSERT INTO @Tbl2(FirstName,LastName) VALUES('John','Lennon')
INSERT INTO @Tbl2(FirstName,LastName) VALUES('Paul','McCartney')
INSERT INTO @Tbl2(FirstName,LastName) VALUES('George','Harrison')
INSERT INTO @Tbl2(FirstName) VALUES('Ringo')
SELECT * FROM @Tbl2
Based on what you posted, you're dealing with SQL Server.
The old-school method is to use CASE expressions to represent each column you want to create, e.g.:
CASE WHEN t.AttributeID = 10 THEN t.EntityValue ELSE NULL END 'FirstName'
The alternative is to use PIVOT (SQL Server 2005+).
In either case, you're going to have to define the output columns by hand. If your model was set up to address it, you might be able to use dynamic SQL.
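Against the asker's sample data, the old-school method would look like the sketch below. Note it assumes an EntityID column tying together the attribute rows of one entity; the tables as posted have no such key, so first and last names cannot actually be paired without adding one.

```sql
-- Hypothetical shape: EntityAttribute(EntityID, AttributeID, EntityValue)
SELECT EntityID,
       MAX(CASE WHEN AttributeID = 10 THEN EntityValue END) AS FirstName,
       MAX(CASE WHEN AttributeID = 11 THEN EntityValue END) AS LastName
FROM   EntityAttribute
GROUP BY EntityID;
```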
In case you're curious, the reason Microsoft SQL Server's PIVOT operator isn't "dynamic," and that you must specify each value to pivot, is that this makes it possible to identify the table structure of the PIVOT query from the query text alone. That's an important principle of most programming languages - it should be possible to determine the type of an expression from the expression. The type shouldn't depend on the run-time values of anything mentioned in the expression.
That said, some implementations of SQL implement what you want. For example, I think Microsoft Access does this with TRANSFORM.
If you search the web for "dynamic pivot", you'll find a lot.
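A hedged sketch of what a dynamic pivot can look like in T-SQL. It assumes permanent tables Attribute(AttributeID, AttributeName) and EntityAttribute(EntityID, AttributeID, EntityValue); those names and the EntityID anchor column are assumptions, not part of the original schema.

```sql
DECLARE @cols nvarchar(max), @sql nvarchar(max);

-- Build the pivot column list from the attribute names themselves.
SELECT @cols = STUFF((SELECT ',' + QUOTENAME(AttributeName)
                      FROM Attribute
                      FOR XML PATH('')), 1, 1, '');

-- Assemble and run the PIVOT query dynamically.
SET @sql = N'SELECT EntityID, ' + @cols + N'
FROM (SELECT t.EntityID, a.AttributeName, t.EntityValue
      FROM EntityAttribute t
      JOIN Attribute a ON a.AttributeID = t.AttributeID) src
PIVOT (MAX(EntityValue) FOR AttributeName IN (' + @cols + N')) p;';

EXEC sp_executesql @sql;
```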