Multiple inserts in one SQL query

DB: SQL Server
I am using the construct below for inserting multiple records into a table. I am receiving the data (to be inserted) from another DB.
Insert into table1
select '1','2' UNION ALL
select '3','4' UNION ALL
select '5','6';
Is there any way to do these inserts with a shorter turnaround time? It is also being executed as a web request. I guess BULK INSERT would not fit here, as I don't have the data in a file to do a bulk insert.
Any suggestions, please?

If the source database is also a SQL Server, you could add a linked server and:
insert table1 select * from linkedserver.dbname.dbo.table1
If you're inserting from .NET code, the fastest way to insert multiple rows is SqlBulkCopy. SqlBulkCopy does require DBO rights.

That is actually the best multiple insert I have ever seen. Just be careful about SQL injection: always use command parameters in ASP.NET, or mysql_real_escape_string in MySQL.

I looked into this recently, coming from MySQL and expecting the syntax from cularis' answer, but after some searching, all I could find was the syntax you posted in your answer.
Edit: Looks like cularis removed his answer; he was talking about the INSERT INTO x VALUES (1, 2), (3, 4); syntax.
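For reference, SQL Server 2008 and later do accept that row-constructor form, limited to 1000 rows per statement. Applied to the example data, a sketch (column names are assumed):
INSERT INTO table1 (col1, col2)
VALUES ('1', '2'),
       ('3', '4'),
       ('5', '6');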

If you are using SQL Server 2008 and stored procedures, you could always make use of table valued parameters:
http://www.sqlteam.com/article/sql-server-2008-table-valued-parameters
It then becomes an INSERT INTO ... SELECT * FROM ...
This would help against injection problems. Not sure if this is possible with parameterised SQL.
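A minimal sketch of that approach, with hypothetical names for the table type, procedure, and columns:
CREATE TYPE dbo.Table1Rows AS TABLE (col1 varchar(10), col2 varchar(10));
GO
CREATE PROCEDURE dbo.InsertTable1
    @rows dbo.Table1Rows READONLY
AS
    -- The TVP arrives as a read-only table variable; insert straight from it
    INSERT INTO table1 (col1, col2)
    SELECT col1, col2 FROM @rows;
GO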

Related

Is there any way to safely run SELECT INTO?

I have a script that runs a SELECT INTO into a table. To my knowledge, there are no other procedures that might be concurrently referencing/modifying this table. Once in a while, however, I get the following error:
Schema changed after the target table was created. Rerun the Select
Into query.
What can cause this error and how do I avoid it?
I did some googling, and this link suggests that SELECT INTO cannot be used safely without some crazy try-catch-retry logic. Is this really the case?
I'm using SQL Server 2012.
Unless you really don't know the fields and data types in advance, I'd recommend first creating the table, then adding the data with an INSERT statement. In your link, David Moutray suggests the same thing; here's his example code verbatim:
CREATE TABLE #TempTableY (ParticipantID INT NOT NULL);
INSERT #TempTableY (ParticipantID)
SELECT ParticipantID
FROM TableX;

Insert and update in the same transaction

I have heard that you cannot insert a row and then immediately update it with the next statement in the same transaction in SQL Server. But I have been doing exactly that (SQL Server 2005), and my results show it works. Am I missing something, or doing something stupid here? Please enlighten me. Thanks.
In my experience, inserting and updating in the same transaction can result in blocking if the number of inserts is relatively high. I'd consider creating insert triggers and modifying the values before they are inserted; I'm not sure how relevant that approach would be in your case. Having said that, it is definitely possible to insert and then update in the same transaction.
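For reference, a minimal sketch of the pattern the question describes, reusing the example table from the first question:
BEGIN TRANSACTION;
INSERT INTO table1 (col1, col2) VALUES ('1', '2');
-- The same transaction sees its own insert and can modify it immediately
UPDATE table1 SET col2 = '9' WHERE col1 = '1';
COMMIT TRANSACTION;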

Upsert (update or insert) in Sybase ASE?

I'm writing an application to move data from Oracle to Sybase and need to perform update / insert operations. In Oracle, I'd use MERGE INTO, but it doesn't seem to be available in Sybase (not in ASE, anyway). I know this can be done with multiple statements, but for a couple of reasons, I'm really trying to get this into a single statement.
Any suggestions?
ASE 15.7 has this feature.
Find the docs here:
http://infocenter.sybase.com/help/topic/com.sybase.infocenter.dc36272.1570/html/commands/commands84.htm
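Assuming ASE 15.7 follows the standard MERGE form, a sketch (table, key, and column names here are hypothetical):
MERGE INTO target AS t
USING source AS s
    ON t.pk = s.pk
WHEN MATCHED THEN
    UPDATE SET t.col1 = s.col1
WHEN NOT MATCHED THEN
    INSERT (pk, col1) VALUES (s.pk, s.col1);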
Sybase and DB2 are very IEC/ISO/ANSI SQL Standard-compliant; MS is a little less so.
Oracle is not very Standard-compliant at all (despite what the glossies say). More importantly, the method they use to overcome its limitations is to introduce extensions to SQL (which are not required by the other DBMSs, which do not have those limitations). A nice way of making sure that customers do not migrate away.
So the best advice for you is to learn the SQL Standard way of doing whatever you were doing on the Oracle side, and second (not first) to learn about Sybase's or DB2's (or whoever's) extensions.
"MERGE" and "UPSERT" do not exist in SQL, they exist in Oracle only. The bottom line is, you have to UPDATE and INSERT in two separate operations.
In SQL, UPDATE and INSERT apply to a single table; you may have quite complex FROM clauses.
For "MERGE", that is simply an:
INSERT target ( column_list ) -- we do have defaults
SELECT column_list
FROM source
WHERE primary_key NOT IN ( SELECT primary_key FROM target )
Update is simply the complement:
UPDATE target SET target_column = source_column, ...
FROM source
WHERE primary_key IN ( SELECT primary_key FROM target )
In the UPDATE it is easy to merge the WHERE conditions and eliminate the Subquery (I am showing it to you for explanation).
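Putting the two statements together, a minimal upsert sketch in that style (table, key, and column names are hypothetical):
BEGIN TRANSACTION
UPDATE target
SET    col1 = source.col1
FROM   target, source
WHERE  target.pk = source.pk

INSERT target ( pk, col1 )
SELECT pk, col1
FROM   source
WHERE  pk NOT IN ( SELECT pk FROM target )
COMMIT TRANSACTION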
As I understand it, Oracle is abysmal at executing subqueries (standard SQL), which is why they have all these non-standard "MERGE", etc., the purpose of which is to avoid the standard subquery syntax, which every other DBMS performs with ease.
Unfortunately, it is impossible to insert into and update a table in one statement without using MERGE, which, by the way, does exist in standard SQL as of SQL:2008 (according to this article, anyway) and is supported by almost all major databases except Sybase ASE and PostgreSQL.
Merge exists in SAP ASE 15.7 upwards, as mentioned here and here
Replace / Upsert exists in SAP ASE 16.0 and up.
You'll need to upgrade to access them.
Maybe it could work. Tested in ASA 9 (SQL Anywhere, not ASE):
insert into my_table (columns) on existing update values (values);
Maybe you could try to fake it with INSERT INTO and/or UPDATE FROM and some subqueries, but it will not be as convenient as Oracle's MERGE.
Do you want to do this in code or in a data warehouse? You could also encapsulate all the SQL in a stored procedure if you want to hide the complexity of the queries.

Table-Valued Parameter versus multiple Insert rows performance question for inserting into SQL Server 2008

If all I'm doing is inserting multiple rows of data into a single table in SQL Server 2008, which is faster?
A Table-Valued Parameter or a single insert statement with multiple values?
Where in this simple scenario would you prefer one over the other?
If I understand the question correctly, I'd go with the Table-Valued Parameter; otherwise the list of parameters would quickly become unmanageable (SQL Server also caps a single request at 2,100 parameters). You wouldn't want to end up with something like:
insert into YourTable
(col1, col2, ..., colN)
values
(@Row1Col1, @Row1Col2, ..., @Row1ColN),
(@Row2Col1, @Row2Col2, ..., @Row2ColN),
...
(@RowMCol1, @RowMCol2, ..., @RowMColN)
Since both operations will insert data into a table, the question becomes: "Is the overhead of using a stored procedure for table inserts too much for my system to handle?"
Does your system allow direct inserts into tables from the app now? If yes, then just go with the direct insert.
I prefer to use stored procs, as they let me add auditing, error logic, etc., which makes me feel better than dumping data directly into a table.

Is it possible in SQL Server to create a function which could handle a sequence?

We are looking at various options for porting our persistence layer from Oracle to another database, and one that we are looking at is MS SQL. However, we use Oracle sequences throughout the code, and because of this it seems moving will be a headache. I understand about IDENTITY columns, but that would mean a massive overhaul of the persistence code.
Is it possible in SQL Server to create a function which could handle a sequence?
That depends on your current use of sequences in Oracle. Typically, a sequence is read in the insert trigger.
From your question, I guess that it is the persistence layer that generates the sequence value before inserting into the database (including the new PK).
In MSSQL, you can combine SQL statements with ';', so to retrieve the identity column of the newly created record, use INSERT INTO ...; SELECT SCOPE_IDENTITY().
Thus the command that inserts a record returns a result set with a single row and a single column containing the value of the identity column.
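For example (table and column names assumed):
INSERT INTO RealTable (datacolumn) VALUES (@data1);
SELECT SCOPE_IDENTITY() AS NewID;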
You can of course turn this approach around, and create Sequence tables (similar to the dual table in Oracle), in something like this:
INSERT INTO SequenceTable (dummy) VALUES ('X');
SELECT @ID = SCOPE_IDENTITY();
INSERT INTO RealTable (ID, datacolumns) VALUES (@ID, @data1, @data2, ...)
I did this last year on a project. Basically, I just created a table with the name of the sequence, the current value, and the increment amount.
Then I created 4 procs:
GetCurrentSequence( sequenceName)
GetNextSequence( sequenceName)
CreateSequence( sequenceName, startValue, incrementAmount)
DeleteSequence( sequenceName)
But there is a limitation you may not appreciate: functions cannot have side effects. So you could create a function for GetCurrentSequence(...), but GetNextSequence(...) would need to be a proc, since you will probably want to increment the current sequence value. And since it's a proc, you won't be able to use it directly in your insert statements.
So instead of
insert into mytable(id, ....) values( GetNextSequence('MySequence'), ....);
you will need to break it up over two lines:
declare @newID int;
exec @newID = GetNextSequence 'MySequence';
insert into mytable(id, ....) values(@newID, ....);
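For concreteness, a minimal sketch of the sequence table and the GetNextSequence proc described above (all names and types are assumptions):
CREATE TABLE Sequences (
    SequenceName    varchar(50) NOT NULL PRIMARY KEY,
    CurrentValue    int         NOT NULL,
    IncrementAmount int         NOT NULL
);
GO
CREATE PROCEDURE GetNextSequence @sequenceName varchar(50)
AS
BEGIN
    DECLARE @next int;
    -- Bump the counter and capture the new value in a single atomic statement
    UPDATE Sequences
    SET    @next = CurrentValue = CurrentValue + IncrementAmount
    WHERE  SequenceName = @sequenceName;
    RETURN @next;
END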
Also, SQL Server doesn't have any mechanism that can do something like
MySequence.Current
or
MySequence.Next
Hopefully, somebody will tell me I am incorrect with the above limitations, but I'm pretty sure they are accurate.
Good luck.
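Worth noting: SQL Server 2012 and later do have native sequences, which map closely to Oracle's. A sketch, with hypothetical table and column names:
CREATE SEQUENCE MySequence START WITH 1 INCREMENT BY 1;
-- NEXT VALUE FOR can be used directly in an INSERT
INSERT INTO mytable (id, name) VALUES (NEXT VALUE FOR MySequence, 'example');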
If you have a lot of code, you're going to want to do a massive overhaul of the code anyway; what works well in Oracle is not always going to work well in MSSQL. If you have a lot of cursors, for instance, while you could convert them line for line to MSSQL, you're not going to get good performance.
In short, this is not an easy undertaking.