fastest way I can insert large query - sql

I have a query to insert like
insert into test (city,State,City,number,Sales,Quantity)
select 'AAA','Texas ','SSS',115121,79,1
UNION ALL
select 'BBB','Texas ','WWW',144921,338,2 UNION ALL
and 45,000 more records.
What is the fastest way to run this insert? I want to load the rows with a batch or bulk upload. Please help.

Related

Using IDs from table and insert into secondary table

Currently I am trying to write a SQL statement that selects from one table, then uses the primary key to insert into a secondary table. I am having a hard time figuring out how to do it. Below are the select and insert I have right now; the first select will return multiple results. I need to run this nightly, so it has to be dynamic.
SELECT ParentTypeId FROM Config_OrderParentQueueType
INSERT INTO [dbo].[Config_OrderParentQueueTypeNotes]
([ParentTypeId]
,[NoteDate]
,[NoteText]
,[NoteSubmittedById])
VALUES
(This is the ID I need to insert from the select
,GETDATE()
,'Default Note'
,6)
I have tried to mess with rowcount but the IDs are not always sequential. Appreciate any help in advance on how I would do this.
Use INSERT ... SELECT:
INSERT INTO [dbo].[Config_OrderParentQueueTypeNotes]
([ParentTypeId]
,[NoteDate]
,[NoteText]
,[NoteSubmittedById])
SELECT ParentTypeId, getdate(), 'Default Note', 6
FROM Config_OrderParentQueueType
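The INSERT ... SELECT pattern above can be checked with a small self-contained sketch. This uses SQLite via Python purely as an illustration (the thread targets SQL Server), with `datetime('now')` standing in for GETDATE() and a few made-up ParentTypeId values:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE Config_OrderParentQueueType (ParentTypeId INTEGER);
    CREATE TABLE Config_OrderParentQueueTypeNotes (
        ParentTypeId INTEGER, NoteDate TEXT,
        NoteText TEXT, NoteSubmittedById INTEGER);
    -- illustrative IDs; the real table's contents are unknown
    INSERT INTO Config_OrderParentQueueType VALUES (1), (2), (3);
""")

# One INSERT ... SELECT inserts a note row for every ParentTypeId --
# no cursor, ROWCOUNT trick, or per-row loop needed.
conn.execute("""
    INSERT INTO Config_OrderParentQueueTypeNotes
        (ParentTypeId, NoteDate, NoteText, NoteSubmittedById)
    SELECT ParentTypeId, datetime('now'), 'Default Note', 6
    FROM Config_OrderParentQueueType
""")
conn.commit()

print(conn.execute(
    "SELECT COUNT(*) FROM Config_OrderParentQueueTypeNotes").fetchone()[0])  # → 3
```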

Executing select during bulk insert - oracle

While trying to select from a table while multiple inserts are being made to it, the select statement times out. How can this be avoided?

Inserting more than 1000 rows from Excel into SQLServer

I'm new to SQL, but what is the best way to insert more than 1000 rows from an Excel document into my database (SQL Server 2008)?
For example I'm using the below query:
INSERT INTO mytable(companyid, category, sub, catalogueref)
VALUES
('10197', 'cat', 'sub', '123'),
('10197', 'cat2', 'sub2', '124')
This works fine, but there is a limit of 1000 rows per INSERT statement and I have 19,000 records, and I don't really want to write 19 separate insert statements. One more question: the company id is always the same, so is there a better way than writing it 19,000 times?
Just edit the data in Excel or another program to generate one INSERT statement per row; with a single insert per statement there is no limit on the number of statements. For example...
INSERT INTO table1 VALUES (6696480,'McMurdo Station',-77.846,166.676,'Antarctica','McMurdo')
INSERT INTO table1 VALUES (3833367,'Ushuaia',-54.8,-68.3,'America','Argentina')
...19,000 later
INSERT INTO table1 VALUES (3838854,'Rio Grande',-53.78769,-67.70946,'America','Argentina')
Microsoft provides an import wizard with SQL Server. I've used it to migrate data from other databases and from spreadsheets. It is pretty robust and easy to use.
There are several options, the Import Wizard which Erik suggests, or SSIS is another good one.
Read here:
Import Excel spreadsheet columns into SQL Server database
OK, it's a late answer, but I ran into this very same problem and found a solution that worked twice as fast as #Nur.B's solution for a 7K-row insertion.
Note: whenever possible, prefer to use transactions when dealing with large amounts of data.
INSERT INTO mytable(companyid, category, sub, catalogueref)
SELECT '10197', 'cat', 'sub', '123' UNION ALL
SELECT '10197', 'cat2', 'sub2', '124' UNION ALL
-- ... other N-thousand rows
SELECT '10197', 'catN-1', 'subN-1', '12312' UNION ALL
SELECT '10197', 'catN', 'subN', '12313'; -- don't add "UNION ALL" statement on the last line
You should be able to insert in batches of up to 1000 rows, each wrapped in its own transaction:
BEGIN TRANSACTION
INSERT INTO mytable (companyid, category, sub, catalogueref)
VALUES
('10197', 'cat', 'sub', '123'),
('10197', 'cat2', 'sub2', '124')
-- ...998 more rows...
COMMIT TRANSACTION
GO
BEGIN TRANSACTION
INSERT INTO mytable (companyid, category, sub, catalogueref)
VALUES
('10197', 'cat', 'sub', '123'),
('10197', 'cat2', 'sub2', '124')
-- ...998 more rows...
COMMIT TRANSACTION
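The batching idea above can be sketched from client code as well. This is a hedged illustration using Python's built-in sqlite3 (not SQL Server) with generated sample data; the point is committing every 1000 rows so no single transaction or statement grows unbounded:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE mytable (companyid TEXT, category TEXT,"
    " sub TEXT, catalogueref TEXT)"
)

# Invented sample data standing in for the 19,000 Excel rows.
rows = [("10197", f"cat{i}", f"sub{i}", str(123 + i)) for i in range(19000)]

BATCH = 1000
for start in range(0, len(rows), BATCH):
    with conn:  # each with-block is one transaction (BEGIN ... COMMIT)
        conn.executemany(
            "INSERT INTO mytable VALUES (?, ?, ?, ?)",
            rows[start:start + BATCH],
        )

print(conn.execute("SELECT COUNT(*) FROM mytable").fetchone()[0])  # → 19000
```

Parameterized batches also answer the "companyid repeated 19,000 times" concern: the constant lives once in the data-generation step, not in the SQL text.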
INSERT INTO mytable (companyid, category, sub, catalogueref)
SELECT companyid, category, sub, catalogueref FROM (VALUES
('10197', 'cat', 'sub', '123'),
('10197', 'cat2', 'sub2', '124')
-- ...1000 more rows...
) AS sub (companyid, category, sub, catalogueref);

Inserting multiple rows into Oracle

In the discussion about multiple row insert into the Oracle two approaches were demonstrated:
First:
insert into pager (PAG_ID,PAG_PARENT,PAG_NAME,PAG_ACTIVE)
select 8000,0,'Multi 8000',1 from dual
union all select 8001,0,'Multi 8001',1 from dual
Second:
INSERT ALL
INTO t (col1, col2, col3) VALUES ('val1_1', 'val1_2', 'val1_3')
INTO t (col1, col2, col3) VALUES ('val2_1', 'val2_2', 'val2_3')
INTO t (col1, col2, col3) VALUES ('val3_1', 'val3_2', 'val3_3')
.
.
.
SELECT 1 FROM DUAL;
Could anyone argue for preferring one over the other?
P.S. I didn't do any research myself (not even an explain plan), so any information or opinion would be appreciated.
Thanks.
From a performance point of view, these queries are identical.
UNION ALL won't hurt performance, since Oracle evaluates the UNION'ed query only when it needs it; it doesn't cache the results first.
The SELECT syntax is more flexible in the sense that you can more easily manipulate the SELECT query if you want to change something.
For instance, this query:
insert into pager (PAG_ID,PAG_PARENT,PAG_NAME,PAG_ACTIVE)
select 8000,0,'Multi 8000',1 from dual
union all select 8001,0,'Multi 8001',1 from dual
can be rewritten as
INSERT
INTO pager (PAG_ID,PAG_PARENT,PAG_NAME,PAG_ACTIVE)
SELECT 7999 + level, 0, 'Multi ' || (7999 + level), 1
FROM dual
CONNECT BY
level <= 2
By replacing 2 with the appropriate number, you can get any number of rows you want.
In the case of INSERT ALL, you would have to duplicate the destination table description for every row, which is less readable if you need, say, 40 rows.
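The row-generation trick above can be sketched outside Oracle too. As an illustration only, here is the same idea with a recursive CTE in SQLite (run from Python), where the CTE plays the role of CONNECT BY level <= n; table and column names follow the thread's pager example:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE pager (PAG_ID INTEGER, PAG_PARENT INTEGER,"
    " PAG_NAME TEXT, PAG_ACTIVE INTEGER)"
)

# Recursive CTE standing in for Oracle's CONNECT BY level <= 2:
# it generates level = 1, 2 and feeds one insert row per level.
conn.execute("""
    WITH RECURSIVE lvl(level) AS (
        SELECT 1
        UNION ALL
        SELECT level + 1 FROM lvl WHERE level < 2
    )
    INSERT INTO pager (PAG_ID, PAG_PARENT, PAG_NAME, PAG_ACTIVE)
    SELECT 7999 + level, 0, 'Multi ' || (7999 + level), 1 FROM lvl
""")
conn.commit()

print(conn.execute("SELECT PAG_ID, PAG_NAME FROM pager ORDER BY PAG_ID").fetchall())
# → [(8000, 'Multi 8000'), (8001, 'Multi 8001')]
```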
The INSERT ALL method has a problem with inserting a larger number of rows into a table.
I recently wanted to insert 1130 rows into a table with a single SQL statement. When I tried to do this with the INSERT ALL method, I got the following error:
ORA-24335 - cannot support more than 1000 columns
When I used the INSERT INTO .. UNION ALL .. approach, everything went fine.
Btw, I didn't know about the UNION ALL method before I found this discussion :)
I suspect solution 1 is a bit of a hack that works but is probably less efficient than the purpose-built alternative, INSERT ALL.
INSERT ALL is really designed for inserting many rows into more than one table as the result of a select, e.g.:
INSERT ALL
INTO t1 (c1, c2) VALUES (q1, q2)
INTO t2 (x1, x2) VALUES (q1, q3)
SELECT q1, q2, q3 FROM t3
If you want to load thousands of rows and they are not in the database already, I don't think this is the best way to do it. If your data is in a file, look at external tables or SQL*Loader to insert the rows efficiently.
I ran some tests, and the fastest solution was
insert into pager (PAG_ID,PAG_PARENT,PAG_NAME,PAG_ACTIVE)
select 8000,0,'Multi 8000',1 from dual
union all select 8001,0,'Multi 8001',1 from dual
buffering between 300 and 400 rows at a time (I tested with ODBC; this value may depend on its configuration).
The statement using UNION ALL theoretically has a small performance disadvantage, as it has to union the results of all the selects before the insert can happen.
INSERT ALL doesn't have this disadvantage, as the final result can already be processed line by line.
But in practice the optimizer inside Oracle should make the difference negligible, so which way you choose is up to your preferences.
In my opinion, INSERT ALL is the more human-readable of the two, while the UNION ALL variant takes less space when such an insert is generated automatically.
If you have more than 1000 insert statements, put them all in a .sql file, open it in Toad or SQL Developer, and execute it. All records will get inserted.
You should consider an array insert:
- the SQL stays simple
- it needs some client-side coding to set up the array parameters
This is the way to minimize network traffic if a few hundred inserts need to be done in one batch.
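The array-insert idea looks roughly like this from the client side. A hedged sketch only: it uses Python's built-in sqlite3, whose executemany plays the role of the array-bound parameters (with Oracle, the executemany call of python-oracledb/cx_Oracle works the same way); the row data is invented:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE pager (PAG_ID INTEGER, PAG_NAME TEXT)")

# A few hundred rows bound as one array: the driver sends one statement
# with an array of parameter sets instead of one round trip per row.
rows = [(i, f"Multi {i}") for i in range(8000, 8350)]
with conn:
    conn.executemany("INSERT INTO pager VALUES (?, ?)", rows)

print(conn.execute("SELECT COUNT(*) FROM pager").fetchone()[0])  # → 350
```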

Efficient Insert Query Against Multiple Tables in MySQL

I was wondering if there is a more efficient way of doing an insert in MySQL against multiple tables than a separate insert query for each record in each table. I was thinking of doing something like this:
INSERT INTO table1
(t1c1, t1c2, t1c3), table2 (t2c1, t2c2, t2c3)
VALUES
('t1c1', 't1c2', 't1c3', 't2c1', 't2c2', 't2c3');
The reason for this is that the data is collated on a remote machine and will be doing the insert over the network.
No, there is no way of doing this in a single statement; you will need to perform multiple queries.
You could insert into one table first, then into the second one from the first table:
INSERT INTO table1 ....
VALUES(....
INSERT INTO table2(....
SELECT ...
FROM table1
WHERE ....
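Since the inserts happen over the network, the two statements can at least be grouped into one transaction on the client. A hedged sketch (illustrated with Python's sqlite3 rather than MySQL; table and column names are the question's placeholders):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE table1 (t1c1 TEXT, t1c2 TEXT, t1c3 TEXT);
    CREATE TABLE table2 (t2c1 TEXT, t2c2 TEXT, t2c3 TEXT);
""")

# One collated record destined for two tables, as in the question.
row = ("t1c1", "t1c2", "t1c3", "t2c1", "t2c2", "t2c3")

# Two statements, one transaction: there is no single-statement
# multi-table INSERT, but the pair commits (or rolls back) atomically.
with conn:
    conn.execute("INSERT INTO table1 VALUES (?, ?, ?)", row[:3])
    conn.execute("INSERT INTO table2 VALUES (?, ?, ?)", row[3:])

print(conn.execute("SELECT * FROM table2").fetchone())  # → ('t2c1', 't2c2', 't2c3')
```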