Using SSMS 2014, how to generate efficient insert scripts?

When generating INSERT scripts for tables in SSMS 2014, the generated script comes out in the format:
INSERT [schema].[table] (<column_list>) VALUES (column_values)
GO
INSERT [schema].[table] (<column_list>) VALUES (column_values)
GO
While this gets the job done, it is horribly slow. We can manually re-tool the script to be in the format of:
INSERT [schema].[table] (<column_list>)
VALUES (column_values)
,(column_values) -- up to 1000 rows
GO
INSERT [schema].[table] (<column_list>)
VALUES (column_values)
,(column_values) -- up to 1000 rows
GO
We've noted a speed increase of more than 10x by changing the script in this manner, which is very beneficial if the script needs to be re-run occasionally (not just as a one-time insert).
The question is: is there a way to do this from within the SSMS script generation, or alternatively, is there a process that can convert a script in the first format into one in the second format?
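One possible workaround, if no built-in option turns up, is to generate the multi-row script directly from the source table with T-SQL. The sketch below is illustrative only: dbo.MyTable and its Id/Name columns are assumptions, it uses the FOR XML PATH concatenation trick available on SQL Server 2014 (STRING_AGG only arrives in 2017), and splitting the output into batches of 1000 rows (the VALUES clause limit) plus NULL handling are left out for brevity.
SET NOCOUNT ON;

-- Build one multi-row INSERT statement as a string from a hypothetical table.
SELECT
    'INSERT [dbo].[MyTable] ([Id], [Name])' + CHAR(13) + CHAR(10) +
    'VALUES ' +
    STUFF(
        (SELECT CHAR(13) + CHAR(10) + ',('
                + CAST(t.Id AS varchar(12)) + ','
                + '''' + REPLACE(t.Name, '''', '''''') + ''')'
         FROM dbo.MyTable AS t
         FOR XML PATH(''), TYPE).value('.', 'nvarchar(max)'),
        1, 3, '');  -- strip the CR/LF and comma in front of the first row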

I develop the SSMSBoost add-in. It has a feature named Results Grid Scripter that can produce virtually any script you want based on the data in the Results Grid:
https://www.ssmsboost.com/Features/ssms-add-in-results-grid-script-results
There are several pre-defined templates, and you can change them to get exactly what you need.

Related

Copy table from Dev DB to QA DB

I'm trying to copy a table from a dev database to a QA database in SQL Server using the Import/Export Wizard. There are about 6.6 million rows and it is taking around 7 hours. Is there any faster way to accomplish the task?
Below is the code that I'm using:
SELECT *
FROM [Table_Name] WITH (NOLOCK)
WHERE [ColumnDate] > 2018
OR [Code] in ('A', 'B', 'C','D')
Thanks.
You can generate a script of the table with its data and then use that to create the table in your destination database. It is one of the fastest approaches. You can also use a PowerShell script to automate it. Since your data set is large and SSMS is slow, the process is taking a long time.
Let me explain: the Import/Export Wizard takes longer because it creates the schema and then inserts the data one row at a time. It is like creating a table and performing an insert for each row sequentially.
To avoid this, simply generate a script of the table with its data. Once you have the script, you can run the insert as a bulk operation from your local machine, which avoids the sequential row-by-row inserts and reduces the load time dramatically.
For reference, the link below is a step-by-step guide.
https://www.sqlshack.com/six-different-methods-to-copy-tables-between-databases-in-sql-server/
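If the dev and QA databases are on the same instance (or reachable via a linked server), a plain INSERT ... SELECT is another option that is usually much faster than the wizard. A rough sketch, with placeholder database names and columns, reusing the filter from the question:
-- Hedged sketch: Dev_DB, QA_DB and the column list are placeholders.
-- TABLOCK allows minimal logging under the simple or bulk-logged recovery model.
INSERT INTO QA_DB.dbo.Table_Name WITH (TABLOCK)
        (Col1, Col2, ColumnDate, Code)
SELECT  Col1, Col2, ColumnDate, Code
FROM    Dev_DB.dbo.Table_Name WITH (NOLOCK)
WHERE   ColumnDate > 2018              -- filter copied from the question
   OR   Code IN ('A', 'B', 'C', 'D');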

INSERT INTO statements: how to insert records quickly

If I have 1 million INSERT INTO statements in a text file, how can they be inserted into a table in less time?
INSERT INTO TABLE(a,b,c) VALUES (1,'shubham','engg');
INSERT INTO TABLE(a,b,c) VALUES (2,'swapnil','chemical');
INSERT INTO TABLE(a,b,c) VALUES (n,'n','n');
As above, we have 1 million rows. How can the records be inserted into the table quickly? Is there any option other than simply running all the statements in sequence?
Avoid row-by-row inserts for dumping such huge quantities of data. They are pretty slow, and there's no reason you should rely on plain inserts, even if you're using the SQL*Plus command line to run them as a file. Put the values to be inserted as comma- (or any other delimiter) separated entries in a flat file and then use options such as:
SQL*Loader
External table (see the sketch below)
It is common practice to extract data into flat files from tools like SQL Developer. Choose the "CSV" option instead of "Insert"; that will generate the values in a flat file.
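For the external-table route, something along these lines; the DIRECTORY object, file name, and column definitions are all assumptions, and quote/enclosure handling is omitted:
-- Hedged sketch. DATA_DIR must be an existing Oracle DIRECTORY object that
-- points at the folder holding the flat file; names and types are illustrative.
CREATE TABLE staging_ext (
  a  NUMBER,
  b  VARCHAR2(100),
  c  VARCHAR2(100)
)
ORGANIZATION EXTERNAL (
  TYPE ORACLE_LOADER
  DEFAULT DIRECTORY data_dir
  ACCESS PARAMETERS (
    RECORDS DELIMITED BY NEWLINE
    FIELDS TERMINATED BY ','
  )
  LOCATION ('rows.csv')
);

-- Direct-path copy into the real table.
INSERT /*+ APPEND */ INTO my_table (a, b, c)
SELECT a, b, c FROM staging_ext;
COMMIT;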
If it means that your text file contains literally those INSERT INTO statements, then run the file.
If you use GUI, load the file and run it as a script (so that it executes them all).
If you use SQL*Plus, call the file using @, e.g.
SQL> @insert_them_all.sql
You may use a batch insert in a single statement:
INSERT INTO TABLE(a,b,c) VALUES
(1,'shubham','engg'),
(2,'swapnil','chemical'),
(n,'n','n');
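Note that the multi-row VALUES form above works on SQL Server and MySQL but is not accepted by older Oracle versions; if the target here is Oracle, INSERT ALL is the closest equivalent. A hedged sketch with a placeholder table name:
-- my_table stands in for the real table; values follow the question.
INSERT ALL
  INTO my_table (a, b, c) VALUES (1, 'shubham', 'engg')
  INTO my_table (a, b, c) VALUES (2, 'swapnil', 'chemical')
SELECT * FROM dual;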

How to run an insert script file in Oracle with around 2.5 million insert statements?

I have an insert script file for Oracle with around 2.5 million insert statements. I need ideas for getting them into an Oracle table.
I have tried inserting using SQL Developer by executing the file directly (@path\script.sql), but it times out.
2.5 million individual INSERT statements are always going to suck: you need to use something more suited to bulk data volumes.
"its an export , am trying to insert those records in another table "
The best approach would be to redo the export in a different format, say using datadump.
Alternatively, as #thatjeffsmith suggests, you could export the records in CSV format and import them using SQL*Loader;
SQL Developer has options to help with this.
As a last resort, use a script to strip the INSERT part of the statements and leave only the values in CSV format.
Then define an EXTERNAL TABLE over that file or load it with SQL*Loader.
If this is a one-time script, spend a few minutes manually grouping the statements into anonymous PL/SQL blocks by adding BEGIN and END; / lines throughout the file (see the sketch below). There's no need to use an alternative format or tool unless you are going to constantly regenerate this file.
Scripts with a large number of statements are usually slow because of network lag. Each statement is sent to the server, executed, and returns a status. Even when running on the same host, the communication overhead can still exceed the amount of time used to perform the real work.
PL/SQL blocks are sent to the server as a single unit. If you send blocks of 100 statements at a time the network overhead will be reduced by 99%. You won't see much improvement by using blocks of 1000, 10000, etc., but the larger the block the less manual editing you need to do. But beware, if the block is too large it will exceed a compilation limit and throw an error.
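A minimal sketch of what the grouped file would look like; the table name and values are placeholders, and roughly 100 statements per block is a reasonable starting point:
-- Each BEGIN ... END; / block travels to the server as a single call.
BEGIN
  INSERT INTO my_table (a, b, c) VALUES (1, 'value_1', 'value_2');
  INSERT INTO my_table (a, b, c) VALUES (2, 'value_3', 'value_4');
  -- ... roughly 100 INSERT statements per block ...
END;
/
BEGIN
  INSERT INTO my_table (a, b, c) VALUES (101, 'value_5', 'value_6');
  -- ... the next batch of statements ...
END;
/
COMMIT;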
Assuming those records are clean, you can disable logging and indexes on the tables you are inserting into; a rough sketch follows. I would also commit after every 1000 records, or as suits your needs.
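The sketch below is only illustrative: my_table and my_table_ix are placeholders, NOLOGGING only reduces redo for direct-path (APPEND) inserts, and DML will fail against a unique index that has been marked unusable.
ALTER TABLE my_table NOLOGGING;
ALTER INDEX my_table_ix UNUSABLE;
ALTER SESSION SET skip_unusable_indexes = TRUE;  -- skip non-unique unusable indexes during DML

-- ... run the inserts here, committing every 1000 rows or so ...

ALTER INDEX my_table_ix REBUILD;
ALTER TABLE my_table LOGGING;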

Oracle Bulk Insert Using SQL Developer

I have recently taken data dumps from an Oracle database.
Many of them are large (~5 GB). I am trying to insert the dumped data into another Oracle database by executing the following SQL in SQL Developer:
@C:\path\to\table_dump1.sql;
@C:\path\to\table_dump2.sql;
@C:\path\to\table_dump3.sql;
:
but it is taking a very long time, more than a day, to complete even a single SQL file.
Is there any better way to get this done faster?
SQL*Loader is my favorite way to bulk load large data volumes into Oracle. Use the direct-path insert option for maximum speed, but understand the impacts of direct-path loads (for example, all data is inserted past the high-water mark, which is fine if you truncate your table). It even has a tolerance for bad rows, so if your data has "some" mistakes it can still work.
SQL*Loader can suspend indexes and build them all at the end, which makes bulk inserting very fast.
Example of a SQL*Loader call:
$SQLDIR/sqlldr /@MyDatabase direct=false silent=feedback \
control=mydata.ctl log=/apps/logs/mydata.log bad=/apps/logs/mydata.bad \
rows=200000
And the mydata.ctl would look something like this:
LOAD DATA
INFILE '/apps/load_files/mytable.dat'
INTO TABLE my_schema.my_table
FIELDS TERMINATED BY "|"
(ORDER_ID,
ORDER_DATE,
PART_NUMBER,
QUANTITY)
Alternatively... if you are just copying the entire contents of one table to another, across databases, you can do this if your DBA sets up a DBlink (a 30 second process), presupposing your DB is set up with the redo space to accomplish this.
truncate table my_schema.my_table;
insert into my_schema.my_table
select * from my_schema.my_table@my_remote_db;
Using the /*+ APPEND */ hint will still make use of direct-path insert.
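A short sketch with the same placeholder names as above (a direct-path insert requires a commit before the table can be queried again in the same session):
insert /*+ APPEND */ into my_schema.my_table
select * from my_schema.my_table@my_remote_db;
commit;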

How do I handle large SQL SERVER batch inserts?

I'm looking to execute a series of queries as part of a migration project. The scripts to be generated are produced by a tool which analyses the legacy database and then produces a script to map each of the old entities to an appropriate new record. The scripts run well for small entities, but some have records in the hundreds of thousands, which produces script files of around 80 MB.
What is the best way to run these scripts?
Is there some SQLCMD from the prompt which deals with larger scripts?
I could also break the scripts down into further smaller scripts but I don't want to have to execute hundreds of scripts to perform the migration.
If possible, have the export tool modified to export a BULK INSERT-compatible file.
Barring that, you can write a program that will parse the insert statements into something that BULK INSERT will accept.
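A minimal sketch of what loading such a file might look like; the table name, file path, delimiters, and batch size are all assumptions:
BULK INSERT dbo.TargetTable
FROM 'C:\migration\TargetTable.dat'
WITH (
    FIELDTERMINATOR = '|',
    ROWTERMINATOR   = '\n',
    TABLOCK,          -- bulk-load lock, helps enable minimal logging
    BATCHSIZE       = 100000
);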
BULK INSERT uses BCP format files, which come in traditional (non-XML) or XML flavours. Does it have to get a new identity and use it in a child table, and you can't get away with using SET IDENTITY_INSERT ON because the database design has changed so much? If so, I think you might be better off using SSIS or similar and doing a Merge Join once the identities are assigned. You could also load the data into staging tables in SQL using SSIS or BCP and then use regular SQL (potentially within SSIS in a SQL task) with the OUTPUT INTO feature to capture the identities and use them in the children, as sketched below.
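A sketch of that staging-table approach with OUTPUT capturing the new identities; every table and column name here is hypothetical, and MERGE with an always-false join is used because INSERT ... OUTPUT cannot reference source columns.
DECLARE @map TABLE (OldParentId int, NewParentId int);

MERGE dbo.ParentTarget AS t
USING dbo.ParentStaging AS s
   ON 1 = 0                              -- never matches, so every row inserts
WHEN NOT MATCHED THEN
    INSERT (Name) VALUES (s.Name)
OUTPUT s.OldParentId, inserted.ParentId INTO @map (OldParentId, NewParentId);

-- Remap the child rows to the newly generated parent identities.
INSERT dbo.ChildTarget (ParentId, Detail)
SELECT m.NewParentId, c.Detail
FROM dbo.ChildStaging AS c
JOIN @map AS m ON m.OldParentId = c.ParentId;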
Just execute the script. We regularly run backup/restore scripts that are hundreds of MB in size. It only takes 30 seconds or so.
If it is critical not to block your server for that amount of time, you'll have to split it up a bit.
Also look into the --tab option of mysqldump, which outputs the data using SELECT ... INTO OUTFILE; that format is more efficient and faster to load.
It sounds like this is generating a single INSERT for each row, which is really going to be pretty slow. If they are all wrapped in a transaction, too, that can be kind of slow (although the number of rows doesn't sound so big that it would make the transaction nearly impossible - like if you were holding a multi-million row insert in one transaction).
You might be better off looking at ETL (DTS, SSIS, BCP or BULK INSERT FROM, or some other tool) to migrate the data instead of scripting each insert.
You could break up the script and execute it in parts (especially if currently it makes it all one big transaction), just automate the execution of the individual scripts using PowerShell or similar.
I've been looking into the "BULK INSERT" from file option but cannot see any examples of the file format. Can the file mix row formats, or does it always have to be consistent in a CSV fashion? The reason I ask is that I've got identities involved across various parent/child tables, which is why per-row inserts are currently being used.