Oracle Bulk Insert Using SQL Developer

I have recently taken data dumps from an Oracle database.
Many of them are large (~5 GB). I am trying to insert the dumped data into another Oracle database by executing the following SQL in SQL Developer:
@C:\path\to\table_dump1.sql;
@C:\path\to\table_dump2.sql;
@C:\path\to\table_dump3.sql;
:
but it is taking a very long time, more than a day to complete even a single SQL file.
Is there any better way to get this done faster?

SQL*Loader is my favorite way to bulk load large data volumes into Oracle. Use the direct-path insert option for max speed, but understand the impacts of direct-path loads (for example, all data is inserted past the high water mark, which is fine if you truncate your table). It even has a tolerance for bad rows, so if your data has "some" mistakes it can still work.
SQL*Loader can suspend indexes and build them all at the end, which makes bulk inserting very fast.
Example of a SQL*Loader call:
$SQLDIR/sqlldr /@MyDatabase direct=false silent=feedback \
control=mydata.ctl log=/apps/logs/mydata.log bad=/apps/logs/mydata.bad \
rows=200000
And the mydata.ctl would look something like this:
LOAD DATA
INFILE '/apps/load_files/mytable.dat'
INTO TABLE my_schema.my_table
FIELDS TERMINATED BY "|"
(ORDER_ID,
ORDER_DATE,
PART_NUMBER,
QUANTITY)
Alternatively... if you are just copying the entire contents of one table to another, across databases, you can do this if your DBA sets up a DBlink (a 30 second process), presupposing your DB is set up with the redo space to accomplish this.
truncate table my_schema.my_table;
insert into my_schema.my_table
select * from my_schema.my_table@my_remote_db;
The use of the /*+ APPEND */ hint can still make use of direct-path insert.
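For instance, a minimal sketch of the same cross-database copy with the hint (the schema, table, and DB link names are just the placeholders from above); note that after a direct-path insert the session must commit before it can query the table again:
insert /*+ APPEND */ into my_schema.my_table
select * from my_schema.my_table@my_remote_db;
commit;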

Related

How to run an insert script file in oracle with around 2.5 million insert statements?

I have an insert script file in Oracle with around 2.5 million insert statements, and I need ideas for getting the data into an Oracle table.
I have tried inserting using SQL Developer, by executing the file directly with @path\script.sql, but it times out.
2.5 million individual INSERT statements are always going to be slow: you need to use something better suited to bulk data volumes.
"its an export , am trying to insert those records in another table "
The best approach would be to redo the export in a different format, say using Data Pump.
Alternatively, as @thatjeffsmith suggests, you could export the records in CSV format and import them using SQL*Loader;
SQL Developer has options to help with this.
As a last resort, use a script to strip the INSERT part of each statement and leave only the values in CSV format.
Then define an EXTERNAL TABLE over the file, or load it with SQL*Loader (see the sketch below).
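As a rough sketch (the column names, the load_dir directory object, and the file name are assumptions, and a date format mask may be needed in the ACCESS PARAMETERS depending on your data), an external table over such a CSV file could look like:
CREATE TABLE my_table_ext (
  order_id    NUMBER,
  order_date  DATE,
  part_number VARCHAR2(30),
  quantity    NUMBER
)
ORGANIZATION EXTERNAL (
  TYPE ORACLE_LOADER
  DEFAULT DIRECTORY load_dir
  ACCESS PARAMETERS (
    RECORDS DELIMITED BY NEWLINE
    FIELDS TERMINATED BY ','
  )
  LOCATION ('mytable.csv')
);

INSERT /*+ APPEND */ INTO my_schema.my_table
SELECT * FROM my_table_ext;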
If this is a one-time script, spend a few minutes manually grouping statements into anonymous PL/SQL blocks by adding BEGIN and END; / markers throughout the file. There's no need to use an alternative format or tool unless you are going to constantly regenerate this file.
Scripts with a large number of statements are usually slow because of network lag. Each statement is sent to the server, executed, and returns a status. Even when running on the same host, the communication overhead can still exceed the amount of time used to perform the real work.
PL/SQL blocks are sent to the server as a single unit. If you send blocks of 100 statements at a time the network overhead will be reduced by 99%. You won't see much improvement by using blocks of 1000, 10000, etc., but the larger the block the less manual editing you need to do. But beware, if the block is too large it will exceed a compilation limit and throw an error.
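A minimal sketch of that grouping (the table and values are hypothetical):
BEGIN
  INSERT INTO my_table (id, val) VALUES (1, 'a');
  INSERT INTO my_table (id, val) VALUES (2, 'b');
  -- ... roughly 100 statements per block ...
END;
/
BEGIN
  INSERT INTO my_table (id, val) VALUES (101, 'c');
  -- ... and so on ...
END;
/
Each block travels to the server and executes as a single round trip, which is where the savings come from.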
Under the assumption that those records are clean, you can disable logging and indexes on the tables you are inserting into, roughly as sketched below. Also, I would commit after every 1,000 records, or whatever batch size suits your needs.
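Roughly, in Oracle syntax (the table and index names are hypothetical, and NOLOGGING only reduces redo for direct-path operations):
ALTER TABLE my_schema.my_table NOLOGGING;
ALTER INDEX my_schema.my_table_ix UNUSABLE;
ALTER SESSION SET skip_unusable_indexes = TRUE;
-- run the inserts here, committing every ~1,000 rows
ALTER INDEX my_schema.my_table_ix REBUILD;
ALTER TABLE my_schema.my_table LOGGING;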

Hive: create table and write it locally at the same time

Is it possible in hive to create a table and have it saved locally at the same time?
When I get data for my analyses, I usually create temporary tables to track possible
mistakes in the queries/scripts. Some of these are just temporary tables, while others contain the data that I actually need for my analyses.
What I usually do is use hive -e "select * from db.table" > filename.tsv to get the data locally; however, when the tables are big this can take quite some time.
I was wondering if there is some way in my script to create the table and save it locally at the same time. Probably this is not possible, but I thought it is worth asking.
Honestly, doing it the way you are is the better of the two possible ways, but it is worth noting that you can perform a similar task in an .hql file for automation.
Using syntax like this:
INSERT OVERWRITE LOCAL DIRECTORY '/home/user/temp' select * from table;
You can run a query and store the result somewhere on the local filesystem (as long as there is enough space and you have the correct privileges).
A disadvantage is that with the pipe you get the data nicely stored as '|'-delimited and newline-separated, whereas this method will store the values with Hive's default field delimiter (Ctrl-A, '\001').
A work around is to do something like this:
INSERT OVERWRITE LOCAL DIRECTORY '/home/user/temp'
ROW FORMAT DELIMITED
FIELDS TERMINATED BY ','
select books from table;
But this is only available in Hive 0.11 or higher.

Inserting rows - bulk or row by row?

I am inserting data into a database using millions of insert statements stored in a file. Is it better to insert this row by row or in bulk? I am not sure what the implications can be.
Any suggestions on the approach? Right now, I am executing 50K of these statements at a time.
Generally speaking, you're much better off inserting in bulk, provided you know that the inserts won't fail for some reason (i.e. invalid data, etc.). If you're going row by row, what you're doing is opening the data connection, adding the row, and closing the data connection. Rinse, wash, repeat, in your case tens of thousands of times (or more?). That's a huge performance hit compared to opening the connection once, dumping all the data in one shot, then closing the connection once. If your data ISN'T a clean set of data, you might be better off going row by row, as a bulk insert won't handle data that needs to be cleaned up gracefully.
If you are using SSIS, I would suggest a data flow task as another possible avenue. This will allow you to move data from a flat text file, SQL table or other source and map it into your new table. Performance, I have found, is always pretty good and I use it regularly.
If your table is not created before the insert, what I do is drag an Execute SQL Task function into my process with the table creation query (CREATE TABLE....etc.) and update the properties on the data flow function to delay validation.
As long as my data structure is consistent, this works.
You should definitely use BULK INSERT instead of inserting row by row. BULK INSERT is the in-process method designed for bringing data from a text file into SQL Server, and it is the fastest of the approaches described in The Data Loading Performance Guide online article.
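A minimal BULK INSERT sketch (the table name, file path, and delimiters are assumptions):
BULK INSERT dbo.my_table
FROM 'C:\data\my_table.csv'
WITH (
    FIELDTERMINATOR = ',',   -- column delimiter in the file
    ROWTERMINATOR = '\n',    -- row delimiter
    BATCHSIZE = 50000        -- commit in batches instead of one huge transaction
);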
The other alternative is to use a batch process that uses set-based processing over smaller sets of records (say 5,000 at a time). This can keep the server from getting totally locked up, and it is faster than one record at a time.

Query to read a query in a file

I have huge CREATE TABLE queries (hundreds of GB) which I'd like to ship through ODBC to my db (Postgres in this case). The problem is that these queries are built by an external program, so I would like to avoid loading each query into memory to ship it over ODBC to the db. I would much prefer to send the db a (small) query telling it to execute that huge query file directly.
That would be easy with psql, but I'd like to do it through ODBC. Is it possible?
If you mean bulk data loading, PostgreSQL has the COPY command - it can read a data file on the server directly, but it cannot process regular SQL queries - it loads data from a file in CSV or a similar format (which you can customize via COPY parameters).
If you're loading a table from scratch, nice optimizations are having a plain table (without PK, FK, constraints, or indexes) and executing the COPY in a transaction together with a TRUNCATE of the table, like:
BEGIN;
TRUNCATE ....;
COPY ...;
COMMIT;
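A slightly more concrete sketch (the schema, columns, and file path are hypothetical; the file must be readable by the server process):
BEGIN;
TRUNCATE my_schema.my_table;
COPY my_schema.my_table (id, name, value)
    FROM '/var/lib/postgresql/import/my_table.csv'
    WITH (FORMAT csv, HEADER true);
COMMIT;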

What's the fastest way to do a bulk insert into Postgres?

I need to programmatically insert tens of millions of records into a Postgres database. Presently, I'm executing thousands of insert statements in a single query.
Is there a better way to do this, some bulk insert statement I do not know about?
PostgreSQL has a guide on how to best populate a database initially, and they suggest using the COPY command for bulk loading rows. The guide has some other good tips on how to speed up the process, like removing indexes and foreign keys before loading the data (and adding them back afterwards).
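For reference, a minimal COPY sketch (the file path is an assumption; from psql you would typically use \copy so the file is read on the client side):
COPY films (code, title, did, date_prod, kind)
    FROM '/path/to/films.csv'
    WITH (FORMAT csv);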
There is an alternative to using COPY, which is the multirow values syntax that Postgres supports. From the documentation:
INSERT INTO films (code, title, did, date_prod, kind) VALUES
('B6717', 'Tampopo', 110, '1985-02-10', 'Comedy'),
('HG120', 'The Dinner Game', 140, DEFAULT, 'Comedy');
The above code inserts two rows, but you can extend it arbitrarily, until you hit the maximum number of prepared statement tokens (it might be $999, but I'm not 100% sure about that). Sometimes one cannot use COPY, and this is a worthy replacement for those situations.
One way to speed things up is to explicitly perform multiple INSERTs or COPYs within a transaction (say 1,000). Postgres's default behavior is to commit after each statement, so by batching the commits you can avoid some overhead. As the guide in Daniel's answer says, you may have to disable autocommit for this to work. Also note the comment at the bottom suggesting that increasing wal_buffers to 16 MB may also help.
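A sketch of that batching idea, reusing the films table from the previous answer (the batch size of ~1,000 is illustrative):
BEGIN;
INSERT INTO films VALUES ('B6717', 'Tampopo', 110, '1985-02-10', 'Comedy');
INSERT INTO films VALUES ('HG120', 'The Dinner Game', 140, DEFAULT, 'Comedy');
-- ... roughly 1,000 INSERTs per transaction ...
COMMIT;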
The UNNEST function with arrays can be used along with the multirow VALUES syntax. I think this method is slower than using COPY, but it is useful to me when working with psycopg and Python (a Python list passed to cursor.execute becomes a pg ARRAY):
INSERT INTO tablename (fieldname1, fieldname2, fieldname3)
VALUES (
UNNEST(ARRAY[1, 2, 3]),
UNNEST(ARRAY[100, 200, 300]),
UNNEST(ARRAY['a', 'b', 'c'])
);
Without VALUES, using a subselect with an additional existence check:
INSERT INTO tablename (fieldname1, fieldname2, fieldname3)
SELECT * FROM (
SELECT UNNEST(ARRAY[1, 2, 3]),
UNNEST(ARRAY[100, 200, 300]),
UNNEST(ARRAY['a', 'b', 'c'])
) AS temptable
WHERE NOT EXISTS (
SELECT 1 FROM tablename tt
WHERE tt.fieldname1=temptable.fieldname1
);
The same syntax works for bulk updates:
UPDATE tablename
SET fieldname1=temptable.data
FROM (
SELECT UNNEST(ARRAY[1,2]) AS id,
UNNEST(ARRAY['a', 'b']) AS data
) AS temptable
WHERE tablename.id=temptable.id;
An external file is the best and most typical source of bulk data
The term "bulk data" means "a lot of data", so it is natural to use the original raw data, with no need to transform it into SQL. Typical raw data files for "bulk insert" are the CSV and JSON formats.
Bulk insert with some transformation
In ETL applications and ingestion processes, we need to change the data before inserting it. A temporary table consumes (a lot of) disk space, and it is not the fastest way to do this. The PostgreSQL foreign-data wrapper (FDW) is the best choice.
CSV example. Suppose a table tablename (x, y, z) in SQL and a CSV file like:
fieldname1,fieldname2,fieldname3
etc,etc,etc
... million lines ...
You can use the classic SQL COPY to load the data (as-is) into tmp_tablename, then insert the filtered data into tablename... But, to avoid disk consumption, the best approach is to ingest it directly with:
INSERT INTO tablename (x, y, z)
SELECT f1(fieldname1), f2(fieldname2), f3(fieldname3) -- the transforms
FROM tmp_tablename_fdw
-- WHERE conditions
;
You need to prepare the database for FDW, and instead of a static tmp_tablename_fdw you can use a function that generates it:
CREATE EXTENSION file_fdw;
CREATE SERVER import FOREIGN DATA WRAPPER file_fdw;
CREATE FOREIGN TABLE tmp_tablename_fdw(
...
) SERVER import OPTIONS ( filename '/tmp/pg_io/file.csv', format 'csv');
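A concrete (hypothetical) version matching the sample CSV columns shown earlier:
CREATE FOREIGN TABLE tmp_tablename_fdw (
  fieldname1 text,
  fieldname2 text,
  fieldname3 text
) SERVER import OPTIONS (
  filename '/tmp/pg_io/file.csv',
  format 'csv',
  header 'true'
);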
JSON example. A set of two files, myRawData1.json and Ranger_Policies2.json can be ingested by:
INSERT INTO tablename (fname, metadata, content)
SELECT fname, meta, j -- do any data transformation here
FROM jsonb_read_files('myRawData%.json')
-- WHERE any_condition_here
;
where the function jsonb_read_files() reads all files of a folder, defined by a mask:
CREATE or replace FUNCTION jsonb_read_files(
p_flike text, p_fpath text DEFAULT '/tmp/pg_io/'
) RETURNS TABLE (fid int, fname text, fmeta jsonb, j jsonb) AS $f$
WITH t AS (
SELECT (row_number() OVER ())::int id,
f AS fname,
p_fpath ||'/'|| f AS f
FROM pg_ls_dir(p_fpath) t(f)
WHERE f LIKE p_flike
) SELECT id, fname,
to_jsonb( pg_stat_file(f) ) || jsonb_build_object('fpath', p_fpath),
pg_read_file(f)::jsonb
FROM t
$f$ LANGUAGE SQL IMMUTABLE;
Lack of gzip streaming
The most frequent method for "file ingestion" (mainly in Big Data) is preserving the original file in gzip format and transferring it with a streaming algorithm, anything that runs fast and without disk consumption in Unix pipes:
gunzip -c remote_or_local_file.csv.gz | convert_to_sql | psql
So the ideal (future) solution would be a server-side option for the .csv.gz format.
Note after @CharlieClark's comment: currently (2022) there is nothing built in; the best alternative seems to be pgloader reading from STDIN:
gunzip -c file.csv.gz | pgloader --type csv ... - pgsql:///target?foo
You can use COPY table FROM ... WITH BINARY, which is "somewhat faster than the text and CSV formats". Only do this if you have millions of rows to insert and you are comfortable with binary data.
Here is an example recipe in Python, using psycopg2 with binary input.
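The SQL side of that looks roughly like this (the file path is an assumption; the file must already be in PostgreSQL's binary COPY format, e.g. generated via psycopg2's copy_expert):
COPY my_table FROM '/path/to/my_table.bin' WITH (FORMAT binary);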
It mostly depends on the (other) activity in the database. Operations like this effectively freeze the entire database for other sessions. Other considerations are the data model and the presence of constraints, triggers, etc.
My first approach is always: create a (temp) table with a structure similar to the target table (create table tmp AS select * from target where 1=0), and start by reading the file into the temp table.
Then I check what can be checked: duplicates, keys that already exist in the target, etc.
Then I just do an insert into target select * from tmp or similar.
If this fails, or takes too long, I abort it and consider other methods (temporarily dropping indexes/constraints, etc)
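A sketch of that staging-table approach (the table and file names are the placeholders from above; \copy is psql's client-side variant of COPY):
CREATE TEMP TABLE tmp AS SELECT * FROM target WHERE 1 = 0;
\copy tmp FROM 'data.csv' WITH (FORMAT csv)
-- check for duplicates and keys that already exist in target, then:
INSERT INTO target SELECT * FROM tmp;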
I just encountered this issue and would recommend csvsql for bulk imports to Postgres. To perform a bulk insert you'd simply createdb and then use csvsql, which connects to your database and creates individual tables for an entire folder of CSVs.
$ createdb test
$ csvsql --db postgresql:///test --insert examples/*.csv
I implemented a very fast PostgreSQL data loader with native libpq methods.
Try my package https://www.nuget.org/packages/NpgsqlBulkCopy/
Maybe I'm late already, but there is a Java library called pgbulkinsert by Bytefish. My team and I were able to bulk insert 1 million records in 15 seconds. Of course, there were some other operations that we performed, like reading 1M+ records from a file sitting on Minio, doing a couple of processing steps on top of the 1M+ records, filtering out duplicate records, and then finally inserting the 1M records into the Postgres database. All of these processes were completed within 15 seconds. I don't remember exactly how long the DB operation took, but I think it was less than 5 seconds. Find more details at https://www.bytefish.de/blog/pgbulkinsert_bulkprocessor.html
As others have noted, when importing data into Postgres, things will be slowed by the checks that Postgres is designed to do for you. Also, you often need to manipulate the data in one way or another so that it's suitable for use. Any of this that can be done outside of the Postgres process will mean that you can import using the COPY protocol.
For my use I regularly import data from the httparchive.org project using pgloader. As the source files are created by MySQL, you need to be able to handle some MySQL oddities, such as the use of \N for an empty value, along with encoding problems. The files are also so large that, at least on my machine, using FDW runs out of memory. pgloader makes it easy to create a pipeline that lets you select the fields you want, cast them to the relevant data types, and do any additional work before the data goes into your main database, so that index updates, etc. are minimal.
The query below creates a test table with a generate_series column that has 10,000 rows. *I usually create such a test table to test query performance, and you can check generate_series():
CREATE TABLE test AS SELECT generate_series(1, 10000);
postgres=# SELECT count(*) FROM test;
count
-------
10000
(1 row)
postgres=# SELECT * FROM test;
generate_series
-----------------
1
2
3
4
5
6
-- More --
And run the query below to insert 10,000 rows if you already have the test table:
INSERT INTO test (generate_series) SELECT generate_series(1, 10000);