PostgreSQL: Execute queries in loop - performance issues

I need to copy data from a file into a PostgreSQL database. For that purpose I parse the file with bash in a loop and generate the corresponding insert queries. The trouble is that the loop takes a long time to run.
1) What can I do to speed up that loop? Should I open a connection before the loop and close it after?
2) Should I write the unique values to a temporary text file inside the loop and search that file with text utilities, instead of writing them to the database and searching for them there?

Does whatever programming language you use commit after every insert? If so, the easiest thing you can do is commit after inserting all rows rather than after every row.
You might also be able to batch inserts, but using the PostgreSQL copy command is less work and also very fast.
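For example, a single COPY statement along these lines replaces the whole bash loop; the table name, columns and file path below are placeholders for whatever your file actually contains:
-- Server-side load: the file must be readable by the PostgreSQL server process.
COPY my_table (id, name, value)
FROM '/path/to/data.csv'
WITH (FORMAT csv, HEADER true);
-- Client-side alternative from psql, for when the file lives on your machine:
-- \copy my_table (id, name, value) FROM '/path/to/data.csv' WITH (FORMAT csv, HEADER true)
Either form runs as one statement in one transaction, so you avoid both the per-row round trips and the per-row commits.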

If you insist on using bash you could split the file by a defined number of rows and then execute the commands in parallel using & at the end of each line.
I strongly suggest you try a different approach or programming language since, as Bill said, bash doesn't talk to Postgres. You can also use pg_dump if your file's source is another Postgres database.

Related

How to load csv file to multiple tables in postgres (mainly concerned about best practice)

I'm new to DB/postgres SQL.
Scenario:
Need to load a CSV file into a Postgres DB. This CSV data needs to be loaded into multiple tables according to the DB schema. I'm looking for a better design using a Python script.
My thought:
1. Load CSV file to intermediate table in postgres
2. Write a trigger on the intermediate table to insert data into multiple tables on each insert
3. Have the trigger truncate the intermediate table at the end
Any suggestions for a better design or other ways to do this without any ETL tools, and also any info on modules in Python 3.
Thanks.
Rather than using a trigger, use an explicit INSERT or UPDATE statement. That is probably faster, since it is not invoked per row.
Apart from that, your procedure is fine.
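As a rough sketch of that flow (the staging_orders, customers and orders names are invented, and the ON CONFLICT clause assumes a unique constraint on customers.name), each step is a plain SQL statement you can issue from a Python script:
-- 1. Load the CSV into the intermediate (staging) table.
COPY staging_orders (customer_name, order_date, amount)
FROM '/path/to/orders.csv'
WITH (FORMAT csv, HEADER true);
-- 2. Distribute the rows into the real tables with explicit set-based statements.
INSERT INTO customers (name)
SELECT DISTINCT customer_name
FROM staging_orders
ON CONFLICT (name) DO NOTHING;
INSERT INTO orders (customer_id, order_date, amount)
SELECT c.id, s.order_date, s.amount
FROM staging_orders s
JOIN customers c ON c.name = s.customer_name;
-- 3. Clear the staging table at the end.
TRUNCATE staging_orders;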

Sequential Teradata Queries

I have a collection of SQL queries that need to run in a specific order using Teradata. How can this be done?
I've considered writing an application in some other language (like Python or C++) to sequentially call each query, but am unsure how to get live data there from Teradata. I also want to keep the queries as separate SQL files (like it is currently).
The goal is to minimize the need for human interaction, i.e. I want to hit "Run" and let it take care of the rest.
BTEQ scripts are your go-to solution.
Have each query, or at least each logical block of several statements, in a single BTEQ script.
Then create a script that calls BTEQ with the needed settings, i.e. the TD logon command, and have that script called from a batch file with parameters like this:
start /wait C:\Teradata\BTEQ.bat Script_1.txt
start /wait C:\Teradata\BTEQ.bat Script_2.txt
start /wait C:\Teradata\BTEQ.bat Script_3.txt
pause
Then you can create several batch files, split into logical blocks, and have them executed on demand or on a schedule.
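To make that concrete, a BTEQ script is just your SQL statements wrapped in BTEQ dot-commands; the sketch below is only an illustration, the server name, credentials and table names are placeholders, and the .IF line aborts with a non-zero return code if the preceding request failed:
.LOGON tdserver/myuser,mypassword
-- first logical block of statements
DELETE FROM target_db.daily_stage;
INSERT INTO target_db.daily_stage
SELECT * FROM source_db.raw_feed;
.IF ERRORCODE <> 0 THEN .QUIT 8
.LOGOFF
.QUIT 0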

Inserting rows - bulk or row by row?

I am inserting data into a database using millions of insert statements stored in a file. Is it better to insert the data row by row or in bulk? I am not sure what the implications are.
Any suggestions on the approach ? Right now, I am executing 50K of these statements at a time.
Generally speaking, you're much better off inserting in bulk, provided you know that the inserts won't fail for some reason (i.e. invalid data, etc.). If you're going row by row, what you're doing is opening the data connection, adding the row, and closing the data connection. Rinse, wash, repeat, in your case tens of thousands of times (or more?). That's a huge performance hit compared to opening the connection once, dumping all the data in one shot, then closing the connection once. If your data ISN'T a clean set of data, you might be better off going row by row, as a bulk insert can fail outright when it hits data that needs to be cleaned up.
If you are using SSIS, I would suggest a data flow task as another possible avenue. This will allow you to move data from a flat text file, SQL table or other source and map it into your new table. Performance, I have found, is always pretty good and I use it regularly.
If your table is not created before the insert, what I do is drag an Execute SQL Task function into my process with the table creation query (CREATE TABLE....etc.) and update the properties on the data flow function to delay validation.
As long as my data structure is consistent, this works.
You should definitely use BULK INSERT instead of inserting row by row. BULK INSERT is the in-process method designed for bringing data from a text file into SQL Server, and it is the fastest of the approaches described in The Data Loading Performance Guide online article.
The other alternative is to use a batch process that uses set-based processing over a smaller set of records (say 5000 at a time). This can keep the server from getting totally locked up and is faster than one record at a time.
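To illustrate the BULK INSERT route (the table name, file path and terminators below are hypothetical and need to match your file), the BATCHSIZE option gives you the "smaller sets at a time" behaviour from the previous suggestion by committing every N rows instead of holding one giant transaction:
BULK INSERT dbo.MyTable
FROM 'C:\data\mytable.csv'
WITH (
    FIELDTERMINATOR = ',',
    ROWTERMINATOR = '\n',
    FIRSTROW = 2,        -- skip the header row if the file has one
    BATCHSIZE = 5000,    -- commit every 5000 rows rather than one huge transaction
    TABLOCK              -- coarse table lock, usually faster for a dedicated load
);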

Few questions from a Java programmer regarding porting preexisting database which is stored in .txt file to mySQL?

I've been writing a library management Java app lately, and, up until now, the main library database is stored in a .txt file which is loaded into an ArrayList in Java for creating and editing the database, with the alterations then saved back to the .txt file. A very primitive method indeed. Hence, having heard of SQL, I'm considering porting my preexisting .txt database to MySQL, although I have absolutely no idea how SQL, and specifically MySQL, works, except for the fact that it can interact with Java code. Can you suggest any books/websites to visit/buy? Will the book Head First SQL help, especially when using Java code to interact with the SQL database? It should be mentioned that I'm already comfortable with using 3rd-party APIs.
View from 30,000 feet:
First, you'll need to figure out how to represent the text file data using the appropriate SQL tables and fields. Here is a good overview of the different SQL data types. If your data represents a single Library record, then you'll only need to create 1 table. This is definitely the simplest way to do it, as conversion will be able to work line-by-line. If the records contain a LOT of data duplication, the most appropriate approach is to create multiple tables so that your database doesn't duplicate data. You would then link these tables together using IDs.
When you've decided how to split up the data, you create a MySQL database, and within that database, you create the tables (a database is just something that holds multiple tables). Connecting to your MySQL server with the console and creating a database and tables is described in this MySQL tutorial.
Once you've got the database created, you'll need to write the code to access the database. The link from OMG Ponies shows how to use JDBC in the simplest way to connect to your database. You then use that connection to create a Statement object and execute queries to insert, update, select or delete data. If you're selecting data, you get a ResultSet back and can view the data. Here's a tutorial for using JDBC to select and use data from a ResultSet.
Your first code should probably be a Java utility that reads the text file and inserts all the data into the database. Once you have the data in place, you'll be able to update the main program to read from the database instead of the file.
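For example, the split into multiple linked tables might end up looking something like this; the books, members and loans tables and their columns are only a guess at what a library .txt file contains, so adjust them to your actual records:
CREATE TABLE books (
    book_id INT AUTO_INCREMENT PRIMARY KEY,
    title   VARCHAR(255) NOT NULL,
    author  VARCHAR(255),
    isbn    VARCHAR(20)
);
CREATE TABLE members (
    member_id INT AUTO_INCREMENT PRIMARY KEY,
    name      VARCHAR(255) NOT NULL
);
CREATE TABLE loans (
    loan_id   INT AUTO_INCREMENT PRIMARY KEY,
    book_id   INT NOT NULL,
    member_id INT NOT NULL,
    loaned_on DATE,
    FOREIGN KEY (book_id) REFERENCES books (book_id),
    FOREIGN KEY (member_id) REFERENCES members (member_id)
);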
Know that the connection between a program and a SQL database is through a 'connection program'. You write an instruction in an SQL statement, say
Select * from Customer order by name;
and then set up to retrieve data one record at a time. Or in the other direction, you write
Insert into Customer (name, addr, ...) values (x, y, ...);
and either replace x, y, ... with actual values or bind them to the connection according to the interface.
With this understanding you should be able to read pretty much any book or JDBC API description and get started.

How do I handle large SQL SERVER batch inserts?

I'm looking to execute a series of queries as part of a migration project. The scripts to be run are produced by a tool which analyses the legacy database and then produces a script to map each of the old entities to an appropriate new record. The scripts run well for small entities, but some have records in the hundreds of thousands, which produces script files of around 80 MB.
What is the best way to run these scripts?
Is there some SQLCMD from the prompt which deals with larger scripts?
I could also break the scripts down into further smaller scripts but I don't want to have to execute hundreds of scripts to perform the migration.
If possible have the export tool modified to export a BULK INSERT compatible file.
Barring that, you can write a program that will parse the insert statements into something that BULK INSERT will accept.
BULK INSERT uses BCP format files, which come in traditional (non-XML) or XML form. Does it have to get a new identity and use it in a child table, and you can't get away with using SET IDENTITY_INSERT ON because the database design has changed so much? If so, I think you might be better off using SSIS or similar and doing a Merge Join once the identities are assigned. You could also load the data into staging tables in SQL using SSIS or BCP, and then use regular SQL (potentially within SSIS in a SQL task) with the OUTPUT INTO feature to capture the identities and use them in the children.
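As a rough sketch of the OUTPUT INTO idea (every table and column name here is invented, and it assumes the legacy rows carry a business key such as LegacyKey that is also stored on the new parent table):
-- Capture the new identities as the parents are inserted from a staging table.
DECLARE @ParentMap TABLE (ParentId INT, LegacyKey VARCHAR(50));
INSERT INTO dbo.Parent (LegacyKey, Name)
OUTPUT inserted.ParentId, inserted.LegacyKey INTO @ParentMap
SELECT LegacyKey, Name
FROM staging.Parent;
-- Use the captured identities when loading the child rows.
INSERT INTO dbo.Child (ParentId, Detail)
SELECT m.ParentId, s.Detail
FROM staging.Child AS s
JOIN @ParentMap AS m ON m.LegacyKey = s.LegacyKey;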
Just execute the script. We regularly run backup/restore scripts that are hundreds of MB in size. It only takes 30 seconds or so.
If it is critical not to block your server for that amount of time, you'll have to really split it up a bit.
Also look into the -tab option of mysqldump, which outputs the data using TO OUTFILE and is more efficient and faster to load.
It sounds like this is generating a single INSERT for each row, which is really going to be pretty slow. If they are all wrapped in a transaction, too, that can be kind of slow as well (although the number of rows doesn't sound so big that it would make the transaction nearly impossible, like it would if you were holding a multi-million-row insert in one transaction).
You might be better off looking at ETL (DTS, SSIS, BCP or BULK INSERT FROM, or some other tool) to migrate the data instead of scripting each insert.
You could break up the script and execute it in parts (especially if it currently does everything in one big transaction), and just automate the execution of the individual scripts using PowerShell or similar.
I've been looking into the "BULK INSERT" from file option but cannot see any examples of the file format. Can the file mix row formats, or does it always have to be consistent in a CSV fashion? The reason I ask is that I've got identities involved across various parent/child tables, which is why per-row inserts are currently being used.