Is there an alternative way to import data into Postgres other than using psql?

I am in a strict corporate environment and don't have access to Postgres' psql. Therefore I can't do what's shown, e.g., in the SO question Convert SQLITE SQL dump file to POSTGRESQL. However, I can generate the SQLite dump file (.sql). The resulting dump.sql file is 1.3 GB.
What would be the best way to import this data into Postgres? I also have DBeaver and can connect to both databases simultaneously, but unfortunately I can't do an INSERT from a SELECT between them.

I think the term for that is 'absurd', not 'strict'.
DBeaver has an 'execute script' feature. But who knows, maybe it will be blocked.

EnterpriseDB offers binary downloads. If you unzip those to a local drive, you might be able to execute psql from the bin subdirectory.
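For example (a rough sketch; the unzip location, connection details, and file path are placeholders to adjust for your environment):
C:\pgsql\bin\psql.exe -h dbhost -U dbuser -d targetdb -f C:\dumps\dump.sql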

If you can install psycopg2 or pg8000 for Python, you should be able to connect to the database and then loop over the dump file, sending each line to the database with cur.execute(line). It might take some fiddling if the dump file has any multi-line commands, but the example you linked to doesn't show any of those.
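A minimal sketch of that idea, assuming psycopg2 is installed; the connection details, file path, and the list of SQLite-specific statements to skip are assumptions you would adjust for your dump:

import psycopg2

# Placeholder connection details - adjust for your environment.
conn = psycopg2.connect(host="dbhost", dbname="targetdb",
                        user="dbuser", password="secret")
conn.autocommit = False
cur = conn.cursor()

with open("dump.sql", encoding="utf-8") as f:
    for line in f:
        stmt = line.strip()
        # Skip blank lines and SQLite-specific statements that Postgres rejects.
        if not stmt or stmt.startswith(("PRAGMA", "BEGIN TRANSACTION", "COMMIT")):
            continue
        cur.execute(stmt)

conn.commit()
cur.close()
conn.close()

This assumes one statement per line; multi-line statements would need to be buffered until a terminating semicolon before executing.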

Related

Import huge SQL file into SQL Server

I use the sqlcmd utility to import a 7 GB SQL dump file into a remote SQL Server. The command I use is this:
sqlcmd -S <IP address> -U <user> -P <password> -t 0 -d <database> -i file.sql
After about 20-30 min the server regularly responds with:
Sqlcmd: Error: Scripting error.
Any pointers or advice?
I assume file.sql is just a bunch of INSERT statements. For a large number of rows, I suggest using the BCP command-line utility. This will perform orders of magnitude faster than individual INSERT statements.
You could also bulk insert data using the T-SQL BULK INSERT command. In that case, the file path needs to be accessible to the database server (i.e. a UNC path or a file copied to a drive on the server), and the needed permissions have to be in place. See http://msdn.microsoft.com/en-us/library/ms188365.aspx.
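Hedged examples of both approaches (the server, database, table, and file names are placeholders, and the file is assumed to be plain character data):
bcp TargetDb.dbo.TargetTable in C:\data\rows.dat -S <server> -U <user> -P <password> -c -b 10000
BULK INSERT dbo.TargetTable FROM 'C:\data\rows.csv' WITH (FIELDTERMINATOR = ',', ROWTERMINATOR = '\n', BATCHSIZE = 10000);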
Why not use SSIS? While I am certified as a DBA, I always try to use the right tool for the job.
Here are some reasons to use SSIS.
1 - You can still use fast load / bulk copy. Make sure you set the batch size.
2 - Error handling is much better.
However, if you are using fast-load, either the batch commits or it gets tossed.
If you are using single record, you can direct each error row to a separate destination.
3 - You can perform transformations on the source data before loading it into the destination.
In short, Extract, Transform, Load.
4 - SSIS loves memory and buffers. If you want to get really in depth, read some articles from Matt Mason or Brian Knight.
Last but not least, the LAN/WAN always plays a factor if the job is not running on the target server with the input file on a local disk.
If you are on the same backbone with a good pipe, things go fast.
In summary, yes, you can use BCP. It is great for quick little jobs. Anything complicated with robust error handling should be done with SSIS.
Good luck,

Best way to export huge data from Oracle DB to Oracle DB

This is my first post on Stack Overflow, and I have to say that I really like this website!
For a project I need to export and then re-import some huge Oracle tables from one DB to another (around 100 million rows and 30 columns).
My idea is to export each table to a flat file and then re-import it into another empty table, given that the schema already exists.
I'm using PL/SQL Developer and/or SQL*Plus to make my operations.
I've tested SQL*Loader, which seems to do a good job, but it's really slow in my opinion: about 30 seconds to import a CSV file with 1 million rows and 30 columns.
Which solution would you suggest? Is SQL*Loader the best tool, or do better tools already exist?
Is CSV the best format in terms of size and processing time?
Thanks a lot in advance.
Use Oracle Data Pump, a.k.a. expdp and impdp. See the Overview of Oracle Data Pump, as well as Examples of Using Data Pump Export and Examples of Using Data Pump Import.
There really is no need to program this on your own; there is no way that you can outperform expdp/impdp. Don't forget there is also an impdp option to use a network_link. In that case, you just skip the dmp file and import directly into the target database. This can be done using impdp from the command line, but also via the dbms_datapump package in PL/SQL. See the PL/SQL Packages and Types Reference for documentation.
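A rough example of both variants (the user, directory object, dump file, table, and database link names are placeholders):
expdp scott/password@sourcedb directory=DUMP_DIR dumpfile=big_table.dmp logfile=big_table_exp.log tables=BIG_TABLE
impdp scott/password@targetdb directory=DUMP_DIR dumpfile=big_table.dmp logfile=big_table_imp.log tables=BIG_TABLE
Or, skipping the dump file entirely over a database link:
impdp scott/password@targetdb network_link=SOURCE_DB_LINK tables=BIG_TABLE logfile=big_table_net.log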
You can use one of the following options:
SQL*Loader (which you might already be trying out).
Traditional data export and import (the exp / imp commands).
Oracle Data Pump (expdp / impdp).
Also, if you need to do this regularly, you can schedule it using Oracle Scheduler or a shell script.

choosing where to save sqlite database

This is probably a simple question, but I could use some help. I am trying to build a small database for an application that will only be run on my computer so I want to create a local database.
To do this I am trying to use sqlite. I can use the command prompt to make what seems to be a database by using the sqlite3 databaseName; functionality, but I do not know where it is being stored.
I need to be able to find the database to access it through the application I am experimenting with. I already know all of the basic sql and such for creating the database tables and data, but I cannot figure out how to simply make the database connection.
Is there a way to specify where the database .db file will be stored, and why can I not find the file it seems to be making?
Using the sqlite3 shell? Here is some help, from sqlite3 -help:
Usage: sqlite3 [OPTIONS] FILENAME [SQL]
If FILENAME is not supplied, the shell uses a temporary database.
If you start the shell without supplying a filename, you can save the temporary database at any time using:
sqlite> .backup MAIN "folder\your_file.extension"
Or you can ATTACH an existing database and use SQL methods:
sqlite> ATTACH DATABASE "path\stored.db" AS other;
sqlite> INSERT OR REPLACE INTO other.table1 SELECT * FROM this_table1;
sqlite> DETACH other;
For doing such things you can use SQLite Manager, which you can get as a Firefox add-on. It's excellent for creating and managing SQLite databases.
https://addons.mozilla.org/en-US/firefox/addon/sqlite-manager/
Thanks everyone for answering, but it turns out my issue was much simpler than I thought.
I was trying to name the database after already starting the shell.
I was supposed to create the database from the command line by running sqlite3 name.db.
But I was trying to use that command within the sqlite shell, so nothing was being created.
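For completeness, an application can then open the file at an explicit path; a minimal sketch using Python's built-in sqlite3 module (the path is a placeholder):

import sqlite3

# Connecting creates the file at this exact path if it does not exist yet.
conn = sqlite3.connect(r"C:\projects\myapp\name.db")
cur = conn.cursor()
cur.execute("SELECT sqlite_version()")
print(cur.fetchone())
conn.close()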

Generate DDL SQL create table statement after scanning CSV file

Are there any command line tools (Linux, Mac, and/or Windows) that I could use to scan a delimited file and output a DDL CREATE TABLE statement with the data types determined for me?
Did some googling, but couldn't find anything. Was wondering if others might know, thanks!
DDL-generator can do this. It can generate DDL from YAML, JSON, CSV, Pickle, and HTML (although I don't know how the last one works). I just tried it on some data exported from Salesforce and it worked pretty well. Note that you need to use it with Python 3; I could not get it to work with Python 2.7.
You can also try https://github.com/mshanu/idli. It can take a CSV file as input and generate a CREATE statement with appropriate types. It can generate DDL for MySQL, Oracle, and Postgres. I am actively working on this and am happy to receive feedback for future improvements.
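If installing a tool is not an option, the same idea can be roughed out by hand; a minimal sketch, assuming a comma-delimited file with a header row and using only Python's standard library (the file name, table name, and type-guessing rules are arbitrary choices):

import csv

def guess_type(values):
    # Try the narrowest type that fits every non-empty value in the column.
    non_empty = [v for v in values if v != ""]
    if not non_empty:
        return "TEXT"
    try:
        [int(v) for v in non_empty]
        return "INTEGER"
    except ValueError:
        pass
    try:
        [float(v) for v in non_empty]
        return "NUMERIC"
    except ValueError:
        return "TEXT"

with open("data.csv", newline="") as f:
    reader = csv.reader(f)
    header = next(reader)
    rows = list(reader)

columns = []
for i, name in enumerate(header):
    col_type = guess_type([row[i] for row in rows if i < len(row)])
    columns.append(f'    "{name}" {col_type}')

print('CREATE TABLE "my_table" (\n' + ",\n".join(columns) + "\n);")

Note that this reads the whole file into memory; for a large file you would sample the first few thousand rows instead.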

Export DB with PostgreSQL's PgAdmin-III

How do I export a PostgreSQL DB into SQL that can be executed in another pgAdmin?
Exporting as a backup file doesn't work when there's a difference in versions.
Exporting as an SQL file doesn't execute when run in a different pgAdmin.
I tried exporting a DB with pgAdmin III, but when I tried to execute the SQL in another pgAdmin it threw errors in the SQL, and when I tried to "restore" a backup file, it said there was a difference in versions and that it couldn't do the import/restore.
So is there a "safe" way to export a DB into standard SQL that can be executed plainly in the pgAdmin SQL editor, regardless of which version it is?
Don't try to use PgAdmin-III for this. Use pg_dump and pg_restore directly if possible.
Use the version of pg_dump from the destination server to dump the origin server. So if you're going from (say) 8.4 to 9.2, you'd use 9.2's pg_dump to create the dump. If you create a -Fc custom-format dump (recommended), you can use pg_restore to apply it to the new database server. If you made a regular SQL dump, you can apply it with psql.
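A sketch of both paths (host, user, and database names are placeholders; the pg_dump used is the newer server's copy):
pg_dump -Fc -h oldhost -U postgres -f mydb.dump mydb
pg_restore -h newhost -U postgres -d mydb mydb.dump
Or with a plain SQL dump:
pg_dump -h oldhost -U postgres -f mydb.sql mydb
psql -h newhost -U postgres -d mydb -f mydb.sql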
See the manual on upgrading your PostgreSQL cluster.
Now, if you're trying to downgrade, that's a whole separate mess.
You'll have a hard time creating an SQL dump that'll work in any version of PostgreSQL. Say you created a VIEW that uses a WITH query. That won't work when restored to PostgreSQL 8.3 because it didn't support WITH. There are tons of other examples. If you must support old PostgreSQL versions, do your development on the oldest version you still support and then export dumps of it for newer versions to load. You cannot sanely develop on a new version and export for old versions, it won't work well if at all.
More troubling, developing on an old version won't always give you code that works on the new version either. Occasionally new keywords are added when support for new specification features is introduced. Sometimes issues are fixed in ways that affect user code. For example, if you were to develop on the (ancient and unsupported) 8.2, you'd have lots of problems with implicit casts to text on 8.3 and above.
Your best bet is to test on all supported versions. Consider setting up automated testing using something like Jenkins CI. Yes, that's a pain, but it's the price for software that improves over time. If Pg maintained perfect backward and forward compatibility it'd never improve.
Export/Import with pg_dump and psql
1. Set PGPASSWORD
export PGPASSWORD='123123123';
2. Export DB with pg_dump
pg_dump -h <<host>> -U <<username>> <<dbname>> > /opt/db.out
/opt/db.out is the dump path; you can specify your own.
3. Then set PGPASSWORD again for your other host. If the host or the password is the same, this is not required.
4. Import the DB on your other host
psql -h <<host>> -U <<username>> -d <<dbname>> -f /opt/db.out
If the username is different, then find and replace it with your local username in the db.out file, and make sure only the username is replaced and not data.
If you still want to use pgAdmin, then see the procedure below.
Export DB with PGAdmin:
Select DB and click Export.
File Options
Name the DB file for your local directory
Select Format - Plain
Ignore Dump Options #1
Dump Options #2
Check Use Insert Commands
Objects
Uncheck any tables you don't want
Import DB with PGAdmin:
Create New DB.
Keeping the DB selected, click Menu -> Plugins -> PSQL Console
Type the following command to import the DB:
\i /path/to/db.sql
If you want to export the schema and the data separately:
Export Schema
File Options
Name the schema file at your local directory
Select Format - Plain
Dump Options #1
Check Only Schema
Check Blobs (By default checked)
Export Data
File Options
Name the data file at your local directory
Select Format - Plain
Dump Options #1
Check Only Data
Check Blobs (By default checked)
Dump Options #2
Check Use Insert Commands
Check Verbose messages (By default checked)
Note: Export/import takes time depending on DB size, and pgAdmin will add some more time on top.