I have a huge text file in the format "name:email".
I also created a table that has name and email columns.
How do I upload the text file into the table?
The text file is on my Ubuntu server, and I have connected to psql using the commands
sudo -u postgres psql
\connect <dbname>
What do I do next in order to import the text file to my database?
psql has a \COPY command
\copy lets you copy data into and out of tables in your database. It supports several modes, including:
binary
tab-delimited
CSV
What you need is:
\COPY tablename(name,email) FROM '/path/to/file' DELIMITER ':' CSV
If you get the error ERROR: missing data for column "name", it means your file has an empty line at the end; just remove it.
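For context, a minimal end-to-end psql session might look like this (the table name users and the file path are placeholders, not from the question):

-- create the target table if it does not exist yet
CREATE TABLE users (name text, email text);
-- \COPY reads the file on the client side, so it works without superuser rights
\COPY users(name,email) FROM '/home/me/contacts.txt' DELIMITER ':' CSV
-- sanity check: count the imported rows
SELECT count(*) FROM users;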
Related
I want to copy a Postgres table in CSV format from a network database to my computer.
For example, here is its connection string:
psql postgresql://login:password@192.168.00.00:5432/test_table
The problem is that I don't have superuser rights and I can't copy the table via pgAdmin.
For example, if I make a request in pgAdmin:
COPY test_table TO 'C:\tmp\test_table.csv' DELIMITER ',' CSV HEADER;
I get an error:
ERROR: must be superuser or a member of the pg_write_server_files role to COPY to a file
HINT: Anyone can COPY to stdout or from stdin. psql's \copy command also works for anyone.
SQL state: 42501
As I understand it, it is possible to copy the table through the command line, right? How do I do it in my case? Thanks!
Instead of using COPY with a path, use STDOUT. Then, redirect the output to a local path:
psql -c "COPY test_table TO STDOUT DELIMITER ',' CSV HEADER" >> C:\tmp\test_table.csv
See the documentation for COPY.
In case you need this explanation: stdout stands for standard output, meaning the result of the command is printed to your terminal. Using > you redirect the output of the psql command into a file (>> would append to an existing file instead).
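Putting it together with the connection string from the question (a sketch; adjust the credentials, database name, and output path to your setup), the full command would be something like:

psql "postgresql://login:password@192.168.00.00:5432/test_table" -c "COPY test_table TO STDOUT DELIMITER ',' CSV HEADER" > C:\tmp\test_table.csv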
I would just learn how to use the command line, but if you want to stick with pgAdmin 4 you can right-click the table in the browser tree, choose "Import/Export Data", and follow the dialog box. Doing that is basically equivalent to using \copy from psql.
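For reference, the \copy one-liner that dialog corresponds to might look like this (run from psql on your own machine; \copy writes the file client-side, which is exactly what the HINT in the error message suggests):

\copy test_table TO 'C:/tmp/test_table.csv' DELIMITER ',' CSV HEADER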
I am new to using SQL, so please bear with me.
I need to import several hundred csv files into PostgreSQL. My web search has only turned up how to import many csv files into one table. However, most of my csv files have different column types (all have one-line headers). Is it possible to somehow run a loop and have each csv imported into a table with the same name as the csv? Creating each table manually and specifying columns is not an option. I know that plain COPY will not work, as the table needs to already exist.
Perhaps this is not feasible in PostgreSQL? I would like to accomplish this in pgAdmin III or the psql console, but I am open to other ideas (using something like R to change the csv into a format more easily entered into PostgreSQL?).
I am using PostgreSQL on a Windows 7 computer. It was requested that I use PostgreSQL, thus the focus of the question.
The desired result is a database full of tables, that I will then join with a spreadsheet that includes specific site data. Thanks!
Use pgfutter.
The general syntax looks like this:
pgfutter csv <file.csv>
In order to run this on all csv files in a directory from Windows Command Prompt, navigate to the desired directory and enter:
for %f in (*.csv) do pgfutter csv %f
Note that the directory containing the downloaded program must be added to the PATH environment variable.
EDIT:
Here is the command-line equivalent for Linux users.
Run it as
pgfutter csv *.csv
Or, if that doesn't work
find . -iname '*.csv' -exec pgfutter csv {} \;
In the terminal, use nano to create a script that loops over the csv files in a directory and loads them into the Postgres DB:
nano run_pgfutter.sh
The content of run_pgfutter.sh:
#!/bin/bash
# load every csv file in /mypath into Postgres, one pgfutter run per file
for i in /mypath/*.csv
do
  ./pgfutter csv "${i}"
done
Then make the file executable:
chmod u+x run_pgfutter.sh
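Then run it from the directory containing the pgfutter binary. pgfutter takes its database connection settings from flags or environment variables; the variable names below are assumptions, so verify them with pgfutter --help:

export DB_NAME=mydb      # assumed variable name; check pgfutter --help
export DB_USER=postgres  # assumed variable name; check pgfutter --help
./run_pgfutter.sh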
I am using the BCP utility to import data to SQL Server 2012. I am importing a text file that is tab delimited. If there is data in my file that contains commas, SQL Server is wrapping the field value in double quotes. How can I avoid SQL Server adding this text qualifier? I want the data to not have these quotations, just like it does in the flat text file.
I would like to avoid using a format file because I am using lots of flat text files with great variability.
I also cannot use Bulk Insert because my files are not stored on the server, but on a local machine that connects via SSMS.
EDIT: My BCP Command:
bcp "[DB].[SCHEMA].[TABLE]" in "PATH\FILE.txt" -F2 -w -S"SERVERSTRING" -U"USERNAME" -P"PASSWORD";
I am working on exporting a table from my server DB, which is about a few thousand rows, and phpMyAdmin is unable to handle it. So I switched to the command-line option.
But I am running into this error after executing the mysqldump command. The error is:
Couldn't execute 'SET OPTION SQL_QUOTE_SHOW_CREATE=1': You have an error in your SQL syntax; check the manual that corresponds to your MySQL server version for the right syntax to use near 'OPTION SQL_QUOTE_SHOW_CREATE=1' at line 1 (1064)
After doing some searching on this, I found it reported as a bug, with MySQL 5.5 not supporting the SET OPTION command.
I am running an EC2 instance with CentOS on it. My MySQL version is 5.5.31 (from my phpinfo).
I would like to know if there is a fix for this, as it won't be possible to upgrade the entire database because of this error.
Or, if there is any other way to do an export or dump, please suggest one.
An alternative to mysqldump is the SELECT ... INTO form of SELECT, which allows results to be written to a file (http://dev.mysql.com/doc/refman/5.5/en/select-into.html).
Some example syntax from the above help page is:
SELECT a,b,a+b INTO OUTFILE '/tmp/result.txt'
FIELDS TERMINATED BY ',' OPTIONALLY ENCLOSED BY '"'
LINES TERMINATED BY '\n'
FROM test_table;
Data can then be loaded back in using LOAD DATA INFILE (http://dev.mysql.com/doc/refman/5.5/en/load-data.html).
Again the page gives an example:
LOAD DATA INFILE '/tmp/test.txt' INTO TABLE test
FIELDS TERMINATED BY ',' LINES STARTING BY 'xxx';
And the manual gives a complete worked example pair:
When you use SELECT ... INTO OUTFILE in tandem with LOAD DATA INFILE to write data from a database into a file and then read the file back into the database later, the field- and line-handling options for both statements must match. Otherwise, LOAD DATA INFILE will not interpret the contents of the file properly. Suppose that you use SELECT ... INTO OUTFILE to write a file with fields delimited by commas:
SELECT * INTO OUTFILE 'data.txt' FIELDS TERMINATED BY ','
FROM table2;
To read the comma-delimited file back in, the correct statement would
be:
LOAD DATA INFILE 'data.txt' INTO TABLE table2 FIELDS TERMINATED BY ',';
Not tested, but something like this:
grep -v "SET OPTION SQL_QUOTE_SHOW_CREATE" yourdumpfile.sql | mysql -u user -p -h host databasename
This inserts the dump into your database but skips the lines containing "SET OPTION SQL_QUOTE_SHOW_CREATE". The -v flag inverts the match, keeping only lines that do not contain the pattern.
I couldn't find the English manual entry for SQL_QUOTE_SHOW_CREATE to link here, but you don't need this option at all when your table and database names don't contain special characters (meaning they don't need to be quoted).
UPDATE:
mysqldump -u user -p -h host database | grep -v "SET OPTION SQL_QUOTE_SHOW_CREATE" > yourdumpfile.sql
Then when you insert the dump into database you have to do nothing special.
mysql -u user -p -h host database < yourdumpfile.sql
I used a quick and dirty hack for this.
Download MySQL 5.6 (from https://downloads.mariadb.com/archive/signature/p/mysql/f/mysql-5.6.13-linux-glibc2.5-x86_64.tar.gz/v/5.6.13).
Untar it and use the newly downloaded mysqldump.
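Assuming you untarred it under /opt/mysql-5.6.13 (the path is just an example), you would then take the dump with the bundled binary instead of the system one:

/opt/mysql-5.6.13/bin/mysqldump -u user -p -h host database > yourdumpfile.sql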
I need a query to export a particular table to a CSV file, and that query must be used inside a stored procedure.
I tried this query
EXEC master..xp_cmdshell
'osql.exe -S ramcobl412 -U connect -P connect
-Q "select * from ramcodb..rct_unplanned_hdr" -o "c:\out.csv" -h-1 -s","'
but the CSV file is not formatted properly when I open it in an Excel sheet.
The comma separation works fine, but the column width is the problem.
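One thing worth trying (a sketch based on the assumption that the padding comes from osql's fixed-width output; not verified against your setup) is sqlcmd instead of osql, since its -W switch strips trailing whitespace from each column:

EXEC master..xp_cmdshell
'sqlcmd -S ramcobl412 -U connect -P connect
-Q "SET NOCOUNT ON; SELECT * FROM ramcodb..rct_unplanned_hdr"
-o "c:\out.csv" -h -1 -s"," -W'

SET NOCOUNT ON keeps the trailing "(N rows affected)" line out of the file.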
Save the .csv file you want to import to SQL Server on your desktop or some other path you can easily access.
Using SQL Server Management Studio, right-click the database you want the csv file imported into as a table, go to Tasks >
Import Data, and use the Import Wizard to import the csv file into a table.
The Import Wizard will automatically account for the different lengths in your rows. For example, if column X has 5 characters in one row and 10 characters in two other rows, the Import Wizard will automatically set the max length for column X to 10.