I am trying to import/copy my CSV file into PostgreSQL, but I am encountering the errors below. I don't have import/write permissions to the file. Will stdin help, and how? The Postgres docs provide no examples. I was then asked to do a bulk insert instead, but since there are too many columns with mixed data types, I am not sure how to proceed with that either.
Command to copy the csv file:
COPY sales.sales_tickets
FROM 'C:/Users/Nandini/Downloads/AIG_Sales_Tickets.csv'
DELIMITER ',' CSV;
ERROR: must be superuser to COPY to or from a file
Hint: Anyone can COPY to stdout or from stdin. psql's \copy command also works for anyone.
1 statement failed.
The bulk-insert alternative is too time-consuming:
insert into sales.sales_tickets values (1,'2',3,'4','5',6,7,8,'9','10','11');
Please suggest. Thank you.
From the PostgreSQL documentation on COPY:
COPY naming a file or command is only allowed to database superusers, since it allows reading or writing any file that the server has privileges to access.
and
Files named in a COPY command are read or written directly by the server, not by the client application. Therefore, they must reside on or be accessible to the database server machine, not the client. They must be accessible to and readable or writable by the PostgreSQL user (the user ID the server runs as), not the client. Similarly, the command specified with PROGRAM is executed directly by the server, not by the client application, must be executable by the PostgreSQL user. COPY naming a file or command is only allowed to database superusers, since it allows reading or writing any file that the server has privileges to access.
You're trying to use the COPY command in a way that violates two of these requirements:
You're trying to execute the COPY command from a non-super user.
You're trying to read a file on your client machine, and have it copied to the server.
This won't work. If you need to perform such a COPY, you need to:
Copy the CSV file to the server, into a directory that can be read by the (system) user running the PostgreSQL server process.
Execute the COPY command from a superuser account.
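For illustration, here is a minimal Python sketch of those two steps, assuming psycopg2 is available; the connection details and the server-side path are placeholders, not values from the question:
import psycopg2

# Connect as a superuser (placeholder connection details).
conn = psycopg2.connect("dbname=sales user=postgres host=dbserver")
with conn, conn.cursor() as cur:
    # The path must exist on the *server* and be readable by the
    # system user the PostgreSQL server process runs as.
    cur.execute(
        "COPY sales.sales_tickets "
        "FROM '/var/lib/postgresql/import/AIG_Sales_Tickets.csv' "
        "DELIMITER ',' CSV"
    )
conn.close()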
Alternative
If you can't do some of these, you can always use a tool such as pgAdmin 4 and use its Import/Export functionality.
See also How to import CSV file data into a PostgreSQL table?
You are an ideal case to use \copy, not COPY. Note that \copy is a psql meta-command, so the whole command must be written on a single line:
\copy sales.sales_tickets FROM 'C:/Users/Nandini/Downloads/AIG_Sales_Tickets.csv' DELIMITER ',' CSV
Related
I am under a strict corporate environment and don't have access to Postgres' psql. Therefore I can't do what's shown e.g. in the SO question Convert SQLITE SQL dump file to POSTGRESQL. However, I can generate the SQLite dump file (.sql). The resulting dump.sql file is 1.3 GB.
What would be the best way to import this data into Postgres? I also have DBeaver and can connect to both databases simultaneously, but unfortunately I can't do INSERT from SELECT.
I think the term for that is 'absurd', not 'strict'.
DBeaver has an 'execute script' feature. But who knows, maybe it will be blocked.
EnterpriseDB offers binary downloads. If you unzip those to a local drive, you might be able to execute psql from the bin subdirectory.
If you can install psycopg2 or pg8000 for Python, you should be able to connect to the database and then loop over the dump file, sending each line to the database with cur.execute(line). It might take some fiddling if the dump file has any multi-line commands, but the example you linked to doesn't show any of those.
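A rough sketch of that loop, assuming psycopg2 and one complete statement per line (connection details are placeholders):
import psycopg2

conn = psycopg2.connect("dbname=target_db user=me")  # placeholder connection
with conn, conn.cursor() as cur:
    with open("dump.sql") as f:
        for line in f:
            line = line.strip()
            # skip blank lines and SQL comments
            if line and not line.startswith("--"):
                cur.execute(line)
conn.close()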
I need to export a 50gb file with inserts to a table in postgreSQL to be able to count the time it takes to perform the inserts, but I can't find any way to load that file, can someone help me?
If the file you have contains syntactically valid SQL (like INSERT statements), this is very straightforward using the command-line psql client that comes with a Postgres installation:
psql DATABASE_NAME < FILE_NAME.sql
You may also want to replace DATABASE_NAME with a connection string like postgres://user:pass@localhost/database_name.
This causes your shell to read the given file and pass it off to psql's stdin, which will cause it to execute commands against the database it's connected to.
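Since the question is about measuring how long the inserts take, here is a small sketch that wraps the same psql invocation with a timer (assuming psql is on your PATH; the names are placeholders):
import subprocess
import time

start = time.perf_counter()
subprocess.run(
    ["psql", "postgres://user:pass@localhost/database_name", "-f", "FILE_NAME.sql"],
    check=True,
)
print(f"inserts took {time.perf_counter() - start:.1f} s")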
I want to read a tab-delimited file using PL/SQL and insert the file data into a table.
A new file will be generated every day.
I am not sure whether an external table will help here, because the filename changes based on the date.
Filename: SPRReadResponse_YYYYMMDD.txt
Below is the sample file data.
An option that works on your own PC is to use SQL*Loader. As the file name changes every day, you'd use your operating system's batch scripting (on MS Windows, .BAT files) to pass a different name when calling sqlldr (along with the control file).
An external table requires you to have access to the database server and (at least) read privilege on the directory which contains those .TXT files. Unless you're a DBA, you'll have to talk to them to set up that environment. As for the changing file name, you could use ALTER TABLE ... LOCATION, which is rather inconvenient.
If you want full control over it, use UTL_FILE; yes, you still need access to that directory on the database server, but by writing a PL/SQL script you can modify whatever you want, including the file name.
Or, a simpler option: first rename the input file to SPRReadResponse.txt, then load it and save yourself all that trouble.
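If you go the batch-script route, here is a minimal Python sketch of the rename-then-load idea; the directory, credentials, and control-file name are all hypothetical:
import datetime
import shutil
import subprocess

today = datetime.date.today().strftime("%Y%m%d")
src = f"C:/data/SPRReadResponse_{today}.txt"   # hypothetical directory
dst = "C:/data/SPRReadResponse.txt"            # fixed name the control file expects

shutil.copyfile(src, dst)
subprocess.run(["sqlldr", "userid=scott/tiger", "control=sprread.ctl"], check=True)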
I have an interesting challenge. I am learning how to use the COPY function in SQL. I need to import data from a .CSV file into a table (PostgreSQL server). But every time I try to do this I get this message:
ERROR: could not open file "/Users/olenaskoryk/Desktop/us_counties_2010.csv" for reading: Permission denied
HINT: COPY FROM instructs the PostgreSQL server process to read a file. You may want a client-side facility such as psql's \copy.
SQL state: 42501
My query is:
COPY us_counties_2010
FROM '/Users/olenaskoryk/Desktop/us_counties_2010.csv'
WITH (FORMAT CSV, HEADER);
As you can see, I am working on a Mac.
I assume that PostgreSQL is not running locally on your computer. That's why the server can't read your local file.
You may want to do this through a psql session using the \copy command.
$ psql your_db_connection_url
psql (10.5)
Type "help" for help.
db=# \copy us_counties_2010 FROM '/Users/olenaskoryk/Desktop/us_counties_2010.csv' WITH (FORMAT CSV, HEADER);
Psql's \copy command uses COPY FROM STDIN under the hood, passing the contents of the local file through standard input to the server, circumventing the limitation of the server not being able to read the local file.
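The same thing can be done in code with psycopg2's copy_expert, which streams a local file to the server over STDIN (connection details are placeholders):
import psycopg2

conn = psycopg2.connect("dbname=your_db user=you")  # placeholder connection
with conn, conn.cursor() as cur:
    with open("/Users/olenaskoryk/Desktop/us_counties_2010.csv") as f:
        # COPY ... FROM STDIN reads from the client, not the server's file system
        cur.copy_expert(
            "COPY us_counties_2010 FROM STDIN WITH (FORMAT CSV, HEADER)",
            f,
        )
conn.close()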
The HINT is relevant. The SQL server is not running as you, and does not have the right to read your file.
Give it permissions (which may require giving permissions to the containing directories), or put it in a universally readable place (like perhaps a TEMP directory).
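As a sketch of that, assuming the file sits under your own home directory (the path is taken from the question; the rest is illustrative):
import os
import stat

path = "/Users/olenaskoryk/Desktop/us_counties_2010.csv"
os.chmod(path, 0o644)  # make the file itself world-readable

# directories between your home directory and the file also need
# read+execute for "other" (/ and /Users are already world-searchable)
home = os.path.expanduser("~")
parent = os.path.dirname(path)
while parent.startswith(home):
    mode = os.stat(parent).st_mode
    os.chmod(parent, mode | stat.S_IROTH | stat.S_IXOTH)
    parent = os.path.dirname(parent)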
I have created a table in Hive called "sample" and loaded a CSV file "sample.txt" into it.
Now I need that data from "sample" in my local /opt/zxy/sample.txt.
How can I do that?
Hortonworks' Sandbox lets you do it through its HCatalog menu. Otherwise, the syntax is
INSERT OVERWRITE LOCAL DIRECTORY '/tmp/c' SELECT a.* FROM b
as per Hive language manual
Since your intention is just to copy the entire file from HDFS to your local FS, I would not suggest doing it through a Hive query, for the following reasons:
It'll start a MapReduce job, which will take more time than a plain copy.
It'll create file(s) with different names (000000_0, 000001_0, and so on), which will require you to rename them manually afterwards.
You might face problems opening these files, as they have no extension; your OS would be unable to choose an application to open them on its own. In such a case you'd either have to rename the file or manually pick an application to open it.
To avoid these problems you could use the HDFS get command:
bin/hadoop fs -get /user/hive/warehouse/sample/sample.txt /opt/zxy/sample.txt
Simple and easy. But if you need to copy only selected data, then you have to use a Hive query.
HTH
I usually run my query directly through Hive on the command line for this kind of thing, and pipe it into the local file like so:
hive -e 'select * from sample' > /opt/zxy/sample.txt
Hope that helps.
Readers who are accessing Hive from Windows can check out this script on GitHub.
It's a Python+paramiko script that extracts Hive data to the local Windows file system.
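A minimal sketch of what such a script does, assuming SSH access to the machine where Hive runs (host, credentials, and paths are placeholders):
import paramiko

client = paramiko.SSHClient()
client.set_missing_host_key_policy(paramiko.AutoAddPolicy())
client.connect("hive-host.example.com", username="user", password="secret")

# run the query remotely and stream its output into a local Windows file
stdin, stdout, stderr = client.exec_command("hive -e 'select * from sample'")
with open(r"C:\data\sample.txt", "wb") as f:
    for chunk in iter(lambda: stdout.read(4096), b""):
        f.write(chunk)
client.close()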