I am at a loss. I am on a Mac using pgAdmin 4 with PostgreSQL. I tried to use the Import/Export Data wizard that appears when you right-click the table I created, and I get this error: ERROR: extra data after last expected column. All of the columns match up and there is no additional data. I don't know what needs to change.
So then I tried to create it with the Query Tool using the following code (renaming things to post here):
create schema schema_name;
create table schema_name.tablename( column1 text, column2 text...); ***all the columns are a text data type***
copy schema_name.tablename
from '/Users/me/downloads/filename.csv'
delimiter ',' header csv;
and get this error message:
ERROR: could not open file "/Users/me/downloads/filename.csv" for reading: Permission denied
HINT: COPY FROM instructs the PostgreSQL server process to read a file. You may want a client-side facility such as psql's \copy.
SQL state: 42501
Going to Properties for that database, then to Security and Privileges, I granted all privileges to PUBLIC. But that did nothing, so I am here to ask y'all for help.
The goal is simply to import the data from the CSV successfully.
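Since the HINT points at a client-side copy, here is a minimal sketch of the psql equivalent, using the same table and file path as above (note that \copy must be written on a single line):
\copy schema_name.tablename from '/Users/me/downloads/filename.csv' delimiter ',' csv header
Because \copy reads the file as the connected client rather than as the PostgreSQL server process, it avoids the "Permission denied" error.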
Just as the title says, I want to duplicate one table from a completely separate database in phpPgAdmin to another one. I have tried two ways, but neither worked for me:
Tried "Create table like" (database >> table >> create table like)
this seems to only be able to duplicate a table within the same database
Tried export and import
I tried to export the table I want, then headed over to the other database and tried to import it into an empty table, but the error I am getting is either "Import error: File could not be uploaded to the server phppgadmin" or "Import error: Failed to automatically determine the file format."
You can use pg_dump, which extracts a PostgreSQL database or table, and pipe its output directly to another server/database:
pg_dump -t table_name source_db | psql target_db
For more info, read the pg_dump documentation.
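If the source and target databases sit on different servers, the same pipe works with connection flags; the host and user names below are placeholders:
pg_dump -h source_host -U source_user -t table_name source_db | psql -h target_host -U target_user target_db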
I clicked a table on the BigQuery dashboard and got this error:
However, I can get data when I do a select on this table. (That means the table does exist)
I already have the highest admin privilege so it shouldn't be a permission issue.
I created this table with a Python script, which collects data, writes it into a CSV file, and uploads the CSV file to BigQuery every day. After I created the table, I changed the schema once, both in the script and on the dashboard. Not sure if that's the cause, but the table loading error occurred several days after I changed the schema.
If you have ad-blocker extensions, they might be the root cause of this issue. Try disabling them, then run your query again.
Hope it helps.
I guess I just cannot formulate the search query appropriately, but I cannot find an answer to the following simple question: how do I use extracted DDL pieces to recreate tables, views, etc. in a different database or a different schema?
For example, when I extract table DDL with
SELECT dbms_metadata.get_dependent_ddl('TABLE', 'TABLE_NAME', 'SCHEMA_NAME') FROM dual
I get output with FOREIGN KEY in it. If I now naively issue the resulting CREATE TABLE statements on a different database in, e.g., alphabetical order of table names, I get a "table or view does not exist" error, because the constraints reference not-yet-created tables.
What is the normal procedure for using DDL? Is it (easily) possible to recreate the full schema structure (short of a full database dump) without using external tools?
You can use the datapump export CONTENT option to export only the metadata for a schema:
CONTENT=[ALL | DATA_ONLY | METADATA_ONLY]
ALL unloads both data and metadata. This is the default.
DATA_ONLY unloads only table row data; no database object definitions are unloaded.
METADATA_ONLY unloads only database object definitions; no table row data is unloaded. Be aware that if you specify CONTENT=METADATA_ONLY, then when the dump file is subsequently imported, any index or table statistics imported from the dump file will be locked after the import.
The import process will create the objects and constraints, taking the dependencies into account.
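As a rough example, a metadata-only export and import of one schema might look like this (the user, password, schema, and directory object are placeholders, not values from the question):
expdp scott/password schemas=SCOTT content=METADATA_ONLY directory=DATA_PUMP_DIR dumpfile=scott_meta.dmp
impdp scott/password schemas=SCOTT directory=DATA_PUMP_DIR dumpfile=scott_meta.dmp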
If you want to see the DDL, and optionally run it manually, you can use the datapump import SQLFILE option to put the DDL into a file instead of executing it:
Specifies a file into which all of the SQL DDL that Import would have executed, based on other parameters, is written.
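Reusing the hypothetical dump file from above, a sketch of that would be the following, which writes the DDL to schema_ddl.sql instead of executing it:
impdp scott/password directory=DATA_PUMP_DIR dumpfile=scott_meta.dmp sqlfile=schema_ddl.sql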
You can do similar things through SQL Developer and other clients, but those are 'external tools', whereas datapump might not fall into that category, even if you have to run it from the command line. There is a datapump API so you can even avoid the command line if you want to, though in some ways it's more complicated than using the expdp and impdp utilities.
I have a Rails app on Heroku that I'm currently testing to ensure that I can download the information it gathers. I've managed to get PostgreSQL 9.3.5 working and can even get it to spit out a public URL to an unreadable dump file, but I want to export a particular table into a CSV that is easier to understand so that I can gather the data.
I've been looking into Heroku Dataclips. The documentation says that this is possible, but doesn't explain how. This site seemed to give some tips on SQL inputs:
http://www.gistutor.com/postgresqlpostgis/10-intermediate-postgresqlpostgis-tutorials/39-how-to-import-or-export-a-csv-file-using-postgresql-copy-to-and-copy-from-queries.html
So I entered this into Dataclips:
COPY participations(user_full_name, user_email, event_name, event_date_time)
TO '/usr/local/pgsql/data/csv/event_registrations.csv'
WITH DELIMITER ','
CSV HEADER
However, I get this error:
Your query couldn't be created.
ERROR: syntax error at or near "COPY"
LINE 2: COPY participation(user_full_name, user_email, event_name, e...
^
How can I fix this? Maybe the reference I'm using is wrong, because I don't see the difference between what I'm doing and what's there.
FWIW, I'm using the Cloud9 IDE as my terminal.
If you are trying to get the data out in a CSV file, then:
try to do this on the command line and put "\" before COPY, like this:
\COPY participations(user_full_name, user_email, event_name, event_date_time)
TO '/usr/local/pgsql/data/csv/event_registrations.csv'
WITH DELIMITER ','
CSV HEADER
Or you can download pgAdmin; it has an option to execute a query to a file, under the Query menu at the top.
According to Heroku support, this is what you need to put in a Dataclip if you want to get all the records from a particular table:
SELECT * from table_name;
Once you create your Dataclip, you will have the option through the Dataclips interface to download the results as a CSV.
I have a little problem. My friend has a database with over 10 tables, and each table has around 90-100 records.
I can't find a way to export the records from his tables (to put in a SQL file something like INSERT INTO ... VALUES ... for each existing record) so I can import them into my database.
How can I do that?
I tried: right-click on a table -> Script Table as -> INSERT To -> File...
but it only generates the INSERT statement.
Is there a solution, or is this feature only in the commercial version?
UPDATE
You can use the bcp command from the command prompt, like this:
For export: bcp ADatabase.dbo.OneTable out d:\test\OneTable.bcp -c -Usa -Ppassword
For import: bcp ADatabase.dbo.OneTable in d:\test\OneTable.bcp -c -Usa -Ppassword
These commands will create a BCP file containing the records for the specified table. You can then import the existing BCP file into another database.
If you use a remote database, then:
bcp ADatabaseRemote.dbo.OneTableRemote out d:\test\OneTableRemote.bcp -Slocalhost\SQLExpress -Usa -Ppassword
Instead of localhost\SQLExpress, you can use localhost or another server name...
Probably the simplest way to do this would be to run a SELECT statement that outputs to a file. Then you can import that data into your database.
For simple moves, I have also done a copy/paste manually. Sometimes it is better to use Excel as a staging platform before pasting it into the new database. You may need to create a temporary table in your new database that matches up exactly with the data you are pasting over. For example, I usually don't put a PK on the temp table at first and make the PK field just an INT. That way the copy will go smoother.
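As a minimal sketch of that kind of staging table in T-SQL (all names here are made up for illustration), the would-be PK column is left as a plain INT with no key or IDENTITY so pasted rows load without constraint trouble:
CREATE TABLE dbo.TempCustomers (
    CustomerId INT,          -- plain INT, no PK or IDENTITY yet
    FullName   NVARCHAR(100),
    Email      NVARCHAR(255)
);
Once the data is in, you can clean it up and move it into the real table with an INSERT ... SELECT.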
In the corporate world, you would use SSIS to move this data around.
There are a couple of ways you could do this. One: select everything from each table and save the results as a CSV or delimited file (you can do this from SQL Server Management Studio). You can also script the tables as CREATE statements and copy the scripts over to the new database, assuming it is also a SQL Server. Then, for the import, use a load-infile statement. You may have to google the syntax for SQL Server, but I know this works in MySQL and Oracle; I haven't tried it in SQL Server yet.
LOAD DATA INFILE 'myfile'
INTO TABLE stuff
FIELDS TERMINATED BY ','
LINES TERMINATED BY '\n'
SET id = NULL;
Or, if you are going to another SQL Server, use the SQL Server Import and Export Wizard.
http://msdn.microsoft.com/en-us/library/ms141209.aspx