Copy content from a CSV file to a PostgreSQL table

I am trying to copy content from a CSV file to an existing but empty table in PostgreSQL.
This is how far I've come:
COPY countries FROM 'C:\Program Files\PostgreSQL\9.5\data\countries-20140629.csv'
DELIMITER ',' CSV HEADER;
The problem I'm experiencing is that the CSV file contains three columns (code, english_name and French_name), whereas my table consists of only two columns (code, english_name).
Adding a third column to my table is not an option.
Is there any way to tell PostgreSQL to import only the first two columns of the CSV file?

The easiest way is to modify your CSV and delete the last column.
As the documentation describes, you can also give COPY an explicit column list. In your case it would be:
COPY countries (code, english_name) FROM 'C:\Program Files\PostgreSQL\9.5\data\countries-20140629.csv' DELIMITER ',' CSV HEADER;
Take a look at the documentation for further help:
PostgreSQL.org

As far as I can see, there is no way to tell Postgres to import only a subset of the columns. What you can do is import the CSV file into a temporary table and then transfer the data you want from the temporary table to your final table.
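A minimal sketch of that approach, assuming the CSV's third column is the French name (the staging table name and the french_name column name are placeholders):
-- Staging table matching all three CSV columns
CREATE TEMP TABLE countries_staging (
    code text,
    english_name text,
    french_name text
);
-- Load the full CSV into the staging table
COPY countries_staging FROM 'C:\Program Files\PostgreSQL\9.5\data\countries-20140629.csv'
DELIMITER ',' CSV HEADER;
-- Move only the two columns you need into the real table
INSERT INTO countries (code, english_name)
SELECT code, english_name FROM countries_staging;
-- Temporary tables disappear at the end of the session, but you can drop it explicitly
DROP TABLE countries_staging;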

Related

How to omit columns in a Schema.ini file?

I'm building an ini file to import a complex text file into a table in Access. How do you omit a column? My text file is fixed width, but it still has a separator column between each column of data that I don't need.
I was able to take @Olivier Jacot-Descombes' and Kostas K.'s advice and create a temp table to import the data into, then run a delete query to remove the information that wasn't needed.
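Along the same lines, a make-table query that keeps only the useful columns is one way to discard the separator columns after the import. A rough sketch in Access SQL, with entirely hypothetical table and column names:
-- Keep only the useful columns from the imported staging table
SELECT Code, Amount, Notes INTO FinalData
FROM TempImport;
-- Then discard the staging table
DROP TABLE TempImport;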

CSV deletes the data from the table

I have created a new table in SQLPro for Postgres, and I want to upload multiple CSVs into that table.
Each CSV has about 5K records. The problem is that whenever I upload another one, it deletes/overwrites the information already in the table.
Can you help? :)
Basically:
Merge all the CSVs, headers included, and insert them into your table.
Then delete all the rows that were created by the headers.
It might be obvious, but remember this will only work with CSVs where the data is mapped the same way.
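A minimal sketch of the cleanup step, with a hypothetical table my_table whose first column is code, so that rows created from header lines contain the literal string 'code':
-- Remove the rows that came from the CSV header lines
DELETE FROM my_table WHERE code = 'code';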

Import CSV file to SQLite database (without headers)

How do I load a CSV file into a table using the console? The problem is that I have to somehow omit the headers from the CSV file (I cannot delete them manually).
From the sqlite3 doc on CSV Import:
There are two cases to consider: (1) Table "tab1" does not previously exist and (2) table "tab1" does already exist.
In the first case, when the table does not previously exist, the table is automatically created and the content of the first row of the input CSV file is used to determine the name of all the columns in the table. In other words, if the table does not previously exist, the first row of the CSV file is interpreted to be column names and the actual data starts on the second row of the CSV file.
For the second case, when the table already exists, every row of the CSV file, including the first row, is assumed to be actual content. If the CSV file contains an initial row of column labels, that row will be read as data and inserted into the table. To avoid this, make sure that table does not previously exist.
It is either/or, so you will have to outsmart it.
Assuming "I cannot delete them manually" means from the CSV, not from the table, you could delete the header line with SQL after the import.
Or: import into a temp table in the target database, insert into the target table from the temp table, then drop the temp table.
Or:
connect to an in-memory database
import the CSV into a table
attach the target database
insert into the target table from the imported in-memory table (sketched below)
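A sketch of that sequence, with hypothetical file and table names. Because the staging table does not pre-exist in the in-memory database, the CSV's first row is consumed as column names rather than inserted as data:
sqlite3 :memory:
sqlite> .mode csv
sqlite> .import data.csv staging
sqlite> ATTACH 'target.db' AS target;
sqlite> INSERT INTO target.tab1 SELECT * FROM staging;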
Just add the option --skip 1; see https://www.sqlite.org/cli.html#importing_csv_files
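With a recent sqlite3 shell (the .import flags such as --csv and --skip were added around version 3.32.0), that looks like this, with a hypothetical file name:
sqlite> .import --csv --skip 1 data.csv tab1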

Import CSV to Impala

For a previous homework assignment, we were asked to import a CSV file with no column names into Impala, where we explicitly gave the name and type of each column while creating the table. Now, however, we have a CSV file with column names included; in this case, do we still need to write down the names and types even though they are provided in the data?
Yes, you still have to create an external table and define the column names and types. But you have to pass the following option right at the end of the CREATE TABLE statement:
tblproperties ("skip.header.line.count"="1");
-- Once the table property is set, queries skip the specified number of lines
-- at the beginning of each text data file. Therefore, all the files in the table
-- should follow the same convention for header lines.
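Put together, a complete sketch; the table name, column names, types, and HDFS location here are hypothetical:
CREATE EXTERNAL TABLE my_csv_table (
  code STRING,
  english_name STRING
)
ROW FORMAT DELIMITED FIELDS TERMINATED BY ','
LOCATION '/user/impala/my_csv_data'
TBLPROPERTIES ("skip.header.line.count"="1");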

Using Bulk Insert to import a CSV file without specifying column names?

When creating a table before the bulk insert, is there a way to NOT specify the column names and instead use whatever column names are in the CSV file? I have some columns in my CSV file that are quarters, like 2012Q2, 2012Q3, etc. In the future these will change with the period, which is why I don't want to hard-code the column names. If this is possible, any help would be appreciated.
Thanks!
One way to do this is:
Drop the table on BigQuery.
Get the column names from your CSV.
Create the table using the column names from your CSV; a PHP example can be found here: https://github.com/GoogleCloudPlatform/php-docs-samples/blob/master/bigquery/api/src/functions/create_table.php (the fields are your column names).
Import the CSV.
Done!
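For the create step, the generated DDL might look something like this; the dataset, table, and column names are hypothetical, and note that BigQuery column names traditionally must start with a letter or underscore, so header labels like 2012Q2 may need sanitizing (e.g. to Q2_2012):
-- Replaces the dropped table with columns taken from the current CSV header
CREATE OR REPLACE TABLE mydataset.quarterly_data (
  region STRING,
  Q2_2012 FLOAT64,
  Q3_2012 FLOAT64
);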