How do I load a CSV file into a table using the sqlite3 console? The problem is that I have to somehow omit the headers from the CSV file (I cannot delete them manually).
From the sqlite3 doc on CSV Import:
There are two cases to consider: (1) Table "tab1" does not previously
exist and (2) table "tab1" does already exist.
In the first case, when the table does not previously exist, the table
is automatically created and the content of the first row of the input
CSV file is used to determine the name of all the columns in the
table. In other words, if the table does not previously exist, the
first row of the CSV file is interpreted to be column names and the
actual data starts on the second row of the CSV file.
For the second case, when the table already exists, every row of the
CSV file, including the first row, is assumed to be actual content. If
the CSV file contains an initial row of column labels, that row will
be read as data and inserted into the table. To avoid this, make sure
that table does not previously exist.
It is either/or. You will have to outsmart it.
Assuming "I can not delete them manually" means from the csv, not from the table, you could possibly sql delete the header line after the import.
Or: Import into a temp table in the target database, insert into target table from the temp table, drop the temp table.
Or:
connect to an in-memory database
import the CSV into a table
attach the target database
insert into target table from the imported in-memory table
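A sketch of those steps in the sqlite3 shell; data.csv, target.db, temp_import, and tab1 are all placeholder names, and the column layouts are assumed to match:
sqlite3
.mode csv
.import data.csv temp_import
ATTACH DATABASE 'target.db' AS target;
INSERT INTO target.tab1 SELECT * FROM temp_import;
.quit
Because temp_import is created fresh, the CSV's first row becomes its column names, so only the data rows reach tab1.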
Or just add the option --skip 1; see https://www.sqlite.org/cli.html#importing_csv_files
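For example, with a recent sqlite3 CLI (the --skip option was added in version 3.32; data.csv and tab1 are placeholders):
.import --csv --skip 1 data.csv tab1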
I had a table named XXXX. Suddenly, my table's contents were dropped (DROP TABLE). I want to recreate my table; I know its schema, and I have a text file with the table's contents from before it was dropped.
Can I recreate my table without inserting each row individually, since that would take a long time?
Is there an easy way to transfer the contents of the text file to the table, independent of the query language I'm using (MSSQL, PostgreSQL, etc.)?
On PostgreSQL the procedure would be:
CREATE TABLE tablename (your fields);
Then, provided your file uses the right field separators (the default is a tab character in text format and a comma in CSV format), do:
COPY tablename FROM 'path-to-your-file/filename.txt' WITH DELIMITER ',';
The path to your file must be accessible by the PostgreSQL server; /tmp is usually the simplest place to put the file.
Then you get your table back.
I am trying to update a single column of 1 million records in a table, based on the values in a CSV file.
Sample CSV file:
1,Apple
2,Orange
3,Mango
The first column in the file is the PK I will use to filter the records, and the second column is the new value for the column I want to update in the table. The PKs in the CSV file may or may not exist in the DB, though. I was thinking of writing a script to generate a million UPDATE statements based on the file. I would like to know if there is a better way to do this.
Personally, I would:
load the CSV file into a new table using sqlldr
make sure the correct indexes are on the new and existing tables
write ONE update statement to update the existing table from the new one (see the sketch below)
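A sketch of that single statement using Oracle MERGE, assuming a staging table csv_updates(id, new_value) loaded by sqlldr and a target table my_table(id, my_col); all names here are placeholders:
-- update only the rows whose PK appears in the CSV;
-- CSV rows with no matching PK are simply ignored
MERGE INTO my_table t
USING csv_updates s
ON (t.id = s.id)
WHEN MATCHED THEN
  UPDATE SET t.my_col = s.new_value;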
I would:
Create an external table over the CSV
Update the existing table from the external table in just one UPDATE (a sketch follows below)
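A rough sketch of the external table, assuming a DIRECTORY object named csv_dir already points at the folder holding the file; all names and types are placeholders:
CREATE TABLE csv_updates_ext (
  id        NUMBER,
  new_value VARCHAR2(100)
)
ORGANIZATION EXTERNAL (
  TYPE ORACLE_LOADER
  DEFAULT DIRECTORY csv_dir
  ACCESS PARAMETERS (
    RECORDS DELIMITED BY NEWLINE
    FIELDS TERMINATED BY ','
  )
  LOCATION ('updates.csv')
);
The external table can then be queried directly in the same MERGE shown above, with no separate load step.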
I am trying to copy content from a csv-file to an existing but empty table in PostgreSQL.
This is how far I've come:
COPY countries FROM 'C:\Program Files\PostgreSQL\9.5\data\countries-20140629.csv'
DELIMITERS ',' CSV HEADER
The problem I'm experiencing is that the CSV file contains three columns (code, english_name and French_name), whereas my table only consists of two columns (code, english_name).
Adding a third column to my table is not an option.
Is there any way to tell PostgreSQL to only import the first two columns of the csv-file?
The easiest way is to modify your CSV and delete the last column.
You could try specifying the target columns, as the documentation describes. In your case that would be:
COPY countries (code, english_name) FROM 'C:\Program Files\PostgreSQL\9.5\data\countries-20140629.csv' DELIMITERS ',' CSV HEADER
Take a look at the documentation for further help:
PostgreSQL.org
As far as I can see, there is no way to tell Postgres to import only a subset of the CSV file's columns. What you can do is import the CSV file into a temporary table and then transfer the data you want from the temporary table to your final table.
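A sketch of that approach; the staging table name and the text column types are assumptions:
-- temporary staging table matching the full CSV layout
CREATE TEMP TABLE countries_staging (code text, english_name text, french_name text);
COPY countries_staging FROM 'C:\Program Files\PostgreSQL\9.5\data\countries-20140629.csv' DELIMITER ',' CSV HEADER;
-- transfer only the two columns the target table has
INSERT INTO countries (code, english_name)
SELECT code, english_name FROM countries_staging;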
I want to build an SSIS package that imports a flat file with a variable number of columns and stores it in a database table. Further, I will use that table in a left join with a few other tables to check whether a particular column value matches the ones I already have in the database. Then I want to output all non-matching rows as a CSV file in another folder.
So how do I write a For Each loop that imports a flat file with a variable number of columns and stores it in a table? Can a package create a table after it imports a file, or can the imported file be stored in a dynamically created temp table? It doesn't matter if the table is deleted after the package stops.
Thanks
I want to import a new dump file into my old database, which has the same schema. I want to overwrite a record if the corresponding record in the dump file has changed, and I want to add new records. I don't want to delete the existing records in the DB I am importing into.
If you create the dump file using mysqldump, you can give it the option --replace, which generates a dump file containing REPLACE statements instead of INSERT statements.
When this dump file is loaded into MySQL, records that match primary or unique keys in the database will replace the old ones, while records not matching existing keys will be inserted.
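For example (the user name and the source and target database names are placeholders):
mysqldump --replace -u user -p source_db > dump.sql
mysql -u user -p target_db < dump.sql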