LOAD DATA INFILE 'filename.csv' INTO TABLE table_name
FIELDS TERMINATED BY ',' ENCLOSED BY '"'
LINES TERMINATED BY '\n' IGNORE 1 LINES
(Date,col2,col3,col4,col5,col6,col7,@dummy_variable)
SET dummy_variable = 0
It loads fine, but the date column reads 0000-00-00. The date in the CSV is in dd/mm/yyyy style, and the CSV file can't be changed.
You need to convert the date to a MySQL format with STR_TO_DATE. Something like the following should work.
LOAD DATA INFILE 'filename.csv' INTO TABLE table_name
FIELDS TERMINATED BY ',' ENCLOSED BY '"'
LINES TERMINATED BY '\n' IGNORE 1 LINES
(@myDate,col2,col3,col4,col5,col6,col7,@dummy_variable)
SET dummy_variable = 0, Date = STR_TO_DATE(@myDate,'%d/%m/%Y')
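If you want to sanity-check the format string first (a quick check, not part of the original answer), STR_TO_DATE can be run on a sample value:

SELECT STR_TO_DATE('25/01/2014', '%d/%m/%Y');
-- returns 2014-01-25; a NULL result would indicate a format mismatch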
I have a .csv file with a currency-sign field separator (¤). When I execute this query to bulk load it into a table, it raises an error.
The file is UTF-8 encoded.
BULK INSERT dbo.test
FROM 'file.csv'
WITH (DATA_SOURCE = 'MyAzureBlobStorage',
    FIRSTROW = 2,
    CODEPAGE = 65001,        -- UTF-8 encoding
    FIELDTERMINATOR = '¤',   -- CSV field delimiter
    ROWTERMINATOR = '\n'     -- moves control to the next row
);
The error I get is:
The bulk load failed. The column is too long in the data file for row 1, column 1. Verify that the field terminator and row terminator are specified correctly.
This is working fine with a semicolon as the separator.
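One guess worth trying (an assumption, not confirmed in this thread): in UTF-8 the ¤ sign is the two-byte sequence 0xC2 0xA4, so a terminator given as a single character may not match what SQL Server reads byte by byte. BULK INSERT accepts terminators in hexadecimal notation (documented for row terminators; I'm assuming it also applies to field terminators), so specifying the raw bytes might help:

BULK INSERT dbo.test
FROM 'file.csv'
WITH (DATA_SOURCE = 'MyAzureBlobStorage',
    FIRSTROW = 2,
    CODEPAGE = 65001,
    FIELDTERMINATOR = '0xC2A4', -- ¤ as its raw UTF-8 byte sequence (untested assumption)
    ROWTERMINATOR = '0x0A'
);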
I'm developing a solution where I'll receive a spool file and need to insert it into a table. I always use SQL*Loader and it fits well, but I've never used it with dates. I'm getting the error shown below.
Control File
OPTIONS (ERRORS=999999999, ROWS=999999999)
load data
infile 'spool.csv'
append
into table A_CONTROL
fields terminated by ","
TRAILING NULLCOLS
(
AStatus,
ASystem,
ADate,
AUser
)
spool.csv
foo,bar,2015/01/12 13:22:21,User
But when I run the loader, I get this error:
Column Name Position Len Term Encl Datatype
------------------------------ ---------- ----- ---- ---- ---------------------
AStatus FIRST * , CHARACTER
ASystem NEXT * , CHARACTER
ADate NEXT * , CHARACTER
AUser NEXT * , CHARACTER
Record 1: Rejected - Error on table A_CONTROL, column ADate.
ORA-01861: literal does not match format string
Table A_CONTROL:
0 Rows successfully loaded.
1 Row not loaded due to data errors.
0 Rows not loaded because all WHEN clauses were failed.
0 Rows not loaded because all fields were null.
Convert the string to a date for insertion.
OPTIONS (ERRORS=999999999, ROWS=999999999)
load data
infile 'spool.csv'
append
into table A_CONTROL
fields terminated by ","
TRAILING NULLCOLS
(
AStatus,
ASystem,
ADate "TO_DATE(:ADate,'YYYY/MM/DD HH24:MI:SS')",
AUser
)
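To verify the format mask against the sample row before rerunning the load, a quick check in SQL*Plus:

SELECT TO_DATE('2015/01/12 13:22:21', 'YYYY/MM/DD HH24:MI:SS') FROM dual;
-- ORA-01861 here would mean the mask still doesn't match the data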
I designed a table with a column whose data contains the \n character (I used it as the separator instead of a comma or anything else). The \n characters must be saved correctly, because after loading the table into a DataTable object I can split the values into string arrays on '\n', like this:
DataTable dt = LoadTable("myTableName");
DataRow dr = dt.Rows[0]; // suppose this row holds data containing the \n character
// This gives the result I expect, e.g. an array of 2 or 3 strings,
// depending on what I saved before:
string[] s = dr["myColumn"].ToString().Split(new char[]{'\n'}, StringSplitOptions.RemoveEmptyEntries);
That means '\n' does exist in my table column. But when I tried to select only the rows whose myColumn contains the \n character, no rows were returned:
-- use CHARINDEX
SELECT * FROM MyTable WHERE CHARINDEX('\n',MyColumn,0) > 0
-- use LIKE
SELECT * FROM MyTable WHERE MyColumn LIKE '%\n%'
Are my queries wrong? I've also tested with '\r\n' and '\r', but the result was the same. How can I detect which rows contain the '\n' character in my table? I need this to select the rows I want (choosing '\n' as the separator was my design decision).
Thank you very much in advance!
Since \n is the ASCII linefeed character, try this:
SELECT *
FROM MyTable
WHERE MyColumn LIKE '%' || X'0A' || '%'
Sorry this is just a guess; I don't use SQLite myself.
Maybe you should just look for carriage returns, if you aren't storing the literal "\n" in the field. Something like:
SELECT *
FROM table
WHERE column LIKE '%
%'
or:
select * from table where column like '%'+char(13)+'%' or column like '%'+char(10)+'%'
(Not sure whether char(13) and char(10) work in SQLite.)
UPDATED: I just found someone's solution here; they recommend replacing the carriage returns. So if you want to strip the returns, you could do:
update yourtable set yourCol = replace(yourcol, '
', ' ');
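The same cleanup can be written without embedding a literal newline in the query (a variant sketch; the hex-cast trick is the one the accepted approach further down relies on):

update yourtable set yourCol = replace(yourCol, CAST(x'0A' AS text), ' ');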
The following should do it for you:
SELECT *
FROM your_table
WHERE your_column LIKE '%' + CHAR(10) + '%'
If you want to test for a carriage return, use CHAR(13) instead, or combine them.
I've found a solution myself. At the moment there is no dedicated function to convert an ASCII code to a character in SQLite (the CHAR function is not supported, and using '\n' or '\r' directly doesn't work). But we can convert by using the CAST function and passing a hex string (marked by prepending X or x to the string), like this:
-- use CHARINDEX
SELECT * FROM MyTable WHERE CHARINDEX(CAST(x'0A' AS text),MyColumn,0) > 0
-- use LIKE
SELECT * FROM MyTable WHERE MyColumn LIKE '%' || CAST(x'0A' AS text) || '%'
The hex string '0A' equals 10 in ASCII, which is '\n' (line feed). I also tried '0D' (13, '\r', carriage return), but it didn't match; that makes sense, since the separator I stored was '\n'.
Hope this helps others! Thanks!
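For completeness (an addition, assuming a reasonably recent SQLite, 3.7.15 or later, where the char() and instr() built-ins exist), the same check can be written without hex literals:

-- char(10) builds a one-character string from the LF code point
SELECT * FROM MyTable WHERE instr(MyColumn, char(10)) > 0;
-- or with LIKE
SELECT * FROM MyTable WHERE MyColumn LIKE '%' || char(10) || '%';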
I have the following query:
SELECT first, last, title, email, org
FROM people WHERE email <> ""
INTO OUTFILE 'C:/testfile.csv' FIELDS TERMINATED BY ',' ENCLOSED BY '"' LINES TERMINATED BY '\n'
This works. Now I need to select distinct emails (I don't want multiple entries for the same email). Would something like this work?
SELECT first, last, title, distinct(email), org
FROM people WHERE email <> ""
INTO OUTFILE 'C:/testfile.csv' FIELDS TERMINATED BY ',' ENCLOSED BY '"' LINES TERMINATED BY '\n'
Which platform/version of SQL are you using? Typically this would be done with a GROUP BY clause. Something like:
SELECT first, last, title, email, org
FROM people
WHERE email <> ""
GROUP BY email
INTO OUTFILE 'C:/testfile.csv' FIELDS TERMINATED BY ',' ENCLOSED BY '"' LINES TERMINATED BY '\n'
The above will actually work on some platforms/versions of SQL, but the "correct" (standard SQL) way to do it is as follows (of course, if the other fields differ for the same email, you get undefined results):
SELECT max(first), max(last), max(title), email, max(org)
FROM people
WHERE email <> ""
GROUP BY email
INTO OUTFILE 'C:/testfile.csv' FIELDS TERMINATED BY ',' ENCLOSED BY '"' LINES TERMINATED BY '\n'
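To see why the results can be "undefined", consider a hypothetical pair of rows sharing an email (illustrative data, not from the original question):

-- ('Ann','Smith','Dr','a@x.com','Acme') and ('Bob','Jones','Mr','a@x.com','Beta')
-- max(first) = 'Bob' but max(last) = 'Smith': each max() is evaluated
-- independently, so the output row may mix values from different source rows.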
I'm about to import a large (500 MB) *.csv file into a MySQL database. This is as far as I've got:
LOAD DATA INFILE '<file>'
REPLACE
INTO TABLE <table-name>
FIELDS
TERMINATED BY ';'
OPTIONALLY ENCLOSED BY '"'
IGNORE 1 LINES ( #Header
<column-name1>,
<column-name2>,
...
);
I have a problem with one of the columns (its data type is INT); I get an error message:
Error Code: 1366 Incorrect integer value: ' ' for column at row
I looked at the offending line in the *.csv file. The cell that causes the error contains just a whitespace (like this: ...; ;...).
How can I tell MySQL to ignore the whitespace in this column?
As the *.csv file is very big, and I'll have to import even bigger ones afterwards, I'd like to avoid editing the *.csv file; I'm looking for a SQL solution.
Add a SET clause, like so:
LOAD DATA INFILE 'file.txt'
INTO TABLE t1
(column1, @var1)
SET column2 = @var1/100;
You need to replace the @var1/100 with an expression that handles the space and converts it to NULL or 0 or 42... whatever default fits your data.
This answer was originally included in the question as an edit by @speendo; I have converted it into a proper answer.
The solution is:
LOAD DATA INFILE '<file>'
REPLACE
INTO TABLE <table-name>
FIELDS
TERMINATED BY ';'
OPTIONALLY ENCLOSED BY '"'
IGNORE 1 LINES ( #Header
<column-name1>,
<column-name2>,
@var1, #the variable that causes the problem
...
)
SET <column-name-of-problematic-column> = CASE
WHEN @var1 = ' ' THEN NULL
ELSE @var1
END
;
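A slightly more general form of the same idea (an alternative sketch, not from the original answer): TRIM plus NULLIF collapses any surrounding spaces and turns an all-whitespace field into NULL in one expression:

SET <column-name-of-problematic-column> = NULLIF(TRIM(@var1), '')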