When loading an external CSV file with Oracle SQL*Loader, is there a way to map the fields directly to each other in the control file?
At the moment I'm doing a simple load, so the position of the source fields matters. Is there a way to do it otherwise? So instead of:
load data
into table1
fields terminated by "," optionally enclosed by '"'
(destination_field1, destination_field2, destination_field3)
do something like:
load data
into table1
fields terminated by "," optionally enclosed by '"'
(
source_field2 => destination_field1,
source_field1 => destination_field2,
source_field3 => destination_field3
)
Edit:
The main reason is that the order of the columns in the source file can change, so I can't be sure which field will be first, second, etc.
You can apply any data processing by means of Oracle functions in your control file.
For example, this code swaps columns 1 and 2 and additionally converts source_field2 to a number, silently replacing invalid values with NULLs:
load data
append
into table SCHEMA.TABLE
fields terminated by ';' optionally enclosed by '"'
trailing nullcols
(
source_field1 BOUNDFILLER,
source_field2 BOUNDFILLER,
source_field3 BOUNDFILLER,
destination_field1 "to_number(regexp_substr(:source_field2, '^[-0-9,]*'),'9999999999D999','NLS_NUMERIC_CHARACTERS='', ''')",
destination_field2 ":source_field1",
destination_field3 ":source_field3"
)
Related
I have a table with a column of type varchar(32); the value it holds is
'Movies & TV'. This data is loaded by the COPY command. When I query this table like
select * from activity where name='Movies & TV' (typed this value)
it won't return any record. This seems to be because of the & character; something is going on with that character.
When I tried
Select ISUTF8(name) from activity
it returned true, which means the data is actually stored in UTF-8 format.
Select length(name) and length('Movies & TV') also return the same value. However, when I pasted these values into the vi editor, I saw an extra space in the DB string. In addition, the name field in the activity table can contain Chinese characters too, which are currently stored correctly in the DB.
Any idea what is going on here? Should I specify UTF-8 explicitly when loading the data?
If you are doing it from SQL*Plus, use
SET DEFINE OFF
to stop it treating & as a special character.
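For example, a minimal SQL*Plus sketch using the table and value from the question (assuming the query really is being run from SQL*Plus, and restoring the default behaviour afterwards):
SET DEFINE OFF
SELECT * FROM activity WHERE name = 'Movies & TV';
SET DEFINE ON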
An alternative solution: use concatenation and the chr function:
SELECT * FROM activity WHERE name= 'Movies ' || chr(38) || ' TV';
Solution from:
How to insert a string which contains an "&"
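If the stored string is not quite what you expect (for instance a non-breaking space or some other invisible character, which would explain the extra space seen in vi), comparing the raw bytes makes that obvious. A diagnostic sketch, assuming an Oracle database as the SET DEFINE OFF answer does:
-- show the byte values of the stored string next to the literal being compared against
-- (run with SET DEFINE OFF still in effect, or use the chr(38) form above)
SELECT name,
       DUMP(name, 1016)          AS stored_bytes,
       DUMP('Movies & TV', 1016) AS literal_bytes
FROM activity
WHERE name LIKE 'Movies%TV';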
I have a CSV file which contains a column of XML data. A sample record of the CSV looks like:
1,,,<capacidade><numfields>3</numfields><template>1</template><F1><name lang="pt">Apple</name></F1></capacidade>
I'd like to use SQL*Loader to import all the data into Oracle; I defined the ctl file as follows:
LOAD DATA
CHARACTERSET UTF8
INFILE '/home/db2inst1/result.csv'
CONTINUEIF NEXT(1:1) = '#'
INTO TABLE "TEST"."T_DATA_TEMP"
FIELDS TERMINATED BY','
( "ID"
, "SOURCE"
, "CATEGORY"
, "RAWDATA"
)
When running this, the error log shows that the RAWDATA column is treated as the CHARACTER data type. How can I define RAWDATA to be an XMLType in this case so that it can be correctly inserted into Oracle?
Try something like this:
Add a comma at the end of the XML to mark a delimiter (you can't use whitespace, since the XML may contain spaces),
then create your ctl file like the one below:
LOAD DATA
APPEND
INTO TABLE "TEST"."T_DATA_TEMP" fields OPTIONALLY ENCLOSED BY '"'
TRAILING NULLCOLS
( ID terminated by "," ,
SOURCE terminated by ",",
CATEGORY terminated by ",",
RAWDATA char(4000) terminated by ","
)
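If the RAWDATA column in the target table is defined as XMLTYPE, another option is to apply the XMLTYPE() constructor to the field in the control file. A sketch under that assumption, keeping the CHAR(4000) size from the answer above and assuming the XML itself contains no unescaped commas (as in the sample record):
LOAD DATA
CHARACTERSET UTF8
INFILE '/home/db2inst1/result.csv'
APPEND
INTO TABLE "TEST"."T_DATA_TEMP"
FIELDS TERMINATED BY ','
TRAILING NULLCOLS
( "ID",
  "SOURCE",
  "CATEGORY",
  -- build an XMLType instance from the raw text while loading
  "RAWDATA" CHAR(4000) "XMLTYPE(:RAWDATA)"
)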
I recently started working with SQL*Loader and am enjoying the way it works.
We are stuck with a problem: we have to load all the columns of a CSV file (say 10 columns in Excel), but the destination table contains around 15 fields.
FILLER works when you want to skip columns in the source file, but I'm unsure what to do here.
Using a staging table helps, but is there any alternative?
Any help is really appreciated.
Thanks.
You have to specify the columns in the control file
Recommended reading: SQL*Loader Control File Reference
10 The remainder of the control file contains the field list, which provides information about column formats in the table being loaded. See Chapter 6 for information about that section of the control file.
Excerpt from Chapter 6:
Example 6-1 Field List Section of Sample Control File
1 (hiredate SYSDATE,
2 deptno POSITION(1:2) INTEGER EXTERNAL(2)
NULLIF deptno=BLANKS,
3 job POSITION(7:14) CHAR TERMINATED BY WHITESPACE
NULLIF job=BLANKS "UPPER(:job)",
mgr POSITION(28:31) INTEGER EXTERNAL
TERMINATED BY WHITESPACE, NULLIF mgr=BLANKS,
ename POSITION(34:41) CHAR
TERMINATED BY WHITESPACE "UPPER(:ename)",
empno POSITION(45) INTEGER EXTERNAL
TERMINATED BY WHITESPACE,
sal POSITION(51) CHAR TERMINATED BY WHITESPACE
"TO_NUMBER(:sal,'$99,999.99')",
4 comm INTEGER EXTERNAL ENCLOSED BY '(' AND '%'
":comm * 100"
)
In this sample control file, the numbers that appear to the left would not appear in a real control file. They are keyed in this sample to the explanatory notes in the following list:
1 SYSDATE sets the column to the current system date. See Setting a Column to the Current Date.
2 POSITION specifies the position of a data field. See Specifying the Position of a Data Field.
INTEGER EXTERNAL is the datatype for the field. See Specifying the Datatype of a Data Field and Numeric EXTERNAL.
The NULLIF clause is one of the clauses that can be used to specify field conditions. See Using the WHEN, NULLIF, and DEFAULTIF Clauses.
In this sample, the field is being compared to blanks, using the BLANKS parameter. See Comparing Fields to BLANKS.
3 The TERMINATED BY WHITESPACE clause is one of the delimiters it is possible to specify for a field. See TERMINATED Fields.
4 The ENCLOSED BY clause is another possible field delimiter. See Enclosed Fields.
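Applied to the question above: list only the ten fields that actually occur in the CSV, in file order. Table columns that are not mentioned in the field list are simply not loaded (they end up NULL unless the database fills them via a default or trigger), and columns that are missing from the file can still be populated with CONSTANT or SYSDATE. A minimal sketch with hypothetical table and column names:
LOAD DATA
INFILE 'data.csv'
APPEND
INTO TABLE target_table
FIELDS TERMINATED BY ',' OPTIONALLY ENCLOSED BY '"'
TRAILING NULLCOLS
(
  -- the ten fields present in the CSV, in the order they appear in the file
  col1, col2, col3, col4, col5,
  col6, col7, col8, col9, col10,
  -- columns that are not in the file but should still be populated
  load_date     SYSDATE,
  source_system CONSTANT 'CSV'
)
-- the remaining table columns are simply not listed and stay NULL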
We are trying to load a file created by FastExport into an Oracle database.
However, the float column is being exported like this: 1.47654345670000000000 E010.
How do you configure SQL*Loader to import a value in that format?
I expect the control file to look something like this:
OPTIONS(DIRECT=TRUE, ROWS=20000, BINDSIZE=8388608, READSIZE=8388608)
UNRECOVERABLE LOAD DATA
infile 'data/SOME_FILE.csv'
append
INTO TABLE SOME_TABLE
fields terminated by ','
OPTIONALLY ENCLOSED BY '"' AND '"'
trailing nullcols (
FLOAT_VALUE CHAR(38) "???????????????????",
FILED02 CHAR(5) "TRIM(:FILED02)",
FILED03 TIMESTAMP "YYYY-MM-DD HH24:MI:SS.FF6",
FILED04 CHAR(38)
)
I tried to_number('1.47654345670000000000 E010', '9.99999999999999999999 EEEE')
Error: ORA-01481: invalid number format model.
I tried to_number('1.47654345670000000000 E010', '9.99999999999999999999EEEE')
Error: ORA-01722: invalid number
These are the solutions I came up with in order of preference:
to_number(replace('1.47654345670000000000 E010', ' ', ''))
to_number(TRANSLATE('1.47654345670000000000 E010', '1 ', '1'))
I would like to know if there are any better performing solutions.
As far as I'm aware there is no way to have to_number ignore the space, and nothing you can do in SQL*Loader to prepare it. If you can't remove it by pre-processing the file, which you've suggested isn't an option, then you'll have to use a string function at some point. I wouldn't expect it to add a huge amount of processing, above what to_number will do anyway, but I'd always try it and see rather than assuming anything - avoiding the string functions sounds a little like premature optimisation. Anyway, the simplest is possibly replace:
select to_number(replace('1.47654345670000000000 E010',' ',''),
'9.99999999999999999999EEEE') from dual;
or just for display purposes:
column num format 99999999999
select to_number(replace('1.47654345670000000000 E010',' ',''),
'9.99999999999999999999EEEE') as num from dual
NUM
------------
14765434567
You could define your own function to simplify the control file slightly, but not sure it'd be worth it.
Two other options come to mind. (a) Load into a temporary table as a varchar, and then populate the real table using the to_number(replace()); but I doubt that will be any improvement in performance and might be substantially worse. Or (b) if you're running 11g, load into a varchar column in the real table, and make your number column a virtual column that applies the functions.
Actually, a third option... don't use SQL*Loader at all, but define the CSV file as an external table and populate your real table from that. You'll still have to do the to_number(replace()), but you might see a difference in performance over doing it in SQL*Loader. The difference could be that it's worse, of course, but it might be worth trying.
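A sketch of option (b), with hypothetical names: load the raw string into a plain VARCHAR2 column and let an 11g virtual column do the conversion (assuming the TO_NUMBER/REPLACE expression is accepted as a virtual column expression):
CREATE TABLE some_table (
  float_value_raw VARCHAR2(38),
  -- computed by the database from the raw string; cannot be loaded directly
  float_value     NUMBER GENERATED ALWAYS AS (
                    TO_NUMBER(REPLACE(float_value_raw, ' ', ''),
                              '9.99999999999999999999EEEE')
                  ) VIRTUAL
);
The control file would then load only float_value_raw.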
Change the number display width in SQL*Plus with SET NUMWIDTH (SET NUMW for short):
select num from blabla;
result: 1,0293E+15
set numw 20
select num from blabla;
result: 1029301200000021
Here is the solution I went with:
OPTIONS(DIRECT=TRUE, ROWS=20000, BINDSIZE=8388608, READSIZE=8388608)
UNRECOVERABLE LOAD DATA
infile 'data/SOME_FILE.csv'
append
INTO TABLE SOME_TABLE
fields terminated by ','
OPTIONALLY ENCLOSED BY '"' AND '"'
trailing nullcols (
FLOAT_VALUE CHAR(38) "REPLACE(:FLOAT_VALUE,' ','')",
FILED02 CHAR(5) "TRIM(:FILED02)",
FILED03 TIMESTAMP "YYYY-MM-DD HH24:MI:SS.FF6",
FILED04 CHAR(38)
)
In my solution the conversion to a number is implicit:
"REPLACE(:FLOAT_VALUE,' ','')"
In Oracle 11g there is no need to convert the numbers specially.
Just use integer external in the .ctl file:
I tried the following in my Oracle DB:
The field MYNUMBER has type NUMBER.
Inside the .ctl file I used the following definition:
MYNUMBER integer external
In the data file the value is: -1.61290E-03
As for the result: sqlldr loaded the scientific notation correctly, and the MYNUMBER field contains -0.00161290.
I am not sure whether it's a bug or a feature, but it works in Oracle 11g.
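A minimal control file sketch for that approach, with hypothetical file and table names, and assuming the exponent carries no embedded space (unlike the FastExport output above):
LOAD DATA
INFILE 'data/numbers.csv'
APPEND
INTO TABLE some_table
FIELDS TERMINATED BY ','
(
  -- INTEGER EXTERNAL reads the textual number, including the E notation
  MYNUMBER INTEGER EXTERNAL
)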
I'm trying to load a CSV file into my MySQL database, but I would like to skip the first line.
In fact it contains the names of my columns and no interesting data.
Here is the query I'm using:
LOAD DATA LOCAL INFILE '/myfile.csv'
INTO TABLE tableName
FIELDS TERMINATED BY ','
ENCLOSED BY '\"'
LINES TERMINATED BY '\n'
(column,column,column);
LOAD DATA INFILE '/tmp/test.txt' INTO TABLE test IGNORE 1 LINES;
(reference)
For those curious, IGNORE N LINES should be after the separator qualifiers:
LOAD DATA LOCAL INFILE '/myfile.csv'
INTO TABLE tableName
FIELDS TERMINATED BY ','
ENCLOSED BY '\"'
LINES TERMINATED BY '\n'
IGNORE 1 LINES
(column,column,column);
Try this: use IGNORE N LINES, for example:
LOAD DATA INFILE "/path/to/file.csv"
INTO TABLE MYTABLE
COLUMNS TERMINATED BY ','
OPTIONALLY ENCLOSED BY '"'
LINES TERMINATED BY '\n'
IGNORE 1 LINES;