Commit point reached - SQL*Loader

Hi, I'm trying to load some data into an Oracle table I created.
Here's the table I created in the Vivek schema:
DESC STAR
Name          Null?     Type
------------  --------  -------------
STAR_ID       NOT NULL  NUMBER(4)
FIRST_NAME              VARCHAR2(30)
LAST_NAME               VARCHAR2(30)
DOB                     DATE
SEX                     CHAR(1)
NATIONALITY             VARCHAR2(40)
ALIVE                   CHAR(1)
Following is the data in the STAR5.CSV file that I am trying to upload using SQL*Loader:
10,NASEERUDDIN,SHAH,M,INDIAN,Y
11,DIMPLE,KAPADIA,F,INDIAN,Y
The control file is as follows:
load data
infile '/home/oracle/host/Vivek12/STAR_DATA5.csv'
append
into table vivek.STAR
fields terminated by ","
( STAR_ID,FIRST_NAME,LAST_NAME,SEX,NATIONALITY,ALIVE )
When I run SQL*Loader with the following command:
$ sqlldr vivek/password control=/home/oracle/sqlldr_add_new.ctl
I get the message:
Commit point reached - logical record count 2
However, the data is not loaded and the records are put in the file STAR5.bad.
Any idea why the data isn't getting loaded?

You most likely have an "invisible" character at the end of each line. Perhaps you're executing this on Linux and the file was created on Windows, so you've got an extra carriage return; Linux uses only the line feed as the line terminator.
Change your ctl file to remove terminating whitespace:
load data
infile '/home/oracle/host/Vivek12/STAR_DATA5.csv'
append
into table vivek.STAR
fields terminated by ","
( STAR_ID
, FIRST_NAME
, LAST_NAME
, SEX
, NATIONALITY
, ALIVE terminated by whitespace
)
If this doesn't work, then you're going to need to work out what characters are there and replace them.
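For example, on Linux one quick way to see what is actually at the end of each record is to dump the raw bytes of the data file (using the path from the control file above):
od -c /home/oracle/host/Vivek12/STAR_DATA5.csv | head
A file saved on Windows will show each record ending in \r \n rather than just \n; in that case you can either strip the carriage returns (for example with dos2unix) or keep the "terminated by whitespace" fix shown above.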

Related

External table: how to delete newline char from the end of each row

I have a problem loading rows from a file. The issue is that I'm using an external table like this:
create table table_name
(
id VARCHAR2(60)
)
organization external
(
type ORACLE_LOADER
default directory DIRECTORY
access parameters
(
RECORDS DELIMITED BY NEWLINE CHARACTERSET EE8MSWIN1250 nobadfile nodiscardfile
FIELDS TERMINATED BY ";" OPTIONALLY ENCLOSED BY '\"' LDRTRIM
REJECT ROWS WITH ALL NULL FIELDS
(
ID VARCHAR2(60)
)
)
location ('tmp.txt')
)
reject limit 0;
All my rows have the newline byte at the end of the row; the only thing that works is to update all rows after loading the data from the file:
update table_name
set id = translate (id, 'x'||CHR(10)||CHR(13), 'x');
How can I make this happen automatically?
Check exactly what newline characters are in your file and then define the record delimiter explicitly.
Example
records delimited by '\r\n'
The probable cause of your problem is that the file's newline characters don't match your operating system's convention, which is something you could also address at the source.
The file may have the line delimiter as either \n or \r\n.
You can check by opening the file in Notepad++ (or any other editor that supports it) and clicking "Show All Characters".
Based on how the data in the file looks, you may create the external table with
RECORDS DELIMITED BY '\r\n' or
RECORDS DELIMITED BY '\n' etc.
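Putting that together, here is a sketch of the original table with the record delimiter made explicit (assuming the file really ends each line with CR+LF):
create table table_name
(
id VARCHAR2(60)
)
organization external
(
type ORACLE_LOADER
default directory DIRECTORY
access parameters
(
RECORDS DELIMITED BY '\r\n' CHARACTERSET EE8MSWIN1250 nobadfile nodiscardfile
FIELDS TERMINATED BY ";" OPTIONALLY ENCLOSED BY '\"' LDRTRIM
REJECT ROWS WITH ALL NULL FIELDS
(
ID VARCHAR2(60)
)
)
location ('tmp.txt')
)
reject limit 0;
With the delimiter matched to the file, the trailing CHR(13) never reaches the id column and the post-load UPDATE with translate becomes unnecessary.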

ORA-29913: error in executing ODCIEXTTABLEOPEN callout

I am creating an external table using the HR schema, but I get these errors:
"ORA-29913: error in executing ODCIEXTTABLEOPEN callout ORA-29400:
data cartridge error KUP-00554: error encountered while parsing access
parameters KUP-01005: syntax error: found "missing": expecting one of:
"column, (" KUP-01007: at line 4 column 3
29913. 00000 - "error in executing %s callout"
*Cause: The execution of the specified callout caused an error.
*Action: Examine the error messages take appropriate action."
----------------My Code-------------------
create directory ex_tab as 'C:\My Works\External Table';
create table strecords (
st_id number(4),
st_name varchar(10),
schl_name varchar(5),
st_city varchar(15),
st_year number(4)
)
ORGANIZATION EXTERNAL
(TYPE oracle_loader
DEFAULT DIRECTORY ex_tab
ACCESS PARAMETERS
(
RECORDS DELIMITED BY newline
FIELDS TERMINATED BY ','
REJECT ROWS WITH ALL NULL FIELDS
MISSING FIELDS VALUES ARE NULL
(
st_id number(4),
st_name char(10),
schl_name char(5),
st_city char(15),
st_year number(4)
)
)
LOCATION ('strecords.txt')
);
desc strecords;
select * from strecords;
This is my code; please check and review it.
You have several issues here. The immediate one causing your problem is that you have the clauses in the wrong order, but you also have MISSING FIELDS instead of MISSING FIELD:
...
ACCESS PARAMETERS
(
RECORDS DELIMITED BY newline
FIELDS TERMINATED BY ','
MISSING FIELD VALUES ARE NULL
REJECT ROWS WITH ALL NULL FIELDS
(
...
Then, your field list has invalid data types for that part of the statement; you can just omit the list entirely in this case, as the fields match the table column definitions.
If no field list is specified, then the fields in the data file are assumed to be in the same order as the fields in the external table.
So you can simplify it to:
create table strecords (
st_id number(4),
st_name varchar(10),
schl_name varchar(5),
st_city varchar(15),
st_year number(4)
)
ORGANIZATION EXTERNAL
(TYPE oracle_loader
DEFAULT DIRECTORY ex_tab
ACCESS PARAMETERS
(
RECORDS DELIMITED BY newline
FIELDS TERMINATED BY ','
MISSING FIELD VALUES ARE NULL
REJECT ROWS WITH ALL NULL FIELDS
)
LOCATION ('strecords.txt')
);
This behaviour is also covered by an Oracle support note:
Select From External Table Returns Errors ORA-29913 ORA-29400 KUP-554 KUP-1005 (Doc ID 302672.1)
When creating an external table, the access parameters should be specified in the following order:
1. Comments
2. Record Format Info
3. Field definitions
Even inside the Record Format Info, the 'RECORDS DELIMITED BY ...' clause should come before any other clause.
For more information, refer to the access_parameters clause.
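Schematically, the required order looks like this (a sketch using only the clauses from the corrected statement above; note that comment lines themselves must come first, which this sketch respects):
ACCESS PARAMETERS
(
-- 1. comments: lines starting with two hyphens, which must appear first
-- 2. record format info: the RECORDS clause, which must precede everything below
-- 3. field definitions: the FIELDS clause and its sub-clauses, last
RECORDS DELIMITED BY newline
FIELDS TERMINATED BY ','
MISSING FIELD VALUES ARE NULL
REJECT ROWS WITH ALL NULL FIELDS
)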

SQL Loader - Invalid Number

I'm trying to load data via SQL*Loader, but it gives me an "invalid number" error on the numeric field.
My Data File:
00163604~12002~S~N~N~Panasonic Juicer 1.5l Steel Color~ss~E~A~12/15/2014 3:33:57 PM~N~N~N~Y~294~SA
Control File:
LOAD DATA
INFILE "/home/dmf/ITEMLOC.txt"
APPEND
INTO TABLE DMF.MIG_ITEM_LC
FIELDS TERMINATED BY "~"
TRAILING NULLCOLS
(
ITEM "SUBSTRB(:ITEM,1,25)",
LOC "TO_NUMBER(:LOC)",
LOC_TYPE "SUBSTRB(:LOC_TYPE,1,1)",
CLEAR_IND "SUBSTRB(:CLEAR_IND,1,1)",
TAXABLE_IND "SUBSTRB(:TAXABLE_IND,1,1)",
LOCAL_ITEM_DESC "SUBSTRB(:LOCAL_ITEM_DESC,1,250)",
LOCAL_SHORT_DESC "SUBSTRB(:LOCAL_SHORT_DESC,1,120)",
STORE_ORD_MULT "SUBSTRB(:STORE_ORD_MULT,1,1)",
STATUS_UPDATE_DATE sysdate,
STATUS "SUBSTRB(:STATUS,1,1)",
STORE_PRICE_IND "SUBSTRB(:STORE_PRICE_IND,1,1)",
RPM_IND "SUBSTRB(:RPM_IND,1,1)",
EXT_UIN_IND "SUBSTRB(:EXT_UIN_IND,1,1)",
RANGED_IND "SUBSTRB(:RANGED_IND,1,1)",
PRIMARY_SUPP "TO_NUMBER(:PRIMARY_SUPP)", -- The Error is coming here
PRIMARY_CNTRY "SUBSTRB(:PRIMARY_CNTRY,1,3)"
)
Rejected - Error on table DMF.MIG_ITEM_LC, column PRIMARY_SUPP.
ORA-01722: invalid number
If I give it a constant value instead, it loads successfully.
What could be the issue?
Your data as posted loads fine for me.
SQL> select version from v$instance;
VERSION
-----------------
11.2.0.2.0
Here's the create table statement I used:
create table test1
(
item varchar2(25),
loc number,
loc_type char(1),
clear_ind char(1),
taxable_ind char(1),
local_item_desc varchar2(250),
local_short_desc varchar2(120),
store_ord_mult char(1),
status char(1),
store_price_ind char(1),
rpm_ind char(1),
ext_uin_ind char(1),
ranged_ind char(1),
primary_supp number,
primary_cntry varchar2(3)
);
If you get this error while trying to load only the one record that you posted, then I would suspect an unprintable character, as @Gary_W suggested. View the data with a hex viewer to check.
A character set difference between the file and your NLS_LANG setting could be at fault, but I doubt it in this case, since your data looks to be all ASCII values.
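If a hex viewer does reveal a stray byte (say, a trailing carriage return) in that field, one workaround is to strip it in the control file before the conversion. This is just a sketch, not something the original poster confirmed:
PRIMARY_SUPP "TO_NUMBER(TRIM(REPLACE(:PRIMARY_SUPP, CHR(13))))",
REPLACE with two arguments removes every occurrence of the search string, and TRIM drops any surrounding spaces, so only the digits reach TO_NUMBER.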

import csv file to oracle database

I am trying to import data from a TXT file into a table. The TXT file has 5 records.
'ext.txt' is my file and 'IMPORT' is a directory object.
The records are:
7499,ALLEN,SALESMAN,30
7521,WARD,SALESMAN,30
7566,JONES,MANAGER,20
7654,MARTIN,SALESMAN,30
I tried the query below, but it only inserts the 3rd record into the external table.
Can anyone give me the reason for this and a solution to insert all rows?
create table ext_tab (
empno CHAR(4),
ename CHAR(20),
job1 CHAR(20),
deptno CHAR(2)
)
ORGANIZATION EXTERNAL (
TYPE ORACLE_LOADER
DEFAULT DIRECTORY IMPORT
ACCESS PARAMETERS (
RECORDS DELIMITED BY NEWLINE
BADFILE IMPORT:'test.bad'
LOGFILE IMPORT:'test.log'
FIELDS TERMINATED BY ',' (
empno char(4) ,
ename char(4),
job1 CHAR(20),
deptno CHAR(2)
)
)
LOCATION (import:'ext.txt')
)
PARALLEL 5
REJECT LIMIT UNLIMITED;
This will work for your given test data:
CREATE TABLE ext_tab (
empno VARCHAR(4),
ename VARCHAR(20),
job1 VARCHAR(20),
deptno VARCHAR(2)
)
ORGANIZATION EXTERNAL (
DEFAULT DIRECTORY import
ACCESS PARAMETERS (
RECORDS DELIMITED BY NEWLINE
FIELDS TERMINATED BY ','
)
LOCATION ('ext.txt')
);
provided that you have set up the IMPORT directory correctly.
Tested on 11.2.0.4
As for the reason, it was given in the comments above. Always check the .bad and .log files first (created in the directory you put the data file in); they are very helpful at telling you why rows are rejected.
I expect you have errors like this in the log:
KUP-04021: field formatting error for field ENAME
KUP-04026: field too long for datatype
KUP-04101: record 1 rejected in file /import_dir/ext.txt
KUP-04021: field formatting error for field ENAME
KUP-04026: field too long for datatype
KUP-04101: record 3 rejected in file /import_dir/ext.txt
KUP-04021: field formatting error for field ENAME
KUP-04026: field too long for datatype
KUP-04101: record 4 rejected in file /import_dir/ext.txt
and that only WARD was imported because only his name fits into CHAR(4).
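Alternatively, if you want to keep the BADFILE/LOGFILE clauses and the explicit field list from the original statement, the minimal fix is to widen ENAME in the access parameters (a sketch based on the original DDL):
ACCESS PARAMETERS (
RECORDS DELIMITED BY NEWLINE
BADFILE IMPORT:'test.bad'
LOGFILE IMPORT:'test.log'
FIELDS TERMINATED BY ',' (
empno CHAR(4),
ename CHAR(20), -- was CHAR(4), too short for ALLEN, JONES and MARTIN
job1 CHAR(20),
deptno CHAR(2)
)
)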

How do I upload a key=value format file into a Hive table?

I am new to data engineering, so this might be a basic question, appreciate your help here.
I have a file which is in the following format -
first_name=A1 last_name=B1 city=Austin state=TX Zip=78703
first_name=A2 last_name=B2 city=Seattle state=WA
Note: No zip code available for the second row.
I need to upload this into Hive, in the following format:
First_name Last_name City State Zip
A1 B1 Austin TX 78703
A2 B2 Seattle WA NULL
Thanks for your help!!
I figured out a way to do this in Hive. The idea is to first load the entire data set into a single-column table with one row per input line, and then parse out the keys in a second step using the str_to_map function.
Step 1: Load all the data into a one-column table. Pick a field delimiter that you are sure does not occur in your data, so each whole line lands in the single column (\002 in this case):
DROP TABLE IF EXISTS kv_001;
CREATE EXTERNAL TABLE kv_001 (
col_import string
)
ROW FORMAT DELIMITED FIELDS TERMINATED BY '\002'
LOCATION 's3://location/directory/';
Step 2: Using the str_to_map function, extract the keys that are needed:
DROP TABLE IF EXISTS required_table;
CREATE TABLE required_table
(first_name STRING
, last_name STRING
, city STRING
, state STRING
, zip INT);
INSERT OVERWRITE TABLE required_table
SELECT
params["first_name"] AS first_name
, params["last_name"] AS last_name
, params["city"] AS city
, params["state"] AS state
, params["zip"] AS zip
FROM
(SELECT str_to_map(col_import, '\001', '=') params FROM kv_001) A;
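Before running the INSERT, you can sanity-check the parsing with a quick query (a sketch, assuming the pairs really are separated by single spaces as in the sample):
SELECT str_to_map(col_import, ' ', '=') AS params FROM kv_001 LIMIT 2;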
You can transform your file with a Python 3 script and then upload the result to a Hive table.
Try these steps.
An example script:
import sys

# read key=value records from stdin and emit CSV
for line in sys.stdin:
    line = line.split()
    res = []
    for item in line:
        res.append(item.split("=")[1])
    # pad the missing zip field on short rows
    if len(line) == 4:
        res.append("NULL")
    print(",".join(res))
This works as long as only the zip field can be empty.
To apply it, use something like:
cat file | python3 script.py > output.csv
Then upload this file to HDFS using:
hadoop fs -copyFromLocal ./output.csv hdfs:///tmp/
And create the table in Hive using:
CREATE TABLE my_table
(first_name STRING, last_name STRING, city STRING, state STRING, zip STRING)
ROW FORMAT DELIMITED
FIELDS TERMINATED BY ',' STORED AS TEXTFILE;
LOAD DATA INPATH '/tmp/output.csv'
OVERWRITE INTO TABLE my_table;
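A quick check that the load worked:
SELECT * FROM my_table LIMIT 5;
One caveat: the script writes the literal string NULL, which Hive stores as text rather than a true SQL NULL unless the table's serialization.null.format property is set to 'NULL'.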