My database is hosted on a server to which I can only issue DML statements.
Is there an SQL command (for Oracle) that I could use to fill a table with the entries from a CSV file? The columns of the CSV file and the table are the same, but if there is a version of the command that lets me decide which field from the file goes to which column, that would be even better.
Also, I cannot install anything besides Oracle SQL Developer, so what I need is SQL code that I can run from there. I believe that SQL*Loader and external tables don't help in this situation.
Use an Oracle external table:
-- create an Oracle directory object pointing to the directory where your file resides;
-- the external table below will read the CSV data through it
create directory ext_data_files as 'C:\';
create table teachers_ext (
  first_name   varchar2(15),
  last_name    varchar2(15),
  phone_number varchar2(12)
)
organization external (
  type oracle_loader
  default directory ext_data_files
  access parameters (fields terminated by ',')
  location ('teacher.csv')
)
reject limit unlimited
/
Your CSV file would look like this:
John,Smith,8737493
Foo, Bar, 829823832
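Once the external table exists, a minimal sketch of the final load (assuming a hypothetical target table named teachers with the same three columns):
-- teachers is a made-up target table name for illustration
insert into teachers (first_name, last_name, phone_number)
select first_name, last_name, phone_number
from teachers_ext;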
Directly copied from the Oracle 9i documentation:
CREATE TYPE student_type AS object (
student_no CHAR(5),
name CHAR(20))
/
CREATE TABLE roster (
student student_type,
grade CHAR(2));
Also assume there is an external table defined as follows:
CREATE TABLE roster_data (
student_no CHAR(5),
name CHAR(20),
grade CHAR(2))
ORGANIZATION EXTERNAL (TYPE ORACLE_LOADER DEFAULT DIRECTORY ext_tab_dir
ACCESS PARAMETERS (FIELDS TERMINATED BY ',')
LOCATION ('foo.dat'));
To load table roster from roster_data, you would specify something
similar to the following:
INSERT INTO roster (student, grade)
(SELECT student_type(student_no, name), grade FROM roster_data);
The external table access driver (a.k.a. ORACLE_LOADER) accepts a bunch of options to handle many different cases: fixed-width fields, CSV, endianness (for binary data), separators, and so on. Once again, see the documentation for the details.
... if there is a version of the command where I could decide which field from the file goes to which column it would be even better.
As you will have understood by now, external tables are handled like any other table, so you can re-order columns and/or perform calculations on the fly in your INSERT ... SELECT ... statement, as sketched below.
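For instance, a hypothetical variation on the roster example above that swaps the column order and upper-cases the name on the way in:
-- column order differs from the table definition, and UPPER() is applied on the fly
INSERT INTO roster (grade, student)
(SELECT grade, student_type(student_no, UPPER(name)) FROM roster_data);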
Related
Is it possible to read data from a file to supply the data for an IN clause?
SQL> SELECT a,b from TABLE123 where type=10 and values IN('file.txt');
The file.txt has a list of values.
I cannot use a subquery because the table on which the subquery is to be applied is on a different database.
EDIT: I would prefer not to create a temporary table
Assuming that you have copied the "file.txt" file to the Oracle server (under the 'ext_tab_data' directory):
CREATE TABLE countries_ext (
  country_code     VARCHAR2(5),
  country_name     VARCHAR2(50),
  country_language VARCHAR2(50)
)
ORGANIZATION EXTERNAL (
  TYPE ORACLE_LOADER
  DEFAULT DIRECTORY ext_tab_data
  ACCESS PARAMETERS (
    RECORDS DELIMITED BY NEWLINE
    FIELDS TERMINATED BY ','
    MISSING FIELD VALUES ARE NULL
    (
      country_code     CHAR(5),
      country_name     CHAR(50),
      country_language CHAR(50)
    )
  )
  LOCATION ('Countries1.txt','Countries2.txt')
)
PARALLEL 5
REJECT LIMIT UNLIMITED;
Please find the details in the Oracle documentation on external tables.
Here is your SQL:
SELECT a,b from TABLE123
where type=10
and values IN(select country_code from countries_ext);
PS: of course you can replace your files, which would replace the contents of your external table...
Directly as stated, no. Somewhere a table-like entity must be defined.
If you don't mind copying and editing your text file, you can copy file.txt to file.sql, add SELECT a,b from TABLE123 where type=10 and values IN( at the beginning of the file, append ); at the end, and add commas and quotes as needed to each line of the file.
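For illustration, if file.txt held three hypothetical values, the edited file.sql would end up looking like:
SELECT a,b from TABLE123 where type=10 and values IN(
'value1',
'value2',
'value3'
);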
Then from SQL*Plus you can just run the file:
SQL> @file.sql
Otherwise no, there's no way to do it without temporarily getting the file data into a table of some sort. @MaxU referenced the method I would choose to use.
I am stuck in a situation where I need to insert data into a BLOB column by reading a file from the filesystem in DB2 (DB2 Express-C on Windows 7).
Somewhere on the internet I found INSERT INTO ... VALUES ( ..., readfile('filename'), ...);, but readfile is not a built-in function here; I would need to create it as a UDF (using C language libraries), which might not be a useful solution.
Can somebody explain how to insert BLOB values using the INSERT command?
You can also insert BLOB values by casting character data, which is stored as the corresponding hex values:
CREATE TABLE BLOB_TEST (COL1 BLOB(50));
INSERT INTO BLOB_TEST VALUES (CAST('test' AS BLOB));
SELECT COL1 FROM BLOB_TEST;
DROP TABLE BLOB_TEST;
This gives the following result:
COL1
-------------------------------------------------------------------------------------------------------
x'74657374'
1 record(s) selected.
1) You could use LOAD or IMPORT via ADMIN_CMD. This way, you can use SQL to call the administrative stored procedure that invokes the tool; IMPORT or LOAD can read files and put their contents into a row (see the sketch after this list).
You can also wrap this process with a temporary table: read the binary data from the file, insert it into the temporary table, and then read it back from the table.
2) You can create an external stored procedure or UDF implemented in Java or C that reads the data and then inserts it into the row.
I have not tried it, but you can also use the built-in modules that handle LOBs: http://www-01.ibm.com/support/knowledgecenter/SSEPGG_10.5.0/com.ibm.db2.luw.apdv.sqlpl.doc/doc/r0055115.html
These are only available in DB2 LUW since version 9.7.
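A minimal sketch of option 1), assuming a hypothetical table MY_TABLE with a single BLOB column; the delimited file and the LOB files must reside on the database server, and the file paths here are made up:
-- data.del lists the LOB file name per row; LOBSINFILE tells IMPORT
-- to read the BLOB content from that companion file
CALL SYSPROC.ADMIN_CMD(
  'IMPORT FROM /tmp/data.del OF DEL
   LOBS FROM /tmp/
   MODIFIED BY LOBSINFILE
   INSERT INTO MY_TABLE');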
I've succeeded in doing this by using IBM Data Studio with the following query:
INSERT INTO MY_TABLE (BLOB_COLUMN) values (?);
And selecting a file from the pop-up dialog.
Somehow, the same method in RAD 8 doesn't offer an option to load a BLOB column this way.
First and foremost, per the IBM docs (this applies to DB2 for z/OS), all LOB data must have the following corresponding items in addition to a LOB column defined in a table. See the docs for example CREATE statements, and the sketch after this list.
A LOB table space (one for every LOB column in each partition)
An auxiliary table on the above table space that points to the BLOB column in the base table (again, one for every LOB column in each partition)
A unique index on the auxiliary table
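A hypothetical sketch of those three statements for the EMPLOYEES example below (all object names are made up; see the IBM docs for the full syntax):
-- one LOB table space per LOB column
CREATE LOB TABLESPACE EMPPICTS IN MYDB;
-- auxiliary table that stores the BLOB column of the base table
CREATE AUXILIARY TABLE EMPLOYEE_PIC_AUX
  IN MYDB.EMPPICTS
  STORES EMPLOYEES COLUMN EMPLOYEE_PIC;
-- index on an auxiliary table takes no column list
CREATE UNIQUE INDEX XEMP_PIC ON EMPLOYEE_PIC_AUX;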
Once this schema is prepared, you can run a LOAD command that imports the other data fields along with BLOB content referenced by file paths. Below is a demo with an Employees table:
DB Table (example table)
CREATE TABLE EMPLOYEES (
ID INTEGER NOT NULL GENERATED ALWAYS AS IDENTITY,
EMPLOYEE_NUMBER INTEGER,
EMPLOYEE_NAME VARCHAR(255),
EMPLOYEE_PIC BLOB(500K)
);
CSV FILE (comma being default delimiter in LOAD with no headers)
1234, "John Doe", johndoe.jpg
5678, "Jane Doe", janedoe.jpg
...
DB2 LOAD (simple version using defaults for many other LOAD parameters)
LOAD FROM "/path/to/file.csv"
OF DEL
LOBS FROM /path/to/picture/folder/ --PATH OF BLOB FILES WITH BASENAME IN CSV
--MUST END IN FORWARD SLASH
MODIFIED BY LOBSINFILE CHARDEL""
DUMPFILE="/path/to/dump.txt" --FOR FAILED IMPORTS
METHOD P (1,2,3) --NUMBER REFERENCE OF COLS, OR USE N FOR FIELD NAMES
MESSAGES "/path/to/messages.txt" --FOR LOAD COMMAND MESSAGES
REPLACE INTO "EMPLOYEES" --REMOVES EXISTING FOR IMPORT, OR USE INSERT TO ADD
(EMPLOYEE_NUMBER,
EMPLOYEE_NAME,
EMPLOYEE_PIC);
Command lines
> db2 -tvf "/path/to/load_command.sql"
> db2 "SELECT LENGTH(EMPLOYEE_PIC) FROM EMPLOYEES"
DB2 SQL query to insert a JPG file into a table:
create table table_name(column_name BLOB); -- BLOB is the data type
insert into table_name(column_name) values(blob('c:\data\winter.jpg'));
-- c:\data\ is the path and winter.jpg is the image name
I'm trying to create a bucketed table in Hive using the following commands:
hive> create table emp( id int, name string, country string)
clustered by( country)
row format delimited
fields terminated by ','
stored as textfile ;
The command executes successfully; when I load data into this table and run select * from emp, all the data is shown.
However, on HDFS only one table directory is created, with a single file containing all the data. That is, there is no folder for each specific country's records.
First of all, in the DDL statement you have to explicitly mention how many buckets you want.
create table emp( id int, name string, country string)
clustered by( country)
INTO 2 BUCKETS
row format delimited
fields terminated by ','
stored as textfile ;
In the above statement I have mentioned 2 buckets; similarly, you can specify any number you want.
Still, you are not done!
After that, while loading data into the table, you also have to give Hive the hint below.
set hive.enforce.bucketing = true;
That should do it.
After this, you should see that the number of files created under the table directory is the same as the number of buckets mentioned in the DDL statement.
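As a quick check of what bucketing buys you, a hypothetical query that samples just one of the two buckets:
-- reads roughly half the data by scanning a single bucket file
select * from emp tablesample(bucket 1 out of 2 on country);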
Bucketing doesn't create HDFS folders; rather, if you want a separate folder to be created for each country, you should use PARTITIONing instead, as sketched below.
Please go through Hive partitioning and bucketing in detail.
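A hypothetical partitioned variant of the same table (emp_part is a made-up name); each distinct country value gets its own HDFS folder, e.g. country=US/:
create table emp_part( id int, name string)
partitioned by( country string)
row format delimited
fields terminated by ','
stored as textfile;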
EMPDET is an external table containing the columns EMPNO and ENAME. What is an external table in an Oracle database?
Why can/cannot we update/delete from an external table?
A. UPDATE empdet
SET ename = 'Amit'
WHERE empno = 1234;
B. DELETE FROM empdet
WHERE ename LIKE 'J%';
An external table in an Oracle database is a way of accessing data residing in some .txt or .csv file via SQL commands. The table data is not kept in a database tablespace; rather, the table is some kind of view on a sequential dataset. So there is no way the database can index or update the data, since it is outside its scope; it can only run selects against it.
"External table" means you have a (typically CSV) file stored on your file system, and Oracle reads this file according to the settings in the CREATE TABLE statement. The data is not saved in an Oracle tablespace, but you can select from it like a normal table. However, you can only select from it (or create a view on top of it); you cannot modify anything.
Here is a simple example of an external table:
CREATE TABLE ADHOC_CSV_EXT (
C1 VARCHAR2(4000),
C2 VARCHAR2(4000),
C3 VARCHAR2(4000)
)
ORGANIZATION EXTERNAL (
TYPE ORACLE_LOADER
DEFAULT DIRECTORY SOME_FOLDER
ACCESS PARAMETERS (
records delimited BY newline
fields terminated BY ',' optionally enclosed BY '"'
missing field VALUES are NULL)
LOCATION ('foo.csv')
);
I know that you can create a table for export like this:
create table bulk_mbr organization external(
type ORACLE_DATAPUMP
default directory jason_home
location ('mbr.dat'))
as SELECT * FROM mbr;
But I'd like to do something similar for imports, so I can create an external import table with the same structure as an existing table, load data into it, and then do a simple INSERT INTO/SELECT FROM query to move the data over. Is there a way to do this?
I've tried this, but it doesn't work:
create table bulk_mbr organization external(
type ORACLE_LOADER
default directory jason_home
location ('mbr.dat'))
as SELECT * FROM mbr where 1=0;
But got:
ORA-30657: operation not supported on
external organized table
Just use your table description:
SQL> CREATE TABLE bulk_mbr (
2 ID NUMBER,
3 d VARCHAR2(4000)
4 )
5 ORGANIZATION EXTERNAL (
6 TYPE ORACLE_LOADER
7 DEFAULT DIRECTORY jason_home
8 LOCATION ('mbr.dat')
9 );
Table created
Take it either from your DDL repository (you have one, haven't you? :) or generate it dynamically, with DBMS_METADATA.GET_DDL for example.
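For instance, a minimal sketch of pulling the source table's DDL so you can edit it into the external-table definition (assuming the source table is MBR, as in the question):
SQL> SELECT DBMS_METADATA.GET_DDL('TABLE', 'MBR') FROM dual;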