How to create external table in db2 with basic DML operation - sql

I created external table with following command
db2 "
CREATE EXTERNAL TABLE TEST(a int) using
(dataobject '/home/db2inst2/test.tbl' )
)
"
db2 "insert into TEST values(1)"
db2 "insert into TEST values(2)"
But it looks like each insert replaces the existing value. Is there an option to append to the file and do basic DML operations on an external table? Please let me know if any other option is available in Db2 V11.5.

It's not possible.
CREATE EXTERNAL TABLE statement
Restrictions
External tables cannot be used by a Db2 instance running on a Windows system.
Data being loaded must be properly formatted.
You cannot delete, truncate, or update an external table.
For remote external tables (that is, external tables that are not located in a Swift or S3 object store and for which the REMOTESOURCE option is set to a value other than LOCAL):
A single query or subquery cannot select from more than one external table at a time, and cannot reference the same external table more than once. If necessary, combine data from several external tables into a single table and use that table in the query.
A union operation cannot involve more than one external table.
In addition:
For an unload operation, the following conditions apply:
If the file exists, it is overwritten.
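If you only need append-like behaviour, one workaround (a sketch only; TEST_LOCAL is a hypothetical table name) is to keep the rows in a regular table for DML and use the external table purely to read or write the file:
CREATE TABLE TEST_LOCAL (a INT);
-- load the current file contents through the external table
INSERT INTO TEST_LOCAL SELECT a FROM TEST;
-- normal DML happens on the regular table
INSERT INTO TEST_LOCAL VALUES (1);
INSERT INTO TEST_LOCAL VALUES (2);
-- unload back to the file; the file is overwritten, not appended
INSERT INTO TEST SELECT a FROM TEST_LOCAL;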

Related

Setting transactional-table properties results in external table

I am creating a managed table via Impala as follows:
CREATE TABLE IF NOT EXISTS table_name
STORED AS parquet
TBLPROPERTIES ('transactional'='false', 'insert_only'='false')
AS ...
This should result in a managed table which does not support HIVE-ACID.
However, when I run the command I still end up with an external table.
Why is this?
I found out in the Cloudera documentation that omitting the EXTERNAL keyword when creating the table does not mean that the table will definitely be managed:
When you use EXTERNAL keyword in the CREATE TABLE statement, HMS stores the table as an external table. When you omit the EXTERNAL keyword and create a managed table, or ingest a managed table, HMS might translate the table into an external table or the table creation can fail, depending on the table properties.
Thus, setting transactional=false and insert_only=false leads the Hive Metastore to treat the table as an external table.
Interestingly, setting only TBLPROPERTIES ('transactional'='false') is completely ignored and still results in a managed table with transactional=true.
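To illustrate the two cases described above (the table names here are made up for the example):
-- both properties false: HMS translates this into an external table
CREATE TABLE IF NOT EXISTS t_becomes_external
STORED AS parquet
TBLPROPERTIES ('transactional'='false', 'insert_only'='false')
AS SELECT 1 AS c;
-- only transactional=false: the property is ignored and the result
-- is a managed table with transactional=true
CREATE TABLE IF NOT EXISTS t_stays_managed
STORED AS parquet
TBLPROPERTIES ('transactional'='false')
AS SELECT 1 AS c;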

External Table data not getting Purged in Hive

I created 2 external tables in Hive. For the first table I specified the data location in the CREATE statement. For the second table I loaded data after creating it.
I can see the data file created for the second table in the /hive/warehouse/ directory. Then I set "external.table.purge"="true" for both tables and dropped both tables. But the data files of both tables remain as they are.
What is the behaviour of 'external.table.purge'='true'? Shouldn't it delete the data files as well when the DROP command is issued?
If Hive does not take any ownership over the data files of an external table, why is there even an option such as 'external.table.purge'='true'?
I read in one of the threads that it is possible to delete the data for external tables as well via ALTER TABLE ... SET TBLPROPERTIES('external.table.purge'='true'), but I am unable to find that post again.
You cannot drop the data of an external table, but you can for internal (managed) tables. So convert the table to internal and then drop it.
First change the external property to false.
hive> ALTER TABLE nyse_external SET TBLPROPERTIES('EXTERNAL'='False');
and then you can easily drop it.
hive> drop table nyse_external;
TBLPROPERTIES ("external.table.purge"="true") should work for Hive version 4.x+.
Answer to point 1:
The table property "external.table.purge", if set to true (and if the table is an external table), tells Hive to delete the table data when the table is dropped. This feature was introduced in this Apache JIRA:
https://issues.apache.org/jira/browse/HIVE-19981 .
For reference on how to set the property take a look at this example,
https://docs.cloudera.com/runtime/7.2.7/using-hiveql/topics/hive_drop_external_table_data.html
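A minimal sketch of that approach (my_external_table is a placeholder name):
-- enable purge on an existing external table
ALTER TABLE my_external_table SET TBLPROPERTIES ('external.table.purge'='true');
-- with the property set (on a Hive version that supports it),
-- DROP TABLE also removes the files in the table's location
DROP TABLE my_external_table;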

Getting error when trying to Rename multiple tables in SPROC in DB2

I've created a DB2 SQL script that populates a static table and then does a rename to swap out the live table with the newly updated one. It's a fairly large SQL script, so I'm only including the areas that I'm having an error on.
I'm getting the error: "[IBM][CLI Driver][DB2/NT64] SQL0104N An unexpected token "RENAME" was found following "D_HOLIDAY_LOG_OLD; ". Expected tokens may include: "TRUNCATE". LINE NUMBER=382. SQLSTATE=42601".
I suspect it's a syntax issue with the RENAME commands. If I need to add the whole query, I can. Thanks in advance.
CREATE OR REPLACE PROCEDURE NSD_HOLIDAY_LOG_SPROC()
LANGUAGE SQL
SPECIFIC SP_NSD_HOLIDAY_LOG_SPROC
DYNAMIC RESULT SETS 1
BEGIN
COMMIT;
TRUNCATE TABLE TMWIN.NSD_HOLIDAY_LOG immediate;
DROP TABLE NSD_HOLIDAY_LOG_OLD;
RENAME TABLE TMWIN.NSD_HOLIDAY_LOG_LIVE TO NSD_HOLIDAY_LOG_OLD;
RENAME TABLE TMWIN.NSD_HOLIDAY_LOG TO NSD_HOLIDAY_LOG_LIVE;
RENAME TABLE TMWIN.NSD_HOLIDAY_LOG_OLD TO NSD_HOLIDAY_LOG;
END#
This is frequently asked.
As you are using static SQL in an SQL PL stored procedure, you must follow the documented rules for blocks of Compound SQL (Compiled) statements.
One of those rules is that static SQL has a restricted set of statements that can appear in such a block of code.
For example, with current versions of Db2-LUW, you cannot use any of the following statically (including RENAME TABLE):
ALTER, CONNECT, CREATE, DESCRIBE, DISCONNECT, DROP, FLUSH EVENT MONITOR, FREE LOCATOR, GRANT, REFRESH TABLE, RELEASE (connection only), RENAME TABLE, RENAME TABLESPACE, REVOKE, SET CONNECTION, SET INTEGRITY, SET PASSTHRU, SET SERVER OPTION, TRANSFER OWNERSHIP
Other Db2 platforms (z/OS, i-series) might have different restrictions, but the same principle applies.
To achieve what you need, you can use dynamic SQL instead of static SQL (as long as you understand the implications).
In other words, instead of writing:
RENAME TABLE TMWIN.NSD_HOLIDAY_LOG_LIVE TO NSD_HOLIDAY_LOG_OLD;
you could instead use:
execute immediate('RENAME TABLE TMWIN.NSD_HOLIDAY_LOG_LIVE TO NSD_HOLIDAY_LOG_OLD' );
or equivalent.
You can also use two statements, one to PREPARE and the other to EXECUTE, whichever suits the design. Refer to the documentation for EXECUTE IMMEDIATE.
The same is true for other statements that your version of Db2 disallows in static compound-SQL (compiled) blocks (for example, DROP, or CREATE etc.).
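Applied to the procedure in the question, the restricted statements (the RENAMEs and the DROP) could be run dynamically instead; a sketch only, not tested:
CREATE OR REPLACE PROCEDURE NSD_HOLIDAY_LOG_SPROC()
LANGUAGE SQL
SPECIFIC SP_NSD_HOLIDAY_LOG_SPROC
DYNAMIC RESULT SETS 1
BEGIN
COMMIT;
TRUNCATE TABLE TMWIN.NSD_HOLIDAY_LOG IMMEDIATE;
-- statements on the restricted list are executed dynamically
EXECUTE IMMEDIATE 'DROP TABLE NSD_HOLIDAY_LOG_OLD';
EXECUTE IMMEDIATE 'RENAME TABLE TMWIN.NSD_HOLIDAY_LOG_LIVE TO NSD_HOLIDAY_LOG_OLD';
EXECUTE IMMEDIATE 'RENAME TABLE TMWIN.NSD_HOLIDAY_LOG TO NSD_HOLIDAY_LOG_LIVE';
EXECUTE IMMEDIATE 'RENAME TABLE TMWIN.NSD_HOLIDAY_LOG_OLD TO NSD_HOLIDAY_LOG';
END#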

Deleting objects from SQL Server

In SQL Server Database Engine I have a table named Table A.
I deleted the table using the graphical interface, but when I wanted to create a table with the same name, the error showed:
The object already exists
What is the remedy of this situation?
The following steps should help you track down what is going on and help you create your table:
1. Right-click on your database and select Refresh.
2. Verify that your table does not exist under this database.
3. If your table is not shown here, then very likely your table is displayed under the master database.
4. To create a table in your selected database, first select the database and then run your query.
5. A better option for number 4, just to be sure you are specifying the correct database, is to run the command use dbname; (where dbname is the name of your database) on the line above your CREATE TABLE code.
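For example (MyDatabase and TableA are placeholder names):
-- run USE first so the table is created in the intended database rather than in master
USE MyDatabase;
CREATE TABLE dbo.TableA
(
    Id INT PRIMARY KEY
);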

SQL*Loader problem

I am getting an error SQL*Loader-606, which means:
The synonym specified in the INTO TABLE clause in the SQL*Loader control file specifies a remote object via a database link. Only a synonym for an existing local table can be specified in the INTO TABLE clause.
Is there any way we can insert into remote table using SQL*Loader?
Because you are on 10g you can use external tables instead of SQL*Loader.
Setting up an external table is easy; see the Oracle documentation for the details.
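A sketch of what the definition might look like (the directory path, file name and column layout here are assumptions for illustration):
CREATE OR REPLACE DIRECTORY ext_data_dir AS '/data/loads';
CREATE TABLE your_ext_table_name (
  id NUMBER(4)
)
ORGANIZATION EXTERNAL (
  TYPE ORACLE_LOADER
  DEFAULT DIRECTORY ext_data_dir
  ACCESS PARAMETERS (
    RECORDS DELIMITED BY NEWLINE
    FIELDS TERMINATED BY ','
  )
  LOCATION ('test.dat')
);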
To get the External Table to pick up a new file (which you may need to do because you have a repeating process), do this:
alter table your_ext_table_name location ('<newfile.name>')
/
Then you can do this:
insert into whatever_table@remote_db
select * from your_ext_table_name
/
This avoids two lots of DML. External tables are not as fast as a well-tuned SQL*Loader process, but that will be trivial compared to the network traffic tax (which is unavoidable in your scenario).
create table temp_table as select * from remote_table@remote_db where 1 = 2;
load using SQL*Loader into temp_table;
insert into remote_table@remote_db select * from temp_table;
Run SQL*Loader on the server that has the table?
There must be a reason why not, but this seems the simplest option to me.
If you couldn't use external tables (eg because the data file is on a client machine rather than on the database server), you can insert into a view on the remote object.
For example
create database link schema1 connect to schema1 identified by schema1 using 'XE';
create view schema1_test_vw as select * from test@schema1;
load data
infile *
append
into table schema1_test_vw
( id POSITION(1:4) INTEGER)
begindata
1001
1002
1003
This succeeded on my XE test.
For a view, all the column sizes, datatypes, etc. are fixed in the local schema, so sqlldr doesn't have a problem.