I have set all the parameters that need to be set in Hive for using transactions.
set hive.support.concurrency=true;
set hive.enforce.bucketing=true;
set hive.exec.dynamic.partition.mode=nonstrict;
set hive.txn.manager=org.apache.hadoop.hive.ql.lockmgr.DbTxnManager;
set hive.compactor.initiator.on=true;
set hive.compactor.worker.threads=0;
Created the table using the below command:
CREATE TABLE Employee(Emp_id int, name string, company string, Desg string) clustered by (Emp_id) into 5 buckets stored as orc TBLPROPERTIES('transactional'='true');
Inserted data into the Hive table using the below command:
INSERT INTO table Employee values(1,'Jigyasa','Infosys','Senior System Engineer'), (2,'Pooja','HCL','Consultant'), (3,'Ayush','Asia Tours and Travels','Manager'), (4,'Tarun','Dell','Architect'), (5,'Namrata','Apolo','Doctor');
But while updating the data with
Update Employee set Company='Ganga Ram' where Emp_id=5;
I am getting the below error message:
FAILED: SemanticException [Error 10294]: Attempt to do update or delete using transaction manager that does not support these operations.
Older versions of Hive have a bug where
set hive.txn.manager=org.apache.hadoop.hive.ql.lockmgr.DbTxnManager;
at the CLI doesn't take effect.
You can check this by running "set hive.txn.manager;", which will print the current value.
The safest way is to set this in hive-site.xml.
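For reference, the hive-site.xml entries would look roughly like the sketch below. The property names are the standard ones used above; note that hive.compactor.worker.threads should be greater than 0 (as in the answer further down), otherwise no compaction workers run.

```xml
<!-- Sketch of hive-site.xml entries for ACID support; -->
<!-- values mirror the SET commands from the question, -->
<!-- except worker.threads, which is raised from 0 to 1. -->
<property>
  <name>hive.support.concurrency</name>
  <value>true</value>
</property>
<property>
  <name>hive.txn.manager</name>
  <value>org.apache.hadoop.hive.ql.lockmgr.DbTxnManager</value>
</property>
<property>
  <name>hive.compactor.initiator.on</name>
  <value>true</value>
</property>
<property>
  <name>hive.compactor.worker.threads</name>
  <value>1</value>
</property>
```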
Related
I want to launch this command on my hive table
Delete from customer where id=3;
And I had this error:
FAILED: SemanticException: [Error 10294]: Attempt to do update or delete using transaction manager that does not support these operations.
Can someone help me, please?
You cannot delete from just any table in Hive like this. The table should be transactional.
A couple of steps have to be performed to enable this.
Set hive params
# this must be set to true. Hive uses txn manager to do DML.
SET hive.support.concurrency=true;
SET hive.txn.manager=org.apache.hadoop.hive.ql.lockmgr.DbTxnManager;
# The following are not required if you are using Hive 2.0
SET hive.enforce.bucketing=true;
SET hive.exec.dynamic.partition.mode=nonstrict;
# required for standalone hive metastore
SET hive.compactor.initiator.on=true;
SET hive.compactor.worker.threads=1;
Create the table in ORC format with TBLPROPERTIES ('transactional'='true') and, on versions before Hive 2.0, with buckets.
A sample CREATE TABLE is -
CREATE TABLE db.emptrans (
id int,
name string)
STORED AS ORC
TBLPROPERTIES ('transactional'='true');
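With the settings and table above in place, DML like the delete from the question should then work. A hedged sketch, reusing the sample table db.emptrans (column values here are illustrative):

```sql
-- Requires the ACID settings above and a transactional ORC table.
DELETE FROM db.emptrans WHERE id = 3;

-- Updates work the same way on a transactional table.
UPDATE db.emptrans SET name = 'new name' WHERE id = 2;
```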
Please refer to the below answer for more details -
How to do update/delete operations on non-transactional table
Currently, I have these Hive properties:
SET hive.support.concurrency=true;
SET hive.txn.manager=org.apache.hadoop.hive.ql.lockmgr.DbTxnManager;
SET hive.enforce.bucketing=undefined
SET hive.exec.dynamic.partition.mode=nonstrict;
SET hive.compactor.initiator.on=true;
SET hive.compactor.worker.threads=2;
This, by default, creates ACID tables.
I would like to create non-ACID tables by default. Should I change hive.txn.manager to DummyTxnManager?
When users want to create a transactional table, they should explicitly set transactional=true while creating the table. In that case, how does the transactional table get its transactional features from DbTxnManager?
I would like to know on what basis DbTxnManager applies if we don't have the properties set in hive-site.xml.
Also, I want to know the difference between DbTxnManager and setting transactional=true on the table.
I want to create a Hive transactional table with TBLPROPERTIES ("transactional"="true") set in the CREATE TABLE statement. Instead of setting it for every table, can I set TBLPROPERTIES in hive-site.xml?
Unfortunately, we can't set it in hive-site.xml, since transactional is a per-table property. And we should not do it that way, because a transactional table comes with some prerequisites and limitations.
I am trying to add a column to an existing Phoenix table using the ALTER TABLE command as below:
ALTER TABLE TABLE1 ADD "db_name" VARCHAR(20);
It's failing with the below warning:
WARN query.ConnectionQueryServicesImpl: Unable to update meta data repo within 1 seconds for TABLE1
Let me know if there is any timeout I need to increase to get this working.
When altering a table, Phoenix will by default check with the server to ensure it has the most up-to-date table metadata and statistics. This RPC may not be necessary when you know in advance that the structure of the table will never change. The UPDATE_CACHE_FREQUENCY property was added in Phoenix 4.7 to let the user declare how often the server will be checked for metadata updates. You can set this property on your table like below:
ALTER TABLE TABLE1 SET UPDATE_CACHE_FREQUENCY=900000
Please refer this doc for tuning tips.
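If the table's structure rarely changes, the same table option can also be set at creation time. A sketch, with an illustrative table name (the value is in milliseconds):

```sql
-- UPDATE_CACHE_FREQUENCY limits how often the client re-fetches
-- table metadata from the server; 900000 ms = 15 minutes.
CREATE TABLE TABLE2 (
    id BIGINT NOT NULL PRIMARY KEY,
    db_name VARCHAR(20)
) UPDATE_CACHE_FREQUENCY = 900000;
```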
I have an SQLite database and mostly the same schema in a ":memory:" database. I open the first, then attach the second (in-memory) one. I need to update a table in the on-disk database with values from a table located in the in-memory database. I created a trigger like:
create temporary trigger trg after update of _flushMem on mem.tbl
begin
update tbl set
version = old.version,
...;
update tbl set
...;
end;
(There are 2 steps of update, so I have 2 "update" statements.) I have a special field _flushMem which I use to fire the trigger, with an SQL statement like: update mem.tbl set _flushMem=1. As I understand it, SQLite supports triggers between 2 databases with some limitations (and I update the current database, not the attached one). Testing shows that the trigger never runs. How do I write and fire such a trigger?
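For comparison, here is a minimal runnable sketch of the same setup using Python's sqlite3 module, with made-up table contents and a single-step update. A TEMP trigger is the documented exception that may be attached to a table in another database; note that table names inside the trigger body must be unqualified, and the unqualified tbl resolves to main.tbl here. In this reduced form the trigger does fire:

```python
import sqlite3

# Stand-in for the on-disk database; the attached ':memory:' db plays mem.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE tbl (id INTEGER PRIMARY KEY, version INTEGER);
INSERT INTO tbl VALUES (1, 0);

ATTACH DATABASE ':memory:' AS mem;
CREATE TABLE mem.tbl (id INTEGER PRIMARY KEY, version INTEGER, _flushMem INTEGER);
INSERT INTO mem.tbl VALUES (1, 42, 0);

-- A TEMP trigger may reference a table in an attached database.
-- Table names in the body cannot be database-qualified; the
-- unqualified tbl below resolves to main.tbl at runtime.
CREATE TEMPORARY TRIGGER trg
AFTER UPDATE OF _flushMem ON mem.tbl
BEGIN
    UPDATE tbl SET version = new.version WHERE id = new.id;
END;

UPDATE mem.tbl SET _flushMem = 1;   -- fires the trigger
""")
print(conn.execute("SELECT version FROM tbl WHERE id = 1").fetchone()[0])
```

If this reduced version works in your environment but the real trigger does not, the difference is likely in the trigger body (for example, a database-qualified table name inside BEGIN...END, which SQLite rejects) rather than in the cross-database trigger itself.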