insert into limifang_oracle_store002(id,name) values(1,'lisi');
Exception:
Error: Error while compiling statement: FAILED: SemanticException org.apache.hadoop.hive.ql.metadata.InvalidTableException: Table not found limifang_oracle_store002 (state=42000,code=40000)
The first statement misspelled the table name (limifang instead of liminfang); retrying with the correct name hits a different error:
0: jdbc:hive2://192.168.2.16:2181,192.168.2.1> insert into liminfang_oracle_store002(id,name) values(1,'lisi');
WARNING: Hive-on-MR is deprecated in Hive 2 and may not be available in the future versions. Consider using a different execution engine (i.e. spark, tez) or using Hive 1.X releases.
Error: org.apache.hive.service.cli.HiveSQLException: Error while processing statement: FAILED: Execution Error, return code 1 from org.apache.hadoop.hive.ql.exec.mr.MapRedTask. Permission denied: user=kaif1, access=EXECUTE, inode="/tmp/hadoop-yarn/staging/kaif1/.staging":root:supergroup:drwx------
Permission information is as follows:
show grant role kaif1;
| database   | table                     | partition | column | principal_name | principal_type | privilege | grant_option | grant_time    | grantor |
| ziy_db_109 | liminfang_oracle_store002 |           |        | kaif1          | ROLE           | DELETE    | false        | 1022296989000 |         |
| ziy_db_109 | liminfang_oracle_store002 |           |        | kaif1          | ROLE           | INSERT    | false        | 1022295356000 |         |
Check which groups exist on the Hadoop cluster's HDFS (for example hadoop, hdfs, supergroup), then:
1. First create the corresponding user test and the supergroup group (or the hadoop/hdfs group) on the Linux system.
2. Add the test user to the supergroup group (or the hadoop/hdfs group).
3. Connect to Hive through beeline as the hive (administrator) user and create a role: create role test_role;
4. Grant permissions to test_role: grant all on testdb.testtable to role test_role; (the database testdb and the table testtable must already exist in Hive).
5. Assign the test_role role to the test group: grant test_role to group test;
6. Verify by connecting to Hive with beeline as the test user: use testdb; show tables; and check that no error occurs. A consolidated sketch of these beeline commands follows this list.
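A minimal sketch of steps 3 through 6 as beeline statements (testdb, testtable, test_role, and the test user/group are the placeholder names used above; adjust to your environment):
-- As the hive administrator user in beeline:
create role test_role;
grant all on testdb.testtable to role test_role;
grant test_role to group test;
-- Then connect with beeline as the test user and verify access:
use testdb;
show tables;
select * from testtable limit 10;
Note that the original Permission denied error was on the HDFS staging directory, so steps 1 and 2 (OS-level group membership) are still required; Hive grants alone do not change HDFS permissions.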
Executing the following command
MariaDB [(none)]> select distinct user from mysql.user;
results in
+-------------+
| User |
+-------------+
| app_user |
| |
| test_u |
| mariadb.sys |
| root |
+-------------+
5 rows in set (0.001 sec)
So I have probably created a user with no name, correct? Perhaps by using wrong syntax in the past. The question is: how do I drop that user? Something like the following doesn't seem to work:
MariaDB [(none)]> drop user ' ';
ERROR 1396 (HY000): Operation DROP USER failed for ' '#'%'
DROP is used to remove tables (https://dev.mysql.com/doc/refman/8.0/en/drop-table.html): DROP TABLE User.
Try DELETE FROM User WHERE User.user = ' '
After some trial and error I stumbled upon mysql_secure_installation. During its execution the terminal states:
By default, a MariaDB installation has an anonymous user, allowing
anyone to log into MariaDB without having to have a user account
created for them. This is intended only for testing, and to make the
installation go a bit smoother. You should remove them before moving
into a production environment.
Remove anonymous users? [Y/n] Y
... Success!
So this ' ' user is the anonymous user described here. Running select distinct user from mysql.user; after the removal of anonymous users results in:
+-------------+
| User |
+-------------+
| app_user |
| test_u |
| mariadb.sys |
| root |
+-------------+
So probably this ' ' user was already there.
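For completeness, the anonymous account's user name is the empty string rather than a literal space, so it can also be dropped directly (a sketch; the host part varies, so look it up first):
SELECT user, host FROM mysql.user WHERE user = '';
DROP USER ''@'localhost';  -- substitute the host value returned by the query above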
I'm using Impala 3.4 directly with Hive 3.1.
The problem is that if you create a regular table in Hive and then select from it in Impala, an error occurs.
The error message is as follows:
Query: show tables
+----------+
| name |
+----------+
| customer |
| lineitem |
| nation |
| orders |
| part |
| partsupp |
| region |
| supplier |
| t |
+----------+
Fetched 9 row(s) in 0.02s
[host.cluster.com] default> select * from customer;
Query: select * from customer
Query submitted at: 2020-11-20 09:56:12 (Coordinator: http://host.cluster.com:25000)
ERROR: AnalysisException: Operation not supported on transactional (ACID) table: default.customer
In Hive, I thought the ACID/ORC distinction only mattered for delete and update operations, and that plain SELECT worked the same either way.
In fact, the SELECT statement executes normally through Hive JDBC; only Impala fails. I'd appreciate help understanding why this error occurs.
I solved this problem. I confirmed that a table created through Impala itself (rather than through Hive) works normally in Impala.
There are two possible causes:
1. Impala built against Hive 2 is connected to Hive 3 databases.
2. When creating the table in Hive, a default ACID-related flag was set without my realizing it.
This version of Impala can't read ACID tables which are created by Hive, and Hive creates ACID tables by default.
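One possible workaround (a sketch, not the poster's confirmed fix; the column names and location path are assumptions for illustration) is to create the table in Hive as explicitly non-transactional so Impala can read it:
-- An external table is not made transactional by Hive's defaults:
CREATE EXTERNAL TABLE customer_ext (
  c_custkey BIGINT,
  c_name    STRING
)
STORED AS PARQUET
LOCATION '/user/hive/warehouse/customer_ext';
-- Alternatively, mark the table as non-transactional at creation time
-- (this may be rejected if strict managed-table mode is enforced):
CREATE TABLE customer_plain (
  c_custkey BIGINT,
  c_name    STRING
)
STORED AS PARQUET
TBLPROPERTIES ('transactional'='false');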
I had two theories: 1. that it was a permissions error; 2. that the table was corrupt. I seem to have addressed both without result. What could cause this ERROR 1728 message?
Running it as the mysql user does not work:
MariaDB [mysql]> DROP FUNCTION IF EXISTS civicrm_strip_non_numeric;
ERROR 1728 (HY000): Cannot load from mysql.proc. The table is probably corrupted
And repairing the table does not report that it is corrupt:
MariaDB [mysql]> repair table proc;
+------------+--------+----------+----------+
| Table | Op | Msg_type | Msg_text |
+------------+--------+----------+----------+
| mysql.proc | repair | status | OK |
+------------+--------+----------+----------+
This fixes it:
mysql_upgrade -u root -pxxx
I wasn't aware that I had upgraded anything, as this is a new installation.
The same applies to MariaDB as to MySQL.
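A quick verification sketch after running the upgrade, using the function name from the question:
-- mysql.proc should now be readable and the original statement should succeed:
SELECT COUNT(*) FROM mysql.proc;
DROP FUNCTION IF EXISTS civicrm_strip_non_numeric;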
I want to confirm which user is the owner of a database in Hive. Where would I find this information?
DESCRIBE|DESC DATABASE shows the name of the database, its comment (if one has been set), and its root location on the filesystem. The uses of SCHEMA and DATABASE are interchangeable – they mean the same thing. DESCRIBE SCHEMA is added in Hive 0.15 (HIVE-8803).
EXTENDED also shows the database properties.
DESCRIBE DATABASE [EXTENDED] db_name;
DESCRIBE SCHEMA [EXTENDED] db_name; -- (Note: Hive 0.15.0 and later)
These examples show that the cards database was created by the cloudera user:
hive> SET hive.cli.print.header=true;
hive> describe database cards;
OK
db_name comment location owner_name owner_type parameters
cards hdfs://quickstart.cloudera:8020/user/hive/warehouse/cards.db cloudera USER
Time taken: 0.013 seconds, Fetched: 1 row(s)
hive> desc schema cards;
OK
db_name comment location owner_name owner_type parameters
cards hdfs://quickstart.cloudera:8020/user/hive/warehouse/cards.db cloudera USER
Time taken: 0.022 seconds, Fetched: 1 row(s)
Alternatively,
A Hive database is nothing but an HDFS directory under the Hive warehouse directory, with a .db extension. You can get the owning user simply from the hadoop fs -ls command:
For a directory it returns list of its direct children as in Unix. A directory is listed as:
permissions userid groupid modification_date modification_time dirname
Files within a directory are ordered by filename by default.
Example:
hadoop fs -ls /user/hive/warehouse/*.db |awk '{print $3,$NF}'
Both will solve your problem:
hive> describe database extended db_name;
hive> describe schema extended db_name;
The output will have the owner user name.
If you have configured Hive with an external metastore like MySQL or Derby, you can query the metastore table DBS to get the information.
Query
select NAME,OWNER_NAME,OWNER_TYPE from DBS;
Output
+--------------+------------+------------+
| NAME | OWNER_NAME | OWNER_TYPE |
+--------------+------------+------------+
| default | public | ROLE |
| employee | addy | USER |
| test         | addy       | USER       |
+--------------+------------+------------+
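For example, against a MySQL-backed metastore (the database name metastore below is an assumption; yours may differ):
USE metastore;
SELECT NAME, OWNER_NAME, OWNER_TYPE FROM DBS WHERE NAME = 'employee';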
We can change the default schema of a database using an ALTER command. Is there some SQL command to get the default schema?
You can check this via the _V_DATABASE view.
TESTDB.ADMIN(ADMIN)=> select database, defschema from _v_database where database='TESTDB';
DATABASE | DEFSCHEMA
----------+-----------
TESTDB | ADMIN
(1 row)
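For reference, the ALTER side mentioned in the question looks like this in Netezza (a sketch; TESTDB and ADMIN are the names from the example above):
ALTER DATABASE TESTDB SET DEFAULT SCHEMA ADMIN;
-- Re-running the _v_database query should then show the updated DEFSCHEMA.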