How do I find what user owns a Hive database?

I want to confirm which user is the owner of a database in HIVE. Where would I find this information?

DESCRIBE|DESC DATABASE shows the name of the database, its comment (if one has been set), and its root location on the filesystem. SCHEMA and DATABASE are interchangeable; they mean the same thing. DESCRIBE SCHEMA was added in Hive 0.15 (HIVE-8803).
EXTENDED also shows the database properties.
DESCRIBE DATABASE [EXTENDED] db_name;
DESCRIBE SCHEMA [EXTENDED] db_name; -- (Note: Hive 0.15.0 and later)
These examples show that the cards database is owned by the cloudera user:
hive> SET hive.cli.print.header=true;
hive> describe database cards;
OK
db_name comment location owner_name owner_type parameters
cards hdfs://quickstart.cloudera:8020/user/hive/warehouse/cards.db cloudera USER
Time taken: 0.013 seconds, Fetched: 1 row(s)
hive> desc schema cards;
OK
db_name comment location owner_name owner_type parameters
cards hdfs://quickstart.cloudera:8020/user/hive/warehouse/cards.db cloudera USER
Time taken: 0.022 seconds, Fetched: 1 row(s)
Alternatively,
A Hive database is just an HDFS directory (with a .db extension) under the Hive warehouse directory, so you can get the owner from the hadoop fs -ls command:
For a directory, it returns a list of its direct children, as in Unix. A directory is listed as:
permissions userid groupid modification_date modification_time dirname
Files within a directory are ordered by filename by default.
Example:
hadoop fs -ls /user/hive/warehouse/*.db |awk '{print $3,$NF}'
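Assuming the cards database from the example above, the output would look something like this (owner first, then the directory path):
cloudera /user/hive/warehouse/cards.db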

Both will solve your problem:
hive> describe database extended db_name;
hive> describe schema extended db_name;
The output will have the owner user name.
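For example, with the cards database from above; the {creator=cloudera} entry stands in for whatever DBPROPERTIES were set on the database, and the column is empty if none were:
hive> describe database extended cards;
OK
db_name comment location owner_name owner_type parameters
cards hdfs://quickstart.cloudera:8020/user/hive/warehouse/cards.db cloudera USER {creator=cloudera}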

If you have configured Hive with an external metastore such as MySQL or Derby, you can query the metastore table DBS to get this information.
Query
select NAME,OWNER_NAME,OWNER_TYPE from DBS;
Output
+----------+------------+------------+
| NAME     | OWNER_NAME | OWNER_TYPE |
+----------+------------+------------+
| default  | public     | ROLE       |
| employee | addy       | USER       |
| test     | addy       | USER       |
+----------+------------+------------+
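For instance, against a MySQL metastore (the database name metastore and the hive account are assumptions; yours may differ):
-- Connect first with: mysql -u hive -p metastore
SELECT NAME, OWNER_NAME, OWNER_TYPE FROM DBS WHERE NAME = 'cards';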

Related

Impala ACID table select ERROR: Operation not supported on transactional (ACID) table:

I'm using Impala 3.4 directly with Hive 3.1.
The problem is that if you create an ordinary table in Hive and then select from it in Impala, an error occurs.
The error message is as follows:
Query: show tables
+----------+
| name |
+----------+
| customer |
| lineitem |
| nation |
| orders |
| part |
| partsupp |
| region |
| supplier |
| t |
+----------+
Fetched 9 row(s) in 0.02s
[host.cluster.com] default> select * from customer;
Query: select * from customer
Query submitted at: 2020-11-20 09:56:12 (Coordinator: http://host.cluster.com:25000)
ERROR: AnalysisException: Operation not supported on transactional (ACID) table: default.customer
My understanding was that in Hive, a table being ACID or ORC only matters for DELETE and UPDATE, and that SELECT works the same on any table. In fact, the SELECT statement runs normally through Hive JDBC; it fails only in Impala. I would like help understanding why this error occurs.
I solved this problem. I confirmed that tables created through Impala itself work normally in Impala. There were two causes:
Impala built against Hive 2 was connected to Hive 3 databases.
Hive 3 creates transactional (ACID) tables by default, which I had not realized when creating the table.
This Impala version cannot read ACID tables that are created by Hive.
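If upgrading Impala is not an option, one workaround sketch is to recreate the table as external in Hive, since external tables are not transactional by default and older Impala versions can read them. The column list and location below are illustrative, not from the original schema:
-- Sketch: non-ACID external table that this Impala version can read
CREATE EXTERNAL TABLE customer_ext (
  c_custkey BIGINT,
  c_name STRING
)
STORED AS PARQUET
LOCATION '/user/hive/warehouse/customer_ext';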

Apache sentry hive grant insert privileges but it did not work

insert into limifang_oracle_store002(id,name) values(1,'lisi');
exception:
Error: Error while compiling statement: FAILED: SemanticException org.apache.hadoop.hive.ql.metadata.InvalidTableException: Table not found limifang_oracle_store002 (state=42000,code=40000)
0: jdbc:hive2://192.168.2.16:2181,192.168.2.1> insert into liminfang_oracle_store002(id,name) values(1,'lisi');
WARNING: Hive-on-MR is deprecated in Hive 2 and may not be available in the future versions. Consider using a different execution engine (i.e. spark, tez) or using Hive 1.X releases.
Error: org.apache.hive.service.cli.HiveSQLException: Error while processing statement: FAILED: Execution Error, return code 1 from org.apache.hadoop.hive.ql.exec.mr.MapRedTask. Permission denied: user=kaif1, access=EXECUTE, inode="/tmp/hadoop-yarn/staging/kaif1/.staging":root:supergroup:drwx------
Permission information is as follows:
show grant role kaif1;
| database   | table                     | partition | column | principal_name | principal_type | privilege | grant_option | grant_time    | grantor |
| ziy_db_109 | liminfang_oracle_store002 |           |        | kaif1          | ROLE           | DELETE    | false        | 1022296989000 |         |
| ziy_db_109 | liminfang_oracle_store002 |           |        | kaif1          | ROLE           | INSERT    | false        | 1022295356000 |         |
Check which groups exist on the Hadoop cluster's HDFS, e.g. hadoop, hdfs, supergroup, and so on. Then:
1. Create the corresponding test user and the supergroup group (or the hadoop/hdfs group) on the Linux system.
2. Add the test user to the supergroup group (or the hadoop/hdfs group).
3. Connect to Hive via beeline as the hive (administrator) user and create a role: create role test_role;
4. Grant permissions to test_role: grant all on testdb.testtable to role test_role; (the database testdb and the table testtable must already exist in Hive)
5. Assign the test_role role to the test user's group: grant role test_role to group test;
6. Verify: connect to Hive via beeline as the test user, then run use testdb; show tables; and check that nothing errors.
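Alternatively, the Permission denied error on the YARN staging directory can be cleared directly; a sketch, assuming the path and user from the error message above and that the HDFS superuser account is named hdfs:
# Re-own the user's staging directory (run as the HDFS superuser)
sudo -u hdfs hadoop fs -chown -R kaif1 /tmp/hadoop-yarn/staging/kaif1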

Why is Impala not showing all tables created by Hive?

I have imported all tables using Sqoop into a Hive database "sqoop_import" and can see that all tables were imported successfully, as below:
hive> use sqoop_import;
OK
Time taken: 0.026 seconds
hive> show tables;
OK
categories
customers
departments
order_items
orders
products
Time taken: 0.025 seconds, Fetched: 6 row(s)
hive>
But when I try the same from impala-shell or Hue using the same user, it shows different results, as below:
[quickstart.cloudera:21000] > use sqoop_import;
Query: use sqoop_import
[quickstart.cloudera:21000] > show tables;
Query: show tables
+--------------+
| name |
+--------------+
| customers |
| customers_nk |
+--------------+
Fetched 2 row(s) in 0.01s
[quickstart.cloudera:21000] >
What am I doing wrong?
When you import a new table with Sqoop into Hive, in order to see it through impala-shell you need to run INVALIDATE METADATA for that table. From the command line, run: impala-shell -d DB_NAME -q "INVALIDATE METADATA table_name"
But if you append new data files to an existing table through Sqoop, you need a REFRESH instead. From the command line, run:
impala-shell -d DB_NAME -q "REFRESH table_name"
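For the database in this question, that would look something like this (table names taken from the listings above):
impala-shell -d sqoop_import -q "INVALIDATE METADATA orders"
impala-shell -d sqoop_import -q "REFRESH customers"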

SQL to get default schema of some database in Netezza?

We can change the default schema of a database using the ALTER command. Is there some SQL command to get the default schema?
You can check this via the _V_DATABASE view.
TESTDB.ADMIN(ADMIN)=> select database, defschema from _v_database where database='TESTDB';
DATABASE | DEFSCHEMA
----------+-----------
TESTDB | ADMIN
(1 row)

Postgres: Is there a way to tie a User to a Schema?

In our database we have users: A, B, C.
Each user has its own corresponding schema: A, B, C.
Normally if I wanted to select from a table in one of the schemas I would have to do:
select * from A.table;
My question is:
Is there a way to make:
select * from table
go to the correct schema based on the user that is logged in?
This is the default behavior for PostgreSQL. Make sure your search_path is set correctly.
SHOW search_path;
By default it should be:
search_path
--------------
"$user",public
See PostgreSQL's documentation on schemas for more information. Specifically this part:
You can create a schema for each user with the same name as that user. Recall that the default search path starts with $user, which resolves to the user name. Therefore, if each user has a separate schema, they access their own schemas by default.
If you use this setup then you might also want to revoke access to the public schema (or drop it altogether), so users are truly constrained to their own schemas.
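A sketch of that lockdown (run as a superuser; this targets PostgreSQL's built-in public schema):
-- Stop all users from creating objects in the public schema
REVOKE CREATE ON SCHEMA public FROM PUBLIC;
-- Or remove it entirely if nothing depends on it:
-- DROP SCHEMA public;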
Update, re: your comment:
Here is what happens on my machine, which is what I believe you want.
skrall=# \d
No relations found.
skrall=# show search_path;
search_path
----------------
"$user",public
(1 row)
skrall=# create schema skrall;
CREATE SCHEMA
skrall=# create table test(id serial);
NOTICE: CREATE TABLE will create implicit sequence "test_id_seq" for serial column "test.id"
CREATE TABLE
skrall=# \d
List of relations
Schema | Name | Type | Owner
--------+-------------+----------+--------
skrall | test | table | skrall
skrall | test_id_seq | sequence | skrall
(2 rows)
skrall=# select * from test;
id
----
(0 rows)
skrall=#
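If a user's effective search_path has been overridden somewhere, it can also be pinned per role; a sketch using the role from the demo above:
-- Persist the default search path for this role across sessions
ALTER ROLE skrall SET search_path = "$user", public;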