We use Cloudera (CDH 5.7.5) and Hue (3.9.0). For the admin user, some Hive tables (about 60%) are accessible through Impala; the rest are not. For non-admin users, no database is accessible through Impala at all, and again only some databases are accessible via Hive.
Is it because the Impala catalog is not in sync with the Hive metastore?
When I try to run INVALIDATE METADATA (for all databases), I get a read operation timeout error.
I also tried running INVALIDATE METADATA for individual tables, but that does not solve the problem; it has no impact. What do I need to check?
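For reference, the table-level variant looks like this (Impala SQL; default.test01 is the table from the error below):

-- reload the metadata of a single table instead of the whole catalog
INVALIDATE METADATA default.test01;
-- lighter-weight alternative when only the data files of an existing table changed
REFRESH default.test01;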
FYI, I get this error every time I run a query via Impala, but not via Hive:
AuthorizationException: User 'test.user' does not have privileges to execute 'SELECT' on: default.test01
FYI 2: INVALIDATE METADATA now runs fine. For the admin user, all databases and tables are accessible via Hive and Impala. But for non-admin users, authorized databases are only accessible through Hive (not Impala).
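Since the AuthorizationException comes from Impala's authorization layer (Sentry on CDH), it is worth checking that a group of test.user is actually granted a role with SELECT on the database, e.g. in impala-shell. A minimal sketch, assuming Sentry is enabled; the role and group names are hypothetical:

-- run as a user with Sentry administration rights
CREATE ROLE analyst_role;
GRANT ROLE analyst_role TO GROUP test_group;            -- OS/LDAP group of test.user (assumption)
GRANT SELECT ON DATABASE default TO ROLE analyst_role;  -- the privilege missing in the error above
SHOW GRANT ROLE analyst_role;                           -- verify the grant is in place

If Hive honors the same grants but Impala does not, that points at Impala's Sentry configuration rather than at the grants themselves.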
This is part of the Hue log:
[13/Jul/2018 10:32:05 +0700] thrift_util DEBUG Thrift call: <class 'ImpalaService.ImpalaHiveServer2Service.Client'>.CloseOperation(args=(TCloseOperationReq(operationHandle=TOperationHandle(hasResultSet=True, modifiedRowCount=None, operationType=3, operationId=THandleIdentifier(secret="o\xe8}\x9a\xf6'F\x8d\x9aC\xd4!\xb2#:\x91", guid="o\xe8}\x9a\xf6'F\x8d\x9aC\xd4!\xb2#:\x91"))),), kwargs={})
[13/Jul/2018 10:32:05 +0700] thrift_util DEBUG Thrift call <class 'ImpalaService.ImpalaHiveServer2Service.Client'>.CloseOperation returned in 1ms: TCloseOperationResp(status=TStatus(errorCode=None, errorMessage=None, sqlState=None, infoMessages=None, statusCode=0))
[13/Jul/2018 10:32:05 +0700] access INFO 10.192.64.252 myuser.test - "POST /notebook/api/autocomplete/ HTTP/1.1"
[13/Jul/2018 10:32:05 +0700] dbms DEBUG Query Server: {'SESSION_TIMEOUT_S': 43200, 'QUERY_TIMEOUT_S': 600, 'server_name': 'impala', 'server_host': 'serverhost.com', 'querycache_rows': 50000, 'server_port': 21050, 'auth_password_used': False, 'impersonation_enabled': True, 'auth_username': 'hue', 'principal': 'impala/serverhost.com'}
[13/Jul/2018 10:32:05 +0700] dbms DEBUG Query Server: {'SESSION_TIMEOUT_S': 43200, 'QUERY_TIMEOUT_S': 600, 'server_name': 'impala', 'server_host': 'serverhost.com', 'querycache_rows': 50000, 'server_port': 21050, 'auth_password_used': False, 'impersonation_enabled': True, 'auth_username': 'hue', 'principal': 'impala/serverhost.com'}
I'm using Tivoli Directory Integrator (TDI) to sync users from Domino LDAP to the local DB2 people database of HCL Connections. On a test installation, I got the following error when trying to initially sync the users:
[root@cnx65 tdisol]# LANG=en_US.utf8 ./sync_all_dns.sh
create synchronization lock
log4j:WARN No appenders could be found for logger (server).
log4j:WARN Please initialize the log4j system properly.
log4j:WARN See http://logging.apache.org/log4j/1.2/faq.html#noconfig for more info.
**********
CLFRN1275I: Begin to hash records in database.
CLFRN1269I: Finish hash records in database.
**********
"message": "CLFRN1254E: An error occurred while performing findEntry: {0}."
"exception": "com.ibm.lconn.profiles.api.tdi.service.TDIException: CLFRN1254E: An error occurred while performing findEntry: {0}."
Synchronize of Database Repository failed
HCL's documentation recommends checking the logs in case of CLFRN1254E. The file logs/SyncUpdates.log contains the following exception:
2020-01-21 07:50:03,803 INFO [org.apache.log4j.DailyRollingFileAppender.7431103d-4d0a-4d63-bdb7-61e274f23ed4] - CTGDIS092I Use entry provided at runtime as work entry (first pass only).
2020-01-21 07:50:11,723 ERROR [org.apache.log4j.DailyRollingFileAppender.7431103d-4d0a-4d63-bdb7-61e274f23ed4] - [hash_db_entries] CTGDIS181E Error while evaluating the hook 'Function error' in the component 'hash_db_entries (hash_db_entries.functioncall_fail).
com.ibm.lconn.profiles.api.tdi.service.TDIException: CLFRN1254E: An error occurred while executing findEntry: {0}.
at com.ibm.lconn.profiles.api.tdi.connectors.ProfileConnector$ProfileCodeBlock.handleRecoverable(ProfileConnector.java:1063)
at com.ibm.lconn.profiles.api.tdi.connectors.Util.TDICodeRunner.run(TDICodeRunner.java:41)
at com.ibm.lconn.profiles.api.tdi.connectors.ProfileConnector.getNextEntry(ProfileConnector.java:155)
at com.ibm.di.server.AssemblyLineComponent.executeOperation(AssemblyLineComponent.java:3370)
at com.ibm.di.server.AssemblyLineComponent.getnext(AssemblyLineComponent.java:932)
at com.ibm.di.server.AssemblyLine.msGetNextIteratorEntry(AssemblyLine.java:3689)
at com.ibm.di.server.AssemblyLine.executeMainStep(AssemblyLine.java:3388)
at com.ibm.di.server.AssemblyLine.executeMainLoop(AssemblyLine.java:3000)
at com.ibm.di.server.AssemblyLine.executeMainLoop(AssemblyLine.java:2983)
at com.ibm.di.server.AssemblyLine.executeAL(AssemblyLine.java:2952)
at com.ibm.di.server.AssemblyLine.run(AssemblyLine.java:1319)
Caused by: org.springframework.jdbc.BadSqlGrammarException: SqlMapClient operation; bad SQL grammar []; nested exception is com.ibatis.common.jdbc.exception.NestedSQLException:
--- The error occurred while applying a parameter map.
--- Check the TDIProfile.get-InlineParameterMap.
--- Check the statement (query failed).
--- Cause: com.ibm.db2.jcc.c.SqlException: DB2 SQL error: SQLCODE: -551, SQLSTATE: 42501, SQLERRMC: LCUSER;SELECT;EMPINST.EMPLOYEE
at org.springframework.jdbc.support.SQLStateSQLExceptionTranslator.doTranslate(SQLStateSQLExceptionTranslator.java:97)
at org.springframework.jdbc.support.AbstractFallbackSQLExceptionTranslator.translate(AbstractFallbackSQLExceptionTranslator.java:72)
at org.springframework.jdbc.support.AbstractFallbackSQLExceptionTranslator.translate(AbstractFallbackSQLExceptionTranslator.java:80)
at org.springframework.jdbc.support.AbstractFallbackSQLExceptionTranslator.translate(AbstractFallbackSQLExceptionTranslator.java:80)
at org.springframework.orm.ibatis.SqlMapClientTemplate.execute(SqlMapClientTemplate.java:212)
at org.springframework.orm.ibatis.SqlMapClientTemplate.executeWithListResult(SqlMapClientTemplate.java:249)
at org.springframework.orm.ibatis.SqlMapClientTemplate.queryForList(SqlMapClientTemplate.java:296)
at com.ibm.lconn.profiles.internal.service.store.sqlmapdao.TDIProfileSqlMapDao.get(TDIProfileSqlMapDao.java:50)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:88)
What could be the problem? How can I find out more about why this error occurs?
What I already tried
Increase log level
In profiles_tdi.properties I enabled debug logs for every component:
debug_collect=true
debug_draft=true
debug_fill_codes=true
debug_managers=true
debug_photos=true
debug_pronounce=true
debug_special=true
debug_update_profile=true
trace_profile_tdi_javascript=on
Since this had no effect, I set the log4j level to debug for the entire application in etc/log4j.properties:
log4j.rootCategory=DEBUG, Default
I also tried ALL instead of DEBUG. However, there is no change in the output. I expected to see the SQL query that caused the exception.
Set mode in properties
According to this post, the mode attribute is used to decide whether a user is internal or external. Since the example config says
Actually, any string other than "external" is interpreted as employee.
it is set to mode=memberType. I also tried mode=uid and mode=mail. Both are fields containing a string not equal to "external", so this should result in all members being imported as internal users.
Sync single users
Since my LDAP filter matches around 60 users, I ran ./collect_dns.sh successfully and removed all users from the collect.dns file except my own. Then I synced the user from the dn file with ./populate_from_dn_file.sh. I did this for two other users as well, always resulting in the same error:
CLFRN0027I: After operation, success records is 0, duplicate records 0, failure records is 1, and last successful entry is null.
CLFRN1280I: 20200121105123 Iterations total number: 1.
The only difference is that logs/PopulateDBFromDNFile.log contains more detailed information about the fetched attributes, mappings, and so on. Unfortunately, it doesn't really help with the error, since it produces a similar message:
2020-01-21 10:55:27,530 INFO [com.ibm.di.log.FileRollerAppender.268b5e1d-d0fc-4a7c-9e12-4d742c44faa5] - [callSyncDB_mod] [add_manager_data] [setup_if_lookup] CTGDIS126I Return false.
2020-01-21 10:55:27,530 INFO [com.ibm.di.log.FileRollerAppender.268b5e1d-d0fc-4a7c-9e12-4d742c44faa5] - [callSyncDB_mod] [add_manager_data] [setup_if_lookup] CTGDIS123I Returned object class java.lang.Boolean.
2020-01-21 10:55:27,530 INFO [com.ibm.di.log.FileRollerAppender.268b5e1d-d0fc-4a7c-9e12-4d742c44faa5] - [callSyncDB_mod] [add_manager_data] CTGDIS075I Trying to exit TaskCallBlock.
2020-01-21 10:55:27,531 INFO [com.ibm.di.log.FileRollerAppender.268b5e1d-d0fc-4a7c-9e12-4d742c44faa5] - [callSyncDB_mod] [add_manager_data] CTGDIS076I Succeeded exiting TaskCallBlock.
2020-01-21 10:55:27,531 INFO [com.ibm.di.log.FileRollerAppender.268b5e1d-d0fc-4a7c-9e12-4d742c44faa5] - [callSyncDB_mod] [add_manager_data] CTGDIS057I Hook after_functioncall not enabled.
2020-01-21 10:55:27,531 INFO [com.ibm.di.log.FileRollerAppender.268b5e1d-d0fc-4a7c-9e12-4d742c44faa5] - [callSyncDB_mod] CTGDIS352I Use null Behavior for $_already_lookup_manager.
2020-01-21 10:55:27,531 INFO [com.ibm.di.log.FileRollerAppender.268b5e1d-d0fc-4a7c-9e12-4d742c44faa5] - [callSyncDB_mod] CTGDIS351I Map Attribute $manager_uid [1].
2020-01-21 10:55:27,531 INFO [com.ibm.di.log.FileRollerAppender.268b5e1d-d0fc-4a7c-9e12-4d742c44faa5] - [callSyncDB_mod] CTGDIS353I Script is: conn["$manager_uid"]
2020-01-21 10:55:27,531 INFO [com.ibm.di.log.FileRollerAppender.268b5e1d-d0fc-4a7c-9e12-4d742c44faa5] - [callSyncDB_mod] CTGDIS352I Use null Behavior for $manager_uid.
2020-01-21 10:55:27,531 INFO [com.ibm.di.log.FileRollerAppender.268b5e1d-d0fc-4a7c-9e12-4d742c44faa5] - [callSyncDB_mod] [add_manager_data] CTGDIS057I Hook functioncall_ok not enabled.
2020-01-21 10:55:27,531 INFO [com.ibm.di.log.FileRollerAppender.268b5e1d-d0fc-4a7c-9e12-4d742c44faa5] - [callSyncDB_mod] [add_manager_data] CTGDIS057I Hook default_ok not enabled.
2020-01-21 10:55:27,538 INFO [com.ibm.di.log.FileRollerAppender.268b5e1d-d0fc-4a7c-9e12-4d742c44faa5] - [callSyncDB_mod] Result: <My Name of the User in dn file>
2020-01-21 10:55:27,591 ERROR [com.ibm.di.log.FileRollerAppender.268b5e1d-d0fc-4a7c-9e12-4d742c44faa5] - [callSyncDB_mod] [ProfileConnector] SqlMapClient operation; bad SQL grammar []; nested exception is com.ibatis.common.jdbc.exception.NestedSQLException:
--- The error occurred while applying a parameter map.
--- Check the TDIProfile.get-InlineParameterMap.
--- Check the statement (query failed).
--- Cause: com.ibm.db2.jcc.c.SqlException: DB2 SQL error: SQLCODE: -551, SQLSTATE: 42501, SQLERRMC: LCUSER;SELECT;EMPINST.EMPLOYEE
It turned out that this was an unlucky logical mistake on my part. The database is created using SQL files shipped with the Connections installation wizard, which I import automatically in a loop. Since this was very slow (about 30 minutes for all scripts), I tried to parallelize it by adding a & at the end of the command, with a final wait to make sure all scripts were executed:
- name: Check and create non existing DBs for CNX
  become: yes
  become_user: "{{ db2.instance.name }}"
  shell: |
    db={{ item.name }}
    scripts=({{ item.files | join(' ') }})
    existing_dbs=$(echo -e '{{ existing_dbs.stdout }}')
    echo "Check db ${db}"
    if ! echo ${existing_dbs} | grep -q ${db}; then
      echo "DB ${db} doesn't exist, execute scripts"
      for script in "${scripts[@]}"
      do
        echo "${db}: Execute script ${script}"
        {{ db2.target }}/bin/db2 -td@ -f {{ cnx_sql_dir }}/${script} &
      done
      wait
    fi
  register: db_check
  changed_when: "'execute scripts' in db_check.stdout"
  loop: "{{ cnx.db_scripts }}"
cnx.db_scripts is a mapping of database names to SQL files:
db_scripts:
  - name: PEOPLEDB
    files:
      - profiles/db2/createDb.sql
      - profiles/db2/appGrants.sql
  - name: FORUM
    files:
      # - ...
In retrospect, this was a terrible logical mistake because I missed the fact that those scripts rely on each other: when profiles/db2/appGrants.sql is executed before profiles/db2/createDb.sql has finished, it fails because the database doesn't exist yet.
As a result, TDI's queries failed because the database and tables were only partly created. I didn't notice this immediately, since the machine was re-deployed several times during the development of the Ansible playbook. Strangely, TDI failed in only 2 of 10 deployments. It seems that DB2 queues the scripts in some way, and depending on the timing, the people database required for TDI was created successfully on some runs.
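For context: SQLCODE -551 with SQLSTATE 42501 is DB2's missing-privilege error, and the SQLERRMC tokens LCUSER;SELECT;EMPINST.EMPLOYEE say that the authorization ID LCUSER lacks the SELECT privilege on EMPINST.EMPLOYEE, which is exactly what happens when appGrants.sql never ran against a fully created people database. A sketch for checking and, if necessary, re-applying the grant by hand (assuming LCUSER is the technical user that appGrants.sql normally grants to; adjust names to your environment):

db2 CONNECT TO PEOPLEDB
db2 "SELECT GRANTEE, GRANTEETYPE, SELECTAUTH FROM SYSCAT.TABAUTH WHERE TABSCHEMA = 'EMPINST' AND TABNAME = 'EMPLOYEE'"
db2 "GRANT SELECT ON TABLE EMPINST.EMPLOYEE TO USER LCUSER"

The durable fix, however, was simply to drop the parallelization again and run the scripts of each database strictly in order, without & and wait:

for script in "${scripts[@]}"
do
  echo "${db}: Execute script ${script}"
  # blocking call: appGrants.sql only starts after createDb.sql has finished
  {{ db2.target }}/bin/db2 -td@ -f {{ cnx_sql_dir }}/${script}
done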
In SAP HANA I am trying to call a stored procedure with a table type as an input parameter.
Other input parameters work just fine, but as soon as I use a table type I get the error:
Failed to execute action: InternalError: dberror($.hdb.Connection.executeProcedure): 258 - SQL error, server error code: 258. insufficient privilege: Not authorized at /sapmnt/ld7272/a/HDB/jenkins_prod/workspace/8uyiojyvla/s/ptime/query/checker/query_check.cc:4003
How can I fix or debug this?
The indexserver trace contains:
[19984]{315590}[100/100235487] 2018-08-22 10:07:13.949679 i TraceContext TraceContext.cpp(01028) : UserName=SAPDBCTRL, ApplicationUserName=SM_EFWK, ApplicationName=ABAP:AS2, ApplicationSource=CL_SQL_STATEMENT==============CP:304, Client=010, StatementHash=31c1e1f5ca72868a541d58fc5a77596b, EppRootContextId=0050560204981EE782C14A33A16BC68E, EppTransactionId=47BF1E2CEE9D05A0E005B7CF04FCF981, EppConnectionId=5B7C13CC22061B08E10000000A1807AF, EppConnectionCounter=1, EppComponentName=AS2/sapas2ci_AS2_01, EppAction=EFWK RESOURCE MANAGER
[19984]{315590}[100/100235487] 2018-08-22 10:07:13.949656 w SQLScriptExecuto se_eapi_proxy.cc(00144) : Error <exception 71000258: Not authorized
> in preparation of internal statement: delete from _SYS_STATISTICS.STATISTICS_PROPERTIES where key='internal.check.store_results'
[19984]{315590}[100/100235487] 2018-08-22 10:07:13.949904 e SQLScript trex_llvm.cc(00936) : Llang Runtime Error: Exception::SQLException258: insufficient privilege: Not authorized
at main (line 63) ("_SYS_STATISTICS"."SHARED_STORE_USED_VALUES": line 8 col 5 (at pos 456))
This seems rather straightforward:
The application user (the person using SAP NetWeaver) SM_EFWK logged on in client 010 is trying to delete data from an SAP HANA statistics service table _SYS_STATISTICS.STATISTICS_PROPERTIES.
The NetWeaver/ABAP program uses a secondary database connection with the database user SAPDBCTRL.
The error Exception::SQLException258: insufficient privilege: Not authorized is thrown because this SAPDBCTRL database user does not have the DELETE privilege on this table assigned to it (neither directly, nor via a schema or role privilege).
If the SQL command is part of an SAP standard program, then I'd check that the recommended setup has been implemented correctly.
If this command comes from a custom program, you may want to either assign the privilege or use a different technical user, as SAPDBCTRL is an SAP standard user that shouldn't be modified.
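If assigning the privilege turns out to be the right option, a minimal sketch (to be run by a user with the authority to grant on _SYS_STATISTICS; whether to grant on the single table or the whole schema is a security design choice for your landscape):

-- grant only the missing object privilege ...
GRANT DELETE ON "_SYS_STATISTICS"."STATISTICS_PROPERTIES" TO SAPDBCTRL;
-- ... or, more broadly, on the whole statistics schema
-- GRANT DELETE ON SCHEMA "_SYS_STATISTICS" TO SAPDBCTRL;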
I am trying to list SAP HANA tables from the SQL Editor of the SAP HANA Vora tools like below:
show tables
using com.sap.spark.hana
options
(
host "192.168.88.200",
instance "00",
port "30215",
user "SYSTEM",
passwd "Passw0rd",
dbschema "LEAGUE_SCHEMA"
);
but this error appears:
com.sap.spark.hana.client.HANAJdbcBadStateException:
[DefaultHANAConfiguration(192.168.88.200,00,30215,SYSTEM,Passw0rd,None)]
Cannot acquire a connection with error code 0, status ERROR_STATUS
The host, instance, port, user, and passwd parameters are correct, and the dbschema has been created successfully in SAP HANA.
What could be the error?
Thanks for the support!
Omit the 'port' parameter if it is not a multi-tenant HANA.
Have you verified from the command line that you can ping and reach the IP address 192.168.88.200?
Also, in HANA the SQL port is typically 3{InstanceNumber}15, so if the instance is in fact 00, the port should be 30015.
If you are working with tenant databases, add the tenantdatabase parameter:
SHOW TABLES USING com.sap.spark.hana
OPTIONS (
host "a.b.c.d",
instance "90",
user "SYSTEM",
passwd "password",
tenantdatabase "SYSTEMDB",
dbschema "SYSTEM",
tablepattern "%"
);
I use Hue to execute the Hive statement show tables; and everything is OK.
But when I execute select * from tablea limit 1; I get this exception:
java.net.SocketTimeoutException:callTimeout=60000, callDuration=68043:
row 'log,,00000000000000' on table 'hbase:meta' at
region=hbase:meta,,1.1588230740, hostname=node4,16020,1476410081203,
seqNum=0:5:1",
'org.apache.hadoop.hbase.client.RpcRetryingCaller:callWithRetries:RpcRetryingCaller.java:159',
'org.apache.hadoop.hbase.client.ResultBoundedCompletionService$QueueingFuture:run:ResultBoundedCompletionService.java:64',
'*org.apache.hadoop.hbase.exceptions.ConnectionClosingException:Call
to node4/192.168.127.1:16020 failed on local exception:
org.apache.hadoop.hbase.exceptions.ConnectionClosingException:
Connection to node4/192.168.127.1:16020 is closing. Call id=9,
waitTime=1:16:11',
'org.apache.hadoop.hbase.ipc.RpcClientImpl:wrapException:RpcClientImpl.java:1239',
'org.apache.hadoop.hbase.ipc.RpcClientImpl:call:RpcClientImpl.java:1210',
'org.apache.hadoop.hbase.ipc.AbstractRpcClient:callBlockingMethod:AbstractRpcClient.java:213',
'org.apache.hadoop.hbase.ipc.AbstractRpcClient$BlockingRpcChannelImplementation:callBlockingMethod:AbstractRpcClient.java:287',
'org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$BlockingStub:scan:ClientProtos.java:32651',
'org.apache.hadoop.hbase.client.ScannerCallable:openScanner:ScannerCallable.java:372',
'org.apache.hadoop.hbase.client.ScannerCallable:call:ScannerCallable.java:199',
'org.apache.hadoop.hbase.client.ScannerCallable:call:ScannerCallable.java:62',
'org.apache.hadoop.hbase.client.RpcRetryingCaller:callWithoutRetries:RpcRetryingCaller.java:200',
'org.apache.hadoop.hbase.client.ScannerCallableWithReplicas$RetryingRPC:call:ScannerCallableWithReplicas.java:369',
'org.apache.hadoop.hbase.client.ScannerCallableWithReplicas$RetryingRPC:call:ScannerCallableWithReplicas.java:343',
'org.apache.hadoop.hbase.client.RpcRetryingCaller:callWithRetries:RpcRetryingCaller.java:126',
'*org.apache.hadoop.hbase.exceptions.ConnectionClosingException:Connection
to node4/192.168.127.1:16020 is closing. Call id=9, waitTime=1:3:2',
'org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection:cleanupCalls:RpcClientImpl.java:1037',
'org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection:close:RpcClientImpl.java:844',
'org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection:run:RpcClientImpl.java:572'],
statusCode=3), results=None, hasMoreRows=None)
In the configuration file hive-site.xml, set hive.server2.enable.doAs to false:
<property>
<name>hive.server2.enable.doAs</name>
<value>false</value>
</property>
Here true means the Hadoop job is executed as the user who logs in to HiveServer2, while false means it is executed as the user who started HiveServer2 (typically the hive service user).
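With doAs disabled, the HBase scan therefore runs as the HiveServer2 service user, which must be able to read the underlying HBase table. If HBase authorization is enabled, a sketch of the grant in the HBase shell (the table name log is taken from the error above; the service user name hive is an assumption):

# run as an HBase administrator; R = read, X = execute endpoints
grant 'hive', 'RX', 'log'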
Configuration (Hortonworks):
hive: BUILD hive-1.2.1.2.3.0.0
Hadoop 2.7.1.2.3.0.0-2557
I'm trying to execute
lock table event_metadata EXCLUSIVE;
Hive response:
Error while processing statement: FAILED: Execution Error, return code 1 from org.apache.hadoop.hive.ql.exec.DDLTask. Current transaction manager does not support explicit lock requests. Transaction manager: org.apache.hadoop.hive.ql.lockmgr.DbTxnManager
In the code there is an obvious place where explicit locks are disabled:
http://grepcode.com/file/repo1.maven.org/maven2/org.apache.hive/hive-exec/1.2.0/org/apache/hadoop/hive/ql/lockmgr/DbTxnManager.java#DbTxnManager
@Override
public boolean supportsExplicitLock() {
  return false;
}
Questions:
How can I make explicit locks work? In what version of Hive did they appear?
Here is an example for Cloudera showing that explicit locks work: http://www.ericlin.me/how-table-locking-works-in-hive
You may set the concurrency parameter on the fly:
set hive.support.concurrency=true;
After this you may try executing your command
Hive includes a locking feature that uses Apache Zookeeper for locking. Zookeeper implements highly reliable distributed coordination. Other than some additional setup and configuration steps, Zookeeper is invisible to Hive users.
In the $HIVE_HOME/hive-site.xml file, set the following properties:
<property>
<name>hive.zookeeper.quorum</name>
<value>zk1.site.pvt,zk2.site.pvt,zk3.site.pvt</value>
<description>The list of zookeeper servers to talk to.
This is only needed for read/write locks.
</description>
</property>
<property>
<name>hive.support.concurrency</name>
<value>true</value>
<description>Whether Hive supports concurrency or not.
A Zookeeper instance must be up and running for the default Hive lock manager to support read-write locks.</description>
</property>
After restarting Hive, run the command:
hive> lock table event_metadata EXCLUSIVE;
Reference: Programming Hive, O'Reilly
EDIT:
DummyTxnManager.java, which provides the default Hive behavior, has:
@Override
public boolean supportsExplicitLock() {
  return true;
}
DummyTxnManager replicates the pre-Hive-0.13 behavior and doesn't support transactions, whereas DbTxnManager.java, which stores the transactions in the metastore database, has:
@Override
public boolean supportsExplicitLock() {
  return false;
}
Try the following:
set hive.txn.manager=org.apache.hadoop.hive.ql.lockmgr.DummyTxnManager;
unlock table tablename;
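Putting the pieces together, a session-level sketch (assuming the ZooKeeper-based default lock manager is configured as shown above; the table name is taken from the question):

set hive.support.concurrency=true;
set hive.txn.manager=org.apache.hadoop.hive.ql.lockmgr.DummyTxnManager;
lock table event_metadata exclusive;
-- inspect and then release the lock when done
show locks event_metadata extended;
unlock table event_metadata;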