I'm trying to run a query using the hplsql command and getting the error below. It seems to be a permission issue; my currently logged-in user is not being considered:
Caused by: org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.security.AccessControlException): Permission denied: user=anonymous, access=WRITE, inode="/user/anonymous/.staging":hdfs:hdfs:drwxrwxr-x
How do I fix this?
You need to specify the username in the hplsql-site.xml file.
The property that needs to be edited is
hplsql.conn.hive2conn
Here's how you can specify the property
<property>
<name>hplsql.conn.hive2conn</name>
<value>org.apache.hive.jdbc.HiveDriver;jdbc:hive2://localhost:10000;username;password</value>
<description>HiveServer2 JDBC connection</description>
</property>
If you don't have a password for the username, you can omit the password after the username.
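For example, assuming a hypothetical user hdfs with no password, the property would become:
<property>
<name>hplsql.conn.hive2conn</name>
<value>org.apache.hive.jdbc.HiveDriver;jdbc:hive2://localhost:10000;hdfs</value>
<description>HiveServer2 JDBC connection</description>
</property>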
I'm trying to connect a Spark application to HBase with Kerberos enabled. The Spark version is 1.5.0, on CDH 5.5.2, and it is executed in YARN cluster mode.
When HbaseContext is initialized, it throws this error:
ERROR ipc.AbstractRpcClient: SASL authentication failed. The most likely cause is missing or invalid credentials. Consider 'kinit'.
javax.security.sasl.SaslException: GSS initiate failed [Caused by GSSException: No valid credentials provided (Mechanism level: Failed to find any Kerberos tgt)]
I have tried to do the authentication in the code, adding:
UserGroupInformation.setConfiguration(config)
UserGroupInformation.loginUserFromKeytab(principalName, keytabFilename)
I distribute the keytab file with the --files option in spark-submit.
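The submit command looks something like this (class and jar names are illustrative):
spark-submit --master yarn-cluster \
  --files /local/path/to/krb5.usercomp.keytab \
  --class com.example.SparkHBaseApp \
  spark-hbase-app.jar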
Now, the error is:
java.io.IOException: Login failure for usercomp@COMPANY.CORP from keytab krb5.usercomp.keytab: javax.security.auth.login.LoginException: Unable to obtain password from user
...
Caused by: javax.security.auth.login.LoginException: Unable to obtain password from user
at com.sun.security.auth.module.Krb5LoginModule.promptForPass(Krb5LoginModule.java:856)
Is this the way to connect to Kerberized HBase from a Spark app?
Please see the example configuration below, in case you are missing anything such as hadoop.security.authentication:
import org.apache.hadoop.hbase.HBaseConfiguration

// Kerberos-enabled HBase client configuration
val conf = HBaseConfiguration.create()
conf.set("hbase.zookeeper.quorum", "list of ip's")
conf.set("hbase.zookeeper.property.clientPort", "2181")
conf.set("hbase.master", "masterIP:60000")
conf.set("hadoop.security.authentication", "kerberos")
Actually try to put your hbase-site.xml directly in the SPARK_CONF directory of your edge node (should be something like /etc/spark/conf or /etc/spark2/conf).
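For example (assuming the usual HBase config location on the edge node):
sudo cp /etc/hbase/conf/hbase-site.xml /etc/spark/conf/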
You can use loginUserFromKeytabAndReturnUGI and ugi.doAs.
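A minimal sketch of that approach, reusing the principalName and keytabFilename values from the question:
import java.security.PrivilegedExceptionAction
import org.apache.hadoop.security.UserGroupInformation

val ugi = UserGroupInformation.loginUserFromKeytabAndReturnUGI(principalName, keytabFilename)
ugi.doAs(new PrivilegedExceptionAction[Void] {
  override def run(): Void = {
    // put the HBase calls that need the Kerberos credentials here
    null
  }
})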
Or you could add your HBase classpath to SPARK_DIST_CLASSPATH.
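For example, using the output of the hbase classpath command:
export SPARK_DIST_CLASSPATH=$(hbase classpath)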
I want to configure LDAP for my portal. I have added the connection details such as:
Connection
Base Provider URL : ldap://SBS.ecompany.local:300 (example)
Base DN: ecompany.local
Principal : myldap username
Credentials: myldap password.
I also did the following mappings:
Screen Name : sn
Email Address: mail
Password: userPassword
First Name: givenName
Middle Name
Last Name: sn
Full Name: givenName sn
Job Title : title
I checked for the connection, and I got the following message:
Liferay has successfully connected to the LDAP server.
When I checked for Test LDAP Users I got the following message:
Test LDAP Users: A subset of users has been displayed for you to review.
No users were found.
(Might be because I did not provide the LDAP admin username and password.)
But when I tried to log in using the LDAP configuration, I was not able to, and got this error:
09:38:33,808 ERROR [liferay/scheduler_dispatch-5][PortalLDAPImporterImpl:210] Error importing LDAP users and groups
javax.naming.directory.InvalidSearchFilterException: Empty filter; remaining name 'ecompany.local'
at com.sun.jndi.ldap.Filter.encodeFilterString(Filter.java:57)
at com.sun.jndi.ldap.LdapClient.search(LdapClient.java:548)
at com.sun.jndi.ldap.LdapCtx.doSearch(LdapCtx.java:1985)
at com.sun.jndi.ldap.LdapCtx.searchAux(LdapCtx.java:1844)
at com.sun.jndi.ldap.LdapCtx.c_search(LdapCtx.java:1769)
at com.sun.jndi.toolkit.ctx.ComponentDirContext.p_search(ComponentDirContext.java:392)
at com.sun.jndi.toolkit.ctx.PartialCompositeDirContext.search(PartialCompositeDirContext.java:358)
at com.sun.jndi.toolkit.ctx.PartialCompositeDirContext.search(PartialCompositeDirContext.java:341)
at javax.naming.directory.InitialDirContext.search(InitialDirContext.java:267)
at com.liferay.portal.security.ldap.PortalLDAPUtil.searchLDAP(PortalLDAPUtil.java:820)
at com.liferay.portal.security.ldap.PortalLDAPUtil.getUsers(PortalLDAPUtil.java:617)
at com.liferay.portal.security.ldap.PortalLDAPUtil.getUsers(PortalLDAPUtil.java:652)
at com.liferay.portal.security.ldap.PortalLDAPImporterImpl.importFromLDAPByUser(PortalLDAPImporterImpl.java:695)
at com.liferay.portal.security.ldap.PortalLDAPImporterImpl.importFromLDAP(PortalLDAPImporterImpl.java:203)
at com.liferay.portal.security.ldap.PortalLDAPImporterImpl.importFromLDAP(PortalLDAPImporterImpl.java:139)
at com.liferay.portal.security.ldap.PortalLDAPImporterUtil.importFromLDAP(PortalLDAPImporterUtil.java:43)
at com.liferay.portlet.admin.messaging.LDAPImportMessageListener.doImportOnStartup(LDAPImportMessageListener.java:38)
at com.liferay.portlet.admin.messaging.LDAPImportMessageListener.doReceive(LDAPImportMessageListener.java:48)
at com.liferay.portal.kernel.messaging.BaseMessageListener.receive(BaseMessageListener.java:26)
at sun.reflect.GeneratedMethodAccessor405.invoke(Unknown Source)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:483)
at com.liferay.portal.kernel.bean.ClassLoaderBeanHandler.invoke(ClassLoaderBeanHandler.java:67)
at com.sun.proxy.$Proxy303.receive(Unknown Source)
at com.liferay.portal.kernel.scheduler.messaging.SchedulerEventMessageListenerWrapper.receive(SchedulerEventMessageListenerWrapper.java:77)
at com.liferay.portal.kernel.messaging.InvokerMessageListener.receive(InvokerMessageListener.java:72)
at com.liferay.portal.kernel.messaging.ParallelDestination$1.run(ParallelDestination.java:69)
at com.liferay.portal.kernel.concurrent.ThreadPoolExecutor$WorkerTask._runTask(ThreadPoolExecutor.java:682)
at com.liferay.portal.kernel.concurrent.ThreadPoolExecutor$WorkerTask.run(ThreadPoolExecutor.java:593)
at java.lang.Thread.run(Thread.java:745)
The error is because of the DN. Can somebody help me solve this issue?
I got it working; it was another issue. The problem was with the Authentication Search Filter: I had given it in the wrong format. The actual format is Authentication Search Filter : (mail=@email_address@).
Next, the DN format is dc=ecompany,dc=local.
The problem of ldap configuration was solved.
But then I changed the search filter from mail to screen name, i.e. Authentication Search Filter : (sAMAccountName=@screen_name@), and also changed "How do users authenticate?" to "By screen name". Now I am not able to log in using the LDAP screen name and password. I checked the log files and did not find any error. Please help.
We have put these entries in hive-site.xml:
hive.server2.authentication : KERBEROS
hive.server2.authentication.kerberos.keytab : /tmp/hive.keytab
hive.server2.authentication.kerberos.principal : hive/FQDN of the hive VM@xxxxxxxx.COM
Using the kinit command on the Hive VM, we have verified that the Kerberos principal and the keytab file are valid:
kinit -t FILE:/tmp/hive.keytab -k hive/FQDN of the hive VM@xxxxxxxx.COM
Then if we do,
klist
it shows the same principal in the ticket cache as the default principal.
But when we try to start HiveServer2 using:
sudo service hive-server2 start
it throws the exception :
Starting HiveServer2
javax.security.auth.login.LoginException: Kerberos principal should have 3 parts: hive
at org.apache.hive.service.auth.HiveAuthFactory.getAuthTransFactory(HiveAuthFactory.java:127)
at org.apache.hive.service.cli.thrift.ThriftCLIService.run(ThriftCLIService.java:505)
at java.lang.Thread.run(Thread.java:679)
When we try to start the service (using ./hiveserver2) with any other logged-in user, say User123, it throws the same exception with:
Starting HiveServer2
javax.security.auth.login.LoginException: Kerberos principal should have 3 parts: User123
at org.apache.hive.service.auth.HiveAuthFactory.getAuthTransFactory(HiveAuthFactory.java:127)
at org.apache.hive.service.cli.thrift.ThriftCLIService.run(ThriftCLIService.java:505)
at java.lang.Thread.run(Thread.java:679)
Shouldn't the Kerberos principal be picked up from hive-site.xml and not from the login user? Are we missing something?
--
I have created a principal hive/FQDN of the hive VM@xxxxxxxx.COM in advance and created a keytab file for it.
We are on CDH 4.7 (not installed using CM), OEL 6, and Kerberos 5.
Kerberos security should be configured for HDFS and MapReduce too, not just Hive.
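As a minimal sketch (standard Hadoop property names; values illustrative), the cluster-wide core-site.xml should at least enable Kerberos:
<property>
<name>hadoop.security.authentication</name>
<value>kerberos</value>
</property>
<property>
<name>hadoop.security.authorization</name>
<value>true</value>
</property>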
I've downloaded EMM 1.1.0 and configured a VM with all the prerequisites to run it.
Since I'm working from local machines and the VM is an Ubuntu server setup, I have replaced all of the localhost entries in the config files with the proper domain name so it can be reached.
When I point my browser to https://mydomain.com:9443 I am able to log in to Carbon and change usernames.
However, when I go to https://mydomain.com:9443/emm/ it asks me to log in again... when I do, I get the following errors:
500: Something has gone wrong (very helpful!)
In the console/log file I captured the following:
[2014-06-24 10:06:34,041] INFO {org.wso2.carbon.core.services.util.CarbonAuthenticationUtil} - 'admin#carbon.super [-1234]' logged in at [2014-06-24 10:06:34,041+0800]
[2014-06-24 10:06:34,321] INFO {JAGGERY.modules.common:js} - New connection was taken
[2014-06-24 10:06:34,618] WARN {org.wso2.carbon.core.services.util.CarbonAuthenticationUtil} - Failed Administrator login attempt 'admin[-1234]' at [2014-06-24 10:06:34,618+0800]
[2014-06-24 10:06:34,630] ERROR {org.wso2.carbon.apimgt.hostobjects.APIProviderHostObject} - Login failed! Please recheck the username and password and try again..
[2014-06-24 10:06:35,154] WARN {org.wso2.carbon.core.services.util.CarbonAuthenticationUtil} - Failed Administrator login attempt 'admin[-1234]' at [2014-06-24 10:06:35,154+0800]
[2014-06-24 10:06:35,156] ERROR {org.wso2.carbon.apimgt.hostobjects.APIStoreHostObject} - Login failed! Please recheck the username and password and try again.
[2014-06-24 10:06:35,326] ERROR {org.jaggeryjs.jaggery.core.manager.WebAppManager} - org.mozilla.javascript.EcmaError: TypeError: Cannot read property "prodConsumerKey" from undefined (/emm/modules/startup.js#59)
org.jaggeryjs.scriptengine.exceptions.ScriptException: org.mozilla.javascript.EcmaError: TypeError: Cannot read property "prodConsumerKey" from undefined (/emm/modules/startup.js#59)
at org.jaggeryjs.scriptengine.engine.RhinoEngine.execScript(RhinoEngine.java:571)
at org.jaggeryjs.scriptengine.engine.RhinoEngine.exec(RhinoEngine.java:273)
at org.jaggeryjs.jaggery.core.manager.WebAppManager.execute(WebAppManager.java:447)
at org.jaggeryjs.jaggery.core.JaggeryServlet.doPost(JaggeryServlet.java:29)
at javax.servlet.http.HttpServlet.service(HttpServlet.java:755)
at javax.servlet.http.HttpServlet.service(HttpServlet.java:848)
at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:305)
at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:210)
at org.apache.catalina.core.ApplicationDispatcher.invoke(ApplicationDispatcher.java:749)
... many many more
Can someone please point me in the right direction?
If the admin password is changed, you have to modify the api-manager config file.
A good practice is to create a new user for the API Manager.
This misconfiguration also causes a blank page after EMM authentication in multi-tenancy.
/repository/conf/api-manager.xml
<!--
Authentication manager configuration for API publisher and API store. This is
a required configuration for both web applications as their user authentication
logic relies on this.
-->
<AuthManager>
<!--
Server URL of the Authentication service
-->
<ServerURL>https://${carbon.local.ip}:${mgt.transport.https.port}/services/</ServerURL>
<!--
Admin username for the Authentication manager.
-->
<Username>apiuser</Username>
<!--
Admin password for the Authentication manager.
-->
<Password>StrongPassword</Password>
</AuthManager>
If you have changed the username and password of the admin user, you will have to change them in the config.json file found in wso2emm-1.1.0\repository\deployment\server\jaggeryapps\emm\config. Just update the username and password in the apiManagerConfigurations section and restart the EMM server.
There is a public JIRA regarding this[1].
The workaround is:
You have to set the admin username and password to admin and admin when first logging in to EMM.
If you have changed the password of admin, please set the password back to admin.
You can use the chpasswd.sh/chpasswd.bin file in the bin folder to change the password.
Eg:
./chpasswd.sh --db-url "jdbc:h2:/repository/database/WSO2CARBON_DB" --db-username wso2carbon --db-password wso2carbon --username admin --new-password admin
Once you have logged in to EMM for the first time, please use the above command to change the password again.
[1]. https://wso2.org/jira/browse/EMM-704
I used MySQL as my DB. The documentation tells you to place the connector file in ${CARBON_HOME}/repository/components/lib.
Running
${CARBON_HOME}/bin/chpasswd.sh --db-url jdbc:mysql://ip:3306/wso2emm_db --db-username user_db --db-password pass --username admin --new-password admin
I got this error:
java.sql.SQLException: No suitable driver found for jdbc:mysql://
Copying the connector file to ${CARBON_HOME}/repository/lib solved my issue.
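In other words (the connector jar name is illustrative):
cp mysql-connector-java-*.jar ${CARBON_HOME}/repository/lib/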
I just installed Hive and MySQL and copied the MySQL connector to the HIVE_HOME/lib folder, but when I try the show databases and create table commands at the hive> prompt, I get the error below:
create database saty;
FAILED: Error in metadata: java.lang.RuntimeException: Unable to instantiate org.apache.hadoop.hive.metastore.HiveMetaStoreClient
FAILED: Execution Error, return code 1 from org.apache.hadoop.hive.ql.exec.DDLTask
My hive-site.xml is:
<property>
<name>javax.jdo.option.ConnectionURL</name>
<value>jdbc:mysql://localhost:3306/hadoop?createDatabaseIfNotExist=true</value>
<description>JDBC connect string for a JDBC metastore</description>
</property>
<property>
<name>javax.jdo.option.ConnectionDriverName</name>
<value>com.mysql.jdbc.Driver</value>
<description>Driver class name for a JDBC metastore</description>
</property>
<property>
<name>hive.metastore.warehouse.dir</name>
<value>/user/hive/warehouse</value>
<description>location of default database for the warehouse</description>
</property>
I don't have a directory called /user/hive/warehouse in my file system; I created the path with the mkdir command and tried again after a reboot, but I am still getting the error.
Regards,
Satya
Try specifying these two properties:
<property>
<name>javax.jdo.option.ConnectionUserName</name>
<value>username</value>
<description>Username to use against metastore database</description>
</property>
<property>
<name>javax.jdo.option.ConnectionPassword</name>
<value>password</value>
<description>Password to use against metastore database</description>
</property>
Similarly, the username of your MySQL login should have permission to access the database specified in the JDBC connection string. This can be achieved using the following command:
GRANT ALL ON Databasename.* TO 'username'@'%' IDENTIFIED BY 'password';
The answer is located at http://www.cloudera.com/content/cloudera-content/cloudera-docs/CDH5/5.0/CDH5-Installation-Guide/cdh5ig_hive_schema_tool.html
To suppress the schema check and allow the metastore to implicitly modify the schema, you need to set the hive.metastore.schema.verification configuration property to false in hive-site.xml.
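That is, in hive-site.xml:
<property>
<name>hive.metastore.schema.verification</name>
<value>false</value>
<description>Disable the metastore schema version check</description>
</property>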
Reconfigure your Hive logging using -hiveconf hive.root.logger=WARN,console, then find the detailed reason why you could not instantiate your Hive metastore client.
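For example, when starting the Hive CLI:
hive -hiveconf hive.root.logger=WARN,console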
The problem I met was a wrong MySQL configuration; the error message was "Binary logging not possible. Message: Transaction level 'READ-COMMITTED' in InnoDB is not safe for binlog mode 'STATEMENT'". When I changed binlog_format from "statement" to "mixed", the Hive metastore client instantiated successfully.
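For reference, that setting lives in the MySQL server configuration file (my.cnf), under the [mysqld] section, followed by a MySQL restart:
[mysqld]
binlog_format=mixed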
Hope it works for you.
I had a similar issue, and the cause in my case was SELinux, which prevented Postgres from running properly.
I inserted the following line as the first line of /etc/rc3.d/S64postgresql:
echo 0 > /selinux/enforce # Disable selinux temporarily
and restarted the node with the hive metastore.
So generally you can check two things:
Check whether the DB for the metastore is running properly
Check whether the user/password from the properties are correct
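For example, with the MySQL metastore from the question (database hadoop on localhost; the exact service name may differ by distro):
# 1. Is the metastore database running?
service mysqld status
# 2. Do the credentials from hive-site.xml work?
mysql -u username -p -h localhost hadoop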