Installing Hive: "Class path contains multiple SLF4J bindings"

I am trying to install Hive on Ubuntu 20.04 following this tutorial, but when I run $HIVE_HOME/bin/schematool –initSchema –dbType derby it gives me the following error:
SLF4J: Found binding in [jar:file:/home/hadoop/apache-hive-3.1.2-bin/lib/log4j-slf4j-impl-2.10.0.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/usr/local/hadoop/share/hadoop/common/lib/slf4j-log4j12-1.7.25.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
SLF4J: Actual binding is of type [org.apache.logging.slf4j.Log4jLoggerFactory]
HiveSchemaTool:Parsing failed. Reason: Missing required option: [-help print this message, -alterCatalog Alter a catalog, requires --catalogLocation and/or --catalogDescription parameter as well, -initSchemaTo Schema initialization to a version, -upgradeSchemaFrom Schema upgrade from a version, -moveDatabase Move a database between catalogs. Argument is the database name. Requires --fromCatalog and --toCatalog parameters as well, -moveTable Move a table to a different database. Argument is the table name. Requires --fromCatalog, --toCatalog, --fromDatabase, and --toDatabase parameters as well., -initSchema Schema initialization, -createCatalog Create a catalog, requires --catalogLocation parameter as well, -upgradeSchema Schema upgrade, -info Show config and schema details, -validate Validate the database]
usage: schemaTool
-alterCatalog <arg> Alter a catalog, requires
--catalogLocation and/or
--catalogDescription parameter as well
-catalogDescription <arg> Description of new catalog
-catalogLocation <arg> Location of new catalog, required when
adding a catalog
-createCatalog <arg> Create a catalog, requires
--catalogLocation parameter as well
-dbOpts <databaseOpts> Backend DB specific options
-dbType <databaseType> Metastore database type
-driver <driver> driver name for connection
-dryRun list SQL scripts (no execute)
-fromCatalog <arg> Catalog a moving database or table is
coming from. This is required if you
are moving a database or table.
-fromDatabase <arg> Database a moving table is coming
from. This is required if you are
Any idea?

I just swapped the order of the parameters (and typed them with plain ASCII hyphens rather than the en dashes shown above) and it worked:
schematool -dbType derby -initSchema
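For reference, here is a sketch of the full invocation with plain ASCII hyphens; the -info option (listed in the usage output above) can be run afterwards to confirm the metastore schema:
$HIVE_HOME/bin/schematool -dbType derby -initSchema
# optional sanity check: show metastore config and schema version
$HIVE_HOME/bin/schematool -dbType derby -info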

Related

Hive with HBase (both Kerberos) java.net.SocketTimeoutException .. on table 'hbase:meta'

I am receiving timeout errors when trying to query HBase from Hive using the HBaseStorageHandler:
Caused by: java.net.SocketTimeoutException: callTimeout=60000, callDuration=68199: row 'phoenix_test310,,'
on table 'hbase:meta' at region=hbase:meta,,1.1588230740, hostname=hbase-master.example.com,16020,1583728693297, seqNum=0
at org.apache.hadoop.hbase.client.RpcRetryingCaller.callWithRetries(RpcRetryingCaller.java:159)
at org.apache.hadoop.hbase.client.ResultBoundedCompletionService$QueueingFuture.run(ResultBoundedCompletionService.java:64)
... 3 more
I tried to follow what documentation I could and added some HBase configuration options to hive-site.xml based on this Cloudera link.
Environment:
Hadoop 2.9.2
HBase 1.5
Hive 2.3.6
Zookeeper 3.5.6
First, the Cloudera link should be ignored: Hive detects the presence of HBase through environment variables and then automatically reads the hbase-site.xml configuration settings.
There is no need to duplicate HBase settings in hive-site.xml.
Configuring Hive for HBase
Modify your hive-env.sh as follows:
# replace <hbase-install> with your installation path /etc/hbase for example
export HBASE_BIN="<hbase-install>/bin/hbase"
export HBASE_CONF_DIR="<hbase-install>/conf"
Separately, you should make sure the HADOOP_* environment variables are also set in hive-env.sh,
and that the HBase lib directory is added to HADOOP_CLASSPATH.
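For example, a minimal hive-env.sh sketch; every path below is illustrative and must match your actual Hadoop and HBase install locations:
# hive-env.sh -- example paths only, adjust to your layout
export HADOOP_HOME=/usr/local/hadoop
export HBASE_BIN="/usr/local/hbase/bin/hbase"
export HBASE_CONF_DIR="/usr/local/hbase/conf"
# make the HBase client jars visible on Hive's Hadoop classpath
export HADOOP_CLASSPATH="$HADOOP_CLASSPATH:/usr/local/hbase/lib/*"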
We solved this error by adding the property hbase.client.scanner.timeout.period=600000 (in milliseconds) to hbase-site.xml on the client side; we are on HBase 1.2. See:
https://docs.cloudera.com/documentation/enterprise/5-5-x/topics/admin_hbase_scanner_heartbeat.html#concept_xsl_dz1_jt

Oracle replication process error ORA-09925: Unable to create audit trail file

When I run an RMAN script to duplicate an Oracle database, it returns this error:
connected to auxiliary database (not started)
RMAN-00571: ===========================================================
RMAN-00569: =============== ERROR MESSAGE STACK FOLLOWS ===============
RMAN-00571: ===========================================================
RMAN-03002: failure of Duplicate Db command at 08/11/2016 14:16:44
RMAN-05501: aborting duplication of target database
RMAN-03015: error occurred in stored script Memory Script
RMAN-04014: startup failed: ORA-09925: Unable to create audit trail file
Linux-x86_64 Error: 13: Permission denied
Additional information: 9925
I checked the adump directory permissions and ownership, and everything looked okay.
Do you have the audit_file_dest parameter defined, and does the directory it points to have sufficient permissions, in particular write permission for the Oracle user?
Moreover, the disk that holds that directory should have enough free space.
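A quick sketch of those checks, run as the oracle OS user on the auxiliary host; the adump path is only an example and should be replaced with whatever audit_file_dest actually points to:
# what does the auxiliary instance's parameter file say?
grep -i audit_file_dest $ORACLE_HOME/dbs/init*.ora
# is the directory writable by the oracle user, and does the filesystem have space?
ls -ld /u01/app/oracle/admin/ORCL/adump
df -h /u01/app/oracle/admin/ORCL/adump
touch /u01/app/oracle/admin/ORCL/adump/_writetest && rm /u01/app/oracle/admin/ORCL/adump/_writetest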

Error 'Unable to create core' when migrating from solr 6.6 to 7.1

While migrating from Solr 6.6 to 7.1, I got this error when using my old managed-schema to create a core in Solr 7.1:
bash-4.1$ ./solr create -c solr-testing -d _default -force
WARNING: Using _default configset. Data driven schema functionality is enabled by default, which is NOT RECOMMENDED for production use.
To turn it off:
curl http://localhost:8983/solr/solr-testing/config -d '{"set-user-property": {"update.autoCreateFields":"false"}}'
ERROR: Error CREATEing SolrCore 'solr-testing': Unable to create core [solr-testing] Caused by: root field must be defined using the exact same fieldType as the uniqueKey field (referenceID) uses: long
It looks like the schema rules changed in 7.1: the _root_ field must use exactly the same fieldType as the uniqueKey field. Once I changed the _root_ fieldType to long in managed-schema, it started working.
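If you prefer not to edit managed-schema by hand, here is a sketch of the same change through the Schema API; the core name solr-testing and the long type come from the error above, while the indexed/stored flags are assumptions you should check against your existing _root_ definition:
curl -X POST -H 'Content-type:application/json' \
  http://localhost:8983/solr/solr-testing/schema \
  -d '{"replace-field": {"name":"_root_", "type":"long", "indexed":true, "stored":false}}'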

Logs for Hive query executed via beeline

I am running the Hive command below from beeline. Can someone please tell me where I can see the MapReduce logs for it?
0: jdbc:hive2://<servername>:10003/> select a.offr_id offerID , a.offr_nm offerNm , b.disp_strt_ts dispStartDt , b.disp_end_ts dispEndDt , vld_strt_ts validStartDt, vld_end_ts validEndDt from gcor_offr a, gcor_offr_dur b where a.offr_id = b.offr_id and b.disp_end_ts > '2016-09-13 00:00:00';
When using beeline, MapReduce logs are part of HiveServer2 log4j logs.
If your Hive install was configured by Cloudera Manager (CM), it will typically be in /var/log/hive/hadoop-cmf-HIVE-1-HIVESERVER2-*.out on the node where HiveServer2 is running (which may or may not be the node you are running beeline from).
A few other scenarios:
Your Hive install was not configured by CM? You will need to create the log4j config file manually:
Create a hive-log4j.properties config file in the directory specified by the HIVE_CONF_DIR environment variable. (This makes it accessible on the HiveServer2 JVM classpath.)
In this file, the log location is specified by log.dir and log.file. See conf/hive-log4j.properties.template in your distribution for an example template.
You run beeline in "embedded HS2 mode" (i.e. beeline -u jdbc:hive2:// user password)?
You will customize the beeline log4j config (as opposed to the HiveServer2 log4j config).
The beeline log4j properties file is strictly called beeline-log4j2.properties (in versions prior to Hive 2.0 it is called beeline-log4j.properties). It needs to be created and made accessible on the beeline JVM classpath via HIVE_CONF_DIR. See HIVE-10502 and HIVE-12020 for further discussion.
You want to customize which HiveServer2 logs get printed on beeline stdout?
This is configured at the HiveServer2 level using the hive.server2.logging.operation.enabled and hive.server2.logging.operation.level configs.
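As a small sketch for that last scenario: hive.server2.logging.operation.enabled is a HiveServer2-side setting (normally changed in hive-site.xml on the HiveServer2 host), but you can at least confirm what the running server is using from beeline:
beeline -u jdbc:hive2://<servername>:10003/ \
  -e 'set hive.server2.logging.operation.enabled; set hive.server2.logging.operation.level;'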
Hive uses log4j for logging. These logs are not emitted to the standard output by default but are instead captured to a log file specified by Hive's log4j properties file. By default, Hive will use hive-log4j.default in the conf/ directory of the Hive installation which writes out logs to /tmp/<userid>/hive.log and uses the WARN level.
It is often desirable to emit the logs to the standard output and/or change the logging level for debugging purposes. These can be done from the command line as follows:
$HIVE_HOME/bin/hive --hiveconf hive.root.logger=INFO,console
set hive.async.log.enabled=false
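The two settings above can be combined in a single invocation; a sketch, where the query is just an example based on the tables from the question:
$HIVE_HOME/bin/hive \
  --hiveconf hive.root.logger=INFO,console \
  --hiveconf hive.async.log.enabled=false \
  -e "select count(*) from gcor_offr"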

Cannot Load Hive Table into Pig via HCatalog

I am currently configuring a Cloudera HDP dev image using this tutorial on CentOS 6.5, installing the base and then adding the different components as I need them. Currently, I am installing / testing HCatalog using this section of the tutorial linked above.
I have successfully installed the package and am now testing HCatalog integration with Pig with the following script:
A = LOAD 'groups' USING org.apache.hcatalog.pig.HCatLoader();
DESCRIBE A;
I have previously created and populated a 'groups' table in Hive before running the command. When I run the script with the command pig -useHCatalog test.pig I get an exception rather than the expected output. Below is the initial part of the stacktrace:
Pig Stack Trace
---------------
ERROR 2245: Cannot get schema from loadFunc org.apache.hcatalog.pig.HCatLoader
org.apache.pig.impl.logicalLayer.FrontendException: ERROR 1000: Error during parsing. Cannot get schema from loadFunc org.apache.hcatalog.pig.HCatLoader
at org.apache.pig.PigServer$Graph.parseQuery(PigServer.java:1608)
at org.apache.pig.PigServer$Graph.registerQuery(PigServer.java:1547)
at org.apache.pig.PigServer.registerQuery(PigServer.java:518)
at org.apache.pig.tools.grunt.GruntParser.processPig(GruntParser.java:991)
at org.apache.pig.tools.pigscript.parser.PigScriptParser.parse(PigScriptParser.java:412)
...
Has anyone encountered this error before? Any help would be much appreciated. I would be happy to provide more information if you need it.
The error was caused by HBase's Thrift server not being properly configured. I installed/configured Thrift and added the following to my hive-site.xml with the proper server information filled in:
<property>
  <name>hive.metastore.uris</name>
  <value>thrift://<!--URL of Your Server-->:9083</value>
  <description>IP address (or fully-qualified domain name) and port of the metastore host</description>
</property>
I thought the snippet above was not required since I am running Cloudera HDP in pseudo-distributed mode. It turns out that it, along with HBase Thrift, is required to use HCatalog with Pig.
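A short sketch of what that looked like in practice, assuming a local pseudo-distributed setup where the metastore listens on its default port 9083; test.pig is the script from the question:
# start a standalone Hive metastore service (listens on thrift://localhost:9083 by default)
hive --service metastore &
# the answer above also set up HBase's Thrift server; if you need it:
hbase thrift start &
# with hive.metastore.uris pointing at the metastore, re-run the Pig script
pig -useHCatalog test.pig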