Issue while accessing an ORC transactional Hive table through Apache Drill.
Apache drill 1.10.0
Hive 1.2.1
Below is the error that occurs when accessing data from the mentioned table through Apache Drill.
Query Failed: An Error Occurred
org.apache.drill.common.exceptions.UserRemoteException: SYSTEM ERROR: NumberFormatException: For input string: "0000112_0000" [Error Id: ad9b4243-d48d-43c7-9755-388202d7c54d on inbbrdssvm16.india.tcs.com:31010]
Please help me resolve this issue.
I suggest you move to the latest Drill and Hive versions.
This issue is resolved in Apache Drill 1.13.0:
https://issues.apache.org/jira/browse/DRILL-5978
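After upgrading, it is worth confirming which version is actually running. A minimal sketch, assuming an embedded-mode install (the DRILL_HOME path is an assumption):

```shell
# Start Drill in embedded mode from the upgraded install.
$DRILL_HOME/bin/drill-embedded

# Then, at the Drill prompt, check the running version:
#   SELECT version FROM sys.version;
# A result of 1.13.0 or later includes the DRILL-5978 fix.
```
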
Related
I am trying to install Hive on Ubuntu, but I get an error when I try to initialize the Derby metastore:
schematool -dbType derby -initSchema
I have configured HIVE_HOME in .bashrc, and I have also configured it in bin/hive-config.sh.
What am I doing wrong here? Please help.
Thanks in advance.
I have tried different versions of Hive, and also tried placing the HIVE_HOME variables on different lines. Hadoop was running while I configured these things.
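For reference, a typical HIVE_HOME setup in .bashrc looks like the sketch below (the install path is an assumption; adjust it to wherever Hive was unpacked):

```shell
# Hypothetical install path -- adjust to your actual Hive directory.
export HIVE_HOME=/usr/local/hive
export PATH=$PATH:$HIVE_HOME/bin
```

After editing, run `source ~/.bashrc` before re-running schematool. Also note that `schematool -initSchema` fails if a partially created `metastore_db` directory is left over from an earlier attempt, so removing that directory before re-initializing is often part of the fix.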
We have a use case of Presto/Hive accessing S3 files in Avro format.
When we try to use the standalone hive-metastore and read this Avro data using an external table, we get a SerDeStorageSchemaReader class-not-found error:
MetaException(message:org.apache.hadoop.hive.metastore.SerDeStorageSchemaReader class not found)
at org.apache.hadoop.hive.metastore.utils.JavaUtils.getClass(JavaUtils.java:54)
We understand this error occurs because the SerDeStorageSchemaReader class is not available in the standalone metastore.
I want to understand: can the Hive metastore be run without Hive/Hadoop, or is there another option?
The standalone Hive metastore doesn't support Avro. To fix it, we need to install the full Hadoop plus Hive distribution and start only the Hive metastore service.
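A sketch of that workaround, assuming full Hadoop and Hive installs (the paths are assumptions, and hive-site.xml is assumed to be configured for your backing database):

```shell
# Point at full Hadoop and Hive installations (hypothetical paths).
export HADOOP_HOME=/opt/hadoop
export HIVE_HOME=/opt/hive
export PATH=$PATH:$HIVE_HOME/bin

# Initialize the metastore schema once (Derby shown; use your real DB type).
schematool -dbType derby -initSchema

# Start only the metastore service; Presto can then point its hive
# connector at thrift://<metastore-host>:9083.
hive --service metastore -p 9083
```
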
I am trying to query HBase data through a Hive external table. The query comes through a client, which at this time is Squirrel SQL. If I query through the plain Hive command line interface, I am able to query the Hive external table (stored in HBase).
However, when I query through Squirrel SQL I get the error
Error: java.io.IOException: The connection has to be unmanaged.
The following is my environment
HBase - 1.1.5
Hive - 1.2.1
Hadoop - 2.6.0
Zookeeper - 3.4.6 Runs on 3 nodes
Please help.
Regards
Bala
I sorted this out as well. It was due to a jar mismatch. Once I got all the right jars lined up and started the Thrift server with --jars, this error went away.
Thanks
Bala
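For reference, a sketch of what "started the Thrift server with --jars" can look like, assuming a Spark Thrift server is in the picture (the jar names and paths are assumptions; the key point is that they must match the exact HBase 1.1.5 / Hive 1.2.1 versions on the cluster):

```shell
# Hypothetical paths -- line these jars up with the exact HBase/Hive
# versions deployed on the cluster to avoid the jar mismatch.
$SPARK_HOME/sbin/start-thriftserver.sh \
  --jars /opt/hive/lib/hive-hbase-handler-1.2.1.jar,\
/opt/hbase/lib/hbase-client-1.1.5.jar,\
/opt/hbase/lib/hbase-server-1.1.5.jar,\
/opt/hbase/lib/hbase-protocol-1.1.5.jar
```
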
I am currently configuring a Cloudera HDP dev image using this tutorial on CentOS 6.5, installing the base and then adding the different components as I need them. Currently, I am installing / testing HCatalog using this section of the tutorial linked above.
I have successfully installed the package and am now testing HCatalog integration with Pig with the following script:
A = LOAD 'groups' USING org.apache.hcatalog.pig.HCatLoader();
DESCRIBE A;
I have previously created and populated a 'groups' table in Hive before running the command. When I run the script with the command pig -useHCatalog test.pig I get an exception rather than the expected output. Below is the initial part of the stacktrace:
Pig Stack Trace
---------------
ERROR 2245: Cannot get schema from loadFunc org.apache.hcatalog.pig.HCatLoader
org.apache.pig.impl.logicalLayer.FrontendException: ERROR 1000: Error during parsing. Cannot get schema from loadFunc org.apache.hcatalog.pig.HCatLoader
at org.apache.pig.PigServer$Graph.parseQuery(PigServer.java:1608)
at org.apache.pig.PigServer$Graph.registerQuery(PigServer.java:1547)
at org.apache.pig.PigServer.registerQuery(PigServer.java:518)
at org.apache.pig.tools.grunt.GruntParser.processPig(GruntParser.java:991)
at org.apache.pig.tools.pigscript.parser.PigScriptParser.parse(PigScriptParser.java:412)
...
Has anyone encountered this error before? Any help would be much appreciated. I would be happy to provide more information if you need it.
The error was caused by HBase's Thrift server not being properly configured. I installed/configured Thrift and added the following to my hive-site.xml, with the proper server information filled in:
<property>
<name>hive.metastore.uris</name>
<value>thrift://<!--URL of Your Server-->:9083</value>
<description>IP address (or fully-qualified domain name) and port of the metastore host</description>
</property>
I thought the snippet above was not required since I am running Cloudera HDP in pseudo-distributed mode. It turns out that it, along with HBase Thrift, is required to use HCatalog with Pig.
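A sketch of the two services that need to be up for this to work, assuming default ports:

```shell
# Start the HBase Thrift server (default port 9090).
hbase thrift start &

# Start the Hive metastore so the hive.metastore.uris thrift://...:9083
# setting above resolves (9083 is the default metastore port).
hive --service metastore &
```
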
I am currently using Hadoop 2.2.0, Hive 0.12.0, and Impala 1.2.3. When I try to start the Impala server, it does not start. When I check the log directory, I see the following error.
Any help is highly appreciated.
Failed on local exception: com.google.protobuf.InvalidProtocolBufferException: Message missing required fields: callId, status;
Host Details : local host is: "XXXX/[IP-ADDESS]"; destination host is: "hadoop-master":9000;
E0219 13:15:16.223870 22635 impala-server.cc:403] Aborting Impala Server startup due to improper configuration
Hadoop 2.2 uses protobuf 2.5 and Impala uses protobuf 2.4.0a.
Unfortunately, code generated with protobuf 2.5 is incompatible with older protobuf libraries.
You can check the JIRA issue HADOOP-9845 for the background and the design decision to upgrade protobuf in Hadoop.
SOLUTION
Remove the older protobuf.
Install protobuf 2.5.
Rebuild Impala.
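A hedged sketch of those steps on a CentOS-style system (the package name and paths are assumptions; download protobuf-2.5.0.tar.gz from the official release archive first):

```shell
# 1. Remove the older distro-packaged protobuf.
sudo yum remove -y protobuf

# 2. Build and install protobuf 2.5.0 from source.
tar xzf protobuf-2.5.0.tar.gz
cd protobuf-2.5.0
./configure
make
sudo make install
sudo ldconfig

# 3. Verify the compiler version, then rebuild Impala against it.
protoc --version   # expect: libprotoc 2.5.0
```
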