With the release of Hive 2.0, MSCK REPAIR TABLE doesn't work for me. It fails most of the time or hangs forever. Is there any workaround for this issue?
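One workaround I've seen is to skip MSCK entirely and register the missing partitions explicitly with ALTER TABLE ... ADD PARTITION. A minimal sketch; the table name, partition column, and location below are made-up examples, so adjust them to your schema:

```shell
# Hypothetical table/partition/path; one statement per partition.
# Each ALTER touches only that partition, avoiding the full
# directory scan that makes MSCK hang on large tables.
hive -e "
  ALTER TABLE web_logs ADD IF NOT EXISTS
    PARTITION (dt='2016-01-01') LOCATION '/data/web_logs/dt=2016-01-01';
"
```

Scripting this over the output of `hadoop fs -ls` on the table directory usually completes even when MSCK does not, because each statement registers a single partition instead of reconciling the whole table.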
Hive 3 has a standalone metastore which seems to work great... although in order to run the schematool I still had to download (but not run) Hadoop.
Unfortunately Presto only works with Hive 1.x and Hive 2.x, as Hive 3 creates ACID v2 tables by default, which Presto does not support (https://github.com/prestosql/presto/issues/576).
So I'm trying to understand whether I can run the Hive 2.x metastore without the rest of Hive (HiveServer2) or Hadoop running. That is, if I install Hadoop and Hive but only start the metastore, will it be functional, or are there limitations?
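For what it's worth, the metastore does run as its own service in Hive 2.x. A minimal sketch, assuming Hive is unpacked with HIVE_HOME and JAVA_HOME set; the MySQL backing database and port 9083 are assumptions, not requirements:

```shell
# One-time: create the metastore schema in the backing database
$HIVE_HOME/bin/schematool -dbType mysql -initSchema

# Start only the Thrift metastore service; HiveServer2 is not needed
$HIVE_HOME/bin/hive --service metastore -p 9083
```

Presto's hive connector then points at `thrift://<host>:9083`. The Hadoop jars still need to be on disk because the Hive launch scripts put them on the classpath (matching what you saw with schematool), but no Hadoop daemons have to be running for metadata operations.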
I am trying to install Impala in a Docker container (using the MapR documentation). In this container I am running only the Impala service; the remaining Hive and MapR-FS services run on a physical node. When starting impala-server (the Impala daemon) I get weird errors. I just wanted to know whether this kind of installation is possible or not.
Thanks for the help!
It is possible, but it depends on your Impala and MapR versions. Impala 2.2.0 is supported on MapR 5.x. Impala 2.5.0 is supported on MapR 5.1 and later. Check the MapR compatibility documentation before proceeding.
I am trying to get runtime metrics from Hive after executing Hive queries. Are there any APIs to obtain these metrics? Please suggest.
The whole intention behind this question is to gather metrics for the different MapReduce jobs spawned at each stage, along with the amount of memory and CPU used by each stage.
Hadoop Distribution: MapR (5.1)
Hive Version: 1.2.0 (Hive Server 2)
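Since Hive 1.2 on this setup executes via MapReduce, one option is to pull per-job counters from the YARN and JobHistory REST APIs rather than from Hive itself. A sketch; the host names and the job id below are placeholders:

```shell
# List completed applications known to the ResourceManager
curl "http://rm-host:8088/ws/v1/cluster/apps?states=FINISHED"

# Counters (CPU milliseconds, physical/virtual memory bytes) for one MR job
curl "http://history-host:19888/ws/v1/history/mapreduce/jobs/<job_id>/counters"
```

The `CPU_MILLISECONDS` and `PHYSICAL_MEMORY_BYTES` counters in the second response map onto the per-stage CPU and memory numbers you're after, and Hive's query log shows which job id each stage spawned.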
I have created a Hadoop Cluster with Ambari 2.1 including Hive. I would like to be able to do Update and Delete queries within Hive, but it looks like I currently have version 0.12.0.2.0 of Hive. I would like to upgrade to 0.13 or 0.14 to enable these transactions, but I am not sure how to do that with an existing installation of Ambari. Any help would be appreciated.
I think you could follow the HDP docs on the Hortonworks website:
Manual Upgrade of HDP
Upgrading Stack - Ambari
Performing upgrade - Hortonworks
Hope this is helpful.
P.S.: Updates/Inserts are not supported in 0.13. You will need 0.14 or later for these.
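Also note that once you are on 0.14+, UPDATE and DELETE additionally require the transaction manager to be enabled and the target table to be bucketed, stored as ORC, and marked transactional. A sketch with a made-up table:

```shell
hive -e "
  SET hive.support.concurrency=true;
  SET hive.txn.manager=org.apache.hadoop.hive.ql.lockmgr.DbTxnManager;
  SET hive.compactor.initiator.on=true;
  SET hive.compactor.worker.threads=1;

  -- ACID tables must be bucketed ORC with transactional=true
  CREATE TABLE accounts (id INT, balance DECIMAL(10,2))
    CLUSTERED BY (id) INTO 4 BUCKETS
    STORED AS ORC
    TBLPROPERTIES ('transactional'='true');

  UPDATE accounts SET balance = 0 WHERE id = 1;
"
```

Without these settings, UPDATE/DELETE statements fail even on a Hive version that supports them.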
I am currently using Hadoop version 1.0.3. I recently installed Apache Hive to run with it. When I ran a SELECT * query, it gave me NoSuchMethodError: org.apache.hadoop.mapred.JobConf.unset.
I further found out it's a compatibility issue with my current version of Hadoop, and that I need to upgrade to 1.2 or later.
I am fairly new to Hadoop and would like to upgrade my current version to 1.2 or later. How do I go about doing this?
I could not find any resources online for this.
Thanks.
Just download Hadoop 1.2.x from here and make the necessary configuration changes in your new Hadoop install. Change HADOOP_HOME to point to your new Hadoop folder.
NOTE: Change all environment variables (including those set in .bashrc) to point to your new Hadoop.
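For example, appended to ~/.bashrc; the install path below is an assumption, so use wherever you actually unpacked 1.2.x:

```shell
# Point the environment at the new Hadoop install (example path)
export HADOOP_HOME=/usr/local/hadoop-1.2.1
# Prepend its bin/ so it shadows the old 1.0.3 binaries on PATH
export PATH="$HADOOP_HOME/bin:$PATH"
```

After `source ~/.bashrc`, `which hadoop` should resolve inside the new folder.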