HIVE - How does it work without a metastore?

I installed Hive 1.2.1 and configured it to work with Hadoop 2.7.
But I didn't set up a metastore for Hive with Derby or MySQL.
I also don't have a copy of hive-site.xml under $HIVE_HOME/conf.
My question is: how am I still able to create databases and tables in Hive? Where is all this metadata stored?
Appreciate your insight.
Thanks in advance.

By default Hive uses Derby and starts the metastore (backed by Derby) in embedded mode. The metastore and HiveServer run in the same process, and Hive initializes the embedded metastore for you the first time you use it. With embedded Derby the metadata ends up in a metastore_db directory created in whatever directory you started Hive from.
http://www.cloudera.com/documentation/archive/cdh/4-x/4-2-0/CDH4-Installation-Guide/cdh4ig_topic_18_4.html
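If no hive-site.xml is present, Hive simply falls back to its built-in defaults. For the metastore those defaults amount to roughly the following (shown here as hive-site.xml entries; check the hive-default.xml.template shipped with your release for the exact values):

```xml
<!-- Approximate built-in defaults used when no hive-site.xml is provided -->
<configuration>
  <property>
    <name>javax.jdo.option.ConnectionURL</name>
    <value>jdbc:derby:;databaseName=metastore_db;create=true</value>
    <!-- Embedded Derby: creates a metastore_db directory in the current working directory -->
  </property>
  <property>
    <name>javax.jdo.option.ConnectionDriverName</name>
    <value>org.apache.derby.jdbc.EmbeddedDriver</value>
  </property>
  <property>
    <name>hive.metastore.uris</name>
    <value></value>
    <!-- Empty means no remote metastore: it runs embedded in the Hive process -->
  </property>
  <property>
    <name>hive.metastore.warehouse.dir</name>
    <value>/user/hive/warehouse</value>
    <!-- Table data (not metadata) is stored under this path -->
  </property>
</configuration>
```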

Related

How to run MSCK on Hive Standalone Metastore server via thrift client

I'm using Hive as my metastore database, with the Hive Standalone Metastore handling the DDLs, via this thrift client that implements the server's Thrift mapping.
I want to perform an MSCK (or something similar) to bulk-add partitions to new Hive tables.
But as far as I know, this Thrift mapping file doesn't expose an msck method.
I do see that there is something like Msck implemented inside the standalone server (I think it was added in JIRA HIVE-17824), but there is nothing in the HiveMetaStore class (which, as I understand it, is the mapping of the Thrift server methods).
Does anyone know whether I can run MSCK against the standalone Hive metastore server via a Thrift client?
With Python I am currently using this client with success: PyHive.
And from DBeaver you can also do it (if the command needs to be run by a human): dbeaver.
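For reference, a minimal sketch of running MSCK through HiveServer2 with PyHive (the host, port, user and table names are placeholders for your own setup):

```python
# Sketch: run MSCK REPAIR TABLE through HiveServer2 using PyHive.
# Assumes HiveServer2 is reachable on hs2-host:10000 and that
# my_db.my_table already exists as a partitioned table.
from pyhive import hive

conn = hive.connect(host='hs2-host', port=10000, username='hive')
cursor = conn.cursor()
cursor.execute('MSCK REPAIR TABLE my_db.my_table')  # discovers partitions present on the filesystem
cursor.close()
conn.close()
```

Note that this goes through HiveServer2, which is what runs the MSCK logic, not through the metastore server directly.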
EDIT (I did not realize at first that the question was about sending the command directly to the Hive metastore):
The interface called IMetaStoreClient (the protocol between the Hive client and the Hive metastore server) does not implement the MSCK command because it does not need to. Let me explain the logic behind the MSCK command:
1. Check whether the table exists in the Hive metastore.
2. Scan for new partitions in the physical file system where the table stores its data. See the code in checkMetastore.
3. Create/add those new partitions. See the code in createPartitionsInBatches, which ends up using the add_partitions method of the Hive metastore client (see add_partitions). It is at this point, and not before, that the client application sends data to the Hive metastore server.
4. Drop partitions which are no longer in the file system. See the code in dropPartitionsInBatches, which ends up using the dropPartitions method of the Hive metastore client (see dropPartitions). Again, it is at this point, and not before, that the client application sends data to the Hive metastore server.
MSCK is not really a Hive metastore command; it requires logic implemented by the client that runs the MSCK command. In your case, you would have to add that logic to the client you want to use (a rough sketch of that logic follows below).
Spark, for example, already implements that logic when it handles MSCK.
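This is not the actual Hive implementation (which batches add_partitions/drop_partitions calls), just a simplified sketch of the same idea against the metastore Thrift API. It assumes the third-party hmsclient Python package and a metastore server on port 9083; list_partition_dirs is a hypothetical helper that lists partition directories (such as 'date=2024-01-01') under the table location:

```python
# Rough sketch of MSCK-style logic implemented client-side against the
# metastore Thrift API. hmsclient is a third-party package wrapping the
# generated Thrift client; list_partition_dirs() is a hypothetical helper.
from hmsclient import hmsclient

client = hmsclient.HMSClient(host='metastore-host', port=9083)
with client as c:
    # 1. Check that the table exists in the metastore.
    table = c.get_table('my_db', 'my_table')

    # 2. Compare partitions on the filesystem with those in the metastore.
    on_fs = set(list_partition_dirs(table.sd.location))          # hypothetical helper
    in_ms = set(c.get_partition_names('my_db', 'my_table', -1))  # e.g. {'date=2024-01-01', ...}

    # 3. Add partitions that exist on disk but not in the metastore.
    for name in on_fs - in_ms:
        c.append_partition_by_name('my_db', 'my_table', name)

    # 4. Drop partitions that no longer exist on disk.
    for name in in_ms - on_fs:
        c.drop_partition_by_name('my_db', 'my_table', name, False)
```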

Configure Hive Metastore for Presto and query data from S3 and Apache Kudu

I am pretty new to Presto and Hive. In one of our applications we want to use Presto to query data from Apache Kudu and AWS S3. As far as I know, Presto has its own catalog (meta) service, but we want to configure a Hive metastore (without Hadoop and Hive) so that in the future other applications (e.g. Spark) can use the Hive metastore to query data from Kudu and S3. I have been using the latest versions of Presto and Kudu.
Could someone help me to configure this system?
Thanks and regards
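As a rough sketch of the Presto side of such a setup, assuming you run the Hive Standalone Metastore separately and Presto can reach it over Thrift (all host names and credentials below are placeholders):

```properties
# etc/catalog/hive.properties -- Hive connector backed by the standalone
# metastore, reading data from S3 (hosts and keys are placeholders).
connector.name=hive-hadoop2
hive.metastore.uri=thrift://metastore-host:9083
hive.s3.aws-access-key=YOUR_ACCESS_KEY
hive.s3.aws-secret-key=YOUR_SECRET_KEY

# etc/catalog/kudu.properties -- the Kudu connector talks to the Kudu
# masters directly and does not need the metastore.
connector.name=kudu
kudu.client.master-addresses=kudu-master-1:7051,kudu-master-2:7051
```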

What is the use of the Hive server and the metastore server?

I am new to Hive, and some questions are confusing me very much.
First, after installing Hive, I just run hive and then I can create and select tables. Where is the Hive server, and what is it used for?
Second, what is the use of the metastore server? I know we need the metastore to access the metadata about Hive tables; does that mean that if I start a metastore server I can query it from another app and get that information?
The metastore server talks to a backend such as Derby/MySQL to store and retrieve table metadata. If any Hive component wants to get or set metadata, it calls the metastore APIs, such as getTable(tableName), createDatabase(dbName), etc. Basically, the metastore abstracts the backend and provides an API layer that is independent of the backend (Derby/MySQL/Postgres). Similar to HiveServer, it can also run as a standalone server. If there is no metastore server running, the Driver will load the metastore inside its own process; if the metastore is running as a separate server, the Driver object communicates with it over the network.
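So yes: once a metastore server is running, any application that speaks its Thrift protocol can ask it for metadata. A minimal sketch, assuming the third-party hmsclient Python package and a metastore server on the default port 9083 (the host and table names are placeholders):

```python
# Sketch: read table metadata from a running Hive metastore server over Thrift.
from hmsclient import hmsclient

client = hmsclient.HMSClient(host='metastore-host', port=9083)
with client as c:
    print(c.get_all_databases())                # e.g. ['default', 'my_db']
    print(c.get_all_tables('default'))          # table names in a database
    table = c.get_table('default', 'my_table')  # full Table object
    print(table.sd.location)                    # where the table's data files live
```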

Migration from HDP to MapR

I am a bit new to MapR HBase, but I have worked with HBase on HDP/Cloudera. We have an HBase cluster on HDP and we are planning to migrate the HBase data to a MapR HBase cluster.
What should be appropriate approach that I can take here? (Downtime is not a problem for us at this moment.)
Should we use the Export/Import utilities, the CopyTable command, etc.?
You would have to create the destination table by hand and then use the CopyTable command. For details, see
http://doc.mapr.com/display/MapR/Migrating+Between+Apache+HBase+Tables+and+MapR+Tables
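As a rough sketch (not the MapR-specific details; see the linked doc for those), a CopyTable run from the source HDP cluster looks roughly like this, where the destination ZooKeeper quorum and the MapR table path are placeholders:

```sh
# Sketch: copy an HBase table to the destination cluster with CopyTable.
# --peer.adr points at the destination cluster's ZooKeeper ensemble;
# for MapR tables the destination name is a filesystem path.
hbase org.apache.hadoop.hbase.mapreduce.CopyTable \
  --peer.adr=dest-zk1,dest-zk2,dest-zk3:2181:/hbase \
  --new.name=/user/mapr/my_table \
  my_source_table
```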

How to load SQL data into Hortonworks?

I have installed the Hortonworks Sandbox on my PC. I also tried it with a CSV file, and it loads into a table in a structured manner, which is fine (Hive + Hadoop). Now I want to migrate my current SQL database (MS SQL 2008 R2) into the Sandbox. How do I do this? I also want to connect it to my project (VS 2010, C#).
Is it possible to connect through ODBC?
I heard Sqoop is used for transferring data from SQL to Hadoop, so how can I do this migration with Sqoop?
You could write your own job to migrate the data, but Sqoop would be more convenient. To do that you have to download Sqoop and the appropriate connector, the Microsoft SQL Server Connector for Apache Hadoop in your case. You can download it from here. Please go through the Sqoop user guide; it contains all the information in proper detail.
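A typical Sqoop import for this looks roughly like the following; the host, database, table and credentials are placeholders, and the exact JDBC/driver options depend on the connector you install:

```sh
# Sketch: import a SQL Server table into Hive with Sqoop
# (host, database, table and credentials are placeholders).
sqoop import \
  --connect "jdbc:sqlserver://sqlserver-host:1433;databaseName=MyDatabase" \
  --username my_user \
  --password my_password \
  --table MyTable \
  --hive-import \
  --hive-table my_table
```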
And Hive does support ODBC. You can find more on this at this page.
I wrote down the steps you need to go through in the Hortonworks Sandbox to install the JDBC driver and get it to work: http://hortonworks.com/community/forums/topic/import-microsoft-sql-data-into-sandbox/
To connect to Hadoop from your C# project you can use the Hortonworks Hive ODBC driver from http://hortonworks.com/thankyou-hdp13/#addon-table. Read the PDF (which is also on that page) to see how it works (I used Hive Server Type 2 with user name sandbox).