How do Apache Airflow and Apache Hive work together?

I'm learning Airflow.
I wrote a DAG with a HiveOperator task.
I don't understand what happens when Hive finishes executing, what happens if the HQL has an error, and how Hive interacts with Airflow.
Thanks a lot.
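At a high level, the HiveOperator renders your HQL and hands it to the hive CLI as a subprocess; if that process exits nonzero (for example, because the HQL has a syntax error), the operator raises an exception, Airflow marks the task as failed, and the task's retry policy applies. Here is a simplified, runnable sketch of that failure-propagation mechanism; it is not Airflow's actual implementation, and it takes an arbitrary command so it can run without a Hive installation:

```python
import subprocess

def run_hql(command):
    """Simplified sketch of what HiveOperator does under the hood: it shells
    out to the hive CLI (roughly `hive -f script.hql`) and raises if the
    process exits nonzero. Airflow catches the exception, marks the task
    failed, and applies the task's retries.

    `command` stands in for the hive invocation so this sketch is runnable
    without Hive installed."""
    result = subprocess.run(command, capture_output=True, text=True)
    if result.returncode != 0:
        # A broken HQL script makes the hive CLI exit nonzero, so the
        # task fails loudly rather than silently succeeding.
        raise RuntimeError(
            f"HQL failed (exit {result.returncode}): {result.stderr.strip()}"
        )
    return result.stdout
```

In a real DAG you would instead use `HiveOperator(task_id=..., hql="...", hive_cli_conn_id=...)` and let Airflow handle this for you; the point is that a Hive error surfaces as a failed task, not as a silent success.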

Related

How can I execute a Bash Script with AWS Redshift

I'm new to Redshift and quite a beginner in AWS. I have a Redshift cluster, and I need to execute a bash script that has some SQL statements running inside it.
Is there any way I can execute my Bash script on my Redshift Cluster? I want to be able to connect to the Redshift Cluster, execute the Bash Script and all the SQLs inside on the cluster.
Can I do this through Lambda? A little detail will be appreciated.
Amazon Redshift is a database. You can connect to it via JDBC or via the Redshift Data API and then run SQL commands against the database. However, it is not a compute platform.
While Amazon Redshift does have the ability to run stored procedures, bash scripts cannot be run.
You would need to run your script on a compute platform (e.g. an Amazon EC2 instance or your own computer), which can then 'call' the Amazon Redshift cluster to run the SQL.
AWS Lambda can run bash scripts, although you would need to do some setup to support it. You would be better-off rewriting your code to work in a natively-supported language.
This is possible through Lambda if you want to keep the bash script. But if you just want to execute SQL, wrap those statements in a .sql script and use the Redshift query scheduler to run them, either manually or on a cron schedule.
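To make the "wrap the SQLs in a script and run them from a compute host" approach concrete, here is a minimal sketch of a runner that splits a .sql file into statements and hands each one to an `execute` callable. In practice `execute` would wrap the Redshift Data API's `ExecuteStatement` call (boto3 `redshift-data` client) from Lambda, or a JDBC/psycopg2 connection from EC2; it is injected here so the sketch stays runnable without AWS credentials:

```python
def run_sql_file(path, execute):
    """Split a .sql file into individual statements and pass each one to
    `execute`, a callable that actually runs SQL against Redshift.

    Splitting on ';' is naive (it would break on semicolons inside string
    literals), but it is enough for simple scripts of DDL/DML statements."""
    with open(path) as f:
        text = f.read()
    for statement in text.split(";"):
        statement = statement.strip()
        if statement:
            execute(statement)
```

With this shape, the bash wrapper disappears entirely: the compute host (Lambda, EC2, or your laptop) just calls `run_sql_file` with an executor bound to your cluster.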

Configure Hive Metastore for Presto and query data from S3 and Apache Kudu

I am pretty new to Presto and Hive. In one of our applications we want to use Presto to query data from Apache Kudu and AWS S3. As far as I know, Presto has its own catalog (metadata) service, but we want to configure a Hive Metastore (without Hadoop and Hive) so that in the future other applications (e.g. Spark) can use the Hive Metastore to query data from Kudu and S3. I am using the latest versions of Presto and Kudu.
Could someone help me configure this system?
Thanks and regards
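One possible shape for the Presto catalog files, assuming a standalone Hive Metastore reachable at `metastore-host:9083` and a Kudu master at `kudu-master:7051` (all hostnames, ports, and keys below are placeholders you would replace with your own):

```properties
# etc/catalog/hive.properties — Hive connector pointed at the standalone metastore
connector.name=hive-hadoop2
hive.metastore.uri=thrift://metastore-host:9083
# S3 credentials for tables whose data lives in S3
# (or omit these and rely on instance-profile credentials)
hive.s3.aws-access-key=YOUR_ACCESS_KEY
hive.s3.aws-secret-key=YOUR_SECRET_KEY
```

```properties
# etc/catalog/kudu.properties — the Kudu connector talks to Kudu directly,
# so it does not go through the metastore
connector.name=kudu
kudu.client.master-addresses=kudu-master:7051
```

With this layout, S3-backed tables registered in the Hive Metastore are visible to Presto through the `hive` catalog (and later to Spark through the same metastore), while Kudu tables are queried through the `kudu` catalog.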

Hive - how does it work without a metastore?

I installed Hive 1.2.1 and configured to work with Hadoop 2.7.
But I didn't set up a metastore for Hive with Derby or MySQL.
I also don't have a copy of hive-site.xml under $HIVE_HOME/conf.
My question is: how am I still able to create databases and tables in Hive? Where is all this metadata stored?
Appreciate your insight.
Thanks in advance.
By default, Hive uses Derby and starts the metastore (backed by Derby) in embedded mode. The metastore and HiveServer run in the same process; Hive initializes the embedded metastore for you.
http://www.cloudera.com/documentation/archive/cdh/4-x/4-2-0/CDH4-Installation-Guide/cdh4ig_topic_18_4.html
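Concretely, the embedded-Derby behavior comes from Hive's default connection properties; if you have no hive-site.xml, defaults along these lines apply (shown here as they appear in Hive 1.x's hive-default.xml.template), and Hive creates a local Derby database named `metastore_db` in whatever directory you launched the CLI from:

```xml
<!-- Defaults; with no hive-site.xml override, a metastore_db directory
     appears in the current working directory of the Hive CLI -->
<property>
  <name>javax.jdo.option.ConnectionURL</name>
  <value>jdbc:derby:;databaseName=metastore_db;create=true</value>
</property>
<property>
  <name>javax.jdo.option.ConnectionDriverName</name>
  <value>org.apache.derby.jdbc.EmbeddedDriver</value>
</property>
```

That is also why launching the CLI from a different directory appears to "lose" your tables: each working directory gets its own `metastore_db`.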

Cannot query Hive remote server

I am trying to query a remote HiveServer2. The connection was successful, but I am not able to query.
The screen freezes for a long time.
I am not sure the query request ever reached the Hive server.
Here I started HiveServer2 and am trying to connect through the Hive CLI.
Actually, the Hive CLI is for HiveServer1;
Beeline is the correct client for HiveServer2.
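For reference, a typical Beeline connection to HiveServer2 looks like this (hostname and user are placeholders; 10000 is the default HiveServer2 port):

```shell
# Connect to HiveServer2 over its JDBC interface
beeline -u jdbc:hive2://hs2-host:10000 -n your_user
# then run queries at the beeline prompt, e.g. SHOW DATABASES;
```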

Accessing Hive through web browser using thrift php

I have Hive installed on my Ubuntu machine, and PHP5 and an Apache2 server installed as well.
I started the Thrift server using hive --service hiveserver .
Querying Hive tables from a PHP file on the command line (CLI) gives me the expected results,
but from the web browser (http://localhost:10000/) I am not able to invoke Hive.
I tried googling the problem but couldn't find a solution. Please help.
The Hive Thrift server only provides a Thrift service for Hive queries, not a web service, so a browser pointed at port 10000 gets nothing useful.
I think what you need is HWI (the Hive Web Interface). I recommend this project; we use it in our production environment.
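If you go the HWI route, the relevant hive-site.xml settings look roughly like this (defaults shown; the exact values and the availability of HWI depend on your Hive version, so treat this as a sketch):

```xml
<!-- hive-site.xml: HWI listens on its own HTTP port, separate from
     the Thrift server's port 10000 -->
<property>
  <name>hive.hwi.listen.host</name>
  <value>0.0.0.0</value>
</property>
<property>
  <name>hive.hwi.listen.port</name>
  <value>9999</value>
</property>
```

You then start it with `hive --service hwi` and browse to http://localhost:9999/hwi instead of port 10000.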