I want to know what the command is to execute a Hive script.
Complete the code to execute the Hive script ./custexport.hql:
hive>
If you are using the Hive CLI:
hive -f 'your hql file'
If you are using Beeline, you can also pass the -f option with the full beeline command.
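For example, a minimal sketch using the custexport.hql file from the question (the JDBC URL is an assumed placeholder for your HiveServer2 instance):

hive -f ./custexport.hql
beeline -u "jdbc:hive2://localhost:10000/default" -f ./custexport.hql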
Related
I have a directory of, say, 10 SQL files, of which I need to run only 5. I am trying to do this by creating an array containing only the five files I need and calling beeline -f in a for loop to execute them, but it does not seem to be working. The SQL runs fine when I invoke beeline -f mysqlfile.sql directly.
for eachline in "${testarray[@]}"   # [@] expands to every element of the array
do
    # double quotes so ${eachline} is expanded; single quotes would pass the literal text
    beeline -f "${eachline}.sql"
done
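A fuller sketch of the loop, assuming hypothetical file names and a placeholder HiveServer2 URL:

# hypothetical list of the five scripts to run, without the .sql extension
testarray=(load_customers load_orders load_items build_agg cleanup)
for eachline in "${testarray[@]}"
do
    # stop at the first script that fails
    beeline -u "jdbc:hive2://localhost:10000/default" -f "${eachline}.sql" || exit 1
done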
I have to repair tables in Hive from my shell script after successful completion of my Spark application.
msck repair table <DATABASE_NAME>.<TABLE_NAME>;
Please suggest a suitable approach for this that also works for large tables with partitions.
I have found a workaround for this using:
hive -S -e "msck repair table <DATABASE_NAME>.<TABLE_NAME>;"
-S : Silences the output generated by Hive.
-e : Runs the Hive command given as a string.
-f : Runs an HQL script from a file.
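A minimal sketch of wiring this into a shell script that runs only after the Spark job succeeds (the spark-submit arguments, database, and table names are placeholders):

#!/bin/bash
# hypothetical Spark job; replace with your actual spark-submit invocation
spark-submit --class com.example.MyJob my-app.jar
if [ $? -eq 0 ]; then
    # refresh partition metadata only when the Spark application finished successfully
    hive -S -e "msck repair table my_db.my_table;"
else
    echo "Spark job failed, skipping msck repair" >&2
    exit 1
fi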
In an interactive impala-shell session, is there a way to load and execute a text file containing one or more SQL statements? In Hive's beeline, for example, you can use !run <filename> to run the SQL commands in that file.
This is not currently possible. You can file a JIRA.
I believe it is possible - see impala-shell -h (version v2.1.1-cdh5):
-f QUERY_FILE, --query_file=QUERY_FILE
Execute the queries in the query file, delimited by ;
[default: none]
Combine this with the shell command in interactive mode:
shell impala-shell -f file;
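As a sketch, the same query file can also be run non-interactively; the host, port, and file path below are assumptions:

# run a file of semicolon-delimited statements against a specific impalad
impala-shell -i impalad-host:21000 -f /tmp/queries.sql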
We have a couple of HQL files for compiling DDLs.
In Hive we used the following command from bash:
hive -v -f abc.hql
But in Beeline this doesn't work from bash. Any idea what the equivalent command for Beeline would be?
Make sure your HiveServer2 is up and running on some port.
In Beeline:
beeline -u "jdbc:hive2://localhost:port/database_name/" -f abc.hql
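For example, a sketch that also carries over the -v behaviour of the Hive CLI; the host, port, database, user, and password are placeholders, and -n/-p assume username/password authentication:

beeline -u "jdbc:hive2://localhost:10000/my_db" -n hive_user -p hive_password --verbose=true -f abc.hql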
Refer to this doc for more commands:
https://cwiki.apache.org/confluence/display/Hive/HiveServer2+Clients
Refer to this doc if you have not yet configured HiveServer2:
https://cwiki.apache.org/confluence/display/Hive/Setting+Up+HiveServer2
I have a DB2 script to first drop and then create some table spaces and functions. I can run the SQL script successfully in DB2 command line on the targeted database.
I need to execute this SQL script from a shell script multiple times. It runs successfully the first time, but then hangs on the second or third run. The commands used to execute the SQL script are very simple:
db2 CONNECT TO ktest4
db2 -v -f /tmp/sql/application_system/opmdb2_privilege_remove.sql.5342
I use DB2 9.7.8 on Linux. When the SQL script hangs, I can still run it manually and successfully from the DB2 command line against the target database.
Does anyone know the reason? Thanks.
Xiaoyang Gao
Are you sure DB2 is blocking? Did you put a semicolon between the commands?
db2 CONNECT TO ktest4 ; db2 -v -f /tmp/sql/application_system/opmdb2_privilege_remove.sql.5342
To trace the execution, I advise you to add some output so you can detect where it is blocking:
date ; db2 -r /tmp/output.log CONNECT TO ktest4 ; db2 -r /tmp/output.log values current timestamp ; db2 -r /tmp/output.log -v -f /tmp/sql/application_system ; db2 -r /tmp/output.log values current timestamp ; db2 -r /tmp/output.log terminate
With a command like this you will save all the output, and you can then check where the error occurs.
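A minimal sketch of wrapping the calls in a loop and closing the CLP connection and back-end between passes, so that each run starts from a clean connection (the iteration count is arbitrary):

#!/bin/bash
for i in 1 2 3
do
    date
    db2 CONNECT TO ktest4
    db2 -v -f /tmp/sql/application_system/opmdb2_privilege_remove.sql.5342
    # drop the connection and stop the CLP back-end process before the next iteration
    db2 CONNECT RESET
    db2 TERMINATE
done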