Passing parameters to HIVE query - hive

I am passing a parameter to a HIVE script like this, using the --hiveconf option to pass one value to the HIVE query. Is there any other way to pass parameters to a HIVE script?
beeline -u "${dbconection}" --hiveconf load_id=${loadid} -f /etc/sql/hive_script.sql
hive_script.sql selects records from table-a and inserts them into table-b.
INSERT into TABLE table-b
SELECT column1,
Column2,
Column3,
${hiveconf:load_id} as load_id,
Column5
From table-a;
I am getting the following error message:
Error: Failed to open new session: org.apache.hive.service.cli.HiveSQLException: java.lang.IllegalArgumentException: Cannot modify load_id at runtime. It is not in list of params that are allowed to be modified at runtime
Here is the hive variable substitution setting in my environment:
set hive.variable.substitute;
+--------------------------------+--+
| set |
+--------------------------------+--+
| hive.variable.substitute=true |
+--------------------------------+--+

If you are using beeline, you need to use --hivevar:
beeline -u "${dbconection}" --hivevar load_id=${loadid} -f /etc/sql/hive_script.sql
.sql or .hql extension will not make a difference.
And the Hive query will reference the variable in the following way:
INSERT into TABLE table-b
SELECT column1,
Column2,
Column3,
${load_id} as load_id,
Column5
From table-a;
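If you need to pass more than one value, beeline accepts multiple --hivevar flags; for example (the run_date variable here is just an illustration, not from the question):
beeline -u "${dbconection}" --hivevar load_id=${loadid} --hivevar run_date='2018-10-25' -f /etc/sql/hive_script.sql
Inside the script they are then referenced as ${load_id} and ${run_date} (or ${hivevar:load_id} and ${hivevar:run_date}).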

Here is what I used and it worked for me: instead of "--hiveconf", use "--hivevar". This works for HIVE version 0.8.x and above.
loadid='201810251040'
beeline -u "${dbconection}" --hivevar load_id=${loadid} -f /etc/sql/hive_script.sql
Update hive_script.sql as follows:
INSERT into TABLE table-b
SELECT column1,
Column2,
Column3,
${hivevar:load_id} as load_id,
Column5
From table-a;
This is pseudo code.
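If the substitution still does not seem to happen, you can check what the session actually sees before running the insert; a small sketch using the same variable:
beeline -u "${dbconection}" --hivevar load_id=${loadid} -e 'set hivevar:load_id;'
This should print hivevar:load_id=<your value>; if it comes back empty, the variable never reached the session.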

Related

sqoop: Pull data to hive table with extra columns

I need to pull records from a MySQL table with n columns and store them in hive with extra columns. Is there any way to do this in sqoop?
Example:
The MySQL table has the following fields: id, name, place. And
the Hive table structure is id, name, place, and contact_number (null).
So when performing the sqoop import, I want to add the extra column contact_number in hive as (null).
You can do this by using the --query option in sqoop and selecting the extra column with NULL AS.
sqoop import \
--query 'SELECT id, name, place, NULL AS contact_number FROM mysql_table' \
--connect jdbc:mysql://mysql.example.com/sqoop \
--<any other options>
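Note that sqoop's free-form --query import also expects a $CONDITIONS placeholder in the WHERE clause and a --target-dir, plus either --split-by or a single mapper. A fuller sketch under those assumptions (the target directory and -m 1 are examples, not from the question):
sqoop import \
--connect jdbc:mysql://mysql.example.com/sqoop \
--query 'SELECT id, name, place, NULL AS contact_number FROM mysql_table WHERE $CONDITIONS' \
--target-dir /user/bigdata/mysql_table_stage \
-m 1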

Hive Insert query on EMR just keeps running for more than 17 hours

Background:
EMR 5.4 cluster of 2 nodes(master+slave).
Supplied the external Hive metastore details during the setup.
The Hive warehouse has been set on S3.
I am using spark 2.1 to process the file and create a staging table.
Once the staging table is ready, I am trying to load that data into a Hive table using Hive.
Problem: The insert statement, which usually takes about 7-10 minutes on another cluster (outside AWS), runs forever on the EMR cluster. I was able to query the staging table that Spark created from Hive. The following are the statements that I am using:
CREATE TABLE Test1(
column1 string ,
column2 string,
column3 double)
PARTITIONED BY (Date_1 date);
INSERT OVERWRITE TABLE Test1 PARTITION(date_1)
SELECT
column1,
column2,
column3,
date_1
FROM Test1_stag
Any help would be appreciated.
Thanks
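For what it's worth, a dynamic-partition insert like the one above normally needs dynamic partitioning enabled for the session; this is general Hive behaviour rather than anything shown in the question, so treat it only as a sketch:
set hive.exec.dynamic.partition=true;
set hive.exec.dynamic.partition.mode=nonstrict;
INSERT OVERWRITE TABLE Test1 PARTITION(date_1)
SELECT column1, column2, column3, date_1 FROM Test1_stag;
Also note that with the warehouse on S3, a large part of the time is often spent writing and then renaming the output files rather than in the query itself.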

Hive insert with multiple select

I want to execute something like this in hive:
insert into mytable values ((select count(*) from test2), (select count(*) from test3));
Is there a way to do this?
Why would you need to create a hive table with the row count as a column? Assuming that you have to log the row count every day, I am not sure you can do this directly in hive.
But you can try running a shell script like this if you want a snapshot of the row counts of all the tables...
$hive -e 'use schema_name; show tables' | tee tables.txt
This stores all tables in the database in a text file tables.txt
Now, write a shell script (count_tables.sh) to get the counts of all the tables that were gathered:
#!/bin/bash
# count_tables.sh: reads table names from stdin and prints each table's row count
while read line
do
  echo "$line "
  eval "hive -e 'select count(*) from $line'"
done
Make the script executable and run it against the table list:
$chmod +x count_tables.sh
$./count_tables.sh < tables.txt > counts.txt
If you are looking to log the row count periodically, you can store the row counts in a CSV by writing the values as comma-separated values and creating an external table pointing to the file.
something like
$./count_tables.sh < tables.txt | sed 's/\t/,/g' > counts.txt
Hope that's the best way to achieve this
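If you go the external-table route mentioned above, a minimal sketch of the DDL (the HDFS path and column names here are assumptions, and it expects the output reshaped into table_name,row_count lines):
CREATE EXTERNAL TABLE table_row_counts (table_name string, row_count bigint)
ROW FORMAT DELIMITED FIELDS TERMINATED BY ','
STORED AS TEXTFILE
LOCATION '/user/hive/rowcounts/';
Then copy counts.txt into that directory (hadoop fs -put counts.txt /user/hive/rowcounts/) and query the history with plain SELECTs.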
I found out the answer. It should be something like this:
INSERT INTO TABLE mytable
SELECT t2.cnt, t3.cnt FROM
(SELECT count(*) AS cnt FROM test2) t2
CROSS JOIN
(SELECT count(*) AS cnt FROM test3) t3;

bash script for hive queries giving errors

I am trying to execute a bash script containing hive queries. But when I execute the script, it says the raw_data and central tables are not found. I already have these tables in hive. Below is the bash script. Kindly suggest what's wrong.
#!/bin/bash
hive -e
"CREATE TEMPORARY FUNCTION rowSequence AS 'com.hive.udf.UDFRowSequence'; "
hive -e "
create table staging (id String,speed String,time String,time_id int);"
hive -e "
insert into table staging select marker.marker.id,
marker.marker.speed ,
marker.marker.time as time,
rowSequence() as time_id
from raw_data
lateral view explode (raw_data.markers.marker)marker as marker;"
hive -e "
create table processed (plc string,direction string,table int,speed string,time_id string,day int);"
hive -e "
insert into table processed select c.plc,c.direction,c.table,t.speed as speed,t.time_id,0 from central c JOIN staging t ON (t.id = c.boxno);"
Include use [databasename] at the start of each statement.
E.g.:
hive -e "use dummy_database;create table staging (id String,speed
String,time String,time_id int);"
hive -e "use dummy_database;insert into table staging select
marker.marker.id,
marker.marker.speed ,
marker.marker.time as time,
rowSequence() as time_id
from raw_data
lateral view explode (raw_data.markers.marker)marker as marker;"
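One more point, as an aside rather than part of the original answer: each hive -e call is a separate session, so the TEMPORARY FUNCTION registered in the first call is no longer defined when the later statements run. A sketch of doing everything in one invocation (dummy_database is just the example name from above):
hive -e "use dummy_database;
CREATE TEMPORARY FUNCTION rowSequence AS 'com.hive.udf.UDFRowSequence';
insert into table staging select marker.marker.id,
marker.marker.speed,
marker.marker.time as time,
rowSequence() as time_id
from raw_data
lateral view explode (raw_data.markers.marker) marker as marker;"
Alternatively, put all the statements in one .hql file and run it once with hive -f.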

Inserting Data into Hive Table

I am new to hive. I have successfully set up a single node hadoop cluster for development purposes and, on top of it, I have installed hive and pig.
I created a dummy table in hive:
create table foo (id int, name string);
Now, I want to insert data into this table. Can I add data one record at a time, just like in SQL? Kindly help me with an analogous command to:
insert into foo (id, name) VALUES (12, "xyz");
Also, I have a csv file which contains data in the format:
1,name1
2,name2
..
..
..
1000,name1000
How can I load this data into the dummy table?
I think the best way is:
a) Copy data into HDFS (if it is not already there)
b) Create external table over your CSV like this
CREATE EXTERNAL TABLE TableName (id int, name string)
ROW FORMAT DELIMITED
FIELDS TERMINATED BY ','
LINES TERMINATED BY '\n'
STORED AS TEXTFILE
LOCATION 'place in HDFS';
c) You can start using TableName already by issuing queries to it.
d) If you want to insert the data into another Hive table:
insert overwrite table finalTable select * from TableName;
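Going back to step a), a small sketch of getting the CSV into HDFS first (the local and HDFS paths are only examples):
hadoop fs -mkdir -p /user/hive/staging/foo
hadoop fs -put /home/user/data.csv /user/hive/staging/foo/
Then use '/user/hive/staging/foo' as the LOCATION in the CREATE EXTERNAL TABLE above.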
There's no direct way to insert 1 record at a time from the terminal; however, here's an easy, straightforward workaround which I usually use when I want to test something:
Assume that t is a table with at least 1 record. It doesn't matter what the type or number of columns is.
INSERT INTO TABLE foo
SELECT '12', 'xyz'
FROM t
LIMIT 1;
Hive apparently supports INSERT...VALUES starting in Hive 0.14.
Please see the section 'Inserting into tables from SQL' at: https://cwiki.apache.org/confluence/display/Hive/LanguageManual+DML
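With that feature, the single-record insert from the question should look roughly like this (note that Hive string literals use single quotes):
INSERT INTO TABLE foo VALUES (12, 'xyz');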
Whatever data you have in a text file or log file, you can put it on a path in HDFS and then write a query like the following in hive:
hive> load data inpath '<<specify input path>>' into table <<tablename>>;
EXAMPLE:
hive> create table foo (id int, name string)
row format delimited
fields terminated by ','    -- or '\t' or '|', depending on your file
stored as textfile;
table created..
DATA INSERTION::
hive>load data inpath '/home/hive/foodata.log' into table foo;
To insert an ad-hoc value like (12, "xyz"), do this:
insert into table foo select * from (select 12,"xyz")a;
This is supported from hive version 0.14.
INSERT INTO TABLE pd_temp(dept,make,cost,id,asmb_city,asmb_ct,retail) VALUES('production','thailand',10,99202,'northcarolina','usa',20)
These were limitations of hive (in older versions; as noted above, Hive 0.14 added INSERT ... VALUES, and transactional tables add UPDATE/DELETE):
1. You cannot update data after it is inserted
2. There is no "insert into table values ... " statement
3. You can only load data using bulk load
4. There is no "delete from" command
5. You can only do bulk delete
But if you still want to insert a record from the hive console, you can do it with an INSERT ... SELECT, as shown in the other answers.
You may try this: I have developed a tool to generate hive scripts from a csv file. Following are a few examples of how the files are generated.
Tool -- https://sourceforge.net/projects/csvtohive/?source=directory
Select a CSV file using Browse and set the hadoop root directory, e.g.: /user/bigdataproject/
The tool generates a Hadoop script for all the csv files; the following is a sample of the
generated Hadoop script to load the csv files into Hadoop
#!/bin/bash -v
hadoop fs -put ./AllstarFull.csv /user/bigdataproject/AllstarFull.csv
hive -f ./AllstarFull.hive
hadoop fs -put ./Appearances.csv /user/bigdataproject/Appearances.csv
hive -f ./Appearances.hive
hadoop fs -put ./AwardsManagers.csv /user/bigdataproject/AwardsManagers.csv
hive -f ./AwardsManagers.hive
Sample of generated Hive scripts
CREATE DATABASE IF NOT EXISTS lahman;
USE lahman;
CREATE TABLE AllstarFull (playerID string,yearID string,gameNum string,gameID string,teamID string,lgID string,GP string,startingPos string) row format delimited fields terminated by ',' stored as textfile;
LOAD DATA INPATH '/user/bigdataproject/AllstarFull.csv' OVERWRITE INTO TABLE AllstarFull;
SELECT * FROM AllstarFull;
Thanks
Vijay
You can use the following lines of code to insert values into an already existing table. Here the table is db_name.table_name with two columns, and I am inserting 'ALL','Done' as a row in the table.
insert into table db_name.table_name
select 'ALL','Done';
Hope this was helpful.
The Hadoop file system is not designed for appending individual records to existing files. However, you can load your CSV file into HDFS and tell Hive to treat it as an external table.
Use this -
create table dummy_table_name as select * from source_table_name;
This will create a new table populated with the data currently in source_table_name.
LOAD DATA [LOCAL] INPATH '<file_path>' [OVERWRITE] INTO TABLE <table_name>;
Use this command; it loads the data in one go, just specify the file path.
If the file is on the local filesystem, use LOCAL; if the file is already in HDFS, there is no need to use LOCAL.
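A short sketch of both forms, reusing the foo table and the file path from the earlier answer (the HDFS path in the second command is an assumption):
LOAD DATA LOCAL INPATH '/home/hive/foodata.log' INTO TABLE foo;   -- file on the local filesystem of the client machine
LOAD DATA INPATH '/user/hive/foodata.log' INTO TABLE foo;         -- file already in HDFS; it is moved into the table's location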