I have a partitioned table in Hive and I want to assign the value of the date partition column dynamically (yesterday's date). Below is my current query, but it's not working.
ALTER TABLE db1.table1 ADD IF NOT EXISTS PARTITION (loaddate="date_sub(FROM_UNIXTIME(UNIX_TIMESTAMP(),'yyyy-MM-dd') , 1)") LOCATION "hdfs://location1/abc/rawdata/externalhivetables/downloading/data";
Instead of returning the date value, it returns the complete expression:
select downloading.loaddate From downloading limit 3;
+------------------------------------------------------------+
|                    downloading.loaddate                     |
+------------------------------------------------------------+
| date_sub(FROM_UNIXTIME(UNIX_TIMESTAMP(),'yyyy-MM-dd') , 1)  |
| date_sub(FROM_UNIXTIME(UNIX_TIMESTAMP(),'yyyy-MM-dd') , 1)  |
| date_sub(FROM_UNIXTIME(UNIX_TIMESTAMP(),'yyyy-MM-dd') , 1)  |
+------------------------------------------------------------+
In the Hive shell we cannot yet assign a variable from the result of a query, so we need two steps:
Use a shell script to execute the query and store the result in a variable.
Then initialize the Hive shell/script with that variable.
bash$ var=`hive -S -e "select date_sub(FROM_UNIXTIME(UNIX_TIMESTAMP(),'yyyy-MM-dd') , 1);"`
bash$ echo $var
Now initialize the hive/beeline shell with the variable value
bash$ hive -hiveconf dd=$var
hive> ALTER TABLE db1.table1 ADD IF NOT EXISTS PARTITION (loaddate='${hiveconf:dd}') LOCATION "hdfs://location1/abc/rawdata/externalhivetables/downloading/data";
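For reference, the two steps can also be combined into a single non-interactive script instead of an interactive session. This is only a sketch, reusing the table, partition column and HDFS location from the question:
#!/bin/bash
# step 1: let Hive compute yesterday's date and capture it in a shell variable
var=$(hive -S -e "select date_sub(FROM_UNIXTIME(UNIX_TIMESTAMP(),'yyyy-MM-dd'), 1);")
# step 2: pass the value in via -hiveconf and reference it in the ALTER TABLE statement
hive -hiveconf dd="$var" -e 'ALTER TABLE db1.table1 ADD IF NOT EXISTS PARTITION (loaddate="${hiveconf:dd}") LOCATION "hdfs://location1/abc/rawdata/externalhivetables/downloading/data";'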
Use the shell to calculate the date and substitute it using shell variable substitution:
bash$ dt=$(date -d '-1 day' +%Y-%m-%d)
bash$ hive -e "ALTER TABLE db1.table1 ADD IF NOT EXISTS PARTITION (loaddate='$dt') LOCATION 'hdfs://location1/abc/rawdata/externalhivetables/downloading/data'"
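Either way, a quick optional sanity check (not part of the original answers) is to list the table's partitions afterwards and confirm yesterday's date shows up:
bash$ hive -e "SHOW PARTITIONS db1.table1"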
Currently, I am trying to check the timestamp difference in hours, with the expressions passed as variables through the command line. But I am unable to get the desired output when passing them through variables.
a=2019-11-18 12:49:43
b=2020-04-04 20:32:33
timediff=$(bq query --nouse_legacy_sql \ 'SELECT TIMESTAMP_DIFF(TIMESTAMP "'$a'", TIMESTAMP "$b", HOUR);')
Looks like the variables I am passing are not recognized. Can someone help me understand the correct way of doing it?
In addition to Hemant's answer, and to further contribute to the community, I will provide an alternative method.
As stated in the documentation, it is possible to use parameterized queries in BigQuery through the command-line interface (CLI). You need to use the --parameter flag within your bq query command in order to specify the variables/parameters you will use.
This flag must be in the format name:type:value. If type is omitted, STRING is assumed. As an example:
timediff=$(bq query --use_legacy_sql=false \
  --parameter='ts_value:TIMESTAMP:2016-12-07 08:00:00' \
  --parameter='ts_value1:TIMESTAMP:2016-12-07 09:00:00' \
  'SELECT TIMESTAMP_DIFF(@ts_value, @ts_value1, HOUR)')
echo $timediff
And the output is:
+-----+
| f0_ |
+-----+
| -1 |
+-----+
You could use --format=csv to format the output as a line:
f0_ -1
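Building on that, and as a sketch of my own rather than part of the original answer, you can combine --format=csv with the awk 'NR>1' trick used in a later answer on this page to drop the f0_ header and keep only the number:
timediff=$(bq query --use_legacy_sql=false --format=csv \
  --parameter='ts_value:TIMESTAMP:2016-12-07 08:00:00' \
  --parameter='ts_value1:TIMESTAMP:2016-12-07 09:00:00' \
  'SELECT TIMESTAMP_DIFF(@ts_value, @ts_value1, HOUR)' | awk 'NR>1')
echo $timediff
which prints just -1.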
In addition, you can use aliases to simplify your query. For instance:
alias bq_set="bq query --use_legacy_sql=false --format=pretty"
timediff=$(bq_set \
  --parameter='ts_value:TIMESTAMP:2016-12-07 08:00:00' \
  --parameter='ts_value1:TIMESTAMP:2016-12-07 09:00:00' \
  'SELECT TIMESTAMP_DIFF(@ts_value, @ts_value1, HOUR)')
echo $timediff
The output:
+-----+
| f0_ |
+-----+
| -1 |
+-----+
As you can see, it is just an alternative way to simplify your query.
Try using single quotes around the variables, but double-quotes around the entire query. For example:
a='2019-11-18 12:49:43'
b='2020-04-04 20:32:33'
timediff=$(bq query --format=csv --nouse_legacy_sql "SELECT TIMESTAMP_DIFF(TIMESTAMP '$a', TIMESTAMP '$b', HOUR);" | awk 'NR>1')
echo $timediff
-3319
I'd like to create a table name in Hive using variable substitution.
E.g.
SET market = "AUS";
create table ${hiveconf:market_cd}_active as ... ;
But it fails. Any idea how it can be achieved?
You should use backticks (``) around the name for that, like:
SET market=AUS;
CREATE TABLE `${hiveconf:market}_active` AS SELECT 1;
DESCRIBE `${hiveconf:market}_active`;
Example: running script.sql from beeline:
$ beeline -u jdbc:hive2://localhost:10000/ -n hadoop -f script.sql
Connecting to jdbc:hive2://localhost:10000/
...
0: jdbc:hive2://localhost:10000/> SET market=AUS;
No rows affected (0.057 seconds)
0: jdbc:hive2://localhost:10000/> CREATE TABLE `${hiveconf:market}_active` AS SELECT 1;
...
INFO : Dag name: CREATE TABLE `AUS_active` AS SELECT 1(Stage-1)
...
INFO : OK
No rows affected (12.402 seconds)
0: jdbc:hive2://localhost:10000/> DESCRIBE `${hiveconf:market}_active`;
...
INFO : Executing command(queryId=hive_20190801194250_1a57e6ec-25e7-474d-b31d-24026f171089): DESCRIBE `AUS_active`
...
INFO : OK
+-----------+------------+----------+
| col_name | data_type | comment |
+-----------+------------+----------+
| _c0 | int | |
+-----------+------------+----------+
1 row selected (0.132 seconds)
0: jdbc:hive2://localhost:10000/> Closing: 0: jdbc:hive2://localhost:10000/
Markovitz's criticisms are correct, but do not produce a correct solution. In summary, you can use variable substitution for things like string comparisons, but NOT for things like naming variables and tables. If you know much about language compilers and parsers, you get a sense of why this would be true. You could construct such behavior in a language like Java, but SQL is just too crude.
Running that code produces an error, "cannot recognize input near '$' '{' 'hiveconf' in table name". (I am running Hortonworks, Hive 1.2.1000.2.5.3.0-37.)
I spent a couple of hours Googling and experimenting with different combinations of punctuation and different tools (command line, Ambari, DB Visualizer, etc.), and I never found a way to construct a table name or a field name from a variable value. I think you're stuck with using variables in places where you need a string literal, like comparisons; you cannot use them in place of reserved words or existing data structures. For example:
--works
drop table if exists user_rgksp0.foo;
-- Does NOT work:
set MY_FILE_NAME=user_rgksp0.foo;
--drop table if exists ${hiveconf:MY_FILE_NAME};
-- Works
set REPORT_YEAR=2018;
select count(1) as stationary_event_count, day, zip_code, route_id from aaetl_dms_pub.dms_stationary_events_pub
where part_year = '${hiveconf:REPORT_YEAR}'
-- Does NOT Work:
set MY_VAR_NAME='zip_code';
select count(1) as stationary_event_count, day, '${hiveconf:MY_VAR_NAME}', route_id from aaetl_dms_pub.dms_stationary_events_pub
where part_year = 2018
The quotes should be removed.
You're using the wrong variable name (market_cd instead of market).
SET market=AUS; create table ${hiveconf:market}_active as select 1;
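As a hedged variant that is not part of this answer, the value can also be supplied from the command line with --hiveconf instead of SET, in the same spirit as the -hiveconf example earlier on this page:
hive --hiveconf market=AUS -e 'create table ${hiveconf:market}_active as select 1;'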
Hello, I'm querying a Teradata database like this:
for var in `db2 -x "$other_query"`;
do
query_update_date="update test SET date =Null WHERE
name_test='$var '"
db2 -v "$query_update_date"
done
My query is executed, but I would like to print query_update_date only when one or more rows are affected (changed) by the update.
Example :
If I have
First query of loop :
query_update_date="update test SET date =Null WHERE
name_test='John'"
and second query of the loop :
query_update_date="update test SET date =Null WHERE
name_test='Jeff'"
and in my table before the query:
name_test date
Jeff 01/07/2016
John Null
After the query
name_test date
Jeff Null
John Null
The date for John was already null, so it wasn't affected by the update.
And
db2 -v "$query_update_date"
prints my queries. What I want, for the previous example, is to print in my logs only
query_update_date="update test SET date =Null WHERE
name_test='Jeff'"
Take snapshots of the table before and after you execute the query. Use whatever tools you like: for example (assuming SQL), spool a "SELECT * FROM T" query into one file beforehand and into another file afterwards. Then use the UNIX diff command to compare the two files and simply count the length of its output:
LINES_OUT=$(diff oldResults newResults | wc -l)
if [[ $LINES_OUT -gt 0 ]]
then
    echo "$query_update_date"   # log the query, however you do that
fi
If $LINES_OUT is greater than zero, the snapshots differ, meaning the update changed at least one row, so log the query; otherwise, don't.
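To make that concrete, here is a minimal sketch of my own that folds the snapshot-and-diff idea into the loop from the question. The table and column names come from the question; before.txt, after.txt and updates.log are made-up file names:
#!/bin/bash
for var in $(db2 -x "$other_query"); do
    query_update_date="update test SET date = Null WHERE name_test='$var'"

    # snapshot the rows this statement could touch, before and after the update
    db2 -x "SELECT name_test, date FROM test WHERE name_test='$var'" > before.txt
    db2 -v "$query_update_date"
    db2 -x "SELECT name_test, date FROM test WHERE name_test='$var'" > after.txt

    # log the statement only if the update actually changed at least one row
    if [[ $(diff before.txt after.txt | wc -l) -gt 0 ]]; then
        echo "$query_update_date" >> updates.log
    fi
done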
I'm unable to reference a SELECT alias in BigQuery (standard mode).
Trying to do this query:
SELECT
REGEXP_EXTRACT_ALL(text,
r"(<div \w+>)") AS matches
FROM
regex.test
WHERE
matches IS NOT NULL
Here are steps to reproduce.
bq mk regex
bq mk -t regex.test id:integer,text:string
echo '{"id":1, "text":"<div a>"}' | bq insert regex.test
echo '{"id":2, "text":"<div b>"}' | bq insert regex.test
echo '{"id":3, "text":"<div>"}' | bq insert regex.test
bq query --use_legacy_sql=false "select REGEXP_EXTRACT_ALL(text, r\"(<div \w+>)\") AS matches FROM regex.test WHERE id IS NOT NULL"
+--------------+
| matches |
+--------------+
| [u'<div b>'] |
| [] |
| [u'<div a>'] |
+--------------+
When I try to reference the matches alias, I see an error:
bq query --use_legacy_sql=false "select REGEXP_EXTRACT_ALL(text, r\"(<div \w+>)\") AS matches FROM regex.test WHERE matches IS NOT NULL"
Error in query string: Error processing job 'myname': Unrecognized name:
matches
I am unable to reference the alias matches, and am unable to filter those results WHERE matches IS NOT NULL.
Does anyone know what I'm doing incorrectly here?
Thanks!
Even in BQ, you can't use a column alias in the where clause. Just use a subquery:
SELECT t.*
FROM (SELECT REGEXP_EXTRACT_ALL(text, r"(<div \w+>)") AS matches
FROM regex.test
) t
WHERE ARRAY_LENGTH(matches) > 0
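If you want to run that from the CLI in the same way as the commands above, a sketch using the same escaping style would be:
bq query --use_legacy_sql=false "SELECT t.* FROM (SELECT REGEXP_EXTRACT_ALL(text, r\"(<div \w+>)\") AS matches FROM regex.test) t WHERE ARRAY_LENGTH(matches) > 0"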
Check out SELECT list aliases visibility
The reason comparing with NULL doesn't work for REGEXP_EXTRACT_ALL is that it returns an array, so checking the array length is the way to go. Comparing with NULL will still work for REGEXP_EXTRACT.
In addition, ideally you should be able to use REGEX_MATCH to filter out records without matches, but it looks like there is an issue with this function in standard mode.
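To illustrate the REGEXP_EXTRACT point with a sketch of my own (reusing the question's regex.test table; first_match is just an illustrative alias): since it returns a single STRING, or NULL when nothing matches, you can repeat the expression in the WHERE clause:
bq query --use_legacy_sql=false "SELECT REGEXP_EXTRACT(text, r\"(<div \w+>)\") AS first_match FROM regex.test WHERE REGEXP_EXTRACT(text, r\"(<div \w+>)\") IS NOT NULL"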
How to execute a Hive query in the background when the query looks like the one below?
Select count(1) from table1 where column1='value1';
I am trying to write it using a script like the one below:
#!/usr/bin/ksh
exec 1> /home/koushik/Logs/`basename $0 | cut -d"." -f1 | sed 's/\.sh//g'`_$(date +"%Y%m%d_%H%M%S").log 2>&1
ST_TIME=`date +%s`
cd $HIVE_HOME/bin
./hive -e 'SELECT COUNT(1) FROM TABLE1 WHERE COLUMN1 = ''value1'';'
END_TIME=`date +%s`
TT_SECS=$(( END_TIME - ST_TIME))
TT_HRS=$(( TT_SECS / 3600 ))
TT_REM_MS=$(( TT_SECS % 3600 ))
TT_MINS=$(( TT_REM_MS / 60 ))
TT_REM_SECS=$(( TT_REM_MS % 60 ))
printf "\n"
printf "Total time taken to execute the script="$TT_HRS:$TT_MINS:$TT_REM_SECS HH:MM:SS
printf "\n"
but getting error like
FAILED: SemanticException [Error 10004]: Line 1:77 Invalid table alias or column reference 'value1'
Let me know exactly where I am making a mistake.
Create a file named example:
vi example
Enter the query in the file and save it:
create table sample as
Select count(1) from table1 where column1='value1';
Now run the file using the following command:
hive -f example 1>example.output 2>example.error &
You will get the result as
[1]
Now disown the process:
disown
Now the process will keep running in the background even if you close the shell. The query result is written to example.output; if you want to watch Hive's progress messages, you may use
tail -f example.error
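A closely related alternative (my addition, not part of this answer) is to start the job with nohup, so it keeps running after the terminal is closed without needing disown afterwards:
nohup hive -f example 1>example.output 2>example.error &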
True @Koushik! Glad that you found the issue.
In your command, bash was unable to form the Hive query correctly because of the ambiguous single quotes.
Though SELECT COUNT(1) FROM Table1 WHERE Column1 = 'Value1' is valid in hive,
$hive -e 'SELECT COUNT(1) FROM Table1 WHERE Column1 = 'Value1';' is not valid, because the inner single quotes close and reopen the outer quoting, so Value1 reaches Hive unquoted and gets treated as a column reference.
The best solution would be to use double quotes for the Value1 as
hive -e 'SELECT COUNT(1) FROM Table1 WHERE Column1 = "Value1";'
or use a quick and dirty solution by including the single quotes within double quotes.
hive -e 'SELECT COUNT(1) FROM Table1 WHERE Column1 = '"'Value1'"';'
This makes sure that the Hive query is properly formed and then executed accordingly. I wouldn't suggest this approach unless you desperately need a single quote ;)
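For completeness, here is the same pattern as a hedged sketch for the common case where the value comes from a shell variable (val1 is just an illustrative name): wrap the whole statement in double quotes so the variable expands and the inner single quotes reach Hive intact.
val1=Value1
hive -e "SELECT COUNT(1) FROM Table1 WHERE Column1 = '$val1';"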
I was able to resolve it by replacing the single quotes with double quotes. The modified statement now looks like:
./hive -e 'SELECT COUNT(1) FROM Table1 WHERE Column1 = "Value1";'