hivevars not going through with beeline in EMR - Hive

I am running a Hive SQL script in EMR. I am calling the script from beeline (in a shell script) with some hivevars, but they are not getting through to the script. Am I missing something obvious?
This is my shell script:
ST_DATE=$(TZ=America/Los_Angeles date +%Y-%m-%d -d "today");
beeline -u jdbc:hive2:// --hivevar ST_DATE="$ST_DATE" --hivevar lookback=45 -f ./ETL/predict_etl_hive.sql || exit 1
And this is the Hive SQL file:
-- DEBUG display hivevars
select '${hivevar:lookback}';
select '${hivevar:ST_DATE}';
------------------- STEP 1: ----------------------------
DROP TABLE IF EXISTS string1.us_past_con_${hivevar:lookback}d;
CREATE TABLE IF NOT EXISTS string1.us_past_con_${hivevar:lookback}d
However, it does not pick up the values. When using SELECT to display them, they come back as the literal variable names:
0: jdbc:hive2://>
0: jdbc:hive2://> -- DEBUG display hivevars
0: jdbc:hive2://> select '${hivevar:lookback}';
+----------------------+
|         _c0          |
+----------------------+
| ${hivevar:lookback}  |
+----------------------+
0: jdbc:hive2://> select '${hivevar:ST_DATE}';
+---------------------+
|         _c0         |
+---------------------+
| ${hivevar:ST_DATE}  |
+---------------------+
And then it eventually fails at the DROP line:
FAILED: ParseException line 1:46 missing EOF at '$' near 'us_past_con_'
22/07/06 21:53:11 [ef15917c-31f8-48ae-8ca4-770871367e05 main]: ERROR ql.Driver: FAILED: ParseException line 1:46 missing EOF at '$' near 'us_past_con_'
org.apache.hadoop.hive.ql.parse.ParseException: line 1:46 missing EOF at '$' near 'us_past_con_'
The vars do go through if I use hive -f instead of beeline.
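For reference, this is roughly the Hive CLI invocation that did substitute the values (same script and variables as above):
hive --hivevar ST_DATE="$ST_DATE" --hivevar lookback=45 -f ./ETL/predict_etl_hive.sql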

Why is my script not printing output on one line?

I am using the following `echo` in a script, and after I execute it the output is formatted as shown below:
`echo -e "UPDATE table1 SET table1_f1='$Fname' ,table1_f2='$Lname' where table1_f3='$id';\ncommit;" >> $OutputFile`
output: UPDATE table1 SET table1_f1='Fname' ,table1_f2='Lname' where table1_f3='id
';
The '; is appearing on a new line; why is that happening?
The variable $id in your shell script actually contains that newline (\n or \r\n) at the end, so there isn't really anything wrong in the part of the script you've shown here.
This effect is pretty common when the variable is created from the output of external commands or by reading external files, as you are doing here.
For simple values, one way to strip the newline off the end of the value before using it in your echo is:
id=$( echo "${id}" | tr -d '\r\n' );
or for scripts that already rely on a particular bash IFS value:
OLDIFS="${IFS}";
IFS=$'\n\t ';
id=$( echo "${id}" | tr -d '\r\n' );
IFS="${OLDIFS}";
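A quick way to reproduce the effect and verify the fix (a minimal sketch, assuming bash):
id=$'123\n'                           # simulate a value read with a trailing newline
echo "where table1_f3='$id';"         # the '; wraps onto the next line
id=$( echo "${id}" | tr -d '\r\n' )   # strip CR/LF
echo "where table1_f3='$id';"         # now prints on one line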

Postgres full text search error when using setweight()

I have PostgreSQL 9.6.8 running on Fedora 27 64-bit. When I execute this query:
UPDATE tbl SET textsearchable_index_col =
setweight(to_tsvector('french', coalesce("col1",'')), 'D') ||
setweight(to_tsvector('french', coalesce("col2",'')), 'D');
I get this error:
ERROR: cache lookup failed for function 3625
********** Error **********
ERROR: cache lookup failed for function 3625
SQL state: XX000
but when I execute either:
UPDATE tbl SET textsearchable_index_col =
setweight(to_tsvector('french', coalesce("col1",'')), 'D');
or
UPDATE tbl SET textsearchable_index_col =
setweight(to_tsvector('french', coalesce("col2",'')), 'D');
I get:
Query returned successfully: 0 rows affected, 11 msec execution time.
My question is: why does it work for each column individually but not when they are combined?
This link shows that it should be possible to use both columns in the same query (at the end of section 12.3.1).
Edit: here is what the system returns for Laurenz's queries. The first query returns
 oprname | oprleft  | oprright | oprcode
---------+----------+----------+---------
 ||      | tsvector | tsvector | 3625
The second query returns an empty result set.
Your database is corrupted: you are lacking the function tsvector_concat, which is the function behind the || operator.
This is how it should look on a healthy system:
SELECT oprname, oprleft::regtype, oprright::regtype, oprcode
FROM pg_operator
WHERE oid = 3633;
 oprname | oprleft  | oprright |     oprcode
---------+----------+----------+-----------------
 ||      | tsvector | tsvector | tsvector_concat
(1 row)
SELECT proname, proargtypes::regtype[], prosrc
FROM pg_proc
WHERE oid = 3625;
     proname     |        proargtypes        |     prosrc
-----------------+---------------------------+-----------------
 tsvector_concat | [0:1]={tsvector,tsvector} | tsvector_concat
(1 row)
The second part is missing in your case.
You should restore from a backup.
Try to figure out how you got into this mess so that you can avoid it in the future.
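If you want to double-check before restoring, you can call the function behind || by name; this is just a sketch, with a placeholder database name:
psql -d yourdb -c "SELECT tsvector_concat(to_tsvector('french','a'), to_tsvector('french','b'));"
# on a healthy system this returns a tsvector; with the pg_proc row missing it
# should complain that tsvector_concat(tsvector, tsvector) does not exist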

Escaping special characters when assigning SQL output to a variable in a shell script

I am trying to assign the output of an SQL query on an object containing special characters to a variable in a shell script.
Running directly on the database:
db2 -x 'select count(*) from <SCHEMA>."/BIC/TEST"'
11000
Yet when I include this in the script I need to use double quotes, as I am passing variables into the SQL. Using single quotes:
Output=$(db2 -x 'select count(*) from ${_SCHEMA}."/BIC/TEST"')
echo -e "$Output"
Results in:
SQL20521N Error occurred processing a conditional compilation directive near
"_". Reason code="7". SQLSTATE=428HV
When I use double quotes I hit:
SQL0104N An unexpected token "'/BIC/TEST'" was found following "ount(*)
I tried to escape the double quotes using another set of double quotes:
db2 -x 'select count(*) from ${_SCHEMA}.""/BIC/TEST""'
But this doesn't seem to work in the script. It works for tables where there are no special characters and no need to enclose the name in quotes.
Any help is appreciated.
The code below works fine for me; notice the escaped quotes. If it fails for you, you need to give more details of your DB2 version and the DB2 server operating system platform.
#!/bin/ksh
db2 connect to sample
(($? > 0 )) && print "Failed to connect to database" && exit 1
db2 -o- "drop table \"/bin/test\" "
db2 -v "create table \"/bin/test\"(a integer)"
(($? > 0 )) && print "Create table failed" && exit 1
db2 -v "insert into \"/bin/test\"(a) values(1),(2),(3),(4)"
(($? > 0 )) && print "insert rows failed" && exit 1
db2 -v describe table \"/bin/test\"
typeset -i count_rows=$(db2 -x "select count(*) from \"/bin/test\"" )
(($? > 0 )) && print "query count rows failed" && exit 1
print "\nRow count is: ${count_rows}\n"
db2 -o- connect reset
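Applied back to the original assignment, that means double quotes around the whole statement (so ${_SCHEMA} expands) and backslash-escaped quotes around the delimited identifier, roughly:
Output=$(db2 -x "select count(*) from ${_SCHEMA}.\"/BIC/TEST\"")
echo -e "$Output"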

index returns relation "search_name_0" does not exist in "Starting rank 2"

I have generated a Nominatim database before and never faced such an issue.
Please tell me which "setup.php" step generates "search_name_0" and similar tables.
Full error output is:
-bash-4.2$ ./utils/setup.php --index --threads 8 --osm2pgsql-cache 24000
nominatim version 2.5.1
Starting indexing rank (0 to 4) using 8 threads
Starting rank 0
Done 0 in 0 # 0.000000 per second - FINISHED
Starting rank 1
Done 0 in 0 # 0.000000 per second - FINISHED
Starting rank 2
index_placex: UPDATE failed: ERROR: relation "search_name_0" does not exist
LINE 1: DELETE from search_name_0 WHERE place_id = in_place_id
^
QUERY: DELETE from search_name_0 WHERE place_id = in_place_id
CONTEXT: PL/pgSQL function deletesearchname(integer,bigint) line 1260 at SQL statement
PL/pgSQL function placex_update() line 75 at assignment
(the same error is repeated for the remaining indexing threads)
ERROR: Error executing external command: /srv/Nominatim-2.5.1/nominatim/nominatim -i -d nominatim -P 5432 -t 8 -R 4
Error executing external command: /srv/Nominatim-2.5.1/nominatim/nominatim -i -d nominatim -P 5432 -t 8 -R 4
-bash-4.2$
It took me some time to figure out, but this happens when the "create-partition-tables" part of the setup fails. Simply re-running the setup will not re-create the tables. I had to manually remove all partition tables and then restart the setup with the "create-partition-tables" step to solve this.
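The recovery sequence was roughly the following (flags as in Nominatim 2.5.x setup.php; adjust paths and cache size to your install):
# after dropping the stale partition tables (e.g. search_name_0) in psql:
./utils/setup.php --create-partition-tables
./utils/setup.php --index --threads 8 --osm2pgsql-cache 24000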

Export Vertica query result to csv file

I'm working with Vertica. I am trying to export data from a SELECT query into CSV. I tried doing it with this SQL query:
SELECT * FROM table_name INTO OUTFILE '/tmp/fileName.csv' FIELDS TERMINATED BY ',' ENCLOSED BY '"' LINES TERMINATED BY '\n';
I got an error:
[Vertica][VJDBC](4856) ERROR: Syntax error at or near "INTO"
Is there a way to export a query result to a CSV file? I prefer not to use vsql, but if there is no other way, I will use it. I tried the following:
vsql -c "select * from table_name;" > /tmp/export_data.txt
Here is how you do it:
vsql -U dbadmin -F ',' -A -P footer=off -o dumpfile.txt -c "select ... from ... where ...;"
Reference: Exporting Data Using vsql
According to https://my.vertica.com/docs/7.1.x/HTML/Content/Authoring/ConnectingToHPVertica/vsql/ExportingDataUsingVsql.htm
=> SELECT * FROM my_table;
 a |   b   | c
---+-------+---
 a | one   | 1
 b | two   | 2
 c | three | 3
 d | four  | 4
 e | five  | 5
(5 rows)
=> \a
Output format is unaligned.
=> \t
Showing only tuples.
=> \pset fieldsep ','
Field separator is ",".
=> \o dumpfile.txt
=> select * from my_table;
=> \o
=> \! cat dumpfile.txt
a,one,1
b,two,2
c,three,3
d,four,4
e,five,5
In the following way you can write to a CSV file, comma separated and with no footer:
vsql -h $HOST -U $USER -d $DATABASE -w $PASSWORD -f $SQL_PATH/SQL_FILE -A -o $FILE -F ',' -P footer=off -q
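For example, with the placeholders filled in (hypothetical host, user, and file names):
vsql -h vertica01 -U dbadmin -d mydb -w secret -f /tmp/export_query.sql -A -o /tmp/export_data.csv -F ',' -P footer=off -q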