Hive - Unable to use PATINDEX in Hive

I am using the string function PATINDEX in SQL to find the index of a particular pattern match in a string. I have a column named volume in a table which contains values such as 0.75L, 1.0L, 0.375L.
When I execute the SQL query below, i.e.
select PATINDEX('%L%',VOLUME) from volume_details
Output of the query: 5, 4, 6
How can I achieve the same in a Hive query, since PATINDEX is not currently supported?
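Hive does not have PATINDEX, but for a plain-substring pattern like '%L%' its built-in locate (or instr) function returns the same 1-based position. A minimal sketch against the same volume_details table:
select locate('L', VOLUME) from volume_details;
-- returns 5, 4, 6 for 0.75L, 1.0L, 0.375L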

Related

How can you filter Snowflake EXPLAIN AS TABULAR syntax when it's embedded in the TABLE function? Can you filter it with anything?

I have a table named Posts that I would like to count and profile in Snowflake using the current Snowsight UI.
When I return the results via EXPLAIN USING TABULAR, I am able to return the set with the combination of the TABLE, RESULT_SCAN, and LAST_QUERY_ID functions, but any predicate, filter, or column reference seems to fail.
Is there a valid way to do this in Snowflake with the TABLE function, or is there another way to query the output of EXPLAIN USING TABULAR?
-- Works
EXPLAIN using TABULAR SELECT COUNT(*) from Posts;
-- Works
SELECT t.* FROM TABLE(RESULT_SCAN(LAST_QUERY_ID())) as t;
-- Does not work
SELECT t.* FROM TABLE(RESULT_SCAN(LAST_QUERY_ID())) as t where operation = 'GlobalStats';
-- invalid identifier 'OPERATION', the column does not seem recognized.
I tried the third example and expected the predicate to apply to the function output. I don't understand why the filter works on some TABLE() results and not others.
You need to double quote the column name:
where "operation" =
From the documentation:
Note that because the output column names from the DESC USER command were generated in lowercase, the commands use delimited identifier notation (double quotes) around the column names in the query to ensure that the column names in the query match the column names in the output that was scanned.
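Applied to the failing example above, the same query with the delimited identifier would look like this (a sketch, assuming the EXPLAIN statement was the last one executed in the session):
EXPLAIN USING TABULAR SELECT COUNT(*) FROM Posts;
SELECT t.* FROM TABLE(RESULT_SCAN(LAST_QUERY_ID())) AS t WHERE t."operation" = 'GlobalStats';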

SQL query to get reserved words used in a PostgreSQL database

I need a SQL query that returns all the reserved words used in a PostgreSQL database.
Postgres provides the set-returning function pg_get_keywords() for that:
select *
from pg_get_keywords()
The columns of the result are documented in the manual.
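If only the strictly reserved words are needed, the result can be filtered on the catcode column, which the manual documents as 'R' for reserved keywords; a small sketch:
select word
from pg_get_keywords()
where catcode = 'R'
order by word;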

What does the expression Select `(column1|column2|column3)?+.+` from Table mean in SQL?

I am trying to convert SQL code into PySpark SQL.
While selecting the columns from a table, the Select statement has something like the following:
Select a.`(column1|column2|column3)?+.+`,trim(column c) from Table a;
I would like to understand what the
a.`(column1|column2|column3)?+.+`
expression resolves to and what it actually implies. How do I address this while converting the SQL into PySpark?
That is a way of selecting certain column names using regexes. The regex matches (and thereby excludes) the columns column1, column2, and column3, so the statement returns every other column of the table.
It is Spark's equivalent of Hive's Quoted Identifiers. See also Spark's documentation.
Be aware that, to enable this behavior, it is first necessary to run the following command:
spark.sql("SET spark.sql.parser.quotedRegexColumnNames=true").show(false)

How to Pass Comma String as Single String in IN Clause

I'm writing SQL code using an IN operator. It works well unless I encounter records that have a comma in the string.
That is where I fail to fetch those records.
For example, 'Pratik,Sarangi' is a single record I want to pass in the IN clause, but it is read as two different values.
I am using Oracle SQL to build the query:
select * from table where name in (:name)
where :name is a user input parameter.
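One common Oracle pattern (a sketch, not part of the original thread; the table name your_table and the semicolon delimiter are assumptions) is to pass the whole list as one string using a delimiter that cannot appear in the data, then split it inside the query with regexp_substr and connect by:
select *
from your_table
where name in (
  select regexp_substr(:name, '[^;]+', 1, level)
  from dual
  connect by regexp_substr(:name, '[^;]+', 1, level) is not null
);
-- e.g. :name = 'Pratik,Sarangi;John Doe' keeps the comma inside a single value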

Hive function to replace comma in column value

I have a Hive table which has a String column with values such as 12,345. Is there any Hive function which can remove the comma during insertion into this Hive table?
You can use regexp_replace(string INITIAL_STRING, string PATTERN, string REPLACEMENT), which is a built-in function in Hive.
So if you are moving the data from a table that contains the comma to a new table, you would use:
insert into table NEW select regexp_replace(commaColumn,',','') from OLD;
Hive also has a split function, which can be used here: split and concat to achieve the desired result (a sketch follows below).
You may refer to this question: Does Hive have a String split function?
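A minimal sketch of the split-and-concat approach, reusing the NEW/OLD tables and the commaColumn name from the answer above:
insert into table NEW
-- split breaks '12,345' into the array ['12','345']; concat_ws joins it back without the comma
select concat_ws('', split(commaColumn, ',')) from OLD;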