Hive error with PARTITIONED BY

Here is my CREATE TABLE statement:
CREATE TABLE parquetpoc.employee USING PARQUET
LOCATION '/mnt/adlsQA/parquetPOC/output/employee'
PARTITIONED BY (`snapshot_year_month` string,`snapshot_day` string)
In the PARTITIONED BY clause I have tried single quotes, double quotes, no quotes, and backticks (`).
My folder structure is
/mnt/adlsQA/parquetPOC/output/employee/snapshot_year_month=201807/snapshot_day=01
I am getting this error
Error in SQL statement: ParseException:
extraneous input 'string' expecting ')'(line 3, pos 60)
How can I fix this?
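One likely cause, assuming this runs as Spark SQL on Databricks: with the datasource syntax CREATE TABLE ... USING PARQUET, the PARTITIONED BY clause takes only column names, with no data types; Hive-style typed partition columns belong to the CREATE EXTERNAL TABLE ... STORED AS syntax instead. A sketch of the fix under that assumption:

```sql
-- Sketch assuming Spark SQL datasource-table syntax:
-- partition columns are listed by name only, and Spark discovers the
-- snapshot_year_month=.../snapshot_day=... directories under LOCATION.
CREATE TABLE parquetpoc.employee
USING PARQUET
PARTITIONED BY (snapshot_year_month, snapshot_day)
LOCATION '/mnt/adlsQA/parquetPOC/output/employee'
```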

Related

ParseException when having a field name which contains '#' in SparkSQL

I'm inserting some fields of one table into another. Both are Hive tables in Databricks, so I can do it with a simple query like:
INSERT INTO <BBDD_NAME>.<TABLE1_NAME> (<FIELD_1>, <FIELD_2>)
SELECT <FIELD_1>, <FIELD_2># FROM <BBDD_NAME>.<TABLE2_NAME>
The problem I have is because one of the fields has a '#' inside its name and consequently I get a ParseException Error:
Error in SQL statement: ParseException:
mismatched input '#' expecting {<EOF>, ';'}
TABLE2 is F0911 from JDE (JDE table doc) and is loaded directly into Databricks via Spark, which infers the schema from the source. So the table was created without problems, including the field whose name contains '#'.
Is there any way of avoiding this ParseException Error?
Thanks in advance.
You can use backticks to escape illegal identifiers in column names.
This should work:
INSERT INTO <BBDD_NAME>.<TABLE1_NAME> (<FIELD_1>, <FIELD_2>)
SELECT <FIELD_1>, `<FIELD_2>#` FROM <BBDD_NAME>.<TABLE2_NAME>

SQL in Hive SemanticException [Error 10004]: Line 3:5 Invalid table alias or column reference

Trying to figure out Hive SQL; not having much luck with what appears to be the basics, but I'm just not getting it!
I have a query:
select
from_unixtime(unix_timestamp(unixTimeStampField)+43200) as MyLocalTime,
cast(MyLocalTime as timestamp) as EventTime,
*
from mart.table
where names in ('abc','xyz')
What I am trying to do is first convert the Unix time to my local time using from_unixtime, and then cast that column to a date/time type so my graphs can read it as a date/time rather than a string.
I am getting this error:
Error
Error while compiling statement: FAILED: SemanticException [Error 10004]: Line 3:5 Invalid table alias or column reference
I tried some fixes suggested in the comments, but none of them got me a result. Thanks in advance.
Can you please try this?
If you select all columns along with something else, you need to alias the table and use it to fetch all columns.
select
from_unixtime(unix_timestamp(unixTimeStampField)+43200) as MyLocalTime,
cast(MyLocalTime as timestamp) as EventTime,
t.* -- You need to call the table by alias.
from mart.table t -- alias the table.
where names in ('abc','xyz')
Thanks for that; I did try it, but no luck unfortunately. I did, however, modify the Unix conversion to cast it as a timestamp, and that worked instead.
cast(from_unixtime(unix_timestamp(tfield)+43200)as TIMESTAMP)
so it looks like this:
select
cast(from_unixtime(unix_timestamp(tfield)+43200) as TIMESTAMP) as MyLocalTime,
*
from mart.table
where names in ('abc','xyz')
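For reference, Hive does not let an expression in a SELECT list refer to an alias defined in that same SELECT list, which is why the cast of MyLocalTime failed; nesting the conversion avoids that. And if * is ever mixed with other columns, the table still needs an alias. A sketch combining both fixes (column and table names as in the question):

```sql
select
  cast(from_unixtime(unix_timestamp(tfield) + 43200) as TIMESTAMP) as MyLocalTime,
  t.*   -- table alias required when * is mixed with other select expressions
from mart.table t
where names in ('abc','xyz')
```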

How to query table when table name has hyphen in Redshift Spectrum?

I am attempting to query a table that has a hyphen in its name.
I have tried backticks and quotes (`, ', ") and they don't work.
Query
select * from hubspot.contacts__form-submissions
Error message:
Error running query: syntax error at or near "-" LINE 7: from hubspot.contacts__form-submissions ^
I don't have write permissions so I can't rename the table.
Any suggestions on how I can query this?
Try
select * from hubspot."contacts__form-submissions";

Hive: issue with CREATE TABLE when a column name contains a space

I need some advice. In Hive, is it possible to create a table with a column name containing a space, as below?
CREATE TABLE TEST2("Kod ASS" String)
I get an error as below:
Error: Error while compiling statement: FAILED: ParseException line 1:19 cannot recognize input near '"Kod ASS"' 'String' ')' in column specification
SQLState: 42000
ErrorCode: 40000
The manual says this about column names:
https://cwiki.apache.org/confluence/display/Hive/LanguageManual+DDL
In Hive 0.12 and earlier, only alphanumeric and underscore characters are allowed in table and column names.
In Hive 0.13 and later, column names can contain any Unicode character (see HIVE-6013). Any column name that is specified within
backticks (`) is treated literally. Within a backtick string, use
double backticks (``) to represent a backtick character. Backtick
quotation also enables the use of reserved keywords for table and
column identifiers.
To revert to pre-0.13.0 behavior and restrict column names to alphanumeric and underscore characters, set the configuration property
hive.support.quoted.identifiers to none. In this configuration,
backticked names are interpreted as regular expressions. For details,
see Supporting Quoted Identifiers in Column Names.
CREATE TABLE DB_Name.Table_name (
`First name` VARCHAR(64), `Last name` VARCHAR(64), `Location id` VARCHAR(64), age INT, gpa DECIMAL(3,2)) CLUSTERED BY (age) INTO 2 BUCKETS STORED AS ORC;
OR
CREATE TABLE TEST2(`Kod ASS` String) STORED AS TEXTFILE;
You can use backticks (`) and put the column name inside them.
I hope both work for you.
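Note that the space-containing column must be quoted with backticks when you query it, too. A minimal example:

```sql
SELECT `Kod ASS` FROM TEST2;
```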

Values inserted into a Hive table with double quotes for strings from a CSV file

I am loading a CSV file into a Hive table.
About the CSV file: column values are enclosed in double quotes and separated by commas.
Sample record from csv
"4","good"
"3","not bad"
"1","very worst"
I created a hive table with the following statement,
create external table currys(review_rating string, review_comment string) row format delimited fields terminated by ',';
The table was created.
Then I loaded the data with LOAD DATA LOCAL INPATH, and it succeeded.
when I query the table,
select * from currys;
The result is :
"4" "good"
"3" "not bad"
"1" "very worst"
instead of
4 good
3 not bad
1 very worst
The records are inserted with double quotes, which they shouldn't be.
Please let me know how to get rid of these double quotes. Any help or guidance is highly appreciated.
Thanks in advance!
Are you using any SerDe? If so, you can write a regex in the SERDEPROPERTIES to remove the quotes.
Or you can use the csv-serde from here and define the quote character.
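As a sketch of the second option, assuming Hive 0.14 or later (where OpenCSVSerde ships with Hive), the table can be declared with the CSV SerDe so the quotes are stripped at read time:

```sql
-- Sketch assuming Hive 0.14+: OpenCSVSerde strips the configured
-- quote character when reading each field.
CREATE EXTERNAL TABLE currys (
  review_rating STRING,
  review_comment STRING)
ROW FORMAT SERDE 'org.apache.hadoop.hive.serde2.OpenCSVSerde'
WITH SERDEPROPERTIES (
  "separatorChar" = ",",
  "quoteChar"     = "\"")
STORED AS TEXTFILE;
```

One caveat worth knowing: OpenCSVSerde treats every column as STRING, so numeric columns would need casting at query time.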