Getting an error while loading data into a table using Hive

I am getting this error:
"AnalysisException: Syntax error in line 1: undefined: LOAD DATA LOCAL INPATH '/home/cloudera... Encountered: IDENTIFIER Expected: INPATH CAUSED BY: Exception: Syntax error"
My code is:
LOAD DATA LOCAL INPATH '/home/cloudera/UNSW_NB15.csv' OVERWRITE INTO TABLE mybigdata;
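For what it's worth, the statement as posted is valid HiveQL. Two things worth checking, purely as guesses from the garbled error text: there must be whitespace between INPATH and the quoted path, and an "AnalysisException ... Encountered: ... Expected:" message is the format Impala reports, and Impala does not support the LOCAL keyword. A sketch of both variants, assuming the table and file from the question (the HDFS path is hypothetical):

-- Hive: LOCAL is supported; note the space before the quoted path.
LOAD DATA LOCAL INPATH '/home/cloudera/UNSW_NB15.csv' OVERWRITE INTO TABLE mybigdata;

-- Impala: no LOCAL keyword; the file must already be in HDFS.
LOAD DATA INPATH '/user/cloudera/UNSW_NB15.csv' OVERWRITE INTO TABLE mybigdata;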

Related

Error when trying to access a file on PC from H2 Console

When I try to access an existing file from the H2 Console, an error shows up. The error says the following:
org.h2.jdbc.JdbcSQLNonTransientException: IO Exception: "IOException reading C:\Users\najla\Desktop\Data-B\titanic.csv"; SQL statement:
select *
from csvread('C:\Users\najla\Desktop\Data-B\titanic.csv') [90028-214]
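For reference, CSVREAD runs inside the JVM that hosts H2, so the file has to be readable by that process, and forward slashes avoid Windows escaping issues. A minimal sketch, assuming the same file; the charset option is just an illustration of CSVREAD's optional third argument:

-- The path is resolved by the H2 server process, not the browser.
SELECT *
FROM CSVREAD('C:/Users/najla/Desktop/Data-B/titanic.csv', NULL, 'charset=UTF-8');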

Why am I getting an error when uploading a small CSV file to BigQuery?

The error message is:
Failed to create table: Error while reading data, error message: Error detected while parsing row starting at position: 0. Error: Bad character (ASCII 0) encountered.

Why does dbt run in the CLI but throw an error on the cloud UI for the exact same model?

I am executing dbt run -s model_name on the CLI and the task completes successfully. However, when I run the exact same command in dbt Cloud, I get this error:
Syntax or semantic analysis error thrown in server while executing query.
Error message from server: org.apache.hive.service.cli.HiveSQLException:
Error running query: org.apache.spark.sql.AnalysisException: cannot
resolve '`pv.meta.uuid`' given input columns: []; line 6 pos 4;
\n'Project ['pv.meta.uuid AS page_view_uuid#249595,
'pv.user.visitorCookieId AS (80) (SQLExecDirectW)")
It looks like it fails to recognize the 'pv.meta.uuid' syntax, which extracts data from a JSON format. It is not clear to me what is going on. Any thoughts? Thank you!
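One reading of the error: the empty input-column list ([]) suggests the relation that pv refers to resolves to nothing in the dbt Cloud connection, rather than the dotted syntax itself being invalid. Both of the following are ordinarily valid in Spark SQL; a sketch assuming a hypothetical page_views table whose meta column is either a struct or a JSON string:

-- If meta is a struct column, dotted access works directly.
SELECT pv.meta.uuid AS page_view_uuid FROM page_views pv;

-- If meta is a JSON string, extract the field with get_json_object.
SELECT get_json_object(pv.meta, '$.uuid') AS page_view_uuid FROM page_views pv;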

How do I upload a file to SQL with this error message?

Failed to create table: Error while reading data, error message: Error detected while parsing row starting at position: 0. Error: Bad character (ASCII 0) encountered.
This is the error message I get in SQL when I try to create a table from my CSV-format document.

pyhs2/hive: No files matching path, but file exists

Using the hive or beeline client, I have no problem executing this statement:
hive -e "LOAD DATA LOCAL INPATH '/tmp/tmpBKe_Mc' INTO TABLE unit_test_hs2"
The data from the file is loaded successfully into Hive.
However, when using pyhs2 from the same machine, the file is not found:
import pyhs2

# pyhs2.connect() takes keyword arguments, so unpack the dict.
conn_str = {'authMechanism': 'NOSASL', 'host': 'azus'}
conn = pyhs2.connect(**conn_str)
with conn.cursor() as cur:
    cur.execute("LOAD DATA LOCAL INPATH '/tmp/tmpBKe_Mc' INTO TABLE unit_test_hs2")
Throws exception:
Traceback (most recent call last):
File "data_access/hs2.py", line 38, in write
cur.execute("LOAD DATA LOCAL INPATH '%s' INTO TABLE %s" % (csv_file.name, table_name))
File "/edge/1/anaconda/lib/python2.7/site-packages/pyhs2/cursor.py", line 63, in execute
raise Pyhs2Exception(res.status.errorCode, res.status.errorMessage)
pyhs2.error.Pyhs2Exception: "Error while compiling statement: FAILED: SemanticException Line 1:23 Invalid path ''/tmp/tmpBKe_Mc'': No files matching path file:/tmp/tmpBKe_Mc"
I've seen similar questions posted about this problem, and the usual answer is that the query is running on a different server that doesn't have the local file '/tmp/tmpBKe_Mc' stored on it. However, if that is the case, why would running the command directly from the CLI work but using pyhs2 not work?
(Secondary question: how can I show which server is trying to handle the query? I've tried cur.execute("set"), which returns all configuration parameters but when grepping for "host" the returned parameters don't seem to contain a real hostname.)
Thanks!
This happens because pyhs2 goes through HiveServer2, which resolves LOCAL paths on the server it runs on, while the hive CLI resolves them on your own machine; that is why the same statement works from the CLI but not through pyhs2.
The solution is to save your source file to an HDFS location instead of local /tmp and load it from there.
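A minimal sketch of that workaround; the staging path in HDFS is hypothetical, and the file is copied up from the client first (e.g. with hdfs dfs -put):

-- From a shell on the client machine, copy the file into HDFS:
--   hdfs dfs -put /tmp/tmpBKe_Mc /user/hive/staging/tmpBKe_Mc
-- Then load from the HDFS path; note there is no LOCAL keyword, and
-- LOAD DATA INPATH moves the file into the table's directory.
LOAD DATA INPATH '/user/hive/staging/tmpBKe_Mc' INTO TABLE unit_test_hs2;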