Good morning,
When trying to create an external table in the Hue Impala query editor I get a permission denied error. Where do I need to grant the permissions?
The error message:
ImpalaRuntimeException: Error making 'createTable' RPC to Hive Metastore: CAUSED BY: MetaException: Got exception: org.apache.hadoop.security.AccessControlException Permission denied: user=impala, access=WRITE, inode="/user/ds":ds:ds:drwxr-xr-x at org.apache.hadoop.hdfs.server.namenode.DefaultAuthorizationProvider.checkFsPermission(DefaultAuthorizationProvider.java:279)
Here's the command that fails:
create external table hdfs_timedim
(
timedimkey INT,
ampmname STRING,
begin15minutehourminutesecondname STRING,
creationauditjobcontrolid INT,
end15minutehourminutesecondname STRING,
hourminutesecondampmname STRING,
hournumber INT,
lastupdateauditjobcontrolid INT,
militaryhourminutesecondname STRING,
militaryhournumber INT,
minutenumber INT,
slotname STRING,
slotnumber INT,
sqltime STRING
)
row format delimited fields terminated by ',' LINES TERMINATED BY '\n'
location '/xxx/timedim';
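The error itself points at the fix: the impala user needs WRITE access on the HDFS directory the Metastore resolves for the table (here /user/ds, owned by ds:ds with mode drwxr-xr-x). One option is an HDFS ACL; a minimal sketch, assuming ACLs are enabled (dfs.namenode.acls.enabled=true) and you can run commands as the HDFS superuser:
hdfs dfs -setfacl -R -m user:impala:rwx /user/ds
hdfs dfs -setfacl -R -m default:user:impala:rwx /user/ds
The second command adds a default ACL so directories created later under /user/ds inherit the grant. Alternatively, chown/chmod the directory, or point LOCATION at a directory the impala user can already write to.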
Related
I am trying to create an external table in Hive, with the data in HDFS, using the following query.
CREATE EXTERNAL TABLE `post` (
FileSK STRING,
OriginalSK STRING,
FileStatus STRING,
TransactionType STRING,
TransactionDate STRING
)
ROW FORMAT DELIMITED
FIELDS TERMINATED BY ','
STORED AS PARQUET TBLPROPERTIES("Parquet.compression"="SNAPPY")
LOCATION 'hdfs://.../post'
I get the following error:
Error while compiling statement: FAILED: ParseException line 11:2
missing EOF at 'LOCATION' near ')'
What is the best way to create a Hive external table with data stored in Parquet format?
I am able to create the table after removing the property TBLPROPERTIES("Parquet.compression"="SNAPPY"):
CREATE EXTERNAL TABLE `post` (
FileSK STRING,
OriginalSK STRING,
FileStatus STRING,
TransactionType STRING,
TransactionDate STRING
)
ROW FORMAT DELIMITED
FIELDS TERMINATED BY ','
STORED AS PARQUET
LOCATION 'hdfs://.../post'
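The original failure is a clause-ordering problem rather than the property itself: in Hive's CREATE TABLE grammar, TBLPROPERTIES must come after LOCATION, so the parser stops as soon as it sees LOCATION following the properties. A sketch that keeps the compression setting (note the conventionally lowercase parquet.compression key; ROW FORMAT DELIMITED is dropped because field delimiters are meaningless for Parquet files):
CREATE EXTERNAL TABLE `post` (
FileSK STRING,
OriginalSK STRING,
FileStatus STRING,
TransactionType STRING,
TransactionDate STRING
)
STORED AS PARQUET
LOCATION 'hdfs://.../post'
TBLPROPERTIES ("parquet.compression"="SNAPPY");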
I am new to this. I created a partitioned table and was trying to insert data into it.
This is my main table:
CREATE TABLE test1(
FIPS INT, Admin2 STRING, Province_State STRING, Country_Region STRING,
Last_Update TIMESTAMP, Lat FLOAT, Long_ FLOAT,
Confirmed INT, Deaths INT, Recovered INT, Active INT,
Combined_Key STRING, Incident_Rate FLOAT, Case_Fatality_Ratio FLOAT)
ROW FORMAT DELIMITED FIELDS TERMINATED BY ',' LINES TERMINATED BY '\n'
The partitioned table:
create table province_state_part(country_region string,
confirmed int, deaths int)
PARTITIONED BY(province_state string)
row format delimited fields terminated by ',' lines terminated by '\n'
Inserting into the partitioned table from the main table:
INSERT into TABLE province_state_part PARTITION(province_state)
SELECT country_region, confirmed, deaths, province_state
FROM test1;
But I get this error:
Error while processing statement: FAILED: Execution Error, return code 2 from org.apache.hadoop.hive.ql.exec.mr.MapRedTask
What is this and how do I solve it?
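Return code 2 just means the underlying MapReduce job failed; the actual exception is in the task logs (YARN/JobTracker UI). That said, dynamic-partition inserts like this one depend on two standard Hive settings, so they are worth checking first:
SET hive.exec.dynamic.partition=true;
SET hive.exec.dynamic.partition.mode=nonstrict;
Then re-run the INSERT. If it still fails, the task logs will name the real cause (for example malformed rows, memory, or exceeding hive.exec.max.dynamic.partitions).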
I have an IBM Cloud instance with Hive/HBase. I created a table in Hive and loaded some data into it from a CSV file.
My CSV file contains information about Google Play Store apps.
My commands for creating the table and loading the data are the following:
hive> create table if not exists app_desc (name string,
category string, rating int,
reviews int, installs string,
type string, price int,
content string, genres string,
last_update string, current_ver string,
android_ver string)
row format delimited fields terminated by ',';
hive> load data local inpath '/home/uamibm130/googleplaystore.csv' into table app_desc;
OK, it works correctly, and using a SELECT I obtain the data correctly.
Now I want to create an HBase table; my problem is that I don't know how to do it correctly.
First of all I create the HBase table -> create 'google_db_', 'google_data', 'info_data'
Then I try to create an external table using a Hive command, but what I get is an error that my table is not found.
This is the command I am using for the creation of the external Hive table:
create external table uamibm130_hbase_google (name string, category string, rating int, reviews int, installs string, type string, price int, content string, genres string, last_update string, current_ver string, android_ver string)
stored by 'org.apache.hadoop.hive.hbase.HBaseStorageHandler' WITH SERDEPROPERTIES ("hbase.columns.mapping" = ":key,
google_data:category,google_data:rating, info_data:reviews,
info_data:installs, info_data:type, info_data:price, info_data:content,
info_data:genres, info_data:last_update, info_data:current_ver,
info_data:android_ver") TBLPROPERTIES("hbase.table.name" = "google_db_");
I don't know the correct way to create an HBase table from a Hive schema so that my .csv data is loaded correctly.
Any idea? I am new to this.
Thanks!
Try it with the create table statements below.
Create the HBase table:
hbase(main):001:0>create 'google_db_','google_data','info_data'
Create the Hive external table on HBase:
hive> create external table uamibm130_hbase_google (name string, category string, rating int, reviews int, installs string, type string, price int, content string, genres string, last_update string, current_ver string, android_ver string)
stored by 'org.apache.hadoop.hive.hbase.HBaseStorageHandler' WITH SERDEPROPERTIES ("hbase.columns.mapping" = ":key,
google_data:category,google_data:rating, info_data:reviews,
info_data:installs, info_data:type, info_data:price, info_data:content,
info_data:genres, info_data:last_update, info_data:current_ver,
info_data:android_ver") TBLPROPERTIES("hbase.table.name" = "google_db_",
"hbase.mapred.output.outputtable" = "google_db_");
Then insert data into the Hive-HBase table (uamibm130_hbase_google) from the Hive table (app_desc).
Insert data into the Hive-HBase table:
hive> insert into table uamibm130_hbase_google select * from app_desc;
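To confirm the rows actually landed in HBase, a quick scan from the HBase shell (the LIMIT value here is arbitrary):
hbase(main):002:0> scan 'google_db_', {LIMIT => 2}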
I am new to Hadoop and Hive. I am using open-source Hadoop 2.7.1 and Hive 1.2.2, installed on a single-node Ubuntu cluster. I have a CSV file with 106 rows and 30 columns, which I import into a Hive table using the following code:
CREATE TABLE clinicaldatabc (
comp_tcga_id String, gender String, age_inti_diag int,
ER_status String, PR_status String, HER2_final_status String,
Tumor String, Tumor_T1_code String, Node String, Node_coded String,
Metastasis String, Metastasis_coded String, AJCC_Stage String, Converted_stage String,
Survival_dt_from String, Vital_Status String,
d_to_date_of_last_contact int, d_to_Day_of_Death int, OS_event int, OS_time int,
PAM50_mRNA String, SigClust_unsupervised_mRNA int, SigClust_intrinsic_mRNA int,
miRNA_clusters int, methylation_clusters int, RPPA_clusters int, CN_clusters int,
integrated_clusters_with_PAM50 int, integrated_cluster_no_exp int, integrated_clusters_unsup_exp int)
ROW FORMAT DELIMITED
FIELDS TERMINATED BY ',';
Then I get NULL values in some columns of the results:
[screenshots: first and second half of the returned rows, with NULLs in the non-string columns]
Please help me solve it. Thank you in advance!
Possible duplicate of NULL column names in Hive query result
The first thing to note here is that the NULL values occur in columns that are not of type string. That is the signature of the CSV header row being read as data: the header text cannot be parsed as int, so those columns come back as NULL. Refer to this example, which skips the header row with skip.header.line.count:
CREATE EXTERNAL TABLE IF NOT EXISTS ejREGandTEST(
DBN STRING,
School_name STRING,
Year_of_SHST INT,
Grade_level INT,
Enrollment INT,
Number_of_registered INT,
Number_students_SHSAT INT)
row format delimited fields terminated by ','
location "/user/ebin/kaggleData/csv"
TBLPROPERTIES("skip.header.line.count"="1");
I created an external table with a wrong (non-existent) path:
create external table IF NOT EXISTS ds_user_id_csv
(
type string,
imei string,
imsi string,
idfa string,
msisdn string,
mac string
)
PARTITIONED BY(prov string,day string)
ROW FORMAT DELIMITED FIELDS TERMINATED BY ','
stored as textfile
LOCATION 'hdfs://cdh0:8020/user/hive/warehouse/test.db/ds_user_id';
And I cannot drop the table:
[cdh1:21000] > drop table ds_user_id_csv
> ;
Query: drop table ds_user_id_csv
ERROR:
ImpalaRuntimeException: Error making 'dropTable' RPC to Hive Metastore:
CAUSED BY: MetaException: java.lang.IllegalArgumentException: Wrong FS: hdfs://cdh0:8020/user/hive/warehouse/test.db/ds_user_id, expected: hdfs://nameservice1
So how do I solve this? Thank you.
Use the following command to point the table at a location on the filesystem the Metastore expects (hdfs://nameservice1), then drop the table:
ALTER TABLE ds_user_id_csv SET LOCATION '{new location}';
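For example, reusing the path from the original statement but on the expected nameservice (a sketch based on the error message above):
ALTER TABLE ds_user_id_csv SET LOCATION 'hdfs://nameservice1/user/hive/warehouse/test.db/ds_user_id';
DROP TABLE ds_user_id_csv;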