Invalid table alias or column reference error in Hive - hive

I have created a Hive partitioned table and added a new column to one of the partitions (without the CASCADE option).
I am able to see the column for the partition when I run DESCRIBE FORMATTED, but I am not able to read data for the newly added column.
How do I access the newly added column for a partition in Hive?
Error message below:
The newly added column is named "new_name".
Error: Error while compiling statement: FAILED: SemanticException [Error 10004]: Line 1:10 Invalid table alias or column reference 'new_name': (possible column names are: id, name, dob) (state=42000,code=10004)
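No answer is recorded for this question here, but a likely fix, sketched under the assumption that the table is called my_table (a hypothetical name): Hive only propagates schema changes to existing partitions when CASCADE is used, so re-issuing the column change with CASCADE on a reasonably recent Hive usually makes the column readable for old partitions:

```sql
-- Hypothetical table name; CASCADE pushes the metadata change down
-- to every existing partition instead of only the table level:
ALTER TABLE my_table
  CHANGE COLUMN new_name new_name STRING CASCADE;
```

After this, a SELECT on the old partition should resolve new_name (returning NULL where the underlying files have no data for it).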

Related

Hive - Can't Use ORDER BY with INSERT

I'm using Hive 3.1.3 on AWS EMR. When I try to INSERT records with an ORDER BY clause, the statement fails with the error message SemanticException [Error 10004]: Line 5:9 Invalid table alias or column reference 'ColumnName': (possible column names are: _col0...n). When I remove the ORDER BY, the INSERT works fine. Here's a simple example that reproduces the error:
CREATE TABLE People (PersonName VARCHAR(50), Age INT);
INSERT INTO People (PersonName, Age)
SELECT 'Mary' PersonName, 32 Age
UNION
SELECT 'John' PersonName, 41 Age
ORDER BY Age DESC;
FAILED: SemanticException [Error 10004]: Line 5:9 Invalid table alias or column reference 'Age': (possible column names are: _col0, _col1)
I know I can simply remove the ORDER BY, but the codebase is an existing application built to run on a traditional RDBMS, and there are lots of ORDER BYs on INSERT statements. Is there any way I can make the INSERTs with ORDER BY work so I don't have to comb through thousands of lines of SQL and remove them?
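One workaround sometimes suggested for this Hive limitation is to hide the UNION behind a sub-select, so the outer ORDER BY resolves named columns rather than Hive's internal _colN aliases. A sketch (untested against this exact setup):

```sql
-- Wrap the UNION in a derived table so its output columns keep
-- their names; the ORDER BY then references those names:
INSERT INTO People (PersonName, Age)
SELECT u.PersonName, u.Age
FROM (
  SELECT 'Mary' PersonName, 32 Age
  UNION
  SELECT 'John' PersonName, 41 Age
) u
ORDER BY u.Age DESC;
```

Note that an ORDER BY on an INSERT does not guarantee any retrieval order in Hive anyway, so if the ordering is only cosmetic this rewrite mainly serves to keep the legacy SQL compiling.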

How to identify which column has a data type mismatch (and also ORA-01722) while inserting data into a table from one user to another

I have a table with some records in one user (schema) and an empty table in another user. I want to migrate the data of that table from one user to the other, but I get error ORA-01722 because the datatype of the target table is slightly mismatched. What should I do to resolve this problem without changing the datatype?
Data type of the source table is-
Description of the target table-
Between the two tables only one column, LOTFRONTAGE, has a mismatched datatype: in the source table it is VARCHAR2, and in the target table it is NUMBER.
How can I identify which column has the data type mismatch?
When I insert the data using this SQL query:
insert into md.house(ID,MSSUBCLASS,MSZONING,
CAST(LOTFRONTAGE AS VARCHAR2(15)),LOTAREA,LOTSHAPE,LOTCONFIG,
NEIGHBORHOOD,CONDITION1,BLDGTYPE,OVERALLQUAL,
YEARBUILT,ROOFSTYLE,EXTERIOR1ST,MASVNRAREA)
select ID,MSSUBCLASS,MSZONING,LOTFRONTAGE,
LOTAREA,LOTSHAPE,LOTCONFIG,NEIGHBORHOOD,CONDITION1,
BLDGTYPE,OVERALLQUAL,YEARBUILT,ROOFSTYLE,
EXTERIOR1ST,MASVNRAREA from SYS.HOUSE_DATA;
Then I get this error:
ORA-00917: missing comma
The CAST does not belong in the INSERT column list; that is what raises ORA-00917. Put the conversion in the SELECT list instead, and convert in the right direction, VARCHAR2 source to NUMBER target:
INSERT INTO md.house (ID, ..., LOTFRONTAGE, ..., MASVNRAREA)
SELECT ID, ..., TO_NUMBER(LOTFRONTAGE), ..., MASVNRAREA
FROM SYS.HOUSE_DATA;
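If any LOTFRONTAGE value is not a clean number, TO_NUMBER itself will still raise ORA-01722. A pre-check along these lines (a sketch; the pattern assumes plain integer or decimal strings) can locate the offending rows before the migration:

```sql
-- Rows whose LOTFRONTAGE cannot be converted by a plain TO_NUMBER:
SELECT id, lotfrontage
FROM   sys.house_data
WHERE  NOT REGEXP_LIKE(TRIM(lotfrontage), '^-?[0-9]+(\.[0-9]+)?$');
```

Any rows this returns would need to be cleaned up (or defaulted to NULL) before the INSERT ... SELECT can succeed.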

Error on insert data from another table into an existing partition

I am trying to insert into a partitioned table like:
insert into table select * from table@dblink
where date = 201702003;
At first, I created the correct partition:
TP_201702003 values (201702003)
And got this error:
Error report -
SQL Error: ORA-14400: inserted partition key does not map to any partition
14400. 00000 - "inserted partition key does not map to any partition"
*Cause: An attempt was made to insert a record into, a Range or Composite
Range object, with a concatenated partition key that is beyond
the concatenated partition bound list of the last partition -OR-
An attempt was made to insert a record into a List object with
a partition key that did not match the literal values specified
for any of the partitions.
*Action: Do not insert the key. Or, add a partition capable of accepting
the key, Or add values matching the key to a partition specification
This is the only month with this problem.
Can you see anything wrong?
Thanks in advance.
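Since no answer is recorded here, a diagnostic sketch (table, column, and link names are placeholders): ORA-14400 means the inserted key maps to no existing partition, so compare the keys actually arriving over the db link with the partition bounds defined locally. Note the key 201702003 has nine digits, so a stray zero is worth checking for.

```sql
-- Keys coming from the remote table for the problem month:
SELECT DISTINCT date_col
FROM   remote_table@dblink
WHERE  date_col = 201702003;

-- Partitions (and their upper bounds) defined on the local table:
SELECT partition_name, high_value
FROM   user_tab_partitions
WHERE  table_name = 'LOCAL_TABLE';
```

If the distinct remote keys fall outside every HIGH_VALUE listed, the fix is to add a partition covering them (or correct the key itself).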

SemanticException Partition spec {col=null} contains non-partition columns

I am trying to create dynamic partitions in hive using following code.
SET hive.exec.dynamic.partition = true;
SET hive.exec.dynamic.partition.mode = nonstrict;
create external table if not exists report_ipsummary_hourwise(
ip_address string,imp_date string,imp_hour bigint,geo_country string)
PARTITIONED BY (imp_date_P string,imp_hour_P string,geo_coutry_P string)
row format delimited
fields terminated by '\t'
stored as textfile
location 's3://abc';
insert overwrite table report_ipsummary_hourwise PARTITION (imp_date_P,imp_hour_P,geo_country_P)
SELECT ip_address,imp_date,imp_hour,geo_country,
imp_date as imp_date_P,
imp_hour as imp_hour_P,
geo_country as geo_country_P
FROM report_ipsummary_hourwise_Temp;
Where report_ipsummary_hourwise_Temp table contains following columns,
ip_address,imp_date,imp_hour,geo_country.
I am getting this error
SemanticException Partition spec {imp_hour_p=null, imp_date_p=null,
geo_country_p=null} contains non-partition columns.
Can anybody suggest why this error is occurring?
Your INSERT SQL has the column geo_country_P, but the target table's partition column is named geo_coutry_P: an 'n' is missing in 'country'.
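Concretely, one way out is to match the INSERT's partition spec to the column name exactly as the table was created (typo and all); the alternative is to drop and recreate the table with the corrected name and keep the original INSERT:

```sql
-- Partition column spelled exactly as declared (geo_coutry_P):
INSERT OVERWRITE TABLE report_ipsummary_hourwise
PARTITION (imp_date_P, imp_hour_P, geo_coutry_P)
SELECT ip_address, imp_date, imp_hour, geo_country,
       imp_date    AS imp_date_P,
       imp_hour    AS imp_hour_P,
       geo_country AS geo_coutry_P
FROM report_ipsummary_hourwise_Temp;
```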
I was facing the same error. It was caused by extra characters present in the file.
The best solution is to remove all the blank characters and reinsert the data.
It could also be https://issues.apache.org/jira/browse/HIVE-14032: INSERT OVERWRITE fails with case-sensitive partition key names.
There is a bug in Hive which makes partition column names case-sensitive.
For me the fix was that the column name in the query and the PARTITIONED BY clause in the table definition both had to be lower-case. (They can both be upper-case too; due to the Hive bug HIVE-14032, the case just has to match.)
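A minimal sketch of that case-matching workaround (table and column names are made up):

```sql
-- Partition column spelled identically (all lower-case) in both
-- the table definition and the INSERT's partition spec:
CREATE TABLE demo_events (payload STRING)
PARTITIONED BY (imp_date_p STRING);

INSERT OVERWRITE TABLE demo_events PARTITION (imp_date_p)
SELECT payload, imp_date AS imp_date_p
FROM demo_events_staging;
```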
It says that while copying files from the result to HDFS, the job could not recognize the partition location. What I suspect: you have a table partitioned by (imp_date_P, imp_hour_P, geo_country_P), whereas the job is trying to copy to imp_hour_p=null, imp_date_p=null, geo_country_p=null, which doesn't match. Try checking the HDFS location. The other point I can suggest is not to duplicate the same column as both a data column and a partition column:
insert overwrite table report_ipsummary_hourwise PARTITION (imp_date_P,imp_hour_P,geo_country_P)
SELECT ip_address,imp_date,imp_hour,geo_country,
imp_date as imp_date_P,
imp_hour as imp_hour_P,
geo_country as geo_country_P
FROM report_ipsummary_hourwise_Temp;
The highlighted fields should have the same names as the partition columns declared on the report_ipsummary_hourwise table.

SemanticException adding partition to Hive table

Attempting to create a partition on a Hive table with the following:
> alter table stock_ticker add if not exists
> partition(stock_symbol='ASP')
> location 'data/stock_ticker_sample/stock_symbol=ASP/'
Which produces the following output
FAILED : SemanticException table is not partitioned but partition spec exists: {stock_symbol=ASP}
There are no partitions on this table prior to this addition attempt
> show partitions stock_ticker;
which results in
FAILED: Execution Error, return code 1 from org.apache.hadoop.hive.ql.exec.DDLTask.
Table stock_ticker_sample is not a partitioned table
There is no question that the stock_symbol column exists and is of type string.
The question is: what steps need to be taken in order to add this partition?
The solution would be to add partitioning info to the definition of the stock_ticker table:
CREATE EXTERNAL TABLE stock_ticker (
...
)
PARTITIONED BY (stock_symbol STRING);
Then you can easily add external data to your table:
> alter table stock_ticker add if not exists
> partition(stock_symbol='ASP')
> location 'data/stock_ticker_sample/stock_symbol=ASP/'
GL!