I'm using Hive 3.1.3 on AWS EMR. When I try to INSERT records with an ORDER BY statement, the statement fails with error message SemanticException [Error 10004]: Line 5:9 Invalid table alias or column reference 'ColumnName': (possible column names are: _col0...n). When I remove the ORDER BY, the INSERT works fine. Here's a simple example that reproduces the error:
CREATE TABLE People (PersonName VARCHAR(50), Age INT);
INSERT INTO People (PersonName, Age)
SELECT 'Mary' PersonName, 32 Age
UNION
SELECT 'John' PersonName, 41 Age
ORDER BY Age DESC;
FAILED: SemanticException [Error 10004]: Line 5:9 Invalid table alias or column reference 'Age': (possible column names are: _col0, _col1)
I know I can simply remove the ORDER BY, but the codebase is an existing application built to run on a traditional RDBMS. There are lots of ORDER BYs on INSERT statements. Is there any way I can make the INSERTs with ORDER BYs work so I don't have to comb through thousands of lines of SQL and remove them?
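For what it's worth, this particular failure can usually be worked around without dropping the ORDER BY: wrapping the UNION in an aliased subquery gives the ORDER BY named columns to resolve instead of the internal _col0/_col1 names. A sketch against the example above (untested on EMR, and it still means touching each statement, so it may not help at your scale):

```sql
-- Wrap the UNION in a subquery so ORDER BY sees named columns
INSERT INTO People (PersonName, Age)
SELECT PersonName, Age
FROM (
  SELECT 'Mary' PersonName, 32 Age
  UNION
  SELECT 'John' PersonName, 41 Age
) u
ORDER BY Age DESC;
```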
I have a simple table called source with 3 columns (source_id, name, tenant_id) and some data in it.
I am trying to insert data into the table after checking whether it already exists, but I am getting this error. Any idea how to resolve it?
The Query -
INSERT INTO moa.source(source_id, name, tenant_id)
SELECT ((select max(source_id)+1 from moa.source), 'GE OWS - LHR', 1)
WHERE NOT EXISTS(SELECT 1 FROM moa.source where name = 'GE OWS - LHR');
The Error :
ERROR: INSERT has more target columns than expressions
LINE 1: INSERT INTO moa.source(source_id, name, tenant_id)
^
HINT: The insertion source is a row expression containing the same number of columns expected by the INSERT. Did you accidentally use extra parentheses?
SQL state: 42
Sorry, figured it out: there should be no parentheses after SELECT.
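For reference, the corrected statement is the same query with the extra parentheses removed from the select list, so the three expressions match the three target columns:

```sql
INSERT INTO moa.source (source_id, name, tenant_id)
SELECT (SELECT max(source_id) + 1 FROM moa.source), 'GE OWS - LHR', 1
WHERE NOT EXISTS (SELECT 1 FROM moa.source WHERE name = 'GE OWS - LHR');
```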
I am trying to update data from another table using PostgreSQL 9.6, following the documentation and Stack Overflow advice, with these queries:
1. This query finds the id_vertex of the geom closest to lokasi_esb.geom. You can ignore this one; it works properly.
CREATE TEMP TABLE temp1 AS
WITH kuery2 as(
SELECT id_esb, id_vertex, distant, rank() OVER (PARTITION BY id_esb ORDER BY distant asc) as ranked FROM table vertex)
select id_esb, id_vertex, distant, ranked
from kuery2
where ranked=1;
2. This query updates the lokasi_esb table's id_vertex_nearest column without the excluded table. (I already know it is wrong; I correct it in number 3.)
INSERT INTO lokasi_esb(id_esb, id_vertex_nearest)
select id_esb,id_vertex
from temp1
ON CONFLICT (id_esb) DO UPDATE
SET lokasi_esb.id_vertex_nearest = temp1.id_vertex;
I got this error:
ERROR: missing FROM-clause entry for table « temp1 »
SQL state: 42P01
Character: 634
3. This query updates the lokasi_esb table's id_vertex_nearest column using the excluded table.
INSERT INTO lokasi_esb(id_esb, id_vertex_nearest)
select id_esb,id_vertex
from temp1
ON CONFLICT (id_esb) DO UPDATE
SET lokasi_esb.id_vertex_nearest = excluded.id_vertex;
I got this error (translated from Indonesian):
ERROR: column excluded.id_vertex does not exist
SQL state: 42703
Character: 634
Can anybody help me figure out what happened here?
The column names in the "excluded" record refer to the columns of the target table. Also, the target column in the SET expression must not be prefixed with the table name (you can't update a different table anyway).
So you need to use:
SET id_vertex_nearest = excluded.id_vertex_nearest
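Putting that together, the full statement from number 3 becomes (same tables as shown above):

```sql
-- Upsert: insert each (id_esb, id_vertex) pair, or on a duplicate id_esb
-- overwrite id_vertex_nearest with the value that would have been inserted
INSERT INTO lokasi_esb (id_esb, id_vertex_nearest)
SELECT id_esb, id_vertex
FROM temp1
ON CONFLICT (id_esb) DO UPDATE
SET id_vertex_nearest = excluded.id_vertex_nearest;
```

Note that `excluded.id_vertex_nearest` (not `excluded.id_vertex`) is correct on the right-hand side, because `excluded` carries the target table's column names.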
I have a scenario:
I have a table called sample with three columns:
id, name, address
where id is a unique auto-increment column. Here is my data:
id  name   address
1   john   LA
2   peter  VS
My next row would be 3, 'smith', 'vegas'.
I tried this:
insert into sample select c1 as id from (select max(id)+1 from sample) c1, 'smith' as name , 'vegas' as address;
I am getting: Error: Error compiling statement: FAILED: SemanticException Error in parsing (state=42000,code=40000)
I have tried a UDF for the auto-increment column, but no luck.
Hive (alas) doesn't support auto-increments. You can implement this as:
insert into sample (id, name, address)
select coalesce(max(id) + 1, 1), 'smith' as name , 'vegas' as address
from sample c1;
That said, I strongly recommend that you don't do this. Two inserts running at the same time will (likely) see the same maximum value -- and insert the same value for the id. To get around this, you would need to lock the entire table for each insert. And that is quite expensive.
Use UUIDs and an insertion date instead.
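A minimal sketch of that approach, assuming the id column is redefined as STRING and a hypothetical inserted_at column is added (Hive's built-in uuid() is available from Hive 2.3 onward):

```sql
-- id must be a STRING column to hold a UUID; inserted_at is an
-- illustrative extra column for the insertion timestamp
INSERT INTO sample (id, name, address, inserted_at)
SELECT uuid(), 'smith', 'vegas', current_timestamp();
```

Unlike max(id) + 1, concurrent inserts each generate a distinct UUID, so no table lock is needed.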
It might be a syntax thing. Since I don't have a Hive instance to try it out on, does this work?
insert
into sample
select max(id)+1 c1
,max('smith') as name
,max('vegas') as address
from sample
I have created a Hive partitioned table and added a new column to one of the partitions (without the CASCADE option).
I am able to see the column for the partition when I run DESCRIBE FORMATTED, but I am not able to read data for the newly added column.
How do I access the newly added column for a partition in Hive?
The newly added column is named "new_name". Error message below:
Error: Error while compiling statement: FAILED: SemanticException [Error 10004]: Line 1:10 Invalid table alias or column reference 'new_name': (possible column names are: id, name, dob) (state=42000,code=10004)
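A common remedy for this situation (a sketch, untested; the table name is hypothetical and the column type is assumed to be STRING) is to re-issue the column change at the table level with CASCADE, which propagates the schema change to the metadata of every partition:

```sql
-- CHANGE keeps the column's name and type, but CASCADE forces the
-- partition-level metadata to be updated to match the table schema
ALTER TABLE my_table CHANGE COLUMN new_name new_name STRING CASCADE;
```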
I was trying to test an example of Hive indexes. I am not able to create an index on a partitioned column, but it works on all other columns. Most sites give examples on partitioned columns, but for some reason I cannot get it to work. I am using Hive 0.14 and the example was taken from Programming Hive. Can someone let me know if something is wrong in the code below?
CREATE TABLE employees (
name STRING,
salary FLOAT,
subordinates ARRAY<STRING>,
deductions MAP<STRING, FLOAT>,
address STRUCT<street:STRING, city:STRING, state:STRING, zip:INT>
)
PARTITIONED BY (country STRING, state STRING);
CREATE INDEX employees_index
ON TABLE employees (country)
AS 'org.apache.hadoop.hive.ql.index.compact.CompactIndexHandler'
WITH DEFERRED REBUILD
IDXPROPERTIES ('creator' = 'me', 'created_at' = 'some_time')
IN TABLE employees_index_table
PARTITIONED BY (country, name)
COMMENT 'Employees indexed by country and name.';
Error :
FAILED: ParseException line 7:0 missing EOF at 'PARTITIONED' near 'employees_index_table'
org.apache.hadoop.hive.ql.parse.ParseException: line 7:0 missing EOF at 'PARTITIONED' near 'employees_index_table'
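For what it's worth, Hive's CREATE INDEX grammar does not appear to accept a PARTITIONED BY clause at all, which would explain the parser failing exactly at that keyword; the book example may simply be an erratum. Dropping that clause gives a statement that should at least parse (a sketch, untested):

```sql
CREATE INDEX employees_index
ON TABLE employees (country)
AS 'org.apache.hadoop.hive.ql.index.compact.CompactIndexHandler'
WITH DEFERRED REBUILD
IDXPROPERTIES ('creator' = 'me', 'created_at' = 'some_time')
IN TABLE employees_index_table
COMMENT 'Employees indexed by country.';
```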