I'm having an odd problem with Impala when I try to create a table via "CREATE TABLE abc AS (SELECT ...)". Even though it creates the table, the query returns 'table not found'.
Does anyone know why this can happen?
I0126 04:01:36.553565 25748 coordinator.cc:584] Finalizing query: 86456b134c56d5e6:9f58d67400000000
I0126 04:01:36.643385 25748 coordinator.cc:606] Removing staging directory: hdfs://nameserviceHDFS/user/hive/warehouse/abc/_impala_insert_staging/86456b134c56d5e6_9f58d67400000000/
I0126 04:01:36.658812 25748 coordinator.cc:488] ExecState: query id=86456b134c56d5e6:9f58d67400000000 execution completed
I0126 04:01:36.658973 25748 coordinator.cc:863] Release admission control resources for query_id=86456b134c56d5e6:9f58d67400000000
I0126 04:01:36.673594 25748 client-request-state.cc:1100] Updating metastore with 1 altered partitions ()
I0126 04:01:36.673655 25748 client-request-state.cc:1115] Executing FinalizeDml() using CatalogService
E0126 04:01:36.677054 25748 client-request-state.cc:1121] ERROR Finalizing DML: TableNotFoundException: Table not found: abc
Try adding the schema name before the table name, and also share the full query here.
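For example, a minimal sketch of a fully qualified CTAS, where my_db and source_table are placeholder names (not from the original post):

-- Qualify the table with its database name so Impala resolves it unambiguously
CREATE TABLE my_db.abc AS SELECT * FROM my_db.source_table;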
I have created a Hive managed table with ORC and PARQUET formats. While selecting the values from the table with "SELECT * FROM table_name" I get the error below.
java.io.IOException: java.lang.IllegalArgumentException: bucketId out of range: -1 (state=,code=0)
Check the DDL of the table. It appears to be a bucketed table, but the underlying folders/files have a different number of buckets than the table definition declares.
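To see the declared bucketing, a quick sketch using standard Hive statements (table_name as in the question):

-- Full DDL; look for a clause like CLUSTERED BY (col) INTO n BUCKETS
SHOW CREATE TABLE table_name;
-- Alternatively, the detailed description includes Num Buckets
DESCRIBE FORMATTED table_name;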
I have a table with some records under one user, and another, empty table under a different user. I want to migrate the data of that table from one user to the other, but I get error ORA-01722 because the datatype of one column in the target table is mismatched. What should I do to resolve this problem without changing the datatype?
In the two tables, only one column, LOTFRONTAGE, has a mismatched datatype: in the source table it is VARCHAR2 and in the target table it is NUMBER.
How can I identify which column has the datatype mismatch?
When I insert the data using this SQL query:
insert into md.house(ID,MSSUBCLASS,MSZONING,
CAST(LOTFRONTAGE AS VARCHAR2(15)),LOTAREA,LOTSHAPE,LOTCONFIG,
NEIGHBORHOOD,CONDITION1,BLDGTYPE,OVERALLQUAL,
YEARBUILT,ROOFSTYLE,EXTERIOR1ST,MASVNRAREA)
select ID,MSSUBCLASS,MSZONING,LOTFRONTAGE,
LOTAREA,LOTSHAPE,LOTCONFIG,NEIGHBORHOOD,CONDITION1,
BLDGTYPE,OVERALLQUAL,YEARBUILT,ROOFSTYLE,
EXTERIOR1ST,MASVNRAREA from SYS.HOUSE_DATA;
Then I got this error:
ORA-00917: missing comma
You could try this (the conversion belongs in the SELECT list, not in the INSERT column list; a CAST among the target columns is what raises ORA-00917):
INSERT INTO 2ndTable (ID, ..., LOTFRONTAGE, ..., MASVNRAREA)
SELECT ID, ..., TO_NUMBER(LOTFRONTAGE), ..., MASVNRAREA
FROM 1stTable;
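Applied to the tables from the question, the same idea looks like the sketch below (column list copied from the original statement; note that TO_NUMBER itself raises ORA-01722 if any LOTFRONTAGE value is not numeric):

INSERT INTO md.house (ID, MSSUBCLASS, MSZONING, LOTFRONTAGE, LOTAREA,
                      LOTSHAPE, LOTCONFIG, NEIGHBORHOOD, CONDITION1, BLDGTYPE,
                      OVERALLQUAL, YEARBUILT, ROOFSTYLE, EXTERIOR1ST, MASVNRAREA)
SELECT ID, MSSUBCLASS, MSZONING,
       TO_NUMBER(LOTFRONTAGE),  -- convert the VARCHAR2 source to the NUMBER target
       LOTAREA, LOTSHAPE, LOTCONFIG, NEIGHBORHOOD, CONDITION1, BLDGTYPE,
       OVERALLQUAL, YEARBUILT, ROOFSTYLE, EXTERIOR1ST, MASVNRAREA
FROM SYS.HOUSE_DATA;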
Scenario:
Generate Rows -> Table Input -> Delay Row -> Table Output.
Generate Rows (4 copies):
Generate 10 rows.
Pass the field 'value' with value 1.
Table Input (4 copies):
Execute for each row, using the value 1 (as WHERE 1 = ?, so it has no effect).
Insert data from the previous step.
Get the count of the table my_table (SELECT COUNT(*) FROM ...; see the sketch after this list).
Field = val_count.
Delay Row (4 copies):
1 second delay.
Table Output (1 copy):
Insert val_count into the same table, i.e. my_table.
Commit size = 1.
Truncate table enabled.
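A minimal sketch of the Table Input query described above (field and table names as in the question; the ? parameter is bound to the 'value' field from Generate Rows):

SELECT COUNT(*) AS val_count  -- the alias becomes the output field name
FROM my_table
WHERE 1 = ?                   -- bound to the constant 1, so no filtering effect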
Database: Oracle
Once the transformation finishes, my_table is filled with only the value 0 (40 zeros in total). Why does the Table Input step not get the actual row count after the first round of execution (rounds 2 to 10)? Or what mistake did I make in this design?
Pentaho: Kettle - Spoon General Availability Release - 5.3.0.0-213
Java: jdk1.8.0_51 (64)
OS: Windows 8.1 (64)
Oracle: Oracle Database 11g Express Edition Release 11.2.0.2.0 - Production
More info, added after analysis:
Of the four hops into the Table Output step, I removed the delay from one hop and got somewhat the expected result. So I removed all the delays and got the expected result. But I am not able to understand the reason.
I am using SQL BULK INSERT to insert data into a temporary table.
My table has a column XYZ defined as varchar NOT NULL, and I want rows where the XYZ value in the delimited file is empty to be written to the error file. Currently BULK INSERT treats it as a zero-length string and inserts it into the table.
The delimited file looks like this:
Col1|XYZ|Col2
abc||abc
abc||abc
abc|abc|abc
I tried using CHECK_CONSTRAINTS in the BULK INSERT statement and created a check constraint on the XYZ column in the table as XYZ <> '', but rather than writing the particular rows to the error file, the violation causes the entire BULK INSERT to fail.
Please help.
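For reference, a minimal sketch of the load described above (table name, file paths, and MAXERRORS are assumptions, not from the original post):

BULK INSERT #staging                      -- hypothetical temp table name
FROM 'C:\data\input.txt'                  -- hypothetical file path
WITH (
    FIELDTERMINATOR = '|',                -- matches the pipe-delimited sample
    ROWTERMINATOR = '\n',
    CHECK_CONSTRAINTS,                    -- enforce the XYZ <> '' check during the load
    ERRORFILE = 'C:\data\errors.txt',     -- hypothetical path for rejected rows
    MAXERRORS = 10                        -- assumption: tolerate up to 10 bad rows
);

Note that ERRORFILE only collects rows with formatting or conversion errors; a row that violates a check constraint is not redirected there, and the violation fails the statement instead, which matches the all-or-nothing behavior described above.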