Hive "Show Tables" Fails with MetaException - hive

Using Hive 2.3.7 on AWS EMR (5.33.1), I have created a database, which shows up correctly when calling show databases;. I then create a table, which seems to work correctly (no exceptions). When I call describe <table>;, it correctly returns the name and schema of the table. However, when I run show tables;, the following error is returned:
FAILED: Execution Error, return code 1 from org.apache.hadoop.hive.ql.exec.DDLTask.MetaException(message:Got exception: org.apache.hadoop.hive.metastore.api.MetaException
Exception thrown when executing query :
SELECT A0.TBL_NAME,A0.TBL_NAME AS NUCORDER0 FROM TBLS A0 LEFT OUTER JOIN DBS B0 ON
A0.DB_ID = B0.DB_ID WHERE B0.`NAME` = ? AND LOWER(A0.TBL_NAME) LIKE '_%' ESCAPE '\' ORDER BY NUCORDER0)
If anyone can shed any light on this issue, it would be really appreciated.
I have googled around and found nothing of any use.
EDIT: show tables in <schema>; returned the same result
EDIT 2: This issue was solved by updating the EMR to emr-6.4.0. I have no great insight into the issue beyond what is mentioned here.

I think your metadata database has been corrupted or has bad data in it. I would take a backup first, and then see if you can restore a previous backup. I would also connect to the database directly and look at those tables to see if anything looks out of the ordinary. If you find a bad table entry, don't delete it directly; try using DROP TABLE commands (via Hive) to remove it, to preserve integrity. If you have to, you can delete entries in the database by hand; you have a backup and could restore the tables.
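If you do inspect the metastore directly, the tables involved are visible in the failing query itself (TBLS and DBS). A read-only sketch of listing what Hive thinks exists (column names as in the stock metastore schema):

-- run against the metastore database, not through Hive
SELECT A0.TBL_ID, A0.TBL_NAME, B0.`NAME` AS DB_NAME
FROM TBLS A0 LEFT OUTER JOIN DBS B0 ON A0.DB_ID = B0.DB_ID;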

The Hive metastore uses DataNucleus, https://www.datanucleus.org/, for all CRUD operations on the metastore database. DataNucleus generates \\ to escape the backslash itself, but with NO_BACKSLASH_ESCAPES set, MariaDB interprets \\ as a two-character string literal rather than a single escaped backslash, so the generated ESCAPE clause is invalid.
You can see the sql_mode settings here: https://mariadb.com/kb/en/sql-mode/#sql_mode-values.
Get rid of NO_BACKSLASH_ESCAPES from the mode and it should be all right.
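A minimal sketch of checking and adjusting this on the MariaDB server backing the metastore (the mode value below is a placeholder; keep whatever your server actually reports, minus NO_BACKSLASH_ESCAPES; on RDS/Aurora you would change it via the parameter group instead):

-- check the current mode
SELECT @@GLOBAL.sql_mode;
-- suppose it returned 'NO_BACKSLASH_ESCAPES,STRICT_TRANS_TABLES':
-- set it again without NO_BACKSLASH_ESCAPES (requires sufficient privileges)
SET GLOBAL sql_mode = 'STRICT_TRANS_TABLES';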

Try specifying the schema whose tables you want to see:
show tables in schema_name;

Related

#table in FROM clause

My manager (not currently working in an IT environment) sent me a chunk of code to run some data, but one line confuses me. For some context, this is Oracle SQL.
She has a line set as "FROM ma1 #proddb m1".
I'm not sure what it does or is even trying to achieve, and it's hard for me to visualize since I do not have access to the database itself.
What's the proper syntax for that line, as that's where I'm currently getting errors? Thank you!
# indicates a database link
The syntax is:
FROM table_name#dblink table_alias
So for you:
ma1 is the name of the table/view/materialized view.
proddb is the name of the database link.
m1 is the table alias.
The only thing wrong with your syntax is the space character between ma1 and #proddb. Remove it and it should work, assuming that the database link and the remote table/view/materialized view exist.
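Putting that together, the corrected line would look like this (a sketch following this answer's reading; some_column is a placeholder, and it assumes the link and the remote object actually exist):

SELECT m1.some_column
FROM ma1#proddb m1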
"FROM ma1#proddb m1" -> ma1 is a table from a different DB, with #my_remote_DB you have access to objects on a remote database. In your case proddb is your remote database link.

Deleting rows in BigQuery fails with "Invalid schema update"

I'm trying to delete some rows from a BigQuery table (using standard SQL dialect):
DELETE FROM ocds.releases
WHERE
ocid LIKE 'ocds-b5fd17-%'
However, I get the following error:
Query Failed
Error: Invalid schema update. Field packageInfo has changed mode from REQUIRED to NULLABLE
Job ID: ocds-172716:bquijob_2f60927_15d13c97149
It seems as though BigQuery doesn't like deleting rows from a table with a REQUIRED column. Is there any way around this?
It has been a known limitation that BigQuery DML doesn't work with tables with required fields (see https://cloud.google.com/bigquery/docs/reference/standard-sql/data-manipulation-language#known_issues).
We are in the process of removing this limitation. We whitelisted your project today. Please try running your query again in the same project. Let us know if the problem is still there, or if you want to have more projects whitelisted.
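For anyone who hit this before the limitation was removed, one possible workaround (a sketch only; it assumes rewriting the table is acceptable, releases_copy is a made-up name, and it relies on the fact that columns in a table created from query results default to NULLABLE) was to materialize a copy and delete from that:

CREATE OR REPLACE TABLE ocds.releases_copy AS
SELECT * FROM ocds.releases;  -- result columns are NULLABLE

DELETE FROM ocds.releases_copy
WHERE ocid LIKE 'ocds-b5fd17-%';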

Pentaho Kettle: Below delete doesn't seem to work in SQL script

I've tried to execute the delete below through a SQL script in a Pentaho job, and I get the error Unknown table 'a' in MULTI DELETE. Can somebody throw light on this? Is there any other way
to go around this?
DELETE a.* FROM pm_report.PM_CONCERTS_GQV_REPORT_TEST a
WHERE EXISTS
(SELECT 1 FROM pm_report.PM_CONCERTS_GQV_REPORT_TEST_3 b WHERE b.TM_EVENT_ID=a.TM_EVENT_ID
GROUP BY b.TM_EVENT_ID)
This is MySQL, right?
See similar solutions here, which recommend removing the table alias.
Worth noting this has nothing to do with Pentaho; if you ran it in a SQL client you'd get the same error. If you don't, the difference is probably in the JDBC driver version, which may be worth checking.
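A sketch of the same statement with the alias removed (assuming plain MySQL syntax; the GROUP BY inside the EXISTS was redundant and is dropped here):

DELETE FROM pm_report.PM_CONCERTS_GQV_REPORT_TEST
WHERE EXISTS
  (SELECT 1 FROM pm_report.PM_CONCERTS_GQV_REPORT_TEST_3 b
   WHERE b.TM_EVENT_ID = pm_report.PM_CONCERTS_GQV_REPORT_TEST.TM_EVENT_ID);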
I can suggest these options:
Don't use aliases; try the statement directly on your MySQL server and check whether it works for you.
Don't use Pentaho like this: make a transformation and break the query apart into steps,
with a Table Input and a Lookup, then delete the rows by row id.
It's a little bit longer, but a lot more understandable and easier to maintain.
"Don't over-optimize."

How can I programmatically run arbitrary SQL statements against my Hibernate/HSQL database?

I'm looking for a way to programmatically execute arbitrary SQL commands against my DB.
(Hibernate, JPA, HSQL)
EntityManager.createNativeQuery() doesn't work for things like CREATE TABLE.
After doing LOTS of searching, I thought I could use Hibernate's Session.doWork().
Going through the deprecated Configuration.buildSessionFactory() seems to show that doWork() won't work:
I get "user lacks privilege or object not found" for all the CREATE TABLE statements.
So, what other technique is there for executing arbitrary SQL statements?
There were some notes on using the underlying JDBC Statement, but I haven't figured out how to get a JDBC Connection object from Hibernate to try that.
Note that the hibernate.hbm2ddl.auto=create setting will NOT work for me, as I have ARRAY[] columns which it chokes on.
I don't think there is any problem executing a CREATE TABLE statement with a Hibernate native query. Just make sure to use Query.executeUpdate(), and not Query.list() or Query.uniqueResult().
If it doesn't work, please tell us what happens when you execute it, and include the full stack trace of the exception and the SQL query you're executing.
"use lacks privilege or object not found" in HSQL may mean anything, for example existence of a table with the same name. Error messages in HSQL are completely misleading. Try listing your tables using DatabaseMetadata - you have probably already created the table.

copying a table from one database to another

I am trying to archive some of my tables into another database on the same server. However the INSERT INTO...SELECT...FROM gives me an error (SQLSTATE=42704) on build. The table exists in the second database.
Can anyone help with this?
It's not clear from your question what version of DB2 is being used. I'll presume that it's the Linux, Unix & Windows version. You look to be using federation to link the two databases.
Does the SELECT part of your query work from LS2DB001? It's worth trying to pin down which database you have the issue with.
Presuming that the problem is on LS2DB001, if the user you have defined the federated link with has permissions on the base tables in the query, check also that they have permissions on the system catalog tables. If not, the query cannot be parsed and validated when you run it.
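For example, something like this (a sketch; the connection and table names are taken from the script posted later in this thread):

CONNECT TO LS2DB001;
SELECT COUNT(*) FROM LS2USER.TIN_TRIGGER_OUT;
CONNECT RESET;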
We've cracked it! The following script works, and the LOAD runs without having to COMMIT between batches of copied rows (the 'Transaction log full...' error is also solved):
CONNECT TO LS2DB001;
EXPORT TO "C:\temp\TIN_TRIGGER_OUT.IXF" OF IXF
MESSAGES "C:\temp\TIN_TRIGGER_OUT.EXM"
SELECT * FROM LS2USER.TIN_TRIGGER_OUT;
CONNECT RESET;
CONNECT TO LQIFCOLD;
LOAD FROM "C:\temp\TIN_TRIGGER_OUT.IXF" OF IXF
MESSAGES "C:\temp\TIN_TRIGGER_OUT.IMM"
INSERT INTO LS2USER.TIN_TRIGGER_OUT COPY NO INDEXING MODE AUTOSELECT;
COMMIT;
CONNECT RESET;
I found this on http://www.connx.com/products/connx/Connx%208.6%20UserGuide/CONNXCDD32D/DB2_SQL_States.htm:
42704 Undefined object or constraint name. Revise SQL syntax and retry.
For more help, try to be more specific, e.g. paste the full SQL statement, the table schema, etc.
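One common trigger for 42704 (an assumption on my part, not something from the original post) is an unqualified name resolving against the wrong schema, so it is worth fully qualifying the object:

-- relies on CURRENT SCHEMA and may not resolve:
SELECT * FROM TIN_TRIGGER_OUT;
-- fully qualified:
SELECT * FROM LS2USER.TIN_TRIGGER_OUT;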
You can generate the INSERT statements from a query, e.g.:
SELECT 'INSERT INTO tblxxxx (fld1, fld2, ...) VALUES (' || fld1 || ',' || fld2 || ... || ')'
FROM tblxxxxxx
then copy the result as a text script and execute it in the other DB.
The best way to do this would be to create a custom script. Depending on the size of the tables (how many records), you could either select all of the data into memory and then roll over it, inserting into a copy of the table that you create first, or you could export the data to a CSV file or some other text-based file and then roll over that to insert the data into the other table.
If you do not already have formal backup procedures that could do this, this would be your best bet.
Note: some DB2 databases, such as those on an iSeries, do not actually have "databases"; they have libraries. With the right user profile you can access two libraries at the same time, joining tables from them together, or doing a
create table library/newFilename as
(select * from originallibrary/originalfilename) with data
But this only applies to the iSeries, I believe.
I'm writing this response as another answer so I have more space.
I can only suggest breaking the steps down into their components and working through them to see where the error is occurring. Again, I'm assuming you're using federation:
a) In your FROM db, connecting as the user you're using for the federated link, does your select work?
b) In your TO db, using the link, does the select work?
c) In your TO db, using the link via a stored proc, does the select work?
d) In your TO db, using an INSERT...values(x,y,z), can you insert into the table?
e) In your TO db, via a stored proc, using INSERT...values(x,y,z), can you insert?
Without more information, this is the best line of attack I can suggest.
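As a concrete shape for the federated copy itself, something like the following (a sketch only; the federated server name LS2DB001 and the nickname are assumptions, and your federated objects will differ):

-- In the TO db, create a nickname over the remote table...
CREATE NICKNAME LS2USER.TIN_TRIGGER_OUT_NK
  FOR LS2DB001.LS2USER.TIN_TRIGGER_OUT;
-- ...then the archive copy is a plain INSERT...SELECT
INSERT INTO LS2USER.TIN_TRIGGER_OUT
  SELECT * FROM LS2USER.TIN_TRIGGER_OUT_NK;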