Access Denied on deleting a table in Trino Hive connector

I'm using Trino with a Hive connector, over which I created a schema.
I'm trying to delete tables inside the schema I created, and I get the error:
Access Denied: Cannot drop table
I've tried adding 'allow-drop-table=true', but it only grants me access to delete tables in the default schema.
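For what it's worth, those drop flags live in the Hive catalog's properties file in Trino. A minimal sketch of the relevant lines, assuming the catalog file is etc/catalog/hive.properties (the file name and which of the two options applies depend on the Trino version in use):

# etc/catalog/hive.properties -- keep the existing connector.name and metastore settings
# Option 1: disable access checks for this catalog entirely
hive.security=allow-all
# Option 2: when the older hive.security=legacy mode is in use, enable the per-operation flag
# hive.allow-drop-table=true

Trino typically needs a restart to pick up changes to catalog properties.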

Related

Getting a Databricks drop schema error for delta table

I have a Delta table schema that needs new columns/changed data types (usually I do this on non-Delta tables and those work fine).
I have already dropped the existing Delta table, and when I tried dropping the schema I got a 'v1 session catalog' error.
I am currently using SQL on a 10.4 LTS cluster with Spark 3.2.1 and Scala 2.12 (I can't change these compute settings); the driver and workers are Standard E_v4.
What I already did, and it worked as usual:
drop table if exists dbname.tablename;
What I wanted to do next:
drop schema if exists dbname.tablename;
The error I got instead:
Error in SQL statement: AnalysisException: Nested databases are not supported by v1 session catalog: dbname.tablename
When I try recreating the schema in the same location I get the error:
AnalysisException: The specified schema does not match the existing schema at dbfs:locationOfMy/table
... Differences
-Specified schema has additional fields newColNameIAdded, anotherNewColIAdded
-Specified type for myOldCol is different from existing schema ...
If your intention is to keep the existing schema, you can omit the
schema from the create table command. Otherwise please ensure that
the schema matches.
How can I drop the schema and re-register it in the same location, with the same name and the new definitions?
Answering a month later since I didn't get replies and found the right solution:
Delta files have leftover partitions and logs that cannot be removed by the drop commands. I had to manually delete the logs, depending on where my location was.
Try this:
dbutils.fs.rm(path, True)
Use the path of your schema.
Then create your table again.
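Putting the answer together, the whole sequence in a Databricks notebook looks roughly like this (a sketch; the path, table name, and column types are placeholders to replace with your own):

path = "dbfs:/locationOfMy/table"  # placeholder location of the old table

# 1. Drop the old table registration (this part worked as usual)
spark.sql("DROP TABLE IF EXISTS dbname.tablename")

# 2. Remove the leftover Delta log and partition files at that location
dbutils.fs.rm(path, True)

# 3. Re-create the table at the same location with the new definitions
spark.sql("""
    CREATE TABLE dbname.tablename (
        myOldCol STRING,              -- adjust types to the new definitions
        newColNameIAdded STRING,
        anotherNewColIAdded STRING
    )
    USING DELTA
    LOCATION 'dbfs:/locationOfMy/table'
""")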

Unable to delete a DataBricks table: Container xxx in account yyy.blob.core.windows.net not found

I have a series of parquet files in different folders on an Azure Storage Account Container.
I can expose them all as SQL tables with a command like:
create table table_name
using parquet
location 'wasbs://mycontainer@mystorageaccount.blob.core.windows.net/folder_or_parquet_files'
And all is fine. However, when I want to drop them all, they all drop except one, which gives me:
Error in SQL statement: AnalysisException: org.apache.hadoop.hive.ql.metadata.HiveException:
MetaException(message:Got exception: shaded.databricks.org.apache.hadoop.fs.azure.AzureException
shaded.databricks.org.apache.hadoop.fs.azure.AzureException: Container mycontainer in account
mystorageaccount.blob.core.windows.net not found,and we can't create it using
anoynomous credentials, and no credentials found for them in the configuration.)
Obviously mystorageaccount and mycontainer are placeholders for my real values, and creating/dropping other folders of parquet files in that container/storage account works fine.
It's just this one table that seems a little messed up.
How can I get rid of this broken table, please?
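Two things worth trying, sketched below with placeholder table, account, and key names (this is not from the original thread): first, check which location the broken table is actually registered with, since a mistyped container or account name in the metastore would produce exactly this error; second, since the message says no credentials were found in the configuration, supply the account key for that account and retry the drop.

# Inspect the location stored in the metastore for the broken table
spark.sql("SHOW CREATE TABLE table_name").show(truncate=False)

# If that location points at an account the cluster has no credentials for,
# provide the account key for this session and retry the drop
# (account name and key value are placeholders)
spark.conf.set(
    "fs.azure.account.key.mystorageaccount.blob.core.windows.net",
    "<storage-account-access-key>"
)
spark.sql("DROP TABLE IF EXISTS table_name")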

Not able to drop Hive table pointing to Azure Storage account that no longer exists

I am using Hive based on an HDInsight Hadoop cluster -- Hadoop 2.7 (HDI 3.6).
We have some old Hive tables that point to storage accounts that don't exist any more, but the tables still reference those storage locations; basically, the Hive Metastore still contains references to the deleted storage accounts. If I try to drop such a Hive table, I get an error:
FAILED: Execution Error, return code 1 from org.apache.hadoop.hive.ql.exec.DDLTask. MetaException(message:Got exception: org.apache.hadoop.fs.azure.AzureException org.apache.hadoop.fs.azure.AzureException: No credentials found for account <deletedstorage>.blob.core.windows.net in the configuration, and its container data is not accessible using anonymous credentials. Please check if the container exists first. If it is not publicly available, you have to provide account credentials.)
Manipulating the Hive Metastore directly is risky, as it could land the Metastore in an invalid state.
Is there any way to get rid of these orphan tables?
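One workaround that avoids editing the metastore tables by hand, sketched here with placeholder names and without a guarantee that every Hive version accepts it: mark the table as external so that DROP TABLE only removes metadata, repoint it at a storage account that still exists, and then drop it.

-- Make the table external so dropping it does not try to delete the data
ALTER TABLE orphan_table SET TBLPROPERTIES ('EXTERNAL'='TRUE');

-- Repoint the table at a location on a storage account that still exists
-- (container, account, and path are placeholders)
ALTER TABLE orphan_table SET LOCATION 'wasbs://container@existingaccount.blob.core.windows.net/tmp/orphan_table';

-- The drop no longer needs to resolve the deleted account
DROP TABLE orphan_table;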

cannot create a view in redshift spectrum external schema

I am facing an issue creating a view in an external schema on top of a Spectrum external table. Below is the script I am using to create the view:
create or replace view external_schema.test_view as
select id, name from external_schema.external_table with no schema binding;
I'm getting the error below:
ERROR: Operations on local objects in external schema are not enabled.
Please help with creating a view on top of a Spectrum external table.
External tables are created in an external schema. An Amazon Redshift external schema references a database in an external Data Catalog in AWS Glue or Amazon Athena, or a database in a Hive metastore, such as Amazon EMR.
External schemas are not stored in the Redshift cluster; they are looked up from their sources. External tables are also read-only for the same reason.
As a result, you will not be able to create a view in a schema that is not stored in the cluster. You can create a view on top of external tables (using the WITH NO SCHEMA BINDING clause), but the view must reside in a schema local to Redshift.
TL;DR Redshift doesn’t support creating views in external schemas yet, so the view can only reside in a schema local to Redshift.
Replace external_schema with internal_schema as follows:
create or replace view internal_schema.test_view as
select id, name from external_schema.external_table with no schema binding;

Redshift Spectrum and Hive Metastore - Ambiguous Error

From Redshift, I created an external schema using the Hive Metastore. I can see the Redshift metadata about the tables (for example with: select * from SVV_EXTERNAL_TABLES); however, when querying one of these tables, I get an ambiguous error: "error: Assert".
I tried creating the external schema and querying the tables. I can query the metadata about the tables, but cannot actually query the tables themselves.
I created the external schema as follows:
create external schema hive_schema
from hive metastore
database 'my_database_name'
uri 'my_ip_address' port 9083
iam_role 'arn:aws:iam::123456789:role/my_role_name';
Here is the error message when running "select * from hive_schema.my_table_name;"
-----------------------------------------------
error: Assert
code: 1000
context: loc->length() > 5 && loc->substr(0, 5) == "s3://" -
query: 1764
location: scan_range_manager.cpp:221
process: padbmaster [pid=26902]
-----------------------------------------------
What is the LOCATION of your Hive table? It seems Redshift is asserting that the location starts with s3://.
You can see the LOCATIONs of your tables by running this query:
select location from SVV_EXTERNAL_TABLES
Where are your Hive tables stored? Is it maybe HDFS? I doubt that Redshift supports locations other than S3 - in the section Considerations When Using AWS Glue Data Catalog of the AWS guide, they describe how to set up your Hive Metastore to store data in S3.
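If the data does turn out to live on HDFS rather than S3, one way forward (a sketch with placeholder bucket and table names, assuming the files can be copied to S3 first, for example with distcp) is to repoint the Hive table at the S3 copy, since the assert expects the stored location to begin with the literal s3:// scheme:

-- Run in Hive after the files have been copied from HDFS to S3
ALTER TABLE my_database_name.my_table_name
    SET LOCATION 's3://my-bucket/path/to/table/';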