Change location of hive managed table after creation - hive

I have created a managed table. Now I want to change the default location of the table to a non-HDFS location.
I tried the command below, but it is not working:
ALTER TABLE my_table SET LOCATION "/path/to/file"
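One likely cause: Hive's ALTER TABLE ... SET LOCATION expects a fully qualified URI, not a bare path. A minimal sketch, assuming hypothetical namenode and bucket names:

```sql
-- For an HDFS location (hypothetical namenode address):
ALTER TABLE my_table SET LOCATION 'hdfs://namenode:8020/path/to/dir';

-- For a non-HDFS location, e.g. S3, if the cluster is configured for it
-- (hypothetical bucket name):
ALTER TABLE my_table SET LOCATION 's3a://my-bucket/path/to/dir';
```

Note that this changes only the metastore pointer; existing data files are not moved automatically.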

Related

What happens if I don't define a specific schema in the database?

If a specific schema is not defined in a database, where are the database objects going to be stored? Is that a good or a bad thing? Why?
Quote from the manual:
In the previous sections we created tables without specifying any schema names. By default such tables (and other objects) are automatically put into a schema named “public”. Every new database contains such a schema.
If no schema is defined when creating a table, the first (existing) schema that is found in the schema search path will be used to store the table.
In psql
create database sch_test;
CREATE DATABASE
\c sch_test
You are now connected to database "sch_test" as user "postgres".
--Show available schemas
\dn
List of schemas
Name | Owner
--------+----------
public | postgres
drop schema public ;
DROP SCHEMA
\dn
List of schemas
Name | Owner
------+-------
(0 rows)
show search_path ;
search_path
-----------------
"$user", public
create table tbl(id integer);
ERROR: no schema has been selected to create in
LINE 1: create table tbl(id integer);
create table test.tbl(id integer);
ERROR: schema "test" does not exist
LINE 1: create table test.tbl(id integer);
Just to show that an object may not be created if a schema does not exist. The bottom line is that an object (table, function, etc.) needs to be created in a schema. If there is none available for search_path to find, or you specifically point at one that does not exist, the object creation will fail.
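Continuing the psql session above, either recreating the public schema or creating the explicitly referenced schema resolves both errors. A sketch:

```sql
-- Recreate public so the search_path can resolve again:
create schema public;
create table tbl(id integer);        -- lands in public via "$user", public

-- Or create and target a specific schema explicitly:
create schema test;
create table test.tbl2(id integer);  -- lands in test
```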

create view in redshift particular schema

I've created an external schema in my redshift database using the script below:
create external schema exampleschema
from data catalog database 'examplesource'
iam_role 'arn:aws:iam::627xxxxx:role/dxxxx'
region 'us-west-2'
CREATE EXTERNAL DATABASE IF NOT EXISTS;
I'm now trying to create a view in that exampleschema schema using the script below, but I seem to only be able to create views in the "public" schema. How do I create a view in the exampleschema schema?
create view vw_ticket as select * from exampleschema.ticket
with no schema binding;
You need to specify the schema in the CREATE statement i.e. create view exampleschema.vw_ticket ...
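Applying that to the statement in the question, a sketch (note: if exampleschema is the external schema itself, Redshift may not allow creating local objects in it, in which case the view needs to live in a regular local schema):

```sql
-- Schema-qualify the view name so it is not created in the first
-- schema on the search_path (usually public):
create view exampleschema.vw_ticket as
select * from exampleschema.ticket
with no schema binding;
```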

Can I alias an external table in redshift to remove the schema name?

We want to migrate tables to Spectrum, which requires defining an external schema
create external schema spectrum
from data catalog
database 'spectrumdb'
iam_role 'my_iam_role'
create external database if not exists;
I created an external table in Redshift like this:
create external table spectrum.my_table(
id bigint,
accountId bigint,
state varchar(65535)
) stored as parquet
location 's3://some_bucket/my_table_files';
Is it possible to alias the table such that when querying it, I can call it my_table_alias instead of spectrum.my_table? Basically, we want to make the change to external tables opaque to clients of our Redshift instance (this means we can't change the table names). Thanks so much for your help!
Redshift does not support table aliases; your best option is to create a view.
You need to use WITH NO SCHEMA BINDING option while creating the view since the view is on an external table.
If you prefer not to specify schema names, or you have a requirement like this, create the view(s) in the public schema or set the user's default schema to the schema where the views are:
alter user .. set search_path to ..
Additional benefits of using a view to access an external table are that you can:
rename columns to be more user friendly
add or remove columns with the view definition
change data types and/or date/time formats
change the name/structure of the external table without affecting user access
Let me know if this answers your question.
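Putting the pieces together, a minimal sketch using the column list from the table definition above:

```sql
-- A late-binding view in the public schema, exposing spectrum.my_table
-- under the simpler name my_table_alias:
create view public.my_table_alias as
select id, accountId, state
from spectrum.my_table
with no schema binding;

-- Clients can then query it without the schema prefix
-- (assuming public is on their search_path):
-- select * from my_table_alias;
```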

change default location when creating an external table in hive

I would like to create an external table in hive from a view and change the default location:
CREATE external TABLE market.resultats like v_ca_mag
LOCATION '/user/training/market/db/resultats';
The table is created and is external but the location is the default one /user/hive/warehouse/market.db/resultats.
Why is the location not taken into account?
I am using cdh 5.4.
It's probably a bug; please open a JIRA for this issue.
As a workaround, once you are done creating the external table, execute an ALTER TABLE statement to change the location of your newly created table to the desired location:
hive> CREATE external TABLE market.resultats like v_ca_mag;
hive> alter table market.resultats set location 'hdfs://nnaddress/user/training/market/db/resultats';

How to change hive table owner

I have a hive table 'sample' created with owner as 'X'
hive> show table extended like 'sample';
--shows owner as 'X'
Is there a way I can change the owner to some other user 'Y' without recreating the table? (I don't want to lose the data.)
A known option is to update the owner directly in the postgres hive metastore table.
hive=# update "TBLS" set "OWNER" = 'Y' where "OWNER" = 'X' and "TBL_NAME" = 'sample';
Is this safe?
As far as I know, there is no other way to do it besides a direct change in the metastore DB. I have needed to do this a couple of times, and there were no issues.
I just experienced this issue and can share my resolution notes, to add color to what a "direct change in the metastore DB" looked like for me. In our configuration, we're using Presto, which connects to Hive. Tables are created in Hive as whatever user Presto connects as (via the --user flag on the Presto CLI).
We were getting an error message such as:
Access Denied: Cannot drop table SCHEMA.TABLE_NAME: Owner of the table is
different from session user
I can see the users of the table by executing the following query on the Hive Metastore:
select t.OWNER, p.PRINCIPAL_NAME, count(1)
from TBLS t
join TBL_PRIVS p on p.TBL_ID=t.TBL_ID
group by t.OWNER, p.PRINCIPAL_NAME;
And then, I can update the tables as needed by executing:
update TBLS set OWNER='NEW_OWNER' where OWNER='OLD_OWNER';
update TBL_PRIVS set PRINCIPAL_NAME='NEW_OWNER' where PRINCIPAL_NAME='OLD_OWNER';
NOTE: You should run this in a transaction and make sure your Metastore is backed up first.
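Following that note, a sketch of the updates wrapped in a transaction (Postgres syntax; whether identifiers need the quoted form "TBLS"/"OWNER" shown earlier depends on how your metastore schema was created):

```sql
BEGIN;
update "TBLS" set "OWNER" = 'NEW_OWNER'
  where "OWNER" = 'OLD_OWNER';
update "TBL_PRIVS" set "PRINCIPAL_NAME" = 'NEW_OWNER'
  where "PRINCIPAL_NAME" = 'OLD_OWNER';
-- Inspect the affected row counts here; issue ROLLBACK instead of
-- COMMIT if they look wrong.
COMMIT;
```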