I've been trying to rename a table from "fund performance" to fund_performance in SQL Workbench for a Redshift database. The commands I have tried are:
alter table schemaname."fund performance"
rename to fund_performance;
I received a message that the command executed successfully, and yet the table name did not change.
I then tried copying the table to rename it that way. I used
-- CREATE TABLE fund_performance LIKE "schema_name.fund performance";
CREATE TABLE fund_performance AS SELECT * FROM schema_name."fund performance";
In both these cases I also received a message that the statements executed successfully, but nothing changed. Does anyone have any ideas?
Try the following; it may work for you:
SELECT * INTO schema_name.fund_performance FROM schema_name."fund performance";
It will copy the data by creating a new table named fund_performance, but it won't carry over any constraints or identity columns.
To rename the table in place without disturbing existing constraints, use Redshift's rename syntax, double-quoting the old name because it contains a space:
ALTER TABLE schema_name."fund performance" RENAME TO fund_performance;
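If the rename reports success but the old name still appears, it's worth checking which schema and database you are actually connected to; a minimal catalog check, assuming only the names from the question:

-- List any table whose name resembles the one being renamed,
-- along with the schema it actually lives in (Redshift supports ILIKE).
SELECT table_schema, table_name
FROM information_schema.tables
WHERE table_name ILIKE '%fund%performance%';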
I run a query on Databricks:
DROP TABLE IF EXISTS dublicates_hotels;
CREATE TABLE IF NOT EXISTS dublicates_hotels
...
I'm trying to understand why I receive the following error:
Error in SQL statement: AnalysisException: Cannot create table ('default.dublicates_hotels'). The associated location ('dbfs:/user/hive/warehouse/dublicates_hotels') is not empty but it's not a Delta table
I already found a way to solve it (by removing the directory manually):
dbutils.fs.rm('.../dublicates_hotels', recurse=True)
But I can't understand why the table is still being kept. This happens even though I created a new cluster (and terminated the previous one) and I'm running the query with the new cluster attached.
Can anyone help me understand this?
I also faced a similar problem; using CREATE OR REPLACE TABLE instead solved it for me.
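A minimal sketch of that approach, reusing the table name from the question (the SELECT body here is a placeholder, not the original query):

-- As reported above, CREATE OR REPLACE TABLE replaces both the metastore
-- entry and the directory contents, avoiding the "location is not empty" error.
CREATE OR REPLACE TABLE dublicates_hotels
USING DELTA
AS SELECT 1 AS id, 'example' AS name;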
DROP TABLE and CREATE TABLE work with entries in the metastore, which is a database that keeps metadata about databases and tables. There can be a situation where no entry exists in the metastore, so DROP TABLE IF EXISTS does nothing. But when CREATE TABLE is executed, it additionally checks the location on DBFS and fails if the directory already exists (possibly with data in it). Such a directory can be left over from previous experiments where data was written without going through the metastore.
If the table was created with a LOCATION specified, it is an EXTERNAL table, so when you drop it you drop only the Hive metadata for that table; the directory contents remain as they are. You can restore the table with CREATE TABLE if you specify the same LOCATION (Delta keeps the table structure along with its data in the directory).
If no LOCATION was specified when the table was created, it is a MANAGED table, and DROP destroys both the metadata and the directory contents.
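A small illustration of the difference; the table names and the /mnt/data path are placeholders:

-- EXTERNAL: created with an explicit LOCATION. DROP removes only the
-- metadata; the files under the location survive, and the table can be
-- re-registered later with CREATE TABLE ... LOCATION '/mnt/data/ext_hotels'.
CREATE TABLE ext_hotels USING DELTA LOCATION '/mnt/data/ext_hotels'
AS SELECT 1 AS id;
DROP TABLE ext_hotels;

-- MANAGED: no LOCATION given. DROP removes the metastore entry AND the
-- directory under dbfs:/user/hive/warehouse/.
CREATE TABLE managed_hotels USING DELTA AS SELECT 1 AS id;
DROP TABLE managed_hotels;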
I am creating a managed table via Impala as follows:
CREATE TABLE IF NOT EXISTS table_name
STORED AS parquet
TBLPROPERTIES ('transactional'='false', 'insert_only'='false')
AS ...
This should result in a managed table which does not support HIVE-ACID.
However, when I run the command I still end up with an external table.
Why is this?
I found out in the Cloudera documentation that omitting the EXTERNAL keyword when creating the table does not mean the table will definitely be managed:
When you use EXTERNAL keyword in the CREATE TABLE statement, HMS stores the table as an external table. When you omit the EXTERNAL keyword and create a managed table, or ingest a managed table, HMS might translate the table into an external table or the table creation can fail, depending on the table properties.
Thus, setting transactional=false and insert_only=false leads to an external table in the Hive Metastore's interpretation.
Interestingly, setting only TBLPROPERTIES ('transactional'='false') is completely ignored and still results in a managed table with transactional=true.
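One way to verify what HMS actually created is to inspect the table; this works in both Impala and Hive, using the table name from the question:

-- "Table Type" shows MANAGED_TABLE or EXTERNAL_TABLE, and the Table
-- Parameters section shows the effective 'transactional' setting.
DESCRIBE FORMATTED table_name;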
I created two external tables in Hive. For the first table I specified the data location in the CREATE statement; for the second I loaded data after creating it.
I can see the data file created for the second table in the /hive/warehouse/ directory. I then set "external.table.purge"="true" for both tables and dropped both of them, but the data files of both tables remain as they were.
What is the behaviour of 'external.table.purge'='true'? Shouldn't it delete the data files as well when DROP is issued?
If Hive does not take any ownership of an external table's data files, why does an option like 'external.table.purge'='true' even exist?
I read in one thread that it is possible to delete the data of external tables as well via ALTER TABLE ... SET TBLPROPERTIES('external.table.purge'='true'), but I am unable to find that post again.
You cannot drop the data of an external table, but you can for internal (managed) tables. So convert the table to internal and then drop it.
First change the external property to false:
hive> ALTER TABLE nyse_external SET TBLPROPERTIES('EXTERNAL'='False');
and then you can easily drop it.
hive> drop table nyse_external;
TBLPROPERTIES ("external.table.purge"="true") should work for Hive version 4.x+.
Answer to the first question:
The table property "external.table.purge", if set to true (and if the table is an external table), lets Hive know to delete the table data when the table is dropped. This feature was introduced in this Apache JIRA:
https://issues.apache.org/jira/browse/HIVE-19981
For reference on how to set the property, take a look at this example:
https://docs.cloudera.com/runtime/7.2.7/using-hiveql/topics/hive_drop_external_table_data.html
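Putting it together, the usage referenced above would look like this, reusing the table name from the earlier example:

-- Mark the external table so that DROP also purges its data files
-- (per HIVE-19981; requires a Hive version that supports the property).
ALTER TABLE nyse_external SET TBLPROPERTIES ('external.table.purge'='true');

-- Now DROP removes the metadata and deletes the data files as well.
DROP TABLE nyse_external;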
I have a problem understanding the real meaning behind this Apache Hive code. Can someone please explain to me whether this code is really doing anything?
ALTER TABLE a RENAME TO a_tmp;
DROP TABLE a;
CREATE TABLE a AS SELECT * FROM a_tmp;
ALTER TABLE a RENAME TO a_tmp;
This simply allows you to rename your table a to a_tmp.
Let's say your table a initially points to /user/hive/warehouse/a. After executing this command, your data is moved to /user/hive/warehouse/a_tmp and the contents of /user/hive/warehouse/a no longer exist. Note that this behavior of moving HDFS directories only exists in more recent versions of Hive; before that, the RENAME command only updated the metastore and did not move directories in HDFS.
Similarly, if you run SHOW TABLES afterwards, you will see that a no longer exists but a_tmp does. You can no longer query a at that point because it is no longer registered in the metastore.
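If you want to observe the move, one way (from the Hive CLI, assuming the warehouse path above) is:

-- The Location field of the renamed table now ends in .../a_tmp,
-- and the old .../a directory is gone.
DESCRIBE FORMATTED a_tmp;
dfs -ls /user/hive/warehouse/;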
DROP TABLE a;
This does basically nothing, because you already renamed a to a_tmp, so a no longer exists in the metastore. It will still print "OK" because there is nothing to do.
CREATE TABLE a AS SELECT * FROM a_tmp;
You are asking to create a brand-new table called a, register it in the metastore, and populate it with the same data that is in a_tmp (which you already copied from a before).
In short, you move your initial table to a new name and then copy it back under the original name, so the only thing these queries do is duplicate your initial data into both a and a_tmp.
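A compact way to see the net effect, with the observable state shown as comments (illustrative):

SHOW TABLES;                           -- a
ALTER TABLE a RENAME TO a_tmp;
SHOW TABLES;                           -- a_tmp (directory moved to .../a_tmp)
DROP TABLE a;                          -- no-op: a is already gone, still prints OK
CREATE TABLE a AS SELECT * FROM a_tmp;
SHOW TABLES;                           -- a, a_tmp (the same data twice)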
Is it possible to copy a table (with definition, constraints, identity) to a new table?
Generate a CREATE script based on the table
Modify the script to use a different table name
Perform an INSERT selecting everything from the source table (see the sketch below)
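That sequence might look like this in SQL Server; all names and columns here are placeholders, not taken from a real table:

-- Step 2: the generated CREATE script, edited to use the new name.
CREATE TABLE dbo.NewTable (
    Id   INT IDENTITY(1,1) NOT NULL,
    Name NVARCHAR(100)     NULL,
    CONSTRAINT PK_NewTable PRIMARY KEY (Id)
);

-- Step 3: copy the rows; IDENTITY_INSERT keeps the original identity values.
SET IDENTITY_INSERT dbo.NewTable ON;
INSERT INTO dbo.NewTable (Id, Name)
SELECT Id, Name FROM dbo.OldTable;
SET IDENTITY_INSERT dbo.NewTable OFF;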
No, not really, you have to script it out, then change the names
You can do this:
SELECT * INTO NewTable
FROM OldTable
WHERE 1 = 2 -- if you only want the table structure without data
but it won't copy any constraints.
It's not the most elegant solution, but you could use a tool like the free Database Publishing Wizard from Microsoft.
It creates a SQL script of the table definition, including data, indexes, and so on. But you would have to alter the script manually to change the table name.
Another possibility:
I just found this old answer on SO.
That script is an example of scripting the constraints of all tables, but you can easily change it to select only the constraints of your table.
So, you could do the following:
Create the new table with data like SQLMenace said (SELECT * INTO NewTable FROM OldTable)
Add constraints, indexes, and so on by adapting that SQL script (see the sketch below)
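A rough sketch of that combined approach in SQL Server; the constraint, index, and column names are placeholders:

-- Copy structure and data; constraints and indexes do not come along.
SELECT * INTO NewTable FROM OldTable;

-- Re-create constraints and indexes by hand or from the scripted output.
ALTER TABLE NewTable ADD CONSTRAINT PK_NewTable PRIMARY KEY (Id);
CREATE INDEX IX_NewTable_Name ON NewTable (Name);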