How to remove Hive table bucketing

I mistakenly executed the following ALTER statement on the stocks table:
ALTER TABLE stocks
CLUSTERED BY (exchange, symbol)
INTO 48 BUCKETS;
How can I undo this command?

If the table is managed, first make it external:
ALTER TABLE stocks SET TBLPROPERTIES('EXTERNAL'='TRUE');
Describe the table and note its location and SerDe properties, and check that it is EXTERNAL:
describe formatted stocks;
Then drop the table and create it again, specifying the location; the data will remain in place because the table is EXTERNAL:
DROP TABLE stocks;
CREATE EXTERNAL TABLE stocks(
columns definition)
STORED AS TEXTFILE               -- use the same DDL as before
LOCATION '/table_location_path'; -- use the same path from DESCRIBE FORMATTED
Alternatively, you can create a table with another name pointing to the same location, check that it works, make the first table EXTERNAL, drop it, and rename the second table with ALTER TABLE tablename RENAME TO stocks, as sketched below.
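A minimal sketch of that alternative, reusing the placeholders above (the intermediate name stocks_new is illustrative):
CREATE EXTERNAL TABLE stocks_new(
columns definition)                                      -- same DDL as stocks
STORED AS TEXTFILE
LOCATION '/table_location_path';                         -- same path as stocks

SELECT * FROM stocks_new LIMIT 10;                       -- check it works
ALTER TABLE stocks SET TBLPROPERTIES('EXTERNAL'='TRUE'); -- protect the data
DROP TABLE stocks;
ALTER TABLE stocks_new RENAME TO stocks;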

Cannot rename column X which belongs to a clustering key

I have defined a table like this:
create or replace TABLE TEST_TABLE cluster by LINEAR(ARTICLE, ORDER_DATE) (
ORDER_DATE DATE,
ARTICLE VARCHAR(1555),
note VARCHAR(1555)
);
If I try to rename the column ORDER_DATE, I get an error saying it cannot be renamed because it belongs to a clustering key. There is data in this table that I do not want to lose, and it is not convenient to create a new table and copy all the data into it, since there is a lot of data.
Is there any way to temporarily remove the clustering key, rename the column, and add the key again?
Or is there a way to do it in a single statement that renames the column and changes the clustering column name at the same time?
Fastest solution: drop the clustering key, rename the column, and re-define the clustering key:
alter table TEST_TABLE rename column ORDER_DATE to ORDERDATE;
-- Error: Cannot rename column 'ORDER_DATE' which belongs to a clustering key
alter table TEST_TABLE DROP CLUSTERING KEY;
alter table TEST_TABLE rename column ORDER_DATE to ORDERDATE;  -- succeeds now
alter table TEST_TABLE cluster by (ARTICLE, ORDERDATE);
This will not mess up the clustering of your table; Automatic Clustering does not need to recluster the table from scratch.
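If you want to verify the clustering state afterwards, Snowflake's SYSTEM$CLUSTERING_INFORMATION function reports it (the explicit column-list argument is optional):
select system$clustering_information('TEST_TABLE', '(ARTICLE, ORDERDATE)');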
You cannot rename a column that is part of a clustering key. One option is to recreate the table with the new column names, or to create the table without a clustering key and then rename the columns:
create or replace TABLE TEST_TABLE --cluster by LINEAR(ARTICLE, ORDER_DATE)
(
ORDER_DATE DATE,
ARTICLE VARCHAR(1555),
note VARCHAR(1555)
);
alter table TEST_TABLE rename column ARTICLE to ARTICLE_new;
alter table TEST_TABLE rename column ORDER_DATE to ORDER_DATE_new;
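Note that CREATE OR REPLACE recreates the table empty, so this route also means reloading the data. If you want the table clustered again after the renames, the key can be re-added with the new column names, e.g.:
alter table TEST_TABLE cluster by LINEAR(ARTICLE_new, ORDER_DATE_new);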

How to add SUPER column to existing AWS Redshift table?

GOAL
I would like to add a SUPER column to an existing Redshift table.
I need it to store JSON data.
CODE
This is how I would normally add a new column:
ALTER TABLE products
ADD COLUMN created_at timestamp NOT NULL;
What I tried:
CREATE TABLE temp_test_persons (
PersonID int,
LastName varchar(255),
FirstName varchar(255),
Address varchar(255),
City varchar(255)
);
ALTER TABLE temp_test_persons ADD COLUMN newSuperColumn SUPER NOT NULL;
Error running query: ALTER TABLE ADD COLUMN defined as NOT NULL must have a non-null default expression
Reviewed Solutions
Alter column data type in Amazon Redshift
AWS Redshift - Add IDENTITY column to existing table
Adding column to existing tables
Add dynamic column to existing MySQL table?
SQLAlchemy: How to add column to existing table?
adding columns to an existing table
add column to existing table postgres
How to add not null unique column to existing table
Adding columns to existing csv file using super-csv
UCanAccess: Add column to existing table
Adding columns to existing redshift table
How to add column to existing table in laravel?
Add column to existing table in rds
oracle add column to existing table
The solution is to set a default value for the column. Then you can define the column as NOT NULL. Like this:
ALTER TABLE temp_test_persons
ADD COLUMN newSuperColumn SUPER NOT NULL
DEFAULT '';
Alternatively, add the column as nullable, which needs no default:
ALTER TABLE temp_test_persons
ADD COLUMN newSuperColumn SUPER;
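Either way, JSON can then be stored in the SUPER column by converting the text with Redshift's JSON_PARSE function; a minimal sketch against the table above (the sample values are illustrative):
INSERT INTO temp_test_persons (PersonID, newSuperColumn)
VALUES (1, JSON_PARSE('{"city": "Oslo", "zip": "0150"}'));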

Change a normal table to a foreign "cstore_fdw" table

Is it possible to change a normal table into a foreign table in PostgreSQL?
If that is not possible, can I at least copy data from a normal table to a foreign table?
From the cstore_fdw README (https://github.com/citusdata/cstore_fdw):
To load or append data into a cstore table, you have two options:
You can use the COPY command to load or append data from a file, a program, or STDIN.
You can use the INSERT INTO cstore_table SELECT ... syntax to load or append data from another table.
So, following those examples: create the foreign table and insert data into it from your local table, as sketched below.
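A minimal sketch, assuming the cstore_fdw extension is installed and a regular table my_table(id int, payload text) holds the data (all names here are illustrative):
CREATE EXTENSION cstore_fdw;
CREATE SERVER cstore_server FOREIGN DATA WRAPPER cstore_fdw;

CREATE FOREIGN TABLE my_table_cstore (
    id      int,
    payload text
)
SERVER cstore_server
OPTIONS (compression 'pglz');

-- load or append the data from the regular table
INSERT INTO my_table_cstore SELECT * FROM my_table;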

Efficient way to change the table's filegroup

I have around 300 tables that sit on different filegroups, and these tables no longer hold the huge amounts of data they once did. I am running into space issues from time to time, and some valuable space is occupied by the 150 filegroups that were created for these tables. I want to move the tables onto a single filegroup instead of the 150, and then release the space by deleting those filegroups.
FYI: these tables hold no data now, but they have many constraints and indexes defined on them.
Can you please suggest how this can be done efficiently?
To move a table, drop and then re-create its clustered index, specifying the new filegroup. If the table does not have a clustered index, create one on the new filegroup and then drop it; a sketch follows below.
It is best practice not to keep user data on the PRIMARY filegroup: leave that for system objects and put your data on other filegroups. But a lot of people ignore this...
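A minimal sketch of the heap case, with placeholder names (IX_move_tmp is a throwaway index name): creating the clustered index on the new filegroup moves the data pages, and dropping it turns the table back into a heap that stays on that filegroup.
CREATE CLUSTERED INDEX IX_move_tmp ON dbo.<TABLE_NAME>(<COLUMN_NAME>) ON [NEW_FG]
DROP INDEX IX_move_tmp ON dbo.<TABLE_NAME>
-- the table is a heap again, now stored on NEW_FG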
I found some more information on ways to change the filegroup of an existing table:
1- Define a clustered index on each object, specifying NEW_FG (as mentioned in the answer above):
CREATE UNIQUE CLUSTERED INDEX <INDEX_NAME> ON dbo.<TABLE_NAME>(<COLUMN_NAME>) ON [NEW_FG]
2- If we can't define a clustered index, copy the table and data structure to a new table, drop the old table, and rename the new one to the old name, as below.
First change the database's default filegroup to NEW_FG so that every table created with SELECT ... INTO is placed on the new filegroup by default:
ALTER DATABASE <DATABASE> MODIFY FILEGROUP [NEW_FG] DEFAULT
IF OBJECT_ID('table1') IS NOT NULL
BEGIN
    SELECT * INTO table1_bkp FROM table1
    DROP TABLE table1
    EXEC sp_rename 'table1_bkp', 'table1'
END
After all the operations, set the database's default filegroup back as before:
ALTER DATABASE <DATABASE> MODIFY FILEGROUP [PRIMARY] DEFAULT
3- If feasible, drop the table and create it again on NEW_FG:
DROP TABLE table1
CREATE TABLE [table1] (
id int,
name nvarchar(50),
--------
) ON [NEW_FG]
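Once a filegroup no longer contains any objects, the space can be released by removing its files and then the filegroup itself. A minimal sketch, assuming the filegroup OLD_FG has a single logical file named OLD_FG_file (both names are placeholders):
ALTER DATABASE <DATABASE> REMOVE FILE [OLD_FG_file]
ALTER DATABASE <DATABASE> REMOVE FILEGROUP [OLD_FG]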

Need to replace one table with another (same schema, different data) in SQL Server 2014

I made a copy of my table, and after that I ran some commands (insert/delete/update) on the base table, not on the copy. Now I have a problem replacing my table with the copy. SELECT INTO gives me an error, and when I try to drop the table and recreate it from the copy, I get an error saying I can't drop a table with foreign keys. I don't have any other ideas; can somebody help me? :)
You can't drop a table that is referenced by a foreign key, but you can:
Drop the constraints.
Create a temporary table.
Copy your data into the temporary table.
Truncate the initial table.
Copy the temporary table back into the initial table.
Recreate the constraints on the initial table (after the data is back in place, so the constraint check passes).
A sketch of these steps is below.
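A minimal sketch with hypothetical names (assuming matching column lists and no IDENTITY column): mytable is the table being replaced, mytable_copy holds the data you want, and child has a foreign key FK_child_mytable on child.mytable_id referencing mytable(id):
ALTER TABLE child DROP CONSTRAINT FK_child_mytable  -- drop the constraint
SELECT * INTO #tmp FROM mytable_copy                -- create the temp table and copy the data
TRUNCATE TABLE mytable                              -- truncate the initial table
INSERT INTO mytable SELECT * FROM #tmp              -- copy the temp table back
ALTER TABLE child ADD CONSTRAINT FK_child_mytable   -- recreate the constraint
    FOREIGN KEY (mytable_id) REFERENCES mytable (id)
DROP TABLE #tmp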