If I have created a table with VARBINARY(16) and later decide I need to store longer binary data, can I safely alter an existing table? Or do I even need to? Internally, does HSQLDB optimize the storage based on the declared length, and will it need to rewrite all the existing data?
The documentation does not cover this case.
With HSQLDB, all SQL statements for altering tables are safe. With VARBINARY and VARCHAR, you need to execute an ALTER COLUMN statement to increase the maximum length. For example:
ALTER TABLE thetable ALTER COLUMN thecolumn SET DATA TYPE VARBINARY(32)
When increasing the length, the statement executes instantly. When reducing, it will check all the data and will not change the limit if any existing data is longer than the new maximum.
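For completeness, a hedged sketch of the reducing case, reusing the table and column names from the example above: if any stored value is longer than the new limit, the statement fails and the column definition is left unchanged.
-- Shrinking the maximum length makes HSQLDB scan the existing rows first;
-- if any stored value is longer than 8 bytes, this statement is rejected.
ALTER TABLE thetable ALTER COLUMN thecolumn SET DATA TYPE VARBINARY(8)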
http://hsqldb.org/doc/guide/databaseobjects-chapt.html#dbc_table_manupulation
This is an extension of a previous question I asked: Is it possible to change partition metadata in HIVE?
We are exploring the idea of changing the metadata on the table as opposed to performing a CAST operation on the data in SELECT statements. Changing the metadata in the MySQL metastore is easy enough. But is it possible to have that metadata change applied to a column on a partitioned table (the partitions are daily)? Note: the column itself is not the partitioning column. It is a simple ID field that is being changed from STRING to BIGINT.
Otherwise, we might be stuck with current and future data being of type BIGINT while the historical is STRING.
Question: Is it possible to change partition metadata in Hive? If yes, how?
Note: I am asking this as a separate question as the original answer appears to be for a column on a partitioned table that is also the partitioning column. So, I do not want to muddy the waters.
Update:
I ran the ALTER TABLE .. CHANGE COLUMN ... CASCADE command, but I get the following error:
Error while processing statement: FAILED: Execution Error, return code 1 from org.apache.hadoop.hive.ql.exec.DDLTask. Not allowed to alter schema of Avro stored table having external schema. Consider removing avro.schema.literal or avro.schema.url from table properties.
The metadata is stored in a separate Avro file. I can confirm that the updated metadata is in the Avro file, but not in the individual partition file.
Note: The table is stored as EXTERNAL.
You can easily change the column type:
Use ALTER TABLE in Hive to change the type to STRING, etc.:
alter table table_name change column col_name col_name string cascade; --change to string
See documentation.
The ALTER TABLE CHANGE COLUMN command with CASCADE changes the column in the table's metadata and cascades the same change to all the partition metadata.
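For the case in the question, a minimal sketch (the names my_table, id, and the daily partition key ds are assumptions) would be to run the change with CASCADE and then confirm it on an individual partition:
-- Change the ID column from STRING to BIGINT and cascade the change
-- to the metadata of every existing partition.
alter table my_table change column id id bigint cascade;
-- Verify that an individual daily partition picked up the new type.
describe my_table partition (ds='2016-01-01');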
Alternatively you can recreate table like in this answer: https://stackoverflow.com/a/58299056/2700344
I have around 10 tables which have data in them. I need to change the fields which have data type bigint to numeric(18,0).
We have verified the data in our DB; there would not be any data loss. In our lower environment, what we have done is:
Take a backup of the existing table and rename it temporarily
Create a new table with the numeric data type
Populate the data from the backup table
If everything is okay, delete the backup table
The above is the process we have followed in lower environments; a rough sketch is shown below.
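For illustration only, a rough T-SQL sketch of that procedure with hypothetical names (dbo.Accounts with an Amount column):
-- 1. Rename the existing table as a backup.
exec sp_rename 'dbo.Accounts', 'Accounts_backup';
GO
-- 2. Create the new table with the NUMERIC(18,0) column.
create table dbo.Accounts (
    Id     int            not null primary key,
    Amount numeric(18, 0) null
);
GO
-- 3. Populate the new table from the backup.
insert into dbo.Accounts (Id, Amount)
select Id, Amount from dbo.Accounts_backup;
GO
-- 4. After verification, drop the backup table.
-- drop table dbo.Accounts_backup;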
But we cannot follow the above procedure when it comes to prod. We would like to make the change using an ALTER statement. Since it is the PROD environment, we have to be careful with changes. As I said earlier, there would not be any data loss.
But still wanted to know - what internally happens when we execute the ALTER statement?
Will it drop the table, recreate it with the new definitions, and populate the data back? If so, are there any risks associated with this?
Any thoughts on how this could be properly handled in PROD would be appreciated.
I might suggest an approach that doesn't rebuild the data. Use a computed column instead. Something like this:
-- Rename the existing column out of the way.
exec sp_rename 'dbo.table.col', '_col', 'COLUMN';
-- Re-expose the old data under the original name as a computed column of the new type.
alter table dbo.[table] add col as (cast(_col as numeric(18, 0)));
You can then access col as the type that you want. You will not have to rewrite any data, so there will not be any locks or other issues with performance. Of course, select * will be a bit redundant, but you probably shouldn't be doing that anyway.
Can we change an attribute's data type when the database table already has records, in SQL?
I am using Microsoft SQL Server Management Studio 2008. The error that I am getting is:
Error converting data type nvarchar to float.
In short: it is possible with the ALTER COLUMN command ONLY if the existing data is compatible with the new data type. In addition, it is recommended to do this inside a transaction.
For example, you may change a column from varchar(50) to nvarchar(200) with the script below.
alter table TableName
alter column ColumnName nvarchar(200)
Edit: regarding the error you posted while altering the column type:
Error converting data type nvarchar to float.
One way would be to create a new column and convert all the good (convertible and compatible) records to the new column. After that you may want to clean up the bad records that do not convert, delete the old column, and rename your newly added and populated column back to the original name. Important: use a testing environment for all these manipulations first. Playing with production tables usually turns out to be a good way to screw things up.
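A rough sketch of that approach with hypothetical names (dbo.Readings with an nvarchar column Value), assuming SQL Server 2012 or later for TRY_CONVERT (on 2008 you would need something like a CASE with ISNUMERIC instead):
-- 1. Add a new column of the target type.
alter table dbo.Readings add ValueFloat float null;
GO
-- 2. Copy over the rows that convert cleanly; rows that do not convert stay NULL.
update dbo.Readings set ValueFloat = try_convert(float, Value);
-- 3. Inspect (and clean up) the bad records that did not convert.
select * from dbo.Readings where ValueFloat is null and Value is not null;
GO
-- 4. Once satisfied, drop the old column and rename the new one back.
alter table dbo.Readings drop column Value;
GO
exec sp_rename 'dbo.Readings.ValueFloat', 'Value', 'COLUMN';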
References to look for more discussions on similar SE posts:
Change column types in a huge table
How to change column datatype in SQL Server database without losing data
Obviously, there is no default conversion to your new data type. One solution could be to create a second column with the requested type and write your own conversion function. Once that is done, delete the first column and rename the second one to the original name.
Things to consider: how big your table is. You then use the ALTER TABLE syntax. We do not know which data type you want to change to, so just as an example,
alter the column:
Alter Table [yourTable] Alter column [yourColumn] varchar(15)
You could also try to add a new column, update that column using your old column, drop the old column, and rename the new column. This is a safer way, because at times the data that you hold might not react well to the new data type...
Posts to look into for ideas: Change column types in a huge table, How to change column datatype in SQL database without losing data
Alter the data type of that column. In general, SQL Server's table designer won't allow the change; it will prompt you that the change requires dropping and re-creating the table. There is a setting to achieve this:
Go to Tools -> Options -> Designers -> Table and Database Designers and uncheck the 'Prevent saving changes that require table re-creation' option. I am talking about SQL Server 2008 R2. Now you can easily alter the data type.
I need to change column datatypes in a database table from varchar to nvarchar in order to support Chinese characters (currently, the varchar fields that have these characters are only showing question marks).
I know how to change the values, but I want to see if it's safe to do so. Is there anything to look out for before I do the changing? Thanks!
Note that this change is a size-of-data update; see SQL Server table columns under the hood. The change will add a new NVARCHAR column, update each row copying the data from the old VARCHAR column to the new NVARCHAR column, and then mark the old VARCHAR column as dropped. If the table is large, this will generate a large log, so be prepared for it. After the update, run DBCC CLEANTABLE to reclaim the space used by the former VARCHAR column. If you can afford it, better to run ALTER TABLE ... REBUILD, which will not only reclaim the space but also completely remove the physically deleted VARCHAR column. The linked article at the beginning has more details.
You may also be interested in enabling Unicode Compression for your table.
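Sketched end to end with hypothetical names (dbo.Products and a ProductName column; the NVARCHAR length and the compression choice are assumptions), the sequence described above might look like this:
-- Size-of-data change: every row is rewritten, so expect significant log growth.
-- Keep the nullability the same as the existing column.
alter table dbo.Products alter column ProductName nvarchar(200) not null;
GO
-- Reclaim the space left behind by the dropped VARCHAR column...
dbcc cleantable ('MyDatabase', 'dbo.Products');
GO
-- ...or, if you can afford it, rebuild the table, which also removes the physically
-- dropped column; ROW compression includes Unicode compression on 2008 R2 and later.
alter table dbo.Products rebuild with (data_compression = row);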
You can do this on non-primary-key fields:
ALTER TABLE [TableName]
ALTER COLUMN [ColumnName] nvarchar(N) null
On primary key fields it will not work directly; you have to drop the primary key constraint (and anything that references it) first, or recreate the table.
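A hedged sketch of the primary-key case (the constraint name, the nvarchar length, and the assumption that no foreign keys reference the key are all hypothetical):
-- Drop the primary key (plus any foreign keys or indexes that depend on it),
-- alter the column, then recreate the key.
ALTER TABLE [TableName] DROP CONSTRAINT [PK_TableName]
ALTER TABLE [TableName] ALTER COLUMN [ColumnName] nvarchar(100) NOT NULL
ALTER TABLE [TableName] ADD CONSTRAINT [PK_TableName] PRIMARY KEY ([ColumnName])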
Make sure that the length doesn't exceed 4000 since the maximum for VARCHAR is 8000 while NVARCHAR is only 4K.
The table will get bigger. Each character in the column will take twice the space to store. You might not notice unless the table is really big.
Stored procedures/views/queries that work with the column data might need to be modified to deal with the nvarchar.
Check all the dependencies for this table as stored procs, functions, temp tables based on this table and variables used for inserts/updates etc may also need to be updated to NVARCHAR.
Also check if the table is in replication! That could cause you a new set of problems!
We are facing a problem with the ALTER TABLE SQL statement. Sometimes we update our database at the client side and the ALTER TABLE SQL takes a very long time. I would like to know how ALTER works. Is ALTER statement performance correlated with the table's data? That is, if the table holds a lot of data, will ALTER take much longer?
There is also a question about Oracle 11g R2. Are there any changes that need to be incorporated into our code? Our code is very old and has been working fine until now.
There could be several reasons for this:
If the table is locked by another query/resource, it would wait for the lock to be released and then execute the update...
If the table contains many rows and you have added a new column with a default value, it would execute an update query for the whole table after altering it, to populate all the existing records with the default value...
If, for example, you add a new column with a default value to a large table, then it will take time depending on the size of the table.
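For illustration with hypothetical names, the classic slow case is adding a defaulted nullable column to a big table. As a side note for the 11g R2 part of the question, Oracle 11g records a NOT NULL column with a DEFAULT as a metadata-only change, so that variant does not rewrite existing rows:
-- Rewrites every existing row to store the default, so time grows with table size.
ALTER TABLE big_table ADD (status VARCHAR2(10) DEFAULT 'NEW');

-- On Oracle 11g, a NOT NULL column with a DEFAULT is a metadata-only change:
-- the default is kept in the data dictionary and existing rows are untouched.
ALTER TABLE big_table ADD (status_flag VARCHAR2(10) DEFAULT 'NEW' NOT NULL);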