How can I alter a UDT in HSQLDB? - hsqldb

In HSQLDB v2.3.1 there is a create type clause for defining UDTs, but there appears to be no alter type clause, as far as the docs are concerned (and the db returns an unexpected token error if I try this).
Is it possible to amend/drop a UDT in HSQLDB? What would be the best practice, if for example I originally created
create type CURRENCY_ID as char(3)
because I decided I was going to use ISO codes. But then I change my mind and decide to store the codes as integers instead. What is the best way to modify the schema in my db? (This is a synthetic example; obviously I wouldn't use integers in this case.)
I guess I might do
alter table inventory alter column ccy set data type int
drop type CURRENCY_ID
create type CURRENCY_ID as int
alter table inventory alter column ccy set data type CURRENCY_ID
but is there a better way to do this?

After trying various methods, I ended up writing a script to edit the *.script file of the database directly. It's a plain text file with SQL commands that recreates the DB programmatically. In detail:
open db, shutdown compact
Edit the script file: replace the type definition, e.g. create type XXX as int to create type XXX as char(4)
For each table, replace the insert into XXX values(i,...) statements with insert into XXX values('str',...). This was done with a script that had the mappings from the old (int) values to the new (char) values.
In my particular case, I was changing a primary key, so I had to remove the identity directive from the create table statement, and I also had to remove a line that had an alter table XXX alter column YYY restart sequence 123.
save and close script file, open db, shutdown compact
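To illustrate, a hypothetical before/after excerpt of the edits to the *.script file (the real file uses uppercase, schema-qualified statements; the table rows shown here are made up):
create type CURRENCY_ID as int
insert into inventory values(840,'Widget')
-- becomes, after editing:
create type CURRENCY_ID as char(3)
insert into inventory values('USD','Widget')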
This isn't great, but it worked. Advantages:
Ability to re-define UDT.
Ability to map the table values programmatically.
Method is generic and can be used for other schema changes, beside UDTs.
Cons
No checking that the schema is consistent (although it does throw up errors if it can't read the script).
Dangerous when treating the file as plain text, e.g. what if I have a VARCHAR column with newlines in it? When I parse the script file and write it back, this would need to be escaped.
Not sure if this works with non-memory DBs, i.e. those that don't have only a *.script file when shut down.
Probably not efficient for large DBs. My DB was small ~ 1MB.

Related

Is it possible to convert a field of varchar(32) to BLOB in Firebird database

I would like to keep data which is already saved in a table field of varchar(32) and convert it to BLOB in a Firebird database.
I am using software: IBExpert ....
If it is possible, how do I do that?
Let's consider you have table TEST with one column NAME:
create table test (name varchar(32));
insert into test values('test1');
insert into test values('test2');
insert into test values('test3');
commit;
select * from test;
It is possible to change the column from varchar to BLOB by the following script:
alter table test add name_blob blob;
update test set name_blob = name;
commit;
alter table test drop name;
alter table test alter column name_blob to name;
commit;
select * from test;
Specifically, in IBExpert it is easy to do with Firebird 2.1 and Firebird 2.5 via direct modification of the system tables (this "obsolete" method was prohibited in Firebird 3, but nothing was introduced to replace it).
This works in both directions, VARCHAR to BLOB and BLOB to VARCHAR.
You have to have a DOMAIN (that is, a named data type in SQL), and that domain should be of BLOB type; IBExpert itself would then issue the command that Firebird 2.x executes, if you set Firebird 2.x in the database options.
If you don't have IBExpert, then you have to issue the following commands yourself:
CREATE DOMAIN T_TEXT_BLOB AS -- prerequisite
BLOB SUB_TYPE 1;
update RDB$RELATION_FIELDS -- the very action
set RDB$FIELD_SOURCE = 'T_TEXT_BLOB'
/* the name of the blob domain to be set as the field's new data type */
where RDB$FIELD_NAME = '.....' /* name of the column */
and RDB$RELATION_NAME = '....'; /* name of the table */
See also:
https://firebirdsql.org/file/documentation/reference_manuals/fblangref25-en/html/fblangref25-ddl-domn.html
https://firebirdsql.org/file/documentation/reference_manuals/fblangref25-en/html/fblangref-appx04-relfields.html
When you create a column or change its datatype without explicitly naming a type (like just varchar(10)), then under the hood Firebird creates an automatically-managed, one-per-column user domain with a name like RDB$12345. So while you can update those too, having an explicit named domain is probably more practical and safer.
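To check which domain a column currently uses (and whether it is one of those auto-generated RDB$... domains), you can query the system table directly; a small sketch, using the TEST table from Maxim's example:
select RDB$FIELD_NAME, RDB$FIELD_SOURCE
from RDB$RELATION_FIELDS
where RDB$RELATION_NAME = 'TEST';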
This method however fails on Firebird 3, where you do have to copy the whole table, as shown by Maxim above.
http://tracker.firebirdsql.org/browse/CORE-6052
FB devs warn about some "bugs" and say "it never worked properly", but refuse to give details.
UPDATE I finally managed to reproduce the bug mentioned in the tracker.
At least in Firebird 2.1 reference counting is broken w.r.t. BLOB payloads.
So the trick seems to be to immediately re-write the implicit blobs as explicit ones, tricking Firebird into thinking we supplied new content for all the BLOB values.
Assuming the names from Maxim's answer above...
To trigger and demonstrate the bug, take the "vanilla" VarChar database, apply the conversion described above, and issue the following commands:
update test set name = name -- trying to force data conversion, but instead forcing the Firebird reference counting bug
select cast( name as VarChar(200) ) from test -- or actually any command that tries to read the field contents; any such attempt gets shot down with the infamous "invalid BLOB ID" Firebird error.
To work around the bug we must prevent Firebird from doing its (broken) reference counting. So we must do a fake update, invoking the expression evaluator, so that the Firebird optimizer loses track of the value sources and fails to realize the data was not really changed.
update test set name = '' || name || '' -- really forcing data over-writing and conversion, bypassing BLOB reference counting
select cast( name as VarChar(200) ) from test -- now works like a charm (although if 200 is too short a string you would be stuck with an "overflow" error)
The update command can be any other that triggers the expression evaluator, for example update test set name = cast( name as VarChar( NNN ) ) - but you would need to devise a large enough NNN for the specific column of the specific table. So concatenation with an empty string is universal and does the job, at least on Firebird 2.1.
The above stands for Firebird 2.1.7 Win32. I did not manage to trigger "invalid BLOB id" with Firebird 2.5.8 Win64 - it "just worked".
At least when using a single-connection schema update script, which is the intended way to do database upgrades anyway.
Maybe if I did schema upgrades while users were simultaneously working in the database, FB 2.5 would break too; I don't know.
Whether to use this risky shortcut, disregarding the FB developers' hints, or to use Maxim's "official" answer, possibly dismantling and re-creating half the database that happens to have "dependencies" on the field being dropped, is left to the reader.

Change column type from bigint to numeric(18,0) in sql server

I have around 10 tables which have data in them. I need to change the fields which have data type bigint to numeric(18,0).
We have verified the data in our DB; there would not be any data loss. In our lower environment, what we have done is:
Take a backup of the existing table and rename it temporarily
Create a new table with the numeric data type
Populate the data from the backup table
If everything is okay, then delete the backup table
The above is the process we have followed in lower environments.
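A rough sketch of that lower-environment procedure, with a made-up dbo.Payments table and bigint column Amount standing in for our real tables:
exec sp_rename 'dbo.Payments', 'Payments_backup';   -- keep the old table temporarily
create table dbo.Payments (Amount numeric(18,0) not null /* ...other columns... */);
insert into dbo.Payments (Amount /* , ... */)
select Amount /* , ... */ from dbo.Payments_backup;
drop table dbo.Payments_backup;   -- only after verification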
But we cannot follow the above procedure when it comes to prod. We would like to make the change using an ALTER statement. Since it is the PROD environment, we have to be careful with changes. As I said earlier, there would not be any data loss.
But I still wanted to know - what happens internally when we execute the ALTER statement?
Will it drop the table, recreate it with the new definition and populate the data back? If so, are there any risks associated with this?
Any thoughts on how this could be properly handled in PROD would be appreciated.
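For reference, the in-place change we are considering would be a per-column statement along these lines (same made-up names as above):
alter table dbo.Payments alter column Amount numeric(18,0) not null;   -- repeat NULL/NOT NULL as originally defined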
I might suggest an approach that doesn't rebuild the data. Use a computed column instead. Something like this:
sp_rename 'dbo.table.col', '_col', 'COLUMN';
alter table table add col as (cast(_col as numeric(18, 0)));
You can then access col as the type that you want. You will not have to rewrite any data, so there will not be any locks or other performance issues. Of course, select * will be a bit redundant, but you probably shouldn't be doing that anyway.

change attributes data type in database table when it is already filled with records

Can we change an attribute's data type in SQL when the database table already has records?
I am using Microsoft SQL Server Management Studio 2008. The error that I am getting is:
** Error converting data type nvarchar to float. **
In short: it is possible with the alter column command ONLY if the existing data type is compatible with the newly modified one. In addition, it is recommended to do this within a transaction.
For example, you may change a column from varchar(50) to nvarchar(200) with the script below.
alter table TableName
alter column ColumnName nvarchar(200)
Edit: Regarding your posted error while altering column type.
** Error converting data type nvarchar to float. **
One way would be to create a new column and convert all good (convertible and compatible) records to the new column. After that you may want to clean up the bad records that do not convert, delete the old column, and rename your newly added and populated column back to the original name. Important: use a testing environment for all of these manipulations first. Playing with production tables directly is usually a bad practice and an easy way to screw things up.
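A minimal sketch of that approach for the nvarchar-to-float case, using a hypothetical dbo.Measurements table with an nvarchar column Reading (names are illustrative only):
alter table dbo.Measurements add Reading_new float null;
update dbo.Measurements
set Reading_new = cast(Reading as float)
where isnumeric(Reading) = 1;   -- isnumeric is only a rough filter (e.g. '$' or 'e' pass it but fail the cast); on SQL Server 2012+ try_convert(float, Reading) is safer
-- inspect/fix rows where Reading_new is still null, then:
alter table dbo.Measurements drop column Reading;
exec sp_rename 'dbo.Measurements.Reading_new', 'Reading', 'COLUMN';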
References to look for more discussions on similar SE posts:
Change column types in a huge table
How to change column datatype in SQL Server database without losing data
Obviously, there is no default conversion to your new datatype. One solution could be to create a second column with the requested type and write your own conversion routine. Once this is done, delete the first column and rename the second one to the original name.
Things to consider: how big your table is. You would then use the alter table ... alter column syntax. We do not know which data type you want to change to, so just as an example:
Alter Table [yourTable] Alter column [yourColumn] varchar(15)
You could also try to add a new column and then update that column using your old column, drop the old column, and rename the new column. This is a safer way, because at times the data that you hold might not react well to the new data type...
Posts to look into for ideas: Change column types in a huge table, How to change column datatype in SQL database without losing data
Alter the data type of that column. But in general SQL Server won't allow the change; it will prompt you that the column/table must be dropped and re-created. There is a setting to achieve this: go to Tools -> Options -> Designers -> Table and Database Designers and uncheck the "Prevent saving changes that require table re-creation" option. I'm talking about SQL Server 2008 R2. Now you can easily alter the data type.

a special case when modifying the database

Sometimes I face the following case in my database design, and I want to know the best practice for handling it:
For example, I have a specific table and, after a while, when the database is in operation and some real data has already been entered, I need to add some required fields (that are not supposed to accept null).
What is the best practice in this situation?
Make the field accept null (since some data has already been entered in the table, and sacrifice the important constraint) and try to force the user to enter this field through some validation in the code.
Truncate all the entered data and re-enter it again (tedious work).
Any other suggestions about this issue?
It depends on requirements. If the data to populate existing rows for the new column isn't available immediately then I would generally prefer to create a new table and just populate new rows when the data exists. If and when you have all the data for every row then put the new column into the original table.
If possible I would set a default value for the new column.
e.g. for varchar:
alter table table_name
add column_name varchar(10) not null
constraint column_name_default default ('Test')
After you have updated the data you could then drop the default:
alter table table_name
drop constraint column_name_default
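The interim update mentioned above might look something like this, with 'Test' being the placeholder default from the example and the real value hypothetical:
update table_name
set column_name = '<real value>'
where column_name = 'Test'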
A lot will come down to your requirements.
It depends on your application, your database schema, your entities.
The best way to go about it is to truncate the data and re-enter it again, but it need not be too tedious a task. Temporary tables and table variables can assist a great deal with this. A simple procedure comes to mind:
In SQL Server Management Studio, Right - click on the table you wish to modify and select Script Table As > CREATE To > New Query Editor Window.
Add a # in front of the table name in the CREATE statement.
Move all records into the temporary table, using something to the effect of:
INSERT INTO #temp SELECT * FROM original
Then run the script so that all your records are kept in the temporary table.
Truncate your original table, and make any changes necessary.
Right - click on the table and select Script Table As > INSERT To > Clipboard, paste it into your query editor window and modify it to read records from the temporary table, using INSERT .. SELECT.
That's it. Admittedly not quite straightforward, but a well - kept database is almost always worth a slight hassle.
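Schematically, the round trip could look like this (a sketch with a made-up dbo.Customer table gaining a NOT NULL Region column; adjust the column lists to your table, and mind IDENTITY columns, which need SET IDENTITY_INSERT):
select * into #customer_backup from dbo.Customer;   -- stash the existing rows
truncate table dbo.Customer;
alter table dbo.Customer add Region varchar(50) not null default 'Unknown';
insert into dbo.Customer (Id, Name, Region)
select Id, Name, 'Unknown' from #customer_backup;   -- re-load, supplying a value for the new column
drop table #customer_backup;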

sql server helper stored procedure or utility for alter table alter column IDENTITY(1,1)

I wanted to modify a column in a sql server 2005 table to IDENTITY(1,1)
Incidentally this table is empty and the column to be changed is a primary key.
This column is also a foreign key for two other tables.
After googling I found that you cannot use the alter table syntax to modify a column and make it an identity column.
Link #1 : How do I add the identity property to an existing column in SQL Server
Link #2 : Adding an identity to an existing column -SQL Server
I ended up checking the dependent tables (2 of them), removing the foreign keys (generated the script from SSMS), then dropping the main table and re-creating it with the identity (could try the rename option here as well).
Then I re-created the foreign keys for the two dependent tables mentioned earlier.
But all this was manual work; are there any scripts or SPs out there to make this easier?
Ideally all these steps would be done by such a script/tool/utility (a rough hand-written sketch of the same steps follows the list):
Check dependent tables keys
Generate Create and drop foreign key scripts for this
Generate create script for the main table
drop the main table (or rename the table if the table has data)
re-create the table with identity column enabled
re-create foreign keys
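A rough hand-written sketch of those steps; all table, column and constraint names here are hypothetical, since the question doesn't give the real ones:
ALTER TABLE ChildA DROP CONSTRAINT FK_ChildA_Main;
ALTER TABLE ChildB DROP CONSTRAINT FK_ChildB_Main;
DROP TABLE Main;   -- the table is empty per the question, so nothing to preserve
CREATE TABLE Main (
    Id int IDENTITY(1,1) NOT NULL CONSTRAINT PK_Main PRIMARY KEY
    -- ... remaining column definitions ...
);
ALTER TABLE ChildA ADD CONSTRAINT FK_ChildA_Main FOREIGN KEY (MainId) REFERENCES Main (Id);
ALTER TABLE ChildB ADD CONSTRAINT FK_ChildB_Main FOREIGN KEY (MainId) REFERENCES Main (Id);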
You can use SSMS to generate a script (Edit a table, save script), but otherwise it's a manual process as you identified.
The SSMS scripts will pick up dependencies etc. For this kind of work, I tend to use SSMS to generate a basic script, pimp it a bit, run it carefully, then use a comparison tool (such as Red Gate compare) to generate a safer version.
Edit: The SSMS error is not an error, it's a safety check that can be switched off
(This is merely a follow-up to gbn's post with more details -- it isn't all that easy to figure this stuff out.)
It isn't impossible to write a utility to do this, just very complex and very hard. Fortunately, Microsoft has already done it -- it's called SSMS (or SMO?). To generate such a script:
In the Object Explorer, drill down to the database and table that you want to modify
Right click and select Design
Make the desired changes to the one table in the design screen. It's reasonably intuitive.
To add/remove the identity property, select the column in the upper pane, and in the lower pane/"Column Properties" tab, expand and configure the settings under "Identity Specification".
To generate a script to implement all your changes, incorporating all the dependent key changes, click on the "Generate Change Script" toolbar button. This is also an option under the "Table Designer" menu.
I also do this to generate scripts (that I later modify--SSMS doesn't always produce the most efficient code.) Once done, you can exit out without saving your changes -- leaving you a DB you can test your new script on.
Drop the PK and build a column of the same datatype.
Copy the data of the column which you want to set as identity to the new column (see the note in the script below).
Drop the old column.
Reset the primary key.
ALTER TABLE XX
DROP CONSTRAINT PK_XX
ALTER TABLE XX
ADD newX int not null identity(1,1) primary key
-- note: an IDENTITY column cannot be populated with UPDATE (update XX set newX = oldX fails);
-- in this question the table is empty anyway, so there is nothing to copy from oldX
alter table XX
DROP COLUMN oldX
This is the simplest way to set an identity column if you don't want to use the long generated script.