Sybase ASE: Add NOT NULL column without a DEFAULT fails. Why?

Consider the following empty (as in without rows) table:
CREATE TABLE my_table(
my_column CHAR(10) NOT NULL
);
Trying to add a NOT NULL column without a DEFAULT will fail:
ALTER TABLE my_table ADD my_new_column CHAR(10) NOT NULL;
Error:
[Code: 4997, SQL State: S1000]
ALTER TABLE my_table failed.
Default clause is required in order to add non-NULL column 'my_new_column'.
But adding the column as NULL and then changing it to NOT NULL will work:
ALTER TABLE my_table ADD my_new_column CHAR(10) NULL;
ALTER TABLE my_table MODIFY my_new_column CHAR(10) NOT NULL;
Setting a default and then removing the default will work too:
ALTER TABLE my_table ADD my_new_column CHAR(10) DEFAULT '' NOT NULL;
ALTER TABLE my_table REPLACE my_new_column DEFAULT NULL;
What's the justification for this behavior? What is the database trying to do internally that adding the column directly fails? I have a feeling that it might have something to do with internal versioning but I can't find anything in this regard.

This is speculation. I am guessing that Sybase is being overly conservative. In general, you cannot add a new not null column with no default value to a table that has rows. This is true in all databases, because there is no way to populate the existing rows for the new column.
I am guessing that Sybase simply doesn't check whether the table has rows, only whether it exists. Clearly it is not doing that check for the ALTER.

This is only speculation, but I suspect it has to do with the combination of needing both to acquire a lock on the whole table to guarantee continued compliance with the schema, and to re-allocate space for the records.
Allowing a direct add of a NOT NULL column would compromise any existing records if there's no default value. Yes, we know the table is empty. And the database can (eventually) know the table is empty at execution time... but it can't really know the table is empty at execution plan compile time, because a row could be added while the execution plan is determined.
This means the database would need to generate the worst-possible execution plan, involving a lock on the entire table, for the query to run in a transactionally-safe way. Additionally, adding (or removing) a column causes extra work for the database because it needs to re-allocate any pages and rebuild indexes in order to account for the changed size of individual records.
Put the two together, and it becomes difficult to simply roll back a failed query, because you may have actual pages in different states. For whatever reason, the developers chose not to allow this.
The other options allow you to simply fail the query if a bad row gets in the way and would violate the schema, because you're not re-sizing records within pages. It might even allow you to get away with some page and row locks, rather than full table locks.

Related

SQL Server: Existing column and value incrementing

I'm trying to create and increment by one some values to put into an already existing (but empty) column. I'm currently using the identity function, but I wouldn't mind using a custom made function. Right now, SSMS is saying there's incorrect syntax near IDENTITY. Could anybody help me fix this syntax?
ALTER Table anthemID IDENTITY(1,1)
First, you can't make a column identity after the fact: it has to be set that way at creation time.
Second, I'm not quite sure what you mean by "increment the value of an already existing column by one." You can only increment the value of rows within a column - that is a DML (Data Manipulation Language) query. The script you suggested above is a DDL (Data Definition Language) query that actually modifies the structure of the table, affecting the entire column - all rows.
If you just want to increment all the rows by 1, you'd do this:
UPDATE dbo.YourTable SET anthemID = anthemID + 1;
On the other hand, if you want the anthemID column to acquire the identity property so that new inserts to the table receive unique, autoincrementing values, you can do that with some juggling:
1. Back up your database and confirm it is a good backup.
2. Script out your table, including all constraints.
3. Drop all constraints on your table or other tables that involve anthemID:
ALTER TABLE dbo.YourTable DROP CONSTRAINT PK_YourTable -- if part of PK
ALTER TABLE dbo.AnotherTable DROP CONSTRAINT FK_AnotherTable_anthemID -- FKs
4. Rename your table:
EXEC sp_rename 'dbo.YourTable', 'YourTableTemp';
5. Modify the script you generated above to make anthemID identity (add IDENTITY(1,1) after int).
6. Run the modified script to create a new table with the same name as the original (see the sketch below).
7. Insert the data from the old table into the new one:
SET IDENTITY_INSERT dbo.YourTable ON;
INSERT dbo.YourTable (anthemID, AnotherColumn, RestOfColumns)
SELECT anthemID, AnotherColumn, RestOfColumns
FROM dbo.YourTableTemp;
SET IDENTITY_INSERT dbo.YourTable OFF;
8. Re-add all constraints that were dropped.
9. Drop the original, renamed table after confirming you don't need the data any more.
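For steps 5 and 6, the modified CREATE script might look something like this minimal sketch (the column list is hypothetical, inferred from the INSERT above):
CREATE TABLE dbo.YourTable (
    anthemID int IDENTITY(1,1) NOT NULL,
    AnotherColumn varchar(50) NOT NULL
    -- ...the rest of the columns, plus the constraints scripted in step 2
);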
You may be able to do this from SSMS's GUI table designer, and it will take care of moving the data over for you. However, this has bitten people in the past, and if you don't have a good database backup - well, don't do it, because you might encounter some regret in the process.
UPDATE
Now that I know the column is blank, it's even easier.
ALTER TABLE dbo.YourTable DROP COLUMN anthemID;
ALTER TABLE dbo.YourTable ADD anthemID int identity(1,1) NOT NULL;
This does have the drawback of moving the column to the end of the table. If that's a problem, you can follow much the same procedure as I outlined above (to fix things yourself, or alternately use the designer in SQL Server Management Studio).
I recommend in the strongest terms possible that you use an identity column and do not try to create your own means of making new rows get an incremented value.
For emphasis, I'll quote @marc_s's comment above:
The SELECT MAX(ID)+1 approach is highly unsafe in a concurrent environment - in a system under some load, you will get duplicates. Don't do this yourself - don't try to reinvent the wheel - use the proper mechanisms (here: IDENTITY) that your database gives you and let the database handle all the nitty-gritty details!
I wholeheartedly agree with him.
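To make the hazard concrete, here is a minimal sketch contrasting the two approaches (it assumes anthemID is a plain int column in the unsafe version, and an IDENTITY column in the safe one):
-- Unsafe: two concurrent sessions can both read the same MAX and insert duplicates
INSERT INTO dbo.YourTable (anthemID, AnotherColumn)
SELECT MAX(anthemID) + 1, 'some value'
FROM dbo.YourTable;
-- Safe: the IDENTITY mechanism hands out the next value atomically
INSERT INTO dbo.YourTable (AnotherColumn)
VALUES ('some value');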

Add new column without table lock?

My project has a table with 23 million records, and around 6 fields of that table are indexed.
Earlier I tried to add a delta column for Thinking Sphinx search, but it ended up holding a lock on the whole database for an hour. Afterwards, once the column was added and I tried to rebuild the indexes, this is the query that held the database lock for around 4 hours:
"update user_messages set delta = false where delta = true"
To bring the server back up, I created a new database from a db dump and promoted it as the database so the server could go live.
Now what I am looking for: is it possible to add the delta column to my table without a table lock? And once the delta column is added, why does the above query execute when I run the index rebuild command, and why does it block the server for so long?
PS.: I am on Heroku and using Postgres with ika db model.
Postgres 11 or later
Since Postgres 11, only volatile default values still require a table rewrite. The manual:
Adding a column with a volatile DEFAULT or changing the type of an existing column will require the entire table and its indexes to be rewritten.
Bold emphasis mine. false is immutable. So just add the column with DEFAULT false. Super fast, job done:
ALTER TABLE tbl ADD COLUMN delta boolean DEFAULT false;
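For contrast, a volatile default would still force the rewrite even on Postgres 11; a hypothetical example:
ALTER TABLE tbl ADD COLUMN created_at timestamptz DEFAULT clock_timestamp(); -- volatile function: full table rewrite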
Postgres 10 or older, or for volatile DEFAULT
Adding a new column without DEFAULT or DEFAULT NULL will not normally force a table rewrite and is very cheap. Only writing actual values to it creates new row versions. But, quoting the manual:
Adding a column with a DEFAULT clause or changing the type of an existing column will require the entire table and its indexes to be rewritten.
UPDATE in PostgreSQL writes a new version of the row. Your question does not provide all the information, but that probably means writing millions of new rows.
If you do the UPDATE in place, and a major portion of the table is affected, and you are free to lock the table exclusively, remove all indexes before doing the mass UPDATE and recreate them afterwards. It's faster this way. Related advice in the manual.
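A minimal sketch of that order of operations, reusing the table from the question (the index name is hypothetical):
BEGIN;
DROP INDEX user_messages_delta_idx;       -- drop (all) indexes on the table first
UPDATE user_messages SET delta = false;   -- mass UPDATE without index maintenance
CREATE INDEX user_messages_delta_idx ON user_messages (delta); -- recreate afterwards
COMMIT;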
If your data model and available disk space allow for it, CREATE a new table in the background and then, in one transaction: DROP the old table, and RENAME the new one. Related:
Best way to populate a new column in a large table?
While creating the new table in the background: Apply all changes to the same row at once. Repeated updates create new row versions and leave dead tuples behind.
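A minimal sketch of the background-table approach (tbl_new is a hypothetical name; it assumes writes to tbl are paused or captured separately while the copy runs):
CREATE TABLE tbl_new AS
SELECT *, false AS delta
FROM tbl;                        -- build the replacement in the background
-- create indexes and constraints on tbl_new here
BEGIN;
DROP TABLE tbl;
ALTER TABLE tbl_new RENAME TO tbl;
COMMIT;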
If you cannot remove the original table because of constraints, another fast way is to build a temporary table, TRUNCATE the original one and mass INSERT the new rows - sorted, if that helps performance. All in one transaction. Something like this:
BEGIN;
SET temp_buffers = '1000MB'; -- or whatever you can spare temporarily
-- write-lock table here to prevent concurrent writes - if needed
LOCK TABLE tbl IN SHARE MODE;
CREATE TEMP TABLE tmp AS
SELECT *, false AS delta
FROM tbl; -- copy existing rows plus new value
-- ORDER BY ??? -- opportune moment to cluster rows
-- DROP all indexes here
TRUNCATE tbl; -- empty table - truncate is super fast
ALTER TABLE tbl ADD COLUMN delta boolean DEFAULT false; -- NOT NULL?
INSERT INTO tbl
TABLE tmp; -- insert back surviving rows.
-- recreate all indexes here
COMMIT;
You could add another table with just that one column; that avoids any such long locks. Of course, that table also needs a second column: a foreign key referencing the original table.
For the indexes, you could use CREATE INDEX CONCURRENTLY; it doesn't take such heavy locks on the table: http://www.postgresql.org/docs/9.1/static/sql-createindex.html
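A minimal sketch of that side-table idea (names are hypothetical; it assumes user_messages has a primary key id):
CREATE TABLE user_message_deltas (
    user_message_id integer PRIMARY KEY REFERENCES user_messages (id),
    delta boolean NOT NULL DEFAULT false
);
-- index added without heavy locking, per the advice above
CREATE INDEX CONCURRENTLY user_message_deltas_delta_idx ON user_message_deltas (delta);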

A special case when modifying the database

Sometimes I face the following case in my database design, and I want to know the best practice for handling it:
For example, I have a specific table, and after a while - when the database is in operation and some real data has already been entered - I need to add some required fields (that are not supposed to accept NULL).
What is the best practice in this situation?
Make the field accept NULL (since some data has already been entered in the table, sacrificing the important constraint) and try to force the user to enter this field through some validation in the code?
Truncate all the entered data and re-enter it again (tedious work)?
Any other suggestions about this issue?
It depends on requirements. If the data to populate existing rows for the new column isn't available immediately then I would generally prefer to create a new table and just populate new rows when the data exists. If and when you have all the data for every row then put the new column into the original table.
If possible I would set a default value for the new column.
e.g. For Varchar
alter table table_name
add column_name varchar(10) not null
constraint column_name_default default ('Test')
After you have updated the data, you can then drop the default:
alter table table_name
drop constraint column_name_default
A lot will come down to your requirements.
It depends on your application, your database scheme, your entities.
The best way to go about it is to truncate the data and re-enter it again, but it need not be too tedious a task. Temporary tables and table variables can assist a great deal with this. A simple procedure comes to mind:
In SQL Server Management Studio, right-click on the table you wish to modify and select Script Table As > CREATE To > New Query Editor Window.
Add a # in front of the table name in the CREATE statement.
Move all records into the temporary table, using something to the effect of:
INSERT INTO #temp SELECT * FROM original
Then run the script so that all your records are preserved in the temporary table.
Truncate your original table, and make any changes necessary.
Right-click on the table and select Script Table As > INSERT To > Clipboard, paste it into your query editor window, and modify it to read records from the temporary table, using INSERT .. SELECT.
That's it. Admittedly not quite straightforward, but a well-kept database is almost always worth a slight hassle.
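Condensed into T-SQL, the procedure might look something like this sketch (table and column names are hypothetical, and SELECT ... INTO stands in for the scripted CREATE):
SELECT * INTO #temp FROM original;   -- copy all records out
TRUNCATE TABLE original;             -- empty the original table
-- make your structural changes to original here
INSERT INTO original (col1, col2)    -- put the surviving rows back
SELECT col1, col2 FROM #temp;
DROP TABLE #temp;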

Varchar(255) to Varchar(MAX)

Is it possible to change a column type in a SQL Server 2008 database from varchar(255) to varchar(MAX) without having to drop the table and recreate?
SQL Server Management Studio throws me an error every time I try to do it using that - but to save myself a headache would be nice to know if I can change the type without having to DROP and CREATE.
Thanks
You should be able to do it using TSQL.
Something like
ALTER TABLE [table] ALTER COLUMN [column] VARCHAR(MAX)
'Saving changes is not permitted. The changes you have made require the following tables to be dropped and re-created. You have either made changes to a table that can't be re-created or enabled the option Prevent saving changes that require table to be re-created.' Option 'Prevent saving changes' is not enabled.
That's a new "feature" in SQL Server Management Studio 2008 which by default is turned on. Whenever you make a larger change, SSMS can only recreate the table by creating a new one and then moving over the data from the old one - all in the background (those changes include re-ordering of your columns amongst other things).
This option is turned off by default, since if your table has FK constraints and stuff, this way of re-doing the table might fail. But you can definitely turn that feature on!
It's under Tools > Options and once you uncheck that option you can do these kind of changes to table structure in the table designer again.
Be aware when using something like
ALTER TABLE [table] ALTER COLUMN [column] VARCHAR(MAX)
See https://dba.stackexchange.com/questions/15007/change-length-of-varchar-on-live-prod-table
Martin Smith's answer:
If you are increasing it to varchar(100 - 8000) (i.e. anything other than varchar(max)), and you are doing this through TSQL rather than the SSMS GUI (ALTER TABLE YourTable ALTER COLUMN YourCol varchar(200) [NOT] NULL), and you are not altering column nullability from NULL to NOT NULL (which would lock the table while all rows are validated and potentially written to) or from NOT NULL to NULL in some circumstances, then this is a quick metadata-only change. It might need to wait for a SCH-M lock on the table, but once it acquires that, the change will be pretty much instant.
One caveat to be aware of is that during the wait for a SCH-M lock other queries will be blocked rather than jump the queue ahead of it so you might want to consider adding a SET LOCK_TIMEOUT first.
Also make sure in the ALTER TABLE statement you explicitly specify NOT NULL if that is the original column state as otherwise the column will be changed to allow NULL.
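Putting those two pieces of advice together, a minimal sketch (the timeout value and the NOT NULL are assumptions to adapt):
SET LOCK_TIMEOUT 5000; -- milliseconds: fail fast instead of queuing other queries behind us
ALTER TABLE [table] ALTER COLUMN [column] VARCHAR(MAX) NOT NULL; -- repeat NOT NULL if that was the original state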

Changing the size of a column referenced by a schema-bound view in SQL Server

I'm trying to change the size of a column in SQL Server using:
ALTER TABLE [dbo].[Address]
ALTER COLUMN [Addr1] [nvarchar](80) NULL
where the length of Addr1 was originally 40.
It failed, raising this error:
The object 'Address_e' is dependent on column 'Addr1'.
ALTER TABLE ALTER COLUMN Addr1 failed because one or more objects access
this column.
I've tried to read up on it, and it seems the error was raised because some views reference this column, and SQL Server treats the change as if the column were being dropped.
Address_e is a view created by the previous DB Administrator.
Is there any other way I can change the size of the column?
ALTER TABLE [table_name] ALTER COLUMN [column_name] varchar(150)
The views were probably created using the WITH SCHEMABINDING option, which means they are explicitly wired up to prevent such changes. Looks like the schemabinding worked and prevented you from breaking those views - lucky day, heh? Contact your database administrator and ask him to make the change, after he assesses the impact on the database.
From MSDN:
SCHEMABINDING
Binds the view to the schema of the underlying table or tables. When SCHEMABINDING is specified, the base table or tables cannot be modified in a way that would affect the view definition. The view definition itself must first be modified or dropped to remove dependencies on the table that is to be modified.
If anyone wants to increase the column width of a replicated table in SQL Server 2008, there is no need to change the property "replicate_ddl=1". Simply follow the steps below:
Open SSMS
Connect to Publisher database
Run the command: ALTER TABLE [Table_Name] ALTER COLUMN [Column_Name] varchar(22)
This will increase the column width from varchar(x) to varchar(22), and you can see the same change on the subscriber (the transaction gets replicated). So there is no need to re-initialize the replication.
Hope this will help all who are looking for it.
See this link
Resize or Modify a MS SQL Server Table Column with Default Constraint using T-SQL Commands
The solution for such a SQL Server problem is going to be (sketched below):
1. Dropping or disabling the DEFAULT constraint on the table column.
2. Modifying the table column data type and/or data size.
3. Re-creating or enabling the default constraint back on the SQL table column.
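A minimal sketch of those three steps (constraint, table, and column names are hypothetical):
ALTER TABLE dbo.YourTable DROP CONSTRAINT DF_YourTable_YourCol;       -- 1. drop the default
ALTER TABLE dbo.YourTable ALTER COLUMN YourCol varchar(100) NOT NULL; -- 2. modify type/size
ALTER TABLE dbo.YourTable ADD CONSTRAINT DF_YourTable_YourCol
    DEFAULT ('') FOR YourCol;                                         -- 3. re-create the default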
Bye
Here is what works with the version of the program that I'm using; it may work for you too. I will just give the instructions and the commands that do it. class is the name of the table. With this method you change the table itself, not just the result returned by the query.
View the table class:
select * from class
Change the length of the columns FacID (seen as "faci") and classnumber (seen as "classnu") so the whole labels fit:
alter table class modify facid varchar (5);
alter table class modify classnumber varchar(11);
View the table again to see the difference:
select * from class;
(run the command again to see the difference)
This changes the actual table for good, and for the better.
P.S. I made these instructions up as a note for the commands. This is not a test, but can help on one :)
Check the column collation. This script might change the collation to the table default. Add the current collation to the script.
You can change the size of the column in 3 steps (sketched below):
1. Alter the view Address_e and comment out the column /*Addr1*/.
2. Run your script:
ALTER TABLE [dbo].[Address]
ALTER COLUMN [Addr1] [nvarchar](80) NULL
3. Alter the view Address_e again, in order to uncomment the column Addr1.
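Spelled out, the three steps might look like this sketch (it assumes the view is schema-bound and, hypothetically, selects one other column Addr2):
ALTER VIEW dbo.Address_e WITH SCHEMABINDING AS
    SELECT Addr2 FROM dbo.Address;   -- 1. remove Addr1 from the view
GO
ALTER TABLE dbo.Address ALTER COLUMN Addr1 nvarchar(80) NULL;  -- 2. resize the column
GO
ALTER VIEW dbo.Address_e WITH SCHEMABINDING AS
    SELECT Addr1, Addr2 FROM dbo.Address;  -- 3. put Addr1 back
GO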