SQL change data type

I have a column named Line_no (smallint) and I want to change its data type to bigint. However, this column is a primary key, and many tables have foreign keys referencing it, so how do I change it? I need to change it in both a SQL Server and an Oracle database.

First of all, there's no easy way to do that currently, especially in Oracle, where all of the column's values must be null before you can change its data type. In any case, the following process works for both Oracle and SQL Server:
1. Take your database offline so that no operation can disturb the process.
2. Add a new field, say line_num, with your new data type.
3. Update the new field with the line_no values for all records.
4. Write a stored procedure that uses the metadata to drop all the FKs referencing the current PK; while it loops, it should also write the matching ADD FK commands to the output (dbms_output in Oracle), so that you can execute them later to re-create the FKs in step 9. (A SQL Server sketch follows this list.)
5. Drop the primary key off the line_no field.
6. Drop the field line_no.
7. Rename the field line_num to line_no.
8. Add the primary key on the new field.
9. Run the commands generated in step 4 to add all the FKs again.
10. Bring your db back online :)
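Here is a minimal sketch of the SQL Server side of step 4, written as a statement generator rather than a full stored procedure; dbo.lines is a hypothetical name for the table that owns the line_no primary key. On Oracle, the analogous loop would read user_constraints and write through dbms_output.
-- Generate DROP/ADD statements for every FK referencing dbo.lines.
-- NOTE: assumes single-column FKs; a composite FK would emit one row per column.
SELECT
    'ALTER TABLE ' + QUOTENAME(OBJECT_SCHEMA_NAME(fk.parent_object_id))
        + '.' + QUOTENAME(OBJECT_NAME(fk.parent_object_id))
        + ' DROP CONSTRAINT ' + QUOTENAME(fk.name) AS drop_stmt,
    'ALTER TABLE ' + QUOTENAME(OBJECT_SCHEMA_NAME(fk.parent_object_id))
        + '.' + QUOTENAME(OBJECT_NAME(fk.parent_object_id))
        + ' ADD CONSTRAINT ' + QUOTENAME(fk.name)
        + ' FOREIGN KEY (' + QUOTENAME(pc.name) + ')'
        + ' REFERENCES dbo.lines (' + QUOTENAME(rc.name) + ')' AS add_stmt
FROM sys.foreign_keys AS fk
JOIN sys.foreign_key_columns AS fkc ON fkc.constraint_object_id = fk.object_id
JOIN sys.columns AS pc ON pc.object_id = fkc.parent_object_id
                      AND pc.column_id = fkc.parent_column_id
JOIN sys.columns AS rc ON rc.object_id = fkc.referenced_object_id
                      AND rc.column_id = fkc.referenced_column_id
WHERE fk.referenced_object_id = OBJECT_ID(N'dbo.lines');
Run the drop_stmt results before step 5 and keep the add_stmt results for step 9.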

It depends on your DBMS. You may have to drop the foreign key constraints, alter the columns and re-create the constraints.
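In SQL Server, for instance, that sequence looks like the following sketch; the table and constraint names here are made up for illustration. Note that the PK constraint has to come off too, because SQL Server will not alter a column that is part of a primary key.
ALTER TABLE dbo.order_items DROP CONSTRAINT fk_order_items_line_no;
ALTER TABLE dbo.lines DROP CONSTRAINT pk_lines;
ALTER TABLE dbo.lines ALTER COLUMN line_no bigint NOT NULL;
ALTER TABLE dbo.order_items ALTER COLUMN line_no bigint NOT NULL;
ALTER TABLE dbo.lines ADD CONSTRAINT pk_lines PRIMARY KEY (line_no);
ALTER TABLE dbo.order_items ADD CONSTRAINT fk_order_items_line_no
    FOREIGN KEY (line_no) REFERENCES dbo.lines (line_no);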

Related

SQL server 2012, combined primary key needs to be unique?

Hello, I have a question, I assume mainly about SQL Server functionality. I'm building a test database and I have stumbled on a problem when I try to insert data into my tables.
Picture 1 shows the error message I get when trying to add rows:
https://i.stack.imgur.com/GW3C2.png
Picture 2 shows all relations in the database:
//i.stack.imgur.com/7BhHa.png
Picture 3 shows the table I am currently trying to update:
//i.stack.imgur.com/3JqtA.png
In the table I have a combined primary key ("SDat" and "Kurs"). The error message I get implies that the primary key must be unique, but what I don't understand is: since I have a third column "Elev" which makes the row unique, why won't SQL Server let me insert this row into the table? I have tried making the same database in Access and it works, so I assume the problem is something in SQL Server.
Regards Robert
A Primary Key by definition means the value must be unique. So if you have a combined primary key on 2 fields, then the combined value of those 2 fields needs to be unique, meaning any given combination can appear in only 1 row. If you need to enforce unique values on the combination of 3 fields (SDat, Kurs, and Elev), then your PK needs to include all 3 fields.
If you really need to enforce a unique constraint across a lot of fields in a table, I wouldn't use a PK to enforce that, but instead a UNIQUE constraint:
ALTER TABLE tablename ADD CONSTRAINT constraintname UNIQUE (column1, ..., column_n)
Then you can create a separate column for your primary key, so that as you add columns and need those additional columns to be unique, you don't have to edit your PK and rebuild your table.
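For example, here is a hypothetical version of the table from the question, with a surrogate identity PK and the three-column UNIQUE constraint (the column types are guesses):
CREATE TABLE dbo.Enrollment (
    EnrollmentID int IDENTITY(1,1) PRIMARY KEY,  -- surrogate key
    SDat date NOT NULL,
    Kurs varchar(50) NOT NULL,
    Elev varchar(50) NOT NULL,
    CONSTRAINT UQ_Enrollment UNIQUE (SDat, Kurs, Elev)  -- natural key stays unique
);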

PostgreSQL column type conversion from bigint to bigserial

When I try to change the data type of a column in a table with an ALTER command...
alter table temp alter column id type bigserial;
I get
ERROR: type "bigserial" does not exist
How can I change the datatype from bigint to bigserial?
As explained in the documentation, SERIAL is not a datatype, but a shortcut for a collection of other commands.
So while you can't change it simply by altering the type, you can achieve the same effect by running these other commands yourself:
CREATE SEQUENCE temp_id_seq;
ALTER TABLE temp ALTER COLUMN id SET NOT NULL;
ALTER TABLE temp ALTER COLUMN id SET DEFAULT nextval('temp_id_seq');
ALTER SEQUENCE temp_id_seq OWNED BY temp.id;
Altering the owner will ensure that the sequence is removed if the table/column is dropped. It will also give you the expected behaviour in the pg_get_serial_sequence() function.
Sticking to the tablename_columnname_seq naming convention is necessary to convince some tools like pgAdmin to report this column type as BIGSERIAL. Note that psql and pg_dump will always show the underlying definition, even if the column was initially declared as a SERIAL type.
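One detail the commands above leave out: if temp already contains rows, the new sequence should be advanced past the current maximum id, or the next inserted default will collide with an existing value. A minimal sketch, reusing the names from the commands above:
SELECT setval('temp_id_seq', COALESCE(MAX(id), 0) + 1, false) FROM temp;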
As of Postgres 10, you also have the option of using an SQL standard identity column, which handles all of this invisibly, and which you can easily add to an existing table:
ALTER TABLE temp ALTER COLUMN id
ADD GENERATED BY DEFAULT AS IDENTITY
ALTERing a column from BIGINT to BIGSERIAL in order to make it auto-increment won't work. BIGSERIAL is not a true type; it is shorthand that automates the SEQUENCE and DEFAULT creation.
Instead you can create a sequence yourself, then assign it as the default for a column:
CREATE SEQUENCE "YOURSCHEMA"."SEQNAME";
ALTER TABLE "YOURSCHEMA"."TABLENAME"
ALTER COLUMN "COLUMNNAME" SET DEFAULT nextval('"YOURSCHEMA"."SEQNAME"'::regclass);
ALTER TABLE "YOURSCHEMA"."TABLENAME" ADD CONSTRAINT pk PRIMARY KEY ("COLUMNNAME");
This is a simple workaround, but note that it discards the existing values in the column and renumbers the rows from the new sequence:
ALTER TABLE table_name DROP COLUMN column_name, ADD COLUMN column_name bigserial;
Sounds like there are a lot of professionals out there on this subject... if the original table did indeed have data, then the real answer to this dilemma is to have designed the db correctly in the first place. That being the case, changing the column rule (type) would require integrity verification of that column for the new paradigm. And don't forget: anywhere that column is manipulated (added/updated) would need to be looked into as well.
If it's a new table, then okay, simples: delete the column and re-add the new column (which takes care of the sequence for you). Again: design, design, design.
I think we've all fallen foul of this.

postgresql: error duplicate key value violates unique constraint

This question has been asked by several people, but my problem seems to be different.
I have to merge identically structured tables from different databases in PostgreSQL into a new DB. What I am doing is connecting to the remote db using dblink, reading the table there, and inserting it into the table in the current DB, like below:
INSERT INTO t_types_of_dementia SELECT * FROM dblink('host=localhost port=5432 dbname=snap-cadence password=xxxxxx', 'SELECT dementia_type, snapid FROM t_types_of_dementia') as x(dementia_type VARCHAR(256),snapid integer);
The first time this query runs fine, but when I run it again, or try to run it with some other remote database's table, it gives me this error:
ERROR: duplicate key value violates unique constraint
"t_types_of_dementia_pkey"
I want this new table to be populated with the entries of the other tables from the other dbs.
Some of the proposed solutions talk about sequences, but I am not using any.
The structure of the table in the current db is:
CREATE TABLE t_types_of_dementia(
dementia_type VARCHAR(256),
snapid integer NOT NULL,
PRIMARY KEY (dementia_type,snapid)
);
P.S. There is a specific reason why both of the columns are used as a primary key, which may not be relevant in this discussion, because the same issue happens in other tables where this is not the case.
As the error message tells you, you cannot have two rows with the same values in the columns dementia_type, snapid, since together they need to be unique.
You have to make sure that the two databases don't share any dementia_type, snapid values.
A workaround would be to add an identity column to your table, alter table t_types_of_dementia add column id integer generated always as identity, and use that as the primary key instead of your current one.
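A minimal sketch of that workaround, assuming PostgreSQL 10 or later for the identity syntax (the constraint name comes from the error message above):
ALTER TABLE t_types_of_dementia ADD COLUMN id integer GENERATED ALWAYS AS IDENTITY;
ALTER TABLE t_types_of_dementia DROP CONSTRAINT t_types_of_dementia_pkey;
ALTER TABLE t_types_of_dementia ADD PRIMARY KEY (id);
Alternatively, if you want to keep the natural key and simply skip rows that already exist, appending ON CONFLICT DO NOTHING to your INSERT would ignore the duplicates.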

Is there a smart way to append a number to a PK identity column in a relational database w/o total catastrophe?

It's far from the ideal situation, but I need to fix a database by appending the number "1" to the PK identity column, which has FK relations to four other tables. I'm basically making a four-digit number a five-digit number. I need to maintain the relations. I could store the number in a var, do a SET query and append the 1, and do that for each table...
Is there a better way of doing this?
You say you are using an identity data type for your primary key, so before you update the numbers you will have to SET IDENTITY_INSERT ON, and then turn it off again after the update.
As long as you have cascading updates set for your relations, the other tables should be updated automatically.
EDIT: As it's not possible to change an identity value, I guess you have to export the data, set the new identity values (+10000) and then import your data again.
Anyone have a better suggestion...
Consider adding another field to the PK instead of extending the length of the PK field. Your new field will have to cascade to the related tables, like a field length increase would, but you get to retain your original PK values.
My suggestion is:
Stop writing to the tables.
Copy the tables to new tables with the new PK.
Rename the old tables to backup names.
Rename the new tables to the original table name.
Count the rows in all the tables and double check your work.
Continue using the tables.
Changing a PK after the fact is not fun.
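A rough sketch of that copy-and-rename route in SQL Server, with hypothetical names; note that SELECT ... INTO copies only the data, so indexes and constraints must be re-created on the new table:
SELECT id + 10000 AS id, col1, col2   -- transformed key; expression strips the identity property
INTO dbo.orders_new
FROM dbo.orders;
EXEC sp_rename 'dbo.orders', 'orders_backup';
EXEC sp_rename 'dbo.orders_new', 'orders';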
If the column in question has an identity property on it, it gets complicated. This is more-or-less how I'd do it:
Back up your database.
Put it in single user mode. You don't need anybody mucking around whilst you do the surgery.
Execute the ALTER TABLE statements necessary to
disable the primary key constraint on the table in question
disable all triggers on the table in question
disable all foreign key constraints referencing the table in question.
Clone your table, giving it a new name and a column-for-column identical definition. Don't bother with any triggers, indices, foreign keys or other constraints. Omit the identity property from the table's definition.
Create a new 'map' table that will map your old id values to the new value:
create table dbo.pk_map
(
  old_id int not null primary key clustered ,
  new_id int not null unique nonclustered
)
Populate the map table:
insert dbo.pk_map
select old_id = old.id ,
       new_id = f( old.id )  -- f(x) is the desired transform, e.g. old.id + 10000
from dbo.tableInQuestion old
Populate your new table, giving the primary key column the new value:
insert dbo.tableInQuestion_NEW
select id = map.new_id ,
       ...
from dbo.tableInQuestion old
join dbo.pk_map map on map.old_id = old.id
Truncate the original table: TRUNCATE TABLE dbo.tableInQuestion. This should work safely, since you've disabled all the triggers and foreign key constraints (if TRUNCATE is still refused because of the foreign keys, fall back to a plain DELETE).
Execute SET IDENTITY_INSERT dbo.tableInQuestion ON.
Reload the original table:
insert dbo.tableInQuestion
select *
from dbo.tableInQuestion_NEW
Execute SET IDENTITY_INSERT dbo.tableInQuestion OFF
Execute drop table dbo.tableInQuestion_NEW. We're all done with it.
Execute DBCC CHECKIDENT ('dbo.tableInQuestion', RESEED) to get the identity counter back in sync with the data in the table.
Now, use the map table to propagate the changed primary key column down the line. Depending on your E-R model, this can get complicated as foreign keys referencing the updated column may themselves be part of a composite primary key.
When you're all done, start re-enabling the constraints and triggers you disabled. Make sure you do this using the WITH CHECK option. Fix any problems thus uncovered.
Finally, drop the map table, and clear the single user flag and bring your system(s) back online.
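A minimal sketch of that propagation for one hypothetical child table, using the map:
UPDATE c
SET parent_id = m.new_id
FROM dbo.childTable AS c
JOIN dbo.pk_map AS m ON m.old_id = c.parent_id;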
Piece of cake! (or something.)
Consider this approach:
Reset the identity seed to 10000 + the current seed.
Set identity insert on.
Insert into the table from the values in the table, adding 10000 to the identity column on the way.
EX:
SET IDENTITY_INSERT MyTable ON

INSERT INTO MyTable (id, column1, column2)
SELECT id + 10000, column1, column2
FROM MyTable
WHERE id < @original_max_id  -- the original max identity value

SET IDENTITY_INSERT MyTable OFF
After the insert you know each new identity is exactly 10000 more than the original.
Update the foreign keys by adding 10000.
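For example, against a hypothetical child table and FK column:
UPDATE dbo.ChildTable
SET parent_id = parent_id + 10000;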

Is it possible to change SQL user-defined data type?

I have a bunch of tables using a user-defined data type for the PK column. Is it possible to change this type using SQL Server 2005?
I would suggest that it is always possible to refactor poor or outmoded database designs; it simply depends on how much work you are willing to put in.
If you are looking to replace the user-defined data with a surrogate key then you should be able to simply alter the existing table to contain a non-nullable identity column and this should cause all of the existing records to be assigned a new key automatically.
Once the new field is populated with unique ids, if you need to move out and replace foreign key references to this table, then I would simply alter those tables to contain the new field and use something like the following:
UPDATE child_table
SET new_fk_val =
    ( SELECT new_pk_val
      FROM parent_table
      WHERE parent_table.old_pk_val = child_table.old_fk_val )
Once that step is complete, then you could drop the old foreign key constraint, drop the old foreign key column, drop the old primary key column, establish the new primary key constraint, and then establish the new foreign key constraint.
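Spelled out with hypothetical constraint names (and one extra step: the old PK constraint must be dropped before its column can be), that sequence is:
ALTER TABLE child_table DROP CONSTRAINT fk_child_parent_old;
ALTER TABLE child_table DROP COLUMN old_fk_val;
ALTER TABLE parent_table DROP CONSTRAINT pk_parent_old;
ALTER TABLE parent_table DROP COLUMN old_pk_val;
ALTER TABLE parent_table ADD CONSTRAINT pk_parent PRIMARY KEY (new_pk_val);
ALTER TABLE child_table ADD CONSTRAINT fk_child_parent
    FOREIGN KEY (new_fk_val) REFERENCES parent_table (new_pk_val);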
Of course, if the old version of the parent and child tables relationship was such that you have invalid records in the child table you may have to do something like the following:
DELETE FROM child_table
WHERE old_fk_val NOT IN
( SELECT old_pk_val FROM parent_table)