Altering more than one column in a table in Oracle - SQL

Will the two scripts below (for altering a table) make any difference?
script 1 :
alter table ACNT_MGR_HSTRY add DM_BTNUMBER DATA_TYPE ;
alter table ACNT_MGR_HSTRY add DM_BTID DATA_TYPE ;
alter table ACNT_MGR_HSTRY add DM_USERID DATA_TYPE ;
alter table ACNT_MGR_HSTRY add DM_WSID DATA_TYPE ;
script 2 :
alter table ACNT_MGR_HSTRY
add
(
DM_BTNUMBER DATA_TYPE,
DM_BTID DATA_TYPE,
DM_USERID DATA_TYPE,
DM_WSID DATA_TYPE
);
Similarly, will these two UPDATE forms make any difference?
update OPERATIONAL_UNIT
set ( BANK_ID=
ENTY_CODE_ID=
TIME_ZONE=
DM_BTNUMBER=
DM_BTID=
DM_USERID=
DM_WSID=
);
-----------
update OPERATIONAL_UNIT set BANK_ID=;
update OPERATIONAL_UNIT set ENTY_CODE_ID=;
update OPERATIONAL_UNIT set TIME_ZONE=;
update OPERATIONAL_UNIT set DM_BTNUMBER=;
update OPERATIONAL_UNIT set DM_BTID=;
update OPERATIONAL_UNIT set DM_USERID=;
update OPERATIONAL_UNIT set DM_WSID=;

The two examples are equivalent.
I've only ever used statements like those in your first example; I don't know whether the second format gives you as clear an error message when something goes wrong. Gary Myers confirmed my belief:
Mostly the same. If, for example, DM_WSID already existed, then the relevant statement would fail. With script 1 you'd still get the other three columns added; with script 2 you wouldn't. If you have DDL triggers or AUDIT enabled, they will fire multiple times for script 1. Script 1 will also commit multiple times and MAY wait for an exclusive table lock several times.

Script 2 will generally perform much better than script 1. Grouping similar changes and performing them all at once is almost always faster. But the real question is, is the difference significant?
Based on your comment about 50 tables with 15 columns each, I'd say the difference is at least somewhat significant, and possibly very significant depending on your configuration.
Just yesterday I made almost the exact same change, modifying about 30 columns for about 100 tables. Running the script locally using SQL*Plus, the time decreased from 2 minutes to 4 seconds. Most of the time was probably spent communicating between SQL*Plus and the database. If you have a SQL*Plus script that needs to be run remotely those round trips could make your script painfully slow.
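If you do have many tables to change, one way to apply the grouping advice is to generate the combined ALTER statements from the data dictionary instead of hand-writing hundreds of them. A minimal sketch, assuming Oracle and hypothetical NUMBER datatypes (substitute your real types and table filter):

-- Sketch: emit one combined ALTER per table; spool and review before running.
-- The NUMBER types and the LIKE filter are placeholders, not your real schema.
SELECT 'ALTER TABLE ' || table_name
       || ' ADD (DM_BTNUMBER NUMBER, DM_BTID NUMBER,'
       || ' DM_USERID NUMBER, DM_WSID NUMBER);' AS ddl
FROM   user_tables
WHERE  table_name LIKE '%HSTRY';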

One more way to modify several columns at once is to bracket all the columns to be altered in a single MODIFY clause. For instance:
Alter table news
modify
(
Newsid number primary key,
newsArticleNo number check (newsArticleNo > 0),
NewsArea char(15) default ''
);

Related

Copy table data from one database to another in SQL

I have had a look at similar problems; however, none of the answers helped in my case.
A little background: I have two databases, and both have the same table with the same fields and structure. Data already exists in both tables. I want to overwrite and add to the data in db1.table from db2.table, but the primary ID is causing a problem with the update.
When I use the query:
USE db1;
INSERT INTO db2.table(field_id,field1,field2)
SELECT table.field_id,table.field1,table.field2
FROM table;
It works into a blank table, because none of the primary keys exist yet. As soon as a primary key already exists, it fails.
Would it be easier for me to overwrite the primary keys, or to find the primary key and update the fields related to that field_id? I'm really not sure how to proceed from here. The data needs to be migrated every 5 minutes, so possibly a stored procedure is required?
First you should add the new records, then update all existing records. You can create a procedure like the code below:
CREATE OR REPLACE PROCEDURE sync_data IS
BEGIN
  -- add the rows that do not exist in the target yet
  insert into db2.table
  select *
  from db1.table t
  where t.field_id not in (select tt.field_id from db2.table tt);

  -- then refresh the rows that already exist
  for t in (select * from db1.table) loop
    update db2.table aa
       set aa.field1 = t.field1,
           aa.field2 = t.field2
     where aa.field_id = t.field_id;
  end loop;
END sync_data;
/
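Since the data needs to move every 5 minutes, the procedure could be run from a scheduled job. A minimal sketch, assuming an Oracle environment like the procedure above (the job name is made up):

BEGIN
  DBMS_SCHEDULER.create_job(
    job_name        => 'SYNC_DATA_JOB',             -- hypothetical name
    job_type        => 'PLSQL_BLOCK',
    job_action      => 'BEGIN sync_data; END;',
    repeat_interval => 'FREQ=MINUTELY;INTERVAL=5',  -- every 5 minutes
    enabled         => TRUE);
END;
/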
Set IsIdentity to No under Identity Specification on the table into which you want to move data, and after executing your script set it back to Yes.
I ended up just removing the data in the new database and sending it again.
DELETE FROM db2.table WHERE db2.table.field_id != 0;
USE db1;
INSERT INTO db2.table(field_id,field1,field2)
SELECT table.field_id,table.field1,table.field2
FROM table;
It's not very efficient, but it gets the job done. I couldn't figure out the syntax to correctly do an UPDATE or to change the IsIdentity field within MariaDB, so I'm not sure whether those approaches would work or not.
The overhead of deleting and replacing non-trivial amounts of data for an entire table will be prohibitive. That said, I'd prefer an in-place update (merge) over delete/replace.
USE db1;
INSERT INTO db2.table(field_id,field1,field2)
SELECT t.field_id,t.field1,t.field2
FROM table t
ON DUPLICATE KEY UPDATE field1 = t.field1, field2 = t.field2;
This can be used inside a procedure and called every 5 minutes (not recommended) or you could build a trigger that fires on INSERT and UPDATE to keep the tables in sync.
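A minimal sketch of that trigger approach, assuming MariaDB/MySQL with both schemas on the same server, and reusing the thread's placeholder names (table is backquoted here only because it is a reserved word; a real table name won't need that):

DELIMITER //
CREATE TRIGGER db1.trg_table_sync_ins   -- hypothetical trigger name
AFTER INSERT ON db1.`table`
FOR EACH ROW
BEGIN
  -- push the new row into db2, updating it if the key already exists
  INSERT INTO db2.`table` (field_id, field1, field2)
  VALUES (NEW.field_id, NEW.field1, NEW.field2)
  ON DUPLICATE KEY UPDATE field1 = NEW.field1, field2 = NEW.field2;
END//
DELIMITER ;

A second, near-identical AFTER UPDATE trigger would cover modified rows.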
INSERT INTO database1.tabledata SELECT * FROM database2.tabledata;
But you have to keep the varchar lengths in database1 larger than or equal to those in database2, and keep the same column names.

Oracle SQL update double-check locking

Suppose we have table A with fields time: date, status: int, playerId: int, serverid: int
We added a unique constraint on time, playerid and serverid (UNQ_TIME_PLAYERID_SERVERID).
At some point we try to update all rows in table A with a new status and date:
update A set status = 1, time = sysdate where serverid = XXX and status != 1 and time > sysdate
The problem is that there are two separate processes on separate machines that can execute the same update at the same sysdate,
and a UNQ_TIME_PLAYERID_SERVERID violation occurs!
Is there any way to force Oracle to re-check the WHERE clause just before each concrete update (once the lock on the row is acquired)?
I do not want to use any SELECT FOR UPDATE tricks.
If it's really the same update 100% of the time, then just catch the exception and ignore it.
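A minimal sketch of that, assuming the table and constraint from the question (Oracle raises DUP_VAL_ON_INDEX when a statement violates a unique constraint):

declare
  l_serverid A.serverid%TYPE := 42;  -- placeholder for XXX
begin
  update A
     set status = 1, time = sysdate
   where serverid = l_serverid
     and status != 1
     and time > sysdate;
exception
  when dup_val_on_index then
    null;  -- the other process already ran the identical update; ignore it
end;
/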
In case you want to prevent the error from occurring in the first place, you need to implement some logic to prevent the second update statement from ever executing.
I could think of a "lock table" just for this purpose. Create a table TABLE_A_LOCK_TB (add columns based on what information you want to have stored there for administrative reasons, e.g. user who set the lock or a timestamp, ...).
Before you execute an update statement on table A, just insert a row to TABLE_A_LOCK_TB. Once an update was successful, delete said row.
Before executing any update statement on table A, just check whether TABLE_A_LOCK_TB contains a row. If it doesn't, your update is good to go; if it does, you don't execute the update.
To make this process easier you could just write a package for "locking" and "unlocking" table A by inserting / deleting a row from the TABLE_A_LOCK_TB. Also implement a function to check the "lock status".
If you need this logic for several tables you can also make it dynamic by just having a column holding the table name in TABLE_A_LOCK_TB and checking against that.
In your application logic you can handle every update like this then (pseudocode):
IF your_lock_package.lock_status(table_name) = false THEN
your_lock_package.set_lock(table_name);
-- update statement(s)
your_lock_package.release_lock(table_name);
ELSE
-- "error" handling / information to user + exit
END IF;

Efficiently add a column with a value

I am trying to add a column to a T-SQL table; I do this using SMO in C#. Altering the table is fine, but I want to set the column to have a value. The table contains 650 million rows and the update query is taking over a day and a half to run.
Update [TempDatabase].[dbo].[table1] set RawSource = 'DTP'
This is the query I am running above.
Can anyone think of a more efficient way of doing this?
Thanks in advance.
Sometimes, it is more efficient to copy the table with the new value and re-create the table in a single command. Also, you might want to be sure that you have minimal logging for these operations.
Probably the best solution is to use a default value when you create the column:
alter table table1 add RawSource varchar(255) not null default 'DTP';
If you don't want the default moving forward, you can remove it after the column is added.
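A minimal sketch of that cleanup, assuming you name the default constraint yourself so it is easy to drop later (DF_table1_RawSource is a made-up name):

alter table table1 add RawSource varchar(255)
    not null constraint DF_table1_RawSource default 'DTP';

-- later, once the column exists and every row is populated:
alter table table1 drop constraint DF_table1_RawSource;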
Another method uses computed columns, but basically does the same thing:
alter table table1 add _RawSource varchar(255);
alter table table1 add RawSource as (coalesce(_RawSource, 'DTP'));
A default value that applies to all existing rows can only be set at the time the column is added.
Note: you should make the column NOT NULL; otherwise the existing rows are not populated and remain NULL.
alter table table_name
add row_source nvarchar(5) not null default(N'DTP')

Postgresql - change the size of a varchar column to lower length

I have a question about the ALTER TABLE command on a really large table (almost 30 million rows).
One of its columns is a varchar(255) and I would like to resize it to a varchar(40).
Basically, I would like to change my column by running the following command:
ALTER TABLE mytable ALTER COLUMN mycolumn TYPE varchar(40);
I have no problem if the process takes a very long time, but it seems my table is no longer readable during the ALTER TABLE command.
Is there a smarter way? Maybe add a new column, copy values from the old column, drop the old column and finally rename the new one?
Note: I use PostgreSQL 9.0.
In PostgreSQL 9.1 there is an easier way:
http://www.postgresql.org/message-id/162867790801110710g3c686010qcdd852e721e7a559#mail.gmail.com
CREATE TABLE foog(a varchar(10));
ALTER TABLE foog ALTER COLUMN a TYPE varchar(30);
postgres=# \d foog
Table "public.foog"
Column | Type | Modifiers
--------+-----------------------+-----------
a | character varying(30) |
There's a description of how to do this at Resize a column in a PostgreSQL table without changing data. You have to hack the database catalog data. The only way to do this officially is with ALTER TABLE, and as you've noted that change will lock and rewrite the entire table while it's running.
Make sure you read the Character Types section of the docs before changing this. There are all sorts of weird cases to be aware of here. The length check is done when values are stored into the rows; if you hack a lower limit in there, that will not reduce the size of existing values at all. You would be wise to do a scan over the whole table looking for rows where the length of the field is more than 40 characters after making the change. You'll need to figure out how to truncate those manually (so you're back to taking locks, just on the oversize rows), because if someone tries to update anything on such a row, it will be rejected as too big at the point the new version of the row is stored. Hilarity ensues for the user.
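A minimal sketch of that scan, assuming the table and column names from the question:

-- find the rows whose value would no longer fit in varchar(40)
SELECT ctid, length(mycolumn) AS len
FROM mytable
WHERE length(mycolumn) > 40;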
VARCHAR is a terrible type that exists in PostgreSQL only to comply with its associated terrible part of the SQL standard. If you don't care about multi-database compatibility, consider storing your data as TEXT and adding a constraint to limit its length. Constraints can be changed around without this table lock/rewrite problem, and they can do more integrity checking than the weak length check.
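A minimal sketch of that approach, again with the question's names (the constraint name is made up). Note that on 9.0 the type change itself may still rewrite the table; later versions treat varchar-to-text as a metadata-only change:

ALTER TABLE mytable ALTER COLUMN mycolumn TYPE text;
-- the CHECK constraint gives back a length limit without a rewrite
ALTER TABLE mytable
  ADD CONSTRAINT mycolumn_max_len CHECK (length(mycolumn) <= 40);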
Ok, I'm probably late to the party, BUT...
THERE'S NO NEED TO RESIZE THE COLUMN IN YOUR CASE!
Postgres, unlike some other databases, is smart enough to only use just enough space to fit the string (even using compression for longer strings), so even if your column is declared as VARCHAR(255) - if you store 40-character strings in the column, the space usage will be 40 bytes + 1 byte of overhead.
The storage requirement for a short string (up to 126 bytes) is 1 byte
plus the actual string, which includes the space padding in the case
of character. Longer strings have 4 bytes of overhead instead of 1.
Long strings are compressed by the system automatically, so the
physical requirement on disk might be less. Very long values are also
stored in background tables so that they do not interfere with rapid
access to shorter column values.
(http://www.postgresql.org/docs/9.0/interactive/datatype-character.html)
The size specification in VARCHAR is only used to check the size of the values which are inserted, it does not affect the disk layout. In fact, VARCHAR and TEXT fields are stored in the same way in Postgres.
I was facing the same problem trying to truncate a VARCHAR from 32 to 8 and getting ERROR: value too long for type character varying(8). I want to stay as close to SQL as possible because I'm using a self-made JPA-like structure that we might have to switch to a different DBMS according to the customer's choice (PostgreSQL being the default one). Hence, I don't want to use the trick of altering system tables.
I ended up using the USING clause in the ALTER TABLE:
ALTER TABLE "MY_TABLE" ALTER COLUMN "MyColumn" TYPE varchar(8)
USING substr("MyColumn", 1, 8);
As @raylu noted, ALTER acquires an exclusive lock on the table, so all other operations will be delayed until it completes.
If you put the ALTER into a transaction, the table should not be locked:
BEGIN;
ALTER TABLE "public"."mytable" ALTER COLUMN "mycolumn" TYPE varchar(40);
COMMIT;
This worked blazing fast for me: a few seconds on a table with more than 400k rows.
Adding a new column and replacing the old one with it worked for me on Redshift (PostgreSQL-based); refer to this link for more details: https://gist.github.com/mmasashi/7107430
BEGIN;
LOCK users;
ALTER TABLE users ADD COLUMN name_new varchar(512) DEFAULT NULL;
UPDATE users SET name_new = name;
ALTER TABLE users DROP name;
ALTER TABLE users RENAME name_new TO name;
END;
Here's the cache of the page described by Greg Smith. In case that dies as well, the catalog update looks like this:
UPDATE pg_attribute SET atttypmod = 35+4
WHERE attrelid = 'TABLE1'::regclass
AND attname = 'COL1';
Where your table is TABLE1, the column is COL1 and you want to set it to 35 characters (the +4 is needed for legacy purposes according to the link, possibly the overhead referred to by A.H. in the comments).
Try running the following ALTER TABLE:
ALTER TABLE public.users
ALTER COLUMN "password" TYPE varchar(300)
USING "password"::varchar;
I have found a very easy way to change the size: the @Size(min = 1, max = 50) annotation, which is part of javax.validation.constraints, i.e.
import javax.validation.constraints.Size;

@Size(min = 1, max = 50)
private String country;
When executing this with Hibernate, you get in pgAdmin III:
CREATE TABLE address
(
.....
country character varying(50),
.....
)

Syntax Difference in Table Alter/Add with and without a NULL

I have a question, but let me first say that this is being performed on a database which I inherited and whose design I am currently trying to improve. The reason for the syntax is that there are a lot of tables whose datatypes have been 'tweaked' by tech support people (/facepalm), causing lots of issues.
IF NOT EXISTS(Select *
FROM INFORMATION_SCHEMA.COLUMNS
WHERE TABLE_NAME = N'RXINFO'
AND TABLE_SCHEMA = N'scriptassist'
AND COLUMN_NAME = N'Price')
BEGIN
Alter Table [scriptassist].[RXINFO] Add [Price] FLOAT
Print 'Price Field nonexistent, creating this field'
END
ELSE
BEGIN
If Not Exists(Select *
From Information_Schema.Columns
Where Table_Name = N'RXINFO'
And Table_Schema = N'scriptassist'
And Column_Name = N'Price'
And DATA_Type = N'FLOAT'
AND IsNull(CHARACTER_MAXIMUM_LENGTH, 0) = 0)
BEGIN
Alter Table [scriptassist].[RXINFO] Alter Column Price FLOAT
Print 'Price Field needed type updating'
END
END
This is what I am currently doing to determine whether a column needs to be altered or added. However, even in the case of only having to add, say, 3-4 columns to a table with 500K-750K rows and about 100 columns, I'm estimating that this takes anywhere from 15-20 minutes per column.
Things I have done to try to speed it up:
Removed the indexes before and re-added them after
Single user mode
Ensured no connection to the database other than mine
I still don't feel like it should be taking as long as it is, so my question is do I need to explicitly add the NULL after the column type for this to work as fast as I think it should?
If you specify NULL, or perhaps NOT NULL WITH DEFAULT 0.0, then only the schema will be updated and the rows in the table will not be altered, so the change will be sub-second for each table.
If the column is NULLable, it doesn't have to physically be there, so there is no need to update the rows when you alter the schema.
If the column is NOT NULL without a default, then when you ALTER the schema to add the column, every row in the existing table is updated to add the column with the type's default value of 0.0. This is why your alters take minutes.
If you specify NOT NULL WITH DEFAULT, then the DEFAULT is supplied when the row is read, so again there is no need to update the table rows.
In both cases the current INSERT/UPDATE SQL will work without any changes. If you add a NOT NULL column with no default, then current INSERT statements will bomb out unless you add in the Price column. This could be a good thing or a bad thing depending on what you want to achieve.
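Given that, a minimal sketch of the fast variant of the script above (assuming this answer's reasoning holds for your SQL Server version) is to add the column explicitly as NULL:

IF NOT EXISTS(Select *
FROM INFORMATION_SCHEMA.COLUMNS
WHERE TABLE_NAME = N'RXINFO'
AND TABLE_SCHEMA = N'scriptassist'
AND COLUMN_NAME = N'Price')
BEGIN
-- schema-only change: existing rows are not rewritten
Alter Table [scriptassist].[RXINFO] Add [Price] FLOAT NULL
END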