Currently I need to move three columns from table A to table B. I am using an UPDATE with a join to copy the existing data into the new columns. Afterwards the old columns in table A will be dropped.
ALTER TABLE NewB ADD columnA integer
ALTER TABLE NewB ADD columnB integer

UPDATE NewB
SET NewB.columnA = OldA.columnA, NewB.columnB = OldA.columnB
FROM NewB
JOIN OldA ON NewB.ID = OldA.ID

ALTER TABLE OldA DROP COLUMN columnA
ALTER TABLE OldA DROP COLUMN columnB
This script adds the new columns, copies the existing data from the old table into them, and then removes the old columns.
But due to the system structure, I am required to run the SQL script more than once to make sure the database is up to date.
I did wrap the statements in an IF (columns exist) BEGIN (ALTER ADD, UPDATE, ALTER DROP) END check to ensure the required columns exist. But when the script runs the next time, it hits an error saying the columns cannot be found in the old table in the UPDATE query, because they were dropped the first time the script ran.
Is there another way to solve this?
You may not be able to update using a join (depending on your database), but you can do it like this:
Update NewB set NewB.columnA = (select OldA.columnA from OldA where NewB.ID = OldA.ID);
Update NewB set NewB.columnB = (select OldA.columnB from OldA where NewB.ID = OldA.ID);
I don't know which database you are using, but every database has system tables from which you can find out whether a column exists in a table. In Oracle, for example, ALL_TAB_COLUMNS contains the information for all table columns, so you can query it like below:
select 1 from ALL_TAB_COLUMNS where TABLE_NAME='OldA' and COLUMN_NAME in ('columnA','columnB');
If the resulting records are empty, the specified columns are not present in the table, so you can skip your queries. (Note that Oracle stores unquoted identifiers in uppercase, so the TABLE_NAME and COLUMN_NAME values may need to be uppercase.)
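For example, a PL/SQL block along these lines could guard the whole clean-up (a sketch, assuming the table names from the question and unquoted, uppercase-stored identifiers; DDL inside PL/SQL has to go through EXECUTE IMMEDIATE anyway, and dynamic SQL also keeps the block compilable after the columns are gone):

DECLARE
  col_count PLS_INTEGER;
BEGIN
  -- How many of the old columns still exist?
  SELECT COUNT(*) INTO col_count
  FROM ALL_TAB_COLUMNS
  WHERE TABLE_NAME = 'OLDA'
    AND COLUMN_NAME IN ('COLUMNA', 'COLUMNB');

  IF col_count = 2 THEN
    -- Dynamic SQL, so the block still compiles once the columns are dropped
    EXECUTE IMMEDIATE 'UPDATE NewB b SET (b.columnA, b.columnB) = '
                   || '(SELECT a.columnA, a.columnB FROM OldA a WHERE a.ID = b.ID)';
    EXECUTE IMMEDIATE 'ALTER TABLE OldA DROP COLUMN columnA';
    EXECUTE IMMEDIATE 'ALTER TABLE OldA DROP COLUMN columnB';
  END IF;
END;
/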
There must be something wrong with your column-existence check. I have done similar DDL and DML operations many times. As you did not show how you are checking column existence, I am not able to tell you what's wrong.
Anyway, you are adding a new column to a table. We can check whether that column already exists: if not, run the script; if yes, skip it. Here is the check:
IF NOT EXISTS(SELECT 1 FROM [sys].[columns] WHERE OBJECT_ID('[dbo].[NewB]') = [object_id] AND [name] = 'columnA')
BEGIN
BEGIN TRANSACTION;
....
COMMIT TRANSACTION;
END;
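Applied to the script from the question, the guarded migration could look like this (a sketch; the UPDATE and the DROPs are wrapped in sp_executesql because they reference columns that no longer exist after the first run and would otherwise fail when the batch is compiled, which is exactly the error you are seeing):

IF NOT EXISTS(SELECT 1 FROM [sys].[columns] WHERE OBJECT_ID('[dbo].[NewB]') = [object_id] AND [name] = 'columnA')
BEGIN
    BEGIN TRANSACTION;
    ALTER TABLE NewB ADD columnA integer;
    ALTER TABLE NewB ADD columnB integer;
    -- Dynamic SQL is compiled only when executed, after the check has passed
    EXEC sp_executesql N'UPDATE NewB SET NewB.columnA = OldA.columnA, NewB.columnB = OldA.columnB FROM NewB JOIN OldA ON NewB.ID = OldA.ID';
    EXEC sp_executesql N'ALTER TABLE OldA DROP COLUMN columnA, columnB';
    COMMIT TRANSACTION;
END;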
I need to do the following update query through a stored procedure:
UPDATE table1
SET name = @name -- @name is the stored procedure input parameter
WHERE name IS NULL
Table1 has no indexes or keys; it has 5 columns: 4 integers and 1 varchar (the updatable column 'name' is the varchar).
About 15,000,000 rows have a NULL name and need updating. This takes about 50 minutes, which I think is too long.
I'm running an Azure SQL DB Standard S6 (400 DTUs).
Can anyone give me advice on improving performance?
As you don't have any keys or indexes, I can suggest the following approach.
1- Create a new table using SELECT ... INTO (which will copy the data), like the following query.
SELECT
    CASE
        WHEN name IS NULL THEN @name
        ELSE name
    END AS name,
    <other columns >
INTO dbo.newtable
FROM table1
2- Drop the old table
drop table table1
3- Rename the new table to table1
exec sp_rename 'dbo.newtable', 'table1'
Another approach is a batched update; sometimes it gives better performance than one bulk update (you need to test by adjusting the batch size).
WHILE EXISTS (SELECT 1 FROM table1 WHERE name IS NULL)
BEGIN
    UPDATE TOP (10000) table1
    SET name = @name
    WHERE name IS NULL
END
Can you try the following method?
UPDATE table1
SET name = ISNULL(name, @name)
For NULL values it will update with @name, and the rest will be updated with the same value.
No. You are updating 15,000,000 rows which is going to take a long time. Each update has overhead for finding the row and logging the value.
With so many rows to update, it is unlikely that the overhead is in finding the rows. If you add an index on name, the update will also have to maintain the index in addition to changing the original values.
If your concern is locking the database, you can set up a loop where you do something like this over and over:
UPDATE TOP (100000) table1
SET name = @name -- the stored procedure input parameter
WHERE name IS NULL;
100,000 rows should be about 30 seconds or so.
In this case, an index on name does help. Otherwise, each iteration of the loop would in essence be reading the entire table.
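A minimal sketch of such a loop (assuming the same @name parameter; @@ROWCOUNT ends the loop once an iteration updates no rows, so no separate existence check is needed):

DECLARE @rows INT = 1;
WHILE @rows > 0
BEGIN
    UPDATE TOP (100000) table1
    SET name = @name
    WHERE name IS NULL;

    SET @rows = @@ROWCOUNT; -- 0 once every NULL name has been filled in
END;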
I've created a Stored Procedure that refreshes the data in a table. It first re-loads the entire table. Next, several filters are applied. (Example: the column 'Model' must equal 'W'; all rows with model 'B' are deleted.) This happens after the table has been loaded (and not during) because I want to log how many rows are deleted because of each individual filter. After the filters have been applied, some columns contain the same value in every row (the other values were deleted in the filtering process). These columns are now useless, so I want to delete them.
This seems to be problematic for SQL Server. When given the command to execute the SP, it indicates that the columns it is supposed to remove in its final step do not currently exist and refuses to run. That is technically correct, the columns currently don't exist, but they will be created by the SP itself.
Some mockup code:
CREATE PROCEDURE dbo.Procedure AS
BEGIN
    DROP TABLE dbo.Table
    SELECT * INTO dbo.Table FROM dbo.View
    INSERT INTO dbo.Log VALUES (GETDATE(),(SELECT COUNT(1) FROM dbo.Table))
    DELETE FROM dbo.Table WHERE Model <> 'W'
    INSERT INTO dbo.Log VALUES (GETDATE(),(SELECT COUNT(1) FROM dbo.Table))
    ALTER TABLE dbo.Table DROP COLUMN Model
END
Error code when executing:
[2016-09-02 12:25:20] [S0001][207] Invalid column name 'Model'.
How do I circumvent this problem and get the SP to run?
If I understand correctly, you can use dynamic SQL:
exec sp_executesql N'ALTER TABLE dbo.Table DROP COLUMN Model';
This works because the dynamic statement is only compiled when it executes, by which point the SELECT * INTO has recreated the Model column.
The syntax to remove a column from a table in SQL Server is
ALTER TABLE TableName DROP COLUMN ColumnName;
This may be the cause of the issue.
Can you also check one more time that the column 'Model' exists in the view?
I have tried the same scenario and it works for me.
I'm trying to figure out how to determine if a table has been affected by a number of processes that run in sequence, and need to know what the state of the table is before and after each runs. What I've been trying to do is run some SQL before all the processes run that saves a before checksum of every table in the db to a table, then running it again when each ends and updating the table row with an after checksum. After all the processes are over, I compare the checksums and get all rows where before <> after.
The only problem is that I'm not the best with SQL, and I'm a little lost. Here's where I'm at right now:
select checksum_agg(binary_checksum(*)) from empcomp with (nolock)
create table Test_CheckSum_Record ( TableName varchar(max), CheckSum_Before int, CheckSum_After int)
SELECT name into #TempNames
FROM sys.Tables where is_ms_shipped = 0
And the pseudocode for what I want to do is something like
foreach(var name in #TempNames)
insert into Test_CheckSum_Record(name, ExecuteSQL N'select checksum_agg(binary_checksum(*)) from ' + name + ' with (nolock)', null)
But how does one do this?
Judging by the comments, you need to create a trigger that handles all CRUD operations and just places a flag.
Syntax is
CREATE TRIGGER [TriggerName] ON [TableName]
AFTER INSERT, UPDATE, DELETE
In the trigger you can do a
SELECT CHECKSUM_AGG([columns you want to compare]) FROM [ParentTable]
then store that value in a variable and check it against the before column of the checksum table. If no entry exists yet, add a new one with the CHECKSUM_AGG value of the DELETED table as the before entry.
Please note that the choice not to use the INSERTED table is just my preference when calculated columns are involved.
I will edit later when I have more time to add code
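A minimal sketch of such a trigger (assuming the empcomp table and the Test_CheckSum_Record table from the question; the trigger name is illustrative and the before/after bookkeeping is simplified to a single upsert):

CREATE TRIGGER trg_empcomp_checksum ON dbo.empcomp
AFTER INSERT, UPDATE, DELETE
AS
BEGIN
    SET NOCOUNT ON;

    -- Recompute the whole-table checksum after the change
    DECLARE @cs INT;
    SELECT @cs = CHECKSUM_AGG(BINARY_CHECKSUM(*)) FROM dbo.empcomp;

    IF EXISTS (SELECT 1 FROM Test_CheckSum_Record WHERE TableName = 'empcomp')
        UPDATE Test_CheckSum_Record
        SET CheckSum_After = @cs
        WHERE TableName = 'empcomp';
    ELSE
        INSERT INTO Test_CheckSum_Record (TableName, CheckSum_Before, CheckSum_After)
        VALUES ('empcomp', @cs, NULL);
END;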
Recently I have started learning Oracle SQL. I know that with the DELETE command we can delete particular row(s). Is it possible to delete the entire data of a particular column in a table using only the DELETE command? (I know that by using the UPDATE command to set the entire column to NULL we can achieve the same effect.)
DELETE
The DELETE statement removes entire rows of data from a specified table or view.
If you want to "remove" data from a particular column, update it:
UPDATE table_name
SET your_column_name = NULL;
or if column is NOT NULL
UPDATE table_name
SET your_column_name = <value_indicating_removed_data>;
You can also remove entire column using DDL:
ALTER TABLE table_name DROP COLUMN column_name;
In SQL, DELETE deletes rows, not columns.
You have three options in Oracle:
Set all the values to NULL using update.
Remove the column from the table.
Set the column to unused.
The last two use alter table:
alter table t drop column col;
alter table t set unused (col);
Use an invisible column, available since Oracle 12c:
ALTER TABLE LOG1
MODIFY operation INVISIBLE
This is better than dropping the column: if you need it again, you can get it back by altering the column to VISIBLE.
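For example, to bring the column back later (using the same LOG1 table as above):
ALTER TABLE LOG1
MODIFY operation VISIBLE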
UPDATE employee SET commission = NVL2(commission, '', '');
This will remove all the data from the column (in Oracle the empty string is treated as NULL, so every value becomes NULL).
I have a SQL table that I am trying to add a column from another table to. But when I execute the ALTER TABLE query, it does not pull the values out of the other table to match the column where I am trying to make the connection.
For example, I have column A from table 1 and column A from table 2, which are supposed to coincide: ColumnATable1 is an identification number and ColumnATable2 is the description.
I tried this but got an error...
alter table dbo.CommittedTbl
add V_VendorName nvarchar(200)
where v_venkey = v_vendorno
It tells me that I have incorrect syntax... Anyone know how to accomplish this?
alter table dbo.CommittedTbl
add V_VendorName nvarchar(200);
go
update c
set c.V_VendorName = a.V_VendorName
from CommittedTbl c
join TableA a
on c.v_venkey = a.v_vendorno;
go
I'm just guessing at your structure here.
alter table table2 add column_A <some_type>;
update table2 set column_A = (select t1.column_A from table1 t1 where table2.v_venkey = t1.v_vendorno);
Your names for tables and columns are a bit confusing, but I think that should do it.
There is no WHERE clause for an ALTER TABLE statement. You will need to add the column (your first two lines) and then populate it based upon a relationship you define between the two tables.
ALTER TABLE syntax:
http://msdn.microsoft.com/en-us/library/ms190273%28v=sql.90%29.aspx
There are several languages within SQL:
DDL: Data Definition Language - this defines the schema (the structure of tables, columns, data types). Adding a column to a table changes the table definition, and all rows will have that new column (not just some rows according to criteria).
DML: Data Manipulation Language - this affects the data within a table; inserts, updates, and other changes fall into this category, and you can change some data according to criteria (this is where a WHERE clause comes in).
ALTER is a DDL statement, while INSERT and UPDATE are DML statements.
The two cannot really be mixed as you are doing.
You should ALTER your table to add the column, then INSERT or UPDATE the column to include appropriate data, as the first answer shows.
Is it possible that you want a JOIN query instead? If you want to join two tables or parts of two tables, you should use JOIN.
Have a look at this for a start if you need to know more: LINK
Hope that helps!