I'm working on an Oracle SQL database, quite a big one (about 150 tables). One of these tables has to be changed because it's redundant (it can be generated through a join). I have been asked to delete a column from this table to get rid of the redundancy. The problem is that now I have to change code everywhere someone does an insert/update/etc. on this table (and don't forget the constraints!). I thought "I can make a view that does the right join", so the problem is solved for all the selects, but it doesn't work for the inserts, because I'd be updating 2 tables... Is there a way to solve this problem?
My goal is to rename my original table original_table to original_table_smaller (with one less column) and create a view (or something like a view) called original_table that works like the original table.
Is this possible?
As your view will contain one column that is not present in the real table, you will need an INSTEAD OF trigger to make the view updatable.
Something like this:
create table smaller_table
(
id integer not null primary key,
some_column varchar(20)
);
create view real_table
as
select id,
some_column,
null as old_column
from smaller_table;
Now your old code would run something like this:
insert into real_table
(id, some_column, old_column)
values
(1, 'foo', 'bar');
which results in:
ORA-01733: virtual column not allowed here
To get around this, you need an INSTEAD OF trigger:
create or replace trigger comp_trigger
instead of insert on real_table
begin
insert into smaller_table
(id, some_column)
values
(:new.id, :new.some_column);
end;
/
Now the value for the "old_column" will be ignored. You need something similar for updates as well.
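If you also need the update case, a minimal sketch of a matching INSTEAD OF UPDATE trigger (same names as above; the trigger name is made up) could be:
create or replace trigger comp_trigger_upd
instead of update on real_table
begin
update smaller_table
set some_column = :new.some_column
where id = :old.id;
end;
/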
If your view contains a join, you can handle that situation in the trigger as well: simply do the insert/update against the two different tables as appropriate.
For more details and examples, see the manual
http://docs.oracle.com/cd/E11882_01/appdev.112/e25519/triggers.htm#i1006376
It is possible to insert/update on views.
You might want to check with USER_UPDATABLE_COLUMNS which columns you can insert to the view.
Check with this query:
select * from user_updatable_columns where table_name = 'VIEW_NAME';
Oracle has two different ways of making views updatable:
The view is "key preserved" with respect to what you are trying to update. This means the primary key of the underlying table is in the view and each row appears only once in the view, so Oracle can figure out exactly which underlying table row to update; or
You write an INSTEAD OF trigger.
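As a rough sketch of the key-preserved case (all table, column and view names here are made up), a join view accepts DML on the key-preserved table's columns without any trigger:
create table parent
(
p_id integer not null primary key,
p_name varchar(20)
);
create table child
(
c_id integer not null primary key,
p_id integer references parent,
c_name varchar(20)
);
create view child_with_parent
as
select c.c_id,
c.c_name,
c.p_id,
p.p_name
from child c
join parent p on p.p_id = c.p_id;
-- child is key-preserved here, so this insert works without a trigger
-- (assuming the parent row with p_id = 1 already exists)
insert into child_with_parent
(c_id, p_id, c_name)
values
(1, 1, 'foo');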
I often see these suggestions on how to copy one table into another:
This one should create a table with the same properties (i.e. primary key and constraints), but it doesn't actually work in PostgreSQL:
CREATE TABLE New_Users LIKE Old_Users;
This one should just copy the content, and some posts (like this one: https://stackoverflow.com/a/11433568) claim you can do it after the first command.
INSERT INTO New_Users SELECT * FROM Old_Users GROUP BY ID;
In PostgreSQL, it seems like the second command creates the table if it does not exist, and fails if the table is already there, but it indeed does not preserve the properties of the original table.
I'd like to atomically drop a table and create a new one with the same name, but with the contents and properties of another existing table. Something like this:
BEGIN;
DROP TABLE New_Users;
CREATE TABLE New_Users (LIKE Old_Users);
INSERT INTO New_Users SELECT * FROM Old_Users;
END;
This should atomically delete the current version of the table, create a skeleton with all properties and finally populate it with the content of the reference table. How can I do this in PostgreSQL?
You can use LIKE with an option:
DROP TABLE New_Users;
CREATE TABLE New_Users (LIKE Old_Users INCLUDING ALL);
or, much faster:
TRUNCATE New_Users RESTART IDENTITY;
INSERT INTO New_Users SELECT * FROM Old_Users;
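Putting that together with the transaction from the question, a minimal sketch of the atomic version could be:
BEGIN;
DROP TABLE IF EXISTS New_Users;
CREATE TABLE New_Users (LIKE Old_Users INCLUDING ALL);
INSERT INTO New_Users SELECT * FROM Old_Users;
COMMIT;
DDL is transactional in PostgreSQL, so the drop and re-create either succeed or roll back together.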
There is a requirement to rename the DB tables and column names,
so all the tools/applications taking data from the source will have to change their queries. The solution we are planning to implement is to create a VIEW with the original table name for every table whose name changes. Easy and simple to implement, and no query change is required. But there are cases where a table name remains the same while a column name within it changes, so we can't create another view there (we can't have two objects with the same name).
Is there a Column Synonym kind of thing which we can propose here?
Any solutions/ideas are welcome. The requirement is that queries containing the original column names keep referring to the new columns in the same tables.
For example:
Table Name: DATA_TABLE
Existing Column Name: PM_DATE_TIME
New Column Name: PM_DATETIME
The existing query select pm_Date_time from Data_Table; should now refer to the new column pm_Datetime.
You could consider renaming your original table, and then create a View in its place providing both the old and the new column-names:
CREATE TABLE Data_Table ( pm_Date_time DATE );
ALTER TABLE Data_Table RENAME TO Data_Table_;
CREATE VIEW Data_Table AS
(
SELECT pm_Date_time,
pm_Date_time AS pm_Datetime -- Alias to provide the new column name
FROM Data_table_
);
-- You can use both the old columnn-name...
INSERT INTO Data_Table( pm_Date_time ) VALUES ( SYSDATE );
-- ... or the new one
UPDATE Data_Table SET pm_Datetime = SYSDATE;
There are things that won't work the same way as before:
-- INSERT without stating column-names will fail.
INSERT INTO Data_Table VALUES ( SYSDATE );
-- SELECT * will return both columns (should not do this anyway)
SELECT * FROM Data_Table;
Once you are done with your changes drop the view and rename the table and the columns.
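That cleanup could be a minimal sketch like this (same names as above):
DROP VIEW Data_Table;
ALTER TABLE Data_Table_ RENAME TO Data_Table;
ALTER TABLE Data_Table RENAME COLUMN pm_Date_time TO pm_Datetime;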
You'll want to add a virtual column:
ALTER TABLE Data_Table ADD pm_Date_time as (pm_Datetime);
UPDATE: Oracle (11g at least) doesn't accept this and raises "ORA-54016: Invalid column expression was specified". Use Peter Lang's approach instead, where he pseudo-adds zero days:
ALTER TABLE Data_Table ADD (pm_Date_time AS (pm_Datetime + 0));
This works like a view; when accessing pm_Date_time you are really accessing pm_Datetime.
Rextester demo: http://rextester.com/NPWFEW17776
And Peter is also right that you can use it in queries, but not in INSERT column lists or UPDATE SET clauses.
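A short sketch of the whole flow (same names as above):
-- rename the physical column to the new name
ALTER TABLE Data_Table RENAME COLUMN pm_Date_time TO pm_Datetime;
-- re-introduce the old name as a virtual column
ALTER TABLE Data_Table ADD (pm_Date_time AS (pm_Datetime + 0));
-- old queries keep working ...
SELECT pm_Date_time FROM Data_Table;
-- ... but INSERT/UPDATE must target the real column
UPDATE Data_Table SET pm_Datetime = SYSDATE;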
This was basically touched on in the answer by Thorsten Kettner, but what you're looking for is a pseudocolumn.
This solution looks a little hacky because the syntax for a pseudocolumn requires an expression. The simplest expression I can think of is the CASE statement below. Let me know if you can make it simpler.
ALTER TABLE <<tablename>> ADD (
  <<new_column_name>> AS (
    CASE
      WHEN 1=1 THEN <<tablename>>.<<old_column_name>>
    END)
);
This strategy basically creates a new column on the fly by evaluating the CASE statement and copying the value of <<old_column_name>> to <<new_column_name>>. Because this column is computed dynamically, there is a performance penalty compared to just selecting the original column.
One gotcha here is that this will only work if you are duplicating a column once. Multiple pseudocolumns cannot contain duplicate expressions in Oracle.
we can't create another view (any object with the same object name).
That's true within a schema. Another somewhat messy approach is to create a new user/schema with appropriate privileges and create all your views in that schema, with those querying the modified tables in the original schema. You could include instead-of triggers if you need to do more than query. The views would only need the old column names (as aliases), not the new ones, so inserts that don't specify the columns (which is bad, of course) would still work too.
You could also create synonyms to packages etc. in the original schema if the applications/tools call any and their specifications haven't changed. And if they have changed you can create wrapper packages in your new schema.
Then your legacy tools/applications can connect to that new schema and, if it's all set up right, will see things apparently as they were before. That could potentially be done by setting current_schema, perhaps through a logon trigger, if the way they connect or the account they connect to can't be modified.
As the tools and applications are upgraded to work with the new table/column names they can switch back to the original schema.
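A rough sketch of that setup (the schema names legacy_compat and app_owner, and the grants shown, are just assumptions for illustration):
-- as a privileged user: a schema for the legacy tools
CREATE USER legacy_compat IDENTIFIED BY some_password;
GRANT CREATE SESSION, CREATE VIEW, CREATE SYNONYM TO legacy_compat;
GRANT SELECT, INSERT, UPDATE, DELETE ON app_owner.Data_Table TO legacy_compat;
-- connected as legacy_compat: expose the table under its old column name
CREATE VIEW Data_Table AS
SELECT pm_Datetime AS pm_Date_time -- old name as an alias
FROM app_owner.Data_Table;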
I want to add another row to my existing table, and I'm a bit hesitant because I'm not sure I'm doing the right thing and it might skew the database. I have my script below and would like to hear your thoughts about it.
I want to add another row for 'Jane' in the table, which will be 'SKATING' in the ACT column.
Table: [Emp_table].[ACT].[LIST_EMP]
My script is:
INSERT INTO [Emp_table].[ACT].[LIST_EMP]
([ENTITY],[TYPE],[EMP_COD],[DATE],[LINE_NO],[ACT],[NAME])
VALUES
('REG','EMP','45233','2016-06-20 00:00:00:00','2','SKATING','JANE')
Will this do the trick?
Your statement looks ok. If the database has a problem with it (for example, due to a foreign key constraint violation), it will reject the statement.
If any of the fields in your table are numeric (and not varchar or char), just remove the quotes around the corresponding field. For example, if emp_cod and line_no are int, insert the following values instead:
('REG','EMP',45233,'2016-06-20 00:00:00:00',2,'SKATING','JANE')
Inserting records into a database has always been the most common reason why I've lost a lot of the hair on my head!
SQL is great when it comes to SELECTs or even UPDATEs, but when it comes to INSERTs it's like someone from another planet came into the SQL standards committee and managed to get their way of doing it implemented into the final SQL standard!
If your table does not have a primary key that gets generated automatically on every insert, then you have to handle avoiding duplicates yourself.
Start by writing a normal SELECT to see if the record(s) you're going to add don't already exist. But as Robert implied, your table may not have a primary key because it looks like a LOG table to me. So insert away!
If it does need every record to be unique, then I strongly suggest you create a primary key for the table, either an auto-generated one or a combination of your existing columns.
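For example, a minimal sketch of adding an auto-generated key (the LIST_EMP_ID column and the constraint name are made up):
ALTER TABLE [Emp_table].[ACT].[LIST_EMP]
ADD [LIST_EMP_ID] INT IDENTITY(1,1) NOT NULL;
ALTER TABLE [Emp_table].[ACT].[LIST_EMP]
ADD CONSTRAINT [PK_LIST_EMP] PRIMARY KEY ([LIST_EMP_ID]);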
Assuming the first five combined columns make a unique key, this select will determine if your data you're inserting does not already exist...
SELECT COUNT(*) AS FoundRec FROM [Emp_table].[ACT].[LIST_EMP]
WHERE [ENTITY] = wsEntity AND [TYPE] = wsType AND [EMP_COD] = wsEmpCod AND [DATE] = wsDate AND [LINE_NO] = wsLineno
You will have to replace the wsXXX placeholders with direct values or have them DECLAREd earlier in your script.
If you ran this alone and received a value of 1 or more, then the data already exists in your table, at least for those first 5 columns. A true duplicate test would require you to test EVERY column in your table, but it should give you an idea.
To do it all as one statement, you can use an INSERT ... SELECT with the duplicate check in the WHERE clause:
INSERT INTO [Emp_table].[ACT].[LIST_EMP]
([ENTITY],[TYPE],[EMP_COD],[DATE],[LINE_NO],[ACT],[NAME])
SELECT 'REG','EMP','45233','2016-06-20 00:00:00:00','2','SKATING','JANE'
WHERE (SELECT COUNT(*) FROM [Emp_table].[ACT].[LIST_EMP]
WHERE [ENTITY] = wsEntity AND [TYPE] = wsType AND
[EMP_COD] = wsEmpCod AND [DATE] = wsDate AND
[LINE_NO] = wsLineno) = 0
Just replace the wsXXX variables with the values you want to insert.
I hope that made sense.
We are performing a database migration to SQL Server, and to support a legacy app we have defined views on the SQL Server table which present data as the legacy app expects.
However, we're now having trouble with INSTEAD OF INSERT triggers defined on those views, when the fields may have default values.
I'll try to give an example.
A table in the database has 3 fields, a, b, and c. c is brand new and the legacy app doesn't know about it, so we also have a view with 2 fields, a and b.
When the legacy app tries to insert a value into its view, we use an INSTEAD OF INSERT trigger to lookup the value that should go in field c, something like this:
INSERT INTO realTable(a, b, c) SELECT Inserted.a, Inserted.b, Calculated.C FROM...
(The details of the lookup aren't relevant.)
This trigger works well, unless field b has a default value. This is because if the query
INSERT INTO legacyView(a) VALUES (123)
is executed, then in the trigger Inserted.b is NULL, not b's default value. Now I have a problem, because I can't tell the difference between the above query, which would put the default value into b, and this:
INSERT INTO legacyView(a,b) VALUES (123, NULL)
Even if b was non-NULLABLE, I don't know how to write the INSERT query in the trigger such that if a value was provided for b, it's used in the trigger, but if not the default is used instead.
EDIT: added that I'd rather not duplicate the default values in the trigger. The default values are already in the database schema, I would hope that I could just use them directly.
Paul: I've solved this one eventually. It's a bit of a dirty solution and might not be to everyone's taste, but I'm quite new to SQL Server and such like:
In the Instead_of_INSERT trigger:
Copy the Inserted virtual table's data structure to a temporary table:
SELECT * INTO aTempInserted FROM Inserted WHERE 1=2
Create a view to determine the default constraints for the view's underlying table (from system tables) and use them to build statements which will duplicate the constraints in the temporary table:
SELECT 'ALTER TABLE dbo.aTempInserted
ADD CONSTRAINT ' + dc.name + 'Temp' +
' DEFAULT(' + dc.definition + ')
FOR ' + c.name AS Cmd, OBJECT_NAME(c.object_id) AS Name
FROM sys.default_constraints AS dc
INNER JOIN sys.columns AS c
ON dc.parent_object_id = c.object_id
AND dc.parent_column_id = c.column_id
Use a cursor to iterate through the set retrieved and execute each statement. This leaves you with a temporary table with the same defaults as the table to be inserted into.
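As a sketch of that iteration (vDefaultConstraintCmds is an assumed name for the view built in the previous step):
DECLARE @cmd nvarchar(max);
DECLARE cmd_cursor CURSOR LOCAL FAST_FORWARD FOR
SELECT Cmd FROM vDefaultConstraintCmds WHERE Name = 'realTable';
OPEN cmd_cursor;
FETCH NEXT FROM cmd_cursor INTO @cmd;
WHILE @@FETCH_STATUS = 0
BEGIN
EXEC sp_executesql @cmd;
FETCH NEXT FROM cmd_cursor INTO @cmd;
END;
CLOSE cmd_cursor;
DEALLOCATE cmd_cursor;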
Insert default record into the temporary table (all fields are nullable as created from Inserted virtual table):
INSERT INTO aTempInserted DEFAULT VALUES
Copy the records from the Inserted virtual table into the view's underlying table (where they would have been inserted originally, had the trigger not prevented this), joining the temporary table to supply default values. This requires use of the COALESCE function so that only unsupplied values are defaulted:
INSERT INTO realTable([a], [b])
SELECT COALESCE(I.[a], T.[a]),
COALESCE(I.[b], T.[b])
FROM Inserted AS I,
aTempInserted AS T
Drop the temporary table
Some ideas:
If the legacy application is specifying column lists for INSERTs, and naming columns rather than using SELECT *, then can't you just bind a default to column c and let the application use your original (modified) table?
If there was any way that you could make the legacy app use a different view or table for its INSERTs than for SELECT or DELETE, you could put the required defaults on that table and use a regular after-trigger to move the new columns over to the real table.
How about leaving the original table alone and adding your additional columns in a separate table which has a 1-1 relationship with the original? Then create a view that combines these two tables and put appropriate instead-of trigger(s) on this new view to handle all data operations split across the two tables. I realize this has performance implications, but it might be the only way around the problem. This would be an ideal case for a materialized view, which would slow down updates but make the result perform exactly like a table for reads. (Materialized views lend themselves best to inner joins and require no aggregation. They also put schema locks on the source tables.)
I've run into a similar problem where I couldn't tell the difference between intentionally NULL values and skipped columns in an instead-of UPDATE trigger on a view. I eventually made an instead-of INSERT trigger on the view to convert inserts to updates (if the key already existed it was an update, otherwise it was an insert). Though this won't help you directly, it might spur some ideas for you or others.
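For what it's worth, a rough sketch of that insert-or-update idea, reusing the legacyView/realTable names from earlier and assuming column a is the key:
CREATE TRIGGER trg_legacyView_insert ON legacyView
INSTEAD OF INSERT
AS
BEGIN
-- rows whose key already exists become updates
UPDATE r
SET r.b = i.b
FROM realTable AS r
JOIN Inserted AS i ON i.a = r.a;
-- everything else is a genuine insert
INSERT INTO realTable (a, b)
SELECT i.a, i.b
FROM Inserted AS i
WHERE NOT EXISTS (SELECT 1 FROM realTable AS r WHERE r.a = i.a);
END;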
What about using something like this?
insert into realTable (a, b, c)
select inserted.a, isnull(inserted.b, DEFAULT), computedC
from inserted
We've got a view that's defined like this
CREATE VIEW aView as
SELECT * from aTable Where <bunch of conditions>;
The "value" of the view is in the where-condition, so it is okay to use a Select * in this case.
When a new column is added to the underlying table, we have to redefine the view with a
CREATE OR REPLACE FORCE VIEW aView as
SELECT * from aTable Where <bunch of conditions>;
as the Select * seems to get "translated" into all the columns present at the time the view is (re-)defined.
My question: How can we avoid this extra step?
(If the answer is dependent on the RDBMS, we're using Oracle.)
I know you specified Oracle, but the behavior is the same in SQL Server.
One way to update the view with the new column is to use:
exec sp_refreshview MyViewName
go
Of course, I also agree with the other comments about not using a SELECT * in a view definition.
This extra step is mandatory in Oracle: you will have to redefine the view manually.
As you have noticed, the "*" is lost once you create a view:
SQL> create table t (id number);
Table created
SQL> create view v as select * from t;
View created
SQL> select text from user_views where view_name = 'V';
TEXT
-------------------------------------------------------
select "ID" from t
You should not be using * in your views. Specify the columns explicitly.
That way you are only retrieving the data you need, and thus avoid potential issues down the road where someone adds a column to a table that you do not want that view to return (e.g., a large binary column that would adversely impact performance).
Yes, you need to recompile the view to add another column, but this is the correct process. That way you avoid other compilation issues, such as when the view references two tables and someone adds a duplicate column name in one of them. The compiler would then have trouble determining which column was being referred to if you did not prefix the column reference with a table alias, or it might complain about duplicate column names in the results.
The problem with automatically updating views to add columns comes when you extend your model, for example to
SELECT a.*, std_name_format(a.first_name, a.middle_names, a.last_name) long_name
or even
SELECT a.*, b.* from table_a a join table_b b....
If you have a view of just SELECT * FROM table, then you probably should be using a synonym or addressing the table directly.
If the view is hiding rows (SELECT * FROM table WHERE...), then you can look at the feature variously known as Fine Grained Access Control (FGAC), Row Level Security (RLS) or Virtual Private Database (VPD).
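A very rough sketch of the VPD idea (the policy function, its predicate and all names are made up; the real predicate would be the view's <bunch of conditions>):
-- the policy function returns the predicate the view used to apply
CREATE OR REPLACE FUNCTION atable_policy (
p_schema IN VARCHAR2,
p_object IN VARCHAR2
) RETURN VARCHAR2
IS
BEGIN
RETURN 'status = ''ACTIVE''';
END;
/
BEGIN
DBMS_RLS.ADD_POLICY(
object_schema => USER,
object_name => 'ATABLE',
policy_name => 'ATABLE_FILTER',
policy_function => 'ATABLE_POLICY',
statement_types => 'SELECT');
END;
/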
You might be able to do something with a DDL trigger but that would get complicated.
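Just to illustrate why it gets complicated, a very rough, untested sketch of that idea; the re-create cannot run directly inside the DDL trigger, so it is handed off to DBMS_JOB:
CREATE OR REPLACE TRIGGER trg_refresh_aview
AFTER ALTER ON SCHEMA
DECLARE
l_job BINARY_INTEGER;
BEGIN
IF ora_dict_obj_type = 'TABLE' AND ora_dict_obj_name = 'ATABLE' THEN
DBMS_JOB.SUBMIT(
l_job,
'BEGIN EXECUTE IMMEDIATE ''CREATE OR REPLACE VIEW aView AS SELECT * FROM aTable WHERE <bunch of conditions>''; END;');
END IF;
END;
/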