Sometimes I come across code that creates triggers to allow users to delete rows through a view.
But why would we need to delete through a view, when we can simply go to the table and execute a delete with code as simple as this:
delete from table where x = y;
My question is: why do we perform deletions on views by using TRIGGERS?
In other words, what are the advantages of deleting through a view?
The reasons we use views are mainly to hide the complexity of the data
sources, to hide the actual tables (for security reasons), and to save
ourselves from writing the same code over and over again.
Now, if you haven't let your users access the tables directly and they
only work through views, then they will be executing deletes
against views.
Triggers are only needed when your view has more than one underlying
table. You cannot perform update, insert, or delete operations on a view
if the operation affects multiple underlying tables. Therefore we make
use of INSTEAD OF DELETE/UPDATE/INSERT triggers to perform these
operations against views.
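As an illustration, here is a minimal sketch of an INSTEAD OF DELETE trigger on a view over two tables; all object names (vw_OrderDetails, Orders, OrderLines, OrderId) are hypothetical:
CREATE TRIGGER trg_vw_OrderDetails_Delete
ON dbo.vw_OrderDetails
INSTEAD OF DELETE
AS
BEGIN
    SET NOCOUNT ON;
    -- Delete child rows first, using the keys captured in the
    -- deleted pseudo-table, then delete the parent rows.
    DELETE FROM dbo.OrderLines
    WHERE OrderId IN (SELECT OrderId FROM deleted);
    DELETE FROM dbo.Orders
    WHERE OrderId IN (SELECT OrderId FROM deleted);
END;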
When you have multiple underlying tables, and your update, delete, or
insert affects only one of them, SQL Server does allow you to execute
the statement, but it is not guaranteed to be done correctly.
Therefore, if your view is based on multiple underlying tables, you
should always do updates/deletes/inserts either
1) directly against the underlying tables, or 2) through INSTEAD OF triggers.
Also read here for more details on why you shouldn't do update/delete/insert operations on views with multiple underlying tables.
I have a very complicated SQL query in Vertica that starts with a WITH clause. It fails without materialization because of resource constraints, but runs in under 20 seconds with materialization. I then put it into a CREATE VIEW statement, and it appears to lose the materialization as part of the view. That's pretty frustrating. The documentation doesn't mention any limitations on using the hint inside a view.
create view view_name as
with /*+ENABLE_WITH_CLAUSE_MATERIALIZATION */
report_quarters as
From this part of the Vertica documentation:
https://www.vertica.com/docs/10.1.x/HTML/Content/Authoring/SQLReferenceManual/Statements/CREATEVIEW.htm
CREATE VIEW
Defines a view. Views are read only, so they do not support insert, update, delete, or copy operations.
The /*+ENABLE_WITH_CLAUSE_MATERIALIZATION */ hint actually triggers a CREATE LOCAL TEMPORARY TABLE ... process in the background. That involves not only DML (insert, update, delete, and copy) but also DDL, and that is why it is not supported at this time.
A possible way of working around this, from version 11.0.1 on, might be to use a stored procedure that:
creates a LOCAL TEMPORARY TABLE out of the WITH clause that you want to materialize,
then creates a target table, using CREATE TABLE new AS SELECT .. FROM temp_table, etc.
The end user can call the stored procedure, then select from the newly created table.
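To make that concrete, here is a rough sketch of the two steps such a procedure would wrap; the table and column names (orders, report_quarters_tmp, report_result) are purely hypothetical:
-- Step 1: materialize the body of the WITH clause into a local temp table.
CREATE LOCAL TEMPORARY TABLE report_quarters_tmp
ON COMMIT PRESERVE ROWS AS
SELECT DATE_TRUNC('quarter', order_date) AS quarter, SUM(amount) AS total
FROM orders
GROUP BY 1;
-- Step 2: build the table the end user will select from.
CREATE TABLE report_result AS
SELECT quarter, total FROM report_quarters_tmp;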
Problem statement
I have a view that recursively collects and aggregates information from three large to very large tables. The view itself takes quite some time to execute, but it is needed in many SELECT statements and is executed quite often.
The resulting view, however, is very small (a few dozen rows in two columns).
All updating actions typically start a transaction, execute many thousands of INSERTs, and then commit the transaction. This does not happen very frequently, but when something is written to the database, it is usually a large amount of data.
What I tried
As the view is small, does not change frequently, and is read often, I thought of creating an indexed view. Sadly, however, you cannot create an indexed view with CTEs, let alone recursive CTEs.
To 'emulate' an indexed or materialized view, I thought about writing a trigger that executes the view and stores the results in a table every time one of the base tables is modified. However, I suspect this would take forever when a large number of rows are UPDATEd or INSERTed, since the trigger runs for each INSERT/UPDATE statement on those tables, even if they are inside a single transaction.
Actual question
Is it possible to write a trigger that runs once, just before the transaction commits and after its last INSERT/UPDATE statement has finished, and only if any of the statements changed any of the three tables?
No, there's no direct way to make a trigger that runs right before the end of a transaction. DML triggers run once per triggering DML statement (INSERT, UPDATE, DELETE), and there is no other kind of trigger related to data modification.
Indirectly, you could have all of your INSERTs go into a temporary table and then INSERT them all together from the #temp table into the real table, so that the trigger fires only once for that table. But if you are writing to multiple tables, you would still have the same problem.
The SOP (Standard Operating Practice) way to address this is to have a stored procedure handle everything up front, instead of a trigger trying to catch everything on the back side.
If data consistency is important, then I'd recommend that you follow the SOP approach based on a stored procedure that I mentioned above. Here's a high-level outline of this approach:
Use a stored procedure that dumps all of the changes into #temp tables first,
then starts a transaction,
then makes the changes, moving data from your #temp table(s) into your actual tables,
then does the follow-up work you wanted in a trigger; if these are consistency checks and they fail, roll back the transaction.
Otherwise, it commits the transaction.
This is almost always how something like this is done correctly.
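For illustration, here is a minimal T-SQL sketch of that outline; the target table (dbo.TableA), the staged rows, and the consistency check are all hypothetical:
CREATE PROCEDURE dbo.usp_BulkLoad
AS
BEGIN
    SET NOCOUNT ON;
    -- 1) Stage the incoming rows outside the transaction.
    CREATE TABLE #staging (Id int NOT NULL, Amount decimal(10, 2) NOT NULL);
    INSERT INTO #staging (Id, Amount) VALUES (1, 10.00), (2, 20.00);
    BEGIN TRY
        BEGIN TRANSACTION;
        -- 2) Move everything into the real table in one statement,
        --    so any trigger on dbo.TableA fires only once.
        INSERT INTO dbo.TableA (Id, Amount)
        SELECT Id, Amount FROM #staging;
        -- 3) Follow-up consistency check (a made-up rule for the sketch).
        IF EXISTS (SELECT 1 FROM dbo.TableA WHERE Amount < 0)
        BEGIN
            THROW 50001, 'Consistency check failed.', 1;
        END;
        COMMIT TRANSACTION;
    END TRY
    BEGIN CATCH
        IF @@TRANCOUNT > 0 ROLLBACK TRANSACTION;
        THROW;  -- re-raise so the caller sees the failure
    END CATCH;
END;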
If your view is small and queried frequently and your underlying tables rarely change, you don't need a "view". Instead you need a summary table holding the same result as the view, updated by triggers on each underlying table.
A trigger fires on every data modification (insert, delete, and update), but one modification statement fires it only once, whether it touches one row or one million rows. You don't need to worry about the size of an update; it is the frequency of updates that should concern you.
If you have a procedure that periodically inserts a large number of rows, or updates a large number of rows one by one, you can change the procedure to disable the triggers before the update, so the summary table is refreshed only at the end of the procedure, where you call the same "sum" procedure and re-enable those triggers.
If you HAVE TO keep the "summary" up to date all the time, even during large numbers of transactions (I doubt that is very helpful or practical if your view is slow to execute), you can disable those triggers and instead do the calculation yourself in your procedure, updating the summary table after each transaction.
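A hypothetical sketch of this pattern; the base table, summary table, and refresh procedure (dbo.BaseTable, dbo.SummaryTable, dbo.usp_RefreshSummary) are all made up:
CREATE TRIGGER trg_BaseTable_RefreshSummary
ON dbo.BaseTable
AFTER INSERT, UPDATE, DELETE
AS
BEGIN
    SET NOCOUNT ON;
    EXEC dbo.usp_RefreshSummary;  -- recomputes dbo.SummaryTable from the base tables
END;
GO
-- Around a large batch, disable the trigger and refresh once at the end:
DISABLE TRIGGER trg_BaseTable_RefreshSummary ON dbo.BaseTable;
-- ... many thousands of INSERTs/UPDATEs ...
ENABLE TRIGGER trg_BaseTable_RefreshSummary ON dbo.BaseTable;
EXEC dbo.usp_RefreshSummary;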
I want users to SELECT and modify data through a view, but grant no permissions on the base table.
SELECTing data through the view works fine.
But when I want to INSERT, UPDATE, or DELETE data through the view, SQL Server says that permissions are missing on the base table.
I have the following objects:
Table, named: dbo.test
View, named: dbo.vw_test
The table has two columns:
Column_1 IDENTITY....
Column_2 int (updateable column)
The view has the following statement:
SELECT * FROM dbo.test;
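For reference, a hypothetical DDL sketch matching that description:
CREATE TABLE dbo.test (
    Column_1 int IDENTITY(1, 1) NOT NULL,
    Column_2 int NULL  -- the updatable column
);
GO
CREATE VIEW dbo.vw_test
AS
SELECT * FROM dbo.test;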
I have created a LOGIN and a USER in this database with SELECT, INSERT, UPDATE, and DELETE permissions on this view. There is no DENY on the base table.
As said, SELECT works, but updating Column_2 does not.
Why? Do I need to grant all rights on the base table?
I hope not. I have already created an INSTEAD OF INSERT trigger on the view to test it, but it doesn't work.
What can I do to modify data through a view?
I think you've misunderstood views. When you modify data through a view, you are directly accessing all of the tables that appear in the SQL statement defining that view. This means that if you want to modify data, the modifications are applied directly to all tables represented by that view, which in the end means that you have to grant enough permissions on those tables to be able to perform those kinds of actions. Please see the reference in this link (section Before you begin -> Permissions).
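Following this answer's advice, the grants would look something like this; the user name (view_user) is hypothetical:
GRANT SELECT, INSERT, UPDATE, DELETE ON dbo.vw_test TO view_user;
GRANT INSERT, UPDATE, DELETE ON dbo.test TO view_user;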
I am looking back at Oracle (11g) development after a few years for my team's project and need help. We are trying to implement a POC where any add/drop column will drop and recreate a corresponding view. The view refers to a mapping table to produce its alias names and column selection.
My solutions:
--1. A DDL trigger that scans for ADD COLUMN / DROP COLUMN -> identifies the column names -> updates the Field_Map table -> drops the view -> creates the view with the Field_Map table alias names.
Challenge: received a recursive trigger error because of the view creation inside the DDL trigger.
--2. A DDL trigger scans for ADD COLUMN / DROP COLUMN -> updates the Field_Map table -> writes the identified column names and tables to an Audit_DDL table -> a DML trigger on the Audit_DDL table fires -> disables the DDL trigger (to avoid recursion) -> drops the view -> creates the view with the Field_Map table alias names.
Challenge: received a recursive trigger error. I think it still considers the whole flow to be one transaction; separating the CREATE VIEW under the DML trigger didn't help.
So I am thinking of alternatives:
--3. Store the trigger and tables in Schema1 and the view in Schema2. I expect this may avoid the recursion, since the CREATE VIEW will now happen in Schema2 while the trigger is built on Schema1.
--4. Create a stored procedure that scans the Audit_DDL entries (from #2) for updated tables and columns, creates the views, and marks the processed Audit_DDL entries as checked. An hourly job then runs this procedure.
Any suggestions? Thanks in advance for helping me out!
If you want to do DDL from a trigger, it would need to be asynchronous. The simplest solution would be for the DDL trigger to submit a job using the DBMS_JOB package that would execute whatever DDL you want to do. That job would not run until the triggering transaction (the ALTER statement) committed. But it would probably run a few seconds later (depending on how many other jobs are running, how many jobs are allowed, etc.). Whether you build the DDL statement you want to execute in the trigger and just pass it to the job or whether you store the information the job will need in a table and pass some sort of key (i.e. the object name) and let the job assemble the DDL statement is an implementation detail.
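A hedged sketch of that idea; the trigger name and the rebuild_view procedure it schedules are hypothetical, while DBMS_JOB.SUBMIT and the ora_* event attribute functions are standard Oracle:
CREATE OR REPLACE TRIGGER trg_after_alter
AFTER ALTER ON SCHEMA
DECLARE
    l_job BINARY_INTEGER;
BEGIN
    IF ora_dict_obj_type = 'TABLE' THEN
        -- The job becomes visible (and runnable) only after the
        -- triggering ALTER statement commits.
        DBMS_JOB.SUBMIT(
            job  => l_job,
            what => 'rebuild_view(''' || ora_dict_obj_name || ''');'
        );
    END IF;
END;
/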
That being said, this seems like an exceptionally poor architecture. If you are adding or removing a column, that is something that should be going through a proper change control process. If the change is going through change control, it should be easy enough to include the changes to the views in the same script. And the applications that depend on the views should be tested as part of the change control process. If the change is not going through change control and columns are being added to or removed from views willy-nilly, you've got much bigger problems in your business process, and you're very likely to cause one or more applications to barf in strange and wonderful ways at seemingly obscure points in time.
In SQL Server it is possible to run inserts and updates against a view, as long as the view only selects data from one table. However, deletes don't seem to work quite so well. Can anyone help out?
Take this view for example:
CREATE VIEW v_MyUpdatableView
AS
SELECT x.* FROM MyPrimaryTable x
LEFT OUTER JOIN AnotherTable y ON y.MyPrimaryTableId = x.Id
I can run updates and inserts against this view and they happily pass through to MyPrimaryTable.
However, if I run a delete I receive the following exception:
View or function 'v_MyUpdatableView' is not updatable because the modification affects multiple base tables.
Quote:
DELETE statements remove data in one or more of the member tables through the partitioned view. The DELETE statements must adhere to this rule:
DELETE statements are not allowed if there is a self-join with the same view, or any of the member tables.
Data Modification Rules - Creating a Partitioned View
I would just create a stored procedure that deletes the data from both tables. I know it's not pretty, but it would work. Alternatively, do logical deletes, where you update a column to mark the row as "deleted".
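A minimal sketch of that stored procedure, reusing the table names from the example view above:
CREATE PROCEDURE dbo.usp_DeleteFromMyPrimaryTable
    @Id int
AS
BEGIN
    SET NOCOUNT ON;
    BEGIN TRANSACTION;
    -- Remove dependent rows first, then the primary row.
    DELETE FROM AnotherTable WHERE MyPrimaryTableId = @Id;
    DELETE FROM MyPrimaryTable WHERE Id = @Id;
    COMMIT TRANSACTION;
END;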