Is it possible to restrict updating a column in SQL without using a trigger? If so, how? (I need the query.)
PS:
I mean, I have a table
CREATE TABLE MYBUDGET.tbl_Income
(
[IncomeID] INT NOT NULL IDENTITY(1,1),
[IncomeCatID] INT NOT NULL,
[IncomeAmnt] MONEY NOT NULL,
[IncomeCurrencyID] INT NOT NULL,
[ExchangeRateID] INT NOT NULL,
[IncomeAmnt_LKR] MONEY NOT NULL,
[AddedOn] DATETIME NOT NULL,
[Remark] VARCHAR(250)
)
I need to allow users to update only the [ExchangeRateID] and [IncomeAmnt_LKR] fields. All other fields cannot be updated, only inserted.
Use DENY to block updates, e.g.:
DENY UPDATE ON
MYBUDGET.tbl_Income
(
[IncomeID],
[IncomeCatID],
[IncomeAmnt],
[IncomeCurrencyID],
[AddedOn],
[Remark]
)
TO Mary, John, [Corporate\SomeUserGroup]
One should still consider how ownership chaining can override the DENYs; see gbn's answer.
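To verify the DENY is effective, you can impersonate one of the restricted users (the names here are the hypothetical ones from the example) and attempt both an allowed and a blocked update. A sketch:

```sql
-- Impersonate a restricted database user from the example above
EXECUTE AS USER = 'Mary';

-- Allowed: only the two updatable columns are touched
UPDATE MYBUDGET.tbl_Income
SET ExchangeRateID = 2, IncomeAmnt_LKR = 1500.00
WHERE IncomeID = 1;

-- Blocked: fails with a column-level permission error
UPDATE MYBUDGET.tbl_Income
SET IncomeAmnt = 99.00
WHERE IncomeID = 1;

REVERT;  -- drop the impersonation context
```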
It comes down to permissions.
You DENY UPDATE on the columns as per Conrad Frix's answer.
However, these will be ignored for db_owner/dbo and sysadmin/sa, so you need to ensure your permission model is correct.
If you have views or stored procs that write to the table, then permissions won't be checked either if the same DB user owns both the code and the table. This is known as ownership chaining.
I mention all this because there was another question 2 days ago where permissions were bypassed.
If your permission-based approach fails and you can't/won't change it, then you'll need to use triggers.
Create a view on the table that hides the columns you want to protect, and give the users access to that view.
CREATE VIEW view_name AS
SELECT column_name(s)
FROM table_name
WHERE condition
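Applied to the table from the question, a sketch might look like this (the view name is made up). Because the view is a simple single-table projection it stays updatable, so UPDATE can be granted on the view instead of the base table, and any column not exposed by the view simply cannot be updated through it:

```sql
-- Hypothetical view exposing only the key plus the two editable columns
CREATE VIEW MYBUDGET.vw_Income_Editable AS
SELECT IncomeID, ExchangeRateID, IncomeAmnt_LKR
FROM MYBUDGET.tbl_Income;
GO

-- Users update through the view only
GRANT UPDATE ON MYBUDGET.vw_Income_Editable TO Mary, John;
```

Note that users would still need INSERT on the base table for new rows, and that ownership chaining (see gbn's answer) is what lets the view work without granting UPDATE on the table itself.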
Make a VIEW from that table that omits the columns you need to protect, and give the users access to that VIEW instead.
It was a 2-part question and I got the timestamp part working correctly.
I'm on 12C and trying to do something like:
ALTER TABLE customers
ADD modified_by (USER FROM DUAL);
Basically just columns in a table that show who modified the table and at what time they did so.
I also tried
ALTER TABLE customers
ADD last_modified TIMESTAMP;
ALTER TABLE customers
ADD modified_by USER;
and other combinations of keywords that I found on this site and other sites but none of them work.
We only learned DUAL in class, but I'm looking for any way to do this.
Edit:
I now understand what was taught to me by the user with almost 1 million points.
Still unsure how to do the username.
Read this:
https://docs.oracle.com/cd/B19306_01/server.102/b14237/statviews_2114.htm#REFRN20302
and tried:
ALTER TABLE customers
ADD modified_by USERNAME;
It doesn't work; I get "invalid datatype".
Then saw this: https://www.techonthenet.com/oracle/questions/find_users.php
and tried:
ALTER TABLE customers
ADD modified_by USERNAME FROM DBA_USERS;
but I'm getting "invalid ALTER TABLE option". SQL is hard.
After you edited the question, it seems that you are somewhat closer to what you want. This:
ALTER TABLE customers ADD modified_by VARCHAR2(30);
^^^^^^^^^^^^
datatype goes here, not what you'd like
to put into this column
Then, during inserts or updates of that table, you'd put USER in there, e.g.
insert into customers (id, modified_by) values (100, USER);
Or, probably even better, set it as the default, as @a_horse_with_no_name suggested:
ALTER TABLE customers ADD modified_by VARCHAR2(30) DEFAULT USER;
so - if you don't explicitly put anything into that column, it'll be populated with the USER function's value.
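Note that a DEFAULT only fires on INSERT; to keep both audit columns current on updates too, one common approach (a sketch, not from the original answer, assuming both columns have been added) is a row-level trigger:

```sql
-- Sketch: maintain both audit columns on every insert and update
CREATE OR REPLACE TRIGGER trg_customers_audit
BEFORE INSERT OR UPDATE ON customers
FOR EACH ROW
BEGIN
  :NEW.last_modified := SYSTIMESTAMP;  -- when the row was changed
  :NEW.modified_by   := USER;          -- who changed it
END;
/
```

With this in place, callers cannot forget (or lie about) the audit values, since the trigger overwrites whatever they supply.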
If you read through the Oracle documentation on virtual columns, you will find this:
Functions in expressions must be deterministic at the time of table creation, but can subsequently be recompiled and made non-deterministic
A deterministic function is one that returns the same value when it is passed the same arguments. The expressions that you are using contain non-deterministic functions. Hence, they are not allowed.
The error that I get when I try this is:
ORA-54002: only pure functions can be specified in a virtual column expression
For some unknown reason, Oracle equates "pure" with "deterministic" in the error message. But this is saying that the function needs to be deterministic.
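To see the difference in action, here is a quick sketch (quantity and unit_price are hypothetical columns, used only to illustrate a deterministic expression):

```sql
-- Works: the expression depends only on the row's own columns,
-- so it is deterministic
ALTER TABLE customers ADD (full_price AS (quantity * unit_price));

-- Fails with ORA-54002: USER can return a different value for the
-- same row depending on who queries it, so it is not deterministic
ALTER TABLE customers ADD (modified_by AS (USER));
```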
Is there a way in Informix (v12 or higher) to retrieve the name of the current SAVEPOINT?
In Oracle there is something similar: You can name the transaction using SET TRANSACTION NAME and then select the transaction name from v$transaction:
SELECT name
FROM v$transaction
WHERE xidusn
|| '.'
|| xidslot
|| '.'
|| xidsqn = DBMS_TRANSACTION.LOCAL_TRANSACTION_ID;
That is not very straightforward, but it does the trick. Effectively we can use that to have a transaction-scoped variable (yes, that is ugly, but it has worked for years now).
We have a mechanism based on this and would like to port that to Informix. Is there a way to do that?
Of course, if there is a different mechanism providing transaction-scoped variables (so DEFINE GLOBAL is not what we are looking for), that would be helpful too, but I doubt there is one.
Thank you all for your comments so far.
Let me show the solution I have come up with. It is just a work in progress idea, but I hope it will lead somewhere:
I will need an "audit_lock" table which always contains a record for the current transaction, carrying a username and a unique transaction_id (UUID or similar). That row will be inserted when the transaction starts and deleted before it commits.
Then I have a generic audit_trail table containing the audited information.
All audited tables fill the generic audit trail table using triggers, serializing each audited column into a separate record of the generic audit trail table.
The audit_lock and the audit_trail table need to use row locking. Also to avoid read locks on the audit_lock table we need to set the isolation level to COMMITTED READ LAST COMMITTED. If your use case does not support that, the suggested pattern does not work.
Here's the DDL:
CREATE TABLE audit_lock
(
transaction_id varchar(40) primary key,
username varchar(40)
);
alter table audit_lock
lock mode(ROW);
CREATE TABLE audit_trail
(
id serial primary key,
tablename varchar(255) NOT NULL,
record_id numeric(10) NOT NULL,
username varchar(40) NOT NULL,
transaction_id varchar(40) NOT NULL,
changed_column_name varchar(40),
old_value varchar(40),
new_value varchar(40),
operation varchar(40) NOT NULL,
operation_date datetime year to second NOT NULL
);
alter table audit_trail
lock mode(ROW);
Now we need to have the audited table:
CREATE TABLE audited_table
(
id serial,
somecolumn varchar(40)
);
And the table has an insert trigger writing into the audit_trail:
CREATE PROCEDURE proc_trigger_audit_audited_table ()
REFERENCING OLD AS o NEW AS n FOR audited_table;
INSERT INTO audit_trail
(
tablename,
record_id,
username,
transaction_id,
changed_column_name,
old_value,
new_value,
operation,
operation_date
)
VALUES
(
'audited_table',
n.id,
(SELECT username FROM audit_lock),
(SELECT transaction_id FROM audit_lock),
'somecolumn',
'',
n.somecolumn,
'INSERT',
sysdate
);
END PROCEDURE;
CREATE TRIGGER audit_insert_audited_table INSERT ON audited_table REFERENCING NEW AS post
FOR EACH ROW(EXECUTE PROCEDURE proc_trigger_audit_audited_table() WITH TRIGGER REFERENCES);
Now let's use that: First the caller of the transaction needs to generate a transaction_id for himself, maybe using a UUID generation mechanism. In the example below the transaction_id is simply '4711'.
BEGIN WORK;
SET ISOLATION TO COMMITTED READ LAST COMMITTED; --should be set globally
-- Insert the audit_lock entry at the beginning of each transaction
insert into audit_lock (transaction_id, username) values ('4711', 'userA');
-- Is it there?
select * from audit_lock;
-- do data manipulation stuff
insert into audited_table (somecolumn) values ('valueA');
-- Issue that at the end of each transaction
delete from audit_lock
where transaction_id = '4711';
commit;
In a quick test, all of this worked even in simultaneous transactions. Of course, that still needs a lot of work and testing, but I currently hope that path is feasible.
Let me also add a little bit more info on the other approach we are using in Oracle:
In Oracle we are (ab)using the transaction name, to store exactly the information that in the suggestion above is stored in the audit_lock table.
The rest is the same as above. The triggers work perfectly in that specific application, even though there are of course plenty of scenarios in other applications where putting insert, delete, and update triggers on each table, generating a record for each changed column, would be nuts. In our application it has worked perfectly for ten years now, with no mentionable performance impact on the way the application is used.
In the Java application server, all code blocks that change data start by setting the transaction name, then make loads of changes to various tables, which may fire all these triggers. All of this runs in the same transaction, and since that transaction has a name containing the application user, the triggers can write that information to the audit trail table.
I know there are other approaches to the problem, and you could even do it with Hibernate features only, but our approach allows us to enforce some consistency through the database (a NOT NULL constraint on the username in the audit trail table). Since everything is done via triggers, we can let them fail if the transaction name does not contain the user (by requiring it to be in a specific format). If any other portions of the application, other applications, or ignorant administrators try to issue updates to the audited tables without setting the transaction name to the specific format, those updates will fail. This makes updates to the audited tables that do not generate the required audit entries harder (certainly not impossible; an ill-willed admin can do anything, of course).
So for all of you who are cringing now, let me quote Luís: "Might seem like a terrible idea, but I have my use case" ;)
The idea of @Luís to create a specific table in each transaction to store the information causes a locking issue in systables. Let's call that the "transaction info table". That idea did not cross my mind, since DDL causes commits in Oracle. So I tried it in Informix, but if I try to create a table called "tblX" in two simultaneous transactions, the second transaction gets a locking exception:
Cannot update system catalog (systables). [SQL State=IX000, DB Errorcode=-312]
Next: ISAM error: key value locked [SQL State=IX000, DB Errorcode=-144]
But letting all transactions use the same table as above works, as far as I tested it right now.
I'm preparing for an exam on Model-Driven Development. I came across a specific database trigger:
CREATE TRIGGER tManager_bi
FOR Manager BEFORE INSERT AS
DECLARE VARIABLE v_company_name CHAR(30);
BEGIN
SELECT M.company
FROM Manager M
WHERE M.nr = NEW.reports_to
INTO :v_company_name;
IF (NOT(NEW.company = v_company_name))
THEN EXCEPTION eReportsNotOwnCompany;
END
This trigger is designed to prevent input in which a manager reports to an outside manager, i.e. one that is not from the same company. The corresponding OCL constraint is:
context Manager
inv: self.company = self.reports_to.company
The relevant table looks like (simplified):
CREATE TABLE Manager
(
nr INTEGER NOT NULL,
company VARCHAR(50) NOT NULL,
reports_to INTEGER,
PRIMARY KEY (nr),
FOREIGN KEY (reports_to) REFERENCES Manager (nr)
);
The textbook says that this trigger will also work correctly when the newly inserted manager doesn't report to anyone (i.e. NEW.reports_to is NULL), and indeed, upon testing, it does work correctly.
But I don't understand this. If NEW.reports_to is NULL, that would mean the variable v_company_name will be empty (uninitialized? NULL?), which would then mean the comparison NEW.company = v_company_name would return false, causing the exception to be thrown, right?
What am I missing here?
(The SQL shown is supposed to be SQL:2003 compliant. The MDD tool is Cathedron, which uses Firebird as an RDBMS.)
You're missing the fact that when you compare NULL to NULL (or to any other value), the answer is NULL, not false. And negation of NULL is still NULL, so in the IF statement the ELSE part would fire (if there is one).
I suggest you read the Firebird Null Guide for better understanding it all.
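A quick sketch of this three-valued logic in action (RDB$DATABASE is Firebird's one-row system table, comparable to Oracle's DUAL):

```sql
-- With NEW.reports_to NULL, the SELECT INTO leaves v_company_name NULL, so:
--   NEW.company = NULL   evaluates to UNKNOWN (NULL), not FALSE
--   NOT (UNKNOWN)        is still UNKNOWN
-- An IF takes its THEN branch only on TRUE, so the exception never fires.
SELECT CASE
         WHEN NOT ('ACME' = CAST(NULL AS CHAR(30))) THEN 'exception path'
         ELSE 'no exception'   -- this branch is reached
       END
FROM RDB$DATABASE;
```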
P.S. Making this an answer for the sake of code highlighting.
You might want to modify your trigger to respond to both updates and inserts.
CREATE TRIGGER tManager_bi
FOR Manager BEFORE INSERT OR UPDATE AS
...
You also may avoid hand-writing the trigger at all, if you do not need that specific exception identifier.
You may just use SQL Check Constraint for that
alter table Manager
add constraint chk_ManagerNotRespondsOneself
CHECK ( NOT EXISTS (
SELECT * FROM Manager M
WHERE M.nr = reports_to
AND M.company = company
) )
Specifying custom exceptions over CHECK constraints does not look possible now... http://tracker.firebirdsql.org/browse/CORE-1852
I am working on a database, using Sql Server 2012. In our data model we have a type of User, with basic login information, name, address, etc, etc. Some of these users will be Technicians, who have all the same properties as a User, but some other properties like Route, Ship To Location, etc.
My question is: in designing a database, how does one model this situation? I have thought of 2 options.
Have a foreign key in the Technician table to the PK of the User table to link them up. My worry with this one is how I will know whether a user is a technician; I would have to run a query on the Technician table each time a user logs in.
Have a field in the User table link up with the PK of the Technician table, and if this field is null (or -1 or whatever) I know this user is not a technician. I don't see any immediate problems with this one, but I am no expert at database design.
Does either of these have an advantage, and if so, why? Currently I have 2 different tables with two completely different IDs, and they are not linked in any way, which I am now facing problems because of.
Let's say you have 3 different subclass types of the User class. You can have a column in the User table to identify the subclass type, for example UserTypeID. If there are too many possible values, you can create a new table to store these user types.
UserTypeID
1=Technician
2=Mechanic
3=Accounttant
Edit1
UserTypeID will exist in all subclass entities.
Also, from the other comments I sense a lot of concern about data getting out of sync without an explicit RI constraint. Just to be clear: this column value should not come from app code or the user; instead, the SQL API inserting the record should determine the right value based on which subclass entity is receiving the insert.
For example, the Pr_InsertUser API inserts a new technician. This insert API first looks up the UserTypeID for technician, inserts a record into the User class table, and gets the userid. Then it passes the userid and UserTypeID to the Technician subclass and calls another private SQL API, Pr_Insert_Technician, to insert the additional attributes.
So the point I am trying to make is that since SQL does not support an explicit FK from multiple tables to a single table, this has to be taken care of in the SQL API.
Declare #user Table
(
userid int
,UserTypeID Tinyint
,username sysname
,userAddress sysname
)
Declare #Technician Table
(
userid int
,UserTypeID Tinyint
,attr1 sysname
,attr2 sysname
)
Declare #Mechanic Table
(
userid int
,UserTypeID Tinyint
,attr3 sysname
,attr4 sysname
)
Declare #Accounttant Table
(
userid int
,UserTypeID Tinyint
,attr2 sysname
,attr4 sysname
)
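The insert API the answer describes could be sketched like this (the procedure, table, and column names are illustrative, not from the original post; they mirror the table-variable sketches above):

```sql
-- Sketch: insert the parent row first, then the subtype row, in one transaction
CREATE PROCEDURE Pr_InsertTechnician
    @username    sysname,
    @userAddress sysname,
    @attr1       sysname,
    @attr2       sysname
AS
BEGIN
    SET NOCOUNT ON;
    DECLARE @userid int,
            @userTypeId tinyint = 1;  -- 1 = Technician (looked up, not passed in)

    BEGIN TRANSACTION;
        INSERT INTO dbo.[User] (UserTypeID, username, userAddress)
        VALUES (@userTypeId, @username, @userAddress);

        SET @userid = SCOPE_IDENTITY();  -- PK of the new parent row

        -- Private subtype API could be called here; inlined for brevity
        INSERT INTO dbo.Technician (userid, UserTypeID, attr1, attr2)
        VALUES (@userid, @userTypeId, @attr1, @attr2);
    COMMIT TRANSACTION;
END
```

Routing all inserts through such a procedure is what substitutes for the FK-from-multiple-tables constraint that SQL cannot express directly.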
You may want to familiarize yourself with the way ORM's do it.
Even if you don't use an ORM, it will lay out some of the options.
http://nhibernate.info/doc/nh/en/#inheritance
http://ayende.com/blog/3941/nhibernate-mapping-inheritance
I am trying to take a stored procedure that copies parent/child/grandchild rows into the same tables with a new unique identifier. The purpose of this is to produce a duplicate 'Order' with 'Order Lines' and 'Order Line Attributes'. The procedure currently in place is done using cursors, and I'd like to try and create a set based version.
One issue I've hit early on is that automatic numbering in a human-friendly format is done in a stored procedure.
DECLARE #sales_order_id nvarchar(50)
EXEC GetAutoNumber 'Order', #sales_order_id output
The execution is done within the cursor as it loops through the order Lines of a single order. Is there any way to call this on the fly? I thought of using a table value function but can't because the stored procedure updates the autonumber table to create the new value.
Ideally I would want to craft an insert statement that automatically retrieves/updates the AutoNumber and can be applied across multiple rows simultaneously, for example:
INSERT INTO ORDER (
OrderId, -- Guid
OrderNumber, -- Human Friendly value that needs Autoincremented
...
)
SELECT
NEWID(),
???
FROM ORDER
WHERE OrderId = #OrderToBeCopied
I'm using SQL Server 2008, any suggestions?
EDIT: One reason that an identity column would not work is that the table these autonumbers are being stored in serves multiple entities and their prefixes. For instance, here is the DDL for the autonumber table:
CREATE TABLE [dbo].[pt_autonumber_settings](
[autonumber_id] [uniqueidentifier] NULL,
[autonumber_prefix] [nvarchar](10) NULL,
[autonumber_type] [nvarchar](50) NULL,
[changed_by] [nvarchar](30) NOT NULL,
[change_date] [datetime] NOT NULL,
[origin_by] [nvarchar](30) NOT NULL,
[origin_date] [datetime] NOT NULL,
[autonumber_currentvalue] [nvarchar](50) NULL
) ON [PRIMARY]
So the end result from the stored procedure is the newest autonumber_id for a certain autonumber_type, and it also retrieves the autonumber_prefix and concatenates the two together.
Is there some reason you can't use an IDENTITY column?
Please read edit below as my original answer isn't satisfactory
I'm not entirely sure how your OrderNumber is incremented, but you could certainly use ROW_NUMBER() for this. Check out the MSDN documentation on it.
Assuming you just wanted a number allocated to each Order Id, you'd have something like:
SELECT NEWID(),
ROW_NUMBER() OVER (ORDER BY <whatever column you need to order by to make the row number meaningful>) As MyFriendlyId
FROM ORDER
WHERE OrderId = #OrderToBeCopied
If it needs to have some sort of initial seed value, then you can always use
ROW_NUMBER() OVER (ORDER BY <whatever column you need to order by to make the row number meaningful>) + 1000 As MyFriendlyId -- or whatever your seed value should be
Edit:
I just re-read your question, and I suspect you want OrderNumber to be unique across all records. I initially misread it as something like an incremental line number for the detail line items of the OrderId.
My solution in this case won't be any good, and I'd be more inclined to go with the other answer's suggestion of an identity column.
You could potentially select MAX(OrderNumber) at the beginning and then use that in conjunction with ROW_NUMBER(), but this is dangerous: it is likely to be a dirty read and won't guarantee uniqueness if someone performs a concurrent insert. If you did have a unique constraint and two concurrent inserts read the same MAX(OrderNumber), you are likely to face unique constraint violations... so yeah... why can't you use an identity column? :)
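If the numbers really must come from pt_autonumber_settings, one way to avoid the MAX() race is to reserve a whole block of values in a single atomic UPDATE and then spread them over the copied rows with ROW_NUMBER(). This is only a sketch: it assumes autonumber_currentvalue stores just the numeric part as a string, and the column references in the UPDATE read the pre-update value (standard T-SQL behavior):

```sql
DECLARE @first int, @prefix nvarchar(10), @rows int;

-- How many new numbers do we need?
SELECT @rows = COUNT(*) FROM [Order] WHERE OrderId = @OrderToBeCopied;

-- Reserve @rows numbers atomically: capture the old counter and advance it
-- in one statement, so concurrent callers cannot grab the same block
UPDATE dbo.pt_autonumber_settings
SET @first  = CAST(autonumber_currentvalue AS int) + 1,
    @prefix = autonumber_prefix,
    autonumber_currentvalue =
        CAST(CAST(autonumber_currentvalue AS int) + @rows AS nvarchar(50))
WHERE autonumber_type = 'Order';

-- Hand out the reserved block, one number per copied row
INSERT INTO [Order] (OrderId, OrderNumber /* , other columns */)
SELECT NEWID(),
       @prefix + CAST(@first - 1 + ROW_NUMBER() OVER (ORDER BY OrderId)
                      AS nvarchar(50))
       /* , other columns */
FROM [Order]
WHERE OrderId = @OrderToBeCopied;
```

The single-statement UPDATE plays the role the cursor-based GetAutoNumber procedure plays today, but for a whole set of rows at once.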