I am working on a database using SQL Server 2012. In our data model we have a User type with basic login information, name, address, and so on. Some of these users will be Technicians, who have all the same properties as a User plus some additional ones like Route, Ship To Location, etc.
My question is: in designing a database, how does one model this situation? I have thought of two options.
Have a foreign key in the Technician table referencing the PK of the User table to link them up. My worry with this one is how I will know whether a user is a technician; I would have to run a query against the Technician table every time a user logs in.
Have a field in the User table referencing the PK of the Technician table, where NULL (or -1, or whatever) means the user is not a technician. I don't see any immediate problems with this one, but I am no expert at database design.
Does either of these have an advantage, and if so, why? Currently I have two tables with two completely unrelated IDs, not linked in any way, and I am now facing problems because of that.
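To make the two options concrete, here is a rough sketch (assuming a User table with a UserID primary key; all other names are made up):
-- Option 1: the Technician table carries an FK to User.
CREATE TABLE Technician
(
    UserID int NOT NULL PRIMARY KEY REFERENCES [User] (UserID),
    Route varchar(100) NULL,
    ShipToLocation varchar(100) NULL
);
-- Option 2: the User table carries a nullable FK to a free-standing
-- Technician table; NULL means "not a technician":
-- ALTER TABLE [User] ADD TechnicianID int NULL REFERENCES Technician (TechnicianID);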
Let's say you have 3 different subclass types of the User class. You can have a column in the User table to identify the subclass type, for example UserTypeID. If the possible values are too many, you can create a new table to store these user types (see the sketch after the list below).
UserTypeID
1=Technician
2=Mechanic
3=Accountant
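If you go with the separate lookup table, a minimal sketch might look like this (the table and column names are made up):
CREATE TABLE UserType
(
    UserTypeID tinyint PRIMARY KEY,
    UserTypeName varchar(40) NOT NULL UNIQUE
);
INSERT INTO UserType (UserTypeID, UserTypeName)
VALUES (1, 'Technician'), (2, 'Mechanic'), (3, 'Accountant');
-- The User table then references it:
-- UserTypeID tinyint NOT NULL REFERENCES UserType (UserTypeID)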
Edit 1
UserTypeID will exist in all subclass entities.
Also, from the other comments I see a lot of concern about data getting out of sync without an explicit RI constraint. I just want to make sure that this column's value does not come from app code or from the user; instead, the SQL API inserting the record should work out the right value based on which subclass entity is getting the inserted record.
For example, a Pr_InsertUser API inserts a new technician. This insert API first finds the UserTypeID for Technician, inserts a record into the User class table, and gets back the UserID. It then passes the UserID and UserTypeID on to the Technician subclass and calls another, private SQL API, Pr_Insert_Technician, to insert the remaining attributes.
So the point I am trying to make is that since SQL does not enforce this kind of relationship between multiple subclass tables and a single parent table with an explicit FK, it has to be taken care of in the SQL API.
DECLARE @user TABLE
(
userid int
,UserTypeID tinyint
,username sysname
,userAddress sysname
)
DECLARE @Technician TABLE
(
userid int
,UserTypeID tinyint
,attr1 sysname
,attr2 sysname
)
DECLARE @Mechanic TABLE
(
userid int
,UserTypeID tinyint
,attr3 sysname
,attr4 sysname
)
DECLARE @Accountant TABLE
(
userid int
,UserTypeID tinyint
,attr2 sysname
,attr4 sysname
)
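A sketch of that API pattern (Pr_InsertUser and Pr_Insert_Technician are the names from the description above; the UserType lookup, the IDENTITY key, and the parameter list are assumptions):
CREATE PROCEDURE Pr_Insert_Technician -- private: only called from Pr_InsertUser
    @UserID int, @UserTypeID tinyint, @attr1 sysname, @attr2 sysname
AS
BEGIN
    INSERT INTO Technician (userid, UserTypeID, attr1, attr2)
    VALUES (@UserID, @UserTypeID, @attr1, @attr2);
END
GO
CREATE PROCEDURE Pr_InsertUser
    @username sysname, @userAddress sysname, @attr1 sysname, @attr2 sysname
AS
BEGIN
    DECLARE @UserTypeID tinyint, @UserID int;
    -- The API, not the caller, decides the subclass type.
    SELECT @UserTypeID = UserTypeID FROM UserType WHERE UserTypeName = 'Technician';
    INSERT INTO [User] (UserTypeID, username, userAddress)
    VALUES (@UserTypeID, @username, @userAddress);
    SET @UserID = SCOPE_IDENTITY(); -- assumes [User].userid is an IDENTITY column
    EXEC Pr_Insert_Technician @UserID, @UserTypeID, @attr1, @attr2;
END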
You may want to familiarize yourself with the way ORMs do it. Even if you don't use an ORM, it will lay out some of the options.
http://nhibernate.info/doc/nh/en/#inheritance
http://ayende.com/blog/3941/nhibernate-mapping-inheritance
Related
Is there a way in Informix (v12 or higher) to retrieve the name of the current SAVEPOINT?
In Oracle there is something similar: You can name the transaction using SET TRANSACTION NAME and then select the transaction name from v$transaction:
SELECT name
FROM v$transaction
WHERE xidusn || '.' || xidslot || '.' || xidsqn = DBMS_TRANSACTION.LOCAL_TRANSACTION_ID;
That is not very straightforward, but it does the trick. Effectively we can use it as a transaction-scoped variable (yes, that is ugly, but it has worked for years now).
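For reference, the naming side is a single statement issued right after the transaction starts (the name format shown here is just an example):
SET TRANSACTION NAME 'userA:4711';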
We have a mechanism based on this and would like to port that to Informix. Is there a way to do that?
Of course, if there is a different mechanism providing transaction-scoped variables (so DEFINE GLOBAL is not what we are looking for), that would be helpful too, but I doubt there is one.
Thank you all for your comments so far.
Let me show the solution I have come up with. It is just a work in progress idea, but I hope it will lead somewhere:
I will need an "audit_lock" table which always contains a record for the current transaction, carrying information about it, especially a username and a unique transaction_id (a UUID or similar). That row is inserted when the transaction starts and deleted before committing.
Then I have a generic audit_trail table containing the audited information.
All audited tables fill the generic audit_trail table using triggers, serializing each audited column into a separate record of that table.
The audit_lock and audit_trail tables need to use row locking. Also, to avoid read locks on the audit_lock table, we need to set the isolation level to COMMITTED READ LAST COMMITTED. If your use case does not support that, the suggested pattern does not work.
Here's the DDL:
CREATE TABLE audit_lock
(
transaction_id varchar(40) primary key,
username varchar(40)
);
alter table audit_lock
lock mode(ROW);
CREATE TABLE audit_trail
(
id serial primary key,
tablename varchar(255) NOT NULL,
record_id numeric(10) NOT NULL,
username varchar(40) NOT NULL,
transaction_id varchar(40) NOT NULL,
changed_column_name varchar(40),
old_value varchar(40),
new_value varchar(40),
operation varchar(40) NOT NULL,
operation_date datetime year to second NOT NULL
);
alter table audit_trail
lock mode(ROW);
Now we need to have the audited table:
CREATE TABLE audited_table
(
id serial,
somecolumn varchar(40)
);
And the table has an insert trigger writing into the audit_trail:
CREATE PROCEDURE proc_trigger_audit_audited_table ()
REFERENCING OLD AS o NEW AS n FOR audited_table;
INSERT INTO audit_trail
(
tablename,
record_id,
username,
transaction_id,
changed_column_name,
old_value,
new_value,
operation,
operation_date
)
VALUES
(
'audited_table',
n.id,
(SELECT username FROM audit_lock),
(SELECT transaction_id FROM audit_lock),
'somecolumn',
'',
n.somecolumn,
'INSERT',
sysdate
);
END PROCEDURE;
CREATE TRIGGER audit_insert_audited_table INSERT ON audited_table REFERENCING NEW AS post
FOR EACH ROW(EXECUTE PROCEDURE proc_trigger_audit_audited_table() WITH TRIGGER REFERENCES);
Now let's use that: first, the caller of the transaction needs to generate a transaction_id, for example using a UUID generation mechanism. In the example below the transaction_id is simply '4711'.
BEGIN WORK;
SET ISOLATION TO COMMITTED READ LAST COMMITTED; --should be set globally
-- Issue the generation of the audit_lock entry at the beginning of each transaction
insert into audit_lock (transaction_id, username) values ('4711', 'userA');
-- Is it there?
select * from audit_lock;
-- do data manipulation stuff
insert into audited_table (somecolumn) values ('valueA');
-- Issue that at the end of each transaction
delete from audit_lock
where transaction_id = '4711';
commit;
In a quick test, all of this worked even in simultaneous transactions. Of course, it still needs a lot of work and testing, but I currently hope that path is feasible.
Let me also add a little bit more info on the other approach we are using in Oracle:
In Oracle we are (ab)using the transaction name to store exactly the information that, in the suggestion above, is stored in the audit_lock table.
The rest is the same as above. The triggers work perfectly in that specific application, even though there are of course plenty of scenarios in other applications where putting insert, delete and update triggers on each table, generating records for each changed column, would be nuts. In our application it has worked perfectly for ten years now and has no mentionable performance impact on the way the application is used.
In the Java application server, all code blocks that change data start by setting the transaction name and then do loads of changes to various tables, which may fire all these triggers. All of these run in the same transaction, and since the transaction has a name containing the application user, the triggers can write that information to the audit trail table.
I know there are other approaches to the problem, and you could even do it with Hibernate features alone, but our approach lets us enforce some consistency through the database (a NOT NULL constraint on the username in the audit trail table). Since everything is done via triggers, we can let them fail if the transaction name does not contain the user (by requiring it to be in a specific format). If any other portion of the application, another application, or an ignorant administrator tries to update the audited tables without setting the transaction name in the specific format, those updates will fail. This makes updates to the audited tables that do not generate the required audit entries harder (certainly not impossible; an ill-willed admin can do anything, of course).
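A sketch of that check inside an Oracle trigger body (the 'user=%' name format is an assumption, reading v$transaction requires the corresponding privilege, and NO_DATA_FOUND handling is omitted for brevity):
DECLARE
    l_txn_name v$transaction.name%TYPE;
BEGIN
    -- Fetch the current transaction's name, as in the query above.
    SELECT name
    INTO l_txn_name
    FROM v$transaction
    WHERE xidusn || '.' || xidslot || '.' || xidsqn = DBMS_TRANSACTION.LOCAL_TRANSACTION_ID;
    -- Refuse the DML if the name does not follow the agreed format.
    IF l_txn_name IS NULL OR l_txn_name NOT LIKE 'user=%' THEN
        RAISE_APPLICATION_ERROR(-20001, 'transaction name not set: refusing unaudited change');
    END IF;
END;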
So, to all of you who are cringing now, let me quote Luís: might seem like a terrible idea, but I have my use case ;)
The idea of @Luís to create a specific table in each transaction to store the information causes a locking issue in systables. Let's call that the "transaction info table". That idea had not crossed my mind, since DDL causes commits in Oracle. So I tried it in Informix, but if I try to create a table called "tblX" in two simultaneous transactions, the second transaction gets a locking exception:
Cannot update system catalog (systables). [SQL State=IX000, DB Errorcode=-312]
Next: ISAM error: key value locked [SQL State=IX000, DB Errorcode=-144]
But letting all transactions use the same table as above works, as far as I have tested it so far.
What is the query to get a list of schemas names in a specific database in Informix?
Schemas are not commonly used in Informix databases and have very little trackability within a database. The CREATE SCHEMA notation is supported because it was part of SQL-89. The AUTHORIZATION clause is used to determine the (default) 'owner' of the objects created with the CREATE SCHEMA statement. There is nothing to stop a single user running the CREATE SCHEMA statement multiple times, either consecutively or at widely different times (in any given database within an Informix instance).
CREATE SCHEMA AUTHORIZATION "pokemon"
CREATE TABLE gizmo (s SERIAL NOT NULL PRIMARY KEY, v VARCHAR(20) NOT NULL)
CREATE TABLE widget(t SERIAL NOT NULL PRIMARY KEY, d DATETIME YEAR TO SECOND NOT NULL)
;
CREATE SCHEMA AUTHORIZATION "pokemon"
CREATE TABLE object (u SERIAL NOT NULL PRIMARY KEY, i INTEGER NOT NULL)
CREATE TABLE "pikachu".complain (C SERIAL NOT NULL PRIMARY KEY, v VARCHAR(255) NOT NULL)
;
After the CREATE SCHEMA statement executes, there is no way of tracking that either pair of these tables were created together as part of the same schema; there's no way to know that "pikachu".complain was part of a CREATE SCHEMA statement executed on behalf of "pokemon". There is no DROP SCHEMA statement that would necessitate such support.
A schema belongs to a user. You can list all available users from the sysusers system catalog:
SELECT username FROM "informix".sysusers;
Since only the DBA and Resource privileges allow a user to issue a CREATE SCHEMA statement, we could restrict the query like this:
SELECT username FROM "informix".sysusers WHERE usertype IN ('D', 'R');
Another solution is to list only users that have actually created tables; for that, you can query the systables system catalog and list the distinct owners:
SELECT DISTINCT owner FROM "informix".systables;
As commented by @JonathanLeffler, a user could have been granted RESOURCE privileges, created a table, and then been 'demoted' to CONNECT privileges. That user would still own the table. Hence the second solution is the most accurate.
How can I search/select a field in more than 2,000 databases in SQL Server?
I have a main database with a table called 'Keyword', where I store keywords in the 'keywordtitle' field. When a new user registers, a new database is created for that user, and the user uses a keyword.
Now the situation is: how can I find out how many users use a keyword? Here keywordtitle is the primary key.
Thanks.
Your question is a little bit fuzzy, but if this is a one-shot and all your databases are on the same instance, you could do something like:
declare @t table(col int)
insert @t
exec sp_MSforeachdb 'use ?; select 1 from keyword where keywordtitle = ''<yourkeyword>'''
select count(*) from @t
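Since many of those per-user databases may not even contain a keyword table, a slightly more defensive variant (a sketch; '<yourkeyword>' stays a placeholder) skips them and counts per database:
exec sp_MSforeachdb '
use [?];
if object_id(''keyword'') is not null
    select db_name() as dbname, count(*) as hits
    from keyword
    where keywordtitle = ''<yourkeyword>''';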
Is it possible to restrict updating a column in SQL without using a trigger? If so, how? (I need the query.)
PS:
I mean, I have a table
CREATE TABLE MYBUDGET.tbl_Income
(
[IncomeID] INT NOT NULL IDENTITY(1,1),
[IncomeCatID] INT NOT NULL,
[IncomeAmnt] MONEY NOT NULL,
[IncomeCurrencyID] INT NOT NULL,
[ExchangeRateID] INT NOT NULL,
[IncomeAmnt_LKR] MONEY NOT NULL,
[AddedOn] DATETIME NOT NULL,
[Remark] VARCHAR(250)
)
I need to allow users to update only the [ExchangeRateID] and [IncomeAmnt_LKR] fields. All other fields must not be updatable, only insertable.
Use DENY to block updates, e.g.
DENY UPDATE ON
MYBUDGET.tbl_Income
(
[IncomeID],
[IncomeCatID],
[IncomeAmnt] ,
[IncomeCurrencyID] ,
[AddedOn] ,
[Remark]
)
TO Mary, John, [Corporate\SomeUserGroup]
One should still consider how ownership chaining can override the DENYs; see gbn's answer.
It comes down to permissions.
You DENY UPDATE on the columns as per Conrad Frix's answer.
However, these will be ignored for db_owner/dbo and sysadmin/sa, so you need to ensure your permission model is correct.
If you have views or stored procs that write to the table, then permissions won't be checked either if the same DB user owns both the code and the table. This is known as ownership chaining.
I mention all this because there was another question two days ago where permissions were bypassed.
If your permission-based approach fails and you can't/won't change it, then you'll need to use triggers.
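A sketch of the ownership-chaining hole (procedure name assumed): if dbo owns both this procedure and tbl_Income, the column-level DENY is never checked when Mary runs the procedure.
CREATE PROCEDURE MYBUDGET.usp_SetIncomeAmnt
    @IncomeID int,
    @IncomeAmnt money
AS
UPDATE MYBUDGET.tbl_Income
SET IncomeAmnt = @IncomeAmnt -- denied directly, but allowed through the proc
WHERE IncomeID = @IncomeID;
GO
GRANT EXECUTE ON MYBUDGET.usp_SetIncomeAmnt TO Mary;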
Create a view over the table that hides the columns you want to protect, and give the users access to that view.
CREATE VIEW view_name AS
SELECT column_name(s)
FROM table_name
WHERE condition
Make a VIEW from that table that omits the columns you need to protect, and give the users access to that VIEW instead.
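Applied to the table above, a sketch (view name assumed) would expose only the key and the two updatable columns and grant UPDATE on the view rather than the base table:
CREATE VIEW MYBUDGET.vw_Income_Editable
AS
SELECT IncomeID, ExchangeRateID, IncomeAmnt_LKR
FROM MYBUDGET.tbl_Income;
GO
GRANT SELECT, UPDATE ON MYBUDGET.vw_Income_Editable TO Mary, John, [Corporate\SomeUserGroup];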
I have a number of tables with text columns that contain only a few distinct values. I often weigh the benefits (primarily reduced row size) of extracting the possible values into a lookup table and storing a small ID in the main table against the amount of work required to do so.
For the columns that have a fixed set of values known in advance (enumerated values), this isn't so bad, but the more painful case is when I know I have a small set of unique values, but I don't know in advance what they will be.
For example, if I have a table that stores log information on different URLs in a web application:
CREATE TABLE [LogData]
(
ResourcePath varchar(1024) NOT NULL,
EventTime datetime NOT NULL,
ExtraData varchar(MAX) NOT NULL
)
I waste a lot of space by repeating the resource path for every request. There will be a very large number of duplicate entries in this table. I usually end up with something like this:
CREATE TABLE [LogData]
(
ResourcePathId smallint NOT NULL,
EventTime datetime NOT NULL,
ExtraData varchar(MAX) NOT NULL
)
CREATE TABLE [ResourcePaths]
(
ResourcePathId smallint NOT NULL,
ResourceName varchar(1024) NOT NULL
)
In this case, however, I no longer have a simple way to append data to the LogData table. I have to do a lookup on the ResourcePaths table to get the ID, add it if it is missing, and only then can I perform the actual insert. This makes the code much more complicated and changes my write-only logging function into something requiring a transaction against the lookup table.
Am I missing something obvious?
If you have a unique index on ResourceName, the lookup should be very fast even on a big table. However, it has disadvantages. For instance, if you log a lot of data and have to archive it off periodically, wanting to archive the previous month or year of LogData, you are forced to keep all of ResourcePaths. You can come up with solutions for all of that.
Yes: insert from the existing data, doing the lookup as part of the insert.
Given @resource, @time and @data as inputs:
INSERT INTO LogData (ResourcePathId, EventTime, ExtraData)
SELECT ResourcePathId, @time, @data
FROM ResourcePaths
WHERE ResourceName = @resource;
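For the "add it if it is missing" part, a sketch of a single logging procedure (the name LogEvent is made up, it assumes ResourcePathId is changed to an IDENTITY column, and a concurrent first insert of the same path would still need retry or locking logic):
CREATE PROCEDURE LogEvent
    @resource varchar(1024),
    @time datetime,
    @data varchar(MAX)
AS
BEGIN
    DECLARE @pathId smallint;
    SELECT @pathId = ResourcePathId
    FROM ResourcePaths
    WHERE ResourceName = @resource;
    IF @pathId IS NULL
    BEGIN
        -- First time this path is seen: create the lookup row.
        INSERT INTO ResourcePaths (ResourceName) VALUES (@resource);
        SET @pathId = SCOPE_IDENTITY();
    END
    INSERT INTO LogData (ResourcePathId, EventTime, ExtraData)
    VALUES (@pathId, @time, @data);
END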