How to relate 3 tables depending on event - sql

I have a table with information about different types of events that can be performed by people in two categories: civil and worker.
For each of them I have a respective table:
civil{ civil_id, name, age, telephone, ... } with primary key civil_id
worker{ worker_id, name, duty, department, ... } with primary key worker_id
The event table has a list of all possible events:
event{ type_of_event } with primary key type_of_event
I am then planning to store information in another table
with the event type, the date, and the person that did the job (worker or civil):
id   event_type   date       person
------------------------------------
1    type1        12-12-12   x
2    type1        05-12-10   y
3    type2        02-12-12   y
Now, with this design I do not know how to record which person did the job. If there were only one kind of person (say, civil), I would simply store the civil_id in the person field of this last table. But how do I know whether it was a civil or a worker? Do I need another intermediate table?

Generally, there are two ways to model this type of situation.
Using Exclusive Foreign Keys
In the event table, both civil_id and worker_id are NULL-able, but there is also a constraint ensuring exactly one of them is non-NULL at any given time:
CHECK (
    (civil_id IS NOT NULL AND worker_id IS NULL)
    OR (civil_id IS NULL AND worker_id IS NOT NULL)
)
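For example, a minimal sketch of the last table under this approach might look like the following (the table name event_log and the exact column names are assumptions based on the question, not a definitive design):

CREATE TABLE event_log (
    id         INTEGER PRIMARY KEY,
    event_type VARCHAR(30) NOT NULL REFERENCES event (type_of_event),
    event_date DATE NOT NULL,
    civil_id   INTEGER NULL REFERENCES civil (civil_id),    -- set for civil persons
    worker_id  INTEGER NULL REFERENCES worker (worker_id),  -- set for workers
    -- Exactly one of the two person references must be present.
    CHECK (
        (civil_id IS NOT NULL AND worker_id IS NULL)
        OR (civil_id IS NULL AND worker_id IS NOT NULL)
    )
);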
Using Inheritance¹
For more on inheritance, take a look at the "Subtype Relationships" chapter in the ERwin Methods Guide and at this post.
¹ Also known as category, subtyping, subclassing, generalization hierarchy, etc.
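Under the inheritance approach, a common supertype table holds the shared identity, and civil and worker become subtype tables keyed by the same id; the event log then only needs a single person reference. A rough sketch, assuming a new person supertype table (names are illustrative, not a prescribed design):

-- Supertype holding the shared identity; person_type discriminates the subtype.
CREATE TABLE person (
    person_id   INTEGER PRIMARY KEY,
    person_type CHAR(1) NOT NULL CHECK (person_type IN ('C', 'W'))
);

-- Subtype tables share the supertype's key.
CREATE TABLE civil (
    person_id INTEGER PRIMARY KEY REFERENCES person (person_id),
    name      VARCHAR(100),
    age       INTEGER,
    telephone VARCHAR(30)
);

CREATE TABLE worker (
    person_id  INTEGER PRIMARY KEY REFERENCES person (person_id),
    name       VARCHAR(100),
    duty       VARCHAR(100),
    department VARCHAR(100)
);

-- The event log can now reference person_id directly, regardless of subtype.
CREATE TABLE event_log (
    id         INTEGER PRIMARY KEY,
    event_type VARCHAR(30) NOT NULL REFERENCES event (type_of_event),
    event_date DATE NOT NULL,
    person_id  INTEGER NOT NULL REFERENCES person (person_id)
);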

In this case, you cannot set up a foreign key because you have multiple parent tables. To allow fast searches and avoid a full table scan, define an index on the person column of the event table and join the tables using LEFT JOIN, e.g.:
SELECT ...,
       COALESCE(b.name, c.name) AS personname
FROM event a
LEFT JOIN civil b
       ON a.person = b.civil_id
LEFT JOIN worker c
       ON a.person = c.worker_id
To add the index:
ALTER TABLE event ADD INDEX (person)

Related

Join Table vs Foreign Key/Ref

Imagine I have two tables and they have a 1-to-many relationship. Is it better to have a join table storing the relationship, or to issue a foreign key in one of these tables? Take a look at these two situations:
Situation A:
Table 1: CreditCard
Table 2: Person
It seems to make sense to me to put the creditCard_id as part of the Person table.
Situation B:
Table 1: Order
Table 2: Person
This time I think I would put the order_id and person_id in a join table.
Am I making a mistake in the above? Is there a standard/better way of determining this?
For a 1-to-many relationship, people usually put the foreign key in the heavier table, i.e. the "many" table.
So in your example, it goes in both the CreditCard and Order tables; by doing so you avoid duplicating data.
To see which placement is better, compare the two options:
FK goes in the "many" table:

Table People:
ID   NAME
1    A
2    B

Table CreditCard:
ID   PEOPLE_ID
1    1
2    1

FK goes in the "1" table:

Table People:
ID   NAME   CreditCard_ID
1    A      1
1    A      2
2    B      3

Table CreditCard:
ID
1
2
3
Note: See how the ID and NAME are repeated (ID=1, NAME=A) in the second example; that is what happens if you put the FK in the wrong table.
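For clarity, here is a minimal sketch of the recommended layout with the foreign key on the "many" side (table and column names follow the example above; the data types are assumptions):

CREATE TABLE People (
    ID   INTEGER PRIMARY KEY,
    NAME VARCHAR(100) NOT NULL
);

-- The "many" side carries the foreign key, so each person's data is stored only once.
CREATE TABLE CreditCard (
    ID        INTEGER PRIMARY KEY,
    PEOPLE_ID INTEGER NOT NULL REFERENCES People (ID)
);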
I would make three tables: a person table with all their info (name, address, etc.), a credit card table with all its info (expiration date, security number?, etc.), and then another table connecting them with the PersonID and CreditCardID. But what do I know, I'm still in school lol, so wait for someone else to answer you.
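For reference, a rough sketch of that join-table variant (names and types are assumptions; note that for a strict 1-to-many relationship the foreign-key layout above is usually sufficient):

CREATE TABLE Person (
    PersonID INTEGER PRIMARY KEY,
    Name     VARCHAR(100) NOT NULL,
    Address  VARCHAR(200)
);

CREATE TABLE CreditCard (
    CreditCardID   INTEGER PRIMARY KEY,
    ExpirationDate DATE NOT NULL
);

-- Join table connecting people to credit cards.
CREATE TABLE PersonCreditCard (
    PersonID     INTEGER NOT NULL REFERENCES Person (PersonID),
    CreditCardID INTEGER NOT NULL REFERENCES CreditCard (CreditCardID),
    PRIMARY KEY (PersonID, CreditCardID)
);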

DB design: Should I use constraints within a table or a new table

I inherited a large existing DB and I'd like to know if I should refactor it, because 95% of my queries require joining at least 4 tables.
The DB has 5 tables that have only an ID and a Name column, with fewer than 20 rows each. I assume the author did this so he could change the names there and not have to change them in the other tables, but many of those tables are only referenced in one other table. Should I refactor these small two-column tables into the larger table and add a constraint to the column so users can't input incorrect names, instead of having separate tables?
Resist that urge. From your description I can deduce that the existing design is solid and probably well normalized. Your refactoring may actually undo a good DB structure.
If you are bothered by writing a lot of joins in your queries, I would suggest creating views to mitigate the boilerplate.
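For instance, a sketch of such a view (the table and column names here are hypothetical, just to show the joins being hidden behind one object):

-- Hide the recurring four-way join behind a view.
CREATE VIEW v_order_details AS
SELECT o.order_id,
       c.customer_name,
       s.status_name,
       p.product_name
FROM orders o
JOIN customers      c ON c.customer_id = o.customer_id
JOIN order_statuses s ON s.status_id   = o.status_id
JOIN products       p ON p.product_id  = o.product_id;

Queries can then simply select from v_order_details instead of repeating the joins.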
...the author did this so he could change the names there and not change them in the other tables...
That is evidence of good design and exactly what you should strive for in a normalized database.
No.
Your DB is normalized and proper.
You also save space, lookup time, and indexing overhead by storing an int rather than a varchar name.
Small tables are optimized away if they are properly keyed.
Sounds like what you have are lookup tables. Let me tell you what happens when you decide to put all lookups in one table with an additional column to specify which type it is. First, instead of joining to 4 different tables in one query, you have to join to the same table 4 times. There ends up being more contention for the resources in the "one table to rule them all". Further, you lose FK constraints. That means you eventually lose data integrity. So if one lookup is state, nothing will prevent you from putting the id values of a different lookup, such as customer type, into the stateid column of the customeraddress table. When the lookups are separate, you can enforce that relationship.
Suppose instead of one big table you decide to have a constraint on the column for customer type. Constraints are now enforced, but you have a problem when they need to change: you have to alter the database in order to add a new type. Again, usually this is a very bad idea, especially when the table gets large.
Short story: Replacing strings with ID numbers has nothing to do with normalization. Using natural keys in your case might improve performance. In my tests, queries using natural keys were faster by 1 or 2 orders of magnitude.
You might have accepted an answer too quickly.
The DB has 5 tables that only have an ID and Name column with less than 20 rows.
I'm assuming these tables have a structure something like this.
create table a (
    a_id integer primary key,
    a_name varchar(30) not null unique
);
create table b (...
    -- Just like a
create table your_data (
    yet_another_id integer primary key,
    a_id integer not null references a (a_id),
    b_id integer not null references b (b_id),
    c_id integer not null references c (c_id),
    d_id integer not null references d (d_id),
    unique (a_id, b_id, c_id, d_id),
    -- other columns go here
);
And it's obvious that your_data will require four joins (at least) to get usable information from it.
But the names in tables a, b, c, and d are unique (ahem), so you can use the unique names as targets for foreign key references. You could rewrite the table your_data like this.
create table your_data (
    yet_another_id integer primary key,
    a_name varchar(30) not null references a (a_name),
    b_name varchar(30) not null references b (b_name),
    c_name varchar(30) not null references c (c_name),
    d_name varchar(30) not null references d (d_name),
    unique (a_name, b_name, c_name, d_name),
    -- other columns go here
);
Replacing id numbers with strings doesn't change the normal form. (And replacing strings with id numbers doesn't have anything to do with normalization.) If the original table were in 5NF, then this rewrite will be in 5NF, too.
But what about performance? Aren't id numbers plus joins supposed to be faster than strings?
I tested that by inserting 20 rows into each of the four tables a, b, c, and d. Then I generated a Cartesian product to fill one test table written with id numbers, and another using the names. (So, 160K rows in each.) I updated the statistics, and ran a couple of queries.
explain analyze
select a.a_name, b.b_name, c.c_name, d.d_name
from your_data_id
inner join a on (a.a_id = your_data_id.a_id)
inner join b on (b.b_id = your_data_id.b_id)
inner join c on (c.c_id = your_data_id.c_id)
inner join d on (d.d_id = your_data_id.d_id)
...
Total runtime: 808.472 ms
explain analyze
select a_name, b_name, c_name, d_name
from your_data
Total runtime: 132.098 ms
The query using id numbers takes a lot longer to execute. Next, I used a WHERE clause on all four columns, which returns a single row.
explain analyze
select a.a_name, b.b_name, c.c_name, d.d_name
from your_data_id
inner join a on (a.a_id = your_data_id.a_id and a.a_name = 'a one')
inner join b on (b.b_id = your_data_id.b_id and b.b_name = 'b one')
inner join c on (c.c_id = your_data_id.c_id and c.c_name = 'c one')
inner join d on (d.d_id = your_data_id.d_id and d.d_name = 'd one')
...
Total runtime: 14.671 ms
explain analyze
select a_name, b_name, c_name, d_name
from your_data
where a_name = 'a one' and b_name = 'b one' and c_name = 'c one' and d_name = 'd one';
...
Total runtime: 0.133 ms
The tables using id numbers took about 100 times longer to query.
Tests used PostgreSQL 9.something.
My advice: Try before you buy. I mean, test before you invest. Try rewriting your data table to use natural keys. Think carefully about ON UPDATE CASCADE and ON DELETE CASCADE. Test performance with representative sample data. Edit your original question and let us know what you found.

Underlying rows in Group By

I have a table with a certain number of columns and a primary key column (suppose OriginalKey). I perform a GROUP BY on a certain sub-set of those columns and store them in a temporary table with primary key (suppose GroupKey). At a later stage, I may need to get more details about one or more of those groupings (which can be found in the temporary table) i.e. I need to know which were the rows from the original table that formed that group. Simply put, I need to know the mappings between GroupKey and OriginalKey. What's the best way to do this? Thanks in advance.
Example:
Table Student (
    StudentID INT PRIMARY KEY,
    Level INT,       -- Grade/Class/Level depending on which country you are from
    HomeTown TEXT,
    Gender CHAR
)
INSERT INTO TempTable SELECT HomeTown, Gender, COUNT(*) AS NumStudents FROM Student GROUP BY HomeTown, Gender
At a later date, I would like to find all towns that have more than 50 male students and get the details of every one of those students.
How about joining the 2 tables using the GroupKey, which, you say, are the same?
Or how about doing:
select * from OriginalTable where
GroupKey in (select GroupKey from my_temp_table)
You'd need to store the fields you grouped on in your temporary table, so you can join back to the original table. e.g. if you grouped on fieldA, fieldB, and fieldC, you'd need something like:
select original.id
from original
inner join temptable on
temptable.fieldA = original.fieldA and
temptable.fieldB = original.fieldB and
temptable.fieldC = original.fieldC
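Applied to the Student example from the question, a rough sketch might look like this (the temporary table keeps the grouping columns HomeTown and Gender so it can be joined back; CREATE TABLE ... AS is used for brevity, and the exact temp-table syntax depends on your database):

-- Build the summary, keeping the grouping columns.
CREATE TABLE TempTable AS
SELECT HomeTown, Gender, COUNT(*) AS NumStudents
FROM Student
GROUP BY HomeTown, Gender;

-- Later: details of every student behind the towns with more than 50 male students.
SELECT s.*
FROM Student s
INNER JOIN TempTable t
        ON t.HomeTown = s.HomeTown
       AND t.Gender   = s.Gender
WHERE t.Gender = 'M'
  AND t.NumStudents > 50;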

database model structure

I have a groups column. Groups have different types, stored in group_types (buyers, sellers, referee). Only when a group is of type buyer does it have another, more specialized type, such as electrical or mechanical.
I'm a bit puzzled about how to store this in a database.
Can someone suggest a database structure?
Thanks
Store your group_types as a hierarchical table (with nested sets or a parent-child model):
Parent-child:
typeid   parent   name
1        0        Buyers
2        0        Sellers
3        0        Referee
4        1        Electrical
5        1        Mechanic
SELECT *
FROM mytable
WHERE group IN
    (
    SELECT typeid
    FROM group_types
    START WITH typeid = 1
    CONNECT BY parent = PRIOR typeid
    )
will select all buyers in Oracle.
Nested sets:
typeid   lower   upper   Name
1        1       2       Buyers
2        3       3       Sellers
3        4       4       Referee
4        1       1       Electrical
5        2       2       Mechanic
SELECT *
FROM group_types
JOIN mytable
    ON group BETWEEN lower AND upper
WHERE typeid = 1
will select all buyers in any database.
Nested sets can be implemented anywhere and are more performant if you don't need hierarchical ordering or frequent updates on group_types.
Parent-child is easily implementable in Oracle and SQL Server, and with a little effort in MySQL. It allows easy structure changes and hierarchical ordering.
See this article in my blog on how to implement it in MySQL:
Hierarchical queries in MySQL
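As an aside, on MySQL 8.0 or later (which postdates that post), the parent-child query can also be written with a recursive CTE instead of the techniques in the article; a rough sketch, reusing the table and column names above:

-- Select everything whose group type is Buyers (typeid 1) or one of its descendants.
WITH RECURSIVE buyer_types AS (
    SELECT typeid
    FROM group_types
    WHERE typeid = 1
    UNION ALL
    SELECT gt.typeid
    FROM group_types gt
    JOIN buyer_types bt ON gt.parent = bt.typeid
)
SELECT m.*
FROM mytable m
WHERE m.`group` IN (SELECT typeid FROM buyer_types);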
You could possibly store additional combined types, like buyer_mechanical or buyer_electrical.
You could try:
Group
    group_id
    group_name
    group_parent_id
with entries (1, buyers, 0), (2, sellers, 0), (3, referee, 0), (4, electrical, 1), (5, mechanical, 1)
This has the advantage of being infinitely scalable, so each subgroup can have as many subgroups as you want.
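A minimal sketch of that self-referencing table (named group_table here because GROUP is a reserved word in most databases; 0 marks a top-level group, as in the entries above):

CREATE TABLE group_table (
    group_id        INTEGER PRIMARY KEY,
    group_name      VARCHAR(50) NOT NULL,
    group_parent_id INTEGER NOT NULL DEFAULT 0    -- 0 = top-level group
);

INSERT INTO group_table (group_id, group_name, group_parent_id) VALUES
    (1, 'buyers',     0),
    (2, 'sellers',    0),
    (3, 'referee',    0),
    (4, 'electrical', 1),
    (5, 'mechanical', 1);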
Typically, you have extension tables. These are simply additional tables in your schema which hold additional information linked to the main table by some type of key.
For example, let's say your main table is:
People
    PersonId int, PK
    GroupTypeId int, FK to GroupTypes
    Name varchar(100)

GroupTypes
    GroupTypeId int, PK
    GroupTypeName varchar(20)

BuyerTypes
    BuyerTypeId int, PK
    BuyerTypeName varchar(20)

BuyerData
    PersonId int, FK
    BuyerTypeId int, FK
Additionally, BuyerData would have a composite primary key (PK) on PersonId and BuyerTypeId.
When pulling Buyer data out, you could use a query like
SELECT *
FROM People P
INNER JOIN BuyerData BD on (P.PersonId = BD.PersonId)
INNER JOIN BuyerTypes BT on (BD.BuyerTypeId = BT.BuyerTypeId)
grouptype: ID, Name ('buyers', 'sellers', 'referee')
group: GroupTypeID, ID, Name ('electrical' and 'mechanical' if grouptypeid == 'buyers')
contact: GroupTypeID (NOT NULL), GroupID (NULL), other attributes
Table Group is populated with records for GroupTypes as required.
Contact.GroupID can be NULL since a GroupType need not have any Groups.
The UI has to take care of Group selection. You can have a trigger check the group/type logic.

What's the best way to store (and access) historical 1:M relationships in a relational database?

Hypothetical example:
I have Cars and Owners. Each Car belongs to one (and only one) Owner at a given time, but ownership may be transferred. Owners may, at any time, own zero or more cars. What I want is to store the historical relationships in a MySQL database such that, given an arbitrary time, I can look up the current assignment of Cars to Owners.
I.e. At time X (where X can be now or anytime in the past):
Who owns car Y?
Which cars (if any) does owner Z own?
Creating an M:N table in SQL (with a timestamp) is simple enough, but I'd like to avoid a correlated sub-query as this table will get large (and, hence, performance will suffer). Any ideas? I have a feeling that there's a way to do this by JOINing such a table with itself, but I'm not terribly experienced with databases.
UPDATE: I would like to avoid using both a "start_date" and "end_date" field per row as this would necessitate a (potentially) expensive look-up each time a new row is inserted. (Also, it's redundant).
Make a third table called CarOwner with fields for carid, ownerid, start_date, and end_date.
When a car is bought, fill in the first three and check the table to make sure no one else is listed as the current owner. If someone is, update that record, setting its end_date to the new owner's start_date.
To find current owner:
select carid, ownerid from CarOwner where end_date is null
To find owner at a point in time:
select carid, ownerid from CarOwner where start_date < getdate()
and end_date > getdate()
getdate() is MS SQL Server specific, but every database has some function that returns the current date - just substitute.
Of course if you also want additional info from the other tables, you would join to them as well.
select co.carid, co.ownerid, o.owner_name, c.make, c.Model, c.year
from CarOwner co
JOIN Car c on co.carid = c.carid
JOIN Owner o on o.ownerid = co.ownerid
where co.end_date is null
I've found that the best way to handle this sort of requirement is to just maintain a log of VehicleEvents, one of which would be ChangeOwner. In practice, you can derive the answers to all the questions posed here - at least as accurately as you are collecting the events.
Each record would have a timestamp indicating when the event occurred.
One benefit of doing it this way is that the minimum amount of data can be added in each event, but the information about the Vehicle can accumulate and evolve.
Also, with the timestamp, events can be added after the fact (as long as the timestamp accurately reflects when the event occurred).
Trying to maintain historical state for something like this in any other way I've tried leads to madness. (Maybe I'm still recovering. :D)
BTW, the distinguishing characteristic here is probably that it's a Time Series or Event Log, not that it's 1:m.
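A rough sketch of what such an event log and an "owner at time X" lookup might look like (the VehicleEvents table, its columns, and the :car_id / :as_of parameters are hypothetical):

CREATE TABLE VehicleEvents (
    event_id   INTEGER PRIMARY KEY,
    car_id     INTEGER NOT NULL,
    event_type VARCHAR(20) NOT NULL,   -- e.g. 'ChangeOwner'
    owner_id   INTEGER NULL,           -- the new owner, for ChangeOwner events
    event_time TIMESTAMP NOT NULL
);

-- Owner of car :car_id at time :as_of = the latest ChangeOwner event at or before :as_of.
SELECT ve.owner_id
FROM VehicleEvents ve
WHERE ve.car_id = :car_id
  AND ve.event_type = 'ChangeOwner'
  AND ve.event_time = (SELECT MAX(event_time)
                       FROM VehicleEvents
                       WHERE car_id = :car_id
                         AND event_type = 'ChangeOwner'
                         AND event_time <= :as_of);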
Given your business rule that each car belongs to at least one owner (i.e. owners exist before they are assigned to a car) and your operational constraint that the table may grow large, I'd design the schema as follows:
(generic SQL-92 syntax:)
CREATE TABLE Cars
(
    CarID          integer not null default autoincrement,
    OwnerID        integer not null,
    CarDescription varchar(100) not null,
    CreatedOn      timestamp not null default current timestamp,
    PRIMARY KEY (CarID),
    FOREIGN KEY (OwnerID) REFERENCES Owners (OwnerID)
)

CREATE TABLE Owners
(
    OwnerID   integer not null default autoincrement,
    OwnerName varchar(100) not null,
    PRIMARY KEY (OwnerID)
)

CREATE TABLE HistoricalCarOwners
(
    CarID      integer not null,
    OwnerID    integer not null,
    OwnedFrom  timestamp null,
    OwnedUntil timestamp null,
    PRIMARY KEY (CarID, OwnerID),
    FOREIGN KEY (OwnerID) REFERENCES Owners (OwnerID),
    FOREIGN KEY (CarID) REFERENCES Cars (CarID)
)
I personally would not touch the third table from my client application but would simply let the database do the work - and maintain data integrity - with ON UPDATE and ON DELETE triggers on the Cars table to populate the HistoricalCarOwners table whenever a car changes owners (i.e. whenever an UPDATE is committed on the OwnerID column) or a car is deleted.
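A sketch of what such a trigger could look like, in MySQL-style syntax (an illustration under the schema above, not the author's exact implementation; SQL Server or Oracle syntax would differ):

DELIMITER //
CREATE TRIGGER trg_cars_owner_change
BEFORE UPDATE ON Cars
FOR EACH ROW
BEGIN
    IF NEW.OwnerID <> OLD.OwnerID THEN
        -- Close the previous ownership period...
        UPDATE HistoricalCarOwners
           SET OwnedUntil = CURRENT_TIMESTAMP
         WHERE CarID = OLD.CarID
           AND OwnedUntil IS NULL;
        -- ...and open a new one for the new owner.
        INSERT INTO HistoricalCarOwners (CarID, OwnerID, OwnedFrom, OwnedUntil)
        VALUES (NEW.CarID, NEW.OwnerID, CURRENT_TIMESTAMP, NULL);
    END IF;
END //
DELIMITER ;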
With the above schema, selecting the current car owner is trivial and selecting historical car owners is a simple as
select ownerid, ownername
from owners o
inner join historicalcarowners hco
    on hco.ownerid = o.ownerid
where hco.carid = :arg_id
  and :arg_timestamp between ownedfrom and owneduntil
order by ...
HTH, Vince
If you really do not want to have a start and end date you can use just a single date and do a query like the following.
SELECT *
FROM CarOwner co
WHERE co.CarId = #CarId
  AND co.TransferDate <= #AsOfDate
  AND NOT EXISTS (SELECT *
                  FROM CarOwner co2
                  WHERE co2.CarId = #CarId
                    AND co2.TransferDate <= #AsOfDate
                    AND co2.TransferDate > co.TransferDate)
or a slight variation
SELECT *
FROM Car ca
JOIN CarOwner co
    ON ca.Id = co.CarId
   AND co.TransferDate = (SELECT MAX(TransferDate)
                          FROM CarOwner
                          WHERE CarId = #CarId
                            AND TransferDate < #AsOfDate)
WHERE co.CarId = #CarId
These solutions are functionally equivalent to Javier's suggestion, but depending on the database you are using one may be faster than the other.
However, depending on your read-versus-write ratio, you may find the performance better if you redundantly update the end date in the associative entity.
Why not have a transaction table? It would contain the car ID, the FROM owner, the TO owner, and the date the transaction occurred.
Then all you do is find the first transaction for a car before the desired date.
To find cars owned by Owner 253 on March 1st:
SELECT * FROM transactions WHERE ownerToId = 253 AND date > '2009-03-01'
The cars table can have a column called ownerID. You can then simply:
1. select car from cars inner join owners on cars.ownerid = owners.ownerid where ownerid = y
2. select car from cars where ownerid = z
Not the exact syntax, but simple pseudo-code.