I am using PostgreSQL 9.5 (though an upgrade to, say, 9.6 is possible).
I have a permissions table:
CREATE TABLE public.permissions
(
id integer NOT NULL DEFAULT nextval('permissions_id_seq'::regclass),
item_id integer NOT NULL,
item_type character varying NOT NULL,
created_at timestamp without time zone NOT NULL,
updated_at timestamp without time zone NOT NULL,
CONSTRAINT permissions_pkey PRIMARY KEY (id)
)
-- skipping indices declaration, but they would be present
-- on item_id, item_type
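For illustration, the skipped declarations might look something like this (the index names are assumed, not taken from the real schema):
CREATE INDEX index_permissions_on_item_id
ON public.permissions USING btree (item_id);
CREATE INDEX index_permissions_on_item_type
ON public.permissions USING btree (item_type);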
And 3 tables for many-to-many associations
-companies_permissions (+indices declaration)
CREATE TABLE public.companies_permissions
(
id integer NOT NULL DEFAULT nextval('companies_permissions_id_seq'::regclass),
company_id integer,
permission_id integer,
CONSTRAINT companies_permissions_pkey PRIMARY KEY (id),
CONSTRAINT fk_rails_462a923fa2 FOREIGN KEY (company_id)
REFERENCES public.companies (id) MATCH SIMPLE
ON UPDATE NO ACTION ON DELETE NO ACTION,
CONSTRAINT fk_rails_9dd0d015b9 FOREIGN KEY (permission_id)
REFERENCES public.permissions (id) MATCH SIMPLE
ON UPDATE NO ACTION ON DELETE NO ACTION
)
CREATE INDEX index_companies_permissions_on_company_id
ON public.companies_permissions
USING btree
(company_id);
CREATE INDEX index_companies_permissions_on_permission_id
ON public.companies_permissions
USING btree
(permission_id);
CREATE UNIQUE INDEX index_companies_permissions_on_permission_id_and_company_id
ON public.companies_permissions
USING btree
(permission_id, company_id);
-permissions_user_groups (+indices declaration)
CREATE TABLE public.permissions_user_groups
(
id integer NOT NULL DEFAULT nextval('permissions_user_groups_id_seq'::regclass),
permission_id integer,
user_group_id integer,
CONSTRAINT permissions_user_groups_pkey PRIMARY KEY (id),
CONSTRAINT fk_rails_c1743245ea FOREIGN KEY (permission_id)
REFERENCES public.permissions (id) MATCH SIMPLE
ON UPDATE NO ACTION ON DELETE NO ACTION,
CONSTRAINT fk_rails_e966751863 FOREIGN KEY (user_group_id)
REFERENCES public.user_groups (id) MATCH SIMPLE
ON UPDATE NO ACTION ON DELETE NO ACTION
)
CREATE UNIQUE INDEX index_permissions_user_groups_on_permission_and_user_group
ON public.permissions_user_groups
USING btree
(permission_id, user_group_id);
CREATE INDEX index_permissions_user_groups_on_permission_id
ON public.permissions_user_groups
USING btree
(permission_id);
CREATE INDEX index_permissions_user_groups_on_user_group_id
ON public.permissions_user_groups
USING btree
(user_group_id);
-permissions_users (+indices declaration)
CREATE TABLE public.permissions_users
(
id integer NOT NULL DEFAULT nextval('permissions_users_id_seq'::regclass),
permission_id integer,
user_id integer,
CONSTRAINT permissions_users_pkey PRIMARY KEY (id),
CONSTRAINT fk_rails_26289d56f4 FOREIGN KEY (user_id)
REFERENCES public.users (id) MATCH SIMPLE
ON UPDATE NO ACTION ON DELETE NO ACTION,
CONSTRAINT fk_rails_7ac7e9f5ad FOREIGN KEY (permission_id)
REFERENCES public.permissions (id) MATCH SIMPLE
ON UPDATE NO ACTION ON DELETE NO ACTION
)
CREATE INDEX index_permissions_users_on_permission_id
ON public.permissions_users
USING btree
(permission_id);
CREATE UNIQUE INDEX index_permissions_users_on_permission_id_and_user_id
ON public.permissions_users
USING btree
(permission_id, user_id);
CREATE INDEX index_permissions_users_on_user_id
ON public.permissions_users
USING btree
(user_id);
I will have to run an SQL query like this many times:
SELECT
"permissions".*,
"permissions_users".*,
"companies_permissions".*,
"permissions_user_groups".*
FROM "permissions"
LEFT OUTER JOIN
"permissions_users" ON "permissions_users"."permission_id" = "permissions"."id"
LEFT OUTER JOIN
"companies_permissions" ON "companies_permissions"."permission_id" = "permissions"."id"
LEFT OUTER JOIN
"permissions_user_groups" ON "permissions_user_groups"."permission_id" = "permissions"."id"
WHERE
(companies_permissions.company_id = <company_id> OR
permissions_users.user_id IN (<user_ids>) OR
permissions_user_groups.user_group_id IN (<user_group_ids>)) AND
permissions.item_type = 'Topic'
Let's say we have about 10,000+ permissions and a similar number of records in the other tables.
Do I need to worry about performance?
I mean... I have 3 LEFT OUTER JOINs, and it should return results pretty fast (say <200ms).
I was thinking about declaring one "polymorphic" table, something like:
CREATE TABLE public.permissables
(
id integer NOT NULL DEFAULT nextval('permissables_id_seq'::regclass),
permission_id integer,
resource_id integer NOT NULL,
resource_type character varying NOT NULL,
created_at timestamp without time zone NOT NULL,
updated_at timestamp without time zone NOT NULL,
CONSTRAINT permissables_pkey PRIMARY KEY (id)
)
-- skipping indices declaration, but they would be present
Then I could run a query like this:
SELECT
permissions.*,
permissables.*
FROM permissions
LEFT OUTER JOIN
permissables ON permissables.permission_id = permissions.id
WHERE
permissions.item_type = 'Topic' AND
((permissables.resource_id IN (<user_ids>) AND permissables.resource_type = 'User') OR
(permissables.resource_id = <company_id> AND permissables.resource_type = 'Company') OR
(permissables.resource_id IN (<user_groups_ids>) AND permissables.resource_type = 'UserGroup'))
QUESTIONS:
Which option is better/faster? Maybe there is a better way to do this?
a) 4 tables (permissions, companies_permissions, user_groups_permissions, users_permissions)
b) 2 tables (permissions, permissables)
Do I need to declare indexes other than btree on permissions.item_type?
Do I need to run VACUUM ANALYZE a few times per day to keep the indexes effective (in both options)?
EDIT1:
SQLFiddle examples:
wildplasser's suggestion (from a comment), not working: http://sqlfiddle.com/#!15/9723f8/1
Original query (4 tables): http://sqlfiddle.com/#!15/9723f8/2
(I also removed backticks that were in the wrong places, thanks @wildplasser.)
I'd recommend abstracting all access to your permissions system into a couple of model classes. Unfortunately, I've found that permission systems like this do sometimes end up being performance bottlenecks, and that it is sometimes necessary to significantly refactor your data representation.
So, my recommendation is that try to keep the permission-related queries isolated in a few classes and try to keep the interface to those classes independent of the rest of the system.
Examples of good approaches here are what you have above. You don't actually join against the topics table; you already have the topic IDs you care about when you're constructing the permissions query.
Examples of bad interfaces would be class interfaces that make it easy to join the permissions tables into arbitrary other SQL.
I understand you asked the question in terms of SQL rather than a particular framework on top of SQL, but from the rails constraint names it looks like you are using such a framework, and I think taking advantage of it will be useful to your future code maintainability.
In the 10,000-row case, I think either approach will work fine.
I'm not actually sure the approaches will be all that different. If you think about the query plans generated, then assuming you're getting a small number of rows from the table, the join might be handled with a loop against each table in exactly the same way that the OR query might be handled, assuming the index is likely to return a small number of rows.
I have not fed a plausible data set into Postgres to check whether that's what it actually does. I have reasonably high confidence that Postgres is smart enough to do so if it makes sense.
The polymorphic approach does give you a bit more control, and if you run into performance problems you may want to check whether moving to it will help.
If you choose the polymorphic approach, I'd recommend writing code to go through and check to make sure that your data is consistent. That is, make sure that resource_type and resource_id corresponds to actual resources that exist in your system.
I'd make that recommendation in any case where application concerns force you to denormalize your data such that database constraints are not sufficient to enforce consistency.
If you start running into performance problems, here are the sorts of things you may need to do in the future:
Create a cache in your application mapping objects (such as topics) to the set of permissions for those objects.
Create a cache in your application caching all the permissions a given user has (including the groups they are a member of) for the objects in your application.
Materialize the user group permissions. That is, create a materialized view that combines the user group permissions with the user permissions and the user group memberships.
In my experience the thing that really kills performance of permission systems is when you add something like permitting one group to be a member of another group. At that point you very quickly get to a point where you need caching or materialized views.
Unfortunately, it's really hard to give more specific advice without actually having your data and looking at real query plans and real performance. I think that if you prepare for future changes you'll be fine though.
Maybe it's an obvious answer, but I think the option with the three join tables should be just fine. SQL databases are good at join operations, and you have 10,000 records; this is not a large amount of data at all, so I am not sure what makes you think there will be a performance problem.
With proper indexes (btree should be OK) it should work fast; in fact, you can go a bit further, generate sample data for your tables, and see how your query actually performs on a realistic amount of data.
I also don't think you'll need to worry about things like running VACUUM manually.
Regarding option two, the polymorphic table: it may not be a good idea, as you now have a single resource_id field that can point to different tables, which is a source of problems (for example, due to a bug you could have a record with resource_type = 'User' whose resource_id points to a Company; the table structure doesn't prevent it).
One more note: you don't say anything about the relations between User, UserGroup and Company. If they are all related too, it may be possible to fetch permissions using just the user id(s), also joining groups and companies to users.
And one more: you don't need ids in many-to-many tables. Nothing bad happens if you have them, but it's enough to have permission_id and user_id and make them a composite primary key.
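For example, a surrogate-free variant of permissions_users could look like this (a sketch; the FK actions are omitted):
CREATE TABLE public.permissions_users
(
permission_id integer NOT NULL REFERENCES public.permissions (id),
user_id integer NOT NULL REFERENCES public.users (id),
PRIMARY KEY (permission_id, user_id)
);
The composite primary key also makes the separate unique index on (permission_id, user_id) redundant.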
You can try to denormalize the many-to-many relations into a permission field on each of the 3 tables (user, user_group, company).
You can use this field to store the permissions in JSON format, and use it only for reading (SELECTs). You can still use the many-to-many tables for changing the permissions of specific users, groups and companies; just write a trigger on them that updates the denormalized permission field whenever there is a change in the many-to-many table. With this solution you will still get fast query execution time on the SELECTs, while keeping the relationships normalized and in compliance with database standards.
Here is an example script, that I have written for mysql for a one-to-many relation, but a similar thing can be applied for your case as well:
https://github.com/martintaleski/mysql-denormalization/blob/master/one-to-many.sql
I have used this approach several times, and it makes sense when the SELECT statements outnumber and are more important than the INSERT, UPDATE and DELETE statements.
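In PostgreSQL, a rough sketch of the same idea could look like this (the column, function and trigger names are all hypothetical; to_jsonb and jsonb_agg are available from 9.5; the sketch ignores UPDATEs that move a row to a different user):
ALTER TABLE public.users ADD COLUMN permissions jsonb NOT NULL DEFAULT '[]';

CREATE OR REPLACE FUNCTION refresh_user_permissions()
RETURNS trigger AS $$
DECLARE
uid integer;
BEGIN
-- On DELETE only OLD is available; otherwise use NEW.
IF TG_OP = 'DELETE' THEN
uid := OLD.user_id;
ELSE
uid := NEW.user_id;
END IF;
-- Rebuild the denormalized JSON array for the affected user.
UPDATE public.users u
SET permissions = COALESCE(
(SELECT jsonb_agg(to_jsonb(p))
FROM public.permissions p
JOIN public.permissions_users pu ON pu.permission_id = p.id
WHERE pu.user_id = uid),
'[]'::jsonb)
WHERE u.id = uid;
RETURN NULL;
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER permissions_users_refresh
AFTER INSERT OR UPDATE OR DELETE ON public.permissions_users
FOR EACH ROW EXECUTE PROCEDURE refresh_user_permissions();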
In case you do not change your permissions often, materialized views might speed up your search enormously. I will prepare an example based on your setup later today and post it. Afterwards, we can do some benchmarking.
Nevertheless, a materialized view requires a refresh after the underlying data changes. So that solution can be fast, but it will only speed up your queries if the base data does not change too often.
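As a rough sketch of the shape such a view could take (the view name is made up):
CREATE MATERIALIZED VIEW topic_permissions AS
SELECT p.id AS permission_id,
p.item_id,
cp.company_id,
pu.user_id,
pug.user_group_id
FROM permissions p
LEFT JOIN companies_permissions cp ON cp.permission_id = p.id
LEFT JOIN permissions_users pu ON pu.permission_id = p.id
LEFT JOIN permissions_user_groups pug ON pug.permission_id = p.id
WHERE p.item_type = 'Topic';

-- after permissions change:
REFRESH MATERIALIZED VIEW topic_permissions;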
Related
I have my user table (pseudo-SQL, because I use an ORM and must support several different DB types):
id: INTEGER, PK, AUTOINCREMENT
UUID: BINARY(16) (inserted by an update; it's a hash(id))
I am currently using id for FK in all other tables.
However, in my REST API I have to serve information using the UUID, which causes a problem later when querying.
Should I:
use the UUID for the FKs instead?
just look up the id for a given UUID each time (fast thanks to the cache mechanism after a while)?
In general, it is better to use the auto-incremented id for the foreign key reference rather than some other combination of unique columns.
One important reason is that indexes on a single integer are more efficient than indexes on other column types -- if for no other reason than the index being smaller, so it occupies less disk and less memory. Also, there is additional overhead to storing the longer UUID in secondary tables.
This is not the only consideration. Another consideration is that you could change the UUID, if necessary, without changing the foreign key references. For instance, you may wake up one day and say "that id has to start with AAA". You can alter the table and update the table and be done with it -- or you could worry about foreign key references as well. Or, you might add an organization column and decide that the unique key is a combination of the UUID and organization. These operations are much harder/slower if the UUID is being used as a foreign key reference.
When you have composite primary keys (more than one column), using the auto-incremented id is an even better idea. In this case, using the id for joins prevents mistakes where one of the join conditions might be left out.
As you point out, looking up the UUID for a given id should be a fast operation with the correct indexes. There may be some borderline cases where you would not want to have an id, but in general, it is a good idea.
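A minimal sketch of the lookup approach (the users/orders tables and the :uuid_from_api parameter are hypothetical):
CREATE UNIQUE INDEX users_uuid_key ON users (uuid);

-- Translate the UUID once, then join on integer ids as usual:
SELECT o.*
FROM orders o
JOIN users u ON u.id = o.user_id
WHERE u.uuid = :uuid_from_api;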
Is there a best practice here, and is it closest to one of these examples?
CREATE TABLE TABLE1
(
ID NUMBER(18) CONSTRAINT TABLE1_PK PRIMARY KEY,
NAME VARCHAR2(10) CONSTRAINT NAME_NN NOT NULL
);
or
CREATE TABLE TABLE1
(
ID NUMBER(18),
NAME VARCHAR2(10) CONSTRAINT NAME_NN NOT NULL
);
ALTER TABLE TABLE1 ADD CONSTRAINT TABLE1_PK
PRIMARY KEY (ID)
USING INDEX (CREATE UNIQUE INDEX IDX_TABLE1_PK ON TABLE1 (ID));
Is either scenario going to result in a better outcome in general? The first option is much more readable, but perhaps there are reasons why the latter is preferable.
Definitely personal preference. I prefer to do as much as I can in the single CREATE TABLE statement simply because I find it more concise. Most everything I need is described right there.
Sometimes that's not possible. Say you have two tables with references to each other, or you want to load up a table with a bunch of data first, so you add the additional indexes after the table is loaded.
You'll find that many tools that create schemas from DBs will separate them (mostly because it's always correct: define all the tables, then define all of the relationships).
But personally, if practical, I find having it all in one place is best.
When building a deployment script that is eventually going to be run by someone else later on, I prefer splitting the scripts a fair bit. If something goes wrong, it's a bit easier to tell from the logs what exactly failed.
My table creation script will usually only have NOT NULL constraints. The PK, unique and FK constraints will be added afterwards.
This is a minor point though, and I don't have anything in particular against combining it all in one big CREATE TABLE statement.
You may find that your workplace already has a standard in place. e.g. my current client requires separate scripts for the CREATE TABLE, then more separate scripts for constraints, indexes, etc.
The exception, of course, is index-organized tables which must have a PK constraint declared upfront.
It's a personal preference to define any attributes or defaults for a field in the actual CREATE statement. One thing I noticed is that your second statement won't work, since you haven't specified that the id field is NOT NULL.
I guess it's a personal best practice for readability that I specify the table's primary key upfront.
Another thing to consider when creating the table is how you want items identified, uniquely or composite. ALTER TABLE is good for creating composite keys after the fact.
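For instance (the table and column names here are made up):
ALTER TABLE ORDER_ITEMS ADD CONSTRAINT ORDER_ITEMS_PK
PRIMARY KEY (ORDER_ID, LINE_NO);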
I am new to SQL and I have a basic question about performance.
I want to create a users database which will store information about my users:
Id
Login name
Password
Real name
Later I want to perform SELECT queries on Id, Login name and Real name.
What would be the best design for this database, what tables and what keys should I create?
If it's only about those 4 fields, it looks like just one table: primary key on ID, unique index on LoginName. You may not want to store the password itself, only a hash.
Depending on your queries, create different indexes. Furthermore, you may not need the ID field at all.
UPDATE:
Creating an index on certain column(s) enables the database to optimize its SQL statements. Given your user table:
USER
USER_ID BIGINT NOT NULL
LOGIN_ID VARCHAR(<size>) NOT NULL
PASSWORD VARCHAR(<size>) NOT NULL
NAME VARCHAR(<size>) NOT NULL
CONSTRAINT PK_USER PRIMARY KEY ( USER_ID )
The databases I know will automatically create an index on the primary key, which in fact means that the database maintains an optimized lookup structure; see Wikipedia for further details.
Now say, you want to query users by LOGIN_ID which is a fairly common use case, I guess, you can create another index like:
CREATE INDEX I_USER_1 ON USER ( LOGIN_ID asc )
The above index will optimize the select * from USER where LOGIN_ID='foo'. Furthermore, you can create a unique index instead, assuming that you do not want duplicate LOGIN_IDs:
CREATE UNIQUE INDEX UI_USER_1 ON USER ( LOGIN_ID asc )
That's the whole story: if you want to optimize a query on the user's real name (NAME), you just create another index:
CREATE INDEX I_USER_2 ON USER ( NAME asc )
Just to add to @homes' answer: you should work out what sort of queries you will be running and then optimize for those. For example, if you are doing a lot of writes and not as many reads, having lots of indexes can cause performance issues. It's a bit like tuning an engine for a car: are you going to be going quickly down a drag strip, or are you tuning it for driving long distances?
Anyway, you also asked about the NAME column. If you are going to be matching on a varchar column, it might be worth investigating the use of FULLTEXT indexes.
http://msdn.microsoft.com/en-us/library/ms187317.aspx
This allows you to do optimized searches on names where you might be matching parts of a name and the like. As @homes' answer said, it really does depend on what your queries and intent are when writing the query.
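For SQL Server (per the linked docs), a sketch might look like this, assuming the USER table and PK_USER key index from above and an invented catalog name:
CREATE FULLTEXT CATALOG ft_catalog;
CREATE FULLTEXT INDEX ON dbo.[USER] ([NAME])
KEY INDEX PK_USER ON ft_catalog;

-- matches names starting with 'ann', e.g. 'Anna', 'Annette':
SELECT * FROM dbo.[USER] WHERE CONTAINS([NAME], '"ann*"');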
Might be worth creating the table and using the query execution plan in something like SQL Server Management Studio against your queries to see what impact your indexes have on the number of rows and the sort of lookups that are happening.
http://www.sql-server-performance.com/2006/query-execution-plan-analysis/
I've put some thought into this, and I haven't been able to come up with anything better. So let me describe my problem, my current solution, and what I'd like to improve. I've also got a few concerns, like whether or not my design is actually normalized or not.
I'm making a database where I'd like to store VS Match information for tournaments. For simplicity, lets just pretend they are chess matches. 1v1. My current design is as follows:
CREATE TABLE matches(
match_id bigserial PRIMARY KEY,
tournament_id int NOT NULL,
step int NOT NULL,
winner match_winner,
(etc. etc.)
UNIQUE(match_id, tournament_id, step), -- Actual primary key
FOREIGN KEY (tournament_id) references tournaments(tournament_id)
ON DELETE RESTRICT
ON UPDATE CASCADE
);
CREATE TABLE match_players(
match_id bigint NOT NULL,
tournament_id int NOT NULL,
step int NOT NULL,
player_id int NOT NULL,
first boolean NOT NULL,
PRIMARY KEY (match_id, tournament_id, step, player_id),
UNIQUE (tournament_id, step, player_id),
foreign key (match_id, tournament_id, step) -- keep em together
references matches(match_id, tournament_id, step)
ON DELETE RESTRICT
ON UPDATE CASCADE,
foreign key (player_id) references accounts(player_id)
ON DELETE RESTRICT
ON UPDATE CASCADE
);
-- Partial index, ensure no more than one "first" player exists per match
CREATE UNIQUE INDEX idx_match_players_primary
ON match_players
USING btree
(match_id, tournament_id, step)
WHERE first=true;
-- Also ensure that no more than one "not-first" player exists per match
CREATE UNIQUE INDEX idx_match_players_not_primary
ON match_players
USING btree
(match_id, tournament_id, step)
WHERE first=false;
To get the actual vs. matches, I can simply join match_players to itself (on mp1.match_id = mp2.match_id, with mp1.first = true and mp2.first = false, where mp1 and mp2 are the two instances of match_players). The partial unique indexes ensure that a maximum of two players can be added.
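For concreteness, that self-join might be written like this (a sketch, not the exact query I run):
SELECT m.match_id, mp1.player_id AS player_a, mp2.player_id AS player_b
FROM matches m
JOIN match_players mp1 ON mp1.match_id = m.match_id AND mp1.first = true
JOIN match_players mp2 ON mp2.match_id = m.match_id AND mp2.first = false;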
The database has been normalized this way because players are unordered. That is, A vs B is the same as B vs A. I've added the "first" boolean to match_players so that A vs B can be consistently displayed. (I guess I could simplify it so that mp1.player_id < mp2.player_id... but the "first" boolean seems to work.)
The tournament_id and step are repeated in the second table because they are needed in that table's unique index, to ensure that players only have one match per step of the tournament.
Here's my primary question:
It is currently possible to have orphaned rows in the first table (matches). A match should have exactly two players in it. In particular, if a match exists in the matches table, it is possible to have no rows matching it in the match_players table. Is there a way to ensure that a match ALWAYS has two associated rows in match_players? With the "first" method, I've definitely limited the number of players per match to no more than 2... so figuring out a way to ensure a minimum of 2 players would solve the problem.
Here is one of my concerns:
As orphaned rows can still exist, are there any other data anomalies that can crop up in this design? I'm a bit uncomfortable with the compound (triple) primary key in match_players, but I think the compound foreign key requirement covers me for that table.
Thanks to anyone who can help me. This is the best I could do so far. I think if I solve the orphaned rows issue, then this design would be perfect. I guess I can set up a cron job to clear out the orphaned rows, but I'd like to know if a cleaner design exists before settling on this one.
I do think that a subquery in a check constraint would solve the issue, but alas, I don't think PostgreSQL actually supports that feature yet.
This is what I call the "forward looking problem" namely that you can have issues regarding data constraints that rely on rows not yet inserted. The transaction as a whole has requirements that the individual rows do not. Most databases offer few if any tools for addressing this problem. Fortunately PostgreSQL offers you a few options.
Denormalized "input buffer" using TOAST
The first approach would be to add a column to matches called match of type match_player[]. Then you'd store an array of players on the match here. This would be materialized to the match_player table using a trigger. This has significant penalties though in terms of development and foreseeing corner cases. I do see it as a viable option but it is not an ideal one. This avoids forward-looking constraints by flattening the table. However it could only store step 0 records. Once folks are making moves... that must be done by inserting only into match_players.
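A bare-bones sketch of that shape (the type and constraint names are invented, and the materializing trigger is omitted):
CREATE TYPE match_player_entry AS (player_id int, first boolean);

ALTER TABLE matches
ADD COLUMN players match_player_entry[],
-- NULL players passes the CHECK; add NOT NULL if every match must carry it
ADD CONSTRAINT matches_two_players
CHECK (array_length(players, 1) = 2);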
Constraint Triggers
The second approach would be to create a trigger function attached as an INITIALLY DEFERRED DEFERRABLE CONSTRAINT TRIGGER, executing at the end of the transaction. This would look at the rows inserted and then check that matching rows occur in the other table. This is probably the best general approach to solving the forward-looking constraint problem.
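A minimal sketch (note that PostgreSQL requires constraint triggers to be row-level AFTER triggers, so this fires per inserted match row; the names are invented, and a companion trigger on match_players would be needed to guard deletes):
CREATE OR REPLACE FUNCTION check_match_has_two_players()
RETURNS trigger AS $$
BEGIN
IF (SELECT count(*) FROM match_players
WHERE match_id = NEW.match_id) <> 2 THEN
RAISE EXCEPTION 'match % must have exactly two players', NEW.match_id;
END IF;
RETURN NULL;
END;
$$ LANGUAGE plpgsql;

-- Deferred to commit time, so the match row and its two
-- match_players rows can be inserted in any order within one transaction.
CREATE CONSTRAINT TRIGGER match_has_two_players
AFTER INSERT OR UPDATE ON matches
DEFERRABLE INITIALLY DEFERRED
FOR EACH ROW
EXECUTE PROCEDURE check_match_has_two_players();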
A couple of recent questions discuss strategies for naming columns, and I was rather surprised to discover the concept of embedding the notion of foreign and primary keys in column names. That is
select t1.col_a, t1.col_b, t2.col_z
from t1 inner join t2 on t1.id_foo_pk = t2.id_foo_fk
I have to confess I have never worked on any database system that uses this sort of scheme, and I'm wondering what the benefits are. The way I see it, once you've learnt the N principal tables of a system, you'll write several orders of magnitude more queries with those tables.
To become productive in development, you'll need to learn which tables are the important ones and which are simple tributaries. You'll want to commit a good number of column names to memory. And one of the basic tasks is to join two tables together. To reduce the learning effort, the easiest thing to do is to ensure that the column name is the same in both tables:
select t1.col_a, t1.col_b, t2.col_z
from t1 inner join t2 on t1.id_foo = t2.id_foo
I posit that, as a developer, you don't need to be reminded that much about which columns are primary keys, which are foreign, and which are neither. It's easy enough to look at the schema if you're curious. When looking at a random
tx inner join ty on tx.id_bar = ty.id_bar
... is it all that important to know which one is the foreign key? Foreign keys are important only to the database engine itself, to allow it to ensure referential integrity and do the right thing during updates and deletes.
What problem is being solved here? (I know this is an invitation to discuss, and feel free to do so. But at the same time, I am looking for an answer, in that I may be genuinely missing something).
I agree with you. Putting this information in the column name smacks of the crappy Hungarian Notation idiocy of the early Windows days.
I agree with you that the foreign key column in a child table should have the same name as the primary key column in the parent table. Note that this permits syntax like the following:
SELECT * FROM foo JOIN bar USING (foo_id);
The USING keyword assumes that a column exists by the same name in both tables, and that you want an equi-join. It's nice to have this available as shorthand for the more verbose:
SELECT * FROM foo JOIN bar ON (foo.foo_id = bar.foo_id);
Note, however, there are cases when you can't name the foreign key the same as the primary key it references. For example, in a table that has a self-reference:
CREATE TABLE Employees (
emp_id INT PRIMARY KEY,
manager_id INT REFERENCES Employees(emp_id)
);
Also a table may have multiple foreign keys to the same parent table. It's useful to use the name of the column to describe the nature of the relationship:
CREATE TABLE Bugs (
...
reported_by INT REFERENCES Accounts(account_id),
assigned_to INT REFERENCES Accounts(account_id),
...
);
I don't like to include the name of the table in the column name. I also eschew the obligatory "id" as the name of the primary key column in every table.
I've espoused most of the ideas proposed here over the 20-ish years I've been developing with SQL databases, I'm embarrassed to say. Most of them delivered few or none of the expected benefits and were, with hindsight, a pain in the neck.
Any time I've spent more than a few hours with a schema I've fairly rapidly become familiar with the most important tables and their columns. Once it got to a few months, I'd pretty much have the whole thing in my head.
Who is all this explanation for? Someone who only spends a few minutes with the design isn't going to be doing anything serious anyway. Someone who plans to work with it for a long time will learn it if you named your columns in Sanskrit.
Ignoring compound primary keys, I don't see why something as simple as "id" won't suffice for a primary key, and "<table>_id" for foreign keys.
So a typical join condition becomes customer.id = order.customer_id.
Where more than one association between two tables exists, I'd be inclined to use the association rather than the table name, so perhaps "parent_id" and "child_id" rather than "parent_person_id" etc
I only use the table name with an Id suffix for the primary key, e.g. CustomerId, and foreign keys referencing that from other tables would also be called CustomerId. When you reference it in the application, the table is obvious from the object properties, e.g. Customer.TelephoneNumber, Customer.CustomerId, etc.
I used "fk_" on the front end of any foreign keys for a table mostly because it helped me keep it straight when developing the DB for a project at my shop. Having not done any DB work in the past, this did help me. In hindsight, perhaps I didn't need to do that but it was three characters tacked onto some column names so I didn't sweat it.
Being a newcomer to writing DB apps, I may have made a few decisions which would make a seasoned DB developer shudder, but I'm not sure the foreign key thing really is that big a deal. Again, I guess it is a difference in viewpoint on this issue and I'll certainly take what you've written and cogitate on it.
Have a good one!
I agree with you--I take a different approach that I have seen recommended in many corporate environments:
Name columns in the format TableNameFieldName, so if I had a Customer table and UserName was one of my fields, the field would be called CustomerUserName. That means that if I had another table called Invoice, and the customer's user name was a foreign key, I would call it InvoiceCustomerUserName, and when I referenced it, I would call it Invoice.CustomerUserName, which immediately tells me which table it's in.
Also, this naming helps you keep track of which tables your columns are coming from when you're joining.
I only use FK_ and PK_ in the ACTUAL names of the foreign and primary keys in the DBMS.