I have a lot of tables that share keys, but the diagram does not show a relationship line from table to table. Most tables show a relationship, but a lot of them do not, although they should. I've already checked data types, and I can join these tables, so I am wondering if this is something I have to do within the diagram tool to set the relationships to the correlating tables. I assumed this is something SQL Server does automatically when you select tables for the diagram.
Any suggestions?
Edited:
It could also just be that you need to "refresh" the diagram: remove the table and add it back to the diagram.
Original:
The relationship is not shown in the diagram if it does not exist. Maybe you defined the key in both tables with the same name and data type, but you are missing the explicit reference (the FOREIGN KEY constraint). Something like this:
ALTER TABLE Sales.TempSalesReason
ADD CONSTRAINT FK_TempSales_SalesReason FOREIGN KEY (TempID)
REFERENCES Sales.SalesReason (SalesReasonID)
ON DELETE CASCADE
ON UPDATE CASCADE
;
You may want to omit the ON [ACTION] CASCADE clauses, though. The reason you can use JOIN in your queries is that the constraint is not required in order to JOIN the tables; you can JOIN any columns as long as the data types allow it. You can even JOIN on columns that are not part of a PK, but that will perform poorly (and is also another topic).
Refer to the official documentation on how to do it with the graphical tool or with code:
https://learn.microsoft.com/en-us/sql/relational-databases/tables/create-foreign-key-relationships?view=sql-server-2016
Related
Background
I wanted to migrate data using INSERT operations on multiple tables.
The tables have foreign key relationships between themselves.
If an INSERT is done on a table with a foreign key before the referenced table has been populated, the operation might fail due to a foreign key violation.
Requirement
Produce a list of tables within a database ordered according to their dependencies.
Tables with no dependencies (no foreign keys) will be 1st.
Tables with dependencies only in the 1st set of tables will be 2nd.
Tables with dependencies only in the 1st or 2nd sets of tables will be 3rd.
and so on...
If you still want to know the FK dependencies for some other purpose:
SELECT master_table.table_name  AS master_table_name,
       master_table.column_name AS master_key_column,
       detail_table.table_name  AS detail_table_name,
       detail_table.column_name AS detail_column
FROM all_constraints constraint_info
JOIN all_cons_columns detail_table
     ON constraint_info.constraint_name = detail_table.constraint_name
JOIN all_cons_columns master_table
     ON constraint_info.r_constraint_name = master_table.constraint_name
WHERE detail_table.position = master_table.position
  AND constraint_info.constraint_type = 'R'
  AND constraint_info.owner = 'SCHEMA_OWNER';
From there, you can do a recursive query starting with detail_table_name not in the master_table_name list and connecting by prior master_table_name = detail_table_name.
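For example, something like this (a sketch only; it assumes the query above has been saved as a view named fk_pairs - a hypothetical name - with columns master_table_name and detail_table_name, and that there are no circular FK chains, hence the NOCYCLE):
-- walk from "leaf" tables (referenced by nobody) up through their masters
SELECT detail_table_name,
       LEVEL AS depth
FROM fk_pairs
START WITH detail_table_name NOT IN (SELECT master_table_name FROM fk_pairs)
CONNECT BY NOCYCLE PRIOR master_table_name = detail_table_name;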
From my point of view, that's the wrong approach.
There's a utility that takes care of your problem, and its name is Data Pump.
export (expdp) will export everything you want
import (impdp) will then import the data, taking care of the foreign keys
I suggest you use it.
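For example (a minimal sketch; it assumes a schema named HR, an existing directory object named DATA_PUMP_DIR on both databases, and suitable credentials - adjust all of these to your setup):
# export the whole schema from the source database
expdp system/your_password schemas=HR directory=DATA_PUMP_DIR dumpfile=hr.dmp logfile=expdp_hr.log
# import it on the target database
impdp system/your_password schemas=HR directory=DATA_PUMP_DIR dumpfile=hr.dmp logfile=impdp_hr.log
Import creates the tables, loads the data, and only then creates the constraints and indexes, which is why the foreign key ordering stops being your problem.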
If you don't want to (though, I can't imagine why), then consider
creating target tables with no referential integrity constraints (i.e. no foreign keys)
run your insert statements - all of them should succeed (at least as far as foreign keys are concerned; I can't speak for other constraints)
note that this is really, really slow ... if the insert runs one row per statement and there's a lot of data, it'll take time
once you're done with the inserts, enable (or create) the foreign key constraints - all of them should succeed, as there's a parent for every child (presuming that none of the inserts failed); see the sketch below
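A sketch of that last step in Oracle, generating the DISABLE/ENABLE statements from the data dictionary so you can review them before running them:
-- generate the statements to disable every FK in the current schema
SELECT 'ALTER TABLE ' || table_name || ' DISABLE CONSTRAINT ' || constraint_name AS disable_ddl
FROM user_constraints
WHERE constraint_type = 'R';

-- after the inserts, generate the matching ENABLE statements the same way
SELECT 'ALTER TABLE ' || table_name || ' ENABLE CONSTRAINT ' || constraint_name AS enable_ddl
FROM user_constraints
WHERE constraint_type = 'R';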
I am currently working on a set of queries to pull data from a SQL table and then loop to pull any entries from other tables that are referenced in the first table through foreign keys.
I.e., if Table A's column A can only have values that appear in Table B's primary key, I want to pull all rows of Table B that are referenced in my extract from Table A.
To do this in the past, I would have written a query that looks at information_schema.table_constraints and matched it against the key columns, something like the suggested query in this article. However, when I pull the information from the table_constraints view in my current database, I get back an empty response; I get the headers, but no rows. This is despite the fact that I know there are many constraints, particularly foreign key constraints, in the PostgreSQL database that I am using. The query giving me the empty response is as simple as possible, shown below:
SELECT * FROM information_schema.table_constraints
Is there somewhere else that I should be referencing to get the foreign key constraint information? How else can I find the foreign key constraints on a table?
EDIT: I am having a touch more luck finding things through pg_catalog; the data at least seems to exist in there. However, it is all abstracted as numeric IDs, and I am having a little trouble linking enough of it together to get to the actual column names and other key data.
In the information_schema views you can only see objects for which you have permissions.
You are planning to do a join, not lots of little queries by primary key, right?
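If permissions are indeed what's emptying information_schema for you, the pg_catalog route from your edit still works, since those catalogs aren't filtered the same way. Here's a sketch that resolves the numeric IDs into names (it assumes single-column foreign keys; composite keys need conkey/confkey paired up by position):
SELECT con.conname        AS constraint_name,
       child.relname      AS child_table,
       child_col.attname  AS child_column,
       parent.relname     AS parent_table,
       parent_col.attname AS parent_column
FROM pg_constraint con
JOIN pg_class child      ON child.oid = con.conrelid
JOIN pg_class parent     ON parent.oid = con.confrelid
JOIN pg_attribute child_col
     ON child_col.attrelid = con.conrelid
    AND child_col.attnum = ANY (con.conkey)
JOIN pg_attribute parent_col
     ON parent_col.attrelid = con.confrelid
    AND parent_col.attnum = ANY (con.confkey)
WHERE con.contype = 'f';   -- 'f' = foreign key constraints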
I'm writing a program that creates a view by joining two tables. I lack information about how to join them.
When the user chooses two tables and their columns to be in one view, I need to join them together automatically for the user if possible. For this purpose I have already found the code to find the primary key and foreign keys of the tables.
With this knowledge, how can I tell which column of the first table should join with which column of the second table?
From what I have found, the primary key of one table should join with the foreign key of another table. Is this right? Can there be any other situation? I want to suggest to the user that this join may work, and they can then modify it.
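For reference, the FK metadata itself tells you which columns pair up. A sketch using the standard information_schema views (this works in SQL Server and PostgreSQL; 'TableA' and 'TableB' are placeholders for the two tables the user picked, composite keys need extra care with column ordering, and in a multi-schema database you should also match on constraint_schema):
SELECT kcu.table_name  AS fk_table,
       kcu.column_name AS fk_column,
       ccu.table_name  AS pk_table,
       ccu.column_name AS pk_column
FROM information_schema.referential_constraints rc
JOIN information_schema.key_column_usage kcu
     ON kcu.constraint_name = rc.constraint_name
JOIN information_schema.constraint_column_usage ccu
     ON ccu.constraint_name = rc.unique_constraint_name
WHERE (kcu.table_name = 'TableA' AND ccu.table_name = 'TableB')
   OR (kcu.table_name = 'TableB' AND ccu.table_name = 'TableA');
Each row of the result gives you one column pair to put in the generated ON clause.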
I am working with a SQL Server database which contains almost 850 tables. It has many defined relationships and plenty of undefined relationships (FKs), undefined primary keys, etc. It is a mess. I don't have access to the application source code, so I can't track down the undefined relations through the code.
Is there any software or query with which I can just look at the data and figure out the relationships between the tables? To be more specific, every field (column) in each table would be mapped (joined) against every column of all the other tables, and I would get a report of some sort. In almost 60% of the cases the column names are similar in related tables, but many tables have the same column name for the primary key (for example, item_id).
I need all those undefined relationships; this is making my life miserable every day!! :(
I think your best bet would be to use the profiler to capture the statements being executed and try to infer the relationships from that. This is a tough one, and there aren't any easy solutions that I'm aware of.
Good luck!
Well, you can query the metadata - INFORMATION_SCHEMA.COLUMNS - and filter out things which are highly unlikely to be joined as keys, like TEXT/NVARCHAR(MAX) columns. Put the results in some kind of data dictionary table where you start to tag the columns with information.
You can query with things like:
SELECT *
FROM INFORMATION_SCHEMA.COLUMNS AS C
INNER JOIN INFORMATION_SCHEMA.TABLES AS T
ON C.COLUMN_NAME = T.TABLE_NAME + '_ID';
to see if there are obvious matches.
That might help you get a handle on the database. But it will take a lot of work.
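If you also want to check a candidate against the actual data, a sketch for a single candidate pair (all table and column names below are hypothetical):
-- does every non-NULL value in the suspected child column exist in the suspected parent key?
SELECT COUNT(*) AS orphan_rows
FROM dbo.OrderDetail AS child
WHERE child.item_id IS NOT NULL
  AND NOT EXISTS (SELECT 1
                  FROM dbo.Item AS parent
                  WHERE parent.item_id = child.item_id);
-- 0 orphan rows is consistent with (though does not prove) a parent/child relationship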
Without a foreign key constraint, it's even possible that they've done things like "multi-keys", where a certain column acts as a foreign key to one table or another depending on some kind of type selector (these aren't possible with declared foreign key constraints). It's possible you won't even see this in the profiler except across separate joins - one time you might see the column joined to one table, and another time to a different one.
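As an illustration of that pattern (all names here are hypothetical), the queries you'd see against such a column end up looking something like this:
-- Comments.TargetId points at Orders or at Invoices depending on TargetType,
-- so no single foreign key constraint can be declared on it
SELECT c.CommentId, 'Order' AS target_kind, o.OrderId AS target_id
FROM dbo.Comments AS c
JOIN dbo.Orders AS o ON o.OrderId = c.TargetId AND c.TargetType = 'O'
UNION ALL
SELECT c.CommentId, 'Invoice', i.InvoiceId
FROM dbo.Comments AS c
JOIN dbo.Invoices AS i ON i.InvoiceId = c.TargetId AND c.TargetType = 'I';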
I would like to create a table called "NOTES". I was thinking this table would contain a "table_name" VARCHAR(100) indicating which table the note belongs to, one or more "key" columns representing the primary key values of the row that the note applies to, and a "note" field VARCHAR(MAX). When other tables use this table, they would supply THEIR primary key(s) and their "table_name" and get all the notes associated with the primary key(s) they supplied. The problem is that other tables might have 1, 2 or more PK columns, so I am looking for ideas on how I can design this...
What you're suggesting sounds a little convoluted to me. I would suggest something like this.
Notes
------
Id - PK
NoteTypeId - FK to NoteTypes.Id
NoteContent
NoteTypes
----------
Id - PK
Description - This could replace the "table_name" column you suggested
SomeOtherTable
--------------
Id - PK
...
Other Columns
...
NoteId - FK to Notes.Id
This would allow you to keep your data better normalized, but still get the relationships between data that you want. Note that this assumes a 1:1 relationship between rows in your other tables and Notes. If that relationship will be many to one, you'll need a cross table.
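A minimal DDL sketch of this layout, in case it helps (SQL Server syntax; the names match the outline above, and the cross table at the end is only needed if a row can have many notes):
CREATE TABLE NoteTypes (
    Id          INT IDENTITY(1,1) PRIMARY KEY,
    Description NVARCHAR(100) NOT NULL   -- replaces the "table_name" idea
);

CREATE TABLE Notes (
    Id          INT IDENTITY(1,1) PRIMARY KEY,
    NoteTypeId  INT NOT NULL REFERENCES NoteTypes (Id),
    NoteContent NVARCHAR(MAX) NOT NULL
);

CREATE TABLE SomeOtherTable (
    Id     INT IDENTITY(1,1) PRIMARY KEY,
    -- ... the table's other columns ...
    NoteId INT NULL REFERENCES Notes (Id)  -- at most one note per row, as in the outline
);

-- If a row needs many notes instead, drop SomeOtherTable.NoteId and use a cross table:
CREATE TABLE SomeOtherTableNotes (
    SomeOtherTableId INT NOT NULL REFERENCES SomeOtherTable (Id),
    NoteId           INT NOT NULL REFERENCES Notes (Id),
    PRIMARY KEY (SomeOtherTableId, NoteId)
);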
Have a look at this thread about database normalization
What is Normalisation (or Normalization)?
Additionally, you can check this resource to learn more about foreign keys
http://www.w3schools.com/sql/sql_foreignkey.asp
Instead of putting the other tables' names and primary keys in this table, have the primary key of the NOTES table be NoteId. Create an FK in each table that will make a note, and store the corresponding NoteIds in those tables. Then you can simply join on NoteId from all of these other tables to the NOTES table.
As I understand your problem, you're attempting to "abstract" the auditing of multiple tables in a way that you might abstract a class in OOP.
While it's a great OOP design principle, it falls flat in databases for multiple reasons. Perhaps the largest single reason is that if you cannot envision it, neither will someone (even you) looking at it later have an easy time reassembling the data. A smaller reason is that while you tend to think of a table as a container, and thus similar to an object, in reality tables are implemented instances of this hypothetical container you are trying to put together, and they work better if you treat them as such. By creating an audit table specific to a table, or to a subset of tables that share structural and data similarity, you increase the performance of your database, and you won't run into strange trigger- or SELECT-related issues later.
And you can't envision it not because you're not good at what you're doing, but because the structure is not conducive to database logging.
Instead, I would recommend that you create separate logging tables that manage the auditing of each table you want to audit or log. In fact, a few quick Google searches show many scripts already written to do much of this task for you: Example of one such search
You should create these individual tables, and then, if you want to be able to report on multiple tables or even all tables at once, you can create a stored procedure (if you want to make queries based on criteria) or a view with a SELECT statement that JOINs and/or UNIONs the tables you are interested in, in a fashion that makes sense for the report type. You'll still have to add new objects to the view, but even with your original table design, you'd have to account for that.
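As a rough sketch of that view idea (the table and column names are hypothetical):
-- one notes/audit table per source table, unified for reporting
CREATE VIEW dbo.AllTableNotes
AS
SELECT 'Customer' AS source_table, CustomerId AS source_id, Note, CreatedAt
FROM dbo.CustomerNotes
UNION ALL
SELECT 'Order', OrderId, Note, CreatedAt
FROM dbo.OrderNotes;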