SQL Server: show foreign key constraints tied to a single record - sql

This is probably a bit complicated, but is there any way, or an existing script, to show all foreign key constraints tied to a single table row?
What I mean by this is say you have the following DB structure:
TABLE 1
column a
column b
TABLE 2
column c
column d (foreign key constraint to 1.a)
TABLE 3
column e
column f (foreign key constraint to 2.c)
TABLE 4
column g (foreign key constraint to 3.e)
column h
Then, say you have 2 rows in Table 1. One of the rows is referenced from Table 2, and that Table 2 row is in turn referenced from Table 3, but nothing in Table 4 continues the chain (the IDs are tied throughout Tables 1-3).
I would like to query one of the rows in Table 1 and have it tell me that the row has ties into Table 2, and that those rows have ties into Table 3. Using this 'query' on the second row in Table 1 would simply return nothing, as no foreign keys are tying that row down.
Something like this would be immensely useful when it comes to tracking down what tables/rows are currently using a particular starting row.
Thanks!

I think what you're looking for can be accomplished by:
SELECT [1].a, t2 = COUNT(DISTINCT [2].c), t3 = COUNT(DISTINCT [3].e), t4 = COUNT(DISTINCT [4].g)
FROM [1]
LEFT JOIN [2] ON [1].a = [2].d
LEFT JOIN [3] ON [3].f = [2].c
LEFT JOIN [4] ON [4].g = [3].e
GROUP BY [1].a
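A runnable sketch of that join-and-count idea, using Python's sqlite3 with invented table names t1..t4 standing in for Tables 1..4 (column names follow the question; the GROUP BY and DISTINCT counts are additions needed to get one result row per Table 1 row):

```python
import sqlite3

con = sqlite3.connect(":memory:")
cur = con.cursor()
# Structure from the question: t2.d -> t1.a, t3.f -> t2.c, t4.g -> t3.e
cur.executescript("""
CREATE TABLE t1 (a INTEGER PRIMARY KEY, b TEXT);
CREATE TABLE t2 (c INTEGER PRIMARY KEY, d INTEGER REFERENCES t1 (a));
CREATE TABLE t3 (e INTEGER PRIMARY KEY, f INTEGER REFERENCES t2 (c));
CREATE TABLE t4 (g INTEGER REFERENCES t3 (e), h TEXT);

-- Row 1 of t1 is referenced down to t3 (but not t4); row 2 is unreferenced.
INSERT INTO t1 VALUES (1, 'referenced'), (2, 'free');
INSERT INTO t2 VALUES (10, 1);
INSERT INTO t3 VALUES (100, 10);
""")

# Count references at each level for every row of t1.
rows = cur.execute("""
    SELECT t1.a,
           COUNT(DISTINCT t2.c) AS in_t2,
           COUNT(DISTINCT t3.e) AS in_t3,
           COUNT(DISTINCT t4.g) AS in_t4
    FROM t1
    LEFT JOIN t2 ON t2.d = t1.a
    LEFT JOIN t3 ON t3.f = t2.c
    LEFT JOIN t4 ON t4.g = t3.e
    GROUP BY t1.a
    ORDER BY t1.a
""").fetchall()
print(rows)  # row 1 has ties into t2 and t3; row 2 has none
```

The DISTINCT counts keep fan-out from inflating the numbers when one parent row has several children at a lower level.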

Related

Update table column primary key based on mapping table

I have two tables.
Table 1 contains records with column A (type number, primary key).
Table 2 contains records with columns A and B (type number); it is the mapping table.
What is the problem?
I need to remap all records in Table 1, specifically column A to column B, based on mapping Table 2.
The problem is that Table 1 also contains records whose column A values already equal B values from Table 2. That means that when I remap Table 1, a uniqueness violation can appear, because column A in Table 1 is the primary key.
I have tried to select the count of all records which have to be remapped, but I don't know whether my query is correct.
Here are those two tables:
select * from temp_1;
select * from temp_2;
Here is the select with count:
SELECT count(*) FROM temp_1 T1
WHERE EXISTS (SELECT 1 FROM temp_2 T2 WHERE T2.a = T1.a
and not exists (select 1 from temp_1 T1b where T2.b = T1b.a));
Sample data:
Table 1:
1, 2, 3, 4, 5, 40, 50
Table 2:
1-11, 2-22, 3-33, 4-40, 5-50
Result Table 1 after remapping:
11, 22, 33, **4**, **5**, 40, 50
The bold-marked values, 4 and 5, are the problem values: their mapped targets (40 and 50) already exist in Table 1.
So, you have Table 1 with a column A that contains values which may also appear as new values produced by the re-mapping. The only solution is to use a temporary table into which you write the new mapping and, once you are done, copy the new values back onto Table 1.
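Before copying anything across, the sample data can be used to check the count query from the question. A sqlite3 sketch (same temp_1/temp_2 names): the EXISTS/NOT EXISTS query counts the three safely remappable rows, and inverting the inner condition lists the two blocked values.

```python
import sqlite3

con = sqlite3.connect(":memory:")
cur = con.cursor()
cur.executescript("""
CREATE TABLE temp_1 (a INTEGER PRIMARY KEY);
CREATE TABLE temp_2 (a INTEGER, b INTEGER);
INSERT INTO temp_1 VALUES (1), (2), (3), (4), (5), (40), (50);
INSERT INTO temp_2 VALUES (1, 11), (2, 22), (3, 33), (4, 40), (5, 50);
""")

# The question's query: rows with a mapping whose target value is still free.
safe = cur.execute("""
    SELECT COUNT(*) FROM temp_1 T1
    WHERE EXISTS (SELECT 1 FROM temp_2 T2 WHERE T2.a = T1.a
                  AND NOT EXISTS (SELECT 1 FROM temp_1 T1b WHERE T1b.a = T2.b))
""").fetchone()[0]

# Inverted: rows whose mapping target already exists in temp_1.
blocked = cur.execute("""
    SELECT T1.a FROM temp_1 T1
    JOIN temp_2 T2 ON T2.a = T1.a
    WHERE EXISTS (SELECT 1 FROM temp_1 T1b WHERE T1b.a = T2.b)
    ORDER BY T1.a
""").fetchall()
print(safe, blocked)  # 3 rows remap cleanly; 4 and 5 collide
```

So the query as written counts the rows that can be remapped without a collision; the complementary query pinpoints the bold problem values.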
This is not an answer - posting as one so the query can be formatted.
You may want to check whether the PK constraint is deferrable. For example, you could run the query below; '......' stands for your table name, in single quotes (and in ALL CAPS). If the tables aren't yours, query ALL_CONSTRAINTS instead of USER_CONSTRAINTS. If you are lucky, the constraint is deferrable; if not, we can think about other solutions. Good luck!
select constraint_name, deferrable, deferred
from user_constraints
where constraint_type = 'P'
and table_name = '.....'

How do I delete a record through a Join

I have an Access database where I wish to delete a record from a table using its referential integrity to another table. For example, I have the following two tables:
CI_Aliases with fields - CI_Ref (Primary Key) with a value of 3
and Aliase_ID (Foreign Key) with a value of 5
Aliases_Table with fields - Aliase_ID (Primary Key) with a value of 5
and Aliase with a value of "AMSS"
I have tried the following DELETE statement but I get the message "Cannot delete records from the specified table" - what am I doing wrong?
DELETE FROM Aliases_Table a
INNER JOIN CI_Aliases c
ON a.Aliase_ID = c.Aliase_ID
WHERE c.CI_Ref = 3
I should confirm that it is the record in Aliases_Table I wish to delete, but using the CI_Aliases primary key of 3.
Use the alias first, then the FROM clause; the syntax is a bit unintuitive:
DELETE a.*
FROM Aliases_Table a
WHERE a.Aliase_ID IN (
SELECT
c.Aliase_ID
FROM CI_Aliases c
WHERE c.CI_Ref = 3
)
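A sqlite3 sketch of that IN-subquery delete, with the table names from the question and invented sample values (sqlite leaves foreign key enforcement off by default, so the orphaned CI_Aliases row is not blocked here; with enforcement on, the referencing child row would need to be removed or cascaded first):

```python
import sqlite3

con = sqlite3.connect(":memory:")
cur = con.cursor()
cur.executescript("""
CREATE TABLE Aliases_Table (Aliase_ID INTEGER PRIMARY KEY, Aliase TEXT);
CREATE TABLE CI_Aliases (CI_Ref INTEGER PRIMARY KEY,
                         Aliase_ID INTEGER REFERENCES Aliases_Table (Aliase_ID));
INSERT INTO Aliases_Table VALUES (5, 'AMSS');
INSERT INTO CI_Aliases VALUES (3, 5);
""")

# Delete the Aliases_Table row whose Aliase_ID is referenced by CI_Ref 3.
cur.execute("""
    DELETE FROM Aliases_Table
    WHERE Aliase_ID IN (SELECT Aliase_ID FROM CI_Aliases WHERE CI_Ref = 3)
""")
remaining = cur.execute("SELECT COUNT(*) FROM Aliases_Table").fetchone()[0]
print(remaining)  # 0 - the AMSS row is gone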

Insert data from one table to another using a select statement and avoid duplicate data

Database: Oracle
I want to insert data from table 1 into table 2, but the catch is that the primary key of table 2 is the combination of the first 4 letters and the last 4 numbers of the primary key of table 1.
For example:
Table 1 - primary key : abcd12349887/abcd22339887/abcder019987
In this case, even though the primary keys in table 1 are different, when I extract the first 4 and last 4 characters the output is the same: abcd9887.
So when I use SELECT to insert the data, I get a duplicate-PK error on table 2.
What I want is: if that PK value is already present, don't add the record.
Here's my complete stored procedure:
INSERT INTO CPIPRODUCTFAMILIE
(productfamilieid, rapport, mesh, mesh_uitbreiding, productlabelid)
(SELECT DISTINCT (CONCAT(SUBSTR(p.productnummer,1,4),SUBSTR(p.productnummer,8,4)))
productnummer,
ps.rapport, ps.mesh, ps.mesh_uitbreiding, ps.productlabelid
FROM productspecificatie ps, productgroep pg,
product p left join cpiproductfamilie cpf
on (CONCAT(SUBSTR(p.productnummer,1,4),SUBSTR(p.productnummer,8,4))) = cpf.productfamilieid
WHERE p.productnummer = ps.productnummer
AND p.productgroepid = pg.productgroepid
AND cpf.productfamilieid IS NULL
AND pg.productietype = 'P'
**AND p.ROWID IN (SELECT MAX(ROWID) FROM product
GROUP BY (CONCAT(SUBSTR(productnummer,1,4),SUBSTR(productnummer,8,4))))**
AND (CONCAT(SUBSTR(p.productnummer,1,2),SUBSTR(p.productnummer,8,4))) not in
(select productfamilieid from cpiproductfamilie));
The highlighted section seems to be wrong, and because of it the data is not picked up.
Please help.
Try using this.
p.productnummer IN (SELECT MAX(productnummer) FROM product
GROUP BY (CONCAT(SUBSTR(productnummer,1,4),SUBSTR(productnummer,8,4))))
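The skip-if-present idea can be sketched with sqlite3; table names are shortened and the data invented, and sqlite's || with a negative SUBSTR offset stands in for Oracle's CONCAT/SUBSTR positions. Running the insert twice shows no duplicate-PK error:

```python
import sqlite3

con = sqlite3.connect(":memory:")
cur = con.cursor()
cur.executescript("""
CREATE TABLE product (productnummer TEXT PRIMARY KEY);
CREATE TABLE familie (familieid TEXT PRIMARY KEY);
INSERT INTO product VALUES ('abcd12349887'), ('abcd22339887'), ('wxyz00001111');
""")

# Derived key = first 4 chars || last 4 chars; insert only keys not yet present.
insert_sql = """
    INSERT INTO familie (familieid)
    SELECT DISTINCT SUBSTR(p.productnummer, 1, 4) || SUBSTR(p.productnummer, -4)
    FROM product p
    WHERE NOT EXISTS (
        SELECT 1 FROM familie f
        WHERE f.familieid = SUBSTR(p.productnummer, 1, 4) || SUBSTR(p.productnummer, -4)
    )
"""
cur.execute(insert_sql)  # first run: each derived key inserted once
cur.execute(insert_sql)  # second run: nothing to do, no duplicate-PK error
keys = [r[0] for r in cur.execute("SELECT familieid FROM familie ORDER BY familieid")]
print(keys)  # ['abcd9887', 'wxyz1111']
```

In Oracle itself, MERGE ... WHEN NOT MATCHED THEN INSERT is another common way to express "insert only if the key is absent".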

Retrieving data using a join in a SQL query and representing it in JdbcTemplate

I have 3 tables:
REPOTRANSSMISSION table columns are:
REPO_TRANSMISSION_ID,
G3_SESSION_ID,
CLIENT_NM,
ASSESSMENT_SESSION_ID,
PACKAGE_SESSION_ID,
TEST_SESSION_ID,
SCORE_SESSION_ID,
REPO_TRANSMISSION_STATE_CD,
REPO_TRANSMISSION_DATA_TX,
REPO_TRANSMISSION_LEVEL_CD,
CREATE_DT,
LAST_MODIFIED_DT.
here REPO_TRANSMISSION_ID is the primary key and REPO_TRANSMISSION_STATE_CD is the foreign key
2nd table REPO_TRANSSMISSION_REQ_LOG columns are:
REPO_TRANSMISSION_REQ_LOG_ID
REPO_TRANSMISSION_ID
REQUEST_TX
RESPONSE_TX
ERROR_TX
CREATE_DT
LAST_MODIFIED_DT
here PK_REPO_TRANSMISSION_REQ_LOG is the primary key and REPO_TRANSMISSION_ID is the foreign key
3rd table REPO_TRANSSMISSION_STATE columns are:
REPO_TRANSMISSION_STATE_CD
REPO_TRANSMISSION_STATE_DS
CREATE_DT
LAST_MODIFIED_DT
and
REPO_TRANSSMISSION_STATE_CD values are TRANS_RESP,
RECON_REQ,
RECON_ERR,
RECON_RETRY,
RECON_RESP
here PK_REPO_TRANSMISSION_STATE_CD is the primary key
I have to retrieve the repo_transmission_id when the repo_transmission_state_cd value is above 4, and I have to join the 1st and 2nd tables.
How do I write the SQL query?
Do you just want to see what a query would look like to give you the results you need?
It would be something like this:
SELECT tr.repo_transmission_id
FROM REPOTRANSSMISSION tr
JOIN REPO_TRANSSMISSION_REQ_LOG lg ON (tr.REPO_TRANSMISSION_ID = lg.REPO_TRANSMISSION_ID)
WHERE tr.REPO_TRANSMISSION_STATE_CD > 4;
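A minimal sqlite3 sketch of that join (shortened schema, invented data). One caveat: the question's state codes are strings like TRANS_RESP, so a numeric > 4 comparison only makes sense once the codes are numbers; this sketch assumes numeric codes:

```python
import sqlite3

con = sqlite3.connect(":memory:")
cur = con.cursor()
cur.executescript("""
CREATE TABLE repotranssmission (
    repo_transmission_id INTEGER PRIMARY KEY,
    repo_transmission_state_cd INTEGER
);
CREATE TABLE repo_transsmission_req_log (
    repo_transmission_req_log_id INTEGER PRIMARY KEY,
    repo_transmission_id INTEGER
        REFERENCES repotranssmission (repo_transmission_id)
);
INSERT INTO repotranssmission VALUES (1, 3), (2, 5), (3, 7);
INSERT INTO repo_transsmission_req_log VALUES (10, 1), (20, 2), (30, 3);
""")

# Join the transmission table to its request log and filter on the state code.
ids = [r[0] for r in cur.execute("""
    SELECT tr.repo_transmission_id
    FROM repotranssmission tr
    JOIN repo_transsmission_req_log lg
      ON tr.repo_transmission_id = lg.repo_transmission_id
    WHERE tr.repo_transmission_state_cd > 4
    ORDER BY tr.repo_transmission_id
""")]
print(ids)  # only the rows with state code above 4 survive
```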

DB design: Should I use constraints within a table or a new table

I inherited a large existing DB and I'd like to know if I should refactor it, because 95% of my queries require joining at least 4 tables.
The DB has 5 tables that only have an ID and a Name column and fewer than 20 rows each. I assume the author did this so he could change the names there without changing them in the other tables, but many of those tables are referenced in only one other table. Should I refactor these small two-column tables into the larger table, adding a constraint to the column so users can't input incorrect names, instead of having separate tables?
Resist that urge. From your description I can deduce that the existing design is solid and probably well normalized. Your refactoring may actually undo a good db structure.
If you are bothered by writing a lot of joins in your queries I would suggest creating views to mitigate the boilerplate.
...the author did this so he could change the names there and not change
them in the other tables...
That is evidence of good design and exactly what you should strive for in a normalized database.
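The views suggested above are quick to sketch in sqlite3 (all names invented): declare the joins once in a view, and every later query reads like a single table.

```python
import sqlite3

con = sqlite3.connect(":memory:")
cur = con.cursor()
cur.executescript("""
CREATE TABLE color (color_id INTEGER PRIMARY KEY, name TEXT UNIQUE);
CREATE TABLE size  (size_id  INTEGER PRIMARY KEY, name TEXT UNIQUE);
CREATE TABLE item (
    item_id  INTEGER PRIMARY KEY,
    color_id INTEGER NOT NULL REFERENCES color (color_id),
    size_id  INTEGER NOT NULL REFERENCES size (size_id)
);
INSERT INTO color VALUES (1, 'red'), (2, 'blue');
INSERT INTO size  VALUES (1, 'small'), (2, 'large');
INSERT INTO item  VALUES (100, 1, 2);

-- Write the lookup joins exactly once...
CREATE VIEW item_v AS
SELECT i.item_id, c.name AS color, s.name AS size
FROM item i
JOIN color c ON c.color_id = i.color_id
JOIN size  s ON s.size_id  = i.size_id;
""")

# ...then every query reads like a single denormalized table.
row = cur.execute("SELECT color, size FROM item_v WHERE item_id = 100").fetchone()
print(row)  # ('red', 'large')
```

The normalized base tables keep their FK integrity; only the query boilerplate disappears.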
No.
Your DB is normalized and proper.
You also save space, lookup time, and indexing by storing an int rather than a varchar name.
Small tables are optimized away if they are properly keyed.
Sounds like what you have are lookup tables. Let me tell you what happens when you decide to put all lookups in one table with an additional column to specify which type it is. First, instead of joining to 4 different tables in one query, you have to join to the same table 4 times. There ends up being more contention for the resources in the "one table to rule them all". Further, you lose FK constraints, which means you eventually lose data integrity. So if one lookup is state, nothing will prevent you from putting the id values of a different lookup, say customer type, into the stateid column of the customeraddress table. When the lookups are separate, you can enforce that relationship.
Suppose instead of one big table you decide to have a constraint on the column for customer type. Constraints are now enforced, but you have a problem when they need to change: you have to alter the database in order to add a new type. Again, this is usually a very bad idea, especially when the table gets large.
Short story: Replacing strings with ID numbers has nothing to do with normalization. Using natural keys in your case might improve performance. In my tests, queries using natural keys were faster by 1 or 2 orders of magnitude.
You might have accepted an answer too quickly.
The DB has a 5 tables that only have an ID and Name column with less
than 20 rows.
I'm assuming these tables have a structure something like this.
create table a (
a_id integer primary key,
a_name varchar(30) not null unique
);
create table b (...
-- Just like a
create table your_data (
yet_another_id integer primary key,
a_id integer not null references a (a_id),
b_id integer not null references b (b_id),
c_id integer not null references c (c_id),
d_id integer not null references d (d_id),
unique (a_id, b_id, c_id, d_id),
-- other columns go here
);
And it's obvious that your_data will require four joins (at least) to get usable information from it.
But the names in table a, b, c, and d are unique (ahem), so you can use the unique names as targets for foreign key references. You could rewrite the table your_data like this.
create table your_data (
yet_another_id integer primary key,
a_name varchar(30) not null references a (a_name),
b_name varchar(30) not null references b (b_name),
c_name varchar(30) not null references c (c_name),
d_name varchar(30) not null references d (d_name),
unique (a_name, b_name, c_name, d_name),
-- other columns go here
);
Replacing id numbers with strings doesn't change the normal form. (And replacing strings with id numbers doesn't have anything to do with normalization.) If the original table were in 5NF, then this rewrite will be in 5NF, too.
But what about performance? Aren't id numbers plus joins supposed to be faster than strings?
I tested that by inserting 20 rows into each of the four tables a, b, c, and d. Then I generated a Cartesian product to fill one test table written with id numbers, and another using the names. (So, 160K rows in each.) I updated the statistics, and ran a couple of queries.
explain analyze
select a.a_name, b.b_name, c.c_name, d.d_name
from your_data_id
inner join a on (a.a_id = your_data_id.a_id)
inner join b on (b.b_id = your_data_id.b_id)
inner join c on (c.c_id = your_data_id.c_id)
inner join d on (d.d_id = your_data_id.d_id)
...
Total runtime: 808.472 ms
explain analyze
select a_name, b_name, c_name, d_name
from your_data
Total runtime: 132.098 ms
The query using id numbers takes a lot longer to execute. Next I used a WHERE clause on all four columns, which returns a single row.
explain analyze
select a.a_name, b.b_name, c.c_name, d.d_name
from your_data_id
inner join a on (a.a_id = your_data_id.a_id and a.a_name = 'a one')
inner join b on (b.b_id = your_data_id.b_id and b.b_name = 'b one')
inner join c on (c.c_id = your_data_id.c_id and c.c_name = 'c one')
inner join d on (d.d_id = your_data_id.d_id and d.d_name = 'd one')
...
Total runtime: 14.671 ms
explain analyze
select a_name, b_name, c_name, d_name
from your_data
where a_name = 'a one' and b_name = 'b one' and c_name = 'c one' and d_name = 'd one';
...
Total runtime: 0.133 ms
The tables using id numbers took about 100 times longer to query.
Tests used PostgreSQL 9.something.
My advice: Try before you buy. I mean, test before you invest. Try rewriting your data table to use natural keys. Think carefully about ON UPDATE CASCADE and ON DELETE CASCADE. Test performance with representative sample data. Edit your original question and let us know what you found.
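The ON UPDATE CASCADE point can be sketched in sqlite3, reusing the a/your_data naming from this answer (sqlite needs PRAGMA foreign_keys = ON before enforcement kicks in): renaming a natural key in the lookup table rewrites the referencing rows automatically.

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("PRAGMA foreign_keys = ON")  # FK enforcement is off by default in sqlite
cur = con.cursor()
cur.executescript("""
CREATE TABLE a (a_name VARCHAR(30) PRIMARY KEY);
CREATE TABLE your_data (
    yet_another_id INTEGER PRIMARY KEY,
    a_name VARCHAR(30) NOT NULL
        REFERENCES a (a_name) ON UPDATE CASCADE
);
INSERT INTO a VALUES ('a one');
INSERT INTO your_data VALUES (1, 'a one');
""")

# Rename the natural key in the lookup table; the cascade rewrites your_data too.
cur.execute("UPDATE a SET a_name = 'a uno' WHERE a_name = 'a one'")
cascaded = cur.execute(
    "SELECT a_name FROM your_data WHERE yet_another_id = 1"
).fetchone()[0]
print(cascaded)  # 'a uno'
```

With wide fact tables the cascade touches every referencing row, which is exactly the cost to measure before committing to natural keys.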