I have a table with the following structure. I am writing a query to get all item_ids where key_name='topic' and key_string_value='investing', which is the simple part.
select item_id from table where key_name='topic' and key_string_value='investing'
But then, for all the item_ids returned above, I want to order them by the values set for each item_id under key_name='importance' and key_name='product'. The table structure is making this very difficult, as I am not an SQL expert. Any help would be appreciated.
item_id  key_name    key_string_value  key_float_value
1        topic       investing         null
1        importance  null              500
1        product     A                 null
1        product     B                 null
2        topic       Starting          null
2        product     B                 null
2        importance  null              300
2        topic       retail            null
3        importance  null              400
3        topic       investing         null
3        product     C                 null
4        topic       Starting          null
4        topic       investing         null
4        importance  null              400
4        product     D                 null
@Schwern is right - your structure should be normalized, and the names should be better too. All this makes me think: homework.
The answer to the homework question is a self join, and looks like this:
select t1.item_id , imp.key_float_value, prd.key_string_value
from [table] t1
LEFT OUTER JOIN [table] imp on imp.item_id = t1.item_id and imp.key_name='importance'
LEFT OUTER JOIN [table] prd on prd.item_id = t1.item_id and prd.key_name='product'
where t1.key_name='topic' and t1.key_string_value='investing'
ORDER BY imp.key_float_value, prd.key_string_value
The square brackets in `[table]` are needed because using the keyword table as a table name requires the name to be delimited. Square brackets are for T-SQL; other dialects use double quotes (").
You have a very poorly designed table that will be slow and difficult to work with. SQL is not a key/value store; it works on rows, columns, and relationships. Rather than fight that, I would suggest either switching to a NoSQL database, which is easier to use and works more like normal programming data structures, or redesigning the schema.
Here's the redesign I would suggest.
CREATE TABLE item (
    id INTEGER PRIMARY KEY,
    importance INTEGER DEFAULT 0
);

CREATE TABLE item_topics (
    item_id INTEGER REFERENCES item(id),
    topic TEXT NOT NULL
);

CREATE TABLE item_products (
    item_id INTEGER REFERENCES item(id),
    product TEXT NOT NULL
);
The item itself, and any scalar (i.e. single-value) attributes, go into one table. Anything which can be a list (products and topics) needs its own table relating each item to its elements. If this seems clunky, that's because it is, but that's how SQL works.
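For example, the sample data from the question maps onto this design like so:

INSERT INTO item (id, importance)
VALUES (1, 500), (2, 300), (3, 400), (4, 400);

INSERT INTO item_topics (item_id, topic)
VALUES (1, 'investing'),
       (2, 'Starting'), (2, 'retail'),
       (3, 'investing'),
       (4, 'Starting'), (4, 'investing');

INSERT INTO item_products (item_id, product)
VALUES (1, 'A'), (1, 'B'), (2, 'B'), (3, 'C'), (4, 'D');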
To find all items whose topic is investing, you have to join on the item_topics table.
SELECT item.id
FROM item
JOIN item_topics ON item.id = item_topics.item_id
WHERE topic = 'investing'
Then to order them, add ORDER BY item.importance.
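Putting it together, a minimal sketch of the whole thing (also pulling in products, as in the original question; note that an item with several products will appear once per product):

SELECT item.id, item.importance, item_products.product
FROM item
JOIN item_topics ON item.id = item_topics.item_id
LEFT JOIN item_products ON item.id = item_products.item_id
WHERE item_topics.topic = 'investing'
ORDER BY item.importance, item_products.product;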
Related
I have a query that joins 4 tables. It returns 35 rows every time I run it. Here it is...
SELECT Lender.id AS LenderId,
       Loans.Loan_ID AS LoanId,
       Parcels.Parcel_ID AS ParcelId,
       tr.Tax_ID AS TaxRecordId,
       tr.Tax_Year AS TaxYear
FROM parcels
INNER JOIN Loans ON (Parcels.Loan_ID = Loans.Loan_ID AND Parcels.Escrow = 1)
INNER JOIN Lender ON (Lender.id = Loans.Bank_ID)
INNER JOIN Tax_Record tr ON (tr.Parcel_ID = Parcels.Parcel_ID AND tr.Tax_Year = :taxYear)
WHERE Loans.Active = 1
  AND Loans.Date_Submitted IS NOT NULL
  AND Parcels.Municipality = :municipality
  AND Parcels.County = :county
  AND Parcels.State LIKE :stateCode
If I left join a table (using a subquery in the on clause of the join), MySQL does some very unexpected things. Here's the modified query with the left join...
SELECT Lender.id AS LenderId,
       Loans.Loan_ID AS LoanId,
       Parcels.Parcel_ID AS ParcelId,
       tr.Tax_ID AS TaxRecordId,
       tr.Tax_Year AS TaxYear
FROM parcels
INNER JOIN Loans ON (Parcels.Loan_ID = Loans.Loan_ID AND Parcels.Escrow = 1)
INNER JOIN Lender ON (Lender.id = Loans.Bank_ID)
INNER JOIN Tax_Record tr ON (tr.Parcel_ID = Parcels.Parcel_ID AND tr.Tax_Year = :taxYear)
LEFT OUTER JOIN taxrecordpayment trp ON trp.taxRecordId = tr.Tax_ID AND trp.paymentId = (
    SELECT p.id
    FROM taxrecordpayment trpi
    JOIN payments p ON p.id = trpi.paymentId
    WHERE trpi.taxRecordId = tr.Tax_ID AND p.isFullYear = 0
    ORDER BY p.dueDate, p.paymentSendTo
    LIMIT 1
)
WHERE Loans.Active = 1
  AND Loans.Date_Submitted IS NOT NULL
  AND Parcels.Municipality = :municipality
  AND Parcels.County = :county
  AND Parcels.State LIKE :stateCode
I would like to note that the left-joined table does not appear in the WHERE clause of the query at all, and I am not using it in the SELECT clause either. In real life I actually use the left-joined records in the SELECT clause, but in the effort to isolate this problem I have simplified the query down to the essential parts that cause the trouble.
Here's what is happening...
Where I used to get 35 records, now I get a random number of records approaching 35. Sometimes, I get 33. Other times, I get 27, or 29, or 31, and so on. I would never expect a left join like this to filter out any records from my result set. A left join should only add additional columns to the result set, particularly when - as is the case here - the left join table is not part of the where clause.
I have determined that the problem really only happens if the subquery has a non-deterministic sort. In other words, if I have two taxrecordpayment records that match the subquery and both have the same due date and the same "paymentSendTo" value, then I see the issue. If the inner subquery has a deterministic sort, the issue goes away.
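One workaround worth testing (this is my assumption, not a confirmed fix for the underlying bug) is to make that sort deterministic by adding a unique column as the final tiebreaker:

SELECT p.id
FROM taxrecordpayment trpi
JOIN payments p ON p.id = trpi.paymentId
WHERE trpi.taxRecordId = tr.Tax_ID AND p.isFullYear = 0
ORDER BY p.dueDate, p.paymentSendTo, p.id  -- p.id is unique, so ties break consistently
LIMIT 1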
I would imagine that some people will look at my simplified example and recommend that I simply remove the subquery. If my query were this simple in real life, that would be the way to go.
In reality, the entire query is more complicated, is hitting a LOT of data, and modifying it is possible, but costly. Removing the subquery is even more costly.
Has anyone seen this sort of behavior before? I would expect a non-deterministic subquery to simply produce inconsistent results and I would never expect a left join like this to actually filter records out when the left joined table is not used at all in the where clause.
Here is the query plan, as provided by EXPLAIN...
| id | select_type | table | partitions | type | possible_keys | key | key_len | ref | rows | filtered | Extra |
|----|-------------|-------|------------|------|---------------|-----|---------|-----|------|----------|-------|
| 1 | PRIMARY | parcels | NULL | range | PRIMARY,Loan_ID,state_county,ParcelsCounty,county_state,Location,CountyLoan | county_state | 106 | NULL | 590 | 1 | Using index condition; Using where |
| 1 | PRIMARY | tr | NULL | eq_ref | parcel_year,ParcelsTax_Record,Year | parcel_year | 8 | infoexchange.parcels.Parcel_ID,const | 1 | 100 | Using index |
| 1 | PRIMARY | Loans | NULL | eq_ref | PRIMARY,Bank_ID,Bank,DateSub,loan_number | PRIMARY | 4 | infoexchange.parcels.Loan_ID | 1 | 21.14 | Using where |
| 1 | PRIMARY | Lender | NULL | eq_ref | PRIMARY | PRIMARY | 8 | infoexchange.Loans.bank_id | 1 | 100 | Using index |
| 1 | PRIMARY | trp | NULL | eq_ref | taxRecordPayment_key,IDX_trp_pymtId_trId | taxRecordPayment_key | 8 | infoexchange.tr.Tax_ID,func | 1 | 100 | Using where; Using index |
| 2 | DEPENDENT SUBQUERY | trpi | NULL | ref | taxRecordPayment_key,IDX_trp_pymtId_trId | taxRecordPayment_key | 4 | infoexchange.tr.Tax_ID | 1 | 100 | Using index; Using temporary; Using filesort |
| 2 | DEPENDENT SUBQUERY | p | NULL | eq_ref | PRIMARY | PRIMARY | 4 | infoexchange.trpi.paymentId | 1 | 10 | Using where |
I have attempted to recreate this with a contrived data setup and an analogous query, but with my contrived data set I cannot get the subquery to behave non-deterministically, even though it suffers from the same problem as my subquery above (there are multiple records that match the subquery, and the ORDER BY is not unique for those records).
This seems to require a massive data set to start misbehaving. It happens on multiple distinct instances of MySQL 5.7, while a MySQL 5.6 instance does not demonstrate the problem at all. I am hoping someone can spot something in the above query plan to help me understand why the subquery is non-deterministic and - more importantly - why that causes records to get dropped from the result set.
I feel like this is either a data set issue (perhaps we need to run OPTIMIZE TABLE or do some other maintenance on our tables), or a bug in MySQL.
I have submitted a bug for this behavior.
https://bugs.mysql.com/bug.php?id=104824
You can recreate this behavior as follows...
CREATE TABLE tableA (
    id INTEGER NOT NULL PRIMARY KEY AUTO_INCREMENT,
    name VARCHAR(10)
);

CREATE TABLE tableB (
    id INTEGER NOT NULL PRIMARY KEY AUTO_INCREMENT,
    tableAId INTEGER NOT NULL,
    name VARCHAR(10),
    CONSTRAINT tableBFKtableAId FOREIGN KEY (tableAId) REFERENCES tableA (id)
);

INSERT INTO tableA (name)
VALUES ('he'),
       ('she'),
       ('it'),
       ('they');

INSERT INTO tableB (tableAId, name)
VALUES (1, 'hat'),
       (2, 'shoes'),
       (4, 'roof');
Run this query multiple times and the number of rows returned will vary:
SELECT COALESCE(b.id, -1) AS tableBId,
       a.id AS tableAId
FROM tableA a
LEFT JOIN tableB b ON (b.tableAId = a.id AND 0.5 > RAND());
I have two tables:
category with columns:

id  name
1   business
2   sports
...

article with columns:

id  title  categories
1   abc    1|2|3
2   xyz    1|2
I know there should be a separate table for article categories but I was given this.
Is it possible to write a query that returns:
id title category_names
1 xyz business,sports
I thought of splitting the string in the article.categories column and then using it in a query to look up the names in the category table, but I couldn't figure it out.
You should fix your data model. But, you can do this in SQL Server:
select a.*, s.names
from article a cross apply
     (select string_agg(c.name, ',') as names
      from string_split(a.categories, '|') ss join
           category c
           on try_convert(int, ss.value) = c.id
     ) s;
Here is a db<>fiddle.
Presumably, you already know the shortcomings of this data model:
SQL Server has poor string handling functionality.
Numbers should be stored as numbers not strings.
Foreign key references should be properly declared.
Such queries cannot make use of indexes and partitions.
If you really want to store multiple values in a field, SQL Server offers both JSON and XML. Strings are not the right approach.
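For example, a minimal sketch of the JSON route (this assumes a hypothetical version of the categories column that stores '[1,2,3]' instead of '1|2|3'; OPENJSON needs SQL Server 2016+ and STRING_AGG needs 2017+):

select a.id, a.title, string_agg(c.name, ',') as category_names
from article a cross apply
     openjson(a.categories) j join   -- each array element becomes a row
     category c
     on try_convert(int, j.value) = c.id
group by a.id, a.title;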
I have the classic arrangement for a many-to-many relation in a small flashcard-like application built using SQLite. Every card can have multiple tags, and every tag can have multiple cards. Each of these two entities has its own table, with a third table to link the records.
This is the table for Cards:
CREATE TABLE Cards (CardId INTEGER PRIMARY KEY AUTOINCREMENT,
                    Text TEXT NOT NULL,
                    Answer INTEGER NOT NULL,
                    Success INTEGER NOT NULL,
                    Fail INTEGER NOT NULL);
This is the table for Tags:
CREATE TABLE Tags (TagId INTEGER PRIMARY KEY AUTOINCREMENT,
                   Name TEXT UNIQUE NOT NULL);
This is the cross reference table:
CREATE TABLE CardsRelatedToTags (CardId INTEGER,
                                 TagId INTEGER,
                                 PRIMARY KEY (CardId, TagId));
I need to get a table of cards with their associated tags in a column separated by commas.
I can already get what I need for a single row knowing its Id with the following query:
SELECT Cards.CardId, Cards.Text,
       (SELECT group_concat(Tags.Name, ', ') FROM Tags
        JOIN CardsRelatedToTags ON CardsRelatedToTags.TagId = Tags.TagId
        WHERE CardsRelatedToTags.CardId = 1) AS TagsList
FROM Cards
WHERE Cards.CardId = 1
This will result in something like this:
CardId | Text | TagsList
1 | Some specially formatted text | Tag1, Tag2, TagN...
How can I get this type of result (TagsList from group_concat) for every row in Cards using a SQL query? Is it advisable to do so from a performance point of view? Or do I need to do this sort of "presentation" work in application code, using a simpler request to the DB?
Answering your code question:
SELECT c.CardId,
       c.Text,
       GROUP_CONCAT(t.Name, ', ') AS TagsList
FROM Cards c
JOIN CardsRelatedToTags crt ON c.CardId = crt.CardId
JOIN Tags t ON crt.TagId = t.TagId
GROUP BY c.CardId, c.Text

With no WHERE clause this returns the tag list for every card; add WHERE c.CardId = 1 back if you want a single one.
Now, to the matter of performance. Databases are a powerful tool and do not end at simple SELECT statements. You can definitely do what you need inside the DB (even SQLite). But it is bad practice to use a SELECT statement as a feed for one column inside another SELECT: it forces a scan of the table to get a result for each row of the input.
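To see the difference, here is what the correlated form of your original query looks like when generalized to all cards; the inner SELECT re-runs once per row of Cards, which is exactly the pattern to avoid:

SELECT Cards.CardId, Cards.Text,
       (SELECT group_concat(Tags.Name, ', ') FROM Tags
        JOIN CardsRelatedToTags ON CardsRelatedToTags.TagId = Tags.TagId
        WHERE CardsRelatedToTags.CardId = Cards.CardId) AS TagsList  -- evaluated per card
FROM Cards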
I have a single table of activities, some labelled 'Assessment' (type_id of 50) and some 'Counselling' (type_id of 9), with the dates of the activities. I need to compare these dates to find how long people wait for counselling after assessment. The table contains rows for many people, identified by the 'id' column. My problem is how to produce a result row with both the assessment details and the counselling details for the same person, so that I can compare the dates. I've tried joining the table to itself, and tried nested subqueries; I just can't fathom it. I'm using Access 2010, btw.
Please forgive my stupidity, but here's an example of joining the table to itself that doesn't work, producing nothing (not surprising):
Table looks like:
ID   TYPE_ID   ACTIVITY_DATE_TIME
----------------------------------
1    9         20130411
1    50        20130511
2    9         20130511
3    9         20130511
In the above, the people in the last two rows have only had counselling (type_id 9), so I want to ignore them and work only on the cases where both the assessment and counselling type_ids are present.
SELECT civicrm_activity.id,
       civicrm_activity.type_id,
       civicrm_activity.activity_date_time,
       civicrm_activity_1.type_id,
       civicrm_activity_1.activity_date_time
FROM civicrm_activity
INNER JOIN civicrm_activity AS civicrm_activity_1
    ON civicrm_activity.id = civicrm_activity_1.id
WHERE civicrm_activity.type_id = 9
  AND civicrm_activity_1.type_id = 50;
I'm actually wondering whether this is in fact possible to do with SQL at all? I hope it is! Thank you for your patience!
Sounds to me like you only want to get the ID numbers where you have a TYPE_ID entry of both 9 and 50.
SELECT DISTINCT id FROM civicrm_activity WHERE type_id = '9' AND id IN (SELECT id FROM civicrm_activity WHERE type_id = '50');
This will give you a list of ids that have entries with both type_id 9 and 50. With that list you can now go and get the specifics.
Use this SQL for the time of type_id 9
SELECT activity_date_time FROM civicrm_activity WHERE id = 'id_from_last_sql' AND type_id = '9'
Use this SQL for the time of type_id 50
SELECT activity_date_time FROM civicrm_activity WHERE id = 'id_from_last_sql' AND type_id = '50'
Your query looks OK to me, too. The one problem might be that you use only one table alias. I don't know, but perhaps Access treats the table name "specially" such that, in effect, the WHERE clause says
WHERE
civicrm_activity.type_id=9
AND civicrm_activity.type_id=50;
That would certainly explain zero rows returned!
To fix that, use an alias for each table. I suggest shorter ones,
SELECT A.id, A.type_id, A.activity_date_time,
B.type_id, B.activity_date_time
FROM civicrm_activity as A
JOIN civicrm_activity as B
ON A.id = B.id
WHERE A.type_id=9
AND B.type_id=50;
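To get the actual wait time, you can add DateDiff to that query (a sketch assuming ACTIVITY_DATE_TIME is a Date/Time field; if it is stored as text like 20130411, you would need to convert it first):

SELECT A.id,
       B.activity_date_time AS assessment_date,
       A.activity_date_time AS counselling_date,
       DateDiff("d", B.activity_date_time, A.activity_date_time) AS days_waited
FROM civicrm_activity AS A
INNER JOIN civicrm_activity AS B
    ON A.id = B.id
WHERE A.type_id = 9
  AND B.type_id = 50;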
I inherited a large existing DB and I'd like to know if I should refactor it because 95% of my queries require joining at least 4 tables.
The DB has 5 tables that only have an ID and Name column, with fewer than 20 rows each. I assume the author did this so he could change the names there and not have to change them in the other tables, but many of those tables are only referenced by one other table. Should I refactor these small two-column tables into the larger tables, adding a constraint to the column so users can't input incorrect names, instead of having separate tables?
Resist that urge. From your description I can deduce that the existing design is solid and probably well normalized. Your refactoring may actually undo a good db structure.
If you are bothered by writing a lot of joins in your queries I would suggest creating views to mitigate the boilerplate.
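For example (a sketch with hypothetical table and column names, since I don't know your schema):

CREATE VIEW order_details AS
SELECT o.id,
       o.order_date,
       s.name AS status_name,   -- lookup joins hidden inside the view
       r.name AS region_name
FROM orders o
JOIN order_status s ON o.status_id = s.id
JOIN regions r ON o.region_id = r.id;

Queries can then read FROM order_details without repeating the joins.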
...the author did this so he could change the names there and not change
them in the other tables...
That is evidence of good design and exactly what you should strive for in a normalized database.
No.
Your DB is normalized and proper.
And you save space, lookup time, and index size by storing an int rather than a varchar name.
Small tables are optimized away if they are properly keyed.
Sounds like what you have are lookup tables. Let me tell you what happens when you decide to put all lookups in one table with an additional column to specify which type it is. First, instead of joining to 4 different tables in one query, you have to join to the same table 4 times. There ends up being more contention for the resources in the "one table to rule them all". Further, you lose FK constraints. That means you eventually lose data integrity. So if one lookup is state, nothing will prevent you from putting the id values of a different lookup, say customer type, into the stateid column of the customeraddress table. When the lookups are separate, you can enforce that relationship.
Suppose instead of one big table you decide to have a constraint on the column for customer type. Constraints are now enforced, but you have a problem when they need to change: you have to alter the database in order to add a new type. Usually this is a very bad idea, especially when the table gets large.
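To illustrate (hypothetical names), the constraint version looks like this, and every new customer type means another ALTER TABLE:

ALTER TABLE customer
    ADD CONSTRAINT chk_customer_type
    CHECK (customer_type IN ('retail', 'wholesale', 'online'));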
Short story: Replacing strings with ID numbers has nothing to do with normalization. Using natural keys in your case might improve performance. In my tests, queries using natural keys were faster by 1 or 2 orders of magnitude.
You might have accepted an answer too quickly.
The DB has 5 tables that only have an ID and Name column, with fewer
than 20 rows each.
I'm assuming these tables have a structure something like this.
create table a (
    a_id integer primary key,
    a_name varchar(30) not null unique
);
create table b (...
-- Just like a
create table your_data (
    yet_another_id integer primary key,
    a_id integer not null references a (a_id),
    b_id integer not null references b (b_id),
    c_id integer not null references c (c_id),
    d_id integer not null references d (d_id),
    unique (a_id, b_id, c_id, d_id)
    -- other columns go here
);
And it's obvious that your_data will require four joins (at least) to get usable information from it.
But the names in tables a, b, c, and d are unique (ahem), so you can use the unique names as targets for foreign key references. You could rewrite the table your_data like this.
create table your_data (
    yet_another_id integer primary key,
    a_name varchar(30) not null references a (a_name),
    b_name varchar(30) not null references b (b_name),
    c_name varchar(30) not null references c (c_name),
    d_name varchar(30) not null references d (d_name),
    unique (a_name, b_name, c_name, d_name)
    -- other columns go here
);
Replacing id numbers with strings doesn't change the normal form. (And replacing strings with id numbers doesn't have anything to do with normalization.) If the original table were in 5NF, then this rewrite will be in 5NF, too.
But what about performance? Aren't id numbers plus joins supposed to be faster than strings?
I tested that by inserting 20 rows into each of the four tables a, b, c, and d. Then I generated a Cartesian product to fill one test table written with id numbers, and another using the names. (So, 160K rows in each.) I updated the statistics, and ran a couple of queries.
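The fill step was something like this sketch (reconstructed from the description above; 20 × 20 × 20 × 20 = 160K rows, assuming the test tables' id columns are auto-generated):

insert into your_data_id (a_id, b_id, c_id, d_id)
select a.a_id, b.b_id, c.c_id, d.d_id
from a cross join b cross join c cross join d;  -- Cartesian product of the four lookups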
explain analyze
select a.a_name, b.b_name, c.c_name, d.d_name
from your_data_id
inner join a on (a.a_id = your_data_id.a_id)
inner join b on (b.b_id = your_data_id.b_id)
inner join c on (c.c_id = your_data_id.c_id)
inner join d on (d.d_id = your_data_id.d_id)
...
Total runtime: 808.472 ms
explain analyze
select a_name, b_name, c_name, d_name
from your_data
Total runtime: 132.098 ms
The query using id numbers takes a lot longer to execute. Next I used a WHERE clause on all four columns, which returns a single row.
explain analyze
select a.a_name, b.b_name, c.c_name, d.d_name
from your_data_id
inner join a on (a.a_id = your_data_id.a_id and a.a_name = 'a one')
inner join b on (b.b_id = your_data_id.b_id and b.b_name = 'b one')
inner join c on (c.c_id = your_data_id.c_id and c.c_name = 'c one')
inner join d on (d.d_id = your_data_id.d_id and d.d_name = 'd one')
...
Total runtime: 14.671 ms
explain analyze
select a_name, b_name, c_name, d_name
from your_data
where a_name = 'a one' and b_name = 'b one' and c_name = 'c one' and d_name = 'd one';
...
Total runtime: 0.133 ms
The tables using id numbers took about 100 times longer to query.
Tests used PostgreSQL 9.something.
My advice: Try before you buy. I mean, test before you invest. Try rewriting your data table to use natural keys. Think carefully about ON UPDATE CASCADE and ON DELETE CASCADE. Test performance with representative sample data. Edit your original question and let us know what you found.
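For example, the natural-key version of one column with the cascades spelled out (a sketch; whether you want cascades at all is part of what to test):

create table your_data (
    yet_another_id integer primary key,
    a_name varchar(30) not null
        references a (a_name)
        on update cascade   -- renaming a value in table a propagates here
        on delete cascade   -- deleting a value in table a removes matching rows here
    -- other columns go here
);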