SQL composite key value vs string

I have a list of integers with 1 to N elements (N < 24).
At the moment I see two ways to store this value in a SQL database (I think the situation is the same for MySQL and Microsoft SQL Server).
Solution 1: use a VARCHAR column and commas to separate the integer values:
aaa | 40,50,50,10,600,200
aab | 40,50,600,200
aac | 40,50,50,10,600,200,500,1
Solution 2: create a new table with a composite primary key (key, id), where id is the index of the element in the list, plus a value column:
aaa | 0 | 40
aaa | 1 | 50
aaa | 2 | 50
....
aab | 0 | 40
aab | 1 | 50
aab | 2 | 600
....
Which is the better solution, considering that I have a lot of data to load and I need to refresh it many times?
Thanks
Edit:
My use case is: I need to refresh/read all the data (the whole list for a key) in a single call and I never fetch values one by one, which is why I think the first approach is better.
And all the math, like avg or max, I want to do on the client.

Usually the second approach is preferable. One advantage is ease of access:
-- Third value of aaa (positions start at 0)
select value from mytable where key = 'aaa' and pos = 2;
-- Average value of aaa
select avg(value) from mytable where key = 'aaa';
-- Average number of values per key
select avg(cnt) from (select count(*) as cnt from mytable group by key) counted;
Another is data consistency. You can add simple constraints to your columns, such as to allow only integers from, say, 1 to 700 and positions only up to 23.
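A sketch of that normalized table with such constraints (MySQL syntax; names are assumed, the key column needs quoting because KEY is a reserved word, and MySQL only enforces CHECK constraints from 8.0.16):
CREATE TABLE mytable (
    `key`  VARCHAR(10) NOT NULL,                        -- list identifier, e.g. 'aaa' ([key] in SQL Server)
    pos    SMALLINT    NOT NULL,                        -- position of the element within the list
    value  INT         NOT NULL,
    PRIMARY KEY (`key`, pos),
    CONSTRAINT ck_pos   CHECK (pos BETWEEN 0 AND 23),   -- positions 0..23 as suggested above
    CONSTRAINT ck_value CHECK (value BETWEEN 1 AND 700) -- example value range from above
);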
There is an exception to the above, though. If you use the database only to store the list as is and you don't want to select separate values or even aggregate them, i.e. if this is just a string to the DBMS and your queries don't care about its content, then store it as a simple string. Why not?

The second solution that you propose is the classic way of doing this; I would recommend it.
The first solution scales terribly and has a hundred other problems.

Single record buffering in SAP ABAP

My table is stud.
+-----+------+-------+
| no  | name | grade |
+-----+------+-------+
| 101 | naga | A     |
| 102 | raj  | A     |
| 103 | john | A     |
+-----+------+-------+
The query I'm using is:
SELECT * FROM stud WHERE no = 101 AND grade = 'A'.
If am using single record buffering, how much data is being stored in the buffer area?
This query doesn't do anything: there is no INTO clause, meaning it won't store anything that is selected.
You are probably looking to do something like this....
SELECT * FROM stud into wa_stud WHERE no = 101 AND grade = 'A'.
"processing of each single row is performed here
endselect.
or perhaps something like this, where only one row (the first row ordered by primary key) is selected...
select single * from stud into wa_stud where no = 101 and grade = 'A'.
or perhaps you want everything brought into an internal table, since no and grade do not make up the full primary key.
select * from stud into table it_stud where no = 101 and grade = 'A'.
This is from the ABAP keyword documentation in SE38:
SAP Buffer - Single Record Buffering
Only those rows in the table are buffered that are actually accessed.
This requires less space in the buffer than when using generic or full
buffering. On the other hand, more administration work is required and
significantly more direct database accesses.
So since your query returns a single record (based on the data you displayed), it should just get one row and hold it in the buffer.
I'd suggest looking at the SAP help and Google. Also have a look at SELECT SINGLE with incompletely specified keys: there used to be a problem with the buffer being bypassed in some situations, so have a read for reference.

For SQL performance, several equals or one BETWEEN

For a new development, I will have a big SQL table (~100M rows).
4 fields will be used to query the data.
Is it better to query one concatenated field with BETWEEN or several equality conditions?
Example:
MainTable
PkId | Label | FkId1 | FkId2 | FkId3 | FkId4
1 | test | 1 | 4 | 3 | 1
Data in the FK tables is static, for example:
FkTable1
Id | Value
1 | a
2 | b
3 | c
To query the data, the classic SQL query is:
select Label, FkId1, FkId2, FkId3, FkId4
from MainTable
where FkId1=1 and FkId2=2 and FkId3 in(2, 3)
The idea to optimize performance is to add a field "UniqueId", calculated on the back end before the insert:
UniqueId = FkId1*1000000 + FkId2*10000 + FkId3*100 + FkId4
PkId | Label | FkId1 | FkId2 | FkId3 | FkId4 | UniqueId
1 | test | 1 | 4 | 3 | 1 | 1040301
select Label, FkId1, FkId2, FkId3, FkId4
from MainTable
where UniqueId between 1020200 and 1040000
Moreover, with the UniqueId field, an index on this single field should be sufficient.
What do you think?
Thanks
For this query:
select Label, FkId1, FkId2, FkId3, FkId4
from MainTable
where FkId1 = 1 and FkId2 = 2 and FkId3 in (2, 3)
The optimal index is on MainTable(FkID1, FkId2, FkId3). You can also add Label and FkId4 to the index if you want a covering index (so the index can handle the entire query without referring to the original data pages).
There is no need for a computed field for the example you provided.
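For reference, a sketch of such a covering index (the index name is assumed; INCLUDE is SQL Server syntax, on MySQL you would simply append Label and FkId4 to the key columns):
CREATE INDEX IX_MainTable_Fk1_Fk2_Fk3
    ON MainTable (FkId1, FkId2, FkId3)
    INCLUDE (Label, FkId4);   -- covering: the query can be answered from the index alone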
Since you will have 100M rows, thinking about optimisations from the start seems sensible to me.
However, your proposed solution will not work in this way:
Each FK must get its own factor, i.e. a different power of 10; using the same factor (e.g. 10000) for two FKs would break the scheme.
Your select example has an IN clause (FkId3 in (2, 3)). This will only work if just one of the FKs is queried this way, and that FK should be the one with no factor in the formula for computing UniqueId (i.e. the one providing the least significant digits of UniqueId).
Now, seeing Gordon's answer, I agree with him, i.e. using a combined index may be good enough for you (though your solution would probably be slightly better). However, the combined index has a similar limitation: the FK field being queried with the IN clause should be the last field in the index.

SQL Server Primary Key for a range lookup

I have a static dataset that correlates a range of numbers to some metadata, e.g.
+--------+--------+---------+----------+-----------------+
| Min    | Max    | Country | CardType | Issuing Bank    |
+--------+--------+---------+----------+-----------------+
| 400011 | 400051 | USA     | VISA     | Bank of America |
+--------+--------+---------+----------+-----------------+
| 400052 | 400062 | UK      | MAESTRO  | HSBC            |
+--------+--------+---------+----------+-----------------+
I wish to look up the data for some arbitrary single value:
SELECT *
FROM SomeTable
WHERE Min <= 400030
AND Max >= 400030
I have about 200k of these range mappings, and am wondering what the best table structure is for SQL Server.
A composite key doesn't seem correct, because most of the time the value being looked up will fall between the two range values stored on disk. Similarly, indexing only the first column doesn't seem selective enough.
I know that 200k rows is fairly insignificant, and I can get by without doing much, but let's assume that the number of rows could be orders of magnitude greater.
If you usually search on both min and max then a compound key on (min, max) is appropriate. The engine will find all rows where min is less than X, then search within those results to find the rows where max is greater than Y.
The index would also be useful if you do searches on min only, but would not be applicable if you do searches only on max.
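A sketch of that compound index (the index name is assumed):
CREATE INDEX IX_SomeTable_Min_Max ON SomeTable (Min, Max);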
You can index the first number and then do the lookup like this:
select t.*,
       (select top 1 s.country
        from static s
        where t.num >= s.firstnum
        order by s.firstnum desc   -- the closest range starting at or below num
       ) country
from sometable t;
Or use outer apply:
select t.*, s.country
from sometable t outer apply
     (select top 1 s.country
      from static s
      where t.num >= s.firstnum
      order by s.firstnum desc
     ) s;
This should take advantage of an index on static(firstnum) or static(firstnum, country). This does not check against the second number. If that is important, use outer apply and do the check outside the subquery.
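A sketch of that outer apply variant with the check against the second number done outside the subquery (secondnum is an assumed name for the range's upper-bound column):
select t.*,
       case when t.num <= s.secondnum then s.country end as country  -- NULL when num falls in a gap
from sometable t outer apply
     (select top 1 st.secondnum, st.country
      from static st
      where t.num >= st.firstnum
      order by st.firstnum desc
     ) s;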
I would specify the primary key on (Min,Max). Queries are as simple as:
SELECT *
FROM SomeTable
WHERE @Value BETWEEN Min AND Max
I'd also define a constraint to enforce that Min <= Max. Then I would create a trigger to enforce uniqueness in ranges and prevent the database from storing an overlapping range.
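A sketch of that table definition (names are assumed; the anti-overlap trigger is omitted):
CREATE TABLE SomeTable (
    Min         INT          NOT NULL,
    Max         INT          NOT NULL,
    Country     VARCHAR(50)  NOT NULL,
    CardType    VARCHAR(20)  NOT NULL,
    IssuingBank VARCHAR(100) NOT NULL,
    CONSTRAINT PK_SomeTable PRIMARY KEY (Min, Max),
    CONSTRAINT CK_SomeTable_MinMax CHECK (Min <= Max)
);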
I believe it is easier/faster if you create a trigger for INSERT that fills in the related calculated columns: country, issuing bank, card-number length.
That way you do the calculation only once, instead of over 200k rows every time you run a query. Of course there is a space cost, but the queries will be much easier to maintain.
I remember once I had to calculate some sin and cos values to compute a distance, so I just created the calculated columns once.
After your update I think it is even easier:
+--------+--------+---------+----------+-----------------+--------+
| Min    | Max    | Country | CardType | Issuing Bank    | TypeID |
+--------+--------+---------+----------+-----------------+--------+
| 400011 | 400051 | USA     | VISA     | Bank of America | 1      |
+--------+--------+---------+----------+-----------------+--------+
| 400052 | 400062 | UK      | MAESTRO  | HSBC            | 2      |
+--------+--------+---------+----------+-----------------+--------+
Then your Card table would also get a TypeID column.

Convert any string to an integer

Simply put, I'd like to be able to convert any string to an integer, preferably being able to restrict the size of the integer and ensure that the result is always identical. In other words, is there a hashing function supported by Oracle that returns a numeric value, and can that value have a maximum?
To provide some context if needed, I have two tables that have the following, simplified, format:
Table 1                    Table 2
id | sequence_number       id | sequence_number
--------------------       --------------------
1  | 1                     1  | 2QD44561
1  | 2                     1  | 6HH00244
2  | 1                     2  | 5DH08133
3  | 1                     3  | 7RD03098
4  | 2                     4  | 8BF02466
The column sequence_number is number(3) in Table 1 and varchar2(11) in Table 2; it is part of the primary key in both tables.
The data is externally provided and cannot be changed; in Table 1 it is, I believe, created by a simple sequence, but in Table 2 it has a meaning. The data is made up but representative.
Someone has promised that we would output a number(3) field. While this is fine for the column in the first table, it causes problems for the second.
I would like to be able to convert sequence_number to an integer (easy) that is less than 1000 (harder) and, if at all possible, constant (seemingly impossible). This means that I would like '2QD44561' to always return the same value, e.g. 586. It does not matter much if two strings return the same number.
For simply converting to a number, I can use utl_raw.cast_to_number():
select utl_raw.cast_to_number((utl_raw.cast_to_raw('2QD44561'))) from dual;
UTL_RAW.CAST_TO_NUMBER((UTL_RAW.CAST_TO_RAW('2QD44561')))
---------------------------------------------------------
-2.033E+25
But as you can see, this isn't less than 1000.
I've also been playing around with dbms_crypto and utl_encode to see if I could come up with something but I've not managed to get a small integer. Is there a way?
How about ora_hash?
select ora_hash(sequence_number, 999) from table_2;
... will produce a maximum of 3 digits. You could also seed it with the id, I suppose, but I'm not sure that adds much with so few values, and I'm not sure you'd want that anyway.
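If you did want to mix the id in, a sketch (concatenating it into the hashed expression rather than using it as a seed):
select id, sequence_number,
       ora_hash(id || '~' || sequence_number, 999) as seq_hash   -- still capped at 999
from table_2;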
You are talking about using a hash function. There are lots of solutions out there - sha1 is very common.
But just FYI: when you say "restrict the size of the integer", understand that you will then be mapping an infinite set of strings or numbers onto a limited set of values. So while a given string will always map to the same value, it will not be the only string that maps to that value.

SQL magic - query shouldn't take 15 hours, but it does

Ok, so I have one really monstrous MySQL table (900k records, 180 MB total), and I want to extract, from each subgroup, the records with the highest date_updated and calculate a weighted average per group. The calculation runs for ~15 hours, and I have a strong feeling I'm doing it wrong.
First, the monstrous table layout:
category
element_id
date_updated
value
weight
source_prefix
source_name
The only key here is on element_id (BTREE, ~8k unique elements).
And the calculation process:
Make a hash for each group and subgroup.
CREATE TEMPORARY TABLE `temp1` (INDEX ( `ds_hash` ))
SELECT `category`,
`element_id`,
`source_prefix`,
`source_name`,
`date_updated`,
`value`,
`weight`,
MD5(CONCAT(`category`, `element_id`, `source_prefix`, `source_name`)) AS `subcat_hash`,
MD5(CONCAT(`category`, `element_id`, `date_updated`)) AS `cat_hash`
FROM `bigbigtable` WHERE `date_updated` <= '2009-04-28'
I really don't understand this fuss with hashes, but it ran faster this way. Dark magic, I presume.
Find the maximum date for each subgroup.
CREATE TEMPORARY TABLE `temp2` (INDEX ( `subcat_hash` ))
SELECT MAX(`date_updated`) AS `maxdate` , `subcat_hash`
FROM `temp1`
GROUP BY `subcat_hash`;
Join temp1 with temp2 to find weighted average values for categories
CREATE TEMPORARY TABLE `valuebycats` (INDEX ( `category` ))
SELECT `temp1`.`element_id`,
`temp1`.`category`,
`temp1`.`source_prefix`,
`temp1`.`source_name`,
`temp1`.`date_updated`,
AVG(`temp1`.`value`) AS `avg_value`,
SUM(`temp1`.`value` * `temp1`.`weight`) / SUM(`weight`) AS `rating`
FROM `temp1` LEFT JOIN `temp2` ON `temp1`.`subcat_hash` = `temp2`.`subcat_hash`
WHERE `temp2`.`subcat_hash` = `temp1`.`subcat_hash`
AND `temp1`.`date_updated` = `temp2`.`maxdate`
GROUP BY `temp1`.`cat_hash`;
(Now that I have looked through it and written it all down, it seems to me that I should use an INNER JOIN in that last query, to avoid a 900k*900k temp table; see the sketch below.)
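A sketch of that last step rewritten with an INNER JOIN (same columns, with the redundant WHERE condition folded into the ON clause):
CREATE TEMPORARY TABLE `valuebycats` (INDEX ( `category` ))
SELECT `temp1`.`element_id`,
       `temp1`.`category`,
       `temp1`.`source_prefix`,
       `temp1`.`source_name`,
       `temp1`.`date_updated`,
       AVG(`temp1`.`value`) AS `avg_value`,
       SUM(`temp1`.`value` * `temp1`.`weight`) / SUM(`temp1`.`weight`) AS `rating`
FROM `temp1`
INNER JOIN `temp2`
        ON `temp2`.`subcat_hash` = `temp1`.`subcat_hash`
       AND `temp2`.`maxdate` = `temp1`.`date_updated`
GROUP BY `temp1`.`cat_hash`;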
Still, is there a normal way to do so?
UPD: reference picture removed (dead ImageShack link).
UPD: EXPLAIN for proposed solution:
+----+-------------+-------+------+---------------+------------+---------+--------------------------------------------------------------------------------------+--------+----------+----------------------------------------------+
| id | select_type | table | type | possible_keys | key | key_len | ref | rows | filtered | Extra |
+----+-------------+-------+------+---------------+------------+---------+--------------------------------------------------------------------------------------+--------+----------+----------------------------------------------+
| 1 | SIMPLE | cur | ALL | NULL | NULL | NULL | NULL | 893085 | 100.00 | Using where; Using temporary; Using filesort |
| 1 | SIMPLE | next | ref | prefix | prefix | 1074 | bigbigtable.cur.source_prefix,bigbigtable.cur.source_name,bigbigtable.cur.element_id | 1 | 100.00 | Using where |
+----+-------------+-------+------+---------------+------------+---------+--------------------------------------------------------------------------------------+--------+----------+----------------------------------------------+
Using hashes is one of the ways in which a database engine can execute a join. It should be very rare that you'd have to write your own hash-based join, and this certainly doesn't look like one of those cases: it's a 900k-row table with some aggregates.
Based on your comment, this query might do what you are looking for:
SELECT cur.source_prefix,
cur.source_name,
cur.category,
cur.element_id,
MAX(cur.date_updated) AS DateUpdated,
AVG(cur.value) AS AvgValue,
SUM(cur.value * cur.weight) / SUM(cur.weight) AS Rating
FROM eev0 cur
LEFT JOIN eev0 next
ON next.date_updated < '2009-05-01'
AND next.source_prefix = cur.source_prefix
AND next.source_name = cur.source_name
AND next.element_id = cur.element_id
AND next.date_updated > cur.date_updated
WHERE cur.date_updated < '2009-05-01'
AND next.category IS NULL
GROUP BY cur.source_prefix, cur.source_name,
cur.category, cur.element_id
The GROUP BY performs the calculations per source+category+element.
The JOIN is there to filter out old entries. It looks for later entries, and then the WHERE statement filters out the rows for which a later entry exists. A join like this benefits from an index on (source_prefix, source_name, element_id, date_updated).
There are many ways of filtering out old entries, but this one tends to perform reasonably well.
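For reference, a sketch of the index mentioned above (the index name is assumed):
CREATE INDEX ix_source_element_date
    ON bigbigtable (source_prefix, source_name, element_id, date_updated);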
Ok, so 900K rows isn't a massive table. It's reasonably big, but your queries really shouldn't be taking that long.
First things first, which of the 3 statements above is taking the most time?
The first problem I see is with your first query: your WHERE clause doesn't use an indexed column, which means it has to do a full scan of the entire table.
Create an index on the date_updated column, then run the query again and see what that does for you.
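For example (the index name is assumed):
CREATE INDEX ix_bigbigtable_date_updated ON bigbigtable (date_updated);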
If you don't need the hashes and are only using them to avail of the dark magic, then remove them completely.
Edit: Someone with more SQL-fu than me will probably reduce your whole set of logic into one SQL statement without the use of the temporary tables.
Edit: My SQL is a little rusty, but are you joining twice in the third SQL statement? Maybe it won't make a difference, but shouldn't it be:
SELECT temp1.element_id,
temp1.category,
temp1.source_prefix,
temp1.source_name,
temp1.date_updated,
AVG(temp1.value) AS avg_value,
SUM(temp1.value * temp1.weight) / SUM(weight) AS rating
FROM temp1 LEFT JOIN temp2 ON temp1.subcat_hash = temp2.subcat_hash
WHERE temp1.date_updated = temp2.maxdate
GROUP BY temp1.cat_hash;
or
SELECT temp1.element_id,
temp1.category,
temp1.source_prefix,
temp1.source_name,
temp1.date_updated,
AVG(temp1.value) AS avg_value,
SUM(temp1.value * temp1.weight) / SUM(weight) AS rating
FROM temp1, temp2
WHERE temp2.subcat_hash = temp1.subcat_hash
AND temp1.date_updated = temp2.maxdate
GROUP BY temp1.cat_hash;