Check if a set of related rows exists - sql

I want to manage sets of things in a database. Assume the following two tables:
CREATE TABLE Sets (id BIGINT PRIMARY KEY, name VARCHAR(64));
CREATE TABLE SetItems (fkSet BIGINT, item BIGINT, FOREIGN KEY (fkSet) REFERENCES Sets(id));
I could create Sets by inserting a row into table Sets and adding one or more rows into SetItems with the corresponding fkSet.
Getting the items of a specific set is easy, it's basically SELECT * FROM SetItems WHERE fkSet = :id.
Problem: Now I want to find out if a set exists, given a set of SetItems.
Example: I want to find if there is a set with the items 2 and 5.
What I tried:
(1) I could try something like:
SELECT s.id FROM Sets s, SetItems i1, SetItems i2
WHERE s.id = i1.fkSet AND i1.item = 2
AND s.id = i2.fkSet AND i2.item = 5;
But such an approach has several drawbacks:
I guess it scales very badly if I need to check for more SetItems.
I need to assemble the SQL query with string concatenation, which I dislike, as it increases the chances of an injection attack.
It could also find sets which have additional items besides 2 and 5, which I would not like.
To better prevent SQL-Injections, I would prefer a way where I could use Prepared Statements. Technically, I could assemble the query-string for a prepared statement using String concatenation, and then set the query parameters, but this approach feels wrong somehow.
(2) Another solution: I could first get all sets the first SetItem is part of, and then check for each returned Set if it also contains all the other items and none additional ones. If the first SetItem is contained in a large number of Sets, this would result in a large number of queries, which seems inefficient and not scalable.
(3) For each SetItem that should be contained, I could get all sets it is in, and then do an intersection in my code outside SQL. This would require at most as many sql queries as there are SetItems to be checked.
(4) An alternative would be to store the setItems as a comma-separated list as VARCHAR, sorted in increasing order, directly as an additional column in the table Sets. The table SetItems would not be needed then. To check for the existence of a set I could just query if there is a row with the same comma-separated list. But then queries like "in which set is item xy contained" would not be possible so easily, relying on String-matching in the SQL-query. Not very relational...
Question: How can I efficiently query an SQL database if a set of related rows exists?
Should I structure my data differently? Should I use a NoSQL database for such a query?
I'm currently using H2 and would prefer a solution not using some specific SQL-dialect of a single database vendor.

You can use having to check how many distinct matches you have per set:
select i.fkSet
from SetItems i
where i.item in (2, 5)
group by i.fkSet
having count(distinct i.item) = 2
Of course, you need to make sure the final number (here 2) matches the number of values listed in the in clause.
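The query above still matches sets that contain additional items besides 2 and 5 (your third drawback). If you need an exact match, one possible extension, sketched against the same schema, is to group over all items of each set and require that the listed items account for every item in it:

select i.fkSet
from SetItems i
group by i.fkSet
having count(distinct case when i.item in (2, 5) then i.item end) = 2
and count(distinct i.item) = 2

The values in the in list and the two counts can be supplied as bind parameters of a prepared statement (one placeholder per value), so no user data needs to be concatenated into the query string.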


iSeries query changes selected RRN of subquery result rows

I'm trying to make an optimal SQL query for an iSeries database table that can contain millions of rows (perhaps up to 3 million per month). The only key I have for each row is its RRN (relative record number, which is the physical record number for the row).
My goal is to join the table with another small table to give me a textual description of one of the numeric columns. However, the number of rows involved can exceed 2 million, which typically causes the query to fail due to an out-of-memory condition. So I want to rewrite the query to avoid joining a large subset with any other table: the idea is to select a single page (up to 30 rows) within a given month, and then join only that subset to the second table.
However, I ran into a weird problem. I use the following query to retrieve the RRNs of the rows I want for the page:
select t.RRN2                                -- Gives correct RRNs
from (
    select row_number() over() as SEQ,
           rrn(e2) as RRN2, e2.*
    from TABLE1 as e2
    where e2.UPDATED between '2013-05-01' and '2013-05-31'
    order by e2.UPDATED, e2.ACCOUNT
) as t
where t.SEQ > 270 and t.SEQ <= 300           -- Paging
order by t.UPDATED, t.ACCOUNT
This query works just fine, returning the correct RRNs for the rows I need. However, when I attempted to join the result of the subquery with another table, the RRNs changed. So I simplified the query to a subquery within a simple outer query, without any join:
select rrn(e) as RRN, e.*
from TABLE1 as e
where rrn(e) in (
    select t.RRN2                            -- Gives correct RRNs
    from (
        select row_number() over() as SEQ,
               rrn(e2) as RRN2, e2.*
        from TABLE1 as e2
        where e2.UPDATED between '2013-05-01' and '2013-05-31'
        order by e2.UPDATED, e2.ACCOUNT
    ) as t
    where t.SEQ > 270 and t.SEQ <= 300       -- Paging
    order by t.UPDATED, t.ACCOUNT
)
order by e.UPDATED, e.ACCOUNT
The outer query simply grabs all of the columns of each row selected by the subquery, using the RRN as the row key. But this query does not work - it returns rows with completely different RRNs.
I need the actual RRN, because it will be used to retrieve more detailed information from the table in a subsequent query.
Any ideas about why the RRNs end up different?
Resolution
I decided to break the query into two calls, one to issue the simple subquery and return just the RRNs (rows-IDs), and the second to do the rest of the JOINs and so forth to retrieve the complete info for each row. (Since the table gets updated only once a day, and rows never get deleted, there are no potential timing problems to worry about.)
This approach appears to work quite well.
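For reference, a minimal sketch of the two calls (the description table and its join columns are made up for illustration):

-- Call 1: fetch just the page of RRNs; no join, so rrn() stays the physical record number
select t.RRN2
from (
    select row_number() over() as SEQ,
           rrn(e2) as RRN2, e2.UPDATED, e2.ACCOUNT
    from TABLE1 as e2
    where e2.UPDATED between '2013-05-01' and '2013-05-31'
    order by e2.UPDATED, e2.ACCOUNT
) as t
where t.SEQ > 270 and t.SEQ <= 300

-- Call 2: full details for exactly those rows, joins included
select rrn(e) as RRN, e.*, d.DESCRIPTION
from TABLE1 as e
join DESCTABLE as d on d.CODE = e.CODE       -- hypothetical lookup table
where rrn(e) in (?, ?, ?)                    -- one parameter marker per RRN from call 1
order by e.UPDATED, e.ACCOUNT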
Addendum
As to the question of why an out-of-memory error occurs, this appears to be a limitation on only some of our test servers. Some can only handle up to around 2m rows, while others can handle much more than that. So I'm guessing that this is some sort of limit imposed by the admins on a server-by-server basis.
Trying to use RRN as a primary key is asking for trouble.
I find it hard to believe there isn't a key available.
Granted, there may be no explicit primary key defined in the table itself. But is there a unique key defined in the table?
It's possible there are no keys defined in the table itself (a practice that is 20 years out of date), but in that case there's usually a logical file with a unique key that is used by the application as the de-facto primary key for the table.
Try looking for related objects via green screen (DSPDBR) or GUI (via "Show related"). Keyed logical files show in the GUI as views. So you'd need to look at the properties to determine if they are uniquely keyed DDS logicals instead of non-keyed SQL views.
A few times I've run into tables with no existing de-facto primary key. Usually, it was possible to figure out what could be defined as one from the existing columns.
When there truly is no PK, I simply add one, usually a generated identity column. There's a technique you can use to easily add columns without having to recompile or test any heritage RPG/COBOL programs. (And note that LVLCHK(*NO) is NOT it!)
The technique is laid out in Chapter 4 of the modernization Redbook:
http://www.redbooks.ibm.com/abstracts/sg246393.html
1) Move the data to a new PF (or SQL table)
2) Create a new LF using the name of the existing PF
3) Repoint the existing LFs to the new PF (or SQL table)
Done properly, the record format identifiers of the existing objects don't change and thus you don't have to recompile any RPG/COBOL programs.
I find it hard to believe that querying a table of a mere 3 million rows, even when joined with something else, should cause an out-of-memory condition, so in my view you should address this issue first (or cause it to be addressed).
As for your question of why the RRNs end up different I'll take the liberty of quoting the manual:
If the argument identifies a view, common table expression, or nested table expression derived from more than one base table, the function returns the relative record number of the first table in the outer subselect of the view, common table expression, or nested table expression.
A construct of the type ...where something in (select somethingelse...) typically translates into a join, so there.
Unless you can specifically control it, e.g., via ALWCPYDTA(*NO) for STRSQL, SQL may make copies of result rows for any intermediate set of rows. The RRN() function always accesses physical record number, as contrasted with the ROW_NUMBER() function that returns a logical row number indicating the relative position in an ordered (or unordered) set of rows. If a copy is generated, there is no way to guarantee that RRN() will remain consistent.
Other considerations apply over time; but in this case it's as likely to be simple copying of intermediate result rows as anything.

Select query to find all unique keys from a table where any word from a list appears in a free text field?

Looking for a little bit of SQL-foo to help find the most efficient way to do this query.
I have a table with two columns, ID and a small character field (<300 chars). The ID field is not unique, and I would like the result to be a distinct list of ID numbers. I also have an input list of words that I want to query on, say 'foo', 'bar' as the base case. For a result to be valid, it also must have at least one matching row for each word that is input.
What is a clean and efficient way to write this as one query? I am also open to multiple queries if there is no single-query way to execute it efficiently.
Please note that in the specific environment I am working with I cannot use more than 10 subqueries, and I may have 10 or more words provided as input (although I may be able to limit the input to 10 as long as the user is aware of this). Also note that I cannot use the 'IN' clause if it is possible that the list of values in it grows to be larger than a few thousand. I am querying a table with potentially millions of ID-text pairs.
Thanks for any and all advice!
Use a UDF that returns a table:
Consider writing a user-defined function (UDF) that takes a string containing all values that you wish to search for, separated by a delimiter. The UDF would split the data in the string and return it as a table. Then, include the table that the UDF returns as a join on the table in question.
Here's an example: http://everysolution.wordpress.com/2011/07/28/udf-to-split-a-delimited-string-and-return-it-as-a-table/
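For illustration, a minimal T-SQL sketch of such a function and its use (all object names here are made up; the question's table is assumed to be tbl with columns id and textfield):

CREATE FUNCTION dbo.SplitList (@list VARCHAR(8000), @delim CHAR(1))
RETURNS @result TABLE (word VARCHAR(300))
AS
BEGIN
    DECLARE @pos INT;
    SET @pos = CHARINDEX(@delim, @list);
    WHILE @pos > 0
    BEGIN
        INSERT INTO @result (word) VALUES (LEFT(@list, @pos - 1));
        SET @list = SUBSTRING(@list, @pos + 1, LEN(@list));
        SET @pos = CHARINDEX(@delim, @list);
    END;
    INSERT INTO @result (word) VALUES (@list);  -- last (or only) word
    RETURN;
END;

-- Distinct IDs whose text contains at least one of the words:
SELECT DISTINCT t.id
FROM tbl t
JOIN dbo.SplitList('foo,bar', ',') w
    ON t.textfield LIKE '%' + w.word + '%';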
If that small character field is always one word and you're looking for an exact match with a word in your list, I don't see why the below would not work. That is, if you're looking for IDs with 'foo', do you want only IDs that are 'foo', or might there be 'fooish', which should also be a match? In the latter case this won't work, in the former it should.
The query below assumes:
That your 2 column table is called "tbl"
That you can put the list of these 'input' words into a table; in my example below this other table is called "othertbl". It should contain however many words you're searching on, and it can be over 1,000 (the exists subquery doesn't have that limitation)
As stated before, I am assuming you are looking for exact matches on the 2nd column of "tbl", not partial or fuzzy matches
For performance reasons, you'll want to ensure that tbl.wordfield and othertbl.word are indexed (whatever the column names actually are)
select distinct id
from tbl
where exists
(select 'x' from othertbl where othertbl.word = tbl.wordfield)

SQL - renumbering a sequential column to be sequential again after deletion

I've researched and realize I have a unique situation.
First off, I am not allowed to post images to the board yet since I'm a new user, so see the appropriate links below.
I have multiple tables where a column (not always the identifier column) is sequentially numbered and shouldn't have any breaks in the numbering. My goal is to make sure this stays true.
Down and Dirty
We have an 'Event' table where we randomly select a percentage of the rows and insert them into a 'Results' table. The "ID" column from 'Results' is passed to a bunch of delete queries.
This more or less ensures that there are missing rows in several tables.
My problem:
Figuring out an SQL query that will renumber the column I specify. I would prefer not to drop the column.
Example delete query:
delete ItemVoid
from ItemTicket
join ItemVoid
  on ItemTicket.item_ticket_id = ItemVoid.item_ticket_id
where ItemTicket.ID in (select ID from results)
Example Tables Before:
Example Tables After:
As you can see, 2 rows were deleted from both tables based on the ID column. So now I have to figure out how to renumber the item_ticket_id and item_void_id columns so that the higher numbers decrease to fill the missing values, the next highest decreases, and so on. Problem #2: if an item_ticket_id changes in order to stay sequential in ItemTicket, that change also has to be propagated to ItemVoid's item_ticket_id.
I appreciate any advice you can give on this.
(answering an old question as it's the first search result when I was looking this up)
(MS T-SQL)
Resequencing an ID column (not an Identity one) that has gaps can be done with a simple CTE that uses row_number() to generate the new sequence. The UPDATE works through the CTE 'virtual table' without any extra steps, actually updating the underlying original table. Don't worry about the ID values clashing during the update; if you wonder what happens when IDs are set that already exist, it doesn't suffer that problem - the original sequence is changed to the new sequence in one go.
WITH NewSequence AS
(
    SELECT ID,
           ROW_NUMBER() OVER (ORDER BY ID) AS ID_New
    FROM YourTable
)
UPDATE NewSequence SET ID = ID_New;
Since you are looking for advice on this, mine is that you need to redesign this, as I see a big flaw in your design.
Instead of deleting the records and then going through the hassle of renumbering the remaining ones, use a bit flag that marks the records as Inactive. Then, when you query the records, just include a WHERE clause that returns only the records that are active:
SELECT *
FROM yourTable
WHERE Inactive = 0
Then you never have to worry about re-numbering the records. This also gives you the ability to go back and see the records that would have been deleted and you do not lose the history.
If you really want to delete the records and renumber them then you can perform this task the following way:
Create a new table
Insert your original data into the new table using the new numbers
Drop your old table
Rename the new table (now with the corrected numbers) to the old name
As you can see there would be a lot of steps involved in re-numbering the records. You are creating much more work this way when you could just perform an UPDATE of the bit flag.
You would change your DELETE query to something similar to this:
UPDATE ItemVoid
SET Inactive = 1
FROM ItemVoid
JOIN ItemTicket
  ON ItemVoid.item_ticket_id = ItemTicket.item_ticket_id
WHERE ItemTicket.ID IN (SELECT ID FROM results)
The bit flag is much easier and that would be the method that I would recommend.
The function that you are looking for is a window function. In standard SQL (supported by SQL Server, and by MySQL 8 and later), the function is row_number(). You use it as follows:
select row_number() over (order by <col>)
from <table>
In order to use this in your case, you would delete the rows from the table, then use a with statement to recalculate the row numbers, and then assign them using an update. For transactional integrity, you might wrap the delete and update into a single transaction.
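A rough T-SQL sketch of that sequence, reusing the question's table names (remember problem #2: ItemVoid.item_ticket_id must be realigned in the same transaction, which is omitted here):

BEGIN TRANSACTION;

DELETE ItemTicket
WHERE ID IN (SELECT ID FROM results);

WITH NewSequence AS
(
    SELECT item_ticket_id,
           ROW_NUMBER() OVER (ORDER BY item_ticket_id) AS item_ticket_id_new
    FROM ItemTicket
)
UPDATE NewSequence SET item_ticket_id = item_ticket_id_new;

COMMIT;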
Oracle supports similar functionality, but the syntax is a bit different. Oracle calls these functions analytic functions and they support a richer set of operations on them.
I would strongly caution you against using cursors, since they have lousy performance. Of course, this will not work on an identity column, since such a column cannot be modified.

SQL statement to retrieve a list of results (from another table) for a given record?

I have the following schema:
TABLE bands
-> band_id
-> property1
-> property2
-> ...
TABLE tracks
-> band_id
-> track_id
The tracks table typically maps a single band_id onto multiple track_ids (there are multiple tracks for each band).
My goal is to build, for the bands that match certain conditions (in terms of property1, property2, etc), a data structure of the format (band_id, property1, property2, ..., [list of track_ids for that band]).
Currently I am doing
SELECT bands.band_id, property1, property2, ..., track_id
FROM bands, tracks
WHERE bands.property1 = xyz
AND bands.band_id = tracks.band_id;
This generates tuples (band_id, property1, property2, ..., track_id) for the matching bands, one tuple per track. Currently I take this and aggregate (in my non-SQL code) all track_ids for a given band into a list. But it seems very wasteful, since everything except the track_id is returned multiple times for each single band.
The alternative I thought of was to first get the band_ids and properties for the matching bands, and then issue a separate query for the track_ids for each band. But this, in turn, means issuing a separate query for each band -- and there are tens of thousands of them, so this probably is not very smart.
Is there a better way to do this?
I have been looking at GROUP BY and that seems to be the general direction I am interested in, but I couldn't find an aggregate function that would basically return a list of values (in this case, track_ids) for each matching record in table bands.
You can return data in the form (band_id, [list of track_ids for that band as comma separated string]) but it's not the SQL way. Look at GROUP_CONCAT (for MySQL).
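For example, a minimal MySQL sketch against the schema above:

SELECT b.band_id, b.property1, b.property2,
       GROUP_CONCAT(t.track_id ORDER BY t.track_id) AS track_ids
FROM bands b
LEFT JOIN tracks t ON t.band_id = b.band_id
WHERE b.property1 = 'xyz'
GROUP BY b.band_id, b.property1, b.property2;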
The way you are currently doing it is usually better, although you should use an explicit JOIN keyword.
SELECT bands.band_id, track_id
FROM bands
LEFT JOIN tracks ON tracks.band_id = bands.band_id
WHERE bands.property1 = xyz;
You can put the results into the structure of your choice once you have read them from the database.
If you want to produce a string and you're using MySQL there's a GROUP_CONCAT aggregation function but there's no simple equivalent in SQL Server, at least. For SQL Server you can use FOR XML queries to produce XML output with the sort of grouping that you want but I'm not sure how helpful that is in general. (In many of my applications I use FOR XML queries to extract data from a database and then XSLT to transform it into HTML for display on the web, but I'm aware that this is a fairly specific solution rather than a general one.)

SQL argument limit in Oracle

It appears that there is a limit of 1000 arguments in an Oracle SQL IN list. I ran into this when generating queries such as:
select * from orders where user_id IN(large list of ids over 1000)
My workaround is to create a temporary table, insert the user ids into that first instead of issuing a query via JDBC that has a giant list of parameters in the IN.
Does anybody know of an easier workaround? Since we are using Hibernate I wonder if it automatically is able to do a similar workaround transparently.
An alternative approach would be to pass an array to the database and use a TABLE() function in the IN clause. This will probably perform better than a temporary table. It will certainly be more efficient than running multiple queries. But you will need to monitor PGA memory usage if you have a large number of sessions doing this stuff. Also, I'm not sure how easy it will be to wire this into Hibernate.
Note: TABLE() functions operate in the SQL engine, so they need us to declare a SQL type.
create or replace type tags_nt as table of varchar2(10);
/
The following sample populates an array with a couple of thousand random tags. It then uses the array in the IN clause of a query.
declare
    search_tags tags_nt;
    n pls_integer;
begin
    select name
    bulk collect into search_tags
    from ( select name
           from temp_tags
           order by dbms_random.value )
    where rownum <= 2000;

    select count(*)
    into n
    from big_table
    where name in ( select * from table (search_tags) );

    dbms_output.put_line('tags match '||n||' rows!');
end;
/
As long as the temporary table is a global temporary table (ie only visible to the session), this is the recommended way of doing things (and I'd go that route for anything more than a dozen arguments, let alone a thousand).
I'd wonder where/how you are building that list of 1000 arguments. If this is a semi-permanent grouping (eg all employees based in a particular location) then that grouping should be in the database and the join done there. Databases are designed and built to do joins really quickly, much quicker than pulling a bunch of IDs back to the mid tier and then sending them back to the database.
select * from orders
where user_id in
(select user_id from users where location = :loc)
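If the IDs genuinely originate outside the database, a sketch of the global temporary table route (names are illustrative):

CREATE GLOBAL TEMPORARY TABLE tmp_user_ids
( user_id NUMBER )
ON COMMIT DELETE ROWS;

-- per request: batch-insert the IDs (e.g. via JDBC addBatch), then join
INSERT INTO tmp_user_ids (user_id) VALUES (?);

SELECT o.*
FROM orders o
WHERE o.user_id IN (SELECT user_id FROM tmp_user_ids);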
You can add additional predicates to split the list into chunks of 1000:
select * from orders where user_id IN (<first batch of 1000>)
OR user_id IN (<second batch of 1000>)
OR user_id IN ...
The comments regarding "if these IDs are in your database, use joins/correlation instead" hold true. However, if your list of IDs comes from elsewhere, like a SOLR result, you can get around the temp-table requirement by issuing multiple queries, each with no more than 1000 IDs present, and then merging the results in memory. If you place the initial list of IDs in a unique collection like a hashset, you can pop off 1000 IDs at a time.