Say I have two queries returning two tables with the same number of rows.
For example, if query 1 returns
| a | b | c |
| 1 | 2 | 3 |
| 4 | 5 | 6 |
and query 2 returns
| d | e | f |
| 7 | 8 | 9 |
| 10 | 11 | 12 |
How can I obtain the following, assuming both queries are opaque?
| a | b | c | d | e | f |
| 1 | 2 | 3 | 7 | 8 | 9 |
| 4 | 5 | 6 | 10 | 11 | 12 |
My current solution is to add a row number column to each query and inner join them
on that column.
SELECT
q1_with_rownum.*,
q2_with_rownum.*
FROM (
SELECT ROW_NUMBER() OVER () AS q1_rownum, q1.*
FROM (.......) q1
) q1_with_rownum
INNER JOIN (
SELECT ROW_NUMBER() OVER () AS q2_rownum, q2.*
FROM (.......) q2
) q2_with_rownum
ON q1_rownum = q2_rownum
However, if there is a column named q1_rownum in either of the queries,
the above will break. It is not possible for me to look into q1 or q2;
the only information available is that they are both valid SQL queries
and do not contain columns with the same names. Is there any SQL construct
similar to UNION, but for columns instead of rows?
There is no such construct. A row in a table is an entity.
If you are constructing generic code to run on arbitrary queries, you can try using less common names, such as "an_unusual_query_rownum" -- or something more esoteric than that. I would suggest using the same name in both subqueries and then joining with a USING clause.
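A minimal sketch of that idea, reusing the question's placeholder subqueries (the alias is arbitrary; anything unlikely to collide will do):
SELECT * -- the USING column appears only once in the output
FROM (
    SELECT ROW_NUMBER() OVER () AS _join_rownum_x7, q1.*
    FROM (.......) q1
) q1_with_rownum
INNER JOIN (
    SELECT ROW_NUMBER() OVER () AS _join_rownum_x7, q2.*
    FROM (.......) q2
) q2_with_rownum
USING (_join_rownum_x7);
Note that SELECT * after a USING join still returns the join column once, so the row number leaks into the result; with opaque queries there is no portable way to drop it without listing the columns explicitly.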
Not sure if I understood your exact problem, but I think you mean both q1 and q2 are joined on a column with the same name?
You should add each table name before the column to distinguish which column is referenced:
"table1"."similarColumnName" = "table2"."similarColumnName"
EDIT:
So, the problem is that if there is already a column with the same alias as your ROW_NUMBER() column, the JOIN cannot be made because you have an ambiguous column name.
The easier solution, if you cannot know the incoming queries' columns, is to pick a distinctive alias, for example _query_join_row_number.
EDIT2:
You could look into prefixing all columns with their originating query's name, thus removing any conflict (the generated q1_with_rownum.q1_rownum can then never collide with a prefixed original column).
An example question on this: In a join, how to prefix all column names with the table it came from
I am currently working with an H2 database and I have written the following SQL; however, the H2 database engine does not support NOT IN performed on a multiple-column sub-query.
DELETE FROM AllowedParam_map
WHERE (AllowedParam_map.famid,AllowedParam_map.paramid) NOT IN (
SELECT famid,paramid
FROM macros
LEFT JOIN macrodata
ON macros.id != macrodata.macroid
ORDER BY famid)
Essentially, I want to remove rows from AllowedParam_map wherever it has the same combination of famid and paramid as the sub-query.
Edit: To clarify, the sub-query is specifically trying to find famid/paramid combinations that are NOT present in macrodata, in an effort to weed out AllowedParam_map, hence the ON macros.id != macrodata.macroid. I'm also terrible at SQL, so this might be completely the wrong way to do it.
Edit 2: Here is some more info about the pertinent schema:
Macros
| ID | NAME | FAMID |
| 0 | foo | 1 |
| 1 | bar | 1 |
| 2 | baz | 1 |
MacroData
| ID | MACROID | PARAMID | VALUE |
| 0 | 0 | 1 | 1024 |
| 1 | 0 | 2 | 200 |
| 2 | 0 | 3 | 89.85 |
AllowedParam_Map
| ID | FAMID | PARAMID |
| 0 | 1 | 1 |
| 1 | 1 | 2 |
| 2 | 1 | 3 |
| 3 | 1 | 4 |
The parameters are allowed on a per-family basis. Notice how the AllowedParam_map table contains an entry for famid=1 and paramid=4, even though macro 0, aka "foo", does not have an entry for paramid=4. If we expand this, there might be another famid=1 macro that has paramid=4, but we can't be sure. I want to cull from the AllowedParam_map table any unused parameters, based on the data in the macrodata table.
IN and NOT IN can always be replaced with EXISTS and NOT EXISTS.
Some points first:
You are using an ORDER BY in your subquery, which is of course superfluous.
You are outer-joining a table, which should have no effect when asking for existence. Either you need to look up a field in the outer-joined table, in which case inner-join it, or you don't, in which case remove it from the query. (It's odd to join every non-related record (macros.id != macrodata.macroid) anyway.)
You said at first in the comments section that both famid and paramid reside in table macros, which would have let you remove the join to macrodata entirely.
As you say now that famid is in table macros and paramid is in table macrodata, and you want to look up pairs that exist in AllowedParam_map but not in the aforementioned tables, you seem to be looking for a simple inner join:
DELETE FROM AllowedParam_map
WHERE NOT EXISTS
(
SELECT *
FROM macros m
JOIN macrodata md ON md.macroid = m.id
WHERE m.famid = AllowedParam_map.famid
AND md.paramid = AllowedParam_map.paramid
);
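Before running the DELETE, a hedged sanity check is to execute the same predicate as a SELECT and eyeball the rows that would be removed:
-- preview of the rows the DELETE above would remove
SELECT *
FROM AllowedParam_map
WHERE NOT EXISTS
(
SELECT *
FROM macros m
JOIN macrodata md ON md.macroid = m.id
WHERE m.famid = AllowedParam_map.famid
AND md.paramid = AllowedParam_map.paramid
);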
You can use not exists instead:
DELETE FROM AllowedParam_map m
WHERE NOT EXISTS (SELECT 1
                  FROM macros LEFT JOIN
                       macrodata
                       ON macros.id <> macrodata.macroid -- I strongly suspect this should be =
                  WHERE m.famid = ?.famid AND m.paramid = ?.paramid -- add the appropriate table aliases
                 );
Notes:
I strongly suspect the <> should be =. <> does not make sense in this context.
Replace the ? with the appropriate table alias.
NOT EXISTS is better than NOT IN anyway. It does what you expect if one of the values is NULL.
I have something like these 2 tables (but with millions of rows in reality):
items:
| X | Y |
---------
| 1 | 2 |
| 3 | 4 |
---------
details:
| X | A | B |
-------------
| 1 | a | b |
| 1 | c | d |
| 3 | e | f |
| 3 | g | h |
-------------
I have to aggregate several rows of one table (details) for one row in another table (items), to show them in a GridView like this:
| items.X | items.Y | details.A | details.B |
---------------------------------------------
| 1 | 2 | a, c | b, d |
| 3 | 4 | e, g | f, h |
---------------------------------------------
I already read this and the related questions and I know about GROUP_CONCAT, but I am not allowed to install it on the customer's system. Because I don't have a chance to do this natively, I created a stored function (which I am allowed to create) that returns a table with the columns X, A and B. This function works fine so far, but I don't seem to be able to get these columns added to my result set.
Currently I'm trying to join the function result with a query on items; the join criterion would be the X column in the example above. I made a minimal example with the AdventureWorks2012 database, which contains a table function dbo.ufnGetContactInformation(@PersonID INT), to join with the [Person].[EmailAddress] table on BusinessEntityID:
SELECT
[EmailAddress]
-- , p.[FirstName]
-- , p.[LastName]
FROM
[Person].[EmailAddress] e
INNER JOIN
dbo.ufnGetContactInformation(e.[BusinessEntityID]) p
ON p.[PersonID] = e.[BusinessEntityID]
The two commented lines indicate what I am actually trying to do, but when uncommented they mask the actual error I get:
Msg 4104, Level 16, State 1, Line 6
The multi-part identifier 'e.BusinessEntityID' could not be bound.
I understand that during the joining process there is no value for e.[BusinessEntityID] yet, so I cannot select a specific subset in the function by using the function parameters; this should be in the join criteria anyway. Additionally, I cannot have the function return all rows or create a temporary table, because this is insanely slow and expensive in both time and space in my specific situation.
Is there another way to achieve something like this with 2 existing tables and a table function?
Use APPLY.
CROSS APPLY is similar to an INNER JOIN; OUTER APPLY is similar to a LEFT JOIN.
SELECT
[EmailAddress]
-- , p.[FirstName]
-- , p.[LastName]
FROM
[Person].[EmailAddress] e
CROSS APPLY
dbo.ufnGetContactInformation(e.[BusinessEntityID]) p
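Applied to the original items/details example, a sketch of the same APPLY pattern could look like the following; STUFF plus FOR XML PATH is the usual pre-2017 SQL Server substitute for GROUP_CONCAT (on SQL Server 2017+ STRING_AGG is simpler), and the table and column names are taken from the question:
SELECT
    i.X,
    i.Y,
    a.A,
    b.B
FROM items i
CROSS APPLY (
    -- build the comma-separated list of details.A for this item
    SELECT STUFF((SELECT ', ' + d.A
                  FROM details d
                  WHERE d.X = i.X
                  FOR XML PATH('')), 1, 2, '') AS A
) a
CROSS APPLY (
    -- same for details.B
    SELECT STUFF((SELECT ', ' + d.B
                  FROM details d
                  WHERE d.X = i.X
                  FOR XML PATH('')), 1, 2, '') AS B
) b;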
Problem: an SQL query that looks at the values on the "many" side of the relationship and doesn't return values from the "1" side.
Tables Example: (this shows two different tables).
+---------------+----------------------------+-------+
| Unique Number | <-- Table 1 -- Table 2 --> | Roles |
+---------------+----------------------------+-------+
| 1 | | A |
| 2 | | B |
| 3 | | C |
| 4 | | D |
| 5 | | |
| 6 | | |
| 7 | | |
| 8 | | |
| 9 | | |
| 10 | | |
+---------------+----------------------------+-------+
When I run my query, I get multiple unique numbers that show all of the roles associated with each number, like so.
+---------------+-------+
| Unique Number | Roles |
+---------------+-------+
| 1 | C |
| 1 | D |
| 2 | A |
| 2 | B |
| 3 | A |
| 3 | B |
| 4 | C |
| 4 | A |
| 5 | B |
| 5 | C |
| 5 | D |
| 6 | D |
| 6 | A |
+---------------+-------+
I would like to be able to run my query and be able to say, "When the role of A is present, don't even show me the unique numbers that have the role of A."
Maybe if SQL could look at the roles and say: when role A comes up, grab the unique number and remove it from column 1.
Based on what I would "like" to happen (I put that in quotation marks as this might not even be possible), the following is what I would expect my query to return.
+---------------+-------+
| Unique Number | Roles |
+---------------+-------+
| 1 | C |
| 1 | D |
| 5 | B |
| 5 | C |
| 5 | D |
+---------------+-------+
UPDATE:
Query Example: I am querying 8 tables, but I condensed it to 4 for simplicity.
SELECT
c.UniqueNumber,
cp.pType,
p.pRole,
a.aRole
FROM c
JOIN cp ON cp.uniqueVal = c.uniqueVal
JOIN p ON p.uniqueVal = cp.uniqueVal
LEFT OUTER JOIN a ON a.uniqueVal = p.uniqueVal
WHERE
--I do some basic filtering to get to the relevant clients data but nothing more than that.
ORDER BY
c.uniqueNumber
Table sizes: these tables can have anywhere from 50,000 rows to 500,000+
Pretending the table name is t and the column names are alpha and numb:
SELECT t.numb, t.alpha
FROM t
LEFT JOIN t AS s ON t.numb = s.numb
AND s.alpha = 'A'
WHERE s.numb IS NULL;
You can also do a subselect:
SELECT numb, alpha
FROM t
WHERE numb NOT IN (SELECT numb FROM t WHERE alpha = 'A');
Or one of the following if the subselect is materializing more than once (pick the one that is faster, i.e. the one with the smaller subtable size):
SELECT t.numb, t.alpha
FROM t
JOIN (SELECT numb FROM t GROUP BY numb HAVING SUM(alpha = 'A') = 0) AS s USING (numb);
SELECT t.numb, t.alpha
FROM t
LEFT JOIN (SELECT numb FROM t GROUP BY numb HAVING SUM(alpha = 'A') > 0) AS s USING (numb)
WHERE s.numb IS NULL;
But the first one is probably faster and better.[1] Any of these methods can be folded into a larger query with multiple additional tables being joined in.
[1] Straight joins tend to be easier to read and faster to execute than queries involving subselects, and the usual exceptions are exceptionally rare for self-referential joins, as they require a large mismatch in the sizes of the tables. You might hit those exceptions, though, if the number of rows that reference the 'A' alpha value is exceptionally small and it is indexed properly.
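If you do hit that case, the index is cheap to try (a sketch, using the placeholder names t, numb, alpha from above):
-- supports the self-join lookup ON t.numb = s.numb AND s.alpha = 'A'
CREATE INDEX idx_t_numb_alpha ON t (numb, alpha);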
There are many ways to do it, and the trade-offs depend on factors such as the size of the tables involved and what indexes are available. On general principles, my first instinct is to avoid a correlated subquery such as the one another, now-deleted answer proposed, but if the relationship table is small then it probably doesn't matter.
This version instead uses an uncorrelated subquery in the where clause, in conjunction with the not in operator:
select num, role
from one_to_many
where num not in (select otm2.num from one_to_many otm2 where otm2.role = 'A')
That form might be particularly effective if there are many rows in one_to_many, but only a small proportion have role A. Of course you can add an order by clause if the order in which result rows are returned is important.
There are also alternatives involving joining inline views or CTEs, and some of those might have advantages under particular circumstances.
I have a large table and I need to check for similar rows. I don't need all column values to be the same, just similar. The rows must not be "distant" (determined by a query over another table), no value may be too different (I have already written the queries for these conditions), and most other values must be the same. I have to expect some ambiguity, so one or two differing values shouldn't break the "similarity" (well, I could get better performance by accepting only "completely equal" rows, but this simplification could cause errors; I will offer it as an option).
The way I am going to solve this is through PL/pgSQL: a FOR loop iterating through the results of the previous queries. For each column, an IF tests whether it differs; if yes, I increment a difference counter and go on. At the end of each iteration, I compare the counter to a threshold to see whether I should keep the row as "similar" or not.
Such a PL/pgSQL-heavy approach seems slow in comparison to a pure SQL query, or to an SQL query with some PL/pgSQL functions involved. It would be easy to test for rows with all but X columns equivalent if I knew which columns should differ, but the difference can occur in any of some 40 columns. Is there any way to solve this with a single query? If not, is there any faster way than examining all the rows?
EDIT: I mentioned a table; in fact it is a group of six tables linked by 1:1 relationships. I don't feel like explaining what is what, that's a different question. Extrapolating from doing this over one table to my situation is easy for me. So I simplified it (but did not oversimplify it - it should demonstrate all the difficulties I have there) and made an example demonstrating what I need. NULL and anything else should count as "different". No need to write a script testing it all - I just need to find out whether it is possible to do in any way more efficient than the one I thought of.
The point is that I don't need to count rows (as usual), but columns.
EDIT2: previous fiddle - it wasn't so short, so I leave it here just for archival reasons.
EDIT3: simplified example here - just NOT NULL integers, preprocessing omitted. Current state of data:
select * from foo;
id | bar1 | bar2 | bar3 | bar4 | bar5
----+------+------+------+------+------
1 | 4 | 2 | 3 | 4 | 11
2 | 4 | 2 | 4 | 3 | 11
3 | 6 | 3 | 3 | 5 | 13
When I run select similar_records( 1 );, I should get only row 2 (two columns with different values, which is within the limit), not row 3 (four different values - outside the limit of at most two differences).
To find rows that only differ on a given maximum number of columns:
WITH cte AS (
SELECT id
,unnest(ARRAY['bar1', 'bar2', 'bar3', 'bar4', 'bar5']) AS col -- more
,unnest(ARRAY[bar1::text, bar2::text, bar3::text
, bar4::text, bar5::text]) AS val -- more
FROM foo
)
SELECT b.id, count(a.val <> b.val OR NULL) AS cols_different
FROM (SELECT * FROM cte WHERE id = 1) a
JOIN (SELECT * FROM cte WHERE id <> 1) b USING (col)
GROUP BY b.id
HAVING count(a.val <> b.val OR NULL) < 3 -- max. diffs allowed
ORDER BY 2;
I ignored all the other distracting details in your question.
Demonstrating with 5 columns. Add more as required.
If columns can be NULL you may want to use IS DISTINCT FROM instead of <>.
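For instance, a sketch of the NULL-safe variant, changing only the comparison (a NULL against a non-NULL value then counts as a difference):
WITH cte AS (
   SELECT id
        ,unnest(ARRAY['bar1', 'bar2', 'bar3', 'bar4', 'bar5']) AS col
        ,unnest(ARRAY[bar1::text, bar2::text, bar3::text
                    , bar4::text, bar5::text]) AS val
   FROM foo
   )
SELECT b.id, count(a.val IS DISTINCT FROM b.val OR NULL) AS cols_different
FROM  (SELECT * FROM cte WHERE id = 1)  a
JOIN  (SELECT * FROM cte WHERE id <> 1) b USING (col)
GROUP BY b.id
HAVING count(a.val IS DISTINCT FROM b.val OR NULL) < 3 -- max. diffs allowed
ORDER BY 2;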
These queries use the somewhat unorthodox, but handy, parallel unnest(). Both arrays must have the same number of elements for it to work. Details:
Is there something like a zip() function in PostgreSQL that combines two arrays?
SQL Fiddle (building on yours).
Instead of a loop comparing each row to all the others, do a self join:
select f0.id, f1.id
from foo f0 inner join foo f1 on f0.id < f1.id
where
f0.bar1 = f1.bar1 and f0.bar2 = f1.bar2
and
#(f0.bar3 - f1.bar3) <= 1
and
f0.bar4 = f1.bar4 and f0.bar5 = f1.bar5
or
f0.bar4 = f1.bar5 and f0.bar5 = f1.bar4
and
#(f0.bar6 - f1.bar6) <= 2
and
f0.bar7 is not null and f1.bar7 is not null and #(f0.bar7 - f1.bar7) <= 5
or
f0.bar7 is null and f1.bar7 <= 3
or
f1.bar7 is null and f0.bar7 <= 3
and
f0.bar8 = f1.bar8
and
#(f0.bar11 - f1.bar11) <= 5
;
id | id
----+----
1 | 4
1 | 5
4 | 5
(3 rows)
select * from foo;
id | bar1 | bar2 | bar3 | bar4 | bar5 | bar6 | bar7 | bar8 | bar9 | bar10 | bar11
----+------+------+------+------+------+------+------+------+------+-------+-------
1 | abc | 4 | 2 | 3 | 4 | 11 | 7 | t | t | f | 42.1
2 | abc | 5 | 1 | 6 | 2 | 8 | 39 | t | t | t | 19.6
3 | xyz | 4 | 2 | 3 | 5 | 14 | 82 | t | f | | 95
4 | abc | 4 | 2 | 4 | 3 | 11 | 7 | t | t | f | 42.1
5 | abc | 4 | 2 | 3 | 4 | 13 | 6 | t | t | | 37.7
Are you aware that the AND operator has priority over OR? I'm asking because it looks like the WHERE clause in your query is not what you want. I mean, in your expression it is enough for f0.bar7 is null and f1.bar7 <= 3 to be true to include the pair.
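To illustrate, a sketch of just the bar7 conditions with the grouping made explicit, keeping the answer's # operator verbatim (presumably an absolute-value-style operator); without the outer parentheses each OR branch leaks past the surrounding ANDs:
select f0.id, f1.id
from foo f0 inner join foo f1 on f0.id < f1.id
where f0.bar8 = f1.bar8
and (
     -- inside the parentheses AND still binds first, giving three intended branches
     f0.bar7 is not null and f1.bar7 is not null and #(f0.bar7 - f1.bar7) <= 5
     or f0.bar7 is null and f1.bar7 <= 3
     or f1.bar7 is null and f0.bar7 <= 3
    );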
I have a table that contains URL strings, e.g.
/A/B/C
/C/E
/C/B/A/R
Each string is split into tokens, where the separator in my case is '/'. Then I assign an integer value to each token and put them into a dictionary (a different database table), e.g.
A : 1
B : 2
C : 3
E : 4
D : 5
G : 6
R : 7
My problem is to find those rows in the first table which contain a given sequence of tokens. An additional complication is that my input is a sequence of ints, i.e. I have
3, 2
and I'd like to find following rows
/A/B/C
/C/B/A/R
How can I do this in an efficient way? By this I mean: how should I design the proper database structure?
I use PostgreSQL; the solution should work well for 2 million rows in the first table.
To clarify my example - I need both 'B' AND 'C' to be in the URL. Also, 'B' and 'C' can occur in any order in the URL.
I need an efficient SELECT; the INSERT does not have to be efficient. I do not have to do all the work in SQL, if that changes anything.
Thanks in advance
I'm not sure how to do this, but I'm just giving you some ideas that might be useful. You already have your initial table. You process it and create the token table:
+------------+---------+
| TokenValue | TokenId |
+------------+---------+
| A | 1 |
| B | 2 |
| C | 3 |
| E | 4 |
| D | 5 |
| G | 6 |
| R | 7 |
+------------+---------+
That's OK for me. Now, what I would do is create a new table (OrderedTokens) in which I would match the original table with the tokens of the token table. Something like:
+-------+---------+---------+
| UrlID | TokenId | AnOrder |
+-------+---------+---------+
| 1 | 1 | 1 |
| 1 | 2 | 2 |
| 1 | 3 | 3 |
| 2 | 5 | 1 |
| 2 | 2 | 2 |
| 2 | 1 | 3 |
| 2 | 7 | 4 |
| 3 | 3 | 1 |
| 3 | 4 | 2 |
+-------+---------+---------+
This way you can even recreate your original table as long as you use the order field. For example:
select string_agg(t.tokenValue, '/' order by ot.anOrder) as OriginalUrl
from OrderedTokens as ot
join tokens t on t.tokenId = ot.tokenId
group by ot.urlId
The previous query would result in:
+-------------+
| OriginalUrl |
+-------------+
| A/B/C |
| D/B/A/R |
| C/E |
+-------------+
So, you don't even need your original table anymore. If you want to get URLs that have ANY of the provided token ids (in this case B OR C), you should use this:
select string_agg(t.tokenValue, '/' order by ot.anOrder) as OriginalUrl
from OrderedTokens as ot
join Tokens t on t.tokenId = ot.tokenId
group by urlid
having count(case when ot.tokenId in (2, 3) then 1 end) > 0
This results in:
+-------------+
| OriginalUrl |
+-------------+
| A/B/C | => It has both B and C
| D/B/A/R | => It has only B
| C/E | => It has only C
+-------------+
Now, if you want to get all Urls that have BOTH ids, then try this:
select string_agg(t.tokenValue, '/' order by ot.anOrder) as OriginalUrl
from OrderedTokens as ot
join Tokens t on t.tokenId = ot.tokenId
group by urlid
having count(distinct case when ot.tokenId in (2, 3) then ot.tokenId end) = 2
Add to the count all the ids you want to filter on, and then compare that count to the number of ids you added. The previous query will result in:
+-------------+
| OriginalUrl |
+-------------+
| A/B/C | => It has both B and C
+-------------+
The funny thing is that none of the solutions I provided produces your expected result. So, have I misunderstood your requirements, or is the expected result you provided wrong?
Let me know if this is correct.
It really depends on what you mean by efficient. It will be a trade-off between query performance and storage.
If you want to efficiently store this information, then your current approach is appropriate. You can query the data by doing something like this:
SELECT DISTINCT
u.url
FROM
urls u
INNER JOIN
dictionary d
ON
d.id IN (3, 2)
AND u.url ~ (E'\\m' || d.url_component || E'\\m')
This query will take some time, as it has to do a full table scan and run the regex against each URL. It is, however, very easy to insert and store the data this way.
If you want to optimize for query performance, though, you can create a reference table of the URL components; it would look something like this:
/A/B/C A
/A/B/C B
/A/B/C C
/C/E C
/C/E E
/D/B/A/R D
/D/B/A/R B
/D/B/A/R A
/D/B/A/R R
You can then create a clustered index on this table, on the URL component. This query would retrieve your results very quickly:
SELECT DISTINCT
u.full_url
FROM
url_components u
INNER JOIN
dictionary d
ON
d.id IN (3, 2)
AND u.url_component = d.url_component
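PostgreSQL has no clustered indexes in the SQL Server sense; a plain index plus a one-off CLUSTER is the usual approximation (a sketch, assuming the component table is named url_components):
CREATE INDEX idx_url_components_component ON url_components (url_component);
-- physically reorders the table once; repeat after large bulk loads
CLUSTER url_components USING idx_url_components_component;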
Basically, this approach moves the complexity of the query up front. If you are doing few inserts, but lots of queries against this data, then that is appropriate.
Creating this URL component table is trivial, depending on what tools you have at your disposal. A simple awk script could work through your 2M records in a minute or two, and the subsequent copy back into the database would be quick as well. If you need to support real-time updates to this table, I would recommend a non-SQL solution: whatever your app is coded in could use regular expressions to parse the URL and insert the components into the component table. If you are limited to using the database, then an insert trigger could fulfill the same role, but it will be a more brittle approach.