How to execute this PostgreSQL join without using ANY()? - sql

I have two PostgreSQL tables. Table 1 has a boolean column, and Table 2 has a column that is an array containing one or both boolean values:
--Table 1
id1 bool1
A t
B f
A f
B t
--Table 2
id2 bool2
A {"t", "f"}
B {"f"}
A {"t", "f"}
B {"t"}
What I want is a join of the two tables on all rows where the IDs match and any value in bool2 matches the value in bool1. I can make this happen on my local computer with
select * from table1
left join table2 on id1=id2 and bool1 = any(bool2)
However, my company uses a third-party system which does not support arrays in SQL. The arrays have to be cast into text columns. So how do I make this work without using any()?

With booleans, you're safe just using substring matching since you won't have to worry about unexpected characters showing up. Something like
WHERE bool2 LIKE ('%' || bool1 || '%')
or
WHERE strpos(bool2, bool1::text) > 0
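Applied to the original query, the whole join could look something like this (a sketch, assuming bool2 has been cast to a text column still holding values such as {"t", "f"}, so each element appears as the single character t or f; the CASE expression turns bool1 into that single-character form):
-- Sketch: bool2 is assumed to be a plain text column such as '{"t", "f"}' after the cast
SELECT *
FROM table1
LEFT JOIN table2
       ON id1 = id2
      AND bool2 LIKE '%' || (CASE WHEN bool1 THEN 't' ELSE 'f' END) || '%';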

Related

Pattern Matching or Fuzzy Matching of two tables based on one column

Assuming I have the right naming, what I am trying to write is a function or stored procedure to compare names and find out if they represent the same value.
I think it's called fuzzy matching.
For example, table A has 2 columns and table B has 3 columns:
Table A:
Name     Number
Hello    24
Evening  56
Table B:
Name         Num  F
Heello       23   some value
GoodEvening  15   some value
I want a result table like:
A        D
Hello    Heello
Morning  GoodMorning
Currently, I'm using
Select A.Name, B.Name
from table A
left join table B
on A.Name like B.Name
or (LTRIM(RTRIM(REPLACE(REPLACE(REPLACE( A.Name,' ',''),'-',''),'''',''))) = LTRIM(RTRIM(REPLACE(REPLACE(REPLACE(B.Name,' ',''),'-',''),'''',''))))
OR (A.Name LIKE '%'+B.Name+'%')
OR (B.Name LIKE '%'+A.Name+'%')
It gives me a result, but it is not very accurate and it is very slow. Is there any other way I could compare these values?
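One thing that may help with the speed (a sketch below, assuming SQL Server given the + concatenation above, and assuming the tables are literally named A and B) is to compute the cleaned-up name once per row instead of re-running the REPLACE chain inside the join condition; a persisted computed column can also be indexed:
-- Hypothetical persisted computed columns holding the normalised names
ALTER TABLE A ADD NameClean AS
    LTRIM(RTRIM(REPLACE(REPLACE(REPLACE(Name, ' ', ''), '-', ''), '''', ''))) PERSISTED;
ALTER TABLE B ADD NameClean AS
    LTRIM(RTRIM(REPLACE(REPLACE(REPLACE(Name, ' ', ''), '-', ''), '''', ''))) PERSISTED;

-- The exact-match part of the original join then becomes a plain, indexable comparison
SELECT A.Name, B.Name
FROM A
LEFT JOIN B ON A.NameClean = B.NameClean;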

Compare two unrelated tables sql

We're dealing with geographic data with our Oracle database.
There's a function called ST_Intersects(x, y) which returns true if record x intersects record y.
What we're trying to do is compare each record of table A with all records of table B, and check three conditions:
condition 1 : A.TIMEZONE = 1 (Timezone field is not unique)
condition 2 : B.TIMEZONE = 1
condition 3 : ST_Intersects(A.SHAPE, B.SHAPE) (Shape field is where the geographical information is stored)
The result we're looking for is records ONLY from table A that satisfy all 3 conditions above.
We tried this in a single select statement, but it doesn't seem to make much sense logically.
pseudo-code that demonstrates a cross-join:
select A.*
from tbl1 A, tbl2 B
where A.TIMEZONE = 1
  and B.TIMEZONE = 1
  and ST_Intersects(A.SHAPE, B.SHAPE)
If you get duplicates, you can add a distinct and select only the A.XXX columns.
With a cross-join rows are matched like this
a.row1 - b.row1
a.row1 - b.row2
a.row1 - b.row3
a.row2 - b.row1
a.row2 - b.row2
a.row2 - b.row3
So if a row of A evaluates to true against multiple rows of B, just add a distinct on a.Column1, etc.
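Putting that together with the pseudo-code above, the deduplicated version would be (still pseudo-code as far as the ST_Intersects call is concerned; see the note below about returning 0 or 1 from it):
-- Same cross join, but DISTINCT collapses the duplicates so each matching A row appears once
select DISTINCT A.*
from tbl1 A, tbl2 B
where A.TIMEZONE = 1
  and B.TIMEZONE = 1
  and ST_Intersects(A.SHAPE, B.SHAPE)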
If you want to use the return value from your function in an Oracle SQL statement, you will need to change the function to return 0 or 1 (or 'T'/'F' - some data type supported by Oracle Database, which does NOT support the Boolean data type).
Then you probably want something like
select <columns from A>
from A
where A.timezone = 1
  and exists ( select *
               from B
               where B.timezone = 1
                 and ST_intersects(A.shape, B.shape) = 1 )
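For completeness, here is a minimal sketch of such a wrapper, assuming the shape columns are SDO_GEOMETRY and that a boolean-returning routine is callable from PL/SQL; all of the names here are placeholders, not the actual API:
-- Hypothetical wrapper: maps the PL/SQL BOOLEAN result onto 1/0 so it can be
-- used in a SQL WHERE clause (Oracle SQL itself has no BOOLEAN type).
CREATE OR REPLACE FUNCTION st_intersects_num(p_a SDO_GEOMETRY, p_b SDO_GEOMETRY)
  RETURN NUMBER
IS
BEGIN
  IF st_intersects_bool(p_a, p_b) THEN
    RETURN 1;
  END IF;
  RETURN 0;
END;
/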

Difference in NA/NULL treatment using dplyr::left_join (R lang) vs. SQL LEFT JOIN

I want to left join two dataframes, where there might be NAs in the join column on both sides (i.e. in both code columns):
a <- data.frame(code=c(1,2,NA))
b <- data.frame(code=c(1,2,NA, NA), name=LETTERS[1:4])
Using dplyr, we get:
left_join(a, b, by="code")
code name
1 1 A
2 2 B
3 NA C
4 NA D
Using SQL, we get:
CREATE TABLE a (code INT);
INSERT INTO a VALUES (1),(2),(NULL);
CREATE TABLE b (code INT, name VARCHAR);
INSERT INTO b VALUES (1, 'A'),(2, 'B'),(NULL, 'C'), (NULL, 'D');
SELECT * FROM a LEFT JOIN b USING (code);
It seems that dplyr joins do not treat NAs like SQL NULL values.
Is there a way to get dplyr to behave in the same way as SQL?
What is rationale behind this type of NA treatment?
PS: Of course, I could remove the NAs first, e.g. left_join(a, na.omit(b), by="code"), but that is not my question.
In SQL, "null" matches nothing, because SQL has no information on what it should join to -- hence the resulting "null"s in your joined data set, just as it would appear if performing left outer joins without a match in the right data set.
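As an illustration of those semantics (assuming PostgreSQL, which the DDL above resembles): NULL = NULL is not true, so the NULL row in a matches nothing, but the join can be told to treat two NULLs as equal with IS NOT DISTINCT FROM, which reproduces the dplyr result:
-- IS NOT DISTINCT FROM treats two NULLs as equal, like dplyr matching NA to NA
SELECT a.code, b.name
FROM a
LEFT JOIN b ON a.code IS NOT DISTINCT FROM b.code;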
In R, however, the default behaviour for NA in joins is to treat it like an ordinary data value, so NA matches NA. For example,
> match(NA, NA)
[1] 1
One way you can circumvent this would be to use the base merge method,
> merge(a, b, by="code", all.x=TRUE, incomparables=NA)
code name
1 1 A
2 2 B
3 NA <NA>
The "incomparables" parameter here allows you to define values that cannot be matched, and essentially forces R to treat "NA" the way SQL treats "null". It doesn't look like the incomparables feature is implemented in left_join, but it may simply be named differently.
By default the code column would have a primary key and would therefore not accept NULL values.

Find rows that contain all words in any order

My application is built in vb.net with SQL Server Compact as the database so I'm unable to use a full-text index.
Here's my data...
MainTable field1
A B C
B G C
X Y Z
C P B
Search term = B C
Expected results = rows containing every search term in any order = rows 1, 2, 4
Here's what I'm currently doing...
I'm permuting the search term B C into an array containing %B%C% and %C%B% and inserting those values into field1 of tempTable.
So my SQL looks like this:
SELECT * FROM MainTable INNER JOIN tempTable ON MainTable.field1 LIKE tempTable.field1
In this simple example, it does return the expected results correctly. However, my search term can contain more values. For example, 6 search terms B C D E F G produce 720 permutations, and the number of permutations grows factorially as more search terms are added, which is not good.
Is there a better way to do this?
The following will work for your example above:
Select * from MainTable where field1 like '%[BC]%'
But it will also return strings that contain ONLY "B" or "C". Do you need both characters in any order or one or more?
EDIT: Then the following would work:
Select * from test_data where col1 LIKE '%Apple%' and col1 like '%Dog%'
See the demo here: http://rextester.com/edit/LNDQ49764
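Applied to the original MainTable data, the same idea is one LIKE predicate per search term, combined with AND so that the order of the terms within field1 no longer matters:
-- Each term gets its own LIKE; AND requires all of them to be present somewhere in field1
SELECT *
FROM MainTable
WHERE field1 LIKE '%B%'
  AND field1 LIKE '%C%';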

SQL: Most efficient way to select sequences of rows from a table

I have a tagged textual corpus stored in an SQL table like the following:
id tag1 tag2 token sentence_id
0 a e five 1
1 b f score 1
2 c g years 1
3 d h ago 1
My task is to search the table for sequences of tokens that meet certain criteria, sometimes with gaps between each token.
For example:
I want to be able to search for a sequence similar to the following:
the first token has the value a in the tag1 column, and
the second token is one to two rows away from the first, and has the value g in tag2 or b in tag1, and
the third token should be at least three rows away, and has ago in the token column.
In SQL, this would be something like the following:
SELECT * FROM my_table t1
JOIN my_table t2 ON t1.sentence_id = t2.sentence_id
JOIN my_table t3 ON t3.sentence_id = t1.sentence_id
WHERE t1.tag1 = 'a' AND (t2.id = t1.id + 1 OR t2.id = t1.id + 2)
AND (t2.tag2 = 'g' OR t2.tag1 = 'b')
AND t3.id >= t1.id + 3 AND t3.token = 'ago'
So far I have only been able to achieve this by joining the table by itself each time I specify a new token in the sequence (e.g. JOIN my_table t4), but with millions of rows this gets quite slow. Is there a more efficient way to do this?
You could try this staged approach:
1. Apply each condition (other than the various distance conditions) as a subquery.
2. Calculate the distances between the tokens which meet the conditions.
3. Apply all the distance conditions separately.
This might improve things, if you have indexes on the tag1, tag2 and token columns:
SELECT DISTINCT sentence_id FROM
(
-- 2. Here we calculate the distances
SELECT cond1.sentence_id,
(cond2.id - cond1.id) as cond2_distance,
(cond3.id - cond1.id) as cond3_distance
FROM
-- 1. These are all the non-distance conditions
(
SELECT * FROM my_table WHERE tag1 = 'a'
) cond1
INNER JOIN
(
SELECT * FROM my_table WHERE
(tag1 = 'b' OR tag2 = 'g')
) cond2
ON cond1.sentence_id = cond2.sentence_id
INNER JOIN
(
SELECT * FROM my_table WHERE token = 'ago'
) cond3
ON cond1.sentence_id = cond3.sentence_id
) conditions
-- 3. Now apply the distance conditions
WHERE cond2_distance BETWEEN 0 AND 2
AND cond3_distance >= 3
ORDER BY sentence_id;
If you apply this query to this SQL fiddle you get:
| sentence_id |
|-------------|
| 1 |
| 4 |
Which is what you want. Now whether it's any faster or not, only you (with your million-row database) can really tell, but from the perspective of having to actually write these queries, you'll find they're much easier to read, understand and maintain.
You need to edit your question and give more details on how these sequences of tokens work (for instance, what does "each time I specify a new token in the sequence" mean in practice?).
In PostgreSQL you can solve this class of queries with a window function. Following your exact specification above:
SELECT *
FROM (
    -- Window functions (and their output aliases) cannot be referenced in WHERE,
    -- so compute next_token here and filter on it in the outer query.
    SELECT *,
           CASE
               WHEN lead(tag2, 2) OVER w = 'g' THEN lead(token, 2) OVER w
               WHEN lead(tag1) OVER w = 'b' THEN lead(token) OVER w
               ELSE NULL::text
           END AS next_token
    FROM my_table
    WHERE tag1 = 'a'
    WINDOW w AS (PARTITION BY sentence_id ORDER BY id)
) matches
WHERE next_token IS NOT NULL;
The lead() function looks ahead a number of rows (the default is 1 when not specified) from the current row in the window frame, in this case all rows with the same sentence_id as specified in the partition of the window definition. So lead(tag2, 2) looks at the value of tag2 two rows ahead to compare against your condition, and lead(token, 2) returns the token from two rows ahead (with the same sentence_id) as the column next_token of the current row. If the first CASE condition fails, the second is evaluated; if that also fails, NULL is returned. Note that the order of the conditions in the CASE clause is significant: a different ordering gives different results.
Obviously, if you keep on adding conditions for subsequent tokens the query becomes very complex and you may have to put individual search conditions in separate stored procedures and then call these depending on your requirements.