Join table condition between 2 rows - SQL

Is it possible to join these tables:
Log table:
+--------+---------------+------------+
| name   | ip            | created    |
+--------+---------------+------------+
| 408901 | 178.22.51.168 | 1390887682 |
| 408901 | 178.22.51.168 | 1390927059 |
| 408901 | 178.22.51.168 | 1390957854 |
+--------+---------------+------------+
Orders table:
+---------+------------+
| id      | created    |
+---------+------------+
| 8563863 | 1390887692 |
| 8563865 | 1390897682 |
| 8563859 | 1390917059 |
| 8563860 | 1390937059 |
| 8563879 | 1390947854 |
+---------+------------+
Result table would be:
+-----------+----------------+-----------+---------------+--------------+
| orders.id | orders.created | logs.name | logs.ip       | logs.created |
+-----------+----------------+-----------+---------------+--------------+
| 8563863   | 1390887692     | 408901    | 178.22.51.168 | 1390887682   |
| 8563865   | 1390897682     | 408901    | 178.22.51.168 | 1390887682   |
| 8563859   | 1390917059     | 408901    | 178.22.51.168 | 1390887682   |
| 8563860   | 1390937059     | 408901    | 178.22.51.168 | 1390927059   |
| 8563879   | 1390947854     | 408901    | 178.22.51.168 | 1390927059   |
+-----------+----------------+-----------+---------------+--------------+
Is it possible?
Especially if the first table is the result of some query.
UPDATE
Sorry for this mistake. I want to find in the log who made each order. So the orders table relates to the logs table by the created field, i.e.
the first log row that satisfies (orders.created >= logs.created)

This will result in a non-equi join with horrible performance:
SELECT *
FROM t2
JOIN t1
  ON t1.created =
     (
       SELECT MAX(x.created)
       FROM t1 AS x
       WHERE x.created <= t2.created
     )
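Applied to the tables from the question, that would be something like this (just a sketch; logs/orders and the column names are taken from the post above, and it will still be slow without an index on logs.created):
-- Latest log row at or before each order (standard SQL sketch).
SELECT o.id,
       o.created AS order_created,
       l.name,
       l.ip,
       l.created AS log_created
FROM orders AS o
JOIN logs AS l
  ON l.created = (SELECT MAX(l2.created)
                  FROM logs AS l2
                  WHERE l2.created <= o.created);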
You might be better off with a cursor based on a UNION like this (you will probably need to add some type casts to get a working UNION):
SELECT *
FROM
(
  SELECT NULL AS name, NULL AS ip, NULL AS created2, t2.*
  FROM t2
  UNION ALL
  SELECT t1.*, NULL AS id, NULL AS created
  FROM t1
) AS dt
ORDER BY COALESCE(created, created2)
Now you can process the rows in the right order and remember the values from the last t1 row as you go.
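Alternatively, if your database supports LATERAL joins, you can pick the latest preceding log row per order directly. This is a sketch in PostgreSQL syntax, not something from the original answer:
-- Sketch, assuming PostgreSQL: for each order, fetch the single most
-- recent log row whose created value is not after the order's.
SELECT o.id,
       o.created AS order_created,
       l.name,
       l.ip,
       l.created AS log_created
FROM orders AS o
LEFT JOIN LATERAL (
    SELECT lg.name, lg.ip, lg.created
    FROM logs AS lg
    WHERE lg.created <= o.created
    ORDER BY lg.created DESC
    LIMIT 1
) AS l ON TRUE;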

There is nothing to bind these 2 together.
No ID or other column exists in both tables.
If this were the case, you could join these 2 tables in a stored procedure.
When you run the first query, store its result in a newly created table, use that table in the join to get your results, and delete it afterwards.
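A rough sketch of that temporary-table idea (PostgreSQL-flavoured syntax, since the answer does not name a DBMS; all names are illustrative):
-- Materialize the first query, join against it, then drop it.
CREATE TEMPORARY TABLE tmp_logs AS
SELECT name, ip, created
FROM logs;                     -- or the result of "some query"

SELECT o.id, o.created, t.name, t.ip, t.created
FROM orders AS o
JOIN tmp_logs AS t
  ON t.created = (SELECT MAX(t2.created)
                  FROM tmp_logs AS t2
                  WHERE t2.created <= o.created);

DROP TABLE tmp_logs;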
Kind regards

Simply, you can use a UNION (pad the missing column with NULL so both sides have the same number of columns):
select id, NULL AS ip, created from table_2
union all
select name, ip, created from table_1

Related

SQL Check if the User has IN and Out

I need help checking which users have an 'IN' and an 'OUT' in the column isIN. If a user has both an IN and an OUT, do not select them in the list. I need to select only the users who have had an IN. Please, I need help. Thanks in advance.
This is the table:
| Users | IsIN |
|:------------------:|:-----:|
| MHYHDC61TMJ907867 | IN |
| MHYHDC61TMJ907867 | OUT |
| MHYHDC61TMJ907922 | IN |
| MHYHDC61TMJ907922 | OUT |
| MHYHDC61TMJ907923 | IN |
| MHYHDC61TMJ907923 | OUT |
| MHYHDC61TMJ907924 | IN | - I need to get only this row
| MHYHDC61TMJ907925 | IN |
| MHYHDC61TMJ907925 | OUT |
| MHYHDC61TMJ908054 | IN | - I need to get only this row
| MHYHDC61TMJ908096 | IN | - I need to get only this row
| MHYHDC61TMJ908109 | IN | - I need to get only this row
Need to get the result like
| Users | IsIN |
|:------------------:|:-----:|
| MHYHDC61TMJ907924 | IN |
| MHYHDC61TMJ908054 | IN |
| MHYHDC61TMJ908096 | IN |
| MHYHDC61TMJ908109 | IN |
I tried using this query and the sample query below, but it doesn't work.
select s.[Users], s.[isIn]
from [dbo].[tblIO] s
where not exists (
    select 1
    from [dbWBS].[dbo].[tblIO] s2
    where s2.[Users] = s.[Users] and s2.isIn = 'IN'
);
You can use not exists:
select s.*
from sample s
where not exists (select 1
                  from sample s2
                  where s2.user = s.user and s2.inout = 'OUT'
                 );
If you want only users that meet the condition (and not the full rows):
select user
from sample s
group by user
having min(inout) = max(inout) and min(inout) = 'IN';
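Applied to the table and columns from the question (a sketch; assumes SQL Server, since the question uses bracketed names like [dbWBS].[dbo].[tblIO]):
-- Sketch: users whose IN row has no matching OUT row.
SELECT s.[Users], s.[isIn]
FROM [dbWBS].[dbo].[tblIO] AS s
WHERE s.[isIn] = 'IN'
  AND NOT EXISTS (SELECT 1
                  FROM [dbWBS].[dbo].[tblIO] AS s2
                  WHERE s2.[Users] = s.[Users]
                    AND s2.[isIn] = 'OUT');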
Bearing in mind that an 'OUT' record must always be preceded by an 'IN' record, you could use a query like this:
select s.Users, 'IN' as IsIn
from sample s
group by s.Users
having count(distinct s.IsIn) = 1

Need to match a field with another that has commas in its value

I would like to match the field values of "FORMULATION" from TABLE 1 to "C_TEST_ARTICLE" from TABLE 2, which has multiple of these formulations separated by commas.
Table 1:
+---------------+--------------------+
| SAMPLE_NUMBER | FORMULATION        |
+---------------+--------------------+
| 84778         | S/200582/01-TA-002 |
| 84777         | S/200582/01-TA-002 |
| 81691         | S/200451/01-TA-011 |
| 81690         | S/200451/01-TA-011 |
+---------------+--------------------+
TABLE 2
+----------------------------------------------------------+-----------------+
| C_TEST_ARTICLE                                           | C_REPORT_NUMBER |
+----------------------------------------------------------+-----------------+
| S/200180/03-TA-001,S/200180/03-TA-002                    | 16698           |
| S/200375/01-TA-001,S/200375/01-TA-002,S/200375/01-TA-003 | 15031           |
+----------------------------------------------------------+-----------------+
What I want from all of this: each of these "C_TEST_ARTICLE" values has a "C_REPORT_NUMBER", so I would like to get all the matching "SAMPLE_NUMBER" values from Table 1; that way, I would have the samples related to each report number.
You could try using LIKE:
select SAMPLE_NUMBER
from table1
inner join table2 on c_test_article like concat('%', formulation, '%')
select
    C_TEST_ARTICLE,
    C_REPORT_NUMBER,
    b1.SAMPLE_NUMBER
from table2
inner join table1 as b1 on C_TEST_ARTICLE like '%' + FORMULATION + '%'
Try
SELECT
T1.SampleNumber
, T2.C_Report_Number
FROM Table1 T1
, Table2 T2
WHERE CHARINDEX(T1.Formulation, T2.C_Test_article) > 0
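One caveat with the plain LIKE/CHARINDEX matching above: a shorter formulation code could also match inside a longer one. A common workaround, sketched here in SQL Server syntax to match the CHARINDEX answer, is to anchor the comparison on the commas:
-- Sketch: wrap both sides in commas so only whole, comma-delimited
-- entries match (assumes varchar columns and SQL Server '+' concatenation).
SELECT T1.SAMPLE_NUMBER, T2.C_REPORT_NUMBER
FROM Table1 T1
JOIN Table2 T2
  ON ',' + T2.C_TEST_ARTICLE + ',' LIKE '%,' + T1.FORMULATION + ',%';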

Query Optimization - subselect in Left Join

I'm working on optimizing a SQL query, and I found a particular set of lines that appears to be killing my query's performance:
LEFT JOIN anothertable lastweek
    AND lastweek.date >= (SELECT MAX(table.date) - 7 max_date_lweek
                          FROM table table
                          WHERE table.id = lastweek.id)
    AND lastweek.date <  (SELECT MAX(table.date) max_date_lweek
                          FROM table table
                          WHERE table.id = lastweek.id)
I'm working on a way of optimizing these lines, but I'm stumped. If anyone has any ideas, please let me know!
-----------------------------------------------------------------------------------------------------------
| Id | Operation | Name | Rows | Bytes | Cost | Time |
-----------------------------------------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | 1908654 | 145057704 | 720461 | 00:00:29 |
| * 1 | HASH JOIN RIGHT OUTER | | 1908654 | 145057704 | 720461 | 00:00:29 |
| 2 | VIEW | VW_DCL_880D8DA3 | 427487 | 7694766 | 716616 | 00:00:28 |
| * 3 | HASH JOIN | | 427487 | 39328804 | 716616 | 00:00:28 |
| 4 | VIEW | VW_SQ_2 | 7174144 | 193701888 | 278845 | 00:00:11 |
| 5 | HASH GROUP BY | | 7174144 | 294139904 | 278845 | 00:00:11 |
| 6 | TABLE ACCESS STORAGE FULL | TASK | 170994691 | 7010782331 | 65987 | 00:00:03 |
| * 7 | HASH JOIN | | 8549735 | 555732775 | 429294 | 00:00:17 |
| 8 | VIEW | VW_SQ_1 | 7174144 | 172179456 | 278845 | 00:00:11 |
| 9 | HASH GROUP BY | | 7174144 | 294139904 | 278845 | 00:00:11 |
| 10 | TABLE ACCESS STORAGE FULL | TASK | 170994691 | 7010782331 | 65987 | 00:00:03 |
| 11 | TABLE ACCESS STORAGE FULL | TASK | 170994691 | 7010782331 | 65987 | 00:00:03 |
| * 12 | TABLE ACCESS STORAGE FULL | TASK | 1908654 | 110701932 | 2520 | 00:00:01 |
-----------------------------------------------------------------------------------------------------------
Predicate Information (identified by operation id):
------------------------------------------
* 1 - access("SYS_ID"(+)="TASK"."PARENT")
* 3 - access("ITEM_2"="TASK_LWEEK"."SYS_ID")
* 3 - filter("TASK_LWEEK"."SNAPSHOT_DATE"<"MAX_DATE_LWEEK")
* 7 - access("ITEM_1"="TASK_LWEEK"."SYS_ID")
* 7 - filter("TASK_LWEEK"."SNAPSHOT_DATE">=INTERNAL_FUNCTION("MAX_DATE_LWEEK"))
* 12 - storage("TASK"."CLOSED_AT" IS NULL OR "TASK"."CLOSED_AT">=TRUNC(SYSDATE#!)-15)
* 12 - filter("TASK"."CLOSED_AT" IS NULL OR "TASK"."CLOSED_AT">=TRUNC(SYSDATE#!)-15)
Well, you are not even showing the SELECT. Since the plan shows the query runs on Exadata (TABLE ACCESS STORAGE FULL), perhaps you need to ask yourself why you need to access the same table four times.
You access the main table TASK, with 170,994,691 rows (based on the CBO's estimate), four times (lines 6, 10, 11, 12). I don't know whether the statistics are up to date or whether dynamic sampling kicked in due to a lack of good statistics.
A solution could be to use WITH to generate the intermediate results that you need several times in your outer query:
with my_set as
(
  SELECT MAX(table.date) - 7 AS max_date_lweek,
         MAX(table.date)     AS max_date,
         id
  FROM table
  GROUP BY id
)
select
.......................
from ...
left join anothertable lastweek on ( ........ )
left join my_set on ( anothertable.id = my_set.id )
where
    lastweek.date >= my_set.max_date_lweek
and
    lastweek.date < my_set.max_date
Please take into account that you did not provide the query, so I am guessing a lot of things.
Since complete information is not available, I will suggest: you are using the same subquery twice, so why not use a CTE, such as
with CTE_example as
(
  SELECT MAX(table.date)     AS max_date,
         MAX(table.date) - 7 AS max_date_lweek,
         id
  FROM table
  GROUP BY id
)
Looking at your explain plan, the only table being accessed is TASK. From that, I infer that the tables in your example, ANOTHERTABLE and TABLE, are actually the same table and that, therefore, you are trying to get the last week of data that exists in that table for each id value.
If all that is true, it should be much faster to use an analytic function to get the max date value for each id and then limit based on that.
Here is an example of what I mean. Note I use "dte" instead of "date", to remove confusion with the reserved word "date".
LEFT JOIN ( SELECT lastweek.*,
                   MAX(dte) OVER ( PARTITION BY id ) AS max_date
            FROM anothertable lastweek ) lastweek
       ON 1=1 -- whatever other join conditions you have, seemingly omitted from your post
      AND lastweek.dte >= lastweek.max_date - 7;
Again, this only works if I am correct in thinking that table and anothertable are actually the same table.

SELECTing Related Rows Based on a Single Row Match

I have the following table running on PostgreSQL 9.5:
+----+----------+-----------+
| ID | trans_id | message   |
+----+----------+-----------+
| 1  | 1234567  | abc123-ef |
| 2  | 1234567  | def234-gh |
| 3  | 1234567  | ghi567-ij |
| 4  | 8902345  | ced123-ef |
| 5  | 8902345  | def234-bz |
| 6  | 8902345  | ghi567-ij |
| 7  | 6789012  | abc123-ab |
| 8  | 6789012  | def234-cd |
| 9  | 6789012  | ghi567-ef |
| 10 | 4567890  | abc123-ab |
| 11 | 4567890  | gex890-aj |
| 12 | 4567890  | ghi567-ef |
+----+----------+-----------+
I am looking for the rows for each trans_id based on a LIKE query, like this:
SELECT * FROM table
WHERE message LIKE '%def234%'
This, of course, returns just three rows, the three that match my pattern in the message column. What I am looking for, instead, is all the rows matching that trans_id in groups of messages that match. That is, if a single row matches the pattern, get all the rows with the trans_id of that matching row.
That is, the results would be:
+----+----------+-----------+
| ID | trans_id | message   |
+----+----------+-----------+
| 1  | 1234567  | abc123-ef |
| 2  | 1234567  | def234-gh |
| 3  | 1234567  | ghi567-ij |
| 4  | 8902345  | ced123-ef |
| 5  | 8902345  | def234-bz |
| 6  | 8902345  | ghi567-ij |
| 7  | 6789012  | abc123-ab |
| 8  | 6789012  | def234-cd |
| 9  | 6789012  | ghi567-ef |
+----+----------+-----------+
Notice rows 10, 11, and 12 were not selected, because none of them matched the %def234% pattern.
I have tried (and failed) to write a sub-query to get all the related rows when a single message matches a pattern:
SELECT sub.*
FROM (
    SELECT DISTINCT trans_id FROM table WHERE message LIKE '%def234%'
) sub
WHERE table.trans_id = sub.trans_id
I could easily do this with two queries, but the first query to get a list of matching trans_ids to include in a WHERE trans_id IN (<huge list of trans_ids>) clause would be very large, and that would not be a very efficient way of doing this; I believe there is a way to do it with a single query.
Thank you!
This will do the job, I think:
WITH sub AS (
    SELECT trans_id
    FROM table
    WHERE message LIKE '%def234%'
)
SELECT *
FROM table JOIN sub USING (trans_id);
Hope this helps.
Try this:
SELECT ID, trans_id, message
FROM (
    SELECT ID, trans_id, message,
           COUNT(*) FILTER (WHERE message LIKE '%def234%')
                    OVER (PARTITION BY trans_id) AS pattern_cnt
    FROM mytable) AS t
WHERE pattern_cnt >= 1
Using a FILTER clause with the windowed version of the COUNT function, we can get the number of records matching the predefined pattern within each trans_id slice. The outer query uses this count to filter out the irrelevant slices.
Demo here
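For what it's worth, the same idea can be written with bool_or as a window aggregate on PostgreSQL (a sketch, not part of the answer above; mytable is the same placeholder table name):
-- Sketch: bool_or flags every row of a trans_id slice as soon as any
-- row in that slice matches the pattern.
SELECT ID, trans_id, message
FROM (SELECT ID, trans_id, message,
             bool_or(message LIKE '%def234%')
                 OVER (PARTITION BY trans_id) AS slice_matches
      FROM mytable) AS t
WHERE slice_matches;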
You can do this.
WITH trans AS
(
    SELECT DISTINCT trans_id
    FROM t1
    WHERE message LIKE '%def234%'
)
SELECT t1.*
FROM t1, trans
WHERE t1.trans_id = trans.trans_id;
I think this will perform better. If you have enough data, you can run EXPLAIN on both the subquery and the CTE versions and compare the output.
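For completeness, a plain IN subquery gives the same result and may be worth including in the same EXPLAIN comparison (a sketch; mytable is a placeholder for your table name):
-- Sketch: all rows whose trans_id has at least one matching message.
SELECT *
FROM mytable
WHERE trans_id IN (SELECT trans_id
                   FROM mytable
                   WHERE message LIKE '%def234%');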

Four Table Join in BigQuery

Okay, so I'm trying to link together four different tables, and it's getting very difficult. I've provided snippets of each table in the hope you all can help out.
Table 1: data
+--------+--------+-----------+
| charge | amount | date      |
+--------+--------+-----------+
| 123    | 10000  | 2/10/2016 |
| 456    | 10000  | 1/28/2016 |
| 789    | 10000  | 3/30/2016 |
+--------+--------+-----------+
Table 2: data_metadata
+--------+------------+------------+
| charge | key        | value      |
+--------+------------+------------+
| 123    | identifier | trrkfll212 |
| 456    | code       | test       |
| 789    | ID         | 123xyz     |
+--------+------------+------------+
Table 3: buyer
+-----+-----------+----------+----------+
| id  | date      | discount | plan     |
+-----+-----------+----------+----------+
| ABC | 2/13/2016 | yes      | option a |
| DEF | 2/1/2016  | yes      | option a |
| GHI | 1/22/2016 | no       | option a |
+-----+-----------+----------+----------+
Table 4: buyer_metadata
+-----+-----------+--------+
| id  | key       | value  |
+-----+-----------+--------+
| ABC | migration | TRUE   |
| DEF | emid      | foo    |
| GHI | ID        | 123xyz |
+-----+-----------+--------+
Okay, so the tables data and data_metadata are obviously connected by the charge column.
The tables buyer and buyer_metadata are connected by the id column.
But I want to link all of them together. I'm pretty sure the way to accomplish this is by linking the metadata tables together through the common field in the "value" column (in this example: 123xyz).
Could anyone help?
It might look something like this, if all the "link" columns are unique:
SELECT *
FROM data d
JOIN data_metadata dm ON d.charge = dm.charge
JOIN buyer_metadata bm ON dm.value = bm.value
JOIN buyer b ON bm.id = b.id
If not, I think you'll have to use something like a GROUP BY clause.
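For instance, if the value column is not unique and the join multiplies rows, something like this could collapse the duplicate matches (just a sketch, untested; it keeps one row per charge/buyer pair):
-- Sketch: de-duplicate the chained join with a GROUP BY.
SELECT d.charge, b.id,
       MAX(dm.value) AS link_value
FROM data d
JOIN data_metadata dm ON d.charge = dm.charge
JOIN buyer_metadata bm ON dm.value = bm.value
JOIN buyer b ON bm.id = b.id
GROUP BY d.charge, b.id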
Let's take it in two steps: first, create composite tables for data and buyer. The composite table for data:
SELECT data.charge, data.amount, data.date,
       data_metadata.key, data_metadata.value
FROM [data] AS data
JOIN (SELECT charge, key, value FROM [data_metadata]) AS data_metadata
  ON data.charge = data_metadata.charge
And composite table for buyer:
SELECT buyer.id, buyer.date, buyer.discount, buyer.plan,
       buyer_metadata.key, buyer_metadata.value
FROM [buyer] AS buyer
JOIN (SELECT key, value FROM [buyer_metadata]) AS buyer_metadata
  ON buyer.id = buyer_metadata.id
And then let's join the two composite tables
SELECT composite_data.*, composite_buyer.*
FROM (
  SELECT data.charge, data.amount, data.date,
         data_metadata.key, data_metadata.value
  FROM [data] AS data
  JOIN (SELECT charge, key, value FROM [data_metadata]) AS data_metadata
    ON data.charge = data_metadata.charge) AS composite_data
JOIN (
  SELECT buyer.id, buyer.date, buyer.discount, buyer.plan,
         buyer_metadata.key, buyer_metadata.value
  FROM [buyer] AS buyer
  JOIN (SELECT key, value FROM [buyer_metadata]) AS buyer_metadata
    ON buyer.id = buyer_metadata.id) AS composite_buyer
  ON composite_data.value = composite_buyer.value
I haven't tested it but it's probably close.
For reference, here is the page on BigQuery JOINs. And have you seen this SO?
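If you are on BigQuery Standard SQL rather than the legacy bracketed syntax used above, the same chained join might look like this (a sketch; `mydataset` is a placeholder for wherever the tables actually live):
-- Sketch: chain the four tables through charge -> value -> id.
SELECT d.charge, d.amount, d.date,
       b.id, b.discount, b.plan,
       dm.value AS link_value
FROM `mydataset.data` AS d
JOIN `mydataset.data_metadata` AS dm ON d.charge = dm.charge
JOIN `mydataset.buyer_metadata` AS bm ON dm.value = bm.value
JOIN `mydataset.buyer` AS b ON bm.id = b.id;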