This is the query I'm using:
select DISTINCT "HRG_GOAL_ACCESS"."PERSON_ID" as "PERSON_ID",
"HRG_GOAL_ACCESS"."BUSINESS_GROUP_ID" as "BUSINESS_GROUP_ID",
"HRG_GOALS"."GOAL_ID" as "GOAL_ID",
"HRG_GOALS"."ASSIGNMENT_ID" as "ASSIGNMENT_ID",
"HRG_GOALS"."GOAL_NAME" as "GOAL_NAME",
"HRG_MASS_REQ_RESULTS"."ORGANIZATION_ID" as "ORGANIZATION_ID",
"HRG_MASS_REQ_RESULTS"."RESULT_CODE" as "RESULT_CODE",
"HRG_GOAL_PLN_ASSIGNMENTS"."CREATED_BY" as "CREATED_BY"
from "FUSION"."HRG_GOAL_PLN_ASSIGNMENTS" "HRG_GOAL_PLN_ASSIGNMENTS",
"FUSION"."HRG_MASS_REQ_RESULTS" "HRG_MASS_REQ_RESULTS",
"FUSION"."HRG_GOALS" "HRG_GOALS",
"FUSION"."HRG_GOAL_ACCESS" "HRG_GOAL_ACCESS"
where "HRG_GOAL_ACCESS"."PERSON_ID"="HRG_GOALS"."PERSON_ID"
and "HRG_MASS_REQ_RESULTS"."PERSON_ID"="HRG_GOALS"."PERSON_ID"
and "HRG_GOAL_PLN_ASSIGNMENTS"."PERSON_ID"="HRG_MASS_REQ_RESULTS"."PERSON_ID"
Output
PERSON_ID BUSINESS_GROUP_ID GOAL_ID ASSIGNMENT_ID GOAL_NAME RESULT_CODE CREATED_BY
---------------- ----------------- --------------- --------------- ------------------ -------------------- -------------------
300000048030404 1 300000137711224 300000048033078 NANO_CLASS SUCCESS anonymous G_1
300000048030404 1 300000137637946 300000048033078 INCREASE SALES BY 40% SUCCESS REDDI.SAREDDY G_1
300000048030404 1 300000137637946 300000048033078 INCREASE SALES BY 40% SUCCESS CURTIS.FEITTY
Your output does not contain duplicates. You have more than one row for PERSON_ID (300000048030404), but that's because the master table (presumably HRG_GOAL_ACCESS) has multiple matching rows in its child tables.
Each row has different details, so the set is valid. There are different values of HRG_GOALS.GOAL_ID, HRG_GOALS.GOAL_NAME and HRG_GOAL_PLN_ASSIGNMENTS.CREATED_BY.
If this response does not make you happy, you need to explain more clearly what your desired output would look like. Alternatively, work through your data model and understand why your query returns the data it does. You probably have a missing join condition; the use of DISTINCT could be hiding that from you.
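If you want to see where the extra rows come from before hunting for the missing join condition, one diagnostic sketch (using only the tables and columns already in your query) is to drop DISTINCT and count how many rows each child table contributes per person:
SELECT g.PERSON_ID,
       COUNT(DISTINCT g.GOAL_ID)     AS distinct_goals,
       COUNT(DISTINCT r.RESULT_CODE) AS distinct_results,
       COUNT(DISTINCT p.CREATED_BY)  AS distinct_creators,
       COUNT(*)                      AS joined_rows
FROM   FUSION.HRG_GOALS g,
       FUSION.HRG_MASS_REQ_RESULTS r,
       FUSION.HRG_GOAL_PLN_ASSIGNMENTS p
WHERE  r.PERSON_ID = g.PERSON_ID
AND    p.PERSON_ID = r.PERSON_ID
GROUP  BY g.PERSON_ID
ORDER  BY joined_rows DESC;
A person whose joined_rows is much larger than any of the distinct counts is being joined purely on PERSON_ID across unrelated child rows, which is usually the sign of the missing condition.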
If I have a table of correct data I need to check with my actual table to make sure the data is correct and I have some rows like the following:
Data_Check_Table
FRUIT    PRICE   WEEKS_FRESH   SUPPLIER
------   -----   -----------   ---------
Apple    $1      1             Big Co.
Banana   $1      1             Super Co.
and the actual table with this info:
Data_Table
FRUIT    PRICE   WEEKS_FRESH   SUPPLIER
------   -----   -----------   ---------
Apple    $2      1             Big Co.
Banana   $1      1             Super Co.
...and assume there are many other rows, some match up fine and others have inconsistencies in certain areas (Maybe the wrong price? Or wrong supplier? Maybe even both.) How would I do a select to find these rows that are inconsistent with the actual data?
Select dt.Fruit, dt.Price, dt.Weeks_Fresh, dt.Supplier,
       dtc.Fruit, dtc.Price, dtc.Weeks_Fresh, dtc.Supplier
From Data_Table dt
FULL OUTER JOIN
Data_Check_Table dtc
ON dt.Fruit = dtc.Fruit
AND dt.Price = dtc.Price
AND dt.Weeks_Fresh = dtc.Weeks_Fresh
AND dt.Supplier = dtc.Supplier
Where dt.Fruit IS NULL OR dtc.Fruit IS NULL
The full join includes records from each table regardless of whether there is a match, so if either side is null then you know there is a mismatch.
The following finds actual records that do not match the correct records:
select *
from Data_Table
minus
select *
from Data_Check_Table
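If you also want to catch correct rows that have no matching actual row, the same idea can be run in both directions. A sketch, assuming both tables have identical column lists:
select 'in Data_Table only' as problem, x.*
from (select * from Data_Table
      minus
      select * from Data_Check_Table) x
union all
select 'in Data_Check_Table only' as problem, y.*
from (select * from Data_Check_Table
      minus
      select * from Data_Table) y;
Each branch labels which table the mismatching version of the row came from, so a wrong price shows up twice: once with the actual value and once with the correct value.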
I created a table out of a CSV file which is produced by an external software.
Amongst the other fields, this table contains one field called "CustomID".
Each row on this table must be linked to a customer using the content of that field.
Every customer may have one or more sets of CustomIDs at their own discretion, as long as each sequence starts with the same prefix.
So for example:
Customer 1 may use "cust1_n" and "cstm01_n" (where n is a number)
Customer 2 may use "customer2_n"
ImportedRows
PKID CustomID Description
---- --------------- --------------------------
1 cust1_001 Something
2 cust1_002 ...
3 cstm01_000001 ...
4 customer2_00001 ...
5 cstm01_000232 ...
..
Now I have created 2 support tables as follows:
Customers
PKID Name
---- --------------------
1 Customer 1
2 Customer 2
and
CustomIDs
PKID FKCustomerID SearchPattern
---- ------------ -------------
1 1 cust1_*
2 1 cstm01_*
3 2 customer2_*
What I need to achieve is the retrieval of all rows for a given customer using all the LIKE conditions found on the CustomIDs tables for that customer.
I have failed miserably so far.
Any clues, please?
Thanks in advance.
Silver.
To use LIKE you must replace the * with % in the pattern. Different DBMSs use different functions for string manipulation; let's assume a REPLACE function is available:
SELECT ir.*
FROM ImportedRows ir
JOIN CustomIDs c ON ir.CustomID LIKE REPLACE(c.SearchPattern, '*', '%')
WHERE c.FKCustomerID = 1;
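If you need the rows for every customer at once, with the customer name attached, the same LIKE trick can simply be joined through to Customers as well. A sketch based on the three tables above:
SELECT cu.Name, ir.PKID, ir.CustomID, ir.Description
FROM ImportedRows ir
JOIN CustomIDs c ON ir.CustomID LIKE REPLACE(c.SearchPattern, '*', '%')
JOIN Customers cu ON cu.PKID = c.FKCustomerID
ORDER BY cu.Name, ir.PKID;
If a CustomID could ever match more than one pattern for the same customer, add DISTINCT (or tighten the patterns) to avoid duplicated rows.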
I'm trying to solve a query where I need to find the top balance at each base. Balances are in one table and bases are in another table.
This is the existing query I have; it returns all the results, but I need a way to limit it to one top result per baseID.
SELECT o.names.name, t.accounts.bidd.baseID, MAX(t.accounts.balance)
FROM order o, TABLE(o.accounts) t
WHERE t.accounts.acctype = 'verified'
GROUP BY o.names.name, t.accounts.bidd.baseID;
accounts is a nested table.
This is the output:
Name accounts.BIDD.baseID MAX(T.accounts.BALANCE)
--------------- ------------------------- ---------------------------
Jerard 010 1251.21
john 012 3122.2
susan 012 3022.2
fin 012 3022.2
dan 010 1751.21
What I want is to calculate the highest balance for each baseID and display only one record per baseID.
So the output would only display john for baseID 012, because he has the highest balance.
Any pointers in the right direction would be fantastic.
I think the problem is caused by the "Name" column: since you have three names mapped to one baseID (012), the three records are treated as distinct groups and grouped individually instead of together.
Try omitting the "Name" column from the SELECT list and from the GROUP BY clause.
SELECT t.accounts.bidd.baseID, MAX(t.accounts.balance)
FROM order o, TABLE(o.accounts) t
WHERE t.accounts.acctype = 'verified'
GROUP BY t.accounts.bidd.baseID;
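If you do want to keep the name of whoever holds the top balance per baseID, an analytic function is an alternative. This is only a sketch, reusing the FROM clause of your query and assuming that ties should all be returned:
SELECT name, baseID, balance
FROM (
    SELECT o.names.name           AS name,
           t.accounts.bidd.baseID AS baseID,
           t.accounts.balance     AS balance,
           RANK() OVER (PARTITION BY t.accounts.bidd.baseID
                        ORDER BY t.accounts.balance DESC) AS rnk
    FROM order o, TABLE(o.accounts) t
    WHERE t.accounts.acctype = 'verified'
)
WHERE rnk = 1;
RANK() numbers the rows within each baseID from the highest balance down, so keeping only rnk = 1 returns john for baseID 012 (and, from your sample output, dan for baseID 010).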
I have two tables.
Order
Replication.
A single Order record can have multiple Replication records. I want to join these two tables such that I always retrieve a single row per order from the join, even if multiple Replication records exist.
Sample data
Replication table:
ORDID      | STATUS | ID           | ERRORMSG | HTTPSTATUS | DELIVERYCNT
=========================================================================
1717410307 | 1      | JBM-9e92ae0c | NULL     | 200        | 1
1717410307 | 1      | JBM-9fb59af1 | NULL     | 400        | -99
1717410308 | 1      | JBM-0764b091 | NULL     | 403        | 1
1717410308 | 1      | JBM-0764b091 | NULL     | 200        | 1
Order Table:
ORDID      | ORDTYPE | DATE
---------------------------------
1717410307 | CAR     | 22-SEP-2011
1717410308 | BUS     | 23-SEP-2011
How can I write the join so that I get one row for each record in the Order table, with the matching Replication record selected dynamically on a priority basis?
The priority can be defined as :
Any record with a delivery count of -99
HTTPSTATUS != 200
Please guide me on how I can proceed with this join.
Please let me know if you need any clarification.
Your help is much appreciated!
Is it possible to use an ORDER BY clause based on HTTPSTATUS and DELIVERYCNT?
In that case you can write a specific ORDER BY and take the TOP 1 from it (I don't know which RDBMS you use), or compute ROW_NUMBER() OVER (ORDER BY ...) AS RowN and filter on RowN = 1.
But this is the ugly (yet quick) solution.
The other option is to use a subquery that adds a new column carrying the priority calculation.
To make the query efficient you should consider indexing (or RDBMS-specific solutions such as included columns).
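For example, here is a sketch of that subquery approach using ROW_NUMBER() (supported by Oracle and SQL Server, among others; Order and DATE are quoted below only because they are reserved words, so adjust to your real names):
SELECT o.ORDID, o.ORDTYPE, o."DATE",
       r.STATUS, r.ID, r.ERRORMSG, r.HTTPSTATUS, r.DELIVERYCNT
FROM "ORDER" o
JOIN (SELECT rep.*,
             ROW_NUMBER() OVER (
                 PARTITION BY rep.ORDID
                 ORDER BY CASE
                              WHEN rep.DELIVERYCNT = -99 THEN 1
                              WHEN rep.HTTPSTATUS <> 200 THEN 2
                              ELSE 3
                          END) AS rn
      FROM Replication rep) r
  ON r.ORDID = o.ORDID
WHERE r.rn = 1;
The CASE expression encodes your priority (DELIVERYCNT = -99 first, then HTTPSTATUS <> 200), and the PARTITION BY guarantees exactly one Replication row per order.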
I have a large table (TokenFrequency) with millions of rows. It is structured like this:
Table - TokenFrequency
id - int, primary key
source - int, foreign key
token - char
count - int
My goal is to select all of the rows in which two sources have the same token. For example, if my table looked like this:
id   source   token   count
--   ------   -----   -----
1    1        dog     1
2    2        cat     2
3    3        cat     2
4    4        pig     5
5    5        zoo     1
6    5        cat     1
7    5        pig     1
I would want a SQL query to give me source 1, source 2, and the sum of the counts. For example:
source1   source2   token   count
-------   -------   -----   -----
2         3         cat     4
2         5         cat     3
3         5         cat     3
4         5         pig     6
I have a query that looks like this:
SELECT F.source AS source1, S.source AS source2, F.token,
(F.count + S.count) AS sum
FROM TokenFrequency F
INNER JOIN TokenFrequency S ON F.token = S.token
WHERE F.source <> S.source
This query works fine but the problems that I have with it are that:
I have a TokenFrequency table that has millions of rows and therefore need a faster alternative to obtain this result.
The current query that I have is giving duplicates. For example, it's selecting:
source1=2, source2=3, token=cat, count=4
source1=3, source2=2, token=cat, count=4
This isn't too much of a problem, but if there is a way to eliminate those duplicates and in turn obtain a speed increase, it would be very useful.
The main issue I have is the speed of the query: as it stands it takes hours to complete. The INNER JOIN of the table with itself is what I believe to be the problem. I'm sure there has to be a way to eliminate the inner join and get similar results using just one instance of the TokenFrequency table. Fixing the second problem I mentioned might also speed the query up.
I need a way to restructure this query to provide the same results in a faster, more efficient manner.
Thanks.
I'd need a little more info to diagnose the speed issue, but to remove the dups, add this to the WHERE:
AND F.source<S.source
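So, for reference, the whole query would read as follows (identical to yours, just with the tighter predicate, which also makes the original <> check redundant):
SELECT F.source AS source1, S.source AS source2, F.token,
       (F.count + S.count) AS sum
FROM TokenFrequency F
INNER JOIN TokenFrequency S ON F.token = S.token
WHERE F.source < S.source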
Try this:
SELECT token, GROUP_CONCAT(source), SUM(count)
FROM TokenFrequency
GROUP BY token;
This should run a lot faster and also eliminate the duplicates. But the sources will be returned in a comma-separated list, so you'll have to explode that in your application.
You might also try creating a compound index over the columns token, source, count (in that order) and analyze with EXPLAIN to see if MySQL is smart enough to use it as a covering index for this query.
Update: I seem to have misunderstood your question. You don't want the sum of counts per token; you want the sum of counts for every pair of sources sharing a given token.
I believe the inner join is the best solution for this. An important guideline for SQL is that if you need to calculate an expression with respect to two different rows, then you need to do a join.
However, one optimization technique that I mentioned above is to use a covering index so that all the columns you need are included in an index data structure. The benefit is that all your lookups are O(log n), and the query doesn't need to do a second I/O to read the physical row to get other columns.
In this case, you should create the covering index over columns token, source, count as I mentioned above. Also try to allocate enough cache space so that the index can be cached in memory.
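For example, the covering index could be created along these lines (MySQL syntax assumed, since GROUP_CONCAT was used above; the index name is just illustrative):
CREATE INDEX idx_token_source_count ON TokenFrequency (token, source, count);
EXPLAIN SELECT F.source, S.source, F.token, (F.count + S.count)
FROM TokenFrequency F
INNER JOIN TokenFrequency S ON F.token = S.token
WHERE F.source < S.source;
If EXPLAIN shows "Using index" in the Extra column for both instances of the table, the query is being answered from the index without touching the row data.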
If token isn't indexed, it certainly should be.