I want to randomly pick 4 distinct rows such that no two of the chosen rows share a value in any column.
Here is what I coded:
SELECT DISTINCT en,dialect,fr FROM words ORDER BY RANDOM() LIMIT 4
Here is some data:
**en** **dialect** **fr**
number SFA numero
number TRI numero
hotel CAI hotel
hotel SFA hotel
I want:
**en** **dialect** **fr**
number SFA numero
hotel CAI hotel
Some of the retrieved rows can share a value with each other, such as the same en or the same fr. I would like to retrieve rows that do not share anything with each other. How do I do that?
I think I'd do this in the front-end code rather than the DB. Here's some pseudocode (I don't know what your Node code looks like):
var seenEn = "en not in (''";
var seenFr = "fr not in (''";
var rows = [];
while (rows.length < 4)
{
    // ask for one more random row whose en/fr we haven't already seen
    var newrow = sqlquery("SELECT * FROM words WHERE " + seenEn + ") and "
                          + seenFr + ") ORDER BY random() LIMIT 1");
    if (!newrow)
        break;
    rows.push(newrow);
    seenEn += ",'" + newrow.en + "'";
    seenFr += ",'" + newrow.fr + "'";
}
The loop runs as many times as needed to retrieve 4 rows (or you could make it a for loop that runs 4 times), unless the query returns null. Each time the query returns, the values are added to a list of values we don't want the query to return again. That list has to start out with a dummy value ('') that is never in the data, to prevent a syntax error when concatenating a comma-and-value string onto the seenXX variable. Those syntax errors can be avoided in other ways, like having a Boolean for "if it's the first value, don't put the comma", but I chose to put dummy, ineffective values into the SQL to keep the JS simpler. Same goes for the dialect column, if you also need it to be unique - add a seenDialect variable built the same way.
As noted, it looks like JS to ease your understanding, but it should be treated as pseudocode outlining a general algorithm - it has never been compiled/run/tested and may have syntax errors or not work at all as JS if pasted into your file; take the idea and work it into your solution.
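For example, after the loop has picked the two rows shown in the desired output above, the statement it builds would look roughly like this (an illustration of the generated string, not a tested query):
SELECT * FROM words
WHERE en not in ('', 'number', 'hotel')   -- en values already seen
  AND fr not in ('', 'numero', 'hotel')   -- fr values already seen
ORDER BY random() LIMIT 1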
You can use RANK, or find the first row for each group, to achieve your result.
Check below; I hope this code will help you.
SELECT 'number' AS Col1, 'SFA' AS Col2, 'numero' AS Col3 INTO #tbl
UNION ALL
SELECT 'number','TRI','numero'
UNION ALL
SELECT 'hotel','CAI' ,'hotel'
UNION ALL
SELECT 'hotel','SFA','hotel'
UNION ALL
SELECT 'Location','LocationA' ,'Location data'
UNION ALL
SELECT 'Location','LocationB','Location data'
;
WITH summary AS (
SELECT Col1,Col2,Col3,
ROW_NUMBER() OVER(PARTITION BY p.Col1 ORDER BY p.Col2 DESC) AS rk
FROM #tbl p)
SELECT s.Col1,s.Col2,s.Col3
FROM summary s
WHERE s.rk = 1
DROP TABLE #tbl
This is being run with SQL on IBM i (iSeries) 7.
I have a table which stores info about orders. Each row has an order number (ON), part number (PN), and sequence number (SEQ). Each ON will have multiple PNs linked to it, and each part number has multiple SEQ numbers. Each sequence number represents the order in which to do work on the part. Somewhere else in the system, once the part is at a location and ready to be worked on, it shows a flag. What I want to do is get a list of orders for a location that have not yet arrived but have been closed out at the previous location (which means the part is on its way).
I have a query, listed below, that I believe should work, but I get the following error: "The column qualifier or table t undefined". Where is my issue?
Select * From (SELECT M2ON as Order__Number , M2SEQ as Sequence__Number,
M2PN as Product__Number,ML2OQ as Order__Quantity
FROM M2P
WHERE M2pN in (select R1PN FROM R1P WHERE (RTWC = '7411') AND (R1SEQ = M2SEQ)
)
AND M2ON IN (SELECT M1ON FROM M1P WHERE ML1RCF = '')
ORDER BY ML2OSM ASC) as T
WHERE
T.Order__Number in (Select t3.m2on from (SELECT *
FROM(Select * from m2p
where m2on = t.Order__Number and m2pn = t.Product__Number
order by m2seq asc fetch first 2 rows only
)as t1 order by m2seq asc fetch first row only
) as t3 where t3.m2stat = 'C')
EDIT - Answer for anyone else with this issue
Clutton's answer worked with slight modification, so thank you to him for the fast response! I had to name my outer table and specify that in the subquery, otherwise the AS/400 would kick back and tell me it couldn't find the columns. I also had to order by the sequence number descending so that I grabbed the highest record that was below the parameter (otherwise, for example, if my sequence number was 20 it could grab 5 even though 10 was available and should be shown first). Here is the subquery I now use. Please note the actual query names M2P as T1.
IFNULL((
SELECT
M2STAT
FROM
M2P as M2P_1
WHERE
M2ON = T1.M2ON
AND M2SEQ < T1.M2SEQ
AND M2PN IN (select R1PN FROM R1P WHERE (RTWC = #WC) AND (R1SEQ = T1.M2SEQ))
ORDER BY M2SEQ DESC
FETCH FIRST ROW ONLY
), 'NULL') as PRIOR_M2STAT
Just reading your question, it looks like something I do frequently to emulate RPG READPE op codes. Is the key to M2P Order/Seq? If so, here is a basic piece that may help you build out the rest of the query.
I am assuming that you are trying to get the prior record by key using SQL. In RPG this would be like doing a READPE on the key for a file with Order/Seq key.
Here is an example using a subquery to get the status field of the prior record.
SELECT
M2ON, M2PN, M2OQ, M2STAT,
IFNULL((
SELECT
M2STAT
FROM
M2P as M2P_1
WHERE
M2P_1.M2ON = M2ON
AND M2P_1.M2SEQ < M2SEQ
FETCH FIRST ROW ONLY
), '') as PRIOR_M2STAT
FROM
M2P
Note that this wraps the subquery in an IFNULL to handle the case where it is the first sequence number and no prior sequence exists.
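If it helps, one way to use that derived column for the original goal (rows whose prior operation has been closed out) is sketched below; the T1 alias and the ORDER BY ... DESC reflect the adjustments described in the question's edit, and the 'C' status value comes from the query in the question, so treat this as a sketch rather than a tested statement:
SELECT *
FROM (
    SELECT M2ON, M2PN, M2OQ, M2STAT,
           IFNULL((SELECT M2STAT
                   FROM M2P AS M2P_1
                   WHERE M2P_1.M2ON = T1.M2ON        -- correlate to the outer row explicitly
                     AND M2P_1.M2SEQ < T1.M2SEQ
                   ORDER BY M2SEQ DESC               -- highest prior sequence first
                   FETCH FIRST ROW ONLY), '') AS PRIOR_M2STAT
    FROM M2P AS T1
) AS X
WHERE X.PRIOR_M2STAT = 'C'                           -- prior operation closed out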
Hi, I have a Redshift table of articles that has a field that can contain many accounts, so there is a one-to-many relationship from articles to accounts.
However, I want to create a new view that lists the partner IDs in one column and, in another column, a count of how many times each partner ID appears in the articles table.
I've attempted to do this using regex and created a new Redshift view, but I'm getting weird results where it doesn't always build properly: one day it will say a partner appears 15 times, then the next 17, then the next 15, when the partner ID count hasn't actually changed.
Any help would be greatly appreciated.
SELECT partner_id,
COUNT(DISTINCT id)
FROM (SELECT id,
partner_ids,
SPLIT_PART(partner_ids,',',i) partner_id
FROM positron_articles a
LEFT JOIN util.seq_0_to_500 s
ON s.i < regexp_count (partner_ids,',') + 2
OR s.i = 1
WHERE i > 0
AND regexp_count (partner_ids,',') = 0
ORDER BY id)
GROUP BY 1;
Let's start with some of the more obvious things and see if we can start to glean other information.
First, GROUP BY 1 on your outer query needs to be GROUP BY partner_id.
Next, you don't need an ORDER BY in your inner query, and the database engine will probably do a better job optimizing performance without it, so remove ORDER BY id.
If you want your final results to be ordered, then add an ORDER BY partner_id or similar clause after the GROUP BY of your outer query.
It looks like there are also problems with how you are splitting a partner_id out of partner_ids, but I am not positive about that, because I would need to understand your view and the data it provides to know how that affects your record count per partner_id.
Next, on your LEFT JOIN to util.seq_0_to_500, I am pretty sure you can drop the s.i = 1 condition, as the first condition satisfies it as well (2 is greater than 1). However, your LEFT JOIN really acts more like an INNER JOIN, because you then exclude any non-matches from positron_articles that don't have s.i > 0.
Oddly, your entire join and inner query then get largely discarded, because you only keep articles that have no commas in their partner_ids: regexp_count(partner_ids, ',') = 0.
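With those fixes applied (group by the column, no inner ORDER BY, and the stray regexp_count filter removed), the split-and-count version might look roughly like this; treat it as a sketch against your column names, not a tested statement:
SELECT partner_id,
       COUNT(DISTINCT id) AS article_count
FROM (SELECT a.id,
             SPLIT_PART(a.partner_ids, ',', s.i) AS partner_id
      FROM positron_articles a
      JOIN util.seq_0_to_500 s                 -- plain join; the LEFT JOIN behaved like one anyway
        ON s.i BETWEEN 1 AND regexp_count(a.partner_ids, ',') + 1
     ) exploded
WHERE partner_id <> ''                          -- drop empty fragments
GROUP BY partner_id;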
I would suggest posting the code for your util.seq_0_to_500, and if you have a partner table, let us know about that as well, because you can probably get your answer a lot more easily with that additional table, depending on how regexp_count works. I suspect regexp_count(partner_ids, partner_id) -- for example, regexp_count('12345,678', '1234') -- will return greater than 0, at which point you have no choice but to split the delimited strings into another table before counting, or to build a new matching function.
If regexp_count only matches exactly between commas and you have a partner table, your query could be as easy as this:
SELECT
    p.partner_id
    ,COUNT(a.id) AS ArticlesAppearedIn
FROM
    positron_articles a
LEFT JOIN PARTNERTABLE p
    ON regexp_count(a.partner_ids, CAST(p.partner_id AS VARCHAR)) > 0
GROUP BY
    p.partner_id
I will actually correct myself, as I just thought of a way to join a partner table without regexp_count. So if you have a partner table, this might work for you; if not, you will need to split strings. It basically tests whether the partner_id is the entire partner_ids value, or is at the beginning, in the middle, or at the end of partner_ids. If one of those conditions is met, the record is returned.
SELECT
    p.partner_id
    ,COUNT(a.id) AS ArticlesAppearedIn
FROM
    PARTNERTABLE p
INNER JOIN positron_articles a
    ON
    (
        CASE
            WHEN a.partner_ids = CAST(p.partner_id AS VARCHAR(100)) THEN 1
            WHEN a.partner_ids LIKE CAST(p.partner_id AS VARCHAR(100)) || ',%' THEN 1
            WHEN a.partner_ids LIKE '%,' || CAST(p.partner_id AS VARCHAR(100)) || ',%' THEN 1
            WHEN a.partner_ids LIKE '%,' || CAST(p.partner_id AS VARCHAR(100)) THEN 1
            ELSE 0
        END
    ) = 1
GROUP BY
    p.partner_id
Hi, this code is working fine until the last statement. There is more to it, but I was wondering if we can learn what is incorrect here.
This is on IBM i (AS/400).
I get 'SQL0199 Keyword SELECT not expected. Valid tokens: FOR USE SKIP WAIT WITH FETCH ORDER UNION EXCEPT OPTIMIZE'. Can you explain this issue to me?
SELECT COUNT(*)
FROM DLIB.ORDHEADR,DLIB.TRANCODE,DLIB.TRA11
WHERE OHCOM# = TSCOM# AND OHORD# = TSORD#
AND (otCOM# = OHCOM# AND OTORD#= OHORD# AND ottrnc = 'AQC')
AND OHORDT IN('RTR','INT','SAM')
AND OHREQD = replace(char(current date, iso), '-', '')
AND OHHLDC = ' '
AND ( ( TSTATS IN('AEP','SPJ')
AND OHORD# NOT in (SELECT a.TSORD#
FROM DLIB.TRANCODE a
WHERE a.TSTATS IN('EEP','SPC')
)
)
OR TSTATS IN('EEP','SPC')
AND OHORD# IN (SELECT DISTINCT(C.TSORD#)
FROM DLIB.TRANCODE C
JOIN (SELECT DISTINCT (B.TSORD#), MAX(B.TSUTIM) AS C_TSUTIM,
MAX(B.TSUDAT) AS C_TSUDAT
FROM DLIB.TRANCODE B
WHERE B.TSTATS IN ('EEP','SPC','ECM','ECT',
'ECA','CEL','BOC','COM',
'COO','REV','MCO','CPA',
'ECV','ECC','EPT','EPM',
'CAT','CAC','CAM','CAS',
'MAC','004','006','600',
'MEP','EPC','CPK')
GROUP BY B.TSORD#
) q1
ON C.TSORD# = q1.TSORD#
AND C.TSUDAT = q1.C_TSUDAT
AND C.TSUTIM = q1.C_TSUTIM
WHERE C.TSORD# NOT IN (SELECT F.TSORD#
FROM DLIB.TRANCODE F
WHERE F.TSTATS IN ('SPJ','REL','EAS','REV',
'STP','SPT','PPC','SPM',
'BPA','BPB','BPC','BPD','BPE',
'BPF','BPG','BPH','BPI','BPJ',
'BPK','BPL','BPM','BPN','CBM',
'BPO','BPP','BAT','BCM',
'BAM','WAT','WAM','LBL','012',
'006','600','004','SCP','CBA',
'CBB','CBC','CBD','CBE',
'CBF','CBG','CBH','CBI','CBJ',
'CBK','CBL','CBM','CBN','CBO',
'CBP','CBQ','CBR','CBS',
'CBT','CBU','CBV','CBW',
'CBX','CBY','CBZ','CB1',
'CB2','CB3','CB4','CB5')
)
AND C.TSTATS IN('EEP','SPC')
)
)
-- till here it's fine.
SELECT COALESCE(SUM(OdQty#),0)
You need to use GROUP BY to SUM.
SELECT COALESCE(SUM(Goals),0) AS TeamGoals
FROM Players
GROUP BY TeamId
After formatting your code so that we can see better where the various parts of the statement begin and end, we can see what matches up with what.
Everything up to "till here it's fine" is one SQL SELECT statement. You need a semicolon before your next query can begin; that query starts with SELECT COALESCE(), but it is incomplete since there is no FROM clause. Once you've put the terminator on the first statement, it should run.
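In other words, the overall shape should be something like this (the second FROM is just a placeholder, since the rest of your code wasn't shown):
SELECT COUNT(*)
FROM DLIB.ORDHEADR, DLIB.TRANCODE, DLIB.TRA11
WHERE ...          -- the long set of conditions shown above
;                  -- terminator for the first statement

SELECT COALESCE(SUM(OdQty#), 0)
FROM ...           -- the second statement still needs its own FROM clause (and WHERE, etc.)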
The second query is another question. You didn't show us the rest of the code. As TeKapa says, you need a GROUP BY clause anytime you use an aggregate function. But this is only required if you are also including a non-aggregate column in the results.
SELECT TeamID, COALESCE(SUM(Goals),0) AS TeamGoals
FROM Players
GROUP BY TeamId
That will give you each TeamID in Players, and the total Goals for each team. You would probably also include ORDER BY TeamID.
But if you simply want the combined total of all Players, it is completely valid to say
SELECT SUM(Goals) AS TotalGoals
FROM Players
Taking a step back, it seems like your query has gotten so complex that even you may be having difficulty managing it. Hopefully others won't be asked to maintain something like this.
If such code is going into production, I recommend finding ways to modularize portions of the complexity, such as with views or common table expressions. It may also be a good idea to store those lists of values in a table, rather than hardcoding them.
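For example, the hardcoded status lists could live in a small reference table and be pulled in through a common table expression. This is only a sketch; the DLIB.STATUS_GROUP table and its columns are hypothetical names, not something that exists in your system:
-- hypothetical lookup table: one row per (group name, status code)
WITH excluded_status AS (
    SELECT TSTATS
    FROM DLIB.STATUS_GROUP
    WHERE GRP_NAME = 'EXCLUDED'
)
SELECT COUNT(*)
FROM DLIB.TRANCODE C
WHERE C.TSORD# NOT IN (SELECT F.TSORD#
                       FROM DLIB.TRANCODE F
                       JOIN excluded_status X ON X.TSTATS = F.TSTATS)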
Keep in mind I am using SQL 2000
I have two tables.
tblAutoPolicyList contains a field called PolicyIDList.
tblLossClaims contains two fields called LossPolicyID & PolicyReview.
I am writing a stored proc that will get the distinct PolicyID from PolicyIDList field, and loop through LossPolicyID field (if match is found, set PolicyReview to 'Y').
Sample table layout:
PolicyIDList | LossPolicyID                                           | PolicyReview
-------------|--------------------------------------------------------|-------------
9651XVB19    | 5021WWA85, 4421WWA20, 3314WWA31, 1121WAW11, 2221WLL99 | Y
5021WWA85    | 3326WAC35, 1221AXA10, 9863AAA44, 5541RTY33, 9651XVB19 | Y
0151ZVB19    | 4004WMN63, 1001WGA42, 8587ABA56, 8541RWW12, 9329KKB08 | N
How would I go about writing the stored proc (looking for logic more than syntax)?
Keep in mind I am using SQL 2000.
SELECT LossPolicyID, * FROM tableName WHERE CHARINDEX('PolicyID', LossPolicyID, 1) > 0
Basically, the idea is this:
1. 'Unroll' tblLossClaims and return two columns: a tblLossClaims key (you didn't mention any, so I guess it's going to be LossPolicyID) and Item = a single item from LossPolicyID.
2. Find matches of unrolled.Item in tblAutoPolicyList.PolicyIDList.
3. Find matches of distinct matched.LossPolicyID in tblLossClaims.LossPolicyID.
4. Update tblLossClaims.PolicyReview accordingly.
The main UPDATE can look like this:
UPDATE claims
SET PolicyReview = 'Y'
FROM tblLossClaims claims
JOIN (
SELECT DISTINCT unrolled.LossPolicyID
FROM (
SELECT LossPolicyID, Item = itemof(LossPolicyID)
FROM unrolling_join
) unrolled
JOIN tblAutoPolicyList
ON unrolled.Item = tblAutoPolicyList.PolicyIDList
) matched
ON matched.LossPolicyID = claims.LossPolicyID
You can take advantage of the fixed item width and the fixed list format and thus easily split LossPolicyID without a UDF. I can see this done with the help of a number table and SUBSTRING(). unrolling_join in the above query is actually tblLossClaims joined with the number table.
Here's the definition of unrolled 'zoomed in':
...
(
SELECT LossPolicyID,
Item = SUBSTRING(LossPolicyID,
(v.number - 1) * (@ItemLength + 2) + 1,
@ItemLength)
FROM tblLossClaims c
JOIN master..spt_values v ON v.type = 'P'
AND v.number BETWEEN 1 AND (LEN(c.LossPolicyID) + 2) / (@ItemLength + 2)
) unrolled
...
master..spt_values is a system table that is used here as the number table. Filter v.type = 'P' gives us a rowset with number values from 0 to 2047, which is narrowed down to the list of numbers from 1 to the number of items in LossPolicyID. Eventually v.number serves as an array index and is used to cut out single items.
@ItemLength is of course simply LEN(tblAutoPolicyList.PolicyIDList). I would probably also declare @ItemLength2 = @ItemLength + 2 so it isn't calculated every time the filter is applied.
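Putting the pieces together, the whole statement might look something like the following; this is an untested sketch that assumes the items are fixed-width and separated by ', ' (hence the +2 stride):
DECLARE @ItemLength int, @ItemLength2 int

SELECT @ItemLength = MAX(LEN(PolicyIDList)) FROM tblAutoPolicyList
SET @ItemLength2 = @ItemLength + 2              -- item width plus the ', ' separator

UPDATE claims
SET PolicyReview = 'Y'
FROM tblLossClaims claims
JOIN (
    SELECT DISTINCT unrolled.LossPolicyID
    FROM (
        SELECT c.LossPolicyID,
               Item = SUBSTRING(c.LossPolicyID,
                                (v.number - 1) * @ItemLength2 + 1,
                                @ItemLength)
        FROM tblLossClaims c
        JOIN master..spt_values v
          ON v.type = 'P'
         AND v.number BETWEEN 1 AND (LEN(c.LossPolicyID) + 2) / @ItemLength2
    ) unrolled
    JOIN tblAutoPolicyList
      ON unrolled.Item = tblAutoPolicyList.PolicyIDList
) matched
ON matched.LossPolicyID = claims.LossPolicyID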
Basically, that's it, if I haven't missed anything.
If the PolicyIDList field is a delimited list, you have to first separate the individual policy IDs and create a temporary table with all of the results. Next, use an UPDATE query on tblLossClaims with WHERE EXISTS (SELECT * FROM #temptable tt WHERE tt.PolicyID = LossPolicyID).
Depending on the size of the table/data, you might wish to add an index to your temporary table.
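A rough sketch of that idea, written against the sample layout above (where LossPolicyID holds the delimited list); the fixed nine-character IDs, the ', ' separator, and the temp-table name are assumptions:
CREATE TABLE #temptable (LossPolicyID VARCHAR(200), PolicyID CHAR(9))

-- split each delimited list into one row per individual ID
INSERT INTO #temptable (LossPolicyID, PolicyID)
SELECT c.LossPolicyID,
       SUBSTRING(c.LossPolicyID, (v.number - 1) * 11 + 1, 9)
FROM tblLossClaims c
JOIN master..spt_values v
  ON v.type = 'P'
 AND v.number BETWEEN 1 AND (LEN(c.LossPolicyID) + 2) / 11

-- optionally: CREATE INDEX IX_temp_PolicyID ON #temptable (PolicyID)

-- flag claims whose list contains a known policy ID
UPDATE tblLossClaims
SET PolicyReview = 'Y'
WHERE EXISTS (SELECT *
              FROM #temptable tt
              JOIN tblAutoPolicyList p ON p.PolicyIDList = tt.PolicyID
              WHERE tt.LossPolicyID = tblLossClaims.LossPolicyID)

DROP TABLE #temptable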
I am working on an application that has to assign numeric codes to elements. These codes are not consecutive, and my idea is not to insert them into the database until I have the related element, but I would like to find, with SQL, the codes that have not been assigned, and I don't know how to do it.
Any ideas?
Thanks!!!
Edit 1
The table can be as simple as this:
code | element
-----------------
3 | three
7 | seven
2 | two
And I would like to get something like this: 1, 4, 5, 6, without any other table.
Edit 2
Thanks for the feedback, your answers have been very helpful.
This will return NULL if a code is not assigned:
SELECT assigned_codes.code
FROM codes
LEFT JOIN
assigned_codes
ON assigned_codes.code = codes.code
WHERE codes.code = #code
This will return all non-assigned codes:
SELECT codes.code
FROM codes
LEFT JOIN
assigned_codes
ON assigned_codes.code = codes.code
WHERE assigned_codes.code IS NULL
There is no pure SQL way to do exactly the thing you want.
In Oracle, you can do the following:
SELECT lvl
FROM (
SELECT level AS lvl
FROM dual
CONNECT BY
level <=
(
SELECT MAX(code)
FROM elements
)
)
LEFT OUTER JOIN
elements
ON code = lvl
WHERE code IS NULL
In PostgreSQL, you can do the following:
SELECT lvl
FROM generate_series(
1,
(
SELECT MAX(code)
FROM elements
)) lvl
LEFT OUTER JOIN
elements
ON code = lvl
WHERE code IS NULL
Contrary to the assertion that this cannot be done using pure SQL, here is a counter example showing how it can be done. (Note that I didn't say it was easy - it is, however, possible.) Assume the table's name is value_list with columns code and value as shown in the edits (why does everyone forget to include the table name in the question?):
SELECT b.bottom, t.top
FROM (SELECT l1.code - 1 AS top
FROM value_list l1
WHERE NOT EXISTS (SELECT * FROM value_list l2
WHERE l2.code = l1.code - 1)) AS t,
(SELECT l1.code + 1 AS bottom
FROM value_list l1
WHERE NOT EXISTS (SELECT * FROM value_list l2
WHERE l2.code = l1.code + 1)) AS b
WHERE b.bottom <= t.top
AND NOT EXISTS (SELECT * FROM value_list l2
WHERE l2.code >= b.bottom AND l2.code <= t.top);
The two parallel queries in the from clause generate values that are respectively at the top and bottom of a gap in the range of values in the table. The cross-product of these two lists is then restricted so that the bottom is not greater than the top, and such that there is no value in the original list in between the bottom and top.
On the sample data, this produces the range 4-6. When I added an extra row (9, 'nine'), it also generated the range 8-8. Clearly, you also have two other possible ranges for a suitable definition of 'infinity':
-infinity .. MIN(code)-1
MAX(code)+1 .. +infinity
Note that:
- If you are using this routinely, there will generally not be many gaps in your lists.
- Gaps can only appear when you delete rows from the table (or if you ignore the ranges returned by this query or its relatives when inserting data).
- It is usually a bad idea to reuse identifiers, so in fact this effort is probably misguided.
However, if you want to do it, here is one way to do so.
This is the same idea that Quassnoi has published; I just linked all the ideas together in T-SQL-like code.
DECLARE @series TABLE (n int)
DECLARE
@max_n int,
@i int
SET @i = 1
-- max value in the elements table
SELECT @max_n = MAX(code) FROM elements
-- fill the @series table variable with numbers from 1 to @max_n
WHILE @i <= @max_n BEGIN
INSERT INTO @series (n) VALUES (@i)
SET @i = @i + 1
END
-- unassigned codes: those without a matching row in the elements table
SELECT
n
FROM
@series AS series
LEFT JOIN
elements
ON
elements.code = series.n
WHERE
elements.code IS NULL
EDIT:
This is, of course, not an ideal solution. If you have a lot of elements, or you check for non-existing codes often, this could cause performance issues.
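If the database supports recursive common table expressions (SQL Server 2005 and later, for example, which is an assumption about your environment), a set-based version along these lines avoids the WHILE loop entirely:
DECLARE @max_n int
SELECT @max_n = MAX(code) FROM elements;

WITH series (n) AS (
    SELECT 1
    UNION ALL
    SELECT n + 1 FROM series WHERE n < @max_n
)
SELECT series.n
FROM series
LEFT JOIN elements
    ON elements.code = series.n
WHERE elements.code IS NULL
OPTION (MAXRECURSION 0)     -- lift the default 100-level recursion limit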