Concatenated range descriptions in MySQL

I have data in a table looking like this:
+---+---+
| a | b |
+---+---+
| a | 1 |
| a | 2 |
| a | 4 |
| a | 5 |
| b | 1 |
| b | 3 |
| b | 5 |
| c | 5 |
| c | 4 |
| c | 3 |
| c | 2 |
| c | 1 |
+---+---+
I'd like to produce a SQL query which outputs data like this:
+---+----------+
| a | 1-2, 4-5 |
| b | 1,3,5    |
| c | 1-5      |
+---+----------+
Is there a way to do this purely in SQL (specifically, MySQL 5.1)?
The closest I have got is select a, concat(min(b), "-", max(b)) from test group by a, but this doesn't take gaps in the range into account.

Use:
SELECT a, GROUP_CONCAT(x.island)
  FROM (SELECT y.a,
               CASE
                 WHEN MIN(y.b) = MAX(y.b) THEN
                   CAST(MIN(y.b) AS CHAR(10))  -- MySQL CAST supports CHAR, not VARCHAR
                 ELSE
                   CONCAT(MIN(y.b), '-', MAX(y.b))
               END AS island
          FROM (SELECT t.a, t.b,
                       CASE
                         WHEN @prev_b = t.b - 1 THEN
                           @group_rank
                         ELSE
                           @group_rank := @group_rank + 1
                       END AS blah,
                       @prev_b := t.b
                  FROM test t
                  JOIN (SELECT @group_rank := 1, @prev_b := 0) r
                 ORDER BY t.a, t.b) y
         GROUP BY y.a, y.blah) x
 GROUP BY a
The idea is that if you assign a value grouping sequential values together, you can then use MIN/MAX to get the appropriate endpoints, i.e.:
a | b | blah
---------------
a | 1 | 1
a | 2 | 1
a | 4 | 2
a | 5 | 2
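For reference, on MySQL 8.0+ (not the asker's 5.1) the same islands can be built without user variables, using the classic "b minus row number" gaps-and-islands trick; a minimal sketch, assuming the question's test(a, b) table:

SELECT a,
       GROUP_CONCAT(IF(lo = hi, lo, CONCAT(lo, '-', hi))
                    ORDER BY lo SEPARATOR ', ') AS ranges
FROM (SELECT a, MIN(b) AS lo, MAX(b) AS hi
      FROM (SELECT a, b,
                   -- b minus its rank is constant within a consecutive run
                   b - ROW_NUMBER() OVER (PARTITION BY a ORDER BY b) AS grp
            FROM test) t
      GROUP BY a, grp) islands
GROUP BY a;

Consecutive b values share a grp, so MIN/MAX per (a, grp) recovers each island's endpoints.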

I also found Martin Smith's answer to another question helpful:
printing restaurant opening hours from a database table in human readable format using php

Related

Replace null values with most recent non-null values SQL

I have a table where each row consists of an ID, a date, and variable values (e.g. var1).
When there is a null value for var1 in a row, I would like to replace the null value with the most recent non-null value before that date for that ID. How can I do this quickly for a very large table?
So presume I start with this table:
+----+--------------+------+
| id | date         | var1 |
+----+--------------+------+
| 1  | '01-01-2022' | 55   |
| 2  | '01-01-2022' | 12   |
| 3  | '01-01-2022' | 45   |
| 1  | '01-02-2022' | Null |
| 2  | '01-02-2022' | Null |
| 3  | '01-02-2022' | 20   |
| 1  | '01-03-2022' | 15   |
| 2  | '01-03-2022' | Null |
| 3  | '01-03-2022' | Null |
| 1  | '01-04-2022' | Null |
| 2  | '01-04-2022' | 77   |
+----+--------------+------+
Then I want this
+----+--------------+------+
| id | date         | var1 |
+----+--------------+------+
| 1  | '01-01-2022' | 55   |
| 2  | '01-01-2022' | 12   |
| 3  | '01-01-2022' | 45   |
| 1  | '01-02-2022' | 55   |
| 2  | '01-02-2022' | 12   |
| 3  | '01-02-2022' | 20   |
| 1  | '01-03-2022' | 15   |
| 2  | '01-03-2022' | 12   |
| 3  | '01-03-2022' | 20   |
| 1  | '01-04-2022' | 15   |
| 2  | '01-04-2022' | 77   |
+----+--------------+------+
A CTE suits this perfectly.
This snippet returns the rows with the filled-in values; turning it into an UPDATE is all that remains (see the sketch after the query).
WITH selectcte AS
(
  SELECT * FROM testnulls WHERE var1 IS NOT NULL
)
SELECT t1A.id, t1A.date, ISNULL(t1A.var1, t1B.var1) AS varvalue
FROM testnulls t1A  -- read from the base table so the NULL rows are included
OUTER APPLY (SELECT TOP 1 var1
             FROM selectcte
             WHERE id = t1A.id AND date < t1A.date
             ORDER BY date DESC) t1B
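Turning that into the UPDATE the answer alludes to is mechanical; a sketch, assuming the same testnulls table:

WITH selectcte AS
(
  SELECT * FROM testnulls WHERE var1 IS NOT NULL
)
UPDATE t1A
SET var1 = t1B.var1
FROM testnulls t1A
CROSS APPLY (SELECT TOP 1 var1
             FROM selectcte
             WHERE id = t1A.id AND date < t1A.date
             ORDER BY date DESC) t1B
WHERE t1A.var1 IS NULL;  -- only rows that need filling

CROSS APPLY (rather than OUTER APPLY) suffices here: a NULL row with no earlier non-null value has nothing to be updated with anyway.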
You can dig further into CTEs here:
https://learn.microsoft.com/en-us/sql/t-sql/queries/with-common-table-expression-transact-sql?view=sql-server-ver16

Merging multiple "state-change" time series

Given a number of tables like the following, representing state-changes at time t of an entity identified by id:
| A          |  | B          |
| t | id | a |  | t | id | b |
| - | -- | - |  | - | -- | - |
| 0 | 1  | 1 |  | 0 | 1  | 3 |
| 1 | 1  | 2 |  | 2 | 1  | 2 |
| 5 | 1  | 3 |  | 3 | 1  | 1 |
where t is in reality a DateTime field with millisecond precision (making discretisation infeasible), how would I go about creating the following output?
| output |
| t | id | a | b |
| - | -- | - | - |
| 0 | 1 | 1 | 3 |
| 1 | 1 | 2 | 3 |
| 2 | 1 | 2 | 2 |
| 3 | 1 | 2 | 1 |
| 5 | 1 | 3 | 1 |
The idea is that for any given input timestamp, the entire state of a selected entity can be extracted by selecting one row from the resulting table. So the latest state of each variable corresponding to any time needs to be present in each row.
I've tried various JOIN statements, but I seem to be getting nowhere.
Note that in my use case:
- rows also need to be joined by entity id
- there may be more than two source tables to be merged
- I'm running PostgreSQL, but I will eventually translate the query to SQLAlchemy, so a pure SQLAlchemy solution would be even better
I've created a db<>fiddle with the example data.
I think you want a full join and some other manipulations. The ideal would be:
select t, id,
last_value(a.a ignore nulls) over (partition by id order by t) as a,
last_value(b.b ignore nulls) over (partition by id order by t) as b
from a full join
b
using (t, id);
But... Postgres doesn't support ignore nulls. So an alternative method is:
select t, id,
max(a) over (partition by id, grp_a) as a,
max(b) over (partition by id, grp_b) as b
from (select *,
count(a.a) over (partition by id order by t) as grp_a,
count(b.b) over (partition by id order by t) as grp_b
from a full join
b
using (t, id)
) ab;
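The count(a.a) trick works because count only increments on non-null values, so every row of a carry-forward stretch lands in the same grp_a, and max(a) over (id, grp_a) spreads the one non-null a across the stretch. The same pattern extends to the questioner's "more than two source tables" case; a sketch, assuming a third state-change table c(t, id, c):

select t, id,
       max(a) over (partition by id, grp_a) as a,
       max(b) over (partition by id, grp_b) as b,
       max(c) over (partition by id, grp_c) as c
from (select *,
             count(a) over w as grp_a,
             count(b) over w as grp_b,
             count(c) over w as grp_c
      from a
      full join b using (t, id)
      full join c using (t, id)
      window w as (partition by id order by t)
     ) abc;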

Select from a concatenation of two columns after a left join

Problem description
Let the tables C and V have those values
>> Table V <<
| UnID | BillID | ProductDesc | Value | ... |
| 1 | 1 | 'Orange Juice' | 3.05 | ... |
| 1 | 1 | 'Apple Juice' | 3.05 | ... |
| 1 | 2 | 'Pizza' | 12.05 | ... |
| 1 | 2 | 'Chocolates' | 9.98 | ... |
| 1 | 2 | 'Honey' | 15.98 | ... |
| 1 | 3 | 'Bread' | 3.98 | ... |
| 2 | 1 | 'Yogurt' | 8.55 | ... |
| 2 | 1 | 'Ice Cream' | 7.05 | ... |
| 2 | 1 | 'Beer' | 9.98 | ... |
| 2 | 2 | 'League of Legends RP' | 40.00 | ... |
>> Table C <<
| UnID | BillID | ClientName | ... |
| 1 | 1 | 'Alexander' | ... |
| 1 | 2 | 'Tom' | ... |
| 1 | 3 | 'Julia' | ... |
| 2 | 1 | 'Tom' | ... |
| 2 | 2 | 'Alexander' | ... |
Table V holds the value of each product, which is associated with a bill number. Table C holds the relationship between the client name and the bill number. However, the bill number counter is scoped to the UnID, which is the store unit ID; that is, each store has its own bill number 1, number 2, etc. Also, the stores do not all have the same number of bills.
Solution description
I'm trying a SELECT with C LEFT JOIN V, without success. Because each BillID is only unique within its UnID, I tried joining on the concatenation of those two columns.
I've used this script, but it gives me an error.
SELECT
SUM(C.Value),
V.ClientName
FROM
C
LEFT JOIN
V
ON
CONCAT(C.UnID, C.BillID) = CONCAT(V.UnID, V.BillID)
GROUP BY
V.ClientName
and SQL Server returns this error: 'CONCAT' is not a recognized built-in function name.
I'm using Microsoft SQL Server 2008 R2
Is the use of CONCAT wrong? Or is it the way I tried to SELECT? Could you give me a hand?
[Note: the tables I've presented are just for the purpose of explaining my difficulty. That said, if you find any errors in the explanation, please let me know so I can correct them.]
You should be joining on the equality of the UnID and BillID columns in the two tables:
SELECT
c.ClientName,
COALESCE(SUM(v.Value), 0) AS total
FROM C c
LEFT JOIN V v
ON c.UnID = v.UnID AND
c.BillID = v.BillID
GROUP BY
c.ClientName;
In theory you could try joining on CONCAT(UnID, BillID). However, you could run into problems. For example, UnID = 1 with BillID = 23 would, concatenated together, be the same as UnID = 12 and BillID = 3.
Note: We wrap the sum with COALESCE, because should a given client have no entries in the V table, the sum would return NULL, which we then replace with zero.
CONCAT is only available in SQL Server 2012 and later.
Here's one option.
SELECT
    SUM(V.Value),
    C.ClientName
FROM
    C
LEFT JOIN
    V
ON
    cast(C.UnID as varchar(100)) + cast(C.BillID as varchar(100)) = cast(V.UnID as varchar(100)) + cast(V.BillID as varchar(100))
GROUP BY
    C.ClientName
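Note that this cast-and-concatenate join has the same collision risk described above (1 + 23 reads the same as 12 + 3). If you go this route, inserting a separator that cannot occur in the values avoids it, e.g.:

cast(C.UnID as varchar(100)) + '-' + cast(C.BillID as varchar(100))
  = cast(V.UnID as varchar(100)) + '-' + cast(V.BillID as varchar(100))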

Postgres: Aggregate accounts into a single identity by common email address

I'm building a directory of users, where:
each user can have an account on one or more external services, and
each of these accounts can have one or more email addresses.
What I want to know is, how can I aggregate these accounts into single identities through common email addresses?
For example, let's say I have two services, A and B. For each service, I have a table that relates an account to one or more email addresses.
So if service A has these account email addresses:
account_id | email_address
-----------|--------------
1          | a@foo.com
1          | b@foo.com
2          | c@foo.com
and service B has these account email addresses:
account_id | email_address
-----------|--------------
3          | a@foo.com
3          | a@bar.com
4          | d@foo.com
I'd like to create a table that aggregates the email addresses of these accounts into a single user identity:
user_id | email_address
--------|--------------
X       | a@foo.com
X       | b@foo.com
X       | a@bar.com
Y       | c@foo.com
Z       | d@foo.com
As you can see, account 1 from service A and account 3 from service B have been merged into a common user X, based on the common email address a@foo.com.
The closest answer I could find is this one, and I suspect the solution is a recursive CTE, but given the inputs and engine are different I'm having trouble implementing it.
Clarification: I'm looking for a solution that handles an arbitrary number of services, so perhaps the input table might be better off as:
service_id | account_id | email_address
-----------|------------|--------------
A          | 1          | a@foo.com
A          | 1          | b@foo.com
A          | 2          | c@foo.com
B          | 3          | a@foo.com
B          | 3          | a@bar.com
B          | 4          | d@foo.com
demo1:db<>fiddle, demo2:db<>fiddle
WITH combined AS (
SELECT
a.email as a_email,
b.email as b_email,
array_remove(ARRAY[a.id, b.id], NULL) as ids
FROM
a
FULL OUTER JOIN b ON (a.email = b.email)
), clustered AS (
SELECT DISTINCT
ids
FROM (
SELECT DISTINCT ON (unnest_ids)
*,
unnest(ids) as unnest_ids
FROM combined
ORDER BY unnest_ids, array_length(ids, 1) DESC
) s
)
SELECT DISTINCT
new_id,
unnest(array_cat) as email
FROM (
SELECT
array_cat(
array_agg(a_email) FILTER (WHERE a_email IS NOT NULL),
array_agg(b_email) FILTER (WHERE b_email IS NOT NULL)
),
row_number() OVER () as new_id
FROM combined co
JOIN clustered cl
ON co.ids <@ cl.ids
GROUP BY cl.ids
) s
Step by step explanation:
For the explanation I'll use this dataset, which is a little more complex than yours and illustrates the steps better (some problems don't occur in your smaller set). Think of the single characters as stand-ins for email addresses.
Table A:
| id | email |
|----|-------|
| 1 | a |
| 1 | b |
| 2 | c |
| 5 | e |
Table B
| id | email |
|----|-------|
| 3 | a |
| 3 | d |
| 4 | e |
| 4 | f |
| 3 | b |
CTE combined:
Join both tables on equal email addresses to get the touch points. The ids of rows that share an email are collected into one array:
| a_email | b_email | ids |
|---------|---------|-----|
| a       | a       | 1,3 |
| b       | b       | 1,3 |
| c       | (null)  | 2   |
| (null)  | d       | 3   |
| e       | e       | 5,4 |
| (null)  | f       | 4   |
CTE clustered (sorry for the names...):
The goal is to get every element into exactly one array. In combined you can see that, for example, there is currently more than one array containing the element 4: {5,4} and {4}.
First, order the rows by the length of their ids arrays, because the DISTINCT later should keep the longest array ({5,4}, which holds the touch point, instead of {4}).
Then unnest the ids arrays to get a basis for filtering. This yields:
| a_email | b_email | ids | unnest_ids |
|---------|---------|-----|------------|
| b | b | 1,3 | 1 |
| a | a | 1,3 | 1 |
| c | (null) | 2 | 2 |
| b | b | 1,3 | 3 |
| a | a | 1,3 | 3 |
| (null) | d | 3 | 3 |
| e | e | 5,4 | 4 |
| (null) | f | 4 | 4 |
| e | e | 5,4 | 5 |
After filtering with DISTINCT ON:
| a_email | b_email | ids | unnest_ids |
|---------|---------|-----|------------|
| b | b | 1,3 | 1 |
| c | (null) | 2 | 2 |
| b | b | 1,3 | 3 |
| e | e | 5,4 | 4 |
| e | e | 5,4 | 5 |
We are only interested in the ids column with the generated unique id clusters, and we need each of them only once; that is the job of the last DISTINCT. So the CTE clustered results in:
| ids |
|-----|
| 2 |
| 1,3 |
| 5,4 |
Now we know which ids belong together and should share their data, so we join the clustered ids back against the origin tables. Since we already did that join in the CTE combined, we can reuse it (that is why it was factored out into its own CTE: no second join of both tables is needed in this step). The join operator <@ says: join if the touch-point array of combined is a subset of the id cluster of clustered. This yields:
| a_email | b_email | ids | ids |
|---------|---------|-----|-----|
| c | (null) | 2 | 2 |
| a | a | 1,3 | 1,3 |
| b | b | 1,3 | 1,3 |
| (null) | d | 3 | 1,3 |
| e | e | 5,4 | 5,4 |
| (null) | f | 4 | 5,4 |
Now we are able to group the email addresses by using the clustered ids (rightmost column).
array_agg aggregates the emails of one column; array_cat concatenates the email arrays of both columns into one big email array.
Since some rows have a NULL email, we filter those out before aggregating, using the FILTER (WHERE ...) clause.
Result so far:
| array_cat |
|-----------|
| c |
| a,b,a,b,d |
| e,e,f |
Now we group all email addresses for one single id. We have to generate new unique ids. That's what the window function row_number is for. It simply adds a row count to the table:
| array_cat | new_id |
|-----------|--------|
| c | 1 |
| a,b,a,b,d | 2 |
| e,e,f | 3 |
The last step is to unnest the array to get one row per email address. Since the array still contains some duplicates, we eliminate them in this step with a DISTINCT as well:
| new_id | email |
|--------|-------|
| 1 | c |
| 2 | a |
| 2 | b |
| 2 | d |
| 3 | e |
| 3 | f |
OK, provided you only have two 'services', and assuming that to begin with you are not overly concerned with how to best represent the new key (I've used text as the easiest to hand), then please try the below query. This works for me on Postgres 9.6:
WITH shared_addr AS
(
SELECT foo.account_a, foo.account_b, row_number() OVER (ORDER BY foo.account_a) AS shared_id
FROM (
SELECT
a.account_id as account_a
, b.account_id as account_b
FROM
service_a a
JOIN
service_b b
ON
a.email_address = b.email_address
GROUP BY a.account_id, b.account_id
) foo
)
SELECT
bar.account_id,
bar.email_address
FROM
(
SELECT
'A-' || service_a.account_id::text AS account_id,
service_a.email_address
FROM service_a
LEFT OUTER JOIN
shared_addr
ON
shared_addr.account_a = service_a.account_id
WHERE shared_addr.account_b IS NULL
UNION ALL
SELECT
'B-' ||service_b.account_id::text,
service_b.email_address FROM service_b
LEFT OUTER JOIN
shared_addr
ON
shared_addr.account_b = service_b.account_id
WHERE shared_addr.account_a IS NULL
UNION ALL
(
SELECT
'shared-' || shared_addr.shared_id::text,
service_b.email_address
FROM service_b
JOIN
shared_addr
ON
shared_addr.account_b = service_b.account_id
UNION
SELECT
'shared-' || shared_addr.shared_id::text,
service_a.email_address
FROM service_a
JOIN
shared_addr
ON
shared_addr.account_a = service_a.account_id
)
) bar
;

Multiply with Previous Value in Oracle SQL

It's easy to multiply (or sum, divide, etc.) with the previous row in an Excel spreadsheet; however, I have not managed to do it in Oracle SQL.
A       B      C
199901  3.81   51905
199902  -6.09  48743.9855
199903  4.75   51059.32481
199904  6.39   54322.01567
199905  -2.35  53045.4483
199906  2.65   54451.15268
199907  1.1    55050.11536
199908  -1.45  54251.88869
199909  0      54251.88869
199910  4.37   56622.69622
Above, column B is given and column C is computed with the spreadsheet formulas:
((B2/100)+1)*C1
((B3/100)+1)*C2
((B4/100)+1)*C3
Example: row 2's value 48743.9855 comes from row 1's 51905 and row 2's -6.09:
((-6.09/100)+1)*51905
I have been trying analytic and window functions but have not succeeded yet. LAG can return the previous row's stored value in the current row, but not the previous row's calculated value.
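Written out, each row's C is a running product over the whole prefix, not a function of just the previous row, which is why a single LAG cannot produce it:

C(n) = C(1) * (1 + B(2)/100) * (1 + B(3)/100) * ... * (1 + B(n)/100)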
This can be done with the help of the MODEL clause:
select *
FROM (
SELECT t.*,
row_number() over (order by a) as rn
from table1 t
)
MODEL
DIMENSION BY (rn)
MEASURES ( A, B, 0 c )
RULES (
c[rn=1] = 51905, -- value in the first row
c[rn>1] = round( c[cv()-1] * (b[cv()]/100 +1), 6 )
)
;
Demo: http://sqlfiddle.com/#!4/9756ed/11
| RN | A | B | C |
|----|--------|-------|--------------|
| 1 | 199901 | 3.81 | 51905 |
| 2 | 199902 | -6.09 | 48743.9855 |
| 3 | 199903 | 4.75 | 51059.324811 |
| 4 | 199904 | 6.39 | 54322.015666 |
| 5 | 199905 | -2.35 | 53045.448298 |
| 6 | 199906 | 2.65 | 54451.152678 |
| 7 | 199907 | 1.1 | 55050.115357 |
| 8 | 199908 | -1.45 | 54251.888684 |
| 9 | 199909 | 0 | 54251.888684 |
| 10 | 199910 | 4.37 | 56622.696219 |
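For completeness: the running product can also be computed without MODEL by turning the product into a sum of logarithms; a sketch, assuming the same table1 and that every (1 + b/100) factor is positive so LN is defined:

SELECT a, b,
       ROUND(51905 * EXP(SUM(CASE WHEN rn = 1 THEN 0  -- row 1 keeps the seed value
                                  ELSE LN(1 + b/100) END)
                         OVER (ORDER BY rn)), 6) AS c
FROM (SELECT t.*, row_number() over (order by a) AS rn
      FROM table1 t);

EXP(SUM(LN(...))) is the classic cumulative-product idiom; note it rounds once at the end, whereas the MODEL rules round at every step, so the last decimals can differ slightly.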