Querying sequences of rows in SQL

Suppose I am storing events associated with users in a table as follows (with dt standing in for the timestamp of the event):
| dt | user | event |
----------------------
| 1  | 1    | A     |
| 2  | 1    | D     |
| 3  | 1    | B     |
| 4  | 1    | C     |
| 5  | 1    | B     |
| 6  | 2    | B     |
| 7  | 2    | B     |
| 8  | 2    | A     |
| 9  | 2    | A     |
| 10 | 2    | C     |
Such that we could say:
user 1 has an event-sequence of ADBCB
user 2 has an event-sequence of BBAAC
The types of questions I would want to answer about these users are very easy to express as regular expressions on the event-sequences, e.g. "which users have an event-sequence matching A.*B?" or "which users have an event-sequence matching A[^C]*B[^C]*D?", etc.
What would be a good SQL technique or operator I could use to answer similar queries over this table structure?
Is there a way to efficiently/dynamically generate a table of user-to-event-sequence which could then be queried with regex?
I am currently looking at using Postgres, but I am curious to know if any of the bigger DBMS's like SQLServer or Oracle have specialized operators for this as well.

With Postgres 9.x this is actually quite easy:
select userid,
       string_agg(event, '' order by dt) as event_sequence
from events
group by userid;
Using that result you can now apply a regular expression on the event_sequence:
select *
from (
  select userid,
         string_agg(event, '' order by dt) as event_sequence
  from events
  group by userid
) t
where event_sequence ~ 'A.*B'
With Postgres 8.x you need to find a replacement for the string_agg() function (just google for it, there are a lot of examples out there), and you need a sub-select to ensure the ordering of the aggregate, as 8.x does not support an order by inside an aggregate function.
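For example, on 8.4 one common replacement is array_agg() with array_to_string(), ordering the rows in a sub-select first. A rough, untested sketch (table and column names follow the query above):

-- Rough 8.4-era sketch (untested): order the rows in a sub-select, then
-- aggregate; this relies on array_agg() seeing the rows in that order.
select userid,
       array_to_string(array_agg(event), '') as event_sequence
from (
  select userid, event
  from events
  order by userid, dt
) ordered_events
group by userid;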

I'm not at a computer to write code for this answer, but here's how I would go about a RegEx-based solution in SQL Server:
Build a string from the result set. Something like http://blog.sqlauthority.com/2009/11/25/sql-server-comma-separated-values-csv-from-table-column/ should work if you omit the comma.
Run your RegEx match against the resulting string. Unfortunately, SQL Server does not provide this functionality natively; however, you can use a CLR function for this purpose, as described at http://www.ideaexcursion.com/2009/08/18/sql-server-regular-expression-clr-udf/ (see the sketch below).
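To make that concrete, here is a rough, untested sketch of both steps. The events table and columns follow the original question, and dbo.RegexMatch is only a placeholder for whatever CLR UDF you create from the second link:

-- Step 1: FOR XML PATH('') concatenates each user's events in dt order.
-- Step 2: dbo.RegexMatch (placeholder CLR regex UDF) filters the sequences.
SELECT u.userid, u.event_sequence
FROM (
    SELECT DISTINCT e.userid,
           (SELECT e2.event AS [text()]
            FROM events e2
            WHERE e2.userid = e.userid
            ORDER BY e2.dt
            FOR XML PATH('')) AS event_sequence
    FROM events e
) u
WHERE dbo.RegexMatch(u.event_sequence, 'A.*B') = 1;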
This should ultimately provide you with the functionality in SQL Server that your original question requests. However, if you're analyzing a very large dataset, this could be quite slow, and there may be better ways to accomplish what you're looking for.

For Oracle (version 11g R2):
If you happen to be using Oracle DB 11g R2, take a look at listagg. The code below should work, but I haven't tested it. The point is: you can use listagg.
select userid,
       listagg(event, '') within group (order by dt) as events
from events
group by userid
order by userid;

USERID EVENTS
------ --------------------
     1 ADBCB
     2 BBAAC
In prior versions you can do the same with a hierarchical (CONNECT BY) query. See the Oracle documentation for more details on listagg.
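For reference, a rough, untested sketch of that pre-11g approach, using the same table and column names as above (SYS_CONNECT_BY_PATH needs a non-empty separator, so one is stripped out afterwards):

-- Number each user's rows, walk them as a chain with CONNECT BY, and keep
-- the longest path per user (the one ending at the last row).
select userid,
       replace(max(sys_connect_by_path(event, '/'))
                 keep (dense_rank last order by rn), '/', '') as events
from (
  select userid, event,
         row_number() over (partition by userid order by dt) as rn
  from events
)
start with rn = 1
connect by prior userid = userid and prior rn = rn - 1
group by userid;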

Related

SQL structure for multiple queries of the same table (using window function, case, join)

I have a complex production SQL question. It's actually PrestoDB on Hadoop, but it conforms to common SQL.
I've got to get a bunch of metrics from a table, a little like this (sorry if the tables are mangled):
+--------+--------------+------------------+
| device | install_date | customer_account |
+--------+--------------+------------------+
| dev 1 | 1-Jun | 123 |
| dev 1 | 4-Jun | 456 |
| dev 1 | 10-Jun | 789 |
| dev 2 | 20-Jun | 50 |
| dev 2 | 25-Jun | 60 |
+--------+--------------+------------------+
I need something like this:
+--------+------------------+-------------------------+
| device | max_install_date | previous_account_number |
+--------+------------------+-------------------------+
| dev 1 | 10-Jun | 456 |
| dev 2 | 25-Jun | 50 |
+--------+------------------+-------------------------+
I can do two separate queries to get max install date and previous account number, like this:
select device, max(install_date) as max_install_date
from (select [a whole bunch of stuff],
             dense_rank() over (partition by device order by [something_else]) rnk
      from some_table a
     )
But how do you combine them into one query to get one line for each device? I have rank, WITH statements, case statements, and one join. They all work individually, but I'm banging my head trying to understand how to combine them all.
I need to understand how to structure big queries.
P.S. Any good books you'd recommend on advanced SQL for data analysis? I see a bunch on Amazon, but nothing that tells me how to construct big queries like this. I'm not a DBA; I'm a data guy.
Thanks.
You can use a correlated subquery approach:
select t.*
from some_table t
where install_date = (select max(install_date)
                      from some_table t1
                      where t1.device = t.device);
This assumes install_date has a reasonable date format.
I think you want:
select t.*
from (select t.*,
             max(install_date) over (partition by device) as max_install_date,
             lag(customer_account) over (partition by device order by install_date) as prev_customer_account
      from some_table t
     ) t
where install_date = max_install_date;

Is it possible to do complex SQL queries using Django?

I have the following script to get a calculated index for each day after a specific date:
with test_reqs as (
    select id_test, date_request, sum(n_requests) as n_req
    from cdr_test_stats
    where id_test in (2, 4)           -- List of Ids included in index calc
      and date_request >= 20170823   -- Start date (end date -> Last in DB -> Today)
    group by id_test, date_request
),
date_reqs as (
    select date_request, sum(n_req) as n_req
    from test_reqs
    group by date_request
),
test_reqs_ratio as (
    select H.id_test, H.date_request,
           case when D.n_req = 0 then null else H.n_req / D.n_req end as ratio_req
    from test_reqs H
    inner join date_reqs D
      on H.date_request = D.date_request
),
test_reqs_index as (
    select HR.*, least(nullif(HA.n_dates_hbalert, 0), 10) as index_hb
    from test_reqs_ratio HR
    left join cdr_test_alerts_stats HA
      on HR.id_test = HA.id_test and HR.date_request = HA.date_request
)
select date_request, 10 - sum(ratio_req * index_hb) as index_hb
from test_reqs_index
group by date_request
Result:
---------------------------
| date_request | index_hb |
---------------------------
| 20170904 | 7.5508 |
| 20170905 | 7.6870 |
| 20170825 | 7.4335 |
| 20170901 | 7.7116 |
| 20170824 | 1.6568 |
| 20170823 | 0.0000 |
| 20170903 | 5.1850 |
| 20170830 | 0.0000 |
| 20170828 | 0.0000 |
---------------------------
The problem is that I want to get the same result in Django and avoid executing the raw query through the cursor.
Many thanks for any suggestions.
Without going deep into the specifics of your query, I'd say the Django ORM has enough expressiveness to handle most problems, but it would generally require you to redesign the query from the ground up. You would have to use subqueries and joins instead of the CTEs, and you might end up with a solution that does some of the work in Python land instead of the DB.
Taking this into account, the answer is: it depends. Your requirements, such as performance and data size, play a role.
Another solution worth considering is declaring your SQL query as a view and, at least in the case of Postgres, using something like django-pgviews to query it with the Django ORM almost as if it were a model.
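A minimal sketch of that view approach (the view name daily_index is just an assumption; the body is the query from the question):

-- Expose the raw query as a Postgres view; django-pgviews (or an unmanaged
-- model pointing at daily_index) can then read it through the ORM.
create or replace view daily_index as
with test_reqs as (
    select id_test, date_request, sum(n_requests) as n_req
    from cdr_test_stats
    where id_test in (2, 4)
      and date_request >= 20170823
    group by id_test, date_request
),
date_reqs as (
    select date_request, sum(n_req) as n_req
    from test_reqs
    group by date_request
),
test_reqs_ratio as (
    select H.id_test, H.date_request,
           case when D.n_req = 0 then null else H.n_req / D.n_req end as ratio_req
    from test_reqs H
    inner join date_reqs D
      on H.date_request = D.date_request
),
test_reqs_index as (
    select HR.*, least(nullif(HA.n_dates_hbalert, 0), 10) as index_hb
    from test_reqs_ratio HR
    left join cdr_test_alerts_stats HA
      on HR.id_test = HA.id_test and HR.date_request = HA.date_request
)
select date_request, 10 - sum(ratio_req * index_hb) as index_hb
from test_reqs_index
group by date_request;

Note that the id list and the start date are baked into the view; if those need to stay user-selectable, the view would have to expose them as columns to filter on, or the raw-SQL route remains the pragmatic choice.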

Google Big Query : Window Function Row Wise Cumulative Sum Across Columns

I am looking to calculate a cumulative sum across columns in Google BigQuery.
Assume there are five columns (NAME, A, B, C, D) with two rows of integers, for example:
NAME | A | B | C | D
----------------------
Bob | 1 | 2 | 3 | 4
Carl | 5 | 6 | 7 | 8
I am looking for a windowing function or UDF to calculate the cumulative sum across the columns of each row, generating this output:
NAME | A | B | C | D
-------------------------
Bob | 1 | 3 | 6 | 10
Carl | 5 | 11 | 18 | 26
Any thoughts or suggestions greatly appreciated!
I think there are a number of reasonable workarounds for your requirements, mostly in the area of designing your table better. It all really depends on how you input your data and, most importantly, how you then consume it.
Still, staying with the requirements as presented: the below is not exactly the output you expect in your question, but it might be useful as an example:
SELECT name, GROUP_CONCAT(STRING(cum)) AS all
FROM (
  SELECT name,
         SUM(INTEGER(num)) OVER(
           PARTITION BY name
           ROWS BETWEEN UNBOUNDED PRECEDING AND CURRENT ROW) AS cum
  FROM (
    SELECT name, SPLIT(all) AS num
    FROM (
      SELECT name,
             CONCAT(STRING(a),',',STRING(b),',',STRING(c),',',STRING(d)) AS all
      FROM yourtable
    )
  )
)
GROUP BY name
Output is:
name all
Bob 1,3,6,10
Carl 5,11,18,26
Depending on how you then consume this data, it may still work for you.
Note that you now avoid writing something like col1 + col2 + .. + col89 + col90, but you still need to mention each column explicitly, just once.
If you have the "luxury" of implementing your requirements outside of the BigQuery UI, in some client, you can use the BigQuery API to programmatically acquire the table schema, build your logic/query on the fly, and then execute it.
Take a look at the APIs below to start with:
To get table schema - https://cloud.google.com/bigquery/docs/reference/v2/tables/get
To issue query job - https://cloud.google.com/bigquery/docs/reference/v2/jobs/insert
There's no need for a UDF:
SELECT name, a, a + b AS b, a + b + c AS c, a + b + c + d AS d
FROM tab

SQL Server 2008 pivot query gone wrong with column name

I have some problems regarding a pivot query. I am new to this, so I looked for something on the internet and found dozens of examples. I decided to follow this link. I've been practicing, but it seems like I ran into an obvious error.
My code is:
select
    risk, [Quick] AS Quick, [Brown] AS Brown, [Fox] AS Fox
from
(
    select risk, site
    from tst
) as ps
PIVOT
(
    count(risk)
    for site in ([Quick], [Brown], [Fox])
) AS pvt
But it is throwing an error:
Invalid column name 'risk'.
Basically I want to have an output like this:
| Foo | Quick | Brown | Fox |
-----------------------------
|  1  |  10   |   3   |  2  |
|  2  |   5   |   4   |  4  |
|  3  |   4   |   1   |  5  |
|  4  |   2   |   3   |  7  |
|  5  |   3   |   2   |  1  |
Something like that: just counting how many there are for each risk number. Any help would be much appreciated. Thanks.
The problem with your existing query is that you are using the column risk in your final select list as well as inside the aggregate function. Once you've counted the risk values for each site, the column is no longer available to display.
To get around this, you can add a second copy of the risk column to your subquery, similar to the following. You then count this copy of risk and display the original in the final select:
select risk, [ADAB] AS ADAB, [Bahrain] AS Bahrain, [Thumrait] AS Thumrait
from
(
    select risk, piv_risk = risk, site
    from qcqcif
) as ps
PIVOT
(
    count(piv_risk)
    for site in ([ADAB], [Bahrain], [Thumrait])
) AS pvt;
See SQL Fiddle with Demo

Access 2007 select first value of query results

I am running into a rather annoying thingy in Access (2007) and I am not sure if this is a feature or if I am asking for the impossible.
Although the actual database structure is more complex, my problem boils down to this:
I have a table with data about Units for specific years. This data comes from different sources and might overlap.
Unit | IYR | X1 | Source |
-----------------------------
A | 2009 | 55 | 1 |
A | 2010 | 80 | 1 |
A | 2010 | 101 | 2 |
A | 2010 | 150 | 3 |
A | 2011 | 90 | 1 |
...
Now I would like the user to select certain sources, order them by priority and then extract one data value for each year.
For example, if the user selects source 1, 2 and 3 and orders them by (3, 1, 2), then I would like the following result:
Unit | IYR | X1 | Source |
-----------------------------
A | 2009 | 55 | 1 |
A | 2010 | 150 | 3 |
A | 2011 | 90 | 1 |
I am able to order the initial table based on a specific order. I do this with the following query:
SELECT Unit, IYR, X1, Source
FROM TestTable
WHERE Source In (1,2,3)
ORDER BY Unit, IYR,
IIf(Source=3,1,IIf(Source=1,2,IIf(Source=2,3,4)))
This gives me the following intermediate result:
Unit | IYR | X1 | Source |
-----------------------------
A | 2009 | 55 | 1 |
A | 2010 | 150 | 3 |
A | 2010 | 80 | 1 |
A | 2010 | 101 | 2 |
A | 2011 | 90 | 1 |
The next step is to get only the first value of each year. I was thinking of using the following query:
SELECT X.Unit, X.IYR, first(X.X1) as FirstX1
FROM (...) AS X
GROUP BY X.Unit, X.IYR
Where (…) is the above query.
Now Access goes bananas. Whatever order I give to the intermediate results, the result of this query is:
Unit | IYR | X1 |
--------------------
A | 2009 | 55 |
A | 2010 | 80 |
A | 2011 | 90 |
In other words, for year 2010 it shows the value of source 1 instead of 3. It seems that Access does not care about the ordering of the nested query when it applies the FIRST() function and sticks to the original ordering of the data.
Is this a feature of Access or is there a different way of achieving the desired results?
Ps: Next step would be to use a self join to add the source column to the results again, but I first need to resolve above problem.
Rather than use FIRST(), it may be better to determine the MIN priority and then join back, e.g.:
SELECT
t.UNIT,
t.IYR,
t.X1,
t.Source ,
t.PrioritySource
FROM
(SELECT
Unit,
IYR,
X1,
Source,
SWITCH ( [Source]=3, 1,
[Source]=1, 2,
[Source]=2, 3) as PrioritySource
FROM
TestTable
WHERE
Source In (1,2,3)
) as t
INNER JOIN
(SELECT
Unit,
IYR,
MIN(SWITCH ( [Source]=3, 1,
[Source]=1, 2,
[Source]=2, 3)) as PrioritySource
FROM
TestTable
WHERE
Source In (1,2,3)
GROUP BY
Unit,
IYR ) as MinPriority
ON t.Unit = MinPriority.Unit and
t.IYR = MinPriority.IYR and
t.PrioritySource = MinPriority.PrioritySource
which will produce this result (note that I include Source and PrioritySource for demonstration purposes only):
UNIT | IYR | X1 | Source | PrioritySource
----------------------------------------------
A | 2009 | 55 | 1 | 2
A | 2010 | 150 | 3 | 1
A | 2011 | 90 | 1 | 2
Note that the first subquery is there to handle the fact that Access won't let you join on a Switch() expression directly.
Yes, FIRST() does use an arbitrary ordering. From the Access Help:
These functions return the value of a specified field in the first or
last record, respectively, of the result set returned by a query. If
the query does not include an ORDER BY clause, the values returned by
these functions will be arbitrary because records are usually returned
in no particular order.
I don't know whether FROM (...) AS X means you are using an inline ORDER BY (assuming that is actually possible) or a VIEW ('stored Query object') here, but either way I assume the ORDER BY is being disregarded (because an ORDER BY should only apply to the final result).
The alternative is to use MIN() (or possibly MAX()).
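A minimal sketch of that MIN()-based idea, written as a correlated subquery with the same priority mapping as the join-based answer above (untested; Access SQL has no comment syntax, so the explanation stays in prose): for each Unit/IYR group, keep only the row whose priority equals the group's minimum priority.
SELECT t.Unit, t.IYR, t.X1, t.Source
FROM TestTable AS t
WHERE t.Source In (1,2,3)
  AND Switch(t.Source=3,1, t.Source=1,2, t.Source=2,3) =
      (SELECT MIN(Switch(t2.Source=3,1, t2.Source=1,2, t2.Source=2,3))
       FROM TestTable AS t2
       WHERE t2.Unit = t.Unit AND t2.IYR = t.IYR AND t2.Source In (1,2,3));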
This is the most concise way I have found to write such queries in Access that require pulling back all columns that correspond to the first row in a group of records that are ordered in a particular way.
First, I added a UniqueID to your table. In this case, it's just an AutoNumber field. You may already have a unique value in your table, in which case you can use that.
This will choose the row with a Source 3 first, then Source 1, then Source 2. If there is a tie, it picks the one with the higher X1 value. If there is a further tie, it is broken by the UniqueID value:
SELECT t.* INTO [Chosen Rows]
FROM TestTable AS t
WHERE t.UniqueID=
(SELECT TOP 1 [UniqueID] FROM [TestTable]
WHERE t.IYR=IYR ORDER BY Choose([Source],2,3,1), X1 DESC, UniqueID)
This yields:
Unit IYR X1 Source UniqueID
A 2009 55 1 1
A 2010 150 3 4
A 2011 90 1 5
I recommend (1) you create an index on the IYR field -- this will dramatically increase your performance for this type of query, and (2) if you have a lot (>~100K) records, this isn't the best choice. I find it works quite well for tables in the 1-70K range. For larger datasets, I like to use my GroupIncrement function to partition each group (similar to SQL Server's ROW_NUMBER() OVER statement).
The Choose() function is a VBA function and may not be clear here. In your case, it sounds like there is some interactivity required. For that, you could create a second table called "Choices", like so:
Rank Choice
1 3
2 1
3 2
Then, you could substitute the following:
SELECT t.* INTO [Chosen Rows]
FROM TestTable AS t
WHERE t.UniqueID=(SELECT TOP 1 [UniqueID] FROM
[TestTable] t2 INNER JOIN [Choices] c
ON t2.Source=c.Choice
WHERE t.IYR=t2.IYR ORDER BY c.[Rank], t2.X1 DESC, t2.UniqueID);
Indexing Source on TestTable and Choice on the Choices table may be helpful here, too, depending on the number of choices required.
Q:
Can you get this to work without the need for a surrogate key? For example, what if the unique key is the composite of {Unit, IYR, X1, Source}?
A:
If you have a compound key, you can do it like this-- however I think that if you have a large dataset, it will totally kill the performance of the query. It may help to index all four columns, but I can't say for sure because I don't regularly use this method.
SELECT t.* INTO [Chosen Rows]
FROM TestTable AS t
WHERE t.Unit & t.IYR & t.X1 & t.Source =
(SELECT TOP 1 Unit & IYR & X1 & Source FROM [TestTable]
WHERE t.IYR=IYR ORDER BY Choose([Source],2,3,1), X1 DESC, Unit, IYR)
In certain cases, you may have to coalesce some of the individual parts of the key as follows (though Access generally will coalesce values automatically):
t.Unit & CStr(t.IYR) & CStr(t.X1) & CStr(t.Source)
You could also use a query in your FROM statements instead of the actual table. The query itself would build a composite of the four fields used in the key, and then you'd use the new key name in the WHERE clause of the top SELECT statement, and in the SELECT TOP 1 [key] of the subquery.
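A rough, untested sketch of that variant (RowKey is just an assumed alias for the composite key built in the derived table):
SELECT t.* INTO [Chosen Rows]
FROM (SELECT TestTable.*,
             Unit & CStr(IYR) & CStr(X1) & CStr(Source) AS RowKey
      FROM TestTable) AS t
WHERE t.RowKey =
      (SELECT TOP 1 t2.Unit & CStr(t2.IYR) & CStr(t2.X1) & CStr(t2.Source)
       FROM TestTable AS t2
       WHERE t2.IYR = t.IYR
       ORDER BY Choose(t2.Source,2,3,1), t2.X1 DESC, t2.Unit, t2.IYR);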
In general, though, I will either: (a) create a new table with an AutoNumber field, (b) add an AutoNumber field, (c) add an integer and populate it with a unique number using VBA - this is useful when you get a MaxLocks error when trying to add an AutoNumber, or (d) use an already indexed unique key.