How to pull rows from a SQL table until quotas for multiple columns are met?

I've been able to find a few examples of questions similar to this one, but most only involve a single column being checked.
SQL Select until Quantity Met
Select rows until condition met
I have a large table representing facilities, with a column for each type of resource available and the number of those specific resources available per facility. I want a stored procedure that takes integer values as multiple parameters (one per resource column) plus a Lat/Lon. It should then iterate over the table sorted by distance and return rows (facilities) until the required quantities of available resources (specified by the parameters) are met.
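For orientation only, a parameter list for such a procedure might look roughly like the sketch below; the procedure name and data types are assumptions, not part of the original post.
-- Illustrative sketch only: one quota parameter per resource column, plus the search origin.
CREATE PROCEDURE dbo.FindFacilitiesForQuotas
    @latQuery  DECIMAL(12,6),
    @LongQuery DECIMAL(12,6),
    @res1Query INT = 0,
    @res2Query INT = 0
    -- ...one INT quota parameter per remaining resource column
AS
BEGIN
    SET NOCOUNT ON;
    -- body: order facilities by distance from @latQuery/@LongQuery and
    -- return rows until every requested quota is met (see the queries below)
END;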
Data source example:
| Id | Lat    | Long | Resource1 | Resource2 | ... |
|----|--------|------|-----------|-----------|-----|
| 1  | 50.123 | 4.23 | 5         | 12        | ... |
| 2  | 61.234 | 5.34 | 0         | 9         | ... |
| 3  | 50.634 | 4.67 | 21        | 18        | ... |
Result Wanted:
@latQuery = 50.634
@LongQuery = 4.67
@res1Query = 10
@res2Query = 20

| Id | Lat    | Long | Resource1 | Resource2 | ... |
|----|--------|------|-----------|-----------|-----|
| 3  | 50.634 | 4.67 | 21        | 18        | ... |
| 1  | 50.123 | 4.23 | 5         | 12        | ... |
The result includes rows until each of the quotas is met, and it is sorted by distance to the requested lat/lon.
I'm able to sort the results by distance and sum the running totals as suggested in other threads, but I'm having some trouble with the logic comparing the running totals with the quotas provided in the params.
First I have some CTEs to get most recent edits, order by distance and then sum the running totals
WITH cte1 AS (SELECT
    @origin.STDistance(geography::Point(Facility.Lat, Facility.Long, 4326)) AS distance,
    Facility.Resource1 AS res1,
    Facility.Resource2 AS res2
    -- ...etc
FROM Facility
),
cte2 AS (SELECT
    distance,
    res1,
    SUM(res1) OVER (ORDER BY distance) AS totRes1,
    res2,
    SUM(res2) OVER (ORDER BY distance) AS totRes2
    -- ...etc, there's 15-20 columns here
FROM cte1
)
Next, with the results of that CTE, I need to pull rows until all quotas are met. This is where I'm having issues: it works for a single column, but my logic with all the ANDs isn't exactly right.
SELECT * FROM cte2 WHERE (
    (totRes1 <= @res1Query OR (totRes1 > @res1Query AND totRes1 - res1 <= @totRes1)) AND
    (totRes2 <= @res2Query OR (totRes2 > @res2Query AND totRes2 - res2 <= @totRes2)) AND
    -- ... I also feel like this method of pulling the next row once it's over may be convoluted as well?
)
As-is right now, it's mostly returning nothing, and I'm guessing that's because it's too strict. Essentially, I want to keep taking rows even after some of the running totals pass their required values, until all of them have passed their required values, and then return that list.
Has anyone come across a better method of searching using separate quotas for multiple columns?
See my update in the answers/comments

I think you are massively over-complicating this. This does not need any joins, just some running sum calculations, and the right OR logic.
The key to solving this is that you need every row for which the running sum up to the previous row is still below the requirement for at least one of the requirements. This means that you include all rows while any requirement has not yet been met, which also pulls in, for each requirement, the first row at which it is met or exceeded.
To do this you can subtract the current row's value from the running sum.
You could utilize a ROWS specification of ROWS BETWEEN UNBOUNDED PRECEDING AND 1 PRECEDING. But then you need to deal with NULL on the first row.
In any event, even a regular running sum should always use ROWS UNBOUNDED PRECEDING, because the default is RANGE UNBOUNDED PRECEDING, which is subtly different and can cause incorrect results, as well as being slower.
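For illustration, here is a minimal self-contained sketch of that 1 PRECEDING variant (the three rows are dummy data, not from the question), showing the previous-row running total with the first-row NULL handled:
-- Sketch: running total of res1 over all *previous* (closer) rows.
-- The first row has no preceding rows, so SUM returns NULL; ISNULL maps that to 0.
SELECT distance,
       res1,
       ISNULL(SUM(res1) OVER (ORDER BY distance
                              ROWS BETWEEN UNBOUNDED PRECEDING AND 1 PRECEDING), 0) AS prevTotRes1
FROM (VALUES (1.0, 5), (2.0, 12), (3.0, 21)) AS t(distance, res1);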
You can also factor out the distance calculation into a CROSS APPLY (VALUES ...), avoiding the need for lots of CTEs or derived tables. You now only need one level of derivation.
DECLARE @origin geography = geography::Point(@latQuery, @LongQuery, 4326);

SELECT
    f.Id,
    f.Lat,
    f.Long,
    f.Resource1,
    f.Resource2
FROM (
    SELECT f.*,
        SumRes1 = SUM(f.Resource1) OVER (ORDER BY v1.Distance ROWS UNBOUNDED PRECEDING) - f.Resource1,
        SumRes2 = SUM(f.Resource2) OVER (ORDER BY v1.Distance ROWS UNBOUNDED PRECEDING) - f.Resource2
    FROM Facility f
    CROSS APPLY (VALUES(
        @origin.STDistance(geography::Point(f.Lat, f.Long, 4326))
    )) v1(Distance)
) f
WHERE (
    f.SumRes1 < @res1Query
    OR f.SumRes2 < @res2Query
);
db<>fiddle

Was able to figure out the problem on my own here. The primary issue I was running into was that I was comparing 25 different columns' running totals versus the 25 stored proc parameters (quotas of resources required by the search).
Changing lines such as this
(totRes1 <= @res1Query OR (totRes1 > @res1Query AND totRes1 - res1 <= @totRes1)) AND --...
to
(totRes1 <= @res1Query OR (totRes1 > @res1Query AND totRes1 - res1 <= @totRes1) OR @res1Query = 0) AND --...
(adding in the OR @res1Query = 0) solved my issue.
In other words, the search is often only for one or two columns (types of resources), leaving the others as zero. The way my logic was set up caused it to skip over lots of rows because it instantly marked them as having met the quota (value less than or equal to the quota). Like @A Neon Tetra suggested, I was pretty close to it already.
Update:
First attempt didn't exactly fix my own issue. Posting the stripped down version of my code that is now working for me.
DECLARE @Lat AS DECIMAL(12,6)
DECLARE @Lon AS DECIMAL(12,6)
DECLARE @res1Query AS INT
DECLARE @res2Query AS INT
-- repeat for Resource 3 through 25, etc...
DECLARE @origin geography = geography::Point(@Lat, @Lon, 4326);

-- first CTE, to be able to expose distance
;WITH cte AS (
    SELECT TOP(99999) -- --> this is hacky, it won't let me order by distance unless I'm selecting TOP(x) or some other fn?
        dbo.Facility.FacilityID,
        dbo.Facility.Lat,
        dbo.Facility.Lon,
        @origin.STDistance(geography::Point(dbo.Facility.Lat, dbo.Facility.Lon, 4326)) AS distance,
        dbo.Facility.Resource1 AS res1,
        dbo.Facility.Resource2 AS res2
        -- repeat for Resource 3 through 25, etc...
    FROM dbo.Facility
    ORDER BY distance),
-- second CTE - has access to distance so we can keep track of a running total ordered by distance
-- --> have to separate into two since you can't reference the same alias (distance) again within the same SELECT
fullCTE AS (
    SELECT
        FacilityID,
        Lat,
        Lon,
        distance,
        res1,
        SUM(res1) OVER (ORDER BY distance) AS totRes1,
        res2,
        SUM(res2) OVER (ORDER BY distance) AS totRes2
        -- repeat for Resource 3 through 25, etc...
    FROM cte)
SELECT * -- Customize what you're pulling here for your output as needed
FROM dbo.Facility
INNER JOIN fullCTE ON (fullCTE.FacilityID = dbo.Facility.FacilityID)
WHERE EXISTS
    (SELECT
        FacilityID
     FROM fullCTE WHERE (
        FacilityID = dbo.Facility.FacilityID AND
        -- Keep pulling rows until all conditions are met, as opposed to pulling rows while they're under the quota
        NOT (
            ((totRes1 - res1 >= @res1Query AND @res1Query <> 0) OR (@res1Query = 0)) AND
            ((totRes2 - res2 >= @res2Query AND @res2Query <> 0) OR (@res2Query = 0))
            -- repeat for Resource 3 through 25, etc... (each joined with AND)
        )
    )
)
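For completeness, if the script above were wrapped into a stored procedure, a call for the example data in the question might look like the sketch below; the procedure name is hypothetical, and any resource quotas not being searched on would stay at 0.
-- Hypothetical call; dbo.FindFacilitiesForQuotas is an assumed name, not from the original post.
EXEC dbo.FindFacilitiesForQuotas
     @latQuery = 50.634, @LongQuery = 4.67,
     @res1Query = 10, @res2Query = 20;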

Related

A trigger to create/populate a table based on another table

Ok, since it seems that my last two questions (this one and this one) only led to confusion, I will try to explain the FULL problem here, so it might be a long post.
I'm trying to create a database for a trading system. The database has 2 main tables. One is the "Ticks" table and the other is "Candles". As shown in the figure, each table has its own attributes.
Candles, bars or OHLC are the same thing.
A candle in a chart is just a way to represent aggregated data, nothing more.
There are many ways to aggregate ticks in order to create one candle. In this post, I'm asking for a particular way that is creating one candle every 500 ticks. So, if the ticks table has 1000 ticks, I can create only 2 candles. If it has 500 ticks, I can create 1 candle. If it has 5000 ticks, I can create 10 candles. If there are 5001 ticks I still have only 10 candles, because I'm missing the other 499 ticks in order to create the 11th candle.
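A minimal sketch of that counting rule (the tick table name is taken from the question; everything else here is illustrative only):
-- Sketch: number the ticks by time and group them into buckets of 500;
-- HAVING count(*) = 500 drops the trailing, incomplete bucket.
SELECT floor((seqnum - 1) / 500) AS bucket,
       count(*)                  AS ticks_in_bucket
FROM (
    SELECT row_number() OVER (ORDER BY date) AS seqnum
    FROM eurusd_tick2
) t
GROUP BY floor((seqnum - 1) / 500)
HAVING count(*) = 500;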
Actually, I'm storing all the ticks using a python script and creating (and therefore, inserting in the candles table) candles with another python script. This is a real time process.
Both scripts run in a while True: loop. No, I can't (read: shouldn't) stop the scripts, because the market is open 24 hours a day, 5 days a week.
What I'm trying to do is to get rid of the python script that creates and stores the candles in the candles table. Why? Because I think it will improve performance. Instead of doing multiple queries to find out how many ticks are available to create a new candle, I think a trigger could handle it in a more efficient way (please correct me if I'm mistaken).
I don't know how to actually solve it, but what I'm trying is this (thanks to @GordonLinoff for helping me in previous questions):
do $$
begin
with total_ticks as (
select count(*) c from (
select * from eurusd_tick2 eurusd where date >
(SELECT date from eurusd_ohlc order by date desc limit 1)
order by date asc) totals),
ticks_for_candles as(
select * from eurusd_tick2 eurusd where date >
(SELECT date from eurusd_ohlc order by date desc limit 1)
order by date asc
), candles as(
select max(date) as date,
max(bid) filter (where mod(seqnum, 500) = 1) as open,
max(bid) as high,
min(bid) as low,
max(bid) filter (where mod(seqnum, 500) = 500-1) as close,
max(ask) filter (where mod(seqnum, 500) = 500-1) as ask
from (
select t.*, row_number() over (order by date) as seqnum
from (select * from ticks_for_candles) t) as a
group by floor((seqnum - 1) /500)
having count(*) = 500
)
case 500<(select * from total_ticks)
when true then
return select * from candles
end;
end $$;
Using this, I get this error:
ERROR: syntax error at or near "case"
LINE 33: case 500<(select * from total_ticks)
^
SQL state: 42601
Character: 945
As you can see, there is no select after the CTEs. If I put:
select case 500<(select * from total_ticks)
when true then
return select * from candles
end;
end $$;
I get this error:
ERROR: subquery must return only one column
LINE 31: (select * from candles)
^
QUERY: with total_ticks as (
select count(*) c from (
select * from eurusd_tick2 eurusd where date >
(SELECT date from eurusd_ohlc order by date desc limit 1)
order by date asc) totals),
ticks_for_candles as(
select * from eurusd_tick2 eurusd where date >
(SELECT date from eurusd_ohlc order by date desc limit 1)
order by date asc
), candles as(
select max(date) as date,
max(bid) filter (where mod(seqnum, 500) = 1) as open,
max(bid) as high,
min(bid) as low,
max(bid) filter (where mod(seqnum, 500) = 500-1) as close,
max(ask) filter (where mod(seqnum, 500) = 500-1) as ask
from (
select t.*, row_number() over (order by date) as seqnum
from (select * from ticks_for_candles) t) as a
group by floor((seqnum - 1) /500)
having count(*) = 500
)
select case 1000>(select * from total_ticks)
when true then
(select * from candles)
end
CONTEXT: PL/pgSQL function inline_code_block line 4 at SQL statement
SQL state: 42601
So honestly, I don't know how to do it correctly. It doesn't have to be done with the actual code I provide here, but the desired output looks as follows:
| date                       | open   | high    | low     | close   | ask     |
|----------------------------|--------|---------|---------|---------|---------|
| 2020-05-01 20:39:27.603452 | 1.0976 | 1.09766 | 1.09732 | 1.09762 | 1.09776 |
This would be the output when there are only enough ticks to create 1 candle. If there are enough to create two of them, then there should be 2 rows.
So, at the end of the day, what I have in mind is that the trigger should constantly check whether there is enough data to create a candle and, if there is, create it.
Is this a good idea or should I stick to the python script?
Can this be achieved with my approach?
What am I doing wrong?
What should I do and how should I manage this situation?
I really hope that the question is now complete and there is no missing information.
All comments and advice are appreciated.
Thanks!
EDIT: Since this is a real time process, in one second there could be 499 ticks in the database and in the next second there could be 503 ticks. This means that 4 ticks arrived in 1 second.
Being a database guy, my approach would be to use triggers in the database.
Create a third table candle_in_the_making that contains the data from the ticks that have not yet been aggregated to a candles entry.
Create an INSERT trigger on the ticks table (doesn't matter if BEFORE or AFTER) that does the following:
For every tick inserted, add a row to candle_in_the_making.
If the row count reaches 500, compute and insert a new candles row and TRUNCATE candle_in_the_making. A sketch of such a trigger follows below.
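A hedged sketch of what that trigger could look like (table and column names are taken from the question where possible, everything else is an assumption, and the OHLC expressions are simplified; concurrency is addressed below):
-- Sketch only: accumulate ticks, and once 500 have been collected,
-- aggregate them into one candle and start over.
CREATE TABLE IF NOT EXISTS candle_in_the_making (LIKE eurusd_tick2 INCLUDING ALL);

CREATE OR REPLACE FUNCTION make_candle_if_ready() RETURNS trigger
LANGUAGE plpgsql AS
$$
BEGIN
    INSERT INTO candle_in_the_making VALUES (NEW.*);

    IF (SELECT count(*) FROM candle_in_the_making) >= 500 THEN
        INSERT INTO eurusd_ohlc (date, open, high, low, close, ask)
        SELECT max(date)                               AS date,
               (array_agg(bid ORDER BY date))[1]       AS open,
               max(bid)                                AS high,
               min(bid)                                AS low,
               (array_agg(bid ORDER BY date DESC))[1]  AS close,
               (array_agg(ask ORDER BY date DESC))[1]  AS ask
        FROM candle_in_the_making;

        TRUNCATE candle_in_the_making;
    END IF;

    RETURN NEW;
END;
$$;

CREATE TRIGGER ticks_to_candles
AFTER INSERT ON eurusd_tick2
FOR EACH ROW EXECUTE FUNCTION make_candle_if_ready();  -- use EXECUTE PROCEDURE before PostgreSQL 11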
This is simple if ticks are inserted only in a single thread.
If ticks are inserted concurrently, you have to find a way to prevent two threads from inserting the 500th tick in candle_in_the_making at the same time (which would leave you with 501 entries). I can think of two ways to do that in the database:
Have an extra table c_i_m_count that contains only a single number, which is the number of rows in candle_in_the_making. Before you insert into candle_in_the_making, you run the atomic
UPDATE c_i_m_count SET counter = counter + 1 RETURNING counter;
This locks the row, so that any two INSERTs into candle_in_the_making are effectively serialized.
Use advisory locks to serialize the inserting threads. In particular, a transaction level exclusive lock as taken by pg_advisory_xact_lock would be indicated.
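For that advisory-lock variant, the serialization could be as little as one extra line at the top of the trigger function body; the lock key below is arbitrary, as long as all inserting sessions agree on it.
-- Sketch: taken at the start of the trigger body, this serializes concurrent
-- inserting transactions until each one commits or rolls back.
PERFORM pg_advisory_xact_lock(500);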

Query Optimization Problems (spatial)

I have two datasets with spatial data.
Dataset 1 has approximately 15,000,000 records.
Dataset 2 has approximately 16,000,000 records.
Both are using the data type geography (GPS coordinates) and all records are points.
Both tables have spatial indexes with cells_per_object = 1 and the levels are (HIGH, HIGH, HIGH, HIGH)
All points are located in a, globally speaking, small area (1 U.S. state). The points are spread out enough to warrant using geography rather than a projection to geometry.
DECLARE @g GEOGRAPHY
SET @g = (SELECT TOP 1 GPSPoint FROM Dataset1)
EXEC sp_help_spatial_geography_index 'Dataset1', 'Dataset1_SpatialIndex', 0, @g
Shows
| propvalue | propname                                                            |
|-----------|---------------------------------------------------------------------|
| 1         | Total_Number_Of_ObjectCells_In_Level0_For_QuerySample               |
| 28178     | Total_Number_Of_ObjectCells_In_Level1_In_Index                      |
| 1         | Total_Number_Of_ObjectCells_In_Level4_For_QuerySample               |
| 14923330  | Total_Number_Of_ObjectCells_In_Level4_In_Index                      |
| 1         | Total_Number_Of_Intersecting_ObjectCells_In_Level1_In_Index         |
| 1         | Total_Number_Of_Intersecting_ObjectCells_In_Level4_For_QuerySample  |
| 14923330  | Total_Number_Of_Intersecting_ObjectCells_In_Level4_In_Index         |
| 1         | Total_Number_Of_Border_ObjectCells_In_Level0_For_QuerySample        |
| 28177     | Total_Number_Of_Border_ObjectCells_In_Level1_In_Index               |
| 740       | Number_Of_Rows_Selected_By_Primary_Filter                           |
| 0         | Number_Of_Rows_Selected_By_Internal_Filter                          |
| 740       | Number_Of_Times_Secondary_Filter_Is_Called                          |
| 1         | Number_Of_Rows_Output                                               |
| 99.99504  | Percentage_Of_Rows_NotSelected_By_Primary_Filter                    |
| 0         | Percentage_Of_Primary_Filter_Rows_Selected_By_Internal_Filter       |
| 0         | Internal_Filter_Efficiency                                          |
| 0.135135  | Primary_Filter_Efficiency                                           |
Which means that the query
DECLARE @g GEOGRAPHY
SET @g = (SELECT TOP 1 GPSPoint FROM Dataset1)

SELECT TOP 1 *
FROM Dataset2 D
WHERE @g.Filter(D.GPSPoint.STBuffer(1)) = 1
Takes almost an hour to complete.
I have also tried doing
WITH TABLE1 AS (
SELECT
A.RecordID,
B.RecordID,
RANK() OVER (PARTITION BY A.RecordID ORDER BY A.GPSPoint.STDistance(B.GPSPoint) ASC) AS 'Ranking'
FROM
Dataset1 A
INNER JOIN
Dataset2 B
ON
B.GPSPoint.Filter(A.GPSPoint.STBuffer(1)) = 1
AND A.GPSPoint.STDistance(B.GPSPoint) <= 50
)
SELECT
*
FROM
TABLE1
WHERE
Ranking = 1
Which ends up being about 1,000 times faster, but at that rate what I am trying to do will take a query running for six months to complete. I honestly do not know what to do at this point. The end goal is to do a nearest neighbor search for every record in dataset1 to find the closest point in dataset2, but like this it seems impossible.
Does anyone have any ideas where I could improve the efficiency of this process?
Try this: It is based on recommendations on MSDN.
SELECT TOP(1)
A.RecordID,
B.RecordID,
A.GPSPoint.STDistance(B.GPSPoint) AS Distance
FROM
Dataset1 A
INNER JOIN
Dataset2 B
ON
A.GPSPoint.STDistance(B.GPSPoint) <= 50
AND B.GPSPoint IS NOT NULL
ORDER BY A.GPSPoint.STDistance(B.GPSPoint) ASC
Note: I have removed the predicates below. Try the query above first, then add these predicates and see how it affects the indexing.
B.GPSPoint.Filter(A.GPSPoint.STBuffer(1)) = 1
AND
-- or try B.GPSPoint.STIntersects(A.GPSPoint.STBuffer(1)) = 1
The following requirements must be met for a Nearest Neighbor query to use a spatial index:
A spatial index must be present on one of the spatial columns and the STDistance() method must use that column in the WHERE and ORDER BY clauses.
The TOP clause cannot contain a PERCENT statement.
The WHERE clause must contain a STDistance() method
If there are multiple predicates in the WHERE clause then the predicate containing STDistance() method must be connected by an AND conjunction to the other predicates. The STDistance() method cannot be in an optional part of the WHERE clause.
The first expression in the ORDER BY clause must use the STDistance() method.
Sort order for the first STDistance() expression in the ORDER BY clause must be ASC.
All the rows for which STDistance returns NULL must be filtered out.
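Since the stated end goal is the closest Dataset2 point for every Dataset1 record, a per-row variant of the same index-friendly pattern could be sketched with APPLY. Column names follow the question; treat this as a starting point to test against your own indexes, not a guaranteed plan.
-- Sketch: nearest neighbour per Dataset1 row, using the TOP(1) ... ORDER BY STDistance() ASC
-- pattern from the requirements above inside an OUTER APPLY.
SELECT A.RecordID  AS Dataset1Id,
       nn.RecordID AS NearestDataset2Id,
       nn.Distance
FROM Dataset1 A
OUTER APPLY (
    SELECT TOP (1)
           B.RecordID,
           A.GPSPoint.STDistance(B.GPSPoint) AS Distance
    FROM Dataset2 B
    WHERE B.GPSPoint IS NOT NULL
      AND A.GPSPoint.STDistance(B.GPSPoint) <= 50   -- optional cap, as in the answer above
    ORDER BY A.GPSPoint.STDistance(B.GPSPoint) ASC
) nn;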

Counting items with multiple criteria

I have a table (getECRs) in PowerPivot.
Right now, I've been able to create a calculated column that counts how many times the row's customer ID (BAN) occurs in the BAN column with the following formula:
=CALCULATE(COUNTROWS(getECRs),ALLEXCEPT(getECRs,getECRs[BAN]))
What I'm having difficulty with is adding multiple criteria to the CALCULATE formula in PowerPivot.
Each row has a column that gives the date the request was generated _CreateDateKey. I'm trying to include criteria that would only include multiple BANs if they fall within 7 days (before or after) the _CreateDateKey for the row.
For example for one BAN, there are the following dates and their expected counts:
| _CreateDateKey | Count | Explanation                            |
|----------------|-------|----------------------------------------|
| 6/13/2014      | 3     | Does not include 6/23                  |
| 6/13/2014      | 3     | Does not include 6/23                  |
| 6/16/2014      | 4     | Includes all                           |
| 6/23/2014      | 2     | Does not include the 2 items from 6/13 |
In Excel I would use a COUNTIFS statement, like below to get the desired result (using table structure naming)
=COUNTIFS([BAN],[@BAN],[_CreateDateKey],">="&[@[_CreateDateKey]]-7,[_CreateDateKey],"<="&[@[_CreateDateKey]]+7)
But I can't seem to figure out the relative criteria needed for the dates. I tried the following as a criteria to the CALCULATE function, but it resulted in an error:
getECRs[_CreateDateKey]>=[_CreateDateKey]-7
Error: Column '_CreateDateKey' cannot be found or may not be used in this expression.
This formula answers your specific question. It's a good pattern to get down as it's highly re-usable - the EARLIER() is referencing the value of the current row (slightly more complex than this but that is the end result):
=
CALCULATE (
COUNTROWS ( getECRs ),
FILTER (
getECRs,
getECRs[BAN] = EARLIER ( getECRs[BAN] )
&& getECRs[_CreateDateKey]
>= EARLIER ( getECRs[_CreateDateKey] ) - 7
&& getECRs[_CreateDateKey]
<= EARLIER ( getECRs[_CreateDateKey] ) + 7
)
)
Fundamentally you should probably be looking to get away from the 'Excel mindset' of using a calculated column and deal with this using a measure.
An adaptation of the above would look like this - it would use the filter context of the PIVOT in which you were using it (e.g. if BAN was rows then you would get the count for that BAN).
You may need to adjust the ALL() if it is too 'open' for your real world context, and you might have to deal with totals using HASONEVALUE():
=
CALCULATE (
COUNTROWS ( getECRs ),
FILTER (
ALL(getECRs),
getECRs[_CreateDateKey] >= MAX ( getECRs[_CreateDateKey] ) - 7 &&
getECRs[_CreateDateKey] <= MAX ( getECRs[_CreateDateKey] ) + 7
)
)

SQL Server : how to select a fixed amount of rows (select every x-th value)

A short description: I have a table with data that is updated over a certain time period. Now the problem is that, depending on the nature of the sensor which sends the data, in this time period there could be either 50 data sets or 50,000. As I want to visualize this data (using ASP.NET / C#), for a first preview I would like to SELECT just 1000 values from the table.
I already have an approach doing this: I count the rows in the time period of interest, with a simple "where" clause to specify the sensor-id, save it as a variable in SQL, and then divide the count() by 1000. I've tried it in MS Access, where it works just fine:
set @divider = select count(*) from table where [...]

SELECT (Int([RowNumber]/@divider)), First(Value)
FROM myTable
GROUP BY (Int([RowNumber]/@divider));
The trick in Access was that I simply have a data field ("RowNumber"), which is my PK/ID and goes from 0 up. I tried to accomplish that in SQL Server using the ROW_NUMBER() method, which works more or less. I've got the right syntax for the method, but I cannot use it in the GROUP BY statement:
Windowed functions can only appear in the SELECT or ORDER BY clauses.
meaning ROW_NUMBER() can't be in the GROUP BY statement.
Now I'm kinda stuck. I've tried to save the ROW_NUMBER value into a char or a separate column, and GROUP BY it later on, but I couldn't get it done. And somehow I start to think that my strategy might have its weaknesses ...? :/
To clarify once more: I don't need to SELECT TOP 1000 from my table, because this would just mean that I select the first 1000 values (depending on the sorting). I need to SELECT every x-th value, while I can compute the x (and I could even round it to an INT, if that would help to get it done). I hope I was able to describe the problem understandably ...
This is my first post here on StackOverflow, I hope I didn't forget anything essential or important, if you need any further information (table structure, my queries so far, ...) please don't hesitate to ask. Any help or hint is highly appreciated - thanks in advance! :)
Update: SOLUTION! Big thanks to https://stackoverflow.com/users/52598/lieven!!!
Here is how I did it in the end:
I declare 2 variables - I count my rows and SET it into the first var. Then I use ROUND() on the just assigned variable, and divide it by 1000 (because in the end I want ABOUT 1000 values!). I split this operation into 2 variables, because if I used the value from the COUNT function as basis for my ROUND operation, there were some mistakes.
declare @myvar decimal(10,2)
declare @myvar2 decimal(10,2)

set @myvar = (select COUNT(*)
    from value_table
    where channelid=135 and myDate >= '2011-01-14 22:00:00.000' and myDate <= '2011-02-14 22:00:00.000'
)
set @myvar2 = ROUND(@myvar/1000, 0)
Now I have the rounded value, which I want to be my step size (take every x-th value -> this is our "x" ;)), stored in @myvar2. Next I will subselect the data of the desired timespan and channel, and add ROW_NUMBER() as column "rn", and finally add a WHERE clause to the outer SELECT, where I divide the ROW_NUMBER by @myvar2 - when the modulus is 0, the row will be SELECTed.
select * from
(
select (ROW_NUMBER() over (order by id desc)) as rn, myValue, myDate
from value_table
where channel_id=135 and myDate >= '2011-01-14 22:00:00.000' and myDate<= '2011-02-14 22:00:00.000'
) d
WHERE rn % @myvar2 = 0
Works like a charm - once again all my thanks to https://stackoverflow.com/users/52598/lieven, see the comment below for the original posting!
In essence, all you need to do to select the x-th value is retain all rows where the modulus of the rownumber divided by x is 0.
WHERE rn % @x_thValues = 0
Now to be able to use your ROW_NUMBER's result, you'll need to wrap the entire statement in a subselect
SELECT *
FROM (
SELECT *
, rn = ROW_NUMBER() OVER (ORDER BY Value)
FROM DummyData
) d
WHERE rn % @x_thValues = 0
Combined with a variable for which x-th values you need, you might use something like this test script
DECLARE @x_thValues INTEGER = 2
;WITH DummyData AS (SELECT * FROM (VALUES (1), (2), (3), (4)) v (Value))
SELECT *
FROM (
SELECT *
, rn = ROW_NUMBER() OVER (ORDER BY Value)
FROM DummyData
) d
WHERE rn % @x_thValues = 0
One more option to consider:
Select Top 1000 *
From dbo.SomeTable
Where ....
Order By NewID()
but to be honest, I like the previous answer more than this one.
The question could be about performance.

Distribution of table in time

I have a MySQL table with approximately 3000 rows per user. One of the columns is a datetime field, which is mutable, so the rows aren't in chronological order.
I'd like to visualize the time distribution in a chart, so I need a number of individual datapoints. 20 datapoints would be enough.
I could do this:
select timefield from entries where uid = ? order by timefield;
and look at every 150th row.
Or I could do 20 separate queries and use limit 1 and offset.
But there must be a more efficient solution...
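For reference, one of those per-datapoint queries would look like the sketch below (the OFFSET would be varied across the 20 queries):
-- Sketch: fetch a single datapoint at a given offset; repeating this with
-- OFFSET 0, 150, 300, ... gives the evenly spaced sample described above.
SELECT timefield
FROM entries
WHERE uid = ?
ORDER BY timefield
LIMIT 1 OFFSET 150;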
Michal Sznajder almost had it, but you can't use column aliases in a WHERE clause in SQL. So you have to wrap it as a derived table. I tried this and it returns 20 rows:
SELECT * FROM (
SELECT @rownum:=@rownum+1 AS rownum, e.*
FROM (SELECT @rownum := 0) r, entries e) AS e2
WHERE uid = ? AND rownum % 150 = 0;
Something like this came to my mind
select @rownum:=@rownum+1 rownum, entries.*
from (select @rownum:=0) r, entries
where uid = ? and rownum % 150 = 0
I don't have MySQL at hand, but maybe this will help ...
As far as visualization, I know this is not the periodic sampling you are talking about, but I would look at all the rows for a user and choose an interval bucket, SUM within the buckets and show on a bar graph or similar. This would show a real "distribution", since many occurrences within a time frame may be significant.
SELECT DATEADD(day, DATEDIFF(day, 0, timefield), 0) AS bucket -- choose an appropriate granularity (days used here)
,COUNT(*)
FROM entries
WHERE uid = ?
GROUP BY DATEADD(day, DATEDIFF(day, 0, timefield), 0)
ORDER BY DATEADD(day, DATEDIFF(day, 0, timefield), 0)
Or if you don't like the way you have to repeat yourself - or if you are playing with different buckets and want to analyze across many users in 3-D (measure in Z against x, y uid, bucket):
SELECT uid
,bucket
,COUNT(*) AS measure
FROM (
SELECT uid
,DATEADD(day, DATEDIFF(day, 0, timefield), 0) AS bucket
FROM entries
) AS buckets
GROUP BY uid
,bucket
ORDER BY uid
,bucket
If I wanted to plot in 3-D, I would probably determine a way to order users according to some meaningful overall metric for the user.
@Michal
For whatever reason, your example only works when the WHERE on @rownum uses a less-than operator. I think when the WHERE filters out a row, the rownum doesn't get incremented, and it can't match anything else.
If the original table has an auto incremented id column, and rows were inserted in chronological order, then this should work:
select timefield from entries
where uid = ? and id % 150 = 0 order by timefield;
Of course that doesn't work if there is no correlation between the id and the timefield, unless you don't actually care about getting evenly spaced timefields, just 20 random ones.
Do you really care about the individual data points? Or will using the statistical aggregate functions on the day number instead suffice to tell you what you wish to know? (A sketch follows the list below.)
AVG
STDDEV_POP
VARIANCE
TO_DAYS
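For instance, a sketch of that aggregate approach (assuming the entries table and timefield column from the question):
-- Sketch: summarize the time distribution per user instead of sampling individual rows.
SELECT uid,
       AVG(TO_DAYS(timefield))        AS avg_day,
       STDDEV_POP(TO_DAYS(timefield)) AS stddev_day,
       VARIANCE(TO_DAYS(timefield))   AS var_day
FROM entries
WHERE uid = ?
GROUP BY uid;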
select timefield
from entries
where rand() <= .01 -- will return about 1% of rows; adjust as needed.
Not a mysql expert so I'm not sure how rand() operates in this environment.
For my reference - and for those using postgres - Postgres 9.4 will have ordered set aggregates that should solve this problem:
SELECT percentile_disc(0.95)
WITHIN GROUP (ORDER BY response_time)
FROM pageviews;
Source: http://www.craigkerstiens.com/2014/02/02/Examining-PostgreSQL-9.4/