I have a timeseries table in Postgres that collects events of various types, with their values and a timestamp. In stylized form it looks like this:
| evt_type | evt_val_01 | evt_val_02 | evt_time |
|----------|------------|------------|----------|
| 1        | 0.5        | 10         | 1        |
| 2        | 0.7        | 12         | 1        |
| 3        | 0.8        | 13         | 1        |
| 2        | 0.1        | 21         | 2        |
| 2        | 0.3        | 98         | 3        |
| 3        | 0.4        | 76         | 3        |
| 2        | 0.2        | 3          | 4        |
I'd now like to create a SELECT query that returns the latest values (by timestamp in evt_time) per event type (evt_type), i.e., the query should return:
| evt_type | evt_val_01 | evt_val_02 | evt_time |
|----------|------------|------------|----------|
| 1        | 0.5        | 10         | 1        |
| 2        | 0.2        | 3          | 4        |
| 3        | 0.4        | 76         | 3        |
In QuestDB, for instance, this is easily achieved with the LATEST ON clause. However, I am struggling to find an efficient Postgres approach that makes full use of an index on evt_time and evt_type. The trivial approach of using
select evt_type, max(evt_time) from events group by evt_type
in a sub-query is way too slow, as the real-life table has double-digit millions of rows and about a thousand event types, plus about a dozen value columns. I tried various forms of last_value() and LATERAL joins, but have not managed to get it right thus far.
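For illustration, this is roughly the LATERAL shape I have been experimenting with; it is only a sketch, and it assumes a composite index on (evt_type, evt_time):

-- one latest row per event type via a per-type LATERAL lookup;
-- assumes something like CREATE INDEX ON events (evt_type, evt_time DESC)
SELECT latest.*
FROM (SELECT DISTINCT evt_type FROM events) t
CROSS JOIN LATERAL (
    SELECT ev.*
    FROM events ev
    WHERE ev.evt_type = t.evt_type
    ORDER BY ev.evt_time DESC
    LIMIT 1
) latest;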
Any suggestions on how to structure the SELECT query would be greatly appreciated. PS: I'm on Postgres 15.1.
Related
I have a table company_totals that has the following schema:

| column_name | column_data_type |
|-------------|------------------|
| company | STRING |
| link | STRING |
| full_count | FLOAT |
| starts_with_count | FLOAT |
Number of rows = 12,000,000. Table size = 1.6 GB. CLUSTERED BY = company, link. SEARCH INDEX created on column = link.
I have the following SELECT statement, which has been running beyond 6 hours and results in a timeout ("Operation timed out after 6.0 hours. Consider reducing the amount of work performed by your operation so that it can complete within this limit."):
SELECT first_table.company, first_table.link, null as full_count, SUM(second_table.full_count) AS starts_with_count
FROM company_totals first_table, company_totals second_table
WHERE STARTS_WITH(second_table.link, first_table.link)
group by first_table.company, first_table.link
The above query calculates the values of the column starts_with_count, which is the sum of the values of another column, full_count, based on a STARTS_WITH() condition. In the company_totals table, starts_with_count is the column I want to fill. The other column values are already present in the table; I have added the expected values for starts_with_count manually below to show my expectation. The starts_with_count of a row is sum(full_count) over all rows whose link starts with that row's link (the sketch after the table restates this calculation as a query).
| company | link | full_count | starts_with_count (expected) |
|---------|------|------------|------------------------------|
| abc | http://www.abc.net1 | 1 | 15 (= sum(full_count) where link like 'http://www.abc.net1%') |
| abc | http://www.abc.net1/page1 | 2 | 9 (= sum(full_count) where link like 'http://www.abc.net1/page1%') |
| abc | http://www.abc.net1/page1/folder1 | 3 | 3 (= sum(full_count) where link like 'http://www.abc.net1/page1/folder1%') |
| abc | http://www.abc.net1/page1/folder2 | 4 | 4 |
| abc | http://www.abc.net1/page2 | 5 | 5 |
| xyz | http://www.xyz.net1/ | 6 | 21 |
| xyz | http://www.xyz.net1/page1/ | 7 | 15 |
| xyz | http://www.xyz.net1/page1/file1 | 8 | 8 |
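To restate the intent in SQL (just a sketch of the definition, using the same table and column names; not a query I expect to run fast), the value I want for each row is:

SELECT t.company,
       t.link,
       t.full_count,
       (SELECT SUM(s.full_count)
        FROM company_totals s
        WHERE STARTS_WITH(s.link, t.link)) AS starts_with_count
FROM company_totals t;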
Any help with this issue would be highly appreciated.
I'm using InfluxDB 1.8 and trying to make a little more complex query than Influx was made for.
I want to retrieve all data that refers to the last month stored, based on tag and field values that my script stores (not the default "time" field that Influx creates). Say we have this infos measurement:
| time | field_month | field_week | tag_month | tag_week | some_data |
|------|-------------|------------|-----------|----------|-----------|
| 1631668119209113500 | 8 | 1 | 8 | 1 | random |
| 1631668119209113500 | 8 | 2 | 8 | 2 | random |
| 1631668119209113500 | 8 | 3 | 8 | 3 | random |
| 1631668119209113500 | 9 | 1 | 9 | 1 | random |
| 1631668119209113500 | 9 | 1 | 9 | 1 | random |
| 1631668119209113500 | 9 | 2 | 9 | 2 | random |
Here 8 refers to August, 9 to September, and some_data is whatever was stored in a given week of that month.
I can use the MAX selector on field_month to get the last month of the year stored (I can't use the Flux date package because I'm on v1.8). Further, I want the data grouped by tag_month and tag_week so I can COUNT how many times some_data was stored in each week of the month; that's why the same data is repeated in field and tag keys. Something like this:
SELECT COUNT(field_month) FROM infos WHERE field_month = 9 GROUP BY tag_month, tag_week
Replacing 9 with the MAX selector:
SELECT COUNT(field_month) FROM infos WHERE field_month = (SELECT MAX(field_month) FROM infos) GROUP BY tag_month, tag_week
The first query works (see results here), but not the second.
Am I doing something wrong? Is there any other possibility to make this work in v1.8?
NOTE: I know Influx isn't meant to be used like this. I've tried and managed this easily with PostgreSQL, using an adapted form of the second query above. But until we get things set up to use Postgres, we have to use InfluxDB v1.8.
In PostgreSQL you can try:
SELECT COUNT(field_month) FROM infos WHERE field_month =
(SELECT field_month FROM infos ORDER BY field_month DESC limit 1)
GROUP BY tag_month, tag_week;
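For InfluxDB 1.8 itself, InfluxQL (as far as I know) only supports subqueries in the FROM clause, not in WHERE, which is why the second query fails. A common workaround is to run two queries and let the application substitute the value. A sketch, reusing the measurement and keys above; first:

SELECT MAX(field_month) FROM infos

Then substitute the value it returns (9 in the sample data) into the grouped count:

SELECT COUNT(field_month) FROM infos WHERE field_month = 9 GROUP BY tag_month, tag_week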
I have a powerpivot table that shows work_tickets and timestamps for each step taken towards resolution:
| Ticket | Step | Time | **TicketDuration** |
|--------|------|------|--------------------|
| 1 | 1 | 5:30 | 15 |
| 1 | 2 | 5:33 | 15 |
| 1 | 3 | 5:45 | 15 |
| 2 | 1 | 6:00 | 10 |
| 2 | 2 | 6:05 | 10 |
| 2 | 3 | 6:10 | 10 |
[TicketDuration] is a calculated column I added on my own. Now I'm trying to create a measure for [AverageTicketDuration] so that it returns 12.5 minutes for the table above ((15 + 10) / 2). I haven't got a clue how to use DAX to produce this result. Please help!
What you are looking for is the AVERAGEX function, which has the following definition: AVERAGEX(<table>, <expression>).
The idea is that it iterates through each row of the given table, applying your expression, and then averages the results.
In the below example, I use Table1 as the table name.
To start with, to iterate over tickets we would use VALUES(Table1[ticket]), which returns the unique values in the ticket column.
Then, assuming that your ticket duration is always the same within a ticket ID, the aggregation used in the expression would be AVERAGE(Table1[Ticket Duration]), since, for example, for ticket 1: (15 + 15 + 15) / 3 = 15.
Put together, the measure would look like this:
measure:=AVERAGEX( VALUES( Table1[ticket]), AVERAGE(Table1[Ticket Duration]))
The result, when dropped into a pivot using your sample data, is (15 + 10) / 2 = 12.5, the expected average.
Can you help me with an SQL query to get the desired result?
Database used: Redshift
The requirement: I have 3 columns: dish_id, category_id, counter.
I want the counter to increase by 1 each time a dish_id is repeated, and to stay at 1 if it is not.
The query should read the source table and return results like:
| dish_id | category_id | counter |
|---------|-------------|---------|
| 21 | 4 | 1 |
| 21 | 6 | 2 |
| 21 | 6 | 3 |
| 12 | 1 | 1 |
Unless I misunderstood your question, you can accomplish that using window functions:
SELECT *,row_number() OVER (PARTITION BY dish_id) FROM my_table;
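If the counter needs a deterministic order within each dish_id (the sample output suggests ordering by category_id, but that is an assumption), an ORDER BY can be added inside the window, along these lines:

-- sketch: deterministic numbering per dish_id; the ORDER BY column is an assumption
SELECT dish_id,
       category_id,
       ROW_NUMBER() OVER (PARTITION BY dish_id ORDER BY category_id) AS counter
FROM my_table;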
I want each set of 5 rows in the result to be unique, with the same selection pattern applying to the rest of the result (i.e., if the result contains 10 records, I expect 2 sets of 5 unique rows).
Example:
What I have:
1
1
5
3
4
5
2
4
2
3
Result I want to achieve:
1
2
3
4
5
1
2
3
4
5
I have tried and searched a lot but couldn't find anything close to what I want to achieve.
Assuming that you can somehow order the rows within the sets of 5:
SELECT t.Row % 5, t.Row FROM #T t
ORDER BY t.Row , t.Row % 5
We could probably get closer to the truth with more details about what your data looks like and what it is you're actually trying to do.
This will work with the sample of data you provided
SELECT DISTINCT(thevalue) FROM theresults
UNION ALL
SELECT DISTINCT(thevalue) FROM theresults
But it's unclear to me if it's really what you need.
For instance:
if your table/results return 12 rows, do you still want 2x5 rows, or do you want 2x6 rows?
do your table/results always contain the same rows duplicated?
There are a lot more questions to raise, and no hints about them in what you asked.
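If the goal really is the interleaved 1-5, 1-5 ordering shown above, here is a sketch of another approach, using the same theresults/thevalue names as the query above and assuming every value occurs the same number of times: number each duplicate of a value, then sort by that occurrence number first.

-- number the duplicates of each value, then interleave by occurrence
SELECT thevalue
FROM (
    SELECT thevalue,
           ROW_NUMBER() OVER (PARTITION BY thevalue ORDER BY thevalue) AS occurrence
    FROM theresults
) t
ORDER BY occurrence, thevalue;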