I would like to know if there are tricks for optimizing the following HiveQL (or SQL, out of curiosity).
For example, if I have a table:
x | y | e | time
2 | 5 | 1 | 11:30:00
2 | 1 | 1 | 12:15:00
8 | 0 | 1 | 16:00:00
10 | 6 | 2 | 16:06:00
1 | 2 | 2 | 17:00:00
and I want to get multiple aggregates:
select
e,
time,
sum(x) over w as cumu_x,
sum(y) over w as cumu_y,
count(x) over w as num_x
from my_table
window w as
(partition by e
order by time
rows between unbounded preceding and current row)
should give me the desired result:
e | time | cumu_x | cumu_y | num_x
1 | 11:30:00 | 2 | 5 | 1
1 | 12:15:00 | 4 | 6 | 2
1 | 16:00:00 | 12 | 6 | 3
2 | 16:06:00 | 10 | 6 | 1
2 | 17:00:00 | 11 | 8 | 2
The question: how can this be optimized? Such Hive queries are extremely slow when millions of rows are involved.
If I were looping over the data myself, I would:
Calculate all aggregates in the same loop. Does this happen if I use the window alias?
Sort the data once and keep running totals. This is because I know that at each iteration, the result will just be an increment of the prior result. Does Hive do this? Is there a way to give hints so that it will?
Process different bins of "e" in parallel. Does Hive do this? I only see a single reducer when I run. Is there a way to help Hive parallelize? (See the sketch of settings below.)
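For context, here is a rough sketch (not from the original query) of the kind of settings one might experiment with on a MapReduce-backed Hive. The property names are standard Hive/Hadoop settings, but whether any of them helps this particular query depends on the Hive version, engine, and data:
set hive.vectorized.execution.enabled = true;          -- vectorized execution (typically requires ORC input)
set hive.vectorized.execution.reduce.enabled = true;   -- also vectorize the reduce side
set hive.exec.reducers.bytes.per.reducer = 67108864;   -- smaller value => Hive plans more reducers
set mapreduce.job.reduces = 32;                        -- or force an explicit reducer count

-- the windowed query itself stays as above; all three aggregates reuse window w,
-- so they should be evaluated in a single pass over the partitioned, sorted stream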
I have a table with user_ids and all of their log_in dates to the app.
Table:
|----------|--------------|
| User_Id | log_in_dates |
|----------|--------------|
| 1 | 2021-09-01 |
| 1 | 2021-09-03 |
| 2 | 2021-09-02 |
| 2 | 2021-09-04 |
| 3 | 2021-09-01 |
| 3 | 2021-09-02 |
| 3 | 2021-09-03 |
| 3 | 2021-09-04 |
| 4 | 2021-09-03 |
| 4 | 2021-09-04 |
| 5 | 2021-09-01 |
| 6 | 2021-09-01 |
| 6 | 2021-09-09 |
|----------|--------------|
From the above table, I'm trying to understand the users' log-in behavior from the present day back over the past 90 days.
Num_users_no_log_in is the count of users who haven't logged in to the app from present_date back to the earlier day in question (last_log_in_date).
I want the table like below:
|---------------|------------------|--------------------|-------------------------|
| present_date | days_difference | last_log_in_date | Num_users_no_log_in |
|---------------|------------------|--------------------|-------------------------|
| 2021-09-01 | 0 | 2021-09-01 | 0 |
| 2021-09-02 | 1 | 2021-09-01 | 3 |->(Id = 1,5,6)
| 2021-09-02 | 0 | 2021-09-02 | 3 |->(Id = 1,5,6)
| 2021-09-03 | 2 | 2021-09-01 | 2 |->(Id = 5,6)
| 2021-09-03 | 1 | 2021-09-02 | 1 |->(Id = 2)
| 2021-09-03 | 0 | 2021-09-03 | 3 |->(Id = 2,5,6)
| 2021-09-04 | 3 | 2021-09-01 | 2 |->(Id = 5,6)
| 2021-09-04 | 2 | 2021-09-02 | 0 |
| 2021-09-04 | 1 | 2021-09-03 | 1 |->(Id= 1)
| 2021-09-04 | 0 | 2021-09-04 | 3 |->(Id = 1,5,6)
| .... | .... | .... | ....
|---------------|------------------|--------------------|-------------------------|
I was able to get the first three columns Present_date | days_difference | last_log_in_date using the following query:
with dts as
(
select distinct log_in_dates from users_table
)
select x.log_in_dates as present_date,
DATEDIFF(DAY, y.log_in_dates ,x.log_in_dates ) as Days_since_last_log_in,
y.log_in_dates as log_in_dates
from dts x, dts y
where x.log_in_dates >= y.log_in_dates
I don't understand how I can get the fourth column Num_users_no_log_in
I do not really understand your need: are the values based on users or on dates? If it's based on dates, as it seems (otherwise you would probably have user_id as the first column), what does it mean to have the same date multiple times? I understand that you would like a recap for all dates from the beginning up to the current date, but in my opinion that does not really make sense (imagine your dashboard in a year!).
Once that is said, let's move on to the approach.
In such cases, I develop step by step using common table expressions. For your example, it requires 3 steps:
prepare the time series
integrate connections' dates and perform the first calculation (time difference)
finally, calculate the number of connections per day
Then, the final query will display the desired result.
Here is the query I propose, developed with PostgreSQL (you did not specify your DBMS, but converting should not be a big deal here):
with init_calendar as (
-- Prepare date series and count total users
select generate_series(min(log_in_dates), now(), interval '1 day') as present_date,
count(distinct user_id) as nb_users
from users
),
calendar as (
-- Add connections' dates for each period from the beginning to current date in calendar
-- and calculate nb days difference for each of them
-- Syntax may vary depending on the DBMS used
select distinct present_date, log_in_dates as last_date,
extract(day from present_date - log_in_dates) as days_difference,
nb_users
from init_calendar
join users on log_in_dates <= present_date
),
usr_con as (
-- Identify last user connection's dates according to running date
-- Tag the line to be counted as no connection
select c.present_date, c.last_date, c.days_difference, c.nb_users,
u.user_id, max(log_in_dates) as last_con,
case when max(log_in_dates) = present_date then 0 else 1 end as to_count
from calendar c
join users u on u.log_in_dates <= c.last_date
group by c.present_date, c.last_date, c.days_difference, c.nb_users, u.user_id
)
select present_date, last_date, days_difference,
nb_users - sum(to_count) as Num_users_no_log_in
from usr_con
group by present_date, last_date, days_difference, nb_users
order by present_date, last_date
Please note that there is a difference with your own expected result as you forgot user_id = 3 in your calculation.
If you want to play with the query, you can with dbfiddle
I'm trying to solve the bus routing problem in PostgreSQL, which requires visibility of previous and next rows. Here is my solution.
Step 1) Have one edges table which represents all the edges (the source and target columns represent vertices, i.e. bus stops):
postgres=# select id, source, target, cost from busedges;
id | source | target | cost
----+--------+--------+------
1 | 1 | 2 | 1
2 | 2 | 3 | 1
3 | 3 | 4 | 1
4 | 4 | 5 | 1
5 | 1 | 7 | 1
6 | 7 | 8 | 1
7 | 1 | 6 | 1
8 | 6 | 8 | 1
9 | 9 | 10 | 1
10 | 10 | 11 | 1
11 | 11 | 12 | 1
12 | 12 | 13 | 1
13 | 9 | 15 | 1
14 | 15 | 16 | 1
15 | 9 | 14 | 1
16 | 14 | 16 | 1
Step 2) Have a table which represents bus details like from time, to time, edge etc.
NOTE: I have used an integer format for the "from" and "to" columns for faster results, since I can query on integers, but I can replace it with a better format if one is available.
postgres=# select id, "busedgeId", "busId", "from", "to" from busedgetimes;
id | busedgeId | busId | from | to
----+-----------+-------+-------+-------
18 | 1 | 1 | 33000 | 33300
19 | 2 | 1 | 33300 | 33600
20 | 3 | 2 | 33900 | 34200
21 | 4 | 2 | 34200 | 34800
22 | 1 | 3 | 36000 | 36300
23 | 2 | 3 | 36600 | 37200
24 | 3 | 4 | 38400 | 38700
25 | 4 | 4 | 38700 | 39540
Step 3) Use the Dijkstra algorithm to find the shortest path.
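For illustration only, a minimal sketch of what this step might look like, assuming the pgRouting extension is installed (vertices 2 and 5 are just the example endpoints used further down):
SELECT *
FROM pgr_dijkstra(
       'SELECT id, source, target, cost FROM busedges',
       2,   -- start vertex
       5    -- end vertex
     );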
Step 4) Get the upcoming buses from the busedgetimes table, earliest first, for the path found by the Dijkstra algorithm.
Problem: I am finding it difficult to write the query for Step 4.
For example, suppose I get the path as edges 2, 3, 4 to travel from source vertex 2 to target vertex 5 in the above records. Getting the first bus for the first edge is not so hard, as I can simply query with from < 'expected departure' order by from desc. But for the second edge, the from condition depends on the to time of the first result row. Also, the query requires a filter on the edge ids.
How can I achieve this in a single query?
I am not sure if I understood your problem correctly, but getting values from other rows can be done with window functions (https://www.postgresql.org/docs/current/static/tutorial-window.html):
demo: db<>fiddle
SELECT
id,
lag("to") OVER (ORDER BY id) as prev_to,
"from",
"to",
lead("from") OVER (ORDER BY id) as next_from
FROM bustimes;
The lag function moves the value of the previous row into the current one. The lead function does the same with the next row. So you are able to calculate a difference between last arrival and current departure or something like that.
Result:
id prev_to from to next_from
18 33000 33300 33300
19 33300 33300 33600 33900
20 33600 33900 34200 34200
21 34200 34200 34800 36000
22 34800 36000 36300
Please notice that "from" and "to" are reserved words in PostgreSQL. It would be better to choose other names.
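As a small illustration of the idea above (a sketch only, on the same bustimes table; "from" and "to" are integers here, so plain subtraction gives the gap in the same units):
SELECT
    id,
    "from" - lag("to") OVER (ORDER BY id) AS wait_before_departure  -- gap between previous arrival and this departure
FROM bustimes;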
I'm trying to write a stored procedure for selecting X well-spread points in time from a big table.
I have a table points:
"Userid" integer
, "Time" timestamp with time zone
, "Value" integer
It contains hundreds of millions of records, with about a million records per user.
I want to select X points (let's say 50), all well spread between time A and time B. The problem is that the points are not spaced evenly (if one point is at 6:00:00, the next one may come 15 seconds, 20 seconds, or 4 minutes later, for example).
Selecting all the points for an id can take up to 60 seconds (because there are about a million points).
Is there any way to select the exact number of points I want, as well spread as possible, in a fast way?
Sample data:
+----+--------+---------------------+-------+
| Id | UserId | Time                | Value |
+----+--------+---------------------+-------+
1 | 1 | 2017-04-10 14:00:00 | 1 |
2 | 1 | 2017-04-10 14:00:10 | 10 |
3 | 1 | 2017-04-10 14:00:20 | 32 |
4 | 1 | 2017-04-10 14:00:35 | 80 |
5 | 1 | 2017-04-10 14:00:58 | 101 |
6 | 1 | 2017-04-10 14:01:00 | 203 |
7 | 1 | 2017-04-10 14:01:30 | 204 |
8 | 1 | 2017-04-10 14:01:40 | 205 |
9 | 1 | 2017-04-10 14:02:02 | 32 |
10 | 1 | 2017-04-10 14:02:15 | 7 |
11 | 1 | 2017-04-10 14:02:30 | 900 |
12 | 1 | 2017-04-10 14:02:45 | 22 |
13 | 1 | 2017-04-10 14:03:00 | 34 |
14 | 1 | 2017-04-10 14:03:30 | 54 |
15 | 1 | 2017-04-10 14:04:00 | 54 |
16 | 1 | 2017-04-10 14:06:00 | 60 |
17 | 1 | 2017-04-10 14:07:20 | 654 |
18 | 1 | 2017-04-10 14:07:40 | 32 |
19 | 1 | 2017-04-10 14:08:00 | 33 |
20 | 1 | 2017-04-10 14:08:12 | 32 |
21 | 1 | 2017-04-10 14:10:00 | 8 |
+----+--------+---------------------+-------+
I want to select 11 "best" points from the list above, for the user with Id 1,
from time 2017-04-10 14:00:00 to 2017-04-10 14:10:00.
Currently it's done on the server, after selecting all the points for the user.
I calculate the "best times" by dividing the time range evenly, which gives a list such as 14:00:00, 14:01:00, ..., 14:10:00 (11 "best times", the same as the number of points). Then, for each "best time", I look for the closest point that has not been selected yet.
The result will be points: 1, 6, 9, 13, 15, 16, 17, 18, 19, 20, 21
Edit:
I'm trying something like this:
SELECT * FROM "points"
WHERE "Userid" = 1 AND
(("Time" =
(SELECT "Time" FROM
"points"
ORDER BY abs(extract(epoch from '2017-04-10 14:00:00' - "Time"))
LIMIT 1)) OR
("Time" =
(SELECT "Time" FROM
"points"
ORDER BY abs(extract(epoch from '2017-04-10 14:01:00' - "Time"))
LIMIT 1)) OR
("Time" =
(SELECT "Time" FROM
"points"
ORDER BY abs(extract(epoch from '2017-04-10 14:02:00' - "Time"))
LIMIT 1)))
The problems here are that:
A) It doesn't take in account points that already have been selected.
B) Because of the ORDER BY, each additional time increases the running time of the query by ~ 1 second, and for 50 points I get back to the 1 minute mark.
There is an optimization problem behind your question that's hard to solve with just SQL.
That said, your attempt at an approximation can be implemented to use an index and show good performance regardless of table size. You need this index if you don't have it already:
CREATE INDEX ON points ("Userid", "Time");
Query:
SELECT *
FROM generate_series(timestamptz '2017-04-10 14:00:00+0'
, timestamptz '2017-04-10 14:09:00+0' -- 1 min *before* end!
, interval '1 minute') grid(t)
LEFT JOIN LATERAL (
SELECT *
FROM points
WHERE "Userid" = 1
AND "Time" >= grid.t
AND "Time" < grid.t + interval '1 minute' -- same interval
ORDER BY "Time"
LIMIT 1
) t ON true;
dbfiddle here
Most importantly, the rewritten query can use the above index and will be very fast, solving problem B).
It also addresses problem A) to some extent as no point is returned more than once. If there is no row between two adjacent points in the grid, you get no row in the result. Using LEFT JOIN .. ON true keeps all grid rows and appends NULL in this case. Eliminate those NULL rows by switching to CROSS JOIN. You may get fewer result rows this way.
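For illustration, the CROSS JOIN variant is the same query with only the join changed, so grid slots without a matching point simply drop out:
SELECT p.*
FROM generate_series(timestamptz '2017-04-10 14:00:00+0'
                   , timestamptz '2017-04-10 14:09:00+0'
                   , interval '1 minute') grid(t)
CROSS JOIN LATERAL (
   SELECT *
   FROM   points
   WHERE  "Userid" = 1
   AND    "Time" >= grid.t
   AND    "Time" <  grid.t + interval '1 minute'  -- same interval
   ORDER  BY "Time"
   LIMIT  1
   ) p;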
I am only searching ahead of each grid point. You might append a second LATERAL join to also search behind each grid point (just another index scan), and take the closer of the two results (ignoring NULL). But that introduces a few problems:
If one match is behind and the next is ahead, the gap widens.
You need special treatment for the lower and/or upper bound of the outer interval.
And you need two LATERAL joins with two index scans.
You could use a recursive CTE to search 1 minute ahead of the last time actually found, but then the total number of rows returned varies even more.
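If you want to experiment with that, here is a rough sketch of such a recursive CTE (same table and parameters as above; each step jumps to the first point at least 1 minute after the previously found one, so the number of rows returned depends on the data):
WITH RECURSIVE walk AS (
   (  -- first point at or after the start of the interval
   SELECT "Userid", "Time", "Value"
   FROM   points
   WHERE  "Userid" = 1
   AND    "Time"  >= timestamptz '2017-04-10 14:00:00+0'
   ORDER  BY "Time"
   LIMIT  1
   )
   UNION ALL
   SELECT n.*
   FROM   walk w
   CROSS  JOIN LATERAL (  -- first point at least 1 minute after the last one found
      SELECT "Userid", "Time", "Value"
      FROM   points
      WHERE  "Userid" = 1
      AND    "Time"  >= w."Time" + interval '1 minute'
      AND    "Time"  <= timestamptz '2017-04-10 14:10:00+0'
      ORDER  BY "Time"
      LIMIT  1
      ) n
   )
SELECT * FROM walk;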
It all comes down to an exact definition of what you need, and where compromises are allowed.
Related:
What is the difference between a LATERAL JOIN and a subquery in PostgreSQL?
Aggregating the most recent joined records per week
MySQL/Postgres query 5 minutes interval data
Optimize GROUP BY query to retrieve latest row per user
Answer: use generate_series('2017-04-10 14:00:00','2017-04-10 14:10:00','1 minute'::interval) and join for comparison.
To save others time with the data set:
t=# create table points(i int,"UserId" int,"Time" timestamp(0), "Value" int,b text);
CREATE TABLE
Time: 13.728 ms
t=# copy points from stdin delimiter '|';
Enter data to be copied followed by a newline.
End with a backslash and a period on a line by itself.
1 | 1 | 2017-04-10 14:00:00 | 1 |
2 | 1 | 2017-04-10 14:00:10 | 10 |
3 | 1 | 2017-04-10 14:00:20 | 32 |
4 | 1 | 2017-04-10 14:00:35 | 80 |
5 | 1 | 2017-04-10 14:00:58 | 101 |
6 | 1 | 2017-04-10 14:01:00 | 203 |
7 | 1 | 2017-04-10 14:01:30 | 204 |
8 | 1 | 2017-04-10 14:01:40 | 205 |
9 | 1 | 2017-04-10 14:02:02 | 32 |
10 | 1 | 2017-04-10 14:02:15 | 7 |
11 | 1 | 2017-04-10 14:02:30 | 900 |
12 | 1 | 2017-04-10 14:02:45 | 22 |
13 | 1 | 2017-04-10 14:03:00 | 34 |
14 | 1 | 2017-04-10 14:03:30 | 54 |
15 | 1 | 2017-04-10 14:04:00 | 54 |
16 | 1 | 2017-04-10 14:06:00 | 60 |
17 | 1 | 2017-04-10 14:07:20 | 654 |
18 | 1 | 2017-04-10 14:07:40 | 32 |
19 | 1 | 2017-04-10 14:08:00 | 33 |
20 | 1 | 2017-04-10 14:08:12 | 32 |
21 | 1 | 2017-04-10 14:10:00 | 8 |
\.
COPY 21
Time: 7684.259 ms
t=# alter table points rename column "UserId" to "Userid";
ALTER TABLE
Time: 1.013 ms
Frankly, I don't understand the request. This is how I understood it from the description; the results differ from what the OP expects:
t=# with r as (
with g as (
select generate_series('2017-04-10 14:00:00','2017-04-10 14:10:00','1 minute'::interval) s
)
select *,abs(extract(epoch from '2017-04-10 14:02:00' - "Time"))
from g
join points on g.s = date_trunc('minute',"Time")
order by abs
limit 11
)
select i, "Time","Value",abs
from r
order by i;
i | Time | Value | abs
----+---------------------+-------+-----
4 | 2017-04-10 14:00:35 | 80 | 85
5 | 2017-04-10 14:00:58 | 101 | 62
6 | 2017-04-10 14:01:00 | 203 | 60
7 | 2017-04-10 14:01:30 | 204 | 30
8 | 2017-04-10 14:01:40 | 205 | 20
9 | 2017-04-10 14:02:02 | 32 | 2
10 | 2017-04-10 14:02:15 | 7 | 15
11 | 2017-04-10 14:02:30 | 900 | 30
12 | 2017-04-10 14:02:45 | 22 | 45
13 | 2017-04-10 14:03:00 | 34 | 60
14 | 2017-04-10 14:03:30 | 54 | 90
(11 rows)
I added the abs column to show why I thought those rows fit the request better.
I have this table.
+----+------+------+------+
| ks | time | val1 | val2 |
+----+------+------+------+
| A  | 1    | 1    | 1    |
| B  | 1    | 3    | 5    |
| A  | 2    | 6    | 7    |
| B  | 2    | 10   | 12   |
| A  | 4    | 6    | 7    |
| B  | 4    | 20   | 26   |
+----+------+------+------+
What I want to get is, for each row:
ks | time | val1 | val1 of the next time of the same ks
To be clear, the result for the above example should be:
+----+------+------+-----------+
| ks | time | val1 | next.val1 |
+----+------+------+-----------+
| A  | 1    | 1    | 6         |
| B  | 1    | 3    | 10        |
| A  | 2    | 6    | 6         |
| B  | 2    | 10   | 20        |
| A  | 4    | 6    | null      |
| B  | 4    | 20   | null      |
+----+------+------+-----------+
(I need the same "next" for val2 as well.)
I tried a lot to come up with a Hive query for this, but still no luck. I was able to write a query for this in SQL as mentioned here (Quassnoi's answer), but couldn't create the equivalent in Hive because Hive doesn't support subqueries in SELECT.
Can someone please help me achieve this?
Thanks in advance.
EDIT:
The query I tried was:
SELECT ks, time, val1, next[0] as next.val1 from
(SELECT ks, time, val1
COALESCE(
(
SELECT Val1, time
FROM myTable mi
WHERE mi.val1 > m.val1 AND mi.ks = m.ks
ORDER BY time
LIMIT 1
), CAST(0 AS BIGINT)) AS next
FROM myTable m
ORDER BY time) t2;
Your query seems quite similar to the "year ago" reporting that is ubiquitous in financial reporting. I think a LEFT OUTER JOIN is what you are looking for.
We join table myTable to itself, naming the two instances of the same table m and n. For every entry in the first table m we will attempt to find a matching record in n with the same ks value but an incremented value of time. If this record does not exist, all column values for n will be NULL.
SELECT
m.ks,
m.time,
m.val1,
n.val1 as next_val1,
m.val2,
n.val2 as next_val2
FROM
myTable m
LEFT OUTER JOIN
myTable n
ON (
m.ks = n.ks
AND
m.time + 1 = n.time
);
Returns the following.
ks time val1 next_val1 val2 next_val2
A 1 1 6 1 7
A 2 6 6 7 7
A 3 6 NULL 7 NULL
B 1 3 10 5 12
B 2 10 20 12 26
B 3 20 NULL 26 NULL
Hope that helps.
I find that Hive's custom map/reduce functionality works great for solving queries similar to this. It gives you the opportunity to consider a set of input rows and "reduce" them to one (or more) results.
This answer discusses the solution.
The key is to use CLUSTER BY to send all rows with the same key value to the same reducer (and hence the same reduce script), collect accordingly, output the reduced results when the key changes, and start collecting for the new key.
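A rough skeleton of that pattern, as a sketch only: next_vals.py is a hypothetical reduce script (not shown) that buffers one row per key and emits it together with the following row's values; DISTRIBUTE BY plus SORT BY is used instead of a plain CLUSTER BY so rows also arrive ordered by time within each key:
ADD FILE next_vals.py;   -- hypothetical reducer script

FROM (
  SELECT ks, time, val1, val2
  FROM myTable
  DISTRIBUTE BY ks   -- all rows of one ks go to the same reducer
  SORT BY ks, time   -- and arrive ordered by time within it
) ordered
SELECT TRANSFORM (ks, time, val1, val2)
  USING 'python next_vals.py'
  AS (ks, time, val1, next_val1, val2, next_val2);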
I've read through the Oracle documentation concerning the CONNECT operations, but I can't seem to get my head around a database query we have in an existing application. Below is a simplified version of the query.
SELECT LEVEL,
CONNECT_BY_ROOT MY_MONTH MY_LABEL,
b.*
FROM (
SELECT ROWNUM AS ORDERING,
MY_AREA,
TRUNC (THE_MONTH, 'MONTH') AS MY_MONTH
FROM MY_TABLE
ORDER BY MY_AREA, MY_MONTH DESC
) b
WHERE LEVEL <= 3
START WITH 1 = 1
CONNECT BY PRIOR MY_AREA = MY_AREA
AND PRIOR ORDERING = ORDERING - 1
AND PRIOR MY_MONTH <= ADD_MONTHS(MY_MONTH, 6);
While I have a basic understanding of the CONNECT functionalities, this combination has me lost. Can anyone explain what is going on in this query?
I think the end says to get all of the rows that have the same area and a row number 1 less than the current row number and a date before 6 months in the future from the current date. I would guess this would only return 1 row (due to the row number criteria) or 0 rows if the other criteria weren't met. And then maybe the first CONNECT_BY_ROOT says to get that row's MY_MONTH value?
Start with b, which is a table of MY_AREA (a number?), MY_MONTH (a month-truncated date, i.e. the day is always set to 01), and ORDERING, an aliased ROWNUM determined by the ORDER BY clause ORDER BY MY_AREA, MY_MONTH DESC, e.g.:
+----------+---------+-----------+
| ORDERING | MY_AREA | MY_MONTH |
+----------+---------+-----------+
| 1 | 10 | 01-SEP-12 |
| 2 | 10 | 01-JAN-12 |
| 3 | 12 | 01-AUG-12 |
| 4 | 12 | 01-JUN-12 |
| 5 | 12 | 01-MAY-12 |
| 6 | 12 | 01-JAN-12 |
| 7 | 12 | 01-JAN-10 |
+----------+---------+-----------+
The WHERE clause doesn't come into play until later, so move on to START WITH, which says only 1 = 1. This means that every row in b will be used in the query; if you had had another condition here, e.g. my_area < 5 or whatever, only a certain set of rows would have been used.
Now, the CONNECT BY, which determines how the hierarchy should be built. This works like a WHERE clause, except for the special PRIOR keyword which tells the DB to look at the previous level in the hierarchy. So:
PRIOR MY_AREA = MY_AREA just means that the child node has to have the same value for MY_AREA.
PRIOR ORDERING = ORDERING - 1 means that the child should come one row after its parent in b's ordering.
PRIOR MY_MONTH <= ADD_MONTHS(MY_MONTH, 6) means that, in order to be joined into the hierarchy, the parent's MY_MONTH must be no more than 6 months after the MY_MONTH of the current (child) node.
The whole hierarchy is then created. LEVEL (special for CONNECT BY...) is set to the level in the hierarchy, CONNECT_BY_ROOT gives the MY_MONTH value for the root of that hierarchy and aliases it to MY_LABEL. After this, the table would look something like the following table. I've added separators for each hierarchy for clarity.
+-------+-----------+----------+---------+-----------+
| LEVEL | MY_LABEL | ORDERING | MY_AREA | MY_MONTH |
+-------+-----------+----------+---------+-----------+
| 1 | 01-SEP-12 | 1 | 10 | 01-SEP-12 |
+-------+-----------+----------+---------+-----------+
| 1 | 01-JAN-12 | 2 | 10 | 01-JAN-12 |
+-------+-----------+----------+---------+-----------+
| 1 | 01-AUG-12 | 3 | 12 | 01-AUG-12 |
| 2 | 01-AUG-12 | 4 | 12 | 01-JUN-12 |
| 3 | 01-AUG-12 | 5 | 12 | 01-MAY-12 |
| 4 | 01-AUG-12 | 6 | 12 | 01-JAN-12 |
+-------+-----------+----------+---------+-----------+
| 1 | 01-JUN-12 | 4 | 12 | 01-JUN-12 |
| 2 | 01-JUN-12 | 5 | 12 | 01-MAY-12 |
| 3 | 01-JUN-12 | 6 | 12 | 01-JAN-12 |
+-------+-----------+----------+---------+-----------+
| 1 | 01-MAY-12 | 5 | 12 | 01-MAY-12 |
| 2 | 01-MAY-12 | 6 | 12 | 01-JAN-12 |
+-------+-----------+----------+---------+-----------+
| 1 | 01-JAN-12 | 6 | 12 | 01-JAN-12 |
+-------+-----------+----------+---------+-----------+
| 1 | 01-JAN-10 | 7 | 12 | 01-JAN-10 |
+-------+-----------+----------+---------+-----------+
So, as you can see, each of the rows appears at the top of its own hierarchy, with all nodes meeting the CONNECT BY criteria under it.
Finally, the WHERE clause is applied; this chops off all of the levels > 3 in every hierarchy, so you're left with a maximum of 3 levels. This affects only one row in the middle hierarchy, the one with LEVEL = 4.