Theil–Sen estimator using Hive SQL

I would like to calculate the Theil–Sen estimator per ID for the value column in the sample table below using Hive. The Theil–Sen estimator is defined here: https://en.wikipedia.org/wiki/Theil%E2%80%93Sen_estimator. I tried to use arrays but could not figure out a solution. Any help is appreciated.
+----+-------+-------+
| 1 | 1 | 10 |
| 1 | 2 | 20 |
| 1 | 3 | 30 |
| 1 | 4 | 40 |
| 1 | 5 | 50 |
| 2 | 1 | 100 |
| 2 | 2 | 90 |
| 2 | 3 | 102 |
| 2 | 4 | 75 |
| 2 | 5 | 70 |
| 2 | 6 | 50 |
| 2 | 7 | 100 |
| 2 | 8 | 80 |
| 2 | 9 | 60 |
| 2 | 10 | 50 |
| 2 | 11 | 40 |
| 2 | 12 | 40 |
+----+-------+-------+
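Theil–Sen reduces to the median of the pairwise slopes (value_j - value_i) / (x_j - x_i) within each ID, which maps naturally onto a self-join plus a median aggregate, with no arrays needed. Below is a minimal sketch; it assumes the table is named sample_table with columns id, x and value (the sample data has no header row, so those names are guesses), and it uses percentile_approx because Hive's exact percentile only accepts integral types while the slopes are doubles:

SELECT a.id,
       percentile_approx((b.value - a.value) / (b.x - a.x), 0.5) AS theil_sen_slope
FROM sample_table a
JOIN sample_table b
  ON a.id = b.id        -- pair up points within the same ID
WHERE a.x < b.x         -- each unordered pair once; also keeps the denominator non-zero
GROUP BY a.id;

For id = 1 every pairwise slope is exactly 10, so the estimate is 10. If you also need the intercept, it is conventionally the median of value - slope * x, which can be computed in a second pass once the per-id slope is known.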

Related

How to return the maximum and minimum values for a specific ID in SQL

Given the following SQL tables: https://imgur.com/a/NI8VrC7. For each ID_t I need to return the MAX() and MIN() values of the Cena_c (total price) column.
| ID_t | Nazwa |
| ---- | ----- |
| 1 | T1 |
| 2 | T2 |
| 3 | T3 |
| 4 | T4 |
| 5 | T5 |
| 6 | T6 |
| 7 | T7 |
| ID | ID_t | Ilosc | Cena_j | Cena_c | ID_p |
| ---- | ---- | ----- | ------ | ------ | ---- |
| 100 | 1 | 1 | 10 | 10 | 1 |
| 101 | 2 | 3 | 20 | 60 | 2 |
| 102 | 4 | 5 | 10 | 50 | 7 |
| 103 | 2 | 2 | 20 | 40 | 5 |
| 104 | 5 | 1 | 30 | 30 | 5 |
| 105 | 7 | 6 | 80 | 480 | 1 |
| 106 | 6 | 7 | 15 | 105 | 2 |
| 107 | 6 | 5 | 15 | 75 | 1 |
| 108 | 3 | 3 | 25 | 75 | 7 |
| 109 | 7 | 1 | 80 | 80 | 5 |
| 110 | 4 | 1 | 10 | 10 | 2 |
| 111 | 2 | 9 | 20 | 180 | 2 |
Based on the provided tables, the correct result should look like this:
| ID_t | Cena_c_max | Cena_c_min |
| ----- | ---------- | ---------- |
| T1 | 10 | 10 |
| T2 | 180 | 60 |
| T3 | 75 | 75 |
| T4 | 50 | 10 |
| T5 | 30 | 30 |
| T6 | 105 | 75 |
| T7 | 480 | 80 |
Is this even possible?
I haven't found anything yet that I could use to implement my solution.
SELECT concat('T',ID_t), max(Cena_c) as Cena_c_max, min(Cena_c) as Cena_c_min
FROM table
GROUP BY ID_t
It is better to solve this with a join between the tables, so the query keeps working if the prefix T is ever changed to another letter. Hardcoding should be avoided.
select b.Nazwa as "Nazwa", max(a.Cena_c) as "Cena_c_max", min(a.Cena_c) as "Cena_c_min"
from table1 as a
left join table2 as b on (
a.ID_t = b.ID_t
)
group by b.Nazwa
Grouping by b.Nazwa rather than a bare id_t avoids an ambiguous-column error, since both tables contain an ID_t column.
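As a quick check against the sample data: the Cena_c values for ID_t = 2 are 60, 40 and 180 (rows 101, 103 and 111), giving Cena_c_max = 180 and Cena_c_min = 60, which matches the expected T2 row.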

How do I get around aggregate function error?

I have the following SQL to calculate a % total:
SELECT tblTourns_atp.ID_Ti,
Sum([FS_1]/(SELECT Sum(FSOF_1)
FROM stat_atp
WHERE stat_atp.ID_T = tblTourns_atp.ID_T)) AS S1_IP
FROM stat_atp
INNER JOIN tblTourns_atp ON stat_atp.ID_T = tblTourns_atp.ID_T
GROUP BY tblTourns_atp.ID_Ti
I'm getting the 'aggregate error' because it wants the ID_T fields either grouped or in an aggregate function. I've read loads of examples but none of them seem to apply when the offending field is the subject of 'WHERE'.
Tables and output as follows:
+----------+------+--------+--+---------------+-------+--+--------+--------+
| stat_atp | | | | tblTourns_atp | | | Output | |
+----------+------+--------+--+---------------+-------+--+--------+--------+
| ID_T | FS_1 | FSOF_1 | | ID_T | ID_Ti | | ID_Ti | S1_IP |
| 1 | 20 | 40 | | 1 | 1 | | 1 | 31.03% |
| 2 | 30 | 100 | | 2 | 1 | | 2 | 28.57% |
| 3 | 40 | 150 | | 3 | 1 | | 3 | 33.33% |
| 4 | 30 | 100 | | 4 | 2 | | | |
| 5 | 30 | 100 | | 5 | 2 | | | |
| 6 | 40 | 150 | | 6 | 2 | | | |
| 7 | 20 | 40 | | 7 | 3 | | | |
| 8 | 30 | 100 | | 8 | 3 | | | |
| 9 | 40 | 150 | | 9 | 3 | | | |
| 10 | 20 | 40 | | 10 | 3 | | | |
+----------+------+--------+--+---------------+-------+--+--------+--------+
Since you already have an inner join between the two tables, a separate subquery isn't required:
select t.id_ti, sum(s.fs_1)/sum(s.fsof_1) as pct
from tbltourns_atp t inner join stat_atp s on t.id_t = s.id_t
group by t.id_ti
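As a check against the sample data: for ID_Ti = 1 this gives sum(s.fs_1) = 20 + 30 + 40 = 90 and sum(s.fsof_1) = 40 + 100 + 150 = 290, and 90/290 ≈ 31.03%, matching the expected output. Note this is the pooled ratio, which is what the expected figures imply, rather than the average of the per-tournament ratios.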

Pandas - Grouping Rows With Same Value in Dataframe

Here is the dataframe in question:
|City|District|Population| Code | ID |
| A | 4 | 2000 | 3 | 21 |
| A | 8 | 7000 | 3 | 21 |
| A | 38 | 3000 | 3 | 21 |
| A | 7 | 2000 | 3 | 21 |
| B | 34 | 3000 | 6 | 84 |
| B | 9 | 5000 | 6 | 84 |
| C | 4 | 9000 | 1 | 28 |
| C | 21 | 1000 | 1 | 28 |
| C | 32 | 5000 | 1 | 28 |
| C | 46 | 20 | 1 | 28 |
I want to regroup the population counts by city to have this kind of output:
|City|Population| Code | ID |
| A | 14000 | 3 | 21 |
| B | 8000 | 6 | 84 |
| C | 15020 | 1 | 28 |
df = df.groupby(['City', 'Code', 'ID'], as_index=False)['Population'].sum()
You can group by 'City', 'Code' and 'ID', then take the sum of 'Population'. Passing as_index=False keeps the grouping columns as regular columns, so the result matches the desired output instead of being a Series with a MultiIndex.
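For city A, for example, this sums 2000 + 7000 + 3000 + 2000 = 14000, matching the first row of the desired output.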

Writing a SQL query to show results in a specific order

I have this table
+----+--------+------------+-----------+
| Id | day_id | subject_id | period_Id |
+----+--------+------------+-----------+
| 1 | 1 | 1 | 1 |
| 2 | 1 | 2 | 2 |
| 8 | 2 | 6 | 1 |
| 9 | 2 | 7 | 2 |
| 15 | 3 | 3 | 1 |
| 16 | 3 | 4 | 2 |
| 22 | 4 | 5 | 1 |
| 23 | 4 | 5 | 2 |
| 24 | 4 | 6 | 3 |
| 29 | 5 | 8 | 1 |
| 30 | 5 | 1 | 2 |
+----+--------+------------+-----------+
to something like this
| Id | day_id | subject_id | period_Id |
| 1 | 1 | 1 | 1 |
| 8 | 2 | 6 | 1 |
| 15 | 3 | 3 | 1 |
| 22 | 4 | 5 | 1 |
| 29 | 5 | 8 | 1 |
| 2 | 1 | 2 | 2 |
| 9 | 2 | 7 | 2 |
| 16 | 3 | 4 | 2 |
| 23 | 4 | 5 | 2 |
| 30 | 5 | 1 | 2 |
+----+--------+------------+-----------+
So, I want to choose one period with a different subject each day, and to do this for a number of weeks, so that the first subject does not come up again until all subjects have been chosen.
You can ORDER BY period_id first and then by day_id:
SELECT *
FROM your_table
ORDER BY period_Id, day_Id
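With the sample data this returns all of the period 1 rows in day order (Ids 1, 8, 15, 22, 29), then the period 2 rows (Ids 2, 9, 16, 23, 30), then the single period 3 row (Id 24), which is the desired layout.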

Join Distinct or First

I have two tables, SalesItems and Sales.
SalesItems is set up something like this
| SaleItemID | SaleID | ProductID | ProductType |
| 1 | 1 | 1 | 1 |
| 2 | 1 | 2 | 2 |
| 3 | 1 | 15 | 1 |
| 4 | 2 | 5 | 2 |
| 5 | 3 | 1 | 1 |
| 6 | 3 | 8 | 5 |
And Sales is set up something like this
| Sale | Cash |
| 1 | 1.00 |
| 2 | 10.00 |
| 3 | 28.50 |
I am trying to export a basic 'Daily History' that uses joins to spit out the information like this.
| Date | StoreID | Type1Sales | Type2Sales | ... | Cash Taken |
| 5/2 | 50 | 50 | 40 | ... | 39.50 |
| 5/3 | 50 | 10 | 32.50 | ... | 48.50 |
The issue I'm having is that if I do an inner join from Sales to SalesItems, I end up with this.
| SaleItemID | SaleID | ProductID | ProductType | Sale | Cash |
| 1 | 1 | 1 | 1 | 1 | 1.00 |
| 2 | 1 | 2 | 2 | 1 | 1.00 |
| 3 | 1 | 15 | 1 | 1 | 1.00 |
| 4 | 2 | 5 | 2 | 2 | 10.00 |
| 5 | 3 | 1 | 1 | 3 | 28.50 |
| 6 | 3 | 8 | 5 | 3 | 28.50 |
So if I do a SUM(Cash), I end up with $70.00 instead of the correct $39.50. I'm not the best with joins, so I've been researching outer joins and such, but none of those seem to work, as the rows still match up. Is there a way to match only the FIRST instance and return NULL for the rest? For example, something like this
| SaleItemID | SaleID | ProductID | ProductType | Sale | Cash |
| 1 | 1 | 1 | 1 | 1 | 1.00 |
| 2 | 1 | 2 | 2 | 1 | NULL |
| 3 | 1 | 15 | 1 | 1 | NULL |
| 4 | 2 | 5 | 2 | 2 | 10.00 |
| 5 | 3 | 1 | 1 | 3 | 28.50 |
| 6 | 3 | 8 | 5 | 3 | NULL |
Or do you have any other suggestions for returning the correct amount of Cash for each particular day?
DISTINCT(SaleID) on its own won't fix the total, because the join still repeats Cash on every item row of the same sale. You need to count each sale's Cash only once: either aggregate Sales (per sale or per day) before joining it to SalesItems, or blank out the repeats with a window function so that Cash appears on just the first item of each sale, as in your desired output.
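Here is a minimal sketch of the window-function version, assuming the tables are named SalesItems and Sales as shown and a dialect that supports ROW_NUMBER():

SELECT si.SaleItemID,
       si.SaleID,
       si.ProductID,
       si.ProductType,
       s.Sale,
       CASE WHEN ROW_NUMBER() OVER (PARTITION BY si.SaleID ORDER BY si.SaleItemID) = 1
            THEN s.Cash
       END AS Cash  -- Cash on the first item of each sale, NULL on the rest
FROM SalesItems si
INNER JOIN Sales s ON si.SaleID = s.Sale;

Summing Cash over this result (in an outer query) gives 1.00 + 10.00 + 28.50 = 39.50, the expected total.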