I need to create a SQL query that calculates some data.
For instance, I have this SQL query:
SELECT SUM(AMOUNT) FROM FIRMS WHERE FIRM_ID IN(....) GROUP BY FIRM;
which produces such data:
28,740,573
30,849,923
25,665,724
43,223,313
34,334,534
35,102,286
38,556,820
19,384,871
Now, in a second column, I need to show the ratio of each entry to the sum of all entries, like this:
28,740,573 | 0.1123
30,849,923 | 0.1206
25,665,724 | 0.1003
43,223,313 | 0.1689
34,334,534 | 0.1342
35,102,286 | 0.1372
38,556,820 | 0.1507
19,384,871 | 0.0758
For instance, the sum of all entries in the first column above is 255,858,044, so the value in the first row's second cell is 28,740,573 / 255,858,044 = 0.1123, and likewise for every other row in the result.
How can I do that?
UPD: Thanks #a_horse_with_no_name, I forgot to mention the DBMS. It's Oracle.
Most databases now support the ANSI standard window functions. So, you can do:
SELECT SUM(AMOUNT),
       SUM(AMOUNT) / SUM(SUM(AMOUNT)) OVER () AS ratio
FROM FIRMS
WHERE FIRM_ID IN (....)
GROUP BY FIRM;
Note: Some databases do integer division, so if AMOUNT is an integer you need to convert it to a non-integer number in those databases. One easy method is to multiply by 1.0.
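Since the update says the DBMS is Oracle, there is also a dedicated window function for exactly this. A minimal sketch, assuming the same FIRMS table and filter as above:

-- RATIO_TO_REPORT divides each group's sum by the grand total over the window
SELECT SUM(AMOUNT),
       RATIO_TO_REPORT(SUM(AMOUNT)) OVER () AS ratio
FROM FIRMS
WHERE FIRM_ID IN (....)
GROUP BY FIRM;

Oracle's NUMBER arithmetic does not truncate division, so the 1.0 trick is not needed there.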
Related
I'm trying to write a query that aggregates first by the count of a field and then by bin(1h). For example, I would like to get a result like:
# Date Field Count
1 2019-01-01T10:00:00.000Z A 123
2 2019-01-01T11:00:00.000Z A 456
3 2019-01-01T10:00:00.000Z B 567
4 2019-01-01T11:00:00.000Z B 789
Not sure if it's possible though; the query should be something like:
fields Field
| stats count() by Field by bin(1h)
Any ideas how to achieve this?
Is this what you need?
fields Field | stats count() by Field, bin(1h)
If you want to create a line chart, you can do it by separately counting each value that your field could take.
fields
Field = 'A' as is_A,
Field = 'B' as is_B
| stats sum(is_A) as A, sum(is_B) as B by bin(1hour)
This solution requires your query to include a string literal of each value ('A' and 'B' in OP's example). It works as long as you know what those possible values are.
This might be what Hugo Mallet was looking for, except the avg() function won't work here, so he'd have to calculate the average by dividing by a total.
I'm not able to group by a certain field and create visualizations.
fields Field
| stats count() by Field, bin(1h)
I keep getting this message:
No visualization available. Try this to get started:
stats count() by bin(30s)
I haven't been able to figure out exactly how to put together this SQL string, so I'd really appreciate it if someone could help me out. I am using Access 2016, so please only provide answers that will work with Access. I have two queries whose fields all differ except for one in common. I need to find the minimum absolute difference between the two similar columns, and then pull the data from the corresponding record. For instance,
qry1.Col1 | qry1.Col2
----------|----------
10245.123 | Have
302044.31 | A

qry2.Col1 | qry2.Col2
----------|----------
23451.321 | Great
345622.34 | Day
Find the minimum absolute difference in a third query, qry3: for instance, Min(Abs(qry1!Col1 - qry2!Col1)). I imagine it would produce one of these tables for each value in qry1.Col1. For the value 10245.123:
qry3.Col1
----------
13206.198
335377.217
Since 13206.198 is the minimum absolute difference, I want to pull the corresponding record from qry2 and associate it with the data from qry1 (I'm assuming this uses a JOIN), resulting in a fourth query like this:
qry4.Col1 (qry1.Col1) | qry4.Col2 (qry1.Col2) | qry4.Col3 (qry2.Col2)
----------------------------------------------------------------------
10245.123 | Have | Great
302044.31 | A | Day
If this is all doable in one SQL string, that would be great; if a couple of steps are required, that's okay as well. I just want to avoid the time-consuming approach of loops and RecordSet.Findfirst in VBA.
You can use a correlated subquery:
select q1.*,
       (select top 1 q2.col2
        from qry2 as q2
        order by abs(q2.col1 - q1.col1), q2.col2
       ) as qry2_col2
from qry1 as q1;
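To trace this against the sample data: for 10245.123 the subquery orders qry2's rows by Abs(23451.321 - 10245.123) = 13206.198 and Abs(345622.34 - 10245.123) = 335377.217, so top 1 returns 'Great'. The extra q2.col2 in the ORDER BY is a tie-breaker: Access's TOP 1 returns all tied rows, so without it the subquery could return more than one value and fail.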
I have a static dataset that correlates a range of numbers to some metadata, e.g.
+--------+--------+-------+--------+----------------+
| Min | Max |Country|CardType| Issuing Bank |
+--------+--------+-------+--------+----------------+
| 400011 | 400051 | USA |VISA | Bank of America|
+--------+--------+-------+--------+----------------+
| 400052 | 400062 | UK |MAESTRO | HSBC |
+--------+--------+-------+--------+----------------+
I wish to look up the data for some arbitrary single value:
SELECT *
FROM SomeTable
WHERE Min <= 400030
AND Max >= 400030
I have about 200k of these range mappings and am wondering what the best table structure for SQL Server is.
A composite key doesn't seem right, because most of the time the value being looked up will fall between the two range values stored on disk. Similarly, indexing only the first column doesn't seem selective enough.
I know that 200k rows is fairly insignificant and I can get by without doing much, but let's assume the number of rows could be orders of magnitude greater.
If you usually search on both min and max, then a compound key on (min, max) is appropriate. The engine will find all rows where min is less than X, then search within those results to find the rows where max is greater than Y.
The index would also be useful if you do searches on min only, but would not be applicable if you do searches only on max.
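As a minimal sketch, assuming the table above is named SomeTable:

-- compound key: the engine seeks on Min, then filters those matches on Max
CREATE INDEX IX_SomeTable_Min_Max
    ON SomeTable ([Min], [Max]);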
You can index the first number and then do the lookup like this:
select t.*,
       (select top 1 s.country
        from static s
        where t.num >= s.firstnum
        -- descending, so the greatest range start at or below num wins
        order by s.firstnum desc
       ) as country
from sometable t;
Or use outer apply:
select t.*, s.country
from sometable t outer apply
     (select top 1 s.country
      from static s
      where t.num >= s.firstnum
      -- same fix: take the closest range start at or below num
      order by s.firstnum desc
     ) s;
This should take advantage of an index on static(firstnum) or static(firstnum, country). This does not check against the second number. If that is important, use outer apply and do the check outside the subquery.
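A minimal sketch of that last variant, assuming columns named num, firstnum, and secondnum:

-- pick the nearest candidate range, then verify num really falls inside it;
-- outer apply keeps rows of sometable that match no range at all
select t.*,
       case when t.num <= s.secondnum then s.country end as country
from sometable t outer apply
     (select top 1 s.country, s.secondnum
      from static s
      where t.num >= s.firstnum
      order by s.firstnum desc
     ) s;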
I would specify the primary key on (Min,Max). Queries are as simple as:
SELECT *
FROM SomeTable
WHERE @Value BETWEEN Min AND Max
I'd also define a constraint to enforce that Min <= Max. Then I would create a trigger to enforce uniqueness in ranges and prevent the database from storing an overlapping range.
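The range-sanity constraint is a one-liner; a sketch, assuming the table is named SomeTable:

-- a CHECK constraint sees one row at a time, which is why preventing
-- overlapping ranges still requires the trigger
ALTER TABLE SomeTable
    ADD CONSTRAINT CK_SomeTable_ValidRange CHECK ([Min] <= [Max]);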
I believe it is easier/faster if you create a trigger for INSERT and then fill in the related calculated columns: country, issuing bank, card-number length.
That way you do the calculation only once, instead of over 200k rows every time you run a query. Of course there is a space cost, but the query will be much easier to maintain.
I remember once I had to calculate some sin and cos values for distances, so I just created the calculated columns once.
After your update I think it is even easier:
+--------+--------+-------+--------+----------------+----------+
| Min | Max |Country|CardType| Issuing Bank | TypeID |
+--------+--------+-------+--------+----------------+----------+
| 400011 | 400051 | USA |VISA | Bank of America| 1 |
+--------+--------+-------+--------+----------------+----------+
| 400052 | 400062 | UK |MAESTRO | HSBC | 2 |
+--------+--------+-------+--------+----------------+----------+
Then your Card table will also get a TypeID column.
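A rough sketch of that trigger idea; the Card table with its CardID and Number columns, and the range table name CardRange, are hypothetical names for illustration:

-- hypothetical schema: Card(CardID, Number, TypeID), CardRange(Min, Max, TypeID)
CREATE TRIGGER trg_Card_SetTypeID
ON Card
AFTER INSERT
AS
BEGIN
    -- stamp each newly inserted card with the TypeID of the range its number falls in
    UPDATE c
    SET c.TypeID = r.TypeID
    FROM Card AS c
    JOIN inserted AS i ON i.CardID = c.CardID
    JOIN CardRange AS r ON i.Number BETWEEN r.[Min] AND r.[Max];
END;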
I have a table with the following structure
|user_id | place | type_of_place | money_earned| time |
|--------+-------+---------------+-------------+------|
| | | | | |
The table is very large, several millions of rows. The data is in a PostgreSQL 9.1 database.
I want to calculate, per user_id and type_of_place: the mean, the standard deviation, the top 5 places (ordered by counts), and the most frequent hour of the day (the mode).
The resulting data must be in this form:
| user_id | type_of_place | avg | stddev | top5_places | mode |
+---------+---------------+-----+--------+------------------+------+
| 1 | tp1 | 10 | 1 | {p1,p2,p3,p4,p5} | 8 |
| 2 | tp1 | 3 | 2 | {p3,p4} | 23 |
| 1 | tp3 | 1 | 1 | {p1} | 4 |
etc.
Is there a way of doing this efficiently with window functions?
What if I also want to group by week (i.e. by another column that represents the week number)?
Thank you!
A standard GROUP BY query will get you most of the way:
SELECT
user_id,
type_of_place,
avg(money_earned) AS avg,
stddev(money_earned) AS stddev
FROM
earnings -- I'm not sure what your data table is called...
GROUP BY
user_id,
type_of_place
This leaves the top5_places and mode columns. These are both also aggregates, but not ones which are defined in the standard PostgreSQL installation. Luckily, you can add them.
Here's a page discussing how to define a mode aggregate function: http://wiki.postgresql.org/wiki/Aggregate_Mode
Once you have a mode aggregate function, assuming time is a timestamp of some kind, the expression you will add to the select list will be:
SELECT
...
mode(extract(hour FROM time)) AS mode -- Add this expression
FROM
...
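As an aside: the question targets 9.1, but on PostgreSQL 9.4 and later the built-in ordered-set aggregate mode() makes the custom function unnecessary. A sketch, assuming the earnings table from above:

-- built-in ordered-set aggregate (PostgreSQL 9.4+)
SELECT
    user_id,
    type_of_place,
    mode() WITHIN GROUP (ORDER BY extract(hour FROM time)) AS mode
FROM
    earnings
GROUP BY
    user_id,
    type_of_place;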
Assuming order by money
For top5_places, there are several approaches, but the quickest is probably to use PostgreSQL's builtin array_agg function, and take the first 5 elements:
SELECT
...
(array_agg(place ORDER BY money_earned DESC))[1:5] AS top5_places -- Add this expression
FROM
...
One alternative is to define another aggregate called (for instance) top5, which performs the same function. This could be more efficient if there are many distinct places for each user/type of place combination, since it can stop accumulating after the first 5, whereas the above expression will generally build a complete array of all places, and then truncate to the first 5.
This assumes that a place has a unique earnings entry for each user/type combination. If a place can occur more than once, and you want to sort by sum(money_earned) for each place, then you need to use a subquery like in the examples below...
Order by counts
Ok, so the places should be ordered by how often they occur. Here's a quick way, which uses a couple of subqueries -- add this as an expression to the select-clause of the above query:
(SELECT
(array_agg(place ORDER BY cnt DESC))[1:5]
FROM
(SELECT place, count(*) FROM earnings AS t2
WHERE t2.user_id = earnings.user_id AND t2.type_of_place = earnings.type_of_place
GROUP BY place) AS s (place, cnt)
) AS top5_places
The inner subquery called s evaluates to a table of each place for that user/type combination, and the number of times it occurs (which I've called cnt). These are then fed to array_agg in descending order of that count.
I suspect there could be much neater (and probably more efficient) ways of writing it. If not, then I would recommend trying to move this complicated expression into a function or aggregate, if you can...
Histogram of places in each hour
We'll use a similar expression, which will return the array of counts, ordered by hour:
(SELECT
array_agg(cnt ORDER BY hour DESC)
FROM
(SELECT extract(hour FROM time), count(*) FROM earnings AS t2
WHERE t2.user_id = earnings.user_id AND t2.type_of_place = earnings.type_of_place
GROUP BY 1) AS s (hour, cnt)
) AS hourly_histogram
(Add that to the select-clause of the original query.)
First, I've been using MySQL forever and am now upgrading to PostgreSQL. The SQL syntax is much stricter and some behavior is different, hence my question.
I've been searching around for how to merge rows in a postgresql query on a table such as
id | name | amount
0 | foo | 12
1 | bar | 10
2 | bar | 13
3 | foo | 20
and get
name | amount
foo | 32
bar | 23
The closest I've found is Merge duplicate records into 1 records with the same table and table fields
sql returning duplicates of 'name':
scope :tallied, lambda { group(:name, :amount).select("charges.name AS name,
SUM(charges.amount) AS amount,
COUNT(*) AS tally").order("name, amount desc") }
What I need is
scope :tallied, lambda { group(:name, :amount).select("DISTINCT ON(charges.name) charges.name AS name,
SUM(charges.amount) AS amount,
COUNT(*) AS tally").order("name, amount desc") }
except that, rather than returning the first row for a given name, it should return a mash of all rows with a given name (amounts added)
In MySQL, appending .group(:name) (without needing the initial group) to the select would work as expected.
This seems like an everyday sort of task which should be easy. What would be a simple way of doing this? Please point me on the right path.
P.S. I'm trying to learn here (so are others), don't just throw sql in my face, please explain it.
I've no idea what RoR is doing in the background, but I'm guessing that group(:name, :amount) will run a query that groups by name, amount. The one you're looking for is group by name:
select name, sum(amount) as amount, count(*) as tally
from charges
group by name
If you append amount to the group by clause, the query will do just that -- i.e. count(*) would return the number of times each amount appears per name, and the sum() would return that number times that amount.
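Applied to the sample table above, grouping by name alone collapses the four rows into two: foo gets sum(amount) = 12 + 20 = 32 with a tally of 2, and bar gets 10 + 13 = 23 with a tally of 2, which is exactly the desired result.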