Calculate performance / percentage - SQL

I have an InfluxDB bucket full of stock prices and have built some nice dashboards in Grafana.
I have been trying for hours to get a chart in place that shows the performance in percent instead of the stock price, but it does not work.
The query to get the price is very simple:
SELECT last("price") FROM "stock_data" WHERE ("Name" = 'XYZ') AND $timeFilter GROUP BY time($__interval) fill(previous)
How do I get a performance chart out of it? I tried this (and I believe it is close):
SELECT ( last("price") / first(price)) * 100 FROM "stock_data" WHERE ("Name" = 'XYZ') AND $timeFilter GROUP BY time($__interval) fill(previous)
The issue comes from the / first("price") part: with GROUP BY time($__interval), first() is evaluated per interval.
All I really need is the first value of the whole query range.
Any help?
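One workaround, as a sketch (assuming InfluxQL / InfluxDB 1.x as in the queries above, and a Grafana version that can do math across queries): keep the per-interval query as it is, add a second query that returns only the opening price of the selected range, and do the division on the Grafana side (for example with a math expression or transformation), since as far as I know InfluxQL cannot divide each point by a single value computed over the whole range:
SELECT first("price") FROM "stock_data" WHERE ("Name" = 'XYZ') AND $timeFilter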

Related

Running Sum Query

I'm new to Access and I'm trying to develop my own Inventory Management Database.
I'm trying to make a query that could display a running total of the Inventory on Hand as of a specific date. This is how my table looks:
It's sorted by ITEM_ID and then TRANDATE in ascending order. I'd like to add a calculated field beside the NET field that shows a running total for the specific ITEM_ID up to a specific date. Negative numbers in the NET field represent a sale, while positive ones represent a purchase. I tried using the DSum function, as it is widely recommended for creating a running sum field. My expression is this:
DSum([NET],"InvtyTransT", "[ITEM_ID]=" & [ITEM_ID] And "[TRANDATE]<=#" & [TRANDATE] & "#")
But it only shows the total of the NET field (6827) in each record, like this:
What I need is like this:
(I used an IF function in Excel to compute this.)
Please help. I think I might have missed something in my expression. I've tried revising it several times and it always gives me the same wrong answer in every record.
Thanks in advance.
Try a correlated sub-query.
SELECT t.*,
       (SELECT SUM(t2.NET)
        FROM InvtyTransT AS t2
        WHERE t2.TRANDATE <= t.TRANDATE
          AND t2.ITEM_ID = t.ITEM_ID) AS rSUM
FROM InvtyTransT AS t;
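As an aside, the original DSum() expression can also be made to work: the whole criteria must be built as a single string (the And belongs inside the quotes) and the first argument is the field name as a string. A rough sketch, assuming ITEM_ID is numeric and TRANDATE is a Date/Time field:
DSum("NET", "InvtyTransT", "[ITEM_ID]=" & [ITEM_ID] & " And [TRANDATE]<=#" & [TRANDATE] & "#")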

Grouping a percentage calculation in Postgres/Redshift

I keep running into the same problem over and over again; hoping someone can help...
I have a large table with a category column that has 28 distinct donkey breeds, and I'm counting two specific values grouped by each of those categories in CTEs like this:
WITH totaldonkeys AS (
    SELECT donkeybreed,
           COUNT(*) AS total
    FROM donkeytable1
    GROUP BY donkeybreed
),
sickdonkeys AS (
    SELECT donkeybreed,
           COUNT(*) AS totalsick
    FROM donkeytable1
    JOIN donkeyhealth ON donkeytable1.donkeyid = donkeyhealth.donkeyid
    WHERE donkeyhealth.sick IS TRUE
    GROUP BY donkeybreed
),
My goal is to end up with a table that has, primarily, the percentage of sick donkeys for each breed, but I always end up struggling with not being able to GROUP BY without using an aggregate function, which I cannot do here:
SELECT (CAST(sickdonkeys.totalsick AS float) / totaldonkeys.total) * 100 AS percentsick,
       totaldonkeys.donkeybreed
FROM totaldonkeys, sickdonkeys
GROUP BY totaldonkeys.donkeybreed
When I run this I end up with 28 results for each breed of donkey: one of them correct, I believe, but obviously hundreds of useless data points.
I know I'm probably being really dumb here, but I keep hitting this same problem again and again with new donkey data. I should obviously be structuring the whole thing a different way, because you just can't write this final query without an aggregate function; I think I must be missing something significant.
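For what it's worth, here is a sketch of one way to finish the CTE approach above: join the two CTEs on donkeybreed instead of cross-joining them, and no second GROUP BY is needed because each CTE already returns one row per breed:
WITH totaldonkeys AS (
    SELECT donkeybreed, COUNT(*) AS total
    FROM donkeytable1
    GROUP BY donkeybreed
),
sickdonkeys AS (
    SELECT donkeybreed, COUNT(*) AS totalsick
    FROM donkeytable1
    JOIN donkeyhealth ON donkeytable1.donkeyid = donkeyhealth.donkeyid
    WHERE donkeyhealth.sick IS TRUE
    GROUP BY donkeybreed
)
SELECT t.donkeybreed,
       (s.totalsick::float / t.total) * 100 AS percentsick
FROM totaldonkeys t
JOIN sickdonkeys s ON s.donkeybreed = t.donkeybreed;  -- breeds with no sick donkeys drop out here; a LEFT JOIN plus COALESCE keeps them at 0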
You can easily count the proportion that are sick using the donkeyhealth table:
SELECT d.donkeybreed,
       AVG((dh.sick)::int) AS proportion_sick
FROM donkeytable1 d
JOIN donkeyhealth dh
  ON d.donkeyid = dh.donkeyid
GROUP BY d.donkeybreed
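Since the goal is a percentage rather than a 0-1 proportion, the same query can simply be scaled; a minor variant of the above:
SELECT d.donkeybreed,
       AVG((dh.sick)::int) * 100 AS percent_sick  -- 0-100 instead of 0-1
FROM donkeytable1 d
JOIN donkeyhealth dh
  ON d.donkeyid = dh.donkeyid
GROUP BY d.donkeybreed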

MS Access - Extract single value for calculation

This is an MS Access-related question.
I get the Collateral divided 50 times because I have 50 rows in my ExchangeRates table... however, the sub-SELECT is supposed to extract only the value associated with CurrencyCode = "EUR". How can I change the statement below so the division is applied only once?
SELECT tbl_A.Security, tbl_A.Typ,
       Sum([Collateral] / (SELECT tblExchangeRates.RateToUSD
                           FROM tblExchangeRates
                           WHERE tblExchangeRates.CurrencyCode = "EUR")) AS CollateralUSD
FROM tbl_A, tblExchangeRates
GROUP BY tbl_A.Security, tbl_A.Typ
HAVING tbl_A.Typ = "PR";
It looks like this is what I was after: just an alias. SQL gurus, you are welcome to review.
SELECT tbl_A.Security, Sum([Collateral] / [RateToUSD]) AS CollateralUSD
FROM tbl_A, (SELECT RateToUSD
             FROM tblExchangeRates
             WHERE CurrencyCode = 'EUR') AS MyAliasQ
WHERE tbl_A.Typ = "PR"
GROUP BY tbl_A.Security;
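An alternative worth mentioning, as a sketch only (not tested against this database): Access's DLookUp() function can fetch the single EUR rate directly, which avoids putting a second table source in the FROM clause at all:
SELECT tbl_A.Security,
       Sum([Collateral] / DLookUp("RateToUSD", "tblExchangeRates", "CurrencyCode='EUR'")) AS CollateralUSD
FROM tbl_A
WHERE tbl_A.Typ = "PR"
GROUP BY tbl_A.Security;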

Calculate First Time Buyers and Repeating Buyers using MAQL Queries on the GoodData Platform

I have recently been working with the GoodData platform. I don't have much experience with MAQL yet, but I am working on it. I have built some metrics and reports in GoodData, and recently I tried to create metrics for calculating Total Buyers, First Time Buyers and Repeating Buyers. I created these three reports and they work perfectly, but when I try to add an order date parent filter, the First Time Buyers and Repeating Buyers values come out wrong. Please have a look at the following queries.
I can find the correct values using SQL queries.
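For reference, the SQL logic I have in mind is roughly this (the table and column names here are placeholders, not the real ones):
SELECT SUM(CASE WHEN order_count = 1 THEN 1 ELSE 0 END) AS first_time_buyers,
       SUM(CASE WHEN order_count > 1 THEN 1 ELSE 0 END) AS repeating_buyers
FROM (SELECT CustomerNo, COUNT(NexternalOrderNo) AS order_count
      FROM orders  -- placeholder table name
      GROUP BY CustomerNo) AS per_customer;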
MAQL Queries:
TOTAL ORDERS:
SELECT COUNT(NexternalOrderNo) BY CustomerNo WITHOUT PF
TOTAL FIRSTTIMEBUYERS:
SELECT COUNT(CustomerNo) WHERE (TOTAL ORDER WO PF = 1) WITHOUT PF
TOTAL REPEATINGBUYERS:
SELECT COUNT(CustomerNo) WHERE (TOTAL ORDER WO PF > 1) WITHOUT PF
Can anyone suggest the right logic for finding these values using MAQL?
It's not clear what you want to do. If you could provide more details about the report you need to get, it would be great.
It's not necessary to put "WITHOUT PF" into the metrics. This clause prevents the filter from being applied, so when you remove it, the parent filter will be used and you will probably get what you want. Specifically, modify this:
SELECT COUNT(CustomerNo) WHERE (TOTAL ORDER WO PF>1) WITHOUT PF
to:
SELECT COUNT(CustomerNo) WHERE (TOTAL ORDER WO PF>1)
The only thing you are missing here is "ALL IN ALL OTHER DIMENSIONS", aka "ALL OTHER".
This keyword locks and overrides all attributes in all other dimensions, keeping them from having any effect on the metric. You can read more about it in the MAQL Reference Guide.
FIRSTTIMEBUYERS:
SELECT COUNT(CustomerNo)
WHERE (SELECT IFNULL(COUNT(NexternalOrderNo), 0) BY Customer ID, ALL OTHER) = 1
REPEATINGBUYERS:
SELECT COUNT(CustomerNo)
WHERE (SELECT IFNULL(COUNT(NexternalOrderNo), 0) BY Customer ID, ALL OTHER) > 1

MySQL - MAX() returns wrong result

I tried this query on MySQL server (5.1.41)...
SELECT max(volume), dateofclose, symbol, volume, close, market FROM daily group by market
I got this result:
max(volume)   dateofclose  symbol    volume   close  market
287031500     2010-07-20   AA.P         500   66.41  AMEX
242233000     2010-07-20   AACC       16200    3.98  NASDAQ
1073538000    2010-07-20   A        4361000   27.52  NYSE
2147483647    2010-07-20   AAAE.OB      400    0.01  OTCBB
437462400     2010-07-20   AAB.TO     31400    0.37  TSX
61106320      2010-07-20   AA.V           0    0.24  TSXV
As you can see, the maximum volume is VERY different from the 'real' value of the volume column?!?
The volume column is defined as int(11) and I have 2 million rows in this table, which is very far from the MyISAM limits, so I can't believe that is the problem. What is also strange is that the rows shown all come from the same date (dateofclose). If I force a specific date with a WHERE clause, the same symbol comes back with a different max(volume) result. This is pretty weird...
Need some help here!
UPDATE:
Here's my edited "working" query:
SELECT a.*
FROM daily a
INNER JOIN (
    SELECT market, MAX(volume) AS max_volume
    FROM daily
    WHERE dateofclose = '20101108'
    GROUP BY market
) b ON a.market = b.market
   AND a.volume = b.max_volume
So this gives me, per market, the stock with the highest volume (for Nov 8, 2010).
As you can see, the maximum volume is VERY different from the 'real' value of the volume column?!?
This is because MySQL, rather bizarrely, lets you select columns that are neither grouped nor aggregated, and it does not pick them in a sensible way.
Selecting MAX(column) will get you the maximum value for that column, but selecting other columns (or column itself) will not necessarily give you the row that the found MAX() value is in. You essentially get an arbitrary (and usually useless) row back.
Here's a thread with some workarounds using subqueries:
How can I SELECT rows with MAX(Column value), DISTINCT by another column in SQL?
This is a subset of the "greatest n per group" problem. (There is a tag with that name but I am a new user so I can't retag).
This is usually best handled with an analytic function, but can also be written with a join to a sub-query using the same table. In the sub-query you identify the max value, then join to the original table on the keys to find the row that matches the max.
Assuming that {dateofclose, symbol, market} is the grain at which you want the maximum volume, try:
SELECT a.*, b.max_volume
FROM daily a
JOIN (
    SELECT dateofclose, symbol, market, MAX(volume) AS max_volume
    FROM daily
    GROUP BY dateofclose, symbol, market
) b
  ON a.dateofclose = b.dateofclose
 AND a.symbol = b.symbol
 AND a.market = b.market
Also see this post for reference.
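For completeness: on MySQL 8.0 or later (the question is on 5.1.41, so this only applies after an upgrade), the greatest-n-per-group pattern can also be written with a window function. A minimal sketch:
SELECT dateofclose, symbol, volume, close, market
FROM (
    SELECT d.*,
           ROW_NUMBER() OVER (PARTITION BY market ORDER BY volume DESC) AS rn  -- rank rows within each market by volume
    FROM daily d
) ranked
WHERE rn = 1;  -- keep only the top-volume row per market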
Did you try adjusting your query to include Symbol in the grouping?
SELECT MAX(volume), dateofclose, symbol, volume, close, market
FROM daily
GROUP BY market, symbol