I am trying to get the unique values out of a table. The table holds the following as a time log:
id | time | code | user
1 | 7000 | xxxx | 1
2 | 7000 | xxxx | 1
3 | 7500 | xxxx | 2
4 | 7000 | xxxx | 3
What I would like to know is how many unique users have used the code at a given time. For time 7000, for example, it should say 2, but with the query I wrote I get 3:
SELECT Time, COUNT(*) as total, Code
FROM dbo.AnalyticsPause
WHERE CODE = 'xxxx'
GROUP BY id, Time, Code
Result:
time | Count | code
7000 | 3 | xxxx
7500 | 1 | xxxx
whereas I would like to have:
time | Count | code
7000 | 2 | xxxx
7500 | 1 | xxxx
How would I be able to add a DISTINCT on the user column and still count everything grouped by time?
COUNT(*) counts the total number of rows in the group. You should instead count the distinct users:
SELECT Time, COUNT(DISTINCT user) as total, Code
FROM dbo.AnalyticsPause
WHERE code = 'xxxx'
GROUP BY Time, Code
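One caveat, offered as an assumption since the platform isn't stated: the dbo schema suggests SQL Server, where USER is a reserved keyword, so if the column really is named user it may need to be bracketed:
SELECT Time, COUNT(DISTINCT [user]) as total, Code  -- [user] avoids the keyword clash
FROM dbo.AnalyticsPause
WHERE code = 'xxxx'
GROUP BY Time, Code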
I am trying to sum all the columns that have the same ID number within a specified date range, but it keeps giving me duplicated values.
select pr.product_sku,
       pr.product_name,
       pr.brand,
       pr.category_name,
       pr.subcategory_name,
       a.stock_on_hand,
       sum(pr.pageviews) as page_views,
       sum(acquired_subscriptions) as acquired_subs,
       sum(acquired_subscription_value) as asv_value
from dwh.product_reporting pr
join dm_product.product_data_livefeed a
  on pr.product_sku = a.product_sku
where pr.fact_day between '2022-05-01' and '2022-05-30'
  and pr.pageviews > '0'
  and pr.acquired_subscription_value > '0'
  and store_id = 1
group by pr.product_sku,
         pr.product_name,
         pr.brand,
         pr.category_name,
         pr.subcategory_name,
         a.stock_on_hand;
This is supposed to give me:
Sum of all KPI values for a distinct product SKU
Example table:
| Date       | product_sku | page_views | number_of_subs |
|------------|-------------|------------|----------------|
| 2022-01-01 | 1           | 110        | 50             |
| 2022-01-25 | 2           | 1000       | 40             |
| 2022-01-20 | 3           | 2000       | 10             |
| 2022-01-01 | 1           | 110        | 50             |
| 2022-01-25 | 2           | 1000       | 40             |
| 2022-01-20 | 3           | 2000       | 10             |
Expected Output:
| product_sku | page_views | number_of_subs |
|-------------|------------|----------------|
| 1           | 220        | 100            |
| 2           | 2000       | 80             |
| 3           | 4000       | 20             |
Sorry I had to edit to add the table examples
Since you're not listing the dupes (assuming they are truly appearing as duplicate rows, and not just multiple rows with different values), I'll offer that there may be something else at play here. I would suggest applying TRIM(UPPER()) to every string value in your result set that is part of the GROUP BY clause, as you might be dealing with either case sensitivity or trailing blanks being treated as unique values in the query.
Assuming all the columns are character based:
select trim(upper(pr.product_sku)),
       trim(upper(pr.product_name)),
       trim(upper(pr.brand)),
       trim(upper(pr.category_name)),
       trim(upper(pr.subcategory_name)),
       sum(pr.pageviews) as page_views,
       sum(acquired_subscriptions) as acquired_subs,
       sum(acquired_subscription_value) as asv_value
from dwh.product_reporting pr
where pr.fact_day between '2022-05-01' and '2022-05-30'
  and pr.pageviews > '0'
  and pr.acquired_subscription_value > '0'
  and store_id = 1
group by trim(upper(pr.product_sku)),
         trim(upper(pr.product_name)),
         trim(upper(pr.brand)),
         trim(upper(pr.category_name)),
         trim(upper(pr.subcategory_name));
Thank you all for your help, I found out where the problem was. It was mainly in the GROUP BY: when I removed all the other column names and left only the product_sku column, it worked as required.
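For reference, a minimal sketch of what that fixed query might look like, assuming only the SKU and the summed KPIs are needed and that store_id lives on product_reporting:
-- only the SKU is grouped on; everything else selected is an aggregate
select pr.product_sku,
       sum(pr.pageviews) as page_views,
       sum(pr.acquired_subscriptions) as acquired_subs,
       sum(pr.acquired_subscription_value) as asv_value
from dwh.product_reporting pr
where pr.fact_day between '2022-05-01' and '2022-05-30'
  and pr.pageviews > 0
  and pr.acquired_subscription_value > 0
  and pr.store_id = 1   -- assuming store_id is a column of product_reporting
group by pr.product_sku;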
I have a table that looks like this:
ID | Value | Date
1 | 3000 | 25/06
1 | 3000 | 26/06
1 | 2000 | 12/07
2 | 4000 | 23/12
2 | 4000 | 12/12
3 | 2000 | 01/11
3 | 2000 | 23/04
3 | 4000 | 23/05
3 | 4000 | 04/11
Now I want to display unique values for a specific ID and how many times each specific value appears in the table for a specific ID.
The desired output for
select ### from tablename where ID = 1; would be:
distinct Value | count
3000 | 2
2000 | 1
and for:
select ### from tablename where ID = 3;
distinct Value | count
2000 | 2
4000 | 2
Can this be done with a single select statement (for each ID)?
Maybe something like this:
select ID
, Value
, Count(*) AS CountOfValues
from tablename
group by ID, Value
Just group by both ID and Value and count how many times each value appears within those groups.
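If you only want the counts for one specific ID at a time (as in the question), the same query just needs a WHERE clause; a minimal sketch, assuming the table really is called tablename:
-- counts per distinct Value for a single ID
select Value
     , Count(*) AS CountOfValues
from tablename
where ID = 1
group by Value;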
This is my table...
+----+--------+
| id | amount |
+----+--------+
| 1 | 100 |
| 1 | 50 |
| 1 | 0 |
| 2 | 500 |
| 2 | 100 |
| 3 | 300 |
| 3 | -2 |
| 4 | 400 |
| 4 | 200 |
+----+--------+
I would like to choose from it each value of id that does not have a nonpositive (i.e. negative or 0) value associated with it, and the smallest amount associated with that id.
If I use this code...
SELECT DISTINCT id, amount
FROM table t
WHERE amount = (SELECT MIN(amount) FROM table WHERE id= t.id)
... then these results show...
+----+--------+
| id | amount |
+----+--------+
| 1 | 0 |
| 2 | 100 |
| 3 | -2 |
| 4 | 200 |
+----+--------+
But what I want the statement to return is...
+----+--------+
| id | amount |
+----+--------+
| 2 | 100 |
| 4 | 200 |
+----+--------+
Just add amount > 0 to your query; you missed that condition. Because the subquery takes MIN(amount) over all rows for the id (including negative and zero ones), any id that has a nonpositive amount will fail the amount > 0 check on its minimum row and drop out entirely. That should do it.
SELECT DISTINCT id, amount
FROM table t
WHERE amount = (SELECT MIN(amount) FROM table WHERE id = t.id)
  AND amount > 0;
If you want to display only the ids where min(amount) > 0, then use this.
SELECT id, min(amount) as amount
FROM table t
group by id
having min(amount) > 0;
Please try the following...
SELECT id,
       MIN( amount )
FROM table
GROUP BY id
HAVING MIN( amount ) > 0
ORDER BY id;
This statement starts by grouping the records by each value of id and choosing the smallest value of amount for that GROUP / id.
The HAVING clause then discards any id whose smallest amount is not larger than 0, so an id with any negative or zero amount drops out entirely.
The resulting pairs of values are then sorted by id and returned to the user.
If you have any questions or comments, then please feel free to post a Comment accordingly.
I'm facing a problem that I can't wrap my head around, so maybe you can help me solve it!?
I have one table:
id | datetime | property | house_id | household_id | plug_id | value
---+--------------------+----------+----------+--------------+---------+--------
1 |2013-08-31 22:00:01 | 0 | 1 | 1 | 1 | 15
2 |2013-08-31 22:00:01 | 0 | 1 | 1 | 3 | 3
3 |2013-08-31 22:00:01 | 0 | 1 | 2 | 1 | 21
4 |2013-08-31 22:00:01 | 0 | 1 | 2 | 2 | 1
5 |2013-08-31 22:00:01 | 0 | 2 | 1 | 3 | 53
6 |2013-08-31 22:00:02 | 0 | 2 | 2 | 4 | 34
7 |2013-08-31 22:00:02 | 0 | 1 | 1 | 1 | 16
...
The table holds electricity consumption measurements per second for multiple houses that have multiple households (apartments) in them. Each household has multiple electricity plugs. None of the houses or households have a unique id but are identified by a combination of house_id and household_id.
1) I need a SQL query that can give me a list of all the unique households.
2) I want to use the list from 1) to create a SQL query that gives me the highest value for each household (the value is cumulative, so the latest datetime holds the highest value). I need a total value (SUM) for each household (the sum of all the plugs in that household), i.e. a list of households with their total electricity consumption.
Is this even possible? I'm using SQL Server 2012 and the table has 100,000,000 rows.
If I understand correctly, you want the sum of the highest values of value, for house/household/plug combinations. This may do what you want:
select house_id, household_id, sum(maxvalue)
from (select house_id, household_id, plug_id, max(value) as maxvalue
from consumption
group by house_id, household_id, plug_id
) c
group by house_id, household_id;
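For the first part of the question (just the list of unique households on its own), a plain DISTINCT over the identifying pair should be enough; a sketch, reusing the consumption table name from the query above:
-- every distinct house/household combination
select distinct house_id, household_id
from consumption;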
According to your description, I think you can use this query:
select house_id, household_id, max(value), sum(value)
from your_table_name
group by house_id, household_id;
I have a table with Bills; each Bill can have 20 subregisters (subrows).
Example (top 5 per Bill; there could be up to 60,000 bills):
(TABLE ONE)
Bill | SubRow |
-----+------------+
1000 | 1 |
1000 | 2 |
1000 | 3 |
1000 | 4 |
1000 | 5 |
1001 | 1 |
1001 | 2 |
1001 | 3 |
1001 | 4 |
1001 | 5 |
In another table, I have the Bill number and a Range of subrows
Example:
(TABLE TWO)
Bill | InitialRange | FinalRange |
-----+--------------+------------+
1000 | 1 | 2 |
1000 | 4 | 5 |
1001 | 3 | 5 |
In a query I want to achieve the following:
To show, from TABLE ONE, all records NOT between the ranges in TABLE TWO.
That means I should get the following set:
Bill | SubRow |
-----+------------+
1000 | 3 |
1001 | 1 |
1001 | 2 |
What I have so far:
Select Bill, SubRow
from TABLE_ONE
LEFT join TABLE_TWO ON TABLE_ONE.Bill = TABLE_TWO.Bill
where SubRow < InitialRange and SubRow > FinalRange
but the second range row in TABLE_TWO overrides the first one for bill 1000.
Any idea on how to achieve this?
Note: if the tables appear messed up, I will try to fix them.
Image with Example:
http://postimg.org/image/ymc3z2uzx/
Try this:
SELECT *
FROM TABLE_ONE
WHERE NOT EXISTS
    (SELECT *
     FROM TABLE_TWO
     WHERE TABLE_ONE.Bill = TABLE_TWO.Bill
       AND TABLE_ONE.SubRow BETWEEN TABLE_TWO.InitialRange AND TABLE_TWO.FinalRange);
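For comparison, a sketch of the same anti-join written with the LEFT JOIN the original attempt started from (same assumed table and column names): the range test moves into the ON clause, and only rows that found no matching range are kept:
SELECT t1.Bill, t1.SubRow
FROM TABLE_ONE t1
LEFT JOIN TABLE_TWO t2
    ON t1.Bill = t2.Bill
   AND t1.SubRow BETWEEN t2.InitialRange AND t2.FinalRange
WHERE t2.Bill IS NULL;   -- keep only SubRows not covered by any range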