I have a specific scenario: I need to group data in a result set based on a specific format. Below is what my data looks like.
--------------------------------
ID Value
--------------------------------
1 2
2 1
3 1
4 3
5 1
6 1
7 6
8 9
9 1
10 1
I need to group the result set based on the 'Value' column. Each non-'1' value starts a new group, which extends through the run of '1's that follows it; consecutive non-'1's each get their own group. My expected result should be something like this.
------------------------------------
ID Value Group
------------------------------------
1 2 Group1
2 1 Group1
3 1 Group1
4 3 Group2
5 1 Group2
6 1 Group2
7 6 Group3
8 9 Group4
9 1 Group4
10 1 Group4
Groups start with a non-1 value. You can define them by using a cumulative sum:
select t.*,
sum(case when value <> 1 then 1 else 0 end) over (order by id) as grp
from t;
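To see the cumulative-sum trick run end to end, here is a minimal sketch using Python's sqlite3 module (SQLite >= 3.25 is assumed for window function support); the table name t and the data are taken from the question.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (id INTEGER, value INTEGER)")
rows = [(1, 2), (2, 1), (3, 1), (4, 3), (5, 1),
        (6, 1), (7, 6), (8, 9), (9, 1), (10, 1)]
conn.executemany("INSERT INTO t VALUES (?, ?)", rows)

# Every non-1 value starts a new group, so a running count of
# non-1 rows (ordered by id) yields the group number directly.
result = conn.execute("""
    SELECT t.*,
           SUM(CASE WHEN value <> 1 THEN 1 ELSE 0 END)
               OVER (ORDER BY id) AS grp
    FROM t
""").fetchall()
for id_, value, grp in result:
    print(id_, value, "Group%d" % grp)
```

The running sum increments exactly at ids 1, 4, 7, and 8, reproducing Group1 through Group4 from the expected result.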
I have a table that looks something like this:
id name status
2 a 1
2 a 2
2 a 3
2 a 2
2 a 1
3 b 2
3 b 1
3 b 2
3 b 1
and the result I want is:
id name total count count(status3) count(status2) count(status1)
2 a 5 1 2 2
3 b 4 0 2 2
Please help me get this result somehow. I can get id, name, or one of the counts at a time, but I don't know how to write a query that produces this whole table at once.
Here's a simple solution using group by and conditional aggregation with case when.
select id
,count(*) as 'total count'
,count(case status when 3 then 1 end) as 'count(status3)'
,count(case status when 2 then 1 end) as 'count(status2)'
,count(case status when 1 then 1 end) as 'count(status1)'
from t
group by id
id  total count  count(status3)  count(status2)  count(status1)
2   5            1               2               2
3   4            0               2               2
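The conditional-aggregation answer can be checked with a small Python sqlite3 script; COUNT skips NULLs, so a CASE with no ELSE branch counts only the matching rows. Table name t is from the answer.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (id INTEGER, name TEXT, status INTEGER)")
rows = [(2, 'a', 1), (2, 'a', 2), (2, 'a', 3), (2, 'a', 2), (2, 'a', 1),
        (3, 'b', 2), (3, 'b', 1), (3, 'b', 2), (3, 'b', 1)]
conn.executemany("INSERT INTO t VALUES (?, ?, ?)", rows)

# CASE without ELSE yields NULL for non-matching rows,
# and COUNT() ignores NULLs, so each column counts one status.
result = conn.execute("""
    SELECT id, name,
           COUNT(*)                             AS total_count,
           COUNT(CASE status WHEN 3 THEN 1 END) AS status3,
           COUNT(CASE status WHEN 2 THEN 1 END) AS status2,
           COUNT(CASE status WHEN 1 THEN 1 END) AS status1
    FROM t
    GROUP BY id, name
    ORDER BY id
""").fetchall()
```

This reproduces the two expected rows, including the name column from the desired output.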
Here's a way to solve it using pivot.
select *
from (select status,id, count(*) over (partition by id) as "total count" from t) tmp
pivot (count(status) for status in ([1],[2],[3])) pvt
id  total count  1  2  3
3   4            2  2  0
2   5            2  2  1
I have such tables:
Group - combination of TypeId and ZoneId
ID TypeID ZoneID
-- -- --
1 1 1
2 1 2
3 2 1
4 2 2
5 2 3
6 3 3
Object
ID GroupId
-- --
1 1
2 1
3 2
4 3
5 3
6 3
I want to build a query that groups these tables by TypeId and ZoneId, with the number of objects that have each specific combination of these fields:
ResultTable
TypeId ZoneId Number of objects
-- -- --
1 1 2
1 2 1
2 1 3
2 2 1
2 3 0
3 3 0
Query for this:
SELECT
    grp.TypeId,
    grp.ZoneId,
    COUNT(obj.ID) AS NumberOfObjects
FROM [Group] grp
LEFT JOIN [Object] obj ON obj.GroupId = grp.ID
GROUP BY grp.TypeId, grp.ZoneId
ORDER BY grp.TypeId
But! I want to add summarize row after each group, and make it like:
ResultTableWithSummary
TypeId ZoneId Number of objects
-- -- --
1 1 2
1 2 1
Summary (empty field) 3
2 1 3
2 2 1
2 3 0
Summary (empty field) 4
3 3 0
Summary (empty field) 0
I can get most of the way there with GROUP BY ROLLUP(TypeId, ZoneId):
TypeId ZoneId Number of objects
-- -- --
1 1 2
1 2 1
1 null 3
2 1 3
2 2 1
2 3 0
2 null 4
3 3 0
3 null 0
but I don't know how to replace the TypeId in the summary rows with the text "Summary".
How can I do this?
The simplest method is coalesce(), but you need to be sure the types match:
SELECT COALESCE(CONVERT(VARCHAR(255), TypeId), 'Summary') AS TypeId,
. . .
This is not the most general method, because it does not handle real NULL values in the GROUP BY keys. That doesn't seem to be an issue in this case. If it were, you could use a CASE expression with GROUPING().
EDIT:
For your particular variant (which I find strange), you can use:
SELECT (CASE WHEN TypeId IS NULL OR ZoneId IS NULL
             THEN 'Summary'
             ELSE CONVERT(VARCHAR(255), TypeId)
        END) AS TypeId,
. . .
In practice, I would use something similar to the COALESCE() in both columns, so I don't lose the information on what the summary is for.
Based on this thread (Check rows for monotonically increasing values), I have an additional requirement:
The value-column represents a counter.
In my application, due to some annoying reason, the counter value gets reset from time to time, i.e. starts from zero. For data evaluation, I need the accumulated value of all counts. My idea was to create an additional column that contains the accumulated value.
As long as no reset occurs, the value of the new column is the same as of the original value column. After a reset, the value of the new column is the latest accumulated value + the current counter value. Multiple resets may occur in the data. Once again, rows with the same "name" belong to the same measurement and have to be handled sorted by meas_date.
This is the original data:
id name meas_date value
1 name1 2018/01/01 1
2 name1 2018/01/02 2
3 name2 2018/01/04 2
4 name1 2018/01/03 1
5 name1 2018/01/04 5
6 name2 2018/01/05 4
7 name2 2018/01/06 2
8 name1 2018/01/05 2
Desired result would be
id name meas_date value accumulated_value
1 name1 2018/01/01 1 1
2 name1 2018/01/02 2 2
3 name2 2018/01/04 2 2
4 name1 2018/01/03 1 3
5 name1 2018/01/04 5 7
6 name2 2018/01/05 4 4
7 name2 2018/01/06 2 6
8 name1 2018/01/05 2 9
The LAG function from the thread mentioned above is really helpful to find the rows where the counter value was reset. But now, I am struggling to combine this with the accumulation of the values to get the overall counter values.
Thank you very much,
Christian
I think I found a solution that takes two steps:
-- 1. Set flag = 2 for all rows whose value comes right before a reset
update TEST dst set dst.flag = (
with src as (
SELECT id, name, value,
CASE WHEN value < value_next THEN 0 ELSE 2 END AS flag
FROM (
SELECT id, name, value,
LEAD(value, 1, 0) OVER (PARTITION BY name order by meas_date) AS value_next
FROM TEST
)
)
select src.flag from src where dst.id = src.id
)
-- 2. Use the Oracle MODEL clause to calculate the accumulated values
SELECT name, meas_date, value, offset, value+offset as accumulated_value
FROM TEST
MODEL RETURN UPDATED ROWS
PARTITION BY (name)
DIMENSION BY (meas_date, flag)
MEASURES (value, 0 as offset)
RULES (
offset[meas_date, ANY] ORDER BY meas_date = NVL(sum(NVL(value,0))[meas_date < CV(meas_date), flag=2],0)
);
After step 1:
id name meas_date value flag
1 name1 01.01.18 1 0
2 name1 02.01.18 2 2
3 name2 04.01.18 2 0
4 name1 03.01.18 1 0
5 name1 04.01.18 5 2
6 name2 05.01.18 4 2
7 name2 06.01.18 2 2
8 name1 05.01.18 2 2
Output of step 2
name meas_date value offset accumulated_value
name1 01.01.18 1 0 1
name1 02.01.18 2 0 2
name1 03.01.18 1 2 3
name1 04.01.18 5 2 7
name1 05.01.18 2 7 9
name2 04.01.18 2 0 2
name2 05.01.18 4 0 4
name2 06.01.18 2 4 6
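An alternative single-query sketch that avoids the MODEL clause entirely (shown here in Python against SQLite >= 3.25, but the SQL is portable to any database with LAG and windowed SUM): treat each row as a delta over the previous counter value, fall back to the raw value when the counter dropped (a reset), then take a running sum of the deltas per name.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE test (id INTEGER, name TEXT, meas_date TEXT, value INTEGER)")
rows = [(1, 'name1', '2018-01-01', 1), (2, 'name1', '2018-01-02', 2),
        (3, 'name2', '2018-01-04', 2), (4, 'name1', '2018-01-03', 1),
        (5, 'name1', '2018-01-04', 5), (6, 'name2', '2018-01-05', 4),
        (7, 'name2', '2018-01-06', 2), (8, 'name1', '2018-01-05', 2)]
conn.executemany("INSERT INTO test VALUES (?,?,?,?)", rows)

result = conn.execute("""
    WITH d AS (
        SELECT *,
               LAG(value) OVER (PARTITION BY name ORDER BY meas_date) AS prev
        FROM test
    )
    SELECT id, name, meas_date, value,
           SUM(CASE WHEN prev IS NULL OR value < prev
                    THEN value          -- first row or reset: start over
                    ELSE value - prev   -- normal case: add the increase
               END) OVER (PARTITION BY name ORDER BY meas_date)
               AS accumulated_value
    FROM d
    ORDER BY id
""").fetchall()
```

One assumption: a reset is detected only when the counter actually drops, so a reset that happens to land on a value equal to or above the previous reading would be missed, exactly as with the LAG-based flag in step 1.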
Is this helpful?
select id, name, meas_date, value, sum(value) over(partition by meas_date order by meas_date, value ) from #temp
group by id, name, meas_date, value
order by meas_date, value
Sorry, I have no clue how to do this.
Here is some sample data.
Data
Name    Active  key
Name 1  1       1
Name 2  0       2
Name 3  1       3
Name 4  1       4
Name 5  1       5
Name 6  0       6
Name 7  1       7
Name 8  1       8
Data_Entry
Name    Active  Date    key
Name 1  1       Jan-15  1
Name 2  1       Feb-15  2
Name 1  1       Jan-14  1
Name 3  1       Feb-15  3
Name 3  0       Jan-14  3
Name 4  1       Feb-15  4
Name 5  1       Mar-15  5
Name 6  1       Apr-15  6
There are two tables here, Data and Data_Entry.
How do I get an output that shows only rows where data.active = '1' and data_entry.active = '1' for each key, together with the count of times that key shows up in data_entry?
I would want the output to be, for example, this, as it only shows active data that has an active entry in the data_entry table, and the count covers only the active entries from data_entry:
name last_date count
name 1 Jan-15 2
name 3 Feb-15 1
name 4 Feb-15 1
Name 5 Mar-15 1
Name 6 Apr-15 1
I think that should work for you:
SELECT
    name,
    MAX(date) AS last_date,
    COUNT(*) AS count
FROM (
    SELECT
        data.key,
        data.name,
        data_entry.date
    FROM data
    INNER JOIN data_entry ON data_entry.key = data.key
    WHERE data.active = 1 AND data_entry.active = 1
) subquery
GROUP BY key, name
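Here is that query run against SQLite via Python's sqlite3. The dates are rewritten as ISO strings (e.g. 'Jan-15' becomes '2015-01') so MAX() picks the latest one by plain string comparison, and "key" is quoted because it is a keyword.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE data (name TEXT, active INTEGER, "key" INTEGER);
    CREATE TABLE data_entry (name TEXT, active INTEGER, date TEXT, "key" INTEGER);
    INSERT INTO data VALUES
        ('Name 1',1,1),('Name 2',0,2),('Name 3',1,3),('Name 4',1,4),
        ('Name 5',1,5),('Name 6',0,6),('Name 7',1,7),('Name 8',1,8);
    INSERT INTO data_entry VALUES
        ('Name 1',1,'2015-01',1),('Name 2',1,'2015-02',2),
        ('Name 1',1,'2014-01',1),('Name 3',1,'2015-02',3),
        ('Name 3',0,'2014-01',3),('Name 4',1,'2015-02',4),
        ('Name 5',1,'2015-03',5),('Name 6',1,'2015-04',6);
""")

# Filter both tables on active = 1 first, then aggregate per key/name.
result = conn.execute("""
    SELECT name, MAX(date) AS last_date, COUNT(*) AS cnt
    FROM (
        SELECT data."key" AS k, data.name, data_entry.date
        FROM data
        INNER JOIN data_entry ON data_entry."key" = data."key"
        WHERE data.active = 1 AND data_entry.active = 1
    ) subquery
    GROUP BY k, name
    ORDER BY k
""").fetchall()
```

Note that Name 6 has Active = 0 in Data, so with this filter it does not appear, unlike the sample output in the question; if inactive Data rows with active entries should be kept, the data.active condition would have to be dropped.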
I have an SQL Server database that logs weather device sensor data.
The table looks like this:
Id DeviceId SensorId Value
1 1 1 42
2 1 1 3
3 1 2 30
4 2 2 0
5 2 1 1
6 3 1 26
7 3 1 23
8 3 2 1
The query should return the following:
Id DeviceId SensorId Value
2 1 1 3
3 1 2 30
4 2 2 0
5 2 1 1
7 3 1 23
8 3 2 1
For each device, each sensor should appear only once, i.e. the combination of the DeviceId and SensorId columns should be unique across rows.
Apologies if I'm not clear enough.
If you don't want to sum Value, as your desired result suggests, and just want to take an "arbitrary" row of each "DeviceId + SensorId" group:
WITH CTE AS
(
SELECT Id, DeviceId, SensorId, Value,
RN = ROW_NUMBER() OVER (PARTITION BY DeviceId, SensorId ORDER BY ID DESC)
FROM dbo.TableName
)
SELECT Id, DeviceId, SensorId, Value
FROM CTE
WHERE RN = 1
ORDER BY ID
This returns the row with the highest ID per group. You need to change ORDER BY ID DESC if you want a different result. Demo: http://sqlfiddle.com/#!6/8e31b/2/0 (your result)
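The same ROW_NUMBER() de-duplication can be run on SQLite (>= 3.25) from Python; the table name readings is a stand-in for dbo.TableName.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE readings "
             "(id INTEGER, device_id INTEGER, sensor_id INTEGER, value INTEGER)")
rows = [(1, 1, 1, 42), (2, 1, 1, 3), (3, 1, 2, 30), (4, 2, 2, 0),
        (5, 2, 1, 1), (6, 3, 1, 26), (7, 3, 1, 23), (8, 3, 2, 1)]
conn.executemany("INSERT INTO readings VALUES (?,?,?,?)", rows)

# ORDER BY id DESC inside the window makes rn = 1 the row with the
# highest id per (device_id, sensor_id) group, matching the answer.
result = conn.execute("""
    WITH cte AS (
        SELECT id, device_id, sensor_id, value,
               ROW_NUMBER() OVER (PARTITION BY device_id, sensor_id
                                  ORDER BY id DESC) AS rn
        FROM readings
    )
    SELECT id, device_id, sensor_id, value
    FROM cte
    WHERE rn = 1
    ORDER BY id
""").fetchall()
```

The surviving ids are 2, 3, 4, 5, 7, 8, which is exactly the expected result from the question.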