Transform dates from table into columns in MS Access query - sql

I have a table like below:
type_id    date          order
20         2021-06-23    123
20         2021-06-23    217
35         2021-06-23    121
35         2021-06-24    128
20         2021-06-24    55
35         2021-06-25    77
20         2021-06-26    72
20         2021-06-26    71
and want to create a query, only for type_id = 20, like this:

2021-06-23    2021-06-24    2021-06-26
123           55            72
217                         71
Is it possible to do this with SQL, without VBA?
If VBA is needed, do I need to create an extra table and add/delete columns every time?
Thank you for any idea.

You can use conditional aggregation. But this is a pain in MS Access because you need a sequential value. You can calculate one:
select max(iif(date = "2021-06-23", order, null)) as val_2021_06_23,
       max(iif(date = "2021-06-24", order, null)) as val_2021_06_24,
       max(iif(date = "2021-06-25", order, null)) as val_2021_06_25,
       max(iif(date = "2021-06-26", order, null)) as val_2021_06_26
from (select t.*,
             (select count(*)
              from t as t2
              where t2.type_id = t.type_id and t2.date = t.date and t2.order <= t.order
             ) as seqnum
      from t
      where type_id = 20
     ) t
group by seqnum;
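For reference, the correlated count in the inner subquery numbers the type_id = 20 rows within each date; a sketch of what seqnum looks like on the sample data above:

type_id    date          order    seqnum
20         2021-06-23    123      1
20         2021-06-23    217      2
20         2021-06-24    55       1
20         2021-06-26    71       1
20         2021-06-26    72       2

Grouping by seqnum then collapses each numbered set into one output row of the pivot.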

Thank you, it works!
(Only one change in the code was needed: "2021-06-23" ----> #2021-06-23#.)
In the meantime I found another solution, but it needs an extra field added to the table: a numeric field containing a sequence number for each day, from 1 to n. In my project that is even helpful, because it lets me control the row order within the columns.
Here is the code; maybe it will help someone in the future:
TRANSFORM First([tabela1].[order])
SELECT [tabela1].[sequence]
FROM [tabela1]
WHERE [tabela1].[type_id] = 20
GROUP BY [tabela1].[sequence]
PIVOT [tabela1].[date]
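If you want the date columns in a fixed order, or want to limit which dates appear, an Access crosstab also accepts an optional IN list of column headings. A sketch against the same table, formatting the date so the headings are stable strings (the Format pattern and the listed dates are illustrative):

TRANSFORM First([tabela1].[order])
SELECT [tabela1].[sequence]
FROM [tabela1]
WHERE [tabela1].[type_id] = 20
GROUP BY [tabela1].[sequence]
PIVOT Format([tabela1].[date], "yyyy-mm-dd") IN ("2021-06-23", "2021-06-24", "2021-06-26")

Dates not listed in the IN clause are simply dropped from the output.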

Related

Get sum qty over specific range

I have the below table
substring(area,6,3)    qty
101                    10
103                    15
102                    11
104                    30
105                    25
107                    17
108                    23
106                    48
And I am looking to get the result below without repeating the IIF for each range (as it is a cumulative of 4 sequences in the area):

new_area (substring(area,6,3))    sum_qty
101-104                           66
105-108                           117

I don't know how to create the new_area column to be able to get the sum of qty.
Looking forward to your help.
Please also add an explanation so I will understand how the query runs.
I think this is what you are looking for.
We just use the window function row_number() to create the Grp.
NOTE: If you have repeating values in AREA use dense_rank() instead of row_number()
Example
Select new_area = concat(min(area), '-', max(area))
      ,qty      = sum(qty)
From (
      Select area = substring(area,6,3)
            ,qty
            ,Grp  = (row_number() over (order by substring(area,6,3)) - 1) / 4
      From YourTable
     ) A
Group By Grp
Results

new_area    qty
101-104     66
105-108     113   -- note: differs from your expected 117; the sample quantities for 105-108 sum to 113
If you were to run the subquery, you would see the following.
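Reconstructed from the sample data above, the subquery yields:

area    qty    Grp
101     10     0
102     11     0
103     15     0
104     30     0
105     25     1
106     48     1
107     17     1
108     23     1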
Then it becomes a small matter to aggregate the data grouped by the created column GRP

What is an efficient alternative to cross join two large tables to get running total?

I have 2 tables whose schema is as follows:
table1
event_dt
6/30/2018
7/1/2018
7/2/2018
7/3/2018
7/4/2018
7/5/2018
7/6/2018
7/7/2018
7/8/2018
7/9/2018
7/10/2018
table 2:
event_dt    time (in seconds)
7/7/2018 144
7/8/2018 63
7/1/2018 47
7/8/2018 81
7/9/2018 263
7/7/2018 119
7/8/2018 130
7/9/2018 206
7/5/2018 134
7/1/2018 140
For each date in table 1 I want to find the cumulative sum of time up to that date, so I used a cross join to get the output with the following code:
select t1.event_dt, sum(t2.time)
from yp1 t1 cross join yp2 t2
where t1.event_dt>=t2.event_dt
group by t1.event_dt
Using this query I was able to get the cumulative running total for each date in table 1, as long as there is an event on or before that day. For example, the first event date is 7/1/2018 but the first date in table 1 is 6/30/2018, so 6/30/2018 won't be present in the final output.
The problem with this method is that the cross join takes too long: I have millions of records, since an observation is taken every 6 seconds. So is there a way to get the same results without a cross join, or for that matter any approach that is more efficient?
I think the best way is to use SQL's cumulative sum function:
select event_dt, running_time
from (select event_dt, time,
             sum(time) over (order by event_dt) as running_time
      from ((select event_dt, null as time
             from t1
            ) union all
            (select event_dt, time
             from t2
            )
           ) tt
     ) tt
where time is null;
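Note that dates before the first event (here 6/30/2018) come back with a NULL running total, since no time values fall inside their window; select coalesce(running_time, 0) instead if you would rather see 0. As a sketch, on the sample data above the query should produce:

event_dt     running_time
6/30/2018    NULL
7/1/2018     187
7/2/2018     187
7/3/2018     187
7/4/2018     187
7/5/2018     321
7/6/2018     321
7/7/2018     584
7/8/2018     858
7/9/2018     1327
7/10/2018    1327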

Select rows where value changed in column

Currently I have this table in a SQL database, sorted by Account#:
Account# Charge_code PostingDate Balance
12345 35 1/18/2016 100
**12345 35 1/20/2016 200**
12345 61 1/23/2016 250
12345 61 1/22/2016 300
12222 41 1/20/2016 200
**12222 41 1/21/2016 250**
12222 42 1/23/2016 100
12222 42 1/25/2016 600
How do I select the last row prior to the change in the charge_code column for each Account#? I highlighted (**) the rows that I am trying to return.
The query should execute quickly with the table having tens of thousands of records.
In SQL Server 2012+, you would use lead():
select t.*
from (select t.*,
             lead(charge_code) over (partition by account order by postingdate) as next_charge_code
      from t
     ) t
where charge_code <> next_charge_code;
In earlier versions of SQL Server, you can do something similar with apply.
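A minimal sketch of the apply approach, assuming the same table t and column names as above: for each row, fetch the charge_code of the next posting for the same account and compare.

select t.*
from t
cross apply (select top 1 t2.charge_code as next_charge_code
             from t t2
             where t2.account = t.account
               and t2.postingdate > t.postingdate
             order by t2.postingdate
            ) x
where t.charge_code <> x.next_charge_code;

As with the lead() version, rows with no later posting are excluded, which is what you want here since an account's last row cannot precede a change.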

SQL Query to continuously bucket data

I have a table as follows:
Datetime               ID    Price    Quantity
2013-01-01 13:30:00    1     139      25
2013-01-01 13:30:15    2     140      25
2013-01-01 13:30:30    3     141      15
Supposing that I wish to end up with a table like this, which buckets the data into quantities of 50 as follows:
Bucket_ID    Max    Min    Avg
1            140    139    139.5
2            141    141    141

Is there a simple query to do this? Data will constantly be added to the first table; it would be nice if it could somehow avoid recalculating the completed buckets of 50 and instead automatically start averaging the next incomplete bucket. Ideas appreciated! Thanks
You may try this solution. It should work even if a single "number" (quantity) is bigger than 50 (but it relies on the fact that avg(number) < 50).
select bucket_id,
       max(price),
       min(price),
       avg(price)
from (select price,
             bucket_id,
             (select sum(t2.number) from test t2 where t2.id <= t1.id) as accumulated
      from test t1
      join (select rowid as bucket_id,
                   50 * rowid as bucket
            from test
           ) buckets
        on (buckets.bucket - 50) < accumulated
       and buckets.bucket > (accumulated - number)
     )
group by bucket_id;
You can have a look at this fiddle http://sqlfiddle.com/#!7/4c63c/1 if it is what you want.
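On an engine with window functions (including recent SQLite), the correlated sum and the rowid self-join can be collapsed into one running total. A sketch, assuming the same test table with integer id, price, and number columns, and integer division for the bucket key; buckets are numbered from 0 here rather than 1:

select (accumulated - 1) / 50 as bucket_id,
       max(price),
       min(price),
       avg(price)
from (select price,
             sum(number) over (order by id) as accumulated
      from test
     ) t
group by (accumulated - 1) / 50;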

Sql get latest records of the month for each name

This question has probably been answered before, but I can't find how to get the latest records of the month.
The problem is that I have a table with sometimes 2 rows for the same month. I can't use an aggregate function (I guess) because the 2 rows contain different data, and I need to get the latest.
Example:
name     Date          nuA    nuB    nuC    nuD
test1    05/06/2013    356    654    3957   7033
test1    05/26/2013    113    237    399    853
test3    06/06/2013    145    247    68     218
test4    06/22/2013    37     37     6      25
test4    06/27/2013    50     76     20     84
test4    05/15/2013    34     43     34     54
I need to get a result like:

test1    05/26/2013    113    237    399    853
test3    06/06/2013    145    247    68     218
test4    05/15/2013    34     43     34     54
test4    06/27/2013    50     76     20     84
** In my example the data is in order, but in my real table it is not.
For now I have something like:

SELECT Name, max(DATE), nuA, nuB, nuC, nuD
FROM tableA INNER JOIN
GROUP BY Name, nuA, nuB, nuC, nuD

But it doesn't work as I want.
Thanks in advance
Edit 1:
It seems that I wasn't clear with my question, so I added some data to my example to show how I need it done.
Thanks, guys
Use SQL Server ranking functions.
select name, Date, nuA, nuB, nuC, nuD
from (select *,
             row_number() over (partition by name, datepart(year, Date), datepart(month, Date)
                                order by Date desc) as ranker
      from Table
     ) Z
where ranker = 1
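On the sample data above, this should return exactly the four rows requested: the 05/26 row for test1, the 06/06 row for test3, and the 05/15 and 06/27 rows for test4. row_number() restarts within each (name, year, month) group, and ranker = 1 keeps only the most recent date in each group.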
Try this
SELECT t1.*
FROM Table1 t1
INNER JOIN
(
    SELECT [name], MAX([date]) as [date]
    FROM Table1
    GROUP BY [name], YEAR([date]), MONTH([date])
) t2
  ON t1.[date] = t2.[date] and t1.[name] = t2.[name]
ORDER BY t1.[name]
Can you not just do an ordering:

select * from tablename where Date = (select max(Date) from tablename)

followed by only pulling the first 3?