I have Table A
SnapshotDate  Invoice ID
2022-09-11    1111
2022-09-12    1111
2022-09-13    1111
2022-09-14    1111
2022-09-15    1111
2022-09-16    1111
2022-09-17    1111
2022-09-18    1111
2022-09-19    1111
2022-09-20    1111
2022-09-21    1111
2022-09-22    1111
2022-09-23    1111
2022-09-24    1111
2022-09-25    1111
Table B
Date        Invoice ID  Status
2022-09-11  1111        draft
2022-09-15  1111        outstanding
2022-09-20  1111        pending
2022-09-24  1111        paid
I want to join them by Invoice ID and date to get this result table:
SnapshotDate  Invoice ID  Status
2022-09-11    1111        draft
2022-09-12    1111        draft
2022-09-13    1111        draft
2022-09-14    1111        draft
2022-09-15    1111        outstanding
2022-09-16    1111        outstanding
2022-09-17    1111        outstanding
2022-09-18    1111        outstanding
2022-09-19    1111        outstanding
2022-09-20    1111        pending
2022-09-21    1111        pending
2022-09-22    1111        pending
2022-09-23    1111        pending
2022-09-24    1111        paid
2022-09-25    1111        paid
Here's what I've tried:
SELECT a.SnapshotDate, a.invoiceid, b.status
FROM a
LEFT JOIN b ON a.invoiceid = b.invoiceid
  AND a.SnapshotDate <= b.Date
Consider the approach below (assuming your date-related columns are actually of DATE data type):
select a.*, status
from TableA as a
join (
select *,
ifnull(-1 + lead(date) over(partition by invoiceId order by date), current_date()) lastDate
from TableB
) as b
on a.invoiceId = b.invoiceId
and snapshotDate between date and lastDate
If applied to the sample data in your question, the output matches the expected result above.
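If building the end of each status range with lead() is inconvenient on your engine, the same "latest status on or before the snapshot date" idea can be expressed with a correlated subquery. This is only a sketch using the table and column names from the sample above; the LIMIT syntax varies by engine (TOP 1 or FETCH FIRST 1 ROW ONLY elsewhere):

select a.SnapshotDate,
       a.invoiceId,
       (select b.Status
        from TableB as b
        where b.invoiceId = a.invoiceId
          and b.Date <= a.SnapshotDate
        order by b.Date desc
        limit 1) as Status          -- latest status dated on or before the snapshot
from TableA as a
order by a.SnapshotDate;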
Purpose: I work in the hospitality industry. I want to understand at what times the restaurant is full and when it is less busy. I have the opening and closing times of each check, and I want to split them into 30-minute intervals.
I would really appreciate it if you could help me.
Thanking you in advance.
Table
Check# Open CloseTime
25484 17:34 18:06
25488 18:04 21:22
Output
Check# Open Close Duration
25484 17:34 18:00 0:25
25484 18:00 18:30 0:30
25488 18:08 18:30 0:21
25488 18:30 19:00 0:30
25488 19:00 19:30 0:30
25488 19:30 20:00 0:30
25488 20:00 20:30 0:30
25488 20:30 21:00 0:30
25488 21:00 21:30 0:30
I am new to SQL. I am good at Excel, but due to its limitations I want to use SQL. I just know the basics of SQL.
I have searched on Google but could not find a solution. All I can see is the use of date keywords, not the field names in the code, so I am unable to use them.
You could try this; it works in MySQL 8.0:
WITH RECURSIVE times AS (
SELECT time '0:00' AS `Open`, time '0:30' as `Close`
UNION ALL
SELECT addtime(`Open`, '0:30'), addtime(`Close`, '0:30')
FROM times
WHERE `Open` < time '23:30'
)
SELECT c.`Check`,
greatest(t.`Open`, c.`Open`) `Open`,
least(t.`Close`, c.`CloseTime`) `Close`,
timediff(least(t.`Close`, c.`CloseTime`), greatest(t.`Open`, c.`Open`)) `Duration`
FROM times t
JOIN checks c ON (c.`Open` < t.`Close` AND c.`CloseTime` > t.`Open`);
| Check | Open | Close | Duration |
| ----- | -------- | -------- | -------- |
| 25484 | 17:34:00 | 18:00:00 | 00:26:00 |
| 25484 | 18:00:00 | 18:06:00 | 00:06:00 |
| 25488 | 18:04:00 | 18:30:00 | 00:26:00 |
| 25488 | 18:30:00 | 19:00:00 | 00:30:00 |
| 25488 | 19:00:00 | 19:30:00 | 00:30:00 |
| 25488 | 19:30:00 | 20:00:00 | 00:30:00 |
| 25488 | 20:00:00 | 20:30:00 | 00:30:00 |
| 25488 | 20:30:00 | 21:00:00 | 00:30:00 |
| 25488 | 21:00:00 | 21:22:00 | 00:22:00 |
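For reference, the query assumes a checks table shaped like the one in the question; a minimal setup sketch in MySQL syntax, with column names taken as assumptions from the query above, could be:

-- Minimal sample table for reproducing the query above (MySQL syntax);
-- the column names are assumptions based on the question's table.
CREATE TABLE checks (
  `Check`     INT,
  `Open`      TIME,
  `CloseTime` TIME
);

INSERT INTO checks VALUES
  (25484, '17:34', '18:06'),
  (25488, '18:04', '21:22');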
This works for SQL Server 2019:
WITH times([Open], [Close]) AS (
SELECT cast({t'00:00:00'} as time) as "Open",
cast({t'00:30:00'} as time) as "Close"
UNION ALL
SELECT dateadd(minute, 30, [Open]), dateadd(minute, 30, [Close])
FROM times
WHERE [Open] < cast({t'23:30:00'} as time)
)
SELECT c.[Check],
iif(t.[Open] > c.[Open], t.[Open], c.[Open]) as [Open],
iif(t.[Close] < c.[CloseTime], t.[Close], c.[CloseTime]) as [Close],
datediff(minute,
iif(t.[Open] > c.[Open], t.[Open], c.[Open]),
iif(t.[Close] < c.[CloseTime], t.[Close], c.[CloseTime])) Duration
FROM times t
JOIN checks c ON (c.[Open] < t.[Close] AND c.[CloseTime] > t.[Open]);
Check | Open | Close | Duration
25484 | 17:34:00.0000000 | 18:00:00.0000000 | 26
25484 | 18:00:00.0000000 | 18:06:00.0000000 | 6
25488 | 18:04:00.0000000 | 18:30:00.0000000 | 26
25488 | 18:30:00.0000000 | 19:00:00.0000000 | 30
25488 | 19:00:00.0000000 | 19:30:00.0000000 | 30
25488 | 19:30:00.0000000 | 20:00:00.0000000 | 30
25488 | 20:00:00.0000000 | 20:30:00.0000000 | 30
25488 | 20:30:00.0000000 | 21:00:00.0000000 | 30
25488 | 21:00:00.0000000 | 21:22:00.0000000 | 22
Situation
We have a PostgreSQL 9.1 database containing user sessions with login date/time and logout date/time per row. Table looks like this:
 user_id |      login_ts       |      logout_ts
---------+---------------------+---------------------
 USER1   | 2021-02-03 09:23:00 | 2021-02-03 11:44:00
 USER2   | 2021-02-03 10:49:00 | 2021-02-03 13:30:00
 USER3   | 2021-02-03 13:32:00 | 2021-02-03 15:31:00
 USER4   | 2021-02-04 13:50:00 | 2021-02-04 14:53:00
 USER5   | 2021-02-04 14:44:00 | 2021-02-04 15:21:00
 USER6   | 2021-02-04 14:52:00 | 2021-02-04 17:59:00
Goal
I would like to get the maximum number of concurrent users for each of the 24 hours of each day in the time range, like this:
date | hour | sessions
-----------+-------+-----------
2021-02-03 | 01:00 | 0
2021-02-03 | 02:00 | 0
2021-02-03 | 03:00 | 0
2021-02-03 | 04:00 | 0
2021-02-03 | 05:00 | 0
2021-02-03 | 06:00 | 0
2021-02-03 | 07:00 | 0
2021-02-03 | 08:00 | 0
2021-02-03 | 09:00 | 1
2021-02-03 | 10:00 | 2
2021-02-03 | 11:00 | 2
2021-02-03 | 12:00 | 1
2021-02-03 | 13:00 | 1
2021-02-03 | 14:00 | 1
2021-02-03 | 15:00 | 0
2021-02-03 | 16:00 | 0
2021-02-03 | 17:00 | 0
2021-02-03 | 18:00 | 0
2021-02-03 | 19:00 | 0
2021-02-03 | 20:00 | 0
2021-02-03 | 21:00 | 0
2021-02-03 | 22:00 | 0
2021-02-03 | 23:00 | 0
2021-02-03 | 24:00 | 0
2021-02-04 | 01:00 | 0
2021-02-04 | 02:00 | 0
2021-02-04 | 03:00 | 0
2021-02-04 | 04:00 | 0
2021-02-04 | 05:00 | 0
2021-02-04 | 06:00 | 0
2021-02-04 | 07:00 | 0
2021-02-04 | 08:00 | 0
2021-02-04 | 09:00 | 0
2021-02-04 | 10:00 | 0
2021-02-04 | 11:00 | 0
2021-02-04 | 12:00 | 0
2021-02-04 | 13:00 | 1
2021-02-04 | 14:00 | 3
2021-02-04 | 15:00 | 1
2021-02-04 | 16:00 | 1
2021-02-04 | 17:00 | 1
2021-02-04 | 18:00 | 0
2021-02-04 | 19:00 | 0
2021-02-04 | 20:00 | 0
2021-02-04 | 21:00 | 0
2021-02-04 | 22:00 | 0
2021-02-04 | 23:00 | 0
2021-02-04 | 24:00 | 0
Considerations
"Concurrent" means at the same point in time. Thus user2 and user3 do not overlap for
13:00, but user4 and user6 do overlap for 14:00 even though they only overlap for 1 minute.
User sessions can span multiple hours and would thus count for each hour they are part of.
Each user can only be online once at one point in time.
If there are no users for a particular hour, this should return 0.
Similar questions
A similar question was answered here: Count max. number of concurrent user sessions per day by Erwin Brandstetter. However, that is per day rather than per hour, and I am apparently too much of a noob at PostgreSQL to translate it into hourly, so I'm hoping someone can help.
I would decompose this into two problems:
Find the number of overlaps and when they begin and end.
Find the hours.
Note two things:
I am assuming that '2014-04-03 17:59:00' is a typo.
The following goes by the beginning of the hour and puts the date/hour in a single column.
First, calculate the overlaps. For this, unpivot the logins and logouts. Put in a counter of +1 for logins and -1 for logouts and do a cumulative sum. This looks like:
with overlap as (
select v.ts, sum(v.inc) as inc,
sum(sum(v.inc)) over (order by v.ts) as num_overlaps,
lead(v.ts) over (order by v.ts) as next_ts
from sessions s cross join lateral
(values (login_ts, 1), (logout_ts, -1)) v(ts, inc)
group by v.ts
)
select *
from overlap
order by ts;
For the next step, use generate_series() to generate timestamps one hour apart. Look for the maximum value during that period using a left join and group by:
with overlap as (
select v.ts, sum(v.inc) as inc,
sum(sum(v.inc)) over (order by v.ts) as num_overlaps,
lead(v.ts) over (order by v.ts) as next_ts
from sessions s cross join lateral
(values (login_ts, 1), (logout_ts, -1)) v(ts, inc)
group by v.ts
)
select gs.hh, coalesce(max(o.num_overlaps), 0) as num_overlaps
from generate_series('2021-02-03'::date, '2021-02-05'::date, interval '1 hour') gs(hh) left join
overlap o
on o.ts < gs.hh + interval '1 hour' and
o.next_ts > gs.hh
group by gs.hh
order by gs.hh;
Here is a db<>fiddle using your data, fixed with a reasonable logout time for the last record.
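If you want the date and the hour as separate columns, as in your desired output, one way is to derive them from gs.hh in the outer select. This is only a sketch built on the same overlap CTE as above:

-- Sketch: same aggregation, but with the date and hour split into
-- separate columns to match the desired output layout.
with overlap as (
      select v.ts, sum(v.inc) as inc,
             sum(sum(v.inc)) over (order by v.ts) as num_overlaps,
             lead(v.ts) over (order by v.ts) as next_ts
      from sessions s cross join lateral
           (values (login_ts, 1), (logout_ts, -1)) v(ts, inc)
      group by v.ts
     )
select gs.hh::date               as date,
       to_char(gs.hh, 'HH24:MI') as hour,
       coalesce(max(o.num_overlaps), 0) as sessions
from generate_series('2021-02-03'::date, '2021-02-05'::date, interval '1 hour') gs(hh)
     left join overlap o
            on o.ts < gs.hh + interval '1 hour'
           and o.next_ts > gs.hh
group by gs.hh
order by gs.hh;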
For any time period you can calculate the number of concurrent sessions using the OVERLAPS operator in SQL:
CREATE TEMP TABLE sessions (
user_id text not null,
login_ts timestamp,
logout_ts timestamp );
INSERT INTO sessions SELECT 'webuser', d,
d+((1+random()*300)::text||' seconds')::interval
FROM generate_series(
'2021-02-28 07:42'::timestamp,
'2021-03-01 07:42'::timestamp,
'5 seconds'::interval) AS d;
SELECT s1.user_id, s1.login_ts, s1.logout_ts,
(select count(*) FROM sessions s2
WHERE (s2.login_ts, s2.logout_ts) OVERLAPS (s1.login_ts, s1.logout_ts))
AS parallel_sessions
FROM sessions s1 LIMIT 10;
user_id | login_ts | logout_ts | parallel_sessions
---------+---------------------+----------------------------+------------------
webuser | 2021-02-28 07:42:00 | 2021-02-28 07:42:25.528594 | 6
webuser | 2021-02-28 07:42:05 | 2021-02-28 07:45:50.513769 | 47
webuser | 2021-02-28 07:42:10 | 2021-02-28 07:44:18.810066 | 28
webuser | 2021-02-28 07:42:15 | 2021-02-28 07:45:17.3888 | 40
webuser | 2021-02-28 07:42:20 | 2021-02-28 07:43:14.325476 | 15
webuser | 2021-02-28 07:42:25 | 2021-02-28 07:43:44.174841 | 21
webuser | 2021-02-28 07:42:30 | 2021-02-28 07:43:32.679052 | 18
webuser | 2021-02-28 07:42:35 | 2021-02-28 07:45:12.554117 | 38
webuser | 2021-02-28 07:42:40 | 2021-02-28 07:46:37.94311 | 55
webuser | 2021-02-28 07:42:45 | 2021-02-28 07:43:08.398444 | 13
(10 rows)
This works well on small data sets, but for better performance, use PostgreSQL range types as below. This works on Postgres 9.2 and later.
ALTER TABLE sessions ADD timerange tsrange;
UPDATE sessions SET timerange = tsrange(login_ts,logout_ts);
CREATE INDEX ON sessions USING gist (timerange);
CREATE TEMP TABLE level1 AS
SELECT s1.user_id, s1.login_ts, s1.logout_ts,
(select count(*) FROM sessions s2
WHERE s2.timerange && s1.timerange) AS parallel_sessions
FROM sessions s1;
SELECT date_trunc('hour',login_ts) AS hour, count(*),
max(parallel_sessions)
FROM level1
GROUP BY hour;
hour | count | max
---------------------+-------+-----
2021-02-28 14:00:00 | 720 | 98
2021-03-01 03:00:00 | 720 | 99
2021-03-01 06:00:00 | 720 | 94
2021-02-28 09:00:00 | 720 | 96
2021-02-28 10:00:00 | 720 | 97
2021-02-28 18:00:00 | 720 | 94
2021-02-28 11:00:00 | 720 | 97
2021-03-01 00:00:00 | 720 | 97
2021-02-28 19:00:00 | 720 | 99
2021-02-28 16:00:00 | 720 | 94
2021-02-28 17:00:00 | 720 | 95
2021-03-01 02:00:00 | 720 | 99
2021-02-28 08:00:00 | 720 | 96
2021-02-28 23:00:00 | 720 | 94
2021-03-01 07:00:00 | 505 | 92
2021-03-01 04:00:00 | 720 | 95
2021-02-28 21:00:00 | 720 | 97
2021-03-01 01:00:00 | 720 | 93
2021-02-28 22:00:00 | 720 | 96
2021-03-01 05:00:00 | 720 | 93
2021-02-28 20:00:00 | 720 | 95
2021-02-28 13:00:00 | 720 | 95
2021-02-28 12:00:00 | 720 | 97
2021-02-28 15:00:00 | 720 | 98
2021-02-28 07:00:00 | 216 | 93
(25 rows)
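The question also asks for 0 when no users are online in an hour, and hours with no logins drop out of the GROUP BY above. A sketch of one way to keep them, assuming the level1 table built above, is to left join from generate_series():

-- Sketch: list every hour of the range explicitly, showing 0 for hours
-- in which level1 contains no logins.
SELECT gs.h AS hour,
       coalesce(max(l.parallel_sessions), 0) AS max_parallel
FROM generate_series('2021-02-28 07:00'::timestamp,
                     '2021-03-01 07:00'::timestamp,
                     interval '1 hour') AS gs(h)
LEFT JOIN level1 l
       ON date_trunc('hour', l.login_ts) = gs.h
GROUP BY gs.h
ORDER BY gs.h;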
I need help from Captain Obvious, I suppose. I'm trying to insert data from a table into a temp table. OK, this is easy.
I need to insert the data we got today and the data we got 10 days ago. The WHERE clause can handle that; that's okay.
What is hard for me is to insert today's data only if it does not appear in the data from 10 days ago.
An example of the table I use ([datatable]):
Date Purchase Line_Purchase
---------------------------------------------------------------------------
2017-04-29 0000002 01
2017-04-29 0000002 02
2017-04-29 0000003 01
2017-04-29 0000003 02
2017-04-29 0000003 03
2017-04-29 0000004 01
2017-04-29 0000005 01
2017-04-19 0000001 01
2017-04-19 0000001 02
2017-04-19 0000001 03
2017-04-19 0000002 01
2017-04-19 0000002 02
My desired table temptable:
Input_date Purchase Line_Purchase
-------------------------------------------------------------------------
2017-04-19 0000001 01
2017-04-19 0000001 02
2017-04-19 0000001 03
2017-04-19 0000002 01
2017-04-19 0000002 02
2017-04-29 0000003 01
2017-04-29 0000003 02
2017-04-29 0000003 03
2017-04-29 0000004 01
2017-04-29 0000005 01
Is there any query in SQL that can do that?
Here's what I tried:
INSERT INTO #TEMPTABLE
    (Input_date, Purchase, Line_Purchase)
SELECT
    dt.Date
    ,dt.Purchase
    ,dt.Line_Purchase
FROM
    datatable dt
WHERE
    convert(date, dt.Date) = convert(date, GETDATE() - 10)

INSERT INTO #TEMPTABLE
    (Input_date, Purchase, Line_Purchase)
SELECT
    dt.Date
    ,dt.Purchase
    ,dt.Line_Purchase
FROM
    datatable dt
    RIGHT JOIN #TEMPTABLE temp
        ON dt.Purchase = temp.Purchase AND dt.Line_Purchase = temp.Line_Purchase
WHERE
    convert(date, dt.Date) = convert(date, GETDATE())
    AND (temp.Purchase IS NULL AND temp.Line_Purchase IS NULL)
Thanks in advance
You can do this with not exists():
select date as Input_date, Purchase, Line_Purchase
into #temptable
from t
where date = '2017-04-19' --convert(date, getdate() - 10);
insert into #temptable (Input_date, Purchase, Line_Purchase)
select *
from t
where date = '2017-04-29'
and not exists (
select 1
from t as i
where i.purchase=t.purchase
and i.line_purchase=t.line_purchase
and i.date = '2017-04-19' --convert(date, getdate() - 10)
);
select *
from #temptable;
rextester demo: http://rextester.com/SAQSG21367
returns:
+------------+----------+---------------+
| Input_Date | Purchase | Line_Purchase |
+------------+----------+---------------+
| 2017-04-19 | 0000001 | 01 |
| 2017-04-19 | 0000001 | 02 |
| 2017-04-19 | 0000001 | 03 |
| 2017-04-19 | 0000002 | 01 |
| 2017-04-19 | 0000002 | 02 |
| 2017-04-29 | 0000003 | 01 |
| 2017-04-29 | 0000003 | 02 |
| 2017-04-29 | 0000003 | 03 |
| 2017-04-29 | 0000004 | 01 |
| 2017-04-29 | 0000005 | 01 |
+------------+----------+---------------+
Optionally, if you are doing both of these operations at the same time, you can do it in one query using a derived table/subquery or common table expression with row_number():
;with cte as (
select date, Purchase, Line_Purchase
, rn = row_number() over (partition by Purchase,Line_Purchase order by date)
from t
--where date in ('2017-09-26','2017-09-16')
where date in (convert(date, getdate()), convert(date, getdate()-10))
)
select date as Input_date, Purchase, Line_Purchase
into #temptable
from cte
where rn = 1
select *
from #temptable;
rextester demo: http://rextester.com/QMF5992
returns:
+------------+----------+---------------+
| Input_date | Purchase | Line_Purchase |
+------------+----------+---------------+
| 2017-09-16 | 0000001 | 01 |
| 2017-09-16 | 0000001 | 02 |
| 2017-09-16 | 0000001 | 03 |
| 2017-09-16 | 0000002 | 01 |
| 2017-09-16 | 0000002 | 02 |
| 2017-09-26 | 0000003 | 01 |
| 2017-09-26 | 0000003 | 02 |
| 2017-09-26 | 0000003 | 03 |
| 2017-09-26 | 0000004 | 01 |
| 2017-09-26 | 0000005 | 01 |
+------------+----------+---------------+
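Both examples above assume a source table t shaped like your [datatable]; a minimal setup sketch (the names and types are assumptions) would be:

-- Minimal sample table; names and types are assumptions based on the question.
create table t (
    date          date,
    Purchase      varchar(7),
    Line_Purchase varchar(2)
);

insert into t (date, Purchase, Line_Purchase) values
    ('2017-04-29', '0000002', '01'),
    ('2017-04-29', '0000002', '02'),
    ('2017-04-29', '0000003', '01'),
    ('2017-04-29', '0000003', '02'),
    ('2017-04-29', '0000003', '03'),
    ('2017-04-29', '0000004', '01'),
    ('2017-04-29', '0000005', '01'),
    ('2017-04-19', '0000001', '01'),
    ('2017-04-19', '0000001', '02'),
    ('2017-04-19', '0000001', '03'),
    ('2017-04-19', '0000002', '01'),
    ('2017-04-19', '0000002', '02');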
I have two tables: balance and calendar.
Balance :
Account Date Balance
1111 01/01/2014 100
1111 02/01/2014 156
1111 03/01/2014 300
1111 04/01/2014 300
1111 07/01/2014 468
1112 02/01/2014 300
1112 03/01/2014 300
1112 06/01/2014 300
1112 07/01/2014 350
1112 08/01/2014 400
1112 09/01/2014 450
1113 01/01/2014 30
1113 02/01/2014 40
1113 03/01/2014 45
1113 06/01/2014 45
1113 07/01/2014 60
1113 08/01/2014 50
1113 09/01/2014 20
1113 10/01/2014 10
Calendar
date business_day_ind
01/01/2014 N
02/01/2014 Y
03/01/2014 Y
04/01/2014 N
05/01/2014 N
06/01/2014 Y
07/01/2014 Y
08/01/2014 Y
09/01/2014 Y
10/01/2014 Y
I need to do the following:
I need to fill in the missing days for all the accounts, up to the maximum day for which each account has a value. Say for account 1111: it has values only till 07/01/2014, so the dates need to be filled only till then. But when I join with the calendar table (a plain left join), I am not able to restrict the maximum day to the last day available for an account:
1111 01/01/2014 100 N
1111 02/01/2014 156 Y
1111 03/01/2014 300 Y
1111 04/01/2014 300 Y
1111 05/01/2014 N
1111 06/01/2014 N
1111 07/01/2014 468 Y
1111 08/01/2014 Y
1111 09/01/2014 Y
1111 10/01/2014 Y
1112 01/01/2014 N
1112 02/01/2014 300 Y
1112 03/01/2014 300 Y
1112 04/01/2014 N
1112 05/01/2014 N
1112 06/01/2014 300 Y
1112 07/01/2014 350 Y
1112 08/01/2014 400 Y
1112 09/01/2014 450 Y
1112 10/01/2014 Y
I need an efficient way (preferably not involving multiple steps) to restrict the dates up to an account's maximum balance date (07/01/2014 in the case of 1111, 09/01/2014 in the case of 1112).
Desired output:
1111 01/01/2014 100 N
1111 02/01/2014 156 Y
1111 03/01/2014 300 Y
1111 04/01/2014 300 Y
1111 05/01/2014 N
1111 06/01/2014 N
1111 07/01/2014 468 Y
1112 01/01/2014 N
1112 02/01/2014 300 Y
1112 03/01/2014 300 Y
1112 04/01/2014 N
1112 05/01/2014 N
1112 06/01/2014 300 Y
1112 07/01/2014 350 Y
1112 08/01/2014 400 Y
1112 09/01/2014 450 Y
After filling in the missing days, I am planning to impute the previous business day's balance to the missing days. I plan to get the previous business day for every date and update the missing rows by joining to the original balance table with account and previous business day as the key.
Thanks.
I am using a Greenplum database.
A possible way would be to put a second select in a subquery. For instance:
select ... from calendar a left outer join balance b on a.date = b.date
where a.date <= (select max(date) from balance c where b.Account = c.Account )
I suppose that you have a third table, accounts:
select
accounts.account,
calendar.date,
balance.balance,
calendar.business_day_ind
from
accounts cross join lateral (
select *
from calendar
where calendar.date <= (
select max(date)
from balance
where balance.account = accounts.account)) as calendar left join
balance on (balance.account = accounts.account and balance.date = calendar.date)
order by
accounts.account, calendar.date;
About lateral joins
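If there is no separate accounts table, a distinct list of accounts can stand in for it. This is a sketch that simply prepends a CTE to the query above and assumes your Greenplum/PostgreSQL version supports LATERAL:

-- Sketch: derive the account list from balance when no accounts table exists.
with accounts as (
    select distinct account from balance
)
select
    accounts.account,
    calendar.date,
    balance.balance,
    calendar.business_day_ind
from
    accounts cross join lateral (
        select *
        from calendar
        where calendar.date <= (
            select max(date)
            from balance
            where balance.account = accounts.account)) as calendar left join
    balance on (balance.account = accounts.account and balance.date = calendar.date)
order by
    accounts.account, calendar.date;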
That was a fun challenge!
CREATE TABLE balance
(account int, balance_date timestamp, balance int)
DISTRIBUTED BY (account, balance_date);
INSERT INTO balance
values (1111,'01/01/2014', 100),
(1111, '02/01/2014', 156),
(1111, '03/01/2014', 300),
(1111, '04/01/2014', 300),
(1111, '07/01/2014', 468),
(1112, '02/01/2014', 300),
(1112, '03/01/2014', 300),
(1112, '06/01/2014', 300),
(1112, '07/01/2014', 350),
(1112, '08/01/2014', 400),
(1112, '09/01/2014', 450),
(1113, '01/01/2014', 30),
(1113, '02/01/2014', 40),
(1113, '03/01/2014', 45),
(1113, '06/01/2014', 45),
(1113, '07/01/2014', 60),
(1113, '08/01/2014', 50),
(1113, '09/01/2014', 20),
(1113, '10/01/2014', 10);
CREATE TABLE calendar
(calendar_date timestamp, business_day_ind boolean)
DISTRIBUTED BY (calendar_date);
INSERT INTO calendar
values ('01/01/2014', false),
('02/01/2014', true),
('03/01/2014', true),
('04/01/2014', false),
('05/01/2014', false),
('06/01/2014', true),
('07/01/2014', true),
('08/01/2014', true),
('09/01/2014', true),
('10/01/2014', true);
analyze balance;
analyze calendar;
And now the query.
select d.account, d.my_date, b.balance, c.business_day_ind
from (
select account, start_date + interval '1 month' * (generate_series(0, duration)) AS my_date
from (
select account, start_date, (date_part('year', duration) * 12 + date_part('month', duration))::int as duration
from (
select start_date, age(end_date, start_date) as duration, account
from (
select account, min(balance_date) as start_date, max(balance_date) as end_date
from balance
group by account
) as sub1
) as sub2
) sub3
) as d
left outer join balance b on d.account = b.account and d.my_date = b.balance_date
join calendar c on c.calendar_date = d.my_date
order by d.account, d.my_date;
Results:
account | my_date | balance | business_day_ind
---------+---------------------+---------+------------------
1111 | 2014-01-01 00:00:00 | 100 | f
1111 | 2014-02-01 00:00:00 | 156 | t
1111 | 2014-03-01 00:00:00 | 300 | t
1111 | 2014-04-01 00:00:00 | 300 | f
1111 | 2014-05-01 00:00:00 | | f
1111 | 2014-06-01 00:00:00 | | t
1111 | 2014-07-01 00:00:00 | 468 | t
1112 | 2014-02-01 00:00:00 | 300 | t
1112 | 2014-03-01 00:00:00 | 300 | t
1112 | 2014-04-01 00:00:00 | | f
1112 | 2014-05-01 00:00:00 | | f
1112 | 2014-06-01 00:00:00 | 300 | t
1112 | 2014-07-01 00:00:00 | 350 | t
1112 | 2014-08-01 00:00:00 | 400 | t
1112 | 2014-09-01 00:00:00 | 450 | t
1113 | 2014-01-01 00:00:00 | 30 | f
1113 | 2014-02-01 00:00:00 | 40 | t
1113 | 2014-03-01 00:00:00 | 45 | t
1113 | 2014-04-01 00:00:00 | | f
1113 | 2014-05-01 00:00:00 | | f
1113 | 2014-06-01 00:00:00 | 45 | t
1113 | 2014-07-01 00:00:00 | 60 | t
1113 | 2014-08-01 00:00:00 | 50 | t
1113 | 2014-09-01 00:00:00 | 20 | t
1113 | 2014-10-01 00:00:00 | 10 | t
(25 rows)
I had to get the min and max dates for each account and then use generate_series to generate the months between the two dates. It would have been a cleaner query if you wanted a record for each day, but I had to use another subquery to get the results at a monthly level.
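For reference, the daily-granularity variant mentioned above is indeed simpler, since generate_series() can step one day at a time between each account's first and last balance dates. This is only a sketch against the same tables; it assumes a real daily data set where the calendar table has one row per day:

-- Sketch: one row per account per day between its first and last balance date.
select d.account, d.my_date, b.balance, c.business_day_ind
from (
    select r.account,
           generate_series(r.start_date, r.end_date, interval '1 day') as my_date
    from (
        select account,
               min(balance_date) as start_date,
               max(balance_date) as end_date
        from balance
        group by account
    ) as r
) as d
left outer join balance b on d.account = b.account and d.my_date = b.balance_date
join calendar c on c.calendar_date = d.my_date
order by d.account, d.my_date;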