T-SQL: use the operators '&' and '|' as aggregate functions like SUM/COUNT/etc.

Hi,
in SQL Server (2014) I am trying to write a query like this:
SELECT
T.[Text],
&(T.[Numeric])
FROM
MyTable AS T
GROUP BY
T.[Text];
I would like to use '&' and '|' as aggregate functions, the way SUM()/MAX()/COUNT()/MIN() work.
Can somebody help me?
EDIT:
I also need:
SELECT
T.[Text],
|(T.[Numeric])
FROM
MyTable AS T
GROUP BY
T.[Text];
EDIT 2:
CREATE TABLE [dbo].[1L_Tests](
[ID_Test] [int] NOT NULL,
[Text] [varchar](5) NOT NULL,
[Numeric] [int] NOT NULL,
PRIMARY KEY ([ID_Test])
);
GO
INSERT INTO [dbo].[1L_Tests]
([ID_Test], [Text], [Numeric])
VALUES
(1, 'KCH', 0)
,(2, 'KCH', 12)
,(3, 'KCH', 13)
,(4, 'DAF', 9)
,(5, 'DAF', 7)
,(6, 'LDE', 29)
,(7, 'LDE', 37)
,(8, 'LDE', 46);
GO
SELECT
T.[Text],
&(T.[Numeric]) AS 'Inter',
|(T.[Numeric]) AS 'Union'
FROM
[1L_Tests] AS T
GROUP BY
T.[Text];
I expected:
Text | Inter | Union
KCH  |     0 |    13
DAF  |     1 |    15
LDE  |     4 |    63

SQL Server doesn't have bitwise aggregate functions (no aggregate AND or OR). If you know how many bits you need, you can do it bit by bit:
select . . .,
min(t.number & 1) | min(t.number & 2) | min(t.number & 4) | . . .
from . . .
Bitwise operations often suggest premature optimization in a database. It is usually better to represent the "bits" as separate flags.
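Applied to the sample table above, and assuming the values fit in 6 bits (the sample data tops out at 46), the bit-by-bit idea might look like this - a sketch that uses MIN per bit for the '&' aggregate and MAX per bit for the '|' aggregate:
SELECT
    T.[Text],
    MIN(T.[Numeric] & 1) | MIN(T.[Numeric] & 2) | MIN(T.[Numeric] & 4) |
    MIN(T.[Numeric] & 8) | MIN(T.[Numeric] & 16) | MIN(T.[Numeric] & 32) AS [Inter],
    MAX(T.[Numeric] & 1) | MAX(T.[Numeric] & 2) | MAX(T.[Numeric] & 4) |
    MAX(T.[Numeric] & 8) | MAX(T.[Numeric] & 16) | MAX(T.[Numeric] & 32) AS [Union]
FROM [1L_Tests] AS T
GROUP BY T.[Text];
MIN(x & bit) keeps a bit only when every row in the group has it set (the aggregate AND), while MAX(x & bit) keeps it when any row does (the aggregate OR); this reproduces the expected output above.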

Related

How to bypass NULL in a Lookup Table using JOIN?

In SQL (SSMS), I am trying to do a lookup using a ranging table (similar to Point_Lookup below) and what I need is to bypass the NULL scenario.
First, please use the SQL below to replicate the scenario in question:
-- Code for People Data --
Create table #People ([Name] varchar(50) null,Age int null)
Insert into #People VALUES
('George' , 30),
('Clooney' , 18),
('Sandra' , 44),
('Bullock' , 15),
('Adam' , 100)
-- Code for Point_Lookup Data--
Create table #Point_Lookup ([Lower_Limit] int null, [Upper_Limit] int null, [Point] int null)
Insert into #Point_Lookup VALUES
(0, 10, 1),
(10, 20, 2),
(20, 30, 3),
(30, 40, 4),
(40, 50, 5),
(50, NULL, 6)
I have tried the code below, which successfully joins both tables and gets the desired points EXCEPT when [Age] >= 50 (since the Upper_Limit is NULL there, the point from the lookup table also comes back NULL - the desired result is 6).
Select ppl.*, point.[Point]
from #People as ppl
left join #Point_Lookup as point
on ppl.[Age] >= point.[Lower_Limit] and
ppl.[Age] < point.[Upper_Limit]
I have also tried replacing the NULL using ISNULL(), but I realized this still does not JOIN both tables when [Age] >= 50 (I am not quite sure why).
Select ppl.*, point.[Point]
from #People as ppl
left join #Point_Lookup as point
on ppl.[Age] >= point.[Lower_Limit]
and ppl.[Age] < isnull(point.[Upper_Limit], point.[Upper_Limit] + 1 + ppl.[Age])
Is there a way to only consider one condition (ppl.[Age] >= point.[Lower_Limit]) when [Age] >= 50, without running into the NULL in Upper_Limit? Maybe using CASE somehow?
The expected result should show a Point of 6 when [Age] >= 50. Please help.
You can try using the coalesce() function, which works like a CASE WHEN: if point.[Upper_Limit] is null, it falls back to the second argument.
Select ppl.*, point.[Point]
from #People as ppl
left join #Point_Lookup as point
on ppl.[Age] >= point.[Lower_Limit]
and ppl.[Age] < coalesce(point.[Upper_Limit], point.[Lower_Limit] + 1 + ppl.[Age])
If NULL means that the condition should be skipped, then you can use an OR to express exactly that:
Select
ppl.*,
point.[Point]
from
#People as ppl
left join #Point_Lookup as point on
ppl.[Age] >= point.[Lower_Limit] and
(point.[Upper_Limit] IS NULL OR ppl.[Age] < point.[Upper_Limit])
In your attempt:
isnull(point.[Upper_Limit], point.[Upper_Limit] + 1 + ppl.[Age])
If point.[Upper_Limit] is NULL, then any addition to it is also NULL; that's why it doesn't join correctly. You could remove point.[Upper_Limit] and just leave 1 + ppl.[Age], and it would work, but using the OR is better for index usage (if any).
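For reference, the corrected ISNULL version described above would be (since ppl.[Age] < 1 + ppl.[Age] is always true, a NULL upper limit then matches every age):
Select ppl.*, point.[Point]
from #People as ppl
left join #Point_Lookup as point
on ppl.[Age] >= point.[Lower_Limit]
and ppl.[Age] < isnull(point.[Upper_Limit], 1 + ppl.[Age])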
Try this:
Select ppl.*,
    (select top 1 point
     from #Point_Lookup
     where ppl.Age >= Lower_Limit
       and ppl.Age <= (case when Upper_Limit is null then ppl.Age else Upper_Limit end))
from #People as ppl
The problem with this structure is that you can have gaps or overlaps in the ranges; it can also be ambiguous which point is correct for a value equal to one of the limits...
I usually do it this way, with one single limit column: the boundaries are defined by the previous and next records, so there is no way to have a gap in the covered range.
Create table #People ([Name] varchar(50) null,Age int null)
Insert into #People VALUES
('George' , 30),
('Clooney' , 18),
('Sandra' , 44),
('Bullock' , 15),
('Adam' , 100),
('Lio' , 4)
-- Code for Point_Lookup Data--
Create table #Point_Lookup ([Limit] int not null,[Point] int null)
Insert into #Point_Lookup VALUES
(0, 1),
(10, 2),
(20, 3),
(30, 4),
(40, 5),
(50, 6)
SELECT *
FROM
#People P
CROSS APPLY (
SELECT TOP 1 Point
FROM #Point_Lookup L
WHERE P.Age >= L.Limit
ORDER BY Limit DESC
) L
drop table #people
drop table #Point_Lookup

SQLite INSERT statement with mixed values

I want to insert a record into the table "SampleTable", and the INSERT statement has 2 literal values and one SELECT statement. I know that I could use a trigger to solve the problem, but I need a solution that allows an INSERT statement similar to the one below, which does not work (it produces a "Syntax Error"). Thank you for your help in this matter.
CREATE TABLE "SampleTable" (
"SampleTableID" integer PRIMARY KEY AUTOINCREMENT NOT NULL,
"UniqueIdentifier" nvarchar,
"Name" nvarchar(50),
"City" nvarchar(50)
)
INSERT INTO "SampleTable" (
"UniqueIdentifier",
"Name",
"City"
)
VALUES ((SELECT substr(u,1,8)||'-'||substr(u,9,4)||'-4'||substr(u,13,3)||
'-'||v||substr(u,17,3)||'-'||substr(u,21,12) from (
select lower(hex(randomblob(16))) as u, substr('89ab',random() % 4 + 1, 1) as v),"Russel","Dallas");
Use insert . . . select instead of insert . . . values. I think this is the syntax you are looking for:
INSERT INTO "SampleTable" (UniqueIdentifier, Name, City)
SELECT substr(u, 1, 8)||'-'||substr(u, 9, 4)||'-4'||substr(u, 13, 3)||'-'||v||substr(u, 17, 3)||'-'||substr(u, 21, 12), 'Russel', 'Dallas'
from (select lower(hex(randomblob(16))) as u, substr('89ab',random() % 4 + 1, 1) as v
) toinsert;

How can I get a NULL column after UNPIVOT?

I have got the following query:
WITH data AS(
SELECT * FROM partstat WHERE id=4
)
SELECT id, AVG(Value) AS Average
FROM (
SELECT id,
AVG(column_1) as column_1,
AVG(column_2) as column_2,
AVG(column_3) as column_3
FROM data
GROUP BY id
) as pvt
UNPIVOT (Value FOR V IN (column_1,column_2,column_3)) AS u
GROUP BY id
If column_1, column_2 and column_3 (or any of these columns) have values, then I get a result like the following:
id, Average
4, 5.12631578947368
If column_1, column_2 and column_3 all have NULL values, then the query does not return any rows, as follows:
id, Average
My question is: how can I get the following result when the columns contain NULL values?
id, Average
4, NULL
Have you tried using COALESCE or ISNULL?
e.g.
ISNULL(AVG(column_1), 0) as column_1,
This does mean that you will get 0 as the result instead of 'NULL' though - do you need null when they are all NULL?
Edit:
Also, is there any need for an unpivot? Since you are specifying all 3 columns, why not just do:
SELECT BankID, (column_1 + column_2 + column_3) / 3 FROM partstat
WHERE bankid = 4
This gives you the same results but with the NULL
Of course this is assuming you have 1 row per bankid
Edit:
UNPIVOT isn't supposed to be used like this as far as I can see - I'd unpivot first then try the AVG... let me have a go...
Edit:
Ah, I take that back - it is just a problem with NULLs. Other posts suggest ISNULL or COALESCE to eliminate the nulls; you could use a placeholder value like -1, which could work, e.g.
SELECT bankid, AVG(CASE WHEN value = -1 THEN NULL ELSE value END) AS Average
FROM (
SELECT bankid,
isnull(AVG(column_1), -1) as column_1 ,
AVG(Column_2) as column_2 ,
Avg(column_3) as column_3
FROM data
group by bankid
) as pvt
UNPIVOT (Value FOR o in (column_1, column_2, column_3)) as u
GROUP BY bankid
You need to ensure this will work for your data, though: if you have a value in column_2/3, then column_1 will no longer be -1. It might be worth using a CASE to check whether they are all NULL, and only then replacing the first NULL with -1, as sketched below.
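For example, a sketch of that CASE guard (keeping the snippet's data/bankid naming; it still assumes -1 never occurs as a genuine average):
SELECT bankid, AVG(CASE WHEN Value = -1 THEN NULL ELSE Value END) AS Average
FROM (
    SELECT bankid,
        CASE WHEN AVG(column_1) IS NULL AND AVG(column_2) IS NULL AND AVG(column_3) IS NULL
             THEN -1
             ELSE AVG(column_1)
        END AS column_1,
        AVG(column_2) AS column_2,
        AVG(column_3) AS column_3
    FROM data
    GROUP BY bankid
) AS pvt
UNPIVOT (Value FOR o IN (column_1, column_2, column_3)) AS u
GROUP BY bankid
When all three averages are NULL, only the -1 placeholder survives the UNPIVOT, and the outer CASE turns it back into NULL, so the group appears with a NULL Average.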
Here is an example without UNPIVOT:
DECLARE @partstat TABLE (id INT, column_1 DECIMAL(18, 2), column_2 DECIMAL(18, 2), column_3 DECIMAL(18, 2))
INSERT @partstat VALUES
(5, 12.3, 1, 2)
,(5, 2, 5, 5)
,(5, 2, 2, 2)
,(4, 2, 2, 2)
,(4, 4, 4, 4)
,(4, 21, NULL, NULL)
,(6, 1, NULL, NULL)
,(6, 1, NULL, NULL)
,(7, NULL, NULL, NULL)
,(7, NULL, NULL, NULL)
,(7, NULL, NULL, NULL)
,(7, NULL, NULL, NULL)
,(7, NULL, NULL, NULL)
;WITH data AS(
SELECT * FROM @partstat
)
SELECT
pvt.id,
(ISNULL(pvt.column_1, 0) + ISNULL(pvt.column_2, 0) + ISNULL(pvt.column_3, 0))/
NULLIF(
CASE WHEN pvt.column_1 IS NULL THEN 0 ELSE 1 END +
CASE WHEN pvt.column_2 IS NULL THEN 0 ELSE 1 END +
CASE WHEN pvt.column_3 IS NULL THEN 0 ELSE 1 END
, 0)
AS Average
FROM (
SELECT id,
AVG(column_1) as column_1,
AVG(column_2) as column_2,
AVG(column_3) as column_3
FROM data
GROUP BY id
) as pvt
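Another option (a sketch, not from the original answers): unpivot with CROSS APPLY (VALUES ...) instead of UNPIVOT, since CROSS APPLY keeps rows whose values are all NULL, so AVG simply returns NULL for those groups:
SELECT pvt.id, AVG(v.Value) AS Average
FROM (
    SELECT id,
        AVG(column_1) AS column_1,
        AVG(column_2) AS column_2,
        AVG(column_3) AS column_3
    FROM @partstat
    GROUP BY id
) AS pvt
CROSS APPLY (VALUES (pvt.column_1), (pvt.column_2), (pvt.column_3)) AS v(Value)
GROUP BY pvt.id
With the sample data above, id 7 comes back with a NULL Average instead of disappearing from the result.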

Need to convert a recursive CTE query to an index friendly query

After going through all the hard work of writing a recursive CTE query to meet my needs, I realize I can't use it because it doesn't work in an indexed view. So I need something else to replace the CTE below. (Yes, you can use a CTE in a non-indexed view, but that's too slow for me.)
The requirements:
My ultimate goal is to have a self updating indexed view (it doesn't have to be a view, but something similar)... that is, if data changes in any of the tables the view joins on, then the view needs to update itself.
The view needs to be indexed because it has to be very fast, and the data doesn't change very frequently. Unfortunately, the non-indexed view using a CTE takes 3-5 seconds to run which is way too long for my needs. I need the query to run in milliseconds. The recursive table has a few hundred thousand records in it.
As far as my research has taken me, the best solution to meet all these requirements is an indexed view, but I'm open to any solution.
The CTE can be found in the answer to my other post.
Or here it is again:
DECLARE @tbl TABLE (
Id INT
,[Name] VARCHAR(20)
,ParentId INT
)
INSERT INTO @tbl( Id, Name, ParentId )
VALUES
(1, 'Europe', NULL)
,(2, 'Asia', NULL)
,(3, 'Germany', 1)
,(4, 'UK', 1)
,(5, 'China', 2)
,(6, 'India', 2)
,(7, 'Scotland', 4)
,(8, 'Edinburgh', 7)
,(9, 'Leith', 8)
;
DECLARE @tbl2 TABLE (id int, abbreviation varchar(10), tbl_id int)
INSERT INTO @tbl2( Id, Abbreviation, tbl_id )
VALUES
(100, 'EU', 1)
,(101, 'AS', 2)
,(102, 'DE', 3)
,(103, 'CN', 5)
;WITH abbr AS (
SELECT a.*, isnull(b.abbreviation,'') abbreviation
FROM @tbl a
left join @tbl2 b on a.Id = b.tbl_id
), abcd AS (
-- anchor
SELECT id, [Name], ParentID,
CAST(([Name]) AS VARCHAR(1000)) [Path],
cast(abbreviation as varchar(max)) abbreviation
FROM abbr
WHERE ParentId IS NULL
UNION ALL
--recursive member
SELECT t.id, t.[Name], t.ParentID,
CAST((a.path + '/' + t.Name) AS VARCHAR(1000)) [Path],
isnull(nullif(t.abbreviation,'')+',', '') + a.abbreviation
FROM abbr AS t
JOIN abcd AS a
ON t.ParentId = a.id
)
SELECT *, [Path] + ':' + abbreviation
FROM abcd
After hitting all the roadblocks with indexed views (self join, CTE, UDF accessing data, etc.), I propose the below as a solution for you.
Create support function
Based on a maximum depth of 4 from the root (5 levels total). Alternatively, use a CTE.
CREATE FUNCTION dbo.GetHierPath(#hier_id int) returns varchar(max)
WITH SCHEMABINDING
as
begin
return (
select FullPath =
isnull(H5.Name+'/','') +
isnull(H4.Name+'/','') +
isnull(H3.Name+'/','') +
isnull(H2.Name+'/','') +
H1.Name
+
':'
+
isnull(STUFF(
isnull(','+A1.abbreviation,'') +
isnull(','+A2.abbreviation,'') +
isnull(','+A3.abbreviation,'') +
isnull(','+A4.abbreviation,'') +
isnull(','+A5.abbreviation,''),1,1,''),'')
from dbo.HIER H1
left join dbo.ABBR A1 on A1.hier_id = H1.Id
left join dbo.HIER H2 on H1.ParentId = H2.Id
left join dbo.ABBR A2 on A2.hier_id = H2.Id
left join dbo.HIER H3 on H2.ParentId = H3.Id
left join dbo.ABBR A3 on A3.hier_id = H3.Id
left join dbo.HIER H4 on H3.ParentId = H4.Id
left join dbo.ABBR A4 on A4.hier_id = H4.Id
left join dbo.HIER H5 on H4.ParentId = H5.Id
left join dbo.ABBR A5 on A5.hier_id = H5.Id
where H1.id = #hier_id)
end
GO
Add columns to the table itself
For example, the FullPath column; if you need them, add the other 2 columns from the CTE by splitting the result of dbo.GetHierPath on ':' (left => path, right => abbreviations).
-- index maximum key length is 900, based on your data, 400 is enough
ALTER TABLE HIER ADD FullPath VARCHAR(400)
Maintain the columns
Because of the hierarchical nature, a record X could be deleted in a way that affects both a descendant Y and an ancestor Z, which is quite hard to identify in either an INSTEAD OF or an AFTER trigger. So the alternative approach is based on these conditions:
if data changes in any of the tables the view joins on, then the view needs to update itself.
the non-indexed view using a CTE takes 3-5 seconds to run which is way too long for my needs
We maintain the data simply by running through the entire table again, taking 3-5 seconds per update (or faster if the 5-join query works out better).
CREATE TRIGGER TG_HIER
ON HIER
AFTER INSERT, UPDATE, DELETE
AS
UPDATE HIER
SET FullPath = dbo.GetHierPath(HIER.Id)
Finally, index the new column(s) on the table itself
create index ix_hier_fullpath on HIER(FullPath)
If you intend to access the path data via the id, it is already in the table itself without adding an additional index.
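For example, a lookup by id needs nothing beyond the existing clustered primary key (a trivial usage sketch against the sample data below, once the trigger has populated FullPath):
SELECT FullPath -- 'Europe/UK/Scotland/Edinburgh/Leith:EU'
FROM dbo.HIER
WHERE Id = 9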
The above TSQL references these objects
Modify the table and column names to suit your schema.
CREATE TABLE dbo.HIER (Id INT Primary Key Clustered, [Name] VARCHAR(20) ,ParentId INT)
;
INSERT dbo.HIER( Id, Name, ParentId ) VALUES
(1, 'Europe', NULL)
,(2, 'Asia', NULL)
,(3, 'Germany', 1)
,(4, 'UK', 1)
,(5, 'China', 2)
,(6, 'India', 2)
,(7, 'Scotland', 4)
,(8, 'Edinburgh', 7)
,(9, 'Leith', 8)
,(10, 'Antartica', NULL)
;
CREATE TABLE dbo.ABBR (id int primary key clustered, abbreviation varchar(10), hier_id int)
;
INSERT dbo.ABBR( Id, Abbreviation, hier_id ) VALUES
(100, 'EU', 1)
,(101, 'AS', 2)
,(102, 'DE', 3)
,(103, 'CN', 5)
GO
EDIT - Possibly faster alternative
Given that all records are recalculated each time, there is no real need for a function that returns the FullPath for a single HIER.ID. The query in the support function can be used without the where H1.id = #hier_id filter at the end. Furthermore, the expression for FullPath can be broken into PathOnly and Abbreviation easily down the middle. Or just use the original CTE, whichever is faster.
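A sketch of that set-based version (same assumptions as the support function: the HIER/ABBR schema above and a maximum depth of 5 levels), inlining the function body into a single UPDATE with the WHERE filter removed:
UPDATE H1
SET FullPath =
    isnull(H5.Name+'/','') +
    isnull(H4.Name+'/','') +
    isnull(H3.Name+'/','') +
    isnull(H2.Name+'/','') +
    H1.Name
    + ':' +
    isnull(STUFF(
        isnull(','+A1.abbreviation,'') +
        isnull(','+A2.abbreviation,'') +
        isnull(','+A3.abbreviation,'') +
        isnull(','+A4.abbreviation,'') +
        isnull(','+A5.abbreviation,''),1,1,''),'')
FROM dbo.HIER H1
left join dbo.ABBR A1 on A1.hier_id = H1.Id
left join dbo.HIER H2 on H1.ParentId = H2.Id
left join dbo.ABBR A2 on A2.hier_id = H2.Id
left join dbo.HIER H3 on H2.ParentId = H3.Id
left join dbo.ABBR A3 on A3.hier_id = H3.Id
left join dbo.HIER H4 on H3.ParentId = H4.Id
left join dbo.ABBR A4 on A4.hier_id = H4.Id
left join dbo.HIER H5 on H4.ParentId = H5.Id
left join dbo.ABBR A5 on A5.hier_id = H5.Id
The trigger body above could then use this statement directly instead of calling the scalar function once per row.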

Arel causing infinite loop on aggregation

I am having trouble using Arel to aggregate 2 columns in the same query. When I run this, the whole server freezes for a minute before the Rails dev server crashes. I suspect an infinite loop :).
Maybe I have misunderstood the whole concept of Arel, and I would be grateful if anybody could take a look at it.
The expected result of this query is something like this:
[{:user_id => 1, :sum_account_charges => 300, :sum_paid_debts => 1000},...]
a_account_charges = Table(:account_charges)
a_paid_debts = Table(:paid_debts)
a_participants = Table(:expense_accounts_users)
account_charge_sum = a_account_charges
.where(a_account_charges[:expense_account_id].eq(id))
.group(a_account_charges[:user_id])
.project(a_account_charges[:user_id], a_account_charges[:cost].sum)
paid_debts_sum = a_paid_debts
.where(a_paid_debts[:expense_account_id].eq(id))
.group(a_paid_debts[:from_user_id])
.project(a_paid_debts[:from_user_id], a_paid_debts[:cost].sum)
charges = a_participants
.where(a_participants[:expense_account_id].eq(id))
.join(account_charge_sum)
.on(a_participants[:user_id].eq(account_charge_sum[:user_id]))
.join(paid_debts_sum)
.on(a_participants[:user_id].eq(paid_debts_sum[:from_user_id]))
I'm new to arel, but after banging on this for several days and really digging, I don't think it can be done. Here's an outline of what I've done; if anyone has any additional insight, it would be welcome.
First, these scripts will create the test tables and populate them with test data. I have set up 9 expense_account_users, each with a different set of charges/paid_debts, as follows: 1 charge/1 payment, 2 charges/2 payments, 2 charges/1 payment, 2 charges/0 payments, 1 charge/2 payments, 0 charges/2 payments, 1 charge/0 payments, 0 charges/1 payment, 0 charges, 0 payments.
CREATE TABLE IF NOT EXISTS `expense_accounts_users` (
`id` int(11) NOT NULL AUTO_INCREMENT,
`expense_account_id` int(11) DEFAULT NULL,
PRIMARY KEY (`id`)
) ENGINE=InnoDB DEFAULT CHARSET=utf8 COLLATE=utf8_unicode_ci AUTO_INCREMENT=10 ;
INSERT INTO `expense_accounts_users` (`id`, `expense_account_id`) VALUES (1, 1), (2, 1), (3, 1), (4, 1), (5, 1), (6, 1), (7, 1), (8, 1), (9, 1);
CREATE TABLE IF NOT EXISTS `account_charges` (
`id` int(11) NOT NULL AUTO_INCREMENT,
`expense_account_id` int(11) DEFAULT NULL,
`user_id` int(11) DEFAULT NULL,
`cost` int(11) DEFAULT NULL,
PRIMARY KEY (`id`)
) ENGINE=InnoDB DEFAULT CHARSET=utf8 COLLATE=utf8_unicode_ci AUTO_INCREMENT=10 ;
INSERT INTO `account_charges` (`id`, `expense_account_id`, `user_id`, `cost`) VALUES (1, 1, 1, 1), (2, 1, 2, 1), (3, 1, 2, 2), (4, 1, 3, 1), (5, 1, 3, 2), (6, 1, 4, 1), (7, 1, 5, 1), (8, 1, 5, 2), (9, 1, 7, 1);
CREATE TABLE IF NOT EXISTS `paid_debts` (
`id` int(11) NOT NULL AUTO_INCREMENT,
`expense_account_id` int(11) DEFAULT NULL,
`user_id` int(11) DEFAULT NULL,
`cost` int(11) DEFAULT NULL,
PRIMARY KEY (`id`)
) ENGINE=InnoDB DEFAULT CHARSET=utf8 COLLATE=utf8_unicode_ci AUTO_INCREMENT=10 ;
INSERT INTO `paid_debts` (`id`, `expense_account_id`, `user_id`, `cost`) VALUES (1, 1, 1, 1), (2, 1, 2, 1), (3, 1, 2, 2), (4, 1, 3, 1), (5, 1, 4, 1), (6, 1, 4, 2), (7, 1, 6, 1), (8, 1, 6, 2), (9, 1, 8, 1);
Ultimately, to get the data that you are after in one fell swoop, this is the SQL statement you'd use:
SELECT user_charges.user_id,
user_charges.sum_cost,
COALESCE(SUM(paid_debts.cost), 0) AS 'sum_paid'
FROM (
SELECT expense_accounts_users.id AS 'user_id',
COALESCE(sum(account_charges.cost), 0) AS 'sum_cost'
FROM expense_accounts_users
LEFT OUTER JOIN account_charges on expense_accounts_users.id = account_charges.user_id
GROUP BY expense_accounts_users.id)
AS user_charges
LEFT OUTER JOIN paid_debts ON user_charges.user_id = paid_debts.user_id
GROUP BY user_charges.user_id
You have to do a LEFT OUTER JOIN between users and charges first so that you get a row for every user, then you have to LEFT OUTER JOIN the result to debts in order to avoid multiplying your results with the two joins inside the same construct.
(note the use of COALESCE to convert NULL values from the LEFT OUTER JOINs to zeroes - a convenience item, perhaps)
The result of this statement is this:
user_id sum_cost sum_paid
1 1 1
2 3 3
3 3 1
4 1 3
5 3 0
6 0 3
7 1 0
8 0 1
9 0 0
After many attempts, I found that this arel code came the closest to what we are after:
c = Arel::Table.new(:account_charges)
d = Arel::Table.new(:paid_debts)
p = Arel::Table.new(:expense_accounts_users)
user_charges = p
.where(p[:expense_account_id].eq(1))
.join(c, Arel::Nodes::OuterJoin)
.on(p[:id].eq(c[:user_id]))
.project(p[:id], c[:cost].sum.as('sum_cost'))
.group(p[:id])
charges = user_charges
.join(d, Arel::Nodes::OuterJoin)
.on(p[:id].eq(d[:user_id]))
.project(d[:cost].sum.as('sum_paid'))
Essentially, I am joining users to charges in the first construct with a LEFT OUTER JOIN, then trying to take the result of that and LEFT OUTER JOIN it back to debts. This arel code produces the following SQL statement:
SELECT `expense_accounts_users`.`id`,
SUM(`account_charges`.`cost`) AS sum_cost,
SUM(`paid_debts`.`cost`) AS sum_paid
FROM `expense_accounts_users`
LEFT OUTER JOIN `account_charges` ON `expense_accounts_users`.`id` = `account_charges`.`user_id`
LEFT OUTER JOIN `paid_debts` ON `expense_accounts_users`.`id` = `paid_debts`.`user_id`
WHERE `expense_accounts_users`.`expense_account_id` = 1
GROUP BY `expense_accounts_users`.`id`
Which, when run, produces this output:
id sum_cost sum_paid
1 1 1
2 6 6
3 3 2
4 2 3
5 3 NULL
6 NULL 3
7 1 NULL
8 NULL 1
9 NULL NULL
Very close, but not quite. First, the lack of COALESCE gives us NULL values instead of zeroes - I'm not sure how to effect the COALESCE function call from within arel.
More importantly, though, combining the LEFT OUTER JOINs into one statement without the internal subselect is resulting in the sum_paid totals getting multiplied up in examples 2, 3 and 4 - any time there is more than one of either charge or payment and at least one of the other.
Based on some online readings, I had hoped that changing the arel slightly would solve the problem:
charges = user_charges
.join(d, Arel::Nodes::OuterJoin)
.on(user_charges[:id].eq(d[:user_id]))
.project(d[:cost].sum.as('sum_paid'))
But any time I used user_charges[] in the second arel construct, I got an undefined method error for SelectManager#[]. This may be a bug, or may be right - I really can't tell.
I just don't see that arel has a way to make use of the SQL from the first construct as a queryable object in the second construct with the necessary subquery alias, as is needed to make this happen in one SQL statement.