I have two tables like this:
Table1:
LOAD * INLINE [
OrderItem
OI1
OI2
OI3
OI4
OI6
];
Table2:
LOAD * INLINE [
OrderItem
OI2
OI3
OI4
OI5
OI6
];
Now I want a third table which shows me that only OI5 is not in "Table1"!
A Listbox solution is also ok.
I tried some things with joins but it didn't work. I also read this,
but it only showed me the difference between the two tables in a listbox. See below:
Table1:
LOAD * INLINE [
OrderItem
OI1
OI2
OI3
OI4
OI6
];
Concatenate(Table1)
Table2:
LOAD * INLINE [
OrderItem
OI2
OI3
OI4
OI5
OI6
];
INNER JOIN (Table1)
LOAD *
WHERE "Only in One Table?"
;
// Here I want "OI5" as an output
Difference:
LOAD
OrderItem,
if(count(OrderItem)<2,-1) as "Only in One Table?"
RESIDENT Table1 GROUP BY OrderItem
;
Result:
Thanks!
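A side note on why the count-based attempt only gives the symmetric difference: counting occurrences after the Concatenate flags values unique to either table, not just values missing from Table1. A minimal Python sketch of the count(OrderItem) < 2 logic, using hypothetical lists that mirror the inline tables:

```python
from collections import Counter

table1 = ["OI1", "OI2", "OI3", "OI4", "OI6"]
table2 = ["OI2", "OI3", "OI4", "OI5", "OI6"]

# Concatenate both tables, then flag values occurring fewer than twice --
# the same idea as count(OrderItem) < 2 after the Concatenate step.
counts = Counter(table1 + table2)
only_in_one = [item for item, n in counts.items() if n < 2]

print(only_in_one)  # ['OI1', 'OI5']
```

Note that OI1 shows up as well, which is exactly the listbox problem described above; a one-directional difference needs an anti-join instead.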
One possible solution:
Table1:
LOAD * INLINE [
OrderItem1
OI1
OI2
OI3
OI4
OI6
];
Table2:
LOAD * INLINE [
OrderItem2
OI2
OI3
OI4
OI5
OI6
];
Missings:
Load
OrderItem2 as MissingsOrderItem
Resident
Table2
Where
Not Exists(OrderItem1, OrderItem2)
;
After reload the Missings table will contain OI5. In SQL terms, this load is the equivalent of:
SELECT OrderItem FROM Table2 WHERE OrderItem NOT IN (SELECT OrderItem FROM Table1);
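The Not Exists() load is the script-side version of an anti-join. A minimal sketch of the same one-directional difference in plain SQL, using Python's sqlite3 with hypothetical table and column names:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE table1 (order_item TEXT);
    CREATE TABLE table2 (order_item TEXT);
    INSERT INTO table1 VALUES ('OI1'), ('OI2'), ('OI3'), ('OI4'), ('OI6');
    INSERT INTO table2 VALUES ('OI2'), ('OI3'), ('OI4'), ('OI5'), ('OI6');
""")

# Rows of table2 with no matching row in table1 -- the anti-join.
missing = conn.execute("""
    SELECT t2.order_item
    FROM table2 t2
    WHERE NOT EXISTS (
        SELECT 1 FROM table1 t1 WHERE t1.order_item = t2.order_item
    )
""").fetchall()

print(missing)  # [('OI5',)]
```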
Related
I'm working with a Postgres database and I have a products view like this:
id | name     | product_groups
1  | product1 | [{...}]
2  | product2 | [{...}]
the product_groups field contains an array of json objects with the product groups data that the product belongs to, where each json object has the following structure:
{
"productGroupId": 1001,
"productGroupName": "Microphones",
"orderNo": 1
}
I have a query to get all the products that belong to certain group:
SELECT * FROM products p WHERE p.product_groups @> '[{"productGroupId": 1001}]'
but I want to get all the products ordered by the orderNo property of the group that I'm querying for.
what should I add/modify to my query in order to achieve this?
I am not really sure I understand your question. My assumptions are:
there will only be one match for the condition on product groups
you want to sort the result rows from the products table, not the elements of the array.
If those two assumptions are correct, you can use a JSON path expression to extract the value of orderNo and then sort by it.
SELECT p.*
FROM products p
WHERE p.product_groups @> '[{"productGroupId": 1001}]'
ORDER BY jsonb_path_query_first(p.product_groups, '$[*] ? (@.productGroupId == 1001).orderNo')::int
You have to unnest the array:
SELECT p.*
FROM products AS p
CROSS JOIN LATERAL jsonb_array_elements(p.product_groups) AS arr(elem)
WHERE arr.elem @> '{"productGroupId": 1001}'
ORDER BY CAST(arr.elem ->> 'orderNo' AS bigint);
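The filter-then-sort logic of both answers can be sketched in plain Python; this is a hypothetical stand-in for the view's rows, not the Postgres query itself:

```python
import json

# Hypothetical rows, shaped like the products view.
products = [
    {"id": 1, "name": "product1", "product_groups": json.dumps(
        [{"productGroupId": 1001, "productGroupName": "Microphones", "orderNo": 2}])},
    {"id": 2, "name": "product2", "product_groups": json.dumps(
        [{"productGroupId": 1001, "productGroupName": "Microphones", "orderNo": 1}])},
]

def order_no(row, group_id=1001):
    """Return orderNo of the matching group -- the jsonb_path_query_first analogue."""
    groups = json.loads(row["product_groups"])
    return next(g["orderNo"] for g in groups if g["productGroupId"] == group_id)

# Keep only products containing group 1001, then sort by that group's orderNo.
matching = [r for r in products
            if any(g["productGroupId"] == 1001
                   for g in json.loads(r["product_groups"]))]
matching.sort(key=order_no)

print([r["name"] for r in matching])  # ['product2', 'product1']
```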
I have a jsonb column in one of my tables.
The jsonb looks like this:
my_data : [
{pid: 1, stock: 500},
{pid: 2, stock: 1000},
...
]
pid refers to the products table's id (which is pid)
EDIT: The table products has the following properties: pid (PK), name
I want to loop over my_data[] in my jsonb and fetch each pid's name from the products table.
I need the result to look something like this (including the product names from the second table) ->
my_data : [
{
product_name : "abc",
pid: 1,
stock : 500
},
...
]
How should I go about performing such a jsonb inner join?
Edit: I tried S-Man's solutions and I'm getting this error
"invalid reference to FROM-clause entry for table \"jc\""
here is the SQL query:
step-by-step demo:db<>fiddle
SELECT
jsonb_build_object( -- 5
'my_data',
jsonb_agg( -- 4
elems || jsonb_build_object('product_name', mot.product_name) -- 3
)
)
FROM
mytable,
jsonb_array_elements(mydata -> 'my_data') as elems -- 1
JOIN
my_other_table mot ON (elems ->> 'pid')::int = mot.pid -- 2
Expand the JSON array into one row per array element
Join the other table against the current one using the pid values (notice the ::int cast, because otherwise it would be a text value)
The new columns from the second table can now be converted into a JSON object, which can be concatenated onto the original one using the || operator
After that, recreate the array from the array elements again
Put the resulting array into a my_data element
Another way is using jsonb_set() instead of step 5 to set the new array into the original object directly:
step-by-step demo:db<>fiddle
SELECT
jsonb_set(
mydata,
'{my_data}',
jsonb_agg(
elems || jsonb_build_object('product_name', mot.product_name)
)
)
FROM
mytable,
jsonb_array_elements(mydata -> 'my_data') as elems
JOIN
my_other_table mot ON (elems ->> 'pid')::int = mot.pid
GROUP BY mydata
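The expand, join, merge, and re-aggregate steps can be sketched in plain Python (hypothetical data; the dict stands in for the products table, and "def" is an invented second name):

```python
import json

mydata = {"my_data": [{"pid": 1, "stock": 500}, {"pid": 2, "stock": 1000}]}
product_names = {1: "abc", 2: "def"}  # stands in for the products table

# 1. expand the array, 2. join on pid, 3. merge in the extra column,
# 4. re-aggregate, 5. write it back under the same key (the jsonb_set step)
mydata["my_data"] = [
    {**elem, "product_name": product_names[elem["pid"]]}
    for elem in mydata["my_data"]
]

print(json.dumps(mydata))
```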
Example with tables :
Promotion(idPromo, namePromo)
Company(idCompany, nameCompany)
PromoCompany(idPromo, idCompany)
I am trying to get, with only one query, all promotions that are linked to a given company (idCompany = 1 for example) OR that are linked to no company at all.
To describe:
PromoCompany is a restriction table: if rows exist for a promotion, the promotion is valid only for those companies; if there are no rows, the promotion is valid for all companies.
Example :
Promo[{
idPromo:1
namePromo:"promo test"
},
{
idPromo:2
namePromo:"promo test 2"
}]
Company[{
idCompany:10
nameCompany:"CompanyPloof"
},{
idCompany:12
nameCompany:"CompanyPaf"
}
]
PromoCompany[{
idPromo:1
idCompany:10
},{
idPromo:1
idCompany:12
}
]
If my company is CompanyPloof, the promos are idPromo 1 and 2
If my company is CompanyPaf, the promos are idPromo 2 (because it is not restricted)
Hmmm . . . this sounds like two conditions:
select p.*
from promotions p
where exists (select 1 from promocompany pc where pc.idpromo = p.idpromo and pc.idcompany = 1) or
not exists (select 1 from promocompany pc where pc.idpromo = p.idpromo);
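A quick check of the two-condition query against the example data, here with Python's sqlite3 and lower-cased hypothetical table names (company 10 plays the role of "my company"):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE promotions (idpromo INT, namepromo TEXT);
    CREATE TABLE promocompany (idpromo INT, idcompany INT);
    INSERT INTO promotions VALUES (1, 'promo test'), (2, 'promo test 2');
    -- promo 1 is restricted to companies 10 and 12; promo 2 is unrestricted
    INSERT INTO promocompany VALUES (1, 10), (1, 12);
""")

# Promotions restricted to company 10, plus promotions with no restriction at all.
rows = conn.execute("""
    SELECT p.idpromo
    FROM promotions p
    WHERE EXISTS (SELECT 1 FROM promocompany pc
                  WHERE pc.idpromo = p.idpromo AND pc.idcompany = 10)
       OR NOT EXISTS (SELECT 1 FROM promocompany pc
                      WHERE pc.idpromo = p.idpromo)
    ORDER BY p.idpromo
""").fetchall()

print(rows)  # [(1,), (2,)]
```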
I would suggest a small change to your data model. Add a flag in promotions that says whether the promotion is available for all companies or if it is restricted. Having to search through a table is a little awkward -- and potentially confusing. Adding one row to the table could invalidate a promotion for everyone else.
I am looking for advice on optimizing the following sample query and processing the result. The SQL variant in use is the internal FileMaker ExecuteSQL engine which is limited to the SELECT statement with the following syntax:
SELECT [DISTINCT] {* | column_expression [[AS] column_alias],...}
FROM table_name [table_alias], ...
[ WHERE expr1 rel_operator expr2 ]
[ GROUP BY {column_expression, ...} ]
[ HAVING expr1 rel_operator expr2 ]
[ UNION [ALL] (SELECT...) ]
[ ORDER BY {sort_expression [DESC | ASC]}, ... ]
[ OFFSET n {ROWS | ROW} ]
[ FETCH FIRST [ n [ PERCENT ] ] { ROWS | ROW } {ONLY | WITH TIES } ]
[ FOR UPDATE [OF {column_expression, ...}] ]
The query:
SELECT item1 as val ,interval, interval_next FROM meddata
WHERE fk = 12 AND active1 = 1 UNION
SELECT item2 as val ,interval, interval_next FROM meddata
WHERE fk = 12 AND active2 = 1 UNION
SELECT item3 as val ,interval, interval_next FROM meddata
WHERE fk = 12 AND active3 = 1 UNION
SELECT item4 as val ,interval, interval_next FROM meddata
WHERE fk = 12 AND active4 = 1 ORDER BY val
This may give the following result as a sample:
val,interval,interval_next
Artelac,0,1
Artelac,3,6
Celluvisc,1,3
Celluvisc,12,24
What I am looking to achieve (in addition to suggestions for optimization) is a result formatted like this:
val,interval,interval_next,interval,interval_next,interval,interval_next,interval,interval_next ->etc
Artelac,0,1,3,6
Celluvisc,1,3,12,24
Preferably I would like this processed result to be produced by the SQL engine.
Possible?
Thank you.
EDIT: I included the column names in the result for clarity, though they are not part of the result. I wish to illustrate that there may be an arbitrary number of 'interval' and 'interval_next' columns in the result.
I do not think you need to optimise your query; it looks fine to me.
You are looking for something like PIVOT in T-SQL, which is not supported in FQL. Your biggest issue is going to be the variable number of columns returned.
I think the best approach is to get your intermediate result and use a FileMaker script or Custom Function to pivot it.
An alternative is to get the list of distinct val values and loop through them (with a CF or script), running an FQL statement for each row. You will not be able to combine them with UNION as it needs the same number of columns.
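Since FQL cannot pivot a variable number of columns, the post-processing step suggested above has to happen outside the engine; a minimal Python version of the pivot, assuming the sample intermediate result:

```python
from collections import defaultdict

# The intermediate result rows from the UNION query.
rows = [
    ("Artelac", 0, 1),
    ("Artelac", 3, 6),
    ("Celluvisc", 1, 3),
    ("Celluvisc", 12, 24),
]

# Collapse the (interval, interval_next) pairs of each val onto one row.
pivoted = defaultdict(list)
for val, interval, interval_next in rows:
    pivoted[val].extend([interval, interval_next])

for val, cols in pivoted.items():
    print(",".join([val] + [str(c) for c in cols]))
# Artelac,0,1,3,6
# Celluvisc,1,3,12,24
```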
Question:
I recently had an interesting SQL problem.
I had to get the leasing contract for a leasing object.
The problem was, there could be multiple leasing contracts per room, and multiple leasing object per room.
However, because of bad db tinkering, leasing contracts are assigned to the room, not the leasing object. So I had to take the contract number, and compare it to the leasing object number, in order to get the right results.
I thought this would do:
SELECT *
FROM T_Room
LEFT JOIN T_MAP_Room_LeasingObject
ON MAP_RMLOBJ_RM_UID = T_Room.RM_UID
LEFT JOIN T_LeasingObject
ON LOBJ_UID = MAP_RMLOBJ_LOBJ_UID
LEFT JOIN T_MAP_Room_LeasingContract
ON T_MAP_Room_LeasingContract.MAP_RMCTR_RM_UID = T_Room.RM_UID
LEFT JOIN T_Contracts
ON T_Contracts.CTR_UID = T_MAP_Room_LeasingContract.MAP_RMCTR_CTR_UID
AND T_Contracts.CTR_No LIKE ( ISNULL(T_LeasingObject.LOBJ_No, '') + '.%' )
WHERE ...
However, because the mapping table gets joined before I have the contract number, and I cannot get the contract number without the mapping table, I get duplicate entries.
The problem is a little more complicated, as rooms having no leasing contract also needed to show up, so I couldn't just use an inner join.
With a little bit experimenting, I found that this works as expected:
SELECT *
FROM T_Room
LEFT JOIN T_MAP_Room_LeasingObject
ON MAP_RMLOBJ_RM_UID = T_Room.RM_UID
LEFT JOIN T_LeasingObject
ON LOBJ_UID = MAP_RMLOBJ_LOBJ_UID
LEFT JOIN T_MAP_Room_LeasingContract
LEFT JOIN T_Contracts
ON T_Contracts.CTR_UID = T_MAP_Room_LeasingContract.MAP_RMCTR_CTR_UID
ON T_MAP_Room_LeasingContract.MAP_RMCTR_RM_UID = T_Room.RM_UID
AND T_Contracts.CTR_No LIKE ( ISNULL(T_LeasingObject.LOBJ_No, '') + '.%' )
WHERE ...
I now see why two ON conditions on one join, which are usually courtesy of the query designer, can be useful, and what difference they make.
I was wondering whether this is a MS-SQL/T-SQL specific thing, or whether this is standard sql.
So I tried in PostgreSQL with another 3 tables.
So I wrote this query on 3 other tables:
SELECT *
FROM t_dms_navigation
LEFT JOIN t_dms_document
ON NAV_DOC_UID = DOC_UID
LEFT JOIN t_dms_project
ON PJ_UID = NAV_PJ_UID
and tried to turn it into one with two on conditions
SELECT *
FROM t_dms_navigation
LEFT JOIN t_dms_document
LEFT JOIN t_dms_project
ON PJ_UID = NAV_PJ_UID
ON NAV_DOC_UID = DOC_UID
So I thought it was T-SQL specific, but I quickly tried it in MS-SQL too, only to find to my surprise that it doesn't work there either.
I thought it might be because of missing foreign keys, so I removed them on all tables in my room query, but it still did not work.
So here my question:
Why are 2 on conditions even legal, does this have a name, and why does it not work on my second example ?
It's standard SQL. Each JOIN has to have a corresponding ON clause. All you're doing is shifting around the order that the joins happen in¹ - it's a bit like changing the bracketing of an expression to get around precedence rules.
A JOIN B ON <cond1> JOIN C ON <cond2>
First joins A and B based on cond1. It then takes that combined rowset and joins it to C based on cond2.
A JOIN B JOIN C ON <cond1> ON <cond2>
First joins B and C based on cond1. It then takes A and joins it to the previous combined rowset, based on cond2.
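The double-ON form groups the joins the same way an explicit subquery would; a sketch with Python's sqlite3 and hypothetical tables a, b, c showing "join B and C first, then left-join A to the combined rowset":

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE a (aid INT);
    CREATE TABLE b (bid INT, a_id INT);
    CREATE TABLE c (cid INT, b_id INT);
    INSERT INTO a VALUES (1), (2);
    INSERT INTO b VALUES (10, 1);
    INSERT INTO c VALUES (100, 10);
""")

# A JOIN B JOIN C ON <cond1> ON <cond2> means: combine b and c first,
# then left-join a to that rowset -- spelled here as an explicit subquery.
rows = conn.execute("""
    SELECT a.aid, bc.bid, bc.cid
    FROM a
    LEFT JOIN (SELECT b.bid, b.a_id, c.cid
               FROM b JOIN c ON c.b_id = b.bid) AS bc
           ON bc.a_id = a.aid
    ORDER BY a.aid
""").fetchall()

print(rows)  # [(1, 10, 100), (2, None, None)]
```

Row (2, None, None) survives because the left join to the already-combined b/c rowset preserves unmatched rows of a; flattening the joins left to right would have dropped it.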
It should work in PostgreSQL - here's the relevant part of the documentation of the SELECT statement:
where from_item can be one of:
[ ONLY ] table_name [ * ] [ [ AS ] alias [ ( column_alias [, ...] ) ] ]
( select ) [ AS ] alias [ ( column_alias [, ...] ) ]
with_query_name [ [ AS ] alias [ ( column_alias [, ...] ) ] ]
function_name ( [ argument [, ...] ] ) [ AS ] alias [ ( column_alias [, ...] | column_definition [, ...] ) ]
function_name ( [ argument [, ...] ] ) AS ( column_definition [, ...] )
from_item [ NATURAL ] join_type from_item [ ON join_condition | USING ( join_column [, ...] ) ]
It's that last line that's relevant. Notice that it's a recursive definition - what can be to the left and right of a join can be anything - including more joins. (As for why your second example fails: the inner ON clause, PJ_UID = NAV_PJ_UID, references a column of t_dms_navigation, which is not part of the inner join between t_dms_document and t_dms_project - an ON condition can only reference tables within its own join.)
¹As always with SQL, this is the logical processing order - the system is free to perform physical processing in whatever sequence it feels will work best, provided the result is consistent.