Order by the IN value with jsonb array - sql

I am trying to select records based on the order of array elements of another row:
SELECT *
FROM tester
WHERE id IN (
SELECT jsonb_array_elements(d->'fam')->>'id'
FROM tester
WHERE id='3'
)
I am aware of this solution.
The difference is that I don't know how to dynamically generate the "ordering" value. Is that possible?
The full fiddle is here.
I would like to see results based on the order found in the json data:
id name d
-- ----- --------
2 barb {"fam": [{"id": 1}, {"id": 3}]}
4 jaimie {"fam": [{"id": 3}, {"id": 2}, {"id": 1}]}
1 bob {"fam": [{"id": 3}, {"id": 2}, {"id": 4}]}

Use WITH ORDINALITY in a LATERAL join to preserve the original order of the array:
SELECT t.*
FROM tester t1
CROSS JOIN jsonb_array_elements(t1.d->'fam') WITH ORDINALITY fam(id, ord)
JOIN tester t ON t.id = fam.id->>'id'
WHERE t1.id = '3'
ORDER BY fam.ord;
SQL Fiddle.
Note a subtle difference: The IN construct in your original query not only removes original order, it also removes duplicates. My query keeps all IDs extracted from the array, duplicate or not, in original order.
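If you ever do want the deduplicating behavior of IN while still keeping the array order, here is a minimal sketch (it renames the element column to elem and assumes id is text, like the join above), keeping only each id's first position with DISTINCT ON:
SELECT t.*
FROM (
   SELECT DISTINCT ON (fam.elem->>'id')
          fam.elem->>'id' AS fid, fam.ord
   FROM tester t1
   CROSS JOIN jsonb_array_elements(t1.d->'fam') WITH ORDINALITY fam(elem, ord)
   WHERE t1.id = '3'
   ORDER BY fam.elem->>'id', fam.ord
) ids
JOIN tester t ON t.id = ids.fid
ORDER BY ids.ord;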
The LATERAL keyword is optional noise for table functions, as is the AS keyword for table aliases. Both would go here:
CROSS JOIN LATERAL jsonb_array_elements(d->'fam') WITH ORDINALITY AS fam(id, ord)
Related:
PostgreSQL unnest() with element number
How to get elements with a unique number from a json array in PostgreSQL?
Query for array elements inside JSON type
What is the difference between LATERAL and a subquery in PostgreSQL?

Related

How to perform ILIKE query against JSONB type column?

I have a column "category_products" with datatype JSONB. The data in that column is an array of objects, and each of those objects contains another array of objects.
I need to perform an ILIKE query against product_name.
example
category_products
-----------------
[{"products":[{product_name: product_one, price: 123}, {product_name: product_two, price: 999}]]
You may first flatten your data using a lateral join with jsonb_path_query and then apply an ILIKE in a WHERE clause as you need. Here is an illustration.
See the demo.
select id, l, l ->> 'product_name' as prod
from the_table,
lateral jsonb_path_query(category_products, '$[*].products[*]') as l;
Please note that your sample data are not valid JSON at all.
Unrelated but this would be so much easier and cleaner with a normalized data design.
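To make the ILIKE part concrete, here is a sketch of the filter applied on top of the flattened rows (it assumes the keys shown in your example, stored as valid JSON):
select id, l ->> 'product_name' as prod, l ->> 'price' as price
from the_table,
lateral jsonb_path_query(category_products, '$[*].products[*]') as l
where l ->> 'product_name' ilike '%one%';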
Edit
As jsonb_path_query does not exist in pre-PG12 versions here is an alternative and a new demo.
select id, l, l ->> 'product_name' as prod
from the_table,
lateral jsonb_array_elements(category_products) as arr_ex,
lateral jsonb_array_elements(arr_ex -> 'products') as l;

Is there a way to look up a jsonb column by its values

I have a table, test, in postgres 12 with a jsonb column, data_col, that has many different keys and values.
My requirement is to select * from that table where value matches a string.
for example, the table has data as below
id some_value data_col
---------------------------------------------------------------
11 2018 {"a": "Old Farm"}
12 2019 {"b": "My house is old", "c": "See you tomorrow"}
13 2020 {"d": "The old house", "a": "Very Green", "e": "Olden days"}
As you can see, there are many different keys, so it's not practical to look values up the way the examples on the web suggest, i.e. col_name->>'Key'.
I am looking to write a SQL query with a WHERE clause that gives me all rows containing the string "old".
Something like:
select * from test where data_col ILIKE '%old%'
should give me
11, 2018, Old Farm
12, 2019, My house is old
13, 2020, Olden days
One option uses jsonb_each_text():
select t.*, x.*
from test t
cross join lateral jsonb_each_text(t.data_col) x
where x.value ilike '%old%'
Note that this multiplies the rows if an object contains "old" more than once. To avoid that, you can use exists instead:
select t.*
from test t
where exists (
select 1
from jsonb_each_text(t.data_col) x
where x.value ilike '%old%'
)
Or if you want to aggregate all the matched values in one column:
select t.*, x.*
from test t
cross join lateral (
select string_agg(x.value, ',') as vals
from jsonb_each_text(t.data_col) x
where x.value ilike '%old%'
) x
where x.vals is not null
As you are using Postgres 12, you can use a SQL/JSON path function:
select id, some_value,
jsonb_path_query_first(data_col, '$.* ? (@ like_regex "old" flag "i")') #>> '{}'
from test
The #>> operator is only there to convert the scalar JSON value into text (as there is no direct cast from jsonb to text that would remove the double quotes).
If there are potentially more values containing the substring, you can use jsonb_path_query_array() to get all of them as an array (you then need to drop the #>> part).
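For example, a minimal sketch that collects every matching value per row and uses the @? operator to keep only rows with at least one match:
select id, some_value,
jsonb_path_query_array(data_col, '$.* ? (@ like_regex "old" flag "i")') as matches
from test
where data_col @? '$.* ? (@ like_regex "old" flag "i")';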

How to get rows in Postgres saving initial order in request?

I have an array of ids:
[0, 1, 2, 3, 4]
and I want to get rows from a Postgres table with these ids, preserving the order given in my array.
To get these rows I use select * from "table_name" WHERE id IN (ids).
After this query Postgres can return rows in this order
[4, 2, 1, 0, 3]
I know that I can restore the original order myself, but maybe there is a way to solve the problem with another query?
Join against the unnested array rather than using an IN:
select t.*
from table_name t
join unnest(array[0,1,2,3,4]) with ordinality as a(id, idx) on a.id = t.id
order by a.idx;
The option with ordinality will return the index of each element in the array. And that index can then be used to sort the result.
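An alternative sketch, if you are on Postgres 9.5 or later, uses array_position() to look up each id's index in the original array (array[0,1,2,3,4] stands in for your id list):
select t.*
from table_name t
where t.id = any(array[0,1,2,3,4])
order by array_position(array[0,1,2,3,4], t.id);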

Postgresql, jsonb with multiple keys is returning a single row for each key

Here's my situation: I have rows with a json column, and what I've been trying to do is get all the values for all the keys in that json in just one row.
Let's say I have a row with the json value:
{"key1": "a", "key2": "b"}
Now, is it possible to extract the values as such: ["a", "b"]?
I attempted this so far:
select ---- some sum() fields ----,
b.match_data::json -> jsonb_object_keys(b.match_data) as "Course"
from --- tables ---
join -- tables ---
where -- condition ---
group by -- sum() fields ----, b.match_data
The problem with this is that for json with multiple keys, it is returning multiple rows.
demo: db<>fiddle
WITH jsondata AS (
SELECT '{"key1": "a", "key2": "b"}'::jsonb as data -- A
)
SELECT jsonb_agg(value) -- C
FROM jsondata, jsonb_each(data) -- B
Postgres JSON functions, Postgres (JSON) aggregate functions
A: CTE to work with your data
B: jsonb_each expands your data; result:
key value
key1 "a"
key2 "b"
C: jsonb_agg aggregates the value column into a json array with the expected result: ["a", "b"].
If you do not want the result as a json array but as a normal text array, change jsonb_each to jsonb_each_text and jsonb_agg to array_agg (see fiddle).
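A minimal sketch of that text-array variant, reusing the same sample data:
WITH jsondata AS (
    SELECT '{"key1": "a", "key2": "b"}'::jsonb AS data
)
SELECT array_agg(value) -- text[] result: {a,b}
FROM jsondata, jsonb_each_text(data)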
I used jsonb as type. Of course all functions exist for type json as well.
(Postgres JSON types)
S-Man's answer pointed me in the direction of aggregate functions, and after a bit of trial and error I got my answer:
(select string_agg((select value from jsonb_array_elements_text(value)), ',')
from jsonb_each(b.match_data)) "Course"
It collects and displays values as a, b,... in one single row.

How to sort a PostgreSQL JSON field [duplicate]

This question already has answers here:
How to reorder array in JSONB type column
(1 answer)
JSONB column: sort only content of arrays stored in column with mixed JSONB content
(1 answer)
Closed 4 years ago.
I have a table with a json field. I want to group and count that field to understand its usage frequency. But the data in this field is stored in a random order, so I can get the wrong result. That's why I decided to sort that field before grouping, but I failed. How can I do it?
JSON_FIELD
["one", "two"]
["two"]
["two", "one"]
["two"]
["three"]
I've tried this code:
SELECT JSON_FIELD, COUNT(1) FROM my_table GROUP BY JSON_FIELD;
But the result w/o sorting is wrong:
JSON_FIELD COUNT
["one", "two"] 1
["two"] 2
["two", "one"] 1
["three"] 1
But if I could sort it somehow, the expected (and correct) result would be:
JSON_FIELD COUNT
["one", "two"] 2
["two"] 2
["three"] 1
My question is very similar to How to convert json array into postgres int array in postgres 9.3.
A bit messy but works:
SELECT ja::TEXT::JSON, COUNT(*)
FROM (
SELECT JSON_AGG(je ORDER BY je::TEXT) AS ja
FROM (
SELECT JSON_ARRAY_ELEMENTS(j) je, ROW_NUMBER() OVER () AS r
FROM (
VALUES
('["one", "two"]'::JSON),
('["two"]'),
('["two", "one"]'),
('["two"]'),
('["three"]')
) v(j)
) el
GROUP BY r
) x
GROUP BY ja::TEXT
Result:
ja              count
--------------  -----
["one", "two"]  2
["two"]         2
["three"]       1
BTW the casting of JSON values to TEXT is because (at least in PG 9.3) there are no JSON equality operators, so I cast to TEXT in order to be able to do GROUP or ORDER BY, then back to JSON.
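For what it's worth, here is a sketch of the same idea using jsonb, which does have equality and ordering operators, so the text round-trip is not needed:
SELECT sorted, COUNT(*)
FROM (
    SELECT jsonb_agg(el ORDER BY el) AS sorted
    FROM (
        SELECT ROW_NUMBER() OVER () AS r, j
        FROM (VALUES
            ('["one", "two"]'::jsonb),
            ('["two"]'),
            ('["two", "one"]'),
            ('["two"]'),
            ('["three"]')
        ) v(j)
    ) src
    CROSS JOIN LATERAL jsonb_array_elements(src.j) AS arr(el)
    GROUP BY src.r
) x
GROUP BY sorted;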