Query JSONB columns with GORM using model - go-gorm

I have a struct/model
type User struct {
    gorm.Model
    Name string         `gorm:"unique;not null" json:"name"`
    Data postgres.Jsonb `json:"data"`
}
I can run this query in postgres:
db=# select id, name, data from users where data @> '{"foo": "bar"}';
 id | name   | data
----+--------+----------------
  6 | user01 | {"foo": "bar"}
  7 | user02 | {"foo": "bar"}
  8 | user03 | {"foo": "bar"}
How do I construct a query on the JSONB column for a particular key (or keys)? I was not able to find any documentation on querying it through model objects. I understand it's possible with a raw query, but I wanted to see how it can be done using the model object, i.e.:
users := []model.User{}
db.Find(&users, map[string]interface{}{"foo": "bar"})
http://gorm.io/docs/dialects.html
http://gorm.io/docs/query.html

In your example you are not specifying which column the map should filter on. Try the jsonb containment operator @>, passing the filter as a JSON value:
db.Find(&users, "data @> ?", `{"foo": "bar"}`)

You can make the query like this:
users := []model.User{}
db.Where("data ->> 'foo' = ?", "bar").Find(&users)

I don't know if this works with the postgres.Jsonb type, but the gorm.io/datatypes package worked for me with JSON stored as a string:
import "gorm.io/datatypes"
db.Where(datatypes.JSONQuery("data").Equals("bar", "foo")).Find(&users)

Related

Postgres get multiple rows into a single json object

I have a users table with columns like id, name, email, etc. I want to retrieve the information of some users as a single JSON object in the following format:
{
"userId_1" : {"name" : "A", "email": "A#gmail.com"},
"userId_2" : {"name" : "B", "email": "B#gmail.com"}
}
Wherein the user's unique id is the key and a JSON object containing their information is its corresponding value.
I am able to get this information in two separate rows using json_build_object, but I want to get it in a single row, as one single JSON object.
You can use json aggregation functions:
select jsonb_object_agg(id, to_jsonb(t) - 'id') res
from mytable t
jsonb_object_agg() aggregates key/value pairs into a single object. The key is the id of each row, and the value is a jsonb object made of all columns of the table except id.
Demo on DB Fiddle
Sample data:
id | name | email
:------- | :--- | :----------
userid_1 | A | A#gmail.com
userid_2 | B | B#gmail.com
Results:
| res |
| :----------------------------------------------------------------------------------------------------- |
| {"userid_1": {"name": "A", "email": "A#gmail.com"}, "userid_2": {"name": "B", "email": "B#gmail.com"}} |
Try:
select row_to_json(T) from T;
The link below might help: https://hashrocket.com/blog/posts/faster-json-generation-with-postgresql
Try this:
SELECT json_object(array_agg(id), array_agg(json::text)) FROM (
    SELECT id, json_build_object('name', name, 'email', email) as json
    FROM users_table
) some_alias_name
If your id is not of text type, you have to cast it to text too.
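For example, if id is an integer column, the cast could look like this (a sketch against the same hypothetical users_table):
SELECT json_object(array_agg(id::text), array_agg(json::text)) FROM (
    SELECT id, json_build_object('name', name, 'email', email) as json
    FROM users_table
) some_alias_name;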

How to loop through JSON array of JSON objects to see if it contains a value that I am looking for in postgres?

Here is an example of the json object
rawJSON = [
{"a":0, "b":7},
{"a":1, "b":8},
{"a":2, "b":9}
]
And I have a table that essentially looks like this.
demo Table
id | ...(other columns) | rawJSON
------------------------------------
0 | ...(other columns info) | [{"a":0, "b":7},{"a":1, "b":8}, {"a":2, "b":9}]
1 | ...(other columns info) | [{"a":0, "b":17},{"a":11, "b":5}, {"a":12, "b":5}]
What I want is to return the rows where, inside rawJSON, some object has a value for "a" of less than 2 AND a value for "b" of less than 8. Both values must come from the same JSON object.
Essentially, the query would look something like this:
SELECT *
FROM demo
WHERE FOR ANY JSON OBJECT in rawJSON column -> "a" < 2 AND -> "b" < 8
And therefore it will return
id | ...(other columns) | rawJSON
------------------------------------
0 | ...(other columns info) | [{"a":0, "b":7},{"a":1, "b":8}, {"a":2, "b":9}]
I have searched through several posts here but was not able to figure it out.
https://dba.stackexchange.com/questions/229069/extract-json-array-of-numbers-from-json-array-of-objects
https://dba.stackexchange.com/questions/54283/how-to-turn-json-array-into-postgres-array
I was thinking of creating a plpgsql function but wasn't able to figure it out.
Any advice I would greatly appreciate it!
Thank you!!
I would like to avoid a cross join lateral because it would slow things down a lot.
You can use a subquery that searches through the array elements together with EXISTS.
SELECT *
FROM demo d
WHERE EXISTS (SELECT *
              FROM jsonb_array_elements(d.rawjson) a(e)
              WHERE (a.e->>'a')::integer < 2
                AND (a.e->>'b')::integer < 8);
db<>fiddle
If the datatype for rawjson is json rather than jsonb, use json_array_elements() instead of jsonb_array_elements().

KQL: mv-expand OR bag_unpack equivalent command to convert a list to multiple columns

According to mv-expand documentation:
Expands multi-value array or property bag.
mv-expand is applied on a dynamic-typed column so that each value in the collection gets a separate row. All the other columns in an expanded row are duplicated.
Just as the mv-expand operator creates a row for each element in the list, is there an equivalent operator/way to make each element in a list an additional column?
I checked the documentation and found Bag_Unpack:
The bag_unpack plugin unpacks a single column of type dynamic by treating each property bag top-level slot as a column.
However, it doesn't seem to work on a list; it only works on top-level JSON properties.
Using bag_unpack (like the below query):
datatable(d:dynamic)
[
dynamic({"Name": "John", "Age":20}),
dynamic({"Name": "Dave", "Age":40}),
dynamic({"Name": "Smitha", "Age":30}),
]
| evaluate bag_unpack(d)
It will do the following:
Name Age
John 20
Dave 40
Smitha 30
Is there a command/way (see some_command_which_helps below) with which I can achieve the following (convert a list to columns):
datatable(d:dynamic)
[
dynamic(["John", "Dave"])
]
| evaluate some_command_which_helps(d)
That translates to something like:
Col1 Col2
John Dave
Is there an equivalent where I can convert a list/array to multiple columns?
For reference: We can run the above queries online on Log Analytics in the demo section if needed (however, it may require login).
You could try something along the following lines (that said, from an efficiency standpoint, you may want to check your options for restructuring the data set to begin with, using a schema that matches how you plan to actually consume/query it):
datatable(d:dynamic)
[
dynamic(["John", "Dave"]),
dynamic(["Janice", "Helen", "Amber"]),
dynamic(["Jane"]),
dynamic(["Jake", "Abraham", "Gunther", "Gabriel"]),
]
| extend r = rand() // a practically-unique value per input row, used to regroup the expanded rows
| mv-expand with_itemindex = i d // one output row per array element; i is the element's position
| summarize b = make_bag(pack(strcat("Col", i + 1), d)) by r // rebuild each original row as a bag {Col1: ..., Col2: ...}
| project-away r
| evaluate bag_unpack(b) // turn the bag's keys into columns
which will output:
|Col1 |Col2 |Col3 |Col4 |
|------|-------|-------|-------|
|John |Dave | | |
|Janice|Helen |Amber | |
|Jane | | | |
|Jake |Abraham|Gunther|Gabriel|
To extract key-value pairs from text and convert them to columns without hardcoding the key names in the query:
print message="2020-10-15T15:47:09 Metrics: duration=2280, function=WorkerFunction, count=0, operation=copy_into, invocationId=e562f012-a994-4fc9-b585-436f5b2489de, tid=lct_b62e6k59_prd_02, table=SALES_ORDER_SCHEDULE, status=success"
| extend Properties = extract_all(#"(?P<key>\w+)=(?P<value>[^, ]*),?", dynamic(["key","value"]), message)
| mv-apply Properties on (summarize make_bag(pack(tostring(Properties[0]), Properties[1])))
| evaluate bag_unpack(bag_)
| project-away message

Querying array of text in postgres

I have an array I want to store in Postgres. One of the major use cases I have is to check whether any of the records has an array that contains a given string.
e.g.
| A | ["NY", "Paris", "Milan"] |
| B | ["Paris", "NY"] |
| C | [] |
| D | ["Milan"] |
Does there exist a row with Paris in the array? Which rows have Milan in the array? and so on.
I have 2 options for storing the column. I can either make it of type text[], or convert it into JSON such as {"cities": ["NY", "Paris", "Milan"]} and store it as a JSONB field.
However, I am not sure which would allow the fastest querying for my use case. Is there one obviously better way of doing this? Am I tying myself down in any way by choosing one over the other? If I choose one over the other, then how can I query the DB?
As you seem to be storing simple lists of values, I would recommend using the array datatype over JSON, which better fits more complex cases (nested data structures, associative arrays, ...).
To check for a value at any position in the array, you can use the ANY() construct.
Here is a query that will return all records where the array stored in column cities contains 'Paris':
SELECT t.* FROM mytable t WHERE 'Paris' = ANY(t.cities);
Yields:
id cities
---------------------------
A ["NY","Paris","Milan"]
B ["Paris","NY"]
Demo on DB Fiddle
For more information:
Postgres Arrays Documentation
Postgres Arrays Tutorial
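Since the question also asks about the fastest querying: for a text[] column on a large table, a GIN index can be used together with the array containment operator @> (the = ANY() form shown above will not use such an index). A minimal sketch, assuming the same mytable/cities names as in the answer (the index name is arbitrary):
-- GIN indexes on arrays support the containment operators @>, <@ and &&
CREATE INDEX mytable_cities_gin ON mytable USING gin (cities);
SELECT t.* FROM mytable t WHERE t.cities @> ARRAY['Paris'];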
I've noticed that JSONB is the better choice when the data is a simple key-value store, for instance when you want to store arbitrary info on a row and you're not sure what the columns (keys) would be:
info = {"a":"apple", "b":"ball"}
For use cases like yours, it would be better if you could design the DB with simple tables, so you could use JOINs and indexes to your advantage.
You could restructure the tables like :
Location
id | name
----------
1 | Paris
2 | NY
3 | Milan
Other table (with a foreign key to the Location table)
user | location_id
--------------------
A | 1
A | 3
B | 2
Using this set of tables, it would be easy to query all users with location Paris using JOINs.
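A minimal sketch of such a join, assuming the two tables above are named location and user_location (placeholder names) and that user_location.location_id references location.id:
-- "user" is a reserved word in Postgres, so it has to be quoted when used as a column name
SELECT ul."user"
FROM user_location ul
JOIN location l ON l.id = ul.location_id
WHERE l.name = 'Paris';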

how to make json value from hive columns

I want to convert Hive columns to a JSON value.
I know how to do the opposite, i.e. extract a value from JSON as a string by using get_json_object.
For example, this is the Hive table:
id | name
-------------
1 | kim
2 | lee
3 | park
Expected Output is:
[ {"1" : "kim"}, {"2" : "lee"}, {"3" : "park"} ]
You can use the Brickhouse collect UDF:
CREATE TEMPORARY FUNCTION collect AS 'brickhouse.udf.collect.CollectUDAF';
SELECT collect(map(CAST(id AS STRING), name)) FROM your_table;
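If adding the Brickhouse jar is not an option, a rough built-in alternative is to assemble the JSON text by hand (a sketch, assuming the table is called your_table and that the name values need no JSON escaping):
-- collect_list gathers one '{"id" : "name"}' fragment per row; concat_ws joins them
SELECT concat('[ ',
       concat_ws(', ', collect_list(concat('{"', CAST(id AS STRING), '" : "', name, '"}'))),
       ' ]') AS json_result
FROM your_table;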