How to represent dynamic keys in a struct type? - sql

I have a PostgreSQL table which has a JSONB field. The table can be created by:
create table mytable
(
    id   uuid primary key default gen_random_uuid(),
    data jsonb not null
);

insert into mytable (data)
values ('{
    "user_roles": {
        "0x101": [
            "admin"
        ],
        "0x102": [
            "employee",
            "customer"
        ]
    }
}'::jsonb);
In the above example, I am using "0x101" and "0x102" to represent two UIDs. In reality, there are more UIDs.
I am using jackc/pgx to read that JSONB field.
Here is my code
import (
    "context"
    "fmt"

    "github.com/jackc/pgx/v4/pgxpool"
)

type Data struct {
    UserRoles struct {
        UID []string `json:"uid,omitempty"`
        // ^ Above does not work because there is no fixed field called "uid".
        // Instead they are "0x101", "0x102", ...
    } `json:"user_roles,omitempty"`
}

type MyTable struct {
    ID   string
    Data Data
}

pg, err := pgxpool.Connect(context.Background(), databaseURL)
sql := "SELECT data FROM mytable"
myTable := new(MyTable)
err = pg.QueryRow(context.Background(), sql).Scan(&myTable.Data)
fmt.Printf("%v", myTable.Data)
As the comment inside mentions, the above code does not work.
How can I represent dynamic keys in a struct type, or otherwise return all of the JSONB field's data? Thanks!

Edit your Data struct as follows:

type Data struct {
    UserRoles map[string][]string `json:"user_roles,omitempty"`
}

You can also use a UUID type as the map's key type if you are using a package like https://github.com/google/uuid for UUIDs.
However, please note that with this approach, if the JSON object user_roles has more than one entry for a particular user (with the same UUID), only one will be fetched.
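For illustration only, here is a minimal sketch of how the map-based Data struct could be scanned and iterated with pgx; the connection string is a placeholder and the wrapping main function is an assumption, not part of the original answer:

package main

import (
    "context"
    "fmt"
    "log"

    "github.com/jackc/pgx/v4/pgxpool"
)

type Data struct {
    // The map keys are the dynamic UIDs ("0x101", "0x102", ...);
    // each value is that user's list of roles.
    UserRoles map[string][]string `json:"user_roles,omitempty"`
}

func main() {
    // databaseURL is a placeholder; point it at the database from the question.
    databaseURL := "postgres://localhost:5432/mydb"

    pg, err := pgxpool.Connect(context.Background(), databaseURL)
    if err != nil {
        log.Fatal(err)
    }
    defer pg.Close()

    // pgx unmarshals the jsonb column into the struct via encoding/json,
    // so the dynamic keys simply become map entries.
    var data Data
    err = pg.QueryRow(context.Background(), "SELECT data FROM mytable").Scan(&data)
    if err != nil {
        log.Fatal(err)
    }

    for uid, roles := range data.UserRoles {
        fmt.Printf("%s -> %v\n", uid, roles)
    }
}

Because UserRoles is a plain map, a new UID appearing on the database side requires no change to the Go type.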

Related

Filtering with Like operator on integer column

I'm using mikro-orm for db-related operations. My db entity has a number field:

@Property({ defaultRaw: 'srNumber', type: 'number' })
srNumber!: number;

and the corresponding db column (PostgreSQL) is:

srNumber (int8)
The query input for where param in mikro-orm EntityRepository's findAndCount(where, option) is:
repository.findAndCount({"srNumber":{"$like":"%1000%"}}, options)
It translates to:
select * from table1 where srNumber like '%1000%'
The problem here is since srNumber column is not a string, there is a type-mismatch and query fails. Casting it like CAST(srNumber AS TEXT) like '%1000%' should work in db.
Is there any way to somehow specify the field casting here?
You can use custom SQL fragments in the query. To get around the strictly typed FilterQuery, you can use expr, which is just an identity function (it returns its parameter), so it only has an effect on TS checks.
Something like this should work:
import { expr } from '@mikro-orm/core';

const res = await repo.findAndCount({
    [expr('cast(srNumber as text)')]: { $like: '%1000%' },
}, options);
https://mikro-orm.io/docs/entity-manager/#using-custom-sql-fragments

Oracle SQL JSON_QUERY ignore key field

I have a json with several keys being a number instead of a fixed string. Is there any way I could bypass them in order to access the nested values?
{
"55568509":{
"registers":{
"001":{
"isPlausible":false,
"deviceNumber":"55501223",
"register":"001",
"readingValue":"5295",
"readingDate":"2021-02-25T00:00:00.000Z"
}
}
}
}
My expected output here would be 5295, but since "55568509" can vary from JSON to JSON, JSON_QUERY(data, '$."55568509".registers."001".readingValue') would not be an option. I'm not able to use regexp here because this is only a part of the original JSON, which contains more than this.
UPDATE: full JSON with multiple occurrences:
This is what my whole JSON looks like. I would like all the readingValue entries wrapped in brackets; in the example below, my expected output would be [32641, 00964].
WITH test_table ( data ) AS (
SELECT
'{
"session":{
"sessionStartDate":"2021-02-26T12:03:34+0000",
"interactionDate":"2021-02-26T12:04:19+0000",
"sapGuid":"369F01DFXXXXXXXXXX8553F40CE282B3",
"agentId":"USER001",
"channel":"XXX",
"bpNumber":"5551231234",
"contractAccountNumber":"55512312345",
"contactDirection":"",
"contactMethod":"Z08",
"interactionId":"5550848784",
"isResponsibleForPayingBill":"Yes"
},
"payload":{
"agentId":"USER001",
"contractAccountNumber":"55512312345",
"error":{
"55549271":{
"registers":{
"001":{
"isPlausible":false,
"deviceNumber":"55501223",
"register":"001",
"readingValue":"32641",
"readingDate":"2021-02-26T00:00:00.000Z"
}
},
"errors":[
{
"contractNumber":"55501231",
"language":"EN",
"errorCode":"62",
"errorText":"Error Text1",
"isHardError":false
},
{
"contractNumber":"55501232",
"language":"EN",
"errorCode":"62",
"errorText":"Error Text2",
"isHardError":false
}
],
"bpNumber":"5557273667"
},
"55583693":{
"registers":{
"001":{
"isPlausible":false,
"deviceNumber":"555121212",
"register":"001",
"readingValue":"00964",
"readingDate":"2021-02-26T00:00:00.000Z"
}
},
"errors":[
],
"bpNumber":"555123123"
}
}
}
}'
FROM
dual
)
SELECT
JSON_QUERY(data, '$.payload.error.*.registers.*[*].readingValue') AS reading_value
FROM
test_table;
UPDATE 2:
Solved, this would do the trick, upvoting the first comment.
JSON_QUERY(data, '$.payload.error.*.registers.*.readingValue' WITH WRAPPER) AS read_value
As I explained in the comment to your question, if you are getting that result from the JSON you posted, you are not using JSON_QUERY(); you must be using JSON_VALUE(). Either that, or there's something else you didn't share with us.
In any case, let's say you are using JSON_VALUE() with the arguments you showed. You are asking, how can you modify the path so that the top-level attribute name is not hard-coded. That is trivial: use asterisk (*) instead of the hard-coded name. (This would work the same with JSON_QUERY() - it's about JSON paths, not the specific function that uses them.)
with test_table (data) as (
select
'{
"59668509":{
"registers":{
"001":{
"isPlausible":false,
"deviceNumber":"40157471",
"register":"001",
"readingValue":"5295",
"readingDate":"2021-02-25T00:00:00.000Z"
}
}
}
}' from dual
)
select json_value (data, '$.*."registers"."001"."readingValue"'
returning number) as reading_value
from test_table
;
READING_VALUE
-------------
5295
As an aside that is not related to your question in any way: In your JSON you have an object with a single attribute named "registers", whose value is another object with a single attribute "001", and in turn, this object has an attribute named "register" with value "001". Does that make sense to you? It doesn't to me.

Can I update a FaunaDB document without knowing its ID?

FaunaDB's documentation covers how to update a document, but their example assumes that I'll have the id to pass into Ref:
Ref(schema_ref, id)
client.query(
  q.Update(
    q.Ref(q.Collection('posts'), '192903209792046592'),
    { data: { text: "Example" } },
  )
)
However, I'm wondering if it's possible to update a document without knowing its id. For instance, if I have a collection of users, can I find a user by their email, and then update their record? I've tried this, but Fauna returns a 400 (Database Ref expected, String provided):
client
  .query(
    q.Update(
      q.Match(
        q.Index("users_by_email", "me@example.com")
      ),
      { name: "Em" }
    )
  )
Although Ben's comments are correct (that's the way you do it), I wanted to note that the error you are receiving is because you are missing a bracket here: it should be "users_by_email"), "me@example.com".
The error is logical if you know that Index takes an optional database reference as its second argument.
To clarify what Ben said:
If you do this you'll get another error:
Update(
  Match(
    Index("accounts_by_email"), "test@test.com"
  ),
  { data: { email: "test2@test.com" } }
)
That is because Match could potentially return more than one element. It returns a set of references called a SetRef. Think of SetRefs as lists that are not materialized yet. If you are certain there is only one match for that e-mail (e.g. if you set a uniqueness constraint) you can materialize it using Paginate or Get:
Get:
Update(
  Select(['ref'], Get(Match(
    Index("accounts_by_email"), "test@test.com"
  ))),
  { data: { email: 'test2@test.com' } }
)
The Get returns the complete document, so we need to specify that we only require the ref, with Select(['ref'], ...).
Paginate:
Update(
  Select(['data', 0],
    Paginate(Match(
      Index("accounts_by_email"), "test@test.com"
    ))
  ),
  { data: { email: "testchanged@test.com" } }
)
You are very close! Update does require a ref. You can get one via your index though. Assuming your index has a default values setting (i.e. paging a match returns a page of refs) and you are confident that there is a single match, or that the first match is the one you want, then you can do Select(["ref"], Get(Match(Index("users_by_email"), "me@example.com"))) to transform your set ref to a document ref. This can then be passed into Update (or to any other function that wants a document ref, like Delete).

Is a Composite Primary Key required in DynamoDB for Query?

As per this link:
Supported Operations on DynamoDB
"You can query only tables that have a composite primary key (partition key and sort key)."
This doesn't seem correct though. I have a table in DynamoDB called 'users' which has a Primary Key that consists of only one attribute 'username'.
And I'm able to query this table just fine in NodeJS using only a 'KeyConditionExpression' on the attribute 'username'. Please see below:
var getUserByUsername = function (username, callback) {
    var dynamodbDoc = new AWS.DynamoDB.DocumentClient();
    var params = {
        TableName: "users",
        KeyConditionExpression: "username = :username",
        ExpressionAttributeValues: {
            ":username": username
        }
    };
    dynamodbDoc.query(params, function (err, data) {
        if (err) {
            console.error("Unable to query. Error:", JSON.stringify(err, null, 2));
            callback(err, null);
        } else {
            console.log("DynamoDB Query succeeded.");
            callback(null, data);
        }
    });
}
This code works just fine. So I'm wondering if the documentation is incorrect or am I missing something?
The documentation is correct.
"Partition Key and Sort Key – A composite primary key, composed of two attributes. The first attribute is the partition key, and the second attribute is the sort key. DynamoDB uses the partition key value as input to an internal hash function"
http://docs.aws.amazon.com/amazondynamodb/latest/developerguide/DataModel.html
If a table doesn't have a sort key (range attribute), then the primary key is built from the hash (partition) key only. One of the results of that is that items won't be sorted the way you might like (items are sorted by the sort key).
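For comparison, here is a rough sketch of the same partition-key-only query written with the AWS SDK for Go v2; the users table and username attribute come from the question, while the rest (default config for region/credentials, the sample "batman" value) is purely illustrative:

package main

import (
    "context"
    "fmt"
    "log"

    "github.com/aws/aws-sdk-go-v2/aws"
    "github.com/aws/aws-sdk-go-v2/config"
    "github.com/aws/aws-sdk-go-v2/service/dynamodb"
    "github.com/aws/aws-sdk-go-v2/service/dynamodb/types"
)

func main() {
    cfg, err := config.LoadDefaultConfig(context.TODO())
    if err != nil {
        log.Fatal(err)
    }
    client := dynamodb.NewFromConfig(cfg)

    // Query works here even though the table's primary key is only
    // the partition key "username" (no sort key).
    out, err := client.Query(context.TODO(), &dynamodb.QueryInput{
        TableName:              aws.String("users"),
        KeyConditionExpression: aws.String("username = :username"),
        ExpressionAttributeValues: map[string]types.AttributeValue{
            ":username": &types.AttributeValueMemberS{Value: "batman"},
        },
    })
    if err != nil {
        log.Fatal(err)
    }
    fmt.Println("items found:", len(out.Items))
}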

Is there a way to get the Type for a Column using package database/sql in golang?

Basically, without knowing beforehand what the resulting structure of a query might be, I'd like to query the database and return a structure like this (JSON-y):
// Rows
[
// Row 1
[
{ ColumnName: "id", Value: 1, Type: int },
{ ColumnName: "name", Value: "batman", Type: string },
...
],
// Row 2
[
{ ColumnName: "id", Value: 2, Type: int },
{ ColumnName: "name", Value: "superman", Type: string },
...
]
]
Is there a way to get the Type for a Column using package database/sql in golang?
I suspect that what I want to do is:
make a slice of interface{} the size of Columns(),
then for each column determine its type,
then fill the slice with a pointer to that type,
and then pass the slice to Scan().
This is a little like this code example from sqlx, but without first knowing the struct that the data would be populating.
You should be able to do it this way:

func printRows(rows *sql.Rows) {
    colTypes, err := rows.ColumnTypes()
    if err != nil {
        log.Fatal(err)
    }
    for _, s := range colTypes {
        log.Println("cols type:", s.DatabaseTypeName())
    }
}
Using database/sql? No (as far as I know).
But you can use this code for arbitrary queries. And json.Marshal() from the encoding/json package will use reflection to determine the right way to print a value, so you could have a structure like this:
type Column struct {
    ColumnName  string
    ColumnValue interface{}
    ColumnType  string
}
And then use reflect.TypeOf(someVariable).String() to get the type for ColumnType.
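Putting the two answers together, here is a rough sketch (an illustration, not code from either answer) of scanning an arbitrary query result into the Column structure above; it assumes the Column type defined above and the standard database/sql package:

// scanDynamic reads every row of an arbitrary query result into the
// generic Column representation described above.
func scanDynamic(rows *sql.Rows) ([][]Column, error) {
    names, err := rows.Columns()
    if err != nil {
        return nil, err
    }
    colTypes, err := rows.ColumnTypes()
    if err != nil {
        return nil, err
    }

    var out [][]Column
    for rows.Next() {
        // One interface{} holder per column; Scan fills in the driver values.
        values := make([]interface{}, len(names))
        ptrs := make([]interface{}, len(names))
        for i := range values {
            ptrs[i] = &values[i]
        }
        if err := rows.Scan(ptrs...); err != nil {
            return nil, err
        }

        row := make([]Column, len(names))
        for i, name := range names {
            row[i] = Column{
                ColumnName:  name,
                ColumnValue: values[i],
                // Database-reported type, e.g. "INT8" or "TEXT";
                // reflect.TypeOf(values[i]).String() would give the Go type instead.
                ColumnType: colTypes[i].DatabaseTypeName(),
            }
        }
        out = append(out, row)
    }
    return out, rows.Err()
}

Each row can then be turned into JSON with json.Marshal(row), which is where the reflection mentioned above comes into play.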