Parsing JSON polymorphic records with Elm

This is probably a beginner's question. I have a JSON data format that holds polymorphic records and I need to parse it. The records are the vertices and edges of a graph:
{
    "records": [{
        "id": 0,
        "object": {
            "id": "vertex1"
        }
    }, {
        "id": 1,
        "object": {
            "id": "vertex2"
        }
    }, {
        "id": 2,
        "object": {
            "from": "vertex1",
            "to": "vertex2"
        }
    }]
}
As you can see, they all have an id, but vertices and edges have different record structures.
I tried to find something on parsing such structures; the only thing I found was Handling records with shared substructure in Elm, but I cannot translate that answer to Elm 0.17 (a simple renaming of data to type did not help).
In general there are two challenges:
defining a polymorphic record
decoding JSON dynamically into a vertex or an edge
This is how far I got:
type alias RecordBase =
    { id : Int
    }

type Records = List (Record RecordBase)

type Record o
    = VertexRecord o
    | EdgeRecord o

type alias VertexRecord o =
    { o | object :
        { id : Int
        }
    }

type alias EdgeRecord o =
    { o | object :
        { from : Int
        , to : Int
        }
    }
but the compiler complains with:
Naming multiple top-level values VertexRecord makes things ambiguous.
Apparently the union type already defined the VertexRecord and EdgeRecord names.
I really don't know how to proceed from here. All suggestions are most welcome.

Since you have the label id in multiple places and of multiple types, I think it makes things a little cleaner to have type aliases and field names that indicate each id's purpose.
Edit 2016-12-15: Updated to elm-0.18
type alias RecordID = Int

type alias VertexID = String

type alias VertexContents =
    { vertexID : VertexID }

type alias EdgeContents =
    { from : VertexID
    , to : VertexID
    }
Your Record type doesn't actually need to include the field name object anywhere. You can simply use a union type. Here is an example; you could shape this a few different ways, but the important part is fitting both kinds of data into a single Record type.
type Record
    = Vertex RecordID VertexContents
    | Edge RecordID EdgeContents
You could define a function that returns the recordID given either a vertex or edge like so:
getRecordID : Record -> RecordID
getRecordID r =
    case r of
        Vertex recordID _ -> recordID
        Edge recordID _ -> recordID
Now, onto decoding. Using Json.Decode.andThen, you can decode the common record ID field, then pass the JSON off to another decoder to get the rest of the contents:
recordDecoder : Json.Decoder Record
recordDecoder =
    Json.field "id" Json.int
        |> Json.andThen
            (\recordID ->
                Json.oneOf [ vertexDecoder recordID, edgeDecoder recordID ]
            )
vertexDecoder : RecordID -> Json.Decoder Record
vertexDecoder recordID =
    Json.map2 Vertex
        (Json.succeed recordID)
        (Json.map VertexContents (Json.at [ "object", "id" ] Json.string))

edgeDecoder : RecordID -> Json.Decoder Record
edgeDecoder recordID =
    Json.map2 Edge
        (Json.succeed recordID)
        (Json.map2 EdgeContents
            (Json.at [ "object", "from" ] Json.string)
            (Json.at [ "object", "to" ] Json.string)
        )
recordListDecoder : Json.Decoder (List Record)
recordListDecoder =
    Json.field "records" (Json.list recordDecoder)
Putting it all together, you can decode your example like this:
import Html exposing (text)
import Json.Decode as Json

main =
    text <| toString <| Json.decodeString recordListDecoder testData

testData =
    """
    {
        "records": [{
            "id": 0,
            "object": {
                "id": "vertex1"
            }
        }, {
            "id": 1,
            "object": {
                "id": "vertex2"
            }
        }, {
            "id": 2,
            "object": {
                "from": "vertex1",
                "to": "vertex2"
            }
        }]
    }
    """

Related

How to use PSQL to extract data from an object (inside an array inside an object inside an array)

This is data that is currently sitting in a single cell in our database (e.g. in the warehouse_data column of the warehouse table). I'm unable to change the structure/DB design, so I need to work with it as-is. How would I be able to select the name of the shirt with the largest width? In this case, I would expect the output to be tshirt_b (without quotation marks):
{
    "wardrobe": {
        "apparel": {
            "variety": [
                {
                    "data": {
                        "shirt": {
                            "size": {
                                "width": 30
                            }
                        }
                    },
                    "names": [
                        {
                            "name": "tshirt_a"
                        }
                    ]
                },
                {
                    "data": {
                        "shirt": {
                            "size": {
                                "width": 40
                            }
                        }
                    },
                    "names": [
                        {
                            "name": "tshirt_b"
                        }
                    ]
                }
            ]
        }
    }
}
I've tried a select statement and was able to get out
"names": [
{
"name": "tshirt_b"
}
]
but not much further than that, e.g.:
select jsonb_array_elements(warehouse_data#>'{wardrobe,apparel,variety}')->>'names'
from warehouse
where id = 1;
In this table, we'd have two columns: one with the data and one with a unique identifier. I imagine I'd need to select size->>width, order it DESC and limit to 1 (if that's possible), and then somehow include the entire object with data & shirt, or perhaps use the max() function?
I'm really stuck so any help would be appreciated, thank you!
You'll first want to normalise the data into a relational structure:
SELECT
(obj #>> '{data,shirt,size,width}')::int AS width,
(obj #>> '{names,0,name}') AS name
FROM warehouse, jsonb_array_elements(warehouse_data#>'{wardrobe,apparel,variety}') obj
WHERE id = 1;
Then you can do your processing on that as a subquery, e.g.
SELECT name
FROM (
SELECT
(obj #>> '{data,shirt,size,width}')::int AS width,
(obj #>> '{names,0,name}') AS name
FROM warehouse, jsonb_array_elements(warehouse_data#>'{wardrobe,apparel,variety}') obj
WHERE id = 1
) shirts
ORDER BY width DESC
LIMIT 1;

Django Rest Framework - How to send data as strings

I want to send all the data in a response as strings; for example, id is stored as an integer in the database, but I want to send it as a string in the response.
E.g. I have this response:
{
    "categories": [
        {
            "id": 1,
            "category": "xya",
            "quantity": 25
        }
    ]
}
I want it to be:
{
    "categories": [
        {
            "id": "1",
            "category": "xya",
            "quantity": "25"
        }
    ]
}
I am using ModelSerializer to send all the fields.
Another option is to convert int to str using the to_representation method of your model serializer.
class YourSerializer(serializers.ModelSerializer):
    # other fields

    def to_representation(self, instance):
        """ Override `to_representation` method """
        repr = super().to_representation(instance)
        repr['id'] = str(repr['id'])
        repr['quantity'] = str(repr['quantity'])
        return repr
You can explicitly define the id field in your serializer to be a CharField(), like this:
class YourSerializer(serializers.ModelSerializer):
    # other fields
    id = serializers.CharField()

    class Meta:
        model = SomeModel
        fields = ('id', ..... other fields)
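If quantity should also come back as a string, the same idea extends to it. A minimal sketch, assuming SomeModel has an integer quantity field and that these fields only need to be read back as strings:
class YourSerializer(serializers.ModelSerializer):
    # Render both numeric columns as strings in the response.
    id = serializers.CharField(read_only=True)
    quantity = serializers.CharField(read_only=True)

    class Meta:
        model = SomeModel
        fields = ('id', 'category', 'quantity')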

How to serialize F# discriminated union types in F# Asp Core Wep API

I'm trying to build an F# ASP.NET Core 3.0 Web API where I send some F# records.
As far as I know, ASP.NET Core 3.0 uses System.Text.Json by default to serialize objects to JSON.
To make the domain model self-documenting I use F# discriminated union types.
The model looks like this:
type Starttime =
    | NotStarted
    | Started of DateTime

type Category = {
    Name: string
    Starttime: Starttime
}

type ValueId = ValueId of String

type Value = {
    Id: ValueId
    Category: Category
}
So I have one union type with just one case (ValueId) and one union type with two cases (Starttime, with NotStarted and Started, the latter carrying a DateTime).
Now, in my controller, I build some sample data and return it:
let isStarted i : Starttime =
    if i % 2 = 0 then
        Started DateTime.Now
    else
        NotStarted

[<HttpGet>]
member __.Get() : Value[] =
    [|
        for index in 1..2 ->
            {
                Id = ValueId (Guid.NewGuid().ToString())
                Category = {
                    Name = sprintf "Category %i" index
                    Starttime = isStarted index }
            }
    |]
When I look at the data returned as JSON, I get the data for the single-case union type (the Guid), but I never get a value for the multi-case union type.
[
    {
        "id": {
            "tag": 0,
            "item": "6ed07303-6dfa-42b4-88ae-391bbebf772a"
        },
        "category": {
            "name": "Category 1",
            "starttime": {
                "tag": 0,
                "isNotStarted": true,
                "isStarted": false
            }
        }
    },
    {
        "id": {
            "tag": 0,
            "item": "5e122579-4945-4f19-919c-ad4cf16ad0ed"
        },
        "category": {
            "name": "Category 2",
            "starttime": {
                "tag": 1,
                "isNotStarted": false,
                "isStarted": true
            }
        }
    }
]
Does anyone know how to also send the values for the multi-case union type?
Or is it generally better to map discriminated union types to anonymous records?
Thank you!
At the moment, System.Text.Json does not support F# types (there's an open issue about this here).
For now, you can use this library instead.
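Alternatively, along the lines of the question's own idea, you can map the union values to plain fields (e.g. an anonymous record) before returning them, so the serializer only ever sees primitives. A rough sketch, not from the original answer; the DTO shape and the ISO date string are just illustrative choices:
// Hypothetical mapping from the domain type to a serializer-friendly shape.
let toDto (value: Value) =
    let (ValueId id) = value.Id
    let started =
        match value.Category.Starttime with
        | Started dt -> dt.ToString("o") // ISO-8601 string when started
        | NotStarted -> null             // no value when not started
    {| id = id
       category = {| name = value.Category.Name; starttime = started |} |}

// In the controller, return the mapped values, e.g.:
// member __.Get() = values |> Array.map toDto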

Nest Elastic - Building Dynamic Nested Query

I have to query a nested object using NEST; however, the query is built dynamically. Below is code that demonstrates querying the nested "books" in a static way:
QueryContainer qry;
qry = new QueryStringQuery()
{
    DefaultField = "name",
    DefaultOperator = Operator.And,
    Query = "salman"
};

QueryContainer qry1 = null;
qry1 = new RangeQuery() // used to search for a range (from, to)
{
    Field = "modified",
    GreaterThanOrEqualTo = Convert.ToDateTime("21/12/2015").ToString("dd/MM/yyyy"),
};

QueryContainer all = qry && qry1;

var results = elastic.Search<Document>(s => s
    .Query(q => q
        .Bool(qb => qb
            .Must(all)))
    .Filter(f =>
        f.Nested(n => n
            .Path("books")
            .Filter(f3 => f3.And(
                f1 => f1.Term("book.isbn", "122"),
                f2 => f2.Term("book.author", "X"))
            )
        )
    )
);
The problem is that I need to combine multiple queries (using AND, OR operators) for "books" in a dynamic fashion. For example, get the books that satisfy this set of conditions:
Condition 1: Books that have Author "X" and isbn "1"
Condition 2: Books that have Author "X" and isbn "2"
Condition 3: Books that have Author "Z" and isbn "3"
Other Conditions: .....
Now, the filter in the nested Query should retrieve books if:
Condition 1 AND Condition 2 Or Condition 3
Suppose that I have a class named FilterOptions that contains the following attributes:
FieldName
Value
Operator (which combines it with the next filter)
I am going to loop over the given FilterOptions array to build the query.
Question:
What should I use to build the nested query? Is it a FilterDescriptor, and how do I combine them and add the nested query to the Search method?
Please recommend any relevant links or examples.
I agree with paweloque: it seems your first two conditions are contradictory and wouldn't work if AND-ed together. Ignoring that, here's my solution. I've implemented this in a way that allows for more than the three specific conditions you have. I too feel it fits better in a bool query.
QueryContainer andQuery = null;
QueryContainer orQuery = null;

foreach (var authorFilter in FilterOptions.Where(f => f.Operator == Operator.And))
{
    andQuery &= new TermQuery
    {
        Field = authorFilter.FieldName,
        Value = authorFilter.Value
    };
}

foreach (var authorFilter in FilterOptions.Where(f => f.Operator == Operator.Or))
{
    orQuery |= new TermQuery
    {
        Field = authorFilter.FieldName,
        Value = authorFilter.Value
    };
}
After that, in the .Nested call I would put:
.Path("books")
.Query(q => q
    .Bool(bq => bq
        .Must(m => m.MatchAll() && andQuery)
        .Should(orQuery)
    ))
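For completeness, assembling these pieces into a full request might look roughly like this (a sketch only; exact descriptor names and overloads vary across NEST versions):
// andQuery / orQuery are the containers built in the loops above
var results = elastic.Search<Document>(s => s
    .Query(q => q
        .Nested(n => n
            .Path("books")
            .Query(nq => nq
                .Bool(bq => bq
                    .Must(m => m.MatchAll() && andQuery)
                    .Should(orQuery))))));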
In the specific case of Condition 1 and Condition 2 you'd probably not get any results, because they are mutually exclusive. But I assume you want results that match either of those conditions. You've chosen nested, which is definitely the way to go: with the nested type you can combine parameters for a single book.
Combining nested queries
For your use case I'd use the bool query type with must and should clauses.
A query to get books for either Condition 1 or Condition 2 would be:
POST /books/_search
{
  "query": {
    "bool": {
      "should": [
        {
          "nested": {
            "path": "books",
            "query": {
              "bool": {
                "must": [
                  {
                    "match": {
                      "books.isbn": "2"
                    }
                  },
                  {
                    "match": {
                      "books.author": "X"
                    }
                  }
                ]
              }
            }
          }
        },
        {
          "nested": {
            "path": "books",
            "query": {
              "bool": {
                "must": [
                  {
                    "match": {
                      "books.isbn": "1"
                    }
                  },
                  {
                    "match": {
                      "books.author": "X"
                    }
                  }
                ]
              }
            }
          }
        }
      ]
    }
  }
}
Can you explain why your books are nested? By not nesting them in a top structure, but indexing them directly as top-level objects in an index/type, you could simplify your queries.
Not-Analyzed
There is another caveat to keep in mind: if you want an exact match on the author and the ISBN, you have to make sure that the ISBN and author fields are set to not_analyzed. Otherwise they get analyzed and split into parts, and your match wouldn't work very well.
E.g. if you have an ISBN number with dashes, it would get split into parts:
978-3-16-148410-0
would become indexed as:
978
3
16
148410
0
A search with exactly the same ISBN number would then give you all the books which have one of the sub-numbers in their ISBN. If you want to prevent this, use the not_analyzed index type and multi-fields:
"isbn": {
"type": "string",
"fields": {
"raw": {
"type": "string",
"index": "not_analyzed"
}
}
}
Then, to address the not_analyzed isbn field, you'd refer to it as:
books.isbn.raw
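For illustration (assuming the author field gets the same multi-field treatment), the term filters from the original query would then target the raw sub-fields:
.Filter(f3 => f3.And(
    f1 => f1.Term("books.isbn.raw", "122"),
    f2 => f2.Term("books.author.raw", "X")))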
Hope this helps.

How to get all values of an attribute of a JSON array with JSONPath in BigQuery? Asterisk operator not supported

I'm trying to get all the values of a certain attribute from a JSON array.
Considering the following JSON, I'm trying to get all the types, e.g. iPhone, home:
{
    "firstName": "John",
    "lastName": "doe",
    "age": 26,
    "address": {
        "streetAddress": "naist street",
        "city": "Nara",
        "postalCode": "630-0192"
    },
    "phoneNumbers": [
        {
            "type": "iPhone",
            "number": "0123-4567-8888"
        },
        {
            "type": "home",
            "number": "0123-4567-8910"
        }
    ]
}
I am using $.phoneNumbers[*].type, which seems to work fine in online parsers, but when I use it in BigQuery:
select json_extract(my_column,'$.phoneNumbers[*].type')
from my_table
I get:
JSONPath parse error at: [*].type
You can write a JavaScript UDF to do the extraction. Natively, JSON_EXTRACT doesn't support the * operator:
SELECT JSON_EXTRACT('[1,2,3]', '$[*]') parsed
Error: Unsupported operator in JSONPath: *
UDF alternative:
#standardSQL
CREATE TEMPORARY FUNCTION parseJson(libs STRING)
RETURNS ARRAY<INT64>
LANGUAGE js AS """
  try {
    return JSON.parse(libs);
  } catch (e) {
    return [];
  }
""";
SELECT parseJson('[1,2,3]') parsed
More complex example:
#standardSQL
CREATE TEMPORARY FUNCTION parseJson(libs STRING)
RETURNS ARRAY<STRUCT<x INT64, y INT64, z INT64>>
LANGUAGE js AS """
  try {
    return JSON.parse(libs);
  } catch (e) {
    return [];
  }
""";
SELECT parseJson(JSON_EXTRACT('{"a":[{"x":1},{"y":2},{"z":3}]}', '$.a')) parsed
(inspired by: https://discuss.httparchive.org/t/javascript-library-detection/955)
json_extract cannot return a REPEATED field; it can only do one match, hence there is no support for *.
Yet another (I hope) interesting solution for BigQuery Standard SQL. It can easily be adjusted to whatever your specific needs are:
#standardSQL
CREATE TEMPORARY FUNCTION parseJson(data STRING)
RETURNS ARRAY<STRUCT<parent STRING, item STRING, key STRING, value STRING>>
LANGUAGE js AS """
  x = JSON.parse(data); z = []; processKey(x, '');
  function processKey(node, parent) {
    if (parent !== '') {parent += '.'};
    Object.keys(node).map(function(key) {
      value = node[key].toString();
      if (!value.startsWith('[object Object]')) {
        var q = {}; var arr = parent.split('.');
        q.parent = arr[0]; q.item = arr[1];
        q.key = key; q.value = value;
        z.push(q);
      } else {
        processKey(node[key], parent + key);
      };
    });
  }; return z;
""";
WITH t AS (
  SELECT """ {
    "firstName": "John",
    "lastName" : "doe",
    "age" : 26,
    "address" : {
      "streetAddress": "naist street", "city" : "Nara", "postalCode" : "630-0192" },
    "phoneNumbers": [
      { "type" : "iPhone", "number": "0123-4567-8888"},
      { "type" : "home", "number": "0123-4567-8910"},
      { "type" : "work", "number": "0123-4567-7777"}]
  } """ AS info
)
)
SELECT parent, item, key, value FROM t, UNNEST(parseJson(info))
WHERE parent = 'phoneNumbers' AND key = 'type'