Backand (Back&) schema setup for user favorites - backand

I am struggling with setting up the notion of user favorites in the Backand schema editor. An example of what I am trying to do is as follows:
Say I have a website selling cars. Users are authenticated, and on any car they like they can "favorite" that car. When favorited, that specific car object or id should be added to the user's favorites property so that the user can return to or call upon their favorites list at any time.
In Back&'s schema editor I have added a collection property to my user object, but I am completely confused because this just creates a one-to-many relationship.
The Back& documentation is pretty lacking at this point, so I figured I'd bring it up here so others who run into this can see it as well. What is the best way to accomplish this common functionality in Back&? Any help would be greatly appreciated.

I think what you are actually trying to do needs a many-to-many relation, which is well documented here and has a CodePen example here.
I.e. each user can favorite many cars, and each car can be a favorite of many users.
So the JSON would look something like this:
[
  {
    "name": "user_car",
    "fields": {
      "user": {
        "object": "user"
      },
      "car": {
        "object": "car"
      }
    }
  },
  {
    "name": "user",
    "fields": {
      "user_car": {
        "collection": "user_car",
        "via": "user"
      },
      "email": {
        "type": "string"
      },
      "firstName": {
        "type": "string"
      },
      "lastName": {
        "type": "string"
      }
    }
  },
  {
    "name": "car",
    "fields": {
      "user_car": {
        "collection": "user_car",
        "via": "car"
      },
      "name": {
        "type": "string"
      },
      "model": {
        "type": "int"
      }
    }
  }
]
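With that model in place, favoriting a car just means creating a user_car row linking the user and the car, and reading a user's favorites means reading that user's user_car collection (or filtering user_car by user). As a rough sketch only (the /1/objects/<object> endpoint pattern, the auth header, and the variable names are assumptions, not taken from the question):

// Sketch: create a "favorite" by POSTing a user_car row (endpoint and auth are assumed)
// Placeholders: obtain these from Back&'s auth flow and your app state
var accessToken = '<ACCESS_TOKEN>';
var currentUserId = 42;
var favoritedCarId = 7;

fetch('https://api.backand.com/1/objects/user_car', {
  method: 'POST',
  headers: {
    'Content-Type': 'application/json',
    'Authorization': 'Bearer ' + accessToken
  },
  body: JSON.stringify({ user: currentUserId, car: favoritedCarId })
})
  .then(function (response) { return response.json(); })
  .then(function (created) { console.log('favorite saved', created); });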

Related

Put validation of two array fields in JSON Schema using oneOf

Can I put a check on two fields in a JSON Schema? Both fields are of type array of objects. Conditions:
Either one of them can contain values at a time (i.e. the other should be empty).
Both can be empty.
Any leads?
// The schema
var schema = {
  "id": "https://kitoutapi.lrsdedicated.com/v1/json_schemas/login-request#",
  "$schema": "http://json-schema.org/draft-04/schema#",
  "description": "Login request schema",
  "type": "object",
  "oneOf": [
    { "categories": {
        "maxItems": 0
      },
      "positionedOffers": {
        "minItems": 1
      }},
    { "categories": {
        "minItems": 1
      },
      "positionedOffers": {
        "maxItems": 0
      }}
  ],
  "properties": {
    "categories": {
      "type": "array"
    },
    "positionedOffers": {
      "type": "array"
    }
  },
  "additionalProperties": false
};

// Test data 1
// This test should return a good result
var data1 = {
  "positionedOffers": ['hello'],
  "categories": [],
}
For your requirement, I think I'd come at this from the other direction. Rather than saying
If one contains a value, the other must be empty, but both may be empty.
I'd say
At least one must be empty.
That leads you to use an anyOf with subschemas checking that each property is an empty array.
{
  "id": "https://kitoutapi.lrsdedicated.com/v1/json_schemas/login-request#",
  "$schema": "http://json-schema.org/draft-04/schema#",
  "description": "Login request schema",
  "type": "object",
  "anyOf": [
    {
      "properties": {
        "categories": {
          "maxItems": 0
        }
      }
    },
    {
      "properties": {
        "positionedOffers": {
          "maxItems": 0
        }
      }
    }
  ],
  "properties": {
    "categories": {
      "type": "array"
    },
    "positionedOffers": {
      "type": "array"
    }
  },
  "additionalProperties": false
}
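As a quick sanity check, the schema can be run through a draft-04 validator such as tv4; a minimal sketch, assuming the schema above has been assigned to a variable named schema:

// npm install tv4 (a draft-04 validator)
var tv4 = require('tv4');

// Both arrays non-empty: neither anyOf branch matches, so this is rejected
var bothFilled = { "categories": ["a"], "positionedOffers": ["hello"] };
// One array empty: the first anyOf branch matches, so this is accepted
var oneEmpty = { "categories": [], "positionedOffers": ["hello"] };

console.log(tv4.validate(bothFilled, schema)); // false
console.log(tv4.validate(oneEmpty, schema));   // true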
Bonus Material
In your original post, you omitted the properties keyword under the oneOf subschemas. This may have been the cause of the schema's failure to validate as expected. I've added it in the above.
Secondly, draft 4 is quite old at this point. You may be limited by the implementation you're using, but if you can, you should consider using a more recent version of JSON Schema. You can view available implementations and what versions they support on the JSON Schema implementations page.

How to define a related resource URI in JSON:API

In the JSON:API format, relationships are defined with a type and an id, like in the example below: the article has a relationship with the type people and the id 9.
Now if I want to fetch the related resource, I use the URI from "links.related":
// ...
{
  "type": "articles",
  "id": "1",
  "attributes": {
    "title": "Rails is Omakase"
  },
  "relationships": {
    "author": {
      "links": {
        "self": "http://example.com/articles/1/relationships/author",
        "related": "http://example.com/articles/1/author"
      },
      "data": { "type": "people", "id": "9" }
    }
  },
  "links": {
    "self": "http://example.com/articles/1"
  }
}
// ...
But in my case the related resource (people) is in a separate API. There is no way to get the full people data from the articles API, nor is it possible to include it. The only way to get the related data would be a call to:
http://example.com/v1-2/people/9/
Where can I define the relation between this URI and people:9? Or in other words: how would a client know where to fetch the related resource?
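For illustration, the "related" link in the relationship object is the place JSON:API intends for exactly this purpose, so one option would be for the articles API to emit the external URI there. A sketch only, assuming the articles API knows the people API's base URI:

"relationships": {
  "author": {
    "links": {
      "self": "http://example.com/articles/1/relationships/author",
      "related": "http://example.com/v1-2/people/9/"
    },
    "data": { "type": "people", "id": "9" }
  }
}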

Is it possible to inline JSON schemas into a JSON document? [duplicate]

For example, a schema for a file system: a directory contains a list of files. The schema consists of the specification of file, followed by a subtype "image" and another one, "text".
At the bottom there is the main directory schema. Directory has a property content, which is an array of items that should be subtypes of file.
Basically, what I am looking for is a way to tell the validator to look up the value of a "$ref" from a property in the JSON object being validated.
Example JSON:
{
  "name": "A directory",
  "content": [
    {
      "fileType": "http://x.y.z/fs-schema.json#definitions/image",
      "name": "an-image.png",
      "width": 1024,
      "height": 800
    },
    {
      "fileType": "http://x.y.z/fs-schema.json#definitions/text",
      "name": "readme.txt",
      "lineCount": 101
    },
    {
      "fileType": "http://x.y.z/extended-fs-schema-video.json",
      "name": "demo.mp4",
      "hd": true
    }
  ]
}
The "pseudo" Schema note that "image" and "text" definitions are included in the same schema but they might be defined elsewhere
{
  "id": "http://x.y.z/fs-schema.json",
  "definitions": {
    "file": {
      "type": "object",
      "properties": {
        "name": { "type": "string" },
        "fileType": {
          "type": "string",
          "format": "uri"
        }
      }
    },
    "image": {
      "allOf": [
        { "$ref": "#definitions/file" },
        {
          "properties": {
            "width": { "type": "integer" },
            "height": { "type": "integer" }
          }
        }
      ]
    },
    "text": {
      "allOf": [
        { "$ref": "#definitions/file" },
        { "properties": { "lineCount": { "type": "integer" } } }
      ]
    }
  },
  "type": "object",
  "properties": {
    "name": { "type": "string" },
    "content": {
      "type": "array",
      "items": {
        "allOf": [
          { "$ref": "#definitions/file" },
          { *"$refFromProperty"*: "fileType" } // the magic thing
        ]
      }
    }
  }
}
The validation parts of JSON Schema alone cannot do this - it represents a fixed structure. What you want requires resolving/referencing schemas at validation-time.
However, you can express this using JSON Hyper-Schema, and a rel="describedby" link:
{
  "title": "Directory entry",
  "type": "object",
  "properties": {
    "fileType": { "type": "string", "format": "uri" }
  },
  "links": [{
    "rel": "describedby",
    "href": "{+fileType}"
  }]
}
So here, it takes the value from "fileType" and uses it to calculate a link with relation "describedby" - which means "the schema at this location also describes the current data".
The problem is that most validators do not take any notice of any links (including "describedby" ones). You need to find a "hyper-validator" that does.
UPDATE: the tv4 library has added this as a feature
I think cloudfeet's answer is a valid solution. You could also use the same approach described here.
You would have a file object type which could be an "anyOf" of all the subtypes you want to define. You would use an enum in order to be able to reference and validate against each of the subtypes.
If the subtype schemas are in the same JSON Schema file, you don't need to reference the URI explicitly with "$ref". A correct draft-4 validator will find the enum value and will try to validate against that subschema in the JSON Schema tree.
In draft 5 (in progress) a "switch" statement has been proposed, which will allow expressing alternatives in a more explicit way.
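A rough sketch of that anyOf-plus-enum idea, reusing the definitions from the question (the enum values and the required keywords are illustrative assumptions):

{
  "id": "http://x.y.z/fs-schema.json",
  "definitions": {
    "image": {
      "properties": {
        "fileType": { "enum": ["http://x.y.z/fs-schema.json#definitions/image"] },
        "name": { "type": "string" },
        "width": { "type": "integer" },
        "height": { "type": "integer" }
      },
      "required": ["fileType"]
    },
    "text": {
      "properties": {
        "fileType": { "enum": ["http://x.y.z/fs-schema.json#definitions/text"] },
        "name": { "type": "string" },
        "lineCount": { "type": "integer" }
      },
      "required": ["fileType"]
    }
  },
  "type": "object",
  "properties": {
    "name": { "type": "string" },
    "content": {
      "type": "array",
      "items": {
        "anyOf": [
          { "$ref": "#/definitions/image" },
          { "$ref": "#/definitions/text" }
        ]
      }
    }
  }
}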

Schema to load JSON to Google BigQuery

Suppose I have the following JSON, which is the result of parsing URL parameters from a log file.
{
  "title": "History of Alphabet",
  "author": [
    {
      "name": "Larry"
    }
  ]
}
{
  "title": "History of ABC"
}
{
  "number_pages": "321",
  "year": "1999"
}
{
  "title": "History of XYZ",
  "author": [
    {
      "name": "Steve",
      "age": "63"
    },
    {
      "nickname": "Bill",
      "dob": "1955-03-29"
    }
  ]
}
All the top-level fields, "title", "author", "number_pages" and "year", are optional, and so are the fields in the second level, inside "author", for example.
How should I make a schema for this JSON when loading it to BQ?
A related question:
Suppose there is another similar table, but the data is from a different date, so it's possible to have a different schema. Is it possible to query across these two tables?
How should I make a schema for this JSON when loading it to BQ?
The following schema should work. You may want to change some of the types (e.g. maybe you want the dob field to be a TIMESTAMP instead of a STRING), but the general structure should be similar. Since types are NULLABLE by default, all of these fields should handle not being present for a given row; the author field is a REPEATED RECORD since it holds an array of records.
[
  {
    "name": "title",
    "type": "STRING"
  },
  {
    "name": "author",
    "type": "RECORD",
    "mode": "REPEATED",
    "fields": [
      {
        "name": "name",
        "type": "STRING"
      },
      {
        "name": "age",
        "type": "STRING"
      },
      {
        "name": "nickname",
        "type": "STRING"
      },
      {
        "name": "dob",
        "type": "STRING"
      }
    ]
  },
  {
    "name": "number_pages",
    "type": "INTEGER"
  },
  {
    "name": "year",
    "type": "INTEGER"
  }
]
A related question: suppose there is another similar table, but the data is from a different date, so it's possible to have a different schema. Is it possible to query across these two tables?
It should be possible to union two tables with differing schemas without too much difficulty.
Here's a quick example of how it works over public data (kind of a silly example, since the tables contain zero fields in common, but it shows the concept):
SELECT * FROM
(SELECT * FROM publicdata:samples.natality),
(SELECT * FROM publicdata:samples.shakespeare)
LIMIT 100;
Note that you need the SELECT * around each table or the query will complain about the differing schemas.

Specific analyzers for sub-documents in lucene / elasticsearch

After reading the documentation, testing, and reading a lot of other questions here on Stack Overflow:
We have documents that have titles and descriptions in multiple languages. There are also tags that are translated to the same languages. There might be up to 30-40 different languages in the system, but probably only 3 or 4 translations for a single document.
This is the planned document structure:
{
  "luck": {
    "id": 10018,
    "pub": 0,
    "pr": 100002,
    "loc": {
      "lat": 42.7,
      "lon": 84.2
    },
    "t": [
      {
        "lang": "en-analyzer",
        "title": "Forest",
        "desc": "A lot of trees.",
        "tags": [
          "Wood",
          "Nature",
          "Green Mouvement"
        ]
      },
      {
        "lang": "fr-analyzer",
        "title": "ForĂȘt",
        "desc": "A grand nombre d'arbre.",
        "tags": [
          "Bois",
          "Nature",
          "Mouvement Vert"
        ]
      }
    ],
    "dates": [
      "2014-01-01T20:00",
      "2014-06-06T20:00",
      "2014-08-08T20:00"
    ]
  }
}
Possible queries are "arbre" or "wood" or "forest" or "nature", combined with a date and a geo_distance filter; furthermore, there will be some facets over the tags array (which obviously include counting).
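For illustration, such a query could look roughly like this, written against the structure above in the ES 1.x query DSL. This is only a sketch: the 50km distance, the date bound, and the exact field paths are assumptions.

{
  "query": {
    "filtered": {
      "query": {
        "multi_match": {
          "query": "arbre",
          "fields": ["t.title", "t.desc", "t.tags"]
        }
      },
      "filter": {
        "and": [
          { "range": { "dates": { "gte": "2014-06-01" } } },
          { "geo_distance": { "distance": "50km", "loc": { "lat": 42.7, "lon": 84.2 } } }
        ]
      }
    }
  },
  "facets": {
    "tags": { "terms": { "field": "t.tags" } }
  }
}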
We can produce any document structure that fits best for Elasticsearch (or for Lucene). It's crucial that each language is analyzed with its own language-specific analyzer, so we use "_analyzer" in order to distinguish the languages:
{
  "luck": {
    "properties": {
      "id": {
        "type": "long"
      },
      "pub": {
        "type": "long"
      },
      "pr": {
        "type": "long"
      },
      "loc": {
        "type": "geo_point"
      },
      "t": {
        "_analyzer": {
          "path": "t.lang"
        },
        "properties": {
          "lang": {
            "type": "string"
          },
          "title": {
            "type": "string"
          },
          "desc": {
            "type": "string"
          },
          "tags": {
            "type": "string"
          }
        }
      }
    }
  }
}
A) Apparently, this idea does not work: after PUTting the mapping, we retrieve the same mapping (with a GET) and it seems to ignore the specific analyzers (a test with a top-level "_analyzer" worked fine). Does "_analyzer" work for sub-documents, and if yes, how should we refer to it? We also tested declaring the sub-document as "object" or "nested". How is multi-language document indexing supposed to work?
B) One possibility would be to put each language in its own document: in that case, how do we manage the id? In the end, both documents should refer to the same id. For example, if the user searches for "nature" (and we don't know whether the user means "nature" in English or French), this document would appear twice in the result set, and the counting and paging (including facet counting) would be very wrong.
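For reference, option B would mean indexing per-language documents along these lines (a sketch; the docId field tying the two documents back to the same logical entry is an assumption):

{
  "docId": 10018,
  "lang": "en",
  "title": "Forest",
  "desc": "A lot of trees.",
  "tags": ["Wood", "Nature", "Green Mouvement"]
}
{
  "docId": 10018,
  "lang": "fr",
  "title": "ForĂȘt",
  "desc": "A grand nombre d'arbre.",
  "tags": ["Bois", "Nature", "Mouvement Vert"]
}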
Any ideas?