I'm slightly confused by JSON-LD compaction and whether it can be used to compact the IRIs of values.
I have the following JSON-LD object
{
  "@context": {
    "@base": "file:///",
    "x": "https://example.org/pub/x#",
    "x-attribute": "https://example.org/pub/x-attribute#",
    "x:purpose": {
      "@type": "@id"
    }
  },
  "https://example.org/pub/x#purpose": "https://example.org/pub/x-attribute#on"
}
and the following new context
{
  "x": "https://example.org/pub/x#",
  "x-attribute": "https://example.org/pub/x-attribute#"
}
I'm expecting, and want, to get
{
  "@context": {
    "x": "https://example.org/pub/x#",
    "x-attribute": "https://example.org/pub/x-attribute#"
  },
  "x:purpose": "x-attribute:on"
}
but what I end up getting is
{
  "@context": {
    "x": "https://example.org/pub/x#",
    "x-attribute": "https://example.org/pub/x-attribute#"
  },
  "x:purpose": "https://example.org/pub/x-attribute#on"
}
You can plug this into the JSON-LD Playground if you'd like to try this.
How can I accomplish what I'm trying to do, i.e. use compact IRIs in value positions?
First, a quick note: you aren't using the term you've defined in the context in the input object. Since you're using the full URI, the @type definition is not being applied. Instead you should use the term (x:purpose):
{
  "@context": {
    "@base": "file:///",
    "x": "https://example.org/pub/x#",
    "x-attribute": "https://example.org/pub/x-attribute#",
    "x:purpose": {
      "@type": "@id"
    }
  },
  "x:purpose": "https://example.org/pub/x-attribute#on"
}
If you don't use the term in the data, you'll need to specify that the value is an @id like so:
{
  "@context": {
    "@base": "file:///",
    "x": "https://example.org/pub/x#",
    "x-attribute": "https://example.org/pub/x-attribute#",
    "x:purpose": {
      "@type": "@id"
    }
  },
  "https://example.org/pub/x#purpose": {
    "@id": "https://example.org/pub/x-attribute#on"
  }
}
Now, to get the effect you want where the value is compacted to a CURIE, you must indicate that the value is actually part of your vocabulary (an "enum" if you will). You do this by changing the new context to:
{
  "x": "https://example.org/pub/x#",
  "x-attribute": "https://example.org/pub/x-attribute#",
  "x:purpose": {
    "@type": "@vocab"
  }
}
That should give you the result you want.
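Compacting against that context should then produce roughly the result you were after, with the value shortened to a compact IRI (the term definition rides along in the emitted context):
{
  "@context": {
    "x": "https://example.org/pub/x#",
    "x-attribute": "https://example.org/pub/x-attribute#",
    "x:purpose": {
      "@type": "@vocab"
    }
  },
  "x:purpose": "x-attribute:on"
}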
I have a JSON Schema using draft 2020-12 and I am trying to use an if-else subschema to check that a particular property exists based on the value of another property. Below is the if statement I am currently using. There are more properties, but I have omitted them for the sake of brevity; they are identical except that the type of the property in the then statement is different. They are all wrapped in an allOf array:
{
  "AValue": {
    "allOf": [
      {
        "if": {
          "$ref": "#/$defs/ValueItem/properties/dt",
          "const": "type1"
        },
        "then": {
          "properties": {
            "string": { "type": "string" }
          },
          "required": ["string"]
        }
      }
    ]
  }
}
The #/$defs/ValueItem/properties/dt being referred to is here:
{
  "ValueItem": {
    "properties": {
      "value": {
        "$ref": "#/$defs/AValue"
      },
      "dt": {
        "$ref": "#/$defs/DT"
      }
    },
    "additionalProperties": false
  }
}
#/$defs/DT is here:
{
  "DT": {
    "type": "string",
    "enum": [
      "type1",
      "type2",
      "type3",
      "type4"
    ]
  }
}
I expected that when dt is encountered in a JSON instance document, the validator would check whether the value of dt is type1 and then validate that an additional property called string is also present and is of type string. However, what actually happens is that the validator complains: "Property 'string' has not been defined and the schema does not allow additional properties".
I assume this is because the condition in the if statement evaluates to false, so the subschema is never applied. However, I am unsure why that would be, as I followed the example here when creating the if-then-else block. The only thing I can think of that is different is the use of $ref, which I have in my schema but which is not in the example.
I found this answer, which makes me think that it is possible to use $ref in an if statement, but is it possible to use a ref that points to another ref, or am I thinking about it incorrectly?
I have also tried removing the $ref from the if statement, as below, but it still doesn't work. Is it because of the $ref in the properties?
{
  "AValue": {
    "properties": {
      "dt": {
        "$ref": "#/$defs/DT"
      }
    },
    "required": [
      "dt"
    ],
    "allOf": [
      {
        "if": {
          "properties": {
            "dt": {
              "const": "type1"
            }
          }
        },
        "then": {
          "properties": {
            "string": {
              "type": "string"
            }
          }
        }
      }
    ]
  }
}
The problem is not the chained $ref keywords. The const keyword in the if subschema is not applied to the target of the $ref, but to the current location in the JSON input data, in this case to "AValue". To check whether the property "dt" has the value "type1" at this point, you need an if like this (a simple solution with no $ref):
"if": {
  "properties": {
    "dt": {
      "const": "type1"
    }
  },
  "required": [ "dt" ]
}
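Dropped into the AValue definition from the question, the whole block could look roughly like this (a sketch; the required keyword in the then subschema mirrors the first attempt):
{
  "AValue": {
    "properties": {
      "dt": {
        "$ref": "#/$defs/DT"
      }
    },
    "required": [ "dt" ],
    "allOf": [
      {
        "if": {
          "properties": {
            "dt": { "const": "type1" }
          },
          "required": [ "dt" ]
        },
        "then": {
          "properties": {
            "string": { "type": "string" }
          },
          "required": [ "string" ]
        }
      }
    ]
  }
}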
Edit: Showing complete JSON Schema and error in JSONBuddy with $ref:
I need to update document parameters via REST API.
I've tried using the following:
PUT .../raylight/v1/documents/33903/parameters/3
with the following JSON payload:
{
  "parameters": {
    "parameter": {
      "id": 3,
      "answer": {
        "values": {
          "value": [
            "2019/9"
          ]
        }
      }
    }
  }
}
But the response shows the parameters unmodified:
{
  "parameter": {
    "@optional": "false",
    "@type": "prompt",
    ...
    "id": 3,
    ...
    "answer": {
      ...
      "info": {
        ...
        "previous": {
          "value": [
            "2015\/12"
          ]
        }
      },
      "values": {
        "value": [
          "2015\/12"
        ]
      }
    }
  }
}
How can I properly set new prompt parameters?
Do:
PUT .../raylight/v1/documents/33903/parameters
instead of:
PUT .../raylight/v1/documents/33903/parameters/3
Adding a parameter ID at the end performs a different function: it returns the list of parameters that are dependent upon the one provided. You have only one in this case, and it's returning itself. Leave it off to refresh the document.
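That is, send the same payload from the question to the collection endpoint, for example:
PUT .../raylight/v1/documents/33903/parameters
{
  "parameters": {
    "parameter": {
      "id": 3,
      "answer": {
        "values": {
          "value": [
            "2019/9"
          ]
        }
      }
    }
  }
}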
I have JSON with the following structure, and I am looking to remove certain parts of it using either a JSON-LD context or JSON-LD framing. In this situation I need only a specific type2 in the output, along with the parent:
{
  "@context": {
    "@vocab": "http://xyz.abc.com/01#",
    "sdf": "http://xyz.abc.com/01#",
    "@base": "http://xyz.abc.com/01#",
    "rowid": "@id",
    "values": "@nest",
    "type": "@type"
  },
  "@id": "sdf:parent",
  "type": "type1",
  "values": [
    {
      "value": 984657
    }
  ],
  "rows": [
    {
      "rowid": "5637220",
      "type": "type2",
      "values": [
        {
          "value": "i am type 2"
        }
      ]
    },
    {
      "rowid": "9847589",
      "type": "type3",
      "values": [
        {
          "value": "I am type 3"
        }
      ]
    }
  ]
}
So the output should look somewhat like this, with the child either embedded in the parent or kept separate as below, linked to it with a hasChild predicate:
{
  "@graph": [
    {
      "@id": "http://xyz.abc.com/01#parent",
      "@type": "http://xyz.abc.com/01#type1",
      "http://xyz.abc.com/01#value": 984657,
      "hasChild": "http://xyz.abc.com/5637220"
    },
    {
      "@id": "http://xyz.abc.com/5637220",
      "@type": "http://xyz.abc.com/01#type2",
      "http://xyz.abc.com/01#value": "i am type 2"
    }
  ]
}
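For the type2-only part, one direction to explore is a frame that matches on @type; a rough sketch (this alone does not produce the hasChild link, which would additionally require mapping the rows key to a hasChild term in the context):
{
  "@context": {
    "@vocab": "http://xyz.abc.com/01#",
    "@base": "http://xyz.abc.com/01#"
  },
  "@type": "type2"
}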
I need to create an Avro schema for this:
{"city":"XXXXXX", "brand":"YYYY", "discount": {} }
{"city":"XXXXXX", "brand":"YYYY", "discount": {"name": "Freedom", "value": 100} }
{"city":"XXXXXX", "brand":"YYYY", "discount": {"name": "Festive Sale", "value": 100} }
I tried the schema below, which does not work:
{
  "type": "record",
  "name": "simple_avro",
  "fields": [
    { "name": "city", "type": "string" },
    { "name": "brand", "type": "string" },
    {
      "name": "discount",
      "type": {
        "type": "record",
        "name": "discount",
        "default": "",
        "fields": [
          { "name": "discount_name", "type": "string", "default": "null" },
          { "name": "discount_value", "type": "float", "default": 0 }
        ]
      }
    }
  ]
}
For the discount field, I have tried setting the default to "[]", "{}", and "", but none of these work.
I don't think an empty {} object is allowed in any case, but if you want to allow no object at all, then it needs to be a union type, designated by an array for the type, and the default value goes on the outer field rather than inside the record body:
{
  "name": "discount",
  "type": [
    "null",
    { "type": "record", "name": "discount", "fields": [...] }
  ],
  "default": null
}
In general, I find this easier to express in IDL format.
Then, a valid message could be {"city":"XXXXXX", "brand":"YYYY"}
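Put together as a plain Avro JSON schema, it could look roughly like this; note the inner field names are changed to name and value so they line up with the sample messages (an assumption, since the original used discount_name/discount_value):
{
  "type": "record",
  "name": "simple_avro",
  "fields": [
    { "name": "city", "type": "string" },
    { "name": "brand", "type": "string" },
    {
      "name": "discount",
      "type": [
        "null",
        {
          "type": "record",
          "name": "discount",
          "fields": [
            { "name": "name", "type": "string" },
            { "name": "value", "type": "float" }
          ]
        }
      ],
      "default": null
    }
  ]
}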
I changed the Elasticsearch mapping field type from:
"articles": {
  "properties": {
    "id": {
      "type": "long"
    }
  }
}
to
"articles": {
  "properties": {
    "id": {
      "type": "string",
      "index": "not_analyzed"
    }
  }
}
After that I did the following steps:
Create the index with the new mapping
Reindex the data into the new index
After the mapping update, my previous query filter doesn't work anymore and I get no results:
GET /art/_search
{
  "query": {
    "filtered": {
      "query": {
        "match_all": {}
      },
      "filter": {
        "bool": {
          "must": [
            {
              "type": {
                "value": "articles"
              }
            },
            {
              "term": {
                "id": "123467679"
              }
            }
          ]
        }
      }
    }
  },
  "size": 1,
  "sort": [
    {
      "_score": "desc"
    }
  ]
}
If I check with this query the result is what I expect:
GET /art/articles/_search
{
  "query": {
    "match_all": {}
  }
}
I would appreciate it if somebody has an idea why the query is no longer working after the field type change.
Thanks!
The problem in the query was with the ID filter.
The query works correctly after changing the filter from:
"term": {
"id": "123467679"
}
to:
"term": {
"_id": "123467679"
}
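For reference, inside the full query from the question the must array then reads:
"must": [
  {
    "type": {
      "value": "articles"
    }
  },
  {
    "term": {
      "_id": "123467679"
    }
  }
]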
I'm still too much of a beginner with Elasticsearch to figure out why the mapping change broke the query even though I did the reindex, but "_id" fixed my query.
You can find more information in the Elasticsearch mapping reference documentation.