I assigned a geojson object to the fit property of the projection as described in the documentation: https://vega.github.io/vega/docs/projections/
I am always getting the error "Unsupported parameter object: {"type": "FeatureCollection"..."
I am assigning the GeoJSON object as follows (my source data is in TopoJSON format):
spec.projections[0].fit = topojson.feature(
mapData,
mapData.objects.topology
);
The documentation explicitly says that this parameter should be a GeoJSON Feature or FeatureCollection. How am I supposed to use the fit property?
You have to use a signal reference to a data set instead of assigning the GeoJSON object directly.
"data": [
{
"name": "counties",
"url": "data/us-10m.json",
"format": {"type": "topojson", "feature": "counties" }
}
],
"projections": [
{
"name": "projection",
"type": "mercator",
"fit": {"signal": "data('counties')"},
"size": {"signal": "[width, height]"}
}
]
Here is an example from Vega that you can test in their online editor:
https://github.com/vega/vega/blob/master/test/specs-valid/map-fit.vg.json
Also note the "size" property: the fit is not applied without it.
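Applied to the question's TopoJSON source, the equivalent spec fragment might look like the following sketch (the URL data/map.json is a placeholder; the feature name topology comes from mapData.objects.topology):
"data": [
  {
    "name": "topology",
    "url": "data/map.json",
    "format": {"type": "topojson", "feature": "topology"}
  }
],
"projections": [
  {
    "name": "projection",
    "type": "mercator",
    "fit": {"signal": "data('topology')"},
    "size": {"signal": "[width, height]"}
  }
]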
I'm just starting out with Shopify development, and I'm creating a custom section called USP Banner which has a single block type called column that can be added up to 4 times; each column has just one text field and one richtext field. The content of each column is identical on every page, but the section can appear in different places depending on the page itself.
I've given the column default values for the text & richtext and added the 4 columns that will be used into the presets definition of the schema; however because it's just the same column x 4, each column has the same content 4 times. What I'm aiming for is that when you add the section to any page, it prefills each column with different content.
I read through the Shopify docs and all I found was some vague allusion to having a settings object within the block presets, but I can't find any examples of that anywhere. I also found an old SO answer stating that presets can't contain default content.
Is there an easy way to add default content to repeating blocks within a section?
Although it's not made clear in the official docs, you can set default values for repeated blocks in the presets. Whereas the settings defined on the blocks themselves are an array of objects, the settings within the preset blocks are a single object in which each pre-determined field value uses the field's unique ID as its key. For example, a typical section schema might look like:
{% schema %}
{
"name": "USP Banner",
"blocks": [
{
"type": "column",
"name": "Column",
"limit": 4,
"settings": [
{
"type": "text",
"id": "heading",
"label": "Heading",
},
{
"type": "richtext",
"id": "text",
"label": "Text",
}
]
}
],
"presets": [
{
"name": "USP Banner",
"blocks": [
{
"type": "column"
},
{
"type": "column"
}
]
}
]
}
{% endschema %}
...so to create preset values for each block, we simply add settings to the preset definition like so:
"presets": [
{
"name": "USP Banner",
"blocks": [
{
"type": "column",
"settings": {
"heading": "Remarkable Value",
"text": "<p>Never knowingly undersold</p>"
}
},
{
"type": "column",
"settings": {
"heading": "5-Year Warranty",
"text": "<p>We're that confident in our products</p>"
}
}
]
}
]
...where the key for each field (heading & text in this case) is the ID of the defined fields in the blocks section at the top of the schema.
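Within the section's markup, those same IDs are then read back as ordinary block settings; a minimal sketch:
{% for block in section.blocks %}
  <h3>{{ block.settings.heading }}</h3>
  {{ block.settings.text }}
{% endfor %}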
Suppose I have a JSON document like this:
{"1": {"first_name": "a", "last_name": "b"},
"2": {"first_name": "c", "last_name": "d"}}
As you can see, each value conforms to this schema:
{"type": "object",
"properties": {
"first_name": {"type": "string"},
"last_name": {"type": "string"}
},
"additionalProperties": false,
"required": ["first_name", "last_name"]}
How can I define a schema that validates the above JSON?
The additionalProperties keyword takes a JSON Schema as its value. (Yes, a boolean is a valid JSON Schema!)
Let's recap what the additionalProperties keyword does...
The behavior of this keyword depends on the presence and annotation
results of "properties" and "patternProperties" within the same schema
object. Validation with "additionalProperties" applies only to the
child values of instance names that do not appear in the annotation
results of either "properties" or "patternProperties".
For all such properties, validation succeeds if the child instance
validates against the "additionalProperties" schema.
https://json-schema.org/draft/2020-12/json-schema-core.html#additionalProperties
In simplest terms, if you don't use properties or patternProperties within the same schema object, the value schema of additionalProperties applies to ALL values of the applicable object in your instance.
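For example, with no properties or patternProperties declared, a schema like this (an illustrative sketch) constrains every member value:
{
  "additionalProperties": {"type": "string"}
}
It accepts {"a": "x", "b": "y"} but rejects {"a": 1}.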
As such, you only need to nest your existing schema as follows.
{
"$schema": "https://json-schema.org/draft/2020-12/schema",
"additionalProperties": YOUR SCHEMA
}
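Filled in with the value schema from the question, that becomes:
{
  "$schema": "https://json-schema.org/draft/2020-12/schema",
  "additionalProperties": {
    "type": "object",
    "properties": {
      "first_name": {"type": "string"},
      "last_name": {"type": "string"}
    },
    "additionalProperties": false,
    "required": ["first_name", "last_name"]
  }
}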
I want to convert multiple types (supported in the latest drafts of JSON Schema, and consequently in OpenAPI v3.1) to anyOf or oneOf, but I am a bit confused about which of the two the types should be mapped to. Or can I map to either of them?
PS. I do have knowledge about anyOf, oneOf, etc., but the multiple-types behavior is a little ambiguous. (I know the schema below is invalid; it is just an example focused on the type conversion.)
{
"type": ["null", "object", "integer", "string"],
"properties": {
"prop1": {
"type": "string"
},
"prop2": {
"type": "string"
}
},
"enum": [2, 3, 4, 5],
"const": "sample const entry",
"exclusiveMinimum": 1.22,
"exclusiveMaximum": 50,
"maxLength": 10,
"minLength": 2,
"format": "int32"
}
I am converting it this way.
{
"anyOf": [{
"type": "null"
},
{
"type": "object",
"properties": {
"prop1": {
"type": "string"
},
"prop2": {
"type": "string"
}
}
},
{
"type": "integer",
"enum": [2, 3, 4, 5],
"exclusiveMinimum": 1.22,
"exclusiveMaximum": 50,
"format": "int32"
},
{
"type": "string",
"maxLength": 10,
"minLength": 2,
"const": "sample const entry"
}
]
}
anyOf gives you a closer match for the semantics than oneOf.
The problem (or benefit!) of oneOf is that it will fail if you happen to match two different cases.
That is unlikely to be what you want, given that the source of your conversion has those looser semantics.
Imagine converting ["integer", "number"], for example: if the input were 1, it would match both branches and fail under oneOf.
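To make that concrete, here is a sketch of that ["integer", "number"] conversion:
{
  "oneOf": [
    {"type": "integer"},
    {"type": "number"}
  ]
}
The instance 1 is valid against both branches (every JSON integer is also a valid number), so oneOf rejects it, whereas the anyOf form accepts it.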
First of all, your example is not valid:
The initial schema doesn't match anything; it's an "impossible" schema. The "enum": [2, 3, 4, 5] and "const": "sample const entry" constraints are mutually exclusive, and so are "const": "sample const entry" and "maxLength": 10.
The rewritten schema is not equivalent to the original schema, because the enum and const were moved from the root level into subschemas. Yes, this way the schema makes more sense and will sort of work (e.g. it will match the specified numbers, but not strings, because of the const vs maxLength contradiction), but it's not the same as the original schema.
With regard to oneOf/anyOf:
It depends.
The choice between anyOf and oneOf depends on the context, i.e. whether an instance can match more than one subschema or exactly one subschema. In other words, whether a multiple-subschema match is considered OK or an error. Nullable references typically need anyOf rather than oneOf, but other cases vary from schema to schema.
For example,
"type": ["number", "integer"]
corresponds to anyOf because there's an overlap - integer values are also valid "number" values in JSON Schema.
Whereas
"type": ["string", "integer"]
can be represented using either oneOf or anyOf. oneOf is semantically closer, since strings and integers are totally different data types with no overlap; but technically anyOf also works, there just won't be more than one subschema match in this particular case.
In your example, all base type values are distinct with no overlap, so I would use oneOf, but technically anyOf will also work.
I have been struggling with this problem for a long time. I need to create a new JSON flowfile using QueryRecord, taking the array (field ref) from the input JSON field refs and skipping the wrapping object, as shown in the example below:
Input JSON flowfile
{
"name": "name1",
"desc": "full1",
"refs": {
"ref": [
{
"source": "source1",
"url": "url1"
},
{
"source": "source2",
"url": "url2"
}
]
}
}
QueryRecord configuration
JsonTreeReader set up with Infer Schema, plus a JsonRecordSetWriter
select name, description, (array[rpath(refs, '//ref[*]')]) as sources from flowfile
Output JSON (need)
{
"name": "name1",
"desc": "full1",
"references": [
{
"source": "source1",
"url": "url1"
},
{
"source": "source2",
"url": "url2"
}
]
}
But I got this error:
QueryRecord Failed to write MapRecord[{references=[Ljava.lang.Object;@27fd935f, description=full1, name=name1}] with schema ["name" : "STRING", "description" : "STRING", "references" : "ARRAY[STRING]"] as a JSON Object due to java.lang.ClassCastException: null
Try the following approach; in your case it should work:
1) Read your JSON flowfile fully (I imitated it with a GenerateFlowFile processor containing your example).
2) Add an EvaluateJsonPath processor which puts the 2 header fields (name, desc) into flowfile attributes (e.g. json.name and json.desc, as referenced below).
3) Add a SplitJson processor which splits your JSON by the refs/ref groups (split on "$.refs.ref").
4) Add a ReplaceText processor which adds your header fields (name, desc) to the split fragments (replace "[{]" with "{"name":"${json.name}","desc":"${json.desc}",").
5) It's done; each split flowfile comes out as sketched below.
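With the replacement above, each split flowfile should come out looking roughly like this:
{"name":"name1","desc":"full1","source": "source1","url": "url1"}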
Hope this helps.
Solution: use JoltTransformJSON to transform the JSON with a Jolt specification.
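A minimal Jolt shift spec that should produce the desired output from the input above (an untested sketch):
[
  {
    "operation": "shift",
    "spec": {
      "name": "name",
      "desc": "desc",
      "refs": {
        "ref": "references"
      }
    }
  }
]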
My problem is: I have a JsonObject like this:
{
"success": true,
"type": "message",
"body": {
"_id": "5215bdd32de81e0c0f000005",
"id": "411c79eb-a725-4ad9-9d82-2db54dfc80ee",
"type": "metaModel",
"title": "testchang",
"authorId": "5215bd552de81e0c0f000001",
"drawElems": [
{
"type": "App.draw.metaElem.ModelStartPhase",
"id": "27re7e35-550j",
"x": 60,
"y": 50,
"width": 50,
"height": 50,
"title": "problem engagement",
"isGhost": true,
"pointTo": "e88e2845-37a4-4c45-a030-d02a3c3e03f9",
"bindingId": "90f79d70-0afc-11e3-98d2-83967d2ad9a6",
"model": "meta",
"entityType": "phase",
"domainId": "411c79eb-a725-4ad9-9d82-2db54dfc80ee",
"authorId": "5215bd552de81e0c0f000001",
"userData": {},
"_id": "5215f4c5d89f629c1700000d"
},
{...}
]
}
}
And I tried to define a mapping as follows to index only parts of this object.
String mapping = XContentFactory.jsonBuilder()
.startObject()
.startObject("domaindata").field("dynamic","false")
.startObject("properties")
.startObject("id").field("type","string").field("store","yes").endObject()
.startObject("type").field("type","string").field("store","yes").endObject()
.startObject("title").field("type","integer").field("store","yes").endObject()
.startObject("drawElems")
.startObject("properties")
.startObject("type").field("store","yes").field("type","string").endObject()
.startObject("title").field("store","yes").field("type","string").endObject()
.endObject().endObject().endObject().endObject().endObject().string();
After adding this mapping to my type with:
node.client().admin()
.indices().prepareCreate("test")
.addMapping("domaindata", mapping)
.execute().actionGet();
I still get the whole JSON object back in my index response; it seems that my mapping does not work.
Could anybody help me? Thanks a lot!
The problem here is that disabling dynamic mapping only means that fields not already present in the mapping won't be added to it, and thus won't be indexed either. But as they are part of the source document that you sent, they are still returned as part of the _source field.
The same goes if you disable a specific object in the mapping ("enabled": false): that object won't be parsed nor indexed, but it will still be part of the stored _source field.
If you want to avoid storing part of the _source, you can use the source includes/excludes feature, as described in the Elasticsearch documentation.
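For example, with the type mapping from the question, excluding the drawElems object from the stored _source might look like this (a sketch; the exclude pattern is illustrative):
{
  "domaindata": {
    "_source": {
      "excludes": ["drawElems.*"]
    },
    "properties": {
      "id": {"type": "string", "store": "yes"},
      "title": {"type": "string", "store": "yes"}
    }
  }
}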