If I have a blank JSON schema, such as
{}
and I try to validate the following data:
{
"hello": "world",
}
would validation be successful? (note the trailing comma).
I tried using the Everit JSON Schema validator in Java,
JSONObject rawSchema = new JSONObject(new JSONTokener("{}"));
Schema schema = SchemaLoader.load(rawSchema);
schema.validate(new JSONObject("{\"hello\" : \"world\",}"));
and it seems to validate.
Interestingly, some online validators accept this JSON:
https://www.jsonschemavalidator.net/
whereas others don't
https://json-schema-validator.herokuapp.com/
The latter uses a parser from Jackson before validating; perhaps that's the reason?
JSON Schema validates JSON. Technically, trailing commas are not valid JSON. However, many JSON parsers ignore that and allow trailing commas. In general you're safer not having trailing commas in your JSON, so you know it will work with all JSON parsers.
You are validating against the empty schema ({}). An empty schema means there are no constraints on what the value can be; any value that is valid JSON will be valid against this schema. Therefore, the only reason validators would report different results is if they disagree on whether the document is valid JSON. If a validator uses a JSON parser that allows trailing commas, the document will be valid; otherwise it will be invalid.
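To illustrate the parser difference, here is a minimal sketch in Java (assuming Jackson 2.9+ for the ALLOW_TRAILING_COMMA feature; org.json is the parser the Everit validator builds on):
import com.fasterxml.jackson.core.JsonParser;
import com.fasterxml.jackson.databind.ObjectMapper;
import org.json.JSONObject;

public class TrailingCommaDemo {
    public static void main(String[] args) throws Exception {
        String json = "{\"hello\" : \"world\",}";

        // org.json (used under the hood by the Everit validator) is lenient
        // and accepts the trailing comma.
        System.out.println(new JSONObject(json));

        // Jackson is strict by default and rejects the same document.
        try {
            new ObjectMapper().readTree(json);
        } catch (Exception e) {
            System.out.println("Strict parser rejected it: " + e.getMessage());
        }

        // Jackson can be told to tolerate trailing commas (2.9+).
        ObjectMapper lenient = new ObjectMapper()
                .configure(JsonParser.Feature.ALLOW_TRAILING_COMMA, true);
        System.out.println(lenient.readTree(json)); // {"hello":"world"}
    }
}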
I'm trying to implement spec-compliant parameter serialization for OpenAPI cookie parameters.
My only source for this is the docs: https://swagger.io/docs/specification/serialization/#cookie
In the description I'm seeing this line:
An optional explode keyword controls the array and object serialization
However, the table below only defines array and object serialization when explode is false.
What does this mean?
Is cookie serialization with explode = true defined? If so, could you please link the docs?
If not, am I correct in saying that explode = true basically "disables" array and object serialization and has no effect on primitive serialization?
If neither of these, then what is the situation with explode?
I hope an OpenAPI expert can shed some light on this, thank you!
Cookie serialization is defined but unfortunately not well thought out; as a result, some forms of it don't make much sense. One of the specification authors admits that they "never really thought anyone would go through the trouble of describing objects in cookies in an exploded way".
In a nutshell, cookie serialization follows the same rules as query parameters with style: form.
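For example, such a parameter might be declared like this in the OpenAPI definition (a sketch; the name param and the integer array schema are only illustrative):
parameters:
  - in: cookie
    name: param
    style: form      # the only style allowed for cookie parameters (and the default)
    explode: true    # the default when style is form
    schema:
      type: array
      items:
        type: integer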
When using explode: true:
A cookie parameter named param with the array value [3, 4, 5] would be sent as:
Cookie: param=3&param=4&param=5
A cookie parameter named param with the object value {"foo": "test", "bar": 5} would be sent as:
Cookie: foo=test&bar=5
Note that the parameter name (param) is lost in this case.
As you may notice, both methods deviate from the standard Cookie header format which expects semicolon-separated name=value pairs:
Cookie: [cookie-name]=[cookie-value]; [cookie-name]=[cookie-value];...
In other words, OpenAPI's exploded form of cookies is not quite compatible with cookie parsers. For example, an OpenAPI-formatted exploded array cookie Cookie: param=3&param=4&param=5 would be parsed by a cookie parser as param = 3&param=4&param=5 - which is not what an API developer would expect.
The problems with cookie serialization are being discussed here:
Default 'explode' for cookie parameters?
Feel free to provide your implementer's feedback.
I have an issue using ObjectMapper with YAMLFactory to parse a YAML file.
The YAML file I'm trying to parse: https://drive.google.com/open?id=1Q85OmjH-IAIkordikLTsC1oQVTg8ggc8
I parse the file using readValue as shown here:
ObjectMapper mapper = new ObjectMapper(new YAMLFactory().enable(Feature.MINIMIZE_QUOTES)//
.disable(Feature.WRITE_DOC_START_MARKER)//
.disable(YAMLGenerator.Feature.SPLIT_LINES));
TypeReference<HashMap<String, Object>> typeRef = new TypeReference<HashMap<String, Object>>() {};
HashMap<String, Object> obj = mapper.readValue(responseBuffer.toString(), typeRef);
Then I convert the object to JSON and back to YAML:
JsonElement jsonElem = wrapJacksonObject(obj);
String cloudTemplateJsonString = new GsonBuilder().disableHtmlEscaping().setPrettyPrinting()//
.create()//
.toJson(jsonElem);
JsonNode jsonNode = mapper.readTree(cloudTemplateJsonString);
String yaml = new YAMLMapper().enable(Feature.MINIMIZE_QUOTES)//
.disable(Feature.WRITE_DOC_START_MARKER)//
.writeValueAsString(jsonNode);
After checking the final string, I see that these special characters are changed or deleted (the change happens exactly after step 2):
a. ' is converted to " or deleted
b. ! : everything after the exclamation mark, up to the first space, is deleted entirely
Examples:
Version: !Join ['-', [!Ref GatewayVersion, GW]]
After parsing:
Version:
- '-'
- - GatewayVersion
  - GW
Also, single quotes are sometimes deleted or converted to double quotes:
AllowedPattern: '^(\d{1,3})\.(\d{1,3})\.(\d{1,3})\.(\d{1,3})/(\d{1,2})$'
After parsing, the single quotes are deleted:
AllowedPattern: ^(\d{1,3})\.(\d{1,3})\.(\d{1,3})\.(\d{1,3})/(\d{1,2})$
I tried customizing escaping by providing my own CharacterEscapes implementation, but it didn't help.
In YAML, a value such as a string literal can be preceded by tokens indicating metadata about the node, known as node properties. Tokens beginning with a bang (!) are 'node tags', and tokens beginning with an ampersand (&) are 'node anchors'.
https://yaml.org/spec/1.2/spec.html#id2783797
JSON does not have an equivalent capability. Because Jackson is primarily a JSON parsing library, its internal representation of structured data nodes does not have fields for such metadata, so its YAMLFactory parser implementation simply discards them.
Looking at your YAML file, I expect the intended consumer of the file (AWS CloudFormation?) would know how to use those !Join and !Ref node tags to construct an internal representation of the Version field.
Similarly, single or double quotes surrounding a text value are considered part of the markup (i.e., used by the parser) rather than part of the value. The parser therefore discards these characters (after using them as guides for how to consume the value). Quotes (double or single) may or may not be added, as necessary, when re-serializing an internal representation back into YAML or JSON.
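As a minimal sketch of the tag behaviour (assuming only jackson-dataformat-yaml on the classpath):
import com.fasterxml.jackson.databind.ObjectMapper;
import com.fasterxml.jackson.dataformat.yaml.YAMLFactory;

public class TagDiscardDemo {
    public static void main(String[] args) throws Exception {
        String yaml = "Version: !Join ['-', [!Ref GatewayVersion, GW]]";
        ObjectMapper yamlMapper = new ObjectMapper(new YAMLFactory());

        // The !Join and !Ref node tags are discarded while parsing;
        // only the plain nested lists survive.
        Object parsed = yamlMapper.readValue(yaml, Object.class);
        System.out.println(parsed); // {Version=[-, [GatewayVersion, GW]]}

        // Re-serializing therefore cannot restore the tags.
        System.out.println(yamlMapper.writeValueAsString(parsed));
    }
}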
What is the difference between the various ways of reading properties from the payload? For example, there is a property in the payload named con_id. When I read it as #[payload.con_id], it comes back as null, whereas #[payload.'con_id'] returns the value.
A few other notations I know of are #[payload['con_id']] and #[json:con_id].
Which one should be used in which scenario? If there are any special cases that call for a specific notation, please describe those scenarios as well.
Also, which notation is recommended from a MuleSoft platform support point of view?
In Mule 3, any of those syntaxes is valid, except that the json: evaluator is for querying JSON documents whereas the others are for querying maps/objects. The json: evaluator is also deprecated in Mule 3 in favor of transforming the payload to a map and using the MEL expressions below.
payload.property
payload.'property'
payload['property']
The reason the first one fails in your case is the special character '_': the underscore forces the field name to be wrapped in quotes.
Typically the . notation is preferred over [''] as it's shorter for accessing map fields; simply wrap property names in '' for any fields with special characters.
Note that in Mule 4 you don't need to transform to a map/object first. DataWeave expressions replace MEL as the expression language and allow you to query JSON (or any other type of payload) directly, without transforming it to a map first.
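For example, a DataWeave transform in Mule 4 can read the field straight from a JSON payload (a minimal sketch):
%dw 2.0
output application/json
---
payload.con_id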
I have to build a REST service with ServiceStack; the responses must have a certain format. Both JSON and XML are to be supported. The standard serializers do not return the response in the format I need.
For JSON, it would be enough to wrap the result, e.g. if a function returns a list of Site objects, the JSON serializer gives me [{...}, ...], but I need {"Sites": [{...}, ...]}. The requested content-type would be "Sites+json" in this case. For other functions, "Sites" would be replaced by something else.
How can I achieve this?
Edit:
The XML has to be the direct "translation" of the JSON, like
<Sites>...</Sites> instead of {"Sites":...}.
The standard XML serialization works differently; it always includes the data type as well.
Does anyone have an idea how to do this? I guess I have to write my own XML serializer and map all my XML types (like Sites+xml, ...) to it?
Is there a way that I can instruct WCF to accept JSON that is formatted using either single quotes (as opposed to double quotes):
{
'foo': 'bar'
}
Or using non-quoted identifiers like so:
{
foo: 'bar'
}
As it is, it seems like JSON will only be accepted if it is formatted like so:
{
"foo": "bar"
}
Using either of the first two examples results in a 400 (bad request).
The first two examples are invalid JSON texts.
http://www.ietf.org/rfc/rfc4627.txt
object = begin-object [ member *( value-separator member ) ] end-object
member = string name-separator value
string = quotation-mark *char quotation-mark
quotation-mark = %x22 ; "
DataContractJsonSerializer always writes strict JSON.
At various points during deserialization (generally with missing closing brackets for arrays or objects, improper escaping, or improperly formatted numbers), it will accept incorrect, non-strict JSON.
However, I can tell you definitively that this is not one of those cases. DataContractJsonSerializer always requires double-quoted strings for JSON.
Hope this helps!