Jackson YAML merge operator deserialized incorrectly - Kotlin

I'm using Jackson to serialize and deserialize YAML, but I'm running into a problem deserializing the merge operator (<<).
For example:
- &item001
  boolean: true
  integer: 7
  float: 3.14
- &item002
  boolean: false
- <<: [ *item002, *item001 ]
This YAML file isn't deserialized properly with Jackson's YAML module.
This is the code I'm using:
val text = ..... //
val mapper = ObjectMapper(YAMLFactory()
        .disable(YAMLGenerator.Feature.WRITE_DOC_START_MARKER))
    .registerKotlinModule()
val list = mapper.readValue<Any>(text)
And it outputs, essentially:
[{"boolean":true,"integer":7, "float": 3.14}, {"boolean": false}, {"<<": 2}]
Rather than:
[{"boolean":true,"integer":7, "float": 3.14}, {"boolean": false}, {"boolean":false,"integer":7, "float": 3.14}]
It's strange, though, because Jackson's YAML module uses SnakeYAML under the hood, and if I perform the same operation with SnakeYAML directly it deserializes correctly.
Of course I could switch to SnakeYAML, but my project needs both JSON and YAML converters, and Jackson provides me with both.
Any suggestions?
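For reference, the merge-key behaviour the question expects (defined in the YAML 1.1 merge-key proposal, which SnakeYAML implements) can be sketched in a few lines of standalone Python. This is a sketch of the spec's semantics, not Jackson code: explicit keys win, and earlier entries in the merge sequence take precedence over later ones.

```python
def resolve_merge_key(mapping):
    """Resolve a YAML 1.1 '<<' merge key on one mapping."""
    sources = mapping.get("<<", [])
    if isinstance(sources, dict):        # '<<: *anchor' form (single mapping)
        sources = [sources]
    merged = {}
    for src in sources:
        for key, value in src.items():
            merged.setdefault(key, value)  # earlier sources take precedence
    # explicit keys on the mapping itself override everything
    merged.update({k: v for k, v in mapping.items() if k != "<<"})
    return merged

item001 = {"boolean": True, "integer": 7, "float": 3.14}
item002 = {"boolean": False}
resolve_merge_key({"<<": [item002, item001]})
# → {'boolean': False, 'integer': 7, 'float': 3.14}
```

This reproduces the third list entry the question expects: item002's boolean: false wins over item001's boolean: true because it comes first in the merge sequence.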

Related

pubsub <-> bigquery with protobuf: bool is getting converted to null or true, not false or true

I have a protobuf pubsub schema being published to bigquery (directly, no dataflow).
In the protobuf, I have a field like:
bool foo = 1;
In the bigquery schema this becomes:
{
  "name": "foo",
  "type": "BOOLEAN",
  "mode": "NULLABLE"
}
From my Python code, I call publish on the topic with a dict (encoded to bytes) that has:
foo: false
this becomes foo: null in the output BigQuery table.
If I make it
foo: true
it becomes foo: true in the BigQuery table.
This is happening for all of my bools: false becomes null, true remains true.
Any suggestions on where to look?
This is a known bug with proto3 support that is being actively worked on. You can track progress in the public issue tracker. For now, the workaround is to use proto2 instead of proto3.
The JSON Mapping section from the Protocol Buffer documentation says:
. . . If a field has the default value in the protocol buffer, it will be omitted in the JSON-encoded data by default to save space.
As false is the default value for Protocol Buffers' bools, the above seems to suggest that foo: false became foo: null by default to save space.
I reckon this is a bug, though, because the float value 0.0 is also being converted to null in BigQuery.
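The omission the quoted documentation describes can be mimicked in a few lines of standalone Python (the field names and the message shape here are hypothetical, purely to illustrate why false and 0.0 vanish while true survives):

```python
# proto3's JSON encoding drops any field whose value equals the
# type's default: False for bool, 0.0 for float, "" for string.
PROTO3_DEFAULTS = {"foo": False, "ratio": 0.0, "name": ""}

def to_json_dict(message):
    # keep only the fields that differ from their default value
    return {k: v for k, v in message.items() if v != PROTO3_DEFAULTS[k]}

to_json_dict({"foo": False, "ratio": 0.0, "name": "x"})
# → {'name': 'x'}
```

Since foo is absent from the encoded output, a NULLABLE BigQuery column has nothing to read and stores null instead of false.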

Karate: Unable to move schema definition out of feature file

I'm able to successfully run the feature/scenario when I define the schema inside my feature file.
Here is a simplified example of the schema.
Background:
...
...
* def accountSchema = { id: '#number? _ >= 0', prop1: '#number? _ >= 0', prop2: '#string', prop3: '#string', optionaProp: '##string' }
* def userAccountsSchema = ({ entitledAccounts: '#[] accountSchema', prop5: '#number', prop6: '##string' })
And here is how I'm validating
Scenario:
...
...
When method GET
Then status 200
* print userAccountsSchema
And match response == userAccountsSchema
But the schema I posted here is simplified for the sake of the question; the real schema is far more complex.
So for clarity, I decided to put the schema in a separate JS file, response-schemas.js, under the same folder as the feature file.
Here is the simplified content of response-schemas.js file.
function schema() {
  let accountSchema = {
    id: '#number? _ >= 0',
    prop1: '#number? _ >= 0',
    prop2: '#string',
    prop3: '#string',
    optionaProp: '##string',
  };
  return {
    accounts: `#[] ${accountSchema}`,
    prop5: '#string',
    prop6: '#string',
  };
}
Now, if I replace the two lines I mentioned at the beginning of the question under Background: with the line below,
* def userAccountsSchema = call read('response-schemas.js')
I get this error:
And match response == schemas
SyntaxError: Unnamed:1:8 Expected comma but found ident
[object Object]
^
I believe I understand the problem; it's this line:
accounts: `#[] ${accountSchema}` ,
but I'm unable to figure out the solution. If I try to change the accountSchema variable in response-schemas.js to use a multiline string, I get an error in the read step in Background.
The whole idea of having a dedicated JS file for the schema is to keep it readable (by using multiple lines, preferably objects rather than one long string).
The main problem is this part:
accounts: `#[] ${accountSchema}`
where you are trying to stuff JSON into a Karate "fuzzy" expression. This is just not supported. Note that the Karate way of defining things like #(foo) and #[] bar has nothing to do with JavaScript, so I recommend not mixing the two.
I know there is a desire to achieve the match in just one line and somehow get one monstrous schema to do everything, and I very strongly discourage this. Split your assertions into multiple lines. Split your response into smaller chunks of JSON if needed. There is nothing wrong with that. It also makes life much easier for the people who come along later and have to maintain your test.
For ideas see this answer: https://stackoverflow.com/a/61252709/143475
Other answers: https://stackoverflow.com/search?q=%5Bkarate%5D+array+schema
Tip: you can keep your schema "chunks" as JSON files if needed.
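Following that advice, a minimal sketch of the split-up approach in the feature file (the file name account-schema.json and the field names are assumptions taken from the question, not a verified solution):

```gherkin
Background:
* def accountSchema = read('account-schema.json')

Scenario:
When method GET
Then status 200
And match each response.entitledAccounts == accountSchema
And match response contains { prop5: '#number', prop6: '##string' }
```

Each assertion stays on its own line, and the account shape lives in a plain JSON file rather than a template string.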

Karate: dynamic test data using Scenario Outline is not working in some cases

I was trying to solve a dynamic test data problem using a dynamic Scenario Outline, as described in the documentation: https://github.com/karatelabs/karate#dynamic-scenario-outline
It worked perfectly fine when I passed something like this in the Examples section:
Examples:
| [{'entity':country},{'entity':state},{'entity':district},{'entity':corporation}] |
But when I tried to generate this JSON object programmatically, I got a strange error:
WARN com.intuit.karate - ignoring dynamic expression, did not evaluate to list: users - [type: MAP, value: com.intuit.karate.ScriptObjectMap#2b8bb184]
Code to generate the JSON object:
* def user =
"""
function(response) {
  entity_type_ids = []
  var entityTypes = response.entityTypes
  for (var i = 0; i < entityTypes.length; i++) {
    object = {}
    object['entity'] = entityTypes[i].id
    entity_type_ids.push(object)
  }
  return JSON.stringify(entity_type_ids)
}
"""
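The warning in the question says the expression "did not evaluate to list", so the JSON.stringify at the end (which returns a string, not a list) is a likely culprit. The intended transformation, sketched in standalone Python with a hypothetical response shape:

```python
# Build [{'entity': id}, ...] from the response. Note the result is
# returned as a list, not serialized to a string.
def entity_ids(response):
    return [{"entity": t["id"]} for t in response["entityTypes"]]

response = {"entityTypes": [{"id": "country"}, {"id": "state"}]}
entity_ids(response)
# → [{'entity': 'country'}, {'entity': 'state'}]
```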

Use array in JSON configuration file in serverless framework

My Serverless Framework setup is trying to set its environment variables to the contents of a JSON object.
My serverless.yml has this entry:
environment:
  ${file(./config.json)}
and my config.json looks like this:
{
  "VARIABLE1": "value1",
  "VARIABLE2": "value2",
  "INT_VARIABLE": 3,
  "BOOLEAN_TEST": true
}
This seems to work just fine, i.e.:
console.log(process.env.VARIABLE1) outputs value1
console.log(process.env.INT_VARIABLE) outputs 3 (as a string... but I can convert if needed)
console.log(process.env.BOOLEAN_TEST) outputs true (as a string... but that's not the end of the world)
But when I add an array to config.json, making it look like this:
{
  "VARIABLE1": "value1",
  "VARIABLE2": "value2",
  "INT_VARIABLE": 3,
  "BOOLEAN_TEST": true,
  "ARRAY_TEST": ["arrVal1", "arrVal2", "arrVal3"]
}
I get the following error:
Warning: Invalid configuration encountered at
'provider.environment.ARRAY_TEST': unsupported configuration format
How can I add an array as an environment variable in the Serverless Framework? (Same basic question about adding sub-objects.)
I believe you can split the elements by a delimiter:
serverless.yml
environment:
  VARIABLE_1: ${file(./config.json):VARIABLE_1}
  ARRAY_TEST:
    "Fn::Split":
      - ","
      - ${file(./config.json):ARRAY_TEST}
config.json
{
  "VARIABLE_1": "value1",
  "ARRAY_TEST": "arrVal1,arrVal2,arrVal3"
}

deserialization issue with char '\'

Does Json.NET have a built-in method that would escape special characters? The JSON strings I receive from vendors contain backslashes (\) and double quotes (").
If not, what is the best way to escape the special characters before invoking JsonConvert.DeserializeObject(myJsonString)?
My sample JSON string:
{
  "EmailAddresses": [
    {
      "EmailAddress": "N\A"
    }
  ]
}
Pasting this into JSONLint results in:
Parse error on line 4:
... "EmailAddress": "N\A",
-----------------------^
Expecting 'STRING', 'NUMBER', 'NULL', 'TRUE', 'FALSE', '{', '['
VB.NET code
instanceofmytype = JsonConvert.DeserializeObject(Of myType)(myJsonString)
Exception: Newtonsoft.Json.JsonReaderException: Bad JSON escape sequence:
The JSON is not valid: a \ must be followed by one of the following: "\/bfnrtu. Since it's followed by A, Json.NET chokes (as it ought to). The source of your JSON should be fixed. If this is not an option, you can make a guess and fix it yourself, e.g.:
myStr = Regex.Replace(myStr, "\\(?=[^""\\/bfnrtu])", "\\")
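The same repair idea, translated to a standalone Python sketch (the regex is a guess-fix, as the answer says: any backslash that does not start a valid JSON escape gets doubled; it does not validate the hex digits after \u):

```python
import json
import re

def repair_backslashes(s):
    # double any backslash not followed by a valid JSON escape character
    return re.sub(r'\\(?![\\"/bfnrtu])', r'\\\\', s)

broken = '{"EmailAddress": "N\\A"}'   # the raw text {"EmailAddress": "N\A"}
fixed = repair_backslashes(broken)    # now {"EmailAddress": "N\\A"}
json.loads(fixed)["EmailAddress"]     # parses cleanly
```

A negative lookahead is used instead of the answer's negated character class so that a stray backslash at the very end of the string is also repaired.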
You shouldn't have to worry about it. JSON.NET handles a lot of nice things for you. It should just work.
Have you tried it?