I am trying to parse the following YAML content using Jackson in Kotlin.
template:
  # More properties...
  noise.max: 0.01
I get this exception:
com.fasterxml.jackson.databind.exc.UnrecognizedPropertyException: Unrecognized field "noise.max" ...
When I change my YAML to this, it works:
template:
  # More properties...
  noise:
    max: 0.01
It seems like Jackson can't parse nested values if they are written inline with dots as separators. Is this incorrect YAML, or just unconventional?
I know that Spring Boot can parse this kind of nested YAML parameter, and I guess it uses Jackson for that too, but I can't find a way to configure the ObjectMapper so that it works.
Can someone please tell me how to configure the ObjectMapper, or whatever else needs to be done?
In YAML, a dot is not a special character and is simply part of the content. The first file contains two mappings, with the inner one having noise.max as its key, while the second file contains three mappings, where the innermost has max as its key and the one above it has noise as its key. These are different structures.
Spring Boot maps YAML to Properties. It does so by concatenating nested keys via dots. If you do this, the result for both of your YAML files will be:
template.noise.max = 0.01
And that is why it works with Spring Boot.
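As a sketch of that flattening (in Java; the method is invented for illustration and is not Spring Boot's actual code):
// Recursively join nested keys with '.', the way Spring Boot flattens
// YAML into Properties. Both YAML variants above end up as
// {template.noise.max=0.01}.
@SuppressWarnings("unchecked")
static void flatten(String prefix, Map<String, Object> node, Map<String, Object> out) {
    for (Map.Entry<String, Object> e : node.entrySet()) {
        String key = prefix.isEmpty() ? e.getKey() : prefix + "." + e.getKey();
        if (e.getValue() instanceof Map) {
            flatten(key, (Map<String, Object>) e.getValue(), out);
        } else {
            out.put(key, e.getValue());
        }
    }
}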
Property files are a list of key/value pairs, while YAML files describe a possibly complex node graph. Spring Boot uses YAML as syntactic sugar for Properties. If you use Jackson, you process the actual structure, not the simplified one that you get with Spring Boot.
So the bottom line is: if you use a YAML library for loading YAML, you will not have this "feature" of replacing nested maps with dots in keys. Theoretically you could use SnakeYAML to do some preprocessing at the event level to split such keys so that what you want becomes possible, but I wouldn't recommend it.
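The answer mentions event-level preprocessing with SnakeYAML; a simpler tree-level alternative is sketched below (in Java, although the question uses Kotlin; the class and method names are invented): read the document into a Map with Jackson's YAMLMapper and split dotted keys into nested maps before binding.

import com.fasterxml.jackson.databind.ObjectMapper;
import com.fasterxml.jackson.dataformat.yaml.YAMLMapper;
import java.util.LinkedHashMap;
import java.util.Map;

public class DottedKeyExpander {

    // Split keys containing '.' into nested maps, so that
    // "noise.max: 0.01" becomes {noise={max=0.01}}.
    @SuppressWarnings("unchecked")
    static Map<String, Object> expand(Map<String, Object> source) {
        Map<String, Object> result = new LinkedHashMap<>();
        for (Map.Entry<String, Object> entry : source.entrySet()) {
            String[] parts = entry.getKey().split("\\.");
            Map<String, Object> current = result;
            for (int i = 0; i < parts.length - 1; i++) {
                current = (Map<String, Object>) current.computeIfAbsent(
                        parts[i], k -> new LinkedHashMap<String, Object>());
            }
            Object value = entry.getValue() instanceof Map
                    ? expand((Map<String, Object>) entry.getValue())
                    : entry.getValue();
            current.put(parts[parts.length - 1], value);
        }
        return result;
    }

    public static void main(String[] args) throws Exception {
        ObjectMapper yaml = new YAMLMapper();
        @SuppressWarnings("unchecked")
        Map<String, Object> raw = yaml.readValue("template:\n  noise.max: 0.01\n", Map.class);
        System.out.println(expand(raw)); // {template={noise={max=0.01}}}
        // The expanded tree can then be bound to the target type with
        // yaml.convertValue(...).
    }
}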
I'm testing a GraphQL endpoint. I want to keep the query separate from the feature file so that it can be reused elsewhere. The query has an embedded string into which I want to pass variables from my Examples; however, I can't seem to update the query.
Here is the feature file:
Here is the query file:
Any help would be appreciated, thanks.
I think best practice is to read the query part alone as a text file, and then form the JSON within the test. Your JSON is actually not well-formed, because JSON does not allow line feeds within values; that's why you have the red squiggly line in your screenshot.
Refer to articles like this one: https://www.katk.dev/graphql-karate
Best practice is to use the variables in the JSON in addition to the query. If not, be aware that you can do placeholder substitution in plain text using Karate: https://github.com/karatelabs/karate#replace
Also read this part of the documentation: https://github.com/karatelabs/karate#dont-parse-treat-as-raw-text
What is the difference between the ways of reading properties from the payload? For example, there is a property in the payload named con_id. When I read this property as #[payload.con_id], it comes back null, whereas #[payload.'con_id'] returns the value.
A few other notations I know of are #[payload['con_id']] and #[json:con_id].
Which one should be used in which scenario? If there are any special cases that call for a specific notation, please let me know the scenario as well.
Also, what is the common notation to use from a MuleSoft platform-support point of view?
In Mule 3 any of those syntaxes is valid, except that the json: evaluator is for querying JSON documents whereas the others are for querying maps/objects. The json: evaluator is also deprecated in Mule 3 in favor of transforming to a map and using the MEL expressions below.
payload.property
payload.'property'
payload['property']
The reason the first one fails in your case is the special character '_': the underscore forces the field name to be wrapped in quotes.
Typically the . notation is preferred over [''] as it's shorter for accessing map fields; simply wrap property names in '' for any fields with special characters.
Note that in Mule 4 you don't need to transform to a map/object first. DataWeave expressions replace MEL as the expression language and allow you to directly query JSON, or any other payload type, without transforming to a map first.
I'm using noflo and am trying to send an array as an initializer. There doesn't seem to be a supported (or at least documented) way to do this.
I'm currently using:
'["Kicker"]' -> IN Nodes(strings/ParseJson)
'{"in":"go!"}' -> IN Config(strings/ParseJson)
Nodes() OUT -> NODES MyComponent(noflotest/Universe)
Config OUT -> CONFIG MyComponent()
Is there a better way to do this?
Currently arrays and other complicated data structures are not supported in the .fbp syntax. There is a feature request about this.
Right now you have three options:
1. If the FBP parser accepts your string (see the matching rules), you can first send it to the strings/ParseJson component to turn it into the appropriate data structure
2. Reading the value from a JSON or YAML file and passing it through the appropriate parser component
3. Converting your graph to the JSON graph format
I'm writing a maven plugin that has a parameter that's a String[].
Like this:
/**
 * @parameter expression="${args}"
 */
protected String[] args;
This can be utilized through the POM like this:
<args>
  <arg>arg1</arg>
  <arg>arg2</arg>
</args>
But I want to send it in from the command line
-Dargs={arg1, arg2}
Is this possible?
You can't do it directly as far as I know, but it is pretty common practice to accept a delimited String and split it into an array yourself.
For example, the maven-site-plugin allows you to specify a comma-delimited String of locales, while the maven-scala-plugin handles this by allowing you to define the arguments with a pipe separator. You can look at the relevant Mojos to see how the argument is processed.
Some example usages below:
site-plugin:
-Dlocales=enGB,frFR
scala-plugin:
-DaddArgs=arg1|arg2|arg3
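For illustration, here is a minimal sketch of the split-it-yourself approach using the modern plugin annotations (the goal name, property name, and delimiter are all invented, not taken from a real plugin):

import org.apache.maven.plugin.AbstractMojo;
import org.apache.maven.plugins.annotations.Mojo;
import org.apache.maven.plugins.annotations.Parameter;

@Mojo(name = "run")
public class RunMojo extends AbstractMojo {

    /** A comma-delimited list, passed as -Dargs=arg1,arg2 */
    @Parameter(property = "args")
    private String args;

    @Override
    public void execute() {
        // Split the single delimited String into an array ourselves.
        String[] split = (args == null) ? new String[0] : args.split(",");
        for (String arg : split) {
            getLog().info("arg: " + arg.trim());
        }
    }
}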
Update: if you want to handle this more elegantly, you could use maven-shared-io to allow definition of an external descriptor file, then pass the descriptor location as a property. This means a single command-line argument can reference a structure of configuration.
If this sounds like it might work for you, have a look at this answer that describes how to use external descriptors in the properties plugin, or this answer that does similar for the xml-maven-plugin. Or you can just look at the assembly-plugin for ideas.
The latest Maven (3.0.3) should work with:
-DaddArgs=arg1,arg2,arg3
To update on @nybon's answer a bit, it seems
@Parameter(property="your.param")
private List<String> yourParam;
works, at least when using maven-plugin-annotations:3.5 in Maven 3.5.0. Running with
-Dyour.param=val1,val2
sets the list.
According to Sonatype's blog (linked below), if you are a plugin developer, use Maven 3, and annotate your array/collection-type plugin parameter with an annotation like:
/** @parameter expression="${args}" */
then the plugin parameter can be processed by Maven automatically, and plugin users can provide array/collection values via a comma-separated system property on the CLI, e.g. mvn myplugin:mygoal -Dargs=a,b,c
How you specify a list of values via a system property depends on how up to date the plugin is. If you are dealing with a properly implemented plugin that is up to date, then the correct way to specify an array of values is a comma-separated string.
Here is a reference:
http://blog.sonatype.com/2011/03/configuring-plugin-goals-in-maven-3/
Here is a quote from the reference:
For many plugin parameters it is occasionally convenient to specify
their values from the command line via system properties. In the past,
this was limited to parameters of simple types like String or Boolean.
The latest Maven release finally allows plugin users to configure
collections or arrays from the command line via comma-separated
strings. Take for example a plugin parameter like this:
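(The blog's own example is not reproduced in the quote above; a parameter along these lines would qualify. This is a hypothetical sketch, not the blog's exact code:)

/**
 * A hypothetical array parameter; with Maven 3, users can set it from the
 * command line as -Dformats=pdf,html.
 */
@Parameter(property = "formats")
private String[] formats;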
Going a little further, we can look at a more concrete example. Consider the WildFly Maven plugin.
This plugin has a deprecated configuration property called jvmArgs, which was expected to be passed as a space-separated list of values.
As we all know, messing around with spaces on the command line is no fun.
So if we look at the definition of this parameter in the plugin's Mojo code, we find something like this (here goes another quote):
/**
 * A space delimited list of JVM arguments.
 *
 * @deprecated use {@link #javaOpts}
 */
@Parameter(alias = "jvm-args", property = PropertyNames.JVM_ARGS)
@Deprecated
private String jvmArgs;
So this is the old way of doing things.
Now, if you are using the latest version of this plugin (e.g. Alpha6), the source code has a nice new field called javaOpts.
Let us look at what the field looks like in the code.
/**
 * The JVM options to use.
 */
@Parameter(alias = "java-opts", property = PropertyNames.JAVA_OPTS)
private String[] javaOpts;
So what we see is a nice array field in the StartMojo, properly annotated, and the Maven engine will do the heavy lifting of setting the values into the Mojo.
When you want to pump data into this field via the command line, you would specify something of this form in your batch file:
-Dwildfly.javaOpts="-Xmx1536M,-Xms1536M,-XX:MaxMetaspaceSize=512M,-XX:-HeapDumpOnOutOfMemoryError"
If you try the same thing using spaces instead of commas, this is what happens:
[INFO] STANDALONE server is starting up. Invalid maximum heap size:
-Xmx1536M -XX:MaxMetaspaceSize=512m -XX:-HeapDumpOnOutOfMemoryError
So you see, when Maven swallowed my system property full of spaces, it did not do a string split, and WildFly tried to set up the JVM memory settings as if the maximum heap size were that full string.
When I use commas to separate the values, on the other hand, the Mojo is properly enriched and I can take control of the memory settings of the app server when it starts up.
And of course you want to use system properties rather than pom.xml configuration for tasks like setting up Jenkins jobs; with system properties you are rather more flexible.
That is it.
Aside from getting any real work done, I have an itch. My itch is to write a view engine that closely mimics a template system from another language (Template Toolkit/Perl). This is one of those "if I had time" / "do it to learn something new" kinds of projects.
I've spent time looking at CoCo/R and ANTLR, and honestly, it makes my brain hurt, but some of CoCo/R is sinking in. Unfortunately, most of the examples are about creating a compiler that reads source code, but none seem to cover how to create a processor for templates.
Yes, those are the same thing, but I can't wrap my head around how to define the language for templates where most of the source is the HTML, rather than actual code being parsed and run.
Are there any good beginner resources out there for this kind of thing? I've taken a gander at Spark, which didn't appear to have the grammar in the repo.
Maybe that is overkill, and one could just text-replace the template syntax with C# in the file and compile it. http://msdn.microsoft.com/en-us/magazine/cc136756.aspx#S2
If you were in my shoes and weren't a language creating expert, where would you start?
The Spark grammar is implemented with a kind-of-fluent domain specific language.
It's declared in a few layers. The rules which recognize the html syntax are declared in MarkupGrammar.cs - those are based on grammar rules copied directly from the xml spec.
The markup rules refer to a limited subset of C# syntax rules declared in CodeGrammar.cs - those are a subset because Spark only needs to recognize enough C# to adjust single quotes around strings to double quotes, match curly braces, etc.
The individual rules themselves are of the ParseAction<TValue> delegate type, which accepts a Position and returns a ParseResult. The ParseResult is a simple class which contains the TValue data item parsed by the action and a new Position instance which has been advanced past the content that produced the TValue.
That isn't very useful on its own until you introduce a small number of operators, as described in Parsing Expression Grammar, which can combine single parse actions to build very detailed and robust expressions about the shape of different syntax constructs.
The technique of using a delegate as a parse action came from Luke H's blog post Monadic Parser Combinators using C# 3.0. I also wrote a post about Creating a Domain Specific Language for Parsing.
It's also entirely possible, if you like, to reference the Spark.dll assembly and inherit a class from the base CharGrammar to create an entirely new grammar for a particular syntax. It's probably the quickest way to start experimenting with this technique, and an example of that can be found in CharGrammarTester.cs.
Step 1. Use regular expressions (regexp substitution) to split your input template string into a token list; for example, split
hel<b>lo[if foo]bar is [bar].[else]baz[end]world</b>!
to
write('hel<b>lo')
if('foo')
write('bar is')
substitute('bar')
write('.')
else()
write('baz')
end()
write('world</b>!')
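A rough Java sketch of step 1 (the directive syntax here is invented to match the example above, not taken from a real engine):

import java.util.ArrayList;
import java.util.List;
import java.util.regex.Matcher;
import java.util.regex.Pattern;

public class Tokenizer {
    // Directives are anything between square brackets, e.g. [if foo].
    private static final Pattern DIRECTIVE = Pattern.compile("\\[(.*?)\\]");

    static List<String> tokenize(String template) {
        List<String> tokens = new ArrayList<>();
        Matcher m = DIRECTIVE.matcher(template);
        int last = 0;
        while (m.find()) {
            // Literal text before the directive becomes a write(...) token.
            if (m.start() > last) {
                tokens.add("write('" + template.substring(last, m.start()) + "')");
            }
            String body = m.group(1);
            if (body.startsWith("if ")) {
                tokens.add("if('" + body.substring(3) + "')");
            } else if (body.equals("else")) {
                tokens.add("else()");
            } else if (body.equals("end")) {
                tokens.add("end()");
            } else {
                tokens.add("substitute('" + body + "')");
            }
            last = m.end();
        }
        if (last < template.length()) {
            tokens.add("write('" + template.substring(last) + "')");
        }
        return tokens;
    }

    public static void main(String[] args) {
        tokenize("hel<b>lo[if foo]bar is [bar].[else]baz[end]world</b>!")
                .forEach(System.out::println);
    }
}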
Step 2. Convert your token list to a syntax tree:
Sequence
  Write('hel<b>lo')
  If('foo')
    then: Sequence
      Write('bar is')
      Substitute('bar')
      Write('.')
    else: Write('baz')
  Write('world</b>!')
class Instruction {
}

class Write : Instruction {
    string text;
}

class Substitute : Instruction {
    string varname;
}

class Sequence : Instruction {
    Instruction[] items;
}

class If : Instruction {
    string condition;
    Instruction then;
    Instruction elseBranch; // "else" is a reserved word in C#
}
Step 3. Write a recursive function (called the interpreter), which can walk your tree and execute the instructions there.
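As a sketch of step 3, here is a minimal interpreter in Java, mirroring the instruction classes above (the variable lookup and truthiness rule are invented for illustration; a real engine would evaluate condition expressions properly):

import java.util.List;
import java.util.Map;

abstract class Instruction {
    abstract void run(StringBuilder out, Map<String, String> vars);
}

class Write extends Instruction {
    final String text;
    Write(String text) { this.text = text; }
    void run(StringBuilder out, Map<String, String> vars) { out.append(text); }
}

class Substitute extends Instruction {
    final String varname;
    Substitute(String varname) { this.varname = varname; }
    void run(StringBuilder out, Map<String, String> vars) {
        out.append(vars.getOrDefault(varname, ""));
    }
}

class Sequence extends Instruction {
    final List<Instruction> items;
    Sequence(List<Instruction> items) { this.items = items; }
    void run(StringBuilder out, Map<String, String> vars) {
        for (Instruction item : items) item.run(out, vars);
    }
}

class If extends Instruction {
    final String condition;
    final Instruction then, elseBranch;
    If(String condition, Instruction then, Instruction elseBranch) {
        this.condition = condition;
        this.then = then;
        this.elseBranch = elseBranch;
    }
    void run(StringBuilder out, Map<String, String> vars) {
        // "Truthy" here simply means the variable is set.
        if (vars.containsKey(condition)) then.run(out, vars);
        else if (elseBranch != null) elseBranch.run(out, vars);
    }
}

Running the Sequence built in step 2 against a variable map then produces the rendered output in the StringBuilder.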
An alternative approach (instead of steps 1-3), if your language supports eval() (such as Perl, Python, or Ruby): use a regexp substitution to convert the template to an eval()-able string in the host language, then run eval() to instantiate the template.
There are soooo many things to do, but it does work for one simple GET statement plus a test. That's a start.
http://github.com/claco/tt.net/
In the end, I had already put too much time into ANTLR to give loudejs' method a go. I wanted to spend a little more time on the whole process rather than on the parser/lexer. Maybe in version 2 I can have a go at the Spark way, when my brain understands things a little more.
Vici Parser (formerly known as LazyParser.NET) is an open-source tokenizer/template parser/expression parser which can help you get started.
If it's not what you're looking for, then you may get some ideas by looking at the source code.