DataWeave and Case Sensitivity (Mule)

Can I turn off case sensitivity in DataWeave?
Two different requests return responses where one contains a node called CDATA and the other a node called CData. In DataWeave, is there a way to treat these as equal, or do I need separate code statements such as payload.Data.CDATA and payload.Data.CData? If things were case-insensitive I could have a single statement such as payload.data.cdata.
Thanks in advance,
Terry
It appears that I need two different statements.
payload.Data.*CDATA map $.#SeqId when payload.Data? and payload.Data.CDATA? and payload.Data.CDATA.#SeqId?
payload.Data.*CData map $.#SeqId when payload.Data? and payload.Data.CData? and payload.Data.CData.#SeqId?

No, but you can create a function like the following to select a key ignoring case. It filters the object by the given key (mapObject, comparing keys lowercased via lower) and then gets the values from the resulting object (with pluck).
%function selectIgnoreCase(obj, keyName)
    obj mapObject ((v, k) -> k match {
        x when (lower x) == keyName -> {(k): v},
        default -> {}
    }) pluck $
And you'd use it like this:
selectIgnoreCase(payload.Data, "cdata")
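For comparison, here is the same idea as a hedged Kotlin sketch, i.e. a hypothetical standalone helper over a plain map rather than anything DataWeave-specific:
// Collect the values of all keys that match the given name, ignoring case.
fun selectIgnoreCase(obj: Map<String, Any?>, keyName: String): List<Any?> =
    obj.filterKeys { it.lowercase() == keyName.lowercase() }.values.toList()
// selectIgnoreCase(mapOf("CDATA" to 1, "CData" to 2, "other" to 3), "cdata")
// returns [1, 2]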
Note: With Mule 4 (and DW 2) syntax for this would be a little bit better.

How to chain filter expressions together

I have data in the following format
ArrayList<Map.Entry<String,ByteString>>
[
{"a":[a-bytestring]},
{"b":[b-bytestring]},
{"a:model":[amodel-bytestring]},
{"b:model":[bmodel-bytestring]},
]
I am looking for a clean way to transform this data into the format List<Map.Entry<ByteString,ByteString>>, where the key is the value of a and the value is the value of a:model.
Desired output
List<Map.Entry<ByteString,ByteString>>
[
{[a-bytestring]:[amodel-bytestring]},
{[b-bytestring]:[bmodel-bytestring]}
]
I assume this will involve the use of filters or other map operations, but I'm not familiar enough with Kotlin yet to know how.
It's not possible to give an exact, tested answer without access to the ByteString class — but I don't think that's needed for an outline, as we don't need to manipulate byte strings, just pass them around. So here I'm going to substitute Int; it should be clear and avoid any dependencies, but still work in the same way.
I'm also going to use a more obvious input structure, which is simply a map:
val input = mapOf("a" to 1,
                  "b" to 2,
                  "a:model" to 11,
                  "b:model" to 12)
As I understand it, what we want is to link each key without :model with the corresponding one with :model, and return a map of their corresponding values.
That can be done like this:
val output = input.filterKeys{ !it.endsWith(":model") }
                  .map{ it.value to input["${it.key}:model"] }.toMap()
println(output) // Prints {1=11, 2=12}
The first line filters out all the entries whose keys end with :model, leaving only those without. Then the second creates a map from their values to the input values for the corresponding :model keys. (Unfortunately, there's no good general way to create one map directly from another; here map() creates a list of pairs, and then toMap() creates a map from that.)
I think if you replace Int with ByteString (or indeed any other type!), it should do what you ask.
The only thing to be aware of is that the output is a Map<Int, Int?> — i.e. the values are nullable. That's because there's no guarantee that each input key has a corresponding :model key; if it doesn't, the result will have a null value. If you want to omit those, you could call filterValues{ it != null } on the result.
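If you do want to drop them and get a non-nullable Map<Int, Int> back, here's a minimal follow-on sketch:
// Drop entries whose ':model' counterpart was missing, then unwrap the values.
val cleaned: Map<Int, Int> = output
    .filterValues { it != null }
    .mapValues { it.value!! }   // safe: the nulls were removed just above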
However, if there's an ‘orphan’ :model key in the input, it will be ignored.

how would you write R.compose using R.o?

It seems like there's some value in knowing a good pattern for building an n-step composition or pipeline from a binary function. Maybe it's obvious or common knowledge.
What I was trying to do was R.either(predicate1, predicate2, predicate3, ...) but R.either is one of these binary functions. I thought R.composeWith might be part of a good solution but didn't get it to work right. Then I think R.o is at the heart of it, or perhaps R.chain somehow.
Maybe there's a totally different way to make an n-ary either that could be better than a "compose-with"(R.either)... interested if so, but I'm trying to ask a more general question than that.
One common way of converting a binary function into one that takes many arguments is by using R.reduce. This requires the binary function's two arguments and its return value to all have the same type.
For your example with R.either, it would look like:
const eithers = R.reduce(R.either, R.F)
const fooOr42 = eithers([ R.equals("foo"), R.equals(42) ])
This accepts a list of predicate functions that will each be given as arguments to R.either.
The fooOr42 example above is equivalent to:
const fooOr42 = R.either(R.either(R.F, R.equals("foo")), R.equals(42))
You can also make use of R.unapply if you want to convert the function from accepting a list of arguments to accepting a variable number of arguments.
const eithers = R.unapply(R.reduce(R.either, R.F))
const fooOr42 = eithers(R.equals("foo"), R.equals(42))
The approach above can be used for any type whose values can be combined to produce a value of the same type, i.e. any type with a "monoid" instance. This just means that we have a binary function that combines two values of the type and some "empty" value, which together satisfy some simple laws:
Associativity: combine(a, combine(b, c)) == combine(combine(a, b), c)
Left identity: combine(empty, a) == a
Right identity: combine(a, empty) == a
Some examples of common types with a monoid instance include:
arrays, where the empty list is the empty value and concat is the binary function
numbers, where 1 is the empty value and multiply is the binary function
numbers, where 0 is the empty value and add is the binary function
In the case of your example, we have predicates (a function returning a boolean value), where the empty value is R.F (a.k.a (_) => false) and the binary function is R.either. You can also combine predicates using R.both with an empty value of R.T (a.k.a (_) => true), which will ensure the resulting predicate satisfies all of the combined predicates.
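As a cross-language illustration, here's a hedged Kotlin sketch of the same fold-with-identity pattern (the names are mine; this is just the monoid reduce, not Ramda itself):
// Combine a list of predicates with "either" (||) as the binary function
// and the always-false predicate as the empty value.
fun <T> eithers(preds: List<(T) -> Boolean>): (T) -> Boolean =
    preds.fold({ _: T -> false }) { acc, p -> { x -> acc(x) || p(x) } }

fun main() {
    val fooOr42 = eithers<Any?>(listOf({ it == "foo" }, { it == 42 }))
    println(fooOr42("foo"))  // true
    println(fooOr42(7))      // false
}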
It is probably also worth mentioning that you could alternatively just use R.anyPass :)

Precedence in multiple DataWeave functions

I'm going through the Mule Dev 1 course and am stuck on a mismatch between the module content and what I'm seeing in practice.
The module content states that:
"When using a series of functions, the last function in the chain is executed first."
So
flights orderBy $.price filter ($.availableSeats > 30)
would "filter then orderBy".
However, I'm seeing that this statement:
payload.flights orderBy $.price filter $.price < 500 groupBy $.destination
actually does NOT execute groupBy first. In fact, placing the groupBy anywhere else throws an error (since the schema of the output after groupBy is changed).
Any thoughts on why the module states the last function is executed first, when that clearly seems not to be the case?
Thanks!
The precedence is the same for all of these functions (orderBy, groupBy, etc.).
So it will first do the orderBy on price, then it will filter by price, and last it will groupBy destination.
This is the same in DW 1 (Mule 3.x) and DW 2 (Mule 4.x). The difference between these two versions of DataWeave is that in DW 1 these were language operators, while in DW 2 they are just functions that are called using infix notation. This means you can write the same thing using prefix notation:
filter(
    orderBy(flights, (value, index) -> value.price),
    (value, index) -> value.availableSeats > 30)
Looking at the AST of this expression makes the order clear: filter is the outermost (root) node, with the orderBy call nested inside it as its first argument, which is why orderBy runs first.
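For what it's worth, the "infix is just a function call" point carries over to other languages; here's a hedged Kotlin sketch (the Flight type and the orderBy/filterBy wrappers are made up for illustration):
data class Flight(val price: Int, val availableSeats: Int)

// Infix wrappers over plain stdlib calls.
infix fun List<Flight>.orderBy(key: (Flight) -> Int): List<Flight> = sortedBy(key)
infix fun List<Flight>.filterBy(pred: (Flight) -> Boolean): List<Flight> = filter(pred)

fun main() {
    val flights = listOf(Flight(450, 40), Flight(300, 10), Flight(600, 50))
    // Infix chain: orderBy runs first, then filterBy on its result.
    val a = flights orderBy { it.price } filterBy { it.availableSeats > 30 }
    // Exactly the same calls in ordinary (prefix/method) notation.
    val b = flights.orderBy { it.price }.filterBy { it.availableSeats > 30 }
    println(a == b)  // true
}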

Fast lookup of tree with placeholders?

For an application I'm considering, there would be a large (100,000+) 'database' of trees (think expressions in a programming language, or S-expressions), and I would need to query that database for expressions that match a specific given expression.
Before giving the details of what I'd like to have, note that I'd appreciate any information related to indexing a large set of trees for optimizing lookup by a subtree.
In my specific situation (which would be for a backend to be used by Metamath proof assistants), expressions have the following structure (in Haskell-like notation):
data Expression = Placeholder Id | VarName Id | ConstName Id [Expression]
or as a BNF for an S-expression form:
Expression = '?' Id | Id | '(' Id Expression* ')'
where Id is some kind of identifier.
For example, I could have a database with expressions like
(equiv ?ph ?ps)
(not (in (appl (sqrt) (2)) (Q)))
(equiv (eq ?A ?B) (forall ?x (equiv (in ?x ?A) (in ?x ?B))))
In this context, two expressions match if they can be made equal by substitution of expressions for placeholders. So looking up (equiv (eq A (emptyset)) ?ph) in the above mini-database would result in the first and last expressions.
So again: how would I implement fast lookups in a large set of (expression) trees with placeholders? What kind of index data structure could I use?
I would implement the lookup with a trie. Each key would consist of one of the following:
ConstName Identifier
Variable w/ context info
ConstValue
Placeholder
These should be ordered in some fashion, possibly Placeholder, then all ConstNames (alphabetical), then variables (scope ordering, then argument order), then ConstValues (numerical order). As long as there's a concrete ordering for usage in the trie, you're fine.
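To make this concrete, here is a hedged Kotlin sketch of the expression type from the question and one possible key alphabet. All names, and the arity field on ConstName, are my own additions; the arity is used further below to let a placeholder skip over a whole stored subtree:
// The question's expression type, transliterated from the Haskell.
sealed class Expr {
    data class Placeholder(val id: String) : Expr()
    data class VarName(val id: String) : Expr()
    data class ConstName(val id: String, val args: List<Expr>) : Expr()
}

// One possible key alphabet for the trie.
sealed class Key {
    object Placeholder : Key()
    data class ConstName(val id: String, val arity: Int) : Key()
    data class Variable(val scope: List<Int>) : Key()   // context info, see below
    data class ConstValue(val n: Long) : Key()
}

// Any concrete total order works; e.g. placeholders first, then constant
// names, then variables, then constant values (ties broken within each kind).
fun rank(k: Key): Int = when (k) {
    Key.Placeholder   -> 0
    is Key.ConstName  -> 1
    is Key.Variable   -> 2
    is Key.ConstValue -> 3
}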
Traverse the expression's tree, injecting the appropriate keys into the trie as they are encountered. Do this for all the expressions you want to insert into your data structure. When it comes time to query it, you can traverse the trie in a similar fashion, but with a few new rules.
Everything matches a placeholder node. If the query key matches some other trie key as well, then you'll need to explore both branches (easily done via a recursive DFS-like approach).
A placeholder matches everything. This is not equivalent to the previous point: here we are talking about placeholders in the query, whereas the previous bullet is about placeholders as trie keys.
Now, this does mean that the search space can somewhat "explode" as you encounter placeholders, but there is one thing you can do to mitigate this in practice: traverse the expression's tree in a breadth-first fashion, both when constructing the trie and when querying. That way, if one of the arguments is a placeholder, you won't have to full-depth search every single subtree that matches the expression so far; instead you jump ahead to the next argument, which may not be a placeholder and will thus greatly prune the search space (compared to matching "everything").
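Here is a hedged Kotlin sketch of the trie and the two matching rules, reusing the Key type from the sketch above. Everything here (the preorder flattening assumption, subtreeLen, skipStored) is my own illustration rather than part of the original answer; I assume a preorder flattening because it makes subtree skipping easy to write, whereas the breadth-first layout described above prunes earlier at the cost of more bookkeeping:
class TrieNode {
    val children = mutableMapOf<Key, TrieNode>()
    val hits = mutableListOf<String>()   // ids of expressions ending at this node
}

fun arity(k: Key): Int = if (k is Key.ConstName) k.arity else 0

fun insert(root: TrieNode, keys: List<Key>, id: String) {
    var node = root
    for (k in keys) node = node.children.getOrPut(k) { TrieNode() }
    node.hits += id
}

// Number of keys spanned by the subtree starting at position i (preorder).
fun subtreeLen(keys: List<Key>, i: Int): Int {
    var need = 1
    var j = i
    while (need > 0) { need += arity(keys[j]) - 1; j++ }
    return j - i
}

// Visit every trie node reachable by consuming exactly `pending` stored subtrees.
fun skipStored(node: TrieNode, pending: Int, visit: (TrieNode) -> Unit) {
    if (pending == 0) { visit(node); return }
    for ((k, child) in node.children) skipStored(child, pending - 1 + arity(k), visit)
}

fun match(node: TrieNode, keys: List<Key>, i: Int, out: MutableSet<String>) {
    if (i == keys.size) { out += node.hits; return }
    // Rule 1: a stored placeholder absorbs the query's whole current subtree.
    node.children[Key.Placeholder]?.let { match(it, keys, i + subtreeLen(keys, i), out) }
    when (val k = keys[i]) {
        // Rule 2: a query placeholder absorbs exactly one stored subtree.
        Key.Placeholder -> skipStored(node, 1) { match(it, keys, i + 1, out) }
        else -> node.children[k]?.let { match(it, keys, i + 1, out) }
    }
}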
For completeness' sake, let's take one of your examples
(not (in (appl (sqrt) (2)) (Q)))
and make a trie entry from it:
not -> in -> appl -> "Q" -> sqrt -> 2
Adding (not (in ?ph E)) to this would result in:
not -> in -> appl -> "Q" -> sqrt -> 2
          \-> ?ph -> "E"
Continue in this fashion injecting expressions into the trie. Also traverse in this fashion for querying until you reach the ends of your searches into the trie, and return those that matched.
Note: the uniqueness of these entries rests on the assumption that you do not have to support variadic functions. If you do, attach some context info to each key (read the next paragraphs for how to do this) to distinguish which arguments go to which functions.
There is one detail I glossed over: variables. If you only want a match when they are the exact same variable name, then no work is necessary. But this likely isn't what you want; you probably want it to match generic variables as long as they are "consistent" with each other. The way to do this is to assign each variable an identifier that represents the scope in which it was first defined.
The easiest way to do this is to compose an identifier from the concatenation of the argument positions of its ancestors. That is, if a variable is first defined as the second argument of a function which is the fifth argument of the root function, then we might label it (5, 2) or (2, 5), whichever makes more sense intuitively. Either way, this ensures the variable gets a consistent identifier regardless of other variables and functions elsewhere. Then proceed as normal with this new variable name.
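Here's a hedged sketch of that labeling, reusing Expr from the earlier sketch (the path convention is just one choice):
// Map each variable name to the argument path of its first occurrence,
// e.g. (5, 2) = second argument of the root's fifth argument.
fun scopeIds(
    e: Expr,
    path: List<Int> = emptyList(),
    seen: MutableMap<String, List<Int>> = mutableMapOf(),
): Map<String, List<Int>> {
    when (e) {
        is Expr.VarName -> seen.getOrPut(e.id) { path }
        is Expr.ConstName -> e.args.forEachIndexed { i, arg ->
            scopeIds(arg, path + (i + 1), seen)
        }
        is Expr.Placeholder -> { /* nothing to label */ }
    }
    return seen
}
// Each occurrence of a variable is then encoded as Key.Variable(scopeIds(expr)[name]!!),
// so two expressions match exactly when their variables are used consistently.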

Pig Nesting STRSPLIT

I have a string in field 'product' in the following form:
";TT_RAV;44;22;"
and I want to first split on the ';' and then split on the '_' so that what is returned is
"RAV"
I know that I can do something like this:
parse_1 = foreach {
    splitup = STRSPLIT(product,';',3);
    generate splitup.$1 as depiction;
};
This will return the string 'TT_RAV', and then I can do another split and project out the 'RAV'. However, this seems like it will pass the data through multiple map jobs. Is it possible to parse out the desired field in one pass?
This example does NOT work, since the inner STRSPLIT returns a tuple (not a chararray), but it shows the logic:
parse_1 = foreach {
    splitup = STRSPLIT(STRSPLIT(product,';',3),'_',1);
    generate splitup.$1 as depiction;
};
Is it possible to do this in pure Pig Latin without multiple map phases?
Don't use STRSPLIT. You are looking for REGEX_EXTRACT:
REGEX_EXTRACT(product, '_([^;]*);', 1) AS depiction
If it's important to be able to precisely pick out the second semicolon-delimited field and then the second underscore-delimited subfield, you can make your regex more complicated:
REGEX_EXTRACT(product, '^[^;]*;[^_;]*_([^_;]*)', 1) AS depiction
Here's a breakdown of how that regex works:
^         // Start at the beginning
[^;]*     // Match as many non-semicolons as possible, if any (first field)
;         // Match the semicolon; now we'll start the second field
[^_;]*    // Match any characters in the first subfield
_         // Match the underscore; now we'll start the second subfield (what we want)
(         // Start capturing!
[^_;]*    // Match any characters in the second subfield
)         // End capturing
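If you want to sanity-check the regex outside Pig, note that REGEX_EXTRACT uses Java's regex engine, so the same pattern behaves identically elsewhere; a quick Kotlin sketch:
// Verify that the stricter pattern pulls 'RAV' out of the sample value.
val m = Regex("^[^;]*;[^_;]*_([^_;]*)").find(";TT_RAV;44;22;")
println(m?.groupValues?.get(1))   // prints RAV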
The only time there will be multiple maps is if you have an operator that triggers a reduce (JOIN, GROUP, etc.). If you run an EXPLAIN on the script, you can see whether there is more than one reduce phase.