I want code to compare two arrays and determine whether they are equal, irrespective of their order.
[a,b,c] compared to [a,b,c] should be true.
[a,b,c] compared to [a,c,b] should be true as well.
I tried using the diff function from DataWeave 2.0, but it works only if the parameters are JSON objects, not arrays.
As #George mentioned, a simple orderBy fixed my issue:
%dw 2.0
import diff from dw::util::Diff
output application/json
---
{
    result: diff(payload.array orderBy $, vars.array orderBy $).matches
}
This fixed the issue.
You can use the Diff module with the unordered property:
%dw 2.0
import diff from dw::util::Diff
output application/json
---
{
    result: diff(payload.array, vars.array, {unordered: true}).matches
}
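As a self-contained check with literal arrays (a sketch; `matches` is the Boolean field on the result object that `diff` returns):

```
%dw 2.0
import diff from dw::util::Diff
output application/json
---
{
    // same elements in a different order compare as equal with unordered: true
    result: diff(["a", "b", "c"], ["a", "c", "b"], {unordered: true}).matches
}
```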
I am getting the date as below
2022-10-25 11:00:00
which I need to convert to
2022-10-25T11:00:00
Please let me know the appropriate DataWeave to achieve the above output.
#subhash,
You can try adding the letter "T" to the input string and then coercing the result to a date type, as shown below (LocalDateTime, since the input has no timezone offset):
%dw 2.0
output application/json
---
(payload replace " " with "T") as LocalDateTime
%dw 2.0
output application/json
var data = "2022-10-25 11:00:00"
---
data replace " " with "T"
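Note that DataWeave's DateTime type expects a timezone offset, so a value such as "2022-10-25T11:00:00" is better modeled as LocalDateTime. A minimal sketch, assuming the input never carries a zone:

```
%dw 2.0
output application/json
var data = "2022-10-25 11:00:00"
---
// LocalDateTime suits date-time values without a timezone offset;
// coerce with `as DateTime` only when the string includes a zone
(data replace " " with "T") as LocalDateTime
```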
I am reading in a file (see below). The example file has 13 rows.
A|doe|chemistry|100|A|
B|shea|maths|90|A|
C|baba|physics|80|B|
D|doe|chemistry|100|A|
E|shea|maths|90|A|
F|baba|physics|80|B|
G|doe|chemistry|100|A|
H|shea|maths|90|A|
I|baba|physics|80|B|
J|doe|chemistry|100|A|
K|shea|maths|90|A|
L|baba|physics|80|B|
M|doe|chemistry|100|A|
Then I am iterating over these rows using a For Each scope (batch size 5) and calling a REST API.
Depending on the REST API response (success or failure), I write the payloads to the respective success / error files.
I have mocked the called API so that the first batch of 5 records fails and the rest of the records succeed.
While writing to the success / error files I am using the following transformation:
%dw 2.0
output application/csv quoteValues=true, header=false, separator="|"
---
payload
All of this works fine.
Success log file:
"F"|"baba"|"physics"|"80"|"B"
"G"|"doe"|"chemistry"|"100"|"A"
"H"|"shea"|"maths"|"90"|"A"
"I"|"baba"|"physics"|"80"|"B"
"J"|"doe"|"chemistry"|"100"|"A"
"K"|"shea"|"maths"|"90"|"A"
"L"|"baba"|"physics"|"80"|"B"
"M"|"doe"|"chemistry"|"100"|"A"
Error log file:
"A"|"doe"|"chemistry"|"100"|"A"
"B"|"shea"|"maths"|"90"|"A"
"C"|"baba"|"physics"|"80"|"B"
"D"|"doe"|"chemistry"|"100"|"A"
"E"|"shea"|"maths"|"90"|"A"
Now what I want to do is prepend the row/line number to each row in these files, so that when this goes to production, whoever is monitoring the files can easily correlate them with the original file.
So, for example, in the error log file (the first batch failed, which is rows 1 to 5), I want to prepend these numbers to each of the rows:
"1"|"A"|"doe"|"chemistry"|"100"|"A"
"2"|"B"|"shea"|"maths"|"90"|"A"
"3"|"C"|"baba"|"physics"|"80"|"B"
"4"|"D"|"doe"|"chemistry"|"100"|"A"
"5"|"E"|"shea"|"maths"|"90"|"A"
I am not sure what I should write in DataWeave to achieve this.
Inside the For Each scope, you have access to the counter vars.counter (or whatever name you've chosen, since it's configurable).
You will need to iterate over each chunk of records to add the position to each one. You can use something like:
%dw 2.0
output application/csv quoteValues=true, header=false, separator="|"
var batchSize = 5
---
payload map (
    {
        counter: batchSize * (vars.counter - 1) + ($$ + 1)
    } ++ $
)
Or, if you prefer to use the update function (note that this adds the record counter as the last column instead):
%dw 2.0
output application/csv quoteValues=true, header=false, separator="|"
var batchSize = 5
---
// named lambda parameters avoid any ambiguity about what $$ refers to
// inside the update expression
payload map ((row, index) ->
    row update {
        case .counter! -> batchSize * (vars.counter - 1) + (index + 1)
    }
)
Remember to keep the batchSize variable in this code in sync with the value you're using in the For Each scope (better yet, parameterise both from the same property).
Edit 1 -
Clarification: the - 1 is there because the counter from the For Each scope starts at 1, while the + 1 is there because the $$ index from map is zero-based.
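As a worked example, assuming a batch size of 5, the row numbers produced for the second batch (vars.counter = 2) can be checked in isolation:

```
%dw 2.0
output application/json
var batchSize = 5
var counter = 2  // stands in for vars.counter during the second batch
---
// batchSize * (counter - 1) + (index + 1) for indexes 0..4
// yields [6, 7, 8, 9, 10]: the original row numbers of batch 2
(0 to 4) map (batchSize * (counter - 1) + ($ + 1))
```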
Just another workaround that avoids external variables altogether. The script can be split in two: the first part handles the Error group and the second the Success group.
%dw 2.0
output application/csv quoteValues=true, header=false, separator="|"
// Starting counter for the Error group
var errorIdx = 1
// Starting counter for the Success group
var successIdx = 6
---
// Error items: the first 5 rows
(payload[0 to 4] map ((items, idx) -> {"0": idx + errorIdx} ++ items))
++
// Success items: row 6 and the remaining rows
(payload[5 to -1] map ((items, idx) -> {"0": idx + successIdx} ++ items))
DataWeave inline variables:
errorIdx is the starting value for the error counter.
successIdx is the starting value for the success counter.
This extracts the elements at index 0 to 4:
payload[0 to 4]
This extracts the elements from index 5 to the end:
payload[5 to -1]
I have tried the following:
vars.counter as Number {format:'00'}
vars.counter as Number {format:'##'}
vars.counter as String {format:'00'}
vars.counter as String {format:'##'}
None of the above converts 1 to 01.
How can I do this in Mule 4?
Numbers (integers, floating point) don't have a format in DataWeave, as in many other languages. You have to convert to a String with the desired pattern. I tried the following combinations:
%dw 2.0
output application/json
---
[
    1 as String {format: '##'},
    1 as String {format: '00'},
    1 as String {format: '#0'}
    // , 1 as String {format: '0#'} ERROR!
]
Output:
[
"1",
"01",
"1"
]
Only the all-zeros pattern gives the desired result.
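Applied back to the original vars.counter, the zero-padded form is then:

```
%dw 2.0
output application/json
---
// zero-pads single digits: 1 becomes "01", while 12 stays "12"
vars.counter as String {format: "00"}
```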
I want to count the number of occurrences of a substring in a field in an array.
Example:
The XML below has 3 occurrences of the substring 'TXT1' in Field1.
<Level1>
<Level2>
<Field1>10000TXT1</Field1>
<Field1>TXT210000</Field1>
<Field1>10001TXT1</Field1>
<Field1>TXT30000</Field1>
<Field1>10TXT1000</Field1>
<Field1>TXT20000</Field1>
</Level2>
</Level1>
fun countOccurences(txtToSearchFor) =
    // Some code that counts how many times the text 'TXT1' occurs across all the Field1 fields
I have tried the examples below, but they don't work:
1)
trim(upper(Field1)) contains "TXT1"
2)
(((Field1) find 'TXT1') joinBy '')
Hope you can help :-)
Hi, you can use the function sumBy from the dw::core::Arrays module. It takes an array and a lambda that returns the number to add for each element of the array. So I just need to ask how many times a String is repeated inside another String, which is achieved with sizeOf and find:
%dw 2.0
output application/json
import sumBy from dw::core::Arrays
fun timesOf(value: Array<String>, txtToSearchFor: String) =
    value sumBy ((text) -> sizeOf(text find txtToSearchFor))
---
payload.Level1.Level2.*Field1 timesOf "TXT1"
I found the answer :-)
fun countOccurences(texts) =
    sizeOf(payload.Level1.Level2.*Field1 filter ($ contains texts))
(Note: this counts the fields that contain the text, once per field, rather than every occurrence within each field.)
We have a scenario where we need to concatenate all XML node values to String.
input XML
<root>
<Address>
<line1>1</line1>
<line2>2</line2>
<line3>3</line3>
<line4>4</line4>
</Address>
<PostCode>
<line5>5</line5>
</PostCode>
</root>
Output to String
1 2 3 4 5
Please let me know how I can achieve this in the form of a String.
Thanks in advance.
This question is already answered here: concatenate XML values using dataweave mule
Referring to the DataWeave Reference Documentation, Reduce section:
Transform
%dw 1.0
%output application/json
---
concat: ["a", "b", "c", "d"] reduce ($$ ++ $)
Output
{
"concat": "abcd"
}
Therefore, you can try something like this: concat: payload.root.*line reduce ($$ ++ $)
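One caveat: in the sample XML the element names differ (line1 through line5), so a single-name selector such as *line will not match them all. A DataWeave 2.0 sketch that gathers the child values regardless of element name, using pluck:

```
%dw 2.0
output text/plain
---
// pluck returns each child's value irrespective of its element name,
// so this joins "1" through "5" with spaces: "1 2 3 4 5"
((payload.root.Address pluck $) ++ (payload.root.PostCode pluck $)) joinBy " "
```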