DataWeave in Mule - Change value in an Array of Objects

I get a payload as input in the Transform Message component. It is an array of objects:
[
{
"enterprise": "Samsung",
"description": "This is the Samsung enterprise"
},
{
"enterprise": "Apple",
"description": "This is the Apple enterprise"
}
]
I have a variable that should replace the description, and the output that I want is:
[
{
"enterprise": "Samsung",
"description": "This is the var value"
},
{
"enterprise": "Apple",
"description": "This is the var value"
}
]
I tried to use:
%dw 2.0
output application/java
---
payload map ((item, index) -> {
description: vars.descriptionValue
})
But it returns:
[
{
"description": "This is the var value"
},
{
"description": "This is the var value"
}
]
Is it possible to replace only the description value while keeping the rest of the fields, without having to add the other fields to the mapping?

There are many ways to do this.
One way to do it is to first remove the original description field and then add the new one:
%dw 2.0
output application/java
---
payload map ((item, index) ->
item - "description" ++ {description: vars.descriptionValue}
)
Otherwise, you can use mapObject to iterate over the key-value pairs of each object and, with pattern matching, add a case for when the key is description.
I prefer this second way of doing it when I want to do many replacements.
%dw 2.0
output application/java
fun process(obj: Object) = obj mapObject ((value, key) -> {
(key): key match {
case "description" -> vars.descriptionValue
else -> value
}
})
---
payload map ((item, index) ->
process(item)
)
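If your runtime has the DataWeave update operator available (added in more recent 2.x versions, so treat the version requirement as an assumption), a third option is a minimal sketch that only touches the description field:
%dw 2.0
output application/java
---
payload map ((item, index) ->
    item update {
        case .description -> vars.descriptionValue
    }
)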

How to turn a string which should be split on the space character and turned into an array of objects in MuleSoft

I am trying to turn a string, split on spaces, into an array of objects.
Please help me understand how I can form it.
Input
field: YYC:16:26 YVR:16:03 YEG:13:43
Expected Output
"details" : [
{
"field" : "YYC",
"time" : "16:26"
},
{
"field" : "YVR",
"Time" : "16:03"
},
{
"field" : "YEG",
"Time" : "13:43"
}
]
A slight twist to what Karthik posted:
%dw 2.0
output application/json
import * from dw::core::Arrays
var test= "YYC:16:26 YVR:16:03 YEG:13:43" splitBy " "
---
details: test map
{
"field": ($ splitBy ":")[0],
"Time": drop(($ splitBy ":"),1)joinBy ":"
}
You need to split first by space and then break the remaining string as below:
%dw 2.0
output application/json
var test= "YYC:16:26 YVR:16:03 YEG:13:43" splitBy " "
---
details: test map ((item, index) ->
{
"field": item[0 to 2],
"Time": item [4 to -1]
})
Another approach, similar to Anurag's solution:
DW
%dw 2.0
output application/json
var test= "YYC:16:26 YVR:16:03 YEG:13:43" splitBy " "
---
details: test map ((item, index) ->
{
"field": (item splitBy ":")[0],
"Time": (item splitBy ":")[1 to -1] joinBy ":"
})
Output
{
"details": [
{
"field": "YYC",
"Time": "16:26"
},
{
"field": "YVR",
"Time": "16:03"
},
{
"field": "YEG",
"Time": "13:43"
}
]
}
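The answers above hard-code the input string in a variable for clarity. Assuming the string actually arrives on the payload under a key named field (as in the question's input), a sketch that reads it directly could be:
%dw 2.0
output application/json
---
details: (payload.field splitBy " ") map ((item, index) -> {
    "field": (item splitBy ":")[0],
    "Time": (item splitBy ":")[1 to -1] joinBy ":"
})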

Need an optimized way to get the required output

Is there an optimized way to trim the leading and trailing blank spaces from the field values in the data array below? I have used three approaches, but need a more optimized way.
Note: there might be more than 20 objects in the data array and more than 50 fields for each object. The payload below is just a sample; field values can be digits, strings, or dates of any size.
{
"School": "XYZ High school",
"data": [
{
"student Name": "XYZ ",
"dateofAdmission": "2021-06-09 ",
"percentage": "89 "
},
{
"student Name": "ABC ",
"dateofAdmission": "2021-08-04 ",
"percentage": "90 "
},
{
"student Name": "PQR ",
"dateofAdmission": "2021-10-01 ",
"percentage": "88 "
}
]
}
Required output:
{
"School": "XYZ High school",
"data": [
{
"student Name": "XYZ",
"dateofAdmission": "2021-06-09",
"percentage": "89"
},
{
"student Name": "ABC",
"dateofAdmission": "2021-08-04",
"percentage": "90"
},
{
"student Name": "PQR",
"dateofAdmission": "2021-10-01",
"percentage": "88"
}
]
}
Three approaches I've used:
First approach:
%dw 2.0
output application/json
// variable to remove leading and trailing blank spaces from values in key:value pairs for data
var payload1 = payload.data map ((value, key) ->
    value mapObject (($$): trim($)))
---
//constructed the payload
payload - "data" ++ data: payload1 map ((item, index) -> {
(item)
})
Second approach:
%dw 2.0
output application/json
---
payload - "data" ++ data: payload.data map ((value , key ) ->
value mapObject ( ($$ )) : trim($)) map ((item, index) -> {
(item)
})
Third approach:
%dw 2.0
output application/json
---
{
"Name":payload.School,
"data": payload.data map ( $ mapObject (($$):trim($) ) )
}
Another solution using the update operator:
%dw 2.0
output application/json
---
payload update {
case data at .data ->
data map ($ mapObject ((value, key, index) -> (key): trim(value)))
}
Note that your first two solutions are exactly the same, and both have an unneeded map() at the end that doesn't seem to serve any purpose. The third solution is very similar but uses an incorrect key name for the school name (Name instead of School as in the example). There's nothing particularly wrong with any of the solutions other than those minor issues.
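For illustration, a rough sketch of what the second approach reduces to once the redundant trailing map() is dropped (same payload assumptions as the original scripts; braces added around the data field for clarity):
%dw 2.0
output application/json
---
payload - "data" ++ {
    data: payload.data map ($ mapObject (($$): trim($)))
}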

Mule 3 convert a json object in to an array

I have the below dynamic response coming from a third-party API. I need to transform only a particular JSON object ("MyValues") into an array.
The payload here is a sample; the actual payload is very large.
Current Output:
{
"Body": {
"Status": "200",
"Result": {
"MyValues":{
"Name":"ABC TEST",
"Phone":"1234"
}
}
}
}
Expected Output:
{
"Body": {
"Status": "200",
"Result": {
"MyValues":[{
"Name":"ABC TEST",
"Phone":"1234"
}]
}
}
}
You can use pattern matching based on the type received, array or object. I created a recursive function to find the instances of a key name and perform the change in a generic way.
Example:
%dw 1.0
%output application/json
%function convertToSingleArray(x, key)
x match {
    // OPTIONAL: :array -> x map convertToSingleArray($, key),
    :object -> x mapObject {
        ($$): [$] when ((($$ as :string) == key) and ((typeOf $) as :string == ":object"))
              otherwise convertToSingleArray($, key)
    },
    default -> x
}
---
convertToSingleArray(payload, "MyValues")
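For readers on Mule 4, a rough DataWeave 2.0 equivalent of the same recursive idea (a sketch, not part of the original Mule 3 answer) could look like this:
%dw 2.0
output application/json
fun convertToSingleArray(x, key) = x match {
    // recurse into arrays
    case is Array -> x map convertToSingleArray($, key)
    // wrap the matching key's object value in an array, otherwise keep recursing
    case is Object -> x mapObject ((value, k) -> {
        (k): if ((k as String == key) and (value is Object)) [value]
             else convertToSingleArray(value, key)
    })
    else -> x
}
---
convertToSingleArray(payload, "MyValues")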

CSV to JSON conversion where some tags may be missing at random

I have a CSV input like the one below. Notice that the tag Edible does not come for the second set of values. Also notice that the data for one object comes in columns as well as across three rows:
Key|Value|ConcatenatedString
Name|Fruit|"apple,orange,pineapple"
Status|Good|"apple,orange,pineapple"
Edible|Yes|"apple,orange,pineapple"
Name|Furniture|"chair,table,bed"
Status|Good|"chair,table,bed"
I need it in the below json format:
{
Name:"Fruit",
Status:"Good",
Edible:"Yes"
ConcatenatedString:"apple,orange,pineapple"
},
{
Name:"Furniture",
Status:"Good",
Edible:null,
ConcatenatedString:"chair,table,bed"
}
I was using the below code when all the tags were coming for all objects. But now that some tags may not come at all I am not sure how to handle this as I was using a position based approach:
%dw 2.0
input payload application/csv separator='|'
output application/json
---
payload map
{
Name:payload[(($$)*4)+0].Value,
Status:payload[(($$)*4)+1].Value,
Edible:payload[(($$)*4)+2].Value,
ConcatenatedString:payload[(($$)*4)+0]."ConcatenatedString"
}
filter ($.Name != null)
Thanks in advance,
Anoop
Here is my answer.
%dw 2.0
input payload application/csv separator="|"
output application/json
---
payload
groupBy ((item, index) -> item.ConcatenatedString)
pluck ((value, key, index) -> {
Name: (value filter ((item, index) -> item.Key == "Name")).Value[0],
Status: (value filter ((item, index) -> item.Key == "Status")).Value[0],
Edible: (value filter ((item, index) -> item.Key == "Edible")).Value[0],
ConcatenatedString: key
})
Basically, first you need to group by your chosen criteria, in your case ConcatenatedString. This returns:
{
"chair,table,bed": [
{
"Key": "Name",
"Value": "Furniture",
"ConcatenatedString": "chair,table,bed"
},
{
"Key": "Status",
"Value": "Good",
"ConcatenatedString": "chair,table,bed"
}
],
"apple,orange,pineapple": [
{
"Key": "Name",
"Value": "Fruit",
"ConcatenatedString": "apple,orange,pineapple"
},
{
"Key": "Status",
"Value": "Good",
"ConcatenatedString": "apple,orange,pineapple"
},
{
"Key": "Edible",
"Value": "Yes",
"ConcatenatedString": "apple,orange,pineapple"
}
]
}
And then you iterate with pluck over every key-value pair and filter the elements you want to map.
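Applied to the sample CSV, the script should yield something like the following (Edible resolves to null where the row is missing; group order follows the intermediate result shown above):
[
{
"Name": "Furniture",
"Status": "Good",
"Edible": null,
"ConcatenatedString": "chair,table,bed"
},
{
"Name": "Fruit",
"Status": "Good",
"Edible": "Yes",
"ConcatenatedString": "apple,orange,pineapple"
}
]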

Lookup list of Maps variable in DataWeave script

I have a list of maps (listOfMapsObject) like the one below:
[
{
"Id" : "1234",
"Value" : "Text1"
},
{
"Id" : "1235",
"Value" : "Text2"
}
]
I would like to access the "Value" field for a given Id in a DataWeave script.
For example, for Id = 1234, Text1 should be returned.
%dw 1.0
%output application/json
%var listOfMapsObject = flowVars.listOfMaps
---
payload map {
"key" : $.key,
"data" : lookup Value field in listOfMapsObject by given key
}
The approach suggested by 'sulthony h' is fine, but it will end up with a performance issue if you have a large amount of data in the payload and in listOfMapsObject. Since filter is used, for each record of the payload the script will loop over all the entries in flowVars.listOfMaps.
The following will work fine and builds the key-value map only once.
%dw 1.0
%output application/json
%var dataLookup = {(flowVars.listOfMaps map {
($.Id): $.Value
})}
---
payload map {
key : $.key,
data : dataLookup[$.key]
}
Output:
[
{
"key": "1234",
"data": "Text1"
},
{
"key": "1235",
"data": "Text2"
}
]
Where the payload is:
[
{
"key" : "1234"
},
{
"key" : "1235"
}
]
And flowVars.listOfMaps is:
[
{
"Id" : "1234",
"Value" : "Text1"
},
{
"Id" : "1235",
"Value" : "Text2"
}
]
Hope this helps.
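For completeness, a rough DataWeave 2.0 equivalent of this lookup-object approach (a sketch; in Mule 4 the flow variable would be read as vars.listOfMaps instead of flowVars.listOfMaps):
%dw 2.0
output application/json
// build the lookup object once: { "1234": "Text1", "1235": "Text2" }
var dataLookup = {(vars.listOfMaps map ((item) -> {(item.Id): item.Value}))}
---
payload map ((item, index) -> {
    key: item.key,
    data: dataLookup[item.key]
})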
I created my own object, slightly similar to yours, and successfully accessed the "value" field with the following DataWeave expression:
%dw 1.0
%output application/json
%var listOfMapsObject = flowVars.listOfMaps
---
payload map using(data = $) {
"key" : data.key,
"data" : (listOfMapsObject filter $.id == data.key).value reduce ($$ ++ $)
}
You can modify it for your own object, e.g. replace "id" with "Id". Test and evaluate the result using filter, flatten, reduce, etc.