Is there a more optimized way to trim the leading and trailing blank spaces from the field values in the data array below? I have used three approaches, but I need a more optimized one.
Note: there might be more than 20 objects in the data array and more than 50 fields per object. The payload below is just a sample; field values can be digits, strings, or dates of any size.
{
"School": "XYZ High school",
"data": [
{
"student Name": "XYZ ",
"dateofAdmission": "2021-06-09 ",
"percentage": "89 "
},
{
"student Name": "ABC ",
"dateofAdmission": "2021-08-04 ",
"percentage": "90 "
},
{
"student Name": "PQR ",
"dateofAdmission": "2021-10-01 ",
"percentage": "88 "
}
]
}
Required output:
{
"School": "XYZ High school",
"data": [
{
"student Name": "XYZ",
"dateofAdmission": "2021-06-09",
"percentage": "89"
},
{
"student Name": "ABC",
"dateofAdmission": "2021-08-04",
"percentage": "90"
},
{
"student Name": "PQR",
"dateofAdmission": "2021-10-01",
"percentage": "88"
}
]
}
Three approaches I've used:
First approach:
%dw 2.0
output application/json
// variable to remove leading and trailing blank spaces from values in key:value pairs for data
var payload1 = payload.data map ((value, key) ->
value mapObject (($$): trim($)))
---
//constructed the payload
payload - "data" ++ data: payload1 map ((item, index) -> {
(item)
})
Second approach:
%dw 2.0
output application/json
---
payload - "data" ++ data: payload.data map ((value, key) ->
value mapObject (($$): trim($))) map ((item, index) -> {
(item)
})
Third approach:
%dw 2.0
output application/json
---
{
"Name":payload.School,
"data": payload.data map ( $ mapObject (($$):trim($) ) )
}
Another solution using the update operator:
%dw 2.0
output application/json
---
payload update {
case data at .data ->
data map ($ mapObject ((value, key, index) -> (key):trim(value) ))
}
Note that your first two solutions are exactly the same, and both have an unneeded map() at the end that doesn't seem to serve any purpose. The third solution is very similar but uses an incorrect key name for the school name (Name instead of School as in the example). There's nothing particularly wrong with any of the solutions other than those minor issues.
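For reference, a trimmed-down version of the first approach with the redundant map removed and the result reassembled might look like this (a sketch only; field and key names are taken from the sample payload above):

```dataweave
%dw 2.0
output application/json
---
// remove the original data key, then re-add it with every value trimmed
payload - "data" ++ {
    data: payload.data map ((row, index) ->
        row mapObject ((value, key) -> (key): trim(value)))
}
```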
Related
Hi everybody, I hope you are well. I have a question: how can I use try inside map in DataWeave? Let me explain my issue. I receive a CSV file with multiple rows, grouped by order number. I use map to transform the data to JSON (joining all the rows with the same order), using a couple of columns from the CSV file. If any column is empty or null, the map fails and breaks the whole file. How can I use the try function in DataWeave so that if any group of orders fails, I only capture that order in another part of the JSON and continue with the next order, without breaking the loop?
Part of the CSV File - Demo:
number,date,upc,quantity,price
1234556,2022-08-04,4015,1,
1234556,2022-08-04,4019,1,2.00
1234556,2022-08-04,4016,1,3.00
1234557,2022-08-04,4015,1,3.00
Dataweave:
%dw 2.0
output application/json
---
payload groupBy ($.number) pluck $ map ( () -> {
"number": $[0].number,
"date": $[0].date,
"items": $ map {
"upc": $.upc,
"price": $.price as Number {format: "##,###.##"} as String {format: "##,###.00"},
"quantity": $.quantity
}
})
Error Message:
Unable to coerce `` as Number using `##,###.##` as format.
NOTE: if I put data in the price position, the issue is solved for the first row, but I need to use the try function or whatever you recommend. I can't validate every element in the CSV because this is a demo; the complete file has many columns. If you have any other comments to improve my code, I'd appreciate them.
Expected Result: (I don't know if this is possible)
[
{
"data": [
{
"number":"1234557",
"date":"2022-08-04",
"items":[
{
"upc":"4015",
"price":"3.00",
"quantity":"1"
}
]
}
]
},
{
"Error":[
{
"number":"1234556",
"message":"Unable to coerce `` as Number using `##,###.##` as format."
}
]
}
]
best regards!!
Hi, the closest I got to what you asked for was:
%dw 2.0
output application/json
import * from dw::Runtime
fun safeMap<T, R>(items: Array<T>, callback: (item:T) -> R ): Array<R | {message: String}> =
items map ((item) -> try(() -> callback(item)) match {
case is {success: false} -> {message: $.error.message as String}
case is {success: true, result: R} -> $.result
})
---
payload
groupBy ($.number)
pluck $
safeMap ((item) -> {
"number": item[0].number,
"date": item[0].date,
"items": item safeMap {
"upc": $.upc,
"price": $.price as Number {format: "##,###.##"} as String {format: "##,###.00"},
"quantity": $.quantity
}
})
This uses a combination of the map and try functions.
And it outputs
[
{
"number": "1234556",
"date": "2022-08-04",
"items": [
{
"message": "Unable to coerce `` as Number using `##,###.##` as format."
},
{
"upc": "4019",
"price": "2.00",
"quantity": "1"
},
{
"upc": "4016",
"price": "3.00",
"quantity": "1"
}
]
},
{
"number": "1234557",
"date": "2022-08-04",
"items": [
{
"upc": "4015",
"price": "3.00",
"quantity": "1"
}
]
}
]
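For context, the try function imported from dw::Runtime wraps a lambda and returns an object with a success flag plus either a result or an error; that shape is what the match cases inside safeMap inspect. A minimal sketch (the outputs in the comments are approximate):

```dataweave
%dw 2.0
output application/json
import try from dw::Runtime
---
{
    // succeeds: roughly { success: true, result: 1 }
    ok: try(() -> "1" as Number),
    // fails: success is false and the details live under .error
    failedMessage: try(() -> "x" as Number).error.message
}
```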
If you are looking to resolve the value of price when it is null/empty in the input CSV and get rid of the error (which happens because Null values cannot be formatted as a Number), try adding a default for empty/null values and formatting to String only when the value exists, like below:
%dw 2.0
output application/json
---
payload groupBy ($.number) pluck $ map ( () -> {
"number": $[0].number,
"date": $[0].date,
"items": $ map {
"upc": $.upc,
"price": ($.price as String {format: "##,###.00"}) default $.price,
"quantity": $.quantity
}
})
Note: for price, you don't need to convert to Number at all if you ultimately want the output as a formatted string.
I am trying to turn a string into an array of objects, splitting on spaces. Please help me with how I can form it.
Input
field: YYC:16:26 YVR:16:03 YEG:13:43
Output Expected
"details" : [
{
"field" : "YYC",
"time" : "16:26"
},
{
"field" : "YVR",
"Time" : "16:03"
},
{
"field" : "YEG",
"Time" : "13:43"
}
]
A slight twist on what Karthik posted:
%dw 2.0
output application/json
import * from dw::core::Arrays
var test= "YYC:16:26 YVR:16:03 YEG:13:43" splitBy " "
---
details: test map
{
"field": ($ splitBy ":")[0],
"Time": drop(($ splitBy ":"), 1) joinBy ":"
}
You need to split first by space and then break the remaining string, as below:
%dw 2.0
output application/json
var test= "YYC:16:26 YVR:16:03 YEG:13:43" splitBy " "
---
details: test map ((item, index) ->
{
"field": item[0 to 2],
"Time": item [4 to -1]
})
Another approach, similar to Anurag's solution:
DW
%dw 2.0
output application/json
var test= "YYC:16:26 YVR:16:03 YEG:13:43" splitBy " "
---
details: test map ((item, index) ->
{
"field": (item splitBy ":")[0],
"Time": (item splitBy ":")[1 to -1] joinBy ":"
})
Output
{
"details": [
{
"field": "YYC",
"Time": "16:26"
},
{
"field": "YVR",
"Time": "16:03"
},
{
"field": "YEG",
"Time": "13:43"
}
]
}
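Yet another angle, not from the answers above: since each token has the shape CODE:HH:MM, you could also use scan with a regular expression to capture both parts in one pass (a sketch; the pattern assumes a word code followed by a colon-separated time):

```dataweave
%dw 2.0
output application/json
var input = "YYC:16:26 YVR:16:03 YEG:13:43"
---
// scan returns one [fullMatch, group1, group2] array per match
details: (input scan /(\w+):(\d+:\d+)/) map ((m, index) -> {
    "field": m[1],
    "Time": m[2]
})
```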
I am reading an Excel file (.xlsx) into a JSON array and creating a map from it, because I want to apply validations to each column individually. I am able to access it using the column name, like so:
Excel file is :
column A, column B
value of Column A, value of column B
I am accessing it like this:
payload map ((item, index) -> {
"Column Name A": item."Column Name A",
"Column Name B": item."Column Name B"
})
Where Column Name A and Column Name B are the Excel column headers.
What I want to do is create the same map but using the column index, like:
payload map ((item, index) -> {
item[0].key: item[0],
item[1].key: item[1]
})
So that I do not have to hard code the excel header name and I can rely on the index of the excel columns.
I have tried using pluck $$ to create a map of keys, but I cannot create a map of key-value pairs; I am not able to use item[0] as a key in a map.
How can I achieve above without using excel column header name?
Expected output should be like this :
{
"Column A " : "value of Column A",
"Column B" : "value of Column B",
"Errors" : "Column A is not valid"
}
Assuming that you'd like to validate each payload item loaded from an Excel file, you could use the following DataWeave expression:
%dw 2.0
output application/json
fun validate(col, val) =
if (isEmpty(val)) {"error": col ++ ": value is null or empty"}
else {}
fun validateRow(row) =
"Errors":
flatten([] << ((row mapObject ((value, key, index) -> ((validate((key), value))))).error default []))
---
payload map ((item, index) -> item ++ validateRow(item))
Using the following input payload:
[
{"col1": "val1.1", "col2": "val1.2", "col3": "val1.3"},
{"col1": "val2.1", "col2": "val2.2", "col3": null}
]
would result in:
[
{
"col1": "val1.1",
"col2": "val1.2",
"col3": "val1.3",
"Errors": [
]
},
{
"col1": "val2.1",
"col2": "val2.2",
"col3": null,
"Errors": [
"col3: value is null or empty"
]
}
]
The expression will result in an output slightly different from the one you're expecting, but this version gives you an array of error messages that can be easier to manipulate later on in your flow.
One thing to keep in mind is the possibility to have more than one error message per column. If that's the case, then the DataWeave expression would need some adjustments.
Try just using the index. It should work just fine.
%dw 2.0
output application/json
---
({ "someKey": "Val1", "lksajdfkl": "Val2" })[1]
results to
"Val2"
And if you want to use a variable as a key, you have to wrap it in parentheses.
E.g., to transform { "key": "SomeOtherKey", "val": 123 } to { "SomeOtherKey": 123 }, you could do (payload.key): payload.val
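Combining both points into one runnable sketch (index-based value access plus a parenthesized dynamic key; the variable names here are made up for illustration):

```dataweave
%dw 2.0
output application/json
var row = { "key": "SomeOtherKey", "val": 123 }
---
{
    // dynamic key: parentheses make DataWeave evaluate row.key as the key
    (row.key): row.val,
    // index-based access: selects the value of the second key-value pair
    second: ({ "someKey": "Val1", "lksajdfkl": "Val2" })[1]
}
```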
Try this:
%dw 2.0
output application/json
var rules = {
"0": {
key: "Column A",
val: (val) -> !isEmpty(val),
},
"1": {
key: "Column B",
val: (val) -> val ~= "value of Column B"
}
}
fun validate(v, k, i) =
[
("Invalid column name: '$(k)' should be '$(rules[i].key)'") if (rules[i]? and rules[i].key? and k != rules[i].key),
("Invalid value for $(rules[i].key): '$(v default "null")'") if (rules[i]? and rules[i].val? and (!(rules[i].val(v))))
]
fun validate(obj) =
obj pluck { v: $, k: $$ as String, i: $$$ as String } reduce ((kvp,acc={}) ->
do {
var validation = validate(kvp.v, kvp.k, kvp.i)
---
{
(acc - "Errors"),
(kvp.k): kvp.v,
("Errors": (acc.Errors default []) ++
(if (sizeOf(validation) > 0) validation else [])
) if(acc.Errors? or sizeOf(validation) > 0)
}
}
)
---
payload map validate($)
Output:
[
{
"Column A": "value of Column A",
"Column B": "value of Column B"
},
{
"Column A": "",
"Column B": "value of Column B",
"Errors": [
"Invalid value for Column A: ''"
]
},
{
"Column A": "value of Column A",
"Column B": "value of Column C",
"Errors": [
"Invalid value for Column B: 'value of Column C'"
]
},
{
"Column A": null,
"Column C": "value of Column D",
"Errors": [
"Invalid value for Column A: 'null'",
"Invalid column name: 'Column C' should be 'Column B'",
"Invalid value for Column B: 'value of Column D'"
]
}
]
I have a CSV input like the one below. Notice that the tag Edible is not present for the second set of values. Also notice that the data for one object is spread across columns as well as three rows:
Key|Value|ConcatenatedString
Name|Fruit|"apple,orange,pineapple"
Status|Good|"apple,orange,pineapple"
Edible|Yes|"apple,orange,pineapple"
Name|Furniture|"chair,table,bed"
Status|Good|"chair,table,bed"
I need it in the below json format:
{
Name:"Fruit",
Status:"Good",
Edible:"Yes",
ConcatenatedString:"apple,orange,pineapple"
},
{
Name:"Furniture",
Status:"Good",
Edible:null,
ConcatenatedString:"chair,table,bed"
}
I was using the code below when all the tags were present for all objects. But now that some tags may not come at all, I am not sure how to handle this, as I was using a position-based approach:
%dw 2.0
input payload application/csv separator='|'
output application/json
---
payload map
{
Name:payload[(($$)*4)+0].Value,
Status:payload[(($$)*4)+1].Value,
Edible:payload[(($$)*4)+2].Value,
ConcatenatedString:payload[(($$)*4)+0]."ConcatenatedString"
}
filter ($.Name!= null)
Thanks in advance,
Anoop
Here is my answer:
%dw 2.0
input payload application/csv separator="|"
output application/json
---
payload
groupBy ((item, index) -> item.ConcatenatedString)
pluck ((value, key, index) -> {
Name: (value filter ((item, index) -> item.Key == "Name")).Value[0],
Status: (value filter ((item, index) -> item.Key == "Status")).Value[0],
Edible: (value filter ((item, index) -> item.Key == "Edible")).Value[0],
ConcatenatedString: key
})
Basically, first you need to group by the criteria you want to group by; in your case, ConcatenatedString. This returns:
{
"chair,table,bed": [
{
"Key": "Name",
"Value": "Furniture",
"ConcatenatedString": "chair,table,bed"
},
{
"Key": "Status",
"Value": "Good",
"ConcatenatedString": "chair,table,bed"
}
],
"apple,orange,pineapple": [
{
"Key": "Name",
"Value": "Fruit",
"ConcatenatedString": "apple,orange,pineapple"
},
{
"Key": "Status",
"Value": "Good",
"ConcatenatedString": "apple,orange,pineapple"
},
{
"Key": "Edible",
"Value": "Yes",
"ConcatenatedString": "apple,orange,pineapple"
}
]
}
And then you iterate with pluck over every key-value pair and filter the elements you want to map.
I get a payload as input in the Transform Message component. It is an array of objects:
[
{
"enterprise": "Samsung",
"description": "This is the Samsung enterprise"
},
{
"enterprise": "Apple",
"description": "This is the Apple enterprise "
}
]
I have a variable that replaces the description and the output that I want is:
[
{
"enterprise": "Samsung",
"description": "This is the var value"
},
{
"enterprise": "Apple",
"description": "This is the var value"
}
]
I tried to use:
%dw 2.0
output application/java
---
payload map ((item, index) -> {
description: vars.descriptionValue
})
But it returns:
[
{
"description": "This is the var value"
},
{
"description": "This is the var value"
}
]
Is it possible to replace only the description value while keeping the rest of the fields, avoiding having to map the other fields explicitly?
There are many ways to do this.
One way to do it is to first remove the original description field and then add the new one:
%dw 2.0
output application/java
---
payload map ((item, index) ->
item - "description" ++ {description: vars.descriptionValue}
)
Otherwise, you can use mapObject to iterate over the key-value pairs of each object and, with pattern matching, add a case for when the key is description.
I prefer this second way when I want to do many replacements:
%dw 2.0
output application/java
fun process(obj: Object) = obj mapObject ((value, key) -> {
(key): key match {
case "description" -> vars.descriptionValue
else -> value
}
})
---
payload map ((item, index) ->
process(item)
)
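One more option, not covered in the answer above (an assumption on my part: this requires the update operator's case selector syntax, available in DataWeave 2.3 / Mule 4.3 and later): update replaces a single field in place while leaving all other fields untouched.

```dataweave
%dw 2.0
output application/java
---
payload map ((item, index) ->
    // update rewrites only the selected field; everything else passes through
    item update {
        case .description -> vars.descriptionValue
    }
)
```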