How to create a map of Excel file key-value pairs without using column names in DataWeave? - mule

I am reading an Excel file (.xlsx) into a JSON array and turning it into a map because I want to apply validations to each column individually. I am able to access the values using the column names.
The Excel file is:
column A, column B
value of Column A, value of column B
I am accessing it like this:
payload map (item, index) -> {
    "Column Name A": item."Column Name A",
    "Column Name B": item."Column Name B"
}
where "Column Name A" and "Column Name B" are the Excel column headers.
What I want to do is create the same map but using the column index, like this:
payload map (item, index) -> {
    item[0].key: item[0],
    item[1].key: item[1]
}
That way I do not have to hard-code the Excel header names and can rely on the index of the Excel columns.
I have tried using pluck $$ to create a list of the keys, but I cannot build a map of key-value pairs from it; I am not able to use item[0] as a key in a map.
How can I achieve the above without using the Excel column header names?
The expected output should be like this:
{
"Column A " : "value of Column A",
"Column B" : "value of Column B",
"Errors" : "Column A is not valid"
}

Assuming that you'd like to validate each payload item loaded from an Excel file, you could use the following DataWeave expression:
%dw 2.0
output application/json
fun validate(col, val) =
    if (isEmpty(val)) {"error": col ++ ": value is null or empty"}
    else {}
fun validateRow(row) =
    "Errors":
        flatten([] << ((row mapObject ((value, key, index) -> ((validate((key), value))))).error default []))
---
payload map (item, index) -> item ++ validateRow(item)
Using the following input payload:
[
{"col1": "val1.1", "col2": "val1.2", "col3": "val1.3"},
{"col1": "val2.1", "col2": "val2.2", "col3": null}
]
would result in:
[
{
"col1": "val1.1",
"col2": "val1.2",
"col3": "val1.3",
"Errors": [
]
},
{
"col1": "val2.1",
"col2": "val2.2",
"col3": null,
"Errors": [
"col3: value is null or empty"
]
}
]
The expression results in an output slightly different from the one you're expecting, but this version gives you an array of error messages, which can be easier to manipulate later on in your flow.
One thing to keep in mind is the possibility of having more than one error message per column. If that's the case, the DataWeave expression would need some adjustments.
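For example, an untested sketch of such an adjustment could make validate return a list of messages per column (the 255-character rule below is purely hypothetical) and collect them all with the multi-value selector .*error:
%dw 2.0
output application/json
// Untested sketch: each validation now returns a list of messages, so a single
// column can fail more than one check at a time. The length check is hypothetical.
fun validate(col, val) =
    { "error":
        (if (isEmpty(val)) [col ++ ": value is null or empty"] else []) ++
        (if (sizeOf((val default "") as String) > 255) [col ++ ": value is longer than 255 characters"] else [])
    }
fun validateRow(row) =
    { "Errors": flatten((row mapObject ((value, key, index) -> validate(key as String, value))).*error default []) }
---
payload map (item, index) -> item ++ validateRow(item)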

Try just using the index. It should work just fine.
%dw 2.0
output application/json
---
({ "someKey": "Val1", "lksajdfkl": "Val2" })[1]
results in
"Val2"
And if you want to use a variable as a key, you have to wrap it in parentheses.
E.g., to transform { "key": "SomeOtherKey", "val": 123 } into { "SomeOtherKey": 123 } you could do (payload.key): payload.val
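Putting both ideas together, a minimal untested sketch of what the question asks for (positional access plus dynamic keys) might look like the following; the expectedHeaders variable and the single validation rule are only illustrative assumptions:
%dw 2.0
output application/json
// Untested sketch: read each row positionally instead of by header name.
// expectedHeaders and the "Column A is not valid" rule are assumptions, not from the question.
var expectedHeaders = ["Column A", "Column B"]
---
payload map ((item, index) -> {
    (expectedHeaders[0]): item[0],
    (expectedHeaders[1]): item[1],
    ("Errors": "Column A is not valid") if (isEmpty(item[0]))
})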

Try this:
%dw 2.0
output application/json
var rules = {
    "0": {
        key: "Column A",
        val: (val) -> !isEmpty(val),
    },
    "1": {
        key: "Column B",
        val: (val) -> val ~= "value of Column B"
    }
}
fun validate(v, k, i) =
    [
        ("Invalid column name: '$(k)' should be '$(rules[i].key)'") if (rules[i]? and rules[i].key? and k != rules[i].key),
        ("Invalid value for $(rules[i].key): '$(v default "null")'") if (rules[i]? and rules[i].val? and (!(rules[i].val(v))))
    ]
fun validate(obj) =
    obj pluck { v: $, k: $$ as String, i: $$$ as String } reduce ((kvp, acc={}) ->
        do {
            var validation = validate(kvp.v, kvp.k, kvp.i)
            ---
            {
                (acc - "Errors"),
                (kvp.k): kvp.v,
                ("Errors": (acc.Errors default []) ++
                    (if (sizeOf(validation) > 0) validation else [])
                ) if (acc.Errors? or sizeOf(validation) > 0)
            }
        }
    )
---
payload map validate($)
Output:
[
{
"Column A": "value of Column A",
"Column B": "value of Column B"
},
{
"Column A": "",
"Column B": "value of Column B",
"Errors": [
"Invalid value for Column A: ''"
]
},
{
"Column A": "value of Column A",
"Column B": "value of Column C",
"Errors": [
"Invalid value for Column B: 'value of Column C'"
]
},
{
"Column A": null,
"Column C": "value of Column D",
"Errors": [
"Invalid value for Column A: 'null'",
"Invalid column name: 'Column C' should be 'Column B'",
"Invalid value for Column B: 'value of Column D'"
]
}
]


How to use try function with map in dataweave 2.0

Hi everybody, I hope you are well. I have a question about how to use try inside map in DataWeave. I receive a CSV file with multiple rows, and the rows are grouped by order number. I use map to transform the data to JSON (joining all the rows that share the same order) using a couple of columns from the CSV file. If any column is empty or null, the map fails and breaks the whole file. How can I use the try function in DataWeave so that, if any group of orders fails, only that order is captured and put in another part of the JSON, and the loop continues with the next order without breaking?
Part of the CSV File - Demo:
number,date,upc,quantity,price
1234556,2022-08-04,4015,1,
1234556,2022-08-04,4019,1,2.00
1234556,2022-08-04,4016,1,3.00
1234557,2022-08-04,4015,1,3.00
Dataweave:
%dw 2.0
output application/json
---
payload groupBy ($.number) pluck $ map ( () -> {
    "number": $[0].number,
    "date": $[0].date,
    "items": $ map {
        "upc": $.upc,
        "price": $.price as Number {format: "##,###.##"} as String {format: "##,###.00"},
        "quantity": $.quantity
    }
})
Error Message:
Unable to coerce `` as Number using `##,###.##` as format.
NOTE: if I put data in the "price" position then the issue is solved for that first row, but I need to use the try function, or whatever you recommend. I can't validate every element in the CSV because this is a demo and the complete file has many columns... If you have any other comments to improve my code, I'd appreciate them.
Expected Result: (I don't know if this is possible)
[
{
"data": [
{
"number":"1234557",
"date":"2022-08-04",
"items":[
{
"upc":"4015",
"price":"3.00",
"quantity":"1"
}
]
}
]
},
{
"Error":[
{
"number":"1234556",
"message":"Unable to coerce `` as Number using `##,###.##` as format."
}
]
}
]
best regards!!
Hi, the closest I got to what you asked for was:
%dw 2.0
output application/json
import * from dw::Runtime
fun safeMap<T, R>(items: Array<T>, callback: (item: T) -> R): Array<R | {message: String}> =
    items map ((item) -> try(() -> callback(item)) match {
        case is {success: false} -> {message: $.error.message as String}
        case is {success: true, result: R} -> $.result
    })
---
payload
    groupBy ($.number)
    pluck $
    safeMap ((item) -> {
        "number": item[0].number,
        "date": item[0].date,
        "items": item safeMap {
            "upc": $.upc,
            "price": $.price as Number {format: "##,###.##"} as String {format: "##,###.00"},
            "quantity": $.quantity
        }
    })
This uses a combination of map and the try function.
And it outputs:
[
{
"number": "1234556",
"date": "2022-08-04",
"items": [
{
"message": "Unable to coerce `` as Number using `##,###.##` as format."
},
{
"upc": "4019",
"price": "2.00",
"quantity": "1"
},
{
"upc": "4016",
"price": "3.00",
"quantity": "1"
}
]
},
{
"number": "1234557",
"date": "2022-08-04",
"items": [
{
"upc": "4015",
"price": "3.00",
"quantity": "1"
}
]
}
]
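If you also want the exact shape of your expected result (successful orders under "data" and failed ones under "Error" with the order number), an untested sketch along the same lines, again relying on try() from dw::Runtime, could be:
%dw 2.0
output application/json
import * from dw::Runtime
// Untested sketch: attempt each order as a whole, then split the attempts into
// a "data" bucket and an "Error" bucket. The names "attempts" and "attempt" are illustrative.
var attempts = payload groupBy ($.number) pluck $ map ((order) -> {
    number: order[0].number,
    attempt: try(() -> {
        "number": order[0].number,
        "date": order[0].date,
        "items": order map {
            "upc": $.upc,
            "price": $.price as Number {format: "##,###.##"} as String {format: "##,###.00"},
            "quantity": $.quantity
        }
    })
})
---
[
    { "data": attempts filter ($.attempt.success) map $.attempt.result },
    { "Error": attempts filter ($.attempt.success == false) map {
        "number": $.number,
        "message": $.attempt.error.message as String
    } }
]
Because try() wraps the whole order, a single bad row sends the complete order to the "Error" bucket, which seems to be what the question describes.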
If you are looking to resolve the value of price when it is null/empty in the input CSV and get rid of the error (which happens because null values cannot be formatted as a Number), try adding a default for empty/null values and formatting to String only when the value exists, like below:
%dw 2.0
output application/json
---
payload groupBy ($.number) pluck $ map ( () -> {
    "number": $[0].number,
    "date": $[0].date,
    "items": $ map {
        "upc": $.upc,
        "price": ($.price as String {format: "##,###.00"}) default $.price,
        "quantity": $.quantity
    }
})
Note:
For price, you don't need to convert to Number at all if you ultimately want your output as a formatted string.

Need an optimized way to get the required output

Is there an optimized way to trim the leading and trailing blank spaces from the field values in the data array below? I have used three approaches, but need a more optimized way.
Note: there might be more than 20 objects in the data array and more than 50 fields in each object. The payload below is just a sample; field values can be digits, strings, or dates of any size.
{
"School": "XYZ High school",
"data": [
{
"student Name": "XYZ ",
"dateofAdmission": "2021-06-09 ",
"percentage": "89 "
},
{
"student Name": "ABC ",
"dateofAdmission": "2021-08-04 ",
"percentage": "90 "
},
{
"student Name": "PQR ",
"dateofAdmission": "2021-10-01 ",
"percentage": "88 "
}
]
}
Required output:
{
"School": "XYZ High school",
"data": [
{
"student Name": "XYZ",
"dateofAdmission": "2021-06-09",
"percentage": "89"
},
{
"student Name": "ABC",
"dateofAdmission": "2021-08-04",
"percentage": "90"
},
{
"student Name": "PQR",
"dateofAdmission": "2021-10-01",
"percentage": "88"
}
]
}
Three approaches I've used:
First approach:
%dw 2.0
output application/json
// variable to remove beginning and end blank spaces from values in key:value pairs for data
var payload1 = payload.data map ((value, key) ->
    value mapObject (($$)) : trim($))
---
// constructed the payload
payload - "data" ++ data: payload1 map ((item, index) -> {
    (item)
})
Second approach:
%dw 2.0
output application/json
---
payload - "data" ++ data: payload.data map ((value, key) ->
    value mapObject (($$)) : trim($)) map ((item, index) -> {
    (item)
})
Third approach:
%dw 2.0
output application/json
---
{
    "Name": payload.School,
    "data": payload.data map ($ mapObject (($$): trim($)))
}
Another solution using the update operator:
%dw 2.0
output application/json
---
payload update {
    case data at .data ->
        data map ($ mapObject ((value, key, index) -> (key): trim(value)))
}
Note that your first two solutions are exactly the same, and both have an unneeded map() at the end that doesn't seem to serve any purpose. The third solution is very similar but uses an incorrect key name for the school name (Name instead of School as in the example). There's nothing particularly wrong with any of the solutions other than those minor issues.
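If some of those fields could already be typed (numbers or dates rather than strings), a small untested variation of the same update approach trims only String values and leaves everything else untouched:
%dw 2.0
output application/json
// Untested variation: only trim values that are actually Strings.
---
payload update {
    case rows at .data -> rows map (
        $ mapObject ((value, key) -> (key): if (value is String) trim(value) else value)
    )
}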

Excel to Json Mapping in Dataweave 2.0

I have an Excel sheet. I want it to be mapped to a JSON profile.
sample excel data
I want to convert it into JSON like this:
[
{
"Scope" : {
"Content owner" : "",
"Language" : "",
"Territory" : ""
},
"Title" : {
"Content ID" : "",
"Billing ID" : "",
"IMDB" : "",
"Name" : "",
"Episode Number" : "",
"Episode Sequence" : "",
"Container Position" : "",
"Run Length" : "",
"Work Type" : "",
"Short Synopsis" : "",
"Long Synopsis" : "",
"Original Language" : "",
"Rating Set1" : "",
"Rating Set2" : "",
"Rating Set3" : "",
"Rating Set4" : "",
"Rating Set5" : "".....
Like this... the first row would be the main object, the next row would be the second object, and then the actual data is to be mapped. I tried, but I am unable to do it dynamically. I used some static index values but wasn't satisfied with the outcome.
Any help is appreciated
Thank you!
DataWeave won't be able to solve this 100% dynamically. You may try the following expression:
%dw 2.0
output application/json
// endColIdx: use -1 to include all remaining fields
fun addFields(item, startColIdx, endColIdx) =
    (item pluck (value, key, index) -> (key): value)
        filter ($$ >= startColIdx and (endColIdx == -1 or $$ <= endColIdx))
        reduce ($$ ++ $)
---
payload map (item, index) -> {
    'Scope': addFields(item, 0, 2),
    'Title': addFields(item, 3, -1)
}
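To make the behavior concrete, here is a small self-contained check of this first version against a hypothetical five-column row (the field names are placeholders, not taken from the actual sheet):
%dw 2.0
output application/json
fun addFields(item, startColIdx, endColIdx) =
    (item pluck (value, key, index) -> (key): value)
        filter ($$ >= startColIdx and (endColIdx == -1 or $$ <= endColIdx))
        reduce ($$ ++ $)
// hypothetical row used only to illustrate the slicing
var row = {"Content owner": "A", "Language": "en", "Territory": "US", "Content ID": "1", "Name": "Some title"}
---
{
    // columns 0 to 2
    'Scope': addFields(row, 0, 2),
    // everything from column 3 onwards
    'Title': addFields(row, 3, -1)
}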
You can use another version of the above expression: instead of taking a start and end column index, it takes a start column index and a column count (get 3 columns starting from index 0 instead of getting the columns from index 0 to index 2):
%dw 2.0
output application/json
// colCnt: use -1 to include all remaining columns
fun addFields(item, startColIdx, colCnt) =
    (item pluck (value, key, index) -> (key): value)
        filter ($$ >= startColIdx and (colCnt == -1 or $$ < startColIdx + colCnt))
        reduce ($$ ++ $)
---
payload map (item, index) -> {
    'Scope': addFields(item, 0, 3),
    'Title': addFields(item, 3, -1)
}
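If you prefer to keep the section boundaries out of the body entirely, an untested sketch could move them into a single configuration object; the sections variable below is an assumption, with each entry holding [start column index, column count] as in the second version:
%dw 2.0
output application/json
// Untested sketch: drive the grouping from one configuration object.
var sections = { 'Scope': [0, 3], 'Title': [3, -1] }
fun addFields(item, startColIdx, colCnt) =
    (item pluck (value, key, index) -> (key): value)
        filter ($$ >= startColIdx and (colCnt == -1 or $$ < startColIdx + colCnt))
        reduce ($$ ++ $)
---
payload map (item, index) ->
    sections mapObject ((range, sectionName) -> (sectionName): addFields(item, range[0], range[1]))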

Lookup Country Code from the phone Numbers using Dataweave Mule

My input request JSON looks like this:
{
"phoneNumbers": [{
"phoneNumberType": "mobile",
"phoneNumber": "54112724555"
},
{
"phoneNumberType": "mobile",
"phoneNumber": "16298765432"
}
]
}
I want to generate output JSON like this:
{
"phoneNumbers": [{
"phoneNumberType": "mobile",
"phoneNumber": "54112724555",
"CountryCode": "ARG"
},
{
"phoneNumberType": "mobile",
"phoneNumber": "16298765432",
"CountryCode": "US"
}
]
}
I derive the CountryCode from the phoneNumber using the calling code to country code mapping given in a CSV file.
CALLING_CODE,COUNTRY_CODE
1,US
7,RU
54,AR
20,EG
32,BE
33,FR
505,NI
506,CR
1876,JM
1905,CA
1939,PR
262262,RE
262269,YT
.,.
.,.
I have used the File connector to read the CSV file and stored it in vars.CallingCodeMapping.
I have to look the phone number up against the calling codes: first try the first digit of the phone number and, if it matches, return the country code; then the first two digits, and so on up to the first six digits. If nothing matches, return NA.
Given this input as the payload:
[
"54112724555",
"16298765432"
]
And this CSV in a var called country_codes:
%dw 2.0
output application/json
var codeByCode = vars.country_codes groupBy ((item, index) -> item.CALLING_CODE)
/**
 * Returns the CSV row(s) matching the longest calling-code prefix, or null if nothing matches
 */
fun lookupCountryCode(phoneNumber: String) =
    // map each prefix (first 1 to 7 characters) to its group of CSV rows, or null
    (0 to 6 map ((index) -> codeByCode[phoneNumber[0 to index]])
    // filter out the nulls and take the first match; this returns null if the array is empty
    filter ((item, index) -> item != null))[0]
---
payload map ((item, index) -> lookupCountryCode(item))
It outputs:
[
[
{
"CALLING_CODE": "54",
"COUNTRY_CODE": "ARG"
}
],
[
{
"CALLING_CODE": "1",
"COUNTRY_CODE": "US"
}
]
]
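If you need the output shaped exactly like the question (the original phoneNumbers objects plus a CountryCode field, and "NA" when nothing matches), an untested variation of the same prefix lookup could be the following; it assumes the original JSON request is the payload and that the CSV rows are stored in vars.country_codes:
%dw 2.0
output application/json
// Untested sketch: return only the COUNTRY_CODE string and merge it back into the request.
var codeByCode = vars.country_codes groupBy ((item, index) -> item.CALLING_CODE)
fun lookupCountryCode(phoneNumber: String) = do {
    var matches = (0 to 6)
        map ((index) -> codeByCode[phoneNumber[0 to index]])
        filter ((item, index) -> item != null)
    ---
    // take the first matching group and the COUNTRY_CODE of its first row; null if there is no match
    matches[0].COUNTRY_CODE[0]
}
---
{
    phoneNumbers: payload.phoneNumbers map ((item, index) ->
        item ++ { CountryCode: lookupCountryCode(item.phoneNumber) default "NA" }
    )
}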

Lookup list of Maps variable in data weave script

I have a list of maps (listOfMapsObject) like below
[
{
"Id" : "1234",
"Value" : "Text1"
},
{
"Id" : "1235",
"Value" : "Text2"
}
]
I would like to access the "Value" field for a given Id in a DataWeave script.
For example: For Id = 1234, Text1 should be returned.
%dw 1.0
%output application/json
%var listOfMapsObject = flowVars.listOfMaps
---
payload map {
    "key": $.key,
    "data": lookup Value field in listOfMapsObject by given key
}
The approach suggested by 'sulthony h' is fine, but it will end up causing a performance issue if you have a large number of records in the payload and in listOfMapsObject. Because filter is used, for each record of the payload the script will loop over all the entries in flowVars.listOfMaps.
The following will work fine and builds the key-value map only once.
%dw 1.0
%output application/json
%var dataLookup = {(flowVars.listOfMaps map {
    ($.Id): $.Value
})}
---
payload map {
    key: $.key,
    data: dataLookup[$.key]
}
Output:
[
{
"key": "1234",
"data": "Text1"
},
{
"key": "1235",
"data": "Text2"
}
]
where the payload is:
[
{
"key" : "1234"
},
{
"key" : "1235"
}
]
and flowVars.listOfMaps is:
[
{
"Id" : "1234",
"Value" : "Text1"
},
{
"Id" : "1235",
"Value" : "Text2"
}
]
Hope this helps.
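As a side note, the {( ... )} wrapper around the map result is what merges the array of single-field objects into one flat lookup object. A minimal self-contained illustration with hard-coded sample data (instead of flowVars.listOfMaps) would be:
%dw 1.0
%output application/json
%var listOfMaps = [{"Id": "1234", "Value": "Text1"}, {"Id": "1235", "Value": "Text2"}]
%var dataLookup = {(listOfMaps map {($.Id): $.Value})}
---
dataLookup
which evaluates to { "1234": "Text1", "1235": "Text2" }, so each record afterwards needs only a single key selection instead of filtering the whole list.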
I created my own object, slightly similar to yours, and successfully accessed the "Value" field with the following DataWeave expression:
%dw 1.0
%output application/json
%var listOfMapsObject = flowVars.listOfMaps
---
payload map using (data = $) {
    "key": data.key,
    "data": (listOfMapsObject filter $.id == data.key).value reduce ($$ ++ $)
}
You can modify it for your own object, e.g. replace "id" with "Id". Test and evaluate the result by using filter, flatten, reduce, etc.
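For reference, an untested adaptation of that expression to the exact field names from the question ("Id"/"Value"), taking only the first match instead of concatenating all matches, could look like this:
%dw 1.0
%output application/json
%var listOfMapsObject = flowVars.listOfMaps
---
payload map using (data = $) {
    "key": data.key,
    "data": (listOfMapsObject filter ($.Id == data.key)).Value[0]
}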