Mule 4: Batch Processing: Error in Accept Expression in Batch Step

I am using batch processing with more than one batch step.
The output of one batch step is:
[
{
"CustomerId": "00",
"TotalPurchase": 0
},
{
"CustomerId": "11",
"TotalPurchase": 1
},
{
"CustomerId": "22",
"TotalPurchase": 8
},
{
"CustomerId": "33",
"TotalPurchase": 27
},
{
"CustomerId": "44",
"TotalPurchase": 64
},
{
"CustomerId": "55",
"TotalPurchase": 125
},
{
"CustomerId": "66",
"TotalPurchase": 216
},
{
"CustomerId": "77",
"TotalPurchase": 343
},
{
"CustomerId": "88",
"TotalPurchase": 512
},
{
"CustomerId": "99",
"TotalPurchase": 729
},
{
"CustomerId": "1010",
"TotalPurchase": 1000
}
]
In the next batch step, I am using the Accept Expression field with the value:
#[payload.TotalPurchase > 100]
But I am getting the error:
Types `Array` and `Number` can not be compared.
payload.TotalPurchase > 100
^^^^^^^^^^^^^^^^^^^^^
Any ideas why this is happening?

Maybe you want to process each element of the array as a record, but for that input payload the value of #[payload.TotalPurchase] is:
[
0,
1,
8,
27,
64,
125,
216,
343,
512,
729,
1000
]
That's because the DataWeave selector returns an array with the TotalPurchase value of every member of the array, so it is not possible to compare that computed array to a single number.
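For comparison, the same condition evaluated per element works fine in a plain DataWeave filter, because there $ is a single object. A minimal sketch, assuming the array above is the payload:
%dw 2.0
output application/json
---
// $ is one { CustomerId, TotalPurchase } object per iteration,
// so TotalPurchase > 100 compares a Number to a Number
payload filter ($.TotalPurchase > 100)
Inside a batch step the accept expression is likewise evaluated once per record, so the original expression should work as long as each record's payload is a single object rather than the whole array.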

Related

JSON SQL column in azure data factory

I have a JSON-typed column in a SQL table, as in the example below. I want the JSON converted into separate columns, with drugs as the table name and the other attributes as column names. How can I do this with ADF or any other means? The JSON below is a single column in a table called report that I need to convert into separate columns.
{
"drugs": {
"Codeine": {
"bin": "Y",
"name": "Codeine",
"icons": [
93,
103
],
"drug_id": 36,
"pathway": {
"code": "prodrug",
"text": "is **inactive**, its metabolites are active."
},
"targets": [],
"rxnorm_id": "2670",
"priclasses": [
"Analgesic/Anesthesiology"
],
"references": [
1,
16,
17,
100
],
"subclasses": [
"Analgesic agent",
"Antitussive agent",
"Opioid agonist",
"Phenanthrene "
],
"metabolizers": [
"CYP2D6"
],
"phenotype_ids": {
"metabolic": "5"
},
"relevant_genes": [
"CYP2D6"
],
"dosing_guidelines": [
{
"text": "Reduced morphine formation. Use label recommended age- or weight-specific dosing. If no response, consider alternative analgesics such as morphine or a non-opioid.",
"source": "CPIC",
"guidelines_id": 1
},
{
"text": "Analgesia: select alternative drug (e.g., acetaminophen, NSAID, morphine-not tramadol or oxycodone) or be alert to symptoms of insufficient pain relief.",
"source": "DPWG",
"guidelines_id": 362
}
],
"drug_report_notes": [
{
"text": "Predicted codeine metabolism is reduced.",
"icons_id": 58,
"sort_key": 58,
"references_id": null
},
{
"text": "Genotype suggests a possible decrease in exposure to the active metabolite(s) of codeine.",
"icons_id": 93,
"sort_key": 56,
"references_id": null
},
{
"text": "Professional guidelines exist for the use of codeine in patients with this genotype and/or phenotype.",
"icons_id": 103,
"sort_key": 50,
"references_id": null
}
]
}
}
}
Since this JSON is already in a SQL column, you don't need ADF to break it into parts. You can use the JSON functions in SQL Server to do that.
An example for the first few columns:
declare @json varchar(max) = '{
"drugs": {
"Codeine": {
"bin": "Y",
"name": "Codeine",
"icons": [
93,
103
],
"drug_id": 36,
"pathway": {
"code": "prodrug",
"text": "is **inactive**, its metabolites are active."
},
"targets": [],
"rxnorm_id": "2670",
"priclasses": [
"Analgesic/Anesthesiology"
],
"references": [
1,
16,
17,
100
],
"subclasses": [
"Analgesic agent",
"Antitussive agent",
"Opioid agonist",
"Phenanthrene "
],
"metabolizers": [
"CYP2D6"
],
"phenotype_ids": {
"metabolic": "5"
},
"relevant_genes": [
"CYP2D6"
],
"dosing_guidelines": [
{
"text": "Reduced morphine formation. Use label recommended age- or weight-specific dosing. If no response, consider alternative analgesics such as morphine or a non-opioid.",
"source": "CPIC",
"guidelines_id": 1
},
{
"text": "Analgesia: select alternative drug (e.g., acetaminophen, NSAID, morphine-not tramadol or oxycodone) or be alert to symptoms of insufficient pain relief.",
"source": "DPWG",
"guidelines_id": 362
}
],
"drug_report_notes": [
{
"text": "Predicted codeine metabolism is reduced.",
"icons_id": 58,
"sort_key": 58,
"references_id": null
},
{
"text": "Genotype suggests a possible decrease in exposure to the active metabolite(s) of codeine.",
"icons_id": 93,
"sort_key": 56,
"references_id": null
},
{
"text": "Professional guidelines exist for the use of codeine in patients with this genotype and/or phenotype.",
"icons_id": 103,
"sort_key": 50,
"references_id": null
}
]
}
}
}'
select JSON_VALUE(JSON_QUERY(@json,'$.drugs.Codeine'),'$.bin') as bin,
JSON_VALUE(JSON_QUERY(@json,'$.drugs.Codeine'),'$.name') as name,
JSON_VALUE(JSON_QUERY(@json,'$.drugs.Codeine'),'$.drug_id') as drug_id,
JSON_VALUE(JSON_QUERY(@json,'$.drugs.Codeine'),'$.icons[0]') as icon_1
You need to decide how to handle arrays, such as icons, where there are multiple values inside the same element.
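If you need those array elements as rows instead, one option is OPENJSON (available since SQL Server 2016); a minimal sketch for the icons array:
-- Expand the icons array into one row per element
select icon.[value] as icon
from openjson(@json, '$.drugs.Codeine.icons') as icon
You could join such a result back to the scalar columns with CROSS APPLY when reading from the report table.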
References:
JSON_QUERY function
JSON_VALUE function

Kotlin - Merge two data classes

Data class
data class A(
var data: List<Data>
) {
data class Data(
var key: String,
var count: Long = 0,
var sub: List<Data>? = null
)
}
The values of class A, expressed in JSON:
[
{
"data": [
{
"key": "ALLOGENE THERAPEUTICS",
"count": 47,
"sub": [
{
"key": "N",
"count": 46,
"sub": [
{
"key": "S1",
"count": 1
},
{
"key": "S2",
"count": 13
}
]
},
{
"key": "B+",
"count": 1,
"sub": [
{
"key": "S1",
"count": 2
},
{
"key": "S2",
"count": 1
}
]
}
]
},
{
"key": "CELLECTIS",
"count": 5,
"sub": [
{
"key": "B+",
"count": 2,
"sub": [
{
"key": "S1",
"count": 3
},
{
"key": "S2",
"count": 5
}
]
},
{
"key": "B",
"count": 2,
"sub": [
{
"key": "S1",
"count": 6
},
{
"key": "S2",
"count": 1
}
]
},
{
"key": "N",
"count": 1,
"sub": [
{
"key": "S1",
"count": 8
},
{
"key": "S2",
"count": 4
}
]
}
]
},
{
"key": "PFIZER",
"count": 5,
"sub": [
{
"key": "N",
"count": 5,
"sub": [
{
"key": "S1",
"count": 83
},
{
"key": "S2",
"count": 1
}
]
}
]
}
]
}
]
I would like to combine the elements whose key values are "ALLOGENE THERAPEUTICS" and "CELLECTIS" and replace the key value with "STUB".
When the elements are combined, all the "count" values must be summed.
And elements that exist on only one side must be added.
Therefore, the result should be as follows.
[
{
"data": [
{
"key": "STUB",
"count": 52, // "ALLOGENE THERAPEUTICS"(47) + "CELECTIS"(5) = 52
"sub": [
{
"key": "N",
"count": 47, // 46 + 1
"sub": [
{
"key": "S1",
"count": 9
},
{
"key": "S2",
"count": 17
}
]
},
{
"key": "B+",
"count": 3,
"sub": [
{
"key": "S1",
"count": 5
},
{
"key": "S2",
"count": 6
}
]
},
{
"key": "B",
"count": 5,
"sub": [
{
"key": "S1",
"count": 11
},
{
"key": "S2",
"count": 7
}
]
}
]
},
{
"key": "PFIZER",
"count": 5,
"sub": [
{
"key": "N",
"count": 5,
"sub": [
{
"key": "S1",
"count": 83
},
{
"key": "S2",
"count": 1
}
]
}
]
}
]
}
]
How can I code this neatly in Kotlin?
For reference, the data class values are expressed as JSON above, but the result must be a data class.
This is the progress so far:
Create a function for Data that creates a merged copy:
data class Data(
var key: String,
var count: Long = 0,
var sub: List<Data> = emptyList()
) {
fun mergedWith(other: Data): Data {
return copy(
count = count + other.count,
sub = sub + other.sub
)
}
}
Fold the consolidation list into a single data item and add the parts back together:
val consolidatedKeys = listOf("ALLOGENE THERAPEUTICS", "CELLECTIS")
val (consolidatedValues, nonconsolidatedValues) = a.data.partition { it.key in consolidatedKeys }
val consolidatedData = when {
consolidatedValues.isEmpty() -> emptyList()
else -> listOf(consolidatedValues.fold(A.Data("STUB", 0), A.Data::mergedWith))
}
val result = A(consolidatedData + nonconsolidatedValues)
And combine the sub-elements:
consolidatedData.forEach { x ->
x.sub
.groupBy { group -> group.key }
.map { A.Data(it.key, it.value.sumOf { c -> c.count }) }
}
This is the current situation.
With this approach, elements at depth 2 merge correctly, but elements at depth 3 are not merged.
For example, "N" below "STUB" is combined, but "S1" and "S2" below "N" are not.
The current result therefore comes out like this:
[
{
"data": [
{
"key": "STUB",
"count": 52, <--------- WORK FINE
"sub": [
{
"key": "N",
"count": 47, <--------- WORK FINE
"sub": [] <--------- EMPTY !!
},
{
"key": "B+",
"count": 3, <--------- WORK FINE
"sub": [] <--------- EMPTY !!
},
{
"key": "B",
"count": 5, <--------- WORK FINE
"sub": [] <--------- EMPTY !!
}
]
},
{
"key": "PFIZER",
"count": 5,
"sub": [
{
"key": "N",
"count": 5,
"sub": [
{
"key": "S1",
"count": 83
},
{
"key": "S2",
"count": 1
}
]
}
]
}
]
}
]
How can all the sub-elements be combined and implemented?
First break down your problem. You can create a function for Data that creates a merged copy:
fun mergedWith(other: Data): Data {
return copy(
count = count + other.count,
sub = when {
sub == null && other.sub == null -> null
else -> sub.orEmpty() + other.sub.orEmpty()
}
)
}
I recommend, if possible, that you use a non-nullable List for your sub parameter and use emptyList() when there's nothing in it. This is simpler because there aren't two different ways to represent a lack of items, and you won't have to deal with nullability:
data class Data(
var key: String,
var count: Long = 0,
var sub: List<Data> = emptyList()
) {
fun mergedWith(other: Data): Data {
return copy(
count = count + other.count,
sub = sub + other.sub
)
}
}
Then you can split your list into ones that you want to consolidate vs. the rest. Then fold the consolidation list into a single data item and add them back together.
val consolidatedKeys = listOf("ALLOGENE THERAPEUTICS", "CELLECTIS")
val (consolidatedValues, nonconsolidatedValues) = a.data.partition { it.key in consolidatedKeys }
val consolidatedData = when {
consolidatedValues.isEmpty() -> emptyList()
else -> listOf(consolidatedValues.fold(A.Data("STUB", 0), A.Data::mergedWith))
}
val result = A(consolidatedData + nonconsolidatedValues)
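That version still only concatenates the sub lists, which is why the deeper levels stay unmerged. To merge every level, mergedWith can be made recursive: concatenate the children, group them by key, and merge each group with the same function. A sketch along those lines, assuming the non-nullable sub shown above:
fun mergedWith(other: Data): Data = copy(
    count = count + other.count,
    // Concatenate both child lists, then recursively merge the
    // entries that share a key, so S1/S2 under N combine at any depth
    sub = (sub + other.sub)
        .groupBy { it.key }
        .map { (_, group) -> group.reduce { acc, d -> acc.mergedWith(d) } }
)
With this version the existing fold over consolidatedValues merges all levels, so the separate pass over sub is no longer needed.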

Karate: I get missing property in path $['data'] while using json filter path

I have gone through the Karate documentation and the questions asked on Stack Overflow. There are two JSON arrays under resp.response.data. I am trying to retrieve and assert "bId": 81 in the JSON below from resp.response.data[1], but I get the missing property error when retrieving the id value 81. Could you please tell me if I am missing something?
* def resp =
"""
{
"response": {
"data": [
{
"aDetails": {
"aId": 15,
"aName": "Test",
"dtype": 2
},
"values": [
{
"bId": 45,
"value": "red"
}
],
"mandatory": false,
"ballId": "1231231414"
},
{
"aDetails": {
"aId": 25,
"aName": "Description",
"dtype": 2
},
"values": [
{
"bId": 46,
"value": "automation"
},
{
"bId": 44,
"value": "NESTED ARRAY"
},
{
"bId": 57,
"value": "sfERjuD"
},
{
"bId": 78,
"value": "zgSyPdg"
},
{
"bId": 79,
"value": "NESTED ARRAY"
},
{
"bId": 80,
"value": "NESTED ARRAY"
},
{
"bId": 81,
"value": "NESTED ARRAY"
}
],
"mandatory": true,
"ballId": "1231231414"
}
],
"corId": "wasdf-242-efkn"
}
}
"""
* def expectedbID=81
* def RespValueId = karate.jsonPath(resp, "$.data[1][?(#.bId == '" + expectedbID + "')]")
* match RespValueId[0] == expectedbID
Maybe you are over-complicating things?
* match resp.response.data[1].values contains { bId: 81, value: 'NESTED ARRAY' }
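If you do want karate.jsonPath, note that the path must start at response and that the JsonPath filter marker is @ rather than #. A sketch of the equivalent lookup, with bId compared as a number:
* def expectedbID = 81
* def found = karate.jsonPath(resp, "$.response.data[1].values[?(@.bId == " + expectedbID + ")].bId")
* match found[0] == expectedbID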

How to Condense & Nest a (CSV) Payload in Dataweave 2.0?

I have a CSV payload of TV programs and episodes that I want to transform (nest and condense) into JSON, with the following conditions:
Merge consecutive Program lines (that are not followed by an Episode line) into one Program with the start date of the first instance and the sum of the durations.
Episode lines after a Program line are nested under the Program.
INPUT
Channel|Name|Start|Duration|Type
ACME|Broke Girls|2018-02-01T00:00:00|600|Program
ACME|Broke Girls|2018-02-01T00:10:00|3000|Program
ACME|S03_8|2018-02-01T00:13:05|120|Episode
ACME|S03_9|2018-02-01T00:29:10|120|Episode
ACME|S04_1|2018-02-01T00:44:12|120|Episode
ACME|Lost In Translation|2018-02-01T02:01:00|1800|Program
ACME|Lost In Translation|2018-02-01T02:30:00|1800|Program
ACME|The Demolition Man|2018-02-01T03:00:00|1800|Program
ACME|The Demolition Man|2018-02-01T03:30:00|1800|Program
ACME|The Demolition Man|2018-02-01T04:00:00|1800|Program
ACME|The Demolition Man|2018-02-01T04:30:00|1800|Program
ACME|Photon|2018-02-01T05:00:00|1800|Program
ACME|Photon|2018-02-01T05:30:00|1800|Program
ACME|Miles & Smiles|2018-02-01T06:00:00|3600|Program
ACME|S015_1|2018-02-01T06:13:53|120|Episode
ACME|S015_2|2018-02-01T06:29:22|120|Episode
ACME|S015_3|2018-02-01T06:46:28|120|Episode
ACME|Ice Age|2018-02-01T07:00:00|300|Program
ACME|Ice Age|2018-02-01T07:05:00|600|Program
ACME|Ice Age|2018-02-01T07:15:00|2700|Program
ACME|S01_4|2018-02-01T07:17:17|120|Episode
ACME|S01_5|2018-02-01T07:32:11|120|Episode
ACME|S01_6|2018-02-01T07:47:20|120|Episode
ACME|My Girl Friday|2018-02-01T08:00:00|3600|Program
ACME|S05_7|2018-02-01T08:17:28|120|Episode
ACME|S05_8|2018-02-01T08:31:59|120|Episode
ACME|S05_9|2018-02-01T08:44:42|120|Episode
ACME|Pirate Bay|2018-02-01T09:00:00|3600|Program
ACME|S01_1|2018-02-01T09:33:12|120|Episode
ACME|S01_2|2018-02-01T09:46:19|120|Episode
ACME|Broke Girls|2018-02-01T10:00:00|1200|Program
ACME|S05_3|2018-02-01T10:13:05|120|Episode
ACME|S05_4|2018-02-01T10:29:10|120|Episode
OUTPUT
{
"programs": [
{
"StartTime": "2018-02-01T00:00:00",
"Duration": 3600,
"Name": "Broke Girls",
"episode": [
{
"name": "S03_8",
"startDateTime": "2018-02-01T00:13:05",
"duration": 120
},
{
"name": "S03_9",
"startDateTime": "2018-02-01T00:29:10",
"duration": 120
},
{
"name": "S04_1",
"startDateTime": "2018-02-01T00:44:12",
"duration": 120
}
]
},
{
"StartTime": "2018-02-01T06:00:00",
"Duration": 3600,
"Name": "Miles & Smiles",
"episode": [
{
"name": "S015_1",
"startDateTime": "2018-02-01T06:13:53",
"duration": 120
},
{
"name": "S015_2",
"startDateTime": "2018-02-01T06:29:22",
"duration": 120
},
{
"name": "S015_3",
"startDateTime": "2018-02-01T06:46:28",
"duration": 120
}
]
},
{
"StartTime": "2018-02-01T07:00:00",
"Duration": 3600,
"Name": "Ice Age",
"episode": [
{
"name": "S01_4",
"startDateTime": "2018-02-01T07:17:17",
"duration": 120
},
{
"name": "S01_5",
"startDateTime": "2018-02-01T07:32:11",
"duration": 120
},
{
"name": "S01_6",
"startDateTime": "2018-02-01T07:47:20",
"duration": 120
}
]
},
{
"StartTime": "2018-02-01T08:00:00",
"Duration": 3600,
"Name": "My Girl Friday",
"episode": [
{
"name": "S05_7",
"startDateTime": "2018-02-01T08:17:28",
"duration": 120
},
{
"name": "S05_8",
"startDateTime": "2018-02-01T08:31:59",
"duration": 120
},
{
"name": "S05_9",
"startDateTime": "2018-02-01T08:44:42",
"duration": 120
}
]
},
{
"StartTime": "2018-02-01T09:00:00",
"Duration": 3600,
"Name": "Pirate Bay",
"episode": [
{
"name": "S01_1",
"startDateTime": "2018-02-01T09:33:12",
"duration": 120
},
{
"name": "S01_2",
"startDateTime": "2018-02-01T09:46:19",
"duration": 120
}
]
},
{
"StartTime": "2018-02-01T10:00:00",
"Duration": 1200,
"Name": "Broke Girls",
"episode": [
{
"name": "S05_3",
"startDateTime": "2018-02-01T10:13:05",
"duration": 120
},
{
"name": "S05_4",
"startDateTime": "2018-02-01T10:29:10",
"duration": 120
}
]
}
]
}
Give this a try; comments are embedded:
%dw 2.0
output application/dw
var data = readUrl("classpath://data.csv","application/csv",{separator:"|"})
var firstProgram = data[0].Name
---
// Identify the programs by adding a field
(data reduce (e,acc={l: firstProgram, c:0, d: []}) -> do {
var next = acc.l != e.Name and e.Type == "Program"
var counter = if (next) acc.c+1 else acc.c
---
{
l: if (next) e.Name else acc.l,
c: counter,
d: acc.d + {(e), pc: counter}
}
}).d
// group by the identifier of individual programs
groupBy $.pc
// Get just the programs throw away the program identifiers
pluck $
// Throw away the programs with no episodes
filter ($.*Type contains "Episode")
// Iterate over the programs
map do {
// sum the program duration
var d = $ dw::core::Arrays::sumBy (e) -> if (e.Type == "Program") e.Duration else 0
// Get the episodes and do a little cleanup
var es = $ map $-"pc" filter ($.Type == "Episode")
---
// Form the desired structure
{
($[0] - "pc" - "Duration"),
Duration: d,
Episode: es
}
}
NOTE1: I stored the contents in a file and read it using readUrl; you will need to adjust this to wherever you get your data from.
NOTE2: Maybe you need to rethink your inputs and organize them better, if possible.
NOTE3: Studio will show errors (at least Studio 7.5.1 does). They are false positives; the code runs.
NOTE4: There are lots of steps because of the non-trivial input. Potentially the code could be optimized, but I already spent enough time on it; I'll let you deal with the optimization, or somebody else from the community can help.

Creating a resulting array with unique values in dataweave

I need to compare two arrays efficiently and create a third array with the values that are only in the second array, using a DataWeave transformation in Mule. I wanted to use the negation of the contains keyword, but it was giving errors. I hope I can use filter and contains to filter out the values.
arr1 = [
{
"leadId": 127,
"playerId": 334353
},
{
"leadId": 128,
"playerId": 334354
},
{
"leadId": 123,
"playerId": 43456
},
{
"leadId": 122,
"playerId": 43458
}
]
arr2 = [
{
"leadId": 127,
"name": "James"
},
{
"leadId": 129,
"name": "Joseph"
},
{
"leadId": 120,
"name": "Samuel"
},
{
"leadId": 122,
"name": "Gabriel",
}
The resulting array needs to be:
arr3 = [
{
"leadId": 129,
"name": Joseph
},
{
"leadId": 120,
"name": Samuel
}
]
UPDATED - Including DataWeave 1 and 2
I'm not sure exactly how you have your arrays stored (in the payload, in variables, etc.), but the script below should give you enough to go on. The comparison is done based on the leadId only.
The transform actually works across both DW1 and DW2; you just need to change the header.
Mule 3 and DW1
%dw 1.0
%output application/json
---
payload.array2 filter (not (payload.array1.leadId contains $.leadId))
Mule 4 and DW2
%dw 2.0
output application/json
---
payload.array2 filter (not (payload.array1.leadId contains $.leadId))
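On the efficiency point: payload.array1.leadId may be re-evaluated for every element of array2 in the scripts above, so for larger arrays it is worth hoisting it into a var that is computed once. A small variation of the DW2 script:
%dw 2.0
output application/json
// Build the lookup list of leadIds once, then filter against it
var ids = payload.array1.leadId
---
payload.array2 filter (not (ids contains $.leadId))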
Input
{
"array1": [
{
"leadId": 127,
"playerId": 334353
},
{
"leadId": 128,
"playerId": 334354
},
{
"leadId": 123,
"playerId": 43456
},
{
"leadId": 122,
"playerId": 43458
}
],
"array2": [
{
"leadId": 127,
"name": "James"
},
{
"leadId": 129,
"name": "Joseph"
},
{
"leadId": 120,
"name": "Samuel"
},
{
"leadId": 122,
"name": "Gabriel"
}
]
}
Output
[
{
"leadId": 129,
"name": "Joseph"
},
{
"leadId": 120,
"name": "Samuel"
}
]