I am trying to take output from Salesforce and transform it to JSON. Here is my code:
%dw 1.0
%output application/json
---
payload map {
    headerandlines: {
        id: $.Id,
        agreementLineID: $.LineItems__r.Id,
        netPrice: $.LineItems__r.Price__c,
        volume: $.Volume__c,
        name: $.Name,
        StartDate: $.Start_Date__c,
        EndDate: $.End_Date__c,
        poField: $.PO_Field__c,
        ConsoleNumber: $.Console_Number__c,
        Term: $.Term__c,
        ownerID: $.OwnerId,
        Unit: $.Unit__c,
        siteNumber: $.Site_Num__c,
        customerNumber: $.Customer_Num__c
    }
}
The input payload looks like this; it is a collection of objects. Somehow, after the transformation, only the first object is sent and the rest is clobbered.
[
{
"id": "DA0YAAW",
"LineID": [
"jGEAU",
"jBEAU",
"j6EAE"
],
"Price": [
"50000.0",
"12000.0",
"45000.0"
],
"netPrice": null,
"volume": null,
"name": " Test 2.24",
"StartDate": "2017-02-17",
"EndDate": "2018-02-17",
"poField": "123456",
"ConsoleNumber": "8888888",
"PaymentTerm": "thirty (30)",
"ownerID": “abcd”,
"OperatingUnit": " International Company",
"siteNumber": null,
"customerNumber": null
},
{
"id": "a37n0000000DAMAAA4",
"LineID": [
"JunEAE",
"JuiEAE",
"KdMEAU",
"JuYEAU"
],
"Price": [
"5000.0",
"8000.0",
"5000.0",
"5000.0"
],
"netPrice": null,
"volume": null,
"name": " Test 3.6",
"StartDate": "2017-03-06",
"EndDate": "2018-03-16",
"poField": "12345",
"ConsoleNumber": "123456-",
"PaymentTerm": "30 NET",
"ownerID": “dfgh”,
"OperatingUnit": ", inc.",
"siteNumber": null,
"customerNumber": null
},
….
]
When I call this code from the browser (using API testing) I get the complete payload with multiple objects. When I call this from another API I get only one object, indicating it is not looping through. I can confirm that the payload has multiple objects. Is there anything I am missing in terms of looping through this code to extract multiple objects? I assume that the '$' notation is good enough for iteration.
@insaneyogi, either your input is incorrect or your DataWeave is incorrect.
In the input you have specified the id in lowercase, but in the DataWeave it is referenced with a capital (Id).
I think the problem here is with your LineItems and Price elements. They are collections within an element. In your data mapping, $ takes care of the outer object. However, I think a mapping like LineItems__r.Price__c is not correct; it should have a proper index, probably LineItems__r.Price__c[0]. Please try that and it should work. First change the input to a single element for price or line item and test.
It looks like agreementLineID and netPrice are arrays, and you need to loop through them with a map operator inside the bigger outer map to get all the line items. That should work.
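For instance, here is a minimal DataWeave 1.0 sketch of that nested map. It assumes LineItems__r arrives as a collection of child line-item records; the field names are copied from the original script and may need adjusting to the actual payload:

%dw 1.0
%output application/json
---
payload map ((agreement, index) -> {
    headerandlines: {
        id: agreement.Id,
        name: agreement.Name,
        lines: agreement.LineItems__r map ((line, lineIndex) -> {
            agreementLineID: line.Id,
            netPrice: line.Price__c
        })
    }
})

The inner map produces one entry per line item instead of collapsing the whole collection into a single value.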
What I am trying to achieve:
I would like to have a time series chart showing the total number of members in my club at any time. This member count should be calculated using the fields "Eintrittsdatum" (joining date) and "Austrittsdatum" (leaving date). I'm thinking of it as a running sum: every filled joining-date field means +1 on the member count, every leaving-date entry is a -1.
Data structure
I’m calling the API of webling.ch with a secret key. This is my data structure with sample data per member:
[
{
"type": "member",
"meta": {
"created": "2020-03-02 11:33:00",
"createuser": {
"label": "Joana Doe",
"type": "user"
},
"lastmodified": "2022-12-06 16:32:56",
"lastmodifieduser": {
"label": "Joana Doe",
"type": "user"
}
},
"readonly": true,
"properties": {
"Mitglieder ID": 99,
"Anrede": "Dear",
"Vorname": "Jon",
"Name": "Doe",
"Strasse": "Doeington Street",
"Adresszusatz": null,
"PLZ": "9999",
"Ort": "Doetown",
"E-Mail": "jon.doe#doenet.net",
"Telefon Privat": null,
"Telefon Geschäft": null,
"Mobile": "099 877 54 54",
"Geschlecht": "m",
"Geburtstag": "1966-03-10",
"Mitgliedschaftstyp": "Aktivmitgliedschaft",
"Eintrittsdatum": "2020-03-01",
"Austrittsdatum": null,
"Passfoto": null,
"Wordpress Benutzername": null,
"Wohnhaft im Glarnerland": false,
"Lat": "43.1563379",
"Long": "6.0474622"
},
"parents": [
240
],
"children": {
},
"links": {
"debitor": [
2124,
3056,
3897
],
"attendee": [
2576
]
},
"id": 1815
}
]
Grafana data source
I am using the “JSON API” by Marcus Olsson: GitHub - grafana/grafana-json-datasource: A data source plugin for loading JSON APIs into Grafana.
Grafana v9.3.1 (89b365f8b1) on Linux
My current approach
Queries:
Query C - uses a filter on the source-API to only show entries with "Eintrittsdatum" IS NOT EMPTY
Field 1 (alias "datum") has a JSONata-Query of:
properties.Eintrittsdatum
Field 2 (alias "names") should return the full name and has a query of:
$map($.properties, function($v) {(
($v.Vorname&" "&$v.Name);
)})
Field 3 (alias "value") should return "1" for every entry and has a query of:
$map($.properties, function($v) {(
(1);
)})
Query D - uses a filter on the source-API to only show entries with "Austrittsdatum" IS NOT EMPTY
Field 1 (alias "datum") has a JSONata-Query of:
properties.Austrittsdatum
Field 2 (alias "names") should return the full name and has a query of:
$map($.properties, function($v) {(
($v.Vorname&" "&$v.Name);
)})
Field 3 (alias "value") should return "1" for every entry and has a query of:
$map($.properties, function($v) {(
(1);
)})
Here's a screenshot to clarify things
(https://zigerschlitzmakers.ch/wp-content/uploads/2023/01/ScreenshotGrafana-1.png)
Transformations:
My applied transformations
(https://zigerschlitzmakers.ch/wp-content/uploads/2023/01/ScreenshotGrafana-2.png)
What's working
I can correctly gather the number of members added/subtracted per day.
What's not working
I can't get the graph to display the way I want: I'd like to have a running sum of these numbers instead of the following two graphs.
Time series graph with merged queries
(https://zigerschlitzmakers.ch/wp-content/uploads/2023/01/ScreenshotGrafana-3.png)
Time series graph with unmerged queries
(https://zigerschlitzmakers.ch/wp-content/uploads/2023/01/ScreenshotGrafana-4.png)
I can't get the names to display within the tooltip of the data points (really not THAT necessary).
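As a side note, the running-sum idea itself can be prototyped in plain JSONata, independent of the panel configuration. This is only a minimal sketch; $deltas is a hypothetical stand-in for the sorted per-day +1/-1 values the queries above produce:

(
  /* hypothetical stand-in for the sorted per-day +1/-1 values */
  $deltas := [1, 1, -1, 1];

  /* running sum: for each position, sum all deltas up to and including it */
  $map($deltas, function($v, $i) {
    $sum($deltas[[0..$i]])
  })
)

With these sample values the expression should evaluate to [1, 2, 1, 2].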
I am trying to transform the below Array of Objects input:
[
{
"Id": "3",
"Code": "4190484",
"Expense": "Huge Expense "
},
{
"Id": "4",
"Code": "271",
"Expense": "Big Expense"
},
{
"Id": "3",
"Code": "433",
"Expense": "No Expense"
}
]
to this Output of a single object:
{
"Id": "3",
"Code": "4190484",
"Expense": "Huge Expense ",
"Id": "4",
"Code": "271",
"Expense": "Big Expense",
"Id": "3",
"Code": "433",
"Expense": "No Expense"
}
How would you accomplish this in DataWeave?
You can also use the dynamic elements feature of the language:
%dw 2.0
output application/json
---
{(payload)}
Like @aled explained in his answer, you should not be using duplicate keys in JSON.
You can use the reduce() function, but be warned that using duplicate keys in JSON is implementation-dependent. I think it is bad design to use duplicate keys in JSON. It might lead to unexpected behaviors. Some implementations might ignore the duplicates. For example, DataWeave will return only one Id from the resulting object with payload.Id.
If, even after what I mentioned, you still want to go ahead with it, this is an example:
%dw 2.0
output application/json
---
// I don't recommend using duplicate keys
payload reduce ((item, acc = {}) -> acc ++ item)
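To illustrate the caveat about duplicate keys (a minimal sketch against the sample input; the exact behavior is implementation-dependent, as noted above), selecting Id from the merged object yields a single value rather than all three:

%dw 2.0
output application/json
var merged = payload reduce ((item, acc = {}) -> acc ++ item)
---
// "merged" contains three "Id" keys; the single-value selector returns just one of them
merged.Id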
I have been struggling with this problem for a long time. I need to create a new JSON flowfile using QueryRecord, taking the array field ref from the input JSON field refs and skipping the wrapper object, as shown in the example below:
Input JSON flowfile
{
"name": "name1",
"desc": "full1",
"refs": {
"ref": [
{
"source": "source1",
"url": "url1"
},
{
"source": "source2",
"url": "url2"
}
]
}
}
QueryRecord configuration
JsonTreeReader set up with Infer Schema, and JsonRecordSetWriter
select name, description, (array[rpath(refs, '//ref[*]')]) as sources from flowfile
Output JSON (need)
{
"name": "name1",
"desc": "full1",
"references": [
{
"source": "source1",
"url": "url1"
},
{
"source": "source2",
"url": "url2"
}
]
}
But I got this error:
QueryRecord Failed to write MapRecord[{references=[Ljava.lang.Object;#27fd935f, description=full1, name=name1}] with schema ["name" : "STRING", "description" : "STRING", "references" : "ARRAY[STRING]"] as a JSON Object due to java.lang.ClassCastException: null
Try the following approach; in your case it should work:
1) Read your JSON flowfile fully (I imitated it with a GenerateFlowFile processor using your example).
2) Add an EvaluateJsonPath processor which will put the 2 header fields (name, desc) into attributes.
3) Add a SplitJson processor which will split your JSON by the refs/ref groups (split by "$.refs.ref").
4) Add a ReplaceText processor which will add your header fields (name, desc) to the split records (replace the "[{]" value with "{"name":"${json.name}","desc":"${json.desc}","); see the sample result after these steps.
5) It's done.
Full process in my demo case:
Hope this helps.
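If it helps, with the sample input each split flowfile after step 4 should end up looking roughly like this (the json.name / json.desc attribute names simply follow the ReplaceText expression above):

{
  "name": "name1",
  "desc": "full1",
  "source": "source1",
  "url": "url1"
}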
Solution: use JoltTransformJSON to transform the JSON with a Jolt specification (see the Jolt specification documentation for details).
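For example, a minimal Jolt shift spec along those lines (a sketch written against the sample input above, not a tested configuration) could look like this:

[
  {
    "operation": "shift",
    "spec": {
      "name": "name",
      "desc": "desc",
      "refs": {
        "ref": "references"
      }
    }
  }
]

Applied to the input flowfile, this keeps name and desc as they are and moves the refs.ref array up to a top-level references field, matching the desired output.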
In JMeter, I want to pass dynamic parameters. For simple JSON it is easy to put ${value1}, but if the JSON structure is complex, with arrays or multiple values, what is the proper method to pass parameters dynamically? Please refer to the JSON below.
Below is the JSON with parameters:
{
"squadName": "Super hero squad",
"homeTown": "Metro City",
"formed": 2016,
"secretBase": "Super tower",
"active": true,
"members": [
{
"name": "Molecule Man",
"age": 29,
"secretIdentity": "Dan Jukes",
"powers": [
"Radiation resistance",
"Turning tiny",
"Radiation blast"
]
},
{
"name": "Madame Uppercut",
"age": 39,
"secretIdentity": "Jane Wilson",
"powers": [
"Million tonne punch",
"Damage resistance",
"Superhuman reflexes"
]
},
{
"name": "Eternal Flame",
"age": 1000000,
"secretIdentity": "Unknown",
"powers": [
"Immortality",
"Heat Immunity",
"Inferno",
"Teleportation",
"Interdimensional travel"
]
}
]
}
=======
Now I have used the method below to send parameters through a CSV config file.
Is there any other simple method to pass parameters through variables in JMeter for complex JSON (5-6 levels with array data)?
CSV Data Set Config is the best way to parameterize your test data.
If you want to customize the way you pick values from the CSV, you can use a BeanShell/JSR223 sampler.
Here is one article that shows how to pick random values from the CSV Data Set Config.
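As a rough illustration of the JSR223 approach, a Groovy sketch could look like the following; the file name data.csv, the field order, and the variable names are assumptions for this example only:

// read all lines from a CSV file (the path is an assumption for this sketch)
def lines = new File('data.csv').readLines()

// pick one random line and split it into fields
def fields = lines[new Random().nextInt(lines.size())].split(',')

// expose the values as JMeter variables; ${memberName} and ${memberAge}
// can then be referenced inside the JSON request body
vars.put('memberName', fields[0])
vars.put('memberAge', fields[1])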
I am new to Pig scripting and working with JSON. I need to parse multi-level JSON files in Pig. Say,
{
"firstName": "John",
"lastName" : "Smith",
"age" : 25,
"address" :
{
"streetAddress": "21 2nd Street",
"city" : "New York",
"state" : "NY",
"postalCode" : "10021"
},
"phoneNumber":
[
{
"type" : "home",
"number": "212 555-1234"
},
{
"type" : "fax",
"number": "646 555-4567"
}
]
}
I am able to parse a single-level JSON through JsonLoader(), do joins and other operations, and get the desired results with JsonLoader('name:chararray,field1:int .....');
Is it possible to parse the above JSON file using the built-in JsonLoader() function of Pig 0.10.0? If it is, please explain how it is done and how to access the fields of the particular JSON.
You can handle nested JSON loading with Twitter's Elephant Bird: https://github.com/kevinweil/elephant-bird
a = LOAD 'file3.json' USING com.twitter.elephantbird.pig.load.JsonLoader('-nestedLoad');
This will parse the JSON into a map (http://pig.apache.org/docs/r0.11.1/basic.html#map-schema); a JSON array gets parsed into a DataBag of maps.
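As a small sketch of how the fields might then be reached (this assumes the -nestedLoad option yields a single map field that can be dereferenced with #; the aliases are illustrative):

a = LOAD 'file3.json' USING com.twitter.elephantbird.pig.load.JsonLoader('-nestedLoad') AS (json:map[]);

-- dereference top-level and nested fields from the map
b = FOREACH a GENERATE (chararray) json#'firstName' AS firstName,
                       (chararray) json#'address'#'city' AS city;

DUMP b;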
It is possible by creating your own UDF. A simple UDF example is shown in the link below:
http://pig.apache.org/docs/r0.9.1/udf.html#udf-java
C = LOAD 'path' USING JsonLoader('firstName:chararray,lastName:chararray,age:int,address:(streetAddress:chararray,city:chararray,state:chararray,postalCode:chararray),
    phoneNumber:{(type:chararray,number:chararray)}');
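With that schema the nested fields can then be reached through ordinary tuple projection and bag flattening; a small illustrative sketch (alias names are made up):

-- project the city out of the nested address tuple and flatten the bag of phone numbers
D = FOREACH C GENERATE firstName, address.city AS city, FLATTEN(phoneNumber);
DUMP D;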