I have a scatter-gather in my flow, the output of which is a List of Maps. How can I convert that into a single Map in Mule?
I have tried combine-collections-transformer and flattening the payload; nothing seems to work.
You can use the following DataWeave code, but it will override duplicate keys:
%dw 1.0
%output application/java
---
{(payload)}
Hope this helps.
I would recommend using a custom Java Transformer so you can easily handle special situations such as duplicate keys with different values. A DataWeave function may also be able to do the trick, but you'll need Mule EE.
With a Transformer it's simply a matter of Java code:
import java.util.HashMap;
import java.util.List;
import java.util.Map;

import org.mule.api.MuleMessage;
import org.mule.api.transformer.TransformerException;
import org.mule.config.i18n.MessageFactory;
import org.mule.transformer.AbstractMessageTransformer;

public class MyMapFlattener extends AbstractMessageTransformer {

    @Override
    public Object transformMessage(MuleMessage message, String outputEncoding) throws TransformerException {
        // assuming your payload is the List of Maps
        @SuppressWarnings("unchecked")
        List<Map<?, ?>> listOfMap = (List<Map<?, ?>>) message.getPayload();
        Map<Object, Object> finalMap = new HashMap<Object, Object>();
        for (Map<?, ?> map : listOfMap) {
            // you can use putAll if you don't care about duplicates
            // finalMap.putAll(map);
            // or a more complex algorithm to handle duplicates
            for (Map.Entry<?, ?> e : map.entrySet()) {
                if (finalMap.containsKey(e.getKey())) {
                    // handle the duplicate key: check whether both values are equal and skip it,
                    // keep one of the values, or throw an exception if the values differ
                    throw new TransformerException(
                            MessageFactory.createStaticMessage("Duplicate key: " + e), this);
                } else {
                    // key does not exist yet, put it
                    finalMap.put(e.getKey(), e.getValue());
                }
            }
        }
        return finalMap;
    }
}
And then use this transformer in your flow. See the docs for details.
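For example, a minimal sketch of how it might be wired in after the scatter-gather (the flow, route, and class names here are placeholders, not taken from your project):
<flow name="aggregateMapsFlow">
    <!-- each route returns a Map, so the scatter-gather output is a List of Maps -->
    <scatter-gather doc:name="Scatter-Gather">
        <flow-ref name="firstLookup" doc:name="First lookup"/>
        <flow-ref name="secondLookup" doc:name="Second lookup"/>
    </scatter-gather>
    <!-- flattens the List of Maps into a single Map -->
    <custom-transformer class="com.example.MyMapFlattener" doc:name="Flatten list of maps"/>
</flow>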
There are multiple ways of doing this. One is the DataWeave flatten operator, which turns an array of arrays into a single array. Another is to do it in a DataWeave Transform Message component with the map operator, merging the maps as per your requirements.
I'm writing some tests using rest-assured and its Kotlin extensions to test some simple Spring MVC endpoints. I'm trying to understand how to extract values.
One endpoint returns a BookDetailsView POJO, the other returns a Page<BookDetailsView> (where Page is an interface provided by Spring for doing paging).
BookDetailsView is a really simple Kotlin data class with a single field:
data class BookDetailsView(val id: UUID)
For the single object endpoint, I have:
@Test
fun `single object`() {
    val details = BookDetailsView(UUID.randomUUID())
    whenever(bookDetailsService.getBookDetails(details.id)).thenReturn(details)

    val result: BookDetailsView = Given {
        mockMvc(mockMvc)
    } When {
        get("/book_details/${details.id}")
    } Then {
        statusCode(HttpStatus.SC_OK)
    } Extract {
        `as`(BookDetailsView::class.java)
    }

    assertEquals(details.id, result.id)
}
This works as expected, but trying to apply the same technique for the Page<BookDetailsView> runs afoul of all sorts of parsing challenges since Page is an interface, and even trying to use PageImpl isn't entirely straightforward. In the end, I don't even really care about the Page object, I just care about the nested list of POJOs inside it.
I've tried various permutations like the code below to just grab the bit I care about:
@Test
fun `extract nested`() {
    val page = PageImpl(listOf(
        BookDetailsView(UUID.randomUUID())
    ))
    whenever(bookDetailsService.getBookDetailsPaged(any())).thenReturn(page)

    val response = Given {
        mockMvc(mockMvc)
    } When {
        get("/book_details")
    } Then {
        statusCode(HttpStatus.SC_OK)
        body("content.size()", `is`(1))
        body("content[0].id", equalTo(page.first().id.toString()))
    } Extract {
        path<List<BookDetailsView>>("content")
    }

    println(response[0].javaClass)
}
The final println spits out class java.util.LinkedHashMap. If instead I try to actually use the object, I get class java.util.LinkedHashMap cannot be cast to class BookDetailsView. There are lots of questions and answers related to this, and I understand it's ultimately an issue of the underlying JSON parser not knowing what to do, but I'm not clear on:
Why does the "simple" case parse without issue?
Shouldn't the type param passed to the path() function tell it what type to use?
What needs configuring to make the second case work, OR
Is there some other approach for grabbing a nested object that would make more sense?
Digging a bit into the code, it appears that the two cases may actually be using different json parsers/configurations (the former seems to stick to rest-assured JSON parsing, while the latter ends up in JsonPath's?)
I don't know Kotlin, but here is the thing:
path() doesn't know the element type of your List, so each element will be a LinkedHashMap by default instead of a BookDetailsView.
To overcome this, you can provide a TypeRef.
Java example:
List<BookDetailsView> response = ....then()
.extract().jsonPath()
.getObject("content", new TypeRef<List<BookDetailsView>>() {});
Kotlin example:
@Test
fun `extract nested`() {
    var response = RestAssured.given().get("http://localhost:8000/req1")
        .then()
        .extract()
        .jsonPath()
        .getObject("content", object : TypeRef<List<BookDetailsView?>?>() {})

    println(response)
    // [{id=1}, {id=2}]
}
I am working on a Beam pipeline to process JSON and write it to BigQuery. The JSON looks like this:
{
    "message": [{
        "name": "abc",
        "itemId": "2123",
        "itemName": "test"
    }, {
        "name": "vfg",
        "itemId": "56457",
        "itemName": "Chicken"
    }],
    "publishDate": "2017-10-26T04:54:16.207Z"
}
I parse this using Jackson into the structure below.
class Feed {
    List<Message> messages;
    TimeStamp publishDate;
}

public class Message implements Serializable {

    private static final long serialVersionUID = 1L;

    private String key;
    private String value;
    private Map<String, String> eventItemMap = new HashMap<>();
This property translates the list of maps into a single map with all the key/value pairs together, because the messages property will be parsed as a list of HashMap objects, one per key/value pair. These are then combined into a single map.
Now in my pipeline, I will convert the collection as
PCollection<KV<String, Feed>>
to write it to different tables based on a property in the class. I have written a transform to do this.
The requirement is to create multiple TableRows based on the number of message objects. I have a few more properties in the JSON, along with publishDate, which would be added to the TableRow together with each message's properties.
So the table would be as follows.
id, name, field1, field2, message1.property1, message1.property2...
id, name, field1, field2, message2.property1, message2.property2...
I tried to create the below transformation, but I am not sure how it will output multiple rows based on the message list.
private class BuildRowListFn extends DoFn<KV<String, Feed>, List<TableRow>> {

    @ProcessElement
    public void processElement(ProcessContext context) {
        Feed feed = context.element().getValue();
        List<Message> messages = feed.getMessage();
        List<TableRow> rows = new ArrayList<>();
        messages.forEach((message) -> {
            TableRow row = new TableRow();
            row.set("column1", feed.getPublishDate());
            row.set("column2", message.getEventItemMap().get("key1"));
            row.set("column3", message.getEventItemMap().get("key2"));
            rows.add(row);
        });
    }
}
But this will also be a List, to which I won't be able to apply the BigQueryIO.write transformation.
Updated as per the comment from Eugene, aka @jkff
Thanks @jkff. Now I have changed the code as you mentioned in the second paragraph, calling context.output(row) inside messages.forEach after setting the table row:
List<Message> messages = feed.getMessage();
messages.forEach((message) -> {
    TableRow row = new TableRow();
    row.set("column2", message.getEventItemMap().get("key1"));
    context.output(row);
});
Now, when I try to write this collection to BigQuery as
rows.apply(BigQueryIO.writeTableRows().to(getTable(projectId, datasetId, tableName)).withSchema(getSchema())
        .withCreateDisposition(CreateDisposition.CREATE_IF_NEEDED)
        .withWriteDisposition(WriteDisposition.WRITE_APPEND));
I am getting the below exception.
Exception in thread "main" org.apache.beam.sdk.Pipeline$PipelineExecutionException: java.lang.NullPointerException
at org.apache.beam.runners.direct.DirectRunner$DirectPipelineResult.waitUntilFinish(DirectRunner.java:331)
at org.apache.beam.runners.direct.DirectRunner$DirectPipelineResult.waitUntilFinish(DirectRunner.java:301)
at org.apache.beam.runners.direct.DirectRunner.run(DirectRunner.java:200)
at org.apache.beam.runners.direct.DirectRunner.run(DirectRunner.java:63)
at org.apache.beam.sdk.Pipeline.run(Pipeline.java:297)
at org.apache.beam.sdk.Pipeline.run(Pipeline.java:283)
at com.chefd.gcloud.analytics.pipeline.MyPipeline.main(MyPipeline.java:284)
Caused by: java.lang.NullPointerException
at org.apache.beam.sdk.io.gcp.bigquery.BigQueryServicesImpl$DatasetServiceImpl.insertAll(BigQueryServicesImpl.java:759)
at org.apache.beam.sdk.io.gcp.bigquery.BigQueryServicesImpl$DatasetServiceImpl.insertAll(BigQueryServicesImpl.java:809)
at org.apache.beam.sdk.io.gcp.bigquery.StreamingWriteFn.flushRows(StreamingWriteFn.java:126)
at org.apache.beam.sdk.io.gcp.bigquery.StreamingWriteFn.finishBundle(StreamingWriteFn.java:96)
Please help.
Thank you.
It seems that you are assuming that a DoFn can output only a single value per element. This is not the case: it can output any number of values per element - no values, one value, many values, etc. A DoFn can even output values to multiple PCollections.
In your case, you simply need to call c.output(row) for every row in your @ProcessElement method, for example: rows.forEach(c::output). Of course you'll also need to change the type of your DoFn to DoFn<KV<String, Feed>, TableRow>, because the type of elements in its output PCollection is TableRow, not List<TableRow> - you're just producing multiple elements into the collection for every input element, but that doesn't change the type.
An alternative method would be to do what you currently did, also call c.output(rows), and then apply Flatten.iterables() to flatten the PCollection<List<TableRow>> into a PCollection<TableRow> (you might need to replace List with Iterable to get it to work). But the other method is easier.
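For illustration, a minimal sketch of the reworked DoFn (the column names and getters are taken from the question; Feed and Message are your classes, and the rest is a hedged example rather than your exact pipeline code):
import com.google.api.services.bigquery.model.TableRow;
import org.apache.beam.sdk.transforms.DoFn;
import org.apache.beam.sdk.values.KV;

class BuildRowsFn extends DoFn<KV<String, Feed>, TableRow> {

    @ProcessElement
    public void processElement(ProcessContext context) {
        Feed feed = context.element().getValue();
        for (Message message : feed.getMessage()) {
            TableRow row = new TableRow();
            row.set("column1", feed.getPublishDate());
            row.set("column2", message.getEventItemMap().get("key1"));
            row.set("column3", message.getEventItemMap().get("key2"));
            // emit one TableRow per message instead of collecting them in a List
            context.output(row);
        }
    }
}
The resulting PCollection<TableRow> can then be passed straight to BigQueryIO.writeTableRows() as in your snippet.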
I am using Anypoint Studio 6.1 and Mule 3.8.1 and have this MEL expression that replaces any text \n with a new line/carriage return.
payload.replace('\\n', System.getProperty('line.separator'))
I would like to move this functionality into Dataweave but cannot get the MEL expression to work or find a way to do this in Dataweave.
How can I reuse the MEL expression in Dataweave?
Thanks
You should investigate Global Functions
Like:
<configuration doc:name="Global MEL-Functions">
    <expression-language>
        <global-functions file="mel/extraFunctions.mvel">
        </global-functions>
    </expression-language>
</configuration>
And create the global functions in a resource file for reuse:
def UUID() {
    return java.util.UUID.randomUUID().toString();
}

def decode(value) {
    return java.util.Base64.getDecoder().decode(value);
}

def encode(value) {
    return java.util.Base64.getEncoder().encodeToString(value.getBytes());
}

def stringToAscii(value) {
    StringBuilder sb = new StringBuilder();
    for (char c : value.toCharArray()) sb.append((int) c);
    return new BigInteger(sb.toString());
}
And reference your global functions in your DataWeave:
payload map
{
target: stringToAscii($) as :string
}
DW is its own mini-language within Mule, separate from MEL (that is how it was described to me), and it uses a different syntax to do what you are trying. I have not done new lines specifically, as my DW expressions use line separators as record separators, but the same general tactic should work. Here is an example of changing commas to spaces within a DW payload mapping:
AcctID: $.ACCOUNT_ID replace "," with " ",
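Applying the same tactic to your newline case, a rough sketch in DataWeave 1.0 (untested; it assumes the payload is a string containing the literal two-character sequence \n) might look like:
%dw 1.0
%output application/java
---
payload replace /\\n/ with "\n"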
Using FluentValidation and a Custom() rule, I want to be able to validate a collection of child objects, and return a ValidationFailure for each child object that is invalid.
I can't use a collection validator because the child object doesn't contain the right information to execute the rule - it must run in the context of the parent.
However the Custom() API limits me to returning a single ValidationFailure or nothing at all.
Is there a pattern I can use that allows a single rule to generate multiple errors?
I found a good solution - use AddRule() with a DelegateValidator.
public class MyValidator : AbstractValidator<MyClass>
{
    public MyValidator()
    {
        AddRule(new DelegateValidator<MyClass>(MyRule));
    }

    private IEnumerable<ValidationFailure> MyRule(
        MyClass instance,
        ValidationContext<MyClass> context)
    {
        var result = new List<ValidationFailure>();

        // add as many failures to the list as you want:
        var message = "This is not a valid message";
        result.Add(new ValidationFailure(nameof(MyClass.SomeProperty), message));

        return result;
    }
}
I am a beginner at Ignite. I am building a sample app in order to measure its query times.
The key in the cache is a String and the value is a Map. One of the fields in the value Map is "order_item_subtotal", so the query looks like:
select * from Map where order_item_subtotal>400
And the sample code is:
Ignite ignite = Ignition.ignite();
IgniteCache<String, Map<String, Object>> dummyCache = ignite.getOrCreateCache(cfg);

Map<String, Map<String, Object>> bufferMap = new HashMap<String, Map<String, Object>>();
int i = 0;
for (String jsonStr : jsonStrs) {
    if (i % 1000 == 0) {
        dummyCache.putAll(bufferMap);
        bufferMap.clear();
    }
    Map data = mapper.readValue(jsonStr, Map.class);
    bufferMap.put(data.get("order_item_id").toString(), data);
    i++;
}

SqlFieldsQuery asd = new SqlFieldsQuery("select * from Map where order_item_subtotal>400");
List<List<?>> result = dummyCache.query(asd).getAll();
But the result is always "[]", meaning empty, and there are no errors or exceptions.
What am I missing here? Any ideas?
PS: sample data below
{order_item_id=99, order_item_order_id=37, order_item_product_id=365, order_item_quantity=1, order_item_subtotal=59.9900016784668, order_item_product_price=59.9900016784668, product_id=365, product_category_id=17, product_name=Perfect Fitness Perfect Rip Deck, product_description=, product_price=59.9900016784668, product_image=http://images.acmesports.sports/Perfect+Fitness+Perfect+Rip+Deck}
This is not supported. You should use a simple POJO class instead of a map to make it work.
Note that Ignite stores data in binary format and will not deserialize objects when running queries, so you still don't need to deploy class definitions on the server nodes. Please refer to this page for more details: https://apacheignite.readme.io/docs/binary-marshaller
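For illustration, a minimal sketch of the POJO approach (the OrderItem class, its fields, and the cache name are hypothetical, not taken from your code):
import java.util.List;

import org.apache.ignite.Ignite;
import org.apache.ignite.IgniteCache;
import org.apache.ignite.Ignition;
import org.apache.ignite.cache.query.SqlFieldsQuery;
import org.apache.ignite.cache.query.annotations.QuerySqlField;
import org.apache.ignite.configuration.CacheConfiguration;

public class IgniteQueryExample {

    public static class OrderItem {
        // annotated fields become queryable SQL columns
        @QuerySqlField(index = true)
        private double orderItemSubtotal;

        @QuerySqlField
        private String productName;

        public OrderItem(double orderItemSubtotal, String productName) {
            this.orderItemSubtotal = orderItemSubtotal;
            this.productName = productName;
        }
    }

    public static void main(String[] args) {
        Ignite ignite = Ignition.ignite();

        CacheConfiguration<String, OrderItem> cfg = new CacheConfiguration<>("orderItems");
        // registers OrderItem's annotated fields as SQL columns
        cfg.setIndexedTypes(String.class, OrderItem.class);
        IgniteCache<String, OrderItem> cache = ignite.getOrCreateCache(cfg);

        cache.put("99", new OrderItem(59.99, "Perfect Fitness Perfect Rip Deck"));

        // the SQL "table" name defaults to the value class's simple name
        SqlFieldsQuery qry = new SqlFieldsQuery(
                "select productName from OrderItem where orderItemSubtotal > 400");
        List<List<?>> result = cache.query(qry).getAll();
        System.out.println(result);
    }
}
The setIndexedTypes call (or an equivalent QueryEntity configuration) is what makes the fields visible to SQL; without it the query has nothing to match, which is why querying a plain Map returns an empty result.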