I want to be able to make multiple services (one for each aggregate) in a single plugin, but Lagom does not allow this. For example, I have two aggregates: EmailThread and EmailMessage.
EmailThreadAggregate {
    createEmailThread(emailMessageId);
    closeEmailThread();
    // many other commands
}

EmailMessageAggregate {
    createEmailMessage(EmailMessageDetails emailMessageDetails);
    markEmailMessageDeliverySuccess();
    markEmailMessageDeliveryFailure();
    // many more commands
}
Both have separate commands and life cycles, but they are closely related.
Hence, I want to create these two aggregates and their two services, EmailThreadService and EmailMessageService, in the same plugin. Can anyone explain why Lagom does not allow this?
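For illustration, the two services I have in mind would look roughly like this in Lagom's Java API (each interface in its own source file; the paths and call signatures below are just placeholders):

import akka.Done;
import akka.NotUsed;
import com.lightbend.lagom.javadsl.api.Descriptor;
import com.lightbend.lagom.javadsl.api.Service;
import com.lightbend.lagom.javadsl.api.ServiceCall;
import static com.lightbend.lagom.javadsl.api.Service.named;
import static com.lightbend.lagom.javadsl.api.Service.pathCall;

public interface EmailThreadService extends Service {
    ServiceCall<NotUsed, Done> createEmailThread(String emailMessageId);
    ServiceCall<NotUsed, Done> closeEmailThread(String threadId);

    @Override
    default Descriptor descriptor() {
        // placeholder paths
        return named("email-thread").withCalls(
            pathCall("/api/email-threads/:emailMessageId", this::createEmailThread),
            pathCall("/api/email-threads/:threadId/close", this::closeEmailThread)
        );
    }
}

public interface EmailMessageService extends Service {
    ServiceCall<EmailMessageDetails, Done> createEmailMessage();

    @Override
    default Descriptor descriptor() {
        return named("email-message").withCalls(
            pathCall("/api/email-messages", this::createEmailMessage)
        );
    }
}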
I would like to maintain comma-separated lists of entries of the form <ip>:<app>, indexed by an account ID. There would be one such list per user, keyed by their account ID, with the number of users in the millions. This is mainly to track which server in a cluster a user of a given application is connected to.
Since all servers are written in Java, with Redisson I'm currently doing:
RSet<String> set = client.getSet(accountKey);
and then I can modify the set using the typical Java container APIs supported by Redisson. I basically need three types of updates to these comma-separated lists:
Client connects to a new application = append
Client reconnects with existing application to new endpoint = modify
Client disconnects = remove
A new connection would require a change to a field like:
1.1.1.1:foo,2.2.2.2:bar -> 1.1.1.1:foo,2.2.2.2:bar,3.3.3.3:baz
A reconnect would require an update like:
1.1.1.1:foo,2.2.2.2:bar -> 3.3.3.3:foo,2.2.2.2:bar
A disconnect would require an update like:
1.1.1.1:foo,2.2.2.2:bar -> 2.2.2.2:bar
As mentioned, the fields would be keyed by the user's account ID.
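For context, with the Redisson RSet above these three updates are plain java.util.Set operations (client and accountKey come from the surrounding application; the entries are the illustrative values from the examples above):

import org.redisson.api.RSet;
import org.redisson.api.RedissonClient;

void applyUpdates(RedissonClient client, String accountKey) {
    RSet<String> set = client.getSet(accountKey);

    // connect: append a new <ip>:<app> entry
    set.add("3.3.3.3:baz");

    // reconnect: the old entry has to be removed and re-added (two operations)
    set.remove("1.1.1.1:foo");
    set.add("3.3.3.3:foo");

    // disconnect: drop the entry
    set.remove("2.2.2.2:bar");
}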
My question is the following: without using Redisson, how can I implement this "directly" on top of Redis commands? The goal is to allow rewriting certain components in a language other than Java. The cluster handles close to a million requests per second.
I'm actually quite curious how Redisson implements an RSet under the hood, but I haven't had time to dig into it. I guess one option would be to use Lua, but I've never used it with Redis. Any ideas on how to efficiently implement these operations on top of Redis in a manner that is easily supported by multiple languages, i.e. not relying on a specific library?
Having actually thought about the problem properly, it can be solved directly with a Redis hash (HSET): the key is the user's account ID, each <app> is a field name, and the field's value is the IP.
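A minimal sketch of what that looks like from Java using Jedis (the conn:<accountId> key scheme and the ConnectionTracker class are just illustrative; the same three commands are available from any Redis client):

import java.util.Map;
import redis.clients.jedis.Jedis;

public class ConnectionTracker {
    private final Jedis jedis = new Jedis("localhost", 6379);

    // hypothetical key scheme: one hash per account
    private String key(String accountId) {
        return "conn:" + accountId;
    }

    // connect or reconnect: HSET writes (or overwrites) the <app> field with the new IP
    public void connect(String accountId, String app, String ip) {
        jedis.hset(key(accountId), app, ip);
    }

    // disconnect: HDEL removes only that application's field
    public void disconnect(String accountId, String app) {
        jedis.hdel(key(accountId), app);
    }

    // read back all <app> -> <ip> entries for a user: HGETALL
    public Map<String, String> connections(String accountId) {
        return jedis.hgetAll(key(accountId));
    }
}

Since each update is a single command (HSET or HDEL), the operations are atomic without any Lua scripting, and a reconnect is simply an HSET that overwrites the existing field.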
I'm new to Cloudflare Workers and the wrangler publish system, and I can find very little information about my requirements online; perhaps my search query is wrong, so I'm hoping I can find some help here.
I have an NX workspace containing two apps. One app is deployed as the top-level worker, and the second should be deployed to a sub-directory of the same worker, effectively creating a parent-child structure like the following:
example.com/ -> top-level app
example.com/site2/ -> child-level app
My issue is that I do not understand where and how to define the /sub-directory/ in wrangler.toml. Should I have two separate Workers Sites for these? I was under the impression that I could just update the worker (index.js) file in my single Workers Site to handle /site2/ and otherwise treat the request as standard.
All I would really like to know is: how can I specify that my publish should go to the /site2/ sub-directory, if that is at all possible?
Thanks in advance.
There are a couple ways to handle this. If your code / logic in the workers for the top-level vs child-level is completely different, I'd recommend using two separate workers. Then you can configure which "routes" each worker will run on -
https://developers.cloudflare.com/workers/cli-wrangler/configuration
Worker 1 could be -
routes = ["example.com/"]
Worker 2 could be -
routes = ["example.com/site2/"]
Check this out for more details on how routing / matching behaves -
https://developers.cloudflare.com/workers/platform/routes#matching-behavior
The other way to do it would be to have a single worker, and inspect the incoming request to behave differently depending on whether the request is at the root, or at /site2/. I'd only recommend this if there are small differences between how the two sites should behave (e.g. swapping out a variable).
In a Mule app (using Mule 4), I am trying to invoke a single API multiple times for an input array of Strings like this:
"input_arr": [
"val1", "val2", "val3"
]
All the invocations can run in parallel since they are independent, but I want to wait for them all to complete and then collate the results. Also, if one or more of them result in errors, I want to capture that as well.
I tried a couple of different ways:
1. Simple foreach -- not efficient since it is sequential.
2. Batch - it is async and the main flow does not wait.
What would be the best way to achieve this efficiently in MuleSoft?
If you are using Mule 4.2+ then parallel-foreach might achieve what you are looking for.
The Parallel For Each scope enables you to process a collection of messages by splitting the collection into parts that are simultaneously processed in separate routes within the scope of any limitation configured for concurrent-processing.
NOTE: However, because this feature is not available in the Anypoint Studio Mule Palette view, you must manually configure Parallel For Each scope in the XML.
Also there are some differences other than concurrency with the new scope, so make sure to read the documentation:
https://docs.mulesoft.com/mule-runtime/4.2/parallel-foreach-scope
Probably the solution is to use the Parallel Foreach scope from Mule 4.2.
I have a few NiFi process groups which I want to run integration tests on before promoting to production. The issue is that I can't seem to find any documentation on how to do so.
Data Provenance seems like a promising tool to accomplish what I want; however, over the course of the FlowFile's lifecycle, data is published to/from Kafka or the file system. As a result, the FlowFile UUID changes, so I cannot query for it using the nifi-api.
Additionally, I know that NiFi offers a TestRunner library to run tests; however, this seems to only be for processors/processor groups constructed in code and not via the UI.
Does anyone know of a tool, framework, or pattern for integration and unit testing NiFi process groups? Ideally this would be a solution where you can programmatically compare input/output of a processor/process group without modifying the existing workflow.
With the introduction of the Apache NiFi Registry, we have seen users promote flows from a development/sandbox environment to a test/QE environment where there are existing "test harness" flows surrounding the "flow under test", so that they can send repeatable and deterministic data (or an anonymized sample of real production data) through the flow and compare the results to an expected value.
As you point out, there is a TestRunner class and a whole testing framework provided for unit tests. While it can be difficult to manually translate a UI-constructed flow to the programmatic construction, you could also create something like a translator to accept a flow template or flow.xml.gz file and convert it into something processable by the test framework.
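For reference, the unit-test style that TestRunner supports for a single, code-referenced processor looks roughly like this (MyProcessor, its "Delimiter" property, and the "success" relationship are hypothetical placeholders):

import org.apache.nifi.util.TestRunner;
import org.apache.nifi.util.TestRunners;
import org.junit.jupiter.api.Test;

class MyProcessorTest {
    @Test
    void routesParsedRowsToSuccess() {
        // MyProcessor is a hypothetical processor class available on the test classpath
        TestRunner runner = TestRunners.newTestRunner(new MyProcessor());
        runner.setProperty("Delimiter", ",");                    // configure it as the UI would
        runner.enqueue("csv row,12,another value".getBytes());   // stage an input FlowFile
        runner.run();                                             // trigger the processor once
        runner.assertAllFlowFilesTransferred("success", 1);      // verify routing and count
    }
}

As noted above, this only exercises processors you can instantiate in code; testing a whole UI-built process group still requires exporting the flow and translating it into something the test framework can drive.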
Maybe plumber will help you with flow testing.
We also wanted to test whole NiFi flows, not just single processor, so we created this library and decided to open-source it.
Simple example in Scala:
// read flow previously exported from NiFi
val template = TemplateDeserializer.deserialize(this.getClass.getClassLoader.getResourceAsStream("exported-flow.xml"))
val flow = NifiTemplateFlowFactory(template).create()
// enqueue some data to any processor
flow.enqueueByName("csv row,12,another value,true", "CsvParserProcessor")
// run entire flow once
flow.run(1)
// get the results from any processor
val records = flow.resultsFromProcessorRelation("LastProcessorInFlow","successRelation")
records should have size 1
This library is still under development so improvements and ideas are welcomed! :)
I am looking for a way to change the steps in a saga, for example to insert a step during processing, preferably at runtime.
Is it possible to do this using sagas?
Sagas (particularly those written using Automatonymous) were not designed to handle dynamic configuration at runtime. They are a codified way to create process monitors and workflows.
If you need to dynamically modify the steps of a workflow, you could use the Courier routing slip, which is built into MassTransit. It allows an activity in the workflow to revise the itinerary, adding or removing steps (activities) as needed.