AKKA.NET and DeathWatchNotification - akka.net

I get the following message when a ProductActor tries to Tell a ValidatorActor to validate a message. Although I see this message, I still get the expected result.
I never send a message from ProductActor to itself, so why do I get the following notification?
[INFO][5/17/2015 8:06:03 AM][Thread 0012][akka://catalogSystem/user/productActor] Message DeathWatchNotification from akka://catalogSystem/user/productActor to akka://catalogSystem/user/productActor was not delivered. 1 dead letters encountered.
--UPDATE--
The two actors are given below:
public class ProductActor : UntypedActor
{
    private IReportableState _reportableState;

    protected override void OnReceive(object message)
    {
        if (message is ReportableStatusChanged)
        {
            _reportableState = ((ReportableStatusChanged) message).ReportableState;
        }
        else if (message is RetrieveProductState)
        {
            var state = new ProductState()
            {
                ReportableState = _reportableState
            };
            Sender.Tell(state);
        }
        else
        {
            Context.ActorSelection("akka://ProductSystem/user/ProductActor/validator").Tell(message);
        }
    }

    protected override void PreStart()
    {
        Context.ActorOf(Props.Create(() => new ProductValidatorActor()), "validator");
        base.PreStart();
    }
}

public class ProductValidatorActor : UntypedActor
{
    protected override void OnReceive(object message)
    {
        if (message is ChangeReportableStatus)
        {
            Sender.Tell(new ReportableStatusChanged(ReportableStates.ReportableState));
        }
    }
}
This is the test to check the status:
class ChangeReportableStatusTest
{
    public void Do()
    {
        var system = ActorSystem.Create("catalogSystem");
        var productActor = system.ActorOf(Props.Create<ProductActor>(), "productActor");
        productActor.Tell(new ChangeReportableStatus(true));
        Thread.Sleep(50);
        var state = productActor.Ask<ProductState>(new RetrieveProductState());
        Console.WriteLine("Reportable State: " + (state.Result.ReportableState == ReportableStates.ReportableState ? "TRUE" : "FALSE"));
        system.Shutdown();
        system.AwaitTermination();
        Console.WriteLine("Please press any key to terminate.");
        Console.ReadKey();
    }
}

You're getting a dead letters notification, which means that the message you're trying to send is not deliverable. The actor you're trying to send a message to may be dead, or it may have never existed. In this case, it appears to be the latter.
I noticed that the name of the ActorSystem that your ProductActor lives within is different in your error message (catalogSystem) vs your code (ProductSystem).
With your ActorSelection, you're sending a message to an actor path in the wrong ActorSystem, an actor path where no actor exists. Hence the DeadLetters notice. Assuming ProductActor is created as a top-level actor in the catalogSystem, the path you're trying to send to is correct (/user/ProductActor/validator), but the actor system name is not (it should be catalogSystem, but here it is ProductSystem).
How to fix it
So how do you fix it? Two options:
Use the correct path in your ActorSelection like so: Context.ActorSelection("akka://catalogSystem/user/ProductActor/validator").Tell(message);. While this works, it's the wrong answer.
Since you create the ProductValidatorActor as a child of the ProductActor, just store the IActorRef of the child in the parent, and send messages to it directly. This is the approach I recommend. In this particular case, you don't need an ActorSelection at all.
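A minimal sketch of that second option (the handling of the other messages stays exactly as in your original OnReceive):
public class ProductActor : UntypedActor
{
    private IActorRef _validator; // handle to the child, captured at creation
    private IReportableState _reportableState;

    protected override void PreStart()
    {
        // ActorOf returns the child's IActorRef; keep it instead of looking the child up later
        _validator = Context.ActorOf(Props.Create(() => new ProductValidatorActor()), "validator");
        base.PreStart();
    }

    protected override void OnReceive(object message)
    {
        // ... handle ReportableStatusChanged / RetrieveProductState as before ...
        _validator.Tell(message); // Self is the implicit sender, so the validator's reply comes back here
    }
}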
It works now, but what can we learn here?
There are two lessons to take from this.
Lesson 1: don't use an ActorSelection when you don't need it
Generally, you should be Telling messages to IActorRefs, not to ActorSelections. With an IActorRef, you know that the actor has existed at some point in time in the past. This is a guarantee of the Akka framework, that all IActorRefs have existed at some point, even if the actor is now dead.
With an ActorSelection, you have no such guarantee. It's kind of like UDP—you're just firing messages at an address with no idea if anyone is listening.
This brings up the question of "so when should I use an ActorSelection?" The guideline I follow is to use an ActorSelection when:
I need to take advantage of wildcard selection in actor paths for some reason.
I need to send an initial message to an actor on a remote actor system, so I don't actually have a handle to it yet (and have no guarantee that it ever existed).
Lesson 2: don't fat finger actor paths in your actor code
If you need to use ActorSelections, put the paths in a shared class and then have all your other actors reference that class. Something like this:
using Akka.Actor;

namespace ProductActors
{
    /// <summary>
    /// Static helper class used to define paths to fixed-name actors
    /// (helps eliminate errors when using <see cref="ActorSelection"/>)
    /// </summary>
    public static class ActorPaths
    {
        public static readonly ActorMetaData ProductValidatorActor = new ActorMetaData("validator", "akka://ProductActors/user/validator");
        public static readonly ActorMetaData ProductCoordinatorActor = new ActorMetaData("coordinator", "akka://ProductActors/user/commander/coordinator");
    }

    /// <summary>
    /// Meta-data class
    /// </summary>
    public class ActorMetaData
    {
        public ActorMetaData(string name, string path)
        {
            Name = name;
            Path = path;
        }

        public string Name { get; private set; }
        public string Path { get; private set; }
    }
}
... which can then be referenced like so:
Context.ActorSelection(ActorPaths.ProductValidatorActor.Path).Tell(message);

Related

Spring Integration testing a Files.inboundAdapter flow

I have this flow that I am trying to test but nothing works as expected. The flow itself works well but testing seems a bit tricky.
This is my flow:
@Configuration
@RequiredArgsConstructor
public class FileInboundFlow {
    private final ThreadPoolTaskExecutor threadPoolTaskExecutor;
    private String filePath;

    @Bean
    public IntegrationFlow fileReaderFlow() {
        return IntegrationFlows.from(Files.inboundAdapter(new File(this.filePath))
                        .filterFunction(...)
                        .preventDuplicates(false),
                endpointConfigurer -> endpointConfigurer.poller(
                        Pollers.fixedDelay(500)
                                .taskExecutor(this.threadPoolTaskExecutor)
                                .maxMessagesPerPoll(15)))
                .transform(new UnZipTransformer())
                .enrichHeaders(this::headersEnricher)
                .transform(Message.class, this::modifyMessagePayload)
                .route(Map.class, this::channelsRouter)
                .get();
    }

    private String channelsRouter(Map<String, File> payload) {
        boolean isZip = payload.values()
                .stream()
                .anyMatch(file -> isZipFile(file));
        return isZip ? ZIP_CHANNEL : XML_CHANNEL; // ZIP_CHANNEL and XML_CHANNEL are PublishSubscribeChannels
    }

    @Bean
    public SubscribableChannel xmlChannel() {
        var channel = new PublishSubscribeChannel(this.threadPoolTaskExecutor);
        channel.setBeanName(XML_CHANNEL);
        return channel;
    }

    @Bean
    public SubscribableChannel zipChannel() {
        var channel = new PublishSubscribeChannel(this.threadPoolTaskExecutor);
        channel.setBeanName(ZIP_CHANNEL);
        return channel;
    }

    // There is a @ServiceActivator on each channel
    @ServiceActivator(inputChannel = XML_CHANNEL)
    public void handleXml(Message<Map<String, File>> message) {
        ...
    }

    @ServiceActivator(inputChannel = ZIP_CHANNEL)
    public void handleZip(Message<Map<String, File>> message) {
        ...
    }

    // Plus a @Transformer on the XML_CHANNEL
    @Transformer(inputChannel = XML_CHANNEL, outputChannel = BUS_CHANNEL)
    private List<BusData> xmlFileToIngestionMessagePayload(Map<String, File> xmlFilesByName) {
        return xmlFilesByName.values()
                .stream()
                .map(...)
                .collect(Collectors.toList());
    }
}
I would like to test multiple cases; the first one is checking the message payload published on each channel at the end of fileReaderFlow.
So I defined this test class:
@SpringBootTest
@SpringIntegrationTest
@ExtendWith(SpringExtension.class)
class FileInboundFlowTest {
    @Autowired
    private MockIntegrationContext mockIntegrationContext;

    @TempDir
    static Path localWorkDir;

    @BeforeEach
    void setUp() {
        copyFileToTheFlowDir(); // here I copy a file to trigger the flow
    }

    @Test
    void checkXmlChannelPayloadTest() throws InterruptedException {
        Thread.sleep(1000); // waiting for the flow execution
        PublishSubscribeChannel xmlChannel = this.getBean(XML_CHANNEL, PublishSubscribeChannel.class); // I extract the channel to listen to the messages sent to it
        xmlChannel.subscribe(message -> {
            assertThat(message.getPayload()).isInstanceOf(Map.class); // This is never executed
        });
    }
}
As expected, that test does not work: the assertThat(message.getPayload()).isInstanceOf(Map.class); line is never executed.
After reading the documentation I didn't find any hint to help me solve that issue. Any help would be appreciated! Thanks a lot.
First of all, that channel.setBeanName(XML_CHANNEL); does not affect the target bean. You do this in the bean creation phase, and the dependency injection container knows nothing about this setting: it simply does not consult it. If you really want to dictate XML_CHANNEL as the bean name, you'd better look into the @Bean(name) attribute.
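For example (assuming XML_CHANNEL is a String constant, as in the question):
@Bean(name = XML_CHANNEL)
public SubscribableChannel xmlChannel() {
    return new PublishSubscribeChannel(this.threadPoolTaskExecutor);
}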
The problem in the test is that you are missing the async nature of the flow. That Files.inboundAdapter() works on a fully different thread and emits messages outside of your test method. So even if you managed to subscribe to the channel in time, before any message was emitted to it, your test still wouldn't work correctly: the assertThat() would be performed on a different thread, so there would be no real JUnit report for your test method context.
So, what I'd suggest to do is (sketched after this list):
Have the Files.inboundAdapter() stopped at the beginning of the test, before any setup you'd like to do in the test. Or at least don't place files into that filePath, so the channel adapter doesn't emit messages.
Take the channel from the application context and, if you wish, subscribe to it or use a ChannelInterceptor.
Have an async barrier, e.g. a CountDownLatch, to pass to that subscriber.
Start the channel adapter or put a file into the dir for scanning.
Wait for the async barrier before verifying some value or state.
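Putting those steps together, a rough, untested sketch of such a test might look like this. It reuses getBean and copyFileToTheFlowDir from the question (the file copy is moved out of @BeforeEach so the interceptor is in place first) and assumes AssertJ plus the java.util.concurrent classes are imported:
@Test
void checkXmlChannelPayloadTest() throws Exception {
    PublishSubscribeChannel xmlChannel = this.getBean(XML_CHANNEL, PublishSubscribeChannel.class);
    CountDownLatch latch = new CountDownLatch(1);
    AtomicReference<Message<?>> received = new AtomicReference<>();

    // Capture the message on whatever thread the poller uses...
    xmlChannel.addInterceptor(new ChannelInterceptor() {
        @Override
        public Message<?> preSend(Message<?> message, MessageChannel channel) {
            received.set(message);
            latch.countDown();
            return message;
        }
    });

    copyFileToTheFlowDir(); // ...then trigger the flow

    // ...and assert on the JUnit thread, so failures are actually reported
    assertThat(latch.await(10, TimeUnit.SECONDS)).isTrue();
    assertThat(received.get().getPayload()).isInstanceOf(Map.class);
}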

Hangfire - DisableConcurrentExecution - Prevent concurrent execution if same value passed in method parameter

Hangfire's DisableConcurrentExecution attribute is not working as expected.
I have one method that can be called with different Ids, and I want to prevent concurrent execution of the method when the same Id is passed.
string jobName = $"{Id} - Entry Job";
_recurringJobManager.AddOrUpdate<EntryJob>(jobName, j => j.RunAsync(Id, null), "0 2 * * *");
My EntryJob class has the RunAsync method:
public class EntryJob : IJob
{
    [DisableConcurrentExecution(3600)] // <-- Tried here
    public async Task RunAsync(int Id, SomeObj obj)
    {
        //Some code
    }
}
And the interface looks like this:
[DisableConcurrentExecution(3600)] // <-- Tried here
public interface IJob
{
    [DisableConcurrentExecution(3600)] // <-- Tried here
    Task RunAsync(int Id, SomeObj obj);
}
Now I want to prevent RunAsync from being called multiple times concurrently when the same Id is passed. I have tried putting DisableConcurrentExecution on top of the RunAsync method in both locations: inside the interface declaration and where the interface is implemented.
But it doesn't seem to work for me. Is there any way to prevent concurrency based on Id?
The existing implementation of DisableConcurrentExecution does not support this; it prevents concurrent execution of the method regardless of its arguments. It would be fairly simple to add support, though. Note the code below is untested pseudo-code:
public class DisableConcurrentExecutionWithArgAttribute : JobFilterAttribute, IServerFilter
{
    private readonly int _timeoutInSeconds;
    private readonly int _argPos;

    // additional parameter to specify which method argument to use for
    // deduping jobs
    public DisableConcurrentExecutionWithArgAttribute(int timeoutInSeconds, int argPos)
    {
        if (timeoutInSeconds < 0) throw new ArgumentException("Timeout argument value should be greater than zero.");
        _timeoutInSeconds = timeoutInSeconds;
        _argPos = argPos;
    }

    public void OnPerforming(PerformingContext filterContext)
    {
        var resource = GetResource(filterContext.BackgroundJob.Job);
        var timeout = TimeSpan.FromSeconds(_timeoutInSeconds);
        var distributedLock = filterContext.Connection.AcquireDistributedLock(resource, timeout);
        filterContext.Items["DistributedLock"] = distributedLock;
    }

    public void OnPerformed(PerformedContext filterContext)
    {
        if (!filterContext.Items.ContainsKey("DistributedLock"))
        {
            throw new InvalidOperationException("Can not release a distributed lock: it was not acquired.");
        }
        var distributedLock = (IDisposable)filterContext.Items["DistributedLock"];
        distributedLock.Dispose();
    }

    private string GetResource(Job job)
    {
        // include the chosen argument in the locked resource to make it unique
        // for a given Id
        return $"{job.Type.ToGenericTypeString()}.{job.Method.Name}.{job.Args[_argPos].ToString()}";
    }
}
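With that filter in place, usage might look something like this, with the extra argument naming the position of the Id parameter in RunAsync (untested, same caveats as the sketch above):
public class EntryJob : IJob
{
    // 3600s lock timeout; arg 0 is the Id, so only runs sharing the same Id are serialized
    [DisableConcurrentExecutionWithArg(3600, 0)]
    public async Task RunAsync(int Id, SomeObj obj)
    {
        //Some code
    }
}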

Categorizing messages in chronicle queue for reading

I want to use Chronicle Queue to store messages using the high-level API, as mentioned in the answer to this question. But I also want some kind of key for my messages, as mentioned here.
1.) First, is this the right/efficient way to read/write using the high-level API? Code samples below.
2.) How do I separate different categories of messages? For example, "get me all messages for a particular key", the key in the code sample below being ric. Maybe use different topics in the same queue? But how would I do that?
Here's my test code to write to the queue:
public void saveHighLevel(MyInterface obj)
{
    try (ChronicleQueue queue = ChronicleQueue.singleBuilder(_location).build()) {
        ExcerptAppender appender = queue.acquireAppender();
        MyInterface trade = appender.methodWriter(MyInterface.class);
        // Write
        trade.populate(obj);
    }
}
And here's one to read:
public void readHighLevel()
{
    try (ChronicleQueue queue = ChronicleQueue.singleBuilder(_location).build()) {
        ExcerptTailer tailer = queue.createTailer();
        MyInterface container = new MyData();
        MethodReader reader = tailer.methodReader(container);
        while (reader.readOne()) {
            System.out.println(container);
        }
    }
}
MyInterface:
public interface MyInterface
{
    public double getPrice();
    public int getSize();
    public String getRic();
    public void populate(MyInterface obj);
}
Implementation of populate:
public void populate(MyInterface obj)
{
    this.price = obj.getPrice();
    this.ric = obj.getRic();
    this.size = obj.getSize();
}
I found the answer to part (2) of my question in the question of this post.
Essentially by doing:
ChronicleQueue queue = ChronicleQueue.singleBuilder("Topic/SubTopic").build();
where Topic can be substituted with the key I'm looking for.
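So, as a hypothetical sketch applied to my own example (saveByRic and the path layout are my invention, not Chronicle API), writes could be routed to one queue directory per key:
public void saveByRic(MyInterface obj)
{
    // one queue directory per ric, so a reader can tail just the key it cares about
    String path = _location + "/" + obj.getRic();
    try (ChronicleQueue queue = ChronicleQueue.singleBuilder(path).build()) {
        queue.acquireAppender().methodWriter(MyInterface.class).populate(obj);
    }
}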

Spring WebFlux (Flux): how to publish dynamically

I am new to Reactive programming and Spring WebFlux. I want my App 1 to publish Server Sent Events through a Flux and my App 2 to listen to it continuously.
I want the Flux to publish on demand (e.g. when something happens). All the examples I have found use Flux.interval to publish events periodically, and there seems to be no way to append to or modify the contents of a Flux once it is created.
How can I achieve my goal? Or am I totally wrong conceptually?
Publish "dynamically" using FluxProcessor and FluxSink
One technique for supplying data manually to a Flux is using the FluxProcessor#sink method, as in the following example:
@SpringBootApplication
@RestController
public class DemoApplication {
    final FluxProcessor processor;
    final FluxSink sink;
    final AtomicLong counter;

    public static void main(String[] args) {
        SpringApplication.run(DemoApplication.class, args);
    }

    public DemoApplication() {
        this.processor = DirectProcessor.create().serialize();
        this.sink = processor.sink();
        this.counter = new AtomicLong();
    }

    @GetMapping("/send")
    public void test() {
        sink.next("Hello World #" + counter.getAndIncrement());
    }

    @RequestMapping(produces = MediaType.TEXT_EVENT_STREAM_VALUE)
    public Flux<ServerSentEvent> sse() {
        return processor.map(e -> ServerSentEvent.builder(e).build());
    }
}
Here, I created a DirectProcessor in order to support multiple subscribers that will listen to the data stream. Also, I added FluxProcessor#serialize, which provides safe support for multiple producers (invocation from different threads without violating the Reactive Streams spec rules, especially rule 1.3). Finally, by calling "http://localhost:8080/send" we will see the message Hello World #0 (of course, only if you connected to "http://localhost:8080" beforehand).
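To try it: open the event stream first, e.g. curl http://localhost:8080 in one terminal, then hit curl http://localhost:8080/send from another; each call to /send pushes one more event to every connected subscriber.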
Update For Reactor 3.4
With Reactor 3.4 you have a new API called reactor.core.publisher.Sinks. Sinks API offers a fluent builder for manual data-sending which lets you specify things like the number of elements in the stream and backpressure behavior, number of supported subscribers, and replay capabilities:
@SpringBootApplication
@RestController
public class DemoApplication {
    final Sinks.Many sink;
    final AtomicLong counter;

    public static void main(String[] args) {
        SpringApplication.run(DemoApplication.class, args);
    }

    public DemoApplication() {
        this.sink = Sinks.many().multicast().onBackpressureBuffer();
        this.counter = new AtomicLong();
    }

    @GetMapping("/send")
    public void test() {
        EmitResult result = sink.tryEmitNext("Hello World #" + counter.getAndIncrement());
        if (result.isFailure()) {
            // do something here, since emission failed
        }
    }

    @RequestMapping(produces = MediaType.TEXT_EVENT_STREAM_VALUE)
    public Flux<ServerSentEvent> sse() {
        return sink.asFlux().map(e -> ServerSentEvent.builder(e).build());
    }
}
Note that message sending via the Sinks API introduces a new concept of emission and its result. The reason for such an API is the fact that Reactor extends Reactive Streams and has to follow backpressure control. That said, if you emit more signals than were requested, and the underlying implementation does not support buffering, your message will not be delivered. Therefore, tryEmitNext returns an EmitResult which indicates whether the message was sent or not.
Also note that by default the Sinks API gives a serialized version of the Sink, which means you don't have to care about concurrency. However, if you know in advance that emission of messages is serial, you may build a Sinks.unsafe() version, which does not serialize the given messages.
Just another idea: using EmitterProcessor as a gateway to the flux.
import reactor.core.publisher.EmitterProcessor;
import reactor.core.publisher.Flux;

public class MyEmitterProcessor {
    EmitterProcessor<String> emitterProcessor;

    public static void main(String[] args) {
        MyEmitterProcessor myEmitterProcessor = new MyEmitterProcessor();
        Flux<String> publisher = myEmitterProcessor.getPublisher();
        myEmitterProcessor.onNext("A");
        myEmitterProcessor.onNext("B");
        myEmitterProcessor.onNext("C");
        myEmitterProcessor.complete();
        publisher.subscribe(x -> System.out.println(x));
    }

    public Flux<String> getPublisher() {
        emitterProcessor = EmitterProcessor.create();
        return emitterProcessor.map(x -> "consume: " + x);
    }

    public void onNext(String nextString) {
        emitterProcessor.onNext(nextString);
    }

    public void complete() {
        emitterProcessor.onComplete();
    }
}
For more info, see here in the Reactor docs. There is a recommendation in the documentation itself that "Most of the time, you should try to avoid using a Processor. They are harder to use correctly and prone to some corner cases." BUT I don't know which kind of corner cases.

MassTransit with RabbitMq Request/Response wrong reply address because of network segments

I have a web app that uses a request/response message in Masstransit.
This works in our test environment, no problem.
However, on the customer deployment we face a problem. At the customer site we have two network segments, A and B. The component doing the database call is in segment A; the web app and the RabbitMq server are in segment B.
Due to security restrictions the component in segment A has to go through a loadbalancer with a given address. The component itself can connect to RabbitMQ with Masstransit. So far so good.
The web component in segment B, however, uses the direct address of the RabbitMq server. When the web component starts the request/response call, I can see that the message arrives at the component in segment A.
However, I see that the consumer tries to call the RabbitMQ server on the "wrong" address: it uses the address the web component used to issue the request, whereas the component in segment A should reply on the "loadbalancer" address.
Is there a way to configure or tell the RespondAsync call to use the connection address configured for that component?
Of course the easiest would be to have the web component also connect through the loadbalancer, but due to the network segments/security setup the loadbalancer is only reachable from segment A.
Any input/help is appreciated.
I had a similar problem with rabbitmq federation. Here's what I did.
ResponseAddressSendObserver
class ResponseAddressSendObserver : ISendObserver
{
    private readonly string _hostUriString;

    public ResponseAddressSendObserver(string hostUriString)
    {
        _hostUriString = hostUriString;
    }

    public Task PreSend<T>(SendContext<T> context)
        where T : class
    {
        if (context.ResponseAddress != null)
        {
            // Send the relative response address alongside the message
            context.Headers.Set("RelativeResponseAddress",
                context.ResponseAddress.AbsoluteUri.Substring(_hostUriString.Length));
        }
        return Task.CompletedTask;
    }

    ...
}
ResponseAddressConsumeFilter
class ResponseAddressConsumeFilter : IFilter<ConsumeContext>
{
    private readonly string _hostUriString;

    public ResponseAddressConsumeFilter(string hostUriString)
    {
        _hostUriString = hostUriString;
    }

    public Task Send(ConsumeContext context, IPipe<ConsumeContext> next)
    {
        var responseAddressOverride = GetResponseAddress(_hostUriString, context);
        return next.Send(new ResponseAddressConsumeContext(responseAddressOverride, context));
    }

    public void Probe(ProbeContext context) { }

    private static Uri GetResponseAddress(string host, ConsumeContext context)
    {
        if (context.ResponseAddress == null)
            return context.ResponseAddress;

        object relativeResponseAddress;
        if (!context.Headers.TryGetHeader("RelativeResponseAddress", out relativeResponseAddress) || !(relativeResponseAddress is string))
            throw new InvalidOperationException("Message has a ResponseAddress but doesn't have the RelativeResponseAddress header");

        return new Uri(host + relativeResponseAddress);
    }
}
ResponseAddressConsumeContext
class ResponseAddressConsumeContext : BaseConsumeContext
{
    private readonly ConsumeContext _context;

    public ResponseAddressConsumeContext(Uri responseAddressOverride, ConsumeContext context)
        : base(context.ReceiveContext)
    {
        _context = context;
        ResponseAddress = responseAddressOverride;
    }

    public override Uri ResponseAddress { get; }

    public override bool TryGetMessage<T>(out ConsumeContext<T> consumeContext)
    {
        ConsumeContext<T> context;
        if (_context.TryGetMessage(out context))
        {
            // the most hackish part in the whole arrangement
            consumeContext = new MessageConsumeContext<T>(this, context.Message);
            return true;
        }
        else
        {
            consumeContext = null;
            return false;
        }
    }

    // all other members just delegate to _context
}
And when configuring the bus
var result = MassTransit.Bus.Factory.CreateUsingRabbitMq(cfg =>
{
    var host = cfg.Host(new Uri(hostAddress), h =>
    {
        h.Username(...);
        h.Password(...);
    });
    cfg.UseFilter(new ResponseAddressConsumeFilter(hostAddress));
    ...
});
result.ConnectSendObserver(new ResponseAddressSendObserver(hostAddress));
So now relative response addresses are sent with the messages and used on the receiving side.
Using observers to modify anything is not recommended by the documentation, but should be fine in this case.
Maybe there is a better solution, but I haven't found one. HTH