I have an actor system that randomly fails because of messages being delivered to dead letters. By "fail" I mean the operation simply never completes:
Message [UploadFileFromDropboxSuccessMessage] from
akka://MySystem-Actor-System/user/...../DropboxToBlobSourceSubmissionUploaderActor/DropboxToBlobSourceFileUploaderActor--1
to
akka://MySystem-Actor-System/user/.../DropboxToBlobSourceSubmissionUploaderActor
was not delivered. [5] dead letters encountered. This logging can be
turned off or adjusted with configuration settings
'akka.log-dead-letters' and 'akka.log-dead-letters-during-shutdown'.
private void InitialState()
{
Receive<UploadFileFromDropboxMessage>(msg =>
{
var sender = Sender;
var self = Self;
var parent = Parent;
var logger = Logger;
UploadFromDropboxToBlobStorageAsync(msg.File, msg.RelativeSourceRootDirectory, msg.BlobStorageDestinationRootPath).ContinueWith(o =>
{
if (!o.IsFaulted)
{
parent.Tell(new UploadFileFromDropboxSuccessMessage(msg.File.Path, o.Result), self);
}
else
{
parent.Tell(new UploadFileFromDropboxFailureMessage(msg.File.Path), self);
}
}, TaskContinuationOptions.ExecuteSynchronously);
});
}
I also tried:
private void InitialState()
{
Receive<UploadFileFromDropboxMessage>(msg =>
{
try
{
var result = UploadFromDropboxToBlobStorageAsync(msg.File, msg.RelativeSourceRootDirectory, msg.BlobStorageDestinationRootPath).Result;
Parent.Tell(new UploadFileFromDropboxSuccessMessage(msg.File.Path, result));
}
catch (Exception ex)
{
Parent.Tell(new UploadFileFromDropboxFailureMessage(msg.File.Path, ex));
}
});
}
This happens randomly, on both the success and the failure messages. I have checked parent.IsNobody()... and this returns false. The documentation says that delivery to a local actor can fail:
if the mailbox does not accept the message (e.g. full BoundedMailbox)
if the receiving actor fails while processing the message or is
already terminated
I can't imagine a use case where either of these is true, but I also don't really know how to check for them from the context of my current actor (even if it's just for logging purposes).
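One option for the logging part (a sketch with a hypothetical monitor actor, using Akka.NET's EventStream; this is not part of my uploader code) is to subscribe to DeadLetter notifications, which include the message, sender, and recipient:
using System;
using Akka.Actor;
using Akka.Event;

// Sketch: an actor that logs every DeadLetter published on the EventStream.
public class DeadLetterMonitor : ReceiveActor
{
    public DeadLetterMonitor()
    {
        Receive<DeadLetter>(dl =>
            Console.WriteLine($"DeadLetter: {dl.Message} from {dl.Sender} to {dl.Recipient}"));
    }

    protected override void PreStart() =>
        Context.System.EventStream.Subscribe(Self, typeof(DeadLetter));
}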
EDIT: Does Akka have a limit on the total number of messages in the entire system?
EDIT: This happens, I would say, 10% of the time.
EDIT: Eventually I discovered it was an actor much higher up in the tree being killed. I am still confused why IsNobody() returned false if it was indeed dead.
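For reference: IsNobody() only tells you whether the reference itself is a null/Nobody-style reference; it says nothing about whether the actor behind a real reference is still alive. A sketch (a hypothetical actor, not from my code) of detecting the termination with DeathWatch instead:
using System;
using Akka.Actor;

// Hypothetical actor that watches its parent; a Terminated message arrives
// once the watched actor stops (including when an ancestor takes it down).
public class WatchingActor : ReceiveActor
{
    public WatchingActor()
    {
        Context.Watch(Context.Parent); // register for DeathWatch

        Receive<Terminated>(t =>
            Console.WriteLine($"{t.ActorRef} terminated; messages to it will go to dead letters"));
    }
}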
When I attach consumers during initial message bus config, the consumers are called as expected.
When I attach the consumers after bus config, using ConnectConsumer, the consumers are never called. The temporary queue/exchange is created, but it doesn't seem to know of the consumers that are supposed to be attached to that queue.
There is another service/consumer on the bus that is receiving Request messages being published here and publishing Response messages that should be consumed here.
Any idea why this is not working?
NOTE: I know that the "preferred" way is to connect consumers to the bus in the bus config (as in the working example); this is not an option for me because, in practice, the bus is being created/configured in a referenced assembly, and the end-user programmers who add consumers to the bus do not have access to the bus configuration method. This used to be trivial in version 2; later versions seem to make such use cases much more difficult - not all use cases have easy access to the bus creation/config methods.
Ex.
public class TestResponseConsumer : IConsumer<ITestResponse>
{
public Task Consume(ConsumeContext<ITestResponse> context)
{
Console.WriteLine("TestResponse received");
return Task.CompletedTask;
}
}
...
This works (consumer gets called):
public IBusControl ServiceBus;
public IntegrationTestsBase()
{
ServiceBus = Bus.Factory.CreateUsingRabbitMq(cfg =>
{
cfg.Host("vmdevrab-bld", "/", h => {
h.Username("guest");
h.Password("guest");
});
cfg.ReceiveEndpoint("Int_Test", e =>
{
e.Consumer<TestResponseConsumer>();
});
cfg.AutoStart = true;
});
ServiceBus.Start();
}
~IntegrationTestsBase()
{
ServiceBus.Stop();
}
}
This does not work:
[TestMethod]
public void Can_Receive_SampleResponse()
{
try
{
ITestRequest request = new TestRequest(Guid.NewGuid(), Guid.NewGuid(), Guid.NewGuid());
ServiceBus.ConnectConsumer<TestResponseConsumer>();
ServiceBus.Publish<ITestRequest>(request);
mre.WaitOne(60000);
}
catch (Exception ex)
{
Console.WriteLine(ex.Message);
Console.WriteLine(ex.StackTrace);
Assert.Fail();
}
}
It doesn't work because, as explained in the documentation, when you connect a consumer to the bus endpoint, no exchange bindings are created. Published messages will not be delivered to the bus endpoint.
If you want to connect consumers to the bus after it has been started, you should use ConnectReceiveEndpoint() instead, which is also covered in the documentation.
var handle = bus.ConnectReceiveEndpoint("secondary-queue", x =>
{
    x.Consumer<TestResponseConsumer>();
});

var ready = await handle.Ready;
The endpoint can be stopped when it is no longer needed, otherwise it will be stopped when the bus is stopped.
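Stopping it explicitly would look something like this (assuming the handle variable from the snippet above):
// Stop the dynamically connected endpoint when its consumers are no longer
// needed; otherwise it is stopped together with the bus.
await handle.StopAsync();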
I am trying to create a communication controller for a hardware device that always responds with some delay. If I only requested one value, I could create a Single<ByteArray> and do the final conversion in .subscribe { ... }.
But when I request more than one value, I need to make sure that the second request happens only after the first request has fully completed.
Is that something I can do with RxJava, e.g. with defer? Or should I create a queue of my own and handle the sequence of events manually?
We're using RxJava anyway (and I'm obviously new to it), and of course it would be nice to use it for this purpose as well. But is that a good use case?
Edit:
Code that I could use, but that wouldn't be generic enough:
hardware.write(byteArray)
    .subscribe(
        {
            hardware.receiveResult().take(1)
                .doFinally { /* dispose code */ }
                .subscribe(
                    { /* onSuccess */ },
                    { /* onError */ }
                )
                .let { disposable = it }
        },
        { /* onError */ }
    )
All the code for the next request in the queue could be put in the inner onSuccess, the next one in that onSuccess, and so on. That would execute sequentially, but it wouldn't be generic enough: any other class that makes a request would end up spoiling my sequence.
I am looking for a solution that builds up the queue automatically in the hardware communication controller class.
A long time has passed, the project has developed, and we found a solution long ago. Now I want to share it here:
fun writeSequential(data1: ByteArray, data2: ByteArray) {
    // zipWith (the RxKotlin extension) pairs both results into a Pair
    disposable = hardwareWrite(data1)
        .zipWith(hardwareWrite(data2))
        .subscribe(
            {
                /* handle results:
                   it.first is the first response,
                   it.second the second. */
            },
            { /* handle error */ }
        )
    compositeDisposable.add(disposable)
}
fun hardwareWrite(data: ByteArray): Single<ByteArray> {
    return Single.create<ByteArray> { emitter ->
        val writeDisposable = hardware.write(data)
            .subscribe(
                { compositeDisposable.add(hardwareRead(emitter)) },
                { emitter.onError(it) }
            )
        compositeDisposable.add(writeDisposable)
    }
}
fun hardwareRead(emitter: SingleEmitter<ByteArray>): Disposable {
    return hardware.receiveResult()
        .take(1)
        .timeout( /* your timeout */ )
        .single( /* default value */ )
        .doFinally { /* cleanup queue */ }
        .subscribe(
            { emitter.onSuccess(it) },
            { emitter.onError(it) }
        )
}
The solution is not perfect. Also, our real case is a bit more complicated, because hardwareWrite doesn't fire immediately but gets queued; this way we ensure that the hardware is accessed sequentially and the results don't get mixed up.
Still, I hope this might help someone who is looking for a solution and is maybe new to Kotlin and/or RxJava (like I was at the beginning of the project).
I have a scenario where I call an API in one of my handlers, and that API can go down for about 6 hours per month. Therefore, I designed retry logic with a 1-second retry, a 1-minute retry, and a 6-hour retry. This all works fine, but then I read that long-delay retries are not a good option. Could you please share your experience with this?
Thank you!
If I were you, I would use Rebus' ability to defer messages to the future to implement this functionality.
You will need to track the number of failed delivery attempts manually though, by attaching and updating headers on the deferred message.
Something like this should do the trick:
public class YourHandler : IHandleMessages<MakeExternalApiCall>
{
const string DeliveryAttemptHeaderKey = "delivery-attempt";

readonly IMessageContext _context;
readonly IBus _bus;

public YourHandler(IMessageContext context, IBus bus)
{
    _context = context;
    _bus = bus;
}
public async Task Handle(MakeExternalApiCall message)
{
try
{
await MakeCallToExternalWebApi();
}
catch(Exception exception)
{
var deliveryAttempt = GetDeliveryAttempt();
if (deliveryAttempt > 5)
{
await _bus.Advanced.TransportMessage.Forward("error");
}
else
{
var delay = GetNextDelay(deliveryAttempt);
var headers = new Dictionary<string, string> {
    {DeliveryAttemptHeaderKey, (deliveryAttempt + 1).ToString()}
};
await _bus.Defer(delay, message, headers);
}
}
}
int GetDeliveryAttempt() => _context.Headers.TryGetValue(DeliveryAttemptHeaderKey, out var deliveryAttempt)
    ? int.Parse(deliveryAttempt)
    : 0;

TimeSpan GetNextDelay(int deliveryAttempt) => ...
}
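The GetNextDelay implementation is up to you; for the schedule mentioned in the question (1 second, then 1 minute, then 6 hours), a hypothetical version could look like this:
// Hypothetical schedule matching the question: 1 s, then 1 min, then 6 h.
TimeSpan GetNextDelay(int deliveryAttempt) => deliveryAttempt switch
{
    0 => TimeSpan.FromSeconds(1),
    1 => TimeSpan.FromMinutes(1),
    _ => TimeSpan.FromHours(6)
};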
When running in production, please remember to configure some kind of persistent timeout storage – e.g. SQL Server – otherwise, your deferred messages will be lost in the event of a restart.
You can configure it like this (after having installed the Rebus.SqlServer package):
Configure.With(...)
.(...)
.Timeouts(t => t.StoreInSqlServer(...))
.Start();
I'm discovering the Dart language. To train myself, I'm trying to code a simple UDP server that logs every datagram received.
Here's my code so far:
import 'dart:async';
import 'dart:convert';
import 'dart:io';
class UDPServer {
static final UDPServer _instance = new UDPServer._internal();
// Socket used by the server.
RawDatagramSocket _udpSocket;
factory UDPServer() {
return _instance;
}
UDPServer._internal();
/// Starts the server.
Future start() async {
    _udpSocket = await RawDatagramSocket.bind(
        InternetAddress.ANY_IP_V4, Protocol.udpPort,
        reuseAddress: true);
    _udpSocket.listen((RawSocketEvent event) {
      switch (event) {
        case RawSocketEvent.READ:
          _readDatagram();
          break;
        case RawSocketEvent.CLOSED:
          print("Connection closed.");
          break;
      }
    });
}
void _readDatagram() {
Datagram datagram = _udpSocket.receive();
if (datagram != null) {
String content = new String.fromCharCodes(datagram.data).trim();
String address = datagram.address.address;
print('Received "$content" from $address');
}
}
}
It works great... partially. After logging some messages, it just crashes; the number varies between 1 and ~5. IDEA just logs Lost connection to device., nothing more. I tried to debug but didn't find anything.
Does anyone have an idea why it crashes? Many thanks in advance.
EDIT: I forgot to mention that I was using this code in a Flutter application, and the crash seems to come from that. See this GitHub issue for more info.
I am using Redis with StackExchange.Redis. I have multiple threads that will at some point access and edit the value of the same key, so I need to synchronize the manipulation of the data.
Looking at the available functions, I see two methods, LockTake and LockRelease. However, these methods take both a key and a value parameter rather than the expected single key to be locked. The IntelliSense documentation and the source on GitHub don't explain how to use LockTake and LockRelease or what to pass in for the key and value parameters.
Q: What is the correct usage of LockTake and LockRelease in StackExchange.Redis?
Pseudocode example of what I'm aiming to do:
//Add Items Before Parallel Execution
redis.StringSet("myJSONKey", myJSON);
//Parallel Execution
Parallel.For(0, 100, i =>
{
//Some work here
//....
//Lock
redis.LockTake("myJSONKey");
//Manipulate
var myJSONObject = redis.StringGet("myJSONKey");
myJSONObject.Total++;
Console.WriteLine(myJSONObject.Total);
redis.StringSet("myJSONKey", myNewJSON);
//Unlock
redis.LockRelease("myJSONKey");
//More work here
//...
});
There are 3 parts to a lock:
the key (the unique name of the lock in the database)
the value (a caller-defined token which can be used both to indicate who "owns" the lock, and to check that releasing and extending the lock is being done correctly)
the duration (a lock intentionally is a finite duration thing)
If no other value comes to mind, a guid might make a suitable "value". We tend to use the machine-name (or a munged version of the machine name if multiple processes could be competing on the same machine).
Also, note that taking a lock is speculative, not blocking. It is entirely possible that you fail to obtain the lock, and hence you may need to test for this and perhaps add some retry logic.
A typical example might be:
RedisValue token = Environment.MachineName;
if(db.LockTake(key, token, duration)) {
try {
// you have the lock do work
} finally {
db.LockRelease(key, token);
}
}
Note that if the work is lengthy (a loop, in particular), you may want to add some occasional LockExtend calls in the middle - again remembering to check for success (in case it timed out).
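A sketch of that pattern, reusing key, token, and duration from the example above (the work items and ProcessItem are hypothetical):
if (db.LockTake(key, token, duration))
{
    try
    {
        foreach (var item in workItems) // hypothetical work queue
        {
            ProcessItem(item); // hypothetical per-item work

            // push the lock's expiry out again; LockExtend returns false
            // if we no longer hold the lock (e.g. it already timed out)
            if (!db.LockExtend(key, token, duration))
                throw new InvalidOperationException("Lock was lost while working.");
        }
    }
    finally
    {
        db.LockRelease(key, token);
    }
}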
Note also that all individual redis commands are atomic, so you don't need to worry about two discrete operations competing. For more complex multi-operation units, transactions and scripting are options.
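As an illustration of the transaction option (a sketch; the key and value names are placeholders), a condition lets a multi-command unit commit only if the data hasn't changed underneath you:
// Sketch: only overwrite the value if it still equals what we read earlier.
var tran = db.CreateTransaction();
tran.AddCondition(Condition.StringEqual("myJSONKey", previouslyReadValue));
var setPending = tran.StringSetAsync("myJSONKey", newValue); // queued, runs on Execute
bool committed = tran.Execute(); // false if the condition failed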
Here is my part of the code for lock -> get -> modify (if required) -> unlock actions, with comments.
public static T GetCachedAndModifyWithLock<T>(string key, Func<T> retrieveDataFunc, TimeSpan timeExpiration, Func<T, bool> modifyEntityFunc,
TimeSpan? lockTimeout = null, bool isSlidingExpiration=false) where T : class
{
int lockCounter = 0; //for logging, in case there are too many lock attempts per key
Exception logException = null;
var cache = Connection.GetDatabase();
var lockToken = Guid.NewGuid().ToString(); //unique token for current part of code
var lockName = key + "_lock"; //unique lock name. key-relative.
T tResult = null;
while ( lockCounter < 20)
{
//check for access to cache object, trying to lock it
if (!cache.LockTake(lockName, lockToken, lockTimeout ?? TimeSpan.FromSeconds(10)))
{
lockCounter++;
Thread.Sleep(100); //sleep for 100 milliseconds before the next lock attempt; you can tune this
continue;
}
try
{
RedisValue result = RedisValue.Null;
if (isSlidingExpiration)
{
//in case of sliding expiration - get object with expiry time
var exp = cache.StringGetWithExpiry(key);
//check ttl.
if (exp.Expiry.HasValue && exp.Expiry.Value.TotalSeconds >= 0)
{
//get only if not expired
result = exp.Value;
}
}
else //in absolute expiration case simply get
{
result = cache.StringGet(key);
}
//"REDIS_NULL" is for cases when our retrieveDataFunc function returning null (we cannot store null in redis, but can store pre-defined string :) )
if (result.HasValue && result == "REDIS_NULL") return null;
//in case the cache is empty
if (!result.HasValue)
{
//retrieving data from caller function (from db from example)
tResult = retrieveDataFunc();
if (tResult != null)
{
//trying to modify that entity. if caller modifyEntityFunc returns true, it means that caller wants to resave modified entity.
if (modifyEntityFunc(tResult))
{
//json serialization
var json = JsonConvert.SerializeObject(tResult);
cache.StringSet(key, json, timeExpiration);
}
}
else
{
//save pre-defined string in case if source-value is null.
cache.StringSet(key, "REDIS_NULL", timeExpiration);
}
}
else
{
//retrieve from cache and serialize to required object
tResult = JsonConvert.DeserializeObject<T>(result);
//trying to modify
if (modifyEntityFunc(tResult))
{
//and save if required
var json = JsonConvert.SerializeObject(tResult);
cache.StringSet(key, json, timeExpiration);
}
}
//refresh expiration in case of the sliding expiration flag
if(isSlidingExpiration)
cache.KeyExpire(key, timeExpiration);
}
catch (Exception ex)
{
logException = ex;
}
finally
{
cache.LockRelease(lockName, lockToken);
}
break;
}
if (lockCounter >= 20 || logException!=null)
{
//log it
}
return tResult;
}
and usage:
public class User
{
public int ViewCount { get; set; }
}
var cachedAndModifiedItem = GetCachedAndModifyWithLock<User>(
"MyAwesomeKey", //your redis key
() => // callback to get data from the source in case redis's store is empty
{
//return from db or kind of that
return new User() { ViewCount = 0 };
},
TimeSpan.FromMinutes(10), //object expiration time to pass in Redis
user => //modify-object callback; return true if you need to save it back to redis
{
    if (user.ViewCount < 3)
    {
        user.ViewCount++;
        return true; //save it to cache
    }
    return false; //do not update it in cache
},
TimeSpan.FromSeconds(10), //redis lock timeout. in a race, the key stays locked for up to 10 seconds while the get-from-db / redis read / modify operations finish
true //whether expiration should be sliding
);
That code can be improved (for example, you can add transactions to reduce the number of calls to the cache, etc.), but I hope it will be helpful for you.