Not able to create node level local actors in Akka.Net cluster - akka.net

We are trying to create a couple of node-level local actors [pool routers] for app-level administration, local routing, and throttling purposes.
The node-specific role is set as the target role for these actors to ensure STRICTLY local routing.
Below is the sample code and hocon.
//// In App Start - Actor is initialized and stored in static container
var props = Props.Create(() => new ThrottlerActor()).WithRouter(FromConfig.Instance);
actorSystem.ActorOf(props, "ThrottlerActor");
## hocon ##
/ThrottlerActor {
  router = round-robin-pool
  nr-of-instances = 100
  cluster {
    enabled = on
    allow-local-routees = on
    max-nr-of-instances-per-node = 10
    use-role = node1
  }
}
But when we send messages to this actor, it behaves like a cluster actor: it redirects the (n+1)th message [n = max-nr-of-instances-per-node] to a similar actor on a different node.
It looks as if the role setting were being ignored.
We even tried disabling clustering [cluster -> enabled = off, and also removing the cluster configuration from the HOCON entirely], but it didn't work. The moment this router is created below the user guardian, it behaves as if it were a cluster actor.
Please advise.

"We even tried disabling clustering [cluster -> enabled = off, and also removing the cluster configuration from the HOCON entirely], but it didn't work. The moment this router is created below the user guardian, it behaves as if it were a cluster actor."
This smells to me like your HOCON isn't being loaded correctly. You can't have a router that routes to cluster routees on other nodes with cluster.enabled = off inside its deployment; with that setting off, the code needed to listen to the cluster never gets wired up in the first place.
Try removing the cluster section in its entirety and work backwards. Your issue here seems to be which config is being loaded / where it's coming from - not a bug with Akka.NET.
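One thing worth verifying: the deployment section must end up under akka.actor.deployment in the config the ActorSystem actually loads. As a quick sanity check, a minimal sketch (assuming the actorSystem variable from the question; depending on your Akka.NET version, a missing path may come back as null or as an empty config):

// Dump the deployment config the ActorSystem actually loaded.
// If nothing meaningful is printed, the HOCON isn't being picked up at all.
var deployment = actorSystem.Settings.Config
    .GetConfig("akka.actor.deployment./ThrottlerActor");
Console.WriteLine(deployment?.ToString() ?? "deployment config not found");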

Related

error creating Application AutoScaling Target: ValidationException: Unsupported service namespace, resource type or scalable dimension

I'm trying to enable ECS autoscaling for some Fargate services and run into the error in the title:
error creating Application AutoScaling Target: ValidationException: Unsupported service namespace, resource type or scalable dimension
The error happens on line 4 here:
resource "aws_appautoscaling_target" "autoscaling" {
max_capacity = var.max_capacity
min_capacity = 1
resource_id = var.resource_id
// <snip... a bunch of other vars not relevant to question>
I call the custom autoscaling module like so:
module "myservice_autoscaling" {
source = "../autoscaling"
resource_id = aws_ecs_service.myservice_worker.id
// <snip... a bunch of other vars not relevant to question>
My service is a normal ECS service block starting with:
resource "aws_ecs_service" "myservice_worker" {
After poking around online, I thought maybe I should construct the "service/clusterName/serviceName" resource ID "manually", like so:
resource_id = "service/${var.cluster_name}/${aws_ecs_service.myservice_worker.name}"
But that leads to a different error:
The argument "cluster_name" is required, but no definition was found.
I declared cluster_name in the variables.tf of my calling module (i.e. the myservice ECS module that calls my new autoscaling module). And I have cluster_name in the outputs.tf of our cluster module, where we set up the ECS cluster. I must still be missing some linking.
Any ideas? Thanks!
Edit: here's the solution that got it working for me
Yes, you do need to construct the resource_id in the form of "service/yourClusterName/yourServiceName". Mine ended up looking like: "service/${var.cluster_name}/${aws_ecs_service.myservice_worker.name}"
You need to make sure you have access to the cluster name and service name variables. In my case, though I had the variable defined in my ECS service's variables.tf and had added it to my cluster module's outputs.tf, I was failing to pass it down from the root module to the service module. This fixed that:
module "myservice" {
source = "./modules/myservice"
cluster_name = module.cluster.cluster_name // the line I added
(the preceding snippet goes in the main.tf of your root module (a level above your service module)
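Putting the whole chain together, a hedged sketch of the plumbing (module and resource names follow the question; the aws_ecs_cluster.this reference inside the cluster module is an assumption):

# modules/cluster/outputs.tf -- expose the cluster name
output "cluster_name" {
  value = aws_ecs_cluster.this.name # assumes the cluster resource is named "this"
}

# modules/myservice/variables.tf -- accept it in the service module
variable "cluster_name" {
  type = string
}

# modules/myservice/main.tf -- hand it down to the autoscaling module
module "myservice_autoscaling" {
  source      = "../autoscaling"
  resource_id = "service/${var.cluster_name}/${aws_ecs_service.myservice_worker.name}"
  // <snip... other vars>
}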
You are on the right track constructing the "service/${var.cluster_name}/${aws_ecs_service.myservice_worker.name}" string. It looks like you simply aren't referencing the cluster name correctly.
"And I have cluster_name in the outputs.tf of our cluster module"
So you need to reference that module output instead of referencing a nonexistent variable:
"service/${module.my_cluster_module.cluster_name}/${aws_ecs_service.myservice_worker.name}"
Change "my_cluster_module" to whatever name you gave the module that is creating your ECS cluster.

How to implement PersistenceQuery of ReadJournalFor in Akka.Net Hosting Model

I've worked through the documentation on Akka.NET PersistenceQuery here, but I'm struggling to figure out how I would hook up any of those queries inside an ASP.NET 6 Blazor Server startup pipeline using the new Akka.NET Hosting model.
What I have in mind is to Sink such a query out to a SignalR hub that will cause views to refresh their data based on the output of a ReadJournalFor stream.
Has anyone done this, and if so, please can you provide me with some guidance in this regard?
I have not done this before, much less am I an expert at this, but I can try to point you in the right direction! :)
From the Akka Projection docs (JVM): if you want to run a local actor, you can spawn the ProjectionBehavior like any other Behavior. This can be useful for testing or when running a local ActorSystem without Akka Cluster.
SourceProvider<Offset, EventEnvelope<ShoppingCart.Event>> sourceProvider(String tag) {
  return EventSourcedProvider.eventsByTag(system, CassandraReadJournal.Identifier(), tag);
}

Projection<EventEnvelope<ShoppingCart.Event>> projection(String tag) {
  return CassandraProjection.atLeastOnce(
      ProjectionId.of("shopping-carts", tag), sourceProvider(tag), ShoppingCartHandler::new);
}

Projection<EventEnvelope<ShoppingCart.Event>> projection1 = projection("carts-1");
ActorRef<ProjectionBehavior.Command> projection1Ref =
    context.spawn(ProjectionBehavior.create(projection1), projection1.projectionId().id());
You can combine this with your predefined query, e.g.:
var queries = PersistenceQuery.Get(actorSystem)
    .ReadJournalFor<SqlReadJournal>("akka.persistence.query.my-read-journal");
var mat = ActorMaterializer.Create(actorSystem);
Source<string, NotUsed> src = queries.AllPersistenceIds();
So I was thinking maybe your queries could be linked to your ProjectionBehavior so that the Akka.NET hosting model can host it.
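For example, wiring the query into Akka.Hosting's startup and sinking it to SignalR might look like the sketch below. This is an untested sketch: EventsHub is a hypothetical SignalR hub, builder is the WebApplicationBuilder, and the journal identifier and query method names vary by Akka.NET version.

using Akka.Actor;
using Akka.Hosting;
using Akka.Persistence.Query;
using Akka.Persistence.Query.Sql;
using Akka.Streams;
using Microsoft.AspNetCore.SignalR;
using Microsoft.Extensions.DependencyInjection;

builder.Services.AddAkka("MySystem", (akka, provider) =>
{
    akka.WithActors((system, registry) =>
    {
        // Resolve the read journal once the ActorSystem is up.
        var queries = PersistenceQuery.Get(system)
            .ReadJournalFor<SqlReadJournal>(SqlReadJournal.Identifier);

        var hub = provider.GetRequiredService<IHubContext<EventsHub>>();
        var materializer = system.Materializer();

        // Sink the stream out to SignalR so connected views can refresh.
        queries.AllPersistenceIds() // PersistenceIds() on newer Akka.NET versions
            .RunForeach(
                id => hub.Clients.All.SendAsync("persistenceId", id),
                materializer);
    });
});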
Related sources:
getakka.net: Persistence Query
Akka Projection docs
Akka.Hosting

Vault Telemetry to CloudWatch

I'm trying to stream Vault telemetry through the CloudWatch Agent's StatsD interface into CloudWatch metrics; however, the gauge metric names are coming through with prefixes based on the instance ID and tags, which makes the metrics impossible to target with IaC-managed CloudWatch alarms.
For instance, the vault.core.unsealed telemetry event is coming through as vault_CLOUDWATCH_AGENT_HOSTNAME_core_unsealed_INSTANCE_NAME instead of the vault_core_unsealed that I was expecting.
Managing the alarms for these metrics with Terraform is impossible, because the names are dynamic and depend on whichever instance is currently the cluster leader, which we have no control over.
In the Vault configuration HCL file, I have:
telemetry {
  statsd_address = "127.0.0.1:8125"
  disable_hostname = true
  enable_hostname_label = true
}
along with several other combinations of the hostname configuration values, and they all seem to produce the same output. Is there a solution I'm missing, or is this just a flaw in deciding to use CloudWatch with StatsD to capture telemetry?
I managed to get the gauge metric names to a usable point with a few non-obvious configuration changes.
In the Vault telemetry stanza, add only the disable_hostname = true property alongside the StatsD address. Adding the label options on top of that simply moves the hostname to a different position in the metric name.
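In other words, the stanza from the question reduces to:

telemetry {
  statsd_address   = "127.0.0.1:8125"
  disable_hostname = true
}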
The CloudWatch agent configuration has an option to omit hostnames, which can be toggled by appending or setting a new configuration:
{
  "agent": {
    "omit_hostname": true
  }
}
This prevents the CloudWatch agent from adding its own labels and suffixes to the gauge metric names, and cleans up some of the naming that is produced.
(Optional) Adjust the appended dimensions in the CloudWatch agent configuration. By default, the agent appends the instance ID, image ID, autoscaling group name, and instance type. This may be something you want to keep; however, if you want something like IaC-created metric alarms, you may need to remove some dimensions to make the metric names targetable (able to be found via direct match). The following can be added to the custom config that replaces the default CloudWatch agent configuration if you want to adjust which dimensions are automatically appended to the incoming telemetry.
{
  "metrics": {
    "append_dimensions": {
      "AutoScalingGroupName": "${aws:AutoScalingGroupName}"
    }
  }
}
As long as you know the name of the autoscaling group the instances are launched under, the gauge metrics coming in from the Vault telemetry will be named predictably enough to target them for IaC purposes.
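With that in place, a Terraform alarm can target the metric directly. A hedged sketch, assuming the agent's default CWAgent namespace, the ASG dimension retained above, and a hypothetical var.vault_asg_name:

resource "aws_cloudwatch_metric_alarm" "vault_unsealed" {
  alarm_name          = "vault-core-unsealed"
  namespace           = "CWAgent" # the CloudWatch agent's default StatsD namespace
  metric_name         = "vault_core_unsealed"
  dimensions = {
    AutoScalingGroupName = var.vault_asg_name
  }
  statistic           = "Minimum"
  comparison_operator = "LessThanThreshold"
  threshold           = 1
  period              = 60
  evaluation_periods  = 3
  alarm_description   = "No Vault node is reporting itself unsealed."
}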

Is it possible to pass only slave URIs to the lettuce Redis library's MasterSlave connection?

My aim is to pass only slave URIs, because the master is not available in my case. But the lettuce library returns:
io.lettuce.core.RedisException: Master is currently unknown: [RedisMasterSlaveNode [redisURI=RedisURI [host='127.0.0.1', port=6382], role=SLAVE], RedisMasterSlaveNode [redisURI=RedisURI [host='127.0.0.1', port=6381], role=SLAVE]]
So the question is: is it possible to avoid this exception somehow, maybe through configuration? Thank you in advance.
UPDATE: I forgot to say that after borrowing a connection from the pool, I set connection.setReadFrom(ReadFrom.SLAVE) before running commands.
GenericObjectPoolConfig config = fromRedisConfig(properties);
List<RedisURI> nodes = new ArrayList<>(properties.getUrl().length);
for (String url : properties.getUrl()) {
    nodes.add(RedisURI.create(url));
}
return ConnectionPoolSupport.createGenericObjectPool(
        () -> MasterSlave.connect(redisClient, new ByteArrayCodec(), nodes), config);
The problem was that I tried to set data, which is possible only on the master node. So there is no problem with MasterSlave; getting data works perfectly.
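For reference, a sketch of the read path that works in this setup (key is a hypothetical byte[] in scope, inside a method that declares throws Exception; with ConnectionPoolSupport, close() returns the connection to the pool, so try-with-resources is safe):

try (StatefulRedisMasterSlaveConnection<byte[], byte[]> connection = pool.borrowObject()) {
    connection.setReadFrom(ReadFrom.SLAVE);    // route reads to the replicas
    byte[] value = connection.sync().get(key); // GET succeeds with no master known
    // connection.sync().set(key, value);      // SET would fail: writes need the master
}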

Akka.NET - Cluster and ActorSelection path

I have an akka.net cluster and I want to send a message to actors that are both local and remote, and that all have the path "/user/foobar" (at least locally). Should I use ActorSelection, and what should the path look like in order to target both matching local and remote actors?
It's unclear from the question whether you mean you want to send a message locally within one node in your cluster, or across multiple nodes.
If you just want to send it in one node, you can use an ActorSelection and just send it to whatever the desired actor path is (e.g. /user/*/processingActor). If you want to message across the cluster itself, you'll need to set up a cluster-aware Group router.
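For the single-node case, a quick sketch (message is a hypothetical message object in scope):

// A wildcard ActorSelection delivers to every matching actor
// on this node; no cluster involvement.
var selection = actorSystem.ActorSelection("/user/*/processingActor");
selection.Tell(message);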
See the docs here for router configuration, which is where you'll define the routees.
In a nutshell, you'll be doing something like this:
# inside akka.actor.deployment HOCON
/some-group-router {
  router = round-robin-group
  routees.paths = ["/user/*/processingActor"]
  nr-of-instances = 3
  cluster {
    enabled = on
    use-role = targetRoleName
    allow-local-routees = on
  }
}
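On the C# side, the group router is then instantiated from that config. A minimal sketch (DoWork is a hypothetical message type):

// Group routers carry no routee-creation logic of their own,
// so they are created from Props.Empty plus the HOCON deployment.
var router = actorSystem.ActorOf(
    Props.Empty.WithRouter(FromConfig.Instance),
    "some-group-router");

router.Tell(new DoWork()); // round-robins across matching routees cluster-wide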