Akka.NET cluster broadcast only received by one node

I learned from the Akka.NET WebCrawler sample and created my own cluster test. I have a Processor node (console app) and an API node (SignalR). Here are the configurations.
Processor node:
akka {
  actor {
    provider = "Akka.Cluster.ClusterActorRefProvider, Akka.Cluster"
    deployment {
      /dispatcher/signalR {
        router = broadcast-group
        routees.paths = ["/user/signalr"]
        cluster {
          enabled = on
          #max-nr-of-instances-per-node = 1
          allow-local-routees = false
          use-role = api
        }
      }
    }
  }
  remote {
    log-remote-lifecycle-events = DEBUG
    helios.tcp {
      port = 0
      hostname = 127.0.0.1
    }
  }
  cluster {
    seed-nodes = ["akka.tcp://stopfinder@127.0.0.1:4545"]
    roles = [processor]
  }
}
API node (non-seed nodes will have port = 0):
akka {
  actor {
    provider = "Akka.Cluster.ClusterActorRefProvider, Akka.Cluster"
  }
  remote {
    log-remote-lifecycle-events = DEBUG
    helios.tcp {
      port = 4545
      hostname = 127.0.0.1
    }
  }
  cluster {
    seed-nodes = ["akka.tcp://stopfinder@127.0.0.1:4545"]
    roles = [api]
  }
}
Inside the API node, I created a normal actor called SignalR.
Inside the Processor node, I created a normal actor and used the Scheduler to Tell() the API node's SignalR actor a string (see the sketch below).
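For reference, a minimal sketch of the Processor side under those assumptions (the actor and message names are illustrative, not the original code): the router is built from the /dispatcher/signalR deployment section above and the scheduler broadcasts to it.

using System;
using Akka.Actor;
using Akka.Routing;

// Illustrative sketch only: this actor is created as /user/dispatcher, so its
// "signalR" child picks up the broadcast-group deployment config shown above.
public class DispatcherActor : ReceiveActor
{
    public DispatcherActor()
    {
        // Group router built from HOCON (akka.actor.deployment./dispatcher/signalR).
        var signalRRouter = Context.ActorOf(
            Props.Empty.WithRouter(FromConfig.Instance), "signalR");

        // Broadcast a status string to every routee every 5 seconds.
        Context.System.Scheduler.ScheduleTellRepeatedly(
            TimeSpan.Zero, TimeSpan.FromSeconds(5),
            signalRRouter, "status update", Self);
    }
}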
This works great when I have one Processor and one API node. It also works when I have multiple Processor nodes and a single API node. Unfortunately, when I have multiple API nodes, no matter how I set up the configuration, the Tell() doesn't reach all of the API nodes; the message only goes to one of them. Which node receives the message depends on the order in which the API nodes were started. As far as I can tell, all of the API nodes are registered in the cluster correctly, but I could be wrong.
I'm starting to feel that this is a configuration or understanding issue. Can anyone share any insights?
I did some additional testing. The behavior remains the same when I replace the ASP.NET SignalR API node with a normal console application.
UPDATE: I contacted the Akka.NET team. This behavior is a known bug; it will be fixed in the 1.1 release.
UPDATE 2: The issue has been marked as fixed in the project on GitHub.

Related

MQTTnet Connection Issue with HiveMQ Cloud

I am new to the MQTT world and I am trying to create a .NET 5.0 application that connects to a HiveMQ Cloud broker.
I have created a free broker and I am able to connect to it with the HiveMQ WebSocket client.
Here is a screenshot of my host.
I have created MQTT credentials for the host and I am able to connect over the sample client. Here is a screenshot of that client.
This works, I can publish and subscribe to the message queue.
However, now I am trying to translate this to C# and I am not able to connect. I am starting with this example project: https://github.com/rafiulgits/mqtt-client-dotnet-core
I then plugged in the values from my cluster instance, but I am getting a connection timeout on startup.
Here is what my service configuration looks like:
public static IServiceCollection AddMqttClientHostedService(this IServiceCollection services)
{
    services.AddMqttClientServiceWithConfig(aspOptionBuilder =>
    {
        //var clientSettinigs = AppSettingsProvider.ClientSettings;
        //var brokerHostSettings = AppSettingsProvider.BrokerHostSettings;
        aspOptionBuilder
            .WithCredentials("Test1", "xxxxx") //clientSettinigs.UserName, clientSettinigs.Password)
            .WithClientId("clientId-jqE8uIw6Pp") //clientSettinigs.Id)
            .WithTcpServer("xxxxxxxxxxxxxx.s2.eu.hivemq.cloud", 8884); //brokerHostSettings.Host, brokerHostSettings.Port);
    });
    return services;
}

private static IServiceCollection AddMqttClientServiceWithConfig(this IServiceCollection services, Action<AspCoreMqttClientOptionBuilder> configure)
{
    services.AddSingleton<IMqttClientOptions>(serviceProvider =>
    {
        var optionBuilder = new AspCoreMqttClientOptionBuilder(serviceProvider);
        configure(optionBuilder);
        return optionBuilder.Build();
    });
    services.AddSingleton<MqttClientService>();
    services.AddSingleton<IHostedService>(serviceProvider =>
    {
        return serviceProvider.GetService<MqttClientService>();
    });
    services.AddSingleton<MqttClientServiceProvider>(serviceProvider =>
    {
        var mqttClientService = serviceProvider.GetService<MqttClientService>();
        var mqttClientServiceProvider = new MqttClientServiceProvider(mqttClientService);
        return mqttClientServiceProvider;
    });
    return services;
}
I am not sure where I am going wrong, any help would be greatly appreciated.
You appear to be trying to connect to the WebSocket endpoint (port 8884) in your code, when I suspect you really should be using the normal TLS endpoint (port 8883).
Also, you will need to use different client ID values if you want to have both clients connected at the same time, as matching IDs will mean the clients continuously kick each other off the broker.
(Edit: on looking closer, the client IDs are actually different, but only in the last character.)
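A minimal sketch of that change applied to the builder from the question (assuming the sample project's builder passes these through to MQTTnet; note that, as the answer below points out, TLS also has to be enabled explicitly):

aspOptionBuilder
    .WithCredentials("Test1", "xxxxx")
    .WithClientId("clientId-jqE8uIw6Pp")   // must differ from the web client's ID
    .WithTls()                             // HiveMQ Cloud requires TLS on this port
    .WithTcpServer("xxxxxxxxxxxxxx.s2.eu.hivemq.cloud", 8883); // TLS endpoint, not the WebSocket port 8884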
I had this issue two days ago, and it seems to come from the TLS configuration/settings. By the way, my Startup.cs service injections and some configurations were the same as yours. I have a .NET Core app and I am trying to connect to my own HiveMQ broker (cloud side).
In this case we need to add an additional option in the MQTT client option build phase.
When I added this call, the auth problems were gone:
.WithTls();
Here is what that part of the client option code should look like:
AddMqttClientServiceWithConfig(services, optionBuilder =>
{
    var clientSettings = BrokerAppSettingsProvider.BrokerClientSettings;
    var brokerHostSettings = BrokerAppSettingsProvider.BrokerHostSettings;
    optionBuilder
        .WithCredentials(clientSettings.UserName, clientSettings.Password)
        .WithTls()
        .WithTcpServer(brokerHostSettings.Host, brokerHostSettings.Port);
});
return services;
We can consider this as a different solution.

Azure Virtual Network subnet connection issues

I have one VNet with two /27 subnets that have been delegated to Web Apps:
webApp-1 -> subnet1
WebApp-2 -> subnet2.
I've terraformed the Vnet:
resource "azurerm_resource_group" "main-rg"{
name = "main-rg"
location = "westeurope"
}
resource "azurerm_virtual_network" "main-vnet" {
name = "main-vnet"
location = azurerm_resource_group.main-rg.location
resource_group_name = azurerm_resource_group.main-rg.name
address_space = ["172.25.44.0/22"]
subnet {
name = "test"
address_prefix = "172.25.44.64/27"
security_group = ""
}
}
resource "azurerm_subnet" "subnet1" {
name = "subnet1"
resource_group_name = azurerm_resource_group.main-rg.name
virtual_network_name = azurerm_virtual_network.main-vnet.name
address_prefixes = ["172.25.44.0/27"]
delegation {
name = "webapp1delegation"
service_delegation {
name = "Microsoft.Web/serverFarms"
actions = ["Microsoft.Network/virtualNetworks/subnets/action"]
}
}
}
resource "azurerm_subnet" "subnet2" {
name = "subnet2"
resource_group_name = azurerm_resource_group.main-rg.name
virtual_network_name = azurerm_virtual_network.main-vnet.name
address_prefixes = ["172.25.44.32/27"]
delegation {
name = "webapp2delegation"
service_delegation {
name = "Microsoft.Web/serverFarms"
actions = ["Microsoft.Network/virtualNetworks/subnets/action"]
}
}
}
The problem I'm having is when I'm trying to connect the WebApps to their respective subnets.
FYI: I'm connecting the WebApps from the Azure Portal (old test resources, don't want to import them as they will be removed soon).
The first one (WebApp1 to Subnet1) works out fine.
When I then try to connect WebApp2 to Subnet2 it fails, but I am able to connect WebApp2 to Subnet1.
I also tried the other way around; I'm able to connect both apps to Subnet2 (but I first have to disconnect both apps from Subnet1).
I'm not seeing any error messages other than a little "Connection failed" popup in the Portal UI.
So I guess my question is: is it not possible to have two subnets with Web App delegations in one VNet, or am I missing something?
And again, sorry if this is something blatantly obvious that I've overlooked.
In advance; thanks!
It's not possible for two web apps in the same App Service plan to use different integration subnets, because all web apps in the same App Service plan share the VNet integration. However, you can use one integration subnet per App Service plan. See the documented limitations:
The integration subnet can be used by only one App Service plan.
You can have only one regional VNet Integration per App Service plan.
Multiple apps in the same App Service plan can use the same VNet.
To avoid any issues with subnet capacity, a /26 subnet address mask with 64 addresses is the recommended size.

SignalR and Redis

I've got a project that uses SignalR and a Redis backplane. We've moved from StackExchange.Redis to ServiceStack.Redis due to Redis Sentinel compatibility issues (not movable).
However, it now looks like support for the SignalR Redis backplane is tied to StackExchange.Redis?
Have I completely missed something, or is there support for ServiceStack on a SignalR Redis Backplane?
Current code looks like:
var redisConnection = ConnectionMultiplexer.Connect(this.Configuration.GetValue<string>("Redis"));

services.AddSignalR(o => { o.EnableDetailedErrors = true; })
    .AddStackExchangeRedis(options =>
    {
        options.Configuration.ChannelPrefix = "Audit";
        options.ConnectionFactory =
            writer => Task.FromResult(redisConnection as IConnectionMultiplexer);
    });
I don't believe anyone has implemented a SignalR Redis backplane using ServiceStack.Redis.
ServiceStack does have its own real-time events solution using SSE, which includes a Redis Server Events implementation that uses ServiceStack.Redis (akin to the SignalR Redis backplane).
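For reference, a rough sketch of enabling ServiceStack's Redis Server Events in an AppHost, based on the ServiceStack documentation (the connection string here is illustrative):

public override void Configure(Container container)
{
    // Enable Server-Sent Events support.
    Plugins.Add(new ServerEventsFeature());

    // Back the event stream with ServiceStack.Redis so events fan out across servers.
    container.Register<IRedisClientsManager>(
        new RedisManagerPool("localhost:6379"));

    container.Register<IServerEvents>(c =>
        new RedisServerEvents(c.Resolve<IRedisClientsManager>()));
    container.Resolve<IServerEvents>().Start();
}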
I prefer to use a username/password in the configuration:
//StackExchange.Redis for configuration options
var redisConfiguration = new ConfigurationOptions
{
    EndPoints = { "serverinfo:portinfo" },
    User = username,
    Password = password
    //,Ssl = true
};
signalRBuilder.AddStackExchangeRedis(options => { options.Configuration = redisConfiguration; });

Call Service Fabric service from console application using WCF HTTPS endpoint

I have a service hosted in a Service Fabric cluster in Azure (not locally) and I'm trying to call a method in it using a console application on my local machine. Using WCF for communication, I have an HTTPS endpoint set up in my application on a specific port, and I have configured load-balancing rules for the port in the Azure portal. The cluster has 6 nodes and the application is the only one deployed on the cluster.
I have followed the ServiceFabric.WcfCalc example on GitHub (link), which works on a local cluster using HTTP endpoints, but I can't call a method on the service using HTTPS endpoints once it has been deployed. What do I need to do to get it working? I have tried following the example here, but I don't know how to configure this for HTTPS with a service on multiple nodes that a console application can access.
Thanks in advance.
EDIT: Here's the client code I am using to call the service method. I pass the fabric:/ URI into the constructor here.
public class Client : ServicePartitionClient<WcfCommunicationClient<IServiceInterface>>, IServiceInterface
{
    private static ICommunicationClientFactory<WcfCommunicationClient<IServiceInterface>> communicationClientFactory;

    static Client()
    {
        communicationClientFactory = new WcfCommunicationClientFactory<IServiceInterface>(
            clientBinding: new BasicHttpBinding(BasicHttpSecurityMode.Transport));
    }

    public Client(Uri serviceUri)
        : this(serviceUri, ServicePartitionKey.Singleton)
    { }

    public Client(
        Uri serviceUri,
        ServicePartitionKey partitionKey)
        : base(
            communicationClientFactory,
            serviceUri,
            partitionKey)
    { }

    public Task<bool> ServiceMethod(DataClass data)
    {
        try
        {
            // It hangs here
            return this.InvokeWithRetry((c) => c.Channel.ServiceMethod(data));
        }
        catch (Exception)
        {
            throw;
        }
    }
}
When debugging my console application on my local machine, the application hangs on the InvokeWithRetry call which calls the method in my service in Service Fabric. The application does not throw any exceptions and does not return to the debugger in Visual Studio.
Make sure you run every service instance/replica with a unique URL.
Make sure you call the WebHttpBinding constructor with WebHttpSecurityMode.Transport.
Make sure you register the URL using the same port number (likely 443) as in your service manifest endpoint declaration.
Make sure the endpoint is configured as HTTPS (see the sketch below).
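A service-side sketch of those points, assuming a WcfCalc-style listener inside the stateless service class (the interface name comes from the question and the endpoint name from the answer below; this is not the original poster's code):

protected override IEnumerable<ServiceInstanceListener> CreateServiceInstanceListeners()
{
    yield return new ServiceInstanceListener(context =>
        new WcfCommunicationListener<IServiceInterface>(
            serviceContext: context,
            wcfServiceObject: this,
            // Same transport security as the client's BasicHttpBinding above.
            listenerBinding: new BasicHttpBinding(BasicHttpSecurityMode.Transport),
            // Must match an https endpoint resource declared in ServiceManifest.xml.
            endpointResourceName: "CalculatorEndpoint"));
}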
The warning you see in Service Fabric is telling you that there is already another service registered to listen on port 443 on your nodes. This means that Service Fabric fails to spin up your service (since it throws an exception internally when it is trying to register the URL with http.sys). You can change the port for your service to something else that will not conflict with the existing service, e.g.:
<Resources>
  <Endpoints>
    <Endpoint Name="CalculatorEndpoint" Protocol="https" Type="Input" Port="44330" />
  </Endpoints>
</Resources>
If you log in to Service Fabric Explorer on https://{cluster_name}.{region}.cloudapp.azure.com:19080 you should be able to see what other applications and services are running there. If you expand services all the way down to node you should be able to see the registered endpoints, including ports, for existing services.
Bonus
You can query the cluster using FabricClient for all registered endpoints
var fabricClient = new FabricClient();
var applicationList = fabricClient.QueryManager.GetApplicationListAsync().GetAwaiter().GetResult();
foreach (var application in applicationList)
{
    var serviceList = fabricClient.QueryManager.GetServiceListAsync(application.ApplicationName).GetAwaiter().GetResult();
    foreach (var service in serviceList)
    {
        var partitionListAsync = fabricClient.QueryManager.GetPartitionListAsync(service.ServiceName).GetAwaiter().GetResult();
        foreach (var partition in partitionListAsync)
        {
            var replicas = fabricClient.QueryManager.GetReplicaListAsync(partition.PartitionInformation.Id).GetAwaiter().GetResult();
            foreach (var replica in replicas)
            {
                if (!string.IsNullOrWhiteSpace(replica.ReplicaAddress))
                {
                    var replicaAddress = JObject.Parse(replica.ReplicaAddress);
                    foreach (var endpoint in replicaAddress["Endpoints"])
                    {
                        var endpointAddress = endpoint.First().Value<string>();
                        Console.WriteLine($"{service.ServiceName} {endpointAddress}");
                    }
                }
            }
        }
    }
}
Just run that with the proper FabricClient credentials (if it is a secured cluster) and you should see it listing all endpoints for all services there. That should help you find the one that has an endpoint on port 443.

Paho RabbitMQ connection failing

Here is my Paho client code:
// Create a client instance
client = new Paho.MQTT.Client('127.0.0.1', 1883, "clientId");

// set callback handlers
client.onConnectionLost = onConnectionLost;
client.onMessageArrived = onMessageArrived;

// connect the client
client.connect({ onSuccess: onConnect });

// called when the client connects
function onConnect() {
    // Once a connection has been made, make a subscription and send a message.
    console.log("onConnect");
    client.subscribe("/World");
    message = new Paho.MQTT.Message("Hello");
    message.destinationName = "/World";
    client.send(message);
}

// called when the client loses its connection
function onConnectionLost(responseObject) {
    if (responseObject.errorCode !== 0) {
        console.log("onConnectionLost:" + responseObject.errorMessage);
    }
}

// called when a message arrives
function onMessageArrived(message) {
    console.log("onMessageArrived:" + message.payloadString);
}
On the RabbitMQ server everything is at its default settings. When I run this code I get: WebSocket connection to 'ws://127.0.0.1:1883/mqtt' failed: Connection closed before receiving a handshake response
What am I missing?
From my personal experience with the Paho MQTT JavaScript library and the RabbitMQ broker on Windows, here is a list of things you need to do to be able to use MQTT from JS within a browser:
Install the rabbitmq_web_mqtt plugin (you may find the latest binary here), copy it to "c:\Program Files\RabbitMQ Server\rabbitmq_server-3.6.2\plugins\", and enable it from the command line using "rabbitmq-plugins enable rabbitmq_web_mqtt".
Of course, the MQTT plugin also needs to be enabled on the broker.
For me, the client was not working with version 3.6.1 of RabbitMQ, while it works fine with version 3.6.2 (Windows).
The port to be used for connections is 15675, NOT 1883!
Make sure to specify all 4 parameters when creating an instance of Paho.MQTT.Client. If you omit one, you get a websocket connection error which may be quite misleading.
Finally, here is a code snippet which I tested and which works perfectly (it just makes the connection):
client = new Paho.MQTT.Client("localhost", 15675, "/ws", "client-1");
//set callback handlers
client.onConnectionLost = onConnectionLost;
client.onMessageArrived = onMessageArrived;
//connect the client
client.connect({
onSuccess : onConnect
});
//called when the client connects
function onConnect() {
console.log("Connected");
}
//called when the client loses its connection
function onConnectionLost(responseObject) {
if (responseObject.errorCode !== 0) {
console.log("onConnectionLost:" + responseObject.errorMessage);
}
}
//called when a message arrives
function onMessageArrived(message) {
console.log("onMessageArrived:" + message.payloadString);
}
It's not clear in the question, but I assume you are running the code above in a web browser.
This will be making an MQTT connection over WebSockets (as shown in the error). This is different from a native MQTT over TCP connection.
The default pure MQTT port is 1883; WebSocket support is likely to be on a different port.
You will need to configure RabbitMQ to accept MQTT over WebSockets as well as pure MQTT. This pull request for RabbitMQ seems to talk about adding this capability. It mentions that the capability was only added in version 3.6.x and that the documentation is still outstanding (as of 9 Feb 2016).