MassTransit.RabbitMQ - Connect Failed: Broker unreachable - rabbitmq

After updating the MassTransit packages to the latest version (4.1.0.1426-develop), I experience problems with registering more than 26 queues. For example, the code below crashes with the error
[20:51:06 ERR] RabbitMQ Connect Failed: Broker unreachable:
guest@localhost:5672/test
static void Main(string[] args)
{
    var builder = new ConfigurationBuilder()
        .AddJsonFile("appsettings.json", true, true);
    var configuration = builder.Build();

    Log.Logger = new LoggerConfiguration()
        .MinimumLevel.Debug()
        .WriteTo.Console()
        .ReadFrom.Configuration(configuration)
        .CreateLogger();

    Log.Information("Starting Receiver...");

    var services = new ServiceCollection();
    services.AddSingleton(context => Bus.Factory.CreateUsingRabbitMq(x =>
    {
        IRabbitMqHost host = x.Host(new Uri("rabbitmq://guest:guest@localhost:5672/test"), h => { });

        for (var i = 0; i < 27; i++)
        {
            x.ReceiveEndpoint(host, $"receiver_queue{i}", e =>
            {
                e.Consumer<TestHandler>();
            });
        }

        x.UseSerilog();
    }));

    var container = services.BuildServiceProvider();
    var busControl = container.GetRequiredService<IBusControl>();
    busControl.Start();
    Log.Information("Receiver started...");
}
So it can't register 27 queues. However, it works if I decrease the number to 26 :)
If I downgrade the MT NuGet packages to the latest stable 4.0.1 version, it works perfectly and I can register up to 50 queues.
Another observation: with the 4.1.0.1426-develop version it takes much longer to start this very tiny app, whereas with the latest stable 4.0.1 and 50 queues it starts almost immediately.
Any ideas where this limitation comes from and how to avoid it?

Thank you for opening the issue, that helps track it.
Also, it seems to be fixed now. There was an issue where the netcoreapp2.0 program stack (and possibly the TaskScheduler) was delaying the Connect method in RabbitMQ.Client for a long period of time. I'm thinking this is a TPL/thread issue where the connection wasn't being scheduled for a good 15+ seconds, after which it completed immediately.
Moving it into a Task.Factory.StartNew() (deep inside the MT code) appears to have fixed the issue: it no longer fails, and it executes immediately.
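For reference, the workaround follows the general pattern below (a minimal sketch, not the actual MassTransit source; connectionFactory stands in for whatever RabbitMQ.Client ConnectionFactory instance is in play):

// Sketch of the pattern: push the blocking connect onto a thread-pool thread
// via the default scheduler, instead of waiting on the current TaskScheduler.
IConnection connection = await Task.Factory.StartNew(
    () => connectionFactory.CreateConnection(),
    CancellationToken.None,
    TaskCreationOptions.None,
    TaskScheduler.Default);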

I know this has been marked as resolved but I ran into a similar issue.
fail: MassTransit[0] partners.moneytransfer | RabbitMQ Connect Failed:
serviceUser@rabbitmq:5672/
The only way I was able to resolve it was by adding the user to the RabbitMQ user database, with the username and password as specified in the bus configuration.
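In other words, the credentials configured on the bus host have to match a user that actually exists in the broker (created, for example, via the management UI or rabbitmqctl add_user / set_permissions). A minimal sketch of the host configuration, with hypothetical credentials:

// Hypothetical user/password: "serviceUser" must exist in RabbitMQ and have
// permissions on the target virtual host, otherwise the connect fails as above.
var busControl = Bus.Factory.CreateUsingRabbitMq(x =>
{
    x.Host(new Uri("rabbitmq://rabbitmq:5672/"), h =>
    {
        h.Username("serviceUser");
        h.Password("secret");
    });
});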

Related

Integration test IHost TestServer won't shut down

I wrote some integration tests for an ASP.NET Core 3.1 application using xUnit.
The tests show as successful, but the process is still running. After some time I get:
The process dotnet:1234 has finished running tests assigned to it, but is still running. Possible reasons are incorrect asynchronous code or lengthy test resource disposal [...]
This behavior even shows with boilerplate code like:
[Fact]
public async Task TestServer()
{
    var hostBuilder = new HostBuilder()
        .ConfigureWebHost(webHost =>
        {
            // Add TestServer
            webHost.UseTestServer();
            webHost.Configure(app => app.Run(async ctx =>
                await ctx.Response.WriteAsync("Hello World!")));
        });

    // Build and start the IHost
    var host = await hostBuilder.StartAsync();
}
Same if I add await host.StopAsync()...
I am on an Ubuntu 18.04 machine.
Try disposing the host at the end of the test. Most likely, the error is caused simply because you don't dispose a disposable resource.
I would recommend using WebApplicationFactory for testing instead of HostBuilder. You can find more in the docs.
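For the boilerplate test above, that means stopping and disposing the started host, roughly like this (a minimal sketch of the suggestion, not a full test fixture):

[Fact]
public async Task TestServer()
{
    var hostBuilder = new HostBuilder()
        .ConfigureWebHost(webHost =>
        {
            webHost.UseTestServer();
            webHost.Configure(app => app.Run(async ctx =>
                await ctx.Response.WriteAsync("Hello World!")));
        });

    // Build and start the IHost; dispose it when the test finishes so the
    // test process can exit.
    using var host = await hostBuilder.StartAsync();

    // ... exercise the TestServer here ...

    await host.StopAsync();
}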
I had the same problem. Since I was working with NUnit, the suggested change was not an option (it is based on xUnit). So I dug into it, and the root cause was quite simple:
I had created a long-running task in Startup.cs which was doing some monitoring throughout the host's lifetime. I had to stop this task, and then the instance of the TestHost was also disposed properly.
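One way to make such a monitoring task stop together with the host is to run it as a hosted service that honors the stopping token (a rough sketch; MonitoringService is a hypothetical name, not code from the original post):

// Rough sketch: a monitoring loop as a BackgroundService, so stopping or
// disposing the host also cancels the loop and lets the TestHost shut down.
public class MonitoringService : BackgroundService
{
    protected override async Task ExecuteAsync(CancellationToken stoppingToken)
    {
        while (!stoppingToken.IsCancellationRequested)
        {
            // ... do the monitoring work ...
            await Task.Delay(TimeSpan.FromSeconds(5), stoppingToken);
        }
    }
}

// Registered in ConfigureServices instead of starting a detached Task:
// services.AddHostedService<MonitoringService>();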

WebSockets not working when application is built

I have got two ASP.NET Core 2.0 apps communicating via WebSockets.
App A is the server; it is running on a remote server with Ubuntu.
App B is the client; it is running on a PC set up in my office.
When I test my applications locally in Debug, everything works fine. The client connects to the server and they can exchange information.
However, when I build my server app, the client can connect to it, but when the server tries to send a message to the client, the message is never received.
public async Task<RecievedResult> RecieveAsync(CancellationToken cancellationToken)
{
    RecievedResult endResult;
    var buffer = new byte[Connection.ReceiveChunkSize];
    WebSocketReceiveResult result;
    MemoryStream memoryStream = new MemoryStream();

    do
    {
        if (cancellationToken.IsCancellationRequested)
        {
            throw new TaskCanceledException();
        }

        Console.WriteLine("Server Invoke");
        // result never finishes when the application is built. In Debug it finishes
        // and the method returns the correct result.
        result = await _webSocket.ReceiveAsync(new ArraySegment<byte>(buffer), cancellationToken);

        if (result.MessageType == WebSocketMessageType.Close)
        {
            await CloseAsync(cancellationToken);
            endResult = new RecievedResult(null, true);
            return endResult;
        }

        memoryStream.Write(buffer, 0, result.Count);
    } while (!result.EndOfMessage);

    endResult = new RecievedResult(memoryStream, false);
    return endResult;
}
This is the part of the code where everything hangs.
What I tried:
Build Server - Build Client => not working
Build Server - Debug Client => not working
Debug Server - Debug Client => working
I would appreciate any advice on what might be wrong here and where I should look for issues.
The console is free of errors. Everything hangs on:
result = await _webSocket.ReceiveAsync(new ArraySegment<byte>(buffer), cancellationToken);

Azure Web Jobs Redis (RedLock) & Blob Storage Access Issues

We switched our background tasks to WebJobs that start running when a new item lands on an Azure Queue. Now we have some weird issues I can't explain: the WebJob seems to have problems accessing Redis (RedLock) and Storage.
The biggest issue we have is RedLock. We are using RedLock.Net for distributed locking. This works fine in our production web application, and it also worked on the background workers we had before, but as soon as we switched to WebJobs it basically fails to acquire the lock. To back this up with some code, we are locking like this:
using (var redisLock = await _redLockConnection.RedisLockFactory.CreateAsync(resource, UserLockExpiryTime, UserLockWaitTime, UserLockRetryTime))
{
    // make sure we got the lock
    if (redisLock.IsAcquired)
    {
        // execute code...
    }
    else
    {
        throw new CouldNotAcquireRedLockException();
    }
}
The problem here is that IsAcquired is always false within a WebJob, and I have no clue why.
The second thing, which may be related to this problem, is that deleting a blob file in Azure Storage fails with a 404, but only within a WebJob.
var file = _blobContainer.GetBlockBlobReference("file.txt");
file?.Delete();
This will fail with a 404 Not found exception within the WebJob.
Is there anything I missed when setting up the WebJob? Could it be an access problem for write operations? I would be glad for any help!
IsAcquired is always false within a Webjob
I did a test with the following code using RedLock.net in an Azure WebJob, and I can acquire a lock on the resource if the lock is available.
public static void ProcessQueueMessage([QueueTrigger("mymes")] string message, TextWriter log)
{
    var azureEndPoint = new RedisLockEndPoint
    {
        EndPoint = new DnsEndPoint("{YOUR_CACHE}.redis.cache.windows.net", 6380),
        Password = "YOUR_ACCESS_KEY",
        Ssl = true
    };
    var eps = new[] { azureEndPoint };
    var rlf = new RedisLockFactory(eps);

    var resource = "https://{storageaccount}.blob.core.windows.net/{containername}/test.txt";
    var expiry = TimeSpan.FromSeconds(50);
    var wait = TimeSpan.FromSeconds(10);
    var retry = TimeSpan.FromSeconds(1);

    using (var redisLock = rlf.Create(resource, expiry, wait, retry))
    {
        Console.WriteLine("Lock acquired: " + redisLock.IsAcquired);
    }

    log.WriteLine(message);
}
Result of the test: the lock was acquired.
deleting a blob file in azure storage that fails with a 404
As I mentioned in a comment, please check whether that blob exists via the Azure portal or Azure Storage Explorer, or call the Exists method to check for the blob's existence before you delete it.
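A minimal sketch of that check, assuming the same classic Azure Storage blob client (CloudBlockBlob) used in the question:

// Sketch only: verify the blob exists before deleting it, or use DeleteIfExists
// to avoid the 404 when it is missing.
var file = _blobContainer.GetBlockBlobReference("file.txt");
if (file != null && file.Exists())
{
    file.Delete();
}

// Alternatively:
// file?.DeleteIfExists();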

UniqueConstraints bundle not picked up by EmbeddableDocumentStore with custom plugins directory

We are using EmbeddableDocumentStore for non-production deployments, and in general it works great. I stumbled upon an issue which took me a few hours to solve, and it would be good to know whether the behaviour I am experiencing is by design.
I initialize the EmbeddableDocumentStore like this:
var store = new EmbeddableDocumentStore()
{
    DataDirectory = dataDirectory,
    DefaultDatabase = "DbName",
    RunInMemory = false,
    UseEmbeddedHttpServer = true,
};
store.Configuration.Port = 10001;
store.Configuration.PluginsDirectory = pluginsDirectory; // this is the important line
store.Configuration.CompiledIndexCacheDirectory = compiledIntexCacheDirectory;
store.Configuration.Storage.Voron.AllowOn32Bits = true;
store.RegisterListener(new UniqueConstraintsStoreListener());
store.Initialize();
With this setup, unique constraints are not working on the embedded server.
However, when I put the plugins directory in its default location (WorkingDirectory + /Plugins), it magically starts working. Is this the expected behaviour?
More info:
I can reproduce it in a console app and in a web app. In the web app, the default location is web root + /Plugins.
After a little bit of investigation I found out that there is a difference in how the UniqueConstraints triggers are registered in store.Configuration.Catalog.Catalogs, which might have something to do with the (for me) unexpected behaviour.
With a custom PluginsDirectory, the triggers are registered in store.Configuration.Catalog.Catalogs as a BuiltinFilteringCatalog.
When the bundle is in the default location, the triggers are added to a BundlesFilteredCatalog in store.Configuration.Catalog.Catalogs together with all the other default triggers.
What version of RavenDB?
In RavenDB 3.5, registering plugins on the server side requires a magic string. Adding this to your example above will probably fix it:
store.Configuration.Settings["Raven/ActiveBundles"] = "Unique Constraints";
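Applied to the initialization code from the question, the setting goes in before Initialize() is called (a sketch showing only the relevant lines; it assumes the Settings collection accepts an indexer assignment, as it does in RavenDB 3.x):

store.Configuration.PluginsDirectory = pluginsDirectory;
// Activate the bundle explicitly when using a custom plugins directory.
store.Configuration.Settings["Raven/ActiveBundles"] = "Unique Constraints";
store.RegisterListener(new UniqueConstraintsStoreListener());
store.Initialize();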

Redis Timeout Expired message on GetClient call

I hate the questions that have "Not Enough Info", so I will try to give detailed information, and in this case that means code.
Server:
64 bit of https://github.com/MSOpenTech/redis/tree/2.6/bin/release
There are three classes:
DbOperationContext.cs: https://gist.github.com/glikoz/7119628
PerRequestLifeTimeManager.cs: https://gist.github.com/glikoz/7119699
RedisRepository.cs https://gist.github.com/glikoz/7119769
We are using Redis with Unity.
In this case we are getting this strange message:
"Redis Timeout expired. The timeout period elapsed prior to obtaining a connection from the pool. This may have occurred because all pooled connections were in use.";
We checked these:
Is the problem a configuration issue?
Are we using the wrong RedisServer.exe?
Is there an architectural problem?
Any ideas? Any similar stories?
Thanks.
Extra Info 1
There are no rejected connections in the server stats (I checked via the redis-cli.exe info command).
I have continued to debug this problem, and have fixed numerous things on my platform to avoid this exception. Here is what I have done to solve the issue:
Executive summary:
People encountering this exception should check:
That the PooledRedisClientManager (IRedisClientsManager) is registered in a singleton scope
That the RedisMqServer (IMessageService) is registered in a singleton scope
That any utilized RedisClient returned from either of the above is properly disposed of, to ensure that the pooled clients are not left stale (see the sketch after this list).
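A minimal sketch of that disposal pattern, assuming a resolved IRedisClientsManager instance named clientsManager (the name is illustrative, not from the original code):

// Sketch: always dispose clients taken from the pooled manager, so the
// connection goes back to the pool instead of being left stale.
using (var client = clientsManager.GetClient())
{
    client.Set("example:key", "value");
}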
The solution to my problem:
First of all, this exception is thrown by the PooledRedisClient because it has no more pooled connections available.
I'm registering all the required Redis stuff in the StructureMap IoC container (not Unity as in the author's case). Thanks to this post I was reminded that the PooledRedisClientManager should be a singleton; I also decided to register the RedisMqServer as a singleton:
ObjectFactory.Configure(x =>
{
    // register the message queue stuff as Singletons in this AppDomain
    x.For<IRedisClientsManager>()
        .Singleton()
        .Use(BuildRedisClientsManager);

    x.For<IMessageService>()
        .Singleton()
        .Use<RedisMqServer>()
        .Ctor<IRedisClientsManager>().Is(i => i.GetInstance<IRedisClientsManager>())
        .Ctor<int>("retryCount").Is(2)
        .Ctor<TimeSpan?>().Is(TimeSpan.FromSeconds(5));

    // Retrieve a new message factory from the singleton IMessageService
    x.For<IMessageFactory>()
        .Use(i => i.GetInstance<IMessageService>().MessageFactory);
});
My "BuildRedisClientManager" function looks like this:
private static IRedisClientsManager BuildRedisClientsManager()
{
    var appSettings = new AppSettings();
    var redisClients = appSettings.Get("redis-servers", "redis.local:6379").Split(',');

    var redisFactory = new PooledRedisClientManager(redisClients);
    redisFactory.ConnectTimeout = 5;
    redisFactory.IdleTimeOutSecs = 30;
    redisFactory.PoolTimeout = 3;

    return redisFactory;
}
Then, when it comes to producing messages, it's very important that the utilized RedisClient is properly disposed of; otherwise we run into the dreaded "Timeout Expired" (thanks to this post). I have the following helper code to send a message to the queue:
public static void PublishMessage<T>(T msg)
{
    try
    {
        using (var producer = GetMessageProducer())
        {
            producer.Publish<T>(msg);
        }
    }
    catch (Exception ex)
    {
        // TODO: Log or whatever... I'm not throwing to avoid showing users that we have a broken MQ
    }
}

private static IMessageQueueClient GetMessageProducer()
{
    var producer = ObjectFactory.GetInstance<IMessageService>() as RedisMqServer;
    var client = producer.CreateMessageQueueClient();
    return client;
}
I hope this helps solve your issue too.