I ran into a situation where my application needs to send messages to two queues that live in two different virtual hosts in RabbitMQ, and to read from one of them. Moving those two queues into a single virtual host (which would be the ideal solution) is not possible, so I need to run two Rebus instances in a single process.
I'm using Autofac for dependency injection. Could you please point me to a resource explaining how I can set up multiple Rebus instances with Autofac in a single process?
thank you very much!
You should configure the bus instance that you intend to use for sending/publishing messages on the other vhost as a one-way client, and access it through a dedicated service you create for this purpose.
Something along the lines of this:
public class OtherVhostBusClient : IDisposable
{
    readonly IBus _bus;

    public OtherVhostBusClient(string amqpConnectionString)
    {
        // one-way client: it can send/publish, but has no input queue and handles nothing
        _bus = Configure.With(new BuiltinHandlerActivator())
            .Transport(t => t.UseRabbitMqAsOneWayClient(amqpConnectionString))
            .Start();
    }

    public Task Publish(object e) => _bus.Publish(e);

    public void Dispose() => _bus.Dispose();
}
If you then configure OtherVhostBusClient as a singleton in your Autofac container, you can have it injected and use it to publish stuff on the other vhost.
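A minimal sketch of that registration (the connection string is a placeholder):

using Autofac;

var builder = new ContainerBuilder();
// single instance, so only one one-way bus is ever created for the other vhost
builder.Register(c => new OtherVhostBusClient("amqp://user:pass@rabbithost/other-vhost"))
       .AsSelf()
       .SingleInstance();
var container = builder.Build();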
This way, you are essentially treating this as a "foreign network" of sorts, implementing the integration with it as you would any other type of integration between "networks" (could be Rebus on any other transport, HTTP, etc.)
I hope that makes sense :)
I have an issue with Hangfire, most likely because of my ignorance of some topics.
I have a host/plugins infrastructure, where each plugin is loaded at runtime and registers its interfaces.
public void ConfigureServices(IServiceCollection services, IConfigurationRoot Configuration)
{
    services.AddTransient<IManager, Manager>();
    services.AddTransient<IAnotherManager, AnotherManager>();
    this.AddControllers(services);
}
Some plugins may add jobs using Hangfire, which are also registered at runtime:
public void ScheduleJobs()
{
    RecurringJob.AddOrUpdate<IManager>(n => n.SayHello(), Cron.Monthly);
}
The issue I have is that, while any service registered directly in the host is correctly resolved in Hangfire,
all the interfaces (e.g. IManager) that are defined in external assemblies aren't found.
I added a custom JobActivator to which I pass the IServiceCollection, and I can actually see that those external services are registered (and I can use them anywhere else but from Hangfire). Still,
when Hangfire tries to resolve the external service in the JobActivator, it fails.
public override object ActivateJob(Type type)
{
    // _serviceCollection contains the IManager registration at this point
    var provider = _serviceCollection.BuildServiceProvider();
    // this will throw an Exception => No service for type '[...].IManager' has been registered.
    var implementation = provider.GetRequiredService(type);
    return implementation;
}
In the same example, if I use the default JobActivator, the exception I get is System.MissingMethodException: Cannot create an instance of an interface.
I could enqueue the job using the class instead of the interface, but that's not the point, and in any case, if the class has services injected, those would not be resolved either.
What am I missing?
The problem has been solved: the solution is to give Hangfire its own dedicated IoC container. I used Unity. That way the dependencies are resolved correctly.
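For illustration, a minimal sketch of that setup, assuming the Unity 5 API: a dedicated container plus a custom JobActivator handed to Hangfire (the type names here are hypothetical):

using System;
using Hangfire;
using Unity;
using Unity.Lifetime;

public class HangfireUnityJobActivator : JobActivator
{
    private readonly IUnityContainer _container;

    public HangfireUnityJobActivator(IUnityContainer container)
    {
        _container = container;
    }

    // resolve job types (including plugin interfaces) from the dedicated container
    public override object ActivateJob(Type type) => _container.Resolve(type);
}

// at startup, after the plugins have registered their types:
var hangfireContainer = new UnityContainer();
hangfireContainer.RegisterType<IManager, Manager>(new PerResolveLifetimeManager());
GlobalConfiguration.Configuration.UseActivator(new HangfireUnityJobActivator(hangfireContainer));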
Thanks Matteo for making it clear that HF requires its own IoC container. This link makes the same point:
Hangfire needs to have its own container with dependencies registered independently of the global UnityContainer. The reason for this is twofold. First, Hangfire's dependencies need to be registered with the PerResolveLifetimeManager lifetime manager, so that you don't get concurrency issues between workers that have resolved a dependency to the same instance. For example, with the normal HierarchicalLifetimeManager, two workers needing the same repository dependency may resolve to the same instance and share a common db context, whereas the workers are meant to each have their own db contexts. Secondly, when the OWIN bootstrapper is run, the global UnityContainer may or may not be initialised yet, and Hangfire is unable to take in a reference to the container. So giving Hangfire its own managed container is a clear separation of purpose and behaviour in how our dependencies are resolved.
I've gone through all the documentation and examples on setting up NServiceBus in .NET Core; however, all the examples have the configuration being done in Program.cs (Host.CreateDefaultBuilder().UseNServiceBus()).
I would like to know if I can configure NServiceBus in the ConfigureServices method of Program.cs instead.
The reason is that in the HostBuilder I'm building up all of the IConfiguration options (e.g. reading from appsettings.json, environment variables, Azure Key Vault, ConfigMaps, etc.) and the logger implementation. By the time ConfigureServices is called, all of those have been resolved. I need to be able to get things like connection strings from the IConfiguration, so I don't believe it will work to do it in the HostBuilder.
It looks like a lot of work is being done under the covers to inject the IMessageSession and scan for IHandleMessages implementations; that should be possible to do from the service registration as well.
Edit: I forgot to add that, because this is in Program.cs and we are using Serilog, I do not have a LoggerFactory. The LoggerFactory is registered and injected by the services, but I cannot get at it at this point in startup.
It looks like this isn't an option. I was able to work around it by putting the configuration in Program() and making sure it is called after all the other configuration is done. That doesn't seem ideal, and it seems to be an anti-pattern given where .NET Core 3 is going.
I'd like to add that this is a poor design choice. I should be able to register my stuff in Startup, and package scanning shouldn't be happening.
This is a neat project, but I think that for any non-trivial development it may be left lacking.
The reason is that I would like to have a web host with multiple endpoints, and I cannot do that without running two full instances (https://docs.particular.net/samples/hosting/generic-multi-hosting/).
My workflow is:
1. a message comes in to kick off all the work
2. message #1 starts a saga with 100+ messages
3. each message publishes an update when it is done, so that the UI can check the status of the saga
The updates from step 3 are not handled until all 100+ messages are processed (FIFO).
What I want to do is have a second queue (we're using Azure Service Bus) to listen on for the worker updates and update the UI.
Although you already have a workaround, I have built a similar setup to the one you described, with Serilog as the logger and NServiceBus. You can access the configuration in Program.cs like so:
public static IHostBuilder CreateHostBuilder() =>
    Host.CreateDefaultBuilder()
        .ConfigureAppConfiguration((context, config) => { /* add extra configuration sources here */ })
        .UseSerilog()
        .UseNServiceBus(c => NServiceBusSetup.Configure(c.Configuration, c.HostingEnvironment));
In the self-made method NServiceBusSetup.Configure you can set up your endpoint; by the time it runs, the configuration and hosting environment have been fully built.
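For reference, a minimal sketch of what that method could look like (the endpoint name and LearningTransport are placeholders; UseNServiceBus expects the delegate to return an EndpointConfiguration):

using Microsoft.Extensions.Configuration;
using Microsoft.Extensions.Hosting;
using NServiceBus;

public static class NServiceBusSetup
{
    public static EndpointConfiguration Configure(IConfiguration configuration, IHostEnvironment environment)
    {
        var endpointConfiguration = new EndpointConfiguration("MyEndpoint");
        endpointConfiguration.UseTransport<LearningTransport>();

        // the fully built IConfiguration is available here, e.g. for connection strings:
        // var connectionString = configuration.GetConnectionString("ServiceBus");

        if (!environment.IsProduction())
        {
            endpointConfiguration.EnableInstallers();
        }

        return endpointConfiguration;
    }
}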
I have a .NET microservice receiving messages using the RabbitMQ client. I need to test the following:
1- the consumer successfully connects to the RabbitMQ host.
2- the consumer listens to the queue.
3- the consumer receives messages successfully.
To achieve the above, I have created a sample application that sends messages, and I am debugging the consumer to make sure it receives them.
How can I automate this test and include it in my microservice's CI?
I am thinking of including my sample app in the CI so I can fire a message and then run a consumer test that waits a specific amount of time and passes if the message was received, but this seems like bad practice to me because the test cannot start until a few seconds after the message is fired.
Another option I am considering is firing the sample application from the test itself, but if the sample app fails to work, the failure would wrongly look like the service's fault.
Are there any best practices for integration testing of microservices that communicate through RabbitMQ?
I have built many such tests. I have put some basic code up on GitHub here, using .NET Core 2.0.
You will need a RabbitMQ cluster for these automated tests. Each test starts by deleting the queue, to ensure that no messages already exist; pre-existing messages from another test will break the current test.
I have a simple helper to delete the queue. In my applications they always declare their own queues, but if that is not your case then you'll have to recreate the queue, along with any bindings to exchanges.
public class QueueDestroyer
{
    public static void DeleteQueue(string queueName, string virtualHost)
    {
        var connectionFactory = new ConnectionFactory
        {
            HostName = "localhost",
            UserName = "guest",
            Password = "guest",
            VirtualHost = virtualHost
        };

        // using blocks ensure the connection is closed even if QueueDelete throws
        using (var connection = connectionFactory.CreateConnection())
        using (var channel = connection.CreateModel())
        {
            channel.QueueDelete(queueName);
        }
    }
}
I have created a very simple consumer example that represents your microservice. It runs in a Task until cancellation.
public class Consumer
{
    private IMessageProcessor _messageProcessor;
    private Task _consumerTask;

    public Consumer(IMessageProcessor messageProcessor)
    {
        _messageProcessor = messageProcessor;
    }

    public void Consume(CancellationToken token, string queueName)
    {
        _consumerTask = Task.Run(() =>
        {
            var factory = new ConnectionFactory() { HostName = "localhost" };
            using (var connection = factory.CreateConnection())
            using (var channel = connection.CreateModel())
            {
                channel.QueueDeclare(queue: queueName,
                                     durable: false,
                                     exclusive: false,
                                     autoDelete: false,
                                     arguments: null);

                var consumer = new EventingBasicConsumer(channel);
                consumer.Received += (model, ea) =>
                {
                    var body = ea.Body;
                    var message = Encoding.UTF8.GetString(body);
                    _messageProcessor.ProcessMessage(message);
                    // manual acknowledgement is required because autoAck is false below
                    channel.BasicAck(ea.DeliveryTag, multiple: false);
                };
                channel.BasicConsume(queue: queueName,
                                     autoAck: false,
                                     consumer: consumer);

                // keep the connection open until the test cancels the token
                while (!token.IsCancellationRequested)
                    Thread.Sleep(1000);
            }
        });
    }

    public void WaitForCompletion()
    {
        _consumerTask.Wait();
    }
}
The consumer has an IMessageProcessor interface that will do the work of processing the message. In my integration test I created a fake. You would probably use your preferred mocking framework for this.
The test publisher publishes a message to the queue.
public class TestPublisher
{
    public void Publish(string queueName, string message)
    {
        var factory = new ConnectionFactory() { HostName = "localhost", UserName = "guest", Password = "guest" };
        using (var connection = factory.CreateConnection())
        using (var channel = connection.CreateModel())
        {
            var body = Encoding.UTF8.GetBytes(message);
            channel.BasicPublish(exchange: "",
                                 routingKey: queueName,
                                 basicProperties: null,
                                 body: body);
        }
    }
}
My example test looks like this:
[Fact]
public void If_SendMessageToQueue_ThenConsumerReceives()
{
    // ARRANGE
    QueueDestroyer.DeleteQueue("queueX", "/");
    var cts = new CancellationTokenSource();
    var fake = new FakeProcessor();
    var myMicroService = new Consumer(fake);

    // ACT
    myMicroService.Consume(cts.Token, "queueX");
    var producer = new TestPublisher();
    producer.Publish("queueX", "hello");
    Thread.Sleep(1000); // give the consumer time to receive the message
    cts.Cancel();

    // ASSERT
    Assert.Equal(1, fake.Messages.Count);
    Assert.Equal("hello", fake.Messages[0]);
}
My fake is this:
public class FakeProcessor : IMessageProcessor
{
    public List<string> Messages { get; set; }

    public FakeProcessor()
    {
        Messages = new List<string>();
    }

    public void ProcessMessage(string message)
    {
        Messages.Add(message);
    }
}
Some additional advice:
If you can append randomized text to your queue and exchange names on each test run, then do so to avoid concurrent tests interfering with each other (see the sketch just below this list).
I also have helpers in the code for declaring queues, exchanges and bindings, in case your applications don't do that.
Write a connection-killer class that force-closes connections, and check that your applications still work and can recover. I have code for that, but not in .NET Core; just ask me for it and I can modify it to run in .NET Core.
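For the first point, a unique suffix per test run is enough to isolate concurrent runs; a quick sketch:

// a unique queue name per test run avoids collisions between concurrent CI builds
var queueName = $"queueX-{Guid.NewGuid():N}";
myMicroService.Consume(cts.Token, queueName);
producer.Publish(queueName, "hello");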
In general, I think you should avoid including other microservices in your integration tests. If you send a message from one service to another and expect a message back, for example, create a fake consumer that mocks the expected behaviour. If you receive messages from other services, create fake publishers in your integration test project.
I have successfully done this kind of test. You need a test instance of RabbitMQ, a test exchange to send messages to, and a test queue to connect to for receiving messages.
Do not mock everything!
But with a test consumer, a test producer and a test instance of RabbitMQ, there is no actual production code in that test.
Use a test RabbitMQ instance and the real application
In order to have a meaningful test, I would use a test RabbitMQ instance, exchange and queue, but keep the real application (producer and consumer).
I would implement the following scenario:
1. the test application does something that sends a test message to RabbitMQ
2. the number of received messages in RabbitMQ increases
3. the application does whatever it should do upon receiving the message
Steps 1 and 3 are application-specific. Your application sends messages to RabbitMQ based on some external event (an HTTP request received? a timer firing?). You can reproduce such a condition in your test, so that the application sends a message (to the test RabbitMQ instance).
The same goes for verifying the application's action upon receiving a message: the application should do something observable when a message arrives.
If the application makes an HTTP call, you can mock that HTTP endpoint and verify the requests it receives. If the application saves messages to the database, you can poll the database looking for your message, as sketched below.
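A sketch of such database polling (the table name and connection string are hypothetical):

// polls the database until the side effect of the consumed message appears, or gives up
static bool MessageWasProcessed(string connectionString, string expectedBody)
{
    for (var attempt = 0; attempt < 20; attempt++)
    {
        using (var connection = new SqlConnection(connectionString))
        using (var command = new SqlCommand(
            "SELECT COUNT(*) FROM ProcessedMessages WHERE Body = @body", connection))
        {
            connection.Open();
            command.Parameters.AddWithValue("@body", expectedBody);
            if ((int)command.ExecuteScalar() > 0) return true;
        }
        Thread.Sleep(500);
    }
    return false;
}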
Use the RabbitMQ monitoring API
Step 2 can be implemented using the RabbitMQ monitoring API; there are endpoints that report the number of messages received by and consumed from a queue (https://www.rabbitmq.com/monitoring.html#rabbitmq-metrics).
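For example, a sketch of reading a queue's counters through the management plugin's HTTP API (the host, port and credentials are assumptions; %2F is the URL-encoded default vhost "/"):

// returns the queue's stats as JSON; the response contains counters
// such as "messages" and "message_stats"
static async Task<string> GetQueueStatsJson(string queueName)
{
    using (var http = new HttpClient())
    {
        var credentials = Convert.ToBase64String(Encoding.ASCII.GetBytes("guest:guest"));
        http.DefaultRequestHeaders.Authorization = new AuthenticationHeaderValue("Basic", credentials);
        return await http.GetStringAsync("http://localhost:15672/api/queues/%2F/" + queueName);
    }
}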
Consider using Spring Boot to get health checks
If you are Java-based, then using Spring Boot will significantly simplify your problem: you automatically get a health check for your RabbitMQ connection!
See https://spring.io/guides/gs/messaging-rabbitmq/ for a tutorial on how to connect to RabbitMQ using Spring Boot.
A Spring Boot application exposes health information (via the HTTP endpoint /health) for every attached external resource (database, messaging, JMS, etc.).
See https://docs.spring.io/spring-boot/docs/current/reference/html/production-ready-endpoints.html#_auto_configured_healthindicators for details.
If the connection to RabbitMQ is down, then the health check (performed by org.springframework.boot.actuate.amqp.RabbitHealthIndicator) will return a failure status code (503 Service Unavailable by default) and a meaningful message in the JSON body.
You do not have to do anything in particular to get that health check; having org.springframework.boot:spring-boot-starter-amqp as a Maven/Gradle dependency is enough.
CI test, from the src/test directory
I have written such tests (connecting to an external test instance of RabbitMQ) as integration tests in the src/test directory. If you use Spring Boot, it is easiest to do this with a test profile, keeping the connection details of the test RabbitMQ instance in application-test.properties (production would use a production profile and an application-production.properties file with the production instance of RabbitMQ).
In the simplest case (just verifying the connection to RabbitMQ), all you need is to start the application normally and validate the /health endpoint.
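The check itself is just an HTTP call, so it can live in any test stack; a C# sketch (the URL is an assumption):

[Fact]
public async Task Health_endpoint_reports_rabbitmq_connection_is_up()
{
    using (var http = new HttpClient())
    {
        var response = await http.GetAsync("http://localhost:8080/health");
        // Spring Boot returns 200 when all indicators are UP, 503 when RabbitMQ is down
        Assert.Equal(HttpStatusCode.OK, response.StatusCode);
    }
}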
In this case I would have the following CI steps:
one that builds (gradle build)
one that runs unit tests (tests without any external dependencies)
one that runs integration tests
CI test, external
The approach described above can also be applied to an application deployed to a test environment (and connected to a test RabbitMQ instance). As soon as the application starts, you can check the /health endpoint to make sure it is connected to the RabbitMQ instance.
If you then make your application send a message to RabbitMQ, you can observe the RabbitMQ metrics (using the monitoring API) and watch for the external effects of the message being consumed by the application.
For such a test you need to build, deploy and start your application from CI before starting the tests.
For that scenario I would have the following CI steps:
a step that builds the app
a step that runs all tests in the src/test directory (unit, integration)
a step that deploys the app to the test environment, or starts the dockerized application
a step that runs the external tests
for a dockerized environment, a step that stops the Docker containers
Consider a dockerized environment
For the external tests you could run your application along with a test RabbitMQ instance in Docker. You will need two containers:
one with the application
one with RabbitMQ; there is an official Docker image for RabbitMQ (https://hub.docker.com/_/rabbitmq/) and it is really easy to use
To run those two images together, the most reasonable approach is to write a docker-compose file.
I have a Glassfish v2.1.1 cluster setup. I deployed an EAR file containing a single stateless bean to a standalone server; its IIOP port is 3752.
My client application, which will be communicating with this bean, is deployed on the cluster. When I look up the bean's name, I get a NameNotFoundException. The code looks as follows:
Properties props = new Properties();
props.setProperty("java.naming.factory.initial", "com.sun.enterprise.naming.SerialInitContextFactory");
props.setProperty("java.naming.factory.url.pkgs", "com.sun.enterprise.naming");
props.setProperty("java.naming.factory.state", "com.sun.corba.ee.impl.presentation.rmi.JNDIStateFactoryImpl");

if (logger.isDebugEnabled()) {
    logger.debug("Looking for bean from location : " + PropertiesService.instance().getSchedulerOrbHost() + ":"
        + PropertiesService.instance().getSchedulerOrbPort());
}

props.setProperty("org.omg.CORBA.ORBInitialHost", PropertiesService.instance().getSchedulerOrbHost());
props.setProperty("org.omg.CORBA.ORBInitialPort", PropertiesService.instance().getSchedulerOrbPort());

InitialContext context = null;
try {
    context = new InitialContext(props);
} catch (NamingException e) {
    e.printStackTrace();
}

String beanName = "test.OperationControllerRemote";
OperationControllerRemote remote = (OperationControllerRemote) context.lookup(beanName);
Note that I checked the JNDI tree, and the name "test.OperationControllerRemote" is there.
Any opinions, please?
Here are the ways I have gotten this to work with a GF 2.1.1 cluster and a Swing client. I'm currently going with the standalone option because of client launch speed, but the ACC might work for you.
Standalone
The way you're doing it is considered standalone.
http://glassfish.java.net/javaee5/ejb/EJB_FAQ.html#StandaloneRemoteEJB
http://blogs.oracle.com/dadelhardt/entry/standalone_iiop_clients_with_glassfish
ACC
Another way to approach this is to launch the client with the ACC. This means packaging the client code into the EAR as an Application Client, and either launching via the JNLP method or manually installing a bundled ACC (a mini Glassfish, really) on the client machines. In GF 2.1 either way works OK, but both are pretty fat, and the JNLP method can make startup times a bit longer. Supposedly in GF 3.1 they've reworked the ACC so it starts up faster. Something that may not be obvious is that with the ACC you get the list of servers in the cluster provided automatically at client startup.
http://blogs.oracle.com/theaquarium/entry/java_ee_clients_with_or
http://download.oracle.com/docs/cd/E18930_01/html/821-2418/beakv.html#scrolltoc
http://download.oracle.com/docs/cd/E18930_01/html/821-2418/gkusn.html
Lookups
Either of the above ways provides RMI/CORBA failover and load balancing for the client.
Either way, when you have the right dependencies on your classpath and the com.sun.appserv.iiop.endpoints system property set (like node1:33700,node2:33701), you only need the no-args InitialContext, because Glassfish's code auto-registers its connection properties etc., as described in the first link:
new InitialContext()
And lookups will work. For my remote session beans (EJB 3.0) I typically do it like this:
@Stateless(mappedName="FooBean")
public class FooBean implements FooBeanRemote {}

@Remote
public interface FooBeanRemote {}
then in client code:
FooBeanRemote foo = (FooBeanRemote) ctx.lookup("FooBean");
I have an MVC 3 solution configured with Ninject using a repository pattern. Some of my bindings include:
kernel.Bind<IDatabaseFactory>().To<DatabaseFactory>().InRequestScope();
kernel.Bind<IUnitOfWork>().To<UnitOfWork>().InRequestScope();
kernel.Bind<IMyRepository>().To<MyRepository>().InRequestScope();
kernel.Bind<IMyService>().To<MyService>().InRequestScope();
kernel.Bind<ILogging>().To<Logging>().InSingletonScope();
I also added a console application to my solution and I want to leverage the same repository and services. My Ninject configuration for the console application looks like:
kernel.Bind<IDatabaseFactory>().To<DatabaseFactory>().InSingletonScope();
kernel.Bind<IUnitOfWork>().To<UnitOfWork>().InSingletonScope();
kernel.Bind<IMyRepository>().To<MyRepository>().InSingletonScope();
kernel.Bind<IMyService>().To<MyService>().InSingletonScope();
kernel.Bind<ILogging>().To<Logging>().InSingletonScope();
My console code looks like:
static void Main(string[] args)
{
    IKernel kernel = new StandardKernel(new IoCMapper());
    var service = kernel.Get<IMyService>();
    var logger = kernel.Get<ILogging>();
    // ... do some processing here
}
This works just fine, but I want to be sure that I am configuring Ninject correctly for a console application. Is it correct to use InSingletonScope() for all my bindings in the console application? Should I be configuring it differently?
Do you want one and only one instance of each of your repository services for the whole application? If so, use InSingletonScope().
Is your console application multithreaded? If it is, and you want a new instance of your services for each thread, use InThreadScope().
If you want a new instance of the service(s) each time they are requested, set them to InTransientScope().
You also have the option of defining your own scope using InScope(). Bob Cravens gives a good overview of each of these here: http://blog.bobcravens.com/2010/03/ninject-life-cycle-management-or-scoping/
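For illustration, a minimal sketch of those alternatives applied to the bindings above (the InScope example uses a hypothetical currentBatch object as the scoping handle):

// one instance per thread
kernel.Bind<IMyService>().To<MyService>().InThreadScope();
// a new instance every time one is requested
kernel.Bind<IMyRepository>().To<MyRepository>().InTransientScope();
// custom scope: one instance per batch run
kernel.Bind<IUnitOfWork>().To<UnitOfWork>().InScope(ctx => currentBatch);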