We are trying to build a console that processes Redis queries, but on the back end we need to use Jedis, so the commands given as input need to be translated into Jedis calls. For example, in redis-cli we use "keys *"; the equivalent in Jedis is jedis.keys("*"). I have no idea how to convert "keys *" into jedis.keys("*"). Any suggestions would be appreciated.
I know this is an old question, but hopefully the following will be useful for others.
Here's something I came up with because the most recent version of Jedis (3.2.0 at the time of writing) did not support the "memory usage" command, which is available on Redis >= 4. This code assumes a Jedis object has already been created, probably from a Jedis resource pool:
import redis.clients.jedis.commands.ProtocolCommand;
import redis.clients.jedis.util.SafeEncoder;

// ... Jedis setup code ...

Long byteSize = (Long) jedis.sendCommand(new ProtocolCommand() {
    @Override
    public byte[] getRaw() {
        return SafeEncoder.encode("memory");
    }},
    SafeEncoder.encode("usage"),
    SafeEncoder.encode(key));
This is a special case command as it has a primary keyword memory with a secondary action usage (other ones are doctor, stats, purge, etc). When sending multi-keyword commands to Redis, the keywords must be treated as a list. My first attempt at specifying memory usage as a single argument failed with a Redis server error.
Subsequently, it seems the current Jedis implementation is geared toward single-keyword commands: under the hood there's a bunch of special code to deal with multi-keyword commands such as debug object, and it doesn't quite fit the original command keyword framework.
Anyway, once my current project that required the ability to call memory usage is complete, I'll try my hand at providing a patch to the Jedis maintainer to implement the above command in a more official/conventional way, which would look something like:
Long byteSize = jedis.memoryUsage(key);
Finally, to address your specific need, your best bet is to use the scan() method of the Jedis class. There are articles here on SO that explain how to use the scan() method.
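For reference, here is a minimal sketch of replacing "keys *" with SCAN via Jedis; the match pattern "*" and count of 100 are just illustrative, and jedis is the same instance as above:

import redis.clients.jedis.Jedis;
import redis.clients.jedis.ScanParams;
import redis.clients.jedis.ScanResult;

// Iterate all keys matching a pattern without blocking the server the way KEYS does.
ScanParams params = new ScanParams().match("*").count(100);
String cursor = ScanParams.SCAN_POINTER_START;
do {
    ScanResult<String> page = jedis.scan(cursor, params);
    for (String key : page.getResult()) {
        System.out.println(key);
    }
    cursor = page.getCursor();
} while (!cursor.equals(ScanParams.SCAN_POINTER_START));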
You can achieve the same thing by referring to the following:
redis.clients.jedis.Connection.sendCommand(Command, String...)
Create a class that extends Connection.
Create an instance of that class and call its connect() method.
Call super.sendCommand(Protocol.Command.valueOf(args[0].toUpperCase()), args[1~end]).
An example:
import java.util.Arrays;

import redis.clients.jedis.Connection;
import redis.clients.jedis.Protocol;

public class JedisConn extends Connection {

    public JedisConn(String host, int port) {
        super(host, port);
    }

    @Override
    protected Connection sendCommand(final Protocol.Command cmd, final String... args) {
        return super.sendCommand(cmd, args);
    }

    public static void main(String[] args) {
        JedisConn jedisConn = new JedisConn("host", 6379);
        jedisConn.connect();
        Connection connection = jedisConn.sendCommand(Protocol.Command.valueOf(args[0].toUpperCase()), Arrays.copyOfRange(args, 1, args.length));
        System.out.println(connection.getAll());
        jedisConn.close();
    }
}
I have found a way to do this. There is a function named eval(); we can use it as shown below.
Scanner s = new Scanner(System.in);
String query = s.nextLine();
String[] q = query.split(" ");
String cmd = '\'' + q[0] + '\'';
for (int i = 1; i < q.length; i++)
    cmd += ",\'" + q[i] + '\'';
System.out.println(j.eval("return redis.call(" + cmd + ")"));
In the documentation it is written that you should wrap blocking code into a Mono: http://projectreactor.io/docs/core/release/reference/#faq.wrap-blocking
But it is not written how to actually do it.
I have the following code:
#PostMapping(path = "some-path", consumes = MediaType.APPLICATION_STREAM_JSON_VALUE)
public Mono<Void> doeSomething(#Valid #RequestBody Flux<Something> something) {
something.subscribe(something -> {
// some blocking operation
});
// how to return Mono<Void> here?
}
The first problem I have here is that I need to return something, but I can't.
If I returned Mono.empty(), for example, the request would be closed before the work of the Flux is done.
The second problem is: how do I actually wrap the blocking code like it is suggested in the documentation:
Mono blockingWrapper = Mono.fromCallable(() -> {
    return /* make a remote synchronous call */
});
blockingWrapper = blockingWrapper.subscribeOn(Schedulers.elastic());
You should not call subscribe within a controller handler; just build a reactive pipeline and return it. Ultimately, the HTTP client will request data (through the Spring WebFlux engine), and that's what subscribes to and requests data from the pipeline.
Subscribing manually will decouple the request processing from that other operation, which will 1) remove any guarantee about the order of operations and 2) break the processing if that other operation is using HTTP resources (such as the request body).
In this case, the source is not blocking; only the transform operation is. So we'd better use publishOn to signal that the rest of the chain should be executed on a specific Scheduler. If the operation here is I/O bound, then Schedulers.elastic() is the best choice; if it's CPU-bound, then Schedulers.parallel() is better. Here's an example:
#PostMapping(path = "/some-path", consumes = MediaType.APPLICATION_STREAM_JSON_VALUE)
public Mono<Void> doSomething(#Valid #RequestBody Flux<Something> something) {
return something.collectList()
.publishOn(Schedulers.elastic())
.map(things -> {
return processThings(things);
})
.then();
}
public ProcessingResult processThings(List<Something> things) {
//...
}
For more information on that topic, check out the Scheduler section in the reactor docs. If your application tends to do a lot of things like this, you're losing a lot of the benefits of reactive streams and you might consider switching to a Servlet-based model where you can configure thread pools accordingly.
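And for the case where the source itself is the blocking call (as in the documentation snippet quoted in the question), a minimal sketch would be the following; blockingClient.fetch(id) is a placeholder for whatever synchronous operation you have:

// Wrap a synchronous/blocking call so it runs on a Scheduler suited for blocking work,
// and return the resulting Mono as part of the reactive pipeline instead of subscribing manually.
Mono<Result> result = Mono
        .fromCallable(() -> blockingClient.fetch(id)) // placeholder blocking call
        .subscribeOn(Schedulers.elastic());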
I have a basic stream processing flow which looks like
master topic -> my processing in a mapper/filter -> output topics
and I am wondering about the best way to handle "bad messages". This could potentially be things like messages that I can't deserialize properly, or perhaps the processing/filtering logic fails in some unexpected way (I have no external dependencies so there should be no transient errors of that sort).
I was considering wrapping all my processing/filtering code in a try catch and if an exception was raised then routing to an "error topic". Then I can study the message and modify it or fix my code as appropriate and then replay it on to master. If I let any exceptions propagate, the stream seems to get jammed and no more messages are picked up.
Is this approach considered best practice?
Is there a convenient Kafka streams way to handle this? I don't think there is a concept of a DLQ...
What are the alternative ways to stop Kafka jamming on a "bad message"?
What alternative error handling approaches are there?
For completeness here is my code (pseudo-ish):
class Document {
    // Fields
}

class AnalysedDocument {
    Document document;
    String rawValue;
    Exception exception;
    Analysis analysis;

    // All being well
    AnalysedDocument(Document document, Analysis analysis) {...}
    // Analysis failed
    AnalysedDocument(Document document, Exception exception) {...}
    // Deserialisation failed
    AnalysedDocument(String rawValue, Exception exception) {...}
}
KStreamBuilder builder = new KStreamBuilder();

KStream<String, AnalysedDocument> analysedDocumentStream = builder
    .stream(Serdes.String(), Serdes.String(), "master")
    .mapValues(new ValueMapper<String, AnalysedDocument>() {
        @Override
        public AnalysedDocument apply(String rawValue) {
            Document document;
            try {
                // Deserialise
                document = ...
            } catch (Exception e) {
                return new AnalysedDocument(rawValue, e);
            }
            try {
                // Perform analysis
                Analysis analysis = ...
                return new AnalysedDocument(document, analysis);
            } catch (Exception e) {
                return new AnalysedDocument(document, e);
            }
        }
    });
// Branch based on whether analysis mapping failed to produce errorStream and successStream
errorStream.to(Serdes.String(), customPojoSerde(), "error");
successStream.to(Serdes.String(), customPojoSerde(), "analysed");
KafkaStreams streams = new KafkaStreams(builder, config);
streams.start();
Any help greatly appreciated.
Right now, Kafka Streams offers only limited error handling capabilities. There is work in progress to simplify this. For now, your overall approach seems to be a good way to go.
One comment about handling de/serialization errors: handling those errors manually requires you to do de/serialization "manually". This means you need to configure ByteArraySerdes for key and value for the input/output topics of your Streams app and add a map() that does the de/serialization (i.e., KStream<byte[],byte[]> -> map() -> KStream<keyType,valueType> -- or the other way round if you also want to catch serialization exceptions). Otherwise, you cannot try-catch deserialization exceptions.
With your current approach, you "only" validate that the given string represents a valid document -- but it could be the case that the message itself is corrupted and cannot be converted into a String in the source operator in the first place. Thus, your code doesn't actually cover deserialization exceptions. However, if you are sure a deserialization exception can never happen, your approach would be sufficient, too.
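A rough sketch of that wiring, reusing the AnalysedDocument class from the question; the deserialize() and analyse() helpers are placeholders for your own logic:

// Consume raw bytes so the source operator itself can never throw a deserialization exception;
// the manual deserialization happens inside map(), where it can be wrapped in try-catch.
KStream<byte[], byte[]> rawStream = builder.stream(Serdes.ByteArray(), Serdes.ByteArray(), "master");

KStream<String, AnalysedDocument> analysed = rawStream.map((keyBytes, valueBytes) -> {
    String key = keyBytes == null ? null : new String(keyBytes, StandardCharsets.UTF_8);
    String rawValue = valueBytes == null ? null : new String(valueBytes, StandardCharsets.UTF_8);
    try {
        Document document = deserialize(rawValue);   // placeholder for your deserialization code
        return KeyValue.pair(key, new AnalysedDocument(document, analyse(document))); // placeholder analysis
    } catch (Exception e) {
        return KeyValue.pair(key, new AnalysedDocument(rawValue, e));
    }
});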
Update
This issue is tackled via KIP-161 and will be included in the next release, 1.0.0. It allows you to register a callback via the parameter default.deserialization.exception.handler. The handler will be invoked every time an exception occurs during deserialization and allows you to return a DeserializationHandlerResponse (CONTINUE -> drop the record and move on, or FAIL, which is the default).
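To make the callback mechanism concrete, here is a minimal sketch of a log-and-skip handler against the Kafka 1.0 API; the class name and the logging behaviour are just an example:

import java.util.Map;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.streams.errors.DeserializationExceptionHandler;
import org.apache.kafka.streams.processor.ProcessorContext;

// Skips (and logs) any record that cannot be deserialized instead of failing the application.
public class LogAndSkipExceptionHandler implements DeserializationExceptionHandler {

    @Override
    public DeserializationHandlerResponse handle(ProcessorContext context,
                                                 ConsumerRecord<byte[], byte[]> record,
                                                 Exception exception) {
        System.err.println("Skipping corrupted record from " + record.topic()
                + " partition " + record.partition() + " offset " + record.offset()
                + ": " + exception.getMessage());
        return DeserializationHandlerResponse.CONTINUE;
    }

    @Override
    public void configure(Map<String, ?> configs) {
        // no configuration needed for this example
    }
}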
Update 2
With KIP-210 (part of Kafka 1.1) it's also possible to handle errors on the producer side, similar to the consumer part, by registering a ProductionExceptionHandler via the config default.production.exception.handler that can return CONTINUE.
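Assuming handlers like the one sketched above, registering them would look roughly like this (config constants as in Kafka 1.1; MyProductionExceptionHandler is a placeholder for your own ProductionExceptionHandler implementation):

import java.util.Properties;
import org.apache.kafka.streams.StreamsConfig;

Properties props = new Properties();
// KIP-161 (Kafka 1.0+): invoked when a record cannot be deserialized.
props.put(StreamsConfig.DEFAULT_DESERIALIZATION_EXCEPTION_HANDLER_CLASS_CONFIG,
        LogAndSkipExceptionHandler.class);
// KIP-210 (Kafka 1.1+): invoked when producing a record to an output topic fails.
props.put(StreamsConfig.DEFAULT_PRODUCTION_EXCEPTION_HANDLER_CLASS_CONFIG,
        MyProductionExceptionHandler.class);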
Update Mar 23, 2018: Kafka 1.0 provides much better and easier handling for bad messages ("poison pills") via KIP-161 than what I described below. See default.deserialization.exception.handler in the Kafka 1.0 docs.
This could potentially be things like messages that I can't deserialize properly [...]
Ok, my answer here focuses on the (de)serialization issues as this might be the most tricky scenario to handle for most users.
[...] or perhaps the processing/filtering logic fails in some unexpected way (I have no external dependencies so there should be no transient errors of that sort).
The same thinking (for deserialization) can also be applied to failures in the processing logic. Here, most people tend to gravitate towards option 2 below (minus the deserialization part), but YMMV.
I was considering wrapping all my processing/filtering code in a try catch and if an exception was raised then routing to an "error topic". Then I can study the message and modify it or fix my code as appropriate and then replay it on to master. If I let any exceptions propagate, the stream seems to get jammed and no more messages are picked up.
Is this approach considered best practice?
Yes, at the moment this is the way to go. Essentially, the two most common patterns are (1) skipping corrupted messages or (2) sending corrupted records to a quarantine topic aka a dead letter queue.
Is there a convenient Kafka streams way to handle this? I don't think there is a concept of a DLQ...
Yes, there is a way to handle this, including the use of a dead letter queue. However, it's (at least IMHO) not that convenient yet. If you have any feedback on how the API should allow you to handle this -- e.g. via a new or updated method, a configuration setting ("if serialization/deserialization fails send the problematic record to THIS quarantine topic") -- please let us know. :-)
What are the alternative ways to stop Kafka jamming on a "bad message"?
What alternative error handling approaches are there?
See my examples below.
FWIW, the Kafka community is also discussing the addition of a new CLI tool that allows you to skip over corrupted messages. However, as a user of the Kafka Streams API, I think ideally you want to handle such scenarios directly in your code, and fall back to CLI utilities only as a last resort.
Here are some patterns for the Kafka Streams DSL to handle corrupted records/messages aka "poison pills". This is taken from http://docs.confluent.io/current/streams/faq.html#handling-corrupted-records-and-deserialization-errors-poison-pill-messages
Option 1: Skip corrupted records with flatMap
This is arguably what most users would like to do.
We use flatMap because it allows you to output zero, one, or more output records per input record. In the case of a corrupted record we output nothing (zero records), thereby ignoring/skipping the corrupted record.
Benefit of this approach compared to the other ones listed here: We need to manually deserialize a record only once!
Drawback of this approach: flatMap "marks" the input stream for potential data re-partitioning, i.e. if you perform a key-based operation such as groupings (groupBy/groupByKey) or joins afterwards, your data will be re-partitioned behind the scenes. Since this might be a costly step we don't want that to happen unnecessarily. If you KNOW that the record keys are always valid OR that you don't need to operate on the keys (thus keeping them as "raw" keys in byte[] format), you can change from flatMap to flatMapValues, which will not result in data re-partitioning even if you join/group/aggregate the stream later.
Code example:
Serde<byte[]> bytesSerde = Serdes.ByteArray();
Serde<String> stringSerde = Serdes.String();
Serde<Long> longSerde = Serdes.Long();

// Input topic, which might contain corrupted messages
KStream<byte[], byte[]> input = builder.stream(bytesSerde, bytesSerde, inputTopic);

// Note how the returned stream is of type KStream<String, Long>,
// rather than KStream<byte[], byte[]>.
KStream<String, Long> doubled = input.flatMap(
    (k, v) -> {
      try {
        // Attempt deserialization
        String key = stringSerde.deserializer().deserialize(inputTopic, k);
        long value = longSerde.deserializer().deserialize(inputTopic, v);

        // Ok, the record is valid (not corrupted). Let's take the
        // opportunity to also process the record in some way so that
        // we haven't paid the deserialization cost just for "poison pill"
        // checking.
        return Collections.singletonList(KeyValue.pair(key, 2 * value));
      }
      catch (SerializationException e) {
        // log + ignore/skip the corrupted message
        System.err.println("Could not deserialize record: " + e.getMessage());
      }
      return Collections.emptyList();
    }
);
Option 2: dead letter queue with branch
Compared to option 1 (which ignores corrupted records) option 2 retains corrupted messages by filtering them out of the "main" input stream and writing them to a quarantine topic (think: dead letter queue). The drawback is that, for valid records, we must pay the manual deserialization cost twice.
KStream<byte[], byte[]> input = ...;

KStream<byte[], byte[]>[] partitioned = input.branch(
    (k, v) -> {
      boolean isValidRecord = false;
      try {
        stringSerde.deserializer().deserialize(inputTopic, k);
        longSerde.deserializer().deserialize(inputTopic, v);
        isValidRecord = true;
      }
      catch (SerializationException ignored) {}
      return isValidRecord;
    },
    (k, v) -> true
);

// partitioned[0] is the KStream<byte[], byte[]> that contains
// only valid records. partitioned[1] contains only corrupted
// records and thus acts as a "dead letter queue".
KStream<String, Long> doubled = partitioned[0].map(
    (key, value) -> KeyValue.pair(
        // Must deserialize a second time unfortunately.
        stringSerde.deserializer().deserialize(inputTopic, key),
        2 * longSerde.deserializer().deserialize(inputTopic, value)));

// Don't forget to actually write the dead letter queue back to Kafka!
partitioned[1].to(Serdes.ByteArray(), Serdes.ByteArray(), "quarantine-topic");
Option 3: Skip corrupted records with filter
I only mention this for completeness. This option looks like a mix of options 1 and 2, but is worse than either of them. Compared to option 1, you must pay the manual deserialization cost for valid records twice (bad!). Compared to option 2, you lose the ability to retain corrupted records in a dead letter queue.
KStream<byte[], byte[]> validRecordsOnly = input.filter(
    (k, v) -> {
      boolean isValidRecord = false;
      try {
        stringSerde.deserializer().deserialize(inputTopic, k);
        longSerde.deserializer().deserialize(inputTopic, v);
        isValidRecord = true;
      }
      catch (SerializationException e) {
        // log + ignore/skip the corrupted message
        System.err.println("Could not deserialize record: " + e.getMessage());
      }
      return isValidRecord;
    }
);

KStream<String, Long> doubled = validRecordsOnly.map(
    (key, value) -> KeyValue.pair(
        // Must deserialize a second time unfortunately.
        stringSerde.deserializer().deserialize(inputTopic, key),
        2 * longSerde.deserializer().deserialize(inputTopic, value)));
Any help greatly appreciated.
I hope I could help. If yes, I'd appreciate your feedback on how we could improve the Kafka Streams API to handle failures/exceptions in a better/more convenient way than today. :-)
For the processing logic you could take this approach:
someKStream
    .mapValues(inputValue -> {
        // for each execution the below "return" could provide a different class than the previous run!
        // e.g. "return isFailedProcessing ? failValue : successValue;"
        // where failValue and successValue have no related classes
        return someObject; // someObject's class varies at runtime depending on your business logic
    }) // here you'll have KStream<whateverKeyClass, Object> -> yes, Object for the value!
    // you could have a different logic for choosing
    // the target topic; below is just an example
    .to((k, v, recordContext) -> v instanceof FailValueClass ?
            "dead-letter-topic" : "success-topic",
        // you could completely ignore the "Produced" part
        // and rely on spring-boot properties only, e.g.
        // spring.kafka.streams.properties.default.key.serde=yourKeySerde
        // spring.kafka.streams.properties.default.value.serde=org.springframework.kafka.support.serializer.JsonSerde
        Produced.with(yourKeySerde,
            // JsonSerde could be an instance configured as you need
            // (with type mappings or headers setting disabled, etc)
            new JsonSerde<>()));
Your classes, though different and landing into different topics, will serialize as expected.
If you don't want to use to() but instead want to continue with other processing, you can use branch() and split the logic based on the class of the Kafka value; the trick with branch() is to return KStream<keyClass, ?>[], which then lets you cast the individual array items to the appropriate class (see the sketch below).
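A rough sketch of that branch()-based variant; FailValue and SuccessValue are placeholder classes standing in for your unrelated result types, and mixedStream stands for the KStream<String, Object> produced by the mapValues() above:

// Split the mixed stream by the runtime class of the value,
// then narrow each branch back to a concrete type with a cast inside mapValues().
KStream<String, Object> mixedStream = ...; // e.g. the result of the mapValues() above

KStream<String, Object>[] branches = mixedStream.branch(
        (k, v) -> v instanceof FailValue,   // branch 0: failed processing results
        (k, v) -> true);                    // branch 1: everything else (successes)

KStream<String, FailValue> failures = branches[0].mapValues(v -> (FailValue) v);
KStream<String, SuccessValue> successes = branches[1].mapValues(v -> (SuccessValue) v);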
If you want to send an exception (custom exception) to another topic (ERROR_TOPIC_NAME):
@Bean
public KStream<String, ?> kafkaStreamInput(StreamsBuilder kStreamBuilder) {
    KStream<String, InputModel> input = kStreamBuilder.stream(INPUT_TOPIC_NAME);
    return service.messageHandler(input);
}

public KStream<String, ?> messageHandler(KStream<String, InputModel> inputTopic) {
    KStream<String, Object> output;
    output = inputTopic.mapValues(v -> {
        try {
            //return InputModel
            return normalMethod(v);
        } catch (Exception e) {
            //return ErrorModel
            return errorHandler(e);
        }
    });
    output.filter((k, v) -> (v instanceof ErrorModel)).to(KafkaStreamsConfig.ERROR_TOPIC_NAME);
    output.filter((k, v) -> (v instanceof InputModel)).to(KafkaStreamsConfig.OUTPUT_TOPIC_NAME);
    return output;
}
If you want to handle Kafka exceptions and skip them:
@Autowired
public ConsumerErrorHandler(KafkaProducer<String, ErrorModel> dlqProducer) {
    this.dlqProducer = dlqProducer;
}

@Bean
ConcurrentKafkaListenerContainerFactory<?, ?> kafkaListenerContainerFactory(
        ConcurrentKafkaListenerContainerFactoryConfigurer configurer,
        ObjectProvider<ConsumerFactory<Object, Object>> kafkaConsumerFactory) {
    ConcurrentKafkaListenerContainerFactory<Object, Object> factory = new ConcurrentKafkaListenerContainerFactory<>();
    configurer.configure(factory, kafkaConsumerFactory.getIfAvailable());
    factory.setErrorHandler(((exception, data) -> {
        ErrorModel errorModel = ErrorModel.builder().message()
                .status("500").build();
        assert data != null;
        dlqProducer.send(new ProducerRecord<>(DLQ_TOPIC, data.key().toString(), errorModel));
    }));
    return factory;
}
All the above answers, although valid and useful, assume that your streams topology is stateless. For example, going back to the original example,
master topic -> my processing in a mapper/filter -> output topics
"my processing in a mapper/filter" should be stateless. I.e. Not re-partitioning (aka writing to a persistent re-partition topic) or doing a toTable() (aka writing to a changelog topic). If the processing fails further down the topology and you commit the transaction (by following any of the 3 option mention above - flatmap, branch or filter - then you have to cater for manually or programmatically eventually deleting that inconsistent state. That would mean writing extra custom code for automatic this.
I would personally expect Streams to also give you a LogAndSkip option for any unhandled runtime exception, not only for deserialization and production ones.
Has anyone any ideas on this?
I don't believe these examples work at all when working with Avro.
When the schema can't be resolved (i.e. there is a bad/non-Avro message corrupting the topic, for example), there is no key or value to deserialize in the first place, because by the time the DSL .branch() code is called, the exception has already been thrown (or handled).
Can anyone confirm whether this is indeed the case? The very fluent approach you refer to here isn't possible when working with Avro?
KIP-161 does explain how to use a handler, however, it's much more fluent to see it as part of the topology.
In my test application I can see messages that were processed with an exception being automatically inserted into the default EasyNetQ_Default_Error_Queue, which is great. I can then successfully dump or requeue these messages using the Hosepipe, which also works fine, but requires dropping down to the command line and calling against both Hosepipe and the RabbitMQ API to purge the queue of retried messages.
So I'm thinking the easiest approach for my application is to simply subscribe to the error queue, so I can re-process the messages using the same infrastructure. But in EasyNetQ, the error queue seems to be special. We need to subscribe using a proper type and routing ID, so I'm not sure what these values should be for the error queue:
bus.Subscribe<WhatShouldThisBe>("and-this", ReprocessErrorMessage);
Can I use the simple API to subscribe to the error queue, or do I need to dig into the advanced API?
If the type of my original message was TestMessage, then I'd like to be able to do something like this:
bus.Subscribe<ErrorMessage<TestMessage>>("???", ReprocessErrorMessage);
where ErrorMessage is a class provided by EasyNetQ to wrap all errors. Is this possible?
You can't use the simple API to subscribe to the error queue because it doesn't follow EasyNetQ queue type naming conventions - maybe that's something that should be fixed ;)
But the Advanced API works fine. You won't get the original message back, but it's easy to get the JSON representation which you could de-serialize yourself quite easily (using Newtonsoft.JSON). Here's an example of what your subscription code should look like:
[Test]
[Explicit("Requires a RabbitMQ server on localhost")]
public void Should_be_able_to_subscribe_to_error_messages()
{
    var errorQueueName = new Conventions().ErrorQueueNamingConvention();
    var queue = Queue.DeclareDurable(errorQueueName);
    var autoResetEvent = new AutoResetEvent(false);

    bus.Advanced.Subscribe<SystemMessages.Error>(queue, (message, info) =>
    {
        var error = message.Body;

        Console.Out.WriteLine("error.DateTime = {0}", error.DateTime);
        Console.Out.WriteLine("error.Exception = {0}", error.Exception);
        Console.Out.WriteLine("error.Message = {0}", error.Message);
        Console.Out.WriteLine("error.RoutingKey = {0}", error.RoutingKey);

        autoResetEvent.Set();
        return Task.Factory.StartNew(() => { });
    });

    autoResetEvent.WaitOne(1000);
}
I had to fix a small bug in the error message writing code in EasyNetQ before this worked, so please get a version >= 0.9.2.73 before trying it out. You can see the code example here
Code that works:
(I took a guess)
The screwiness with 'foo' is that if I just pass the method HandleErrorMessage2 into the Consume call, the compiler can't figure out that it returns void and not a Task, so it can't figure out which overload to use (VS 2012).
Assigning it to a var makes it happy.
You will want to capture the return value of the call so you can unsubscribe later by disposing the object.
Also note that the library uses a common type name (Queue) instead of something like EasyNetQQueue, so you have to add a using alias to clarify it for the compiler, or fully qualify it.
using Queue = EasyNetQ.Topology.Queue;

private const string QueueName = "EasyNetQ_Default_Error_Queue";

public static void Should_be_able_to_subscribe_to_error_messages(IBus bus)
{
    Action<IMessage<Error>, MessageReceivedInfo> foo = HandleErrorMessage2;
    IQueue queue = new Queue(QueueName, false);
    bus.Advanced.Consume<Error>(queue, foo);
}

private static void HandleErrorMessage2(IMessage<Error> msg, MessageReceivedInfo info)
{
}
I want to have several buses in one process. I googled this and found that it is only possible with several AppDomains. But I cannot make it work.
Here is my code sample (I do everything in one class library):
using System;
using System.Diagnostics;
using System.Reflection;
using MyMessages;
using NServiceBus;
using NServiceBus.Config;
using NServiceBus.Config.ConfigurationSource;

namespace Subscriber1
{
    public class Sender
    {
        public static void Main()
        {
            var domain = AppDomain.CreateDomain("someDomain", AppDomain.CurrentDomain.Evidence);
            domain.Load(Assembly.GetExecutingAssembly().GetName());
            domain.CreateInstance(Assembly.GetExecutingAssembly().FullName, typeof(PluginBusCreator).FullName);
            //here I have some code to send messages to "PluginQueue".
        }
    }

    public class PluginBusCreator
    {
        public PluginBusCreator()
        {
            var Bus = Configure.With(
                    Assembly.Load("NServiceBus"), Assembly.Load("NServiceBus.Core"),
                    Assembly.LoadFrom("NServiceBus.Host.exe"), Assembly.GetCallingAssembly())
                .CustomConfigurationSource(new PluginConfigurationSource())
                .SpringFrameworkBuilder()
                .XmlSerializer().MsmqTransport()
                .UnicastBus().LoadMessageHandlers<First<SomeHandler>>().CreateBus().Start();
        }

        protected IBus Bus { get; set; }
    }

    class PluginConfigurationSource : IConfigurationSource
    {
        public T GetConfiguration<T>() where T : class
        {
            if (typeof(T) == typeof(MsmqTransportConfig))
                return new MsmqTransportConfig
                {
                    ErrorQueue = "error",
                    InputQueue = "PluginQueue",
                    MaxRetries = 1,
                    NumberOfWorkerThreads = 1
                } as T;
            return null;
        }
    }

    public class SomeHandler : IHandleMessages<EventMessage1>
    {
        public void Handle(EventMessage1 message)
        {
            Debugger.Break();
        }
    }
}
And the handler doesn't get invoked.
If you have any ideas, please help. I've been fighting this problem for a long time.
Also, if the full code needs to be posted, please tell me.
I need several buses to solve the following problem:
I have my target application, and several plugins for it. We decided to build our plugins according to the service bus pattern.
Each plugin can have several profiles.
So, the target application (it is a web app) publishes a message saying that something has changed in it. Each plugin that is subscribed to this message needs to perform some action for each profile. But a plugin knows nothing about its profiles (customers write the plugins). A plugin should only have the profile injected into it when message handling starts.
We decided to have a RecipientList (the pattern is described in "Enterprise Integration Patterns"), which knows about the plugin's profiles, iterates through them, and re-sends the messages with the profiles injected. (So if a plugin has several profiles, several messages will be sent to it.)
But I don't want each plugin invoked in a new process. Ideally I want to dynamically configure buses for each plugin at startup, all in one process. But it seems I need to do it in separate AppDomains, so I have the problem described above. :-)
Sergey,
I'm unclear as to why each plugin needs to have its own bus. Could they not all sit on the same bus? Each plugin developer would write their message handlers as before, and the subscriptions would happen automatically via the bus.
Then, also, you wouldn't need to specify to load each of the NServiceBus DLLs.
BTW, loading an assembly by name tends to cause problems - try using this to specify assemblies:
typeof(IMessage).Assembly, typeof(MsmqTransportConfig).Assembly, typeof(IConfigureThisEndpoint).Assembly