I have a problem; can you help me? Is an instance of the AmqpTemplate class from RabbitMQ (an implementation of the AMQP protocol) thread-safe? Can it be accessed from multiple threads?
Thanks
AmqpTemplate is the interface and RabbitTemplate is the implementation, and I assume by "thread-safe" you mean that its send/receive/sendAndReceive methods may be used concurrently. If so, then YES.

The only state it maintains in instance variables is the "converter" strategies for the Message and MessageProperties, along with the default Exchange, Queue, and Routing Key settings (which are not even used if you invoke the methods that take those as arguments instead), and all of those are typically configured once, initially (e.g. via dependency injection). The template does not maintain any non-local state for any particular operation at runtime.

With AMQP, the "Channel" is the instance that can only be used by one thread at a time, and the RabbitTemplate manages that internally so that each operation retrieves a Channel to use within the scope of that operation. Multiple concurrent operations therefore lead to multiple Channel instances being used, but that is not something you need to worry about as an end-user of the template.
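That channel-per-operation pattern can be illustrated with a small toy model (plain Java, not Spring's actual RabbitTemplate code): each operation checks a channel out of a shared pool, uses it only inside the method, and returns it, so the template object itself holds no per-operation state and many threads can call it safely.

```java
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.atomic.AtomicInteger;

public class ToyTemplate {
    // Toy stand-in for an AMQP Channel: NOT safe for concurrent use,
    // and it throws if two threads ever use it at the same time.
    static class Channel {
        final AtomicInteger inUse = new AtomicInteger();
        int send(String msg) {
            if (inUse.incrementAndGet() != 1) throw new IllegalStateException("channel used concurrently");
            try { return msg.length(); } finally { inUse.decrementAndGet(); }
        }
    }

    private final BlockingQueue<Channel> pool;

    ToyTemplate(int channels) {
        pool = new ArrayBlockingQueue<>(channels);
        for (int i = 0; i < channels; i++) pool.add(new Channel());
    }

    // All state for one send is local to this method: a channel is
    // checked out, used within the operation's scope, and returned.
    public int send(String msg) throws InterruptedException {
        Channel ch = pool.take();
        try { return ch.send(msg); } finally { pool.put(ch); }
    }

    // Hammer one shared template from many threads; returns the number
    // of sends that completed without a concurrency violation.
    public static int concurrentSends(int threads, int perThread) throws Exception {
        ToyTemplate t = new ToyTemplate(4);
        AtomicInteger ok = new AtomicInteger();
        CountDownLatch done = new CountDownLatch(threads);
        for (int i = 0; i < threads; i++) {
            new Thread(() -> {
                try {
                    for (int j = 0; j < perThread; j++) { t.send("hello"); ok.incrementAndGet(); }
                } catch (Exception e) { /* a failure leaves ok short */ }
                finally { done.countDown(); }
            }).start();
        }
        done.await();
        return ok.get();
    }
}
```

Because each send holds exactly one channel for exactly the duration of the call, no two threads ever touch the same channel concurrently, even though they share the template.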
Hope that helps.
-Mark
I am new to message queues and was wondering if people can explain use case(s) for using an exclusive queue in RabbitMQ.
From the docs:
An exclusive queue can only be used (consumed from, purged, deleted, etc) by its declaring connection.
Exclusive queues are deleted when their declaring connection is closed or gone. They are therefore only suitable for client-specific transient state.
Exclusive queues are a type of temporary queue, and as such they:
"...can be a reasonable choice for workloads with transient clients, for example, temporary WebSocket connections in user interfaces, mobile applications and devices that are expected to go offline or use switch identities. Such clients usually have inherently transient state that should be replaced when the client reconnects."
See notes on durability for more context.
Exclusive queues, as you note, have an added restriction. An exclusive queue:
"... can only be used (consumed from, purged, deleted, etc) by its declaring connection."
This makes it (potentially) suitable for use as a queue contained within a single application, where the application will create and then process its own work queue items.
The following points are just my opinion, reflecting on the above notes:
I think it may be a relatively rare RabbitMQ use case compared to "publish-and-subscribe" use cases such as topic exchanges and similar implementations, which communicate across different nodes and connections.
Also, I would expect that much of the time, the core functionality of this type of queue could be provided by a language's built-in data structures (such as Java's queue implementations). But if you want a mature out-of-the-box queueing solution for your application to use internally, then maybe it can be a suitable option.
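As an illustration of that last point, a purely in-process work queue can be built from java.util.concurrent alone (a sketch; the uppercasing step is just a stand-in for real work items being processed):

```java
import java.util.List;
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.CopyOnWriteArrayList;
import java.util.concurrent.LinkedBlockingQueue;

public class InProcessWorkQueue {
    // The application enqueues its own work items and a worker thread
    // drains them - no broker, no connection, no declared queue.
    public static List<String> process(List<String> items) throws InterruptedException {
        BlockingQueue<String> queue = new LinkedBlockingQueue<>(items);
        List<String> results = new CopyOnWriteArrayList<>();
        Thread worker = new Thread(() -> {
            String item;
            while ((item = queue.poll()) != null) {
                results.add(item.toUpperCase()); // stand-in for real work
            }
        });
        worker.start();
        worker.join();
        return results;
    }
}
```

Of course this buys none of RabbitMQ's persistence, routing, or management features, which is exactly the trade-off: for work that never leaves the process, the built-in structures may be all you need.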
Setting up a CMS consumer with a listener involves two separate calls: first, acquiring a consumer:
cms::MessageConsumer* cms::Session::createConsumer( const cms::Destination* );
and then, setting a listener on the consumer:
void cms::MessageConsumer::setMessageListener( cms::MessageListener* );
Could messages be lost if the implementation subscribes to the destination (and receives messages from the broker/router) before the listener is activated? Or are such messages queued internally and delivered to the listener upon activation?
Why isn't there an API call to create the consumer with a listener as a construction argument? (Is it because the JMS spec doesn't have it?)
(Addendum: this is probably a flaw in the API itself. A more logical order would be to instantiate a consumer from a session, and have a cms::Consumer::subscribe( cms::Destination*, cms::MessageListener* ) method in the API.)
I don't think the API is flawed, necessarily. Obviously it could have been designed differently, but I believe the solution to the problem you describe comes from the start method on the Connection object (inherited via Startable). The documentation for Connection states:
A CMS client typically creates a connection, one or more sessions, and a number of message producers and consumers. When a connection is created, it is in stopped mode. That means that no messages are being delivered.
It is typical to leave the connection in stopped mode until setup is complete (that is, until all message consumers have been created). At that point, the client calls the connection's start method, and messages begin arriving at the connection's consumers. This setup convention minimizes any client confusion that may result from asynchronous message delivery while the client is still in the process of setting itself up.
A connection can be started immediately, and the setup can be done afterwards. Clients that do this must be prepared to handle asynchronous message delivery while they are still in the process of setting up.
This is the same pattern that JMS follows.
In any case I don't think there's any risk of message loss regardless of when you invoke start(). If the consumer is using an auto-acknowledge mode then messages should only be automatically acknowledged once they are delivered synchronously via one of the receive methods or asynchronously through the listener's onMessage. To do otherwise would be a bug in my estimation. I've worked with JMS for the last 10 years on various implementations and I've never seen any kind of condition where messages were lost related to this.
If you want to add consumers after you've already invoked start() you could certainly call stop() first, but I don't see any problem with simply adding them on the fly.
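The stopped-until-start() behavior quoted above can be sketched with a toy model (illustrative Java, not the actual CMS/JMS implementation): deliveries that arrive before start() is called are buffered rather than dropped, so nothing is lost while consumers are still being wired up.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.function.Consumer;

public class ToyConnection {
    private final List<String> buffered = new ArrayList<>();
    private final List<Consumer<String>> listeners = new ArrayList<>();
    private boolean started = false;

    // The "broker" pushes a message; while stopped it is only buffered.
    public synchronized void deliver(String msg) {
        if (started) listeners.forEach(l -> l.accept(msg));
        else buffered.add(msg);
    }

    public synchronized void addListener(Consumer<String> l) {
        listeners.add(l);
    }

    // start() flushes everything that arrived while the connection
    // was in stopped mode, then switches to live delivery.
    public synchronized void start() {
        started = true;
        for (String msg : buffered) listeners.forEach(l -> l.accept(msg));
        buffered.clear();
    }
}
```

The point of the sketch is the ordering guarantee: a listener registered any time before start() sees every message, including ones that arrived before it was attached.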
When calling singleRequest, how can one customize the execution context that is used by the connection pool?
I took a brief look at the code, and a call to singleRequest results in a message being sent to the PoolMasterActor, which in turn sends a message to the pool interface actor.
Is each connection blocking or non-blocking?
Which context is used for the connection pool? (I want to make sure that my HTTP requests don't block all the threads)
If you check the singleRequest signature, it requires an implicit Materializer (and therefore an ActorSystem and its dispatchers) to run the underlying HTTP infrastructure, which is based on Akka Streams. More detail on how materializers spawn threads under the hood can be found in the docs and this blog post.
Going back to your questions:
The whole Akka-HTTP infrastructure is inherently non-blocking (as it's based on Akka Streams - which adheres to the Reactive Streams spec and is based on Akka Actors).
The threading used by the singleRequest call inherits from the ActorSystem dispatcher used down the line. Unless you do anything specific, you will end up using your system's default dispatcher. This is a reasonable choice in many cases when you are writing an Akka HTTP client.
In case you really need your materializer to use a custom dispatcher you can achieve this by customizing your ActorMaterializerSettings, e.g.
implicit val materializer = ActorMaterializer(
  ActorMaterializerSettings(actorSystem).withDispatcher("my-custom-dispatcher")
)
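For that to work, a dispatcher with the same name must also be defined in the actor system's configuration. A typical application.conf entry looks like the following (the executor type and sizing numbers here are illustrative; tune them for your workload):

```
my-custom-dispatcher {
  type = Dispatcher
  executor = "fork-join-executor"
  fork-join-executor {
    parallelism-min = 2
    parallelism-factor = 2.0
    parallelism-max = 8
  }
  throughput = 100
}
```

Keeping HTTP work on its own dispatcher like this is also how you isolate it so that slow requests cannot starve the default dispatcher's threads.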
How can I make the WCF server instance (the instance of the class in the .svc.cs / .svc.vb file) stay alive between requests?
It's a stateless, read-only type of service: I'm fine with different clients reusing the same instance. However, it's not thread-safe: I don't want two threads to execute a method on this instance concurrently.
Ideally, what I'm looking for is that WCF manages a "worker pool" of these instances. Say, 10. New request comes in: fetch an instance, handle the request. Request over, go back to the pool. Already 10 concurrent requests running? Pause the 11th until a new worker is free.
What I /don't/ want is per-client sessions. Startup for these instances is expensive, I don't want to do that every time a new client connects.
Another thing I don't want: dealing with this client-side. This is not the responsibility of the client, which should know nothing about the implementation of the server. And I can't always control that.
I'm getting a bit lost in unfamiliar terminology from the MSDN docs. I have a lot working, but this pool system I just can't seem to get right.
Do I have to create a static pool and manage it myself?
Thanks
PS: A source of confusion for me is that almost anything in this regard points toward the configuration of the bindings. Like basicHttp or wsHttp. But that doesn't sound right: this should be on a higher level, unrelated to the binding: this is about the worker managers. Or not?
In the event that you have a WCF service that centralizes business logic, provides/controls access to another “single” backend resource (e.g. data file, network socket) or otherwise contains some type of shared resource, then you most likely need to implement a singleton.
[ServiceBehavior(InstanceContextMode = InstanceContextMode.Single)]
In general, use a singleton object if it maps well to a natural singleton in the application domain. A singleton implies the singleton has some valuable state that you want to share across multiple clients. The problem is that when multiple clients connect to the singleton, they may all do so concurrently on multiple worker threads. The singleton must synchronize access to its state to avoid state corruption. This in turn means that only one client at a time can access the singleton. This may degrade responsiveness and availability to the point that the singleton is unusable as the system grows.
The singleton service is the ultimate shareable service, which has both pros (as indicated above) and cons (as implied in your question: you have to manage thread safety). When a service is configured as a singleton, all clients get connected to the same single well-known instance independently of each other, regardless of which endpoint of the service they connect to. The singleton service lives forever and is only disposed of once the host shuts down. The singleton is created exactly once, when the host is created.
https://msdn.microsoft.com/en-us/magazine/cc163590.aspx
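As for the "pause the 11th request" behavior from the question, WCF can express that cap declaratively via the serviceThrottling behavior; a config fragment along these lines (the behavior name and the limit value are illustrative) makes calls beyond the limit wait until a slot frees up:

```xml
<system.serviceModel>
  <behaviors>
    <serviceBehaviors>
      <behavior name="ThrottledService">
        <!-- Calls beyond maxConcurrentCalls are queued until a running call completes. -->
        <serviceThrottling maxConcurrentCalls="10" />
      </behavior>
    </serviceBehaviors>
  </behaviors>
</system.serviceModel>
```

Combined with a singleton (or with WCF's default per-call instancing, if your startup cost can be amortized another way), this gives you the bounded-concurrency half of the worker-pool behavior without any client-side involvement.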
Does DMLC create separate threads for each concurrent consumer? What happens under the hood? The documentation says:
Actual MessageListener execution happens in asynchronous work units which are created through Spring's TaskExecutor abstraction. By default, the specified number of invoker tasks will be created on startup, according to the "concurrentConsumers" setting.
I am not able to understand this: are these tasks executed in parallel? If so, what are the default limits (e.g. thread count)?
Thanks!
Yes, a separate thread is used for each consumer (obtained from the task executor). By default, a SimpleAsyncTaskExecutor is used, and the thread is destroyed when the consumer is stopped. There is no thread limit beyond the container's concurrency settings.
If you inject a different kind of task executor (such as a ThreadPoolTaskExecutor) you must make sure it has enough available threads to support your container's concurrency settings. Container threads are generally long-lived.
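The undersizing risk can be demonstrated with plain java.util.concurrent (a sketch, not Spring's ThreadPoolTaskExecutor): when there are fewer pool threads than long-lived consumer tasks, the surplus tasks simply sit in the executor's queue and never start.

```java
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.atomic.AtomicInteger;

public class PoolSizing {
    // Submit 'consumers' long-lived tasks to a pool of 'poolSize' threads
    // and report how many of them actually started running.
    public static int startedConsumers(int poolSize, int consumers) throws Exception {
        ExecutorService pool = Executors.newFixedThreadPool(poolSize);
        AtomicInteger started = new AtomicInteger();
        CountDownLatch release = new CountDownLatch(1);
        for (int i = 0; i < consumers; i++) {
            pool.submit(() -> {
                started.incrementAndGet(); // this consumer got a thread
                release.await();           // simulate a long-lived consumer loop
                return null;
            });
        }
        Thread.sleep(200);                 // give queued tasks a chance to start
        int result = started.get();
        release.countDown();               // let the running tasks finish
        pool.shutdown();
        return result;
    }
}
```

Because container consumer threads are long-lived (each one parks inside its receive loop), an executor with fewer threads than the concurrency setting leaves some consumers permanently starved rather than merely slowed down.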