How to create a listener for RBlockingQueue using Redisson?

In one of my services I am adding to the queue:
RBlockingQueue<String> queue = redissonClient.getBlockingQueue("ABC");
queue.add(receivedTask.toString());
And in the second service I am connecting to the same Redis instance and want to read/pop from the queue as soon as a new element gets added by the first service, something like this:
RBlockingQueue<String> queue = redisClient.getRedissonClient().getBlockingDeque("ABC");
System.out.println("received: " + queue.poll(0, TimeUnit.SECONDS));
I was earlier using RTopic and it was working fine, but the use case has changed and I now have to use RQueue instead. I'm not sure what I am doing wrong here.

Actually, I found what I was doing wrong.
I should be using subscribeOnElements(): it registers a listener that is notified with each new element as it is polled from the queue.
queue.subscribeOnElements((msg) -> {
    // handle the newly delivered element here
});
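For completeness, here is a minimal end-to-end sketch (written in Kotlin for brevity; the same Redisson calls are available from Java), assuming a locally running Redis on the default port and the queue name "ABC" from the question:

import org.redisson.Redisson
import org.redisson.api.RBlockingQueue
import org.redisson.config.Config

fun main() {
    // assumed local Redis instance; adjust the address for your environment
    val config = Config()
    config.useSingleServer().setAddress("redis://127.0.0.1:6379")
    val redisson = Redisson.create(config)

    val queue: RBlockingQueue<String> = redisson.getBlockingQueue("ABC")

    // second service: the listener is invoked with each element as it is taken off the queue
    queue.subscribeOnElements { msg -> println("received: $msg") }

    // first service: adding an element triggers the listener above
    queue.add("some task")
}

The subscribeOnElements callback also replaces the manual queue.poll(0, TimeUnit.SECONDS) call from the question, which returns immediately (usually with null) because the timeout is zero.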

Concept of vert.x concerning a webserver?

I don't quite get how Vert.x is applied to a webserver.
The concept I know for a webserver is the thread-based one.
You start your webserver, which then is running.
Then for every client that connects, you get a socket, which is then passed to its own thread handler.
The thread handler then processes the tasks for this specific socket.
So it is clearly defined which thread is doing the work for which socket.
However for every socket you need a new thread, which is expensive in the long run for many sockets.
Then there is the event-based concept that vert.x supplies.
As far as I have understood it, it works roughly like this:
The Vertx instance deploys Verticles.
Verticles run in background threads, but not every Verticle has its own thread. As an example, there could be 1000 verticles deployed in a Vertx instance, but the Vertx instance handles only 8 threads (number of cores * 2).
Then there are the event loops. I'm not sure how they relate to verticles. I've read that every verticle has 2 event loops, but I don't really know how that works.
As a webserver example:
class WebServer : AbstractVerticle() {
    lateinit var server: HttpServer

    override fun start() {
        server = vertx.createHttpServer(HttpServerOptions().setPort(1234).setHost("localhost"))
        var router = Router.router(vertx);
        router.route("/test").handler { routingContext ->
            var response = routingContext.response();
            response.end("Hello from my first HttpServer")
        }
        server.requestHandler(router).listen()
    }
}
This WebServer can be deployed multiple times in a Vertx instance, and it seems each WebServer instance gets its own thread. But when I connect 100 clients and reply with a simple response, the clients appear to be handled synchronously: when I put a Thread.sleep statement in each server handler, the clients get their responses one after another, one per second. I would expect all server handlers to start their 1-second sleep and then reply to all clients at almost the same time.
This is the code to start 100 clients:
fun main() {
    Vertx.vertx().deployVerticle(object : AbstractVerticle() {
        override fun start() {
            for (i in 0..100)
                MyWebClient(vertx)
        }
    })
}
class MyWebClient(val vertx: Vertx) {
    init {
        println("Client starting ...")
        val webClient = WebClient.create(vertx, WebClientOptions().setDefaultPort(1234).setDefaultHost("localhost"))
        webClient.get("/test").send { ar ->
            if (ar.succeeded()) {
                val response: HttpResponse<Buffer> = ar.result()
                println("Received response with status code ${response.statusCode()} + ${response.body()}")
            } else {
                println("Something went wrong " + ar.cause().message)
            }
        }
    }
}
Does anybody know an explanation for this?
There are some major issues there.
When you do this:
class WebServer: AbstractVerticle() {
lateinit var server: HttpServer
override fun start() {
server = vertx.createHttpServer(HttpServerOptions().setPort(1234).setHost("localhost"))
...
}
}
Then deploy it with something like this:
vertx.deployVerticle(WebServer::class.java.name, DeploymentOptions().setInstances(4))
You'll get 4 verticles, but only a single one of them will actually listen on the port. So you're not getting any more concurrency.
Second, when you use Thread.sleep in your Vert.x code, you're blocking the event loop thread.
Third, your client test is incorrect. Creating a WebClient is very expensive, so by creating them one after the other you're actually issuing requests very slowly. If you really want to load-test your web application, use something like https://github.com/wg/wrk
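And if you genuinely need to run blocking code (like that Thread.sleep), don't run it on the event loop; hand it off with executeBlocking. A minimal Kotlin sketch inside the route handler from the question, assuming Vert.x 3.8+ where executeBlocking passes a Promise (older releases pass a Future):

router.route("/test").handler { routingContext ->
    // offload the blocking call to a worker thread so the event loop stays responsive
    vertx.executeBlocking<String>({ promise ->
        Thread.sleep(1000)                       // stands in for any blocking work
        promise.complete("Hello after 1 second")
    }, { ar ->
        // back on the event loop once the blocking work has finished
        routingContext.response().end(ar.result())
    })
}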
The issue with your code is that by default Vert.x only uses a maximum of one thread per verticle (if there are more verticles than available threads, a single thread has to handle multiple verticles).
Therefore, if you perform 100 requests against a single instance of a single verticle, the requests are processed by a single thread.
To solve your issue, you should deploy multiple instances of your verticle, i.e.
vertx.deployVerticle(MainVerticle::class.java.name, DeploymentOptions().setInstances(4))
When doing that, 4 responses will always arrive at nearly the same time, because 4 instances of the verticle are running and thus 4 threads are utilized.
In previous versions of Vert.x, you could also simply configure multi-threading for a verticle if you didn't want to set a specific number of instances.
vertx.deployVerticle(MainVerticle::class.java.name, DeploymentOptions().setWorker(true).setMultiThreaded(true))
However, this feature has been deprecated and replaced with custom worker pools.
For more information concerning this topic, I encourage you to take a look at the Vert.x Core documentation for Kotlin.
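For reference, a hedged sketch of that replacement approach: deploying a worker verticle backed by a named custom worker pool (Vert.x 3.x deployment options; the pool name and sizes here are illustrative):

vertx.deployVerticle(MainVerticle::class.java.name,
    DeploymentOptions()
        .setWorker(true)                      // run on worker threads instead of the event loop
        .setWorkerPoolName("my-worker-pool")  // dedicated, named worker pool
        .setWorkerPoolSize(4)                 // size of that pool
        .setInstances(4))                     // combine with multiple instances for parallelism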

Consumable channel

Use Case
Android fragment that consumes items of T from a ReceiveChannel<T>. Once consumed, the Ts should be removed from the ReceiveChannel<T>.
I need a ReceiveChannel<T> that supports consuming items from it. It should function as a FIFO queue.
I currently attach to the channel from my UI like:
launch(uiJob) { channel.consumeEach{ /** ... */ } }
I detach by calling uiJob.cancel().
Desired behavior:
val channel = Channel<Int>(UNLIMITED)
channel.send(1)
channel.send(2)
// ui attaches, receives `1` and `2`
channel.send(3) // ui immediately receives `3`
// ui detaches
channel.send(4)
channel.send(5)
// ui attaches, receiving `4` and `5`
Unfortunately, when I detach from the channel, the channel is closed. This causes .send(4) and .send(5) to throw exceptions because the channel is closed. I want to be able to detach from the channel and have it remain usable. How can I do this?
Channel<Int>(UNLIMITED) fits my use case perfectly, except that it closes the channel when it is unsubscribed from. I want the channel to remain open. Is this possible?
Channel.consumeEach method calls Channel.consume method which has this line in documentation:
Makes sure that the given block consumes all elements from the given channel by always invoking cancel after the execution of the block.
So the solution is to simply not use consume[Each]. For example you can do:
launch(uiJob) { for (it in channel) { /** ... */ } }
You can use BroadcastChannel. However, you need to specify a limited size (such as 1), as UNLIMITED and 0 (for rendez-vous) are not supported by BroadcastChannel.
You can also use ConflatedBroadcastChannel which always gives the latest value it had to new subscribers, like LiveData is doing.
BTW, is it a big deal if your new Fragment instance receives only the latest value? If not, then just go with ConflatedBroadcastChannel. Otherwise, none of the BroadcastChannels may suit your use case (try it and see if you get the behavior you're looking for).
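To illustrate that suggestion, a minimal Kotlin sketch (assuming a reasonably recent kotlinx.coroutines; note that BroadcastChannel and ConflatedBroadcastChannel have since been deprecated in favour of SharedFlow/StateFlow). The names are illustrative:

import kotlinx.coroutines.CoroutineScope
import kotlinx.coroutines.channels.ConflatedBroadcastChannel
import kotlinx.coroutines.launch

// each UI attach opens its own subscription; cancelling it does not close the source
fun CoroutineScope.attachUi(source: ConflatedBroadcastChannel<Int>) = launch {
    for (value in source.openSubscription()) {
        // update the UI; a fresh subscriber immediately gets the latest value, like LiveData
    }
}

fun demo(scope: CoroutineScope) {
    val channel = ConflatedBroadcastChannel<Int>()
    val uiJob = scope.attachUi(channel)
    channel.offer(1)
    channel.offer(2)
    uiJob.cancel()      // "detach": only the subscriber's job/subscription is cancelled
    channel.offer(3)    // the channel itself stays open and usable
}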

Read SQL Server Broker messages and publish them using NServiceBus

I am very new to NServiceBus, and in one of our projects we want to accomplish the following:
1. Whenever table data is modified in SQL Server, construct a message and insert it into a SQL Server broker queue.
2. Read the broker queue message using NServiceBus.
3. Publish the message again as another event so that other subscribers can handle it.
Now it is point 2 that I don't have much of a clue about how to get done.
I have referred to the following posts, after which I was able to get the message into the broker queue, but I was unable to integrate this with NServiceBus in our project, as the NServiceBus libraries used there are of an older version and many of the methods are deprecated. So using them with the current versions is getting very troublesome, or maybe I was just going about it the wrong way.
http://www.nullreference.se/2010/12/06/using-nservicebus-and-servicebroker-net-part-2
https://github.com/jdaigle/servicebroker.net
Any help on the correct way of doing this would be invaluable.
Thanks.
I'm using the current version of NServiceBus (5), VS2013 and SQL Server 2008. I created a Database Change Listener using this tutorial, which uses SQL Server Service Broker and SqlDependency to monitor the changes to a specific table. (NB: this may be deprecated in later versions of SQL Server.)
SQL Dependency allows you to use a broad selection of all the basic SQL functionality, although there are some restrictions that you need to be aware of. I modified the code from the tutorial slightly to provide better error information:
void NotifyOnChange(object sender, SqlNotificationEventArgs e)
{
    // Check for any errors
    if (@"Subscribe|Unknown".Contains(e.Type.ToString())) { throw _DisplayErrorDetails(e); }

    var dependency = sender as SqlDependency;
    if (dependency != null) dependency.OnChange -= NotifyOnChange;
    if (OnChange != null) { OnChange(); }
}
private Exception _DisplayErrorDetails(SqlNotificationEventArgs e)
{
    var messageMain = "useful error info";
    var messageInner = string.Format("Type:{0}, Source:{1}, Info:{2}", e.Type.ToString(), e.Source.ToString(), e.Info.ToString());
    if (@"Subscribe".Contains(e.Type.ToString()) && @"Invalid".Contains(e.Info.ToString()))
        messageInner += "\r\n\nThe subscriber says that the statement is invalid - check your SQL statement conforms to specified requirements (http://stackoverflow.com/questions/7588572/what-are-the-limitations-of-sqldependency/7588660#7588660).\n\n";
    return new Exception(messageMain, new Exception(messageInner));
}
I also created a project with a "database first" Entity Framework data model to allow me to do something with the changed data.
[The relevant part of] My NServiceBus project comprises two "Run as Host" endpoints, one of which publishes event messages. The second endpoint handles the messages. The publisher has been set up with IWantToRunWhenBusStartsAndStops, which instantiates the DBListener and passes it the SQL statement I want to run as my change monitor. The OnChange() function is passed an anonymous function to read the changed data and publish a message:
using statements

namespace Sample4.TestItemRequest
{
    public partial class MyExampleSender : IWantToRunWhenBusStartsAndStops
    {
        private string NOTIFY_SQL = @"SELECT [id] FROM [dbo].[Test] WITH(NOLOCK) WHERE ISNULL([Status], 'N') = 'N'";

        public void Start() { _StartListening(); }
        public void Stop() { throw new NotImplementedException(); }

        private void _StartListening()
        {
            var db = new Models.TestEntities();

            // Instantiate a new DBListener with the specified connection string
            var changeListener = new DatabaseChangeListener(ConfigurationManager.ConnectionStrings["TestConnection"].ConnectionString);

            // Assign the code within the braces to the DBListener's OnChange event
            changeListener.OnChange += () =>
            {
                /* START OF EVENT HANDLING CODE */

                // This uses LINQ against the EF data model to get the changed records
                IEnumerable<Models.TestItems> _NewTestItems = DataAccessLibrary.GetInitialDataSet(db);
                while (_NewTestItems.Count() > 0)
                {
                    foreach (var qq in _NewTestItems)
                    {
                        // Do some processing, if required
                        var newTestItem = new NewTestStarted() { ... set properties from qq object ... };
                        Bus.Publish(newTestItem);
                    }
                    // Because there might be a number of new rows added, I grab them in small batches until finished.
                    // Probably better to use Rx to do this, but this will do for proof of concept
                    _NewTestItems = DataAccessLibrary.GetNextDataChunk(db);
                }

                // Restart the listener - the OnChange notification is a one-shot (see note below)
                changeListener.Start(string.Format(NOTIFY_SQL));

                /* END OF EVENT HANDLING CODE */
            };

            // Now everything has been set up.... start it running.
            changeListener.Start(string.Format(NOTIFY_SQL));
        }
    }
}
Important: the firing of the OnChange event causes the listener to stop monitoring. It is basically a one-shot notifier. After you have handled the event, the last thing to do is restart the DBListener. (You can see this in the line preceding the END OF EVENT HANDLING comment.)
You need to add a reference to System.Data and possibly System.Data.DataSetExtensions.
The project at the moment is still proof of concept, so I'm well aware that the above can be somewhat improved. Also bear in mind I had to strip out company specific code, so there may be bugs. Treat it as a template, rather than a working example.
I also don't know if this is the right place to put the code - that's partly why I'm on StackOverflow today; to look for better examples of ServiceBus host code. Whatever the failings of my code, the solution works pretty effectively - so far - and meets your goals, too.
Don't worry too much about the ServiceBroker side of things. Once you have set it up, per the tutorial, SQLDependency takes care of the details for you.
The ServiceBroker Transport is very old and not supported anymore, as far as I can remember.
A possible solution would be to "monitor" the interesting tables from the endpoint code using something like a SqlDependency (http://msdn.microsoft.com/en-us/library/62xk7953(v=vs.110).aspx) and then push messages into the relevant queues.

ActiveMQ Dynamic Queues have dynamicQueues in their name

I am using dynamic queues for testing with names like dynamicQueues/Foo, but in the web console I am seeing the queue names as dynamicQueues/Foo rather than just Foo.
Other code (not ours) uses the same dynamicQueues/Foo, but the queue name on the console is just Foo, so things are misaligned, so to speak.
I have followed the instructions here: http://activemq.apache.org/jndi-support.html
I am confused about whether the queue name reported in the web console should include dynamicQueues or not - I'm finding it hard to debug our problem as a result.
You should see Foo in the console window, yes.
This code will produce a message on FOO and show the queue as FOO in the web console (ActiveMQ 5.6.0):
Properties props = new Properties();
props.setProperty(Context.INITIAL_CONTEXT_FACTORY,"org.apache.activemq.jndi.ActiveMQInitialContextFactory");
props.setProperty(Context.PROVIDER_URL,"tcp://127.0.0.1:61616");
javax.naming.Context ctx = new InitialContext(props);
ConnectionFactory cf = (ConnectionFactory)ctx.lookup("ConnectionFactory");
Connection conn = cf.createConnection();
Destination dest = (Destination)ctx.lookup("dynamicQueues/FOO");
Session s = conn.createSession(false, Session.AUTO_ACKNOWLEDGE);
MessageProducer prod = s.createProducer(dest);
prod.send(s.createTextMessage("Hello, World!"));
Are you sure you are using JNDI to lookup the queue and that you did not configure anything in jndi.properties?
(I can't respond to the comment above so sorry about answering here.)
This is my problem (in Scala):
This works, queue is called FOO
val destination = JmsConnectionFactory.initialContext.lookup("dynamicQueues/FOO").asInstanceOf[Destination]
val consumer = session.createConsumer(destination)
This doesn't, queue is called dynamicQueues/FOO
val queue = session.createQueue("dynamicQueues/FOO")
val consumer = session.createConsumer(queue)
sigh, it makes sense I guess.
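For reference, the JNDI support page linked above also lets you declare an explicit queue entry in jndi.properties, so a lookup of a plain name maps to a plain physical queue name without the dynamicQueues/ prefix (a sketch; adjust the broker URL and names to your setup):

java.naming.factory.initial = org.apache.activemq.jndi.ActiveMQInitialContextFactory
java.naming.provider.url = tcp://127.0.0.1:61616
# maps the JNDI name "FOO" to the physical queue "FOO", so ctx.lookup("FOO") resolves it
queue.FOO = FOO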

Ninject: More than one matching bindings are available

I have a dependency with a parameterized constructor. When I call the action more than once, it shows this error:
Error activating IValidationPurchaseService
More than one matching bindings are available.
Activation path:
1) Request for IValidationPurchaseService
Suggestions:
1) Ensure that you have defined a binding for IValidationPurchaseService only once.
public ActionResult Detalhes(string regionUrl, string discountUrl, DetalhesModel detalhesModel)
{
    var validationPurchaseDTO = new ValidationPurchaseDTO { ... };

    KernelFactory.Kernel.Bind<IValidationPurchaseService>().To<ValidationPurchaseService>()
        .WithConstructorArgument("validationPurchaseDTO", validationPurchaseDTO)
        .WithConstructorArgument("confirmPayment", true);

    this.ValidationPurchaseService = KernelFactory.Kernel.Get<IValidationPurchaseService>();
    ...
}
I'm not sure what you are trying to achieve with the code you cited. The error is raised because you bind the same service more than once, so when you try to resolve it, Ninject can't choose one (identical) binding over another. This is not how a DI container is supposed to be used. In your example you are not taking advantage of DI at all. You can replace your code:
KernelFactory.Kernel.Bind<IValidationPurchaseService>().To<ValidationPurchaseService>()
    .WithConstructorArgument("validationPurchaseDTO", validationPurchaseDTO)
    .WithConstructorArgument("confirmPayment", true);
this.ValidationPurchaseService = KernelFactory.Kernel.Get<IValidationPurchaseService>();
With this:
this.ValidationPurchaseService = new ValidationPurchaseService(validationPurchaseDTO: validationPurchaseDTO, confirmPayment: true);
If you could explain what you are trying to achieve by using Ninject in this scenario, the community will be able to assist further.
Your KernelFactory probably returns the same kernel (a singleton) on each successive call to the controller, which is why you add an identical binding every time you hit the URL that activates this controller. So it probably works the first time and starts failing from the second request onwards.