We have the [Admin] [Monitoring] [Policymaker] [Management] [Impersonator] [None] tags available for users in RabbitMQ.
In the RabbitMQ management UI, I need to create a user who has read-only access to queues (via the UI), but cannot create new queues or manipulate exchanges. How can I do it?
Thanks
See the RabbitMQ access control documentation, and in particular the section on permissions.
In your case the following should be enough:
The user myuser can then consume only from queues whose names start with myqueue.
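As a sketch, the permission grant could look like this with rabbitmqctl (assuming the default vhost "/"; adjust the regex to your queue naming):

```shell
# configure pattern ""          : myuser cannot create or delete queues/exchanges
# write pattern     ""          : myuser cannot publish
# read pattern "^myqueue.*"     : myuser can consume only from queues starting with "myqueue"
rabbitmqctl set_permissions -p / myuser "" "" "^myqueue.*"
```

The three patterns are configure, write, and read, in that order; an empty pattern matches no resource at all, which is what blocks queue creation.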
EDIT
If you try to create a queue as the user myuser, the broker rejects the operation with an access-refused error, because myuser has no configure permission.
I am thinking about how to remove duplication for a subscriber cluster in pub/sub. For example:
There is a service called email, which should send a welcome email after a user signs up. Using pub/sub, the email service listens for an event called "signedUp", which is triggered each time a user signs up. But what if I run two email service instances for load balancing? Without any special effort, I think two welcome emails will be sent out. How do I solve this issue?
I prefer Redis as the pub/sub server for simplicity, or RabbitMQ if Redis doesn't work out.
I don't think it is possible in Redis, but in RabbitMQ it is. Let me explain below:
RabbitMQ has a concept called an 'exchange', separate from queues. The server publishes a message to an exchange, and clients create queues that bind to that exchange. Instances of one service can therefore all bind the same queue to the exchange; the exchange then delivers each message to that queue once, so it is handled by only one instance.
Account service:
await channel.assertExchange('signedUp', 'fanout');
channel.publish('signedUp', '', Buffer.from(message));
Email service:
await channel.assertExchange('signedUp', 'fanout');
const { queue } = await channel.assertQueue('email');
await channel.bindQueue(queue, 'signedUp', ''); // bind this queue to the exchange
channel.consume(queue, logMessage);
By giving the queue a fixed name in the email service, no matter how many email service instances are started, each published message (signedUp in this case) will be handled by one and ONLY ONE instance.
I have created extended events in SQL Server 2012. Everything is working fine.
Now, when an event occurs (for example a deadlock), I want it to send mail to a given mail id.
Is this possible with extended events?
There is a very interesting article about this; basically you need to:
Enable service broker on the database.
Create a service broker queue to receive the event notification messages.
Create a service broker service to deliver event notification messages.
Create a service broker route to route the event notification message to the service broker queue.
Create an event notification on the deadlock event to create messages and send them to the service broker service.
Through service broker, a stored procedure can be written that responds to deadlock events. Event notifications allow deadlock graphs to be transformed, stored, and sent wherever they need to go. The example in the article then shows how to:
Store the deadlock graph in a table.
Retrieve the cached plans associated with the deadlock in another table.
Email the deadlock graph to the DBA team.
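A rough T-SQL sketch of those setup steps (object names such as DeadlockQueue and the database/mail names are illustrative, not from the article; see the linked PDF for the full, tested version):

```sql
-- 1. Enable service broker on the database (illustrative database name).
ALTER DATABASE MyDatabase SET ENABLE_BROKER;

-- 2. Queue to receive the event notification messages.
CREATE QUEUE DeadlockQueue;

-- 3. Service that delivers event notification messages to the queue,
--    using the built-in event notification contract.
CREATE SERVICE DeadlockService
    ON QUEUE DeadlockQueue
    ([http://schemas.microsoft.com/SQL/Notifications/PostEventNotification]);

-- 4. Route the event notification messages to the service broker queue.
CREATE ROUTE DeadlockRoute
    WITH SERVICE_NAME = 'DeadlockService', ADDRESS = 'LOCAL';

-- 5. Fire a message into the service on every deadlock graph event.
CREATE EVENT NOTIFICATION DeadlockNotification
    ON SERVER
    FOR DEADLOCK_GRAPH
    TO SERVICE 'DeadlockService', 'current database';

-- A stored procedure activated on the queue can then RECEIVE the XML
-- deadlock graph, store it in a table, and mail it with sp_send_dbmail,
-- for example (profile and recipients are illustrative):
-- EXEC msdb.dbo.sp_send_dbmail @profile_name = 'DBA',
--     @recipients = 'dba@example.com', @subject = 'Deadlock detected', ...;
```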
You can find the article with the examples on this link:
http://sqlmag.com/site-files/sqlmag.com/files/archive/sqlmag.com/content/content/142603/wpd-sql-extevtandnotif-us-sw-01112012_1.pdf
Pages of reference: 9-13.
How should I set up EventStore's RavenPersistence in a multi-tenant application?
I have an Azure worker role that processes commands received through service bus.
Each message may belong to a different tenant. The actual tenant is sent in the message header, which means that I know which database to use only after I receive each message.
I'm using CommonDomain so my command handlers have IRepository injected.
Right now I build a new store while processing each message (I set DefaultDatabase), but I have a feeling this may not be the most optimal way.
Is there a way to create a single event store and then just switch databases?
If not, can I cache the stores for each tenant?
Do you know about any multi-tenant sample that uses EventStore with RavenDB?
We do exactly the same: spawn a new instance of EventStore for every request. JOliver EventStore was designed without multi-tenancy support in mind, so this is the only way...
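That said, if the wire-up cost per message bothers you, one thing the library does not rule out is caching one store per tenant yourself. A sketch, assuming the same wire-up calls the asker already uses (the ConcurrentDictionary caching is our own idea, not an EventStore feature, and you should verify that a store instance is safe to share across threads in your version):

```csharp
using System.Collections.Concurrent;
using EventStore; // JOliver EventStore

public static class TenantStores
{
    // One IStoreEvents per tenant, built lazily on first use.
    private static readonly ConcurrentDictionary<string, IStoreEvents> Stores =
        new ConcurrentDictionary<string, IStoreEvents>();

    // The tenant id comes from the service bus message header.
    public static IStoreEvents For(string tenant)
    {
        return Stores.GetOrAdd(tenant, t =>
            Wireup.Init()
                  .UsingRavenPersistence("Raven") // connection string name (assumed)
                  .DefaultDatabase(t)             // one Raven database per tenant
                  .Build());
    }
}
```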
I am quite new to WCF. I followed a tutorial on how to use internal endpoints (WCF) for role-to-role communication. Link for the tutorial
It actually creates multiple instances of a worker role, and the instances poke each other.
The code is
foreach (var ep in endPoints)
{
    IService1 worker = WorkerRole.factory.CreateChannel(
        new EndpointAddress(string.Format("net.tcp://{0}/Service1", ep.IPEndpoint)));
    try
    {
        Trace.WriteLine(worker.SayHello(currentInstance.Id.ToString()), "Information");
        ((ICommunicationObject)worker).Close();
    }
    catch (Exception e)
    {
        Trace.TraceError("unable to poke worker role instance '{0}'. {1}", ep.RoleInstance.Id, e.Message);
        ((ICommunicationObject)worker).Abort();
    }
}
But I want a worker role to wait until it is poked by another worker role. Say, for example, there are 3 worker roles: worker roles 2 and 3 should wait until they are poked by worker role 1.
Can anyone tell me how to do that?
To do this directly, see http://blog.structuretoobig.com/post/2010/02/03/Windows-Azure-Role-Communication.aspx. Using the Azure Service Bus, see http://blogs.msdn.com/b/appfabriccat/archive/2010/09/30/implementing-reliable-inter-role-communication-using-windows-azure-appfabric-service-bus-observer-pattern-amp-parallel-linq.aspx. Alternatively, you can use an Azure queue to communicate between roles, or the Azure Caching Service (http://msdn.microsoft.com/en-us/library/windowsazure/gg278356.aspx).
I think I would architect this slightly differently.
Instead of having worker roles expose WCF endpoints and send messages between them, it may be neater to use queues.
Messages can be posted to queues, then picked up and processed by other worker roles. This introduces a certain amount of durability: if a worker role that is supposed to receive a message is down for any reason, it can carry on processing the messages on its queue when it comes back up. Also, any unhandled exception that happens while processing a message means the message reappears on the queue after a certain timeout period. If your app/site really takes off, you can add additional instances of those worker roles to process the messages on the queues more quickly.
Therefore, by using queues you gain a certain amount of additional durability, and it's easier to scale out later.
There is a good intro to using queues on the Developer Fusion website.
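A minimal sketch of the queue approach (the queue name "pokes" and the setting name are illustrative; namespaces and some method names vary by storage SDK version, e.g. CreateIfNotExist() in the 1.x client):

```csharp
using System;
using System.Diagnostics;
using System.Threading;
using Microsoft.WindowsAzure;                 // CloudStorageAccount (older SDKs)
using Microsoft.WindowsAzure.ServiceRuntime;  // RoleEnvironment
using Microsoft.WindowsAzure.Storage.Queue;   // CloudQueue types (2.x SDK)

public static class Pokes
{
    static CloudQueue GetQueue()
    {
        var account = CloudStorageAccount.Parse(
            RoleEnvironment.GetConfigurationSettingValue("StorageConnectionString"));
        var queue = account.CreateCloudQueueClient().GetQueueReference("pokes");
        queue.CreateIfNotExists();
        return queue;
    }

    // Worker role 1: post a poke instead of calling a WCF endpoint.
    public static void Poke(string instanceId)
    {
        GetQueue().AddMessage(new CloudQueueMessage("poke from " + instanceId));
    }

    // Worker roles 2 and 3: block in Run() until a poke arrives. A message
    // stays invisible while being handled and reappears after the visibility
    // timeout if the role crashes before DeleteMessage.
    public static void WaitForPoke()
    {
        var queue = GetQueue();
        while (true)
        {
            var msg = queue.GetMessage(TimeSpan.FromSeconds(30));
            if (msg == null) { Thread.Sleep(1000); continue; }
            Trace.WriteLine("poked: " + msg.AsString, "Information");
            queue.DeleteMessage(msg);
            return;
        }
    }
}
```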
Every time the client/browser connects to the Mochiweb server, it creates a new Loop process, doesn't it? So if I want to transfer a message from one client to another (a typical chat system), should I use the self() of each Loop to store all connected clients' PIDs?
If something (or everything) is wrong so far, please explain briefly how the system works: where is the server process and where is the client process?
How do I send a message to a client's Loop process using its PID? I mean, where do I put the "receive" in the Loop?
Here's a good article about a Mochiweb web chat implementation. HTTP clients don't have PIDs, as HTTP is a stateless protocol. You can use cookies to connect a request to a unique visitor of the chat room.
First, do your research right. Check out this article, and this one, and then this last one.
Let the mochiweb processes bring chat data into your other application server (it could be a gen_server, a worker in your OTP app with many supervisors, other distributed workers, etc.). You should not depend on the PID of the mochiweb process. Have another way of uniquely identifying your users: cookies, session ids, auth tokens, etc., something managed only by your application. Let the mochiweb processes just deliver chat data to your servers as soon as it is available. You could do some kind of queuing in mnesia, where every user has a message queue into which other users post chat messages; the mochiweb processes then just keep asking mnesia whether there is a message available for the user on each connection. In summary, it will depend on the chat methodology (HTTP long polling/COMET, REST, server push/keep-alive connections, and so on). Just keep it fault tolerant, and do not involve the mochiweb processes in the chat engine itself: let mochiweb be only the transport, and do your chat jungle behind it!
You can use several data structures to avoid using PIDs for identity. Take the example of a queue(). Imagine you have a replicated mnesia database with a RAM table in which you have implemented a uniquely identifiable queue() per user. A (mochiweb) process holding a connection to the user only holds the identity of that user's session. It then uses this identity to keep checking that user's queue() in mnesia at regular intervals (if you intend to do it this way, keeping mochiweb processes alive for as long as the user's session). This means that no matter which process PID a user is connected through, as long as the process has the user's identity, it can fetch (read) messages from his message queue(). It also makes it possible for a user to have multiple client sessions. The same process can use this identity to dump messages from this user into other users' queues().
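A minimal sketch of that mnesia-backed per-session inbox (the module, record, and function names are made up for illustration; error handling and schema setup are omitted):

```erlang
-module(chat_inbox).
-export([init_table/0, post/2, fetch/1]).

%% One row per session: the session id and a queue() of pending messages.
-record(chat_inbox, {session_id, queue = queue:new()}).

init_table() ->
    mnesia:create_table(chat_inbox,
                        [{ram_copies, [node()]},
                         {attributes, record_info(fields, chat_inbox)}]).

%% Another user (via whatever mochiweb process serves them) posts a
%% message into this session's queue.
post(SessionId, Msg) ->
    mnesia:transaction(
      fun() ->
              Q = case mnesia:read(chat_inbox, SessionId) of
                      [#chat_inbox{queue = Q0}] -> Q0;
                      []                        -> queue:new()
                  end,
              mnesia:write(#chat_inbox{session_id = SessionId,
                                       queue = queue:in(Msg, Q)})
      end).

%% The mochiweb process serving this session drains the inbox on each
%% poll; any process holding the session id can do this, regardless of
%% its own PID.
fetch(SessionId) ->
    {atomic, Msgs} =
        mnesia:transaction(
          fun() ->
                  case mnesia:read(chat_inbox, SessionId) of
                      [#chat_inbox{queue = Q}] ->
                          mnesia:delete({chat_inbox, SessionId}),
                          queue:to_list(Q);
                      [] -> []
                  end
          end),
    Msgs.
```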