Is there a way I can watch for a change of leadership using Curator? When the leader changes, I need all of the followers to know that this has happened so that they can discover who the new leader is. It seems standard that the followers would want to know who the leader is, but I cannot work out how to do it.
It's possible to watch for changes by using a path cache based on the path of the leader selector. When an event occurs on this path, there are probably new nodes, fewer nodes, or a new leader.
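For instance, a minimal sketch using Curator's PathChildrenCache (assuming a started CuratorFramework named client and a started LeaderSelector named selector; "/leader/path" is a placeholder for your leader selector's path):

```java
import org.apache.curator.framework.CuratorFramework;
import org.apache.curator.framework.recipes.cache.PathChildrenCache;
import org.apache.curator.framework.recipes.cache.PathChildrenCacheEvent;

// Cache the children of the leader selector's path and react to membership changes.
PathChildrenCache cache = new PathChildrenCache(client, "/leader/path", true);
cache.getListenable().addListener((cf, event) -> {
    if (event.getType() == PathChildrenCacheEvent.Type.CHILD_ADDED
            || event.getType() == PathChildrenCacheEvent.Type.CHILD_REMOVED) {
        // A participant node appeared or disappeared, so the leader may have
        // changed; re-query it from the leader selector.
        System.out.println("Current leader: " + selector.getLeader().getId());
    }
});
cache.start();
```

Each follower can keep such a cache running and re-read the leader whenever the membership changes.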
I'm creating a plugin (targeting WebStorm) that monitors IDE usage, and I need to be notified when the user is "no longer using" the IDE, meaning they closed the IDE, closed the laptop lid, turned it off, or something along those lines.
Is there an appropriate Topic I can subscribe to in order to achieve this (or, even better, a list of ALL the topics one can subscribe to, so I can deduce what to use myself)?
See com.intellij.ide.AppLifecycleListener to be notified when the application is started or stopped.
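For example, a minimal sketch of subscribing to that listener on the application message bus (in recent IDE versions AppLifecycleListener is an interface with default methods; older versions may differ):

```java
import com.intellij.ide.AppLifecycleListener;
import com.intellij.openapi.application.ApplicationManager;

// Subscribe on the application-level message bus, e.g. from your plugin's startup code.
ApplicationManager.getApplication().getMessageBus().connect()
    .subscribe(AppLifecycleListener.TOPIC, new AppLifecycleListener() {
        @Override
        public void appClosing() {
            // The IDE is shutting down; flush any pending usage data here.
        }
    });
```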
We have around 300k items in dmi_queue_item.
If I right-click and select "destroy queue item", the row no longer appears when I query by r_object_id.
Would this mean that the file will no longer be processed by the CTS service? I need to know whether this is the way to clear up the queue for the rendition process (converting to PDF), or what the best way to clear up the queue would be.
Also, for some items/rows I get this message when doing the right-click "destroy"; what does it mean, and how can I avoid it? I'm not sure whether the item was already processed and the row no longer exists, or whether it's something else.
The dmi_queue_item table is used as a queue for all sorts of events at the Content Server.
As far as I know, Content Transformation Services uses it to read at least two types of events.
According to the Content Transformation Services Administration Guide (version 7.1, page 18), it reads dm_register_assets events and performs the configured content actions for those specific objects.
I was using CTS to generate content renditions for some objects using the dm_transcode_content event.
However, be careful when cleaning up dmi_queue_item, since it can contain many different event types. It is up to system administrators to keep this queue clean by configuring system components either to consume their events or not to generate events that are not supposed to be used.
For cleaning the queue, it is advised to use the destroy API command, though you can also try deleting rows using a DELETE DQL query. Of course, try this in a dev environment first.
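Hypothetically, the per-object destroy route looks roughly like this through the DFC in Java (assuming an authenticated IDfSession named session; the object id is a made-up placeholder):

```java
import com.documentum.fc.client.IDfPersistentObject;
import com.documentum.fc.client.IDfSession;
import com.documentum.fc.common.DfId;

// Fetch and destroy a single dmi_queue_item; this is the DFC equivalent
// of the destroy API command. The object id below is a placeholder.
IDfPersistentObject item = session.getObject(new DfId("1b0123456789abcd"));
item.destroy();
```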
You would need to look at two queues, dm_autorender_win31 and dm_mediaserver. To delete their items you would run this query:
delete dmi_queue_item objects where name = 'dm_mediaserver' or name = 'dm_autorender_win31'
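If you prefer to run that DQL from code, a minimal DFC sketch (again assuming an authenticated IDfSession named session):

```java
import com.documentum.fc.client.DfQuery;
import com.documentum.fc.client.IDfCollection;
import com.documentum.fc.client.IDfQuery;

// Execute the DELETE DQL against the repository and close the result collection.
IDfQuery query = new DfQuery();
query.setDQL("delete dmi_queue_item objects "
        + "where name = 'dm_mediaserver' or name = 'dm_autorender_win31'");
IDfCollection result = query.execute(session, IDfQuery.DF_EXEC_QUERY);
result.close();
```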
I accidentally delivered a change set in RTC which includes some additional config files containing local, system-specific configuration. Is there any way to discard those changes once delivered? I mean, the changes should not come up as incoming changes to other team members.
Please provide any pointers if you have them.
Is there any way to discard those changes once delivered?
Not exactly: once delivered, that change set will come to the other team members as incoming.
There are two solutions:
Revert the stream configuration to a state prior to your delivery. That is easy only if you are delivering baselines in addition to change sets, because you can then open the stream, click "Replace with" in the "component" section, and replace the delivered baseline with the previous one.
But... if you never delivered baselines (and delivered only change sets), this isn't easy at all.
You can try and follow "Is there a way to create a RTC snapshot or baseline based on a past date?", but that is quite tedious.
Plus, if your colleagues already accepted your change set and started delivering change sets of their own, this solution isn't recommended at all.
Or, much simpler, you create a new change set which cancels the one you just delivered.
Right-click on your component and select Show > History, then right-click on the latest change set you incorrectly delivered and select Revert.
That will create a patch.
Right-click on that patch, and select "apply to your workspace": that will create a change set which is the negative image of the one already delivered.
Deliver that new change set.
That means your colleagues will have to accept both change sets: the incorrect one, and the new one which cancels it.
This thread introduces a variation of the first alternative:
you can really remove the change set from the stream you delivered it to.
You can do this by:
discarding the change set from your local workspace
and then replacing the content of the stream with the content of your workspace for the particular component that's affected.
This is a riskier solution, because it really replaces the content of the stream with whatever you have in your workspace: it will remove anything in the stream that you don't have in your workspace. To do this:
a. Accept any incoming changes from the stream you are working with (to prevent losing anyone else's work).
b. Right click on the owning component in the Pending Changes view and select Show->History. The change set will appear in the History view.
c. Right click on the change set and choose Discard... This will discard the change set from your workspace.
Your workspace should now have all changes from the stream except the one you want to remove. You can verify this by checking that your bad change set is the only thing you see as Incoming.
d. Right click on the component and choose "Replace in [your stream name]..."
I wish to use Redis to create a system which publishes stock quote data to subscribers in an internal network. The problem is that publishing is not enough, as I need to find a way to implement an atomic "get snapshot and then subscribe" mechanism. I'm pretty new to Redis so I'm not sure my solution is the "proper way".
At any given moment each stock has an order book which contains at most 10 bids and 10 asks. The publisher receives data from the exchange and should publish it to subscribers.
While the publishing of changes in the order book can be easily done using publish and subscribe, each subscriber that connects also needs to get the snapshot of the current order book of the stock and only then subscribe to changes in the order book.
As I understand it, a Redis channel never stores messages, so the publisher also needs to maintain the complete order book in a hash key (or a sorted set; I'm not sure which is more appropriate) in addition to publishing changes.
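For instance, the publisher could keep the book in a hash and update it and publish the delta in one MULTI/EXEC transaction, so no subscriber can observe the key and the channel out of sync (a sketch using the Jedis client; the key, field, and channel names are made up for illustration):

```java
import redis.clients.jedis.Jedis;
import redis.clients.jedis.Transaction;

try (Jedis jedis = new Jedis("localhost", 6379)) {
    // Update the stored order book and publish the delta atomically:
    // no other command can interleave between the HSET and the PUBLISH.
    Transaction tx = jedis.multi();
    tx.hset("book:ACME", "bid:0", "101.5x200");   // hypothetical field encoding
    tx.publish("book-updates:ACME", "bid:0=101.5x200");
    tx.exec();
}
```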
I also understand that a Redis client cannot issue any commands except subscribing and unsubscribing once it subscribes to the first channel.
So, once the subscriber application is up, it first needs to get the key which contains the complete order book and then subscribe to changes in that book. However, this may result in a race condition: a change in the order book can be made after the client has read the key containing the current snapshot but before it has actually subscribed to changes, resulting in a change which it will never see.
As it is not possible to subscribe and then use GET on a single connection, the client application needs two connections to the Redis server. At this point I started thinking that I'm probably not doing things the proper way if I need more than one connection in the same application. Anyway, my idea is that the client will have a subscribing connection and a query connection. First, it will use the subscribing connection to subscribe to changes in the order book, but will not yet enter the loop which processes events. Then it will use the query connection to get the complete snapshot of the book. Finally, it will enter the loop which processes events; since it actually subscribed before taking the snapshot, it is guaranteed not to miss any change that occurred after the snapshot was taken.
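A sketch of that two-connection idea with the Jedis client (key and channel names are made up; a real implementation would parse and apply the updates rather than print them):

```java
import java.util.Map;
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.LinkedBlockingQueue;
import redis.clients.jedis.Jedis;
import redis.clients.jedis.JedisPubSub;

public class BookSubscriber {
    public static void main(String[] args) throws InterruptedException {
        BlockingQueue<String> buffered = new LinkedBlockingQueue<>();
        CountDownLatch subscribed = new CountDownLatch(1);

        JedisPubSub listener = new JedisPubSub() {
            @Override public void onSubscribe(String channel, int count) {
                subscribed.countDown();      // the subscription is now active
            }
            @Override public void onMessage(String channel, String message) {
                buffered.offer(message);     // buffer events until the snapshot is read
            }
        };

        // Connection 1: subscribe first (a blocking call, so it runs on its own thread).
        new Thread(() -> {
            try (Jedis subConn = new Jedis("localhost", 6379)) {
                subConn.subscribe(listener, "book-updates:ACME");
            }
        }).start();
        subscribed.await();

        // Connection 2: only after the subscription is active, fetch the snapshot.
        Map<String, String> snapshot;
        try (Jedis queryConn = new Jedis("localhost", 6379)) {
            snapshot = queryConn.hgetAll("book:ACME");
        }
        System.out.println("snapshot has " + snapshot.size() + " fields");

        // Now enter the event loop; no change made after the snapshot can be missed.
        // Changes already reflected in the snapshot may arrive again, so applying
        // an update should be idempotent (or carry a sequence number).
        while (true) {
            String update = buffered.take();
            System.out.println("apply to snapshot: " + update);
        }
    }
}
```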
Is there any better way to accomplish my goal?
I hope you have found your way already; if not, here goes a personal suggestion:
If you are in JavaScript land, I would recommend having a look at Meteor.js. It achieves roughly the goal you want to achieve: with the default setup you end up writing to MongoDB in order to "update" the GUI for the end user.
In any case, you might be interested in reading about how Meteor's DDP protocol works: https://meteorhacks.com/introduction-to-ddp/ and https://www.meteor.com/ddp
Is it possible to create endpoints dynamically at runtime? E.g., send a message to a known endpoint with details of a new endpoint, so that a network node can learn of new nodes on the fly.
NServiceBus does not support this out of the box, but if you really really want it (and you are sure that it is the right way to go), you are free to implement your own message routing and send messages explicitly to an endpoint with bus.Send(endpoint, message).
In a project I am currently involved with, we do this with great success, because it allows us to seamlessly sign services in and out of the system while it is running, resulting in zero downtime during upgrades.
It took a bit of work to get it working though, so I would only recommend this if you are certain that your requirements demand it.