Master/slave configuration in RabbitMQ

How can we implement a master/slave configuration using RabbitMQ server?
I have read in many places, and experienced it myself, that RabbitMQ nodes in a cluster can't really share the same files, except for the cookie file. The startup script itself makes sure that it creates folders and file names prefixed with "$NODE_ID$" when starting the broker, so that all the files for that node are created inside a single folder. Inside the two main folders it basically does the following:
a. db: creates a folder named "$NODE_ID$"-mnesia and creates all db files inside it.
b. log: creates files with names prefixed with "$NODE_ID$".
Even if we tweak the script so that both nodes point to the same mnesia folder, the second instance of the broker fails to start because of a Mnesia locking issue, with the following error:
{"init terminating in do_boot",
{{nocatch,{error,{cannot_start_application,mnesia,{killed,{mnesia_sup,start,[normal,[]]}}}}},[{init,start_it,1},{init,start_em,1}]}}
Crash dump was written to: erl_crash.dump init terminating in do_boot ()
All I want to know is this: in a situation where there are two nodes, 'master' and 'slave', in a cluster and the master goes down for some time, how can the slave step in during that time to receive and send messages on behalf of the master, given that sharing the database is not possible?

Take a look at the guidelines for building a highly available cluster with DRBD and Pacemaker: http://www.rabbitmq.com/pacemaker.html
However, that's a bit difficult to set up, so you might prefer to wait a few more weeks, as the next major release will include built-in support for redundant queues in clusters. See more about that in the attachment here:
http://lists.rabbitmq.com/pipermail/rabbitmq-discuss/2011-June/013304.html
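Once that support ships, the expectation (an assumption based on the linked proposal, not on released documentation; the exact argument name may differ) is that a queue can simply be declared as mirrored, so the slave node keeps a live copy of it and clients can reconnect to the slave if the master goes down. A rough sketch with the RabbitMQ .NET client, with the host name and queue name made up for illustration:

using System.Collections.Generic;
using RabbitMQ.Client;

class MirroredQueueDemo
{
    static void Main()
    {
        var factory = new ConnectionFactory { HostName = "rabbit-node1" }; // hypothetical node
        using (var connection = factory.CreateConnection())
        using (var channel = connection.CreateModel())
        {
            // Assumption: mirroring is requested per queue via an "x-ha-policy" argument,
            // so the queue's contents are replicated to the other node(s) in the cluster.
            var args = new Dictionary<string, object> { { "x-ha-policy", "all" } };
            channel.QueueDeclare("orders", true, false, false, args);
            //                    name   durable exclusive autoDelete arguments
        }
    }
}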

Related

ClearCase error, registry does not contain VOB with UUID

I am in the process of migrating a very large multisite installation to newer OS platforms. Running ClearCase 9. In one particular migration stage all the VOBs appear to have migrated correctly, ct lsvob -s -host xxxx shows no VOBs remaining on the old server, but now I am getting packets stuck in the incoming bin on that old server. I assume it has to do with devs who still had views open before the migration, but the problem is that mt lspacket is complaining that it cannot find a VOB with a single UUID in the registry. Packets are piling up, and they are all complaining about the same UUID, so I assume they are all related to one VOB. ct lsvob -uuid xxxx says it cannot find a VOB with that UUID.
How would I go about correcting this?
Looking at multitool lspacket, check whether a multitool lspacket -long /usr/tmp/packet1 (on one of the packets listed by multitool lspacket) helps (a bit like the old CC 7.0 multitool lspacket -l -dump).
If this is linked to dev views, check whether a cleartool rmview --force -vob \avob -uuid an_uuid is still possible, to make sure there is no view still referencing the old VOB.
The packets are getting routed to the old server by the other sites. It has nothing to do with developer views.
@VonC's lspacket -long answer will give you the name of the sending replica... where you'll then have to describe the target replica to see what it currently thinks the host is for the moved replica.
In the interim, you can copy/move the sync packets to the new server and they should import fine.
Assuming that you use the default jobs, and don't use -out to change the default packet names, running multitool lspacket on the receiving host will show you names like "sh_o_sync_P50-rep_2022-11-14T160519-0500_17508." In this case, "P50-rep" is the name of the SENDING replica.
You will also see a line reading:
VOB family identifier is: 19fd6066.dbf111e1.9886.44:37:e6:60:fc:96
cleartool lsvob -family {above UUID} will identify the VOB whose sync packet this is.
* \bc-linuxtest \\this-is-the-vob-server-host\vobstore\bc-linuxtest.vbs public (replicated)
You can then combine that information to locate the sending site since the describe would look something like this:
replica "P50-rep"
created 2018-04-10T08:50:15-04:00 by CC VOB Admin (vobadm2.ccusers#Bullwinkle)
"Test replica 3."
replica type: unfiltered
master replica: P50-rep#\bc-linuxtest
request for mastership: enabled
owner: PROD\vobadm
group: PROD\ccusers
host: "this-is-the-vob-server-host"
identities: preserved
permissions: preserved
Once you go there, you will be able to see what IT thinks the replica host is, and then we can make it know where the replica is now... By hook or by crook if need be. However, the "by crook" method would mean that you need to open a support case to get the tool and the steps to use it.
My guess is that the problem replica:
is self-mastering, and
does not send updates to at least one "upstream" replica.

Unresolved reference to WseeFileStore

I am trying to run SOA Suite, and when I execute startWeblogic.sh I get the following error message:
Unresolved reference to WseeFileStore by [<domain name>]/SAFAgents[ReliableWseeSAFAgent]/Store
at weblogic.descriptor.internal.ReferenceManager.resolveReferences(ReferenceManager.java:310)
at weblogic.descriptor.internal.DescriptorImpl.validate(DescriptorImpl.java:322)
at weblogic.descriptor.BasicDescriptorManager.createDescriptor(BasicDescriptorManager.java:332)
at weblogic.management.provider.internal.DescriptorManagerHelper.loadDescriptor(DescriptorManagerHelper.java:68)
at weblogic.management.provider.internal.RuntimeAccessImpl$IOHelperImpl.parseXML(RuntimeAccessImpl.java:690)
at weblogic.management.provider.internal.RuntimeAccessImpl.parseNewStyleConfig(RuntimeAccessImpl.java:270)
at weblogic.management.provider.internal.RuntimeAccessImpl.<init>(RuntimeAccessImpl.java:115)
... 7 more
Does anyone know how to fix this error?
I am running the system on 64-bit SUSE.
The quick and dirty way to get your admin server back up:
cd to <domain name>/config
Back up config.xml just in case
Edit config.xml, find and remove the <saf-agent> tags that point to your non-existent WseeFileStore
Once you have the admin server back up, you can look at the Store-and-Forward Agents and Persistent Stores links to see what is already configured there. It sounds like a SAF agent was somehow created but the backing Persistent Store was not.
You can always create the Persistent Store later and add that SAF agent back in if you need it.
This happens simply because the automated tool used to adapt the config.xml file to the new cluster structure is... well, far from efficient.
It creates all the other relevant structures OK, but the <saf-agent> entry is created incorrectly.
Just open the config.xml file and take a brief look; you should see that something is not right with this entry.
I will use my environment as an example for this situation:
I have a single cluster with two managed servers named osb1 and osb2. Both are administered by the cluster's AdminServer, and all these components are on a single machine called rdaVM. The whole domain was created with the Configuration Wizard and, upon the first AdminServer start, I got that dreadful error for quite some time.
The solution does reside in the config.xml file located in <DOMAIN_HOME>/config/config.xml
When I opened this file in the editor and did a quick search for WseeFileStore I got some curious entries:
<jms-server>
<name>WseeJmsServer_auto_1</name>
<target>osb1</target>
<persistent-store>WseeFileStore_auto_1</persistent-store>
</jms-server>
<jms-server>
<name>WseeJmsServer_auto_2</name>
<target>osb2</target>
<persistent-store>WseeFileStore_auto_2</persistent-store>
</jms-server>
and
<file-store>
<name>WseeFileStore_auto_1</name>
<directory>WseeFileStore_auto_1</directory>
<target>osb1</target>
</file-store>
<file-store>
<name>WseeFileStore_auto_2</name>
<directory>WseeFileStore_auto_2</directory>
<target>osb2</target>
</file-store>
but looking at the offending entry:
<saf-agent>
<name>ReliableWseeSAFAgent</name>
<store>WseeFileStore</store>
</saf-agent>
Obviously there's something missing here. Looking in <DOMAIN_HOME>, I could see two folders there: WseeFileStore_auto_1 and WseeFileStore_auto_2. So there is no WseeFileStore, hence that annoying error. Also, the <saf-agent> element doesn't have a target.
Solution: using just the underlying logic, I adapted the <saf-agent> entry to:
<saf-agent>
<name>ReliableWseeSAFAgent_auto_1</name>
<target>osb1</target>
<store>WseeFileStore_auto_1</store>
</saf-agent>
<saf-agent>
<name>ReliableWseeSAFAgent_auto_2</name>
<target>osb2</target>
<store>WseeFileStore_auto_2</store>
</saf-agent>
That is, I created a <saf-agent> for each of the cluster's managed servers, targeted each entry to a managed server, and added the _auto_# suffix, where # is the ordering number for each managed server, to the <name> and <store> entries.
After that, I was able to run the startWebLogic.sh script without problems (from this source, at least...)

How to track a process's status in TIBCO?

I hope you can help me resolve my case.
When I define many processes, how can I get status/tracking data for those processes? In other words, I want to get a process's history. My purpose is to show it to my client for checking.
I have defined a process that communicates with 3 applications and deployed it to the client, but unfortunately my client would like to add one more application (up to 4 apps) in the future. I wonder how to do that? Perhaps I open the process again and edit it. Is there a way to create a dynamic process?
Thanks very much.
PVA.
You get a very limited "history" in TIBCO Administrator (more or less which process instances completed with success/failure; in case of failure it will also provide the exception and where in the process it failed). However, that doesn't show you any tracking of the individual steps/activities that the process passed through. For this, you'd either have to put lots of logging steps into your process (and build something that parses this information from the log files), or you could use BusinessWorks ProcessMonitoring, which gives you a full history trail for each process automatically. However, it is not included with BW and you'll probably need a separate license.
Change the process in TIBCO Designer, build a new EAR file, and re-deploy the new EAR file in TIBCO Administrator.

Start external process several times simultaneously

I need to start an external process (which is around 300 MB on its own) several times using System.Diagnostics.Process.
The only problem is: once the first instance starts, it generates temporary data in its base folder (where the application is located), so I can't just start another instance - it would corrupt the data of the first one and mess up everything.
I thought about temporarily copying the whole application folder programmatically, so that each instance has its own, but that doesn't feel right.
Could anybody help me out? Thanks in advance!
Try starting each copy in a different directory.
If the third-party app ignores the current directory, you could make a symlink to it in a different folder. I'm not necessarily recommending that, though.
Pass an argument to your external process that specifies the temp folder to use.
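A small System.Diagnostics sketch combining both suggestions; the executable path, the --temp-dir switch and the folder layout are made up for illustration and only help if the external application actually honours them:

using System;
using System.Diagnostics;
using System.IO;

class Launcher
{
    static Process StartInstance(string instanceId)
    {
        // Give each instance its own scratch folder so the copies never share temp data.
        string workDir = Path.Combine(Path.GetTempPath(), "bigapp-" + instanceId); // hypothetical layout
        Directory.CreateDirectory(workDir);

        var startInfo = new ProcessStartInfo
        {
            FileName = @"C:\Tools\BigApp\bigapp.exe",      // placeholder path to the external program
            Arguments = "--temp-dir \"" + workDir + "\"",  // assumed switch; only works if the app supports one
            WorkingDirectory = workDir,                    // only helps if the app writes relative to its current directory
            UseShellExecute = false
        };
        return Process.Start(startInfo);
    }

    static void Main()
    {
        for (int i = 0; i < 3; i++)
            StartInstance(i.ToString());
    }
}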

How to reliably handle files uploaded periodically by an external agent?

It's a very common scenario: some process wants to drop a file on a server every 30 minutes or so. Simple, right? Well, I can think of a bunch of ways this could go wrong.
For instance, processing a file may take more or less than 30 minutes, so it's possible for a new file to arrive before I'm done with the previous one. I don't want the source system to overwrite a file that I'm still processing.
On the other hand, the files are large, so it takes a few minutes to finish uploading them. I don't want to start processing a partial file. The files are just transferred with FTP or SFTP (my preference), so OS-level locking isn't an option.
Finally, I do need to keep the files around for a while, in case I need to manually inspect one of them (for debugging) or reprocess one.
I've seen a lot of ad-hoc approaches to shuffling upload files around, swapping filenames, using datestamps, touching "indicator" files to assist in synchronization, and so on. What I haven't seen yet is a comprehensive "algorithm" for processing files that addresses concurrency, consistency, and completeness.
So, I'd like to tap into the wisdom of crowds here. Has anyone seen a really bulletproof way to juggle batch data files so they're never processed too early, never overwritten before done, and safely kept after processing?
The key is to do the initial juggling at the sending end. All the sender needs to do is:
Store the file with a unique filename.
As soon as the file has been sent, move it to a subdirectory called e.g. completed.
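In code, the sending side boils down to "write under a unique name, then rename into completed". A minimal C# sketch, assuming the drop area is reachable as an ordinary directory (over FTP/SFTP the same idea applies by issuing the server's rename command once the upload finishes); the folder names are placeholders:

using System;
using System.IO;

class Sender
{
    static void Send(string localFile, string dropRoot) // dropRoot stands in for the upload area
    {
        string incoming = Path.Combine(dropRoot, "incoming");
        string completed = Path.Combine(dropRoot, "completed");
        Directory.CreateDirectory(incoming);
        Directory.CreateDirectory(completed);

        // 1. Store the file under a unique name so uploads never collide or overwrite each other.
        string uniqueName = DateTime.UtcNow.ToString("yyyyMMddTHHmmss")
                            + "_" + Guid.NewGuid().ToString("N")
                            + "_" + Path.GetFileName(localFile);
        string inFlight = Path.Combine(incoming, uniqueName);
        File.Copy(localFile, inFlight);

        // 2. Only after the copy has completely finished, rename it into "completed".
        //    On the same volume this is an atomic move, so the receiver never sees a partial file.
        File.Move(inFlight, Path.Combine(completed, uniqueName));
    }
}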
Assuming there is only a single receiver process, all the receiver needs to do is:
Periodically scan the completed directory for any files.
As soon as a file appears in completed, move it to a subdirectory called e.g. processed, and start working on it from there.
Optionally delete it when finished.
On any sane filesystem, file moves are atomic provided they occur within the same filesystem/volume. So there are no race conditions.
Multiple Receivers
If processing could take longer than the period between files being delivered, you'll build up a backlog unless you have multiple receiver processes. So, how to handle the multiple-receiver case?
Simple: Each receiver process operates exactly as before. The key is that we attempt to move a file to processed before working on it: that, and the fact that same-filesystem file moves are atomic, means that even if multiple receivers see the same file in completed and try to move it, only one will succeed. All you need to do is make sure you check the return value of rename(), or whatever OS call you use to perform the move, and only proceed with processing if it succeeded. If the move failed, some other receiver got there first, so just go back and scan the completed directory again.
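A minimal C# sketch of that claim-by-move loop for one receiver; the paths are placeholders, and it relies on File.Move being a same-volume rename that fails with an IOException when another receiver has already grabbed the file:

using System;
using System.IO;
using System.Threading;

class Receiver
{
    static void Main()
    {
        string completed = @"C:\drop\completed";  // placeholder paths
        string processed = @"C:\drop\processed";
        Directory.CreateDirectory(processed);

        while (true)
        {
            foreach (string file in Directory.GetFiles(completed))
            {
                string claimed = Path.Combine(processed, Path.GetFileName(file));
                try
                {
                    // Atomic same-volume rename: only one receiver can win this race.
                    File.Move(file, claimed);
                }
                catch (IOException)
                {
                    // Another receiver claimed it first (or it vanished); skip it.
                    continue;
                }
                ProcessFile(claimed); // safe: the file is now exclusively ours
            }
            Thread.Sleep(TimeSpan.FromSeconds(30)); // poll interval; tune to taste
        }
    }

    static void ProcessFile(string path)
    {
        Console.WriteLine("processing " + path);
        // real work goes here; optionally archive or delete the file afterwards
    }
}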
If the OS supports it, use file system hooks to intercept open and close file operations. Something like Dazuko. Other operating systems may let you know about file operations in another way; for example, Novell Open Enterprise Server lets you define epochs and read the list of files modified during an epoch.
Just realized that on Linux you can use the inotify subsystem, or the utilities from the inotify-tools package.
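If the receiving side happens to run on .NET rather than Linux, a rough analogue of those notification mechanisms (my assumption, not something mentioned above) is FileSystemWatcher, which can replace a polling loop:

using System;
using System.IO;

class WatchCompleted
{
    static void Main()
    {
        // Raise an event whenever a file shows up in the "completed" directory;
        // a same-volume move into the watched folder surfaces as a Created event.
        var watcher = new FileSystemWatcher(@"C:\drop\completed"); // placeholder path
        watcher.Created += (sender, e) => Console.WriteLine("new file: " + e.FullPath);
        watcher.EnableRaisingEvents = true;

        Console.WriteLine("Watching... press Enter to stop.");
        Console.ReadLine();
    }
}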
File transfer is one of the classics of system integration. I'd recommend getting the Enterprise Integration Patterns book to build your own answer to these questions -- to some extent, the answer depends on the technologies and platforms you are using for endpoint implementation and for file transfer. It's quite a comprehensive collection of workable patterns, and fairly well written.