ClearCase error, registry does not contain VOB with UUID - migration

I am in the process of migrating a very large MultiSite installation to newer OS platforms, running ClearCase 9. In one particular migration stage all the VOBs appear to have migrated correctly (ct lsvob -s -host xxxx shows no VOBs remaining on the old server), but now I am getting packets stuck in the incoming bin on that old server. I assume it has to do with devs who still had views open before the migration, but the problem is that mt lspacket is complaining that it cannot find a VOB with a particular UUID in the registry. Packets are piling up, and they all complain about the same UUID, so I assume they are all related to one VOB. ct lsvob -uuid xxxx says it cannot find a VOB with that UUID.
How would I go about correcting this?

Looking at multitool lspacket, check whether a multitool lspacket -long /usr/tmp/packet1 (one of the packets listed by multitool lspacket) helps (a bit like the old ClearCase 7.0 multitool lspacket -l -dump).
If this is linked to dev views, check whether a cleartool rmview -force -vob \avob -uuid an_uuid is still possible, to make sure there is no view referencing the old VOB.

The packets are getting routed to the old server by the other sites. It has nothing to do with developer views.
@VonC's lspacket -long answer will give you the name of the sending replica... From there, you'll have to describe the target replica to see what it currently thinks the host is for the moved replica.
In the interim, you can copy/move the sync packets to the new server and they should import fine.
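For example (the shipping-bay path below is the usual UNIX/Linux default and may differ on your installation): copy the stuck packets from the old server's incoming bay, typically something like /var/adm/rational/clearcase/shipping/ms_ship/incoming, into the same bay on the new VOB server, then on the new server run
multitool syncreplica -import -receive
or point the import directly at a copied packet with
multitool syncreplica -import /path/to/packet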
Assuming that you use the default jobs and don't use -out to change the default packet names, running multitool lspacket on the receiving host will show you names like "sh_o_sync_P50-rep_2022-11-14T160519-0500_17508." In this case, "P50-rep" is the name of the SENDING replica.
You will also see a line reading:
VOB family identifier is: 19fd6066.dbf111e1.9886.44:37:e6:60:fc:96
cleartool lsvob -family {above UUID} will identify the VOB whose sync packet this is.
* \bc-linuxtest \\this-is-the-vob-server-host\vobstore\bc-linuxtest.vbs public (replicated)
You can then combine that information to locate the sending site since the describe would look something like this:
replica "P50-rep"
created 2018-04-10T08:50:15-04:00 by CC VOB Admin (vobadm2.ccusers#Bullwinkle)
"Test replica 3."
replica type: unfiltered
master replica: P50-rep#\bc-linuxtest
request for mastership: enabled
owner: PROD\vobadm
group: PROD\ccusers
host: "this-is-the-vob-server-host"
identities: preserved
permissions: preserved
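(For reference, a description like the one above can usually be produced with something along the lines of multitool describe replica:P50-rep@\bc-linuxtest, using the replica name taken from the packet name and the VOB tag returned by lsvob -family.)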
Once you go there, you will be able to see what IT thinks the replica host is, and then we can make it know where the replica is now... By hook or by crook if need be. However, the "by crook" method would mean that you need to open a support case to get the tool and the steps to use it.
My guess is that the problem replica:
is self-mastering, and
does not send updates to at least one "upstream" replica.
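To check what a given site currently records as the host for each replica, something like multitool lsreplica -long -invob \bc-linuxtest (VOB tag taken from the example above), run at that site, should list every replica in the family along with the host it believes that replica lives on.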

Apache Flume - send only new file contents

I am a very new user to Flume, so please treat me as an absolute noob. I am having a minor issue configuring Flume for a particular use case and was hoping you could assist. Note that I am not using HDFS, which is why this question is different from others you may have seen on forums.
I have two Virtual Machines (VMs) connected to each other through an internal network in Oracle VirtualBox. My goal is to have one VM watch a particular directory that will only ever have one file in it. When the file is changed, I want Flume to send only the new lines/data. I want the other VM to receive this data and update/concatenate it into a single file in a particular directory on that VM.
So far, I have this process very close to working: whenever changes are made on VM1, they are propagated to VM2. However, the entire file on VM1 is sent to VM2 every time, not just the new lines. For example, if I wrote "Test1" to the file on VM1 and then, a while later, wrote "Test2" underneath it, the output on VM2 would be:
Test1
Test1
Test2
What I want to see is:
Test1
Test2
I am not sure how to implement this, and am sending this email after thoroughly examining the Flume user guide documentation and the most relevant articles on Stack Overflow/Stack Exchange. For your reference, below are the current configurations (they are working in the manner I mentioned above).
VM1 configuration
VM2 configuration
I realize another solution would be to keep the configuration on VM1 and overwrite the file on VM2 every time new contents are detected. However, I am also unsure how to implement this.
Any assistance you could provide is greatly appreciated!
Use the TailDir source provided in Flume. It periodically writes the last position read to a position file, and it is more reliable than the exec source: even if the agent crashes or stops for some reason, it will resume reading from the last position saved in the position file.
agent1.sources.src1.type = TAILDIR
agent1.sources.src1.channels = ch1
agent1.sources.src1.filegroups = f1
agent1.sources.src1.filegroups.f1 = //path to log file
agent1.sources.src1.maxBackoffSleep = 10000
Set the maxBackoffSleep value as per your need; it is the maximum time the agent waits before polling the log file for changes again, after an attempt in which it did not find any changes.
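For completeness, a minimal sketch of the VM1 agent (the hostname, port, positionFile path and log path are placeholders you would adapt) that tails the file, remembers its read offset and forwards only new events to VM2 over Avro could look roughly like this:
agent1.sources = src1
agent1.channels = ch1
agent1.sinks = sink1
# TAILDIR source: the read offset is persisted in positionFile
agent1.sources.src1.type = TAILDIR
agent1.sources.src1.channels = ch1
agent1.sources.src1.positionFile = /home/flume/taildir_position.json
agent1.sources.src1.filegroups = f1
agent1.sources.src1.filegroups.f1 = /path/to/watched/file.log
agent1.sources.src1.maxBackoffSleep = 10000
# simple in-memory channel
agent1.channels.ch1.type = memory
agent1.channels.ch1.capacity = 1000
# Avro sink pointing at the Avro source of the VM2 agent
agent1.sinks.sink1.type = avro
agent1.sinks.sink1.channel = ch1
agent1.sinks.sink1.hostname = 192.168.56.102
agent1.sinks.sink1.port = 4545
The VM2 agent would then need an Avro source listening on that port, feeding whatever sink writes your target file. Note that this assumes new lines are appended to the watched file; if the application rewrites the whole file from scratch each time, position tracking may not help.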

Where does this SSH Exception originate?

I'm using the SSH.NET client to connect to an SFTP server that identifies itself as maverick_20, which appears to be the closed-source offering from Sshtools. When I attempt to read bytes out of a file in stream mode, a general exception bubbles up containing the string 'read from 13 for 32755 from 32772 not supported', which I believe is being returned to me by the server. That message is meaningless to me, but the server certainly allows me to seek() to different positions in the file without issue.
Googling the phrase suspiciously returns a list of SSH error codes on the WinSCP site, though that phrase does not occur on the page. As the source code for the NG product is not available, I can't investigate the issue that way.
Is the Maverick server broken in some way? I can't imagine what sort of conditions would allow seeks and complete file reads, but fail in this specific way.

Unresolved reference to WseeFileStore

I am trying to run SOA Suite, and when I execute startWeblogic.sh I get the following error message:
Unresolved reference to WseeFileStore by [<domain name>]/SAFAgents[ReliableWseeSAFAgent]/Store
at weblogic.descriptor.internal.ReferenceManager.resolveReferences(ReferenceManager.java:310)
at weblogic.descriptor.internal.DescriptorImpl.validate(DescriptorImpl.java:322)
at weblogic.descriptor.BasicDescriptorManager.createDescriptor(BasicDescriptorManager.java:332)
at weblogic.management.provider.internal.DescriptorManagerHelper.loadDescriptor(DescriptorManagerHelper.java:68)
at weblogic.management.provider.internal.RuntimeAccessImpl$IOHelperImpl.parseXML(RuntimeAccessImpl.java:690)
at weblogic.management.provider.internal.RuntimeAccessImpl.parseNewStyleConfig(RuntimeAccessImpl.java:270)
at weblogic.management.provider.internal.RuntimeAccessImpl.<init>(RuntimeAccessImpl.java:115)
... 7 more
Does anyone know how to fix this error?
I am running the system on 64-bit SUSE.
The quick and dirty way to get your admin server back up:
cd to <domain name>/config
Back up config.xml just in case
Edit config.xml, find and remove the <saf-agent> tags that point to your non-existent WseeFileStore
Once you have the admin server back up, you can look at the Store-and-Forward Agents and Persistent Stores links to see what is already configured there. It sounds like a SAF agent was somehow created but the backing Persistent Store was not.
You can always create the Persistent Store later and add that SAF agent back in if you need it.
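For reference, the entry to remove typically looks something like this (the exact names will match whatever your domain generated):
<saf-agent>
<name>ReliableWseeSAFAgent</name>
<store>WseeFileStore</store>
</saf-agent>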
This happens simply because the automated tool used to adapt the config.xml file to the new cluster structure is... well, far from efficient.
It creates all the other relevant structures OK, but the <saf-agent> entry is created incorrectly.
Just open the config.xml file and take a brief look at it; you should see that something is not right with this entry.
I will use my environment as an example for this situation:
I have a single cluster with two managed servers named osb1 and osb2. Both are administered by the cluster's AdminServer, and all these components are on a single machine called rdaVM. The whole domain was created with the Configuration Wizard and, upon the first AdminServer start, I got that dreadful error for quite some time.
The solution does indeed reside in the config.xml file, located at <DOMAIN_HOME>/config/config.xml
When I opened this file in the editor and did a quick search for WseeFileStore I got some curious entries:
<jms-server>
<name>WseeJmsServer_auto_1</name>
<target>osb1</target>
<persistent-store>WseeFileStore_auto_1</persistent-store>
</jms-server>
<jms-server>
<name>WseeJmsServer_auto_2</name>
<target>osb2</target>
<persistent-store>WseeFileStore_auto_2</persistent-store>
</jms-server>
and
<file-store>
<name>WseeFileStore_auto_1</name>
<directory>WseeFileStore_auto_1</directory>
<target>osb1</target>
</file-store>
<file-store>
<name>WseeFileStore_auto_2</name>
<directory>WseeFileStore_auto_2</directory>
<target>osb2</target>
</file-store>
but looking at the offending entry:
<saf-agent>
<name>ReliableWseeSAFAgent</name>
<store>WseeFileStore</store>
</saf-agent>
Obviously there's something missing here. Looking at <DOMAIN_HOME>, I could see two folders there: WseeFileStore_auto_1 and WseeFileStore_auto_2. So there is no WseeFileStore, hence that annoying error. Also, the <saf-agent> element doesn't have a target.
Solution: using just the underlying logic, I adapted the <saf-agent> entry to:
<saf-agent>
<name>ReliableWseeSAFAgent_auto_1</name>
<target>osb1</target>
<store>WseeFileStore_auto_1</store>
</saf-agent>
<saf-agent>
<name>ReliableWseeSAFAgent_auto_2</name>
<target>osb2</target>
<store>WseeFileStore_auto_2</store>
</saf-agent>
That is, I created a <saf-agent> for each of the cluster's managed servers, targeted each entry at a managed server, and added the _auto_# suffix, where # is the ordering number of each managed server, to the <name> and <store> entries.
After that, I was able to run the startWebLogic.sh script without problems (from this source, at least...).

xperfview on a different computer

Most use cases I've seen with xperf involve using xperfview on the same computer. Remote record and local playback don't seem to work well for me: symbols are not resolved correctly. Is there a known issue with remote record and local playback with xperf/xperfview?
Why do you try a remote connection? If you use xperf -d to stop logging, the ETL file contains all the metadata, so the symbols can be loaded from any PC you want. Copy the ETL from PC A to PC B and view it there.
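For example (the kernel flags below are just a typical CPU-sampling set; adjust them to whatever you actually need to trace), on the recording PC:
xperf -on PROC_THREAD+LOADER+PROFILE -stackwalk Profile
... run your scenario ...
xperf -d trace.etl
Then copy trace.etl to the analysis PC, set _NT_SYMBOL_PATH there (e.g. srv*C:\symbols*http://msdl.microsoft.com/download/symbols, plus any private symbol share), and open it with xperfview trace.etl.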
Now that the 8.1 version of WPT is out, the recommended way to record traces is not with xperf.exe but with wprui.exe. This makes trace recording much simpler and much less error prone. See this blog post for details:
http://randomascii.wordpress.com/2013/04/20/xperf-basics-recording-a-trace-the-easy-way/
And yes, you absolutely should be able to record traces on one machine and view them on another.

Master_slave in RabbitMQ

How can we implement a master/slave configuration using the RabbitMQ server?
I have read in many places, and have experienced it myself, that:
"RabbitMQ nodes in a cluster can't really share the same files, except for the cookie file. The startup script itself makes sure that it creates folder and file names prefixed with "$NODE_ID$" when starting the broker, so that all the files for that node are created inside a single folder for it. It basically creates two main folders and does the following:
a. db: creates a folder named "$NODE_ID$"-mnesia and creates all db files inside it.
b. log: creates files with names prefixed with "$NODE_ID$"
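For example, this is how I start the two brokers, each with its own node name and port (the node names and ports here are just what I happen to use):
RABBITMQ_NODENAME=rabbit1 RABBITMQ_NODE_PORT=5672 rabbitmq-server -detached
RABBITMQ_NODENAME=rabbit2 RABBITMQ_NODE_PORT=5673 rabbitmq-server -detached
so that each node ends up with its own "$NODE_ID$"-prefixed mnesia folder and log files.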
Even if we tweak the script for both nodes to point to the same mnesia folder, the second instance of the broker will fail to start because of a mnesia locking issue, with the following error:
{"init terminating in do_boot",
{{nocatch,{error,{cannot_start_application,mnesia,{killed,{mnesia_sup,start,[normal,[]]}}}}},[{init,start_it,1},{init,start_em,1}]}}
Crash dump was written to: erl_crash.dump init terminating in do_boot ()".
All I want to know is this: in a situation where there are two nodes, 'master' and 'slave', in a cluster, and the master goes down for some time, how can the slave step in during that time to receive and send messages on behalf of the master, given that sharing the database is not possible?
Take a look at the guidelines for building a highly available cluster with DRBD and Pacemaker: http://www.rabbitmq.com/pacemaker.html
However, that's a bit difficult to set up, so you might prefer to wait for a few more weeks, as the next major release will include built-in support for redundant queues for clusters. See more about that in the attachment here:
http://lists.rabbitmq.com/pipermail/rabbitmq-discuss/2011-June/013304.html
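For what it's worth, the "redundant queues" feature referred to there shipped as mirrored queues, which at the time were enabled per queue via the x-ha-policy argument at declaration; a rough client-side sketch, using the Python pika library with placeholder host and queue names:
import pika

# connect to any node of the cluster (host name is a placeholder)
connection = pika.BlockingConnection(pika.ConnectionParameters(host='rabbit-node1'))
channel = connection.channel()

# declare a durable queue mirrored across all cluster nodes (RabbitMQ 2.6-era x-ha-policy argument)
channel.queue_declare(queue='work-queue',
                      durable=True,
                      arguments={'x-ha-policy': 'all'})

connection.close()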