Unresolved reference to WseeFileStore - weblogic

I am trying to run SOA Suite, and when I execute startWeblogic.sh I get the following error message:
Unresolved reference to WseeFileStore by [<domain name>]/SAFAgents[ReliableWseeSAFAgent]/Store
at weblogic.descriptor.internal.ReferenceManager.resolveReferences(ReferenceManager.java:310)
at weblogic.descriptor.internal.DescriptorImpl.validate(DescriptorImpl.java:322)
at weblogic.descriptor.BasicDescriptorManager.createDescriptor(BasicDescriptorManager.java:332)
at weblogic.management.provider.internal.DescriptorManagerHelper.loadDescriptor(DescriptorManagerHelper.java:68)
at weblogic.management.provider.internal.RuntimeAccessImpl$IOHelperImpl.parseXML(RuntimeAccessImpl.java:690)
at weblogic.management.provider.internal.RuntimeAccessImpl.parseNewStyleConfig(RuntimeAccessImpl.java:270)
at weblogic.management.provider.internal.RuntimeAccessImpl.<init>(RuntimeAccessImpl.java:115)
... 7 more
Does anyone know how to fix this error?
I am running the system on 64-bit SUSE.

The quick and dirty way to get your admin server back up:
cd to <domain name>/config
Back up config.xml, just in case.
Edit config.xml, then find and remove the <saf-agent> element that points to your non-existent WseeFileStore; it will look something like the snippet below.
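Going by the error message at the top, the entry to remove should look roughly like this (a sketch only; the names ReliableWseeSAFAgent and WseeFileStore are taken from the error above, and your file may differ):
<saf-agent>
  <name>ReliableWseeSAFAgent</name>
  <store>WseeFileStore</store>
</saf-agent>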
When you have the admin server back up, you can look at the Store-and-Forward Agents and Persistent Stores pages to see what is already configured there. It sounds like a SAF agent was somehow created but the backing Persistent Store was not.
You can always create the Persistent Store later and add that SAF agent back in if you need it; a sketch of what the pair of entries might look like follows.
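A minimal sketch, assuming a file store named WseeFileStore targeted at a hypothetical server AdminServer (both names are illustrative; match them to your own domain and to the store directory you create):
<file-store>
  <name>WseeFileStore</name>
  <directory>WseeFileStore</directory>
  <target>AdminServer</target>
</file-store>
<saf-agent>
  <name>ReliableWseeSAFAgent</name>
  <target>AdminServer</target>
  <store>WseeFileStore</store>
</saf-agent>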

This happens simply because the automated tool used to adapt the config.xml file to the new cluster structure is... well, far from efficient.
It creates all the other relevant structures correctly, but the <saf-agent> entry is generated wrongly.
Open the config.xml file and even a brief look should show that something is not right with this entry.
I will use my environment as an example for this situation:
I have a single cluster with two managed servers named osb1 and osb2. Both are administered by the cluster's AdminServer, and all these components are on a single machine called rdaVM. The whole domain was created with the Configuration Wizard and, upon the first AdminServer start, I got that dreadful error for quite some time.
The solution does reside in the config.xml file, located at <DOMAIN_HOME>/config/config.xml.
When I opened this file in the editor and did a quick search for WseeFileStore, I found some curious entries:
<jms-server>
  <name>WseeJmsServer_auto_1</name>
  <target>osb1</target>
  <persistent-store>WseeFileStore_auto_1</persistent-store>
</jms-server>
<jms-server>
  <name>WseeJmsServer_auto_2</name>
  <target>osb2</target>
  <persistent-store>WseeFileStore_auto_2</persistent-store>
</jms-server>
and
<file-store>
  <name>WseeFileStore_auto_1</name>
  <directory>WseeFileStore_auto_1</directory>
  <target>osb1</target>
</file-store>
<file-store>
  <name>WseeFileStore_auto_2</name>
  <directory>WseeFileStore_auto_2</directory>
  <target>osb2</target>
</file-store>
but looking at the offending entry:
<saf-agent>
  <name>ReliableWseeSAFAgent</name>
  <store>WseeFileStore</store>
</saf-agent>
Obviously there's something missing here. Looking at <DOMAIN_HOME>, I could see two folders there: WseeFileStore_auto_1 and WseeFileStore_auto_2. So there is no WseeFileStore, hence that annoying error. Also, the <saf-agent> element doesn't have a target.
Solution: using just the underlying logic, I adapted the <saf-agent> entry to:
<saf-agent>
  <name>ReliableWseeSAFAgent_auto_1</name>
  <target>osb1</target>
  <store>WseeFileStore_auto_1</store>
</saf-agent>
<saf-agent>
  <name>ReliableWseeSAFAgent_auto_2</name>
  <target>osb2</target>
  <store>WseeFileStore_auto_2</store>
</saf-agent>
That is, I created a <saf-agent> entry for each of the cluster's managed servers, targeted each entry at a managed server, and added the _auto_# suffix, where # is the ordinal of each managed server, to the <name> and <store> entries.
After that, I was able to run the startWeblogic.sh script without problems (from this source, at least...).

Related

ClearCase error, registry does not contain VOB with UUID

I am in the process of migrating a very large MultiSite installation to newer OS platforms, running ClearCase 9. In one particular migration stage all the VOBs appear to have migrated correctly, and ct lsvob -s -host xxxx shows no VOBs remaining on the old server, but now I am getting packets stuck in the incoming bin on that old server. I assume it has to do with devs who still had views open before the migration, but the problem is that mt lspacket complains that it cannot find a VOB with a particular UUID in the registry. Packets are piling up, and they are all complaining about the same UUID, so I assume they all relate to one VOB. ct lsvob -uuid xxxx says it cannot find a VOB with that UUID.
How would I go about correcting this?
Looking at multitool lspacket, check if a multitool lspacket -long /usr/tmp/packet1 (one of the packets listed by multitool lspacket) helps (a bit like the old CC 7.0 multitool lspacket -l -dump).
If this is linked to dev views, check whether a cleartool rmview --force -vob \avob -uuid an_uuid is still possible, to make sure there is no view referencing the old VOB.
The packets are getting routed to the old server by the other sites. It has nothing to do with developer views.
VonC's lspacket -long answer will give you the name of the sending replica, where you'll have to describe the target replica to see what it currently thinks the host is for the moved replica.
In the interim, you can copy/move the sync packets to the new server, and they should import fine.
Assuming that you use the default jobs and don't use -out to change the default packet names, running multitool lspacket on the receiving host will show you names like "sh_o_sync_P50-rep_2022-11-14T160519-0500_17508". In this case, "P50-rep" is the name of the SENDING replica.
You will also see a line reading:
VOB family identifier is: 19fd6066.dbf111e1.9886.44:37:e6:60:fc:96
cleartool lsvob -family {above UUID} will identify the VOB whose sync packet this is.
* \bc-linuxtest \\this-is-the-vob-server-host\vobstore\bc-linuxtest.vbs public (replicated)
You can then combine that information to locate the sending site, since the describe output would look something like this:
replica "P50-rep"
created 2018-04-10T08:50:15-04:00 by CC VOB Admin (vobadm2.ccusers#Bullwinkle)
"Test replica 3."
replica type: unfiltered
master replica: P50-rep#\bc-linuxtest
request for mastership: enabled
owner: PROD\vobadm
group: PROD\ccusers
host: "this-is-the-vob-server-host"
identities: preserved
permissions: preserved
Once you go there, you will be able to see what it thinks the replica host is, and then you can make it know where the replica is now... by hook or by crook if need be. However, the "by crook" method would mean that you need to open a support case to get the tool and the steps to use it.
My guess is that the problem replica:
is self-mastering, and
does not send updates to at least one "upstream" replica.

Update changes from Development instance to Production instance in Odoo

I have 2 instances of Odoo v9 running on the same server (Ubuntu 14.04). I want to make changes (install modules, change source code, or anything) in the development instance and, after confirming they are OK, move the changes to the production instance. Is there any way of doing that without repeating the whole development process?
Thank you.
As I understand it, you do not want to stop the production instance.
If the changes are only to XML files, you might be able to get away with just updating the module from the frontend (Apps -> Your Module -> Update), although if you have modified the __openerp__.py file inside your module you have to enter debug mode and click Update Apps List first of all.
For changes in files that are inside the static folder of your module, you do not need to stop the server, although your users must press Ctrl + Shift + R in order to flush their caches and bring the new content to their browsers.
For Python source code, I am afraid that you have to stop both instances of the server so that the code can be correctly recompiled.
(See note 1 on this)
In the end you should stop and update everything, because unexpected things might pop up at random times due to resources not being properly updated.
Note 1: The Python documentation about the compilation of Python modules, among other things, mentions:
As an important speed-up of the start-up time for short programs that
use a lot of standard modules, if a file called spam.pyc exists in the
directory where spam.py is found, this is assumed to contain an
already-“byte-compiled” version of the module spam. The modification
time of the version of spam.py used to create spam.pyc is recorded in
spam.pyc, and the .pyc file is ignored if these don’t match.
So theoretically, if you modify fileA.py in a module and a new fileA.pyc is generated, the server will be able to interpret and use it. In any case, I had an issue with two running instances where the .py file was creating a field and the XML file was using it, and the server reported that the field had not been created for the XML view; that means the server did pick up and parse the XML file but did not recompile the .py.

Possibilities of Datazen server migration

I know that similar topics have been already raised, but maybe there are some latest news or ideas?
I want to migrate Datazen users/sources/dashboards etc. to another server (production one) in a smooth way. I was trying to do that via backup/restore, but then I couldn't access the control panel on the target server. I received an error
401 unauthorized access.
Maybe I should change something in logs/config files on the destination server?
Any ideas? I would be grateful for any help!
I don't think there is a way to do this out of the box. However, the files are quite simple XML, so they can be pointed at a different server if you know PowerShell (and work out the correct values from the server).
You will have to re-point the GUID, ServerGUID and ServerURI within the sources.xml file and then rezip (as .datazen). Provided you have your hubs set up the same, Datazen will then believe the file belongs to your prod environment and you will be able to publish.
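Purely as an illustration of the idea (the surrounding structure of sources.xml is omitted here; the three element names come from the answer above, and the values are placeholders you would read out of your own .datazen file and target server):
<GUID>00000000-0000-0000-0000-000000000000</GUID>
<ServerGUID>11111111-1111-1111-1111-111111111111</ServerGUID>
<ServerURI>https://your-prod-server/</ServerURI>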

How to debug coldfusion orm settings and mappings

I'm having trouble with a specific ORM cfclocation not being mapped correctly. ColdFusion is checking in the cfclocation I have provided and then caching the mapping in a different location, so that when I try to load the entity it can't find the CFC (it's looking in the wrong folder).
This is only happening to one of my cfclocations. I have tried the same application on a CF10 server, where it worked, and on another CF9 server, where it didn't.
So somehow it's getting confused about where the CFC is located and generating a different location.
What I am wondering is: how can I debug the process ColdFusion goes through to cache the locations? I read the outline of ORM's architecture on Adobe's doc pages, and it mentioned ColdFusion generating .hbmxml files. Where do I find these? Is there another way I can work out why ColdFusion thinks a file is located somewhere else?
(I had another question similar to this which I deleted to post this one as the previous question was asking for a fix, this one is asking how to debug)
UPDATE:
I have turned saveMapping on and am getting the .hbmxml files now. I can see that the class name is being generated incorrectly. Is there a way to set this manually? Is there a way to manually tell ColdFusion what to map the location as? I have no clue why ColdFusion is mapping the location elsewhere.
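For anyone unfamiliar with these files: they are standard Hibernate mapping XML, so the class name in question is visible as plain attributes. A hypothetical fragment (the entity name, dotted path, and table below are made-up placeholders; the wrong folder appears in the name attribute):
<hibernate-mapping>
  <class entity-name="MyEntity" name="wrong.folder.path.MyEntity" table="myentity">
    <id name="id" type="integer"/>
  </class>
</hibernate-mapping>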
Answers to questions below
The entityname, name, and file name are all the same.
The file is located in an area that uses a naming convention that is the same for all of our applications, which are working: inetpub/resources/applications/[application name]/cfcs/orm.
I can dump out the array of addresses that are sent to cfclocation, and it is clearly there. It has to be, or the file wouldn't be detected in the first place. The ColdFusion install and the webroot are in different areas, but as mentioned before this works fine for all our other apps.
I have restarted the application and the ColdFusion service repeatedly while testing different things.
The folder that the hbmxml file specifies does not even exist, and never did.
Rebooting ColdFusion has no effect. After messing around by adding additional mappings closer to the specific folder and then removing the mappings, I eventually got the app working temporarily on my local site, but as soon as I moved it to another server it didn't work. So it seems like a 99% chance of not working.

How can I specify an endpoint class with NServiceBus.Host.exe?

I have a solution that I created with the new modeler tools. This gave me two full "endpoints" in a single solution.
Now when I run them through my automated build, I have two DLLs in the same folder that implement IConfigureThisEndpoint.
If I just run NServiceBus.Host.exe /install (to get a Windows service), it gives me the (expected) error that there is more than one class that can be used.
I did some searching, and Udi states here: http://tech.groups.yahoo.com/group/nservicebus/message/3937 that "You can specify which class you want loaded and avoid these issues - as the server project in the pub/sub sample shows".
I looked at the pub/sub sample and I can't see how I can specify my class (at least not at the command line).
Is there a way to get around having to modify my build to put the files in separate folders? (Not really an easy task for me.)
Add a config entry to your app settings with the key EndpointConfigurationType and the value being the assembly-qualified name of the type.
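A minimal app.config sketch, assuming a hypothetical endpoint class MyCompany.Publisher.EndpointConfig in an assembly MyCompany.Publisher (both names are placeholders for your own IConfigureThisEndpoint implementation):
<configuration>
  <appSettings>
    <!-- assembly-qualified name of the endpoint configuration class the host should load -->
    <add key="EndpointConfigurationType" value="MyCompany.Publisher.EndpointConfig, MyCompany.Publisher" />
  </appSettings>
</configuration>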