How to debug ColdFusion ORM settings and mappings

I'm having trouble with a specific ORM cfclocation not being mapped correctly. ColdFusion is checking the cfclocation I have provided and then caching the mapping in a different location, so when I try to load the entity it can't find the CFC (it's looking in the wrong folder).
This is only happening to one of my cfclocations. I have tried the same application on a CF10 server, where it worked, and on another CF9 server, where it didn't. So somehow ColdFusion is getting confused about where the CFC is located and generating a different location.
What I am wondering is: how can I debug the process ColdFusion goes through to cache the locations? I read the outline of the ORM's architecture in Adobe's documentation, and it mentioned ColdFusion generating .hbmxml files. Where do I find these? Is there another way I can work out why ColdFusion thinks a file is located somewhere else?
(I had another question similar to this which I deleted to post this one as the previous question was asking for a fix, this one is asking how to debug)
UPDATE:
I have turned saveMapping on and am now getting the .hbmxml files. I can see that the class name is being generated incorrectly. Is there a way to set this manually, i.e. to tell ColdFusion explicitly what to map the location as? I have no clue why ColdFusion is mapping the location elsewhere.
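For reference, this is roughly the kind of Application.cfc block that produces those .hbmxml files; the application name and path here are placeholders, not my actual config:

```cfml
// Application.cfc (illustrative sketch; the cfclocation path is a placeholder)
component {
    this.name = "myApp";
    this.ormenabled = true;
    this.ormsettings = {
        cfclocation = ["/resources/applications/myApp/cfcs/orm"], // hypothetical path
        savemapping = true,  // write the generated .hbmxml files alongside each CFC
        logSQL = true        // also log the SQL Hibernate generates while debugging
    };
}
```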
Answers to questions below
The entityname, name, and file name are all the same.
The file is located in an area that uses a naming convention that is the same for all of our applications, which are working: inetpub/resources/applications/[application name]/cfcs/orm.
I can dump out the array of addresses that are sent to cfclocation, and it is clearly there. It has to be, or the file wouldn't be detected in the first place. The ColdFusion installation and the webroot are in different areas, but as mentioned before this works fine for all our other apps.
I have restarted the application and the ColdFusion service repeatedly while testing different things.
The folder that the hbmxml file specifies does not even exist, and never did.
Rebooting ColdFusion has no effect. After experimenting by adding additional mappings closer to the specific folder and then removing them, I eventually got the app working temporarily on my local site, but as soon as I moved it to another server it stopped working again. So it seems like a 99% chance of not working.
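For anyone else debugging this: one way to see which entities Hibernate actually registered, and where it thinks each class lives, is to drop down to the underlying session factory (this assumes CF9.0.1+, where ORMGetSessionFactory() exposes the Hibernate 3 SessionFactory; treat it as a sketch):

```cfml
<cfscript>
    // Dump the entity names Hibernate has registered.
    // getAllClassMetadata() is the underlying Hibernate API, not a CF function.
    metadata = ORMGetSessionFactory().getAllClassMetadata();
    writeDump(metadata.keySet().toArray());
</cfscript>
```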

Related

File Addition and Synchronization issues in RavenFS

I am having a very hard time making RavenFS behave properly and was hoping that I could get some help.
I'm running into two separate issues: one where uploading files to RavenFS while using an embedded database inside a service causes RavenDB to fall over, and another where synchronizing two instances set up in the same way makes the destination server fall over.
I have tried to do my best in documenting this. Code and steps to reproduce these issues are located here (https://github.com/punkcoder/RavenFSFileUploadAndSyncIssue), and a video is located here (https://youtu.be/fZEvJo_UVpc). I looked for these issues in the issue tracker and didn't find anything that seemed directly related, but I may have missed something.
The solution for this problem was to remove Raven from the project and replace it with MongoDB. Binary storage in Mongo can be done on the record without issue.

Unresolved reference to WseeFileStore

I am trying to run SOA Suite, and when I execute startWebLogic.sh I get the following error message:
Unresolved reference to WseeFileStore by [<domain name>]/SAFAgents[ReliableWseeSAFAgent]/Store
at weblogic.descriptor.internal.ReferenceManager.resolveReferences(ReferenceManager.java:310)
at weblogic.descriptor.internal.DescriptorImpl.validate(DescriptorImpl.java:322)
at weblogic.descriptor.BasicDescriptorManager.createDescriptor(BasicDescriptorManager.java:332)
at weblogic.management.provider.internal.DescriptorManagerHelper.loadDescriptor(DescriptorManagerHelper.java:68)
at weblogic.management.provider.internal.RuntimeAccessImpl$IOHelperImpl.parseXML(RuntimeAccessImpl.java:690)
at weblogic.management.provider.internal.RuntimeAccessImpl.parseNewStyleConfig(RuntimeAccessImpl.java:270)
at weblogic.management.provider.internal.RuntimeAccessImpl.<init>(RuntimeAccessImpl.java:115)
... 7 more
Does anyone know how to fix this error?
I am running the system on 64-bit SUSE.
The quick and dirty way to get your admin server back up:
cd to <domain name>/config
Back up config.xml just in case
Edit config.xml, find and remove the <saf-agent> tags that point to your non-existent WseeFileStore
Once the admin server is back up, you can look at the Store-and-Forward Agents and Persistent Stores pages to see what is already configured there. It sounds like a SAF agent was somehow created but the backing persistent store was not.
You can always create the persistent store later and add the SAF agent back in if you need it.
This happens simply because the automated tool used to adapt the config.xml file to the new cluster structure is, well, far from perfect. It can create all the other relevant structures fine, but the <saf-agent> entry is created incorrectly. Just open the config.xml file and take a brief look; you should see that something is not right with this entry.
I will use my environment as an example for this situation:
I have a single cluster with two managed servers named osb1 and osb2. Both are administered by the cluster's AdminServer, and all these components are on a single machine called rdaVM. The whole domain was created with the Configuration Wizard, and upon the first AdminServer start I got that dreadful error for quite some time.
The solution does reside in the config.xml file located in <DOMAIN_HOME>/config/config.xml
When I opened this file in an editor and did a quick search for WseeFileStore, I found some curious entries:
<jms-server>
  <name>WseeJmsServer_auto_1</name>
  <target>osb1</target>
  <persistent-store>WseeFileStore_auto_1</persistent-store>
</jms-server>
<jms-server>
  <name>WseeJmsServer_auto_2</name>
  <target>osb2</target>
  <persistent-store>WseeFileStore_auto_2</persistent-store>
</jms-server>
and
<file-store>
  <name>WseeFileStore_auto_1</name>
  <directory>WseeFileStore_auto_1</directory>
  <target>osb1</target>
</file-store>
<file-store>
  <name>WseeFileStore_auto_2</name>
  <directory>WseeFileStore_auto_2</directory>
  <target>osb2</target>
</file-store>
but looking at the offending entry:
<saf-agent>
  <name>ReliableWseeSAFAgent</name>
  <store>WseeFileStore</store>
</saf-agent>
Obviously something is missing here. Looking at <DOMAIN_HOME>, I could see two folders there: WseeFileStore_auto_1 and WseeFileStore_auto_2. So there is no WseeFileStore, hence that annoying error. Also, the saf-agent element doesn't have a target.
Solution: following the same underlying logic, I adapted the <saf-agent> entry to:
<saf-agent>
  <name>ReliableWseeSAFAgent_auto_1</name>
  <target>osb1</target>
  <store>WseeFileStore_auto_1</store>
</saf-agent>
<saf-agent>
  <name>ReliableWseeSAFAgent_auto_2</name>
  <target>osb2</target>
  <store>WseeFileStore_auto_2</store>
</saf-agent>
I.e., I created a <saf-agent> for each of the cluster's managed servers, targeted each entry at its managed server, and added the _auto_# suffix, where # is the ordering number of each managed server, to the <name> and <store> entries.
After that, I was able to run the startWebLogic.sh script without problems (from this source, at least).

Can't migrate custom Plone file types to Blobs

We have custom content types that were created as extensions of the ATTypes, two of them extend the ATFile type and one extends the ATImage type. We recently upgraded from Plone 4.2 to Plone 4.3.2. Just discovered we are not using Blob storage at all. No wonder our Data.fs is HUGE. So, I have been trying to migrate these custom types.
I have followed all of the steps explained in this example and the product's notes from pypi, these Plone instructions, and used the example from the pypi page for archetypes.schemaextender (Sorry, since I'm still a noob my reputation won't let me post more than 2 links).
In the end, I created an extender script that just extends the ATFile type changing the FileField to BlobField. It seems to be working for new items. I can add a new CustomFileType and it appears to be uploading the file to blob, and my new upload field is showing (I changed the description as a quick way to verify which one it was using).
However, I am having a problem migrating the existing content items so that their binary files move over to blob storage. I tried the generic migrate() script, then I created my own migrator and walker as suggested in the above resources. It doesn't seem to be doing anything, though. When printing results for each item it tries to migrate, I do see this returned for each item:
DEBUG ATCT.migration Migrating /site/path/to/custom/file/filename.ext (CustomFile -> Blob)
When I navigate to the custom file type in the site, where it usually shows the link to the file, it is just empty. Going to edit, it treats it as if there is no file there. As a check, I disabled the extender, restarted, and reloaded the custom file; the file was there again. So it looks like the script I am running just isn't moving the file over to where it should now live.
I feel like I am missing something simple, and it is right there, but I can't seem to find it. All of this is learn as I go and a bit over my head, so hopefully someone can easily set me straight.
If I need to provide any additional information leave a comment and I will try to provide what you need.
UPDATE
I used the Red Turtle objects as examples to migrate my custom types, as suggested by keul. I still was not able to get the file to migrate to blob within the type itself. So, I tried a different approach: I created a new custom type, "CustomBlob", that mimics the setup of my CustomFile type, and extended only this new type to be blob-aware. Then I migrated the CustomFiles to CustomBlob, did a complete clear and rebuild, and packed the ZEO. The migration seemed to work for the most part: the blobstorage grew by an expected amount, and the new types worked. However, the Data.fs didn't go down in size. I would have thought that the binary files stored in Data.fs would be removed during the migration. Am I understanding this incorrectly? How can I remove these files so the Data.fs size goes down appropriately?
Not sure if this is the best solution, but here is how I was able to get this to work.
I created temporary content types parallel to each type (for CustomImage I made CustomImageBlob, and so on). I made the new types blob-aware only and migrated all types to their parallels. Then I enabled the extender for the original types to make them blob-aware, and migrated back. It is a little redundant and time-consuming, but I just could not get the files to migrate to blob when migrating a type to itself.
Providing this as the best answer so far in case it helps someone else, or might encourage someone to find a better solution. Thanks for the tip keul, it definitely helped me get to this solution.

Error with relational mapping on new server

I just moved a custom-built CMS over to a live server (it was on the development server) because it is easier to set up RSS. The problem is that none of my relational mappings work anymore, even though I changed the application's cfclocation to reflect the new path. It uses an absolute path, as well. The setup is like so:
F:\...\cmsRoot\com\dac (this is the original path)
F:\...\cmsRoot\admin\com\dac (this is the path on the new server. The only difference is the extra layer for the admin folder; the drive letters are the same)
The Application.cfc and most pages are located in the cmsRoot and cmsRoot\admin folders, respectively. The dac folders contain my relational CFC files.
Originally, when loading each CFC for the first time, ColdFusion would throw an error saying
"Error Occurred While Processing Request
Cannot load the target CFC abc for the relation property abc in CFC xyz
for each relational mapping (I commented them out to make sure every single one had the same problem).
After I added the line <cfscript>ORMReload();</cfscript> to the beginning of each CFC file, I could get past this error and reach the login page just fine. However, now I get an error any time I try to create an entity:
Mapping for component abc not found.
The first instance that calls it (and throws the error) looks like this:
objectABC = EntityToQuery(EntityLoad("abc", {ActiveInd=1}));
I've already searched for related problems on Stack Overflow, and that helped me fix the original error by adding ORMReload() calls. However, it doesn't solve the current problem. I've changed the mapping for the CFCs (in the Application.cfc) to use a relative path, and that did not help either (since I figured it was likely a mapping issue). I also checked folder permissions to make sure they matched, since one user said that fixed their problem. Both folders have the same permissions, as well.
Here's any useful Application.cfc info, if that helps:
this.ormsettings = {
    cfclocation = ["F:\...\cmsRoot\admin\com\dac", "F:\...\cmsRoot\admin\com"],
    dialect = "MicrosoftSQLServer",
    eventHandling = true
};
The only difference I can find between the Application.cfc files on the two servers is the filepaths. Database is set up correctly, and the pages themselves have no problems (that I know of).
Another thing I've found is that commenting out any relational mappings causes everything to load correctly (minus any objectABC.getXYZ() calls since I removed those properties).
I have also restarted the ColdFusion application server, but there were no noticeable differences.
Is it possible that an Application.cfc farther up in the file structure is overriding the cfclocation settings I set up? I didn't think this would be the case, but since nothing seems amiss with my Application.cfc, I am out of ideas. And the Application.cfc/Application.cfm lookup order (under "Settings" in the CFIDE administrator) is the same for both servers; it is set to the default.
I have also tried removing the extra folder layer (so all mappings are the same), but the error is identical.
Update: By adding a specific mapping for /cmsRoot (to F:...\cmsRoot), I get a new error saying the components are not persistent. However, all my CFCs start like this:
component persistent = "true" entityName = .....
Is there a reason why ColdFusion would think the entities aren't persistent even though I defined them otherwise? And yes, I have used ormReload() to make sure everything is updated properly.
The solution I found was to add a specific mapping to the cmsRoot folder by using application.mappings['\cmsRoot'] = 'F:\...\cmsRoot'; in my Application.cfc file.
I had some old ormReload() calls at the top of all the .cfc files because they had allowed some things to work; after deleting those calls, it now loads properly.
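Put together, the relevant part of Application.cfc might look something like the sketch below. (The F:\... paths are abbreviated as in the question; also note the answer sets application.mappings at runtime, while the equivalent inside the Application.cfc pseudo-constructor is this.mappings.)

```cfml
// Application.cfc (sketch; F:\... paths abbreviated as in the question)
component {
    this.name = "cmsRoot";

    // Explicit mapping so ORM resolves the CFC dot-paths the same way on every server
    this.mappings["\cmsRoot"] = "F:\...\cmsRoot";

    this.ormenabled = true;
    this.ormsettings = {
        cfclocation = ["F:\...\cmsRoot\admin\com\dac", "F:\...\cmsRoot\admin\com"],
        dialect = "MicrosoftSQLServer",
        eventHandling = true
    };
}
```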

Migrations don't run on hosting

I'm using MigratorDotNet to manage Rails-style migrations for my web app. I have a workflow where, if I delete all the tables in the database, I can access an installation view that will run MigratorDotNet and create all the necessary tables.
This works locally. For some reason, when I upload my code to my Arvixe hosting, the migrations just never run. I get this odd error:
There is already an object named 'SchemaInfo' in the database.
This is odd because, prior to running migrations, I manually deleted all the tables in the database (to make sure it wasn't left over from a previous install).
My code essentially boils down to:
new Migrator.Migrator("SqlServer", connectionString.ToString(), migrationsAssembly).MigrateToLastVersion();
I've already verified by logging that the connection string is correct (production/hosting settings), and the assembly is correctly loaded (name and version).
Works locally, but not on Arvixe. How do I troubleshoot this?
This is a dark day.
It turns out (oddly) that the root cause was that my hosting company used a schema other than dbo for my database. Because of this, the error message I saw (SchemaInfo already exists) was referring to their table.
My solution, unfortunately, was to rip out MigratorDotNet and go with FluentMigrator instead. Not only did this solve the problem, but it also gave me a more intelligible error message (one referring to the schema names).
While it doesn't seem possible to auto-set the schema, and while I need to switch the schema between my dev and production machines, it's still a solvable problem (and a better API, IMO). I googled but did not find any way to change the default schema in MigratorDotNet.
I'm sorry for the issues you were having. On shared hosting, unfortunately, the only way we may be able to change the schema is manually. If you are still looking for a solution that requires our assistance, please forward your ticket ID to qa .at. arvixe.com as well as arvand .at. arvixe.com, and we can look into the best way to resolve this.