I've read a lot of complaints about the TreePanel, and now I'm having some trouble with it myself.
The first problem is that paging in the TreeGrid isn't working correctly. I found a proposed solution in another post, but it doesn't work, so I'm trying to fix it for everybody.
The second problem is where I need help. The first time a parent node is expanded, it makes a call to the server and displays the children correctly. When I collapse and then expand it again, however, it paints the same child twice and the tree crashes, showing this error:
Uncaught TypeError: Cannot read property 'internalId' of undefined
After working on it, I discovered that the problem is not in the re-insertion itself but in the beforeitemexpand function: it appends the same child twice, causing the JS node data and the TreeStore to contain two nodes with the same internalId, which in turn causes the crash.
Any ideas?
The way the tree store works is a bit complicated. I suspect the issues you are having are related to the idProperty of the models your tree store is storing.
I ran into these issues as well. I found out you cannot have two records with the same identity in multiple places in the tree. For example, suppose your tree represented a file system and you copied the same file into two directories: if your file model has an idProperty (set to id by default), the store puts your file model into a hash keyed off that id, regardless of its path in the tree.
The suggested solution was either not to set the id at all (not a very good solution for an editable tree grid), or to set a compound key that takes the entire chain of nodes up to the root into account, to guarantee uniqueness in the tree.
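To illustrate the compound-key idea, here is a minimal ExtJS 4 sketch; the model name, the pathId field, and the server-side name field are assumptions, not from the original posts:

Ext.define('App.model.FileNode', {
    extend: 'Ext.data.Model',
    idProperty: 'pathId', // compound key instead of the raw server id
    fields: ['name', 'pathId']
});

// Build the compound key from the ancestor chain when a node is appended,
// so two copies of the same file in different folders get different ids.
store.on('append', function (parent, node) {
    if (parent && !node.get('pathId')) {
        node.set('pathId', parent.getPath('name') + '/' + node.get('name'));
    }
});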
Once you get past these issues, the tree works fairly well. Oh, and paging the tree... I don't think that will happen any time soon; I already asked.
I just moved a custom-built CMS from the development server over to a live server, because it is easier to set up RSS there. The problem is that none of my relational mappings work anymore, despite my changing the application.cfclocation to reflect the new path. It uses an absolute path, as well. The setup is like so:
F:\...\cmsRoot\com\dac (this is the original path)
F:\...\cmsRoot\admin\com\dac (this is the path on the new server. The only difference is the extra layer for the admin folder; the drive letters are the same)
The Application.cfc and most pages are located in the cmsRoot and cmsRoot\admin folders, respectively. The dac folders contain my relational CFC files.
Originally, when loading each CFC for the first time, ColdFusion would throw an error saying
Error Occurred While Processing Request
Cannot load the target CFC abc for the relation property abc in CFC xyz
for each relational mapping (I commented them out to make sure every single one had the same problem).
After I added the line <cfscript>ORMReload();</cfscript> to the beginning of each CFC file, I could get past this error and access the login page just fine. However, now I get an error any time I try to create an entity:
Mapping for component abc not found.
The first instance that calls it (and throws the error) looks like this:
objectABC = EntityToQuery(EntityLoad("abc", {ActiveInd=1}));
I've already searched for related problems on Stack Overflow, and that helped me fix the original error by adding the ORMReload() calls. However, it doesn't solve the current problem. Since I figured this was likely a mapping issue, I changed the mapping for the CFCs (in the Application.cfc) to use a relative path, but that did not help either. I also checked folder permissions to make sure they matched, since one user said that fixed their problem; both folders have the same permissions.
Here's the potentially useful Application.cfc info, in case it helps:
this.ormsettings = {
    cfclocation = ["F:\...\cmsRoot\admin\com\dac", "F:\...\cmsRoot\admin\com"],
    dialect = "MicrosoftSQLServer",
    eventHandling = true
};
The only difference I can find between the Application.cfc files on the two servers is the filepaths. Database is set up correctly, and the pages themselves have no problems (that I know of).
Another thing I've found is that commenting out the relational mappings makes everything load correctly (minus any objectABC.getXYZ() calls, since I removed those properties).
I have also restarted the Coldfusion application server, but there were no noticeable differences.
Is it possible that an Application.cfc farther up in the file structure is overriding any cfclocation settings I set up? I didn't think this would be the case, but since nothing seems amiss with my Application.cfc, I am out of ideas. And the application.cfc/.cfm lookup order (under "Settings" in the CFIDE administrator) is the same for both; set as default.
I have also tried removing the extra folder layer (so all mappings are the same), but the error is identical.
Update: By adding a specific mapping for /cmsRoot (to F:...\cmsRoot), I get a new error that the components are not persistent. However, all my cfc's start like this:
component persistent = "true" entityName = .....
Is there a reason why ColdFusion would think the entities aren't persistent even though I declared them otherwise? And yes, I have used ormReload() to make sure everything is updated properly.
The solution I found was to add a specific mapping to the cmsRoot folder by using application.mappings['\cmsRoot'] = 'F:\...\cmsRoot'; in my Application.cfc file.
I had some old ormReload() calls at the top of all the .cfc files because they had allowed some things to work; after deleting those calls, everything now loads properly.
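For reference, a minimal sketch of how the fixed Application.cfc might look in script syntax; the application name and the paths below are placeholders, and this.mappings is the usual form of the mapping line quoted above:

component {
    this.name = "cmsApp"; // placeholder application name
    // the explicit mapping that resolved the "Mapping for component not found" error
    this.mappings["/cmsRoot"] = "F:\sites\cmsRoot"; // placeholder path
    this.ormenabled = true;
    this.ormsettings = {
        cfclocation = ["F:\sites\cmsRoot\admin\com\dac", "F:\sites\cmsRoot\admin\com"], // placeholders
        dialect = "MicrosoftSQLServer",
        eventHandling = true
    };
}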
I am developing a simple application that has a Parent and a Child model. The problem is that the Parents are often updated by reading a text file, while the Children of each Parent are updated through the web app. How can I keep the Children attached to their Parents, given that each time I read the file I create new Parents?
Well, the most obvious thing would be to not re-create the parents every time you parse the file (is that possible?). Using first_or_create, that shouldn't be too painful...
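A minimal sketch of that idea, assuming each line of the text file carries a natural key that identifies the parent; the file format, model, and column names are assumptions:

# Hypothetical file format: "code,name" per line.
File.foreach("parents.txt") do |line|
  code, name = line.chomp.split(",")

  # Reuse the existing row instead of creating a duplicate parent,
  # so existing children keep pointing at the same parent record.
  parent = Parent.where(code: code).first_or_create
  parent.name = name
  parent.save
end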
I'm building a SharePoint 2010 export tool for backup purposes (a bit like the file manager from Metavis).
When downloading a file to local disk I need to back up the metadata associated with the document, which I will store in a CSV file. My first approach was to iterate over all of listItem.FieldValues, but that doesn't really work, because some field values are complex types that would needlessly complicate the backup file, and some values even contain line endings, "MetaInfo" for example. Furthermore, not all values are needed to restore the content should that become necessary.
So my idea is to back up only the values from the FieldValues collection that are needed for a functional restore, supplemented with all the user-added metadata.
To do this I want to check each field value against an exclusion list. If the key is present in the list, don't back it up; if it isn't, it is either user-generated metadata or a value I need, like for instance "author" or "created".
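A minimal sketch of that filter, assuming a CSOM ListItem named item and a backup dictionary are already in scope; the exclusion-list entries are placeholders:

var exclude = new HashSet<string> { "MetaInfo", "ContentTypeId", "owshiddenversion" };
foreach (KeyValuePair<string, object> pair in item.FieldValues)
{
    if (!exclude.Contains(pair.Key))
    {
        // either user-added metadata or a value needed for a restore
        backup[pair.Key] = pair.Value;
    }
}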
So my question is, does anyone know of a list of all fieldvalues keys?
Or is there a better approach to my problem?
Thanks
Update: Well, since I was iterating through the FieldValues collection anyway, it was easy to dump all the values to a CSV. Running it once was enough to get all the keys. Now all I need to write is an XML file for configuration. This leaves the question: is there a better way of doing this?
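For anyone who wants to reproduce that dump, a minimal sketch using the client object model; the site URL, list title, and output path are placeholders:

using System.IO;
using Microsoft.SharePoint.Client;

var ctx = new ClientContext("http://server/sites/backup");  // placeholder URL
List list = ctx.Web.Lists.GetByTitle("Documents");          // placeholder title
ListItemCollection items = list.GetItems(CamlQuery.CreateAllItemsQuery());
ctx.Load(items);
ctx.ExecuteQuery();

using (var writer = new StreamWriter("fieldvalues.csv"))
{
    foreach (ListItem item in items)
    {
        foreach (var pair in item.FieldValues)
        {
            // quote and escape values so embedded commas and line endings survive
            var value = pair.Value == null ? "" : pair.Value.ToString().Replace("\"", "\"\"");
            writer.WriteLine("\"{0}\",\"{1}\"", pair.Key, value);
        }
    }
}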
Filter the list fields by writing the following code:
using System;
using Microsoft.SharePoint.Client;

// set up the context and list items as in the linked MSDN sample
// (the site URL and list title here are placeholders)
ClientContext clientContext = new ClientContext("http://intranet.contoso.com");
List list = clientContext.Web.Lists.GetByTitle("Tasks");
ListItemCollection listItems = list.GetItems(CamlQuery.CreateAllItemsQuery());

// request only the fields you need instead of every field value
clientContext.Load(
    listItems,
    items => items.Include(
        item => item["Title"],
        item => item["Category"],
        item => item["Estimate"]));
clientContext.ExecuteQuery();
Source: http://msdn.microsoft.com/en-us/library/ee857094.aspx#SP2010ClientOM_Creating_Windows_Console_Application
You can create a view with all fields, get the view using the SharePoint object model, get its column names from the collection, and filter them as per your requirement.
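A short sketch of that view-based approach, assuming the clientContext and list variables from the snippet above; the view title is a placeholder:

View view = list.Views.GetByTitle("All Items"); // placeholder view title
clientContext.Load(view, v => v.ViewFields);
clientContext.ExecuteQuery();

foreach (string fieldName in view.ViewFields)
{
    // decide per column whether it belongs in the backup
    Console.WriteLine(fieldName);
}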
I have finished the application. As I wrote in my update, I made a list of all the fieldValues by exporting them to a CSV file. After that I made a configuration file with a boolean 'Backup', which makes it possible to control which values are used when a backup is made.
In retrospect I think a configuration file was not needed. The values used when backing up are so much a part of the whole workings of the program that a configuration file gives an administrator or casual future developer the impression that a simple reconfiguration will fulfill their needs.
I can now see that if the program needs to change due to new requirements, the code has to be changed anyway. So even though setting a value to 'True' will change the output, some other code has to be written as well. If I were to write it again I would probably use constants. That makes it all less dynamic, but still fulfills the needs of the program.
(BTW, a list of all the names of the standard fieldValues would have been nice to start with. I would publish it here, but I no longer have access to the file, because I switched jobs recently.)
I'm confused as to how CacheDependency works in VirtualPathProvider.GetCacheDependency().
Every example I've seen creates a cache dependency based on some physical file on disk, while I'm returning records from a database. Right now I'm overriding GetFileHash and just returning the date/time the relevant record was last modified as the hash string. This works well, and I'm not sure using a CacheDependency would improve performance, as I'd still have to check the database every time the view is requested to see if it's been updated, but I'm still curious how to use CacheDependency.
Has anyone used this when returning views from a database?
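For context, the GetFileHash override described above looks roughly like this inside a VirtualPathProvider subclass; the database lookup helper is hypothetical:

public override string GetFileHash(string virtualPath, IEnumerable virtualPathDependencies)
{
    // hypothetical helper that reads the record's last-modified timestamp
    DateTime? modified = GetViewLastModified(virtualPath);
    return modified.HasValue
        ? modified.Value.Ticks.ToString()
        : Previous.GetFileHash(virtualPath, virtualPathDependencies);
}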
Update
Using this now (http://razorengine.codeplex.com/) which works VERY well.
The point of CacheDependency is to provide you with an event that will be called when the cache becomes invalid (because the file on disk changed). Check out SqlCacheDependency, which does the same thing with SQL Server entries.
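A minimal sketch of that, assuming the views live in a Views table of a database registered for SQL cache notifications; the provider class, path prefix, and the "ViewsDb" entry name are placeholders:

using System;
using System.Collections;
using System.Web.Caching;
using System.Web.Hosting;

public class DbViewPathProvider : VirtualPathProvider
{
    public override CacheDependency GetCacheDependency(
        string virtualPath, IEnumerable virtualPathDependencies, DateTime utcStart)
    {
        if (virtualPath.StartsWith("~/DbViews/", StringComparison.OrdinalIgnoreCase))
        {
            // invalidate the cached view whenever the Views table changes;
            // "ViewsDb" must match a <sqlCacheDependency> entry in web.config
            return new SqlCacheDependency("ViewsDb", "Views");
        }
        return Previous.GetCacheDependency(virtualPath, virtualPathDependencies, utcStart);
    }
}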
I have an app, using Core Data with a SQLite store.
At some point I'd like to remove all objects for a few entities. There may be close to a thousand objects.
From what I can tell via Google and the official docs, the only way to delete objects is to call [managedObjectContext deleteObject:(Entity *)] for every object. But this means that I must first fetch all of the objects.
The data store is just SQLite; is there no way to simply pass a TRUNCATE TABLE ZENTITY; to it?
If you relate your objects to a parent entity, simply delete the parent. If the parent's delete rule is set to 'Cascade', all of those (1k) children will be removed as well.
John
The issue is that Core Data isn't just a SQLite wrapper. It's an object graph management solution, and it stores cached versions of your objects in memory, in addition to other things. As far as I know, to remove all instances of a given managed object you'll have to fetch them and then delete each one. This isn't to say that this functionality shouldn't exist, because it probably should.
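A minimal sketch of that fetch-then-delete loop; the entity name and the context variable are placeholders:

NSFetchRequest *request = [[NSFetchRequest alloc] init];
[request setEntity:[NSEntityDescription entityForName:@"Entity"
                               inManagedObjectContext:context]];
// no property values are needed just to delete the objects
[request setIncludesPropertyValues:NO];

NSError *error = nil;
NSArray *objects = [context executeFetchRequest:request error:&error];
for (NSManagedObject *object in objects) {
    [context deleteObject:object];
}
[context save:&error];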
Have you actually tried it and found it to be a performance issue? If so, then you could provide some information showing that, and more importantly file a bug report on it. If not, then why are you bothering to ask?
It's not as simple as doing a TRUNCATE TABLE ZENTITY; since Core Data must also apply the delete rule for each object (among other actions), which means doing a fetch. So they may as well let you make the fetch and then pass the results into the context one-by-one.