RavenDB Query Documents with Property Removed

In the RavenDB Studio, I can see 69 CustomVariableGroup documents. My query only returns 66 of them. After some digging, I see that the three docs that are not returned have the new class structure: a property was removed. Since I saved these three CustomVariableGroup documents, their structure is different from the other 66. Why though, when I query for all of them, do I only get the other 66 documents with the old structure?
Both my C# code, and my query in LinqPad, only return the 66. Here's the LinqPad query:
Session.Query<CustomVariableGroup>().Dump(); // returns 66 docs
But, if I do this, I can get one of the three documents that is missing from the above query:
Session.Query<CustomVariableGroup>().Where(x => x.Name == "Derating").Dump();
How can I get all 69 documents returned in one query?
** Edit: Index Info **
In the SQL tab of the LinqPad query (and in the Raven server output), the index looks like this:
Url: /indexes/dynamic/CustomVariableGroups?query=&start=0&pageSize=128&aggregation=None
I don't see that index in Raven Studio, presumably because it's dynamic.
** Edit 2: This HACK works **
If I do this, I get all 69 documents:
Session.Query<CustomVariableGroup>().Where(x => x.Name != string.Empty).Dump();
My guess is that Raven must be using an old index that only includes documents that still contain the deleted property. I somehow need to use a new/different index...
Interestingly, this does not work; it only returns 66:
Session.Query<CustomVariableGroup>().Where(x => x.Id != string.Empty).Dump();
** Edit 3: This HACK works as well **
Session.Advanced.LuceneQuery<CustomVariableGroup>("Raven/DocumentsByEntityName").Where("Tag:CustomVariableGroups").Dump();

An index that referenced the old property had to be removed.
** Before ** This didn't work (only returned 66 of the 69 documents):
Session.Query<CustomVariableGroup>().Dump();
** Fix ** Delete the index that referenced the old property that was removed from my C# class:
In Raven Studio, I deleted this index: Auto/CustomVariableGroups/ByApplicationId
** After ** This same query now returns all 69 documents:
Session.Query<CustomVariableGroup>().Dump();
Now, I'm not sure why these queries would use that index. I'm querying for all CustomVariableGroup documents, not by ApplicationId. However, removing that index fixed it. I'm sure someone else can explain why. (Presumably the query optimizer was answering the dynamic query from that auto index, and documents that no longer had the ApplicationId property simply weren't in it.)

What do the indexes look like? Did you manually create the indexes or were they dynamically created? Just wondering if that is the cause of the issue based on your comments above that there was a structure change to the object.
--S

Could it be a stale index? If it's not returning all the results you expect, you could use:
.Customize(x => x.WaitForNonStaleResultsAsOfLastWrite())
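A sketch of how that looks applied to the original query (WaitForNonStaleResultsAsOfLastWrite is part of the RavenDB client's query-customization API; it just makes the session wait for indexing to catch up before returning):
Session.Query<CustomVariableGroup>()
    .Customize(x => x.WaitForNonStaleResultsAsOfLastWrite())
    .Dump();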

Peoplesoft CreateRowset with related display record

According to the PeopleBook here, the CreateRowset function has the parameters {FIELD.fieldname, RECORD.recname}, which are used to specify the related display record.
I had tried to use it like the following (just for example):
&rs1 = CreateRowset(Record.User, Field.UserId, Record.UserName);
&rs1.Fill();
For &k = 1 To &rs1.ActiveRowCount
   MessageBox(0, "", 999999, 99999, &rs1(&k).UserName.Name.Value);
End-For;
(Record.User contains only UserId (key) and Password; Record.UserName contains UserId (key) and Name.)
I cannot get the Value of UserName.Name, do I misunderstand the usage of this parameter?
Fill is the problem. From the doco:
Note: Fill reads only the primary database record. It does not read
any related records, nor any subordinate rowset records.
Having said that, it is the only way I know to bulk-populate a standalone rowset from the database, so I can't easily see a use for the field in the rowset.
Simplest solution is just to create a view, but that gets old very soon if you have to do it a lot. Alternative is to just loop through the rowset yourself loading the related fields. Something like:
For &k = 1 To &rs1.ActiveRowCount
   &rs1(&k).UserName.UserId.Value = &rs1(&k).User.UserId.Value;
   &rs1(&k).UserName.SelectByKey();
End-For;

Neo4j: "ghost" node in label index throws error

I have a neo4j database with a set of nodes with label :EXAMPLE.
There are two operations. First I delete one node, and then I look for another one. They are executed separately via the Neo4j API.
MATCH (n:EXAMPLE {Name: { name1 }}) DELETE n;
and
MATCH (n:EXAMPLE {Name: { name2 }}) RETURN n;
Sometimes, when I execute the second query, it throws an error "Node with id 123". Node with id 123 is the same node that was deleted in the first query.
It happens when a lot of requests are coming to the database simultaneously.
I guess it could happen if the node was deleted but the EXAMPLE label index wasn't updated yet. Two facts support this theory:
1) The error is intermittent.
2) If I change the second query like this (removing the label), I don't get the error:
MATCH (n {Name: { name2 }}) RETURN n;
Neo4j version is 2.1.5, Java - OpenJDK Runtime Environment (IcedTea 2.5.3) (7u71-2.5.3-2~deb7u1) and operation system is Debian. There are no other indexes in the database except the label.
The question is how can I fix this, but still use labels?
What ends up happening (simplified) is that the operations interleave like so:
Q1: MATCH (n)
Q2: DELETE (n), COMMIT
Q1: RETURN n # Error, n no longer exists
For implementation reasons, this is much more likely to happen if Cypher is going via an index. The database will eventually handle this for you, but for now you'll need to wrap that read query in a retry block: if it fails with this type of error, you simply run it again.
On that note, there are other errors that are easily recoverable from by retrying, such as deadlock errors, so wrapping your statements and/or transactions in retry-blocks is a useful thing to do in general.
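A minimal retry-wrapper sketch in Java (the runQuery helper in the usage comment is hypothetical; adapt the error handling to whatever API you use to talk to Neo4j, and ideally retry only on the transient errors described above):
import java.util.concurrent.Callable;

public final class Retry {
    // Run the action, retrying up to maxAttempts times if it throws.
    public static <T> T withRetry(int maxAttempts, Callable<T> action) throws Exception {
        Exception last = null;
        for (int attempt = 1; attempt <= maxAttempts; attempt++) {
            try {
                return action.call();
            } catch (Exception e) {
                last = e; // a real check would inspect the error type before retrying
            }
        }
        throw last;
    }
}

// Hypothetical usage:
// Node n = Retry.withRetry(3, () -> runQuery("MATCH (n:EXAMPLE {Name: {name2}}) RETURN n"));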
This is a possible workaround:
Mark nodes as deleted instead of deleting. Ignore nodes that are marked as deleted. Delete all such nodes at once with a garbage collector.
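A sketch of that workaround in Cypher (the DELETED marker label is just an illustration; any label or property flag would do):
MATCH (n:EXAMPLE {Name: { name1 }}) SET n:DELETED;
MATCH (n:EXAMPLE {Name: { name2 }}) WHERE NOT n:DELETED RETURN n;
MATCH (n:DELETED) DELETE n; // the periodic garbage-collection pass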

DQL query to return all files in a Cabinet in Documentum?

I want to retrieve all the files from a cabinet (called 'Wombat Insurance Co'). Currently I am using this DQL query:
select r_object_id, object_name from dm_document(all)
where folder('/Wombat Insurance Co', descend);
This is ok except it only returns a maximum of 100 results. If there are 5000 files in the cabinet I want to get all 5000 results. Is there a way to use pagination to get all the results?
I have tried this query:
select r_object_id, object_name from dm_document(all)
where folder('/Wombat Insurance Co', descend)
ENABLE (RETURN_RANGE 0 100 'r_object_id DESC');
with the intention of getting results in 100 file increments, but this query gives me an error when I try to execute it. The error says this:
com.emc.documentum.fs.services.core.CoreServiceException: "QUERY" action failed.
java.lang.Exception: [DM_QUERY2_E_UNRECOGNIZED_HINT]error:
"RETURN_RANGE is an unknown hint or is being used incorrectly."
I think I am using the RETURN_RANGE hint correctly, but maybe I'm not. Any help would be appreciated!
I have also tried using the hint ENABLE(FETCH_ALL_RESULTS 0) but this still only returns a maximum of 100 results.
To clarify, my question is: how can I get all the files from a cabinet?
You have already accepted an answer that uses DFS.
Since you are also playing with DFC, this information might help you.
DFS:
If you are using DFS, you have to be aware of the number of concurrent sessions that you can consume with DFS.
I think it is 100 or 150.
DFC:
There is a limit on how many results you can fetch via DFC (I'm not sure about DFS).
Go to your DFC application (Webtop, DA, or anything else) and check the dfc.properties file.
# Maximum number of results to retrieve by a query search.
# min value: 1, max value: 10000000
#
dfc.search.max_results = 100
# Maximum number of results to retrieve per source by a query search.
# min value: 1, max value: 10000000
#
dfc.search.max_results_per_source = 400
A dfc.properties.full (or similarly named) file is there, and you can verify these values for your system.
I'm talking about the Content Server side here, not the client-side dfc.properties file.
If you use the ENABLE (RETURN_TOP) hint with DFC, there are two ways to fetch the results from the Content Server:
Object based
Row based
You have to configure this by using the parameter return_top_results_row_based in the server.ini file.
All of these changes are on the Documentum server side, not in your DFC/DQL client.
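As a sketch, that switch would sit in server.ini something like this (the T/F value format follows the usual server.ini convention; check the Content Server administration guide for your version):
[SERVER_STARTUP]
return_top_results_row_based = T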
Aha, I've figured it out. Using DFS with Java (an abstraction layer on top of DFC) you can set the starting index for query results:
String queryStr = "select r_object_id, object_name from dm_document(all) "
    + "where folder('/Wombat Insurance Co', descend)";
PassthroughQuery query = new PassthroughQuery();
query.setQueryString(queryStr);
query.addRepository(repositoryStr);
QueryExecution queryEx = new QueryExecution();
queryEx.setCacheStrategyType(CacheStrategyType.DEFAULT_CACHE_STRATEGY);
queryEx.setStartingIndex(currentIndex); // set start index here
OperationOptions operationOptions = null;
// will return 100 results starting from currentIndex
QueryResult queryResult = queryService.execute(query, queryEx, operationOptions);
You can just increment the currentIndex variable to get all results.
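A hedged sketch of that loop, reusing the query object from above (the getDataPackage().getDataObjects() calls are my assumption about the DFS productivity-layer result API; verify the names against your DFS version):
int pageSize = 100;   // DFS was returning 100 rows per call here
int currentIndex = 0;
boolean more = true;
while (more) {
    QueryExecution queryEx = new QueryExecution();
    queryEx.setCacheStrategyType(CacheStrategyType.DEFAULT_CACHE_STRATEGY);
    queryEx.setStartingIndex(currentIndex);
    QueryResult queryResult = queryService.execute(query, queryEx, null);
    // Assumption: this is how you count the rows that came back.
    int returned = queryResult.getDataPackage().getDataObjects().size();
    // ... process the rows here ...
    more = (returned == pageSize);
    currentIndex += returned;
}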
Well, the hint is being used incorrectly. Start with 1, not 0.
There is no built-in limit in DQL itself. All results are returned by default. The reason you get only 100 results must have something to do with the way you're using DFC (or whichever other client you are using). Using IDfCollection in the following way will surely return everything:
IDfQuery query = new DfQuery("SELECT r_object_id, object_name "
+ "FROM dm_document(all) WHERE FOLDER('/System', DESCEND)");
IDfCollection coll = query.execute(session, IDfQuery.DF_READ_QUERY);
int i = 0;
while (coll.next()) i++;
System.out.println("Number of results: " + i);
In a test environment (CS 6.7 SP1 x64, MS SQL), this outputs:
Number of results: 37162
Now, there's proof. Using paging is, however, a good idea if you want to improve the overall performance of your application. As mentioned, start counting at 1:
ENABLE(RETURN_RANGE 1 100 'r_object_id DESC')
This way of paging requires that sorting be specified in the hint rather than in the DQL ORDER BY clause. If all you want is the first 100 records, try this hint instead:
ENABLE(RETURN_TOP 100)
In this case sorting with ORDER BY will work as you'd expect.
Lastly, note that adding (all) will not only find all documents matching the specified qualification, but all versions of every document. If this was your intention, that's fine.
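Putting the paging advice together, a sketch of RETURN_RANGE paging through DFC (assuming an open IDfSession named session; otherwise only calls already shown in this thread are used):
int pageSize = 100;
int first = 1;   // RETURN_RANGE counts from 1, as noted above
boolean more = true;
while (more) {
    String dql = "SELECT r_object_id, object_name FROM dm_document(all) "
        + "WHERE FOLDER('/Wombat Insurance Co', DESCEND) "
        + "ENABLE(RETURN_RANGE " + first + " " + (first + pageSize - 1)
        + " 'r_object_id DESC')";
    IDfCollection coll = new DfQuery(dql).execute(session, IDfQuery.DF_READ_QUERY);
    int rows = 0;
    try {
        while (coll.next()) {
            rows++;  // process coll.getString("object_name") here
        }
    } finally {
        coll.close();
    }
    more = (rows == pageSize);
    first += pageSize;
}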
I've worked with the DFC API (with Java) for a while, but I don't remember any default limit on queries; IIRC we always got all of the documents, without any limit. Actually (according to my notes) we had to set the limit explicitly with, for example, enable (return_top 2000). (As far as I know, the syntax may depend on the DBMS behind EMC Documentum.)
Just a guess: check your dfc.properties file.

Rails 3 selecting only values

In Rails 3, I would like to do the following:
SomeModel.where(:some_connection_id => anArrayOfIds).select("some_other_connection_id")
This works, but i get the following from the DB:
[{"some_other_connection_id":254},{"some_other_connection_id":315}]
Now, those ids are the ones I need, but I am unable to make a query that gives me only the ids. I do not want to have to iterate over the results just to get those numbers out. Is there any way for me to do this with something like:
SomeModel.where(:some_connection_id => anArrayOfIds).select("some_other_connection_id").values()
Or something of that nature?
I have been trying with the ".select_values()" found on GitHub, but it only returns "some_other_connection_id".
I am not an expert in Rails, so this info might be helpful also:
The "SomeModel" is a connecting table for a many-to-many relation in one of my other models. So, actually, what I am trying to do is to get, from the array of ids, all the entries on the other side of the connection. Basically, I have the source ids, and I want to get the data from the models with all the target ids. If there is a magic way of getting these without me having to do all the SQL myself (with some help from Active Record), it would be really nice!
Thanks :)
Try the pluck method:
SomeModel.where(:some => condition).pluck("some_field")
It works like:
SomeModel.where(:some => condition).select("some_field").map(&:some_field)
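Since what you ultimately want is the records on the other side of the join, you can feed the plucked ids straight into a second query (a sketch; TargetModel stands in for whatever the model on the other side of the connection is actually called):
target_ids = SomeModel.where(:some_connection_id => anArrayOfIds).pluck(:some_other_connection_id)
TargetModel.where(:id => target_ids)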
SomeModel.where(:some_connection_id => anArrayOfIds).select("some_other_connection_id").map &:some_other_connection_id
This is essentially a shorthand for:
results = SomeModel.where(:some_connection_id => anArrayOfIds).select("some_other_connection_id")
results.map {|row| row.some_other_connection_id}
Look at Array#map for details on the map method.
Beware that there is no lazy loading here, as it iterates over the results, but that shouldn't be a problem unless you want to add more constructs to your query or retrieve some associated objects (which should not be the case, as you haven't got the ids for loading the associated objects).

MOSS 2007: What is the source of "Directories"?

I'm trying to generate a new SharePoint list item directly using SQL Server. What's stopping me is the damn tp_DirName column. I have no idea how to create this value.
For instance, I have selected all tasks from AllUserData, and these are possible values for the column: 'MySite/Lists/Task', 'Lists/Task' and even 'MySite/Lists/List2'.
MySite is the FullUrl value from the Webs table. I can obtain it. But what about 'Lists/Task' and '/Lists/List2'? Where are they stored?
To take it out of the SQL context, I can formulate it the following way: what is the object that has an attribute such as '/Lists/List2'? Where can I set it up in the GUI?
Just an FYI: it is VERY unsupported to write directly to SharePoint's SQL tables. You should really write something that utilizes the SharePoint object model. Writing to the SharePoint database directly means Microsoft will not support the environment.
I've discovered that the [AllDocs] table, in contrast to its title, contains information about "directories" that can be used to generate tp_DirName. At least, I've found "List2" and "Task" entries in the [AllDocs].[tp_Leaf] column.
So the solution looks like this: concatenate the following two components to get tp_DirName:
[Webs].[FullUrl] for the web containing the list containing the item.
[AllDocs].[tp_Leaf] for the list containing the item.
Concatenate the following two components to get tp_Leaf for an item:
(Item count in the list) + 1
'_.000'
Well, my previous answer was not very useful, though it had a key to the magic. Now I have a really useful one.
Whatever they said, M$ is very liberal to the MOSS DB hackers. At least they provide the following documents:
http://msdn.microsoft.com/en-us/library/dd304112(PROT.13).aspx
http://msdn.microsoft.com/en-us/library/dd358577(v=PROT.13).aspx
Read? Then, you know that all folders are listed in the [AllDocs] table with '1' in the 'Type' column.
Now, let's look at the 'tp_RootFolder' column in AllLists. It looks like a folder id, doesn't it? So, just SELECT the single row from [AllDocs] where Id = tp_RootFolder and Type = 1. Then concatenate DirName + LeafName, and you will know what the 'tp_DirName' value for a newly generated item in the list should be. That looks like a rock-solid solution.
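As a sketch in T-SQL (table and column names as described above; the '/' separator and the tp_ID column for the list id are my assumptions, untested against a real MOSS schema):
SELECT d.DirName + N'/' + d.LeafName AS tp_DirName
FROM [AllDocs] d
JOIN [AllLists] l ON d.Id = l.tp_RootFolder
WHERE l.tp_ID = '...my list id...' AND d.Type = 1;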
Now, about tp_LeafName for the new items. Before, I wrote that the answer is (Item count in the list) + 1 + '_.000', which corresponds to the following query:
DECLARE @itemscount int;
SELECT @itemscount = COUNT(*) FROM [dbo].[AllUserData] WHERE [tp_ListId] = '...my list id...';
INSERT INTO [AllUserData] (tp_LeafName, ...) VALUES(CAST(@itemscount + 1 AS NVARCHAR(255)) + '_.000', ...)
That said, I'm not sure it always works. For items, yes, but for docs... I'll inquire into the question. Leave a comment if you want to read a report.
Hehe, there is a stored procedure named proc_AddListItem. I was almost right. MS people do the same, but instead of (count + 1) they use just... tp_ID :)
Anyway, now I know THE SINGLE RIGHT answer: I have to call proc_AddListItem.
UPDATE: Don't forget to present the data from the [AllUserData] table as a new item in [AllDocs] (just insert id and leafname, see how SP does it itself).