Delete documents in all collections using indexeradmin rdocs - fast-esp

Is it possible to use
indexeradmin rdocs list-of-ids.txt
to delete documents across all collections?
I ran the command but found it hadn't deleted the documents. The docs say to specify the collection name, and I can delete the documents per collection, but I wondered whether it is possible to have a list of mixed internal_ids from different collections and have indexeradmin delete them all. For example, a file containing:
0034953453453453_collection1
4345345345345334_collection2
would delete the documents in collection1 and collection2.

For each collection do this:
indexerinfo -a reportcontents collection-name > /tmp/list-of-ids.txt
indexeradmin --column=1 --row=1 rdocs /tmp/list-of-ids.txt collection-name session-id
Execute the second line for every column/row of the cluster. You can choose any integer for the session-id.
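A minimal sketch of scripting the two steps above across several collections and cluster nodes (the collection names, the (column, row) pairs and the session id are placeholders to replace with your own values):

import subprocess

collections = ["collection1", "collection2"]   # placeholder: your collection names
cluster = [(1, 1)]                             # placeholder: the (column, row) pairs of your cluster
session_id = "12345"                           # any integer will do

for coll in collections:
    id_file = "/tmp/list-of-ids-%s.txt" % coll
    # dump the internal ids of every document in the collection
    with open(id_file, "w") as out:
        subprocess.run(["indexerinfo", "-a", "reportcontents", coll], stdout=out, check=True)
    # remove those documents on every column/row of the cluster
    for column, row in cluster:
        subprocess.run(["indexeradmin", "--column=%d" % column, "--row=%d" % row,
                        "rdocs", id_file, coll, session_id], check=True)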

Related

fn.subsequence works differently in the Optic API

In my program I need to join two or more collections on some JSON properties.
When I run the subsequence method on its own it returns an array of JSON objects, but when I use it in op.fromLiterals in my Optic plan it returns a list of document URIs.
I can't use op.fromSearch because I can't upgrade to a later MarkLogic version.
I need something like this to work:
var items = fn.subsequence(search).toArray();
op.fromLiterals(items)
.joinInner(article, op.on('fragmentId', 'viewDocId'))
.result()
But now items is a list of document locations (document_1.json) and this code gives me an error:
XDMP-ARGTYPE:
xdmp.documentGet(cts.doc("/Documents/document_1.json"))
Solution: I push the properties into results like this: results.push({id: doc.toObject()["document_id"]}); and it works fine.

What does 'count' mean in the Master API's 'Assign a file key'?

I was reading this page about the API:
https://github.com/chrislusf/seaweedfs/wiki/Master-Server-API#assign-a-file-key
but I didn't understand what the "count" option does.
Can you give an example of this option, please?
count: how many file ids to assign. Use <fid>_1, <fid>_2 for the assigned additional file ids. e.g. 3,01637037d6_1, 3,01637037d6_2
Best regards
Sometimes you may want to reserve multiple file ids.
E.g., one picture may have multiple versions.
A file id is a triple <volume_id, key, cookie>.
The _1, _2 suffixes are translated to <volume_id, key+1, cookie> and <volume_id, key+2, cookie>.
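To illustrate, here is a minimal sketch of reserving three file ids in one assign call and uploading three versions of a picture (the master address, the file names and the use of the requests library are assumptions, not part of the wiki page):

import requests

# ask the master for one fid plus two extra reserved ids
assign = requests.get("http://localhost:9333/dir/assign", params={"count": 3}).json()
fid, volume_url = assign["fid"], assign["url"]

versions = ["original.jpg", "thumbnail.jpg", "medium.jpg"]   # placeholder local files
for i, path in enumerate(versions):
    # the first version uses the fid as-is; the extra ones append _1, _2, ...
    file_id = fid if i == 0 else "%s_%d" % (fid, i)
    with open(path, "rb") as f:
        requests.post("http://%s/%s" % (volume_url, file_id), files={"file": f})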

(Neo4j) Carry variable over subsequent queries

I am trying to carry a variable over through two subsequent queries. It seems like WITH only carries the variable to the next query, but not to any after that. Suggestions?
This is an example of what I am trying to do:
Person nodes contain information on publishers, writers and editors (e.g. name, gender, etc.)
Story nodes contain data on Story (e.g. title, publish date, etc.)
IN relationships have categories: created, edited, published.
Return editor-publishers who have edited stories published by another editor-publisher:
assume no duplicate Person names
Find all Persons who have edited at least one story and have also published at least one story
Find the list of stories published by the editor-publishers from 1
Among all editors of the stories in 2, return the sublist of those editors that are also in 1
MATCH (EditorPublisher:Person)-[:IN{category: "published"}]->(:Story) // 1
WHERE (EditorPublisher:Person)-[:IN{category: "edited"}]->(:Story)
WITH COLLECT(EditorPublisher.name) as EditorPublisher_list
MATCH (EditorPublisher_stories:Story)<-[:IN{category: "published"}]-(publisher:Person) // 2
WHERE publisher.name in EditorPublisher_list
WITH EditorPublisher_list // throws error EditorPublisher_list variable not found
WITH COLLECT(EditorPublisher_stories.title) as EditorPublisher_stories_list
MATCH (epe:Person)-[contribution:PLAYED]-(eps:Movie) // 3
WHERE epe.name in EditorPublisher_list
AND eps IN EditorPublisher_stories_list
RETURN epe.name
Never mind, I got it to work. WITH does keep the variables as long as I don't rename them away.
I just had to include the variables I still need in each WITH and refer to them in the subsequent queries, instead of collecting them into a list and looking values up in it.
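For reference, here is a sketch that keeps the original collected-list approach working by carrying EditorPublisher_list through the second WITH instead of opening a fresh one that drops it; step 3 is rewritten against the Person/Story schema described above, and the Bolt URL and credentials in the Python driver call are placeholders:

from neo4j import GraphDatabase

query = """
MATCH (ep:Person)-[:IN {category: "published"}]->(:Story)            // 1
WHERE (ep)-[:IN {category: "edited"}]->(:Story)
WITH COLLECT(DISTINCT ep.name) AS EditorPublisher_list
MATCH (s:Story)<-[:IN {category: "published"}]-(publisher:Person)    // 2
WHERE publisher.name IN EditorPublisher_list
WITH EditorPublisher_list, COLLECT(DISTINCT s.title) AS EditorPublisher_stories_list
MATCH (epe:Person)-[:IN {category: "edited"}]->(eps:Story)           // 3
WHERE epe.name IN EditorPublisher_list
  AND eps.title IN EditorPublisher_stories_list
RETURN DISTINCT epe.name AS name
"""

driver = GraphDatabase.driver("bolt://localhost:7687", auth=("neo4j", "password"))
with driver.session() as session:
    for record in session.run(query):
        print(record["name"])
driver.close()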

How to retrieve all hash values from a list in Redis?

In Redis, to store an array of objects we use a hash for each object and add its key to a list:
HMSET concept:unique_id name "concept"
...
LPUSH concepts concept:unique_id
...
I want to retrieve all hash values (the objects) in the list, but the list contains only the hash keys, so a two-step command is necessary, right? This is how I'm doing it in Python:
def get_concepts():
    keys = r.lrange("concepts", 0, -1)
    pipe = r.pipeline()
    for key in keys:
        pipe.hgetall(key)
    return pipe.execute()
Is it necessary to iterate and fetch each individual item? Can it be optimized further?
You can use the SORT command to do this:
SORT concepts BY nosort GET concept:*->name GET concept:*->some_key
Where * will expand to each item in the list.
Add LIMIT offset count for pagination.
Note that you have to enumerate each field in the hash (each field you want to fetch).
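In redis-py the same SORT call can be issued with the client's sort() helper; a small sketch, assuming the list stores the bare unique ids (so that concept:* expands to the full hash key) and that each hash has a name and a some_key field:

import redis

r = redis.Redis(decode_responses=True)

# SORT concepts BY nosort GET concept:*->name GET concept:*->some_key
rows = r.sort("concepts", by="nosort",
              get=["concept:*->name", "concept:*->some_key"])

# the reply is flat: [name1, some_key1, name2, some_key2, ...] -- regroup it per concept
concepts = [{"name": n, "some_key": s} for n, s in zip(rows[0::2], rows[1::2])]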
Another option is to use the EVAL command (new in Redis 2.6) to run a Lua script on the Redis server, which could do what you are suggesting, but server-side.

MOSS 2007: What is the source of "Directories"?

I'm trying to generate a new SharePoint list item directly in SQL Server. What's stopping me is the damn tp_DirName column: I have no idea how to create this value.
For instance, I selected all tasks from AllUserData, and these are possible values for the column: 'MySite/Lists/Task', 'Lists/Task' and even 'MySite/Lists/List2'.
MySite is the FullUrl value from the Webs table, and I can obtain it. But what about 'Lists/Task' and '/Lists/List2'? Where are they stored?
Leaving the SQL context aside, I can formulate it this way: what is the object that has an attribute such as '/Lists/List2'? Where can I set it up in the GUI?
Just an FYI: writing directly to SharePoint's SQL tables is very much unsupported. You should really write something that uses the SharePoint Object Model. Writing to the SharePoint database directly means Microsoft will not support the environment.
I've discovered that the [AllDocs] table, in contrast to its name, contains information about "directories" that can be used to generate tp_DirName. At least, I've found "List2" and "Task" entries in the [AllDocs].[tp_Leaf] column.
So the solution looks like this -- concatenate the following 2 components to get tp_DirName:
[Webs].[FullUrl] for the web containing the list that contains the item.
[AllDocs].[tp_Leaf] for the list containing the item.
Concatenate the following 2 components to get tp_Leaf for an item:
(Item count in the list) + 1
'_.000'
Regards,
Well, my previous answer was not very useful, though it held a key to the magic. Now I have a really useful one.
Whatever they say, M$ is quite liberal toward MOSS DB hackers. At least they provide the following documents:
http://msdn.microsoft.com/en-us/library/dd304112(PROT.13).aspx
http://msdn.microsoft.com/en-us/library/dd358577(v=PROT.13).aspx
Read them? Then you know that all folders are listed in the [AllDocs] table with '1' in the 'Type' column.
Now let's look at the 'tp_RootFolder' column in AllLists. It looks like a folder id, doesn't it? So just SELECT the single row from [AllDocs] where Id = tp_RootFolder and Type = 1, then concatenate DirName + LeafName, and you will know what the 'tp_DirName' value for a newly generated item in the list should be. That looks like a rock-solid solution.
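A sketch of that lookup from client code (pyodbc here; the connection string, the join via AllLists.tp_ID and the '/' separator are my assumptions on top of the column names mentioned above):

import pyodbc

conn = pyodbc.connect("DRIVER={ODBC Driver 17 for SQL Server};"
                      "SERVER=.;DATABASE=WSS_Content;Trusted_Connection=yes")
cur = conn.cursor()

# the list's root folder is the AllDocs row with Id = tp_RootFolder and Type = 1
cur.execute("""
    SELECT d.DirName, d.LeafName
    FROM [AllDocs] d
    JOIN [AllLists] l ON l.tp_RootFolder = d.Id
    WHERE l.tp_ID = ? AND d.Type = 1
""", ("...my list id...",))

dir_name, leaf_name = cur.fetchone()
tp_dir_name = dir_name + "/" + leaf_name   # e.g. 'MySite/Lists' + '/' + 'Task'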
Now about tp_LeafName for new items. Earlier I wrote that the answer is (Item count in the list) + 1 + '_.000', which corresponds to the following query:
DECLARE @itemscount int;
SELECT @itemscount = COUNT(*) FROM [dbo].[AllUserData] WHERE [tp_ListId] = '...my list id...';
INSERT INTO [AllUserData] (tp_LeafName, ...) VALUES (CAST(@itemscount + 1 AS NVARCHAR(255)) + '_.000', ...);
That said, I have to admit I'm not sure it always works. For items, yes, but for docs... I'll look into the question. Leave a comment if you want to read a report.
Hehe, there is a stored procedure named proc_AddListItem. I was almost right: the MS people do the same thing, but instead of (count + 1) they simply use... tp_ID :)
Anyway, now I know THE SINGLE RIGHT answer: I have to call proc_AddListItem.
UPDATE: Don't forget to register the data from the [AllUserData] table as a new item in [AllDocs] (just insert the id and leaf name; see how SharePoint does it itself).