How to traverse & query TitanGraph faster with Cassandra as a backend? - indexing

I have millions of nodes stored in Titan 1.0.0 with Cassandra 2.2.4. I want to retrieve the graph from Cassandra and query or traverse it quickly.
If I build indexes in code,
mgmt.buildIndex("nameSearchIndex", Vertex.class).addKey(namep, Mapping.TEXT.asParameter()).buildMixedIndex("search");
mgmt.buildIndex("addressSearchIndex", Vertex.class).addKey(addressp, Mapping.TEXT.asParameter()).buildMixedIndex("search");
the querying still seems slow.
When I use
g.traversal().V().count()
it still gives a warning ("please use indexes"), even though I have already built indexes in code. Is there any specific configuration needed to force the indexes to be used? How do I query the graph using indexes?
g.traversal().V().has("Name","Jason")
Does this query use indexes? If not, how do I make use of indexes to query faster?
Can Spark be used for fast traversal? How do I use SparkGraphComputer for this? I am not able to find the configuration for CassandraInputFormat with Spark.
Thanks.

There are lots of questions bundled up in this question.
The indexes you're making are mixed indexes, which are implemented by an external indexing system such as Solr or Elasticsearch. These would help if you are looking for a vertex with a certain name, such as your .has("Name", "Jason") example.
To find out if an index is being used, I suggest looking into the profile() step in Gremlin. You can read about it here.
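For example, a minimal sketch (on newer Gremlin versions .profile() alone is enough; on the TinkerPop 3.0.x line shipped with Titan 1.0 you may also need to cap the metrics side-effect):
g.V().has('name','jason').profile().cap(TraversalMetrics.METRICS_KEY)
The metrics show how each step was executed and where the time went; an initial step that has to scan every vertex instead of hitting an index will stand out immediately.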
Spark is meant to be used for traversals that need to potentially load a graph that is bigger than one machine can hold. What use case is .V().count() important for?
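On the Spark question: SparkGraphComputer is wired up through a TinkerPop Hadoop-graph properties file rather than in code. A rough sketch for Titan 1.0, modeled on the read-cassandra properties example in the Titan docs (the hostname and Spark master are placeholders you need to adjust):
gremlin.graph=org.apache.tinkerpop.gremlin.hadoop.structure.HadoopGraph
gremlin.hadoop.graphInputFormat=com.thinkaurelius.titan.hadoop.formats.cassandra.CassandraInputFormat
gremlin.hadoop.graphOutputFormat=org.apache.hadoop.mapreduce.lib.output.NullOutputFormat
gremlin.hadoop.jarsInDistributedCache=true
gremlin.hadoop.inputLocation=none
gremlin.hadoop.outputLocation=output
# connection details of the Cassandra cluster backing Titan
titanmr.ioformat.conf.storage.backend=cassandrathrift
titanmr.ioformat.conf.storage.hostname=localhost
# Spark settings
spark.master=local[4]
spark.serializer=org.apache.spark.serializer.KryoSerializer
Then, in the Gremlin console:
graph = GraphFactory.open('read-cassandra.properties')
g = graph.traversal(computer(SparkGraphComputer))
g.V().count()   // runs as a Spark OLAP job over the whole graph
Keep in mind this helps full-graph analytics such as counts; it will not make point lookups like has('Name','Jason') faster - that is what the indexes are for.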

This answer was cross-posted on the Titan mailing list.
Indexing is useful for doing fast traversals, but ultimately "fast queries" depends on many factors, including your graph model/volume/shape and the types of questions you are trying to answer.
Read Chapter 8 "Indexing for better Performance" in the Titan docs, and digest the differences between the different types: Composite, Mixed, and Vertex-centric.
Based on the example query you posted, and as Daniel noted, it looks to me like an exact match type of query, so I would start with a Composite index. You can cut and paste this to try it out in the Titan Console.
graph = TitanFactory.open('inmemory')
mgmt = graph.openManagement()
name = mgmt.makePropertyKey('name').dataType(String.class).cardinality(Cardinality.SINGLE).make()
nameIndex = mgmt.buildIndex('nameIndex',Vertex.class).addKey(name).buildCompositeIndex()
mgmt.commit()
graph.addVertex('name','jason')
g = graph.traversal()
g.V().has('name','jason') // no warning should appear
If after reading the Composite vs Mixed Index section you decide that a Mixed index (backed by Elasticsearch, Solr, or Lucene) is what you really need, read Chapter 20 "Index Parameters and Full-Text Search", and digest the differences between the mappings TEXT, STRING, and TEXTSTRING.
Here's an example that uses a STRING mixed index:
graph = TitanFactory.build().set('storage.backend','inmemory').set('index.search.backend','elasticsearch').open()
mgmt = graph.openManagement()
name = mgmt.makePropertyKey('name').dataType(String.class).cardinality(Cardinality.SINGLE).make()
nameIndex = mgmt.buildIndex('nameIndex',Vertex.class).addKey(name, Mapping.STRING.asParameter()).buildMixedIndex("search")
mgmt.commit()
graph.addVertex('name','jason')
g = graph.traversal()
g.V().has('name','jason') // no warning should appear
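For comparison, with the Mapping.TEXT mapping used in the question, the indexed value is tokenized, so the index answers full-text predicates rather than exact matches. A sketch, assuming the Titan console's default imports (textContains comes from com.thinkaurelius.titan.core.attribute.Text):
g.V().has('name', textContains('jason'))   // answered by a TEXT-mapped mixed index
An exact-match has('name','jason') would not be answered by a TEXT-mapped index, which may be why the warning still appears.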

Related

Neo4j: how do I use an index in a relationship between two nodes?

I'm debugging the code of an API and I found a Cypher query that takes 6 minutes to return the data.
I ran the Neo4j code in smaller chunks and found that this snippet is causing the problem:
MATCH (copart:CopartOperadora) WHERE NOT (copart)-[:FROM_TO]->(:Coexistence)
I'm new to Neo4j, so I still haven't figured out how to optimize this query.
Thanks to everyone who contributed.
Optimizations of this kind usually depend on the schema of your graph database; without that it's very hard to provide any insights. But you can try this:
MATCH (copart:CopartOperadora)-[:FROM_TO]->(:Coexistence)
WITH collect(id(copart)) AS connectedNodesIds
MATCH (copart:CopartOperadora) WHERE id(copart) NOT IN connectedNodesIds
RETURN copart
We can't create any index as such, unfortunately. But if the FROM_TO relationship only ever goes from CopartOperadora to Coexistence nodes, then you can drop the Coexistence node label altogether, which will be optimal. Something like this:
MATCH (copart:CopartOperadora) WHERE NOT (copart)-[:FROM_TO]->() RETURN copart

Creating a full text index in a SQL database

I am searching for a database solution for real full text indexing.
I have read Postgres' full text search chapter, but it describes text searching, which is not a "full" index and is heuristic in nature.
However, I found the contrib/fulltextindex module (https://pgpedia.info/f/fulltextindex.hml), which sounds promising.
So my questions are as follows:
Why was it removed in PostgreSQL 8.1?
How can I use it?
Are there other alternative database solutions that do support this kind of feature?
What is the performance one can expect?
The index to use for full-text search is a GiST index, and there is nothing heuristic about it (except the "picksplit" algorithm). "fulltextindex" was removed in 8.2, and full text search got added to core in 8.3, so that's what you should use.
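For example, a minimal sketch of the in-core approach (PostgreSQL 8.3 or later; the docs table and body column are made up for illustration):
CREATE INDEX docs_fts_idx ON docs USING gist (to_tsvector('english', body));
SELECT * FROM docs WHERE to_tsvector('english', body) @@ to_tsquery('english', 'burger');
The WHERE clause has to use the same to_tsvector expression as the index definition for the index to be used.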
Read the WARNING file from release 8.1:
WARNING
-------
This implementation of full text indexing is very slow and inefficient. It is
STRONGLY recommended that you switch to using contrib/tsearch which offers these
features:
Advantages
----------
* Actively developed and improved
* Tight integration with OpenFTS (openfts.sourceforge.net)
* Orders of magnitude faster (eg. 300 times faster for two keyword search)
* No extra tables or multi-way joins required
* Select syntax allows easy 'and'ing, 'or'ing and 'not'ing of keywords
* Built-in stemmer with customisable dictionaries (ie. searching for 'jellies' will find 'jelly')
* Stop words automatically ignored
* Supports non-C locales
Disadvantages
-------------
* Only indexes full words - substring searches on words won't work.
eg. Searching for 'burg' won't find 'burger'
Due to the deficiencies in this module, it is quite likely that it will be removed from the standard PostgreSQL distribution in the future.
PostgreSQL is open source. To see the discussion that led to the removal of the module, search the archives. You will find this and this.

Using an index in DSE Graph

I'm trying to get the list of persons in a DataStax graph that share the same address with other persons, where the number of persons is between 3 and 5.
This is the query:
g.V().hasLabel('person').match(__.as('p').out('has_address').as('a').dedup().count().as('nr'),__.as('p').out('has_address').as('a')).select('nr').is(inside(3,5)).select('p','a','nr').range(0,20)
On the first run I noticed this error message:
Could not find an index to answer query clause and graph.allow_scan is
disabled: ((label = person))
I've enabled graph.allow_scan=true and now it's working.
I'm wondering how I can create an index so that I can run this query without enabling allow_scan=true?
Thanks
You can create an index by adding it to the schema using a command like this:
schema.vertexLabel('person').index('address').materialized().by('has_address').add()
Full documentation on adding indexes is available here: https://docs.datastax.com/en/latest-dse/datastax_enterprise/graph/using/createIndexes.html
You should not enable graph.allow_scan=true, as under the covers it turns on ALLOW FILTERING on the CQL queries. This will cause a lot of cluster scans and will inevitably time out with any real amount of data in the system. You should never enable this in any sort of production environment.
I am not sure that indexing is the solution for your problem.
The best way to do this would be to reify addresses as nodes and look for nodes with an indegree between 3 and 5.
You can use an index on the textual fields of your address nodes.
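For example, once addresses are vertices, a rough Gremlin sketch (the address label and the direction of the has_address edges, person to address, are assumptions):
g.V().hasLabel('address').
  where(__.inE('has_address').count().is(inside(2, 6))).   // i.e. exactly 3, 4 or 5 persons (inside() is exclusive)
  inE('has_address').outV().dedup().range(0, 20)
This counts incoming has_address edges per address and then walks back to the persons, instead of comparing persons pairwise.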

Improving Lucene spellcheck

I have a Lucene index whose documents are in around 20 different languages, all in the same index. I have a field 'lng' which I use to filter the results to only one language.
Based on this index I implemented a spell-checker. The issue is that I get suggestions from all languages, which are irrelevant (if I am searching in English, suggestions in German are not what I need). My first idea was to create a different spell-check index for each language and then select the index based on the language of the query, but I do not like this. Is it possible to add an additional column to the spell-check index and use it, or is there some better way to do this?
Another question is how I could improve suggestions for 2 or more terms in a search query. Currently I only do it for the first term; this could be greatly improved by using the terms in combination, but I could not find any samples or implementations that would help me solve this issue.
thanks
almir
As far as I know, it's not possible to add a 'language' field to the spellchecker index. I think that you need to define a separate SpellChecker per language to achieve this.
EDIT: As it turned out in the comments that the language of the query is entered by the user as well, then my answer is limited to: define multiple spellcheckers. As for the second question that you added, I think that it was discussed before, for example here.
However, even if it were possible, it wouldn't solve the biggest problem, which is detecting the query language. That is a highly non-trivial task for very short messages that can include acronyms, proper nouns and slang terms. Simple n-gram based methods can be inaccurate (as e.g. the language detector from Tika). So I think that the most challenging part is how to use certainty scores from both the language detector and the spellchecker, and what threshold should be chosen to provide meaningful corrections (e.g. the language detector prefers German, but the spellchecker has a good match in Danish...).
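If you do go the multiple-spellchecker route, here is a rough sketch of what it could look like (Lucene 3.x Java API; the per-language fields such as "text_en" in the main index, the language list, and the index paths are assumptions, and the exact indexDictionary signature changed slightly across 3.x releases):
import java.io.File;
import java.util.HashMap;
import java.util.Map;

import org.apache.lucene.index.IndexReader;
import org.apache.lucene.search.spell.LuceneDictionary;
import org.apache.lucene.search.spell.SpellChecker;
import org.apache.lucene.store.FSDirectory;

public class PerLanguageSpellcheck {
    public static void main(String[] args) throws Exception {
        String[] languages = {"en", "de", "fr"};   // assumption: the languages you index
        IndexReader mainReader = IndexReader.open(FSDirectory.open(new File("main-index")));

        // one SpellChecker per language, each fed only from that language's field
        Map<String, SpellChecker> spellcheckers = new HashMap<String, SpellChecker>();
        for (String lng : languages) {
            SpellChecker sc = new SpellChecker(FSDirectory.open(new File("spell-" + lng)));
            sc.indexDictionary(new LuceneDictionary(mainReader, "text_" + lng)); // assumes per-language text fields
            spellcheckers.put(lng, sc);
        }

        // at query time, pick the spellchecker for the language the user selected
        String queryLanguage = "en";
        String[] suggestions = spellcheckers.get(queryLanguage).suggestSimilar("exmple", 5);
        for (String s : suggestions) {
            System.out.println(s);
        }
    }
}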
If you look at the source of SpellChecker.SuggestSimilar you can see:
BooleanQuery query = new BooleanQuery();
String[] grams;
String key;
for (int ng = GetMin(lengthWord); ng <= GetMax(lengthWord); ng++)
{
    <...>
    if (bStart > 0)
    {
        Add(query, "start" + ng, grams[0], bStart); // matches start of word
    }
    <...>
}
I.e. the suggestion search is just a bunch of OR'd boolean queries. You can certainly modify this code with something like:
query.Add(new BooleanClause(new TermQuery(new Term("Language", "German")),
BooleanClause.Occur.MUST));
which will only look for suggestions in German. There is no way to do this without modifying your code though, apart from having multiple spellcheckers.
To deal with multiple terms, use QueryTermExtractor to get an array of your terms. Do a spellcheck for each, and take the cartesian join. You may want to run a query on each combination and then sort based on the frequency with which they occur (like how the single-word spellchecker works).
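A rough sketch of that idea, reusing a SpellChecker instance named spellChecker (the terms, suggestion count and ranking step are illustrative, not from the original post):
// suggest corrections for a two-term query by combining per-term suggestions
String[] terms = {"restarant", "nerby"};          // e.g. obtained via QueryTermExtractor
String[][] perTerm = new String[terms.length][];
for (int i = 0; i < terms.length; i++) {
    perTerm[i] = spellChecker.suggestSimilar(terms[i], 5);
}
// cartesian join of the candidate lists; re-rank the combinations afterwards,
// e.g. by how often the combined phrase occurs in the index
for (String first : perTerm[0]) {
    for (String second : perTerm[1]) {
        System.out.println(first + " " + second);
    }
}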
After implementing two different search features on two different sites with both Lucene and Sphinx, I can say that Sphinx is the clear winner.
Consider using http://sphinxsearch.com/ instead of Lucene. It's used by Craigslist, among others.
They have a feature called morphology preprocessors:
# a list of morphology preprocessors to apply
# optional, default is empty
#
# builtin preprocessors are 'none', 'stem_en', 'stem_ru', 'stem_enru',
# 'soundex', and 'metaphone'; additional preprocessors available from
# libstemmer are 'libstemmer_XXX', where XXX is algorithm code
# (see libstemmer_c/libstemmer/modules.txt)
#
# morphology = stem_en, stem_ru, soundex
# morphology = libstemmer_german
# morphology = libstemmer_sv
morphology = none
There are many stemmers available, and as you can see, German is among them.
UPDATE:
Elaboration on why I feel that Sphinx has been the clear winner for me.
Speed: Sphinx is stupid fast, both at indexing and at serving search queries.
Relevance: Though it's hard to quantify this, I felt that I was able to get more relevant results with sphinx compared to my lucene implementation.
Dependence on the filesystem: With Lucene, I was unable to break the dependence on the filesystem. And while there are workarounds, like creating a RAM disk, I felt it was easier to just select the "run only in memory" option of Sphinx. This has implications for websites with more than one webserver, adding dynamic data to the index, reindexing, etc.
Yes, these are just points of opinion. However, they are the opinion of someone who has tried both systems.
Hope that helps...

Collect all hits for a search in Lucene / Optimization

Summary: I collect the doc ids of all hits for a given search by using a custom Collector (it populates a BitSet with the ids). Searching and getting the doc ids are quite fast for my needs, but when it comes to actually fetching the documents from disk, things get very slow. Is there a way to optimize Lucene for faster document collection?
Details: I'm working on a processed corpus of Wikipedia and I keep each sentence as a separate document. When I search for "computer", I get all sentences containing the term computer. Currently, searching the corpus and getting all document ids works in under a second, but fetching the first 1000 documents takes around 20 seconds. Fetching all documents takes proportionally more time (i.e. another 20 sec for each 1000-document batch).
Subsequent searches and document fetches take much less time (though I don't know who's doing the caching, the OS or Lucene?), but I'll be searching for many diverse terms and I don't want to rely on caching; the performance of the very first search is crucial for me.
I'm looking for suggestions/tricks that will improve the document-fetching performance (if it's possible at all). Thanks in advance!
Addendum:
I use Lucene 3.0.0 and I drive the Lucene classes from Jython. This means I call the get_doc method of the following Jython class for every doc id retrieved during the search:
class DocumentFetcher():
    def __init__(self, index_name):
        self._directory = FSDirectory.open(java.io.File(index_name))
        self._index_reader = IndexReader.open(self._directory, True)

    def get_doc(self, doc_id):
        return self._index_reader.document(doc_id)
I have 50M documents in my index.
You are probably storing a lot of information in the documents. Reduce the stored fields as much as you can.
Secondly, when retrieving documents, load only those fields you need. You can use the following method of IndexReader to load only a few of the stored fields:
public abstract Document document(int n, FieldSelector fieldSelector)
This way you don't load up fields which are not used.
You can use the following code sample:
FieldSelector idFieldSelector =
    new SetBasedFieldSelector(Collections.singleton("idFieldName"), Collections.emptySet());
for (int i : resultDocIDs) {
    String id = reader.document(i, idFieldSelector).get("idFieldName");
}
Scaling Lucene and Solr discusses many ways to improve Lucene performance.
As you are working on Lucene search within Wikipedia, you may be interested in Rainman's Lucene Search of Wikipedia. He mostly discusses algorithms rather than performance, but this may still be relevant.