Case-insensitive index in Neo4j using Py2neo

I want to make a case-insensitive index in Neo4j using Py2neo.
I read through the docs and googled a lot but didn't find anything. There seems to be an option for this in Java, but not in Py2neo.
Please help!

You can pass configuration options into the GraphDatabaseService.get_or_create_index function as indicated here:
http://book.py2neo.org/en/latest/graphs_nodes_relationships/#py2neo.neo4j.GraphDatabaseService.get_or_create_index
These arguments are passed directly into the REST call as described here:
http://docs.neo4j.org/chunked/milestone/rest-api-indexes.html#rest-api-create-node-index-with-configuration
Hope this helps.
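For example, a minimal sketch using py2neo's legacy index API (how the configuration is passed may differ between py2neo versions, so check the linked documentation):
from py2neo import neo4j

# Hedged sketch: ask Neo4j to create a legacy node index configured as a
# case-insensitive fulltext index (see the REST configuration options above).
graph_db = neo4j.GraphDatabaseService("http://localhost:7474/db/data/")
people = graph_db.get_or_create_index(
    neo4j.Node, "people",
    {"provider": "lucene", "type": "fulltext", "to_lower_case": "true"},
)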

When using legacy indexes, you can supply a configuration upon initial creation of the index: set to_lower_case=true in combination with type=fulltext.
Schema indexes, on the other hand, do not yet support case insensitivity. As a workaround, introduce a copy of the respective property, e.g. name -> nameLower, which gets populated with the lowercase variant of that string. You could do something like this on existing datasets:
CREATE INDEX ON :Person(nameLower);
// --- use a separate transaction
MATCH (p:Person) SET p.nameLower = lower(p.name); // consider applying a LIMIT in batches for large numbers of nodes
Your query string of course needs to use lower case:
MATCH (p:Person {nameLower:'john'}) RETURN p
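From py2neo, the same lookup could look roughly like this (a hedged sketch; the Cypher execution API differs between py2neo versions, and user_input is assumed to hold the search term):
from py2neo import neo4j

# Hedged sketch: lowercase the input in the application and match it against
# the pre-populated nameLower property.
graph_db = neo4j.GraphDatabaseService("http://localhost:7474/db/data/")
query = neo4j.CypherQuery(graph_db, "MATCH (p:Person {nameLower: {name}}) RETURN p")
results = query.execute(name=user_input.lower())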

Related

Building a (process) variable in Appian using the value of another one?

As far as I understand, it is not possible in Appian to dynamically construct (process) variable names, as you would do e.g. in bash using backticks, like MY_OBJECT=pv!MY_CONS_`extract(valueOfPulldown)`. Is that correct? Is there a workaround?
I have a set of Appian constants, let's call them MY_CONS_FOO, MY_CONS_BAR, MY_CONS_LALA, all of which refer e.g. to an Appian data store entity. I would like to write an Appian expression rule which populates another variable MY_OBJECT of the same type (here: a data store entity), depending e.g. on the option chosen in a pull-down menu, with the possible options stored in an array MY_CONS_OPTIONS that looks as follows:
FOO
BAR
LALA
I could of course build a lengthy case structure, which I would have to maintain in addition to MY_CONS_OPTIONS, so I am searching for a more dynamic approach using the extract() function, depending on valueOfPulldown as the chosen value of the pull-down menu.
Edit: here is the expression rule (in pseudo-code) I want to avoid:
if (valueOfPulldown = 'FOO') then MY_OBJECT=pv!MY_CONS_FOO
if (valueOfPulldown = 'BAR') then MY_OBJECT=pv!MY_CONS_BAR
if (valueOfPulldown = 'LALA') then MY_OBJECT=pv!MY_CONS_LALA
The goal is to be able to change the data store entity via pulldown-menu.
This can help you find what is behind your constant.
fn!typeName(fn!typeOf(cons!YOUR_CONSTANT)).
With the additional details in mind, I would do as follows:
1. Create a separate expression that combines the details into a list of Dictionary, like below:
Expression results (er):
{
{dd_label: "label1", dd_value: 1, cons: "cons!YOUR_CONSTANT1" }
,{dd_label: "label2", dd_value: 2, cons: "cons!YOUR_CONSTANT2" }
}
2. On the UI, use er.dd_label as choiceLabels and er.dd_value as choiceValues for your dropdown control.
3. When the user selects a value in the dropdown, save that value to a local variable and then use it to find your constant by doing:
property( index(er, wherecontains(local!dropdownselectedvalue, tointeger(er.dd_value))), "cons")
4. The value returned in step 3 is your constant.
This might not be perfect, as you still have to maintain your dictionary, but you can avoid long if...else statements.
As an alternative, have a look at Decision Tables in Appian: https://docs.appian.com/suite/help/21.1/Appian_Decisions.html

SHOW KEYS in Aerospike?

I'm new to Aerospike and am probably missing something fundamental, but I'm trying to see an enumeration of the Keys in a Set (I'm purposefully avoiding the word "list" because it's a datatype).
For example,
To see all the Namespaces, the docs say to use SHOW NAMESPACES
To see all the Sets, we can use SHOW SETS
If I want to see all the unique Keys in a Set ... what command can I use?
It seems like one can use client.scan() ... but that seems like a super heavy way to get just the key (since it fetches all the bin data as well).
Any recommendations are appreciated! As of right now, I'm thinking of inserting (deleting) into (from) a meta-record.
Thank you @pgupta for pointing me in the right direction.
This actually has two parts:
In order to retrieve original keys from the server, you must set the policy during put() calls to save the key value server-side (otherwise, it seems, only a digest/hash is stored).
Here's an example in Python:
aerospike_client.put(key, {'bin': 'value'}, policy={'key': aerospike.POLICY_KEY_SEND})
Then (adapting Aerospike's own documentation), you perform a scan with the policy set to not return bin data. From the results you can extract the keys:
Example:
keys = []
scan = client.scan('namespace', 'set')
scan_opts = {'concurrent': True, 'nobins': True, 'priority': aerospike.SCAN_PRIORITY_MEDIUM}
for (key, meta, bins) in scan.results(policy=scan_opts):
    keys.append(key[2])  # the key tuple is (namespace, set, primary key, digest)
The need to iterate over the result still seems a little clunky to me; I still think that using a 'master-key' record to store a list of all the other keys will be more performant in my case, since I can then make a single get() call to the Aerospike server to retrieve the list.
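A minimal sketch of that 'master-key' idea, assuming the Python client; META_KEY, the 'keys' bin, and the helper names are illustrative, not part of Aerospike's API:
import aerospike

config = {'hosts': [('127.0.0.1', 3000)]}
client = aerospike.client(config).connect()

META_KEY = ('namespace', 'set', 'all-keys')  # assumed meta-record

def put_with_registry(key, bins):
    # Write the record and remember its primary key in the meta-record's list bin.
    client.put(key, bins, policy={'key': aerospike.POLICY_KEY_SEND})
    client.list_append(META_KEY, 'keys', key[2])

def all_keys():
    # One get() call returns the whole list of known primary keys
    # (assumes at least one key has been registered).
    _, _, bins = client.get(META_KEY)
    return bins.get('keys', [])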
You can choose not to bring the data back by setting includeBinData in ScanPolicy to false.

PostgreSQL full text search doesn't work in some case (Django)

I notice that in Django, when a sentence contains PLAZA/MASTERPIECE, searching for masterpiece doesn't find that sentence. Is this a limitation of PostgreSQL full-text search, or how can I solve it?
finalquery = SearchQuery("keyword")
vector = SearchVector('thefieldIwanttosearch')
self.search_results = (
    self.search_results
    .annotate(search=vector)
    .filter(search=finalquery)
    .annotate(rank=SearchRank(vector, finalquery))
)
Is there any document about this? Thanks!
Yes, this is all documented.
When you write filter(search=finalquery) you're not specifying a lookup type.
As a convenience, when no lookup type is provided (as in Entry.objects.get(id=14)), the lookup type is assumed to be exact.
So you're filtering on an exact match for "masterpiece". What you probably want is contains or icontains.
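A minimal sketch of that fallback, assuming a Django model MyModel with the field in question (both names are placeholders):
# Case-insensitive substring match: searching 'masterpiece' will match a row
# whose field contains 'PLAZA/MASTERPIECE'.
results = MyModel.objects.filter(thefieldIwanttosearch__icontains="masterpiece")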

Difference between indexNodeName and :nodeName in OAK Lucene Index

What is the difference (if any) between setting indexNodeName=true on the node type definition and defining a virtual nodeName property with the attribute name=:nodeName? indexNodeName is defined as follows:
Default to false. If set to true then index would also be created for node name. This would enable faster evaluation of queries involving constraints on Node name
Indexing the node name as a property aims to be similar to indexNodeName, but that doesn't imply "the same as". The docs don't say much about this:
The string :nodeName - this special case indexes node name as if it’s a virtual property of the node being indexed. Setting this along with nodeScopeIndex=true is akin to setting indexNodeName=true on indexing rule.
So is it required to set both, or only one of the settings, in order to query the node name? If just one of them, which one, and what is the difference?
Examples:
//element(*, app:Asset)[fn:name() = 'kite']
//*[jcr:like(fn:name(), 'kite%')]
//element(kite, app:Asset)
//element(*, dam:Asset)[jcr:like(fn:lower-case(fn:name()), 'kite%')]
indexNodeName=true is a shortcut to having a property definition with name=:nodeName AND nodeScopeIndex=true.
The name=:nodeName approach allows more flexibility (at the cost of a bit of complexity): it lets you index node names for other usages too, such as suggestions and spellchecks.
So, if you just want to query for node names, either of the methods should work well (although, imo, indexNodeName=true is simpler and cleaner).
On the other hand, if you also want node names to show up in suggestion/spellcheck results, then you'd have to use a property definition with name=:nodeName AND nodeScopeIndex=true AND useInSuggest=true.

TSearch2 - dots explosion

The following conversion
SELECT to_tsvector('english', 'Google.com');
returns this:
'google.com':1
Why doesn't the TSearch2 engine return something like this?
'google':2, 'com':1
Or how can I make the engine return the exploded string as I wrote above?
I just need "Google.com" to be findable by "google".
Unfortunately, there is no quick and easy solution.
Denis is correct in that the parser is recognizing it as a hostname, which is why it doesn't break it up.
There are 3 other things you can do, off the top of my head.
1. You can disable the host parsing in the database. See the Postgres documentation for details, e.g. something like:
ALTER TEXT SEARCH CONFIGURATION your_parser_config
    DROP MAPPING FOR url, url_path;
2. You can write your own custom dictionary.
3. You can pre-parse your data before it's inserted into the database in some manner (maybe splitting all domains before they go into the database); see the sketch after this list.
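A minimal sketch of option 3 in Python (the function name and the splitting rule are illustrative assumptions, not a standard recipe):
import re

# Hedged sketch: break dotted tokens apart before indexing, so that
# 'Google.com' is stored as 'Google com' and a search for 'google' matches.
def explode_hosts(text):
    return re.sub(r'[./]', ' ', text)

print(explode_hosts('Visit Google.com today'))  # -> 'Visit Google com today'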
I had a similar issue to you last year and opted for solution (2), above.
My solution was to write a custom dictionary that splits words up on non-word characters. A custom dictionary is a lot easier and quicker to write than a new parser. You still have to write C though :)
The dictionary I wrote would return something like 'www.facebook.com':4, 'com':3, 'facebook':2, 'www':1 for the 'www.facebook.com' domain (we had a unique-ish scenario, hence the 4 results instead of 3).
The trouble with a custom dictionary is that you will no longer get stemming (i.e. www.books.com will come out as www, books and com). I believe there is some work (which may have been completed) to allow chaining of dictionaries, which would solve this problem.
First off, in case you're not aware, tsearch2 is deprecated in favor of the built-in functionality:
http://www.postgresql.org/docs/9/static/textsearch.html
As for your actual question, google.com gets recognized as a host by the parser:
http://www.postgresql.org/docs/9.0/static/textsearch-parsers.html
If you don't want this to occur, you'll need to pre-process your text accordingly (or use a custom parser).