Filtering on patterns in AgensGraph - cypher

I'm having a hard time filtering by patterns in AgensGraph. According to the Cypher Query Language Reference, I should be allowed to filter on patterns using NOT. However, this does not seem to work in AgensGraph.
CREATE (eric:person {name:'Eric'}), (amy:person {name:'Amy'}), (peter:person {name: 'Peter'})
MERGE (amy)-[:knows]->(peter);
MATCH (persons:person), (peter:person {name: 'Peter'})
WHERE NOT (persons)-[]->(peter)
RETURN persons.name
I'd expect 'Eric' as a result, but instead I get an error saying "Graph object cannot be jsonb".
How would I go about finding all people who don't know Peter using AgensGraph?
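One rewrite that may be worth trying, with the caveat that support for a pattern inside exists() varies by Cypher implementation and AgensGraph version:
MATCH (persons:person), (peter:person {name: 'Peter'})
WHERE NOT exists((persons)-[]->(peter))
RETURN persons.name;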

Related

Keyword based JPA query with statuses as Enums and with NOT clause [Kotlin]

I have a keyword-based JPA query that I need to modify in order to exclude records with a particular status. Currently, I have the following:
findAllByLatestVersion_Entity_DataFieldGreaterThanEqualAndLatestVersion_AnotherFieldNull(datefield: Instant, pageable: Pageable)
I do not want to parameterise; I would like the query to work as if there were a WHERE clause stating that the status IS NOT 'C', for example. I am struggling to find clear documentation on how to go about it. Is it possible to write something along these lines:
findAllByLatestVersion_Entity_DataFieldGreaterThanEqualAndLatestVersion_AnotherFieldNullAndLatestVersion_StatusCNot(datefield: Instant, pageable: Pageable)
Thank you
No, this is not possible with query derivation, i.e. the feature you are using here. And even if it were possible, you shouldn't do it.
Query derivation is intended for simple queries where the name of the repository method that you would choose anyway perfectly expresses everything one needs to know about the query to generate it.
It is not intended as a replacement for JPQL or SQL.
It should never be used when the resulting method name isn't a good method name.
So just formulate the query as a JPQL query and use a @Query annotation to specify it.
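A minimal sketch of that in Kotlin; the entity and property names here are guesses reconstructed from the derived method name, and Status is assumed to be an enum in com.example:

import java.time.Instant
import org.springframework.data.domain.Page
import org.springframework.data.domain.Pageable
import org.springframework.data.jpa.repository.JpaRepository
import org.springframework.data.jpa.repository.Query
import org.springframework.data.repository.query.Param

interface RecordRepository : JpaRepository<Record, Long> {

    // One readable JPQL query instead of an ever-growing derived method name.
    @Query(
        """
        select r from Record r
        where r.latestVersion.entity.dataField >= :datefield
          and r.latestVersion.anotherField is null
          and r.latestVersion.status <> com.example.Status.C
        """
    )
    fun findOpenRecords(@Param("datefield") datefield: Instant, pageable: Pageable): Page<Record>
}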

Using Bookshelf to execute a query on Postgres JSONB array elements

I have a postgres table with jsonb array elements and I'm trying to do sql queries to extract the matching elements. I have the raw SQL query running from the postgres command line interface:
select * from movies where director @> any (array ['70', '45']::jsonb[])
This returns the results I'm looking for (all records from the movies table where the director jsonb column contains any of the elements in the input array).
In the code, the value for ['70', '45'] would be a dynamic variable, i.e. fixArr, and the length of the array is unknown.
I'm trying to build this into my Bookshelf code but haven't been able to find any examples that address the complexity of the use case. I've tried the following approaches but none of them work:
models.Movies.where('director', '@> any', '(array' + JSON.stringify(fixArr) + '::jsonb[])').fetchAll()
ERROR: The operator "@> any" is not permitted
db.knex.raw('select * from movies where director @> any(array'+[JSON.stringify(fixArr)]+'::jsonb[])')
ERROR: column "45" does not exist
models.Movies.query('where', 'director', '@>', 'any (array', JSON.stringify(fixArr) + '::jsonb[])').fetchAll()
ERROR: invalid input syntax for type json
Can anyone help with this?
As you have noticed, neither knex nor bookshelf brings any support for making jsonb queries easier. As far as I know, the only knex-based ORM that supports jsonb queries etc. nicely is Objection.js.
In your case I suppose a better operator to find whether a jsonb column contains any of the given values would be ?|, so the query would look like:
// Build a quoted, comma-separated list, e.g. '70','45'
const idsAsString = ids.map(val => `'${val}'`).join(',');
// \\? stops knex from treating ? as a binding placeholder
db.knex.raw(`select * from movies where director \\?| array[${idsAsString}]`);
More info on how to deal with jsonb queries and indexing with knex can be found here: https://www.vincit.fi/en/blog/objection-js-postgresql-power-json-queries/
No, you're just running into the limitations of that particular query builder and ORM.
The easiest way is to use bookshelf.Model.query and knex.raw (whereRaw, etc.). Alias with AS and subclass your Bookshelf model to add these aliased attributes, if you care about such things.
If you want things to look clean and abstracted through Bookshelf, you'll just need to denormalize the JSONB into flat tables. This might be the best approach if your JSONB is mostly flat and simple.
If you end up using lots of JSONB (it can be quite performant with appropriate indexes), then the Bookshelf ORM is wasted effort. The knex query builder is only worth the time insofar as it handles escaping, quoting, etc.
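A minimal sketch of that first route, assuming models.Movies is the Bookshelf model from the question and idsAsString is the quoted list built above:

// Hand the JSONB predicate straight to Postgres through the query builder.
// Escaping/quoting of the ids is still your responsibility here.
models.Movies.forge()
  .query(qb => qb.whereRaw(`director \\?| array[${idsAsString}]`))
  .fetchAll()
  .then(movies => console.log(movies.toJSON()));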

Is it possible to order lucene documents by matching term?

I'm using Lucene 4.10.3 with Java 1.7
I'm wondering whether it's possible to order query results by the matching term?
Simply put, if my documents contain a text field:
The query is
text:a*
I want documents with ab, then ac, then ad etc.
The real case is more complex, however. What I'm actually trying to accomplish is to "stuff" a relational DB into my Lucene index (probably not the best idea?).
An appropriate example would be:
I have documents representing books in a library. Every book has a title and also a list of people who have borrowed this book, and the date of borrowing.
When a user searches for a book with a title containing "JAVA", I want to give priority to books that were borrowed by this user (this could be accomplished by adding a TextField "borrowers", adding a SHOULD clause on it, and ordering by score).
Also, if there are several books with "JAVA" that this user has borrowed before, I want to show the most recently borrowed ones first, so I thought to create a TextField "borrowers" that will look like
borrowers : "user1__20150505 user2__20150506" etc.
I will add a BooleanClause borrowers: user1* and order by matching term.
Any other solution ideas are welcome.
I understand your real problem is more complex, but maybe this is helpful anyway.
You could first search for tokens in the index that match your query, then, for each matching token, execute a query using that token specifically.
See https://lucene.apache.org/core/6_0_1/core/org/apache/lucene/index/TermsEnum.html for that. Just seek to the prefix and iterate until the prefix stops matching.
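A rough sketch of that loop against the Lucene 4.x API from the question (the field name "text" is taken from the example above):

import java.io.IOException;
import java.util.ArrayList;
import java.util.List;
import org.apache.lucene.index.IndexReader;
import org.apache.lucene.index.MultiFields;
import org.apache.lucene.index.Terms;
import org.apache.lucene.index.TermsEnum;
import org.apache.lucene.util.BytesRef;
import org.apache.lucene.util.StringHelper;

class PrefixTerms {
    // Collect every indexed term in "text" starting with prefix, in term order.
    static List<String> matching(IndexReader reader, String prefix) throws IOException {
        List<String> result = new ArrayList<>();
        Terms terms = MultiFields.getTerms(reader, "text");
        if (terms == null) return result;
        TermsEnum te = terms.iterator(null); // 4.x signature: takes a reusable enum, or null
        BytesRef prefixBytes = new BytesRef(prefix);
        if (te.seekCeil(prefixBytes) == TermsEnum.SeekStatus.END) return result;
        for (BytesRef t = te.term(); t != null && StringHelper.startsWith(t, prefixBytes); t = te.next()) {
            result.add(t.utf8ToString());
        }
        return result;
    }
}

Each returned term can then be run as its own TermQuery, in order.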
In general it is sometimes easy to just issue two queries. For example, one within the corpus of books the user has borrowed before and another within the whole corpus.
These approaches may not work; in that case you could implement a custom Scorer that somehow maps the ordering to a number.
See http://opensourceconnections.com/blog/2014/03/12/using-customscorequery-for-custom-solrlucene-scoring/

Lucene: What is the difference between Query and Filter

Lucene query vs filter?
They both seem to do similar things: a TermQuery filters by term value, and I guess a Filter is there for a similar purpose.
When would you use a filter and when a query?
I'm just starting with Lucene today, so I'm trying to get the concepts clear.
Filter doesn't affect the computation of the score of the non-filtered documents.
For instance imagine the following docs:
1.
loc: "uk", "london"
text: "i live in london, "london is the best"
2.
loc: "london avenue", "london street", "london"
text: "I like the shop in london st."
now let's say you do the following query:
q=+loc:"london" +text:"london"
in this query the score of doc 2 is higher than that of doc 1 (because the loc match contributes to the document score)
using a filter:
q=+text:"london" f=+loc:"london"
in this query the score of doc 1 is higher than that of doc 2.
Excuse the Solr-style formatting, but the overall notion is clear.
Another reason for using filters is caching: filters are cached separately from queries, so if you have a dynamic query with a static part, it makes sense to filter by the static part. In this way the index traversal is limited to the subset of filtered docs.
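In code, the two variants above might look like this (Lucene 4.x, matching the version used elsewhere on this page; assumes an open IndexSearcher):

import java.io.IOException;
import org.apache.lucene.index.Term;
import org.apache.lucene.search.BooleanClause;
import org.apache.lucene.search.BooleanQuery;
import org.apache.lucene.search.CachingWrapperFilter;
import org.apache.lucene.search.Filter;
import org.apache.lucene.search.IndexSearcher;
import org.apache.lucene.search.Query;
import org.apache.lucene.search.QueryWrapperFilter;
import org.apache.lucene.search.TermQuery;
import org.apache.lucene.search.TopDocs;

class QueryVsFilter {
    static TopDocs search(IndexSearcher searcher) throws IOException {
        // q=+loc:"london" +text:"london" -- loc participates in scoring
        BooleanQuery scoring = new BooleanQuery();
        scoring.add(new TermQuery(new Term("loc", "london")), BooleanClause.Occur.MUST);
        scoring.add(new TermQuery(new Term("text", "london")), BooleanClause.Occur.MUST);
        TopDocs scored = searcher.search(scoring, 10);   // loc affects ranking here

        // q=+text:"london" f=+loc:"london" -- loc only restricts the result
        // set; CachingWrapperFilter lets Lucene reuse the filter's bitset
        Query textOnly = new TermQuery(new Term("text", "london"));
        Filter locFilter = new CachingWrapperFilter(
                new QueryWrapperFilter(new TermQuery(new Term("loc", "london"))));
        return searcher.search(textOnly, locFilter, 10); // loc does not affect ranking
    }
}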
A Query can be passed to a Searcher to find documents. A Filter cannot; it can only modify the results produced by a Query.
Implementing a new Query type is fairly complicated, and requires an understanding of the relationship of Lucene internals like Weight, Scorer, and Similarity. A Filter implementation could be fairly simple, and not interact with the IndexReader at all.
After you close a database, the filter's selection disappears, but when you close a Query and open it again, it will still be there.
You can also create a Query using a Form, but you cannot use a Filter in a Form.

Search book by title and author

I've got a table with columns: author firstname, author lastname, and booktitle.
Multiple users are inserting in the database, through an import, and I'd like to avoid duplicates.
So I'm trying to do something like this:
I have a record in the db:
First Name: "Isaac"
Last Name: "Assimov"
Title: "I, Robot"
If the user tries to add it again, it would basically be non-split text
(it would not be split up into author firstname, author lastname, and booktitle).
So it would basically look like this:
"Isaac Asimov - I Robot"
or
"Asimov, Isaac - I Robot"
or
"I Robot by Isaac Asimov"
You see what I'm getting at?
(I cannot force the user to split up all the books into author firstname, author lastname, and booktitle, and I don't even like the idea of forcing the user, because it's not too user-friendly.)
What is the best way (in SQL) to compare all these possible book-data scenarios to what I have in the database, so as not to add the same book twice? I was thinking about the possibility of suggesting to the user: "is THIS the book you are trying to add?" (imagine a list instead of the word THIS, just like the Related Questions list on Stack Overflow's Ask Question page).
I was thinking about soundex and maybe even the like operator, but so far I didn't get the results I was hoping for.
You can implement significantly better algorithms for fuzzy matching than soundex/difference; take a look at Beyond SoundEx - Functions for Fuzzy Searching in MS SQL Server.
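To see where the built-ins fall short, a quick T-SQL illustration using the names from the question:

SELECT SOUNDEX('Asimov')  AS author_a,   -- A251
       SOUNDEX('Assimov') AS author_b,   -- also A251: the misspelling is caught
       DIFFERENCE('Isaac Asimov - I Robot',
                  'I Robot by Isaac Asimov') AS whole_strings;
-- DIFFERENCE scores 0-4, but SOUNDEX keys on the leading letters of the whole
-- string, so a change in word order defeats the comparison.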
You could also look at implementing a Full Text catalog and using the "search engine" style FREETEXT(), which "is a predicate used in a WHERE clause to search columns containing character-based data types for values that match the meaning and not just the exact wording of the words in the search condition".
Depending on what you're doing, you could also perhaps use an ISBN web service to get normalized data.
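A sketch of the FREETEXT() route; the table and column names are made up, and it assumes a full-text index covering the three text columns:

-- Hypothetical schema: Books(BookId, AuthorFirstName, AuthorLastName, Title)
DECLARE @raw nvarchar(200) = N'Asimov, Isaac - I Robot';

SELECT TOP (5) BookId, AuthorFirstName, AuthorLastName, Title
FROM Books
WHERE FREETEXT((AuthorFirstName, AuthorLastName, Title), @raw);
-- Surface the hits as "is THIS the book you are trying to add?" suggestions.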