Limitations of Filters to search data - neo4j-ogm

I am exploring how I can write a generic query for any node given a set of search parameters, and I came across org.neo4j.ogm.cypher.Filters (in neo4j-ogm-core-2.0.3.jar).
I would have liked more options for ComparisonOperator, such as CONTAINS, IN, STARTS WITH, etc.
Right now the operators supported are:
EQUALS("=")
MATCHES("=~")
LIKE("=~", new CaseInsensitiveLikePropertyValueTransformer())
GREATER_THAN(">")
LESS_THAN("<")
Is there any plan to enhance this to support more operations?
Here is an example of how I am using Filters:
public Collection<User> findUserByFirstNameLike(String firstName) {
    Filters filters = new Filters();
    Filter firstNameFilter = new Filter("firstName", firstName);
    firstNameFilter.setComparisonOperator(ComparisonOperator.LIKE);
    filters.add(firstNameFilter);
    Collection<User> users = session.loadAll(User.class, filters);
    return users;
}

Filters has been updated in neo4j-ogm-core version 2.1.0.
All three options you want to see (CONTAINS, IN, STARTS WITH) are available in this version, along with the operators below; a usage sketch follows the list:
LESS_THAN("<")
LESS_THAN_EQUAL("<=")
IS_NULL("IS NULL")
ENDING_WITH("ENDS WITH")
EXISTS("EXISTS")
IS_TRUE("=")

Related

Slick plain SQL query with pagination

I have something like this, using Akka, Alpakka + Slick:
Slick
  .source(
    sql"""select #${onlyTheseColumns.mkString(",")} from #${dbSource.table}"""
      .as[Map[String, String]]
      .withStatementParameters(rsType = ResultSetType.ForwardOnly, rsConcurrency = ResultSetConcurrency.ReadOnly, fetchSize = batchSize)
      .transactionally
  ).map( doSomething )...
I want to update this plain SQL query to skip the first N elements.
But that is very DB-specific.
Is it possible to get the pagination bit generated by Slick? [like for type-safe queries, where one can just do a drop, filter, take?]
PS: I don't have the schema, so I cannot go the type-safe way; I just want all tables as Maps, and to filter, drop, etc. on them.
PS2: at the Akka level, flow.drop works, but it's not optimal (slow), because it still consumes the rows.
Cheers
Since you are using plain SQL, you have to provide workable SQL in the code snippet. Plain SQL may not be type-safe, but it is agile.
BTW, the most efficient way is to skip the first N elements in the database itself, such as with LIMIT in MySQL.
Depending on your database engine, you could use something like:
val page = 1
val pageSize = 10
val query = sql"""
  select #${onlyTheseColumns.mkString(",")}
  from #${dbSource.table}
  limit #${pageSize + 1}
  offset #${pageSize * (page - 1)}
"""
The pageSize + 1 part tells you whether the next page exists: if the query returns more than pageSize rows, there is at least one more page.
I want to update this plain SQL query to skip the first N elements. But that is very DB-specific.
As you're concerned about changing the SQL for different databases, I suggest you abstract away that part of the SQL and decide what to do based on the Slick profile being used.
If you are working with multiple database products, you've probably already abstracted away from any specific profile, perhaps using JdbcProfile. In that case you could place your "skip N elements" helper in a class and use the active slickProfile to decide on the SQL to use. (As an alternative you could of course check via some other means, such as an environment value you set.)
In practice that could be something like this:
case class Paginate(profile: slick.jdbc.JdbcProfile) {
  // Return the correct LIMIT/OFFSET SQL for the current Slick profile
  def page(size: Int, firstRow: Int): String =
    if (profile.isInstanceOf[slick.jdbc.H2Profile]) {
      s"LIMIT $size OFFSET $firstRow"
    } else if (profile.isInstanceOf[slick.jdbc.MySQLProfile]) {
      s"LIMIT $firstRow, $size"
    } else {
      // And so on... or a default
      // Danger: I've no idea if the above SQL is correct - it's just placeholder
      ???
    }
}
Which you could use as:
// Import your profile
import slick.jdbc.H2Profile.api._
val paginate = Paginate(slickProfile)
val action: DBIO[Seq[Int]] =
  sql"""SELECT cols FROM table #${paginate.page(100, 10)}""".as[Int]
In this way, you get to isolate (and control) RDBMS-specific SQL in one place.
To make the helper more usable, and as slickProfile is implicit, you could instead write:
def page(size: Int, firstRow: Int)(implicit profile: slick.jdbc.JdbcProfile) =
// Logic for deciding on SQL goes here
I feel obliged to comment that using a splice (#$) in plain SQL opens you up to SQL injection attacks if any of the values are provided by a user; regular $value interpolation, by contrast, is turned into a bind parameter and is safe.

SuiteScript 2.0 - Logical operator for multiple filters

I have a Transaction saved search on my NetSuite account, and this saved search has some filter conditions set in the NetSuite UI.
Using SuiteScript 2.0, I am loading this search, taking a copy of all the filters defined in the NetSuite UI (say, defaultfilters), then applying a filter for "trandate" as a filterExpression and pushing defaultfilters into the saved search's filter collection.
But what is happening is that NetSuite only considers the "trandate" filter, not the ones defined in the UI.
I assume that somehow the "OR" logical operator is being applied between the two filters.
The same issue is discussed in another question:
SuiteScript 2.0 search.createFilter with formula not working
Please guide me on this issue.
Thanks
You can do the following to add search filters or filter expressions.
Load the search:
var savedSearch = search.load({ id: SEARCH_ID });
Push custom filters into the savedSearch object as below:
// for a search filter
savedSearch.filters.push(search.createFilter(SEARCH_FILTER_OBJECT));
// for a filter expression
var filterExpressions = savedSearch.filterExpressions;
filterExpressions.push('and', [FIELDID, SEARCH_OPERATOR, VALUES]);
savedSearch.filterExpressions = filterExpressions;
As for using formulas in filterExpressions: if you use formulanumeric as the field name, your operator should be equalto and not is, whereas if you use formulatext as the field name you can use the is operator, as per NetSuite's Search Operators. For example, a numeric formula condition is typically written as ['formulanumeric: CASE WHEN {memo} IS NULL THEN 0 ELSE 1 END', 'equalto', '1'] (the formula itself is illustrative).

RavenDB: Filter documents considered for suggestions

I would like to use a suggestion query and filter the documents considered for suggestions by a few fields. Is that even possible? I could not find anything about this in the RavenDB documentation: link to doc
I have tried to add my filter conditions to the queryable, but no luck:
using (IDocumentSession documentSession = _storeProvider.GetStore().OpenSession())
{
    var queryable = documentSession.Query<SearchableProduct>("SearchableProducts");
    var result = queryable
        // I would like to filter by this field!
        .Where(m => m.BrandNo == query.BrandNumber)
        .Suggest(new SuggestionQuery
        {
            Term = query.SearchTerm,
            Accuracy = 0.4f,
            Field = nameof(SearchableProduct.ProductName),
            MaxSuggestions = 10,
            Distance = (StringDistanceTypes)2,
            Popularity = true
        });
    return result.Suggestions;
}
RavenDB version: 3.0
You cannot use additional filters on a suggestion query.
The way suggestions work, the search term is evaluated against the stored terms in the index, without considering any other fields.
You can, however, use facets to filter on the additional fields, using the suggestion output as input to the facet query.

Lucene.net PerFieldAnalyzerWrapper

I've read up on how to use the per-field analyzer wrapper, but I can't get it to work with a custom analyzer of mine. I can't even get the analyzer to run its constructor, which makes me believe I'm actually calling the per-field analyzer incorrectly.
Here's what I'm doing:
Create the per field analyzer:
PerFieldAnalyzerWrapper perFieldAnalyzer = new PerFieldAnalyzerWrapper(srchInfo.GetAnalyzer(true));
perFieldAnalyzer.AddAnalyzer("<special field>", dta);
Add all the fields to the document as usual, including a special field that we analyze differently.
And add document using the analyzer like this:
iw.AddDocument(doc, perFieldAnalyzer);
Am I on the right track?
The problem was related to my reliance on the CMS's (Kentico) built-in Lucene helper classes. Basically, using those classes you need to specify the custom analyzer at the index level through the CMS, and I did not wish to do that. So I ended up using Lucene.net directly almost everywhere, gaining the flexibility of using any custom analyzer I want.
I also made some changes to how I structure the data and ended up using the tried-and-true KeywordAnalyzer to analyze document tags. Previously I was trying to do some custom tokenization magic on comma-separated values like [tag1, tag2, tag with many parts] and could not get it working reliably with multi-part tags. I still kept that field, but started adding multiple "tag" fields to the document, each storing one tag. So now I have N "tag" fields for N tags, each analyzed as a keyword, meaning each tag (one word or many) is a single token.
I think I overthought it with my initial approach.
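For illustration, the one-field-per-tag indexing looks roughly like this. This is a sketch in Java Lucene 3.x syntax (the Lucene.net API mirrors it with PascalCase names), and the helper name is mine:
import org.apache.lucene.document.Document;
import org.apache.lucene.document.Field;

// Hypothetical helper: add one "documenttags_t" field instance per tag, so the
// KeywordAnalyzer mapped to that field keeps each tag (one word or many) as a
// single token
void addTagFields(Document doc, Iterable<String> tags) {
    for (String tag : tags) {
        doc.add(new Field("documenttags_t", tag, Field.Store.YES, Field.Index.ANALYZED));
    }
}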
Here is what I ended up with.
On Indexing:
KeywordAnalyzer ka = new KeywordAnalyzer();
PerFieldAnalyzerWrapper perFieldAnalyzer = new PerFieldAnalyzerWrapper(srchInfo.GetAnalyzer(true));
perFieldAnalyzer.AddAnalyzer("documenttags_t", ka);

// Some procedure to compile all documents by reading from the DB and putting them into Lucene docs
foreach (var doc in docs)
{
    iw.AddDocument(doc, perFieldAnalyzer);
}
On Searching:
KeywordAnalyzer ka = new KeywordAnalyzer();
PerFieldAnalyzerWrapper perFieldAnalyzer = new PerFieldAnalyzerWrapper(srchInfo.GetAnalyzer(true));
perFieldAnalyzer.AddAnalyzer("documenttags_t", ka);

string baseQuery = "documenttags_t:\"" + tagName + "\"";
Query query = _parser.Parse(baseQuery);
var results = _searcher.Search(query, sortBy);

How do you get Endeca to search on a particular target field rather than across all indexed fields?

We have an Endeca index configured across multiple fields of email content - subject and body. But we only want searches to be performed on the subject lines. Endeca is returning matches within the bodies too. How do you limit the search to the subject?
You can search a specific field or fields by specifying it (or them) with the Ntk parameter.
Or, if you wish to search a specific group of fields frequently, you can set up a search interface (also specified with the Ntk parameter) that includes that group of fields. An example using Ntk directly follows below.
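As a sketch using the URL-driven presentation API, a subject-only search could look like this (the record search key "subject" is an assumption; it must match a property or search interface enabled for record search in your index):
import com.endeca.navigation.ENEQuery;
import com.endeca.navigation.UrlENEQuery;

// Ntk names the field/interface to search, Ntt carries the term, Ntx the match mode
ENEQuery query = new UrlENEQuery("N=0&Ntk=subject&Ntt=quarterly+report&Ntx=mode+matchany", "UTF-8");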
This is how you can do it using the presentation API:
final ENEQuery query = new ENEQuery();
final DimValIdList dimValIdList = new DimValIdList("0");
query.setNavDescriptors(dimValIdList);
final ERecSearchList searches = new ERecSearchList();
final StringBuilder builder = new StringBuilder();
for (final String productId : productIds) {
    builder.append(productId);
    builder.append(" ");
}
final ERecSearch eRecSearch = new ERecSearch("product.id", builder.toString().trim(), "mode matchany");
searches.add(eRecSearch);
query.setNavERecSearches(searches);
Please see this post for a complete example.
Use Search Interfaces in Developer Studio.
Refer - http://docs.oracle.com/cd/E28912_01/DeveloperStudio.612/pdf/DevStudioHelp.pdf#page=209