I have documents with the following link.* dynamic fields:
"docs": [{
"id":"id1"
"link.1.text":"mytext"
"link.1.nImg":1
"link.2.text":"mytext"
"link.2.nImg":2
}, {
"id":"id2"
"link.1.text":"mytext"
"link.1.nImg":1
"link.2.text":"mytext"
"link.2.nImg":1
}]
How can I write a query like link.*.text:"mytext" or link.*.nImg:2?
You can't do that in Solr.
Dynamic fields allow Solr to index fields that you did not explicitly
define in your schema. This is useful if you discover you have
forgotten to define one or more fields. Dynamic fields can make your
application less brittle by providing some flexibility in the
documents you can add to Solr.
In a query you need to list the exact field names; dynamic fields only give you flexibility at index time.
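For example, to emulate link.*.text:"mytext" or link.*.nImg:2 for the documents above, you have to enumerate the field names explicitly (a sketch, assuming only link.1 and link.2 exist):
q=link.1.text:"mytext" OR link.2.text:"mytext"
q=link.1.nImg:2 OR link.2.nImg:2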
Some more info - https://cwiki.apache.org/confluence/display/solr/Dynamic+Fields
Perhaps I am missing this in the documentation, but is it possible to store and query against json data in Apache Ignite? For example, let's say I have a "table" called "cars" with the following fields:
model
blueprint
The "blueprint" field is actually a json field that may contain data such as:
{
    "horsepower": 200,
    "mpg": 30
}
Those are not the only fields for the "blueprint" field; it may contain many more or fewer fields. Is it possible to run a query such as:
SELECT model FROM cars WHERE blueprint.horsepower < 300 AND blueprint.mpg > 20
It is not known in advance what fields the "blueprint" field will contain, and creating indexes for them is not an option.
Note: This is not a conversation about whether this is the logically optimal way to store this information, or about whether the "blueprint" field should be stored in a separate table. This question is meant to understand whether querying against a JSON field is trivially possible in Apache Ignite.
This is not supported out of the box as of now. However, you can create conversion logic between JSON and the Ignite binary format and save BinaryObjects in caches. To create a BinaryObject without a Java class, you can use the binary object builder: https://apacheignite.readme.io/docs/binary-marshaller#modifying-binary-objects-using-binaryobjectbuilder
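As an illustration, here is a minimal sketch of that approach; the cache name "cars" and the type name "Car" are assumptions, and the classes come from org.apache.ignite, org.apache.ignite.binary and org.apache.ignite.configuration:
Ignite ignite = Ignition.start();

// A cache that keeps values in binary form, so no Car class is needed on the classpath.
IgniteCache<String, BinaryObject> cars = ignite
    .getOrCreateCache(new CacheConfiguration<String, BinaryObject>("cars"))
    .withKeepBinary();

// Build a binary object at runtime; the field names can be taken from the parsed JSON blueprint.
BinaryObjectBuilder builder = ignite.binary().builder("Car");
builder.setField("model", "roadster");
builder.setField("horsepower", 200);
builder.setField("mpg", 30);

cars.put("car-1", builder.build());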
We have a scenario where we are trying to perform accurate name matching of Items using SOLR.
Query Parameter: Apple
SOLR Indexed Word: Apple-D
In our business case, "Apple" and "Apple-D" are totally different items and therefore SOLR shouldn't return the match.
Is there an option to achieve the same?
You need to change the fieldType used for the field. Use the string fieldType for your field.
The string fieldType makes sure the value is stored by Solr exactly as it is.
It won't apply any analysis to the value, and it won't create any tokens from it.
With the string type applied, "Apple" and "Apple-D" are stored/indexed as different terms, since there is no tokenizing at all. This gives you the exact match.
Once you change the fieldType, re-index your data.
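For illustration, a minimal schema.xml sketch (the field name item_name is an assumption, not taken from your schema):
<fieldType name="string" class="solr.StrField" sortMissingLast="true"/>
<field name="item_name" type="string" indexed="true" stored="true"/>
With this type, "Apple" and "Apple-D" are indexed as two distinct terms, so a query for item_name:Apple will not match "Apple-D".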
You can use the Solr analysis tool to check how values are indexed and queried.
Note: whenever you ask a question about this, share your schema.xml.
I am working with Neo4j database version 2.0. I have the following requirements:
Case 1. I want to fetch all records where the name contains some string. For example, if I am searching for Neo4j, then all records having the name Neo4j Data, Neo4j Database, Neo4jDatabase etc. should be returned.
Case 2. When I fire a field-less query, all records where any of a set of properties has a matching value should be returned; this could also work at a global level instead of the label level.
Case sensitivity is also a concern.
I have read multiple things about LIKE, indexes, full-text search, legacy indexes etc., so what will be the best fit for my case, or do I have to use Elasticsearch?
I am using spring-data-neo4j in my application, so please provide some configuration for SDN.
Annotate your name field with the @Indexed annotation:
@Indexed(indexName = "whateverIndexName", indexType = IndexType.FULLTEXT)
private String name;
Then query for it the following way (example for a method in an SDN repository; you can use something similar anywhere else you use Cypher):
@Query("START n=node:whateverIndexName({query}) return n")
Set<Topic> findByName(@Param("query") String query);
Neo4j uses Lucene as the backend for indexing, so the query value must be a valid Lucene query, e.g. "name:neo4j" or "name:neo4j*".
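For example (assuming the repository method above lives in a TopicRepository, a hypothetical interface name):
Set<Topic> topics = topicRepository.findByName("name:neo4j*");
With a FULLTEXT index the name is tokenized, so this returns all nodes whose name contains a term starting with "neo4j".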
There is an article that explains the confusion around various Neo4j indexes http://nigelsmall.com/neo4j/index-confusion.
I don't think you need to be using Elasticsearch; you can use the legacy indexes or the Lucene indexes to do full-text searches.
Check out Michael Hunger's blog: jexp.de/blog
This post specifically: http://jexp.de/blog/2014/03/full-text-indexing-fts-in-neo4j-2-0/
How do I index and search custom fields using Lucene or Hibernate Search? I cannot find a way to index the custom fields; they are dynamic.
'Custom fields' here means fields that can be edited by the user; they are not hard-coded.
Any help will be appreciated!
Querying Custom Fields
Just use the projection API:
FullTextQuery hibernateQuery = fullTextSession
.createFullTextQuery(luceneQuery)
.setProjection("myField1", "myField2");
List results = hibernateQuery.list();
Using projections you get to read any field as long as it's STORED.
If it matches some property name of your indexed entities it will be materialized after being converted to the appropriate type (if you have a TwoWayFieldBridge); if not you will get the String value.
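For example (a sketch based on the query above), each result row comes back as an Object[] in the order of the projected fields:
for (Object result : results) {
    Object[] projection = (Object[]) result;
    Object myField1 = projection[0]; // value of "myField1"
    Object myField2 = projection[1]; // value of "myField2"
}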
If for some reason you need to bypass this conversion or just want to have fun decoding the raw Lucene Document, you can open an IndexReader directly.
Indexing Custom Fields
When defining a FieldBridge you get to add as many fields as you like to the indexed Document, and you can name each of them as you like.
The method parameter name is a hint - useful for example to scope the field name - but you can ignore it.
An example FieldBridge implementation writing multiple fields is the DateSplitBridge in the documentation.
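For illustration, a minimal FieldBridge sketch; it assumes the indexed property exposes the custom fields as a Map<String, String> (classes come from org.hibernate.search.bridge, org.apache.lucene.document and java.util):
public class CustomFieldsBridge implements FieldBridge {

    @Override
    public void set(String name, Object value, Document document, LuceneOptions luceneOptions) {
        // The annotated property is assumed to hold the user-defined fields as a Map.
        @SuppressWarnings("unchecked")
        Map<String, String> customFields = (Map<String, String>) value;

        for (Map.Entry<String, String> entry : customFields.entrySet()) {
            // Use the property name as a prefix to scope each dynamic field.
            luceneOptions.addFieldToDocument(name + "." + entry.getKey(), entry.getValue(), document);
        }
    }
}
Something like @Field @FieldBridge(impl = CustomFieldsBridge.class) on that map property would then apply it at indexing time.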
I'm using RavenDB and I'm having trouble extracting a particular value using the Lucene Query.
Here is the JSON in my document:
{
    "customer": "my customer",
    "locations": [
        {
            "name": "vel arcu. Curabitur",
            "settings": {
                "enabled": true
            }
        }
    ]
}
Here is my query:
var list = session.Advanced.LuceneQuery<ExpandoObject>()
.SelectFields<ExpandoObject>("customer", "locations;settings.enabled", "locations;name")
.ToList();
The list is populated and contains a bunch of ExpandoObjects with customer properties but I can't for the life of me get the location -> name or location -> settings -> enabled to come back.
Is the ";" or "." incorrect usage??
It seems that you have misunderstood the concept of indexes and queries in RavenDB. When you load a document in RavenDB, you always load the whole document, including all of its contents. So in your case, if you load a customer, you already have the locations collection and all its children loaded. That means you can use standard LINQ-to-Objects to extract all these values; no need for anything special like indexes or Lucene here.
If you want to do this extraction on the database side, so that you can query on those properties, then you need an index. Indexes are written using LINQ, but it's important to understand that they run on the server and just extract some data to populate the Lucene index. Here again, in most cases you don't even have to write the indexes yourself, because RavenDB can create them automatically for you.
In no case do you need to write Lucene queries like the one in your question, because in RavenDB Lucene queries are always executed against a pre-built index, and those indexes are generally flat. But again, chances are you don't need to do anything with Lucene to get what you want.
I hope that makes sense for you. If not, please update your question and tell us more about what you actually want to do.
Technically, you can use the comma operator "," to nest into collections.
That should work, but it isn't recommended. You can just get your whole object and use it, it is easier and faster.