I have the following Cloudant search index
"indexes": {
"search-cloud": {
"analyzer": "standard",
"index": "function(doc) {
if (doc.name) {
index("keywords", doc.name);
index("name", doc.name, {
"store": true,
"index": false
});
}
if (doc.type === "file" && doc.keywords) {
index("keywords", doc.keywords);
}
}"
}
}
For some reason when I search for specific phrases, I get an error:
Search failed: field "keywords" was indexed without position data; cannot run PhraseQuery (term=FIRSTWORD)
So if I search for FIRSTWORD SECONDWORD, it looks like I am getting an error on the first word.
NOTE: This does not happen to every search phrase I do.
Does anyone know why this would be happening?
doc.name and doc.keywords are just strings.
doc.name is usually something like "2004/04/14 John Doe 1234 Document Folder"
doc.keywords is usually something random like "testing this again"
And the reason I am storing name and keywords under the keywords index is that I want anyone to be able to search keywords or name by typing one string value. Let me know if this is not best practice.
Likely the problem is that some of your documents contain a keywords field with string values, while other documents contain a keywords field with a different type, probably an array. I believe that this scenario would result in the error that you received. Can you double check that all of the values for your keywords fields are, in fact, strings?
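If mixed types do turn out to be the cause, one way to guard against them is to normalize the value inside the index function before indexing it. Here is a minimal sketch, assuming array values should be flattened into a single space-separated string (the asString helper is illustrative, not part of the Cloudant API):

function(doc) {
    // Illustrative helper: flatten arrays into one space-separated string
    // so every document indexes "keywords" with the same type.
    function asString(value) {
        return Array.isArray(value) ? value.join(" ") : String(value);
    }
    if (doc.name) {
        index("keywords", asString(doc.name));
        index("name", asString(doc.name), {"store": true, "index": false});
    }
    if (doc.type === "file" && doc.keywords) {
        index("keywords", asString(doc.keywords));
    }
}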
I have a collection named users; it has the following attributes:
{
    "_id": "937a04d3f516443e87abe8308a1fe83e",
    "username": "andy",
    "full_name": "andy white",
    "image": "https://example.com/xyz.jpg",
    … etc
}
I want to do a text search on full_name and username using an aggregation pipeline, so that if a user searches for any 3 letters, the most relevant full_name or username values are returned, sorted by relevance.
I have already created a text index on username and full_name, and then I tried the query from the link below:
https://www.mongodb.com/docs/manual/tutorial/text-search-in-aggregation/#return-results-sorted-by-text-search-score
pipeline_stage = [
    {"$match": {"$text": {"$search": "whit"}}},
    {"$sort": {"score": {"$meta": "textScore"}}},
    {"$project": {"username": 1, "full_name": 1, "image": 1}}
]
stages = [*pipeline_stage]
users = users_db.aggregate(stages)
but I am getting the error below:
pymongo.errors.OperationFailure: FieldPath field names may not start with '$'. Consider using $getField or $setField., full error: {'ok': 0.0, 'errmsg': "FieldPath field names may not start with '$'. Consider using $getField or $setField.", 'code': 16410, 'codeName': 'Location16410', '$clusterTime': {'clusterTime': Timestamp(1657811022, 14), 'signature': {'hash': b'a\xb4rem\x02\xc3\xa2P\x93E\nS\x1e\xa6\xaa\xb0\xb1\x85\xb5', 'keyId': 7062773414158663703}}, 'operationTime': Timestamp(1657811022, 14)}
I also tried the link below (my query is also below), but I am getting full-text search results; it does not work for partial text search:
https://www.mongodb.com/docs/manual/tutorial/text-search-in-aggregation/#match-on-text-score
pipeline_stage = [
    {"$match": {"$text": {"$search": search_key}}},
    {"$project": {"full_name": 1, "score": {"$meta": "textScore"}}},
]
Any help will be appreciated.
Note: I want to do a partial text search, sorted with the most relevant records at the top.
Thanks
Your $project stage is incorrect; it should be:
pipeline_stage = [
    {"$match": {"$text": {"$search": "and"}}},
    {"$sort": {"score": {"$meta": "textScore"}}},
    {"$project": {"username": "$username", "full_name": "$full_name", "image": "$image"}}
]
Also note that if you use an English text search, stop words like "and" are not indexed.
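On the partial-match requirement: $text only matches whole (stemmed) words, so searching for "whit" will not match "white". Below is a minimal sketch of a regex-based alternative, assuming the field names from your document; note that this bypasses the text index, returns no relevance score, and can be slow on large collections:

pipeline_stage = [
    # Case-insensitive substring match on either field.
    {"$match": {"$or": [
        {"username": {"$regex": "whit", "$options": "i"}},
        {"full_name": {"$regex": "whit", "$options": "i"}},
    ]}},
    {"$project": {"username": 1, "full_name": 1, "image": 1}},
]
users = users_db.aggregate(pipeline_stage)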
I have my mapping like this:
{
    "doc": {
        "mappings": {
            "mydocument": {
                "properties": {
                    "file": {
                        "type": "attachment",
                        "path": "full",
                        "fields": {
                            "file": {
                                "type": "string",
                                "store": true,
                                "term_vector": "with_positions_offsets"
                            },
                            "author": {
                                ...
When I search for a complete word I get the result:
"query": {
"fuzzy_like_this" : {
"fields" : ["file"],
"like_text" : "This_is_something_I_want_to_search_for",
"max_query_terms" : 12
}
},
"highlight" : {
"number_of_fragments" : 3,
"fragment_size" : 650,
"fields" : {
"file" : { }
}
}
But if I change the search term to "This_is_something_I_want" I get nothing. What am I missing?
To implement a partial match, we must first understand what fuzzy_like_this does, and then decide what you want partial matching to return. fuzzy_like_this performs two key functions.
First, the like_text is analyzed using the default analyzer. All the resulting tokens are then used to find documents based on term frequency, or TF-IDF.
This typically means that the input term will be split on spaces and lowercased. This_is_something_I_want will therefore be tokenized to this_is_something_i_want. Unless you have files containing this exact term, no documents will match.
Secondly, all terms are fuzzified. Fuzzy searches score terms based on how many character changes need to be made to one word to match another. For instance, to get from bat to hat we need to make 1 character change.
In our case, to get from this_is_something_i_want to this_is_something_i_want_to_search_for, we would need to make 14 character changes (adding _to_search_for). Standard fuzzy search only allows for 3 character changes when working with terms longer than 5 or 6 characters. Increasing the fuzzy limit to 14 would, however, produce severely skewed results.
So neither of these functions will help produce the results you seek.
Here is what I can suggest:
You can implement an analyzer that splits on underscores, similar to the sketch below. The tokens produced will then be ['this', 'is', 'something', 'i', 'want'], which can correctly be matched to the sample case.
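A minimal sketch of such an analyzer, assuming a pattern tokenizer that splits on underscores (the index and analyzer names are illustrative):

PUT /my-index
{
    "settings": {
        "analysis": {
            "analyzer": {
                "underscore_analyzer": {
                    "type": "custom",
                    "tokenizer": "underscore_tokenizer",
                    "filter": ["lowercase"]
                }
            },
            "tokenizer": {
                "underscore_tokenizer": {
                    "type": "pattern",
                    "pattern": "_"
                }
            }
        }
    }
}

The file field would then need to be mapped with this analyzer for it to take effect.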
Alternatively, if all you want is a document that starts with the specified text, you can use a phrase prefix query instead of fuzzy_like_this. Documentation here.
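A minimal sketch of that approach, assuming the original mapping with the standard analyzer (the whole underscored string is indexed as one token, so the query text acts as a prefix of it; index name illustrative):

GET /my-index/_search
{
    "query": {
        "match_phrase_prefix": {
            "file": "This_is_something_I_want"
        }
    }
}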
With SQL we can do the following:
select * from x where concat(x.y ," ",x.z) like "%find m%"
when x.y = "find" and x.z = "me".
How do I do the same thing with MongoDB when I use a JSON structure similar to this:
{
    data: [
        {
            id: 1,
            value: "find"
        },
        {
            id: 2,
            value: "me"
        }
    ]
}
The comparison to SQL here is not valid, since no relational database has the concept of embedded arrays that MongoDB has, as shown in your example. You can only "concat" between "fields in a row" of a table, which is basically not the same thing.
You can do this with the JavaScript evaluation of $where, which is not optimal, but it's a start. And you can add some extra "smarts" to the match as well, with caution:
db.collection.find({
    // Pre-filter: only consider documents whose array elements
    // start with the first letter of each search word.
    "$or": [
        { "data.value": /^f/ },
        { "data.value": /^m/ }
    ],
    // Join the array values into one string, then run the
    // "like"-style regex test against the concatenated result.
    "$where": function() {
        var items = [];
        this.data.forEach(function(item) {
            items.push(item.value);
        });
        var myString = items.join(" ");
        return myString.match(/find m/) !== null;
    }
})
So there you go. We optimized this a bit by taking the first character of each word in your "test string" and comparing those prefixes to the elements of the array in the document.
The next part "concatenates" the array elements into a string and then does a "regex" comparison (same as "like") on the concatenated result to see if it matches. Where it does, the document is considered a match and returned.
Not optimal, but these are the options available to MongoDB on a structure like this. Perhaps the structure should be different. But you don't specify why you want this, so we can't advise a better solution for what you want to achieve.
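For completeness, a minimal sketch of the same idea on newer MongoDB servers, using the aggregation framework instead of JavaScript evaluation: $reduce joins the array values and $regexMatch performs the "like" test. This assumes MongoDB 4.2+ (for $regexMatch) and is not part of the original answer:

db.collection.aggregate([
    {"$match": {"$expr": {"$regexMatch": {
        "input": {
            // Join the data.value entries with a single space separator.
            "$reduce": {
                "input": "$data.value",
                "initialValue": "",
                "in": {"$cond": [
                    {"$eq": ["$$value", ""]},
                    "$$this",
                    {"$concat": ["$$value", " ", "$$this"]}
                ]}
            }
        },
        "regex": "find m"
    }}}}
])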
I am a bit lost on how to index these documents in Elasticsearch.
Document 1
{
    text: ['chicken']
}
Document 2
{
    text: ['chicken', ['broth', 'stock']]
}
I need to be able to query these using either 'chicken flavored stock' or 'chicken flavored broth', and it should return both documents with the same score, since all of their terms are matched by the input query. It also shouldn't return doc 2 when the query is only 'chicken'.
Basically, I want to know that all the terms in the 'text' field have been found somewhere in the query, with the internal array (i.e. 'broth' and 'stock') acting like an OR clause.
Is this even possible?
Update:
I did find a (cumbersome) way of doing it. I save the documents by combining their fields into phrases (e.g. ['chicken broth', 'chicken stock'] for doc 2). Then I search using every combination of the input as a phrase (e.g. ['chicken', 'chicken flavored', 'chicken flavored broth', 'chicken broth', ...]).
This solution does give me the results I want, but I can't help but feel this is a common case that could be handled much more elegantly. It feels like the ngrams are along the path to my answer, but I can't quite work it out.
When you index documents without adding a custom mapping, Elasticsearch uses the standard analyzer by default.
You could remove the arrays from the text fields and index your documents as:
Document 1
{
    "text": "chicken"
}
Document 2
{
    "text": "chicken broth stock"
}
The standard analyzer will create the following tokens in the Lucene index:
Document 1
"chicken"
Document 2
"chicken", "broth", "stock"
Your documents are matching the search terms as follows:
chicken: the term 'chicken' matches in both documents; because the text field is shorter in Document 1, it scores higher than Document 2.
chicken flavored: the term 'chicken' matches in both documents, but there is no match for the term 'flavored'. Again, as the text field is shorter in Document 1, it scores higher than Document 2.
chicken flavored broth: the term 'chicken' matches in both documents, and the term 'broth' also matches in Document 2. There is no match on the term 'flavored' in either document. Document 2 scores higher than Document 1 as it matches two of the terms in the query.
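For reference, a minimal sketch of the kind of query this assumes, using a plain match query against those documents (index name illustrative):

GET /my-index/_search
{
    "query": {
        "match": {
            "text": "chicken flavored broth"
        }
    }
}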
I don't really see a use case for ngrams as the above does what you want.
So here is something that you can try. Percolator can solve your problem but you will have to change the way you are indexing your documents.
So instead of indexing doc1 the way you are doing, index it like so:
PUT /test-index/.percolator/1
{
    "query": {
        "term": {
            "text": {
                "value": "chicken"
            }
        }
    }
}
And, index doc2 like so:
PUT /test-index/.percolator/2
{
    "query": {
        "bool": {
            "must": [
                {
                    "term": {
                        "text": {
                            "value": "chicken"
                        }
                    }
                },
                {
                    "bool": {
                        "should": [
                            {
                                "term": {
                                    "text": {
                                        "value": "broth"
                                    }
                                }
                            },
                            {
                                "term": {
                                    "text": {
                                        "value": "stock"
                                    }
                                }
                            }
                        ]
                    }
                }
            ]
        }
    }
}
Now, instead of querying the way you were querying your documents earlier, percolate them:
GET /test-index/all_terms_search/_percolate
{
    "doc": {
        "text": "chicken flavored stock"
    }
}
This will get you both documents. It also gives you the flexibility to control what, and how much, you want to match. While you index your documents' reverse queries in the percolator, you provide an ID for each query; keyed by that ID, you can maintain the text in a much simpler form for you to consume, either in a separate index in Elasticsearch or in some other datastore that can retrieve the matching documents really fast.
I am using Facet Terms to get all the unique values and their count for a field. And I am getting wrong results.
term: web
Count: 1191979
term: misc
Count: 1191979
term: passwd
Count: 1191979
term: etc
Count: 1191979
While the actual result should be:
term: WEB-MISC /etc/passwd
Count: 1191979
Here is my sample query:
{
    "facets": {
        "terms1": {
            "terms": {
                "field": "message"
            }
        }
    }
}
If reindexing is an option, it would be best to change the mapping and mark this field as not_analyzed:
"your_field": { "type": "string", "index": "not_analyzed" }
You can use the multi_field type if keeping an analyzed version of the field is desired:
"your_field" : {
"type" : "multi_field",
"fields" : {
"your_field" : {"type" : "string", "index" : "analyzed"},
"untouched" : {"type" : "string", "index" : "not_analyzed"}
}
}
This way, you can continue using your_field in the queries, while running facet searches using your_field.untouched.
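With that mapping in place, the facet request from the question can point at the untouched sub-field, along these lines:

{
    "facets": {
        "terms1": {
            "terms": {
                "field": "your_field.untouched"
            }
        }
    }
}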
Alternatively, if this field is stored, you can use a script field facet instead:
"facets" : {
"term" : {
"terms" : {
"script_field" : "_fields.your_field.value"
}
}
}
As a last resort, if this field is not stored but the record source is stored in the index, you can try this:
"facets" : {
"term" : {
"terms" : {
"script_field" : "_source.your_field"
}
}
}
The first solution is the most efficient. The last solution is the least efficient and may take a lot of time on a large index.
Wow, I also hit this same issue today while running a terms aggregation in a recent Elasticsearch version. After some googling and partial understanding, I found out how this indexing works (which is actually very simple).
Queries can find only terms that actually exist in the inverted index
When you index the following string
"WEB-MISC /etc/passwd"
it will be passed to an analyzer. The analyzer might tokenize it into
"WEB", "MISC", "etc" and "passwd"
with position details. These tokens might then be filtered to lowercase, such as
"web", "misc", "etc" and "passwd"
So, after indexing, the search query can only see the above 4 tokens, not the complete string "WEB-MISC /etc/passwd". For your requirement, the following are the options you can use:
1. Change the default analyzer used by Elasticsearch ([link][1])
2. If analysis is not needed, just turn off the analyzer by setting 'not_analyzed' for the fields you need
3. To make the already indexed data searchable in this way, re-indexing is the only option
I have briefly explained this problem and proposed two solutions here.
I have talked about multiple approaches here.
One is the use of not_analyzed to preserve the string as it is. But since that has the drawback of being case sensitive, a better approach would be to use the keyword tokenizer + lowercase filter, as sketched below.
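A minimal sketch of that analyzer, assuming a custom analyzer definition in the index settings (index and analyzer names are illustrative):

PUT /my-index
{
    "settings": {
        "analysis": {
            "analyzer": {
                "keyword_lowercase": {
                    "type": "custom",
                    "tokenizer": "keyword",
                    "filter": ["lowercase"]
                }
            }
        }
    }
}

Fields mapped with this analyzer are indexed as a single lowercased token, so a terms facet or aggregation returns the whole string while matching stays case insensitive.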