How to debug RediSearch JSON indexing issues?

When indexing JSON with RediSearch, documents won't be indexed if the schema's type mapping is incorrect, for example when a numeric value is mapped as TEXT.
Most often the symptom is that the index is simply empty or missing documents, even though the JSON documents exist in the database.
Troubleshooting this by trial and error through the profiler is time-consuming. The only approach I have found is to manually compare documents against the index definition (FT.INFO) and try to spot the mismatch.
Is there a better way, such as a log or an error message I can inspect to find problems with indexing documents?
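
One place to look, as a minimal sketch (the index name idx, the doc: key prefix, and the deliberately broken schema below are made-up examples): FT.INFO reports a hash_indexing_failures counter that increments each time a document cannot be indexed against the schema, and, depending on version, FT.INFO may also surface details of the last indexing error.

    # Schema deliberately mismatched: $.price holds a number but is declared TEXT
    FT.CREATE idx ON JSON PREFIX 1 doc: SCHEMA $.price AS price TEXT
    # This document should fail to index because of the type mismatch
    JSON.SET doc:1 $ '{"price": 10}'
    # Look for a non-zero hash_indexing_failures in the output
    FT.INFO idx

If the counter is non-zero, the schema and the documents disagree somewhere, and comparing the JSON types of a failing key against the FT.CREATE schema is usually faster than profiling.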

Related

Creating dynamic facets using apache solr

I'm new to Apache Solr.
I have uploaded a few log files using Solr Cell, and I want to create facets based on the content of those log files.
For example: my log files contain transaction records, and I would like to use the transaction id as a facet; clicking it should search the uploaded log files and return the results for that particular id.
Note: I need to facet on fields derived from the content of the logs.
As long as the field is indexed, you can facet on it. So you can use either a schemaless configuration or dynamicField definitions to match and automatically create fields for your log records.
Go through the Solr examples first; there should be enough information there.
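For instance, a minimal sketch using dynamicField (the *_s suffix and the transactionid_s field name are conventions chosen for illustration, not required names):

    <!-- schema.xml: any field ending in _s becomes an indexed, stored string -->
    <dynamicField name="*_s" type="string" indexed="true" stored="true"/>

If your extraction step then writes a transactionid_s field into each document, faceting on it is an ordinary facet query:

    /select?q=*:*&facet=true&facet.field=transactionid_s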
(updated based on the comments)
If the text needs to be pre-processed and split, there are two basic avenues:
Using DataImportHandler (DIH), probably with LineEntityProcessor and RegexTransformer, to split the field into multiple fields
Using UpdateRequestProcessor chains (in solrconfig.xml), probably cloning the field multiple times and then using RegexReplaceProcessorFactory to extract the relevant parts (a sketch follows below). That's even uglier than DIH, though, as there is no easy way to split one field into many.
Still, specifically for logs, it is better to use something like Logstash with a Solr output plugin.
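
A rough sketch of the second avenue (the chain name, field names, and the txid= pattern are all invented for illustration): clone the raw content field, then regex-extract the id from the clone.

    <!-- solrconfig.xml -->
    <updateRequestProcessorChain name="parse-logs">
      <processor class="solr.CloneFieldUpdateProcessorFactory">
        <str name="source">content</str>
        <str name="dest">transactionid_s</str>
      </processor>
      <processor class="solr.RegexReplaceProcessorFactory">
        <str name="fieldName">transactionid_s</str>
        <str name="pattern">^.*txid=(\S+).*$</str>
        <str name="replacement">$1</str>
        <bool name="literalReplacement">false</bool>
      </processor>
      <processor class="solr.LogUpdateProcessorFactory"/>
      <processor class="solr.RunUpdateProcessorFactory"/>
    </updateRequestProcessorChain>

The chain only takes effect when your update handler references it, e.g. via the update.chain request parameter.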
+1 to Alex's answer.
Another alternative is to write a custom update processor where you figure out what field you want to facet on and explicitly add that field to your document.
This makes sense only if you know what kind of fields to expect, based on some pattern. If that is not the case, then using dynamic fields or a schemaless config is your best bet.
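A minimal sketch of such a processor (the class name, the content source field, and the txid= pattern are hypothetical; the Solr types and the processAdd hook are the real extension points):

    import java.io.IOException;
    import java.util.regex.Matcher;
    import java.util.regex.Pattern;
    import org.apache.solr.common.SolrInputDocument;
    import org.apache.solr.request.SolrQueryRequest;
    import org.apache.solr.response.SolrQueryResponse;
    import org.apache.solr.update.AddUpdateCommand;
    import org.apache.solr.update.processor.UpdateRequestProcessor;
    import org.apache.solr.update.processor.UpdateRequestProcessorFactory;

    public class FacetFieldUpdateProcessorFactory extends UpdateRequestProcessorFactory {
      private static final Pattern TXID = Pattern.compile("txid=(\\S+)");

      @Override
      public UpdateRequestProcessor getInstance(SolrQueryRequest req,
          SolrQueryResponse rsp, UpdateRequestProcessor next) {
        return new UpdateRequestProcessor(next) {
          @Override
          public void processAdd(AddUpdateCommand cmd) throws IOException {
            SolrInputDocument doc = cmd.getSolrInputDocument();
            Object content = doc.getFieldValue("content");
            if (content != null) {
              Matcher m = TXID.matcher(content.toString());
              if (m.find()) {
                // Add the extracted id as a dedicated facet field.
                doc.setField("transactionid_s", m.group(1));
              }
            }
            super.processAdd(cmd); // hand the document on down the chain
          }
        };
      }
    }

Register it in a processor chain in solrconfig.xml, ahead of RunUpdateProcessorFactory, just like the XML-only variant above.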

Suggestion around Lucene 4.4 (Log Search)

I am new to Lucene and trying to use it to search log files/entries generated by a system (SystemA).
Architecture
Receive each log entry (i.e. XML) in an INPUT directory. SystemA sends log entries to an MQ queue, which is polled by a small utility that picks up each message and creates a file in the INPUT directory.
WriteIndex.java (i.e. IndexWriter/Lucene) keeps checking whether a new file has arrived in the INPUT directory. If so, it takes the file, adds it to the index, and moves the file to an OUTPUT directory. As part of indexing, I put the filename, path, timestamp, and contents into the index.
Note: I am indexing the content and also putting the whole content into the index as a StringField.
SearchIndex.java (i.e. SearcherManager/Lucene/refreshIfChanged) is created. As part of creation, I also started a thread that checks every minute whether the index has changed. I acquire an IndexSearcher for every request. It's working fine.
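
For reference, a minimal sketch of that searcher pattern against the Lucene 4.x API (the index path, the queried field, and the 1-minute interval are assumptions taken from the description above):

    import java.io.File;
    import org.apache.lucene.document.Document;
    import org.apache.lucene.index.Term;
    import org.apache.lucene.search.IndexSearcher;
    import org.apache.lucene.search.ScoreDoc;
    import org.apache.lucene.search.SearcherManager;
    import org.apache.lucene.search.TermQuery;
    import org.apache.lucene.search.TopDocs;
    import org.apache.lucene.store.Directory;
    import org.apache.lucene.store.FSDirectory;

    public class SearchIndexSketch {
      public static void main(String[] args) throws Exception {
        Directory dir = FSDirectory.open(new File("/data/lucene-index")); // assumed path
        SearcherManager manager = new SearcherManager(dir, null);

        // Background thread: pick up index changes roughly once a minute.
        Thread refresher = new Thread(() -> {
          while (!Thread.currentThread().isInterrupted()) {
            try {
              manager.maybeRefresh(); // cheap no-op when nothing has changed
              Thread.sleep(60_000);
            } catch (Exception e) {
              return;
            }
          }
        });
        refresher.setDaemon(true);
        refresher.start();

        // Per request: acquire a searcher, use it, and always release it.
        IndexSearcher searcher = manager.acquire();
        try {
          TopDocs hits = searcher.search(new TermQuery(new Term("content", "error")), 10);
          for (ScoreDoc sd : hits.scoreDocs) {
            Document doc = searcher.doc(sd.doc);
            System.out.println(doc.get("filename"));
          }
        } finally {
          manager.release(searcher); // never close() an acquired searcher
        }
      }
    }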
Everything so far has worked fine, but I have only tested it with a few hundred files. In production I will get around 500K log entries a day, which means 500K small files, each containing one XML document. WriteIndex.java will have to run non-stop to update the index whenever a new file is received.
I have the following questions:
Has anyone done similar work? Any issues/best practices I should follow?
Do you see any problem with the index files generated for such a large number of XML files? Each XML file would be 2KB max. Remember that I am indexing the content as well as storing it in the index, so that I can retrieve it whenever a search finds a match.
I would expose SearchIndex.java as a servlet so that admins can search log entries from a web page. Any issues you see with that?
Please let me know if anyone needs anything specific.
Thanks,
Rohit Goyal
Architecture looks fine.
A few things:
Consider using TextField instead of StringField. A TextField is tokenized, so users can search on individual tokens; a StringField is not tokenized, so the full field value has to match for a document to be found.
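For example (Lucene 4.x; the field names follow the question, and the helper method is just for illustration):

    import org.apache.lucene.document.Document;
    import org.apache.lucene.document.Field;
    import org.apache.lucene.document.StringField;
    import org.apache.lucene.document.TextField;

    class LogDocuments {
      // Build one Lucene document per incoming XML log file.
      static Document toDocument(String fileName, String path, String xmlContents) {
        Document doc = new Document();
        // Tokenized AND stored: searchable word by word, retrievable on a hit.
        doc.add(new TextField("content", xmlContents, Field.Store.YES));
        // Not tokenized: exact-match lookups on identifiers.
        doc.add(new StringField("filename", fileName, Field.Store.YES));
        doc.add(new StringField("path", path, Field.Store.YES));
        return doc;
      }
    }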
No problem with performance on the Lucene side. Check out Lucene's published benchmarks: it can index the full Wikipedia dump (millions of documents) in minutes on decent hardware, and searching is fast too.

Exclude versioned documents while querying - RavenDB

I added the versioning bundle midway through my project, after most of my Raven queries were already written in my data access layer. Now, because of versioning, I have lots of duplicated data: whenever I query a document type, I see each value repeated as many times as the document has been revised. Is there a way to stop the revision documents from being returned when I query for current data, without rewriting all of my queries with Exclude("Revisions")? Is there a setting like "query on revisioned documents = false" that I can set globally? Please suggest something to overcome this.
That is how it is supposed to work, actually: with the versioning bundle enabled, revisions are not returned by queries. It appears that you have disabled the versioning bundle, which would cause this to happen.

Sitecore System Lucene Index for custom queries

I have been using Sitecore query and fast query for some sections of the website. But with growing content these queries have become slow, and I'd like to implement Lucene querying for content to speed things up.
I am wondering if I can just use the System index instead of having to setup a separate index. Does Sitecore by default index all content in the content editor? Is this a good approach or should I just create my own index?
(I'm going to assume you're using Sitecore 6.4-6.6.)
As with everything, it depends. Sitecore keeps an index of all the Sitecore items in its system index, and you are welcome to use that. Sometimes, though, you may want a more specialised or restricted set of items, such as items based on a certain template, or you may need a checkbox field indexed (the system index by default only indexes text fields).
Setting up your own search index is pretty easy. It does require some fiddling with the web.config, though (and I'd recommend adding it as a .config include file).
Create a new <index> node with its own id; that defines the name of the collection and the folder it will go into. (You can check it's working by looking for that directory in the /data/indexes directory of your installation.)
Next you can tell the crawler which database to look at (most likely master if you want unpublished content to be indexed, or web for published content) and where to start the search from (in the example below I index only the news section). You can also tag, boost, and say whether to IndexAllFields (otherwise it will only index fields it understands as text: rich text, multi-line text, text, etc.).
Finally, you can tell the indexer which template types to include or exclude; see the sketch after this paragraph.
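By way of illustration only, a sketch of roughly what such a definition looked like in the Sitecore 6.x web.config (every name, path, and the GUID below are placeholders; check the stock <search> section of your version for the exact shape):

    <search>
      <configuration>
        <indexes>
          <!-- id doubles as the collection name and the folder under /data/indexes -->
          <index id="news" type="Sitecore.Search.Index, Sitecore.Kernel">
            <param desc="name">$(id)</param>
            <param desc="folder">news</param>
            <Analyzer ref="search/analyzer" />
            <locations hint="list:AddCrawler">
              <news type="Sitecore.Search.Crawlers.DatabaseCrawler, Sitecore.Kernel">
                <Database>master</Database>
                <Root>/sitecore/content/Home/News</Root>
                <IndexAllFields>true</IndexAllFields>
                <include hint="list:IncludeTemplate">
                  <!-- placeholder template GUID -->
                  <newsArticle>{00000000-0000-0000-0000-000000000000}</newsArticle>
                </include>
              </news>
            </locations>
          </index>
        </indexes>
      </configuration>
    </search>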
The indexer works by subscribing to item events within Sitecore, so every time an item is changed, moved, or deleted, the index is updated automatically. Obviously, if you are indexing the web database, the items will need to have been published.
More in-depth info on the query syntax & indexing can be found here on SDN.
The search syntax and API is much improved in 6.4/6.5 but if you want to add extra kick then my colleague Alex Shyba's Advanced Database Crawler is worth checking out too.
Hope this helps :D
You will want to implement your own index. For the same reason that you are seeing things slow down when there is a lot of content, indexes slow down when there is a lot of content in them as well.
I prefer targeted indexes meant specifically to drive the functionality I need and only has the data in it that is required. This allows for smaller and more efficient index usage on your components.
Additionally, you probably want to look into the AdvancedDatabaseCrawler put together by Alex Shyba. There are a few blogs out there with some great posts on implementing this lucene indexing module.
A separate index is always a wise decision; you can keep it light. In big environments the system index can grow to gigabytes.
You can exclude the actual content from the index if you will only be using it to perform lookups, not to show content from the index.
Finally: the system index is built from the master database, while you'll be querying the web database, possibly on a content delivery server.

Program to scrape a webpage into an index

I've been looking for a program to create an index from static web pages. I'm not looking for a program like Solr or Elasticsearch, because both assume I will be creating the index interactively. I need something that can simply go to a URL and create a search index from the pages it pulls. It can store the index in whatever form is necessary (db, xml, etc.). I just don't need the programs that are so involved with back-end database access and code, as this search will be very light and mostly for internal purposes, on a site that does not use any of those.
Thanks for any tips that may get me started or answers that will solve my problem!
Investigate Nutch. Nutch can crawl and index from a URL, and what it indexes is very configurable.
Once you finish crawling/indexing, that index is searchable. There is no programming involved.
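
For example, with Nutch 1.x the whole crawl-and-index cycle was a single command (assuming a urls/ directory containing a seed file with your start URLs; the depth and topN values here are arbitrary):

    # Crawl from the seed list, following links 3 levels deep,
    # fetching at most 50 pages per level, writing everything under crawl/
    bin/nutch crawl urls -dir crawl -depth 3 -topN 50

Newer Nutch releases split this into a bin/crawl script, but the idea is the same.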