Have you indexed Nutch crawl results using Elasticsearch before?

Has anyone had any luck writing custom indexers for Nutch to index the crawl results with Elasticsearch? Or do you know of any that already exist?

I wrote an Elasticsearch plugin that mocks the Solr API. Using this plugin and the standard Nutch Solr indexer, you can easily send crawled data into Elasticsearch. The plugin and an example of how to use it with Nutch can be found on GitHub:
https://github.com/mattweber/elasticsearch-mocksolrplugin
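In other words, anything that speaks the Solr update API, including Nutch's solrindex job, can be pointed at Elasticsearch. A minimal SolrJ smoke test might look like the sketch below; the endpoint path is an assumption, so check the plugin README for the URL it actually registers:

    // Minimal SolrJ smoke test against the mock-Solr endpoint.
    // Assumptions: Elasticsearch on localhost:9200, and the plugin exposing
    // a Solr-compatible update endpoint under /solr (verify in the README).
    import org.apache.solr.client.solrj.impl.CommonsHttpSolrServer;
    import org.apache.solr.common.SolrInputDocument;

    public class MockSolrSmokeTest {
        public static void main(String[] args) throws Exception {
            CommonsHttpSolrServer server =
                new CommonsHttpSolrServer("http://localhost:9200/solr");
            SolrInputDocument doc = new SolrInputDocument();
            doc.addField("id", "http://example.com/");
            doc.addField("title", "Example page");
            server.add(doc);    // sent through the Solr update API...
            server.commit();    // ...but stored by Elasticsearch
        }
    }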

I know that Nutch will be adding pluggable backends, and I am glad to see it. I had a need to integrate Elasticsearch with Nutch 1.3, so I piggybacked off the Solr indexer code (src/java/org/apache/nutch/indexer/solr). The code is posted here:
https://github.com/ctjmorgan/nutch-elasticsearch-indexer
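The heart of such an indexer is small. Below is a rough sketch of the writer, not the actual code from that repo: the NutchIndexWriter method signatures and NutchDocument accessors follow Nutch 1.3 as best I recall, and the TransportClient API is the Elasticsearch 0.x one of the same era, so both may need adjusting:

    // Sketch of an Elasticsearch writer modeled on Nutch 1.3's SolrWriter.
    // Interface and accessor names are from memory; adjust to your versions.
    import java.io.IOException;
    import org.apache.hadoop.mapred.JobConf;
    import org.apache.nutch.indexer.NutchDocument;
    import org.apache.nutch.indexer.NutchIndexWriter;
    import org.elasticsearch.client.transport.TransportClient;
    import org.elasticsearch.common.transport.InetSocketTransportAddress;
    import org.elasticsearch.common.xcontent.XContentBuilder;
    import static org.elasticsearch.common.xcontent.XContentFactory.jsonBuilder;

    public class ElasticWriter implements NutchIndexWriter {
        private TransportClient client;

        public void open(JobConf job, String name) throws IOException {
            // Host and port would normally come from nutch-site.xml.
            client = new TransportClient().addTransportAddress(
                new InetSocketTransportAddress("localhost", 9300));
        }

        public void write(NutchDocument doc) throws IOException {
            // Copy every Nutch field into a JSON document.
            XContentBuilder source = jsonBuilder().startObject();
            for (String field : doc.getFieldNames()) {
                source.field(field, doc.getFieldValue(field));
            }
            source.endObject();
            client.prepareIndex("nutch", "doc", doc.getFieldValue("id"))
                  .setSource(source).execute().actionGet();
        }

        public void close() throws IOException {
            client.close();
        }
    }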

I haven't done it, but this is definitely doable; it would require piggybacking on the Solr indexer code (src/java/org/apache/nutch/indexer/solr) and adapting it to Elasticsearch. It would be a nice contribution to Nutch, by the way.

Time has gone by, and Nutch is now well integrated with Elasticsearch. Here is a nice tutorial.
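For reference, with recent Nutch 1.x releases the integration boils down to enabling the bundled indexer-elasticsearch plugin in conf/nutch-site.xml and pointing it at the cluster, roughly like this (a sketch; property names can differ between Nutch versions, so check the plugin's documentation):

    <!-- Sketch for conf/nutch-site.xml; verify property names per version. -->
    <property>
      <name>plugin.includes</name>
      <value>protocol-http|urlfilter-regex|parse-(html|tika)|index-(basic|anchor)|indexer-elasticsearch|scoring-opic|urlnormalizer-(pass|regex|basic)</value>
    </property>
    <property>
      <name>elastic.host</name>
      <value>localhost</value>
    </property>
    <property>
      <name>elastic.port</name>
      <value>9300</value>
    </property>
    <property>
      <name>elastic.index</name>
      <value>nutch</value>
    </property>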

Related

Working with Solr configuration

We are using Solr for indexing in our project, and indexing is working fine so far.
We need to handle the scenario where Solr is down for some reason.
What should the indexing process do then?
Is there a way, during a Solr restart, to inject a script that executes the backlog of pending index requests?
Please share your experiences of how you handled such scenarios.
Thanks in advance.
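One common pattern (a minimal sketch, not a drop-in solution: the backlog format, the ping-based health check, and the rebuild step are all assumptions) is to catch indexing failures, record the affected documents locally, and replay that backlog once Solr responds again:

    // Sketch: queue document ids locally while Solr is down, replay on recovery.
    // Assumes SolrJ 6+; a real system would persist whole documents, not ids.
    import java.io.IOException;
    import java.nio.charset.StandardCharsets;
    import java.nio.file.Files;
    import java.nio.file.Path;
    import java.nio.file.Paths;
    import java.nio.file.StandardOpenOption;
    import org.apache.solr.client.solrj.SolrServerException;
    import org.apache.solr.client.solrj.impl.HttpSolrClient;
    import org.apache.solr.common.SolrInputDocument;

    public class ResilientIndexer {
        private final HttpSolrClient solr =
            new HttpSolrClient.Builder("http://localhost:8983/solr/mycore").build();
        private final Path backlog = Paths.get("solr-backlog.txt");

        public void index(SolrInputDocument doc) throws IOException {
            try {
                solr.add(doc);
            } catch (SolrServerException | IOException e) {
                // Solr unreachable: remember the document id for later replay.
                Files.write(backlog,
                    (doc.getFieldValue("id") + System.lineSeparator())
                        .getBytes(StandardCharsets.UTF_8),
                    StandardOpenOption.CREATE, StandardOpenOption.APPEND);
            }
        }

        public void replayBacklog() throws IOException {
            if (!Files.exists(backlog)) return;
            try {
                solr.ping();  // throws while Solr is still down
            } catch (Exception e) {
                return;
            }
            for (String id : Files.readAllLines(backlog)) {
                // Rebuild the document from the source system by id and
                // call solr.add(...) again; the rebuild step is omitted here.
            }
            Files.delete(backlog);
        }
    }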

Any descriptive tutorials or clear guidance on crawling the web with Apache Solr 6.6?

I read on this question page that Solr 5+ supports web crawling, which would mean we no longer need Nutch. Are there any examples or descriptions explaining how to set up Solr 6.6 to crawl a set of remote websites?
They most probably meant using the DataImportHandler (DIH) with the right data source, but I doubt it can replace Nutch and similar crawlers in many scenarios.

How to auto-index data using Solr and Nutch?

I want to automatically index a document or a website when it is fed to Apache Solr. How can we achieve this? I have seen examples that use a cron job called via a PHP script, but they are not clearly explained. Using the Java API SolrJ, is there any way to index data automatically, without having to do it manually?
You can write a scheduler and call the SolrJ code that does the indexing/reindexing (a minimal sketch follows below).
For writing the scheduler, please refer to these links:
http://www.mkyong.com/java/how-to-run-a-task-periodically-in-java/
http://archive.oreilly.com/pub/a/java/archive/quartz.html
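For instance, with plain java.util.concurrent you can run your existing SolrJ routine on a fixed schedule. A minimal sketch, where indexAll() is a placeholder for whatever indexing code you already have:

    // Sketch: run an existing SolrJ indexing routine once an hour.
    import java.util.concurrent.Executors;
    import java.util.concurrent.ScheduledExecutorService;
    import java.util.concurrent.TimeUnit;

    public class IndexingScheduler {
        public static void main(String[] args) {
            ScheduledExecutorService scheduler =
                Executors.newSingleThreadScheduledExecutor();
            // First run immediately, then repeat every hour.
            scheduler.scheduleAtFixedRate(IndexingScheduler::indexAll,
                0, 1, TimeUnit.HOURS);
        }

        static void indexAll() {
            // Placeholder: build SolrInputDocuments from your source system
            // and push them with SolrClient.add() / commit().
        }
    }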
If you are using Apache Nutch, you have to use the Nutch solr-index plugin. Using this plugin, you can index web documents as soon as they are crawled by Nutch. But the main question is how to schedule Nutch to start periodically.
As far as I know, you have to use a scheduler for this purpose. I used to know an old Nutch project called Nutch-base, which uses Apache Quartz to schedule Nutch jobs. You can find the source code of Nutch-base at the following link:
https://github.com/mathieuravaux/nutchbase
In this project there is a plugin called admin-scheduling. Although it is implemented for an old version of Nutch, it could be a nice starting point for developing a scheduler plugin for Nutch.
It is worth noting that if you are going to crawl websites periodically and fetch newly arrived links, you can use this tutorial.
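To give an idea of the Quartz approach with a current Quartz 2.x API, a job that shells out to the Nutch crawl script might look like this sketch (the script path and its arguments are assumptions for a typical Nutch 1.x install):

    // Sketch: fire the Nutch crawl script every 6 hours with Quartz 2.x.
    // The script path and arguments are assumptions; adjust for your setup.
    import org.quartz.*;
    import org.quartz.impl.StdSchedulerFactory;

    public class NutchCrawlJob implements Job {
        public void execute(JobExecutionContext context) throws JobExecutionException {
            try {
                new ProcessBuilder("/opt/nutch/bin/crawl",
                        "/opt/nutch/urls", "/opt/nutch/crawldb", "2")
                    .inheritIO().start().waitFor();
            } catch (Exception e) {
                throw new JobExecutionException(e);
            }
        }

        public static void main(String[] args) throws SchedulerException {
            Scheduler scheduler = StdSchedulerFactory.getDefaultScheduler();
            JobDetail job = JobBuilder.newJob(NutchCrawlJob.class)
                .withIdentity("nutchCrawl").build();
            Trigger trigger = TriggerBuilder.newTrigger()
                .withSchedule(SimpleScheduleBuilder.simpleSchedule()
                    .withIntervalInHours(6).repeatForever())
                .build();
            scheduler.scheduleJob(job, trigger);
            scheduler.start();
        }
    }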

ElasticSearch Indexing Confluence pages

Can ElasticSearch index Confluence pages?
There are a lot of river plugins, but none for Confluence: http://www.elasticsearch.org/guide/en/elasticsearch/reference/current/modules-plugins.html
There is a GitHub project, https://github.com/obazoud/elasticsearch-river-confluence, but the last commit was a year ago, so I guess it's not up to date.
Elasticsearch has since deprecated rivers.
There is a solution built on top of Elasticsearch called Workplace Search, which can connect to Confluence for ingesting data.
Ideally, you would do it via the Confluence API, with a script that pushes the content to Elasticsearch. You might also need the "ingest-attachment" plugin if you need to parse PDF content.
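A rough sketch of that script approach in Java (the REST path, the auth header, the page id, and the index name are all placeholders; check the Confluence REST API docs and your Elasticsearch client version):

    // Sketch: pull one page from the Confluence REST API and index it into
    // Elasticsearch. URLs, credentials, and the page id are placeholders.
    import java.net.URI;
    import java.net.http.HttpClient;
    import java.net.http.HttpRequest;
    import java.net.http.HttpResponse;
    import org.apache.http.HttpHost;
    import org.elasticsearch.client.Request;
    import org.elasticsearch.client.RestClient;

    public class ConfluenceToElasticsearch {
        public static void main(String[] args) throws Exception {
            // 1. Fetch a page (storage-format body) from Confluence.
            HttpClient http = HttpClient.newHttpClient();
            HttpRequest req = HttpRequest.newBuilder()
                .uri(URI.create("https://confluence.example.com/rest/api/content/12345"
                    + "?expand=body.storage"))
                .header("Authorization", "Bearer <token>")  // placeholder auth
                .build();
            String pageJson = http.send(req, HttpResponse.BodyHandlers.ofString()).body();

            // 2. Index the raw page JSON into Elasticsearch as-is; a real
            // script would extract and clean the fields it needs first.
            try (RestClient es = RestClient.builder(
                    new HttpHost("localhost", 9200, "http")).build()) {
                Request index = new Request("PUT", "/confluence/_doc/12345");
                index.setJsonEntity(pageJson);
                es.performRequest(index);
            }
        }
    }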

Using Nutch crawler with Solr

Am I able to integrate Apache Nutch crawler with the Solr Index server?
Edit:
One of our devs came up with a solution based on these posts:
Running Nutch and Solr
Update for Running Nutch and Solr
Answer
Yes
If you're willing to upgrade to Nutch 1.0, you can use solrindex as described in this article by Lucid Imagination: http://www.lucidimagination.com/blog/2009/03/09/nutch-solr/
It's still an open issue. If you're feeling adventurous, you could try applying those patches yourself, although it looks like it's not so simple.
Nutch 2.x is designed to use Solr by default. You can follow the steps in http://wiki.apache.org/nutch/Nutch2Tutorial, or the better instructions in the book "Web Crawling and Data Mining with Apache Nutch".