I started web crawling using StormCrawler, but I do not know where the crawled results go. I'm not using Solr or Elasticsearch for indexing

StormCrawler has started crawling, but I cannot find where the data is stored. I need to save this data to a database so I can connect it to a remote server and have it indexed. StormCrawler seems to focus mainly on Solr and Elasticsearch integration. I just want to store its data in a database so I can use it with any site-search solution, such as Typesense, FlexSearch, or other site-search software.
I'm very new to web crawling, so I have only just installed the required software to run it.

SC does not focus only on Solr or ES. There is, for instance, a SQL module.
As for "where the crawled results go", it depends on what your topology does. Try to explain which bolts are running, etc.

Related

Any descriptive tutorials or clear guidance on crawling the web with Apache Solr 6.6

I read on this question page that Solr 5+ supports web crawling, which means that we no longer need Nutch. Are there any examples or descriptions explaining how to set up Solr 6.6 to crawl a set of remote websites?
They most probably meant using the DataImportHandler (DIH) with the right data source, but I doubt this can replace Nutch and similar tools in many scenarios.

Change the Solr Server URL on Drupal 7 from an external shell script

I have recently been given the task of maintaining a Drupal site; one part of that is writing a backup and import script for the dev site, so it can receive a daily dump of the live data.
I have done this; however, we then need to revert the Solr details to the dev Solr instance. So far I only know how to do this manually through the UI (e.g. going to "https://WEBSITE.co.uk/admin/config/search/apachesolr/settings", clicking "edit", changing the "Solr server URL", and clicking "Save", e.g. "https://WEBSITE/admin/config/search/apachesolr/settings/dev_environment_search_server__0_0/edit?destination=admin/config/search/apachesolr/settings").
Is there a way of changing this within a script?
Also, updating the database table manually doesn't work unless you also clear the cache. Is there a way of clearing only the Solr cache to pick up this change? (I've been asked not to clear all caches.)
Could anyone help me?
If Solr is set up using Search API, you can add the Search API Override and Search API Solr Override modules, which allow you to control the configuration easily in your settings.local.php file.

How to auto-index data using solr and nutch?

I want to automatically index a document or a website when it is fed to Apache Solr. How can we achieve this? I have seen examples using a cron job called via a PHP script, but they are not clearly explained. Using the Java API SolrJ, is there any way that we can index data automatically, without having to do it manually?
You can write a scheduler and call the SolrJ code that does the indexing/reindexing.
For writing the scheduler, please refer to the links below:
http://www.mkyong.com/java/how-to-run-a-task-periodically-in-java/
http://archive.oreilly.com/pub/a/java/archive/quartz.html
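To make that concrete, here is a minimal sketch of such a scheduler using a plain ScheduledExecutorService with SolrJ. The Solr URL, core name, and field names are assumptions, and in a real job the documents would come from your own data source rather than being hard-coded.

import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

import org.apache.solr.client.solrj.SolrClient;
import org.apache.solr.client.solrj.impl.HttpSolrClient;
import org.apache.solr.common.SolrInputDocument;

public class PeriodicIndexer {

    public static void main(String[] args) {
        // The Solr core URL is a placeholder -- point it at your own core/collection.
        SolrClient solr = new HttpSolrClient.Builder("http://localhost:8983/solr/mycore").build();

        ScheduledExecutorService scheduler = Executors.newSingleThreadScheduledExecutor();

        // Re-run the indexing job every hour; adjust the period to your needs.
        scheduler.scheduleAtFixedRate(() -> {
            try {
                // In a real job you would read new/changed documents from your source here.
                SolrInputDocument doc = new SolrInputDocument();
                doc.addField("id", "example-1");
                doc.addField("title", "Example document");
                solr.add(doc);
                solr.commit();
            } catch (Exception e) {
                e.printStackTrace();
            }
        }, 0, 1, TimeUnit.HOURS);
    }
}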
If you are using Apache Nutch, you have to use the Nutch solr-index plugin. Using this plugin, you can index web documents as soon as they are crawled by Nutch. But the main question would be how to schedule Nutch to start periodically.
As far as I know, you have to use a scheduler for this purpose. I know of an old Nutch project called Nutch-base which uses Apache Quartz for scheduling Nutch jobs. You can find the source code of Nutch-base at the following link:
https://github.com/mathieuravaux/nutchbase
If you look at this project, there is a plugin called admin-scheduling. Although it is implemented for an old version of Nutch, it could be a nice starting point for developing a scheduler plugin for Nutch.
It is also worth mentioning that if you are going to crawl websites periodically and fetch newly arrived links, you can use this tutorial. A simple scheduling sketch follows below.
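If all you need is to start a Nutch crawl periodically, a simpler alternative to a Quartz-based plugin is to invoke Nutch's crawl script from a scheduled Java task. The script path and its arguments below are assumptions and depend on your Nutch installation and version.

import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

public class NutchCrawlScheduler {

    public static void main(String[] args) {
        ScheduledExecutorService scheduler = Executors.newSingleThreadScheduledExecutor();

        // Kick off a crawl once a day. The script location, seed dir, crawl dir
        // and number of rounds are placeholders for a typical Nutch install.
        scheduler.scheduleAtFixedRate(() -> {
            try {
                Process crawl = new ProcessBuilder(
                        "/opt/nutch/bin/crawl", "urls/", "crawl/", "2")
                        .inheritIO()
                        .start();
                crawl.waitFor();
            } catch (Exception e) {
                e.printStackTrace();
            }
        }, 0, 24, TimeUnit.HOURS);
    }
}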

Viewing Apache solr logs on windows

I have a Drupal-based site with Solr integration. My localhost is on Windows and the live site is on Linux.
How do I enable and view Solr logging for both setups? I can see a log folder on my localhost, but it's empty.
Just to elaborate: Solr search works great in both setups. However, I built a Solr view that works perfectly locally but gives less accurate results on the live site, so I wanted to see the final Solr queries being built to find the source of the difference.
While starting the Solr instance, pass the following parameter to enable Solr logging to a file:
-Djava.util.logging.config.file=etc/logging.properties
Then modify /example/etc/logging.properties inside your Solr instance to customize your logging pattern.
Using Solr Version: Apache Solr 8.9.0
You could also use the Solr Administration User Interface.
Go to the Solr Admin UI and click the "Logging" link; you will then see the log entries.
Selecting the Level link on the left shows the hierarchy of classpaths and class names for your instance.

What is the SOLR plugin for Liferay meant for?

I am using Liferay 6.1 and I am trying to learn how to incorporate search functionality into Liferay Portal. I was able to run Apache Solr inside Liferay's Tomcat container, but I don't understand what the Solr plugin for Liferay is meant for.
Here is the link
Can someone please explain the benefits of using the plugin (for Liferay) and what it accomplishes on top of using Solr?
Thanks
Per this link, it is meant to externalize the search function from the portal.
Using Solr instead of Lucene gives you the additional capabilities of Solr, such as replication, sharding, result clustering through Carrot2, and the use of custom analyzers/stemmers.
It can also offload search processing to a separate cluster.
It also opens up the possibility of a search-driven UI (faceted classification, etc.) separate from your portal UI.