Host Solr after creating a new collection - Apache

I tried the example provided by the Apache Solr package.
I was trying to create a new data collection with my own schema and configuration.
How should I start running Solr in that case? When I ran the example, there was a start.jar in the example directory to start it. Will the same jar work for my case?
If not, how do I create an executable for it?

The first line on the Solr install page says: "Solr already includes a working demo server in the example directory that you may use as a template" (http://wiki.apache.org/solr/SolrInstall#Setup).
Even if the recommended server is Tomcat, I have a feeling Jetty will work just as well for you. Making the index production-ready is more about knowing your fields and query patterns really well, and about optimising the index through the schema and config for speed according to those patterns.
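For what it's worth, once your own core is in place you can start it the same way (java -jar start.jar from the example directory) and then check that it answers. Here is a minimal PHP sanity check, assuming the example Jetty's default port 8983 and a hypothetical core named "mycollection":

    <?php
    // Ping the new core over HTTP; the port and core name are assumptions,
    // adjust them to your own setup.
    $response = file_get_contents('http://localhost:8983/solr/mycollection/admin/ping?wt=json');
    $ping = json_decode($response, true);
    echo $ping['status'] === 'OK' ? "Core is up\n" : "Core is not responding\n";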

Related

Configuring Drupal 7 with Apache Solr and Apache Nutch

I have installed Drupal 7 and the Apache Solr Search module and configured them with Apache Solr (Solr version: 4.10.4). The content has been indexed from Drupal into Apache Solr, and searching also works fine. I now need to connect the Nutch web crawler (Apache Nutch version: 1.12) to Apache Solr and Drupal 7, fetch the details from a specific URL (for example http://www.w3schools.com), and search for those contents in Drupal. My problem is how to configure all three: Solr, Nutch and Drupal 7. Can anyone suggest a solution for this?
Ok... here's my ugly solution that maybe fits what you are doing.
You can use a PHP field (a custom field with Display Suite) in your node (or page) which basically reads your full page with cURL and then prints the contents right there, along the lines of the sketch below. This field should only appear in a display of your node that nobody will see (except Apache Solr).
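A rough sketch of such a PHP field, reusing the URL from the question (this is only the idea, not a drop-in snippet):

    <?php
    // Fetch the external page with cURL and print it into this display of the
    // node, so that Apache Solr indexes its text along with the node.
    $ch = curl_init('http://www.w3schools.com/');
    curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);  // return the body as a string
    curl_setopt($ch, CURLOPT_FOLLOWLOCATION, true);  // follow redirects
    $html = curl_exec($ch);
    curl_close($ch);
    // Strip the markup so only the text content ends up in the index.
    print strip_tags($html);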
Finally, in the Solr config (which honestly I don't remember well) you can choose which display of the page, or which field, gets indexed; that will be your full page.
If all this works, you don't need to integrate Nutch with Solr and Drupal.
Good luck :)
PS: If you have a doubt, just ask.
My 2 cents on this: it looks like you want to aggregate content from your Drupal site (your nodes) and external content hosted on your site but not as Drupal content, right? If this is the case then you don't need any integration between Nutch and Drupal; just index everything in the same Solr core/collection. Of course you'll need to make sure that the Solr schema is compatible (Nutch has its own metadata, different from the Drupal nodes). Alternatively, if you index into separate cores/collections you could use the shards parameter to spread your query across several cores and still get only one result set; but with this approach you'll need to keep an eye on the relevance of your results (the order of the documents) and also on which fields the Drupal Solr module uses to show the result, so in the end you'll still need to make the schemas of both cores compatible to some degree.
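To illustrate the shards idea, here is a hedged sketch of querying two cores as one result set; the host, port and core names ("drupal" and "nutch") are made up and have to match your installation:

    <?php
    // Distributed search: ask one core to fan the query out to both cores
    // via Solr's shards parameter and merge the results.
    $shards = 'localhost:8983/solr/drupal,localhost:8983/solr/nutch';
    $url = 'http://localhost:8983/solr/drupal/select'
         . '?q=' . urlencode('some search terms')
         . '&shards=' . urlencode($shards)
         . '&wt=json';
    $results = json_decode(file_get_contents($url), true);
    echo $results['response']['numFound'] . " documents found across both cores\n";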

Deploy TYPO3 database changes

I wonder if there is a good way of deploying database changes made on a TYPO3 website (on dev) to a live website?
In Magento, for example, there are folders containing SQL install scripts (for the structure, new tables etc.) and data install scripts (inserting data into the tables).
These scripts are automatically executed when deployed to live.
Good ways of getting rid of manual database adaptions are welcome.
Thanks!
cweiske explained it well; for a common admin it's enough to know that the Install Tool has a Database Analyser > Compare functionality, which is dedicated to handling DB schema differences.
TYPO3 extensions have their ext_tables.sql files which define the database structure they need.
When installing the extension, the necessary database structure changes are made by the TYPO3 extension manager. You can also apply the changes yourself by using the install tool -> database update.
So as long as your extensions have the correct table definitions, you're fine and can rely on TYPO3 to update the actual database.
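For example, a (made-up) extension could ship an ext_tables.sql like the one below; TYPO3 parses the definition and the extension manager or install tool creates or updates the table accordingly:

    -- ext_tables.sql of a hypothetical extension; table and field names are examples.
    CREATE TABLE tx_myext_domain_model_item (
        uid int(11) NOT NULL auto_increment,
        pid int(11) DEFAULT '0' NOT NULL,
        title varchar(255) DEFAULT '' NOT NULL,
        PRIMARY KEY (uid),
        KEY parent (pid)
    );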

Use SQL to delete old MediaWiki revisions without shell access?

Does anyone know a SQL query that will purge a MediaWiki database of old revisions? My database has grown out of control, and I need to prune it to make it possible to download and manage.
I don't have shell access, so I need to do this with a SQL query.
I have tried the solution suggested here, but it doesn't work: http://www.mediawiki.org/wiki/Extension_talk:SpecialDeleteOldRevisions2#Deleting_only_archived_revisions
Thanks for reading :)
Nicholas
Like you, I don't have shell access to my MediaWiki, so I can't do a lot of things like maintenance.
Here is my solution: host your MediaWiki web site on your own computer just to run your maintenance tasks.
Back up your database
Back up your MediaWiki folder
Set up Apache (the web server) on your computer
Set up MySQL on your computer
Restore your MediaWiki database on your computer
Put your MediaWiki folder in the Apache root folder
Finally, run the maintenance task you want using the shell. I suggest the deleteOldRevisions script (see the command below)
After that, back up the folder and the database again and restore them on the remote host.
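For reference, the script lives in the wiki's maintenance folder and, if I recall correctly, is run from the wiki root roughly like this: php maintenance/deleteOldRevisions.php --delete (without the --delete flag it should only report the old revisions instead of deleting them).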
Use the Maintenance extension and run the relevant maintenance scripts with it. Direct database manipulation is pure madness, and using a local LAMP install as suggested by the other answer is quite cumbersome.
Shell access is really required to properly run a MediaWiki, but this is a common problem, so please report your experience with the extension on its talk page, or file a bug if you find any.

Disable custom index on CMS. Will CD indexing be affected?

I set up a custom Lucene index for several template types in Sitecore, in an environment with 1 CA and 3 CD servers. This works fine on the CD servers, but it seems to be overloading the CMS server. If I comment out this index on the CMS, will it in any way affect the indexing on the CD servers?
It shouldn't affect the CD servers unless you have some kind of file or configuration replication between the two that might move the index files or the Sitecore config.
On a side note: the Lucene indexing part should have very little impact on any server it's running on (unless you maybe have custom indexers), so I'm a little confused why it would overload the CMS server.

Symfony 1.4: How to generate .sql from fixture files

I have my Symfony project on my dev box and also in a hosted environment where I have only FTP and HTTP access (I have phpMyAdmin access to my prod DB). This hosted version is an alpha release that should be initialized with fixtures.
The problem is: to do that, I have to write/update .yml fixtures locally, insert them into the local DB with the Symfony task, go to my local PMA, generate a data export, then go to the prod PMA and import the data...
Is there any way to generate a .sql file from my local fixtures so that I can insert them directly through my prod PMA?
Thanks
The easiest is probably for you to look at this website:
http://brentertainment.com/2010/02/15/run-a-symfony-task-from-your-model-or-action/
You can run tasks from the web; you might make it a bit smarter and only allow the specific tasks you need to be run that way. Clearing the cache might also be a useful thing to add to your admin interface, for example.
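The gist of that article, as a hedged sketch (the task class and its constructor are symfony 1.4's; wiring it into a specific action is up to your project):

    <?php
    // Inside a symfony action: run the doctrine:data-load task programmatically.
    // Tasks expect to be run from the project root, hence the chdir().
    chdir(sfConfig::get('sf_root_dir'));
    $task = new sfDoctrineDataLoadTask($this->dispatcher, new sfFormatter());
    $task->run(array(), array());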
I think you can't do it "out of the box" in symfony.
BUT you may be able to use the Doctrine Profiler to log all the SQL queries executed by the sfDoctrineDataLoadTask task class into a file.
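A hedged sketch of that profiler idea with Doctrine 1.x (the output path is made up; note the queries are logged as prepared statements, without their bound parameters):

    <?php
    // Attach a profiler to the connection before loading the fixtures,
    // then dump every executed query into a file you can import via phpMyAdmin.
    $profiler = new Doctrine_Connection_Profiler();
    $conn = Doctrine_Manager::connection();
    $conn->setListener($profiler);

    Doctrine_Core::loadData(sfConfig::get('sf_data_dir') . '/fixtures');

    $sql = '';
    foreach ($profiler as $event) {
        if ($event->getName() == 'execute' || $event->getName() == 'exec') {
            $sql .= $event->getQuery() . ";\n";
        }
    }
    file_put_contents('/tmp/fixtures.sql', $sql);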
Are you using the symfony doctrine:build --all command? When I do that, a schema.sql file is created within the data/sql directory.