Sitecore Lucene: content delivery server index not updating on publish - lucene

I created a custom search page using the default sitecore_web_index and everything seemed to work until I migrated to my test environment, which has separate content management (CM) and content delivery (CD) servers. The index on the CD server is not getting updated on publish (the CM server's is); if I rebuild the index from the Control Panel, I do see the updates, so I believe the index and the search page themselves are working correctly.
The index is using the onPublishEndAsync strategy. The Sitecore Search and Index Guide (http://sdn.sitecore.net/upload/sitecore7/70/sitecore_search_and_indexing_guide_sc70-usletter.pdf) section 4.4.2 states:
This strategy does exactly what the name implies. During the initialization, it subscribes to the OnPublishEnd event and triggers an incremental index rebuild. With separate CM and CD servers, this event will be triggered via the EventQueue object, meaning that the EventQueue object needs to be enabled for this strategy to work in such environment.
My web.config has <setting name="EnableEventQueues" value="true"/>
Also from the Search and Index Guide:
Processing
The strategy will use the EventQueue object from the database it was initialized with:
<param desc="database">web</param>
This means that there are multiple criteria towards successful execution for this strategy:
This database must be specified in the <databases /> section of the configuration file.
The EnableEventQueues setting must be set to true.
The EventQueue table within the preconfigured database should have entries dated later than index's last update timestamp.
I'm not sure about the <param desc="database">web</param> setting, because the publishing target (and database ID) for the CD server is pub1. I tried changing web to pub1, but then neither server's index was updated on publish (so I changed it back to web).
The system was recently upgraded from Sitecore 6.5 to 7.2, so there are a couple of indexes still using the Sitecore.Search API, and those indexes are updated on publish.
Is the database param on the EventQueue wrong considering the multiple publishing targets? Is there something else I'm missing, or perhaps a working example of a CM -> CD environment I could compare to?
TIA
EDIT:
If I didn't have a co-worker sitting next to me both Friday and today who can confirm it, I would think I was going crazy. But now the CD server is getting updates to the index and the CM server is not. What would make the CM server stop getting updates now?

I ran into this same issue last night and have a more predictable resolution than creating a new IIS site:
The fix was to set a distinct InstanceName in ScalabilitySettings.config for each CD server, instead of relying on the auto-generated name.
Setting this value immediately resolved the issue and restored index updates on Publish End Remote events.
Note: If you already have an InstanceName defined in your config, then you need to change it for this to work. I just increment the InstanceName with the date to force the change.
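For illustration, a minimal include patch that pins the instance name could look like the following; the file name and the example value are assumptions, and you can just as well edit the value directly in ScalabilitySettings.config:
<!-- e.g. \App_Config\Include\zInstanceName.config (hypothetical file name) -->
<configuration xmlns:patch="http://www.sitecore.net/xmlconfig/">
  <sitecore>
    <settings>
      <!-- Use a value unique to this CD server; a date suffix is one way to force a change -->
      <setting name="InstanceName">
        <patch:attribute name="value">CD1-20150501</patch:attribute>
      </setting>
    </settings>
  </sitecore>
</configuration>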
This is effectively fixing the same issue in the same way as the original poster did by changing to a new IIS site, as the OP's fix would have modified the auto-generated Instance Name based on the new IIS site name.
I believe the core problem for the OP (and in my case as well) is the EventQueue databases going out of sync with the CD instances, so that none of the servers can determine that an event has been generated or what content needs to be updated in the index. By changing the instance name (using either method) the servers appear to be new instances and start their EventQueue tracking from scratch.
Every time I've seen issues like this in the past, it's been related to major manipulations of the Sitecore databases, such as restorations, backup/restore to a new DB name, or rollbacks of databases due to deployment problems. I believe something in those operations causes the EventQueues to get out of sync, and the servers stop responding to the expected events.

I had this issue and it drove me nuts for a few months. I figured out that the answer lay in the rebuild strategy of the Lucene index. When the CM and CD are in separate IIS instances, the only way for Lucene to know to rebuild itself is to watch the EventQueue table and recognize that a change happened to an item that is either the root, or a child of the root, that you specify in the crawler node. The strategy you'll need to specify as the rebuild strategy to guarantee this behavior is below:
<strategies hint="list:AddStrategy">
<strategy ref="contentSearch/indexUpdateStrategies/remoteRebuild" />
</strategies>
If you use any other rebuild strategy with a remote instance of a content delivery server, the index will only be rebuilt in the CM instance's file system.

In case anyone runs into this in the future, the solution that worked for me, was creating a new site in IIS manager.
I submitted a ticket to Sitecore support, but after a week of not getting a response, I attempted to recreate my dev environment on my test server. I copied my local/dev files to the test CM server, created a new site and AppPool in IIS, pointed to the newly copied files, and updated the connectionstrings.config to point to the test environment database. This worked (publishing updated the CM web index).
Trying to point the existing IIS site to my new files and use the new AppPool did not help: publishing from that site still would not update the CM web index.
I then pointed my new site to the pre-existing files and pre-existing AppPool, and it still worked. I disabled the pre-existing IIS site, edited the bindings on the new site to match the pre-existing one, and everything worked as it should.
I don't know what was "wrong" with the pre-existing site (I inherited the system, so I don't know how it was created), but comparing the bindings, basic settings, and advanced settings, they were a perfect match to the functional new IIS site. I wish I had the real "cause" of the issue to share, but at least I found a solution that worked for me.
Thanks to all for the responses.
[EDIT] While this solution did work for me, please use Laver's answer as the correct solution for this issue.

Laver's fix did work for us, but since our InstanceName is generated by our build process, I did not want to have to change it. I did some more digging and found that the root cause of the issue was data stored in the core database's Properties table.
You can see the full documentation in this Sitecore Stack Exchange Q&A, but the solution is reproduced below.
The solution requires an AppPool recycle to take effect:
Execute the following SQL statement against the core database
DELETE FROM [Properties] WHERE [Key] LIKE '%_LAST_UPDATED_TIMESTAMP%'
Recycle the CD's AppPool
After this, you will want to rebuild the indexes on the CD server so that they pick up any changes that were missed while indexing was broken.

It seems like you are on the right track so far. I believe what is tripping you up is the publishing target. From what I understand, you are using pub1 as your content delivery (CD) database. It is a best practice to have a separate index defined for each database, so you really should configure your CD server to point to a sitecore_pub1_index rather than the sitecore_web_index.
Your CM and CD servers should both have the pub1 database configured. It is a best practice not to edit the web.config directly if possible and to use include config patches instead. The example below shows a patched config that would go in your \App_Config\Include directory:
<configuration xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xmlns:xsd="http://www.w3.org/2001/XMLSchema" xmlns:patch="http://www.sitecore.net/xmlconfig/">
<sitecore>
<databases>
<database id="pub1" singleInstance="true" type="Sitecore.Data.Database, Sitecore.Kernel">
<param desc="name">$(id)</param>
<icon>Network/16x16/earth.png</icon>
<securityEnabled>true</securityEnabled>
<dataProviders hint="list:AddDataProvider">
<dataProvider ref="dataProviders/main" param1="$(id)">
<disableGroup>publishing</disableGroup>
<prefetch hint="raw:AddPrefetch">
<sc.include file="/App_Config/Prefetch/Common.config"/>
<sc.include file="/App_Config/Prefetch/Webdb.config"/>
</prefetch>
</dataProvider>
</dataProviders>
<proxiesEnabled>false</proxiesEnabled>
<proxyDataProvider ref="proxyDataProviders/main" param1="$(id)"/>
<archives hint="raw:AddArchive">
<archive name="archive"/>
<archive name="recyclebin"/>
</archives>
<cacheSizes hint="setting">
<data>20MB</data>
<items>10MB</items>
<paths>500KB</paths>
<itempaths>10MB</itempaths>
<standardValues>500KB</standardValues>
</cacheSizes>
</database>
</databases>
</sitecore>
</configuration>
You will then want to configure a pub1 search index on both your CM and CD servers. Assuming you are using Lucene, that patch config would look like this:
<configuration xmlns:patch="http://www.sitecore.net/xmlconfig/">
<sitecore>
<contentSearch>
<configuration type="Sitecore.ContentSearch.ContentSearchConfiguration, Sitecore.ContentSearch">
<indexes hint="list:AddIndex">
<index id="sitecore_pub1_index" type="Sitecore.ContentSearch.LuceneProvider.LuceneIndex, Sitecore.ContentSearch.LuceneProvider">
<param desc="name">$(id)</param>
<param desc="folder">$(id)</param>
<!-- This initializes index property store. Id has to be set to the index id -->
<param desc="propertyStore" ref="contentSearch/databasePropertyStore" param1="$(id)" />
<configuration ref="contentSearch/indexConfigurations/defaultLuceneIndexConfiguration" />
<strategies hint="list:AddStrategy">
<!-- NOTE: the order of these controls the execution order -->
<strategy ref="contentSearch/indexUpdateStrategies/onPublishEndAsync" />
</strategies>
<commitPolicyExecutor type="Sitecore.ContentSearch.CommitPolicyExecutor, Sitecore.ContentSearch">
<policies hint="list:AddCommitPolicy">
<policy type="Sitecore.ContentSearch.TimeIntervalCommitPolicy, Sitecore.ContentSearch" />
</policies>
</commitPolicyExecutor>
<locations hint="list:AddCrawler">
<crawler type="Sitecore.ContentSearch.SitecoreItemCrawler, Sitecore.ContentSearch">
<Database>pub1</Database>
<Root>/sitecore</Root>
</crawler>
</locations>
</index>
</indexes>
</configuration>
</contentSearch>
</sitecore>
</configuration>
You now have a pub1 database and search index set up. You should already have pub1 set up as a remote publishing target in Sitecore, and you also stated you have the EnableEventQueues setting set to true on both CM and CD servers.
This is all you should need. The onPublishEndAsync strategy will keep an eye on the EventQueue table in your pub1 database. When you publish to your pub1 publishing target, you should see entries in your CD server's Sitecore log*.txt file similar to this:
ManagedPoolThread #7 23:21:00 INFO Job started: Index_Update_IndexName=sitecore_pub1_index
ManagedPoolThread #7 23:21:00 INFO Job ended: Index_Update_IndexName=sitecore_pub1_index (units processed: )
Note: units processed never seems to be accurately reported and is typically blank. I assume this is a Sitecore bug but have never dug into it enough to determine why it is not displayed in the logs correctly. You can use Luke (again, if you are using Lucene) to verify the index has updated as expected.

Check your publish:end:remote event and see if there are any handlers there. If so, try removing all of them to make sure they are not causing any errors.
I had a similar issue when migrating from Sitecore 6 to 7. The EventArgs type for the remote publish is different in Sitecore 7; the new type is PublishEndRemoteEventArgs.
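If you need to rule out a custom handler without deleting code, a config patch along these lines can remove it temporarily; the handler type shown here is purely hypothetical:
<configuration xmlns:patch="http://www.sitecore.net/xmlconfig/">
  <sitecore>
    <events>
      <event name="publish:end:remote">
        <!-- Hypothetical custom handler, removed while troubleshooting -->
        <handler type="MyProject.Publishing.LegacyPublishHandler, MyProject" method="OnPublishEndRemote">
          <patch:delete />
        </handler>
      </event>
    </events>
  </sitecore>
</configuration>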

Here is the solution we used in our application. We have set up web and pub databases, and created an additional publishing strategy (defined alongside the others under contentSearch/indexUpdateStrategies) pointing to the pub database:
<onPublishEndAsyncPub type="Sitecore.ContentSearch.Maintenance.Strategies.OnPublishEndAsynchronousStrategy, Sitecore.ContentSearch">
  <param desc="database">pub</param>
  <!-- whether full index rebuild should be triggered if the number of items in Event Queue exceeds Config.FullRebuildItemCountThreshold -->
  <CheckForThreshold>true</CheckForThreshold>
</onPublishEndAsyncPub>
In the index section, set the newly created strategy on the pub index:
<index id="sitecore_pub_index" type="Sitecore.ContentSearch.SolrProvider.SolrSearchIndex, Sitecore.ContentSearch.SolrProvider">
<param desc="name">$(id)</param>
<param desc="core">itembuckets</param>
<param desc="propertyStore" ref="contentSearch/databasePropertyStore" param1="$(id)" />
<strategies hint="list:AddStrategy">
<strategy ref="contentSearch/indexUpdateStrategies/onPublishEndAsyncPub" />
<!--<strategy ref="contentSearch/indexUpdateStrategies/remoteRebuild" />-->
</strategies>
<locations hint="list:AddCrawler">
<crawler type="Sitecore.ContentSearch.SitecoreItemCrawler, Sitecore.ContentSearch">
<Database>pub</Database>
<Root>/sitecore</Root>
</crawler>
</locations>
</index>

If you are using the Sitecore scalability settings, please make sure they are correct.
The reason why indexing is not being triggered on your CD servers is most likely your event queue. One quick check you can perform is to see whether there are events in the EventQueue table of the core database that say publishing has completed.
Also check Sitecore.ContentSearch.config, since that is where the index update strategy that triggers the rebuild when publishing ends is configured.
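For reference, the piece of Sitecore.ContentSearch.config that reacts to publish end is the onPublishEndAsync strategy definition; a typical shape (abridged, and exact values may differ per version) looks like this:
<onPublishEndAsync type="Sitecore.ContentSearch.Maintenance.Strategies.OnPublishEndAsynchronousStrategy, Sitecore.ContentSearch">
  <param desc="database">web</param>
  <!-- Trigger a full rebuild if the EventQueue backlog exceeds the configured threshold -->
  <CheckForThreshold>true</CheckForThreshold>
</onPublishEndAsync>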
Thanks

Related

NServiceBus: disable default logger in web.config

I'm using the DefaultFactory LogManager for NServiceBus v5. I'm happy with this, but I was hoping to be able to disable it via the web.config.
I use web.config settings, as found in the help docs
<configSections>
<section name="Logging" type="NServiceBus.Config.Logging, NServiceBus.Core" />
</configSections>
<Logging Threshold="Debug" />
I'd prefer not to set the threshold to Fatal. I was hoping for a "None" or a Disabled="true" option.
Also, can the directory path be set in web.config?
Update: Why would we want to ignore errors?
The short answer is that we don't really have write permission on the servers.
The long answer is that this isn't 100% true.
Our systems is moving towards microservices, the problem with this is that decentralized logging is a tracing/visualization nightmare.
So we moved flow tracing, exceptions, and limited tracing to a centralized system.
Programming entry points (message handlers, Web API endpoints, etc.) are nearly always wrapped in a try/catch/log/throw on each handler, which covers all our programming errors. This isn't really anything different from normal.
The centralized logging location sets off all the nice red flashing real-time alarms one could wish for.
That leaves only configuration-type errors, like missing queues, bad assembly bindings, or faulty config files, and more runtime-style issues like IoC wiring (outside of the handler code).
With the centralized logging and monitoring of the error queues, it is fairly easy to detect when the service is broken; if it is, we turn on logging, restart, retry the faulty operation, and fix it.
Guaranteed delivery will take care of everything else once it is up again :D Gone are the days of 150 MB log files spread across 10 different servers.
The simplicity of DefaultFactory was nice, as was not needing another nuget package and associated configuration.
Is this the correct way forward? Many would argue no.
Could we have done it better? Yes, we could implement the common logger interface and pass it into NServiceBus, but we aren't quite there just yet and the win isn't critical at the moment.
A side note: one really nice thing about the way we log is that in our back-office tool we have been able to simply show the flow for each "order", similar to using a correlation id in Graylog.
Since this was not considered a likely scenario, it does not have a first-class API. But you can achieve it by passing in a null logger from any of the common logging libraries (NLog, log4net, Common.Logging). I assume you are using one of these in your website.
So take NLog for example.
Install-Package NServiceBus.NLog
Then in your web.config:
<appSettings>
<add key="disableLogging" value="true"/>
</appSettings>
Then in your global startup
// Requires the NLog and NServiceBus.NLog packages
// (e.g. using System.Configuration; using NLog; using NLog.Config;)
if (ConfigurationManager.AppSettings.Get("disableLogging") == "true")
{
    // An empty NLog configuration means nothing is actually written anywhere
    LoggingConfiguration config = new LoggingConfiguration();
    LogManager.Configuration = config;
    // Route NServiceBus logging through NLog, which is now effectively a no-op
    NServiceBus.Logging.LogManager.Use<NLogFactory>();
}
This is leveraging the approach documented here http://docs.particular.net/nservicebus/logging-in-nservicebus#nlog

Updating a package with a Windows Service resets service's account and password

I'm working on an MSI installer with WiX. I'm trying to keep this as simple to develop as possible: this is an internal product, and my users are our IT personnel.
The product includes a Windows Service that must be configured to run under a different account for each machine.
The workflow I was planning for my users (for first-time install) is as follows:
Run the installer
(The installer sets up the service under a default account)
Stop the service via sc or Local Services applet
Update the service properties to run under the correct machine-specific account.
(The account is different for each machine, and only the IT personnel has access to the passwords.)
Restart the service
Subsequent updates would consist of installing from updated MSI files.
Testing a "small" update, I was surprised to find that the installer reset the service back to running under the default account. This is a major problem for me because it makes it really hard for my users to update their servers. They would have to re-enter the account information on each machine every time there is an update. I expected that would happen with a "major" update, but not on a "small" one.
Is there a way to configure the installer so that it does not change the existing account/password configuration for a service during a "small" or a "minor" update?
Will this happen during a "repair" as well (I haven't tried that)?
Here's what my component looks like in the .wxs file:
<Component Id="cmpService" Guid="{MYGUIDHERE}">
<File Id="filService" KeyPath="yes" Name="ServiceApp.exe" />
<ServiceInstall Id="ServiceInstall" Name="ServiceApp" DisplayName="My Service"
Type="ownProcess" Start="auto" ErrorControl="normal"
Account="LocalSystem">
<util:PermissionEx ... attributes here... />
</ServiceInstall>
<ServiceControl Id="StartService" Start="install" Stop="both" Remove="uninstall"
Name="ServiceApp" Wait="yes" />
</Component>
I had expected that Remove="uninstall" would preserve the service in place if there were no changes to it. Apparently not. (I'm not too worried if this happens on "major" updates).
I also noticed that the ServiceConfig element has attributes (OnReinstall) that seem to fit the bill, but based on candle error messages, it's pretty clear that OnReinstall is intended to affect only the configuration members of the element (PreShutdownDelay, etc.) rather than the service installation as a whole.
I've looked into these:
Let the user specify in which account a service runs
WiX MajorUpgrade of Windows Service, preserving .config, and avoiding a reboot
How to only stop and not uninstall windows services when major upgrade in wix?
Curiously, this answer suggests that this is an issue only for "major" upgrades. That wasn't my experience. Was my experience a fluke?
How do I create a custom dialog in WiX for user input?
It would have been OK to prompt for an account and password during installation, but storing the password in the registry or elsewhere is not really an option in this case, and having to re-enter the credentials on every update is just as disruptive as having to reconfigure the service by hand.
I had a consultation phone-call with FireGiant today about this exact issue and we came to a solution.
Backstory:
Our application's install MSI installs a Windows Service using LocalService initially; however, our actual desktop software changes this to NetworkService, or even a custom user account, as may be necessary in certain network environments.
Our <Component> <ServiceInstall> element had Account="NT AUTHORITY\LocalService" and looked like this:
<Component Id="Comp_File_OurServiceExe" Guid="*">
<File Source="$(var.TargetDir)OurService.exe" id="File_OurServiceExe" KeyPath="yes" />
<ServiceInstall
Id = "ServiceInstall_OurServiceExe"
Vital = "yes"
Name = "RussianSpyingService"
DisplayName = "Russian Spying Service"
Description = "Crawls your network for incriminating files to send to the FSB"
Account = "NT AUTHORITY\LocalService"
Type = "ownProcess"
Arguments = "-mode service"
Interactive = "no"
Start = "auto"
ErrorControl = "normal"
>
<ServiceConfig DelayedAutoStart="yes" OnInstall="yes" OnUninstall="no" OnReinstall="yes" />
<util:ServiceConfig FirstFailureActionType="restart" SecondFailureActionType="restart" ThirdFailureActionType="none" ResetPeriodInDays="1" />
</ServiceInstall>
</Component>
When these repro steps are followed, the service registration/configuration is unintentionally reset:
Complete an install using the MSI version 1.0.0
Open Services.msc and change the RussianSpyingService to use NT AUTHORITY\NetworkService (instead of NT AUTHORITY\LocalService)
Create a new MSI using the same *.wxs files but with higher file versions, and give it a higher product version, e.g. 1.0.1 (don't forget MSI only uses the first three components of a version number and ignores the fourth)
After that install has finished, observe that the RussianSpyingService has been reset to use NT AUTHORITY\LocalService.
As an aside, I asked FireGiant (their consultants previously worked at Microsoft and helped other teams there use MSI) how other software, like SQL Server, manages to use MSI to install Windows services that keep working despite configuration changes between upgrade installs. They told me that products like SQL Server often use custom actions for Windows service configuration, and despite the general advice to avoid custom actions, it's acceptable because the SQL Server team at Microsoft is big enough to devote engineering and test resources to making sure they work.
Solution
In short: "Use MSI properties!"
Specifically, define an MSI property that represents the Account attribute value and load that value from the registry during MSI startup and if the value is not present, use a default value of NT AUTHORITY\LocalService.
Ideally the property value would be stored in the application's own registry key and it is the application's responsibility to ensure that value matches the current service configuration.
This can be done by creating a new registry key in HKLM that lets LocalService or NetworkService (or whatever the service account is) write to it, so when the service starts-up it records its user-account's name there - but this is complex.
Do not use HKCU to store the value because that won't work: HKCU resolves to completely different registry hives (that might not even be loaded or accessible) for different users.
The other option is technically not supported by Microsoft because it reads the raw ObjectName (account name) value from the Windows registry's own service registration key - which happens to be in the same format used by the Account="" attribute. It's also the most pragmatic option, and it's what is described below:
Here's what worked for us:
Within your <Wix> ... <Product>... element, add this <Property> declaration and <RegistrySearch /> element:
<?xml version="1.0" encoding="UTF-8"?>
<Wix
xmlns = "http://schemas.microsoft.com/wix/2006/wi"
xmlns:netfx = "http://schemas.microsoft.com/wix/NetFxExtension"
xmlns:util = "http://schemas.microsoft.com/wix/UtilExtension"
>
<Product
Id="*"
UpgradeCode="{your_const_GUID}"
otherAttributes="goHere"
>
<!-- [...] -->
<Property Id="SERVICE_ACCOUNT_NAME" Value="NT AUTHORITY\LocalService">
<!-- Properties used in <RegistrySearch /> must be public (ALL_UPPERCASE), not private (AT_LEAST_1_lowercase_CHARACTER) -->
<RegistrySearch Id="DetermineExistingServiceAccountName" Type="raw" Root="HKLM" Key="SYSTEM\CurrentControlSet\Services\RussianSpyingService" Name="ObjectName" />
</Property>
<!-- [...] -->
</Product>
</Wix>
Update your <ServiceInstall> element to use the new SERVICE_ACCOUNT_NAME MSI property for Account="" instead of the previously hardcoded NT AUTHORITY\LocalService:
<ServiceInstall
Id = "ServiceInstall_OurServiceExe"
Vital = "yes"
Name = "RussianSpyingService"
DisplayName = "Russian Spying Service"
Description = "Crawls your network for incriminating files to send to the FSB"
Account = "[SERVICE_ACCOUNT_NAME]"
Type = "ownProcess"
Arguments = "-mode service"
Interactive = "no"
Start = "auto"
ErrorControl = "normal"
>
<ServiceConfig DelayedAutoStart="yes" OnInstall="yes" OnUninstall="no" OnReinstall="yes" />
<util:ServiceConfig FirstFailureActionType="restart" SecondFailureActionType="restart" ThirdFailureActionType="none" ResetPeriodInDays="1" />
</ServiceInstall>
Build and run your installer, perform the upgrade scenario, and you'll see that any customized service account user name is preserved between upgrade installs.
You can generalize this approach for other properties too.
Disclaimer:
Microsoft does not officially endorse user-land programs directly fiddling with the HKLM\SYSTEM\CurrentControlSet\Services\ registry key. All operations on Windows Services are meant to go through the documented and supported Win32 Service Control Manager API: https://learn.microsoft.com/en-us/windows/desktop/services/service-control-manager
This means that Microsoft could at their discretion, change Windows Service configuration so it no-longer uses the HKLM\SYSTEM\CurrentControlSet\Services\ key.
(This would presumably break lots of third-party software, if Microsoft were to do this they would probably add some kind of virtualization or re-mapping system to it like they do with SysWow6432Node).
I only tested it with LocalService and NetworkService. I didn't check what happens if you modify the service configuration to use a custom user account post-install before running an upgrade, but I expect the configuration will also be preserved in that case, since this is simply a string comparison on the ObjectName value from the SCM and it has no access to passwords.
What finally ended up working for me was
<DeleteServices><![CDATA[REMOVE ~= "ALL" AND (NOT UPGRADINGPRODUCTCODE)]]> </DeleteServices>
<InstallServices><![CDATA[NOT Installed]]> </InstallServices>
I arrived at this answer through a series of trial and error attempts and a combination of a few other threads with similar answers.
One of the possible reasons why using only one of these conditions doesn't work is that WiX also removes the service upon re-install. We only want to install the service once, during the initial install, and we also want to make sure that the service is removed upon uninstall. This is the only combination of conditions that worked for me, allowing the service to keep its settings and user account.
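For anyone wiring this up, these two conditioned standard actions go inside the <InstallExecuteSequence> element of the product; a minimal sketch (placement per the standard WiX v3 schema, other attributes elided) looks like this:
<Product Id="*" UpgradeCode="{your_const_GUID}" otherAttributes="goHere">
  <!-- package, media, features, components, etc. -->
  <InstallExecuteSequence>
    <!-- Remove the service only on a real uninstall, not while upgrading -->
    <DeleteServices><![CDATA[REMOVE ~= "ALL" AND (NOT UPGRADINGPRODUCTCODE)]]></DeleteServices>
    <!-- Register the service only on a first-time install -->
    <InstallServices><![CDATA[NOT Installed]]></InstallServices>
  </InstallExecuteSequence>
</Product>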

RavenDB 2 returns 401 when trying to create database

This is a fresh install of Raven build #2230, running on IIS 8/Windows 8. When the Studio starts it offers to create a new database, then the browser pops up a credentials window (401).
Web.config has <add key="Raven/AnonymousAccess" value="All"/> set. I also tried <add key="Raven/AnonymousUserAccessMode" value="All"/> as per the documentation.
Anonymous Authentication on site is enabled, so is Windows Authentication.
Added Raven.Bundles.Authorization.dll to plugins folder (not sure if needed, but didn't make any difference).
Am I missing something ?
RavenDB, as of today, is on version 2750 (stable). Upgrade, and this issue should be fixed.
The way to do this is to set the AnonymousAccess setting in web.config to Admin:
<add key="Raven/AnonymousAccess" value="Admin"/>
You should change this back to All once you have created your database.

Blackboard with Struts

I am a beginner here, and this is my first time creating a building block for Blackboard. I understand that I can use Struts in a building block, so I used Struts 1.3 to develop the building block for Blackboard version 9.
I am confused when using DispatchAction: Blackboard doesn't seem to be able to find my forwarded page, and I keep ending up with the error "The specified resource was not found, or you do not have permission to access it".
Link in my JSP:
This is a test
struts-config.xml setup:
<action path="/teststruts" type="com.test.action.TestAction" parameter="execute" scope="request" validate="false">
<forward name="success" path="./thistest.jsp" />
<forward name="error" path="./index.jsp" />
My dispatch action simply calls mapping.findForward to one of those paths.
Really scratching my head here.
Fix the relative path by removing the ./ from the front of your link.
Also verify that your Blackboard building block is starting up correctly by looking at the blackboard/logs/tomcat/stdout-stderr log after you "disable and enable" the code from the Building Blocks management page. Also verify that your servlet contains an error.jsp, as sometimes the 404 error comes from Struts forwarding the error on to an error page that does not exist.
Try to use
<permission name="suppressAccessChecks" type="java.lang.reflect.ReflectPermission" />
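This permission would go in the building block's bb-manifest.xml, inside its permissions section; the placement below is my assumption of the usual manifest layout:
<!-- In WEB-INF/bb-manifest.xml (layout assumed) -->
<permissions>
  <permission type="java.lang.reflect.ReflectPermission" name="suppressAccessChecks" />
</permissions>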
But I don't think it is a good idea to use Struts or another framework to develop a Blackboard building block. It can generate conflicts with the libraries used by Blackboard, if not with the current version, then when upgrading the Blackboard version.
One approach we took when trying to do complex modules in Blackboard is to create a full web app for Blackboard's Tomcat instead of a building block. Using this technique it is possible to use whatever you want, since it is an independent application, but at the same time you can communicate with Tomcat through the context. You have to add the application to the server.xml and add some permissions in catalina.policy to do so... but it may be a little bit tricky.

Failure to obtain lock using Lucene and Sitecore

I'm trying to implement Lucene search in Sitecore. Using the default Sitecore.Search implementation, I should be able to get a reference to the index defined in my config file and call index.Rebuild.
I tried using the RebuildDatabaseCrawlers script from the AdvancedDatabaseCrawler, but every time I call Rebuild, it fails.
The error I receive is:
Lock obtain timed out: SimpleFSLock#C:\sites\MySite\Data\indexes\__mysite\write.lock
I've tried changing permissions (including giving Everyone full perms), restarting databases and IIS, all to no avail. I've also tried stripping my search configuration section down to the bare minimum, with the same result.
Unfortunately I don't have any visibility into what the index.Rebuild() method does, as it's inside the Sitecore.Search assembly.
The issue ended up being related to configuration.
Specifically, when removing all superfluous Sitecore.Data.Indexing references from the configuration files (after determining that I didn't need both Sitecore.Search and Sitecore.Data.Indexing), I had commented out the following line:
<configuration>
<appSettings>
<add key="Lucene.Net.FSDirectory.class" value="Sitecore.Data.Indexing.FSDirectory, Sitecore.Kernel"/>
</appSettings>
</configuration>
That needs to be there.
Try adjusting the permissions on c:\Temp for your app pool user, e.g. Network Service.
You can also try doing the same for c:\windows\microsoft.net\framework\{version}\Temporary ASP.NET Files.