I'm using the DefaultFactory LogManager for NServiceBus v5. I'm happy with this, but I was hoping to be able to disable it via the web.config.
I use the web.config settings found in the help docs:
<configSections>
  <section name="Logging" type="NServiceBus.Config.Logging, NServiceBus.Core" />
</configSections>
<Logging Threshold="Debug" />
I'd prefer not to set the threshold to Fatal. I was hoping for a "None" level or a Disabled="true" attribute.
Also, can the directory path be set in the web.config?
Update: Why would we want to ignore errors?
The short answer is that we don't really have write permission on the servers.
The long answer is that this isn't 100% true.
Our system is moving towards microservices, and the problem with this is that decentralized logging is a tracing/visualization nightmare.
So we moved flow tracing, exceptions, and limited tracing to a centralized system.
Programming entry points (i.e. message handlers, Web API endpoints, etc.) are nearly always wrapped in a try/catch/log/throw in each handler, which covers all our programming errors. This isn't really anything different from normal.
The centralized logging location sets off all the nice red flashing real-time alarms one could wish for.
That leaves only configuration-type errors, like missing queues, bad assembly bindings, faulty config files, or more runtime-style issues like IoC wiring (outside of the handler code).
With the centralized logging and monitoring of the error queues, it is fairly easy to detect when the service is broken; when it is, we turn on logging, restart, retry the faulty operation, and fix the problem.
Guaranteed delivery will take care of everything else once it is up again :D Gone are the days of 150 MB log files spread across 10 different servers.
The simplicity of DefaultFactory was nice, as was not needing another nuget package and associated configuration.
Is this the correct way forward? Many would argue no.
Could we have done it better? Yes: we could implement the common logger interface and pass it into NServiceBus, but we aren't quite there yet and the win isn't critical at the moment.
A side note: one really nice thing about the way we log is that in our back-office tool we have been able to simply show the flow for each "order", similar to using a correlation ID in Graylog.
Since this was not considered a likely scenario, it does not have a first-class API, but you can achieve it by passing in a null logger from any of the common logging libraries (NLog, log4net, Common.Logging). I assume you are using one of these in your website.
So take NLog for example.
Install-Package NServiceBus.NLog
Then in your web.config:
<appSettings>
  <add key="disableLogging" value="true"/>
</appSettings>
Then in your global startup:
if (ConfigurationManager.AppSettings.Get("disableLogging") == "true")
{
    // An empty NLog configuration has no targets, so nothing is ever written
    LoggingConfiguration config = new LoggingConfiguration();
    LogManager.Configuration = config;

    // Route NServiceBus logging through NLog (and thus into the empty configuration)
    NServiceBus.Logging.LogManager.Use<NLogFactory>();
}
This leverages the approach documented here: http://docs.particular.net/nservicebus/logging-in-nservicebus#nlog
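The same trick should work with the other supported logging libraries. For example, a sketch with log4net, assuming the NServiceBus.Log4Net integration package and its Log4NetFactory; a ForwardingAppender with no child appenders accepts events but forwards them nowhere, which also avoids log4net's "no appenders" console warning:

Install-Package NServiceBus.Log4Net

if (ConfigurationManager.AppSettings.Get("disableLogging") == "true")
{
    // A ForwardingAppender with no children silently discards everything it receives
    log4net.Config.BasicConfigurator.Configure(new log4net.Appender.ForwardingAppender());
    NServiceBus.Logging.LogManager.Use<Log4NetFactory>();
}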
I created a custom search page using the default sitecore_web_index, and everything seemed to work until I migrated to my test environment, which has separate content management and content delivery servers. The index on the CD server is not getting updated on publish (the CM server's is). If I rebuild the index from the control panel, I do see updates, so I believe the index and the search page are working correctly.
The index is using the onPublishEndAsync strategy. The Sitecore Search and Index Guide (http://sdn.sitecore.net/upload/sitecore7/70/sitecore_search_and_indexing_guide_sc70-usletter.pdf) section 4.4.2 states:
This strategy does exactly what the name implies. During the initialization, it subscribes to the
OnPublishEnd event and triggers an incremental index rebuild. With separate CM and CD servers, this
event will be triggered via the EventQueue object, meaning that the EventQueue object needs to be
enabled for this strategy to work in such environment.
My web.config has <setting name="EnableEventQueues" value="true"/>
Also from the Search and Index Guide:
Processing
The strategy will use the EventQueue object from the database it was initialized with:
<param desc="database">web</param>
This means that there are multiple criteria towards successful execution for this strategy:
This database must be specified in the <databases /> section of the configuration file.
The EnableEventQueues setting must be set to true.
The EventQueue table within the preconfigured database should have entries dated later than
index's last update timestamp.
I'm not sure about the <param desc="database">web</param> setting, because the publishing target (and database ID) for the CD server is pub1. I tried changing web to pub1, but then neither server's index was updated on a publish (so it's been changed back to web).
The system was recently upgraded from Sitecore 6.5 to 7.2, so there are a couple of indexes using the Sitecore.Search API, and those indexes are updated on publish.
Is the database param on the EventQueue wrong considering the multiple publishing targets? Is there something else I'm missing, or perhaps a working example of a CM -> CD environment I could compare to?
TIA
EDIT:
If I didn't have a co-worker sitting next to me both Friday and today who can confirm it, I would think I'm going crazy. But now the CD server is getting updates to the index, and the CM server is not. What would make the CM server stop getting updates now?
I ran into this same issue last night and have a more predictable resolution than creating a new IIS site:
The fix was to set a distinct InstanceName in ScalabilitySettings.config for each CD server, instead of relying on the auto-generated name.
Setting this value immediately resolved the issue and restored the index update functionality on Publish End Remote events.
Note: If you already have an InstanceName defined in your config, you need to change it for this to work. I just append the date to the InstanceName to force the change.
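For example, in ScalabilitySettings.config (the value itself is illustrative; anything unique per server, and different from the previous value, will do):

<setting name="InstanceName" value="CD01-20150601" />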
This effectively fixes the same issue, in the same way, as the original poster's change to a new IIS site: the OP's fix would have changed the auto-generated InstanceName, which is based on the IIS site name.
I believe the core problem for the OP (and in my instance as well) is the EventQueue tracking getting out of sync with the CD instances, leaving none of the servers able to determine that an event has been generated or what content needs to be updated in the index. By changing the InstanceName (using either method), the servers appear to be new instances and start their EventQueue tracking from scratch.
Every time I've seen issues like this in the past, it has been related to major manipulations of the Sitecore databases: restorations, backup/restore to a new DB name, or rollbacks of databases due to deployment problems. I believe something in those operations causes the EventQueues to get out of sync, and the servers stop responding to the expected events.
I had this issue and it drove me nuts for a few months. I figured out that the answer lay in the rebuild strategy of the Lucene index. When the CM and CD are in separate instances of IIS, the only way for Lucene to know to rebuild itself is to watch the EventQueue table and recognize that a change happened to an item that is either the root, or a child of the root, that you specify in the crawler node. The strategy you'll need to specify as the rebuild strategy to guarantee this behavior is below:
<strategies hint="list:AddStrategy">
  <strategy ref="contentSearch/indexUpdateStrategies/remoteRebuild" />
</strategies>
If you use any other rebuild strategy with a remote content delivery instance, the index will only be rebuilt on the CM instance's file system.
In case anyone runs into this in the future, the solution that worked for me, was creating a new site in IIS manager.
I submitted a ticket to Sitecore support, but after a week without a response, I attempted to recreate my dev environment on my test server. I copied my local/dev files to the test CM server, created a new site and app pool in IIS, pointed them at the newly copied files, and updated the connectionstrings.config to point to the test environment database. This worked (publishing updated the CM web index).
After pointing the existing IIS site to my new files and using the new app pool, publishing from that site still would not update the CM web index.
I then pointed my new site at the pre-existing files and pre-existing app pool, and it still worked. I disabled the pre-existing IIS site, edited the bindings on the new site to match the pre-existing one, and everything worked as it should.
I don't know what was "wrong" with the pre-existing site (I inherited the system, so I don't know how it was created); comparing the bindings, basic settings, and advanced settings, they were a perfect match to the functional new IIS site. I wish I had the real cause of the issue to share, but at least I found a solution that worked for me.
Thanks to all for the responses.
[EDIT] While this solution did work for me, please use Laver's answer as the correct solution for this issue.
Laver's fix did work for us, but since our InstanceName is generated by our build process, I did not want to have to change it. I did some more digging and found that the root cause of the issue was data stored in the core database's Properties table.
You can see the full documentation in this Sitecore Stack Exchange Q&A, but the solution is reproduced below.
The solution requires an AppPool recycle to take effect:
Execute the following SQL statement against the core database:
DELETE FROM [Properties] WHERE [Key] LIKE '%_LAST_UPDATED_TIMESTAMP%'
Recycle the CD's AppPool
After this, you will want to rebuild the indexes on the CD so that they pick up any changes that were missed while indexing was broken.
It seems like you are on the right track so far. I believe what is tripping you up is the publishing target. From what I understand, you are using pub1 as your Content Delivery (CD) database. It is a best practice to have a separate index defined for each database, so you really should configure your CD server to point at a sitecore_pub1_index and not the sitecore_web_index.
Your CM and CD servers should both have your pub1 database configured. An example of what that would look like is the Sitecore include patch config below. It is a best practice not to edit the web.config directly if possible and to use include config patches instead. This example shows a patch config that would go in your \App_Config\Include directory:
<configuration xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xmlns:xsd="http://www.w3.org/2001/XMLSchema" xmlns:patch="http://www.sitecore.net/xmlconfig/">
  <sitecore>
    <databases>
      <database id="pub1" singleInstance="true" type="Sitecore.Data.Database, Sitecore.Kernel">
        <param desc="name">$(id)</param>
        <icon>Network/16x16/earth.png</icon>
        <securityEnabled>true</securityEnabled>
        <dataProviders hint="list:AddDataProvider">
          <dataProvider ref="dataProviders/main" param1="$(id)">
            <disableGroup>publishing</disableGroup>
            <prefetch hint="raw:AddPrefetch">
              <sc.include file="/App_Config/Prefetch/Common.config"/>
              <sc.include file="/App_Config/Prefetch/Webdb.config"/>
            </prefetch>
          </dataProvider>
        </dataProviders>
        <proxiesEnabled>false</proxiesEnabled>
        <proxyDataProvider ref="proxyDataProviders/main" param1="$(id)"/>
        <archives hint="raw:AddArchive">
          <archive name="archive"/>
          <archive name="recyclebin"/>
        </archives>
        <cacheSizes hint="setting">
          <data>20MB</data>
          <items>10MB</items>
          <paths>500KB</paths>
          <itempaths>10MB</itempaths>
          <standardValues>500KB</standardValues>
        </cacheSizes>
      </database>
    </databases>
  </sitecore>
</configuration>
You will then want to configure a pub1 search index on both your CM and CD servers. Assuming you are using Lucene, that patch config would look like this:
<configuration xmlns:patch="http://www.sitecore.net/xmlconfig/">
  <sitecore>
    <contentSearch>
      <configuration type="Sitecore.ContentSearch.ContentSearchConfiguration, Sitecore.ContentSearch">
        <indexes hint="list:AddIndex">
          <index id="sitecore_pub1_index" type="Sitecore.ContentSearch.LuceneProvider.LuceneIndex, Sitecore.ContentSearch.LuceneProvider">
            <param desc="name">$(id)</param>
            <param desc="folder">$(id)</param>
            <!-- This initializes the index property store. Id has to be set to the index id -->
            <param desc="propertyStore" ref="contentSearch/databasePropertyStore" param1="$(id)" />
            <configuration ref="contentSearch/indexConfigurations/defaultLuceneIndexConfiguration" />
            <strategies hint="list:AddStrategy">
              <!-- NOTE: the order of these controls the execution order -->
              <strategy ref="contentSearch/indexUpdateStrategies/onPublishEndAsync" />
            </strategies>
            <commitPolicyExecutor type="Sitecore.ContentSearch.CommitPolicyExecutor, Sitecore.ContentSearch">
              <policies hint="list:AddCommitPolicy">
                <policy type="Sitecore.ContentSearch.TimeIntervalCommitPolicy, Sitecore.ContentSearch" />
              </policies>
            </commitPolicyExecutor>
            <locations hint="list:AddCrawler">
              <crawler type="Sitecore.ContentSearch.SitecoreItemCrawler, Sitecore.ContentSearch">
                <Database>pub1</Database>
                <Root>/sitecore</Root>
              </crawler>
            </locations>
          </index>
        </indexes>
      </configuration>
    </contentSearch>
  </sitecore>
</configuration>
You now have a pub1 database and search index set up. You should already have pub1 set up as a remote publishing target in Sitecore, and you stated you have the EnableEventQueues setting set to true on both CM and CD servers.
This is all you should need. The onPublishEndAsync strategy will keep an eye on the EventQueue table in your pub1 database. When you publish to your pub1 publishing target, you should see entries in your CD server's Sitecore log*.txt file similar to this:
ManagedPoolThread #7 23:21:00 INFO Job started: Index_Update_IndexName=sitecore_pub1_index
ManagedPoolThread #7 23:21:00 INFO Job ended: Index_Update_IndexName=sitecore_pub1_index (units processed: )
Note: units processed never seems to be accurately reported and is typically blank. I assume this is a Sitecore bug but have never dug in deep enough to determine why it is not displayed in the logs correctly. You can use Luke (again, if you are using Lucene) to verify that the index has updated as expected.
Check your publish:end:remote event and see if there are any handlers there. If so, try removing all the handlers to make sure they are not causing any errors.
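For reference, that event is wired up in the web.config (or an include patch). A sketch of what to look for; the handler shown is the stock Sitecore cache clearer, and any custom handlers would appear alongside it:

<events>
  <event name="publish:end:remote">
    <!-- temporarily comment out any custom handlers here while testing -->
    <handler type="Sitecore.Publishing.HtmlCacheClearer, Sitecore.Kernel" method="ClearCache" />
  </event>
</events>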
I had a similar issue when migrating from Sitecore 6 to 7. The EventArgs for the remote publish event in Sitecore 7 is different: the new type is PublishEndRemoteEventArgs.
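If you have a custom handler subscribed to publish:end:remote, it needs to account for the new type. A minimal sketch (class and method names are illustrative):

using Sitecore.Data.Events;

public class CustomPublishHandler
{
    // Wired to publish:end:remote in config
    public void OnPublishEndRemote(object sender, System.EventArgs args)
    {
        // In Sitecore 7 the remote publish event carries PublishEndRemoteEventArgs;
        // a cast to the old Sitecore 6-era args type fails, and the handler silently does nothing.
        var remoteArgs = args as PublishEndRemoteEventArgs;
        if (remoteArgs == null)
            return;

        // ...react to the completed remote publish...
    }
}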
Here is the solution we used in our application. We have set up Web and Pub databases and created an additional publishing strategy pointing at pub:
<onPublishEndAsyncPub type="Sitecore.ContentSearch.Maintenance.Strategies.OnPublishEndAsynchronousStrategy, Sitecore.ContentSearch">
  <param desc="database">pub</param>
  <!-- whether a full index rebuild should be triggered if the number of items in the Event Queue
       exceeds Config.FullRebuildItemCountThreshold -->
  <CheckForThreshold>true</CheckForThreshold>
</onPublishEndAsyncPub>
In the index section, set the newly created strategy on the pub index:
<index id="sitecore_pub_index" type="Sitecore.ContentSearch.SolrProvider.SolrSearchIndex, Sitecore.ContentSearch.SolrProvider">
<param desc="name">$(id)</param>
<param desc="core">itembuckets</param>
<param desc="propertyStore" ref="contentSearch/databasePropertyStore" param1="$(id)" />
<strategies hint="list:AddStrategy">
<strategy ref="contentSearch/indexUpdateStrategies/onPublishEndAsyncPub" />
<!--<strategy ref="contentSearch/indexUpdateStrategies/remoteRebuild" />-->
</strategies>
<locations hint="list:AddCrawler">
<crawler type="Sitecore.ContentSearch.SitecoreItemCrawler, Sitecore.ContentSearch">
<Database>pub</Database>
<Root>/sitecore</Root>
</crawler>
</locations>
</index>
If you are using the Sitecore scalability settings, please make sure they are correct.
The reason the indexing is not being triggered on your CD servers is most likely your event queue. One quick check you can perform is to see whether there are events in the EventQueue table of the Core database which say that publishing has completed.
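A quick way to eyeball this (column names per the standard EventQueue schema; point the query at whichever database your update strategy reads its queue from):

-- Most recent events; look for publish-end entries newer than the index's last update
SELECT TOP 20 [EventType], [InstanceName], [Created]
FROM [EventQueue]
ORDER BY [Created] DESC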
Also check Sitecore.ContentSearch.config, since that is where the index rebuild that fires when publishing ends is configured.
Thanks
I have a console app using Fluent NHibernate. I have configured it to log to various places using log4net, and it works great: I can see the SQL I want to see and can send log output to various appenders. The problem is that I cannot suppress the SQL log output going to the console.
The extra console output is not controlled by my log4net config settings. It always appears, even if I turn off all appenders.
Any suggestions?
Do you have this property set to true in the nhibernate section of your app.config?
<property name="show_sql">true</property>
If so, set it to false.
Edit
Here is a sample piece of code from the NHibernate source:
log.Debug(logMessage);
if (LogToStdout)
{
    Console.Out.WriteLine("NHibernate: " + statement);
}
In the above code, LogToStdout is directly linked to the show_sql configuration property. If you have it set to true, nothing will stop it from writing to the console. In regards to your comment: you cannot control this via log4net. You can only control what happens to the log.Debug(logMessage) call via log4net.
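Note that since the question uses Fluent NHibernate, show_sql may be enabled in code rather than in a config file. A sketch of what to look for (the connection string key and mapping assembly are illustrative):

using FluentNHibernate.Cfg;
using FluentNHibernate.Cfg.Db;

var sessionFactory = Fluently.Configure()
    .Database(MsSqlConfiguration.MsSql2008
        .ConnectionString(c => c.FromConnectionStringWithKey("Default"))
        .ShowSql()) // equivalent of show_sql="true": remove this call to stop the console output
    .Mappings(m => m.FluentMappings.AddFromAssemblyOf<Program>())
    .BuildSessionFactory();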
To disable the NHibernate logs entirely, put the following into your appSettings section:
<add key="nhibernate-logger" value="" />
The NHibernate logs don't help me much; I'll turn them on when I need them. I add these settings to the log4net section of my app.config:
<logger name="NHibernate">
<level value="OFF" />
</logger>
<logger name="NHibernate.SQL">
<level value="OFF" />
</logger>
Assume you are using libraries like NHibernate or Castle ActiveRecord that use log4net internally, and your application uses log4net too. It's possible to configure everything to save logs to a file or any other output. But the problem is that by enabling log4net for my own application, the other libraries save their logs into the log file as well, causing it to grow very fast with information I don't need at the moment.
How can I route the logs of each library to a different output, or at least stop the other libraries from logging?
NHibernate/Castle ActiveRecord generate a lot of log information, but it is all DEBUG-level logging, so you can turn your log level down from "ALL" to "INFO" or "ERROR" in the config file and you should be OK.
log4net also supports named loggers and a logger hierarchy. I am sure both NHibernate and Castle use named loggers, so you can choose to ignore those particular named loggers via configuration. See the log4net help, where they use a different logging level for the Com.Foo library.
Using named loggers is a typical way of separating log traces from different components/modules/libraries. Each application (as in, a different process) has its own configuration file, and you can always use different log files to separate the log traces.
Just direct different loggers to different appenders.
Pseudo example:
<log4net>
  <appender name="MyAppender" type="log4net.Appender.FileAppender">
    <!--appender properties (file name, layout, etc)-->
  </appender>
  <appender name="NHAppender" type="log4net.Appender.FileAppender">
    <!--ditto-->
  </appender>
  <logger name="MyAppMainNamespace">
    <level value="INFO"/>
    <appender-ref ref="MyAppender" />
  </logger>
  <logger name="NHibernate">
    <level value="ERROR"/>
    <appender-ref ref="NHAppender" />
  </logger>
</log4net>
I'm facing a problem when trying to log my application using log4net.
My application consists of a WCF service, and clients connecting to it.
Logging at client-side is not a problem, everything works perfectly.
Here is how my server side is structured:
A WCF DLL, which contains my service's contract and base implementation (including error handling). The actual operations are performed in a separate business layer, which throws the needed exceptions; these are caught by the implementation (and sent back using FaultContracts).
A data layer (not a problem here).
A "Utils" library, which contains my wrapper log methods.
My log4net.config file is the following:
<?xml version="1.0" encoding="utf-8" ?>
<log4net debug="false">
<appender name="TechnicalFileAppender" type="log4net.Appender.RollingFileAppender,log4net">
<param name="File" value="c:\\log\\technical.log"/>
<param name="AppendToFile" value="true"/>
<param name="RollingStyle" value="Date"/>
<param name="DatePattern" value="'_'yyyyMMdd-HH"/>
<layout type="log4net.Layout.PatternLayout,log4net">
<param name="ConversionPattern" value="%d [%-5t] %-5p - %m%n"/>
</layout>
</appender>
<root>
<level value="ALL"/>
</root>
<logger name="TechnicalLog">
<level value="ALL"/>
<appender-ref ref="TechnicalFileAppender"/>
</logger>
</log4net>
So when an error occurs in the business layer, it is captured, logged, and transformed into a FaultException. And there is the problem: no log file is created.
I googled and found some clues (generally access rights), but I used ProcMon and found no attempt to access the desired file or directory.
I'm a bit lost now; I don't know what else to try.
I publish my service using the "publish" command in Visual Studio, so on the server I have my application directory; inside there is an .svc file, a web.config file, and then a bin directory with all my DLLs, including log4net.dll and log4net.config.
I tried copying that config file to the root of my application, without success.
I also tried to place the
[assembly: log4net.Config.XmlConfigurator(ConfigFile = "log4net.config", Watch = true)]
statement in my WCF project's AssemblyInfo.cs (it was originally in the Utils project's AssemblyInfo.cs), but without any success.
Thanks for your help, any idea is welcome!
I am pretty sure that log4net does not find the configuration file.
I had the same issue, and I think I solved it by having an application setting in the web.config file that contains the full path to the log4net config file. In my wrapper I made sure that log4net is configured by calling the ConfigureAndWatch method if log4net is not yet configured.
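A minimal sketch of that wrapper idea (the appSettings key name and class name are assumptions, not log4net conventions):

using System.Configuration;
using System.IO;
using log4net;
using log4net.Config;

public static class LogHelper
{
    public static void EnsureConfigured()
    {
        // Skip if log4net has already been configured elsewhere
        if (LogManager.GetRepository().Configured)
            return;

        // "log4netConfigFile" is a hypothetical appSettings key holding the full path to log4net.config
        string path = ConfigurationManager.AppSettings["log4netConfigFile"];
        if (!string.IsNullOrEmpty(path) && File.Exists(path))
        {
            XmlConfigurator.ConfigureAndWatch(new FileInfo(path));
        }
    }
}

Call LogHelper.EnsureConfigured() at the top of each wrapper log method (or in Application_Start) so configuration happens no matter which entry point is hit first.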
Alternatively, you could simply copy the log4net configuration into the web.config file (but I would not do that, because you lose the ability to change the log settings of the running system). In that case you need to add this to your AssemblyInfo.cs (or some other file if you prefer):
[assembly: log4net.Config.XmlConfigurator()]
If that still does not help, then I recommend turning on log4net's internal debugging.
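Internal debugging is enabled with an appSetting in the web.config; the diagnostic output goes to the trace listeners and usually reveals why the configuration file is not being found:

<appSettings>
  <add key="log4net.Internal.Debug" value="true"/>
</appSettings>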