My system support team needs one simple log file with a maximum size of 10MB. When the file reaches 10MB, the oldest log lines can be deleted; in other words, the oldest lines should be rolled out.
What is a good appender for this?
I have one appender, but it still creates a second file and then starts again with an empty new file. This is not what my support team wants.
Help is appreciated.
<configuration>
  <appender name="TEST" class="ch.qos.logback.core.rolling.RollingFileAppender">
    <file>${LOG_HOME}/test.log</file>
    <rollingPolicy class="ch.qos.logback.core.rolling.FixedWindowRollingPolicy">
      <fileNamePattern>${LOG_HOME}/test.%i.log</fileNamePattern>
      <minIndex>1</minIndex>
      <maxIndex>1</maxIndex>
    </rollingPolicy>
    <triggeringPolicy class="ch.qos.logback.core.rolling.SizeBasedTriggeringPolicy">
      <maxFileSize>10MB</maxFileSize>
    </triggeringPolicy>
    <encoder>
      <pattern>%date %-5level [%thread] - %mdc{loginName} - [%logger]- %msg%n</pattern>
    </encoder>
  </appender>
  <root level="DEBUG">
    <appender-ref ref="TEST" />
  </root>
</configuration>
Keeping everything in a single file while constantly appending the most recent lines and deleting the oldest is going to perform really poorly. I suspect that logback can't be made to do this.
What I suggest is that you use the regular size-based policy, configure it to stay inside your 10MB limit overall, and then just concatenate the files when you grab them.
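For example, a fixed-window setup that keeps the total on disk near 10MB might look like this sketch (the 2MB/4-archive split is an illustrative assumption; adjust to taste):

<appender name="TEST" class="ch.qos.logback.core.rolling.RollingFileAppender">
  <file>${LOG_HOME}/test.log</file>
  <rollingPolicy class="ch.qos.logback.core.rolling.FixedWindowRollingPolicy">
    <fileNamePattern>${LOG_HOME}/test.%i.log</fileNamePattern>
    <minIndex>1</minIndex>
    <!-- 4 archives of ~2MB each plus the active file stays around 10MB total -->
    <maxIndex>4</maxIndex>
  </rollingPolicy>
  <triggeringPolicy class="ch.qos.logback.core.rolling.SizeBasedTriggeringPolicy">
    <maxFileSize>2MB</maxFileSize>
  </triggeringPolicy>
  <encoder>
    <pattern>%date %-5level [%thread] - %msg%n</pattern>
  </encoder>
</appender>

When the support team grabs the logs, concatenating oldest-first reconstructs a single file, e.g. cat test.4.log test.3.log test.2.log test.1.log test.log > combined.log.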
Hard to believe, but I can't seem to find a straight answer for this: how can I get the SQL statement, including the parameter values, when the statement generates an exception, and only when it generates an exception? I know how to log the statement plus parameters for every SQL statement generated, but that's way too much. When there's a System.Data.SqlClient.SqlException, though, it only provides the SQL, not the parameter values. How can I catch that at a point where I have access to that data, so that I can log it?
Based on a number of responses to various questions (not just mine), I've cobbled something together that does the trick. I think it could be useful to others as well, so I'm including a good deal of it here:
The basic idea is to:
1. Have NH log all queries, pretty-printed and with the parameter values in situ.
2. Throw all those logs out except the one just prior to the exception.
I use Log4Net, and the setup is like this:
<?xml version="1.0"?>
<log4net>
  <appender name="RockAndRoll" type="Util.PrettySqlRollingFileAppender, Util">
    <file type="log4net.Util.PatternString">
      <conversionPattern value="%env{Temp}\\%property{LogDir}\\MyApp.log" />
    </file>
    <datePattern value="MM-dd-yyyy" />
    <appendToFile value="true" />
    <immediateFlush value="true" />
    <rollingStyle value="Composite" />
    <maxSizeRollBackups value="10" />
    <maximumFileSize value="100MB" />
    <staticLogFileName value="true" />
    <layout type="log4net.Layout.PatternLayout">
      <conversionPattern value="%date %-5level %logger - %message%newline" />
    </layout>
  </appender>
  <appender name="ErrorBufferingAppender" type="log4net.Appender.BufferingForwardingAppender">
    <bufferSize value="2" />
    <lossy value="true" />
    <evaluator type="log4net.Core.LevelEvaluator">
      <threshold value="ERROR" />
    </evaluator>
    <appender-ref ref="RockAndRoll" />
    <Fix value="0" />
  </appender>
  <logger name="NHibernate.SQL" additivity="false">
    <appender-ref ref="ErrorBufferingAppender" />
    <level value="debug" />
  </logger>
  <logger name="error-buffer" additivity="false">
    <appender-ref ref="ErrorBufferingAppender" />
    <level value="debug" />
  </logger>
  <root>
    <level value="info" />
    <appender-ref ref="RockAndRoll" />
  </root>
</log4net>
The NHibernate.SQL logger logs all queries to the ErrorBufferingAppender, which keeps throwing them out and saves only the last one in its buffer. When I catch an exception, I log one line at ERROR level to the error-buffer logger, which passes it to ErrorBufferingAppender, which (because the event is at ERROR level) pushes it, along with the last query, out to RockAndRoll, my RollingFileAppender.
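To illustrate the trigger side, the catch block might look roughly like this sketch (session, Entity, and the query are placeholders; the logger name matches the config above):

// Assumes: using System; using System.Linq; using log4net;
// using NHibernate; using NHibernate.Linq;
private static readonly ILog ErrorBuffer = LogManager.GetLogger("error-buffer");

public void RunQuery(ISession session)
{
    try
    {
        var result = session.Query<Entity>().ToList();
    }
    catch (Exception ex)
    {
        // An ERROR-level event trips the LevelEvaluator in
        // ErrorBufferingAppender, which flushes its buffer (the last
        // NHibernate.SQL entry) together with this line to RockAndRoll.
        ErrorBuffer.Error("Query failed; the offending SQL is the previous line.", ex);
        throw;
    }
}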
I implemented a subclass of RollingFileAppender called PrettySqlRollingFileAppender (which I'm happy to provide if anyone's interested) that takes the parameters from the end of the query and substitutes them inside the query itself, making it much more readable.
If you are using NHibernate for querying the DB (as the tags on your question suggest), and your SQL dialect/driver relies on ADO, you should get a GenericADOException from the failing query.
Its Message property normally already includes the parameter values.
For example, executing the following failing query (provided you have at least one row in DB):
var result = session.Query<Entity>()
    .Where(e => e.Name.Length / 0 == 1)
    .ToList();
Yields a GenericADOException with message:
could not execute query
[ select entity0_.Id as Id1_0_, entity0_.Name as Name2_0_ from Entity entity0_ where len(entity0_.Name)/@p0=@p1 ]
Name:p1 - Value:0 Name:p2 - Value:1
The two literals, 0 and 1, of the query have been parameterized, and their values are included in the message (with an index-base mismatch: in NHibernate queries they are 1-based, while in the SQL query with my setup they end up 0-based).
So there is nothing special to do to have them. Just log the exception message.
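For example (a sketch; log is assumed to be an ordinary log4net ILog, and Entity is a placeholder):

// Assumes: using System.Linq; using NHibernate.Exceptions; using NHibernate.Linq;
try
{
    var result = session.Query<Entity>()
        .Where(e => e.Name.Length / 0 == 1)
        .ToList();
}
catch (GenericADOException ex)
{
    // The message already carries the SQL and the parameter values,
    // so logging it is enough.
    log.Error(ex.Message, ex);
    throw;
}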
Have you just missed it, or were you asking about something else?
Your question was not explicit enough, in my opinion. You should include an MVCE. It would have shown me more precisely in which case you were not able to get those parameter values.
I have been using IntelliJ IDEA 15 for close to a year now, and using the same project this whole time, where I have created Tasks to basically act as workspaces for various work assignments so I can group files I've touched based on the assignment title. I've recently had an issue in my project workspace where I am basically being forced to create a new workspace, and thus a new project in IntelliJ. The problem is that this new project has none of my Task history in it.
Does anyone know if it's possible, and if so, how, to migrate this Task history from one project to another?
Thanks in advance!
Tasks are saved in YOUR_PROJECT/.idea/workspace.xml, so you can back up this file and, if needed, just copy and paste the lines defining tasks into the other workspace.xml file. Here is an example of one:
<configuration default="false" name="my-debug-task" type="JavascriptDebugType" factoryName="JavaScript Debug" uri="http://localhost:4200">
  <mapping url="webpack:////home/marcin/Sprawozdania/Inzynierka/sqap/sqap-ui/src" local-file="$PROJECT_DIR$/sqap-ui/src" />
  <mapping url="webpack:////home/marcin/Sprawozdania/Inzynierka/sqap/sqap-ui" local-file="$PROJECT_DIR$/sqap-ui" />
  <method />
</configuration>
Additionally, in workspace.xml you need to add a reference to the copied task under <project> / <component> / <list>, like here:
<list size="3">
  <item index="0" class="java.lang.String" itemvalue="JavaScript Debug.my-debug-task" />
  <item index="1" class="java.lang.String" itemvalue="JavaScript Debug.Unnamed" />
  <item index="2" class="java.lang.String" itemvalue="Maven.config start -devs" />
</list>
I am a bit confused reading the Infinispan guide.
I want to have two clustered caches. I probably must have separate jgroups files to have different multicast addresses, but should there be only one cache container?
<?xml version="1.0" encoding="UTF-8"?>
<infinispan
    xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
    xsi:schemaLocation="urn:infinispan:config:8.0 http://www.infinispan.org/schemas/infinispan-config-8.0.xsd"
    xmlns="urn:infinispan:config:8.0">
  <jgroups>
    <stack-file name="file1" path="jgroups1.xml" />
    <stack-file name="file2" path="jgroups2.xml" />
  </jgroups>
  <cache-container default-cache="cache1">
    <transport stack="file1" node-name="${nodeName}" />
    <invalidation-cache name="cache1" configuration="invalidation-template" />
    <invalidation-cache name="cache2" configuration="invalidation-template" />
    <invalidation-cache-configuration name="invalidation-template" mode="SYNC">
      <locking isolation="READ_COMMITTED" striping="true" />
      <transaction locking="OPTIMISTIC" />
      <eviction max-entries="20500" strategy="LRU" />
      <expiration interval="10500" />
    </invalidation-cache-configuration>
  </cache-container>
</infinispan>
You don't need to have a separate jgroups file for each clustered cache. Simply configure the transport element of the cache-container, and then you can define any number of caches in that cache-container.
Different multicast addresses are only needed if you want cache containers to form different clusters. I think the confusing thing here is that you can define multiple stack-file elements in JGroups but you can only really specify a single cache-container element. The XSD is not precise enough, but the parser within the code assumes a single global configuration builder instance, hence a single cache container. So, if you want to create two separate cache containers, these should be currently defined in separate XML files.
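To make the first point concrete, here is a trimmed sketch of the question's config with a single stack and both caches sharing one container (names carried over from the question):

<infinispan
    xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
    xsi:schemaLocation="urn:infinispan:config:8.0 http://www.infinispan.org/schemas/infinispan-config-8.0.xsd"
    xmlns="urn:infinispan:config:8.0">
  <jgroups>
    <!-- One stack is enough; both caches share the container's transport. -->
    <stack-file name="file1" path="jgroups1.xml" />
  </jgroups>
  <cache-container default-cache="cache1">
    <transport stack="file1" node-name="${nodeName}" />
    <invalidation-cache name="cache1" configuration="invalidation-template" />
    <invalidation-cache name="cache2" configuration="invalidation-template" />
    <invalidation-cache-configuration name="invalidation-template" mode="SYNC">
      <locking isolation="READ_COMMITTED" striping="true" />
    </invalidation-cache-configuration>
  </cache-container>
</infinispan>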
Sitecore CMS+DMS 6.6.0 rev.130404 => 7.0 rev.130424
In our project we have been using AdvancedDatabaseCrawler (ADC) for our indexes (especially because of its dynamic fields feature). Here's a sample index configuration:
<index id="GeoIndex" type="Sitecore.Search.Index, Sitecore.Kernel">
  <param desc="name">$(id)</param>
  <param desc="folder">$(id)</param>
  <analyzer ref="search/analyzer" />
  <locations hint="list:AddCrawler">
    <web type="scSearchContrib.Crawler.Crawlers.AdvancedDatabaseCrawler, scSearchContrib.Crawler">
      <database>web</database>
      <root>/sitecore/content/Globals/Locations</root>
      <IndexAllFields>true</IndexAllFields>
      <include hint="list:IncludeTemplate">
        <!--Suburb Template-->
        <suburb>{FF0D64AA-DCB4-467A-A310-FF905F9393C0}</suburb>
      </include>
      <dynamicFields hint="raw:AddDynamicFields">
        <dynamicField type="OurApp.CustomSearchFields.SearchTextField,OurApp" name="search text" storageType="NO" indexType="TOKENIZED" vectorType="NO" />
        <dynamicField type="OurApp.CustomSearchFields.LongNameField,OurApp" name="display name" storageType="YES" indexType="UN_TOKENIZED" vectorType="NO" />
      </dynamicFields>
    </web>
  </locations>
</index>
As you can see, we use scSearchContrib.Crawler.Crawlers.AdvancedDatabaseCrawler as the crawler, and it uses the fields defined inside the <dynamicFields hint="raw:AddDynamicFields"> section to inject custom fields into the index.
Now we are upgrading our project to Sitecore 7, where the DynamicFields functionality from ADC has been ported into Sitecore itself. I found some articles on this and converted our custom search field classes to implement the Sitecore 7 IComputedIndexField interface instead of inheriting from the BaseDynamicField class in ADC. Now my problem is how to change the index configuration to match the new Sitecore 7 APIs. There were bits and pieces on the web, but I couldn't find all the examples I needed to convert my configuration. Can anybody help me with this?
While I'm doing this, I'm under the impression that we won't have to rebuild our indexes, since Sitecore still uses Lucene internally. I don't want to change the index structure, just upgrade the code and configuration from AdvancedDatabaseCrawler to Sitecore 7. Should I be worried about breaking our existing indexes? Please shed some light on this as well.
Thanks
A few quick clarifications :)
We have not merged ADC into Sitecore 7; the ContentSearch layer is a complete rewrite of the old search layer for Sitecore. We have taken some of the core concepts from ADC, such as dynamic fields, and put them in the new implementation (as ComputedFields). They are not 1:1 compatible, and you will have to do some work on your indexes.
The version of Lucene has also been upgraded from 2.* to 3.0.3, so all indexes will need to be re-created anyway, as they are on a very different version of Lucene.
There are two options here. The first is the old Lucene search (the Sitecore.Search namespace, which ADC was built upon); it has not been touched and will still work in the same way, although I am not sure about ADC compatibility with Sitecore 7, as in theory it has now been superseded.
The next option is to update your index to take advantage of the new search features of Sitecore 7. The configuration you have will not be directly compatible, but while you will need to rework your index into the new configuration, the basic concepts should be familiar to you. Your dynamic fields should map logically to ComputedFields (fields that are calculated when an item is indexed; see the sketch below), and everything else is straightforward.
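For reference, a computed field in Sitecore 7 is just a class implementing IComputedIndexField; here is a minimal sketch (the field-combining logic is purely illustrative):

using Sitecore.ContentSearch;
using Sitecore.ContentSearch.ComputedFields;

public class SearchTextField : IComputedIndexField
{
    // Populated from the config (the fieldName attribute).
    public string FieldName { get; set; }
    public string ReturnType { get; set; }

    public object ComputeFieldValue(IIndexable indexable)
    {
        var indexableItem = indexable as SitecoreIndexableItem;
        if (indexableItem == null)
            return null;

        // Illustrative only: combine a couple of field values into one
        // searchable blob, the way the old dynamic field might have.
        var item = indexableItem.Item;
        return item["Title"] + " " + item["Text"];
    }
}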
While it looks like a lot of extra config for ContentSearch, a lot of it is default config that you will not need to touch; you will just need to override the configuration parts for the computed fields you want to add and the templates you want to include.
An example of creating your own configuration override can be found here: http://www.mikkelhm.dk/post/2013/10/12/Defining-a-custom-index-in-Sitecore-7-and-utilizing-it.aspx
I would also recommend making sure you upgrade to 7.0 rev. 131127 (7.0 Update-3) as this fixes a bug in the IncludeTemplates logic in the version you currently have.
I managed to convert the index configuration to the Sitecore ContentSearch API. Looking at Sitecore's default index configurations was a great help for this.
Note: As also mentioned by Stephen, <include hint="list:IncludeTemplate"> does not work in the Sitecore 7.0 initial release. It's fixed in Sitecore 7.0 rev. 131127 (7.0 Update-3), and I'm planning to upgrade to it.
Here's a good article on Sitecore 7 index update strategies by John West. It'll help you configure your indexes the way you want.
Converted configuration:
<sitecore>
  <contentSearch>
    <configuration type="Sitecore.ContentSearch.LuceneProvider.LuceneSearchConfiguration, Sitecore.ContentSearch.LuceneProvider">
      <DefaultIndexConfiguration type="Sitecore.ContentSearch.LuceneProvider.LuceneIndexConfiguration, Sitecore.ContentSearch.LuceneProvider">
        <IndexAllFields>true</IndexAllFields>
        <include hint="list:IncludeTemplate">
          <!--Suburb Template-->
          <suburb>{FF0D64AA-DCB4-467A-A310-FF905F9393C0}</suburb>
        </include>
        <fields hint="raw:AddComputedIndexField">
          <field fieldName="search text" storageType="NO" indexType="TOKENIZED" vectorType="NO">OurApp.CustomSearchFields.SearchTextField,OurApp</field>
          <field fieldName="display name" storageType="YES" indexType="UN_TOKENIZED" vectorType="NO">OurApp.CustomSearchFields.LongNameField,OurApp</field>
        </fields>
      </DefaultIndexConfiguration>
      <indexes hint="list:AddIndex">
        <index id="GeoIndex" type="Sitecore.ContentSearch.LuceneProvider.LuceneIndex, Sitecore.ContentSearch.LuceneProvider">
          <param desc="name">$(id)</param>
          <param desc="folder">$(id)</param>
          <!-- This initializes the index property store. Id has to be set to the index id. -->
          <param desc="propertyStore" ref="contentSearch/databasePropertyStore" param1="$(id)" />
          <strategies hint="list:AddStrategy">
            <!-- NOTE: the order of these controls the execution order. -->
            <strategy ref="contentSearch/indexUpdateStrategies/onPublishEndAsync" />
          </strategies>
          <commitPolicy hint="raw:SetCommitPolicy">
            <policy type="Sitecore.ContentSearch.TimeIntervalCommitPolicy, Sitecore.ContentSearch" />
          </commitPolicy>
          <commitPolicyExecutor hint="raw:SetCommitPolicyExecutor">
            <policyExecutor type="Sitecore.ContentSearch.CommitPolicyExecutor, Sitecore.ContentSearch" />
          </commitPolicyExecutor>
          <locations hint="list:AddCrawler">
            <crawler type="Sitecore.ContentSearch.LuceneProvider.Crawlers.DefaultCrawler, Sitecore.ContentSearch.LuceneProvider">
              <Database>web</Database>
              <Root>/sitecore/content/Globals/Countries</Root>
            </crawler>
          </locations>
        </index>
      </indexes>
    </configuration>
  </contentSearch>
</sitecore>
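For completeness, once the index is rebuilt it can be queried through the new API; here is a minimal usage sketch (the Name filter is just an example):

using System.Collections.Generic;
using System.Linq;
using Sitecore.ContentSearch;
using Sitecore.ContentSearch.SearchTypes;

public static List<SearchResultItem> FindSuburbs()
{
    using (var context = ContentSearchManager.GetIndex("GeoIndex").CreateSearchContext())
    {
        // SearchResultItem covers the standard fields; custom/computed
        // fields need a subclass with [IndexField] mappings.
        return context.GetQueryable<SearchResultItem>()
            .Where(x => x.Name == "Sydney")
            .ToList();
    }
}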
I am using log4net with SharePoint 2010. I have a feature which automatically adds the log4net config in Error mode when my solution is deployed, using the following code:
SPWebService service = SPWebService.ContentService;
service.WebConfigModifications.Clear();

// Add the log4net config section.
service.WebConfigModifications.Add(new SPWebConfigModification()
{
    Path = "configuration/configSections",
    Name = "section[@name='log4net']",
    Sequence = 0,
    Owner = CREATE_NAME,
    Type = SPWebConfigModification.SPWebConfigModificationType.EnsureChildNode,
    Value = string.Format(@"<section name='log4net' type='log4net.Config.Log4NetConfigurationSectionHandler, log4net, Version={0}, Culture=neutral, PublicKeyToken={1}' />", LOG4NET_VERSION, LOG4NET_PUBLIC_KEY_TOKEN)
});

string log4netConfig = @"<log4net>
    <appender name='RollingFileAppender' type='log4net.Appender.RollingFileAppender'>
        <file value='C:\\logs\\Logger.log' />
        <appendToFile value='true' />
        <rollingStyle value='Composite' />
        <datePattern value='yyyyMMdd' />
        <maxSizeRollBackups value='200' />
        <maximumFileSize value='50MB' />
        <layout type='log4net.Layout.PatternLayout'>
            <conversionPattern value='%d [%t] %-5p %c [%x] <%X{auth}> - %m%n' />
        </layout>
    </appender>
    <root>
        <level value='ERROR' />
        <appender-ref ref='RollingFileAppender' />
    </root>
</log4net>";

// Add the default ERROR-level config.
service.WebConfigModifications.Add(new SPWebConfigModification()
{
    Path = "configuration",
    Name = "log4net",
    Sequence = 0,
    Owner = CREATE_NAME,
    Type = SPWebConfigModification.SPWebConfigModificationType.EnsureChildNode,
    Value = log4netConfig
});

service.Update();
service.ApplyWebConfigModifications();
I wanted to create another feature which overwrites the Error mode of log4net and changes it to Debug, so that the end user does not have to modify the web config manually.
The problem is that when the second feature is activated, it deletes everything added by the first feature.
Is this standard behaviour? Would any feature that is activated delete the changes made by another feature?
EDIT 2
Steps to replicate:
Create two features; both of them should add some different entries to the web config.
Activate feature 1: feature 1's changes are in the web config.
Activate feature 2: feature 2's changes are in the web config, but feature 1's changes are gone.
Deactivate both features.
Activate feature 2: feature 2's changes are in the web config.
Activate feature 1: feature 1's changes are in the web config, but feature 2's changes are gone.
The reason the first web config modification is removed is this code block:
service.WebConfigModifications.Clear();
Basically, you're clearing all other modifications set in your web config before adding the items from your feature. Removing that code block should fix your issue.
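If you later need each feature to clean up only its own entries (on deactivation, say), you can remove modifications by owner instead of clearing everything. A sketch, reusing the CREATE_NAME owner constant from the question:

// Assumes: using System.Linq; using Microsoft.SharePoint.Administration;
SPWebService service = SPWebService.ContentService;

// Snapshot first: you cannot remove from the collection while iterating it.
var mine = service.WebConfigModifications
    .Where(m => m.Owner == CREATE_NAME)
    .ToList();

foreach (SPWebConfigModification mod in mine)
{
    service.WebConfigModifications.Remove(mod);
}

service.Update();
service.ApplyWebConfigModifications();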
If you need to do some other things when activating or deactivating a feature, try using Feature Receivers. A good example for getting started with web config modifications and feature receivers can be found here.
Also, to give you more insight into why the code block above is causing problems, you can check out another question from someone who had a similar problem here.
If I understand this correctly, what's happening is: when you add a web config modification, it isn't technically "added" yet, since it hasn't been applied yet. When you create the second modification, it overwrites the first one you created, which in turn erases the first modification, the one that had actually been applied.