Given the following logging layout pattern:
appender.console.layout.pattern = %d{yyyy-MM-dd HH:mm:ss.SSS} [%t] %-5level %logger{36} - [%X{newTraceIdPlaceholder}, %X{newSpanIdPlaceholder}] - %X{Request-Uri} - %msg%n
Is there a way to tell Sleuth to consider the "newTraceIdPlaceholder" and "newSpanIdPlaceholder" fields as traceId and spanId?
You would have to register your own bean of Brave's CorrelationScopeCustomizer type, and there you can provide additional fields in a manner similar to this:
CurrentTraceContext.ScopeDecorator create() {
    return new Builder()
        .clear()
        .add(SingleCorrelationField.create(BaggageFields.TRACE_ID))
        .add(SingleCorrelationField.create(BaggageFields.PARENT_ID))
        .add(SingleCorrelationField.create(BaggageFields.SPAN_ID))
        .add(SingleCorrelationField.create(BaggageFields.SAMPLED))
        .build();
}
You can provide your own fields there.
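For example, to publish the trace and span IDs under the MDC keys from the question's layout pattern, a minimal sketch could look like the following (this assumes Sleuth with Brave and SLF4J; MDCScopeDecorator lives in brave-context-slf4j, and whether you register this as a plain ScopeDecorator bean or through the CorrelationScopeCustomizer mentioned above depends on your Sleuth version):

import brave.baggage.BaggageFields;
import brave.baggage.CorrelationScopeConfig.SingleCorrelationField;
import brave.context.slf4j.MDCScopeDecorator;
import brave.propagation.CurrentTraceContext.ScopeDecorator;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

@Configuration
public class TraceFieldRenameConfig {

    // Rebuild the MDC scope decorator so the trace and span IDs are
    // written to the MDC under the custom names used in the layout pattern.
    @Bean
    ScopeDecorator mdcScopeDecorator() {
        return MDCScopeDecorator.newBuilder()
            .clear()
            .add(SingleCorrelationField.newBuilder(BaggageFields.TRACE_ID)
                .name("newTraceIdPlaceholder").build())
            .add(SingleCorrelationField.newBuilder(BaggageFields.SPAN_ID)
                .name("newSpanIdPlaceholder").build())
            .build();
    }
}

With this in place, %X{newTraceIdPlaceholder} and %X{newSpanIdPlaceholder} in the layout pattern should resolve to the current trace and span IDs.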
I'm using JDBC for some SQL queries, and I wanted to execute all the separate queries in one method within one transaction. I tried to set a configuration setting scoped to the transaction in one query and read it back in another:
@Transactional
public void testJDBC() {
    SqlRowSet rowSet = jdbcTemplate.queryForRowSet("select set_config('transaction_test','im_here',true)");
    String result;
    while (rowSet.next()) {
        result = rowSet.getString("set_config");
        System.out.println("Result1: " + result);
    }

    SqlRowSet rowSet2 = jdbcTemplate.queryForRowSet("select current_setting('transaction_test',true)");
    String result2;
    while (rowSet2.next()) {
        result2 = rowSet2.getString("current_setting");
        System.out.println("Result2: " + result2);
    }
}
But my second query uses another transaction, or both queries are not transactional at all, because the result looks like this:
Result1: im_here
Result2:
I don't get what is wrong here: despite the @Transactional annotation, it is still not transactional.
Here is my bean configuration:
@Bean
public PlatformTransactionManager transactionManager(EntityManagerFactory emf) {
    JpaTransactionManager transactionManager = new JpaTransactionManager();
    transactionManager.setEntityManagerFactory(emf);
    return transactionManager;
}

public BasicDataSource getApacheDataSource() {
    BasicDataSource dataSource = new BasicDataSource();
    dataSource.setDriverClassName(environment.getRequiredProperty("jdbc.driverClassName"));
    dataSource.setUrl(getUrl());
    dataSource.setUsername(getEnvironmentProperty("spring.datasource.username"));
    dataSource.setPassword(getEnvironmentProperty("spring.datasource.password"));
    return dataSource;
}

@Bean
public JdbcTemplateExtended jdbc() {
    return new JdbcTemplateExtended(getApacheDataSource());
}
I think making sure the @Transactional annotations are being handled at all is the first step in troubleshooting. To do this, add the following settings to your application.yml file (or the equivalent entry in application.properties). I assume you are using Spring Boot.
logging:
  level:
    org:
      springframework:
        transaction:
          interceptor: trace
If you run the logic after applying the above settings, you should see log messages like the following:
2020-10-02 14:45:07,162 TRACE - Getting transaction for [com.Class.method]
2020-10-02 14:45:07,273 TRACE - Completing transaction for [com.Class.method]
Make sure the @Transactional annotation is handled properly by the TransactionInterceptor.
Note: @Transactional works through proxy objects. If you call the annotated method from within the same class, or instantiate the class directly instead of having it autowired, no proxy is involved, and hence the @Transactional annotation's expected behavior is not applied.
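As a quick illustration of the self-invocation pitfall (the class and method names here are made up for the example):

@Service
public class ReportService {

    @Transactional
    public void writeReport() {
        // Runs in a transaction when invoked through the Spring proxy.
    }

    public void runDaily() {
        // Self-invocation: this call does not go through the proxy,
        // so writeReport() runs WITHOUT a transaction here.
        writeReport();
    }
}

Calling reportService.writeReport() from another bean goes through the proxy and is transactional; calling it via runDaily() is not.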
I just want to understand what the use of SerializationFeature.WRAP_ROOT_VALUE is.
I have actually tried disabling SerializationFeature.WRAP_ROOT_VALUE on a class annotated with @XmlRootElement. In this case, even after disabling the feature, serialization still produces the root value; to avoid the root value I have to use @XmlType instead.
So I am trying to understand: what is SerializationFeature.WRAP_ROOT_VALUE actually for?
Sample code I have tried:
@XmlRootElement(name = "person")
public class Person {

    @XmlElement(name = "insert")
    private int insert;

    @XmlElement(name = "update")
    private int update;
}
The above is the POJO class I was trying to serialize, and I have also used:
ObjectMapper mapper = new ObjectMapper();
mapper.configure(SerializationFeature.WRAP_ROOT_VALUE, true);
With the above code the output is:
"person" {
"insert" : 1,
"update" : 0
}
In the same case, if I change @XmlRootElement to @XmlType in the Person class, the output is:
{
"insert" : 1,
"update" : 0
}
So I am confused: what is the use of SerializationFeature.WRAP_ROOT_VALUE if it is not giving the expected output?
I am using Jackson version 2.9.6.
After digging more into this, I found that with the help of CXF I was able to solve it by adding a small config to the applicationContext.xml file:
<bean class="org.apache.cxf.jaxrs.provider.json.JSONProvider">
    <property name="dropRootElement" value="true" />
</bean>
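For plain Jackson (without CXF), note that Jackson does not honor JAXB annotations like @XmlRootElement unless you register the JAXB annotation module (JaxbAnnotationModule from jackson-module-jaxb-annotations); on its own, the wrapper name comes from @JsonRootName or the simple class name. A minimal sketch of what WRAP_ROOT_VALUE does by itself (the class here is a made-up example):

import com.fasterxml.jackson.annotation.JsonRootName;
import com.fasterxml.jackson.databind.ObjectMapper;
import com.fasterxml.jackson.databind.SerializationFeature;

public class WrapRootValueDemo {

    @JsonRootName("person")
    static class PersonDto {
        public int insert = 1;
        public int update = 0;
    }

    public static void main(String[] args) throws Exception {
        ObjectMapper wrapping = new ObjectMapper()
            .enable(SerializationFeature.WRAP_ROOT_VALUE);
        ObjectMapper plain = new ObjectMapper();

        // Prints: {"person":{"insert":1,"update":0}}
        System.out.println(wrapping.writeValueAsString(new PersonDto()));

        // Prints: {"insert":1,"update":0}
        System.out.println(plain.writeValueAsString(new PersonDto()));
    }
}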
How can I write the following for my properties file using log4j2?
log4j.appender.stdout=org.apache.log4j.ConsoleAppender
log4j.appender.null=org.apache.log4j.varia.NullAppender
You can implement a NullAppender as a plugin. The plugin implementation looks like this:
package myPlugins;

import java.io.Serializable;

import org.apache.logging.log4j.core.Filter;
import org.apache.logging.log4j.core.Layout;
import org.apache.logging.log4j.core.LogEvent;
import org.apache.logging.log4j.core.appender.AbstractAppender;
import org.apache.logging.log4j.core.config.plugins.Plugin;
import org.apache.logging.log4j.core.config.plugins.PluginAttribute;
import org.apache.logging.log4j.core.config.plugins.PluginElement;
import org.apache.logging.log4j.core.config.plugins.PluginFactory;

// The plugin name is the element name you reference in the configuration.
@Plugin(name = "NullAppender", category = "Core", elementType = "appender", printObject = true)
public class NullAppenderDemo extends AbstractAppender {

    private static final long serialVersionUID = 1L;

    protected NullAppenderDemo(String name, Filter filter, Layout<? extends Serializable> layout, boolean ignoreExceptions) {
        super(name, filter, layout, ignoreExceptions);
    }

    @Override
    public void append(LogEvent event) {
        // Nothing is done here!
    }

    @PluginFactory
    public static NullAppenderDemo createAppender(
            @PluginAttribute("name") String name,
            @PluginAttribute("ignoreExceptions") boolean ignoreExceptions,
            @PluginElement("Layout") Layout<? extends Serializable> layout,
            @PluginElement("Filters") Filter filter) {
        if (name == null) {
            LOGGER.error("No name provided for NullAppender");
            return null;
        }
        return new NullAppenderDemo(name, filter, layout, ignoreExceptions);
    }
}
Specify the package of the plugin class in the log4j2 configuration:
<?xml version="1.0" encoding="UTF-8"?>
<Configuration packages="myPlugins">
Use the appenders (I prefer the XML format to properties, but you can do the mapping according to the manual if you prefer the properties format):
<Appenders>
    <NullAppender name="null" />
    <Console name="console">
        <PatternLayout>
            <pattern>%d %level{length=2} (%c{1.}.%M:%L) - %m%n</pattern>
        </PatternLayout>
    </Console>
</Appenders>
<Loggers>
    <root level="info">
        <appenderRef ref="console" />
    </root>
    <logger name="nullAppenderPackage" additivity="false">
        <appenderRef ref="null" />
    </logger>
</Loggers>
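Since the question asked for the properties format, a rough equivalent of the XML above could look like this (assuming the Log4j 2.6+ properties syntax; the key segments like "devnull" and "silenced" are arbitrary labels):

packages = myPlugins

appender.console.type = Console
appender.console.name = console
appender.console.layout.type = PatternLayout
appender.console.layout.pattern = %d %level{length=2} (%c{1.}.%M:%L) - %m%n

appender.devnull.type = NullAppender
appender.devnull.name = null

rootLogger.level = info
rootLogger.appenderRef.console.ref = console

logger.silenced.name = nullAppenderPackage
logger.silenced.additivity = false
logger.silenced.appenderRef.devnull.ref = null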
But actually, you can have the same effect with level="off" without NullAppender at all:
<logger name="nullAppenderPackage" level="off" />
You can find more details in the Log4j 2 manual.
Just curious, but what is the need for a NullAppender when you can just configure any appender to filter out everything?
I'm having trouble with Sitecore indexing of the general indexes "sitecore_master_index" and "sitecore_web_index", which take forever because the crawler/indexer checks all items in the database.
I imported thousands of products with a whole lot of specifications and literally have hundreds of thousands of items in the product repository.
If I could exclude the path from indexing it wouldn't have to check a million items for template exclusion.
FOLLOWUP
I implemented a custom crawler that excludes a list of paths from being indexed:
<index id="sitecore_web_index" type="Sitecore.ContentSearch.SolrProvider.SwitchOnRebuildSolrSearchIndex, Sitecore.ContentSearch.SolrProvider">
    <param desc="name">$(id)</param>
    <param desc="core">sitecore_web_index</param>
    <param desc="rebuildcore">sitecore_web_index_sec</param>
    <param desc="propertyStore" ref="contentSearch/indexConfigurations/databasePropertyStore" param1="$(id)" />
    <configuration ref="contentSearch/indexConfigurations/defaultSolrIndexConfiguration" />
    <strategies hint="list:AddStrategy">
        <strategy ref="contentSearch/indexConfigurations/indexUpdateStrategies/onPublishEndAsync" />
    </strategies>
    <locations hint="list:AddCrawler">
        <crawler type="Sitecore.ContentSearch.Utilities.Crawler.ExcludePathsItemCrawler, Sitecore.ContentSearch.Utilities">
            <Database>web</Database>
            <Root>/sitecore</Root>
            <ExcludeItemsList hint="list">
                <ProductRepository>/sitecore/content/Product Repository</ProductRepository>
            </ExcludeItemsList>
        </crawler>
    </locations>
</index>
In addition, I activated SwitchOnRebuildSolrSearchIndex, as it's awesome out-of-the-box functionality. Cheers, SC.
using System.Collections.Generic;
using System.Linq;
using Sitecore.ContentSearch;
using Sitecore.Diagnostics;

namespace Sitecore.ContentSearch.Utilities.Crawler
{
    public class ExcludePathsItemCrawler : SitecoreItemCrawler
    {
        private readonly List<string> excludeItemsList = new List<string>();

        public List<string> ExcludeItemsList
        {
            get { return excludeItemsList; }
        }

        protected override bool IsExcludedFromIndex(SitecoreIndexableItem indexable, bool checkLocation = false)
        {
            Assert.ArgumentNotNull(indexable, "item");
            if (ExcludeItemsList.Any(path => indexable.AbsolutePath.StartsWith(path)))
            {
                return true;
            }
            return base.IsExcludedFromIndex(indexable, checkLocation);
        }
    }
}
You can override the SitecoreItemCrawler class that is used by the index you want to change:
<locations hint="list:AddCrawler">
    <crawler type="Sitecore.ContentSearch.SitecoreItemCrawler, Sitecore.ContentSearch">
        <Database>master</Database>
        <Root>/sitecore</Root>
    </crawler>
</locations>
You can then add your own parameters, e.g. an ExcludeTree or even a list of ExcludedBranches.
In the implementation of the class, just override the method
public override bool IsExcludedFromIndex(IIndexable indexable)
and check whether the item is under an excluded node.
When importing large amounts of data, you should try disabling indexing temporarily; otherwise you'll run into issues with a crawler that can't keep up.
There's a great post here on disabling the index while importing data. It's for Lucene, but I'm sure you can do the same with Solr:
http://intothecore.cassidy.dk/2010/09/disabling-lucene-indexes.html
Another option could be to store your products in a separate Sitecore database rather than in the master db.
Another post from Into the Core:
http://intothecore.cassidy.dk/2009/05/working-with-multiple-content-databases.html
I am trying to implement a persistent store for my Ignite cache using CacheJdbcPojoStoreFactory. My cache store factory initialization looks like this:
@Autowired
DataSource dataSource;

@Bean
public CacheJdbcPojoStoreFactory<?, ?> cacheJdbcdPojoStorefactory() {
    CacheJdbcPojoStoreFactory<?, ?> factory = new CacheJdbcPojoStoreFactory<>();
    factory.setDataSource(dataSource);
    return factory;
}
My implementation of the cache looks like this:
CacheConfiguration personConfig = new CacheConfiguration();
personConfig.setName("personCache");
cacheJdbcdPojoStorefactory.setTypes(jdbcTypes.toArray(new JdbcType[jdbcTypes.size()]));

Collection<QueryEntity> qryEntities = new ArrayList<>();
qryEntities.add(qryEntity);
personConfig.setQueryEntities(qryEntities);
personConfig.setCacheStoreFactory((Factory<? extends CacheStore<Integer, Person>>) cacheJdbcdPojoStorefactory);

ROCCache<Integer, Person> personCache = rocCachemanager.createCache(personConfig);
personCache.put(1, p1);
personCache.put(2, p2);
(I am passing correct QueryEntities and JdbcTypes; for simplicity, I have not shown that code here.)
But when I run this code, I get the stack trace below:
Failed to initialize cache store (data source is not provided).
at org.apache.ignite.internal.util.IgniteUtils.startLifecycleAware(IgniteUtils.java:8385)
at org.apache.ignite.internal.processors.cache.GridCacheProcessor.createCache(GridCacheProcessor.java:1269)
at org.apache.ignite.internal.processors.cache.GridCacheProcessor.prepareCacheStart(GridCacheProcessor.java:1638)
at org.apache.ignite.internal.processors.cache.GridCacheProcessor.prepareCachesStart(GridCacheProcessor.java:1563)
at org.apache.ignite.internal.processors.cache.distributed.dht.preloader.GridDhtPartitionsExchangeFuture.startCaches(GridDhtPartitionsExchangeFuture.java:944)
at org.apache.ignite.internal.processors.cache.distributed.dht.preloader.GridDhtPartitionsExchangeFuture.init(GridDhtPartitionsExchangeFuture.java:511)
at org.apache.ignite.internal.processors.cache.GridCachePartitionExchangeManager$ExchangeWorker.body(GridCachePartitionExchangeManager.java:1297)
at org.apache.ignite.internal.util.worker.GridWorker.run(GridWorker.java:110)
at java.lang.Thread.run(Thread.java:745)
Caused by: class org.apache.ignite.IgniteException: Failed to initialize cache store (data source is not provided).
at org.apache.ignite.cache.store.jdbc.CacheAbstractJdbcStore.start(CacheAbstractJdbcStore.java:297)
at org.apache.ignite.internal.util.IgniteUtils.startLifecycleAware(IgniteUtils.java:8381)
... 8 more
When I debug, I can see that my data source parameters are correctly set inside the cacheJdbcdPojoStorefactory object. Where am I going wrong?
Instead of wiring the data source bean and setting it on the factory, you can provide its bean ID, and the factory will fetch it from the application context. Here is an example:
@Bean
public CacheJdbcPojoStoreFactory<?, ?> cacheJdbcdPojoStorefactory() {
    CacheJdbcPojoStoreFactory<?, ?> factory = new CacheJdbcPojoStoreFactory<>();
    factory.setDataSourceBean("data-source-bean");
    return factory;
}
The issue is that the factory will be serialized, but the data source field is transient. This makes the setDataSource() property very confusing; I think it should be deprecated and reworked.
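For completeness, the bean ID passed to setDataSourceBean() has to match a data source bean defined in the same application context (and, I believe, the node needs the ignite-spring module available so it can look the bean up). A minimal sketch, with made-up connection details:

import javax.sql.DataSource;
import org.apache.commons.dbcp.BasicDataSource;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

@Configuration
public class DataSourceConfig {

    // The bean name must match the ID given to factory.setDataSourceBean().
    @Bean("data-source-bean")
    public DataSource dataSource() {
        // Connection details below are placeholders for the example;
        // any DataSource implementation works here.
        BasicDataSource ds = new BasicDataSource();
        ds.setDriverClassName("org.postgresql.Driver");
        ds.setUrl("jdbc:postgresql://localhost:5432/mydb");
        ds.setUsername("user");
        ds.setPassword("secret");
        return ds;
    }
}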