JGit throws ClassCastException in TransportConfigCallback

I am trying to use JGit. I tried following http://www.codeaffine.com/2014/12/09/jgit-authentication/, and the following block of code throws a ClassCastException:
remoteRepository.setTransportConfigCallback(new TransportConfigCallback() {
    @Override
    public void configure(Transport transport) {
        SshTransport sshTransport = (SshTransport) transport;
        sshTransport.setSshSessionFactory(sshSessionFactory);
    }
});
Exception:
java.lang.ClassCastException: org.eclipse.jgit.transport.TransportHttp
cannot be cast to org.eclipse.jgit.transport.SshTransport
What am I missing? I am using JGit version 4.10.0.201712302008-r.

The code is only meant to handle SSH connections. If you are connecting through other protocols, you need to adjust the code to account for the fact that transport can be something other than SshTransport.
For example:
command.setTransportConfigCallback(new TransportConfigCallback() {
    @Override
    public void configure(Transport transport) {
        if (transport instanceof SshTransport) {
            SshTransport sshTransport = (SshTransport) transport;
            sshTransport.setSshSessionFactory(sshSessionFactory);
        } else if (transport instanceof HttpTransport) {
            // configure HTTP protocol specifics
        }
    }
});
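The sshSessionFactory referenced above is not shown in the question. A minimal sketch of one, assuming the JSch-backed JschConfigSessionFactory that ships with JGit 4.x (the session configuration is illustrative):

import com.jcraft.jsch.Session;
import org.eclipse.jgit.transport.JschConfigSessionFactory;
import org.eclipse.jgit.transport.OpenSshConfig;
import org.eclipse.jgit.transport.SshSessionFactory;

SshSessionFactory sshSessionFactory = new JschConfigSessionFactory() {
    @Override
    protected void configure(OpenSshConfig.Host host, Session session) {
        // illustrative: set a password here, or rely on keys in ~/.ssh
        // session.setPassword("secret");
    }
};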

When you set the URI, use the SSH protocol:
cloneCommand.setURI("ssh://user@example.com/repo.git");
For a repository on GitHub, that would be, for example, ssh://git@github.com/githubtraining/hellogitworld.git.
Refer to https://github.com/allegro/axion-release-plugin/issues/101
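Putting the pieces together, a minimal sketch of a clone over SSH (the URL and local path are illustrative; sshSessionFactory is the factory sketched earlier):

CloneCommand cloneCommand = Git.cloneRepository()
        .setURI("ssh://git@github.com/githubtraining/hellogitworld.git") // SSH URL, so the transport is an SshTransport
        .setDirectory(new File("/tmp/hellogitworld"))
        .setTransportConfigCallback(transport -> {
            if (transport instanceof SshTransport) {
                ((SshTransport) transport).setSshSessionFactory(sshSessionFactory);
            }
        });
try (Git git = cloneCommand.call()) {
    // work with the cloned repository
}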


spring.rabbitmq.listener.simple.retry.enabled=true is ignored if I manually configure the DirectMessageListenerContainer

I'm trying to activate a dead letter queue on RabbitMQ with the properties
spring.rabbitmq.listener.simple.retry.enabled=true
spring.rabbitmq.listener.simple.retry.max-attempts=10
It works fine when I use the annotation:
public class SimpleConsumer {

    @RabbitListener(queues = "messages.queue")
    public void handleMessage(String message) {
        throw new RuntimeException();
    }
}
but if I configure the MessageListenerContainer manually, it doesn't work.
Below is my configuration:
@Bean
SimpleMessageListenerContainer directMessageListenerContainer(
        ConnectionFactory connectionFactory,
        Queue simpleQueue,
        MessageConverter jsonMessageConverter,
        SimpleConsumer simpleConsumer) {
    return new SimpleMessageListenerContainer(connectionFactory) {{
        setQueues(simpleQueue);
        setMessageListener(new MessageListenerAdapter(simpleConsumer, jsonMessageConverter));
        // setDefaultRequeueRejected(false);
    }};
}
If I set setDefaultRequeueRejected to true, it retries the message indefinitely (when an exception is thrown).
If I set setDefaultRequeueRejected to false, it tries once and then hands the message to the dead letter consumer.
What does @RabbitListener(queues = "messages.queue") do under the hood to make the spring.rabbitmq.listener configuration apply?
My code is on GitHub: https://github.com/crakdelpol/dead-letter-spike.git (see branch "retry-by-configuration").
It adds a retry interceptor to the container's advice chain. See the documentation.
Spring Retry provides a couple of AOP interceptors and a great deal of flexibility to specify the parameters of the retry (number of attempts, exception types, backoff algorithm, and others). Spring AMQP also provides some convenience factory beans for creating Spring Retry interceptors in a convenient form for AMQP use cases, with strongly typed callback interfaces that you can use to implement custom recovery logic. See the Javadoc and properties of StatefulRetryOperationsInterceptor and StatelessRetryOperationsInterceptor for more detail.
...
@Bean
public StatefulRetryOperationsInterceptor interceptor() {
    return RetryInterceptorBuilder.stateful()
            .maxAttempts(5)
            .backOffOptions(1000, 2.0, 10000) // initialInterval, multiplier, maxInterval
            .build();
}
Then add the interceptor to the container adviceChain.
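For example, a minimal sketch of wiring that interceptor into a manually configured container (the queue name is taken from the question):

@Bean
SimpleMessageListenerContainer container(ConnectionFactory connectionFactory,
        StatefulRetryOperationsInterceptor interceptor) {
    SimpleMessageListenerContainer container = new SimpleMessageListenerContainer(connectionFactory);
    container.setQueueNames("messages.queue");
    // the advice chain is where @RabbitListener's auto-configured retry normally lives
    container.setAdviceChain(interceptor);
    return container;
}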
EDIT
See the documentation I pointed you to; you need to add the recoverer to the interceptor:
The MessageRecoverer is called when all retries have been exhausted. The RejectAndDontRequeueRecoverer does exactly that. The default MessageRecoverer consumes the errant message and emits a WARN message.
Here is a complete example:
@SpringBootApplication
public class So67433138Application {

    public static void main(String[] args) {
        SpringApplication.run(So67433138Application.class, args);
    }

    @Bean
    Queue queue() {
        return QueueBuilder.durable("so67433138")
                .deadLetterExchange("")
                .deadLetterRoutingKey("so67433138.dlq")
                .build();
    }

    @Bean
    Queue dlq() {
        return new Queue("so67433138.dlq");
    }

    @Bean
    SimpleMessageListenerContainer container(ConnectionFactory cf) {
        SimpleMessageListenerContainer smlc = new SimpleMessageListenerContainer(cf);
        smlc.setQueueNames("so67433138");
        smlc.setAdviceChain(RetryInterceptorBuilder.stateless()
                .maxAttempts(5)
                .backOffOptions(1_000, 2.0, 10_000)
                .recoverer(new RejectAndDontRequeueRecoverer())
                .build());
        smlc.setMessageListener(msg -> {
            System.out.println(new String(msg.getBody()));
            throw new RuntimeException("test");
        });
        return smlc;
    }

    @RabbitListener(queues = "so67433138.dlq")
    void dlq(String in) {
        System.out.println("From DLQ: " + in);
    }
}
test
test
test
test
test
2021-05-12 11:19:42.034 WARN 70667 ---[ container-1] o.s.a.r.r.RejectAndDontRequeueRecoverer : Retries exhausted for message ...
...
From DLQ: test

How do I hook into Micronaut server onError handling from a filter?

For any 4xx or 5xx response given out by my Micronaut server, I'd like to log the response status code and the endpoint it targeted. It looks like a filter would be a good place for this, but I can't seem to figure out how to plug into the onError handling.
For instance, this filter
#Filter("/**")
class RequestLoggerFilter: OncePerRequestHttpServerFilter() {
companion object {
private val log = LogManager.getLogger(RequestLoggerFilter::class.java)
}
override fun doFilterOnce(request: HttpRequest<*>, chain: ServerFilterChain): Publisher<MutableHttpResponse<*>>? {
return Publishers.then(chain.proceed(request), ResponseLogger(request))
}
class ResponseLogger(private val request: HttpRequest<*>): Consumer<MutableHttpResponse<*>> {
override fun accept(response: MutableHttpResponse<*>) {
log.info("Status: ${response.status.code} Endpoint: ${request.path}")
}
}
}
only logs on a successful response and not on 4xx or 5xx responses.
How would I get this to hook into the onError handling?
You could do the following: create your own ApplicationException (extending RuntimeException) to represent your application errors, and in particular how they map to HTTP error codes. Your exception could hold the status code as well.
Example:
class BadRequestException extends ApplicationException {

    public HttpStatus getStatus() {
        return HttpStatus.BAD_REQUEST;
    }
}
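The ApplicationException base class is assumed by the answer and is not part of Micronaut; a minimal sketch of what it could look like:

public abstract class ApplicationException extends RuntimeException {

    public ApplicationException() {
        super();
    }

    public ApplicationException(String message) {
        super(message);
    }

    // Each subclass maps itself to an HTTP status code.
    public abstract HttpStatus getStatus();
}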
You can have multiple such ExceptionHandlers for different purposes.
@Slf4j
@Produces
@Singleton
@Requires(classes = {ApplicationException.class, ExceptionHandler.class})
public class ApplicationExceptionHandler implements ExceptionHandler<ApplicationException, HttpResponse> {

    @Override
    public HttpResponse handle(final HttpRequest request, final ApplicationException exception) {
        log.error("Application exception message={}, cause={}", exception.getMessage(), exception.getCause());
        final String message = exception.getMessage();
        final String code = exception.getClass().getSimpleName();
        final ErrorCode error = new ErrorCode(message, code);
        log.info("Status: {} Endpoint: {}", exception.getStatus(), request.getPath());
        return HttpResponse.status(exception.getStatus()).body(error);
    }
}
If you are trying to handle Micronaut's native exceptions, like the 400 (Bad Request) produced by ConstraintExceptionHandler, you will need to replace those beans (via Micronaut's @Replaces) to do that.
I've posted an example of how to handle ConstraintExceptionHandler here.
If you only want to handle the responses themselves, you could use an @Error handler mapping each response code (the example below sits on a @Controller, so I'm not sure whether it works elsewhere, even with the global flag):
@Error(status = HttpStatus.NOT_FOUND, global = true)
public HttpResponse notFound(HttpRequest request) {
    <...>
}
Example from Micronaut documentation.
Below is code I used for adding custom CORS headers to error responses; in doOnError you can log errors.
#Filter("/**")
public class ResponseCORSAdder implements HttpServerFilter {
#Override
public Publisher<MutableHttpResponse<?>> doFilter(HttpRequest<?> request, ServerFilterChain chain) {
return this.trace(request)
.switchMap(aBoolean -> chain.proceed(request))
.doOnError(error -> {
if (error instanceof MutableHttpResponse<?>) {
MutableHttpResponse<?> res = (MutableHttpResponse<?>) error;
addCorsHeaders(res);
}
})
.doOnNext(res -> addCorsHeaders(res));
}
private MutableHttpResponse<?> addCorsHeaders(MutableHttpResponse<?> res) {
return res
.header("Access-Control-Allow-Origin", "*")
.header("Access-Control-Allow-Methods", "OPTIONS,POST,GET")
.header("Access-Control-Allow-Credentials", "true");
}
private Flowable<Boolean> trace(HttpRequest<?> request) {
return Flowable.fromCallable(() -> {
// trace logic here, potentially performing I/O
return true;
}).subscribeOn(Schedulers.io());
}
}
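Combining the ideas above for the original question, a hedged sketch (Java, RxJava 2 as in the CORS filter) of a filter that logs 4xx/5xx responses as well as raw errors:

@Filter("/**")
public class StatusLoggerFilter implements HttpServerFilter {

    private static final Logger LOG = LoggerFactory.getLogger(StatusLoggerFilter.class);

    @Override
    public Publisher<MutableHttpResponse<?>> doFilter(HttpRequest<?> request, ServerFilterChain chain) {
        return Flowable.fromPublisher(chain.proceed(request))
                // error responses already produced by exception handlers pass through here
                .doOnNext(response -> {
                    if (response.getStatus().getCode() >= 400) {
                        LOG.info("Status: {} Endpoint: {}", response.getStatus().getCode(), request.getPath());
                    }
                })
                // exceptions that escape the handlers surface as onError signals
                .doOnError(error -> LOG.info("Error for endpoint {}: {}", request.getPath(), error.toString()));
    }
}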

stop polling files when rabbitmq is down: spring integration

I'm working on a project where we poll files from an SFTP server and stream them out as objects onto a RabbitMQ queue. When RabbitMQ is down, the poller still polls and deletes the file from the server, so the file is lost while being sent to the queue. I'm using ExpressionEvaluatingRequestHandlerAdvice to remove the file on successful transformation. My code looks like this:
@Bean
public SessionFactory<ChannelSftp.LsEntry> sftpSessionFactory() {
    DefaultSftpSessionFactory factory = new DefaultSftpSessionFactory(true);
    factory.setHost(sftpProperties.getSftpHost());
    factory.setPort(sftpProperties.getSftpPort());
    factory.setUser(sftpProperties.getSftpPathUser());
    factory.setPassword(sftpProperties.getSftpPathPassword());
    factory.setAllowUnknownKeys(true);
    return new CachingSessionFactory<>(factory);
}

@Bean
public SftpRemoteFileTemplate sftpRemoteFileTemplate() {
    return new SftpRemoteFileTemplate(sftpSessionFactory());
}

@Bean
@InboundChannelAdapter(channel = TransformerChannel.TRANSFORMER_OUTPUT, autoStartup = "false",
        poller = @Poller(value = "customPoller"))
public MessageSource<InputStream> sftpMessageSource() {
    SftpStreamingMessageSource messageSource = new SftpStreamingMessageSource(sftpRemoteFileTemplate, null);
    messageSource.setRemoteDirectory(sftpProperties.getSftpDirPath());
    messageSource.setFilter(new SftpPersistentAcceptOnceFileListFilter(new SimpleMetadataStore(), "streaming"));
    messageSource.setFilter(new SftpSimplePatternFileListFilter("*.txt"));
    return messageSource;
}

@Bean
@Transformer(inputChannel = TransformerChannel.TRANSFORMER_OUTPUT,
        outputChannel = SFTPOutputChannel.SFTP_OUTPUT,
        adviceChain = "deleteAdvice")
public org.springframework.integration.transformer.Transformer transformer() {
    return new SFTPTransformerService("UTF-8");
}

@Bean
public ExpressionEvaluatingRequestHandlerAdvice deleteAdvice() {
    ExpressionEvaluatingRequestHandlerAdvice advice = new ExpressionEvaluatingRequestHandlerAdvice();
    advice.setOnSuccessExpressionString(
            "@sftpRemoteFileTemplate.remove(headers['file_remoteDirectory'] + headers['file_remoteFile'])");
    advice.setPropagateEvaluationFailures(false);
    return advice;
}
I don't want the files to get removed/polled from the remote SFTP server when the RabbitMQ server is down. How can I achieve this?
UPDATE
Apologies for not mentioning that I'm using the Spring Cloud Stream Rabbit binder. Here is the transformer service:
public class SFTPTransformerService extends StreamTransformer {

    public SFTPTransformerService(String charset) {
        super(charset);
    }

    @Override
    protected Object doTransform(Message<?> message) throws Exception {
        String fileName = message.getHeaders().get("file_remoteFile", String.class);
        Object fileContents = super.doTransform(message);
        return new customFileDTO(fileName, (String) fileContents);
    }
}
UPDATE-2
I added a TransactionSynchronizationFactory on the customPoller as suggested. Now it doesn't poll files when the Rabbit server is down, but when the server is up, it keeps polling the same file over and over again, and I cannot figure out why. I guess I cannot use PollerSpec because I'm on version 4.3.2.
#Bean(name = "customPoller")
public PollerMetadata pollerMetadataDTX(StartStopTrigger startStopTrigger,
CustomTriggerAdvice customTriggerAdvice) {
PollerMetadata pollerMetadata = new PollerMetadata();
pollerMetadata.setAdviceChain(Collections.singletonList(customTriggerAdvice));
pollerMetadata.setTrigger(startStopTrigger);
pollerMetadata.setMaxMessagesPerPoll(Long.valueOf(sftpProperties.getMaxMessagePoll()));
ExpressionEvaluatingTransactionSynchronizationProcessor syncProcessor =
new ExpressionEvaluatingTransactionSynchronizationProcessor();
syncProcessor.setBeanFactory(applicationContext.getAutowireCapableBeanFactory());
syncProcessor.setBeforeCommitChannel(
applicationContext.getBean(TransformerChannel.TRANSFORMER_OUTPUT, MessageChannel.class));
syncProcessor
.setAfterCommitChannel(
applicationContext.getBean(SFTPOutputChannel.SFTP_OUTPUT, MessageChannel.class));
syncProcessor.setAfterCommitExpression(new SpelExpressionParser().parseExpression(
"#sftpRemoteFileTemplate.remove(headers['file_remoteDirectory'] + headers['file_remoteFile'])"));
DefaultTransactionSynchronizationFactory defaultTransactionSynchronizationFactory =
new DefaultTransactionSynchronizationFactory(syncProcessor);
pollerMetadata.setTransactionSynchronizationFactory(defaultTransactionSynchronizationFactory);
return pollerMetadata;
}
I don't know if you need this info, but my CustomTriggerAdvice and StartStopTrigger look like this:
@Component
public class CustomTriggerAdvice extends AbstractMessageSourceAdvice {

    @Autowired
    private StartStopTrigger startStopTrigger;

    @Override
    public boolean beforeReceive(MessageSource<?> source) {
        return true;
    }

    @Override
    public Message<?> afterReceive(Message<?> result, MessageSource<?> source) {
        if (result == null) {
            if (startStopTrigger.getStart()) {
                startStopTrigger.stop();
            }
        } else {
            if (!startStopTrigger.getStart()) {
                startStopTrigger.stop();
            }
        }
        return result;
    }
}
public class StartStopTrigger implements Trigger {

    private PeriodicTrigger startTrigger;
    private boolean start;

    public StartStopTrigger(PeriodicTrigger startTrigger, boolean start) {
        this.startTrigger = startTrigger;
        this.start = start;
    }

    @Override
    public Date nextExecutionTime(TriggerContext triggerContext) {
        if (!start) {
            return null;
        }
        start = true;
        return startTrigger.nextExecutionTime(triggerContext);
    }

    public void stop() {
        start = false;
    }

    public void start() {
        start = true;
    }

    public boolean getStart() {
        return this.start;
    }
}
Well, it would be great to see your SFTPTransformerService to determine how it is possible for the onSuccessExpression to run when there should be an exception because the broker is down.
You should also not only throw an exception and skip the delete, but consider adding a RequestHandlerRetryAdvice to re-send the file to RabbitMQ: https://docs.spring.io/spring-integration/docs/5.0.6.RELEASE/reference/html/messaging-endpoints-chapter.html#retry-advice
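For illustration, a minimal sketch of such an advice bean (the backoff values are assumptions); it would then be referenced from the adviceChain of the endpoint that sends to RabbitMQ:

@Bean
public RequestHandlerRetryAdvice retryAdvice() {
    RequestHandlerRetryAdvice advice = new RequestHandlerRetryAdvice();
    RetryTemplate retryTemplate = new RetryTemplate();
    ExponentialBackOffPolicy backOff = new ExponentialBackOffPolicy();
    backOff.setInitialInterval(1_000); // assumption: first retry after 1s
    backOff.setMultiplier(2.0);
    backOff.setMaxInterval(10_000);    // assumption: cap the backoff at 10s
    retryTemplate.setBackOffPolicy(backOff);
    advice.setRetryTemplate(retryTemplate);
    return advice;
}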
UPDATE
So, since Gary guessed that you use Spring Cloud Stream to send messages to the Rabbit Binder after your internal process (it's a pity you didn't share that information originally), you need to take a look at the Binder's error handling on the matter: https://docs.spring.io/spring-cloud-stream/docs/Elmhurst.RELEASE/reference/htmlsingle/#_retry_with_the_rabbitmq_binder
It is true that the ExpressionEvaluatingRequestHandlerAdvice is applied only to the SFTPTransformerService and nothing more. The downstream error (in the Binder) is not included in this process.
UPDATE 2
Yeah... I think Gary is right: we have no choice but to configure a TransactionSynchronizationFactory at the customPoller level instead of that ExpressionEvaluatingRequestHandlerAdvice.
The DefaultTransactionSynchronizationFactory can be configured with the ExpressionEvaluatingTransactionSynchronizationProcessor, which has a similar goal to the mentioned ExpressionEvaluatingRequestHandlerAdvice, but at the transaction level, which covers your whole flow starting with the SFTP Channel Adapter and ending at the Rabbit Binder level with the attempts to send to AMQP.
See Reference Manual for more information: https://docs.spring.io/spring-integration/reference/html/transactions.html#transaction-synchronization.
The point with ExpressionEvaluatingRequestHandlerAdvice (and any AbstractRequestHandlerAdvice) is that its boundary is only around the handleRequestMessage() method, and therefore only around the component on which it is declared.

Exception thrown for a large number of Vert.x connections to Redis

I'm trying to simulate a heavy-load scenario with Redis (default config only).
To keep it simple, when multi is issued, I immediately execute and then close the connection.
import io.vertx.core.*;
import io.vertx.core.json.Json;
import io.vertx.redis.RedisClient;
import io.vertx.redis.RedisOptions;
import io.vertx.redis.RedisTransaction;

class MyVerticle extends AbstractVerticle {

    private int index;

    public MyVerticle(int index) {
        this.index = index;
    }

    private void run2() {
        RedisClient client = RedisClient.create(vertx, new RedisOptions().setHost("127.0.0.1"));
        RedisTransaction tr = client.transaction();
        tr.multi(ev2 -> {
            if (ev2.succeeded()) {
                tr.exec(ev3 -> {
                    if (ev3.succeeded()) {
                        tr.close(i -> {
                            if (i.failed()) {
                                System.out.println("FAIL TR CLOSE");
                                client.close(j -> {
                                    if (j.failed()) {
                                        System.out.println("FAIL CLOSE");
                                    }
                                });
                            }
                        });
                    } else {
                        System.out.println("FAIL EXEC");
                        tr.close(i -> {
                            if (i.failed()) {
                                System.out.println("FAIL TR CLOSE");
                                client.close(j -> {
                                    if (j.failed()) {
                                        System.out.println("FAIL CLOSE");
                                    }
                                });
                            }
                        });
                    }
                });
            } else {
                System.out.println("FAIL MULTI");
                tr.close(i -> {
                    if (i.failed()) {
                        client.close(j -> {
                            if (j.failed()) {
                                System.out.println("FAIL CLOSE");
                            }
                        });
                    }
                });
            }
        });
    }

    @Override
    public void start(Future<Void> startFuture) {
        long timerID = vertx.setPeriodic(1, new Handler<Long>() {
            public void handle(Long aLong) {
                run2();
            }
        });
    }

    @Override
    public void stop(Future stopFuture) throws Exception {
        System.out.println("MyVerticle stopped!");
    }
}

public class Periodic {

    public static void main(String[] args) {
        Vertx vertx = Vertx.vertx();
        for (int i = 0; i < 8000; i++) {
            vertx.deployVerticle(new MyVerticle(i));
        }
    }
}
Although connections are closed properly, I still get warning errors.
All of them are thrown even before I put more logic inside multi.
2017-06-20 16:29:49 WARNING io.netty.util.concurrent.DefaultPromise notifyListener0 An exception was thrown by io.vertx.core.net.impl.ChannelProvider$$Lambda$61/1899599620.operationComplete()
java.lang.IllegalStateException: Uh oh! Event loop context executing with wrong thread! Expected null got Thread[globalEventExecutor-1-2,5,main]
at io.vertx.core.impl.ContextImpl.lambda$wrapTask$2(ContextImpl.java:316)
at io.vertx.core.impl.ContextImpl.executeFromIO(ContextImpl.java:193)
at io.vertx.core.net.impl.NetClientImpl.failed(NetClientImpl.java:258)
at io.vertx.core.net.impl.NetClientImpl.lambda$connect$5(NetClientImpl.java:233)
at io.vertx.core.net.impl.ChannelProvider.lambda$connect$0(ChannelProvider.java:42)
at io.netty.util.concurrent.DefaultPromise.notifyListener0(DefaultPromise.java:507)
at io.netty.util.concurrent.DefaultPromise.notifyListenersNow(DefaultPromise.java:481)
at io.netty.util.concurrent.DefaultPromise.access$000(DefaultPromise.java:34)
at io.netty.util.concurrent.DefaultPromise$1.run(DefaultPromise.java:431)
at io.netty.util.concurrent.GlobalEventExecutor$TaskRunner.run(GlobalEventExecutor.java:233)
at io.netty.util.concurrent.DefaultThreadFactory$DefaultRunnableDecorator.run(DefaultThreadFactory.java:144)
at java.lang.Thread.run(Thread.java:745)
Is there a reason for this error?
You'll continue to get errors, because you're testing the wrong things.
First of all, verticles are not fat coroutines. They are thin actors. Creating 500 of them won't speed things up, and will probably slow everything down, because the event loop still needs to switch between them.
Second, if you want to prepare for 2K concurrent requests, put your Vert.x application on one machine and run wrk or a similar tool over the network.
Third, your Redis is also on the same machine. I hope that won't be the case in production, since Redis will compete with Vert.x for CPU.
Once everything is set up correctly, I believe you'll be able to handle 10K requests quite easily. I've seen Vert.x handle 8K requests on modest machines with PostgreSQL.
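As a hedged illustration of the first point, a bounded deployment (this assumes MyVerticle is given a no-arg constructor so it can be deployed by name):

import io.vertx.core.DeploymentOptions;
import io.vertx.core.Vertx;

public class BoundedDeploy {
    public static void main(String[] args) {
        Vertx vertx = Vertx.vertx();
        // one verticle instance per core instead of 8000 ad-hoc instances
        int cores = Runtime.getRuntime().availableProcessors();
        vertx.deployVerticle(MyVerticle.class.getName(),
                new DeploymentOptions().setInstances(cores));
    }
}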

Why is Policy.getPolicy() considered to retain a static reference to the context class loader, and how can it cause a memory leak?

I was just reading some source code from org.apache.cxf.common.logging.JDKBugHacks and also from http://svn.apache.org/viewvc/tomcat/trunk/java/org/apache/catalina/core/JreMemoryLeakPreventionListener.java. To keep my question clear and not too broad, :)
I'll ask about just one piece of code in them.
// Calling getPolicy retains a static reference to the context
// class loader.
try {
    // Policy.getPolicy();
    Class<?> policyClass = Class.forName("javax.security.auth.Policy");
    Method method = policyClass.getMethod("getPolicy");
    method.invoke(null);
} catch (Throwable e) {
    // ignore
}
But I don't understand this comment: "Calling getPolicy retains a static reference to the context class loader". And they are trying to use JDKBugHacks to work around it.
UPDATE
I overlooked the static block part. Here it is; this is the key. It already caches the policy, so why cache contextClassLoader as well? The Javadoc claims @deprecated as of JDK version 1.4 -- Replaced by java.security.Policy.
I have double-checked the code of java/security/Policy.java, and it really did remove the cached classloader. So my doubt is valid! :)
@Deprecated
public abstract class Policy {

    private static Policy policy;
    private static ClassLoader contextClassLoader;

    static {
        contextClassLoader = java.security.AccessController.doPrivileged
            (new java.security.PrivilegedAction<ClassLoader>() {
                public ClassLoader run() {
                    return Thread.currentThread().getContextClassLoader();
                }
            });
    };
I'll also add the getPolicy source code:
public static Policy getPolicy() {
    java.lang.SecurityManager sm = System.getSecurityManager();
    if (sm != null) sm.checkPermission(new AuthPermission("getPolicy"));
    return getPolicyNoCheck();
}

static Policy getPolicyNoCheck() {
    if (policy == null) {
        synchronized (Policy.class) {
            if (policy == null) {
                String policy_class = null;
                policy_class = java.security.AccessController.doPrivileged
                    (new java.security.PrivilegedAction<String>() {
                        public String run() {
                            return java.security.Security.getProperty
                                ("auth.policy.provider");
                        }
                    });
                if (policy_class == null) {
                    policy_class = "com.sun.security.auth.PolicyFile";
                }
                try {
                    final String finalClass = policy_class;
                    policy = java.security.AccessController.doPrivileged
                        (new java.security.PrivilegedExceptionAction<Policy>() {
                            public Policy run() throws ClassNotFoundException,
                                    InstantiationException,
                                    IllegalAccessException {
                                return (Policy) Class.forName
                                    (finalClass, true, contextClassLoader).newInstance();
                            }
                        });
                } catch (Exception e) {
                    throw new SecurityException
                        (sun.security.util.ResourcesMgr.getString
                            ("unable to instantiate Subject-based policy"));
                }
            }
        }
    }
    return policy;
}
Actually, digging deeper, I found something interesting. Someone recently reported a bug to Apache CXF about this piece of code in org.apache.cxf.common.logging.JDKBugHacks.
In order to disable URL caching, JDKBugHacks runs:
URL url = new URL("jar:file://dummy.jar!/");
URLConnection uConn = url.openConnection();
uConn.setDefaultUseCaches(false);
When the java.protocol.handler.pkgs system property is set, this can lead to deadlocks between the system classloader and the file protocol Handler in particular situations (for instance, if the file protocol URLStreamHandler is a singleton).
Besides that, the code above is really only there for the sake of setting defaultUseCaches to false, so actually opening a connection can be avoided to speed up execution.
So the fix is
URL url = new URL("jar:file://dummy.jar!/");
URLConnection uConn = new URLConnection(url) {
    @Override
    public void connect() throws IOException {
        // NOOP
    }
};
uConn.setDefaultUseCaches(false);
It's normal for the JDK or Apache CXF to have some minor bugs, and normally they will fix them.
javax.security.auth.login.Configuration has the same issue as Policy, but it is not deprecated.
The Policy class in Java 6 contains a static reference to a classloader that is initialized to the current thread's context classloader on first access to the class:
private static ClassLoader contextClassLoader;
static {
    contextClassLoader =
        (ClassLoader) java.security.AccessController.doPrivileged
            (new java.security.PrivilegedAction() {
                public Object run() {
                    return Thread.currentThread().getContextClassLoader();
                }
            });
};
Tomcat's lifecycle listener makes sure to initialize this class from within a known environment, where the context classloader is set to the system classloader. If this class were first accessed from within a webapp, it would retain a reference to the webapp's classloader. This would prevent the webapp's classes from getting garbage collected, creating a leak of perm gen space.
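For illustration, a simplified sketch of the pattern such a listener uses: temporarily swap the thread's context classloader to the system classloader before triggering the class initialization, so the static field never captures a webapp's loader:

ClassLoader original = Thread.currentThread().getContextClassLoader();
try {
    Thread.currentThread().setContextClassLoader(ClassLoader.getSystemClassLoader());
    // "true" forces the static initializer to run now, capturing the safe loader
    Class.forName("javax.security.auth.Policy", true, ClassLoader.getSystemClassLoader());
} catch (Throwable t) {
    // ignore: the class is deprecated and absent on newer JDKs
} finally {
    Thread.currentThread().setContextClassLoader(original);
}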