Guide to run Apache Ignite on Kubernetes - Server Mode

One of my backend services runs Ignite in server mode, with persistence, fully replicated, inside Kubernetes. I followed the documentation on the website and also the example here. The application pods start, but the app instances neither connect to each other nor replicate data between each other.
Part of the error:
Caused by: java.io.FileNotFoundException:
https://kubernetes.default.svc.cluster.local:443/api/v1/namespaces/default/endpoints/ignite
at java.base/sun.net.www.protocol.http.HttpURLConnection.getInputStream0(Unknown Source) ~[na:na]
at java.base/sun.net.www.protocol.http.HttpURLConnection.getInputStream(Unknown Source) ~[na:na]
at java.base/sun.net.www.protocol.https.HttpsURLConnectionImpl.getInputStream(Unknown Source) ~[na:na]
at org.apache.ignite.internal.kubernetes.connection.KubernetesServiceAddressResolver.getServiceAddresses(KubernetesServiceAddressResolver.java:109) ~[ignite-kubernetes-2.11.0.jar:2.11.0]
... 90 common frames omitted
2021-10-23 05:52:31.895 ERROR 1 --- [ main] o.a.i.spi.discovery.tcp.TcpDiscoverySpi : Failed to get registered addresses from IP finder (retrying every 2000ms; change 'reconnectDelay' to configure the frequency of retries) [maxTimeout=0]
org.apache.ignite.spi.IgniteSpiException: Failed to retrieve Ignite pods IP addresses.
at org.apache.ignite.spi.discovery.tcp.ipfinder.kubernetes.TcpDiscoveryKubernetesIpFinder.getRegisteredAddresses(TcpDiscoveryKubernetesIpFinder.java:78) ~[ignite-kubernetes-2.11.0.jar:2.11.0]
at org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi.registeredAddresses(TcpDiscoverySpi.java:2052) ~[ignite-core-2.11.0.jar:2.11.0]
at org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi.resolvedAddresses(TcpDiscoverySpi.java:1987) ~[ignite-core-2.11.0.jar:2.11.0]
at org.apache.ignite.spi.discovery.tcp.ServerImpl.sendJoinRequestMessage(ServerImpl.java:1293) ~[ignite-core-2.11.0.jar:2.11.0]
at org.apache.ignite.spi.discovery.tcp.ServerImpl.joinTopology(ServerImpl.java:1121) ~[ignite-core-2.11.0.jar:2.11.0]
at org.apache.ignite.spi.discovery.tcp.ServerImpl.spiStart(ServerImpl.java:473) ~[ignite-core-2.11.0.jar:2.11.0]
at org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi.spiStart(TcpDiscoverySpi.java:2207) ~[ignite-core-2.11.0.jar:2.11.0]
at org.apache.ignite.internal.managers.GridManagerAdapter.startSpi(GridManagerAdapter.java:278) ~[ignite-core-2.11.0.jar:2.11.0]
at org.apache.ignite.internal.managers.discovery.GridDiscoveryManager.start(GridDiscoveryManager.java:980) ~[ignite-core-2.11.0.jar:2.11.0]
at org.apache.ignite.internal.IgniteKernal.startManager(IgniteKernal.java:1985) ~[ignite-core-2.11.0.jar:2.11.0]
at org.apache.ignite.internal.IgniteKernal.start(IgniteKernal.java:1331) ~[ignite-core-2.11.0.jar:2.11.0]
at org.apache.ignite.internal.IgnitionEx$IgniteNamedInstance.start0(IgnitionEx.java:2141) ~[ignite-core-2.11.0.jar:2.11.0]
at org.apache.ignite.internal.IgnitionEx$IgniteNamedInstance.start(IgnitionEx.java:1787) ~[ignite-core-2.11.0.jar:2.11.0]
at org.apache.ignite.internal.IgnitionEx.start0(IgnitionEx.java:1172) ~[ignite-core-2.11.0.jar:2.11.0]
at org.apache.ignite.internal.IgnitionEx.start(IgnitionEx.java:668) ~[ignite-core-2.11.0.jar:2.11.0]
at org.apache.ignite.internal.IgnitionEx.start(IgnitionEx.java:590) ~[ignite-core-2.11.0.jar:2.11.0]
And another:
2021-10-23 05:47:30.579 ERROR 1 --- [ main] o.a.i.spi.discovery.tcp.TcpDiscoverySpi : Failed to get registered addresses from IP finder (retrying every 2000ms; change 'reconnectDelay' to configure the frequency of retries) [maxTimeout=0]
org.apache.ignite.spi.IgniteSpiException: Failed to retrieve Ignite pods IP addresses.
at org.apache.ignite.spi.discovery.tcp.ipfinder.kubernetes.TcpDiscoveryKubernetesIpFinder.getRegisteredAddresses(TcpDiscoveryKubernetesIpFinder.java:78) ~[ignite-kubernetes-2.11.0.jar:2.11.0]
at org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi.registeredAddresses(TcpDiscoverySpi.java:2052) ~[ignite-core-2.11.0.jar:2.11.0]
at org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi.resolvedAddresses(TcpDiscoverySpi.java:1987) ~[ignite-core-2.11.0.jar:2.11.0]
at org.apache.ignite.spi.discovery.tcp.ServerImpl.sendJoinRequestMessage(ServerImpl.java:1293) ~[ignite-core-2.11.0.jar:2.11.0]
at org.apache.ignite.spi.discovery.tcp.ServerImpl.joinTopology(ServerImpl.java:1121) ~[ignite-core-2.11.0.jar:2.11.0]
at org.apache.ignite.spi.discovery.tcp.ServerImpl.spiStart(ServerImpl.java:473) ~[ignite-core-2.11.0.jar:2.11.0]
at org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi.spiStart(TcpDiscoverySpi.java:2207) ~[ignite-core-2.11.0.jar:2.11.0]
at org.apache.ignite.internal.managers.GridManagerAdapter.startSpi(GridManagerAdapter.java:278) ~[ignite-core-2.11.0.jar:2.11.0]
at org.apache.ignite.internal.managers.discovery.GridDiscoveryManager.start(GridDiscoveryManager.java:980) ~[ignite-core-2.11.0.jar:2.11.0]
at org.apache.ignite.internal.IgniteKernal.startManager(IgniteKernal.java:1985) ~[ignite-core-2.11.0.jar:2.11.0]
at org.apache.ignite.internal.IgniteKernal.start(IgniteKernal.java:1331) ~[ignite-core-2.11.0.jar:2.11.0]
at org.apache.ignite.internal.IgnitionEx$IgniteNamedInstance.start0(IgnitionEx.java:2141) ~[ignite-core-2.11.0.jar:2.11.0]
at org.apache.ignite.internal.IgnitionEx$IgniteNamedInstance.start(IgnitionEx.java:1787) ~[ignite-core-2.11.0.jar:2.11.0]
at org.apache.ignite.internal.IgnitionEx.start0(IgnitionEx.java:1172) ~[ignite-core-2.11.0.jar:2.11.0]
at org.apache.ignite.internal.IgnitionEx.start(IgnitionEx.java:668) ~[ignite-core-2.11.0.jar:2.11.0]
at org.apache.ignite.internal.IgnitionEx.start(IgnitionEx.java:590) ~[ignite-core-2.11.0.jar:2.11.0]
at org.apache.ignite.Ignition.start(Ignition.java:328) ~[ignite-core-2.11.0.jar:2.11.0]
Below is the Java-based Ignite configuration:
package myapp;
import java.util.Collections;
import java.util.UUID;
import java.util.stream.Stream;
import org.apache.ignite.Ignite;
import org.apache.ignite.IgniteEvents;
import org.apache.ignite.Ignition;
import org.apache.ignite.cache.CacheAtomicityMode;
import org.apache.ignite.cache.CacheMode;
import org.apache.ignite.cache.CacheRebalanceMode;
import org.apache.ignite.cluster.ClusterState;
import org.apache.ignite.configuration.CacheConfiguration;
import org.apache.ignite.configuration.DataStorageConfiguration;
import org.apache.ignite.configuration.IgniteConfiguration;
import org.apache.ignite.configuration.WALMode;
import org.apache.ignite.events.DiscoveryEvent;
import org.apache.ignite.events.EventType;
import org.apache.ignite.kubernetes.configuration.KubernetesConnectionConfiguration;
import org.apache.ignite.lang.IgnitePredicate;
import org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi;
import org.apache.ignite.spi.discovery.tcp.ipfinder.kubernetes.TcpDiscoveryKubernetesIpFinder;
import org.apache.ignite.spi.discovery.tcp.ipfinder.vm.TcpDiscoveryVmIpFinder;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import lombok.extern.slf4j.Slf4j;
@Configuration
@Slf4j
public class IgniteConfig {

    @Bean
    public IgniteConfiguration igniteConfiguration(CacheConfiguration<String, State>[] cacheConfiguration,
            DataStorageConfiguration storageCfg, TcpDiscoverySpi discoverySpi) {
        IgniteConfiguration cfg = new IgniteConfiguration();
        cfg.setIgniteInstanceName("schedules-cache-instance-" + UUID.randomUUID().toString());
        cfg.setCacheConfiguration(cacheConfiguration);
        cfg.setDataStorageConfiguration(storageCfg);
        cfg.setClientMode(false);
        cfg.setDiscoverySpi(discoverySpi);
        return cfg;
    }

    @Bean
    // creating the set of caches beforehand
    public CacheConfiguration<String, State>[] cacheConfig(IgniteProps igniteProps) {
        return Stream.of(Cache.values()).map(c -> {
            CacheConfiguration<String, Schedule> cacheCfg = new CacheConfiguration<String, Schedule>(c.getName());
            cacheCfg.setRebalanceDelay(0);
            cacheCfg.setAtomicityMode(CacheAtomicityMode.ATOMIC);
            cacheCfg.setRebalanceMode(CacheRebalanceMode.SYNC);
            cacheCfg.setCacheMode(CacheMode.REPLICATED);
            cacheCfg.setGroupName("schedules");
            return cacheCfg;
        }).toArray(c -> new CacheConfiguration[c]);
    }

    @Bean
    public TcpDiscoverySpi discovery(IgniteProps igniteProps) {
        TcpDiscoverySpi discoverySpi = null;
        if (igniteProps.isKubernetesDeployment()) {
            discoverySpi = kubernetesDiscovery(igniteProps);
        } else {
            discoverySpi = localDiscovery();
        }
        return discoverySpi;
    }

    private TcpDiscoverySpi kubernetesDiscovery(IgniteProps igniteProps) {
        log.info("++ creating k8s based discovery, {}", igniteProps);
        TcpDiscoverySpi spi = new TcpDiscoverySpi();
        TcpDiscoveryKubernetesIpFinder k8sIpFinder = new TcpDiscoveryKubernetesIpFinder();
        KubernetesConnectionConfiguration kubernetesConnectionConfiguration = new KubernetesConnectionConfiguration();
        kubernetesConnectionConfiguration.setNamespace("default");
        kubernetesConnectionConfiguration.setServiceName("state-manager");
        spi.setIpFinder(k8sIpFinder);
        log.info("++ creating k8s based discovery, {}", spi);
        return spi;
    }

    private TcpDiscoverySpi localDiscovery() {
        TcpDiscoverySpi discoverySpi = new TcpDiscoverySpi();
        discoverySpi.setLocalPort(48500);
        TcpDiscoveryVmIpFinder firstIpFinder = new TcpDiscoveryVmIpFinder();
        firstIpFinder.setAddresses(Collections.singletonList("127.0.0.1:48500..48520"));
        discoverySpi.setIpFinder(firstIpFinder);
        return discoverySpi;
    }

    @Bean
    public DataStorageConfiguration storageConfig(IgniteProps igniteProps) {
        DataStorageConfiguration storageCfg = new DataStorageConfiguration();
        storageCfg.getDefaultDataRegionConfiguration().setPersistenceEnabled(true);
        storageCfg.setStoragePath(igniteProps.getStoragePath());
        if (igniteProps.isWALenabled()) {
            storageCfg.setWalMode(WALMode.FSYNC);
            storageCfg.setWalPath(igniteProps.getWalStoragePath());
        }
        return storageCfg;
    }

    @Bean
    public Ignite ignite(IgniteConfiguration igniteConfiguration) {
        Ignite ignite = Ignition.start(igniteConfiguration);
        ignite.cluster().baselineAutoAdjustTimeout(15000);
        IgniteEvents events = ignite.events(ignite.cluster().forCacheNodes("schedules"));
        events.localListen(new IgnitePredicate<DiscoveryEvent>() {
            @Override
            public boolean apply(DiscoveryEvent e) {
                ignite.cluster().baselineAutoAdjustEnabled(false);
                log.info(">> new node joined {} {} {} - {}", e.name(), e.eventNode().id(), e.message(), e.localOrder());
                ignite.cluster().state(ClusterState.ACTIVE);
                return true;
            }
        }, EventType.EVT_NODE_JOINED);
        events.localListen(new IgnitePredicate<DiscoveryEvent>() {
            @Override
            public boolean apply(DiscoveryEvent e) {
                ignite.cluster().baselineAutoAdjustEnabled(false);
                log.info(">> node left {} {} {} - {}", e.name(), e.eventNode().id(), e.message(), e.localOrder());
                ignite.cluster().state(ClusterState.ACTIVE);
                return true;
            }
        }, EventType.EVT_NODE_LEFT);
        ignite.cluster().baselineAutoAdjustEnabled(true);
        ignite.cluster().state(ClusterState.ACTIVE);
        return ignite;
    }
}
What is missing in the configuration?
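One hint is in the stack trace itself: the IP finder is querying .../endpoints/ignite, which is the default service name, even though the code builds a KubernetesConnectionConfiguration with namespace default and service state-manager. That configuration object is created but never handed to the finder. A minimal sketch of the wiring, assuming the Ignite 2.9+ constructor of TcpDiscoveryKubernetesIpFinder that accepts a KubernetesConnectionConfiguration (the pod's service account must also be allowed to read Endpoints in the namespace):

```java
private TcpDiscoverySpi kubernetesDiscovery() {
    KubernetesConnectionConfiguration conn = new KubernetesConnectionConfiguration();
    conn.setNamespace("default");
    conn.setServiceName("state-manager"); // must match the Kubernetes Service name

    // Pass the connection configuration into the finder; the no-arg
    // constructor falls back to the default service name "ignite",
    // which matches the 404 seen in the stack trace above.
    TcpDiscoveryKubernetesIpFinder ipFinder = new TcpDiscoveryKubernetesIpFinder(conn);

    TcpDiscoverySpi spi = new TcpDiscoverySpi();
    spi.setIpFinder(ipFinder);
    return spi;
}
```

This is a sketch against the ignite-kubernetes 2.11 API, not a verified fix for this exact cluster; RBAC for the service account and the Service definition still need to be checked separately.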

Related

GRPC Java load-balancing - start method is not being called from NameResolver

We are trying to implement gRPC load balancing in Java with Consul service discovery.
Version info: grpc-java v1.30.0
The problem is that when the app runs, the start method from our custom NameResolver class is not being called.
Here is our code.
Here is the custom NameResolver class (its start method is not being called):
I have put a breakpoint at the start method to check, and it is not being called.
package com.bht.saigonparking.common.loadbalance;
import java.net.InetSocketAddress;
import java.net.SocketAddress;
import java.net.URI;
import java.util.ArrayList;
import java.util.List;
import org.apache.logging.log4j.Level;
import org.springframework.cloud.client.ServiceInstance;
import org.springframework.cloud.client.discovery.DiscoveryClient;
import com.bht.saigonparking.common.util.LoggingUtil;
import io.grpc.Attributes;
import io.grpc.EquivalentAddressGroup;
import io.grpc.NameResolver;
import lombok.Getter;
/**
 *
 * @author bht
 */
@Getter
public final class SaigonParkingNameResolver extends NameResolver {

    private final URI consulURI;
    private final String serviceId;
    private final DiscoveryClient discoveryClient;

    private Listener listener;
    private List<ServiceInstance> serviceInstances;

    public SaigonParkingNameResolver(DiscoveryClient discoveryClient,
                                     URI consulURI,
                                     String serviceId,
                                     int pauseInSeconds) {
        this.consulURI = consulURI;
        this.serviceId = serviceId;
        this.discoveryClient = discoveryClient;

        /* run connection check timer */
        ConnectionCheckTimer connectionCheckTimer = new ConnectionCheckTimer(this, pauseInSeconds);
        connectionCheckTimer.runTimer();
    }

    @Override
    public String getServiceAuthority() {
        return consulURI.getAuthority();
    }

    @Override
    public void start(Listener2 listener) {
        this.listener = listener;
        loadServiceInstances();
    }

    @Override
    public void shutdown() {
        // implement shutdown...
    }

    void loadServiceInstances() {
        List<EquivalentAddressGroup> addressList = new ArrayList<>();
        serviceInstances = discoveryClient.getInstances(serviceId);
        if (serviceInstances == null || serviceInstances.isEmpty()) {
            LoggingUtil.log(Level.WARN, "loadServiceInstances", "Warning",
                    String.format("no serviceInstances of %s", serviceId));
            return;
        }
        serviceInstances.forEach(serviceInstance -> {
            String host = serviceInstance.getHost();
            int port = serviceInstance.getPort();
            LoggingUtil.log(Level.INFO, "loadServiceInstances", serviceId, String.format("%s:%d", host, port));

            List<SocketAddress> socketAddressList = new ArrayList<>();
            socketAddressList.add(new InetSocketAddress(host, port));
            addressList.add(new EquivalentAddressGroup(socketAddressList));
        });
        if (!addressList.isEmpty()) {
            listener.onAddresses(addressList, Attributes.EMPTY);
        }
    }
}
Here is the custom NameResolverProvider class:
package com.bht.saigonparking.common.loadbalance;
import java.net.URI;
import org.springframework.cloud.client.discovery.DiscoveryClient;
import io.grpc.NameResolver;
import io.grpc.NameResolverProvider;
import lombok.AllArgsConstructor;
/**
 * @author bht
 */
@AllArgsConstructor
public final class SaigonParkingNameResolverProvider extends NameResolverProvider {

    private final String serviceId;
    private final DiscoveryClient discoveryClient;
    private final int pauseInSeconds;

    @Override
    protected boolean isAvailable() {
        return true;
    }

    @Override
    protected int priority() {
        return 5;
    }

    @Override
    public String getDefaultScheme() {
        return "consul";
    }

    @Override
    public NameResolver newNameResolver(URI targetUri, NameResolver.Args args) {
        return new SaigonParkingNameResolver(discoveryClient, targetUri, serviceId, pauseInSeconds);
    }
}
Here is a class from the client:
package com.bht.saigonparking.service.auth.configuration;
import java.util.concurrent.TimeUnit;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.beans.factory.annotation.Qualifier;
import org.springframework.beans.factory.annotation.Value;
import org.springframework.cloud.client.discovery.DiscoveryClient;
import org.springframework.context.annotation.Bean;
import org.springframework.stereotype.Component;
import com.bht.saigonparking.api.grpc.user.UserServiceGrpc;
import com.bht.saigonparking.common.interceptor.SaigonParkingClientInterceptor;
import com.bht.saigonparking.common.loadbalance.SaigonParkingNameResolverProvider;
import io.grpc.ManagedChannel;
import io.grpc.ManagedChannelBuilder;
import lombok.AllArgsConstructor;
/**
 *
 * @author bht
 */
@Component
@AllArgsConstructor(onConstructor = @__(@Autowired))
public final class ChannelConfiguration {

    private final SaigonParkingClientInterceptor clientInterceptor;

    @Bean("userResolver")
    public SaigonParkingNameResolverProvider userServiceNameResolverProvider(@Value("${connection.user-service.id}") String serviceId,
                                                                             @Value("${connection.refresh-period-in-seconds}") int refreshPeriod,
                                                                             @Autowired DiscoveryClient discoveryClient) {
        return new SaigonParkingNameResolverProvider(serviceId, discoveryClient, refreshPeriod);
    }

    /**
     *
     * A channel is the abstraction used to connect to a service endpoint.
     *
     * Note for gRPC service stubs:
     * .newStub(channel) --> non-blocking/asynchronous stub
     * .newBlockingStub(channel) --> blocking/synchronous stub
     */
    @Bean
    public ManagedChannel managedChannel(@Value("${spring.cloud.consul.host}") String host,
                                         @Value("${spring.cloud.consul.port}") int port,
                                         @Value("${connection.idle-timeout}") int timeout,
                                         @Value("${connection.max-inbound-message-size}") int maxInBoundMessageSize,
                                         @Value("${connection.max-inbound-metadata-size}") int maxInBoundMetadataSize,
                                         @Value("${connection.load-balancing-policy}") String loadBalancingPolicy,
                                         @Qualifier("userResolver") SaigonParkingNameResolverProvider nameResolverProvider) {
        return ManagedChannelBuilder
                .forTarget("consul://" + host + ":" + port)   // build channel to server with server's address
                .keepAliveWithoutCalls(false)                 // do not send keep-alive pings when there are no active calls
                .idleTimeout(timeout, TimeUnit.MILLISECONDS)  // channel goes idle after this long without RPCs
                .maxInboundMetadataSize(maxInBoundMetadataSize * 1024 * 1024) // value in MB converted to bytes --> max header size
                .maxInboundMessageSize(maxInBoundMessageSize * 1024 * 1024)   // value in MB converted to bytes --> max message size
                .defaultLoadBalancingPolicy(loadBalancingPolicy) // set load balancing policy for channel
                .nameResolverFactory(nameResolverProvider)    // use Consul service discovery for name resolution
                .intercept(clientInterceptor)                 // add internal credential authentication
                .usePlaintext()                               // communicate internally in plain text
                .build();                                     // build the channel
    }

    /* asynchronous user service stub */
    @Bean
    public UserServiceGrpc.UserServiceStub userServiceStub(@Autowired ManagedChannel channel) {
        return UserServiceGrpc.newStub(channel);
    }

    /* synchronous user service stub */
    @Bean
    public UserServiceGrpc.UserServiceBlockingStub userServiceBlockingStub(@Autowired ManagedChannel channel) {
        return UserServiceGrpc.newBlockingStub(channel);
    }
}
Is there anything wrong in our code?
We look forward to hearing from you soon!
We thought that start would be called when the channel was created. That was wrong!
Sorry, we had misunderstood gRPC load balancing.
start is in fact called on a new service call.
Thanks!
Saigon Parking team.
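For readers hitting the same confusion: gRPC Java channels are lazy by default, so a custom NameResolver's start is only invoked once the channel actually needs a connection, not at build time. A short sketch (the target URI and the nameResolverProvider variable are illustrative placeholders, assumed to come from the code above):

```java
// Building the channel does NOT trigger name resolution:
ManagedChannel channel = ManagedChannelBuilder
        .forTarget("consul://consul-host:8500")
        .nameResolverFactory(nameResolverProvider) // the custom provider from the question
        .usePlaintext()
        .build();                                  // resolver.start not called yet

// Resolution runs when the channel leaves IDLE, e.g. on the first RPC,
// or when a connection is requested explicitly:
channel.getState(true); // requestConnection=true kicks off name resolution
```

ManagedChannel.getState(true) is the documented way to force an idle channel to connect; otherwise simply making the first stub call has the same effect.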

Unable to connect to Redis; nested exception is io.lettuce.core.RedisConnectionException using ReactiveRedisTemplate

I am new to reactive programming. I need to connect to Redis to save and fetch some data. The Redis instance is hosted in the cloud.
I am using the Lettuce connection factory to establish the connection.
When establishing the connection to Redis, the request fails.
Here is my Redis configuration class :
package com.sap.slh.tax.attributes.determination.springwebfluxdemo.config;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import org.springframework.beans.factory.annotation.Value;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.data.redis.connection.ReactiveRedisConnectionFactory;
import org.springframework.data.redis.connection.RedisPassword;
import org.springframework.data.redis.connection.RedisStandaloneConfiguration;
import org.springframework.data.redis.connection.lettuce.LettuceClientConfiguration;
import org.springframework.data.redis.connection.lettuce.LettuceConnectionFactory;
import org.springframework.data.redis.core.ReactiveRedisOperations;
import org.springframework.data.redis.core.ReactiveRedisTemplate;
import org.springframework.data.redis.serializer.Jackson2JsonRedisSerializer;
import org.springframework.data.redis.serializer.RedisSerializationContext;
import org.springframework.data.redis.serializer.StringRedisSerializer;
import org.springframework.scheduling.annotation.EnableAsync;
import com.sap.slh.tax.attributes.determination.springwebfluxdemo.model.TaxDetails;
import com.sap.slh.tax.attributes.determination.springwebfluxdemo.model.TaxLine;
import com.sap.slh.tax.attributes.determination.springwebfluxdemo.util.JsonUtil;
@Configuration
@EnableAsync
public class RedisConfig {

    private static final Logger log = LoggerFactory.getLogger(RedisConfig.class);

    @Value("${vcap.services.redis.credentials.hostname:10.11.241.101}")
    private String host;

    @Value("${vcap.services.redis.credentials.port:36516}")
    private int port;

    @Value("${vcap.services.redis.credentials.password:123456788}")
    private String password;

    @Bean
    public ReactiveRedisConnectionFactory reactiveRedisConnectionFactory() {
        RedisStandaloneConfiguration redisStandaloneConfiguration = new RedisStandaloneConfiguration(host, port);
        redisStandaloneConfiguration.setPassword(RedisPassword.of(password));
        redisStandaloneConfiguration.setDatabase(0);
        log.error("Redis standalone configuration{}", JsonUtil.toJsonString(redisStandaloneConfiguration));
        LettuceClientConfiguration clientConfig = LettuceClientConfiguration.builder().build();
        LettuceConnectionFactory lettuceConnectionFactory = new LettuceConnectionFactory(redisStandaloneConfiguration, clientConfig);
        lettuceConnectionFactory.afterPropertiesSet();
        return lettuceConnectionFactory;
    }

    @Bean
    ReactiveRedisOperations<TaxDetails, TaxLine> redisOperations(
            ReactiveRedisConnectionFactory reactiveRedisConnectionFactory) {
        Jackson2JsonRedisSerializer<TaxDetails> serializer = new Jackson2JsonRedisSerializer<>(TaxDetails.class);
        Jackson2JsonRedisSerializer<TaxLine> serializer1 = new Jackson2JsonRedisSerializer<>(TaxLine.class);
        RedisSerializationContext.RedisSerializationContextBuilder<TaxDetails, TaxLine> builder = RedisSerializationContext
                .newSerializationContext(new StringRedisSerializer());
        RedisSerializationContext<TaxDetails, TaxLine> context = builder.key(serializer).value(serializer1).build();
        return new ReactiveRedisTemplate<>(reactiveRedisConnectionFactory, context);
    }
}
And here is my lookup service class, which actually communicates with Redis during the request:
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.data.redis.core.ReactiveRedisOperations;
import org.springframework.stereotype.Service;
import com.sap.slh.tax.attributes.determination.springwebfluxdemo.model.RedisRepo;
import com.sap.slh.tax.attributes.determination.springwebfluxdemo.model.TaxDetails;
import com.sap.slh.tax.attributes.determination.springwebfluxdemo.model.TaxLine;
import com.sap.slh.tax.attributes.determination.springwebfluxdemo.util.JsonUtil;
import reactor.core.publisher.Flux;
import reactor.core.publisher.Mono;
@Service
public class RedisTaxLineLookUpService {

    private static final Logger log = LoggerFactory.getLogger(RedisTaxLineLookUpService.class);

    @Autowired
    private ReactiveRedisOperations<TaxDetails, TaxLine> redisOperations;

    public Flux<TaxLine> get(TaxDetails taxDetails) {
        log.info("going to call redis to fetch tax lines{}", JsonUtil.toJsonString(taxDetails));
        return redisOperations.keys(taxDetails).flatMap(redisOperations.opsForValue()::get);
    }

    public Mono<RedisRepo> set(RedisRepo redisRepo) {
        log.info("going to call redis to save tax lines{}", JsonUtil.toJsonString(redisRepo.getTaxDetails()));
        return redisOperations.opsForValue().set(redisRepo.getTaxDetails(), redisRepo.getTaxLine())
                .map(__ -> redisRepo);
    }
}
Stack trace:
2020-03-26T16:27:54.513+0000 [APP/PROC/WEB/0] OUT org.springframework.data.redis.RedisConnectionFailureException: Unable to connect to Redis; nested exception is io.lettuce.core.RedisConnectionException: Unable to connect to 10.11.241.101:36516 | at org.springframework.data.redis.connection.lettuce.LettuceConnectionFactory$SharedConnection.getNativeConnection(LettuceConnectionFactory.java:1199) | Suppressed: reactor.core.publisher.FluxOnAssembly$OnAssemblyException: | Error has been observed at the following site(s): | |_ checkpoint ? Handler com.sap.slh.tax.attributes.determination.springwebfluxdemo.controller.TaxLinesDeterminationController#saveTaxLines(RedisRepo) [DispatcherHandler] | |_ checkpoint ? HTTP POST "/tax/lines/save/" [ExceptionHandlingWebHandler] | Stack trace: | at org.springframework.data.redis.connection.lettuce.LettuceConnectionFactory$SharedConnection.getNativeConnection(LettuceConnectionFactory.java:1199) | at org.springframework.data.redis.connection.lettuce.LettuceConnectionFactory$SharedConnection.getConnection(LettuceConnectionFactory.java:1178) | at org.springframework.data.redis.connection.lettuce.LettuceConnectionFactory.getSharedReactiveConnection(LettuceConnectionFactory.java:952) | at org.springframework.data.redis.connection.lettuce.LettuceConnectionFactory.getReactiveConnection(LettuceConnectionFactory.java:429) | at org.springframework.data.redis.connection.lettuce.LettuceConnectionFactory.getReactiveConnection(LettuceConnectionFactory.java:94) | at org.springframework.data.redis.core.ReactiveRedisTemplate.lambda$doInConnection$0(ReactiveRedisTemplate.java:198) | at reactor.core.publisher.MonoSupplier.call(MonoSupplier.java:85) | at reactor.core.publisher.FluxUsingWhen.subscribe(FluxUsingWhen.java:80) | at reactor.core.publisher.InternalMonoOperator.subscribe(InternalMonoOperator.java:55) | at reactor.core.publisher.MonoFlatMap$FlatMapMain.onNext(MonoFlatMap.java:150) | at 
reactor.core.publisher.Operators$MonoSubscriber.complete(Operators.java:1705) | at reactor.core.publisher.MonoFlatMap$FlatMapInner.onNext(MonoFlatMap.java:241) | at reactor.core.publisher.FluxOnErrorResume$ResumeSubscriber.onNext(FluxOnErrorResume.java:73) | at reactor.core.publisher.FluxPeekFuseable$PeekFuseableSubscriber.onNext(FluxPeekFuseable.java:203) | at reactor.core.publisher.FluxPeekFuseable$PeekFuseableSubscriber.onNext(FluxPeekFuseable.java:203) | at reactor.core.publisher.Operators$MonoSubscriber.complete(Operators.java:1705) | at reactor.core.publisher.MonoIgnoreThen$ThenAcceptInner.onNext(MonoIgnoreThen.java:296) | at reactor.core.publisher.Operators$MonoSubscriber.complete(Operators.java:1705) | at reactor.core.publisher.MonoFlatMap$FlatMapMain.onNext(MonoFlatMap.java:144) | at reactor.core.publisher.Operators$MonoSubscriber.complete(Operators.java:1705) | at reactor.core.publisher.MonoZip$ZipCoordinator.signal(MonoZip.java:247) | at reactor.core.publisher.MonoZip$ZipInner.onNext(MonoZip.java:329) | at reactor.core.publisher.MonoPeekTerminal$MonoTerminalPeekSubscriber.onNext(MonoPeekTerminal.java:173) | at reactor.core.publisher.FluxDefaultIfEmpty$DefaultIfEmptySubscriber.onNext(FluxDefaultIfEmpty.java:92) | at reactor.core.publisher.FluxSwitchIfEmpty$SwitchIfEmptySubscriber.onNext(FluxSwitchIfEmpty.java:67) | at reactor.core.publisher.FluxOnErrorResume$ResumeSubscriber.onNext(FluxOnErrorResume.java:73) | at reactor.core.publisher.Operators$MonoSubscriber.complete(Operators.java:1705) | at reactor.core.publisher.MonoFlatMap$FlatMapMain.onNext(MonoFlatMap.java:144) | at reactor.core.publisher.FluxContextStart$ContextStartSubscriber.onNext(FluxContextStart.java:103) | at reactor.core.publisher.FluxMapFuseable$MapFuseableConditionalSubscriber.onNext(FluxMapFuseable.java:287) | at reactor.core.publisher.FluxFilterFuseable$FilterFuseableConditionalSubscriber.onNext(FluxFilterFuseable.java:330) | at 
reactor.core.publisher.Operators$MonoSubscriber.complete(Operators.java:1705) | at reactor.core.publisher.MonoCollect$CollectSubscriber.onComplete(MonoCollect.java:160) | at reactor.core.publisher.FluxMap$MapSubscriber.onComplete(FluxMap.java:136) | at reactor.core.publisher.FluxPeek$PeekSubscriber.onComplete(FluxPeek.java:252) | at reactor.core.publisher.FluxMap$MapSubscriber.onComplete(FluxMap.java:136) | at reactor.netty.channel.FluxReceive.terminateReceiver(FluxReceive.java:419) | at reactor.netty.channel.FluxReceive.drainReceiver(FluxReceive.java:209) | at reactor.netty.channel.FluxReceive.onInboundComplete(FluxReceive.java:367) | at reactor.netty.channel.ChannelOperations.onInboundComplete(ChannelOperations.java:363) | at reactor.netty.http.server.HttpServerOperations.onInboundNext(HttpServerOperations.java:489) | at reactor.netty.channel.ChannelOperationsHandler.channelRead(ChannelOperationsHandler.java:90) | at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:377)
Any suggestions or answers would be highly helpful ! Thanks in Advance !
I use this RedisConfig.java and it works for me:
@Configuration
@ConfigurationProperties(prefix = "spring.redis")
@Setter
public class RedisConfig {

    private String host;
    private String password;

    @Bean
    @Primary
    public ReactiveRedisConnectionFactory reactiveRedisConnectionFactory(RedisConfiguration defaultRedisConfig) {
        LettuceClientConfiguration clientConfig = LettuceClientConfiguration.builder()
                .useSsl().build();
        return new LettuceConnectionFactory(defaultRedisConfig, clientConfig);
    }

    @Bean
    public RedisConfiguration defaultRedisConfig() {
        RedisStandaloneConfiguration config = new RedisStandaloneConfiguration();
        config.setHostName(host);
        config.setPassword(RedisPassword.of(password));
        return config;
    }
}
I had a similar problem with Redis running on AWS (an EC2 instance). It worked after:
sudo vi /etc/redis/redis.conf
Comment out the line: bind 127.0.0.1 ::1
Set the line: protected-mode no
Set the line: supervised systemd
sudo systemctl restart redis.service
Also check the AWS security groups, just in case.
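The three redis.conf edits above can also be applied non-interactively. A sketch, assuming the stock config lines shipped with Debian/Ubuntu Redis packages (on a real host, pass /etc/redis/redis.conf and run under sudo, then restart with systemctl restart redis.service):

```shell
# Apply the redis.conf changes described above to the file given as $1.
fix_redis_conf() {
  local conf="$1"
  sed -i 's/^bind 127.0.0.1 ::1/# bind 127.0.0.1 ::1/' "$conf"  # stop binding to loopback only
  sed -i 's/^protected-mode yes/protected-mode no/' "$conf"     # allow remote (non-loopback) clients
  sed -i 's/^supervised no/supervised systemd/' "$conf"         # let systemd supervise redis
}
```

Note that protected-mode no without a password is only safe if the security group restricts who can reach the port.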
I updated my RedisConfig class as follows:
import java.time.Duration;
import java.util.List;
import java.util.stream.Collectors;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.context.annotation.Primary;
import org.springframework.data.redis.connection.ReactiveRedisConnectionFactory;
import org.springframework.data.redis.connection.RedisConfiguration;
import org.springframework.data.redis.connection.RedisNode;
import org.springframework.data.redis.connection.RedisPassword;
import org.springframework.data.redis.connection.RedisSentinelConfiguration;
import org.springframework.data.redis.connection.RedisStandaloneConfiguration;
import org.springframework.data.redis.connection.lettuce.LettuceClientConfiguration;
import org.springframework.data.redis.connection.lettuce.LettuceConnectionFactory;
import org.springframework.data.redis.connection.lettuce.LettucePoolingClientConfiguration;
import org.springframework.data.redis.core.ReactiveRedisOperations;
import org.springframework.data.redis.core.ReactiveRedisTemplate;
import org.springframework.data.redis.serializer.Jackson2JsonRedisSerializer;
import org.springframework.data.redis.serializer.RedisSerializationContext;
import org.springframework.data.redis.serializer.StringRedisSerializer;
import com.sap.slh.tax.attributes.determination.springwebfluxdemo.model.TaxDetails;
import com.sap.slh.tax.attributes.determination.springwebfluxdemo.model.TaxLine;
import io.lettuce.core.RedisURI;
import io.pivotal.cfenv.core.CfEnv;
@Configuration
public class RedisConfig {

    CfEnv cfEnv = new CfEnv();
    String tag = "redis";
    String redisHost = cfEnv.findCredentialsByTag(tag).getHost();

    @Bean
    @Primary
    public ReactiveRedisConnectionFactory reactiveRedisConnectionFactory(RedisConfiguration defaultRedisConfig) {
        LettuceClientConfiguration clientConfig = LettucePoolingClientConfiguration.builder()
                .commandTimeout(Duration.ofMillis(60000)).build();
        return new LettuceConnectionFactory(defaultRedisConfig, clientConfig);
    }

    @Bean
    public RedisConfiguration defaultRedisConfig() {
        if (redisHost != null) {
            // RedisStandaloneConfiguration config = new RedisStandaloneConfiguration("127.0.0.1", 6379);
            RedisStandaloneConfiguration config = new RedisStandaloneConfiguration();
            String redisPort = cfEnv.findCredentialsByTag(tag).getPort();
            String redisPassword = cfEnv.findCredentialsByTag(tag).getPassword();
            config.setHostName(redisHost);
            config.setPassword(RedisPassword.of(redisPassword));
            config.setPort(Integer.parseInt(redisPort));
            config.setDatabase(2);
            return config;
        } else {
            RedisSentinelConfiguration config = new RedisSentinelConfiguration();
            String uri = cfEnv.findCredentialsByTag(tag).getUri();
            RedisURI redisURI = RedisURI.create(uri);
            config.master(redisURI.getSentinelMasterId());
            List<RedisNode> nodes = redisURI.getSentinels().stream()
                    .map(redisUri -> populateNode(redisUri.getHost(), redisUri.getPort())).collect(Collectors.toList());
            nodes.forEach(node -> config.addSentinel(node));
            config.setPassword(RedisPassword.of(redisURI.getPassword()));
            config.setDatabase(2);
            return config;
        }
    }

    @Bean
    public ReactiveRedisOperations<TaxDetails, TaxLine> reactiveRedisTemplate(
            ReactiveRedisConnectionFactory factory) {
        StringRedisSerializer keySerializer = new StringRedisSerializer();
        Jackson2JsonRedisSerializer<TaxLine> valueSerializer = new Jackson2JsonRedisSerializer<>(
                TaxLine.class);
        Jackson2JsonRedisSerializer<TaxDetails> valueSerializer1 = new Jackson2JsonRedisSerializer<>(
                TaxDetails.class);
        RedisSerializationContext.RedisSerializationContextBuilder<TaxDetails, TaxLine> builder = RedisSerializationContext
                .newSerializationContext(keySerializer);
        RedisSerializationContext<TaxDetails, TaxLine> context = builder.key(valueSerializer1).value(valueSerializer).build();
        return new ReactiveRedisTemplate<>(factory, context);
    }

    private RedisNode populateNode(String host, Integer port) {
        return new RedisNode(host, port);
    }
}
Dependency for CfEnv:
<dependency>
    <groupId>io.pivotal.cfenv</groupId>
    <artifactId>java-cfenv-boot</artifactId>
    <version>2.1.1.RELEASE</version>
</dependency>

Spring data rest application not getting data from database after implementing redis caching

I am working on implementing Redis caching for my Spring Data REST (HAL) API.
Requirement: cache all data to Redis after the first call to the database and perform subsequent operations on Redis. For example, adding a record should first happen in the cache and then be inserted into the database in a transaction.
I implemented caching for one of my JpaRepository interfaces, but when I do an implicit findAll by calling the /states endpoint, I get no records, even though I have 10k records in the database.
Below is my config:
MyServicesApplication.java
import org.springframework.boot.ApplicationRunner;
import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;
import org.springframework.cache.annotation.EnableCaching;
import org.springframework.context.annotation.Bean;
import org.springframework.data.jpa.repository.config.EnableJpaAuditing;
import org.springframework.data.redis.repository.configuration.EnableRedisRepositories;
import javax.persistence.EntityManagerFactory;
@SpringBootApplication
@EnableJpaAuditing
@EnableCaching
@EnableRedisRepositories
public class MyServicesApplication {
    public static void main(String[] args) {
        SpringApplication.run(MyServicesApplication.class, args);
    }

    @Bean
    ApplicationRunner init(EntityManagerFactory entityManagerFactory) {
        return args -> {
        };
    }
}
application.yml
springdoc:
  api-docs:
    path: /api-docs
  swagger-ui:
    path: /swagger-ui-custom.html
spring:
  main:
    allow-bean-definition-overriding: true
  jackson:
    serialization:
      write-dates-as-timestamps: false
      FAIL_ON_EMPTY_BEANS: false
    date-format: MM/dd/yyyy
    # time-zone: EST
  datasource:
    driverClassName: org.postgresql.Driver
    password: mysecretpassword
    url: jdbc:postgresql://localhost:5432/postgres?currentSchema=public
    username: postgres
  jpa:
    properties:
      hibernate:
        dialect: org.hibernate.dialect.PostgreSQLDialect
        enable_lazy_load_no_trans: true
        jdbc:
          lob:
            non_contextual_creation: true
          max_size: 2
          min_size: 2
        temp:
          use_jdbc_metadata_defaults: false
    show-sql: true
  cache:
    redis:
      cache-null-values: false
      time-to-live: 600000
      use-key-prefix: true
    type: redis
  redis:
    host: localhost
    port: 6379
    password:
JPA entity State.java
import org.springframework.data.redis.core.RedisHash;
import javax.persistence.*;
import java.time.LocalDate;
import java.util.Optional;
@Entity
@Table(name = "my_state")
@RedisHash
public class State extends BaseAuditDetails {
    @Id
    @org.springframework.data.annotation.Id
    @Column(name = "cde_st", nullable = false, length = 2)
    private String cdeSt;

    @Basic(optional = false)
    @Column(name = "nam_st", nullable = false, length = 30)
    private String namSt;

    @Basic
    @Column(name = "dte_inact", table = "state")
    private LocalDate dteInact;

    @Basic(optional = false)
    @Column(name = "ind_dst_obsv", nullable = false)
    private Character indDstObsv;

    public String getCdeSt() {
        return cdeSt;
    }

    public void setCdeSt(String cdeSt) {
        this.cdeSt = cdeSt;
    }

    public Optional<String> getNamSt() {
        return Optional.ofNullable(namSt);
    }

    public void setNamSt(String namSt) {
        this.namSt = namSt;
    }

    public Optional<LocalDate> getDteInact() {
        return Optional.ofNullable(dteInact);
    }

    public void setDteInact(LocalDate dteInact) {
        this.dteInact = dteInact;
    }

    public Optional<Character> getIndDstObsv() {
        return Optional.ofNullable(indDstObsv);
    }

    public void setIndDstObsv(Character indDstObsv) {
        this.indDstObsv = indDstObsv;
    }
}
MyStateRepository.java
import com.devstartshop.myapp.entities.State;
import org.springframework.cache.annotation.Cacheable;
import org.springframework.data.domain.Page;
import org.springframework.data.domain.Pageable;
import org.springframework.data.domain.Sort;
import org.springframework.data.jpa.repository.JpaRepository;
import org.springframework.data.rest.core.annotation.RepositoryRestResource;
import java.util.List;
import java.util.Optional;
/**
* Generated by Spring Data Generator on 16/03/2020
*/
@RepositoryRestResource
public interface MyStateRepository extends JpaRepository<State, String> {
    @Override
    @Cacheable(value = "state")
    List<State> findAll();

    @Override
    @Cacheable(value = "state")
    List<State> findAll(Sort sort);

    @Override
    @Cacheable(value = "state")
    State getOne(String s);

    @Override
    @Cacheable(value = "state")
    Page<State> findAll(Pageable pageable);

    @Override
    @Cacheable(value = "state")
    Optional<State> findById(String s);
}
Dependencies in build.gradle
dependencies {
implementation 'org.springframework.boot:spring-boot-starter-web'
implementation 'org.springdoc:springdoc-openapi-core:1.1.49'
implementation 'org.springdoc:springdoc-openapi-ui:1.1.49'
implementation 'io.github.classgraph:classgraph:4.8.44'
implementation 'com.querydsl:querydsl-jpa:4.2.2'
// implementation 'org.slf4j:slf4j-log4j12:1.7.30'
implementation 'org.springframework.boot:spring-boot-devtools'
implementation 'org.springframework.boot:spring-boot-starter-data-jpa'
implementation 'org.springframework.boot:spring-boot-starter-data-rest'
implementation 'org.springframework.boot:spring-boot-starter-data-redis'
implementation 'org.apache.commons:commons-pool2'
implementation 'org.springframework.data:spring-data-rest-hal-explorer'
implementation 'org.projectlombok:lombok:1.18.10'
implementation 'org.hibernate:hibernate-entitymanager:5.4.10.Final'
implementation 'org.hibernate.javax.persistence:hibernate-jpa-2.1-api:1.0.2.Final'
implementation 'com.github.kuros:random-jpa:1.0.3'
implementation 'com.fasterxml.jackson.datatype:jackson-datatype-jsr310:2.10.0'
runtimeOnly 'com.h2database:h2:1.4.200'
runtimeOnly 'org.postgresql:postgresql:42.2.9'
testImplementation 'org.springframework.boot:spring-boot-starter-test'
compileOnly 'com.querydsl:querydsl-apt:4.2.2'
compileOnly 'org.projectlombok:lombok:1.18.10'
annotationProcessor 'org.projectlombok:lombok:1.18.10'
}
I figured out that using the @RedisHash annotation only routes operations to the Redis database.
So I took a different approach: use @Cacheable on all GET calls and @CacheEvict on all other calls responsible for making changes to the database.
@RedisHash is probably meant for using Redis as a transactional database, which can then be persisted to a durable database like Postgres by some other process.
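The cache-aside semantics that @Cacheable/@CacheEvict provide can be sketched in plain Java. This is a minimal illustration, not the Spring implementation: the class and method names are hypothetical, and two HashMaps stand in for Redis and the database.

```java
import java.util.HashMap;
import java.util.Map;

// Cache-aside sketch: reads go through the cache; writes evict the cached
// entry so the next read reloads fresh data from the database.
class CacheAsideSketch {
    private final Map<String, String> cache = new HashMap<>();    // stands in for Redis
    private final Map<String, String> database = new HashMap<>(); // stands in for Postgres
    int dbReads = 0; // counts how often the "database" is actually hit

    // Analogous to @Cacheable: return the cached value, loading it on a miss.
    String find(String key) {
        return cache.computeIfAbsent(key, k -> {
            dbReads++;
            return database.get(k);
        });
    }

    // Analogous to @CacheEvict on a mutating call: write to the database,
    // then drop the now-stale cache entry.
    void save(String key, String value) {
        database.put(key, value);
        cache.remove(key);
    }
}
```

With this pattern the cache never becomes the system of record, which avoids the empty-/states problem above: a cold cache simply falls through to the database instead of returning nothing.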

javax.validation Compatibility issue with WAS 8.5 server

I am trying to deploy my application on a WAS 8.5 server, but I see something very weird happening.
When I use the jar below while building, the application builds along with my WSDL without any issue, but it fails during deployment on the WAS 8.5 server.
WAS 8.5.5.3 uses JDK 1.6.0.
Dependency used:
<dependency>
<groupId>javax.validation</groupId>
<artifactId>validation-api</artifactId>
<version>1.0.0.GA</version>
<scope>provided</scope>
</dependency>
My investigation concludes that the WSDL generated from the endpoint service during the Maven build needs the validation API, but the scope has to stay provided, as these jars are supplied by the WAS server at runtime.
The above configuration gives the error below.
[7/24/16 10:10:25:501 IST] 00000063 WASWSDLGenera E WSWS7054E: The Web Services Description Language (WSDL) file could not be generated for the com.hex.rbm.erds.ws.endpoint.impl.EntitySearchServiceEndpoint Web service implementation class because of the following error: java.lang.ArrayStoreException
[7/24/16 10:10:25:507 IST] 00000063 WSModuleDescr E WSWS7027E: JAX-WS Service Descriptions could not be correctly built because of the following error: javax.xml.ws.WebServiceException: WSWS7054E: The Web Services Description Language (WSDL) file could not be generated for the com.hex.rbm.erds.ws.endpoint.impl.EntitySearchServiceEndpoint Web service implementation class because of the following error: java.lang.ArrayStoreException
at com.ibm.ws.websvcs.wsdl.WASWSDLGenerator.generateWsdl(WASWSDLGenerator.java:268)
at org.apache.axis2.jaxws.description.impl.EndpointDescriptionImpl.generateWSDL(EndpointDescriptionImpl.java:2084)
at org.apache.axis2.jaxws.description.impl.EndpointDescriptionImpl.<init>(EndpointDescriptionImpl.java:449)
at org.apache.axis2.jaxws.description.impl.ServiceDescriptionImpl.<init>(ServiceDescriptionImpl.java:401)
at org.apache.axis2.jaxws.description.impl.ServiceDescriptionImpl.<init>(ServiceDescriptionImpl.java:297)
at org.apache.axis2.jaxws.description.impl.DescriptionFactoryImpl.createServiceDescriptionFromDBCMap(DescriptionFactoryImpl.java:277)
at org.apache.axis2.jaxws.description.DescriptionFactory.createServiceDescriptionFromDBCMap(DescriptionFactory.java:524)
at com.ibm.ws.websvcs.desc.WSModuleDescriptorImpl.buildJAXWSServices(WSModuleDescriptorImpl.java:1345)
at com.ibm.ws.websvcs.desc.WSModuleDescriptorImpl._containsJAXWSWebServices(WSModuleDescriptorImpl.java:519)
at com.ibm.ws.websvcs.desc.WSModuleDescriptorImpl.containsJAXWSWebServices(WSModuleDescriptorImpl.java:494)
at com.ibm.ws.websvcs.deploy.WSCacheWriter.writeModuleCache(WSCacheWriter.java:571)
at com.ibm.ws.websvcs.deploy.WSCacheWriter.writeApplicationCache(WSCacheWriter.java:242)
at com.ibm.ws.websvcs.deploy.WSCacheWriter.writeApplicationCache(WSCacheWriter.java:167)
at com.ibm.ws.websvcs.deploy.PersistentStorageInstallSaveTask.performTask(PersistentStorageInstallSaveTask.java:196)
at com.ibm.ws.management.application.sync.AppBinaryProcessor$ExpandApp.expand(AppBinaryProcessor.java:1711)
at com.ibm.ws.management.application.sync.AppBinaryProcessor.postProcessSynchronousExt(AppBinaryProcessor.java:751)
at com.ibm.ws.management.bla.sync.BLABinaryProcessor.postProcess(BLABinaryProcessor.java:599)
at com.ibm.ws.management.bla.sync.BLABinaryProcessor.onChangeCompletion(BLABinaryProcessor.java:476)
at com.ibm.ws.management.bla.sync.BinaryProcessorWrapper.onChangeCompletion(BinaryProcessorWrapper.java:109)
at com.ibm.ws.management.repository.FileRepository.postNotify(FileRepository.java:1938)
at com.ibm.ws.management.repository.FileRepository.update(FileRepository.java:1442)
at com.ibm.ws.management.repository.client.LocalConfigRepositoryClient.update(LocalConfigRepositoryClient.java:189)
at com.ibm.ws.sm.workspace.impl.WorkSpaceMasterRepositoryAdapter.update(WorkSpaceMasterRepositoryAdapter.java:665)
at com.ibm.ws.sm.workspace.impl.RepositoryContextImpl.update(RepositoryContextImpl.java:1998)
at com.ibm.ws.sm.workspace.impl.RepositoryContextImpl.synch(RepositoryContextImpl.java:1946)
at com.ibm.ws.sm.workspace.impl.WorkSpaceImpl.synch(WorkSpaceImpl.java:549)
at com.ibm.ws.console.core.action.SyncWorkSpaceAction.execute(SyncWorkSpaceAction.java:271)
at org.apache.struts.action.RequestProcessor.processActionPerform(Unknown Source)
at org.apache.struts.action.RequestProcessor.process(Unknown Source)
at org.apache.struts.action.ActionServlet.process(Unknown Source)
at org.apache.struts.action.ActionServlet.doGet(Unknown Source)
at javax.servlet.http.HttpServlet.service(HttpServlet.java:575)
at javax.servlet.http.HttpServlet.service(HttpServlet.java:668)
at com.ibm.ws.webcontainer.servlet.ServletWrapper.service(ServletWrapper.java:1230)
at com.ibm.ws.webcontainer.servlet.ServletWrapper.handleRequest(ServletWrapper.java:779)
at com.ibm.ws.webcontainer.servlet.ServletWrapper.handleRequest(ServletWrapper.java:478)
at com.ibm.ws.webcontainer.servlet.ServletWrapperImpl.handleRequest(ServletWrapperImpl.java:178)
at com.ibm.ws.webcontainer.filter.WebAppFilterChain.invokeTarget(WebAppFilterChain.java:136)
at com.ibm.ws.webcontainer.filter.WebAppFilterChain.doFilter(WebAppFilterChain.java:97)
at com.ibm.ws.console.core.servlet.WSCUrlFilter.setUpCommandAssistance(WSCUrlFilter.java:955)
at com.ibm.ws.console.core.servlet.WSCUrlFilter.continueStoringTaskState(WSCUrlFilter.java:504)
at com.ibm.ws.console.core.servlet.WSCUrlFilter.doFilter(WSCUrlFilter.java:325)
at com.ibm.ws.webcontainer.filter.FilterInstanceWrapper.doFilter(FilterInstanceWrapper.java:195)
at com.ibm.ws.webcontainer.filter.WebAppFilterChain.doFilter(WebAppFilterChain.java:91)
at com.ibm.ws.webcontainer.filter.WebAppFilterManager.doFilter(WebAppFilterManager.java:960)
at com.ibm.ws.webcontainer.filter.WebAppFilterManager.invokeFilters(WebAppFilterManager.java:1064)
at com.ibm.ws.webcontainer.webapp.WebApp.handleRequest(WebApp.java:3878)
at com.ibm.ws.webcontainer.webapp.WebGroup.handleRequest(WebGroup.java:304)
at com.ibm.ws.webcontainer.WebContainer.handleRequest(WebContainer.java:981)
at com.ibm.ws.webcontainer.WSWebContainer.handleRequest(WSWebContainer.java:1662)
at com.ibm.ws.webcontainer.channel.WCChannelLink.ready(WCChannelLink.java:200)
at com.ibm.ws.http.channel.inbound.impl.HttpInboundLink.handleDiscrimination(HttpInboundLink.java:461)
at com.ibm.ws.http.channel.inbound.impl.HttpInboundLink.handleNewRequest(HttpInboundLink.java:528)
at com.ibm.ws.http.channel.inbound.impl.HttpInboundLink.processRequest(HttpInboundLink.java:314)
at com.ibm.ws.http.channel.inbound.impl.HttpICLReadCallback.complete(HttpICLReadCallback.java:88)
at com.ibm.ws.tcp.channel.impl.AioReadCompletionListener.futureCompleted(AioReadCompletionListener.java:175)
at com.ibm.io.async.AbstractAsyncFuture.invokeCallback(AbstractAsyncFuture.java:217)
at com.ibm.io.async.AsyncChannelFuture.fireCompletionActions(AsyncChannelFuture.java:161)
at com.ibm.io.async.AsyncFuture.completed(AsyncFuture.java:138)
at com.ibm.io.async.ResultHandler.complete(ResultHandler.java:204)
at com.ibm.io.async.ResultHandler.runEventProcessingLoop(ResultHandler.java:775)
at com.ibm.io.async.ResultHandler$2.run(ResultHandler.java:905)
at com.ibm.ws.util.ThreadPool$Worker.run(ThreadPool.java:1864)
Caused by: java.lang.ArrayStoreException
at sun.reflect.annotation.AnnotationParser.parseClassArray(AnnotationParser.java:665)
at sun.reflect.annotation.AnnotationParser.parseArray(AnnotationParser.java:472)
at sun.reflect.annotation.AnnotationParser.parseMemberValue(AnnotationParser.java:298)
at sun.reflect.annotation.AnnotationParser.parseAnnotation(AnnotationParser.java:234)
at sun.reflect.annotation.AnnotationParser.parseAnnotations2(AnnotationParser.java:81)
at sun.reflect.annotation.AnnotationParser.parseAnnotations(AnnotationParser.java:64)
at com.ibm.oti.reflect.AnnotationParser.parseAnnotations(AnnotationParser.java:63)
at java.lang.Class.getDeclaredAnnotations(Class.java:1879)
at java.lang.Class.getAnnotations(Class.java:1836)
at java.lang.Class.getAnnotation(Class.java:1816)
at com.ibm.jtc.jax.xml.bind.v2.model.annotation.RuntimeInlineAnnotationReader.getClassAnnotation(RuntimeInlineAnnotationReader.java:106)
at com.ibm.jtc.jax.xml.bind.v2.model.annotation.RuntimeInlineAnnotationReader.getClassAnnotation(RuntimeInlineAnnotationReader.java:57)
at com.ibm.jtc.jax.xml.bind.v2.model.impl.ModelBuilder.getTypeInfo(ModelBuilder.java:329)
at com.ibm.jtc.jax.xml.bind.v2.runtime.JAXBContextImpl.getTypeInfoSet(JAXBContextImpl.java:483)
at com.ibm.jtc.jax.xml.bind.v2.runtime.JAXBContextImpl.<init>(JAXBContextImpl.java:319)
at com.ibm.jtc.jax.xml.bind.v2.runtime.JAXBContextImpl$JAXBContextBuilder.build(JAXBContextImpl.java:1178)
at com.ibm.jtc.jax.xml.bind.v2.ContextFactory.createContext(ContextFactory.java:194)
at com.ibm.jtc.jax.xml.bind.api.JAXBRIContext.newInstance(JAXBRIContext.java:111)
at com.ibm.jtc.jax.xml.ws.developer.JAXBContextFactory$1.createJAXBContext(JAXBContextFactory.java:109)
at com.ibm.jtc.jax.xml.ws.model.AbstractSEIModelImpl$1.run(AbstractSEIModelImpl.java:161)
at com.ibm.jtc.jax.xml.ws.model.AbstractSEIModelImpl$1.run(AbstractSEIModelImpl.java:154)
at java.security.AccessController.doPrivileged(AccessController.java:327)
at com.ibm.jtc.jax.xml.ws.model.AbstractSEIModelImpl.createJAXBContext(AbstractSEIModelImpl.java:153)
at com.ibm.jtc.jax.xml.ws.model.AbstractSEIModelImpl.postProcess(AbstractSEIModelImpl.java:94)
at com.ibm.jtc.jax.xml.ws.model.RuntimeModeler.buildRuntimeModel(RuntimeModeler.java:258)
at com.ibm.jtc.jax.tools.ws.wscompile.WsgenTool.buildModel(WsgenTool.java:248)
at com.ibm.jtc.jax.tools.ws.wscompile.WsgenTool.run(WsgenTool.java:123)
at com.ibm.jtc.jax.tools.ws.util.WSToolsObjectFactoryImpl.wsgen(WSToolsObjectFactoryImpl.java:61)
at com.ibm.jtc.jax.tools.ws.spi.WSToolsObjectFactory.wsgen(WSToolsObjectFactory.java:107)
at com.ibm.ws.websvcs.wsdl.WASWSDLGenerator.wsgen(WASWSDLGenerator.java:610)
at com.ibm.ws.websvcs.wsdl.WASWSDLGenerator.generateWsdl(WASWSDLGenerator.java:245)
... 62 more
But when I remove the provided scope, I am able to deploy without error. However, we are not supposed to bundle the validation API jar, due to the requirement.
Can anyone help me make this work with the scope kept as provided? I tried a higher version of this jar, but that didn't help.
Temporarily I disabled com.ibm.ws.beanvalidation, but that is not the correct way.
Hi Scott,
I tried to generate the WSDL using wsgen, with %Java_Home% set to \IBM\WebSphere\AppServer\java. There is no issue during the build, and the WSDL generates without any problem:
%Java_Home%\bin\wsgen -d target\classes -cp target\classes;%Nexus_Home%\org\springframework\spring-web\3.0.4.RELEASE\spring-web-3.0.4.RELEASE.jar;%Nexus_Home%\org\springframework\spring-beans\3.0.4.RELEASE\spring-beans-3.0.4.RELEASE.jar;%Nexus_Home%\com\hex\cobam\rds\cobam-rds-services\2.14.0-SNAPSHOT\cobam-rds-services-2.14.0-SNAPSHOT.jar;%Nexus_Home%\com\hex\cobam\core\cobam-core-domain\1.0.64\cobam-core-domain-1.0.64.jar;%Nexus_Home%\com\hex\cobam\rds\cobam-rds-domain\2.4.0-SNAPSHOT\cobam-rds-domain-2.4.0-SNAPSHOT.jar;%Nexus_Home%\com\hex\cobam\core\cobam-core-exception\1.0.20\cobam-core-exception-1.0.20.jar;%Nexus_Home%\joda-time\joda-time\2.9.4\joda-time-2.9.4.jar;%Java_Home%\jre\..\lib\tools.jar -wsdl -r target\classes com.hex.cobam.rds.ws.endpoint.impl.EntitySearchServiceEndpoint
The endpoint internally calls this class which uses validation api:
import javax.validation.ConstraintViolation;
import javax.xml.ws.WebFault;
@WebFault(faultBean = "com.hex.bam.core.exception.BusinessError", name = "searchRequestValidationException", targetNamespace = RDS_SERVICE_NAMESPACE)
public class SearchRequestValidationException extends BamBusinessException {
public SearchRequestValidationException(Set<ConstraintViolation<Criteria>> constraintViolations) {
super(constraintViolations);
}
Adding some more details to it:
import javax.validation.ConstraintViolation;
import javax.validation.ConstraintViolationException;
public abstract class BamBusinessException extends Exception {
private static final long serialVersionUID = 1L;
protected <C> BamBusinessException(Set<ConstraintViolation<C>> constraintViolations) {
error = ErrorFactory.buildBusinessError(this, constraintViolations);
}
}
public class ErrorFactory {
public static <T extends BamBusinessException, C> BusinessError buildBusinessError(
T exception, Set<ConstraintViolation<C>> constraintViolations) {
BusinessError error = buildBusinessError(exception);
List<BasicConstraintViolation> violations = new ArrayList<BasicConstraintViolation>();
if (constraintViolations != null) {
for (ConstraintViolation<C> violation : constraintViolations) {
violations.add(parse(violation));
}
}
error.setConstraintViolationList(violations);
return error;
}
}
import static com.hex.bam.core.dto.NamespaceConstants.CORE_DTO_NAMESPACE;
import javax.xml.bind.annotation.XmlType;
@XmlType(namespace = CORE_DTO_NAMESPACE)
public class BasicConstraintViolation {
private String propertyPath;
private String resourceKey;
public void setPropertyPath(String propertyPath) {
this.propertyPath = propertyPath;
}
public void setResourceKey(String resourceKey) {
this.resourceKey = resourceKey;
}
public String getResourceKey() {
return resourceKey;
}
public String getPropertyPath() {
return propertyPath;
}
@Override
public String toString() {
return "{" + getPropertyPath() + ":::" + getResourceKey() + "}";
}
import javax.jws.HandlerChain;
import javax.xml.bind.annotation.XmlSeeAlso;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.web.context.support.SpringBeanAutowiringSupport;
@HandlerChain(file = "../../../../../../../handler-chain.xml")
@javax.jws.WebService(endpointInterface = ENDPOINT_INTERFACE, targetNamespace = RDS_SERVICE_NAMESPACE, serviceName = SERVICE_NAME, portName = PORT_NAME)
@XmlSeeAlso({ com.hex.bam.rds.domain.Organisation.class, com.hex.bam.rds.domain.Individual.class })
public class EntitySearchServiceEndpoint extends SpringBeanAutowiringSupport implements EntitySearchService {
@Autowired
private SearchService searchService;
@Autowired
private AuthenticationService authenticationService;
@Autowired
private ModelSupportService defaultModelSupportService;
@Autowired
private RegulatoryClassificationService regulatoryClassificationService;
@Override
public IndividualSearchResults findIndividuals(IndividualSearchCriteria individualSearchCriteria,
ClientIdentification clientIdentification) throws CobamSystemException, SearchRequestValidationException {
try {
authenticationService.authenticateOnBehalfOfUser(clientIdentification);
assertParameterSuppliedThrowsSearchRequestValidationException(individualSearchCriteria);
defaultModelSupportService.initialiseReferenceDatum(individualSearchCriteria, TreeWalker.MAX_DEPTH);
return searchService.findIndividuals(individualSearchCriteria);
} catch (RuntimeException runtimeException) {
throw ErrorFactory.buildAndLogCobamSystemException(runtimeException);
}
}
}
@WebService(name = "EntitySearchService", targetNamespace = RDS_SERVICE_NAMESPACE)
@SOAPBinding(parameterStyle = ParameterStyle.BARE)
@XmlSeeAlso({ Individual.class, Organisation.class })
public interface EntitySearchService {
@WebMethod(operationName = "findOrganisations", action = RDS_SERVICE_NAMESPACE + "findOrganisations")
@WebResult(name = "organisationSearchResults", targetNamespace = ERDS_SERVICE_NAMESPACE)
OrganisationSearchResults findOrganisations(
@WebParam(name = "organisationSearchCriteria") OrganisationSearchCriteria organisationSearchCriteria,
@WebParam(name = "findOrganisationsClientIdentification", header = true) ClientIdentification clientIdentification)
throws BamSystemException, SearchRequestValidationException;
}
import java.util.Date;
import java.util.List;
import javax.xml.bind.annotation.XmlType;
import com.hex.bam.core.dto.BasicConstraintViolation;
@XmlType(name = "businessError", namespace = EXCEPTION_NAMESPACE)
public class BusinessError extends Error {
private final List<BasicConstraintViolation> constraintViolationList;
public BusinessError(Date occuredAt, String resourceKey, String guid,
List<BasicConstraintViolation> constraintViolationList) {
super(occuredAt, resourceKey, guid);
this.constraintViolationList = constraintViolationList;
}
public List<BasicConstraintViolation> getConstraintViolationList() {
return constraintViolationList;
}
@Override
public String toString() {
String violations = constraintViolationList != null ? constraintViolationList.toString() : " none ";
return getGuid() + " - Business Error - " + getResourceKey() + " [" + violations + "]";
}
}

Neo4j error caused by Lucene (Too many open files)

I've just started evaluating Neo4j to see how well it fits our use case.
I'm using the embedded Java API to insert edges and nodes into a graph.
After creating around 5000 nodes I get the following error (using Neo4j 2.1.6 and 2.1.7 on OS X Yosemite):
org.neo4j.graphdb.TransactionFailureException: Unable to commit transaction
Caused by: javax.transaction.xa.XAException
Caused by: org.neo4j.kernel.impl.nioneo.store.UnderlyingStorageException: java.io.FileNotFoundException: /Users/mihir.k/IdeaProjects/Turant/target/neo4j-hello-db/schema/label/lucene/_8zr.frq (Too many open files)
Caused by: java.io.FileNotFoundException: /Users/mihir.k/IdeaProjects/Turant/target/neo4j-hello-db/schema/label/lucene/_8zr.frq (Too many open files)
I've looked at numerous similar Stack Overflow questions and other related threads online. They all suggest increasing the max open files limit.
I've tried doing that.
These are my settings:
kern.maxfiles: 65536
kern.maxfilesperproc: 65536
However, this hasn't fixed the error.
While the Neo4j code runs, I monitored open files with lsof | wc -l; the code always breaks when around 10,000 files are open.
The following is the main class that deals with Neo4j:
import java.io.File;
import java.io.Serializable;
import java.util.Map;
import java.util.concurrent.TimeUnit;
import org.neo4j.cypher.internal.compiler.v1_9.commands.True;
import org.neo4j.cypher.internal.compiler.v2_0.ast.False;
import org.neo4j.graphdb.*;
import org.neo4j.graphdb.factory.GraphDatabaseFactory;
import org.neo4j.graphdb.schema.Schema;
import org.neo4j.graphdb.schema.IndexDefinition;
import org.neo4j.graphdb.index.UniqueFactory;
import org.neo4j.graphdb.index.Index;
import org.neo4j.graphdb.index.IndexHits;
public class Neo4jDB implements Serializable {
private static final String DB_PATH = "target/neo4j-hello-db-spark";
IndexDefinition indexDefinition;
private static GraphDatabaseFactory dbFactory;
public static GraphDatabaseService db;
public static void main(String[] args) {
System.out.println("Life is a disease, sexually transmitted and irrevocably fatal. Stop coding and read some Neil Gaiman.");
}
public void startDbInstance() {
db =new GraphDatabaseFactory().newEmbeddedDatabase(DB_PATH);
}
public Node createOrGetNode ( LabelsUser360 label , String key, String nodeName ,Map<String,Object> propertyMap)
{
System.out.println("Creating/Getting node");
try ( Transaction tx = db.beginTx() ) {
Node node;
if (db.findNodesByLabelAndProperty(label, key, nodeName).iterator().hasNext()) {
node = db.findNodesByLabelAndProperty(label, key, nodeName).iterator().next();
} else {
node = db.createNode(label);
node.setProperty(key, nodeName);
}
for (Map.Entry<String, Object> entry : propertyMap.entrySet()) {
node.setProperty(entry.getKey(), entry.getValue());
}
tx.success();
return node;
}
}
public void createUniquenessConstraint(LabelsUser360 label , String property)
{
try ( Transaction tx = db.beginTx() )
{
db.schema()
.constraintFor(label)
.assertPropertyIsUnique(property)
.create();
tx.success();
}
}
public void createOrUpdateRelationship(RelationshipsUser360 relationshipType ,Node startNode, Node endNode, Map<String,Object> propertyMap)
{
try ( Transaction tx = db.beginTx() ) {
if (startNode.hasRelationship(relationshipType, Direction.OUTGOING)) {
Relationship relationship = startNode.getSingleRelationship(relationshipType, Direction.OUTGOING);
for (Map.Entry<String, Object> entry : propertyMap.entrySet()) {
relationship.setProperty(entry.getKey(), entry.getValue());
}
} else {
Relationship relationship = startNode.createRelationshipTo(endNode, relationshipType);
for (Map.Entry<String, Object> entry : propertyMap.entrySet()) {
relationship.setProperty(entry.getKey(), entry.getValue());
}
}
tx.success();
}
}
public void registerShutdownHook( final GraphDatabaseService graphDb )
{
Runtime.getRuntime().addShutdownHook( new Thread()
{
@Override
public void run()
{
db.shutdown();
}
} );
}
}
There is another Neo4jAdapter class that implements domain-specific logic. It uses the Neo4jDB class to add/update nodes, properties, and relationships.
import org.apache.lucene.index.IndexWriter;
import org.codehaus.jackson.map.ObjectMapper;
import org.json.*;
import org.neo4j.graphdb.*;
import org.neo4j.graphdb.factory.GraphDatabaseFactory;
import org.neo4j.graphdb.schema.IndexDefinition;
import java.io.*;
import java.nio.file.Files;
import java.nio.file.Paths;
import java.util.HashMap;
import java.util.Iterator;
import java.util.Map;
import java.util.Set;
public class Neo4jAdapter implements Serializable {
static Neo4jDB n4j = new Neo4jDB();
public static GraphDatabaseService db = Neo4jDB.db ;
public void begin() {
n4j.startDbInstance();
}
public static void main(String[] args) {}
public String graphPut(String jsonString) {
System.out.println("graphput called");
HashMap<String, Object> map = jsonToMap(jsonString); // JSON deserializer
Node startNode = n4j.createOrGetNode(...);
Node endNode = n4j.createOrGetNode(...);
Map<String, Object> propertyMap = new HashMap<String, Object>();
propertyMap.put(....);
try (Transaction tx = Neo4jDB.db.beginTx()) {
Relationship relationship = startNode.getSingleRelationship(...);
if (relationship != null) {
Integer currentCount = (Integer) relationship.getProperty("count");
Integer updatedCount = currentCount + 1;
propertyMap.put("count", updatedCount);
} else {
Integer updatedCount = 1;
propertyMap.put("count", updatedCount);
}
tx.success();
}
n4j.createOrUpdateRelationship(RelationshipsUser360.BLAH, startNode, endNode, propertyMap);
return "Are you sponge worthy??";
}
}
Finally, there is a Spark app that calls the graphPut method of the Neo4jAdapter class. The relevant code snippet is (the following is Scala + Spark code):
val graphdb : Neo4jAdapter = new Neo4jAdapter()
graphdb.begin()
linesEnriched.foreach(a=>graphdb.graphPutMap(a))
where a is a JSON string and linesEnriched is a Spark RDD (essentially a set of strings).
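One mitigation often suggested for this class of problem (aside from raising the file-descriptor limit) is to stop opening a separate transaction per node or relationship, as the code above does, and instead group many write operations into one committed transaction. A framework-free sketch of that batching pattern follows; BatchCommitter and all its names are hypothetical, and with the embedded Neo4j API the commit callback would wrap the batch in a single db.beginTx()/tx.success() block.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.function.Consumer;

// Groups items and commits them in batches, so N inserts cost N/batchSize
// transactions instead of N. The commit callback receives one whole batch.
class BatchCommitter<T> {
    private final int batchSize;
    private final List<T> pending = new ArrayList<>();
    private final Consumer<List<T>> commit; // commits one batch in a single transaction
    int commits = 0; // how many transactions were actually opened

    BatchCommitter(int batchSize, Consumer<List<T>> commit) {
        this.batchSize = batchSize;
        this.commit = commit;
    }

    void add(T item) {
        pending.add(item);
        if (pending.size() >= batchSize) flush();
    }

    // Commit whatever is pending; call once more at the end of the stream.
    void flush() {
        if (pending.isEmpty()) return;
        commit.accept(new ArrayList<>(pending));
        pending.clear();
        commits++;
    }
}
```

In the Spark driver above, the foreach body would then call add(...) per record and flush() once at the end, rather than letting every graphPut open and close its own transactions.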