Spring Cloud Config Composite Repositories

I was trying to configure the Spring Cloud Config server to use a composite configuration, but I got a weird error.
What am I doing wrong?
1. Native profile
application.properties
server.port=8888
spring.profiles.active=native
spring.cloud.config.server.native.search-locations=file:///C:/tmp/config-repo
Requesting http://localhost:8888/app1/dev/ returns the loaded properties.
2. Composite profile (Native + custom)
application.properties
server.port=8888
spring.profiles.active=composite
spring.cloud.config.server.native.search-locations=file:///C:/tmp/config-repo
spring.cloud.config.server.plugins.search-locations=file:///C:/tmp/plugins-repo
Error:
org.springframework.beans.factory.BeanCreationException: Error creating bean with name 'org.springframework.boot.actuate.autoconfigure.EndpointAutoConfiguration': Bean instantiation via constructor failed; nested exception is org.springframework.beans.BeanInstantiationException: Failed to instantiate [org.springframework.boot.actuate.autoconfigure.EndpointAutoConfiguration$$EnhancerBySpringCGLIB$$271d7a4d]: Constructor threw exception; nested exception is org.springframework.beans.factory.UnsatisfiedDependencyException: Error creating bean with name 'configServerHealthIndicator' defined in class path resource [org/springframework/cloud/config/server/config/EnvironmentRepositoryConfiguration.class]: Unsatisfied dependency expressed through method 'configServerHealthIndicator' parameter 0; nested exception is org.springframework.beans.factory.UnsatisfiedDependencyException: Error creating bean with name 'org.springframework.cloud.config.server.config.CompositeConfiguration': Unsatisfied dependency expressed through method 'setEnvironmentRepos' parameter 0; nested exception is org.springframework.beans.factory.BeanCreationException: Error creating bean with name 'defaultEnvironmentRepository' defined in class path resource [org/springframework/cloud/config/server/config/DefaultRepositoryConfiguration.class]: Invocation of init method failed; nested exception is java.lang.IllegalStateException: You need to configure a uri for the git repository
The PluginsEnvironmentRepository returns an Environment filled with all the properties found in each property file in the configured folder.
This class is just a sample, so I didn't implement any filtering on the application/profile/label parameters.
@ConfigurationProperties("spring.cloud.config.server.plugins")
public class PluginsEnvironmentRepository implements EnvironmentRepository, Ordered {

    private int order = Ordered.LOWEST_PRECEDENCE;
    private String searchLocations;

    @Override
    public Environment findOne(String application, String profile, String label) {
        String[] profiles = StringUtils.commaDelimitedListToStringArray(profile);
        Environment env = new Environment(application, profiles, label, null, null);
        String[] locations = StringUtils.commaDelimitedListToStringArray(searchLocations);
        for (String location : locations) {
            File f = new File(URI.create(location));
            if (f.exists() && f.isDirectory()) {
                File[] propFiles = f.listFiles(new FileFilter() {
                    @Override
                    public boolean accept(File pathname) {
                        return pathname.getName().endsWith(".properties");
                    }
                });
                for (File propFile : propFiles) {
                    env.add(new PropertySource(propFile.getName(), loadProperties(propFile)));
                }
            }
        }
        return env;
    }

    private Map<String, String> loadProperties(File f) {
        // ... (implementation omitted: loads the property file into a Map)
    }

    @Override
    public int getOrder() {
        return order;
    }

    public String getSearchLocations() {
        return searchLocations;
    }

    public void setSearchLocations(String searchLocations) {
        this.searchLocations = searchLocations;
    }
}
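For reference, my understanding is that the composite setup collects any EnvironmentRepository beans in the application context, so the custom repository has to be exposed as a bean. Below is a minimal sketch of the wiring I would expect; the class name and profile gating are assumptions on my part, not verified against Edgware:

// Illustrative wiring: exposes the custom repository as a bean so the
// composite configuration can collect it alongside the native repository.
@Configuration
@Profile("composite")
public class PluginsRepositoryConfiguration {

    @Bean
    public PluginsEnvironmentRepository pluginsEnvironmentRepository() {
        return new PluginsEnvironmentRepository();
    }
}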
Spring Cloud version : Edgware.SR2
Spring Boot version : 1.5.10.RELEASE

Related

Around annotation executed twice using WebFlux

I'm facing weird behaviour while using AOP with AspectJ.
The @Around method is called either once or twice, and while debugging I can't find out why it is executed twice (that is, what triggers the second execution of the method).
Here is some code:
@Aspect
@Slf4j
public class ReactiveRedisCacheAspect {

    @Pointcut("@annotation(com.xxx.xxx.cache.aop.annotations.ReactiveRedisCacheable)")
    public void cacheablePointCut() {}

    @Around("cacheablePointCut()")
    public Object cacheableAround(final ProceedingJoinPoint proceedingJoinPoint) {
        log.debug("ReactiveRedisCacheAspect cacheableAround.... - {}", proceedingJoinPoint);
        MethodSignature methodSignature = (MethodSignature) proceedingJoinPoint.getSignature();
        Method method = methodSignature.getMethod();
        Class<?> returnTypeName = method.getReturnType();
        Duration duration = Duration.ofHours(getDuration(method));
        String redisKey = getKey(method, proceedingJoinPoint);
        if (returnTypeName.isAssignableFrom(Flux.class)) {
            log.debug("returning Flux");
            return cacheRepository.hasKey(redisKey)
                    .filter(found -> found)
                    .flatMapMany(found -> cacheRepository.findByKey(redisKey))
                    .flatMap(found -> saveFlux(proceedingJoinPoint, redisKey, duration));
        } else if (returnTypeName.isAssignableFrom(Mono.class)) {
            log.debug("Returning Mono");
            return cacheRepository.hasKey(redisKey)
                    .flatMap(found -> {
                        if (found) {
                            return cacheRepository.findByKey(redisKey);
                        } else {
                            return saveMono(proceedingJoinPoint, redisKey, duration);
                        }
                    });
        } else {
            throw new RuntimeException("non reactive object supported (Mono,Flux)");
        }
    }

    private String getKey(final Method method, final ProceedingJoinPoint proceedingJoinPoint) {
        ReactiveRedisCacheable annotation = method.getAnnotation(ReactiveRedisCacheable.class);
        String cacheName = annotation.cacheName();
        String key = annotation.key();
        cacheName = (String) AspectSupportUtils.getKeyValue(proceedingJoinPoint, cacheName);
        key = (String) AspectSupportUtils.getKeyValue(proceedingJoinPoint, key);
        return cacheName + "_" + key;
    }
}
public class AspectSupportUtils {

    private static final ExpressionEvaluator evaluator = new ExpressionEvaluator();

    public static Object getKeyValue(JoinPoint joinPoint, String keyExpression) {
        if (keyExpression.contains("#") || keyExpression.contains("'")) {
            return getKeyValue(joinPoint.getTarget(), joinPoint.getArgs(), joinPoint.getTarget().getClass(),
                    ((MethodSignature) joinPoint.getSignature()).getMethod(), keyExpression);
        }
        return keyExpression;
    }

    private static Object getKeyValue(Object object, Object[] args, Class<?> clazz, Method method, String keyExpression) {
        if (StringUtils.hasText(keyExpression)) {
            EvaluationContext evaluationContext = evaluator.createEvaluationContext(object, clazz, method, args);
            AnnotatedElementKey methodKey = new AnnotatedElementKey(method, clazz);
            return evaluator.key(keyExpression, methodKey, evaluationContext);
        }
        return SimpleKeyGenerator.generateKey(args);
    }
}
@Target({ElementType.METHOD})
@Retention(RetentionPolicy.RUNTIME)
@Documented
public @interface ReactiveRedisCacheable {
    String key();
    String cacheName();
    long duration() default 1L;
}
@RestController
@RequestMapping("api/pub/v1")
public class TestRestController {

    @ReactiveRedisCacheable(cacheName = "test-cache", key = "#name", duration = 1L)
    @GetMapping(value = "test")
    public Mono<String> getName(@RequestParam(value = "name") String name) {
        return Mono.just(name);
    }
}

@Configuration
public class Config {

    @Bean
    public ReactiveRedisCacheAspect reactiveRedisCache(ReactiveRedisCacheAspect reactiveRedisCacheAspect) {
        return reactiveRedisCacheAspect;
    }
}
logs:
ReactiveRedisCacheAspect cacheableAround.... - {}execution(Mono com.abc.def.xxx.rest.TestRestcontroller.getName(String))
2021-06-04 15:36:23.096 INFO [fo-bff,f688025287be7e7c,f688025287be7e7c] 20060 --- [ctor-http-nio-3] c.m.s.c.a.i.ReactiveRedisCacheAspect : Returning Mono
2021-06-04 15:36:23.097 INFO [fo-bff,f688025287be7e7c,f688025287be7e7c] 20060 --- [ctor-http-nio-3] c.m.s.c.repository.CacheRepositoryImpl : searching key: (bff_pippo)
ReactiveRedisCacheAspect cacheableAround.... - {}execution(Mono com.abc.def.xxx.rest.TestRestcontroller.getName(String))
2021-06-04 15:36:23.236 INFO [fo-bff,f688025287be7e7c,f688025287be7e7c] 20060 --- [ioEventLoop-7-2] c.m.s.c.a.i.ReactiveRedisCacheAspect : Returning Mono
2021-06-04 15:36:23.236 INFO [fo-bff,f688025287be7e7c,f688025287be7e7c] 20060 --- [ioEventLoop-7-2] c.m.s.c.repository.CacheRepositoryImpl : searching key: (bff_pippo)
2021-06-04 15:36:23.250 INFO [fo-bff,f688025287be7e7c,f688025287be7e7c] 20060 --- [ioEventLoop-7-2] c.m.s.c.repository.CacheRepositoryImpl : saving obj: (key:bff_pippo) (expiresIn:3600s)
2021-06-04 15:36:23.275 INFO [fo-bff,f688025287be7e7c,f688025287be7e7c] 20060 --- [ioEventLoop-7-2] c.m.s.c.repository.CacheRepositoryImpl : saving obj: (key:bff_pippo) (expiresIn:3600s)
So far I would have expected cacheableAround to be executed only once, but what happens is a bit weird: if the object is present in Redis, the method is executed only once, but if it is not present, the method is executed twice, which doesn't make sense to me. Moreover, it should be the business logic inside the method that decides what to do.
Thanks in advance!
You did not mention whether you use native AspectJ (via load-time or compile-time weaving) or simply Spring AOP. Because I see no @Component annotation on your aspect, it might well be native AspectJ, unless you configure your beans via @Bean factory methods in a configuration class or in XML.
Assuming that you are using full AspectJ: a common problem for newbies coming from Spring AOP is that they are not used to the fact that AspectJ intercepts not only execution joinpoints but also call joinpoints. This leads to the superficial perception that the same joinpoint is intercepted twice. In reality, the advice fires once for the method call (in the class from which the call is made) and once for the method execution (in the class where the target method resides). This is easy to determine: simply log the joinpoint at the beginning of your advice method. In your case:
System.out.println(proceedingJoinPoint);
If then on the console you see something like
call(public void org.acme.MyClass.myMethod())
execution(public void org.acme.MyClass.myMethod())
then you know what is happening.
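If that is what you see, one remedy is to narrow the pointcut to execution joinpoints so the advice no longer fires on the call side. A minimal sketch, assuming native AspectJ and your original annotation-based pointcut:

// Restrict matching to method executions; call joinpoints are no longer advised.
@Pointcut("@annotation(com.xxx.xxx.cache.aop.annotations.ReactiveRedisCacheable) && execution(* *(..))")
public void cacheablePointCut() {}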
In case you use Spring AOP, it is probably an issue with the aspect itself, or the Redis caching behaviour differs from your expectation.

The implementation of the FlinkKafkaConsumer010 is not serializable error

I created a custom class that is based on Apache Flink. The following are some parts of the class definition:
public class StreamData {

    private StreamExecutionEnvironment env;
    private DataStream<byte[]> data;
    private Properties properties;

    public StreamData() {
        env = StreamExecutionEnvironment.getExecutionEnvironment();
    }

    public StreamData(StreamExecutionEnvironment e, DataStream<byte[]> d) {
        env = e;
        data = d;
    }

    public StreamData getDataFromESB(String id, int from) {
        final Pattern TOPIC = Pattern.compile(id);
        Properties properties = new Properties();
        properties.setProperty("bootstrap.servers", "localhost:9092");
        properties.setProperty("group.id", Long.toString(System.currentTimeMillis()));
        properties.setProperty("key.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
        properties.setProperty("value.deserializer", "org.apache.kafka.common.serialization.ByteArrayDeserializer");
        properties.put("metadata.max.age.ms", 30000);
        properties.put("enable.auto.commit", "false");
        if (from == 0)
            properties.setProperty("auto.offset.reset", "earliest");
        else
            properties.setProperty("auto.offset.reset", "latest");
        StreamExecutionEnvironment e = StreamExecutionEnvironment.getExecutionEnvironment();
        DataStream<byte[]> stream = env
                .addSource(new FlinkKafkaConsumer011<>(TOPIC, new AbstractDeserializationSchema<byte[]>() {
                    @Override
                    public byte[] deserialize(byte[] bytes) {
                        return bytes;
                    }
                }, properties));
        return new StreamData(e, stream);
    }

    public void print() {
        data.print();
    }

    public void execute() throws Exception {
        env.execute();
    }
}
Using the StreamData class, I try to get some data from Apache Kafka and print it in the main function:
StreamData stream = new StreamData();
stream.getDataFromESB("original_data", 0);
stream.print();
stream.execute();
I got the error:
Exception in thread "main" org.apache.flink.api.common.InvalidProgramException: The implementation of the FlinkKafkaConsumer010 is not serializable. The object probably contains or references non serializable fields.
Caused by: java.io.NotSerializableException: StreamData
As mentioned here, I think it's because some data type in the getDataFromESB function is not serializable, but I don't know how to solve the problem!
Your AbstractDeserializationSchema is an anonymous inner class, which as a result contains a reference to the outer StreamData class, which isn't serializable. Either let StreamData implement Serializable, or define your schema as a top-level class.
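For instance, a minimal sketch of such a top-level class (the name RawBytesSchema is illustrative; note that in older Flink versions AbstractDeserializationSchema lives in org.apache.flink.streaming.util.serialization instead):

import org.apache.flink.api.common.serialization.AbstractDeserializationSchema;

// A top-level class captures no reference to StreamData, so Flink can
// serialize the schema and ship it to the task managers.
public class RawBytesSchema extends AbstractDeserializationSchema<byte[]> {
    @Override
    public byte[] deserialize(byte[] bytes) {
        return bytes;
    }
}

The consumer would then be constructed with new FlinkKafkaConsumer011<>(TOPIC, new RawBytesSchema(), properties).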
It seems that you are importing FlinkKafkaConsumer010 in your code but using FlinkKafkaConsumer011. Please use the following dependency in your sbt file:
"org.apache.flink" %% "flink-connector-kafka-0.11" % flinkVersion

Getting Apache Ignite continuous query to work without enabling P2P class loading

I have been trying to get my Ignite continuous query code to work without enabling peer class loading, but so far it does not work. I tried debugging and realised that the call to cache.query(qry) errors out with the message "Failed to marshal custom event". When I enable peer class loading, the code works as expected. Could someone provide guidance on how I can make this work without peer class loading?
Following is the code snippet that calls the continuous query.
public void subscribeEvent(IgniteCache<String, String> cache, String inKeyStr, ServerWebSocket websocket) {
    System.out.println("in thread " + Thread.currentThread().getId() + "-->" + "subscribe event");
    //ArrayList<String> inKeys = new ArrayList<String>(Arrays.asList(inKeyStr.split(",")));
    ContinuousQuery<String, String> qry = new ContinuousQuery<>();

    /****
     * Continuous Query Impl
     */
    inKeys = "," + inKeyStr + ",";
    qry.setInitialQuery(new ScanQuery<String, String>((k, v) -> inKeys.contains("," + k + ",")));
    qry.setTimeInterval(1000);
    qry.setPageSize(1);

    // Callback that is called locally when update notifications are received.
    // Factory<CacheEntryEventFilter<String, String>> rmtFilterFactory = new com.ccx.ignite.cqfilter.FilterFactory().init(inKeyStr);
    qry.setLocalListener(new CacheEntryUpdatedListener<String, String>() {
        @Override
        public void onUpdated(Iterable<CacheEntryEvent<? extends String, ? extends String>> evts) {
            for (CacheEntryEvent<? extends String, ? extends String> e : evts) {
                System.out.println("websocket locallsnr data in thread " + Thread.currentThread().getId() + "-->" + "key=" + e.getKey() + ", val=" + e.getValue());
                try {
                    websocket.writeTextMessage("key=" + e.getKey() + ", val=" + e.getValue());
                } catch (Exception e1) {
                    System.out.println("exception local listener " + e1.getMessage());
                    qry.setLocalListener(null);
                }
            }
        }
    });

    qry.setRemoteFilterFactory(new com.ccx.ignite.cqfilter.FilterFactory().init(inKeys));

    try {
        cur = cache.query(qry);
        for (Cache.Entry<String, String> e : cur) {
            System.out.println("websocket initialqry data in thread " + Thread.currentThread().getId() + "-->" + "key=" + e.getKey() + ", val=" + e.getValue());
            websocket.writeTextMessage("key=" + e.getKey() + ", val=" + e.getValue());
        }
    } catch (Exception e) {
        System.out.println("exception cache.query " + e.getMessage());
    }
}
Following is the remote filter class, which I have packaged into a self-contained JAR and pushed into the libs folder of Ignite so that it is picked up by the server nodes:
public class FilterFactory {

    public Factory<CacheEntryEventFilter<String, String>> init(String inKeyStr) {
        System.out.println("factory init called jun22 ");
        return new Factory<CacheEntryEventFilter<String, String>>() {
            private static final long serialVersionUID = 5906783589263492617L;

            @Override
            public CacheEntryEventFilter<String, String> create() {
                return new CacheEntryEventFilter<String, String>() {
                    @Override
                    public boolean evaluate(CacheEntryEvent<? extends String, ? extends String> e) {
                        //List inKeys = new ArrayList<String>(Arrays.asList(inKeyStr.split(",")));
                        System.out.println("inside remote filter factory ");
                        String inKeys = "," + inKeyStr + ",";
                        return inKeys.contains("," + e.getKey() + ",");
                    }
                };
            }
        };
    }
}
The overall logic I'm trying to implement is to have a websocket client subscribe to an event by specifying a cache name and the key(s) of interest.
The subscribe event code is called which creates a continuous query and registers a local listener callback for any update event on the key(s) of interest.
The remote filter is expected to filter the update event based on the key(s) passed to it as a string and the local listener is invoked if the filter event succeeds. The local listener writes the updated key value to the web socket reference passed to the subscribe event code.
The version of Ignite I'm using is 1.8.0; the behaviour is the same in 2.0 as well.
Any help is greatly appreciated!
Here is the log snippet containing the relevant error
factory init called jun22
exception cache.query class org.apache.ignite.spi.IgniteSpiException: Failed to marshal custom event: StartRoutineDiscoveryMessage [startReqData=StartRequestData [prjPred=org.apache.ignite.configuration.CacheConfiguration$IgniteAllNodesPredicate@269707de, clsName=null, depInfo=null, hnd=CacheContinuousQueryHandlerV2 [rmtFilterFactory=com.ccx.ignite.cqfilter.FilterFactory$1@5dc301ed, rmtFilterFactoryDep=null, types=0], bufSize=1, interval=1000, autoUnsubscribe=true], keepBinary=false, routineId=b40ada9f-552d-41eb-90b5-3384526eb7b9]
From FilterFactory you are returning an instance of an anonymous class, which in turn refers to the enclosing FilterFactory, which is not serializable.
Just replace the returned anonymous CacheEntryEventFilter-based class with a corresponding static nested class.
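A minimal sketch of that change (the nested class name is illustrative; note that only the factory is serialized and shipped, while the filter it creates is instantiated on the remote node):

import javax.cache.configuration.Factory;
import javax.cache.event.CacheEntryEvent;
import javax.cache.event.CacheEntryEventFilter;

public class FilterFactory {

    // Static nested class: it holds no hidden reference to the enclosing
    // FilterFactory, so only this class and its String field are serialized.
    public static class KeyFilterFactory implements Factory<CacheEntryEventFilter<String, String>> {

        private static final long serialVersionUID = 1L;
        private final String inKeys;

        public KeyFilterFactory(String inKeyStr) {
            this.inKeys = "," + inKeyStr + ",";
        }

        @Override
        public CacheEntryEventFilter<String, String> create() {
            // Runs on the remote node after the factory has been deserialized.
            return evt -> inKeys.contains("," + evt.getKey() + ",");
        }
    }
}

The query would then register it with qry.setRemoteFilterFactory(new FilterFactory.KeyFilterFactory(inKeys));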
You need to explicitly deploy your CQ classes (the remote filters specifically) on all nodes in the topology. Just create a JAR file with them and put it into the libs folder prior to starting the nodes.

Hazelcast 3.6.1 "There is no suitable de-serializer for type" exception

I am using Hazelcast 3.6.1 to read from a Map. The object class stored in the map is called Schedule.
I have configured a custom serializer on the client side like this.
ClientConfig config = new ClientConfig();
SerializationConfig sc = config.getSerializationConfig();
sc.addSerializerConfig(add(new ScheduleSerializer(), Schedule.class));
...

private SerializerConfig add(Serializer serializer, Class<? extends Serializable> clazz) {
    SerializerConfig sc = new SerializerConfig();
    sc.setImplementation(serializer).setTypeClass(clazz);
    return sc;
}
The map is created like this
private final IMap<String, Schedule> map = client.getMap("schedule");
If I get from the map using schedule id as key, the map returns the correct value e.g.
return map.get("zx81");
If I try to use an SQL predicate e.g.
return new ArrayList<>(map.values(new SqlPredicate("statusActive")));
then I get the following error
Exception in thread "main" com.hazelcast.nio.serialization.HazelcastSerializationException: There is no suitable de-serializer for type 2. This exception is likely to be caused by differences in the serialization configuration between members or between clients and members.
The custom serializer uses Kryo for serialization (based on this blog post: http://blog.hazelcast.com/comparing-serialization-methods/):
public class ScheduleSerializer extends CommonSerializer<Schedule> {

    @Override
    public int getTypeId() {
        return 2;
    }

    @Override
    protected Class<Schedule> getClassToSerialize() {
        return Schedule.class;
    }
}
The CommonSerializer is defined as
public abstract class CommonSerializer<T> implements StreamSerializer<T> {

    protected abstract Class<T> getClassToSerialize();

    @Override
    public void write(ObjectDataOutput objectDataOutput, T object) {
        Output output = new Output((OutputStream) objectDataOutput);
        Kryo kryo = KryoInstances.get();
        kryo.writeObject(output, object);
        output.flush(); // do not close!
        KryoInstances.release(kryo);
    }

    @Override
    public T read(ObjectDataInput objectDataInput) {
        Input input = new Input((InputStream) objectDataInput);
        Kryo kryo = KryoInstances.get();
        T result = kryo.readObject(input, getClassToSerialize());
        input.close();
        KryoInstances.release(kryo);
        return result;
    }

    @Override
    public void destroy() {
        // empty
    }
}
Do I need to do any configuration on the server side? I thought that the client config would be enough.
I am using Hazelcast client 3.6.1 and have one node/member running.
Queries require the nodes to know about the classes, as the byte stream has to be deserialized to access the attributes and query them. This means that when you want to query objects, you have to deploy the model classes (and serializers) on the server side as well.
With key-based access, on the other hand, the nodes do not need to look into the values (nor into the keys, since the byte arrays of the keys are compared directly) and just send the result back. That way, neither the model classes nor the serializers have to be available on the Hazelcast nodes.
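Concretely, that means registering the same serializer on the member as well. A minimal sketch, assuming the Schedule and ScheduleSerializer classes are on the member's classpath:

import com.hazelcast.config.Config;
import com.hazelcast.config.SerializerConfig;
import com.hazelcast.core.Hazelcast;
import com.hazelcast.core.HazelcastInstance;

// Member-side mirror of the client-side serialization config.
Config config = new Config();
config.getSerializationConfig().addSerializerConfig(
        new SerializerConfig()
                .setImplementation(new ScheduleSerializer())
                .setTypeClass(Schedule.class));
HazelcastInstance member = Hazelcast.newHazelcastInstance(config);

The same registration can also be done declaratively in hazelcast.xml under the serialization section.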
I hope that makes sense.

Error while resolving type because of constructor?

I get an error:
Resolution of the dependency failed, type = "MyAppApp.ServiceAgents.IMyAppServiceAgent", name = "(none)".
Exception occurred while: while resolving.
Exception is: InvalidOperationException - The type Int32 cannot be constructed. You must configure the container to supply this value.
-----------------------------------------------
At the time of the exception, the container was:
Resolving MyAppApp.ServiceAgents.MyAppServiceAgent,(none) (mapped from MyAppApp.ServiceAgents.IMyAppServiceAgent, (none))
Resolving parameter "AuthHandlerId" of constructor MyAppApp.ServiceAgents.MyAppServiceAgent(System.Int32 AuthHandlerId, System.String AuthSessionGuid, System.ServiceModel.EndpointAddress ServiceEndPointAddress)
Resolving System.Int32,(none)
in the method below:
internal ServiceLocator()
{
    services = new Dictionary<object, object>();

    // fill the map
    this.services.Add(typeof(IMyAppServiceAgent), _container.Resolve<IMyAppServiceAgent>());
}
This is how I call this method: I have a standard method in the ViewModelLocator (from the MVVM Light Toolkit):

public static void CreateShowroomLog()
{
    if (_showroomLog == null)
    {
        _showroomLog = new ShowroomLogViewModel(ServiceLocator.Instance(_container).GetService<IMyAppServiceAgent>());
    }
}
and the constructor is:

public ViewModelLocator()
{
    _container = new UnityContainer();
    _container.RegisterType<IMyAppServiceAgent, MyAppServiceAgent>();
}
The class of which I need an instance is:
protected static EndpointAddress ServiceEndPointAddress
{
    get { return (App.Current as App).ServiceEndpointAddr; }
}

protected static string AuthSessionGuid
{
    get { return (App.Current as App).W2OGuid; }
}

protected static int AuthHandlerId
{
    get { return (App.Current as App).OriginalHandlerId; }
}

public MyAppServiceAgent(int AuthHandlerId, string AuthSessionGuid, System.ServiceModel.EndpointAddress ServiceEndPointAddress)
{
    _proxy = new MyAppService.Service1Client(new BasicHttpMessageInspectorBinding(new SilverlightAuthMessageInspector(AuthHandlerId.ToString(), AuthSessionGuid)), ServiceEndPointAddress);
}

public MyAppServiceAgent()
    : this(AuthHandlerId, AuthSessionGuid, ServiceEndPointAddress)
{
}
How can I resolve this problem with the constructor?
When you registered your type you didn't specify which constructor to call on MyAppServiceAgent. By default Unity will choose the constructor with the most parameters, but you didn't specify how those parameters should be resolved.
You could try this and see if it causes the default (parameterless) constructor of MyAppServiceAgent to be called when the type is resolved:
_container = new UnityContainer();
_container.RegisterType<IMyAppServiceAgent, MyAppServiceAgent>(new InjectionConstructor());
What I think would be even better is to remove the ServiceEndPointAddress, AuthSessionGuid and AuthHandlerId static properties from your MyAppServiceAgent class, and then register the type like this:
_container = new UnityContainer();
_container.RegisterType<IMyAppServiceAgent, MyAppServiceAgent>(
    new InjectionConstructor(
        (App.Current as App).OriginalHandlerId,
        (App.Current as App).W2OGuid,
        (App.Current as App).ServiceEndpointAddr
    ));
This should cause the following constructor to be called:
public MyAppServiceAgent(int AuthHandlerId, string AuthSessionGuid, System.ServiceModel.EndpointAddress ServiceEndPointAddress)
{
    _proxy = new MyAppService.Service1Client(new BasicHttpMessageInspectorBinding(new SilverlightAuthMessageInspector(AuthHandlerId.ToString(), AuthSessionGuid)), ServiceEndPointAddress);
}
That way your MyAppServiceAgent class is not dependent on the App class.