How to create a cache on an Infinispan cluster during startup - infinispan

I have the following configuration, using this dependency:
group: 'org.infinispan', name: 'infinispan-spring4-common', version: '9.1.7.Final'
The question is: how can I create a cache programmatically, or how can I create a cache during Infinispan server boot-up?
I am also unable to create the tag 'infinispan-as-spring-cache-provider' - can someone help with that?
@Configuration
@Profile("infinispan-standalone")
@EnableCaching
public class InfinispanStandaloneConfig {

    private static final Logger logger = LoggerFactory.getLogger(InfinispanStandaloneConfig.class);

    @Bean
    public RemoteCacheManager remoteCacheManager(@Value("${infinispan.remote.server-list}") String serverlist,
            @Value("${infinispan.admin.user}") String user,
            @Value("${infinispan.admin.password}") String pwd) {
        logger.info("inside the remote cache manager");
        Properties properties = new Properties();
        properties.setProperty("infinispan.client.hotrod.client_intelligence", "BASIC");
        properties.setProperty("infinispan.client.hotrod.marshaller", "org.infinispan.commons.marshall.jboss.GenericJBossMarshaller");
        RemoteCacheManager remoteCacheManager = new RemoteCacheManager(new ConfigurationBuilder()
                .addServers(serverlist)
                .withProperties(properties)
                .security().authentication().username(user).password(pwd)
                .build());
        remoteCacheManager.getCache("cart", true);
        return remoteCacheManager;
    }

    @Bean
    public SpringRemoteCacheManager cacheManager(RemoteCacheManager remoteCacheManager) {
        return new SpringRemoteCacheManager(remoteCacheManager);
    }
}

You will need to use Infinispan 9.2, where you can use the following:
remoteCacheManager.administration().getOrCreateCache("cart", "template-name");
provided "template-name" is a configuration template defined on the server.
Alternatively, you can also pass an XML configuration for the cache:
String xml = "<infinispan><cache-container>"
        + "<distributed-cache name=\"cart\">"
        + "<expiration interval=\"10000\" lifespan=\"10\" max-idle=\"10\"/>"
        + "</distributed-cache></cache-container></infinispan>";
remoteCacheManager.administration().getOrCreateCache("cart", new XMLConfiguration(xml));
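Putting it together, here is a minimal sketch of creating the cache while bootstrapping the client, for example inside the remoteCacheManager bean from the question. It assumes Infinispan 9.2+ on both client and server, that the server address is reachable, and that a template named "template-name" exists on the server:
import org.infinispan.client.hotrod.RemoteCacheManager;
import org.infinispan.client.hotrod.configuration.ConfigurationBuilder;

public class CacheBootstrap {

    public static void main(String[] args) {
        // Server address is illustrative; replace with your server list.
        RemoteCacheManager remoteCacheManager = new RemoteCacheManager(
                new ConfigurationBuilder().addServers("127.0.0.1:11222").build());

        // Creates the "cart" cache on the server if it does not exist yet,
        // based on a configuration template defined on the server side.
        remoteCacheManager.administration().getOrCreateCache("cart", "template-name");
    }
}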

Related

Apache Ignite data distribution issue

I am trying to load data into an Apache Ignite cluster using a Java thin client, but the data is not getting loaded evenly: I see that only one machine's memory increases.
I followed the guide below and added a custom affinity mapper. The version is 2.14.0.
https://ignite.apache.org/docs/2.14.0/thin-clients/java-thin-client#partition-awareness
Update1: Adding code
ClientConfiguration cfg = new ClientConfiguration()
    .setAddresses("host1:10800", "host2:10800", ......)
    .setTimeout(10 * 60 * 1000)
    .setPartitionAwarenessEnabled(true)
    .setPartitionAwarenessMapperFactory(new ClientPartitionAwarenessMapperFactory() {
        @Override public ClientPartitionAwarenessMapper create(String cacheName, int partitions) {
            final AffinityFunction aff = new MyAffinityFunction(partitions);
            return new ClientPartitionAwarenessMapper() {
                @Override
                public int partition(Object key) {
                    return aff.partition(key);
                }
            };
        }
    });
ignite = Ignition.startClient(cfg)
kmvpCache = ignite.cache(s"cacheName")
val kmvpBatch = new java.util.HashMap[java.lang.Long, Array[Byte]](params.batchSize)
for (....) {
    for (....) {
        kmvpBatch.put(<Key>, <value>)
    }
    kmvpCache.putAll(kmvpBatch)
    kmvpBatch.clear()
}

How to associate a Redis CrudRepository with a database

I am using spring-data-redis in my Spring Boot 1.4 application. I have two distinct CrudRepositories. However, I am struggling to associate them with their respective connection factories.
Bottom line is: I'd like PersonRedisRepository to use db #6 and OtherPurposeRedisRepository to use db #3. To be honest, I am not 100% sure the way I am tackling this is correct.
The repository
interface PersonRedisRepository extends CrudRepository<Person, String> {
}
interface OtherPurposeRedisRepository extends CrudRepository<OtherPurpose, String> {
}
Configuration for person repository
@EnableRedisRepositories(basePackageClasses = [PersonRedisRepository.class], redisTemplateRef = "personRedisTemplate")
class RedisConfigurationForPerson {

    @Bean(name = "personFactory")
    public RedisConnectionFactory personJedisConnectionFactory() {
        JedisConnectionFactory jedisConnectionFactory = new JedisConnectionFactory()
        jedisConnectionFactory.usePool = true
        jedisConnectionFactory.hostName = "127.0.0.1"
        jedisConnectionFactory.database = 6
        return jedisConnectionFactory
    }

    @Bean(name = "personRedisTemplate")
    public RedisTemplate<byte[], byte[]> availabilityCacheRedisTemplate() {
        RedisTemplate<byte[], byte[]> template = new RedisTemplate<byte[], byte[]>()
        template.setConnectionFactory(personJedisConnectionFactory())
        template
    }
}
Configuration for other purpose repository
@EnableRedisRepositories(basePackageClasses = [OtherPurposeRedisRepository.class], redisTemplateRef = "otherPurposeRedisTemplate")
class RedisConfigurationForOtherPurpose {

    @Bean(name = "otherPurposeFactory")
    public RedisConnectionFactory otherPurposeJedisConnectionFactory() {
        JedisConnectionFactory jedisConnectionFactory = new JedisConnectionFactory()
        jedisConnectionFactory.usePool = true
        jedisConnectionFactory.hostName = "127.0.0.1"
        jedisConnectionFactory.database = 3
        return jedisConnectionFactory
    }

    @Bean(name = "otherPurposeRedisTemplate")
    public RedisTemplate<byte[], byte[]> otherPurposeRedisTemplate() {
        RedisTemplate<byte[], byte[]> template = new RedisTemplate<byte[], byte[]>()
        template.setConnectionFactory(otherPurposeJedisConnectionFactory())
        template
    }
}
Everything works just fine: I can read and write using both repositories. However, they both read/write on db 6.
Someone else had the same problem. Even though the examples are for JPA repositories, these links should help you:
Spring Boot Configure and Use Two DataSources
http://www.baeldung.com/spring-data-jpa-multiple-databases
You first have to bind the configuration datasource with the @Primary annotation and specify the datasource you are working on. This is the first part. I've had a quick look at the second part and will go deeper later. I will update my post when done ;)
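As a starting point, here is a minimal Java sketch of that idea transposed to the Redis setup from the question: mark one connection factory @Primary and wire each template to its own factory explicitly, so each @EnableRedisRepositories(redisTemplateRef = ...) ends up on its own db. Bean names follow the question; everything else is illustrative:
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.context.annotation.Primary;
import org.springframework.data.redis.connection.RedisConnectionFactory;
import org.springframework.data.redis.connection.jedis.JedisConnectionFactory;
import org.springframework.data.redis.core.RedisTemplate;

@Configuration
public class MultiDbRedisConfig {

    @Bean(name = "personFactory")
    @Primary // the default factory when no qualifier is given
    public RedisConnectionFactory personFactory() {
        JedisConnectionFactory factory = new JedisConnectionFactory();
        factory.setHostName("127.0.0.1");
        factory.setDatabase(6); // Person data goes to db 6
        return factory;
    }

    @Bean(name = "otherPurposeFactory")
    public RedisConnectionFactory otherPurposeFactory() {
        JedisConnectionFactory factory = new JedisConnectionFactory();
        factory.setHostName("127.0.0.1");
        factory.setDatabase(3); // OtherPurpose data goes to db 3
        return factory;
    }

    @Bean(name = "personRedisTemplate")
    public RedisTemplate<byte[], byte[]> personRedisTemplate() {
        RedisTemplate<byte[], byte[]> template = new RedisTemplate<>();
        // Wire the template to its own factory explicitly, so the
        // repository bound via redisTemplateRef hits the right db.
        template.setConnectionFactory(personFactory());
        return template;
    }

    @Bean(name = "otherPurposeRedisTemplate")
    public RedisTemplate<byte[], byte[]> otherPurposeRedisTemplate() {
        RedisTemplate<byte[], byte[]> template = new RedisTemplate<>();
        template.setConnectionFactory(otherPurposeFactory());
        return template;
    }
}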

Storm KafkaSpout KryoSerialization issue for a Java bean from a Kafka topic

Hi, I am new to Storm and Kafka.
I am using Storm 1.0.1 and Kafka 0.10.0.
We have a KafkaSpout that receives a Java bean from a Kafka topic.
I have spent several hours digging to find the right approach for this.
I found a few articles which are useful, but none of the approaches have worked for me so far.
Following is my code:
StormTopology:
public class StormTopology {
    public static void main(String[] args) throws Exception {
        //Topo test /zkroot test
        if (args.length == 4) {
            System.out.println("started");
            BrokerHosts hosts = new ZkHosts("localhost:2181");
            SpoutConfig kafkaConf1 = new SpoutConfig(hosts, args[1], args[2], args[3]);
            kafkaConf1.zkRoot = args[2];
            kafkaConf1.useStartOffsetTimeIfOffsetOutOfRange = true;
            kafkaConf1.startOffsetTime = kafka.api.OffsetRequest.LatestTime();
            kafkaConf1.scheme = new SchemeAsMultiScheme(new KryoScheme());
            KafkaSpout kafkaSpout1 = new KafkaSpout(kafkaConf1);
            System.out.println("started");
            ShuffleBolt shuffleBolt = new ShuffleBolt(args[1]);
            AnalysisBolt analysisBolt = new AnalysisBolt(args[1]);
            TopologyBuilder topologyBuilder = new TopologyBuilder();
            topologyBuilder.setSpout("kafkaspout", kafkaSpout1, 1);
            //builder.setBolt("counterbolt2", countbolt2, 3).shuffleGrouping("kafkaspout");
            //This is for field grouping; we need two bolts for field grouping or it won't work
            topologyBuilder.setBolt("shuffleBolt", shuffleBolt, 3).shuffleGrouping("kafkaspout");
            topologyBuilder.setBolt("analysisBolt", analysisBolt, 5).fieldsGrouping("shuffleBolt", new Fields("trip"));
            Config config = new Config();
            config.registerSerialization(VehicleTrip.class, VehicleTripKyroSerializer.class);
            config.setDebug(true);
            config.setNumWorkers(1);
            LocalCluster cluster = new LocalCluster();
            cluster.submitTopology(args[0], config, topologyBuilder.createTopology());
            // StormSubmitter.submitTopology(args[0], config, builder.createTopology());
        } else {
            System.out.println("Insufficient Arguments - topologyName kafkaTopic ZKRoot ID");
        }
    }
}
I am serializing the data at Kafka using Kryo.
KafkaProducer:
public class StreamKafkaProducer {

    private static Producer producer;
    private final Properties props = new Properties();
    private static final StreamKafkaProducer KAFKA_PRODUCER = new StreamKafkaProducer();

    private StreamKafkaProducer() {
        props.put("bootstrap.servers", "localhost:9092");
        props.put("acks", "all");
        props.put("key.serializer", "org.apache.kafka.common.serialization.StringSerializer");
        props.put("value.serializer", "com.abc.serializer.MySerializer");
        producer = new org.apache.kafka.clients.producer.KafkaProducer(props);
    }

    public static StreamKafkaProducer getStreamKafkaProducer() {
        return KAFKA_PRODUCER;
    }

    public void produce(String topic, VehicleTrip vehicleTrip) {
        ProducerRecord<String, VehicleTrip> producerRecord = new ProducerRecord<>(topic, vehicleTrip);
        producer.send(producerRecord);
        //producer.close();
    }

    public static void closeProducer() {
        producer.close();
    }
}
Kryo serializer:
public class DataKyroSerializer extends Serializer<Data> implements Serializable {

    @Override
    public void write(Kryo kryo, Output output, Data data) {
        output.writeLong(data.getStartedOn().getTime());
        output.writeLong(data.getEndedOn().getTime());
    }

    @Override
    public Data read(Kryo kryo, Input input, Class<Data> aClass) {
        Data data = new Data();
        data.setStartedOn(new Date(input.readLong()));
        data.setEndedOn(new Date(input.readLong()));
        return data;
    }
}
I need to get the data back into the Data bean.
According to a few articles, I need to provide a custom scheme and make it part of the topology, but so far I have had no luck.
Code for Bolt and Scheme
Scheme:
public class KryoScheme implements Scheme {

    private ThreadLocal<Kryo> kryos = new ThreadLocal<Kryo>() {
        protected Kryo initialValue() {
            Kryo kryo = new Kryo();
            kryo.addDefaultSerializer(Data.class, new DataKyroSerializer());
            return kryo;
        };
    };

    @Override
    public List<Object> deserialize(ByteBuffer ser) {
        return Utils.tuple(kryos.get().readObject(new ByteBufferInput(ser.array()), Data.class));
    }

    @Override
    public Fields getOutputFields() {
        return new Fields("data");
    }
}
and bolt:
public class AnalysisBolt implements IBasicBolt {

    private static final long serialVersionUID = 1L;
    private String topicname = null;

    public AnalysisBolt(String topicname) {
        this.topicname = topicname;
    }

    public void prepare(Map stormConf, TopologyContext topologyContext) {
        System.out.println("prepare");
    }

    public void execute(Tuple input, BasicOutputCollector collector) {
        System.out.println("execute");
        Fields fields = input.getFields();
        try {
            JSONObject eventJson = (JSONObject) JSONSerializer.toJSON((String) input.getValueByField(fields.get(1)));
            String StartTime = (String) eventJson.get("startedOn");
            String EndTime = (String) eventJson.get("endedOn");
            String Oid = (String) eventJson.get("_id");
            int V_id = (Integer) eventJson.get("vehicleId");
            //call method getEventForVehicleWithinTime(Long vehicleId, Date startTime, Date endTime)
            System.out.println("===========" + Oid + "| " + V_id + "| " + StartTime + "| " + EndTime);
        } catch (Exception e) {
            e.printStackTrace();
        }
    }
}
But if I submit the Storm topology, I am getting this error:
java.lang.IllegalStateException: Spout 'kafkaspout' contains a non-serializable field of type com.abc.topology.KryoScheme$1, which was instantiated prior to topology creation. com.minda.iconnect.topology.KryoScheme$1 should be instantiated within the prepare method of 'kafkaspout' at the earliest.
I would appreciate help debugging the issue and guidance towards the right path.
Thanks
Your ThreadLocal is not Serializable. The preferable solution would be to make your serializer both Serializable and thread-safe. If this is not possible, then I see two alternatives, since there is no prepare method as you would get in a bolt:
Declare it static, which is inherently transient.
Declare it transient and access it via a private get method; then you can initialize the variable on first access (see the sketch at the end of this answer).
Within the Storm lifecycle, the topology is instantiated and then serialized to byte format to be stored in ZooKeeper, prior to the topology being executed. Within this step, if a spout or bolt within the topology has an initialized unserializable property, serialization will fail.
If there is a need for a field that is unserializable, initialize it within the bolt or spout's prepare method, which is run after the topology is delivered to the worker.
Source: Best Practices for implementing Apache Storm
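A minimal sketch of the second alternative, applied to the KryoScheme from the question. The lazy getter is illustrative; a Scheme has no prepare hook, so the field is simply rebuilt on first use after the topology is deserialized on the worker:
public class KryoScheme implements Scheme {

    // transient: excluded when the topology is serialized
    private transient Kryo kryo;

    // Lazily (re)created on the worker, after deserialization.
    private Kryo kryo() {
        if (kryo == null) {
            kryo = new Kryo();
            kryo.addDefaultSerializer(Data.class, new DataKyroSerializer());
        }
        return kryo;
    }

    @Override
    public List<Object> deserialize(ByteBuffer ser) {
        return Utils.tuple(kryo().readObject(new ByteBufferInput(ser.array()), Data.class));
    }

    @Override
    public Fields getOutputFields() {
        return new Fields("data");
    }
}
Note that, unlike the original ThreadLocal version, this single Kryo instance is not thread-safe; that is only acceptable if each spout executor gets its own Scheme instance.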

@RabbitListener not receiving messages from queue

I am using the @RabbitListener annotation to receive messages from a RabbitMQ queue.
Although I have done all the steps required (i.e. added the @EnableRabbit annotation to my config class and declared a SimpleRabbitListenerContainerFactory as a bean), my method is still not receiving messages from the queue. Can anybody suggest what I am missing?
I am using Spring Boot to launch my application.
My launch class
@Configuration
@EnableAutoConfiguration
@EnableRabbit
@EnableConfigurationProperties
@EntityScan("persistence.mysql.domain")
@EnableJpaRepositories("persistence.mysql.dao")
@ComponentScan(excludeFilters = {
        @ComponentScan.Filter(type = FilterType.ASSIGNABLE_TYPE, value = ApiAuthenticationFilter.class),
        @ComponentScan.Filter(type = FilterType.ASSIGNABLE_TYPE, value = ApiVersionValidationFilter.class) },
        basePackages = { "common", "mqclient", "apache", "dispatcher" })
public class Application {

    public static void main(final String[] args) {
        final SpringApplicationBuilder appBuilder = new SpringApplicationBuilder(Application.class);
        appBuilder.application().setWebEnvironment(false);
        appBuilder.profiles("common", "common_mysql_db", "common_rabbitmq").run(args);
    }

    @Bean
    @Primary
    @ConfigurationProperties(prefix = "spring.datasource")
    public DataSource primaryDataSource() {
        return DataSourceBuilder.create().build();
    }
}
Here is my bean defining the SimpleRabbitListenerContainerFactory inside a component class:
@Component(value = "inputQueueManager")
public class InputQueueManagerImpl extends AbstractQueueManagerImpl {

    ..///..

    @Bean(name = "inputListenerContainerFactory")
    public SimpleRabbitListenerContainerFactory rabbitListenerContainerFactory() {
        SimpleRabbitListenerContainerFactory factory = new SimpleRabbitListenerContainerFactory();
        factory.setConnectionFactory(this.rabbitConnectionFactory);
        factory.setConcurrentConsumers(Integer.parseInt(this.concurrentConsumers));
        factory.setMaxConcurrentConsumers(Integer.parseInt(this.maxConcurrentConsumers));
        factory.setMessageConverter(new Jackson2JsonMessageConverter());
        return factory;
    }
}
And finally my listener, inside another controller component:
@Controller
public class RabbitListner {

    @RabbitListener(queues = "Storm1", containerFactory = "inputListenerContainerFactory")
    public void processMessage(QueueMessage message) {
        String topic = message.getTopic();
        String payload = message.getPayload();
        dispatcher.bean.EventBean eventBean = new dispatcher.bean.EventBean();
        System.out.println("Data read from the queue");
    }
}
Unfortunately, I am sending the messages to the queue, but the code inside processMessage never gets executed.
I am not sure what the problem is here. Can anybody help?
By default, the JSON message converter requires hints in the message properties as to what type of object to create.
If your producer does not set those properties, the converter won't be able to do the conversion without some help.
You can inject a ClassMapper into the converter.
The framework provides a DefaultClassMapper, which can be customized, for example to look at a different message property than the default __TypeId__ property.
If you always want to convert the JSON to the same object, you can simply set the default type:
DefaultClassMapper classMapper = new DefaultClassMapper();
classMapper.setDefaultType(QueueMessage.class);
Jackson2JsonMessageConverter converter = new Jackson2JsonMessageConverter();
converter.setClassMapper(classMapper);
factory.setMessageConverter(converter);
The documentation already shows how to configure this.
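Applied to the inputListenerContainerFactory from the question, a minimal sketch (assuming QueueMessage is the only payload type and the producer sends plain JSON without type headers):
@Bean(name = "inputListenerContainerFactory")
public SimpleRabbitListenerContainerFactory rabbitListenerContainerFactory() {
    SimpleRabbitListenerContainerFactory factory = new SimpleRabbitListenerContainerFactory();
    factory.setConnectionFactory(this.rabbitConnectionFactory);

    // Tell the converter which class to instantiate when the producer
    // does not send the __TypeId__ header.
    DefaultClassMapper classMapper = new DefaultClassMapper();
    classMapper.setDefaultType(QueueMessage.class);

    Jackson2JsonMessageConverter converter = new Jackson2JsonMessageConverter();
    converter.setClassMapper(classMapper);
    factory.setMessageConverter(converter); // use the configured converter, not a fresh one

    return factory;
}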

Ninject manual initialization with constructor injection

How do I do constructor injection when I'm manually initializing the class?
public class ApiKeyHandler : DelegatingHandler
{
    private IApiService apiService;

    public ApiKeyHandler(IApiService apiService)
    {
        this.apiService = apiService;
    }
}
Initializing:
var apiKey = new ApiKeyHandler(/*inject here */);
How do I accomplish this? My bindings and everything are already set up.
You want to do something like this:
var apiKey = new ApiKeyHandler(kernel.Get<IApiService>());
However, why not inject the ApiKeyHandler itself?
var apiKey = kernel.Get<ApiKeyHandler>();
Here is an article about Ninject:
You basically want to set this up at the beginning of your code and have it available globally:
public IKernel kernel = new StandardKernel();
...
kernel.Bind<IApiService>().To<SomeConcreteApiService>();