Hazelcast 3.6.1 "There is no suitable de-serializer for type" exception - serialization

I am using Hazelcast 3.6.1 to read from a Map. The object class stored in the map is called Schedule.
I have configured a custom serializer on the client side like this:
ClientConfig config = new ClientConfig();
SerializationConfig sc = config.getSerializationConfig();
sc.addSerializerConfig(add(new ScheduleSerializer(), Schedule.class));
...
private SerializerConfig add(Serializer serializer, Class<? extends Serializable> clazz) {
SerializerConfig sc = new SerializerConfig();
sc.setImplementation(serializer).setTypeClass(clazz);
return sc;
}
The map is created like this:
private final IMap<String, Schedule> map = client.getMap("schedule");
If I get from the map using the schedule id as the key, the map returns the correct value, e.g.
return map.get("zx81");
If I try to use an SQL predicate, e.g.
return new ArrayList<>(map.values(new SqlPredicate("statusActive")));
then I get the following error:
Exception in thread "main" com.hazelcast.nio.serialization.HazelcastSerializationException: There is no suitable de-serializer for type 2. This exception is likely to be caused by differences in the serialization configuration between members or between clients and members.
The custom serializer uses Kryo, based on this blog post: http://blog.hazelcast.com/comparing-serialization-methods/
public class ScheduleSerializer extends CommonSerializer<Schedule> {
@Override
public int getTypeId() {
return 2;
}
@Override
protected Class<Schedule> getClassToSerialize() {
return Schedule.class;
}
}
The CommonSerializer is defined as
public abstract class CommonSerializer<T> implements StreamSerializer<T> {
protected abstract Class<T> getClassToSerialize();
@Override
public void write(ObjectDataOutput objectDataOutput, T object) {
Output output = new Output((OutputStream) objectDataOutput);
Kryo kryo = KryoInstances.get();
kryo.writeObject(output, object);
output.flush(); // do not close!
KryoInstances.release(kryo);
}
@Override
public T read(ObjectDataInput objectDataInput) {
Input input = new Input((InputStream) objectDataInput);
Kryo kryo = KryoInstances.get();
T result = kryo.readObject(input, getClassToSerialize());
input.close();
KryoInstances.release(kryo);
return result;
}
@Override
public void destroy() {
// empty
}
}
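For reference, KryoInstances is not shown in the question; following the blog post's approach it is presumably a small pool of Kryo objects, since Kryo instances are not thread-safe. A hypothetical sketch:

import java.util.Queue;
import java.util.concurrent.ConcurrentLinkedQueue;
import com.esotericsoftware.kryo.Kryo;

// Hypothetical pool of Kryo instances (Kryo itself is not thread-safe).
public final class KryoInstances {

    private static final Queue<Kryo> POOL = new ConcurrentLinkedQueue<>();

    private KryoInstances() {
    }

    public static Kryo get() {
        Kryo kryo = POOL.poll();
        return kryo != null ? kryo : new Kryo();
    }

    public static void release(Kryo kryo) {
        POOL.offer(kryo); // hand the instance back for re-use
    }
}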
Do I need to do any configuration on the server side? I thought that the client config would be enough.
I am using Hazelcast client 3.6.1 and have one node/member running.

Queries require the nodes to know about the classes, since the byte stream has to be deserialized to access the attributes and query them. This means that when you want to query on objects you have to deploy the model classes (and serializers) on the server side as well.
With key-based access, on the other hand, the nodes do not need to look into the values (nor into the keys, since keys are compared as byte arrays) and just send the result back. That way neither model classes nor serializers have to be available on the Hazelcast nodes.
I hope that makes sense.
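For completeness, here is a minimal member-side sketch (assuming the Schedule and ScheduleSerializer classes from above are also on the member's classpath; Hazelcast 3.x API):

// Register the same serializer on the member so it can deserialize values for queries.
Config memberConfig = new Config();
memberConfig.getSerializationConfig().addSerializerConfig(
        new SerializerConfig()
                .setImplementation(new ScheduleSerializer())
                .setTypeClass(Schedule.class));
HazelcastInstance member = Hazelcast.newHazelcastInstance(memberConfig);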

Related

Apache Ignite Caching and PeerClassLoading

1. Is it possible to put non-POJO class instances as the value of a cache?
For example, I have a QueryThread class which is a subclass of java.lang.Thread and I am trying to put this instance in a cache. It looks like the put operation is failing because this cache is always empty.
Consider the following class:
public class QueryThread extends Thread {
private IgniteCache<?, ?> cache;
private String queryId;
private String query;
private long timeIntervalinMillis;
private volatile boolean running = false;
public QueryThread(IgniteCache<?, ?> dataCache, String queryId, String query, long timeIntervalinMillis) {
this.queryId = queryId;
this.cache = dataCache;
this.query = query;
this.timeIntervalinMillis = timeIntervalinMillis;
}
public void exec() throws Throwable {
SqlFieldsQuery qry = new SqlFieldsQuery(query, false);
while (running) {
List<List<?>> queryResult = cache.query(qry).getAll();
for (List<?> list : queryResult) {
System.out.println("result : "+list);
}
System.out.println("..... ");
Thread.sleep(timeIntervalinMillis);
}
}
}
This class is not a POJO. How do I store an instance of this class in the cache?
I tried implementing Serializable (didn't help).
I need to be able to do this:
queryCache.put(queryId, queryThread);
Next I tried broadcasting the class using the IgniteCallable interface. But my class takes multiple arguments in the constructor. I feel PeerClassLoading is easy if the class takes a no-arg constructor:
IgniteCompute compute = ignite.compute(ignite.cluster().forServers());
compute.broadcast(new IgniteCallable<MyServiceImpl>() {
@Override
public MyServiceImpl call() throws Exception {
MyServiceImpl myService = new MyServiceImpl();
return myService;
}
});
2. How do I do PeerClassLoading in the case of a class with multi-arg constructor?
You cannot put Thread instances into the cache: a Thread cannot be serialized because it calls native methods. That is why the cache always stays empty.
PeerClassLoading is a special distributed ClassLoader in Ignite for inter-node byte-code exchange. So it is only about sharing classes between nodes; how many arguments a class's constructor takes does not matter.
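For example, enabling it is a single switch on the node configuration (a sketch; it has to be enabled on all nodes that should exchange classes):

// Enable peer class loading so nodes can exchange byte code.
IgniteConfiguration cfg = new IgniteConfiguration();
cfg.setPeerClassLoadingEnabled(true);
Ignite ignite = Ignition.start(cfg);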
The object that you created, on the other hand, will be serialized and sent to the other nodes, and for deserialization it will need a default (no-arg) constructor.
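Given that, one workaround (my illustration, not part of the original answer) is to cache a plain serializable descriptor of the query instead of the Thread, and construct the QueryThread from it wherever it is needed:

import java.io.Serializable;

// Hypothetical cache value: holds the query definition, not a running thread.
public class QueryDescriptor implements Serializable {

    private String queryId;
    private String query;
    private long timeIntervalinMillis;

    // no-arg constructor required for deserialization on other nodes
    public QueryDescriptor() {
    }

    public QueryDescriptor(String queryId, String query, long timeIntervalinMillis) {
        this.queryId = queryId;
        this.query = query;
        this.timeIntervalinMillis = timeIntervalinMillis;
    }

    // getters omitted for brevity
}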

How to use a factory to create objects which use Strategy pattern?

Let's assume we have a simple payment feature on an online shop. We want to manage different transactions with different processors of transactions:
A transaction can be a payment or a refund.
A processor of transactions can be Paypal or Payplug.
So we have the following classes:
class PaymentTransaction implements Transaction {
}
class RefundTransaction implements Transaction {
}
class PaypalProcessor implements Processor {
}
class PayplugProcessor implements Processor {
}
As suggested in this answer, we could use the following class which uses Strategy and polymorphism.
class PaymentProcessor {
private Processor processor;
private Transaction transaction;
public PaymentProcessor(Processor processor, Transaction transaction) {
this.processor = processor;
this.transaction = transaction;
}
public void processPayment() {
processor.process(transaction);
}
}
We assume the processor and the transaction to use are given by the database. I wonder how to create the PaymentProcessor object.
It seems that an abstract factory class with only one method is still a valid Abstract Factory pattern. So, in this case I wonder if using Abstract Factory would be relevant.
If yes, how to implement it?
If no, should we use the Factory Method pattern with a PaymentProcessorFactory class that creates a PaymentProcessor with its two attributes according to the details given by the database?
What is the best practice for using a factory in this case?
We assume the processor and the transaction to use are given by the database. I wonder how to create the PaymentProcessor object.
I would define an interface that I can adapt to the database result, or to any other source that can provide the data needed to create a PaymentProcessor. This is also useful for unit tests.
public interface PaymentProcessorFactoryArgs {
String getProcessorType();
String getTransactionType();
}
and then implement a factory like this.
public class PaymentProcessorFactory {
private Map<String, Processor> processorMap = new HashMap<>();
private Map<String, Transaction> transactionMap = new HashMap<>();
public PaymentProcessorFactory() {
processorMap.put("paypal", new PaypalProcessor());
processorMap.put("payplug", new PayplugProcessor());
transactionMap.put("refund", new RefundTransaction());
transactionMap.put("payment", new PaymentTransaction());
}
public PaymentProcessor create(PaymentProcessorFactoryArgs factoryArgs) {
String processorType = factoryArgs.getProcessorType();
Processor processor = processorMap.get(processorType);
if(processor == null){
throw new IllegalArgumentException("Unknown processor type " + processorType);
}
String transactionType = factoryArgs.getTransactionType();
Transaction transaction = transactionMap.get(transactionType);
if(transaction == null){
throw new IllegalArgumentException("Unknown transaction type " + processorType);
}
return new PaymentProcessor(processor, transaction);
}
}
This is just a quick example. It would be better if you can register Processors and Transactions. E.g.
public void register(String processorType, Processor processor){
...
}
public void register(String transactionType, Transaction transaction){
...
}
You also might want to use another type than String for the keys, maybe an enum.
In this example the Processor and Transaction objects are re-used every time a PaymentProcessor is created. If you want to create new objects for each PaymentProcessor, you can change the map types to
private Map<String, Factory<Processor>> processorMap = new HashMap<>();
private Map<String, Factory<Transaction>> transactionMap = new HashMap<>();
using another small factory interface, e.g.
public interface Factory<T> {
public T newInstance();
}
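With that interface the constructor registers factories instead of instances, and create() builds a fresh object per call. A sketch of just the changed parts (method references satisfy the Factory interface above):

// in the PaymentProcessorFactory constructor:
processorMap.put("paypal", PaypalProcessor::new);
processorMap.put("payplug", PayplugProcessor::new);
transactionMap.put("refund", RefundTransaction::new);
transactionMap.put("payment", PaymentTransaction::new);

// in create(), after looking up and null-checking the factories:
Processor processor = processorMap.get(processorType).newInstance();
Transaction transaction = transactionMap.get(transactionType).newInstance();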
Maybe you can use the Builder pattern. In the Builder pattern there is a class called the director, which knows the algorithm for creating a complex object. To create the components the complex object is built of, the director uses a builder. This way you can exchange specific components used to build up the whole complex object.
In your case the PaymentProcessor (the complex object) is composed of a Transaction and a Processor, so the algorithm is to inject them into a PaymentProcessor. The builder builds the parts: to build a paypal-refund combination you create a builder which returns a PaypalProcessor and a RefundTransaction; when you want a payplug-payment, the builder returns a PayplugProcessor and a PaymentTransaction.
public interface PaymentProcessorBuilder {
public Transaction buildTransaction();
public Processor buildProcessor();
}
public class PaypalRefundProcessorBuilder implements PaymentProcessorBuilder {
@Override
public Transaction buildTransaction() {
return new RefundTransaction();
}
@Override
public Processor buildProcessor() {
return new PaypalProcessor();
}
}
public class PayPlugPaymentProcessorBuilder implements PaymentProcessorBuilder {
@Override
public Transaction buildTransaction() {
return new PaymentTransaction();
}
@Override
public Processor buildProcessor() {
return new PayplugProcessor();
}
}
Now the Director can use the builder to compose the PaymentProcessor:
public class PaymentProcessorDirector {
public PaymentProcessor createPaymentProcessor(PaymentProcessorBuilder builder) {
// note: this variant assumes a no-arg PaymentProcessor with setters
PaymentProcessor paymentProcessor = new PaymentProcessor();
paymentProcessor.setTransaction(builder.buildTransaction());
paymentProcessor.setProcessor(builder.buildProcessor());
return paymentProcessor;
}
}
The created PaymentProcessor now depends on the builder you pass in:
...
PaymentProcessorDirector director = new PaymentProcessorDirector();
PaymentProcessorBuilder builder = new PaypalRefundProcessorBuilder();
PaymentProcessor paymentProcessor = director.createPaymentProcessor(builder);
...
For each combination you can create a builder. If you pass the right builder to the director, you get the desired PaymentProcessor back.
Now the question is how to get the right builder. For that you can use a factory that takes some event arguments and decides which builder to create; you then pass this builder to the director and get the desired PaymentProcessor, as sketched below.
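A possible sketch of such a factory (class and method names are my assumptions):

// Hypothetical factory that picks the right builder from simple type keys.
public class PaymentProcessorBuilderFactory {

    public PaymentProcessorBuilder create(String processorType, String transactionType) {
        if ("paypal".equals(processorType) && "refund".equals(transactionType)) {
            return new PaypalRefundProcessorBuilder();
        }
        if ("payplug".equals(processorType) && "payment".equals(transactionType)) {
            return new PayPlugPaymentProcessorBuilder();
        }
        throw new IllegalArgumentException(
                "No builder for " + processorType + "/" + transactionType);
    }
}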
CAUTION: This is only one possible solution to this problem. Every solution has its advantages and disadvantages; to find the right one you have to weigh the good against the bad.
PS: I hope the syntax is correct; I'm not a Java developer.
EDIT:
You could interpret the director of the Builder pattern as a PaymentProcessorFactory, with the builder itself as the strategy for building the parts of the PaymentProcessor.

Storm Kafkaspout KryoSerialization issue for java bean from kafka topic

Hi, I am new to Storm and Kafka.
I am using Storm 1.0.1 and Kafka 0.10.0.
We have a KafkaSpout that receives a Java bean from a Kafka topic.
I have spent several hours digging to find the right approach for this.
I found a few articles that were useful, but none of the approaches has worked for me so far.
Following is my code:
StormTopology:
public class StormTopology {
public static void main(String[] args) throws Exception {
//Topo test /zkroot test
if (args.length == 4) {
System.out.println("started");
BrokerHosts hosts = new ZkHosts("localhost:2181");
SpoutConfig kafkaConf1 = new SpoutConfig(hosts, args[1], args[2],
args[3]);
kafkaConf1.zkRoot = args[2];
kafkaConf1.useStartOffsetTimeIfOffsetOutOfRange = true;
kafkaConf1.startOffsetTime = kafka.api.OffsetRequest.LatestTime();
kafkaConf1.scheme = new SchemeAsMultiScheme(new KryoScheme());
KafkaSpout kafkaSpout1 = new KafkaSpout(kafkaConf1);
System.out.println("started");
ShuffleBolt shuffleBolt = new ShuffleBolt(args[1]);
AnalysisBolt analysisBolt = new AnalysisBolt(args[1]);
TopologyBuilder topologyBuilder = new TopologyBuilder();
topologyBuilder.setSpout("kafkaspout", kafkaSpout1, 1);
//builder.setBolt("counterbolt2", countbolt2, 3).shuffleGrouping("kafkaspout");
//This is for field grouping in bolt we need two bolt for field grouping or it wont work
topologyBuilder.setBolt("shuffleBolt", shuffleBolt, 3).shuffleGrouping("kafkaspout");
topologyBuilder.setBolt("analysisBolt", analysisBolt, 5).fieldsGrouping("shuffleBolt", new Fields("trip"));
Config config = new Config();
config.registerSerialization(VehicleTrip.class, VehicleTripKyroSerializer.class);
config.setDebug(true);
config.setNumWorkers(1);
LocalCluster cluster = new LocalCluster();
cluster.submitTopology(args[0], config, topologyBuilder.createTopology());
// StormSubmitter.submitTopology(args[0], config,
// builder.createTopology());
} else {
System.out
.println("Insufficent Arguements - topologyName kafkaTopic ZKRoot ID");
}
}
}
I am serializing the data to Kafka using Kryo.
KafkaProducer:
public class StreamKafkaProducer {
private static Producer producer;
private final Properties props = new Properties();
private static final StreamKafkaProducer KAFKA_PRODUCER = new StreamKafkaProducer();
private StreamKafkaProducer(){
props.put("bootstrap.servers", "localhost:9092");
props.put("acks", "all");
props.put("key.serializer", "org.apache.kafka.common.serialization.StringSerializer");
props.put("value.serializer", "com.abc.serializer.MySerializer");
producer = new org.apache.kafka.clients.producer.KafkaProducer(props);
}
public static StreamKafkaProducer getStreamKafkaProducer(){
return KAFKA_PRODUCER;
}
public void produce(String topic, VehicleTrip vehicleTrip){
ProducerRecord<String,VehicleTrip> producerRecord = new ProducerRecord<>(topic,vehicleTrip);
producer.send(producerRecord);
//producer.close();
}
public static void closeProducer(){
producer.close();
}
}
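com.abc.serializer.MySerializer is not shown in the question. Presumably it is a Kafka value serializer that delegates to Kryo; a hypothetical sketch, reusing the VehicleTripKyroSerializer registered in the topology:

import java.io.ByteArrayOutputStream;
import java.util.Map;
import org.apache.kafka.common.serialization.Serializer;
import com.esotericsoftware.kryo.Kryo;
import com.esotericsoftware.kryo.io.Output;

// Hypothetical Kafka value serializer that writes the bean with Kryo.
public class MySerializer implements Serializer<VehicleTrip> {

    @Override
    public void configure(Map<String, ?> configs, boolean isKey) {
        // nothing to configure
    }

    @Override
    public byte[] serialize(String topic, VehicleTrip vehicleTrip) {
        Kryo kryo = new Kryo();
        kryo.addDefaultSerializer(VehicleTrip.class, new VehicleTripKyroSerializer());
        ByteArrayOutputStream bytes = new ByteArrayOutputStream();
        Output output = new Output(bytes);
        kryo.writeObject(output, vehicleTrip);
        output.close();
        return bytes.toByteArray();
    }

    @Override
    public void close() {
        // nothing to close
    }
}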
Kryo Serializer:
public class DataKyroSerializer extends Serializer<Data> implements Serializable {
@Override
public void write(Kryo kryo, Output output, Data data) {
output.writeLong(data.getStartedOn().getTime());
output.writeLong(data.getEndedOn().getTime());
}
@Override
public Data read(Kryo kryo, Input input, Class<Data> aClass) {
Data data = new Data();
data.setStartedOn(new Date(input.readLong()));
data.setEndedOn(new Date(input.readLong()));
return data;
}
}
I need to get the data back to the Data bean.
As per a few articles I need to provide a custom scheme and make it part of the topology, but so far I have had no luck.
Code for Bolt and Scheme
Scheme:
public class KryoScheme implements Scheme {
private ThreadLocal<Kryo> kryos = new ThreadLocal<Kryo>() {
protected Kryo initialValue() {
Kryo kryo = new Kryo();
kryo.addDefaultSerializer(Data.class, new DataKyroSerializer());
return kryo;
};
};
@Override
public List<Object> deserialize(ByteBuffer ser) {
return Utils.tuple(kryos.get().readObject(new ByteBufferInput(ser.array()), Data.class));
}
@Override
public Fields getOutputFields( ) {
return new Fields( "data" );
}
}
and bolt:
public class AnalysisBolt implements IBasicBolt {
/**
*
*/
private static final long serialVersionUID = 1L;
private String topicname = null;
public AnalysisBolt(String topicname) {
this.topicname = topicname;
}
public void prepare(Map stormConf, TopologyContext topologyContext) {
System.out.println("prepare");
}
public void execute(Tuple input, BasicOutputCollector collector) {
System.out.println("execute");
Fields fields = input.getFields();
try {
JSONObject eventJson = (JSONObject) JSONSerializer.toJSON((String) input
.getValueByField(fields.get(1)));
String StartTime = (String) eventJson.get("startedOn");
String EndTime = (String) eventJson.get("endedOn");
String Oid = (String) eventJson.get("_id");
int V_id = (Integer) eventJson.get("vehicleId");
//call method getEventForVehicleWithinTime(Long vehicleId, Date startTime, Date endTime)
System.out.println("==========="+Oid+"| "+V_id+"| "+StartTime+"| "+EndTime);
} catch (Exception e) {
e.printStackTrace();
}
}
But if I submit the Storm topology, I am getting this error:
java.lang.IllegalStateException: Spout 'kafkaspout' contains a
non-serializable field of type com.abc.topology.KryoScheme$1, which
was instantiated prior to topology creation.
com.minda.iconnect.topology.KryoScheme$1 should be instantiated within
the prepare method of 'kafkaspout at the earliest.
I would appreciate help debugging the issue and guidance toward the right path.
Thanks
Your ThreadLocal is not Serializable. The preferable solution would be to make your serializer both Serializable and thread-safe. If this is not possible, then I see two alternatives, since a scheme has no prepare method as you would get in a bolt:
Declare it as static, which is inherently transient.
Declare it transient and access it via a private get method. Then you can initialize the variable on first access, as shown in the sketch below.
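For the second alternative, here is a sketch of the scheme with a transient, lazily initialized Kryo (my illustration; it assumes the scheme instance is only used from a single executor thread):

import java.nio.ByteBuffer;
import java.util.List;
import org.apache.storm.spout.Scheme;
import org.apache.storm.tuple.Fields;
import org.apache.storm.utils.Utils;
import com.esotericsoftware.kryo.Kryo;
import com.esotericsoftware.kryo.io.ByteBufferInput;

public class KryoScheme implements Scheme {

    // transient: not serialized with the topology, re-created on the worker
    private transient Kryo kryo;

    private Kryo getKryo() {
        if (kryo == null) {
            kryo = new Kryo();
            kryo.addDefaultSerializer(Data.class, new DataKyroSerializer());
        }
        return kryo;
    }

    @Override
    public List<Object> deserialize(ByteBuffer ser) {
        return Utils.tuple(getKryo().readObject(new ByteBufferInput(ser.array()), Data.class));
    }

    @Override
    public Fields getOutputFields() {
        return new Fields("data");
    }
}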
Within the Storm lifecycle, the topology is instantiated and then serialized to byte format to be stored in ZooKeeper, prior to the topology being executed. Within this step, if a spout or bolt within the topology has an initialized unserializable property, serialization will fail.
If there is a need for a field that is unserializable, initialize it within the bolt or spout's prepare method, which is run after the topology is delivered to the worker.
Source: Best Practices for implementing Apache Storm

Scout SmartField suggestions from Hibernate

I am working in Scout and need a SmartField. For this I need to set up a lookup for suggestions.
I have seen the example of creating a LookupCall and then implementing getConfiguredSqlSelect in the lookup service,
but since I use Hibernate to work with my classes, my question is: how do I connect the SmartField to a service backed by Hibernate objects?
Create a new lookup call according to [1], with the following differences:
don't select AbstractSqlLookupService as the lookup service super type, but AbstractLookupService
in the associated lookup service you now need to implement getDataByAll, getDataByKey, and getDataByText
To illustrate, the following snippet should help:
public class TeamLookupService extends AbstractLookupService<String> implements ITeamLookupService {
private List<ILookupRow<String>> m_values = new ArrayList<>();
public TeamLookupService() {
m_values.add(new LookupRow<String>("CRC", "Costa Rica"));
m_values.add(new LookupRow<String>("HON", "Honduras"));
m_values.add(new LookupRow<String>("MEX", "Mexico"));
m_values.add(new LookupRow<String>("USA", "USA"));
}
@Override
public List<? extends ILookupRow<String>> getDataByAll(ILookupCall<String> call) throws ProcessingException {
return m_values;
}
@Override
public List<? extends ILookupRow<String>> getDataByKey(ILookupCall<String> call) throws ProcessingException {
List<ILookupRow<String>> result = new ArrayList<>();
for (ILookupRow<String> row : m_values) {
if (row.getKey().equals(call.getKey())) {
result.add(row);
}
}
return result;
}
...
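A possible getDataByText to complete the service (my sketch, not part of the original answer); in your case the rows would be loaded through Hibernate instead of the hard-coded m_values:

@Override
public List<? extends ILookupRow<String>> getDataByText(ILookupCall<String> call) throws ProcessingException {
    List<ILookupRow<String>> result = new ArrayList<>();
    String text = call.getText() == null ? "" : call.getText().toLowerCase();
    for (ILookupRow<String> row : m_values) {
        if (row.getText().toLowerCase().startsWith(text)) {
            result.add(row);
        }
    }
    return result;
}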
[1] https://wiki.eclipse.org/Scout/Tutorial/4.0/Minicrm/Lookup_Calls_and_Lookup_Services#Create_Company_Lookup_Call

Can I specify the Jackson @JsonView to use for method result transformation in RestEasy?

I'm working with a serialization model based on @JsonView. I normally configure Jackson with a ContextResolver like this:
@Override
public ObjectMapper getContext(Class<?> aClass) {
// enable a view by default, else Views are not processed
Class view = Object.class;
if (aClass.getPackage().getName().startsWith("my.company.entity")) {
view = getViewNameForClass(aClass);
}
objectMapper.setSerializationConfig(
objectMapper.getSerializationConfig().withView(view));
return objectMapper;
}
This works fine if I serialize single entities. However, for certain use cases I want to serialize lists of my entities using the same view as for single entities. In this case, aClass is ArrayList, so the usual logic doesn't help much.
So I'm looking for a way to tell Jackson which view to use. Ideally, I'd write:
@GET @Produces("application/json; charset=UTF-8")
@JsonView(JSONEntity.class)
public List<T> getAll(@Context UriInfo uriInfo) {
return getAll(uriInfo.getQueryParameters());
}
And have that serialized under the view JSONEntity. Is this possible with RestEasy? If not, how can I emulate that?
Edit: I know I can do the serialization myself:
public String getAll(@Context UriInfo info, @Context Providers factory) {
List<T> entities = getAll(info.getQueryParameters());
ObjectMapper mapper = factory.getContextResolver(
ObjectMapper.class, MediaType.APPLICATION_JSON_TYPE
).getContext(entityClass);
return mapper.writeValueAsString(entities);
}
However, this is clumsy at best and defeats the whole idea of having the framework deal with this boilerplate.
It turns out it is possible to simply annotate a specific endpoint with @JsonView (just as in my question) and Jackson will use this view. Who would have guessed.
You can even do this in the generic way (more context in my other question), but that ties me to RestEasy:
#Override
public void writeTo(Object value, Class<?> type, Type genericType,
Annotation[] annotations, MediaType mediaType,
MultivaluedMap<String, Object> httpHd,
OutputStream out) throws IOException {
Class view = getViewFromType(type, genericType);
ObjectMapper mapper = locateMapper(type, mediaType);
Annotation[] myAnn = Arrays.copyOf(annotations, annotations.length + 1);
myAnn[annotations.length] = new JsonViewQualifier(view);
super.writeTo(value, type, genericType, myAnn, mediaType, httpHd, out);
}
private Class getViewFromType(Class<?> type, Type genericType) {
// unwrap collections
Class target = org.jboss.resteasy.util.Types.getCollectionBaseType(
type, genericType);
target = target != null ? target : type;
try {
// use my mix-in as view class
return Class.forName("example.jackson.JSON" + target.getSimpleName());
} catch (ClassNotFoundException e) {
LOGGER.info("No view found for {}", target.getSimpleName());
}
return Object.class;
}
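JsonViewQualifier is not shown above; presumably it is a runtime implementation of the JsonView annotation interface carrying the resolved view class, something like this sketch (Jackson 1.x packages, matching the ContextResolver above):

import java.lang.annotation.Annotation;
import org.codehaus.jackson.map.annotate.JsonView;

// Hypothetical runtime stand-in for @JsonView.
class JsonViewQualifier implements JsonView {

    private final Class<?> view;

    JsonViewQualifier(Class<?> view) {
        this.view = view;
    }

    @Override
    public Class<?>[] value() {
        return new Class<?>[] { view };
    }

    @Override
    public Class<? extends Annotation> annotationType() {
        return JsonView.class;
    }
}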