AbstractStringBuilder.ensureCapacityInternal gets NullPointerException in Storm bolt

In our online system, the Storm bolt gets a NullPointerException, though I think I check for null before line 61. It gets the NullPointerException once in a while.
import ***.KeyUtils;
import ***.redis.PipelineHelper;
import ***.redis.PipelinedCacheClusterClient;
import **.redis.R2mClusterClient;
import org.apache.commons.lang3.StringUtils;
import org.apache.storm.task.OutputCollector;
import org.apache.storm.task.TopologyContext;
import org.apache.storm.topology.IRichBolt;
import org.apache.storm.topology.OutputFieldsDeclarer;
import org.apache.storm.tuple.Tuple;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import org.springframework.context.ApplicationContext;
import org.springframework.context.support.ClassPathXmlApplicationContext;
import java.util.Map;
/**
* RedisBolt batch operations
*/
public class RedisBolt implements IRichBolt {
static final long serialVersionUID = 737015318988609460L;
private static ApplicationContext applicationContext;
private static long logEmitNumber = 0;
private static StringBuffer totalCmds = new StringBuffer();
private Logger logger = LoggerFactory.getLogger(getClass());
private OutputCollector _collector;
private R2mClusterClient r2mClusterClient;
@Override
public void prepare(Map map, TopologyContext topologyContext, OutputCollector outputCollector) {
_collector = outputCollector;
if (applicationContext == null) {
applicationContext = new ClassPathXmlApplicationContext("spring/spring-config-redisbolt.xml");
}
if (r2mClusterClient == null) {
r2mClusterClient = (R2mClusterClient) applicationContext.getBean("r2mClusterClient");
}
}
@Override
public void execute(Tuple tuple) {
String log = tuple.getString(0);
String lastCommands = tuple.getString(1);
try {
//log count
if (StringUtils.isNotEmpty(log)) {
logEmitNumber++;
}
if (StringUtils.isNotEmpty(lastCommands)) {
if(totalCmds==null){
totalCmds = new StringBuffer();
}
totalCmds.append(lastCommands);//line 61
}
//limit on the number of logs
int numberLimit = 1;
String flow_log_limit = r2mClusterClient.get(KeyUtils.KEY_PIPELINE_LIMIT);
if (StringUtils.isNotEmpty(flow_log_limit)) {
try {
numberLimit = Integer.parseInt(flow_log_limit);
} catch (Exception e) {
numberLimit = 1;
logger.error("error", e);
}
}
if (logEmitNumber >= numberLimit) {
StringBuffer _totalCmds = new StringBuffer(totalCmds);
try {
//pipeline submit
PipelinedCacheClusterClient pip = r2mClusterClient.pipelined();
String[] commandArray = _totalCmds.toString().split(KeyUtils.REDIS_CMD_SPILT);
PipelineHelper.cmd(pip, commandArray);
pip.sync();
pip.close();
totalCmds = new StringBuffer();
} catch (Exception e) {
logger.error("error", e);
}
logEmitNumber = 0;
}
} catch (Exception e) {
logger.error(new StringBuffer("====RedisBolt error for log=[ ").append(log).append("] \n commands=[").append(lastCommands).append("]").toString(), e);
_collector.reportError(e);
_collector.fail(tuple);
}
_collector.ack(tuple);
}
@Override
public void cleanup() {
}
@Override
public void declareOutputFields(OutputFieldsDeclarer outputFieldsDeclarer) {
}
@Override
public Map<String, Object> getComponentConfiguration() {
return null;
}
}
exception info:
java.lang.NullPointerException
at java.lang.AbstractStringBuilder.ensureCapacityInternal(AbstractStringBuilder.java:113)
at java.lang.AbstractStringBuilder.append(AbstractStringBuilder.java:415)
at java.lang.StringBuffer.append(StringBuffer.java:237)
at com.jd.jr.dataeye.storm.bolt.RedisBolt.execute(RedisBolt.java:61)
at org.apache.storm.daemon.executor$fn__5044$tuple_action_fn__5046.invoke(executor.clj:727)
at org.apache.storm.daemon.executor$mk_task_receiver$fn__4965.invoke(executor.clj:459)
at org.apache.storm.disruptor$clojure_handler$reify__4480.onEvent(disruptor.clj:40)
at org.apache.storm.utils.DisruptorQueue.consumeBatchToCursor(DisruptorQueue.java:472)
at org.apache.storm.utils.DisruptorQueue.consumeBatchWhenAvailable(DisruptorQueue.java:451)
at org.apache.storm.disruptor$consume_batch_when_available.invoke(disruptor.clj:73)
at org.apache.storm.daemon.executor$fn__5044$fn__5057$fn__5110.invoke(executor.clj:846)
at org.apache.storm.util$async_loop$fn__557.invoke(util.clj:484)
at clojure.lang.AFn.run(AFn.java:22)
at java.lang.Thread.run(Thread.java:745)
Can anyone give me some advice on finding the cause?

That is a really odd thing to happen. Please read the code of these two classes:
https://github.com/openjdk-mirror/jdk7u-jdk/blob/master/src/share/classes/java/lang/AbstractStringBuilder.java
https://github.com/openjdk-mirror/jdk7u-jdk/blob/master/src/share/classes/java/lang/StringBuffer.java
AbstractStringBuilder has a no-args constructor which doesn't allocate the field 'value', so accessing 'value' throws an NPE. None of the constructors in StringBuffer use that constructor, so maybe some odd thing happens in serialization/deserialization and unfortunately the 'value' field in AbstractStringBuilder ends up null.
Maybe initializing totalCmds in prepare() would be better, and you also need to consider synchronization (thread-safety) between bolts. prepare() is called once per bolt instance, so instance fields are safe to initialize there, but static class fields are shared and not thread-safe.
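A hedged sketch of that suggestion (this is not the poster's code; the Redis plumbing is omitted): the buffer becomes a per-instance field created in prepare(), which Storm calls once per bolt instance before any execute() call, so it can never be observed as null or half-constructed:
import java.util.Map;
import org.apache.storm.task.OutputCollector;
import org.apache.storm.task.TopologyContext;
import org.apache.storm.topology.IRichBolt;
import org.apache.storm.topology.OutputFieldsDeclarer;
import org.apache.storm.tuple.Tuple;

public class SafeRedisBolt implements IRichBolt {
    private OutputCollector collector;
    private StringBuffer totalCmds; // instance field, never static

    @Override
    public void prepare(Map conf, TopologyContext context, OutputCollector collector) {
        this.collector = collector;
        this.totalCmds = new StringBuffer(); // created before the first tuple arrives
    }

    @Override
    public void execute(Tuple tuple) {
        totalCmds.append(tuple.getString(1)); // guaranteed non-null here
        collector.ack(tuple);
    }

    @Override
    public void cleanup() {}

    @Override
    public void declareOutputFields(OutputFieldsDeclarer declarer) {}

    @Override
    public Map<String, Object> getComponentConfiguration() {
        return null;
    }
}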

I think I may have found the problem.
The key point is
"StringBuffer _totalCmds = new StringBuffer(totalCmds);" and "totalCmds.append(lastCommands);//line 61"
Creating a new object takes several steps:
(1) allocate memory and publish the reference
(2) run the constructor to initialize the fields
Without synchronization these two steps can be reordered, so if another thread calls append() after (1) but before (2), it sees a partially constructed object. StringBuffer extends AbstractStringBuilder, which declares:
/**
* The value is used for character storage.
*/
char[] value;
At that moment value is not yet initialized, so this method dereferences null:
@Override
public synchronized void ensureCapacity(int minimumCapacity) {
if (minimumCapacity > value.length) {
expandCapacity(minimumCapacity);
}
}
This bolt also has another problem: some data may be lost in a multithreaded environment.
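A hedged sketch (not the original code) of one way to address both problems at once: keep the command buffer behind a single lock so appends and the snapshot-and-reset in the flush can never interleave and drop commands:
public class CommandBuffer {
    private final StringBuilder totalCmds = new StringBuilder();

    // Appends under the lock, so no append can run during a drain.
    public synchronized void append(String commands) {
        totalCmds.append(commands);
    }

    // Atomically snapshots and clears the buffer; nothing appended between
    // the read and the reset can be lost.
    public synchronized String drain() {
        String snapshot = totalCmds.toString();
        totalCmds.setLength(0); // reuse the buffer instead of replacing it
        return snapshot;
    }
}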

Related

Spring AOP - passing arguments between annotated methods

I've written a utility to monitor individual business transactions. For example, Alice calls a method which calls more methods, and I want info on just Alice's call, separate from Bob's call to the same method.
Right now the entry point creates a Transaction object and it's passed as an argument to each method:
class Example {
public Item getOrderEntryPoint(int orderId) {
Transaction transaction = transactionManager.create();
transaction.trace("getOrderEntryPoint");
Order order = getOrder(orderId, transaction);
transaction.stop();
logger.info(transaction);
return item;
}
private Order getOrder(int orderId, Transaction t) {
t.trace("getOrder");
Order order = getItems(itemId, t);
t.addStat("number of items", order.getItems().size());
for (Item item : order.getItems()) {
SpecialOffer offer = getSpecialOffer(item, t);
if (null != offer) {
t.incrementStat("offers", 1);
}
}
t.stop();
return order;
}
private SpecialOffer getSpecialOffer(Item item, Transaction t) {
t.trace("getSpecialOffer(" + item.id + ")", TraceCategory.Database);
return offerRepository.getByItem(item);
t.stop();
}
}
This will print to the log something like:
Transaction started by Alice at 10:42
Statistics:
number of items : 3
offers : 1
Category Timings (longest first):
DB : 2s 903ms
code : 187ms
Timings (longest first):
getSpecialOffer(1013) : 626ms
getItems : 594ms
Trace:
getOrderEntryPoint (7ms)
getOrder (594ms)
getSpecialOffer(911) (90ms)
getSpecialOffer(1013) (626ms)
getSpecialOffer(2942) (113ms)
It works great, but passing the transaction object around is ugly. Someone suggested AOP, but I don't see how to pass the transaction created in the first method to all the other methods.
The Transaction object is pretty simple:
public class Transaction {
private String uuid = UUID.createRandom();
private List<TraceEvent> events = new ArrayList<>();
private Map<String,Int> stats = new HashMap<>();
}
class TraceEvent {
private String name;
private long durationInMs;
}
The app that uses it is a web app, and thus multi-threaded, but the individual transactions are on a single thread - no multi-threading, async code, competition for resources, etc.
My attempt at an annotation:
#Around("execution(* *(..)) && #annotation(Trace)")
public Object around(ProceedingJoinPoint point) {
String methodName = MethodSignature.class.cast(point.getSignature()).getMethod().getName();
//--- Where do i get this call's instance of TRANSACTION from?
if (null == transaction) {
transaction = TransactionManager.createTransaction();
}
transaction.trace(methodName);
Object result = point.proceed();
transaction.stop();
return result;
Introduction
Unfortunately, your pseudo code does not compile. It contains several syntactical and logical errors. Furthermore, some helper classes are missing. If I did not have spare time today and was looking for a puzzle to solve, I would not have bothered making my own MCVE out of it, because that would actually have been your job. Please do read the MCVE article and learn to create one next time, otherwise you will not get a lot of qualified help here. This was your free shot because you are new on SO.
Original situation: passing through transaction objects in method calls
Application helper classes:
package de.scrum_master.app;
public class Item {
private int id;
public Item(int id) {
this.id = id;
}
public int getId() {
return id;
}
@Override
public String toString() {
return "Item[id=" + id + "]";
}
}
package de.scrum_master.app;
public class SpecialOffer {}
package de.scrum_master.app;
public class OfferRepository {
public SpecialOffer getByItem(Item item) {
if (item.getId() < 30)
return new SpecialOffer();
return null;
}
}
package de.scrum_master.app;
import java.util.ArrayList;
import java.util.List;
public class Order {
private int id;
public Order(int id) {
this.id = id;
}
public List<Item> getItems() {
List<Item> items = new ArrayList<>();
int offset = id == 12345 ? 0 : 1;
items.add(new Item(11 + offset));
items.add(new Item(22 + offset));
items.add(new Item(33 + offset));
return items;
}
}
Trace classes:
package de.scrum_master.trace;
public enum TraceCategory {
Code, Database
}
package de.scrum_master.trace;
class TraceEvent {
private String name;
private TraceCategory category;
private long durationInMs;
private boolean finished = false;
public TraceEvent(String name, TraceCategory category, long startTime) {
this.name = name;
this.category = category;
this.durationInMs = startTime;
}
public long getDurationInMs() {
return durationInMs;
}
public void setDurationInMs(long durationInMs) {
this.durationInMs = durationInMs;
}
public boolean isFinished() {
return finished;
}
public void setFinished(boolean finished) {
this.finished = finished;
}
@Override
public String toString() {
return "TraceEvent[name=" + name + ", category=" + category +
", durationInMs=" + durationInMs + ", finished=" + finished + "]";
}
}
Transaction classes:
Here I tried to mimic your own Transaction class with as few changes as possible, but there was a lot I had to add and modify in order to emulate a simplified version of your trace output. This is not thread-safe, and the way I am locating the last unfinished TraceEvent is not nice and only works cleanly if there are no exceptions. But you get the idea, I hope. The point is to just make it basically work and subsequently get log output similar to your example. If this were originally my code, I would have solved it differently.
package de.scrum_master.trace;
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import java.util.Map.Entry;
import java.util.UUID;
public class Transaction {
private String uuid = UUID.randomUUID().toString();
private List<TraceEvent> events = new ArrayList<>();
private Map<String, Integer> stats = new HashMap<>();
public void trace(String message) {
trace(message, TraceCategory.Code);
}
public void trace(String message, TraceCategory category) {
events.add(new TraceEvent(message, category, System.currentTimeMillis()));
}
public void stop() {
TraceEvent event = getLastUnfinishedEvent();
event.setDurationInMs(System.currentTimeMillis() - event.getDurationInMs());
event.setFinished(true);
}
private TraceEvent getLastUnfinishedEvent() {
return events
.stream()
.filter(event -> !event.isFinished())
.reduce((first, second) -> second)
.orElse(null);
}
public void addStat(String text, int size) {
stats.put(text, size);
}
public void incrementStat(String text, int increment) {
Integer currentCount = stats.get(text);
if (currentCount == null)
currentCount = 0;
stats.put(text, currentCount + increment);
}
@Override
public String toString() {
return "Transaction {" +
toStringUUID() +
toStringStats() +
toStringEvents() +
"\n}\n";
}
private String toStringUUID() {
return "\n uuid = " + uuid;
}
private String toStringStats() {
String result = "\n stats = {";
for (Entry<String, Integer> statEntry : stats.entrySet())
result += "\n " + statEntry;
return result + "\n }";
}
private String toStringEvents() {
String result = "\n events = {";
for (TraceEvent event : events)
result += "\n " + event;
return result + "\n }";
}
}
package de.scrum_master.trace;
public class TransactionManager {
public Transaction create() {
return new Transaction();
}
}
Example driver application:
package de.scrum_master.app;
import de.scrum_master.trace.TraceCategory;
import de.scrum_master.trace.Transaction;
import de.scrum_master.trace.TransactionManager;
public class Example {
private TransactionManager transactionManager = new TransactionManager();
private OfferRepository offerRepository = new OfferRepository();
public Order getOrderEntryPoint(int orderId) {
Transaction transaction = transactionManager.create();
transaction.trace("getOrderEntryPoint");
sleep(100);
Order order = getOrder(orderId, transaction);
transaction.stop();
System.out.println(transaction);
return order;
}
private Order getOrder(int orderId, Transaction t) {
t.trace("getOrder");
sleep(200);
Order order = new Order(orderId);
t.addStat("number of items", order.getItems().size());
for (Item item : order.getItems()) {
SpecialOffer offer = getSpecialOffer(item, t);
if (null != offer)
t.incrementStat("special offers", 1);
}
t.stop();
return order;
}
private SpecialOffer getSpecialOffer(Item item, Transaction t) {
t.trace("getSpecialOffer(" + item.getId() + ")", TraceCategory.Database);
sleep(50);
SpecialOffer specialOffer = offerRepository.getByItem(item);
t.stop();
return specialOffer;
}
private void sleep(long millis) {
try {
Thread.sleep(millis);
} catch (InterruptedException e) {
e.printStackTrace();
}
}
public static void main(String[] args) {
new Example().getOrderEntryPoint(12345);
new Example().getOrderEntryPoint(23456);
}
}
If you run this code, the output is as follows:
Transaction {
uuid = 62ec9739-bd32-4a56-b6b3-a8a13624961a
stats = {
special offers=2
number of items=3
}
events = {
TraceEvent[name=getOrderEntryPoint, category=Code, durationInMs=561, finished=true]
TraceEvent[name=getOrder, category=Code, durationInMs=451, finished=true]
TraceEvent[name=getSpecialOffer(11), category=Database, durationInMs=117, finished=true]
TraceEvent[name=getSpecialOffer(22), category=Database, durationInMs=69, finished=true]
TraceEvent[name=getSpecialOffer(33), category=Database, durationInMs=63, finished=true]
}
}
Transaction {
uuid = a420cd70-96e5-44c4-a0a4-87e421d05e87
stats = {
special offers=2
number of items=3
}
events = {
TraceEvent[name=getOrderEntryPoint, category=Code, durationInMs=469, finished=true]
TraceEvent[name=getOrder, category=Code, durationInMs=369, finished=true]
TraceEvent[name=getSpecialOffer(12), category=Database, durationInMs=53, finished=true]
TraceEvent[name=getSpecialOffer(23), category=Database, durationInMs=63, finished=true]
TraceEvent[name=getSpecialOffer(34), category=Database, durationInMs=53, finished=true]
}
}
AOP refactoring
Preface
Please note that I am using AspectJ here because two things about your code would never work with Spring AOP, which uses a delegation pattern based on dynamic proxies:
self-invocation (internally calling a method of the same class or super-class)
intercepting private methods
Because of these Spring AOP limitations I advise you to either refactor your code so as to avoid the two issues above or to configure your Spring applications to use full AspectJ via LTW (load-time weaving) instead.
As you noticed, my sample code does not use Spring at all because AspectJ is completely independent of Spring and works with any Java application (or other JVM languages, too).
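To illustrate the first limitation, here is a tiny hedged sketch (not part of the original answer): with Spring AOP the advice lives on a proxy object, and an internal call through "this" never goes back through that proxy, so the advice is silently skipped:
public class SelfInvocationDemo {
    public void entryPoint() {
        helper(); // same as this.helper(): bypasses the Spring proxy, so an
                  // @Around advice declared for helper() would not fire here
    }

    public void helper() {
        // imagine this method carries an advised annotation
    }
}
Native AspectJ weaves the advice into the bytecode of the target class itself, which is why neither self-invocation nor private methods are a problem there.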
Refactoring idea
Now what should you do in order to get rid of passing around tracing information (Transaction objects), polluting your core application code and tangling it with trace calls?
You extract transaction tracing into an aspect taking care of all trace(..) and stop() calls.
Unfortunately your Transaction class contains different types of information and does different things, so you cannot completely get rid of context information about how to trace for each affected method. But at least you can extract that context information from the method bodies and transform it into a declarative form using annotations with parameters.
These annotations can be targeted by an aspect taking care of handling transaction tracing.
Added and updated code, iteration 1
Annotations related to transaction tracing:
package de.scrum_master.trace;
import static java.lang.annotation.ElementType.METHOD;
import static java.lang.annotation.RetentionPolicy.RUNTIME;
import java.lang.annotation.Retention;
import java.lang.annotation.Target;
@Retention(RUNTIME)
@Target(METHOD)
public @interface TransactionEntryPoint {}
package de.scrum_master.trace;
import static java.lang.annotation.ElementType.METHOD;
import static java.lang.annotation.RetentionPolicy.RUNTIME;
import java.lang.annotation.Retention;
import java.lang.annotation.Target;
@Retention(RUNTIME)
@Target(METHOD)
public @interface TransactionTrace {
String message() default "__METHOD_NAME__";
TraceCategory category() default TraceCategory.Code;
String addStat() default "";
String incrementStat() default "";
}
Refactored application classes with annotations:
package de.scrum_master.app;
import java.util.ArrayList;
import java.util.List;
import de.scrum_master.trace.TransactionTrace;
public class Order {
private int id;
public Order(int id) {
this.id = id;
}
@TransactionTrace(message = "", addStat = "number of items")
public List<Item> getItems() {
List<Item> items = new ArrayList<>();
int offset = id == 12345 ? 0 : 1;
items.add(new Item(11 + offset));
items.add(new Item(22 + offset));
items.add(new Item(33 + offset));
return items;
}
}
Nothing much here, only added an annotation to getItems(). But the sample application class changes massively, getting much cleaner and simpler:
package de.scrum_master.app;
import de.scrum_master.trace.TraceCategory;
import de.scrum_master.trace.TransactionEntryPoint;
import de.scrum_master.trace.TransactionTrace;
public class Example {
private OfferRepository offerRepository = new OfferRepository();
@TransactionEntryPoint
@TransactionTrace
public Order getOrderEntryPoint(int orderId) {
sleep(100);
Order order = getOrder(orderId);
return order;
}
@TransactionTrace
private Order getOrder(int orderId) {
sleep(200);
Order order = new Order(orderId);
for (Item item : order.getItems()) {
SpecialOffer offer = getSpecialOffer(item);
// Do something with special offers
}
return order;
}
@TransactionTrace(category = TraceCategory.Database, incrementStat = "specialOffers")
private SpecialOffer getSpecialOffer(Item item) {
sleep(50);
SpecialOffer specialOffer = offerRepository.getByItem(item);
return specialOffer;
}
private void sleep(long millis) {
try {
Thread.sleep(millis);
} catch (InterruptedException e) {
e.printStackTrace();
}
}
public static void main(String[] args) {
new Example().getOrderEntryPoint(12345);
new Example().getOrderEntryPoint(23456);
}
}
See? Except for a few annotations there is nothing left of the transaction tracing logic, the application code only takes care of its core concern. If you also remove the sleep() method which only makes the application slower for demonstration purposes (because we want some nice statistics with measured times >0 ms), the class gets even more compact.
But of course we need to put the transaction tracing logic somewhere, more precisely modularise it into an AspectJ aspect:
Transaction tracing aspect:
package de.scrum_master.trace;
import java.lang.reflect.Array;
import java.util.Arrays;
import java.util.Collection;
import java.util.stream.Collectors;
import org.aspectj.lang.JoinPoint;
import org.aspectj.lang.ProceedingJoinPoint;
import org.aspectj.lang.annotation.After;
import org.aspectj.lang.annotation.Around;
import org.aspectj.lang.annotation.Aspect;
import org.aspectj.lang.annotation.Pointcut;
import org.aspectj.lang.reflect.MethodSignature;
#Aspect("percflow(entryPoint())")
public class TransactionTraceAspect {
private static TransactionManager transactionManager = new TransactionManager();
private Transaction transaction = transactionManager.create();
#Pointcut("execution(* *(..)) && #annotation(de.scrum_master.trace.TransactionEntryPoint)")
private static void entryPoint() {}
#Around("execution(* *(..)) && #annotation(transactionTrace)")
public Object doTrace(ProceedingJoinPoint joinPoint, TransactionTrace transactionTrace) throws Throwable {
preTrace(transactionTrace, joinPoint);
Object result = joinPoint.proceed();
postTrace(transactionTrace);
addStat(transactionTrace, result);
incrementStat(transactionTrace, result);
return result;
}
private void preTrace(TransactionTrace transactionTrace, ProceedingJoinPoint joinPoint) {
String traceMessage = transactionTrace.message();
if ("".equals(traceMessage))
return;
MethodSignature signature = (MethodSignature) joinPoint.getSignature();
if ("__METHOD_NAME__".equals(traceMessage)) {
traceMessage = signature.getName() + "(";
traceMessage += Arrays.stream(joinPoint.getArgs()).map(arg -> arg.toString()).collect(Collectors.joining(", "));
traceMessage += ")";
}
transaction.trace(traceMessage, transactionTrace.category());
}
private void postTrace(TransactionTrace transactionTrace) {
if ("".equals(transactionTrace.message()))
return;
transaction.stop();
}
private void addStat(TransactionTrace transactionTrace, Object result) {
if ("".equals(transactionTrace.addStat()) || result == null)
return;
if (result instanceof Collection)
transaction.addStat(transactionTrace.addStat(), ((Collection<?>) result).size());
else if (result.getClass().isArray())
transaction.addStat(transactionTrace.addStat(), Array.getLength(result));
}
private void incrementStat(TransactionTrace transactionTrace, Object result) {
if ("".equals(transactionTrace.incrementStat()) || result == null)
return;
transaction.incrementStat(transactionTrace.incrementStat(), 1);
}
#After("entryPoint()")
public void logFinishedTransaction(JoinPoint joinPoint) {
System.out.println(transaction);
}
}
Let me explain what this aspect does:
@Pointcut(..) entryPoint() says: Find me all methods in the code annotated by @TransactionEntryPoint. This pointcut is used in two places:
@Aspect("percflow(entryPoint())") says: Create one aspect instance for each control flow beginning at a transaction entry point.
@After("entryPoint()") logFinishedTransaction(..) says: Execute this advice (AOP terminology for a method linked to a pointcut) after an entry point method has finished. The corresponding method just prints the transaction statistics, just like the original code at the end of Example.getOrderEntryPoint(..).
@Around("execution(* *(..)) && @annotation(transactionTrace)") doTrace(..) says: Wrap methods annotated by @TransactionTrace and do the following (method body):
add new trace element and start measuring time
execute original (wrapped) method and store result
update trace element with measured time
add one type of statistics (optional)
increment another type of statistics (optional)
return wrapped method's result to its caller
The private methods are just helpers for the #Around advice.
The console log when running the updated Example class with AspectJ active is:
Transaction {
uuid = 4529d325-c604-441d-8997-45ca659abb14
stats = {
specialOffers=2
number of items=3
}
events = {
TraceEvent[name=getOrderEntryPoint(12345), category=Code, durationInMs=468, finished=true]
TraceEvent[name=getOrder(12345), category=Code, durationInMs=366, finished=true]
TraceEvent[name=getSpecialOffer(Item[id=11]), category=Database, durationInMs=59, finished=true]
TraceEvent[name=getSpecialOffer(Item[id=22]), category=Database, durationInMs=50, finished=true]
TraceEvent[name=getSpecialOffer(Item[id=33]), category=Database, durationInMs=51, finished=true]
}
}
Transaction {
uuid = ef76a996-8621-478b-a376-e9f7a729a501
stats = {
specialOffers=2
number of items=3
}
events = {
TraceEvent[name=getOrderEntryPoint(23456), category=Code, durationInMs=452, finished=true]
TraceEvent[name=getOrder(23456), category=Code, durationInMs=351, finished=true]
TraceEvent[name=getSpecialOffer(Item[id=12]), category=Database, durationInMs=50, finished=true]
TraceEvent[name=getSpecialOffer(Item[id=23]), category=Database, durationInMs=50, finished=true]
TraceEvent[name=getSpecialOffer(Item[id=34]), category=Database, durationInMs=50, finished=true]
}
}
You see, it looks almost identical to the original application.
Idea for further simplification, iteration 2
When reading method Example.getOrder(int orderId) I was wondering why you are calling order.getItems(), looping over it and calling getSpecialOffer(item) inside the loop. In your sample code you do not use the results for anything other than updating the transaction trace object. I am assuming that in your real code you do something with the order and with the special offers in that method.
But just in case you really do not need those calls inside that method, I suggest
you factor the calls out right into the aspect, getting rid of the TransactionTrace annotation parameters String addStat() and String incrementStat().
The Example code would get even simpler and
the annotation @TransactionTrace(message = "", addStat = "number of items") in class Order would go away, too.
I am leaving this refactoring to you if you think it makes sense.

How do I configure spring-kafka to ignore messages in the wrong format?

We have an issue with one of our Kafka topics which is consumed by the DefaultKafkaConsumerFactory & ConcurrentMessageListenerContainer combination described here with a JsonDeserializer used by the Factory. Unfortunately someone got a little enthusiastic and published some invalid messages onto the topic. It appears that spring-kafka silently fails to process past the first of these messages. Is it possible to have spring-kafka log an error and continue? Looking at the error messages which are logged it seems that perhaps the Apache kafka-clients library should deal with the case that when iterating a batch of messages one or more of them may fail to parse?
The below code is an example test case illustrating this issue:
import org.apache.kafka.clients.consumer.ConsumerConfig;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.producer.ProducerConfig;
import org.apache.kafka.common.serialization.Serializer;
import org.apache.kafka.common.serialization.StringDeserializer;
import org.apache.kafka.common.serialization.StringSerializer;
import org.junit.ClassRule;
import org.junit.Test;
import org.springframework.kafka.core.DefaultKafkaConsumerFactory;
import org.springframework.kafka.core.DefaultKafkaProducerFactory;
import org.springframework.kafka.core.KafkaTemplate;
import org.springframework.kafka.listener.KafkaMessageListenerContainer;
import org.springframework.kafka.listener.MessageListener;
import org.springframework.kafka.listener.config.ContainerProperties;
import org.springframework.kafka.support.SendResult;
import org.springframework.kafka.support.serializer.JsonDeserializer;
import org.springframework.kafka.support.serializer.JsonSerializer;
import org.springframework.kafka.test.rule.KafkaEmbedded;
import org.springframework.kafka.test.utils.ContainerTestUtils;
import org.springframework.util.concurrent.ListenableFuture;
import java.util.HashMap;
import java.util.Map;
import java.util.Objects;
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.ExecutionException;
import java.util.concurrent.LinkedBlockingQueue;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.TimeoutException;
import static org.junit.Assert.assertEquals;
import static org.junit.Assert.assertThat;
import static org.springframework.kafka.test.hamcrest.KafkaMatchers.hasKey;
import static org.springframework.kafka.test.hamcrest.KafkaMatchers.hasValue;
/**
* @author jfreedman
*/
public class TestSpringKafka {
private static final String TOPIC1 = "spring.kafka.1.t";
@ClassRule
public static KafkaEmbedded embeddedKafka = new KafkaEmbedded(1, true, 1, TOPIC1);
@Test
public void submitMessageThenGarbageThenAnotherMessage() throws Exception {
final BlockingQueue<ConsumerRecord<String, JsonObject>> records = createListener(TOPIC1);
final KafkaTemplate<String, JsonObject> objectTemplate = createPublisher("json", new JsonSerializer<JsonObject>());
sendAndVerifyMessage(records, objectTemplate, "foo", new JsonObject("foo"), 0L);
// push some garbage text to Kafka which cannot be marshalled, this should not interrupt processing
final KafkaTemplate<String, String> garbageTemplate = createPublisher("garbage", new StringSerializer());
final SendResult<String, String> garbageResult = garbageTemplate.send(TOPIC1, "bar","bar").get(5, TimeUnit.SECONDS);
assertEquals(1L, garbageResult.getRecordMetadata().offset());
sendAndVerifyMessage(records, objectTemplate, "baz", new JsonObject("baz"), 2L);
}
private <T> KafkaTemplate<String, T> createPublisher(final String label, final Serializer<T> serializer) {
final Map<String, Object> producerProps = new HashMap<>();
producerProps.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, embeddedKafka.getBrokersAsString());
producerProps.put(ProducerConfig.CLIENT_ID_CONFIG, "TestPublisher-" + label);
producerProps.put(ProducerConfig.ACKS_CONFIG, "all");
producerProps.put(ProducerConfig.RETRIES_CONFIG, 2);
producerProps.put(ProducerConfig.MAX_IN_FLIGHT_REQUESTS_PER_CONNECTION, 1);
producerProps.put(ProducerConfig.REQUEST_TIMEOUT_MS_CONFIG, 5000);
producerProps.put(ProducerConfig.MAX_BLOCK_MS_CONFIG, 5000);
producerProps.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class);
producerProps.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, serializer.getClass());
final DefaultKafkaProducerFactory<String, T> pf = new DefaultKafkaProducerFactory<>(producerProps);
pf.setValueSerializer(serializer);
return new KafkaTemplate<>(pf);
}
private BlockingQueue<ConsumerRecord<String, JsonObject>> createListener(final String topic) throws Exception {
final Map<String, Object> consumerProps = new HashMap<>();
consumerProps.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, embeddedKafka.getBrokersAsString());
consumerProps.put(ConsumerConfig.GROUP_ID_CONFIG, "TestConsumer");
consumerProps.put(ConsumerConfig.ENABLE_AUTO_COMMIT_CONFIG, true);
consumerProps.put(ConsumerConfig.AUTO_COMMIT_INTERVAL_MS_CONFIG, "100");
consumerProps.put(ConsumerConfig.SESSION_TIMEOUT_MS_CONFIG, 15000);
consumerProps.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class);
consumerProps.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, JsonDeserializer.class);
final DefaultKafkaConsumerFactory<String, JsonObject> cf = new DefaultKafkaConsumerFactory<>(consumerProps);
cf.setValueDeserializer(new JsonDeserializer<>(JsonObject.class));
final KafkaMessageListenerContainer<String, JsonObject> container = new KafkaMessageListenerContainer<>(cf, new ContainerProperties(topic));
final BlockingQueue<ConsumerRecord<String, JsonObject>> records = new LinkedBlockingQueue<>();
container.setupMessageListener((MessageListener<String, JsonObject>) records::add);
container.setBeanName("TestListener");
container.start();
ContainerTestUtils.waitForAssignment(container, embeddedKafka.getPartitionsPerTopic());
return records;
}
private void sendAndVerifyMessage(final BlockingQueue<ConsumerRecord<String, JsonObject>> records,
final KafkaTemplate<String, JsonObject> template,
final String key, final JsonObject value,
final long expectedOffset) throws InterruptedException, ExecutionException, TimeoutException {
final ListenableFuture<SendResult<String, JsonObject>> future = template.send(TOPIC1, key, value);
final ConsumerRecord<String, JsonObject> record = records.poll(5, TimeUnit.SECONDS);
assertThat(record, hasKey(key));
assertThat(record, hasValue(value));
assertEquals(expectedOffset, future.get(5, TimeUnit.SECONDS).getRecordMetadata().offset());
}
public static final class JsonObject {
private String value;
public JsonObject() {}
JsonObject(final String value) {
this.value = value;
}
public String getValue() {
return value;
}
public void setValue(final String value) {
this.value = value;
}
@Override
public boolean equals(final Object o) {
if (this == o) { return true; }
if (o == null || getClass() != o.getClass()) { return false; }
final JsonObject that = (JsonObject) o;
return Objects.equals(value, that.value);
}
@Override
public int hashCode() {
return Objects.hash(value);
}
@Override
public String toString() {
return "JsonObject{" +
"value='" + value + '\'' +
'}';
}
}
}
I have a solution, but I don't know if it's the best one. I extended JsonDeserializer as follows, which results in a null value being consumed by spring-kafka and requires the necessary downstream changes to handle that case.
class SafeJsonDeserializer[A >: Null](targetType: Class[A], objectMapper: ObjectMapper) extends JsonDeserializer[A](targetType, objectMapper) with Logging {
override def deserialize(topic: String, data: Array[Byte]): A = try {
super.deserialize(topic, data)
} catch {
case e: Exception =>
logger.error("Failed to deserialize data [%s] from topic [%s]".format(new String(data), topic), e)
null
}
}
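For readers not using Scala, here is a hedged Java sketch of the same idea (constructor choice and logging are illustrative): catch the deserialization failure, log it, and return null so processing can continue:
import org.springframework.kafka.support.serializer.JsonDeserializer;

public class SafeJsonDeserializer<T> extends JsonDeserializer<T> {
    public SafeJsonDeserializer(Class<T> targetType) {
        super(targetType);
    }

    @Override
    public T deserialize(String topic, byte[] data) {
        try {
            return super.deserialize(topic, data);
        } catch (Exception e) {
            // the record is unparseable; skip it and let downstream handle the null
            System.err.printf("Failed to deserialize data [%s] from topic [%s]: %s%n",
                    new String(data), topic, e);
            return null;
        }
    }
}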
Starting from spring-kafka 2.x.x, we now have the comfort of declaring beans in the config file for the interface KafkaListenerErrorHandler, with an implementation such as:
@Bean
public ConsumerAwareListenerErrorHandler listen3ErrorHandler() {
return (m, e, c) -> {
this.listen3Exception = e;
MessageHeaders headers = m.getHeaders();
c.seek(new org.apache.kafka.common.TopicPartition(
headers.get(KafkaHeaders.RECEIVED_TOPIC, String.class),
headers.get(KafkaHeaders.RECEIVED_PARTITION_ID, Integer.class)),
headers.get(KafkaHeaders.OFFSET, Long.class));
return null;
};
}
More resources can be found at https://docs.spring.io/spring-kafka/reference/htmlsingle/#annotation-error-handling. There are also other links with similar issues: Spring Kafka error handling - v1.1.x and How to handle SerializationException after deserialization.
Use ErrorHandlingDeserializer2. This is a delegating key/value deserializer that catches exceptions, returning them in the headers as serialized java objects.
Under consumer configuration, add/update the below lines:
import org.apache.kafka.clients.consumer.ConsumerConfig
import org.springframework.kafka.support.serializer.ErrorHandlingDeserializer2
configProps.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG,
classOf[ErrorHandlingDeserializer2[JsonDeserializer]].getName)
configProps.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, classOf[ErrorHandlingDeserializer2[StringDeserializer]].getName)
configProps.put(ErrorHandlingDeserializer2.KEY_DESERIALIZER_CLASS, classOf[StringDeserializer].getName)
configProps.put(ErrorHandlingDeserializer2.VALUE_DESERIALIZER_CLASS, classOf[JsonDeserializer].getName)
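For comparison, a hedged Java sketch of the same consumer configuration (property names taken from the answer above; the JsonDeserializer would additionally need its target type configured):
import java.util.HashMap;
import java.util.Map;
import org.apache.kafka.clients.consumer.ConsumerConfig;
import org.apache.kafka.common.serialization.StringDeserializer;
import org.springframework.kafka.support.serializer.ErrorHandlingDeserializer2;
import org.springframework.kafka.support.serializer.JsonDeserializer;

Map<String, Object> configProps = new HashMap<>();
// the consumer instantiates the error-handling wrappers...
configProps.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, ErrorHandlingDeserializer2.class);
configProps.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, ErrorHandlingDeserializer2.class);
// ...which delegate to the real deserializers and catch anything they throw
configProps.put(ErrorHandlingDeserializer2.KEY_DESERIALIZER_CLASS, StringDeserializer.class);
configProps.put(ErrorHandlingDeserializer2.VALUE_DESERIALIZER_CLASS, JsonDeserializer.class);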

Exception in thread "pool-1-thread-7" java.lang.NullPointerException

I'm trying to figure out why I get this exception, as I wrote in the title.
I'm building a multi-threaded MMU (memory management unit) project. First I initialize and add some pages into the RAM and to my HD, and then I create processes and run the whole system. A bit of information about the project:
In runConfig I'm reading a JSON file with a list of processCycles; every ProcessCycles contains a list of ProcessCycle, each of which contains a list of pageIds together with the data of those pages.
I think I did everything right, but I still get this exception. Can somebody help me?
MMUDriver is the class that runs the whole system:
package hit.driver;
import java.io.FileNotFoundException;
import java.io.FileReader;
import java.io.IOException;
import java.lang.reflect.InvocationTargetException;
import java.util.ArrayList;
import java.util.List;
import java.util.Map;
import java.util.Map.Entry;
import java.util.concurrent.Executor;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import com.google.gson.Gson;
import com.google.gson.JsonIOException;
import com.google.gson.JsonSyntaxException;
import com.google.gson.stream.JsonReader;
import com.hit.algorithm.IAlgoCache;
import com.hit.algorithm.LRUAlgoCacheImpl;
import com.hit.algorithm.MRUAlgoCacheImpl;
import com.hit.algorithm.RandomReplacementAlgoCacheImpl;
import hit.memoryunits.HardDisk;
import hit.memoryunits.MemoryManagementUnit;
import hit.memoryunits.MemoryManagementUnitTest;
import hit.memoryunits.Page;
import hit.processes.ProcessCycles;
import hit.processes.Process;
import hit.processes.RunConfiguration;
#SuppressWarnings("unused")
public class MMUDriver {
private static final String CONFIG_FILE_NAME="Configuration.json";
// private static final String CONFIG_FILE_NAME="test.json";
//read from JSON file the lists of processCycles
private static RunConfiguration readConfigurationFile() throws JsonIOException, JsonSyntaxException, FileNotFoundException{
RunConfiguration runConfig = new Gson().fromJson(new JsonReader(new FileReader(CONFIG_FILE_NAME)), RunConfiguration.class);
return runConfig;
}
//
private static List<Process> createProcesses(List<ProcessCycles> processCycles,MemoryManagementUnit mmu ){
int i=0;
List<Process> processList = new ArrayList<Process>();
for ( ProcessCycles currentListProcess : processCycles) {
Process process = new Process(++i , mmu , currentListProcess);
processList.add(process);
}
return processList;
}
//
private static void runProcesses(List<Process> processes){
//Executor is an interface that takes care of threads: when a thread starts, when it ends, how they run in parallel, etc.
//thread pool - a collection of threads
ExecutorService executorservice = Executors.newCachedThreadPool();
for (Process process : processes)
executorservice.execute(process);
executorservice.shutdown();
}
//
public static void main(String[] args) throws java.lang.InterruptedException,
InvocationTargetException, JsonIOException, JsonSyntaxException, ClassNotFoundException, IOException{
CLI cli = new CLI(System.in,System.out);
String [] configuration;
while((configuration = cli.getConfiguration()) != null){
IAlgoCache<Long, Long> algo = null;
int capacity = Integer.valueOf(configuration[1]);
switch(configuration[0]){ //which configuration to play on LRU|MRU|RR
case "LRU":
algo = new LRUAlgoCacheImpl<Long, Long>(capacity);
break;
case "MRU":
algo = new MRUAlgoCacheImpl<Long, Long>(capacity);
break;
case "RR":
algo = new RandomReplacementAlgoCacheImpl<Long, Long>(capacity);
break;
}
MemoryManagementUnit mmu = new MemoryManagementUnit(algo, capacity);
MemoryManagementUnitTest.test(mmu);//add some pages to ram for test
RunConfiguration runConfig = readConfigurationFile(); //Getting the user request pages throw json file
List<ProcessCycles> processCycles = runConfig.getProcessesCycles();
List<Process> processes = createProcesses(processCycles , mmu);
runProcesses(processes);
Map<Long, Page<byte[]>> bla= mmu.getRam().getRamPages();
System.out.println("1");
// mmu.shoutDown();
}
}
}
Process acts like a thread in this project:
package hit.processes;
import java.io.IOException;
import java.util.List;
import hit.memoryunits.MemoryManagementUnit;
import hit.memoryunits.Page;
#SuppressWarnings("unused")
public class Process implements Runnable {
private int id;
private MemoryManagementUnit mmu;
private ProcessCycles processCycles;
private Thread processThread=null;
public Process(int id, MemoryManagementUnit mmu, ProcessCycles processCycles){
this.id=id;
this.mmu=mmu;
this.processCycles=processCycles;
}
public int getId(){
return id;
}
public void setId(int id){
this.id=id;
}
/*
public void start(){
if(processThread==null)
processThread = new Thread(this,"process"); //
processThread.start();
}*/
@Override
public void run() {
synchronized(mmu){
//runing on all the list of processCycles
for (ProcessCycle currentProcessCycle : processCycles.getProcessCycles()) {
List<Long> pages = currentProcessCycle.getPages();
List<byte[]> data = currentProcessCycle.getData();
Page<byte[]>[] pagesList = null;
//creating a boolean array to know if the page is for writing or reading in size of pages.size()
boolean [] writePages = new boolean[pages.size()];
for (int i = 0; i < pages.size(); i++)
if(data.get(i) == null) //if the page is empty
writePages[i] = false;
else{
writePages[i] = true ;//if there is data in the page and we want to write it to the ram
}
try {
// Getting the pages from ram to see which we shall update
pagesList = mmu.getPages(pages.toArray(new Long[pages.size()]), writePages);
}
catch (ClassNotFoundException e1) {
e1.printStackTrace();
} catch (IOException e1) {
e1.printStackTrace();
}
for (int j = 0; j < pages.size(); j++)
if(writePages[j] == true) //if the page isn't for read only
pagesList[j].setContent(data.get(j));
try {
Thread.sleep(currentProcessCycle.getSleepMs());
}
catch (InterruptedException e) {
e.printStackTrace();
}
}
}
}
}
The MMU class takes care of all the requests from the user and manages all the paging:
package hit.memoryunits;
import java.io.FileNotFoundException;
import java.io.IOException;
import com.hit.algorithm.IAlgoCache;
public class MemoryManagementUnit {
private IAlgoCache<Long,Long> algo;
private RAM<Long, Page<byte[]>> ram;
public MemoryManagementUnit(IAlgoCache<Long, Long> algo, int ramCapacity) {
this.algo = algo;
this.ram= new RAM<Long, Page<byte[]>>(ramCapacity);
}
#SuppressWarnings("null")
public Page<byte[]> [] getPages(Long [] pageIds , boolean [] writePages) throws IOException, ClassNotFoundException{
for (int i = 0; i < pageIds.length; i++) {
if(algo.getElement((long) pageIds[i].hashCode())== null)//the page isn't exist in the cache
if(!ram.isFull())
{
if(writePages[i] == true){//if the page is for write we need to overwrite data
Page<byte[]> pageToRam = null;
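// NOTE: pageToRam is still null at this point, so the setPageId(...) call on
// the next line dereferences null; this is the most likely source of the
// NullPointerException from the title. It needs to be assigned a real Page
// instance first (the Page constructor is not shown in this post).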
pageToRam.setPageId(pageIds[i]);
pageToRam.setContent(null);
ram.addPage(pageToRam);// ram is not full need to upload from hd
}
else{//if the page is for read-only so we need to upload the page from HD
ram.addPage(HardDisk.getInstance().pageFault(pageIds[i]));// ram is not full need to upload from hd
}
algo.putElement(Long.valueOf(pageIds[i].hashCode()),pageIds[i]);//update cache
}
else
{
Page<byte[]> pageToRemove = ram.getPage(algo.putElement(Long.valueOf(pageIds[i].hashCode()),pageIds[i]));
ram.removePage(pageToRemove);
ram.addPage(HardDisk.getInstance().pageReplacement(pageToRemove,pageIds[i]));// ram full need to do replacement
algo.putElement(Long.valueOf(pageIds[i].hashCode()),pageIds[i]);//update cache
}
}
return ram.getPages(pageIds);
}
public RAM<Long, Page<byte[]>> getRam(){
return ram;
}
public IAlgoCache<Long,Long> getAlgo(){
return algo;
}
public void setAlgo(IAlgoCache<Long,Long> algo){
this.algo = algo;
}
public void setRam(RAM<Long, Page<byte[]>> ram){
this.ram = ram;
}
public void shoutDown() throws FileNotFoundException, IOException, ClassNotFoundException{
HardDisk.getInstance().saveToDisk(this.getRam().getRamPages());
}
}

How to serialize a Predicate<T> from Nashorn engine in java 8

How can I serialize a predicate obtained from the Java ScriptEngine Nashorn? Or how can I cast jdk.nashorn.javaadapters.java.util.function.Predicate to Serializable?
Here is the case:
I have this class
import java.io.Serializable;
import java.util.function.Predicate;
public class Filter implements Serializable {
private Predicate<Object> filter;
public Predicate<Object> getFilter() {
return filter;
}
public void setFilter(Predicate<Object> filter) {
this.filter = filter;
}
public boolean evaluate(int value) {
return filter.test(value);
}
}
and
import java.io.File;
import java.io.FileInputStream;
import java.io.FileOutputStream;
import java.io.IOException;
import java.io.ObjectInputStream;
import java.io.ObjectOutputStream;
import java.io.Serializable;
import java.util.function.Predicate;
import javax.script.ScriptEngine;
import javax.script.ScriptEngineManager;
import javax.script.ScriptException;
public class TestFilterSer {
public static void main(String[] args) throws ScriptException {
Filter f = new Filter();
//This works
//f.setFilter(getCastedPred());
// But I want this to work
f.setFilter(getScriptEnginePred());
System.out.println(f.evaluate(6));
try (ObjectOutputStream oos = new ObjectOutputStream(new FileOutputStream(new File("pred.ser")))) {
oos.writeObject(f);
} catch (IOException e) {
e.printStackTrace();
}
f= null;
try (ObjectInputStream ois = new ObjectInputStream(new FileInputStream(new File("pred.ser")))) {
f= (Filter)ois.readObject();
} catch (IOException | ClassNotFoundException e) {
e.printStackTrace();
}
System.out.println(f.evaluate(7));
}
public static Predicate<Object> getCastedPred() {
Predicate<Object> isEven = (Predicate<Object> & Serializable)(i) -> (Integer)i%2 == 0;
return isEven;
}
public static Predicate<Object> getScriptEnginePred() throws ScriptException {
ScriptEngine engine = new ScriptEngineManager().getEngineByName("nashorn");
Predicate<Object> p = (Predicate<Object> & Serializable)engine.eval(
String.format("new java.util.function.Predicate(%s)", "function(i) i%2==0")
);
return p;
}
}
Requirement: To be able to serialize the Predicate obtained from Nashorn engine.
Observation: When I get the Predicate from the method getCastedPred(), it works because it is a java.util.function.Predicate; it does serialize after casting to Serializable. But when I get the Predicate from the Nashorn engine, internally it returns jdk.nashorn.javaadapters.java.util.function.Predicate, which doesn't serialize, and casting to Serializable doesn't work.
Any idea how I can serialize this type of Predicate?
The problem is your API uses Predicate, not AggregateFilter. So the target type for the lambda in
setAggregatePredicate(x -> true)
will be Predicate, not AggregateFilter -- and the compiler won't know to make it serializable. If you change your API to use the more specific functional interface, serializable lambdas will be generated.
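A minimal hedged sketch of that suggestion (the names are illustrative, and this only covers the Java-lambda side; the Nashorn adapter class itself remains non-serializable): declare a functional interface that extends both Predicate and Serializable and make the API accept it, so the compiler generates serializable lambdas without an intersection cast at every call site:
import java.io.Serializable;
import java.util.function.Predicate;

public class SerializableFilterDemo {
    // A functional interface that is serializable by declaration.
    public interface SerializablePredicate<T> extends Predicate<T>, Serializable {}

    private static SerializablePredicate<Object> filter;

    // The API targets the specific interface, so lambdas passed here
    // are automatically serializable.
    public static void setFilter(SerializablePredicate<Object> f) {
        filter = f;
    }

    public static void main(String[] args) {
        setFilter(i -> (Integer) i % 2 == 0); // no explicit cast needed
        System.out.println(filter.test(6));   // true
    }
}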

Caused by: java.lang.ClassCastException Wrapper cannot be cast to

I am trying to call a remote EJB running in a GlassFish 3.1 container from a JavaFX 2.2 client, and an exception is thrown when I look up the remote EJB.
The purpose of my application is to get/put objects from the JavaFX client which are saved/retrieved as XML files on the server side.
On the server side the EJB is packaged into an EAR.
The package "scrubber_S_Controller" contains the stateless session EJB:
package scrubber_S_Controller;
import java.io.Serializable;
import javax.ejb.Stateless;
import javax.xml.bind.JAXBException;
import scrubber_S_Model.SimpleObject;
/**
* Session Bean implementation class Session
*/
@Stateless
public class Session implements SessionRemote, Serializable {
/**
*
*/
private static final long serialVersionUID = -5718452084852474986L;
/**
* Default constructor.
*/
public Session() {
// TODO Auto-generated constructor stub
}
@Override
public SimpleObject getSimpleObject() throws JAXBException {
SimpleObject simpleobjet = new SimpleObject();
return simpleobjet.retrieveSimpleObject();
}
@Override
public void setSimpleObject(SimpleObject simpleobject) throws JAXBException {
simpleobject.saveSimpleObject(simpleobject);
}
}
The remote interface used is
package scrubber_S_Controller;
import javax.ejb.Remote;
import javax.xml.bind.JAXBException;
import scrubber_S_Model.SimpleObject;
@Remote
public interface SessionRemote {
public SimpleObject getSimpleObject() throws JAXBException;
public void setSimpleObject(SimpleObject simpleobject) throws JAXBException;
}
SimpleObject are managed in a scrubber_S_Model package:
package scrubber_S_Model;
import java.io.FileInputStream;
import java.io.FileNotFoundException;
import java.io.FileOutputStream;
import java.io.IOException;
import javax.xml.bind.JAXBContext;
import javax.xml.bind.JAXBException;
import javax.xml.bind.Marshaller;
import javax.xml.bind.Unmarshaller;
import javax.xml.bind.annotation.XmlElement;
import javax.xml.bind.annotation.XmlRootElement;
#XmlRootElement(name = "SimpleObject")
public class SimpleObject implements java.io.Serializable {
/**
*
*/
private static final long serialVersionUID = 306212289216139111L;
/**
* Used to define a simpleObject Value Object
*/
#XmlElement(name = "scrubberValveValue")
private int scrubberValveValue;
#XmlElement(name = "bypassValveValue")
private int bypassValveValue;
#XmlElement(name = "exhaustState")
private boolean exhaustState;
#XmlElement(name = "layoutColor")
private String layoutColor;
#XmlElement(name = "textValue")
private String textValue;
#XmlElement(name = "textColor")
private String textColor;
#XmlElement(name = "pressureThreshold")
private int pressureThreshold;
public SimpleObject(int bypassvalvevalue, int scrubbervalvevalue,
boolean exhauststate, String layoutcolor, String textvalue,
String textcolor, int pressurethreshold) {
this.bypassValveValue = bypassvalvevalue;
this.scrubberValveValue = scrubbervalvevalue;
this.exhaustState = exhauststate;
this.layoutColor = layoutcolor;
this.textValue = textvalue;
this.textColor = textcolor;
this.pressureThreshold = pressurethreshold;
}
/**
* Empty constructor, just to enable JAXB.
*/
public SimpleObject() {
}
/**
* Gets the value of the scrubberValveValue property.
*
*/
public int getScrubberValveValue() {
return this.scrubberValveValue;
}
/**
* Sets the value of the scrubberValveValue property.
*
*/
public void setScrubberValveValue(int value) {
this.scrubberValveValue = value;
}
/**
* Gets the value of the bypassValveValue property.
*
*/
public int getBypassValveValue() {
return this.bypassValveValue;
}
/**
* Sets the value of the bypassValveValue property.
*
*/
public void setBypassValveValue(int value) {
this.bypassValveValue = value;
}
/**
* Gets the value of the exhaustState property.
*
*/
public boolean isExhaustState() {
return this.exhaustState;
}
/**
* Sets the value of the exhaustState property.
*
*/
public void setExhaustState(boolean value) {
this.exhaustState = value;
}
/**
* Gets the value of the layoutColor property.
*
* @return possible object is {@link String }
*
*/
public String getLayoutColor() {
return this.layoutColor;
}
/**
* Sets the value of the layoutColor property.
*
* @param value
* allowed object is {@link String }
*
*/
public void setLayoutColor(String value) {
this.layoutColor = value;
}
/**
* Gets the value of the textValue property.
*
* @return possible object is {@link String }
*
*/
public String getTextValue() {
return this.textValue;
}
/**
* Sets the value of the textValue property.
*
* @param value
* allowed object is {@link String }
*
*/
public void setTextValue(String value) {
this.textValue = value;
}
/**
* Gets the value of the textColor property.
*
* @return possible object is {@link String }
*
*/
public String getTextColor() {
return this.textColor;
}
/**
* Sets the value of the textColor property.
*
* @param value
* allowed object is {@link String }
*
*/
public void setTextColor(String value) {
this.textColor = value;
}
/**
* Gets the value of the pressureThreshold property.
*
*/
public int getPressureThreshold() {
return this.pressureThreshold;
}
/**
* Sets the value of the pressureThreshold property.
*
*/
public void setPressureThreshold(int value) {
this.pressureThreshold = value;
}
public void saveSimpleObject(SimpleObject simpleobjet) throws JAXBException {
FileOutputStream fileout = null;
JAXBContext jc = JAXBContext.newInstance(SimpleObject.class);
Marshaller marshaller = jc.createMarshaller();
marshaller.setProperty(Marshaller.JAXB_FORMATTED_OUTPUT, true);
try {
fileout = new FileOutputStream("simpleobjectfile.xml");
} catch (FileNotFoundException e1) {
// TODO Auto-generated catch block
e1.printStackTrace();
}
marshaller.marshal(simpleobjet, fileout);
try {
fileout.flush();
fileout.close();
} catch (IOException e) {
e.printStackTrace();
}
}
public SimpleObject retrieveSimpleObject() throws JAXBException {
FileInputStream fileinput = null;
JAXBContext jc = JAXBContext.newInstance(SimpleObject.class);
Unmarshaller unmarshaller = jc.createUnmarshaller();
try {
fileinput = new FileInputStream("simpleobjectfile.xml");
} catch (FileNotFoundException e) {
e.printStackTrace();
}
SimpleObject simpleobjet = (SimpleObject)unmarshaller.unmarshal(fileinput);
try {
fileinput.close();
} catch (IOException e) {
e.printStackTrace();
}
return simpleobjet;
}
}
Junit test of the marshalling/unmashalling are working fine.
Deployment of the EJB give the following JNDI naming:
INFO: EJB5181:Portable JNDI names for EJB Session: [java:global/Scrubber_S_EAR/Scrubber_S/Session, java:global/Scrubber_S_EAR/Scrubber_S/Session!scrubber_S_Controller.SessionRemote]
INFO: CORE10010: Loading application Scrubber_S_EAR done in 4 406 ms
On the client side the Javafx application as follow:
package ScrubberView;
import java.util.Properties;
import javax.naming.Context;
import javax.naming.InitialContext;
import javax.naming.NamingException;
import javax.xml.bind.JAXBException;
import scrubber_CView_Model.SimpleObject;
import session.SessionRemote;
import javafx.application.Application;
import javafx.scene.Scene;
import javafx.scene.layout.BorderPane;
import javafx.scene.paint.Color;
import javafx.stage.Stage;
public class scrubberView extends Application {
@Override
public void start(Stage primaryStage) throws JAXBException {
try {
Properties propriétés = new Properties();
propriétés.setProperty("org.omg.CORBA.ORBInitialHost", "localhost");
Context ctx = new InitialContext(propriétés);
SessionRemote mySession = (SessionRemote)ctx.lookup("java:global/Scrubber_S_EAR/Scrubber_S/Session");
//Create an object to exchange
SimpleObject simpleObject = new SimpleObject(1, 2, true, "layoutcolor", "text", "textcolor", 10 );
mySession.setSimpleObject(simpleObject);
#SuppressWarnings("unused")
SimpleObject simpleObject2 = new SimpleObject();
simpleObject2 = mySession.getSimpleObject();
} catch (NamingException e) {
// TODO Auto-generated catch block
System.out.println(e.toString() );
e.printStackTrace();
}
//compose the scrubberview scene and show it
primaryStage.setTitle("scrubberView");
BorderPane borderpane = new BorderPane();
Scene scene = new Scene(borderpane, 350, 80, Color.GREY);
primaryStage.setScene(scene);
scene.getStylesheets().add("./CleanRoomControl.css");
primaryStage.show();
}
public static void main(String[] args) {
launch(args);
}
}
The following jars are on the build path of the application:
C:\glassfish3\glassfish\modules\auto-depends.jar
C:\glassfish3\glassfish\modules\common-util.jar
C:\glassfish3\glassfish\modules\config-api.jar
C:\glassfish3\glassfish\modules\config-types.jar
C:\glassfish3\glassfish\modules\config.jar
C:\glassfish3\glassfish\modules\deployment-common.jar
C:\glassfish3\glassfish\modules\dol.jar
C:\glassfish3\glassfish\modules\ejb-container.jar
C:\glassfish3\glassfish\modules\ejb.security.jar
C:\glassfish3\glassfish\modules\glassfish-api.jar
C:\glassfish3\glassfish\modules\glassfish-corba-asm.jar
C:\glassfish3\glassfish\modules\glassfish-corba-codegen.jar
C:\glassfish3\glassfish\modules\glassfish-corba-csiv2-idl.jar
C:\glassfish3\glassfish\modules\glassfish-corba-newtimer.jar
C:\glassfish3\glassfish\modules\glassfish-corba-omgapi.jar
C:\glassfish3\glassfish\modules\glassfish-corba-orb.jar
C:\glassfish3\glassfish\modules\glassfish-corba-orbgeneric.jar
C:\glassfish3\glassfish\modules\glassfish-naming.jar
C:\glassfish3\glassfish\modules\gmbal.jar
C:\glassfish3\glassfish\modules\hk2-core.jar
C:\glassfish3\glassfish\modules\internal-api.jar
C:\glassfish3\glassfish\modules\javax.ejb.jar
C:\glassfish3\glassfish\modules\kernel.jar
C:\glassfish3\glassfish\modules\management-api.jar
C:\glassfish3\glassfish\modules\orb-connector.jar
C:\glassfish3\glassfish\modules\orb-iiop.jar
C:\glassfish3\glassfish\modules\security.jar
C:\glassfish3\glassfish\modules\transaction-internal-api.jar
When running the
SessionRemote mySession = (SessionRemote)ctx.lookup("java:global/Scrubber_S_EAR/Scrubber_S/Session");
line, it raises the following exception:
Exception in Application start method
Exception in thread "main" java.lang.RuntimeException: Exception in Application start method
at com.sun.javafx.application.LauncherImpl.launchApplication1(LauncherImpl.java:403)
at com.sun.javafx.application.LauncherImpl.access$000(LauncherImpl.java:47)
at com.sun.javafx.application.LauncherImpl$1.run(LauncherImpl.java:115)
at java.lang.Thread.run(Thread.java:722)
Caused by: java.lang.ClassCastException: scrubber_S_Controller._SessionRemote_Wrapper cannot be cast to session.SessionRemote
at ScrubberView.scrubberView.start(scrubberView.java:27)
at com.sun.javafx.application.LauncherImpl$5.run(LauncherImpl.java:319)
at com.sun.javafx.application.PlatformImpl$5.run(PlatformImpl.java:215)
at com.sun.javafx.application.PlatformImpl$4$1.run(PlatformImpl.java:179)
at com.sun.javafx.application.PlatformImpl$4$1.run(PlatformImpl.java:176)
at java.security.AccessController.doPrivileged(Native Method)
at com.sun.javafx.application.PlatformImpl$4.run(PlatformImpl.java:176)
at com.sun.glass.ui.win.WinApplication._runLoop(Native Method)
at com.sun.glass.ui.win.WinApplication.access$100(WinApplication.java:29)
at com.sun.glass.ui.win.WinApplication$3$1.run(WinApplication.java:73)
... 1 more
If I try with the other naming:
SessionRemote mySession = (SessionRemote)ctx.lookup("java:global/Scrubber_S_EAR/Scrubber_S/Session!scrubber_S_Controller.SessionRemote");
It raises the same exception.
It would be great if somebody could help me fix this issue.
Many thanks in advance for your help.
I hope my English is not too bad to understand.
I fixed this issue with the following:
Removed the jar files from the environment and put only gf-client.jar in the classpath, as explained in
How do I access a Remote EJB component from a stand-alone java client?
Renamed "scrubber_S_Controller" to "session" (matching the corresponding package on the client side)
Removed "retrieveSimpleObject()" and "saveSimpleObject(SimpleObject simpleobjet)" from the "SimpleObject" class and added them to a new "SimpleObjectPersist" class
Used "SimpleObjectPersist" to save and retrieve a SimpleObject.
After this, it's running well.
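A hedged sketch of the resulting client-side lookup under those changes (assuming the remote interface now lives in a package named session on both sides, and the ORB uses the GlassFish defaults):
Properties props = new Properties();
props.setProperty("org.omg.CORBA.ORBInitialHost", "localhost");
props.setProperty("org.omg.CORBA.ORBInitialPort", "3700"); // GlassFish default ORB port
Context ctx = new InitialContext(props);
// the fully qualified portable JNDI name now matches the client-side package
SessionRemote mySession = (SessionRemote) ctx.lookup(
        "java:global/Scrubber_S_EAR/Scrubber_S/Session!session.SessionRemote");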