Multithreading and Chronicle Queue - chronicle-queue

I'm trying to find the fastest way to write to Chronicle Queue in a multithreaded environment, so I have the following:
public static void main(String[] args) throws Exception {
    final String path = args[0];
    int times = Integer.parseInt(args[1]);
    int num = Integer.parseInt(args[2]);
    AtomicInteger nextid = new AtomicInteger(0);
    ThreadLocal<Integer> id = ThreadLocal.withInitial(() -> nextid.getAndIncrement());
    ChronicleTest test = new ChronicleTest();
    ChronicleWriter writer = test.new ChronicleWriter(path);
    CountDownLatch start = new CountDownLatch(1);
    CountDownLatch done = new CountDownLatch(num);
    Thread[] threads = new Thread[num];
    long[] samples = new long[times * num];
    for (int i = 0; i < num; i++) {
        threads[i] = new Thread(new Runnable() {
            @Override
            public void run() {
                try {
                    start.await();
                    for (int i = 0; i < times; i++) {
                        int j = i + times * id.get().intValue();
                        long s = System.nanoTime();
writer.write(j + " DGirr5JgGVmxhvmaoO0c5MVVOUIEJxWa6nVStPnqmRl3T4hKE9tiwNjn6322uhgr2Fs4hDG8aKYvIB4O0733fx18EqGqNsshaSKoouky5ZekGK3vO87nfSUOz6uDD0olOp35QJQKPgr7tNlFgQP7BImcCyMPFCCm3yhSvOUgiVAD9W9BC3cqlKjQebOG4EkqzRIzwZjxbnIeK2YttfrvOvUJs0e9WBhXUVibi5Ks2j9ROQu2q0PJII4NYyN1a5YW2UKxyng3bRrtBVFqMSamtFzJ23EE4Y7rbQyeCVJhIKRM1LRvcGLUYZqKICWwDtOjGcbXUIlLLYiJcnVRZ4gNRvbFXvTL4XDjhD3uP5S8DjnkeAIBZcQ4VEUf30x65pTGLhWjOMV6jtiEQOWKB3nsuPMhcS1lP3wTQztViW7T8IsQlA5kvVAsnT5A7pojS1CffcYz4a2Rwqf9w6mhTPPZXgpDGArtThW3a69rwjsQxEY9aTwi0Pu0jlSAMmvOA158QFsSeJvLoJXILfocgjNEkj7iVcO0Rc6eC6b5EhJU3wv80EEHnORMXpQVuAuPyao7vJsx06TMcXf9t7Py4qxplVPhptIzrKs2qke2t5A8O4LQzq19OfEQsQGEjqHSbnfWXjfuntuFR8rV4VMyLZO1z3K7HmHtCEy14p5u0C0lj7vmtCnrOum0bnG2MwaAR7DJPIpOtwRObli5K5grv54AWnJyagpRX5e3eTEA8NAKO8oDZuLaoCvgavv9ciFrJcIDmyleVweiVrHs1fQXJoELzFpH4BmvzBuUjfZ8ORSIZsVuq4Hpls19GIA8opb1mSBt7MTifLPauo8WDWRoNi9DvjL4Z08Je6DvrhAFXasU2CMugg5EZ06OXexU17qnvxx2Vz9s9E5U50jDemySZ78KcZ6nqhQKIXUrkTktoEan2JXxI2NNSqEYifwly8ZO2MDquPe4J11rAcOqYp9y6Kb4NtFpNysM1evrLPvCx8oe");
                        long e = System.nanoTime();
                        samples[j] = e - s;
                    }
                    done.countDown();
                } catch (Exception e) {
                    e.printStackTrace();
                }
            }
        });
    }
    for (int i = 0; i < num; i++) {
        try {
            threads[i].start();
        } catch (Exception e) {
        }
    }
    long startT = System.currentTimeMillis();
    start.countDown();
    done.await();
    long endT = System.currentTimeMillis();
    System.out.println("Time to complete [" + times + "] iteration in [" + (endT - startT) + " ms] and threads [" + num + "]");
    System.out.println("#######");
    for (int i = 0; i < times * num; i++) {
        System.out.println(samples[i]);
    }
}
private class ChronicleWriter {
    SingleChronicleQueue m_cqueue;
    ThreadLocal<ExcerptAppender> m_appender;
    ChronicleWriter(String path) {
        m_cqueue = SingleChronicleQueueBuilder.binary(path).build();
        m_appender = new ThreadLocal<ExcerptAppender>() {
            protected ExcerptAppender initialValue() {
                return m_cqueue.acquireAppender();
            }
        };
    }
    void write(String msg) {
        m_appender.get().writeText(msg);
    }
}
And I ran with parameters:
path 2500 40
For some reason, this keeps crashing with a core dump. What am I doing wrong? My disk has plenty of free space, so that shouldn't matter. Thanks!!

If your program is crashing due to an OutOfMemoryError, then
note that free disk space and the memory actually used by the program are different things.
You may need to increase the JVM heap size.
Refer to the link below on how to increase the JVM heap size:
What are the Xms and Xmx parameters when starting JVMs?
Or refer to the link below if you are running your program through Eclipse:
http://www.planetofbits.com/eclipse/increase-jvm-heap-size-in-eclipse/
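For example, the heap can be raised with the -Xms/-Xmx flags when launching the test (a sketch only: it assumes the main method shown above lives in ChronicleTest, and the path and heap sizes are illustrative):
java -Xms512m -Xmx4g ChronicleTest /tmp/queue 2500 40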
I have tried your program with the following version of chronicle-queue and it works fine:
<dependency>
    <groupId>net.openhft</groupId>
    <artifactId>chronicle-queue</artifactId>
    <version>4.5.14</version>
</dependency>

Related

JVM Condition and LockSupport: which is faster?

An experimental example of a synchronous call is set up: each thread's task waits for a callback carrying its own value + 1, and the performance of Condition is compared against LockSupport. The result is unexpected: the two total times are the same, but the difference on the flame graph is very large. Does this mean that the JVM has not optimized LockSupport?
public class LockerTest {
static int max = 100000;
static boolean usePark = true;
static Map<Long, Long> msg = new ConcurrentHashMap<>();
static ExecutorService producer = Executors.newFixedThreadPool(4);
static ExecutorService consumer = Executors.newFixedThreadPool(16);
static AtomicLong record = new AtomicLong(0);
static CountDownLatch latch = new CountDownLatch(max);
static ReentrantLock lock = new ReentrantLock();
static Condition cond = lock.newCondition();
static Map<Long, Thread> parkMap = new ConcurrentHashMap<>();
static AtomicLong cN = new AtomicLong(0);
public static void main(String[] args) throws InterruptedException {
long start = System.currentTimeMillis();
for (int num = 0; num < max; num++) {
consumer.execute(() -> {
long id = record.incrementAndGet();
msg.put(id, -1L);
call(id);
if (usePark) {
Thread thread = Thread.currentThread();
parkMap.put(id, thread);
while (msg.get(id) == -1) {
cN.incrementAndGet();
LockSupport.park(thread);
}
} else {
lock.lock();
try {
while (msg.get(id) == -1) {
cN.incrementAndGet();
cond.await();
}
} catch (InterruptedException e) {
e.printStackTrace();
} finally {
lock.unlock();
}
}
latch.countDown();
});
}
latch.await();
consumer.shutdown();
producer.shutdown();
System.out.printf("park %s suc %s cost %s cn %s"
, usePark
, msg.entrySet().stream().noneMatch(entry -> entry.getKey() + 1 != entry.getValue())
, System.currentTimeMillis() - start
, cN.get()
);
}
private static void call(long id) {
producer.execute(() -> {
try {
Thread.sleep((id * 13) % 100);
} catch (InterruptedException e) {
e.printStackTrace();
}
if (usePark) {
msg.put(id, id + 1);
LockSupport.unpark(parkMap.remove(id));
} else {
lock.lock();
try {
msg.put(id, id + 1);
cond.signalAll();
} finally {
lock.unlock();
}
}
});
}
}
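For reference, here is a stripped-down sketch of the two handoff patterns being compared, with the benchmark harness removed (class and method names are illustrative, not taken from the post). Note that Condition.await itself parks the waiting thread via LockSupport under the hood, which is consistent with the two variants showing similar total times; the extra frames on the flame graph come from the ReentrantLock bookkeeping rather than from a different blocking primitive.
import java.util.concurrent.locks.Condition;
import java.util.concurrent.locks.LockSupport;
import java.util.concurrent.locks.ReentrantLock;

public class HandoffSketch {
    // LockSupport variant: the producer must know which thread to unpark
    // (the benchmark above stores it in parkMap before checking the flag).
    static volatile boolean ready = false;
    static volatile Thread waiter;

    static void waitWithPark() {
        waiter = Thread.currentThread();   // publish the waiter first
        while (!ready) {                   // loop, because park can return spuriously
            LockSupport.park();
        }
    }

    static void completeWithUnpark() {
        ready = true;
        LockSupport.unpark(waiter);        // unpark(null) is documented as a no-op
    }

    // Condition variant: both sides must take the same lock around await/signal.
    static final ReentrantLock lock = new ReentrantLock();
    static final Condition cond = lock.newCondition();
    static boolean done = false;

    static void waitWithCondition() throws InterruptedException {
        lock.lock();
        try {
            while (!done) {
                cond.await();              // releases the lock while waiting
            }
        } finally {
            lock.unlock();
        }
    }

    static void completeWithSignal() {
        lock.lock();
        try {
            done = true;
            cond.signalAll();              // wakes all waiters; each re-checks its predicate
        } finally {
            lock.unlock();
        }
    }
}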

Optimizing Transposing and Comparison of Concurrent List in Java 8

I have a Java 8 application that collects data from multiple threads using a BlockingQueue.
I need to perform comparisons of samples.
Because my real application is very large, I implemented a mock application (on GitHub) in Java 8.
I generate chunks of bytes (really in random order).
The bytes are stored in a ChunkDTO class.
I implemented a capturer that collects the ChunkDTOs into a List; see the Capturer class.
Each ChunkDTO in the List is translated into a List of samples (TimePitchValue, to be exact), producing a nested List (a List of Lists of TimePitchValue).
Later the nested List is transposed in order to perform comparisons between TimePitchValue instances with the same time value.
Due to the enormous volume of TimePitchValue instances, this consumes a huge amount of time in my application.
Here is some of the code (the complete working code is on GitHub, since it is still too large for this site).
public class Generator {
final static Logger LOGGER = Logger.getLogger("SampleComparator");
public static void main(String[] args) {
long previous = System.nanoTime();
final int minBufferSize = 2048;
int sampleRate = 8192;
int numChannels = 1;
int numBytesPerSample = 1;
int samplesChunkPerSecond = sampleRate / minBufferSize;
int minutes = 0;
int seconds = 10;
int time = 60 * minutes + seconds;
int chunksBySecond = samplesChunkPerSecond * numBytesPerSample * numChannels;
int pitchs = 32;
boolean signed = false;
boolean endianness = false;
AudioFormat audioformat = new AudioFormat(sampleRate, 8 * numBytesPerSample, numChannels, signed, endianness);
ControlDSP controlDSP = new ControlDSP(audioformat);
BlockingQueue<ChunkDTO> generatorBlockingQueue = new LinkedBlockingQueue<>();
Capturer capturer = new Capturer(controlDSP, pitchs, pitchs * time * chunksBySecond, generatorBlockingQueue);
controlDSP.getListFuture().add(controlDSP.getExecutorService().submit(capturer));
for (int i = 0; i < time * chunksBySecond; i++) {
for (int p = 0; p < pitchs; p++) {
ChunkDTO chunkDTO = new ChunkDTO(UtilClass.getArrayByte(minBufferSize), i, p);
LOGGER.info(String.format("chunkDTO: %s", chunkDTO));
try {
generatorBlockingQueue.put(chunkDTO);
} catch (InterruptedException ex) {
LOGGER.info(ex.getMessage());
}
}
try {
Thread.sleep(1000 / chunksBySecond);
} catch (Exception ex) {
}
}
controlDSP.tryFinishThreads(Thread.currentThread());
long current = System.nanoTime();
long interval = TimeUnit.NANOSECONDS.toSeconds(current - previous);
System.out.println("Seconds Interval: " + interval);
}
}
Capturer Class
public class Capturer implements Callable<Void> {
private final ControlDSP controlDSP;
private final int pitchs;
private final int totalChunks;
private final BlockingQueue<ChunkDTO> capturerBlockingQueue;
private final Counter intCounter;
private final Map<Long, List<ChunkDTO>> mapIndexListChunkDTO = Collections.synchronizedMap(new HashMap<>());
private volatile boolean isRunning = false;
private final String threadName;
private static final Logger LOGGER = Logger.getLogger("SampleComparator");
public Capturer(ControlDSP controlDSP, int pitchs, int totalChunks, BlockingQueue<ChunkDTO> capturerBlockingQueue) {
this.controlDSP = controlDSP;
this.pitchs = pitchs;
this.totalChunks = totalChunks;
this.capturerBlockingQueue = capturerBlockingQueue;
this.intCounter = new Counter();
this.controlDSP.getListFuture().add(this.controlDSP.getExecutorService().submit(() -> {
while (intCounter.getValue() < totalChunks) {
try {
Thread.sleep(100);
} catch (InterruptedException ex) {
LOGGER.log(Level.SEVERE, null, ex);
}
}
capturerBlockingQueue.add(new ChunkDTOStopper());
}));
this.threadName = this.getClass().getSimpleName();
}
@Override
public Void call() throws Exception {
long quantity = 0;
isRunning = true;
while (isRunning) {
try {
ChunkDTO chunkDTO = capturerBlockingQueue.take();
if (chunkDTO instanceof ChunkDTOStopper) {
break;
}
//Find or Create List (according to Index) to add the incoming Chunk
long index = chunkDTO.getIndex();
int sizeChunk = chunkDTO.getChunk().length;
List<ChunkDTO> listChunkDTOWithIndex = getListChunkDTOByIndex(chunkDTO);
//When the List (according to Index) is completed and processed
if (listChunkDTOWithIndex.size() == pitchs) {
mapIndexListChunkDTO.remove(index);
TransposerComparator transposerComparator = new TransposerComparator(controlDSP, controlDSP.getAudioformat(), index, sizeChunk, listChunkDTOWithIndex);
controlDSP.getListFuture().add(controlDSP.getExecutorService().submit(transposerComparator));
}
quantity++;
intCounter.setValue(quantity);
LOGGER.info(String.format("%s\tConsumes:%s\ttotal:%05d", threadName, chunkDTO, quantity));
} catch (Exception ex) {
LOGGER.log(Level.SEVERE, null, ex);
}
}
LOGGER.info(String.format("%s\tReceived:%05d\tQty:%s\tPitchs:%s\tEND\n", threadName, quantity, quantity / pitchs, pitchs));
return null;
}
private List<ChunkDTO> getListChunkDTOByIndex(ChunkDTO chunkDTO) {
List<ChunkDTO> listChunkDTOWithIndex = mapIndexListChunkDTO.get(chunkDTO.getIndex());
if (listChunkDTOWithIndex == null) {
listChunkDTOWithIndex = new ArrayList<>();
mapIndexListChunkDTO.put(chunkDTO.getIndex(), listChunkDTOWithIndex);
listChunkDTOWithIndex = mapIndexListChunkDTO.get(chunkDTO.getIndex());
}
listChunkDTOWithIndex.add(chunkDTO);
return listChunkDTOWithIndex;
}
}
TransposerComparator class.
The optimization required is in this code, specifically in the transposedNestedList method.
public class TransposerComparator implements Callable<Void> {
private final ControlDSP controlDSP;
private final AudioFormat audioformat;
private final long index;
private final int sizeChunk;
private final List<ChunkDTO> listChunkDTOWithIndex;
private final String threadName;
private static final Logger LOGGER = Logger.getLogger("SampleComparator");
public TransposerComparator(ControlDSP controlDSP, AudioFormat audioformat, long index, int sizeChunk, List<ChunkDTO> listChunkDTOWithIndex) {
this.controlDSP = controlDSP;
this.audioformat = audioformat;
this.index = index;
this.sizeChunk = sizeChunk;
this.listChunkDTOWithIndex = listChunkDTOWithIndex;
this.threadName = this.getClass().getSimpleName() + "_" + String.format("%05d", index);
}
@Override
public Void call() throws Exception {
Thread.currentThread().setName(threadName);
LOGGER.info(String.format("%s\tINI", threadName));
try {
int numBytesPerSample = audioformat.getSampleSizeInBits() / 8;
int quantitySamples = sizeChunk / numBytesPerSample;
long baseTime = quantitySamples * index;
// Convert the List of Chunk Bytes to Nested List of TimePitchValue
List<List<TimePitchValue>> nestedListTimePitchValue = listChunkDTOWithIndex
.stream()
.map(chunkDTO -> {
return IntStream
.range(0, quantitySamples)
.mapToObj(time -> {
int value = extractValue(chunkDTO.getChunk(), numBytesPerSample, time);
return new TimePitchValue(chunkDTO.getPitch(), baseTime + time, value);
}).collect(Collectors.toList());
}).collect(Collectors.toList());
List<List<TimePitchValue>> timeNestedListTimePitchValue = transposedNestedList(nestedListTimePitchValue);
} catch (Exception ex) {
ex.printStackTrace();
LOGGER.log(Level.SEVERE, null, ex);
throw ex;
}
return null;
}
private static int extractValue(byte[] bytesSamples, int numBytesPerSample, int time) {
byte[] bytesSingleNumber = Arrays.copyOfRange(bytesSamples, time * numBytesPerSample, (time + 1) * numBytesPerSample);
int value = numBytesPerSample == 2
? (UtilClass.Byte2IntLit(bytesSingleNumber[0], bytesSingleNumber[1]))
: (UtilClass.byte2intSmpl(bytesSingleNumber[0]));
return value;
}
private static List<List<TimePitchValue>> transposedNestedList(List<List<TimePitchValue>> nestedList) {
List<List<TimePitchValue>> outNestedList = new ArrayList<>();
nestedList.forEach(pitchList -> {
pitchList.forEach(pitchValue -> {
List<TimePitchValue> listTimePitchValueWithTime = listTimePitchValueWithTime(outNestedList, pitchValue.getTime());
if (!outNestedList.contains(listTimePitchValueWithTime)) {
outNestedList.add(listTimePitchValueWithTime);
}
listTimePitchValueWithTime.add(pitchValue);
});
});
outNestedList.forEach(pitchList -> {
pitchList.sort(Comparator.comparingInt(TimePitchValue::getValue).reversed());
});
return outNestedList;
}
private static List<TimePitchValue> listTimePitchValueWithTime(List<List<TimePitchValue>> nestedList, long time) {
List<TimePitchValue> listTimePitchValueWithTime = nestedList
.stream()
.filter(innerList -> innerList.stream()
.anyMatch(timePitchValue -> timePitchValue.getTime() == time))
.findAny()
.orElseGet(ArrayList::new);
return listTimePitchValueWithTime;
}
}
I was testing:
With 5 seconds in the Generator class: with the line List<List<TimePitchValue>> timeNestedListTimePitchValue = transposedNestedList(nestedListTimePitchValue); in the TransposerComparator class commented out, 7 seconds were needed; uncommented, 211 seconds were needed.
With 10 seconds in the Generator class: with the same line commented out, 12 seconds were needed; uncommented, 574 seconds were needed.
I need to run the application for at least 60 minutes.
To reduce the time consumed, I see two ways:
The short-term option, which I have chosen, is to optimize the methods I am currently using.
The other option should also work but will take longer: use GPGPU, but I don't know where to start implementing it yet.
QUESTIONS
This question is about the first way: what changes do you recommend in the code of the transposedNestedList method in order to improve its speed?
Is there a better alternative for this comparison?
outNestedList.forEach(pitchList -> {
pitchList.sort(Comparator.comparingInt(TimePitchValue::getValue).reversed());
});
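Regarding the first question, one possible direction (a sketch only, under the assumption that TimePitchValue exposes getTime() and getValue() as in the code above, and that grouping samples by time is the intent of the transposition): the quadratic cost comes from listTimePitchValueWithTime re-scanning outNestedList for every sample. Grouping by time through a map turns that scan into a constant-time lookup:
// Uses java.util.ArrayList, Comparator, LinkedHashMap, List, Map.
private static List<List<TimePitchValue>> transposedNestedList(List<List<TimePitchValue>> nestedList) {
    // Group every sample by its time stamp in a single pass: O(n) instead of
    // the original O(n^2) scan of outNestedList for each element.
    Map<Long, List<TimePitchValue>> byTime = new LinkedHashMap<>();
    for (List<TimePitchValue> pitchList : nestedList) {
        for (TimePitchValue sample : pitchList) {
            byTime.computeIfAbsent(sample.getTime(), t -> new ArrayList<>()).add(sample);
        }
    }
    // LinkedHashMap keeps the time slices in first-seen order, matching the
    // order the original method produced.
    List<List<TimePitchValue>> outNestedList = new ArrayList<>(byTime.values());
    for (List<TimePitchValue> timeSlice : outNestedList) {
        timeSlice.sort(Comparator.comparingInt(TimePitchValue::getValue).reversed());
    }
    return outNestedList;
}
Regarding the second question, the per-slice sort is likely not the dominant cost here; the repeated scans in listTimePitchValueWithTime are.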

RabbitMQ Priority Queue Write Performance Reduced as Number of Priorities Increases

I am using RabbitMQ 3.7.3 with Java client 5.1.2 [amqp-client-5.1.2.jar] for a priority queue. In my use case I will have a maximum of 60 priorities in a single non-persistent queue, of which only a few (up to 10-15) will be used most of the time.
Case 1: If I define the queue with 10 priorities and push messages with priorities ranging from 0 to 9, I get 12,500 writes per second.
Case 2: If I define the queue with 60 priorities and push messages with priorities ranging from 0 to 9, I get 4,200 writes per second.
Case 3: If I define the queue with 250 priorities and push messages with priorities ranging from 0 to 9, I get only 1,500 writes per second.
What I observe is that as the priority capacity of the queue increases, the write performance degrades, even though only a few priorities are actually used.
Below is the code snippet (writes are done using a single thread):
import com.rabbitmq.client.ConnectionFactory;
import com.rabbitmq.client.Connection;
import com.rabbitmq.client.Channel;
import com.rabbitmq.client.AMQP.BasicProperties;
import java.io.*;
import java.util.*;
public class Write implements Runnable{
int start;
int count;
int end;
private long unixTime;
private long timeTaken;
private long totalTimeTaken;
private long totalRequests;
private long minLatency;
private long maxLatency;
private double avgLatency;
private FileOutputStream out;
int parity = 1;
int ndnc = 0;
int mnp = 1001;
int blist = 0;
String value;
ConnectionFactory factory;
Connection connection;
Channel channel;
String QUEUE_NAME;
public Write(int s, int c){
this.start = s;
this.count = c;
this.end = this.count;
this.totalTimeTaken = 0;
this.totalRequests = 0;
this.minLatency = 1000000;
this.maxLatency = 0;
try{
this.QUEUE_NAME = "queuedata_4";
this.factory = new ConnectionFactory();
factory.setHost("192.168.1.100");
factory.setUsername("user");
factory.setPassword("pass");
this.connection = factory.newConnection();
this.channel = this.connection.createChannel();
Map<String, Object> args = new HashMap<String, Object>();
args.put("x-max-priority", 60);
this.channel.queueDeclare(QUEUE_NAME, false, false, false, args);
}catch(Exception e){
System.out.println("Create Exception"+e);
}
}
public void run(){
String message;
byte[] data = null;
for(int k=this.start; k<=(this.end); k++){
message = "Message_" + k;
unixTime = System.nanoTime();
try{
this.channel.basicPublish(
"",
this.QUEUE_NAME,
new BasicProperties.Builder()
.deliveryMode(1)
.priority(k%10+1)
.build(),
message.getBytes("UTF-8")
);
}catch(Exception e){
System.out.println("New connection made"+e);
}
timeTaken = System.nanoTime() - unixTime;
totalTimeTaken += timeTaken;
if(timeTaken < minLatency){
minLatency = timeTaken;
}
if(timeTaken > maxLatency){
maxLatency = timeTaken;
}
totalRequests ++;
}
avgLatency = totalTimeTaken / totalRequests;
System.out.println("TotalReqs:" + totalRequests + "
TotalTime:" + ((float)totalTimeTaken/1000000.0) + "
MinLatency:" + ((float)minLatency/1000000.0) + " MaxLatency:"
+ ((float)maxLatency/1000000.0) + " AvgLatency:" +
((float)avgLatency/1000000.0));
try{
channel.close();
connection.close();
}catch(Exception e){
System.out.println("Close Exception");
}
}
}
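The post states that writes are done from a single thread but does not show how Write is launched; a minimal driver along these lines is assumed (the class name and message count are illustrative):
public class WriteMain {
    public static void main(String[] args) throws InterruptedException {
        // Publish Message_1 .. Message_100000 from a single writer thread.
        Thread writer = new Thread(new Write(1, 100000));
        writer.start();
        writer.join();
    }
}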

OutOfDirectMemoryError exception with Redisson

I'm trying to learn Redis through Redisson. Here is my code to insert into Redis using multiple threads.
package redisson;
import java.io.File;
import java.util.concurrent.atomic.AtomicInteger;
import org.redisson.Redisson;
import org.redisson.api.RBatch;
import org.redisson.api.RMap;
import org.redisson.api.RedissonClient;
import org.redisson.config.Config;
public class RedisTest extends Thread {
static RMap<String, String> dMap = null;
static RMap<String, String> wMap = null;
static RMap<String, String> mMap = null;
static RedissonClient redisson = null;
public static void main(String[] args) throws Exception {
Config config = Config.fromJSON(new File("C:\\Users\\neon-workspace\\RedisProject\\src\\main\\resources\\SingleNodeConfig.json"));
RedissonClient redisson = Redisson.create(config);
dMap = redisson.getMap("Daily");
wMap = redisson.getMap("Weekly");
mMap = redisson.getMap("Monthly");
connectHbse(dMap,wMap,mMap,redisson);
redisson.shutdown();
}
public static void connectHbse(RMap<String, String> dMap,RMap<String, String> wMap,RMap<String, String> mMap,RedissonClient redisson) {
int totalSize=500000;
int totalThread=2;
int chunkSize = totalSize/totalThread;
AtomicInteger total = new AtomicInteger(chunkSize);
RedisTest test1[] = new RedisTest[totalThread];
for (int i = 0; i < test1.length; i++) {
test1[i] = new RedisTest(total,dMap,wMap,mMap,redisson);
total.set(total.intValue()+chunkSize);
}
long t1 = System.currentTimeMillis();
for (int i = 0; i < test1.length; i++) {
test1[i].start();
}
try {
for (int i = 0; i < test1.length; i++) {
test1[i].join();
}
} catch (InterruptedException e) {
e.printStackTrace();
}
System.out.println("Final Total Time Taken ::>>>>>>>>>>>>>>>>> " + ((System.currentTimeMillis() - t1))+"ms");
}
private AtomicInteger total = null;
public RedisTest(AtomicInteger total,RMap<String, String> dMap,RMap<String, String> wMap,RMap<String, String> mMap,RedissonClient redisson) {
this.total = new AtomicInteger(total.intValue());
this.dMap = dMap;
this.wMap = wMap;
this.mMap = mMap;
this.redisson = redisson;
}
public static int getRandomInteger(int maximum, int minimum) {
return ((int) (Math.random() * (maximum - minimum))) + minimum;
}
public void run() {
try {
long t1 = System.currentTimeMillis();
dMap.clear();
wMap.clear();
mMap.clear();
RBatch batch = redisson.createBatch();
for (;total.decrementAndGet()>=0;) {
String dvalue = ""+getRandomInteger(100,200);
String wvalue = "" +getRandomInteger(200, 300);
String mvalue = "" +getRandomInteger(300, 400);
batch.getMap("Daily").fastPutAsync(""+total.get(), dvalue);
batch.getMap("Weekly").fastPutAsync(""+total.get(), wvalue);
batch.getMap("Monthly").fastPutAsync(""+total.get(), mvalue);
synchronized (total) {
if(total.get()%100==0)
System.out.println(total.get()+" Records in Seconds:::::" + ((System.currentTimeMillis() - t1))/1000);
}
}
batch.execute();
System.out.println("Time Taken for completion::::: " + ((System.currentTimeMillis() - t1))+" by thread:::::"+Thread.currentThread().getName());
System.out.println("Done !!!");
} catch (Exception e) {
System.out.println("Done !!!" + e.getMessage());
e.printStackTrace();
} finally {
}
}
}
This code works fine up to totalSize = 400000.
When I set totalSize = 500000, it throws the following exception.
io.netty.handler.codec.EncoderException: io.netty.util.internal.OutOfDirectMemoryError: failed to allocate 16777216 byte(s) of direct memory (used: 939524096, max: 954466304)
at io.netty.handler.codec.MessageToByteEncoder.write(MessageToByteEncoder.java:125)
at org.redisson.client.handler.CommandBatchEncoder.write(CommandBatchEncoder.java:45)
at io.netty.channel.AbstractChannelHandlerContext.invokeWrite0(AbstractChannelHandlerContext.java:738)
... 25 more
Caused by: io.netty.util.internal.OutOfDirectMemoryError: failed to allocate 16777216 byte(s) of direct memory (used: 939524096, max: 954466304)
at io.netty.util.internal.PlatformDependent.incrementMemoryCounter(PlatformDependent.java:627)
at io.netty.util.internal.PlatformDependent.allocateDirectNoCleaner(PlatformDependent.java:581)
at io.netty.buffer.PoolArena$DirectArena.allocateDirect(PoolArena.java:764)
at io.netty.buffer.PoolArena$DirectArena.newChunk(PoolArena.java:740)
at io.netty.buffer.PoolArena.allocateNormal(PoolArena.java:244)
at io.netty.buffer.PoolArena.allocate(PoolArena.java:226)
at io.netty.buffer.PoolArena.reallocate(PoolArena.java:397)
at io.netty.buffer.PooledByteBuf.capacity(PooledByteBuf.java:118)
at io.netty.buffer.AbstractByteBuf.ensureWritable0(AbstractByteBuf.java:285)
at io.netty.buffer.AbstractByteBuf.ensureWritable(AbstractByteBuf.java:265)
at io.netty.buffer.AbstractByteBuf.writeBytes(AbstractByteBuf.java:1046)
at io.netty.buffer.AbstractByteBuf.writeBytes(AbstractByteBuf.java:1054)
at org.redisson.client.handler.CommandEncoder.writeArgument(CommandEncoder.java:169)
at org.redisson.client.handler.CommandEncoder.encode(CommandEncoder.java:110)
at org.redisson.client.handler.CommandBatchEncoder.encode(CommandBatchEncoder.java:52)
at org.redisson.client.handler.CommandBatchEncoder.encode(CommandBatchEncoder.java:32)
at io.netty.handler.codec.MessageToByteEncoder.write(MessageToByteEncoder.java:107)
... 27 more
But I have about 7 GB of RAM free.
Can someone explain to me why I am getting this exception?
It turned out I should give more memory to my JVM instance using -Xmx, which solved the issue for me.
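For reference, both limits can be set explicitly when starting the JVM (a sketch; the values are illustrative):
java -Xmx4g -XX:MaxDirectMemorySize=2g RedisTest
Netty sizes its direct-buffer pool from the JVM's direct-memory limit, which typically defaults to the maximum heap size, so raising -Xmx also lifts the 954466304-byte ceiling visible in the stack trace above.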

Salesforce EMP connector stops receiving notifications after some time

I am performing a POC to check Streaming API stability. The POC is as follows:
Program 1: subscribe to a PushTopic created against the Account object.
Program 2: create, update, and delete a single record every 10 minutes.
Both programs were kept running for more than 12 hours (left overnight). Afterwards I verified whether all notifications had been received and found that after some time (in this case roughly 2 hours 45 minutes) no more notifications were received. I repeated this twice, and in both cases the client stopped getting notifications after a while.
Test code used
Streaming API client (using EMP connector)
public class SFPoc {
static Long count = 0L;
static Long Leadcount = 0L;
public static void main(String[] argv) throws Exception {
String userName = "<user_name>";
String password = "<pwd>";
String pushTopicName = "/topic/AccountPT";
String pushTopicNameLead = "/topic/Leadwhere";
long replayFrom = EmpConnector.REPLAY_FROM_EARLIEST;
String securityToken = "<token>";
BayeuxParameters custom = getBayeuxParamWithSpecifiedAPIVersion("37.0");
BayeuxParameters params = null;
try {
params = login(userName, password + securityToken, custom);
} catch (Exception e) {
e.printStackTrace();
}
Consumer<Map<String, Object>> consumer = event -> System.out.println(String.format("Received:\n%s ** Recieved at %s, event count total %s", event, LocalDateTime.now() , ++count));
Consumer<Map<String, Object>> consumerLead = event -> System.out.println(String.format("****** LEADS ***** Received:\n%s ** Recieved at %s, event count total %s", event, LocalDateTime.now() , ++Leadcount));
EmpConnector connector = new EmpConnector(params);
connector.start().get(10, TimeUnit.SECONDS);
TopicSubscription subscription = connector.subscribe(pushTopicName, replayFrom, consumer).get(10, TimeUnit.SECONDS);
TopicSubscription subscriptionLead = connector.subscribe(pushTopicNameLead, replayFrom, consumerLead).get(10, TimeUnit.SECONDS);
System.out.println(String.format("Subscribed: %s", subscription));
System.out.println(String.format("Subscribed: %s", subscriptionLead));
}
private static BayeuxParameters getBayeuxParamWithSpecifiedAPIVersion(String apiVersion) {
BayeuxParameters params = new BayeuxParameters() {
@Override
public String version() {
return apiVersion;
}
@Override
public String bearerToken() {
return null;
}
};
return params;
}
}
Code that periodically creates, updates, and deletes a record to generate events:
import com.sforce.soap.enterprise.*;
import com.sforce.soap.enterprise.Error;
import com.sforce.soap.enterprise.sobject.Account;
import com.sforce.soap.enterprise.sobject.Contact;
import com.sforce.ws.ConnectionException;
import com.sforce.ws.ConnectorConfig;
import java.time.LocalDateTime;
public class SFDCDataAdjustment {
static final String USERNAME = "<username>";
static final String PASSWORD = "<pwd&securitytoken>";
static EnterpriseConnection connection;
static Long count = 0L;
public static void main(String[] args) {
ConnectorConfig config = new ConnectorConfig();
config.setUsername(USERNAME);
config.setPassword(PASSWORD);
//config.setTraceMessage(true);
try {
connection = Connector.newConnection(config);
// display some current settings
System.out.println("Auth EndPoint: "+config.getAuthEndpoint());
System.out.println("Service EndPoint: "+config.getServiceEndpoint());
System.out.println("Username: "+config.getUsername());
System.out.println("SessionId: "+config.getSessionId());
// run the different examples
while (true) {
createAccounts();
updateAccounts();
deleteAccounts();
Thread.sleep(1 * 10 * 60 * 1000);
}
} catch (ConnectionException e1) {
e1.printStackTrace();
} catch (InterruptedException e) {
e.printStackTrace();
}
}
// queries and displays the 5 newest contacts
private static void queryContacts() {
System.out.println("Querying for the 5 newest Contacts...");
try {
// query for the 5 newest contacts
QueryResult queryResults = connection.query("SELECT Id, FirstName, LastName, Account.Name " +
"FROM Contact WHERE AccountId != NULL ORDER BY CreatedDate DESC LIMIT 5");
if (queryResults.getSize() > 0) {
for (int i=0;i<queryResults.getRecords().length;i++) {
// cast the SObject to a strongly-typed Contact
Contact c = (Contact)queryResults.getRecords()[i];
System.out.println("Id: " + c.getId() + " - Name: "+c.getFirstName()+" "+
c.getLastName()+" - Account: "+c.getAccount().getName());
}
}
} catch (Exception e) {
e.printStackTrace();
}
}
// create 5 test Accounts
private static void createAccounts() {
System.out.println("Creating a new test Account...");
Account[] records = new Account[1];
try {
// create 5 test accounts
for (int i=0;i<1;i++) {
Account a = new Account();
a.setName("OptyAccount "+i);
records[i] = a;
}
// create the records in Salesforce.com
SaveResult[] saveResults = connection.create(records);
// check the returned results for any errors
for (int i=0; i< saveResults.length; i++) {
if (saveResults[i].isSuccess()) {
System.out.println(i+". Successfully created record - Id: " + saveResults[i].getId() + "At " + LocalDateTime.now());
System.out.println("************Event Count************" + ++count);
} else {
Error[] errors = saveResults[i].getErrors();
for (int j=0; j< errors.length; j++) {
System.out.println("ERROR creating record: " + errors[j].getMessage());
}
}
}
} catch (Exception e) {
e.printStackTrace();
}
}
// updates the 5 newly created Accounts
private static void updateAccounts() {
System.out.println("Update a new test Accounts...");
Account[] records = new Account[1];
try {
QueryResult queryResults = connection.query("SELECT Id, Name FROM Account ORDER BY " +
"CreatedDate DESC LIMIT 1");
if (queryResults.getSize() > 0) {
for (int i=0;i<queryResults.getRecords().length;i++) {
// cast the SObject to a strongly-typed Account
Account a = (Account)queryResults.getRecords()[i];
System.out.println("Updating Id: " + a.getId() + " - Name: "+a.getName());
// modify the name of the Account
a.setName(a.getName()+" -- UPDATED");
records[i] = a;
}
}
// update the records in Salesforce.com
SaveResult[] saveResults = connection.update(records);
// check the returned results for any errors
for (int i=0; i< saveResults.length; i++) {
if (saveResults[i].isSuccess()) {
System.out.println(i+". Successfully updated record - Id: " + saveResults[i].getId() + "At " + LocalDateTime.now());
System.out.println("************Event Count************" + ++count);
} else {
Error[] errors = saveResults[i].getErrors();
for (int j=0; j< errors.length; j++) {
System.out.println("ERROR updating record: " + errors[j].getMessage());
}
}
}
} catch (Exception e) {
e.printStackTrace();
}
}
// delete the 5 newly created Account
private static void deleteAccounts() {
System.out.println("Deleting new test Accounts...");
String[] ids = new String[1];
try {
QueryResult queryResults = connection.query("SELECT Id, Name FROM Account ORDER BY " +
"CreatedDate DESC LIMIT 1");
if (queryResults.getSize() > 0) {
for (int i=0;i<queryResults.getRecords().length;i++) {
// cast the SObject to a strongly-typed Account
Account a = (Account)queryResults.getRecords()[i];
// add the Account Id to the array to be deleted
ids[i] = a.getId();
System.out.println("Deleting Id: " + a.getId() + " - Name: "+a.getName());
}
}
// delete the records in Salesforce.com by passing an array of Ids
DeleteResult[] deleteResults = connection.delete(ids);
// check the results for any errors
for (int i=0; i< deleteResults.length; i++) {
if (deleteResults[i].isSuccess()) {
System.out.println(i+". Successfully deleted record - Id: " + deleteResults[i].getId() + "At " + LocalDateTime.now());
System.out.println("************Event Count************" + ++count);
} else {
Error[] errors = deleteResults[i].getErrors();
for (int j=0; j< errors.length; j++) {
System.out.println("ERROR deleting record: " + errors[j].getMessage());
}
}
}
} catch (Exception e) {
e.printStackTrace();
}
}
}
Further update: I got the error mentioned below, after which notifications were no longer received.
2017-03-09T19:30:28.346 ERROR [com.salesforce.emp.connector.EmpConnector] - connection failure, reconnecting
org.cometd.common.TransportException: {httpCode=503}
at org.cometd.client.transport.LongPollingTransport$2.onComplete(LongPollingTransport.java:278)
at org.eclipse.jetty.client.ResponseNotifier.notifyComplete(ResponseNotifier.java:193)
After this, the reconnect happened and the handshake also happened, but the problem seems to be in resubscribe(): the EMP connector seems to be unable to resubscribe for some reason.
Note: I am using the "resubscribe-on-disconnect" branch of the EMP connector.
We have determined there was a bug on the server side in a 403 case. The Streaming API uses a session routing cookie and this cookie periodically expires. When it expires, the session is routed to another server, and this responds with a 403. In the current version, this 403 response does not include connect advice, and the client does not attempt to reconnect. This has been fixed and the fix is currently live. My understanding is that this should fix the reconnect problem exhibited by the clients.