ClassCastException using ByteBuffer as key for ChronicleMap - chronicle-map

I tried a very simple example where I write int -> ByteBuffer and reuse the buffer while reading values back from the map.
try (ChronicleMap<Integer, ByteBuffer> map = ChronicleMap
        .of(Integer.class, ByteBuffer.class)
        .entries(1)
        .averageValueSize(30)
        .createPersistedTo(new File("e:/temp/test.obj"))) {
    byte[] out = new byte[] { 1, 2, 3 };
    map.put(1, ByteBuffer.wrap(out)); // ClassCastException
    ByteBuffer buff = ByteBuffer.allocate(30);
    ByteBuffer in = map.getUsing(1, buff); // hoping that 'ByteBuffer in' is redundant
}
Unfortunately I get an unexpected ClassCastException, because Chronicle expected a byte[]:
Exception in thread "main" java.lang.ClassCastException: ChronicleMap{name=null, file=E:\temp\test.obj, identityHashCode=809762318}: Value must be a [B but was a class java.nio.HeapByteBuffer
at net.openhft.chronicle.map.VanillaChronicleMap.checkValue(VanillaChronicleMap.java:190)
at net.openhft.chronicle.map.VanillaChronicleMap.put(VanillaChronicleMap.java:707)
at test.SerTest.main(SerTest.java:28)
What am I doing wrong? Trying with byte[] worked, but getUsing() did not use my input array, which remained empty.
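For reference, this is roughly the byte[] variant mentioned above (a minimal sketch based on the description; the put was accepted, but as noted, getUsing() did not fill the supplied array):

try (ChronicleMap<Integer, byte[]> map = ChronicleMap
        .of(Integer.class, byte[].class)
        .entries(1)
        .averageValueSize(30)
        .createPersistedTo(new File("e:/temp/test.obj"))) {
    byte[] out = new byte[] { 1, 2, 3 };
    map.put(1, out);                     // accepted, no ClassCastException
    byte[] reuse = new byte[30];
    byte[] in = map.getUsing(1, reuse);  // as described above, 'reuse' stayed empty
}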

Related

random corrupted data in serial Arduino

I have an issue reading serial data from an Arduino Uno in a UWP C# app. Sometimes when I start the app I get corrupted data,
but in the Arduino serial monitor it always looks fine.
The corrupted data looks like this:
57
5
But it should be 557.
Arduino code:
String digx, digx2, digx3, digx1;

void setup() {
    delay(500);
    Serial.begin(9600);
}

void loop() {
    int sensorValue = analogRead(A1);
    sensorValue = map(sensorValue, 0, 1024, 0, 999);
    digx1 = String(sensorValue % 10);
    digx2 = String((sensorValue / 10) % 10);
    digx3 = String((sensorValue / 100) % 10);
    digx = (digx3 + digx2 + digx1);
    Serial.println(digx);
    Serial.flush();
    delay(100);
}
And the Windows Universal (UWP) code:
public sealed partial class MainPage : Page
{
    public DataReader dataReader;
    public SerialDevice serialPort;
    private string data;

    private async void InitializeConnection()
    {
        var aqs = SerialDevice.GetDeviceSelectorFromUsbVidPid(0x2341, 0x0043);
        var info = await DeviceInformation.FindAllAsync(aqs);
        serialPort = await SerialDevice.FromIdAsync(info[0].Id);
        Thread.Sleep(100);
        serialPort.DataBits = 8;
        serialPort.BaudRate = 9600;
        serialPort.Parity = SerialParity.None;
        serialPort.Handshake = SerialHandshake.None;
        serialPort.StopBits = SerialStopBitCount.One;
        serialPort.ReadTimeout = TimeSpan.FromMilliseconds(1000);
        serialPort.WriteTimeout = TimeSpan.FromMilliseconds(1000);
        Thread.Sleep(100);
        dataReader = new DataReader(serialPort.InputStream);
        while (true)
        {
            await ReadAsync();
        }
    }

    private async Task ReadAsync()
    {
        try
        {
            uint readBufferLength = 5;
            dataReader.InputStreamOptions = InputStreamOptions.Partial;
            //dataReader.UnicodeEncoding = UnicodeEncoding.Utf8;
            var loadAsyncTask = dataReader.LoadAsync(readBufferLength).AsTask();
            var ReadAsyncBytes = await loadAsyncTask;
            if (ReadAsyncBytes > 0)
            {
                data = dataReader.ReadString(ReadAsyncBytes);
                serialValue.Text = data;
            }
        }
        catch (Exception e)
        {
        }
    }
}
My idea was that if this happened, the program would skip that data. I tried different buffer lengths, but anything more or less than 5 always receives incorrect data.
Thanks for any help.
Two methods can fix your problem.
Method 1: Make your UART communication robust
Make the communication robust enough that there is no chance of an error; this is a tricky and time-consuming task when Windows software talks to an Arduino.
Still, there are some tricks that can help. I have tried them and some of them work, but please don't ask why they work.
Change your UART baud rate from 9600 to something else (mine works perfectly at 38400).
Replace Serial.begin(9600); with Serial.begin(38400, SERIAL_8N1), where in SERIAL_8N1 the 8 is the data bits, N is no parity, and 1 is the stop bit. You can also try different data bit, stop bit, and parity settings. Check here.
Don't send the data after converting it to a String with Serial.println(digx);. Send it as a byte array using Serial.write(digx, len);. Check here.
Method 2: Add a CRC packet
Append a CRC to the end of each data packet and then send it. When it reaches the Windows software, first fetch all the UART data, then calculate the CRC from that data and compare the calculated CRC with the received CRC. If the CRCs match, the data is correct; if not, ignore the data and wait for new data.
The CRC calculation is a straightforward, formula-based implementation; for a little head start, check this library by vinmenn.
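The question's code is Arduino and C#, but the receiver-side check described above looks the same in any language. Purely as an illustration, here is a minimal sketch in Java using java.util.zip.CRC32 (the specific CRC algorithm and packet framing are assumptions, not something from the question; use whatever the sender computes):

import java.nio.charset.StandardCharsets;
import java.util.zip.CRC32;

class CrcCheckSketch {
    // Recompute the CRC of the received payload and compare it with the CRC
    // the sender appended to the packet.
    static boolean isPacketValid(byte[] payload, long receivedCrc) {
        CRC32 crc = new CRC32();
        crc.update(payload);
        return crc.getValue() == receivedCrc;
    }

    public static void main(String[] args) {
        byte[] payload = "557".getBytes(StandardCharsets.US_ASCII);
        CRC32 sum = new CRC32();
        sum.update(payload);
        System.out.println(isPacketValid(payload, sum.getValue())); // true  -> keep the data
        System.out.println(isPacketValid(payload, 12345L));         // false -> ignore, wait for new data
    }
}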
I used two things to fix the problem:
1. In the Windows code, I added the SerialDevice.IsDataTerminalReadyEnabled = true property so that bytes are read in the correct order. From the Microsoft docs:
Here's a link!
2. As dharmik suggested, I used an array and sent it with Serial.write(array, arrayLength) in the Arduino code, and read it with dataReader.ReadBytes(array) and the right buffer size in the Windows app.
And the result is very stable, even with much lower delays.

Beam Java SDK with TFRecord and Compression GZIP

We're using Beam Java SDK (and Google Cloud Dataflow to run batch jobs) a lot, and we noticed something weird (possibly a bug?) when we tried to use TFRecordIO with Compression.GZIP. We were able to come up with some sample code that can reproduce the errors we face.
To be clear, we are using Beam Java SDK 2.4.
Suppose we have a PCollection<byte[]>, which could be, for instance, a PCollection of proto messages in byte[] format.
We usually write this to GCS (Google Cloud Storage) using Base64 encoding (newline delimited Strings) or using TFRecordIO (without compression). We have had no issue reading the data from GCS in this manner for a very long time (2.5+ years for the former and ~1.5 years for the latter).
Recently, we tried TFRecordIO with the Compression.GZIP option, and sometimes we get an exception because the data is seen as invalid while being read. The data itself (the gzip files) is not corrupted; we've tested various things and reached the following conclusion.
When a byte[] being compressed under TFRecordIO is above a certain threshold (at or above 8192 bytes, as far as I can tell), TFRecordIO.read().withCompression(Compression.GZIP) does not work.
Specifically, it will throw the following exception:
Exception in thread "main" java.lang.IllegalStateException: Invalid data
at org.apache.beam.sdk.repackaged.com.google.common.base.Preconditions.checkState(Preconditions.java:444)
at org.apache.beam.sdk.io.TFRecordIO$TFRecordCodec.read(TFRecordIO.java:642)
at org.apache.beam.sdk.io.TFRecordIO$TFRecordSource$TFRecordReader.readNextRecord(TFRecordIO.java:526)
at org.apache.beam.sdk.io.CompressedSource$CompressedReader.readNextRecord(CompressedSource.java:426)
at org.apache.beam.sdk.io.FileBasedSource$FileBasedReader.advanceImpl(FileBasedSource.java:473)
at org.apache.beam.sdk.io.FileBasedSource$FileBasedReader.startImpl(FileBasedSource.java:468)
at org.apache.beam.sdk.io.OffsetBasedSource$OffsetBasedReader.start(OffsetBasedSource.java:261)
at org.apache.beam.runners.direct.BoundedReadEvaluatorFactory$BoundedReadEvaluator.processElement(BoundedReadEvaluatorFactory.java:141)
at org.apache.beam.runners.direct.DirectTransformExecutor.processElements(DirectTransformExecutor.java:161)
at org.apache.beam.runners.direct.DirectTransformExecutor.run(DirectTransformExecutor.java:125)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
This can be reproduced easily, so you can refer to the code at the end. You will also see comments about the byte array length (as I tested with various sizes, I concluded that 8192 is the magic number).
So I'm wondering if this is a bug or a known issue -- I couldn't find anything close to this on Apache Beam's issue tracker here, but if there is another forum/site I should check, please let me know!
If this is indeed a bug, what would be the right channel to report this?
The following code can reproduce the error we have.
A successful run (with parameters 1, 39, 100) would show the following message at the end:
------------ counter metrics from CountDoFn
[counter] plain_base64_proto_array_len: 8126
[counter] plain_base64_proto_in: 1
[counter] plain_base64_proto_val_cnt: 39
[counter] tfrecord_gz_proto_array_len: 8126
[counter] tfrecord_gz_proto_in: 1
[counter] tfrecord_gz_proto_val_cnt: 39
[counter] tfrecord_uncomp_proto_array_len: 8126
[counter] tfrecord_uncomp_proto_in: 1
[counter] tfrecord_uncomp_proto_val_cnt: 39
With parameters (1, 40, 100), which push the byte array length over 8192, it throws the exception shown above.
You can tweak the parameters (inside CreateRandomProtoData DoFn) to see why the length of byte[] being gzipped matters.
It may also help to use the protoc-generated Java class for TestProto used in the main code. Here it is: gist link
References:
Main Code:
package exp.moloco.dataflow2.compression; // NOTE: Change appropriately.
import java.util.Arrays;
import java.util.List;
import java.util.Map;
import java.util.Map.Entry;
import java.util.Random;
import java.util.TreeMap;
import org.apache.beam.runners.direct.DirectRunner;
import org.apache.beam.sdk.Pipeline;
import org.apache.beam.sdk.PipelineResult;
import org.apache.beam.sdk.io.Compression;
import org.apache.beam.sdk.io.TFRecordIO;
import org.apache.beam.sdk.io.TextIO;
import org.apache.beam.sdk.metrics.Counter;
import org.apache.beam.sdk.metrics.MetricResult;
import org.apache.beam.sdk.metrics.Metrics;
import org.apache.beam.sdk.metrics.MetricsFilter;
import org.apache.beam.sdk.options.PipelineOptions;
import org.apache.beam.sdk.options.PipelineOptionsFactory;
import org.apache.beam.sdk.transforms.Create;
import org.apache.beam.sdk.transforms.DoFn;
import org.apache.beam.sdk.transforms.ParDo;
import org.apache.beam.sdk.values.PCollection;
import org.apache.commons.codec.binary.Base64;
import org.joda.time.DateTime;
import org.joda.time.DateTimeZone;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import com.google.protobuf.InvalidProtocolBufferException;
import com.moloco.dataflow.test.StackOverflow.TestProto;
import com.moloco.dataflow2.Main;
// @formatter:off
// This code uses TestProto (java class) that is generated by protoc.
// The message definition is as follows (in proto3, but it shouldn't matter):
// message TestProto {
// int64 count = 1;
// string name = 2;
// repeated string values = 3;
// }
// Note that this code does not depend on whether this proto is used,
// or any other byte[] is used (see CreateRandomData DoFn later which generates the data being used in the code).
// We tested both, but are presenting this as a concrete example of how (our) code in production can be affected.
// @formatter:on
public class CompressionTester {
private static final Logger LOG = LoggerFactory.getLogger(CompressionTester.class);
static final List<String> lines = Arrays.asList("some dummy string that will not used in this job.");
// Some GCS buckets where data will be written to.
// %s will be replaced by some timestamped String for easy debugging.
static final String PATH_TO_GCS_PLAIN_BASE64 = Main.SOME_BUCKET + "/comp-test/%s/output-plain-base64";
static final String PATH_TO_GCS_TFRECORD_UNCOMP = Main.SOME_BUCKET + "/comp-test/%s/output-tfrecord-uncompressed";
static final String PATH_TO_GCS_TFRECORD_GZ = Main.SOME_BUCKET + "/comp-test/%s/output-tfrecord-gzip";
// This DoFn reads byte[] which represents a proto message (TestProto).
// It simply counts the number of proto objects it processes
// as well as the number of Strings each proto object contains.
// When the pipeline terminates, the values of the Counters will be printed out.
static class CountDoFn extends DoFn<byte[], TestProto> {
private final Counter protoIn;
private final Counter protoValuesCnt;
private final Counter protoByteArrayLength;
public CountDoFn(String name) {
protoIn = Metrics.counter(this.getClass(), name + "_proto_in");
protoValuesCnt = Metrics.counter(this.getClass(), name + "_proto_val_cnt");
protoByteArrayLength = Metrics.counter(this.getClass(), name + "_proto_array_len");
}
@ProcessElement
public void processElement(ProcessContext c) throws InvalidProtocolBufferException {
protoIn.inc();
TestProto tp = TestProto.parseFrom(c.element());
protoValuesCnt.inc(tp.getValuesCount());
protoByteArrayLength.inc(c.element().length);
}
}
// This DoFn emits a number of TestProto objects as byte[].
// Input to this DoFn is ignored (not used).
// Each TestProto object contains three fields: count (int64), name (string), and values (repeated string).
// The three parameters in DoFn determines
// (1) the number of proto objects to be generated,
// (2) the number of (repeated) strings to be added to each proto object, and
// (3) the length of (each) string.
// TFRecord with Compression (when reading) fails when the parameters are 1, 40, 100, for instance.
// TFRecord with Compression (when reading) succeeds when the parameters are 1, 39, 100, for instance.
static class CreateRandomProtoData extends DoFn<String, byte[]> {
static final int NUM_PROTOS = 1; // Total number of TestProto objects to be emitted by this DoFn.
static final int NUM_STRINGS = 40; // Total number of strings in each TestProto object ('repeated string').
static final int STRING_LEN = 100; // Length of each string object.
// Returns a random string of length len.
// For debugging purposes, the string only contains upper-case English alphabets.
static String getRandomString(Random rd, int len) {
StringBuffer sb = new StringBuffer();
for (int i = 0; i < len; i++) {
sb.append((char) ('A' + rd.nextInt(26)));
}
return sb.toString();
}
// Returns a randomly generated TestProto object.
// Each string is generated randomly using getRandomString().
static TestProto getRandomProto(Random rd) {
TestProto.Builder tpBuilder = TestProto.newBuilder();
tpBuilder.setCount(rd.nextInt());
tpBuilder.setName(getRandomString(rd, STRING_LEN));
for (int i = 0; i < NUM_STRINGS; i++) {
tpBuilder.addValues(getRandomString(rd, STRING_LEN));
}
return tpBuilder.build();
}
// Emits TestProto objects as byte[].
@ProcessElement
public void processElement(ProcessContext c) {
// For debugging purposes, we set the seed here.
Random rd = new Random();
rd.setSeed(132475);
for (int n = 0; n < NUM_PROTOS; n++) {
byte[] data = getRandomProto(rd).toByteArray();
c.output(data);
// With parameters (1, 39, 100), the array length is 8126. It works fine.
// With parameters (1, 40, 100), the array length is 8329. It breaks TFRecord with GZIP.
System.out.println("\n--------------------------\n");
System.out.println("byte array length = " + data.length);
System.out.println("\n--------------------------\n");
}
}
}
public static void execute() {
PipelineOptions options = PipelineOptionsFactory.create();
options.setJobName("compression-tester");
options.setRunner(DirectRunner.class);
// For debugging purposes, write files under 'gcsSubDir' so we can easily distinguish.
final String gcsSubDir =
String.format("%s-%d", DateTime.now(DateTimeZone.UTC), DateTime.now(DateTimeZone.UTC).getMillis());
// Write PCollection<TestProto> in 3 different ways to GCS.
{
Pipeline pipeline = Pipeline.create(options);
// Create dummy data which is a PCollection of byte arrays (each array representing a proto message).
PCollection<byte[]> data = pipeline.apply(Create.of(lines)).apply(ParDo.of(new CreateRandomProtoData()));
// 1. Write as plain-text with base64 encoding.
data.apply(ParDo.of(new DoFn<byte[], String>() {
@ProcessElement
public void processElement(ProcessContext c) {
c.output(new String(Base64.encodeBase64(c.element())));
}
})).apply(TextIO.write().to(String.format(PATH_TO_GCS_PLAIN_BASE64, gcsSubDir)).withNumShards(1));
// 2. Write as TFRecord.
data.apply(TFRecordIO.write().to(String.format(PATH_TO_GCS_TFRECORD_UNCOMP, gcsSubDir)).withNumShards(1));
// 3. Write as TFRecord-gzip.
data.apply(TFRecordIO.write().withCompression(Compression.GZIP)
.to(String.format(PATH_TO_GCS_TFRECORD_GZ, gcsSubDir)).withNumShards(1));
pipeline.run().waitUntilFinish();
}
LOG.info("-------------------------------------------");
LOG.info(" READ TEST BEGINS ");
LOG.info("-------------------------------------------");
// Read PCollection<TestProto> in 3 different ways from GCS.
{
Pipeline pipeline = Pipeline.create(options);
// 1. Read as plain-text.
pipeline.apply(TextIO.read().from(String.format(PATH_TO_GCS_PLAIN_BASE64, gcsSubDir) + "*"))
.apply(ParDo.of(new DoFn<String, byte[]>() {
@ProcessElement
public void processElement(ProcessContext c) {
c.output(Base64.decodeBase64(c.element()));
}
})).apply("plain-base64", ParDo.of(new CountDoFn("plain_base64")));
// 2. Read as TFRecord -> byte array.
pipeline.apply(TFRecordIO.read().from(String.format(PATH_TO_GCS_TFRECORD_UNCOMP, gcsSubDir) + "*"))
.apply("tfrecord-uncomp", ParDo.of(new CountDoFn("tfrecord_uncomp")));
// 3. Read as TFRecord-gz -> byte array.
// This seems to fail when 'data size' becomes large.
pipeline
.apply(TFRecordIO.read().withCompression(Compression.GZIP)
.from(String.format(PATH_TO_GCS_TFRECORD_GZ, gcsSubDir) + "*"))
.apply("tfrecord_gz", ParDo.of(new CountDoFn("tfrecord_gz")));
// 4. Run pipeline.
PipelineResult res = pipeline.run();
res.waitUntilFinish();
// Check CountDoFn's metrics.
// The numbers should match.
Map<String, Long> counterValues = new TreeMap<String, Long>();
for (MetricResult<Long> counter : res.metrics().queryMetrics(MetricsFilter.builder().build()).counters()) {
counterValues.put(counter.name().name(), counter.committed());
}
StringBuffer sb = new StringBuffer();
sb.append("\n------------ counter metrics from CountDoFn\n");
for (Entry<String, Long> entry : counterValues.entrySet()) {
sb.append(String.format("[counter] %40s: %5d\n", entry.getKey(), entry.getValue()));
}
LOG.info(sb.toString());
}
}
}
This looks clearly like a bug in TFRecordIO: Channel.read() can read fewer bytes than the capacity of the input buffer, and 8192 seems to be the buffer size in GzipCompressorInputStream. I filed https://issues.apache.org/jira/browse/BEAM-5412.
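To illustrate the failure mode described in that answer (this is not the TFRecordIO code itself, just a sketch of the general pattern): a single read() on a channel may return fewer bytes than the buffer can hold, so a reader that needs an exact number of bytes has to loop.

import java.io.EOFException;
import java.io.IOException;
import java.nio.ByteBuffer;
import java.nio.channels.ReadableByteChannel;

class ReadFullySketch {
    // Keep reading until the buffer is full, instead of assuming one read() fills it.
    static void readFully(ReadableByteChannel in, ByteBuffer buf) throws IOException {
        while (buf.hasRemaining()) {
            if (in.read(buf) < 0) {
                throw new EOFException("stream ended before the buffer was filled");
            }
        }
    }
}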
It is a bug; please see https://issues.apache.org/jira/browse/BEAM-7695. I have fixed it.

redis exception in jedis java code

I have this Java code using Jedis:
int shb1 = jds.storeHypnoBeats(id1, arr1);
which calls this function:
int storeHypnoBeats(String id, byte[] data)
{
    db.lpush(id.getBytes(), data);
    return 1;
}
but when I run the Java code I get this exception:
Exception in thread "main" redis.clients.jedis.exceptions.JedisDataException: ERR Operation against a key holding the wrong kind of value
Here is the definition of arr1 and id1:
byte[] arr1 = new byte[]{1,2,3,4,5,6,7,8,9};
String id1 = "id1";
Everything is correct as far as I have checked, so why do I get that?
Thanks in advance.
id.getBytes() returns a byte array, but the signature of lpush is:
public Long lpush(String key, String... strings)
Therefore, the key has to be a String, not a byte array.
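Purely as an illustration of that suggestion, a minimal sketch that keeps both key and value as Strings (encoding the byte[] payload as Base64 is my own assumption, not something from the question):

import java.util.Base64;
import redis.clients.jedis.Jedis;

class HypnoStoreSketch {
    private final Jedis db = new Jedis("localhost");

    // Push under a String key; the byte[] payload is encoded as a String value.
    int storeHypnoBeats(String id, byte[] data) {
        db.lpush(id, Base64.getEncoder().encodeToString(data));
        return 1;
    }
}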

How to turn a polygon array into an IndexedTriangleArray in Java 3D

I have two polygons and want to turn them into a triangle mesh, using java3d.
But whatever I try results in some kind of error. What am I missing?
Here is some code I've tried:
final int n = points.length;
final int m = opoints.length;
final GeometryInfo gi = new GeometryInfo(GeometryInfo.POLYGON_ARRAY);
final Point3d[] npoints = Arrays.copyOf(points, n + m);
System.arraycopy(opoints, 0, npoints, n, m);
gi.setCoordinates(npoints);
gi.setStripCounts(new int[] { n, m });
gi.convertToIndexedTriangles();
final IndexedTriangleArray it = (IndexedTriangleArray) gi.getIndexedGeometryArray();
final Point3d[] newPoints = new Point3d[it.getVertexCount()];
it.getCoordinates(0, newPoints);
// Exception in thread "main" java.lang.NullPointerException
// at javax.media.j3d.GeometryArrayRetained.getCoordinates(GeometryArrayRetained.java:5425)
// at javax.media.j3d.GeometryArray.getCoordinates(GeometryArray.java:3699)
final int[] nidxs = new int[it.getValidIndexCount()];
it.getCoordinateIndices(0, nidxs);
I advise you to first use debug mode to see exactly how the GeometryInfo mutates after the call to convertToIndexedTriangles().
As Java 3D 1.6.0 is open source, look at its source code: https://github.com/hharrison/java3d-utils/blob/master/src/classes/share/com/sun/j3d/utils/geometry/GeometryInfo.java#L507
https://github.com/hharrison/java3d-utils/blob/master/src/classes/share/com/sun/j3d/utils/geometry/Triangulator.java#L625
You can call getCoordinateIndices() and getCoordinates() directly on the GeometryInfo object as far as I know.
I cannot guarantee that my suggestion works with an obsolete version of Java 3D.
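Purely as an illustration of that suggestion, a minimal sketch continuing from the question's gi variable (it assumes getCoordinates() returns the triangulated vertices as Point3f[]; check the GeometryInfo javadoc of your Java 3D version):

// Read the triangulated data back from the GeometryInfo itself,
// avoiding the IndexedTriangleArray getter that threw the NullPointerException.
gi.convertToIndexedTriangles();
final Point3f[] newPoints = gi.getCoordinates();   // triangulated vertex coordinates
final int[] nidxs = gi.getCoordinateIndices();     // three indices per triangle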

How to use boost serialization with CORBA (ACE/TAO)

I'm trying to serialize a data struct, send it over the network, and deserialize it on the other side. This works perfectly fine if both sides are consistently compiled as x64 or as x86, but it won't work between the two, even if I only serialize a single boolean.
Serialization code:
Myclass myc;
std::stringstream ss;
boost::archive::text_oarchive oa(ss);
oa << myc;
// get stringstream's length
ss.seekg(0, ios::end);
int len = ss.tellg();
ss.seekg(0, ios::beg);
// allocate CORBA type and copy the stringstream buffer over
const std::string copy = ss.str(); // copy since str() only generates a temporary object
const char* buffer = copy.c_str();
// ByteSequence is a sequence of octets which again is defined as a raw, platform-independent byte type by the CORBA standard
IDL::ByteSequence_var byteSeq(new IDL::ByteSequence());
byteSeq->length(len);
memcpy(byteSeq->get_buffer(), buffer, len);
return byteSeq._retn();
Deserialization code:
IDL::ByteSequence_var byteSeq;
byteSeq = _remoteObject->getMyClass();
// copy CORBA input sequence to local char buffer
int seqLen = byteSeq->length();
std::unique_ptr<char[]> buffer(new char[seqLen]);
memcpy(buffer.get(), byteSeq->get_buffer(), seqLen);
// put buffer into a stringstream
std::stringstream ss;
std::stringbuf* ssbuf = ss.rdbuf();
ssbuf->sputn(buffer.get(), seqLen);
// deserialize from stringstream
// throws exception 'Unsupported version' between x86 and x64
boost::archive::text_iarchive ia(ss);
MyClass result;
ia >> result;
As far as I can tell without testing, this is caused by different Boost versions on the two architectures; see also Boost serialization: archive "unsupported version" exception.