Does anyone know why Spring Integration (AMQP 1.3.5) requires the correlation-id to be a byte array? Rabbit's AMQP-Client 3.3.5 takes a String for the correlation-id in the AMQP.BasicProperties class. Doesn't Spring need to convert the byte array to this String at some point? We're finding that the correlation-id in the message Rabbit sends is still a byte array, and is never converted to a String. Any insight?
Good question, I have no insight; it was before my time on the project and it's a day one issue.
Spring AMQP converts the byte[] to a String in DefaultMessagePropertiesConverter (outbound), which is invoked by the RabbitTemplate and uses UTF-8 by default. The resulting String is added to the BasicProperties.
On the listener container (inbound) side, UTF-8 is used unconditionally.
The Rabbit client converts the String back to a byte[] when writing to the wire (in ValueWriter.writeShortStr()), unconditionally using the UTF-8 charset.
So, unless you change the charset in RabbitTemplate (which would be a bad idea), the round trip is a no-op (though the conversion is unnecessary if you already have a String).
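To make that concrete, here's a minimal sketch of the round trip, assuming the Spring AMQP 1.x API (where MessageProperties.setCorrelationId() takes a byte[]):

import java.nio.charset.StandardCharsets;

import org.springframework.amqp.core.Message;
import org.springframework.amqp.core.MessageProperties;

public class CorrelationIdRoundTrip {
    public static void main(String[] args) {
        // The Spring AMQP 1.x abstraction stores the correlation id as a byte[].
        MessageProperties props = new MessageProperties();
        props.setCorrelationId("order-42".getBytes(StandardCharsets.UTF_8));

        Message message = new Message("payload".getBytes(StandardCharsets.UTF_8), props);

        // On send, DefaultMessagePropertiesConverter decodes the byte[] back to
        // a String (UTF-8 by default) for BasicProperties; the rabbit client
        // then re-encodes that String as UTF-8 bytes on the wire.
    }
}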
I can only speculate that, since MessageProperties is an abstraction (and not tied to RabbitMQ), some other client had it as a byte[] when the abstraction was being designed.
Since we only have a RabbitMQ implementation, I wouldn't be averse to adding an optimization to the abstraction to avoid the unnecessary conversion.
Feel free to open an Improvement JIRA Issue and we'll take a look at it for the upcoming 1.5 release.
I have two different Java 8 projects that will live on different servers and which will both use Akka (specifically Akka Remoting) to talk to each other.
For instance, one app might send a Fizzbuzz message to the other app:
public class Fizzbuzz {
private int foo;
private String bar;
// Getters, setters & ctor omitted for brevity
}
I've never used Akka Remoting before. I assume I need to create a third project, a library/JAR holding the shared messages (such as Fizzbuzz and others), and then pull that library into both projects as a dependency.
Is it that simple? Are there any serialization (or other Akka and/or networking) considerations that affect the design of these "shared" messages? Thanks in advance!
A shared library is the way to go for sure, except there are indeed serialization concerns:
Akka-remoting docs:
When using remoting for actors you must ensure that the props and messages used for those actors are serializable. Failing to do so will cause the system to behave in an unintended way.
For more information please see Serialization.
Basically, you'll need to provide and configure the serialization for actor props and messages sent (including all the nested classes, of course). If I'm not mistaken, the default settings will get you up and running without any configuration on your side, provided that everything you send over the wire is Java-serializable.
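For example, a shared message class that works with the default Java serialization might look like this (a sketch only; the fields are just the ones from the question, and immutability is the usual convention for actor messages):

import java.io.Serializable;

public final class Fizzbuzz implements Serializable {
    private static final long serialVersionUID = 1L;

    private final int foo;
    private final String bar;

    public Fizzbuzz(int foo, String bar) {
        this.foo = foo;
        this.bar = bar;
    }

    public int getFoo() { return foo; }
    public String getBar() { return bar; }
}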
However, the default config uses standard Java serialization, which is known to be quite inefficient - so you might want to switch to protobuf, kryo, or maybe even JSON. In that case, it would make sense to provide the serialization implementation and bindings as a shared library - either a dedicated one or part of the "shared models" one you mentioned in the question - depending on whether you want to reuse it elsewhere and whether you mind serialization-related transitive dependencies popping up all over the place.
Finally, if you'll allow some personal opinion: I would suggest trying protobuf first - it's a binary format (read: efficient) and is widely supported (there are bindings for other languages). Kryo works well too (I have a few closed-source akka-cluster apps with kryo serialization in production), but it has a few quirks with regard to collection/map handling.
What are the delimiters for protobuf messages? I'm working with serialized messages, and I would like to know whether each message begins with $$__$$ and ends with the same marker.
For top-level messages (i.e. separate calls to serialize): there literally isn't one. Unless you add your own framing, messages actively bleed into each other, as the deserializer will (by default) just read to the end of the stream. So: if you have blindly concatenated multiple objects without your own framing protocol, you now have problems.
For the internals of messages, there are two ways of encoding sub-objects: length prefixes and groups. Groups are largely deprecated, and the encoding of sub-objects is ambiguous in that the same wire markers also describe strings, blobs (bytes), and "packed arrays". You probably don't want to try to handle that.
So: it sounds like you need to add your own framing protocol, in which case the answer is: whatever your framing protocol defines. Just remember that protobuf is binary, so you cannot rely on any byte sequence as a sentinel/terminator; you should ideally use a length-prefix approach instead, as in the sketch below.
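A hedged sketch of one such framing protocol in Java - a fixed 4-byte big-endian length prefix in front of each serialized message (the varint prefix in the next answer is the more common choice for protobuf):

import java.io.ByteArrayOutputStream;
import java.io.DataOutputStream;
import java.io.IOException;

public class LengthPrefixFraming {
    // Wraps an already-serialized payload (e.g. message.toByteArray()) in a
    // frame: 4-byte big-endian length, then the payload bytes themselves.
    public static byte[] frame(byte[] payload) throws IOException {
        ByteArrayOutputStream bytes = new ByteArrayOutputStream();
        DataOutputStream out = new DataOutputStream(bytes);
        out.writeInt(payload.length);
        out.write(payload);
        out.flush();
        return bytes.toByteArray();
    }
}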
(In addition to existing answers 1, 2)
A common framing method for protocol buffers is to prepend a varint holding the message length to the actual protobuf message.
The implementation is already part of the protobuf library, e.g.:
for java: MessageLite.writeDelimitedTo(), Parser.parseDelimitedFrom()
for C++: the methods in the header google/protobuf/util/delimited_message_util.h (e.g. SerializeDelimitedToFileDescriptor())
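A minimal sketch using the Java methods above; MyMessage is a hypothetical generated message class with a single id field:

import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;

public class DelimitedStreamExample {
    public static void main(String[] args) throws Exception {
        ByteArrayOutputStream out = new ByteArrayOutputStream();

        // writeDelimitedTo() prepends each message's length as a varint.
        MyMessage.newBuilder().setId(1).build().writeDelimitedTo(out);
        MyMessage.newBuilder().setId(2).build().writeDelimitedTo(out);

        // parseDelimitedFrom() reads the varint, then exactly that many
        // bytes, and returns null at end of stream.
        ByteArrayInputStream in = new ByteArrayInputStream(out.toByteArray());
        MyMessage msg;
        while ((msg = MyMessage.parseDelimitedFrom(in)) != null) {
            System.out.println(msg.getId());
        }
    }
}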
Good luck with your project!
EDIT> The official reference states that:
If you want to write multiple messages to a single file or stream, it is up to you to keep track of where one message ends and the next begins. The Protocol Buffer wire format is not self-delimiting, so protocol buffer parsers cannot determine where a message ends on their own. The easiest way to solve this problem is to write the size of each message before you write the message itself. When you read the messages back in, you read the size, then read the bytes into a separate buffer, then parse from that buffer. (If you want to avoid copying bytes to a separate buffer, check out the CodedInputStream class (in both C++ and Java) which can be told to limit reads to a certain number of bytes.)
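The CodedInputStream technique mentioned at the end of the quote looks roughly like this (again with a hypothetical generated class MyMessage):

import java.io.InputStream;

import com.google.protobuf.CodedInputStream;

public class CodedInputStreamExample {
    // Reads one length-prefixed message without copying into a separate
    // buffer: read the size, then cap the parser at exactly that many bytes.
    public static MyMessage readOne(InputStream input) throws Exception {
        CodedInputStream cis = CodedInputStream.newInstance(input);
        int size = cis.readRawVarint32();
        int oldLimit = cis.pushLimit(size);
        MyMessage message = MyMessage.parseFrom(cis);
        cis.popLimit(oldLimit);
        return message;
    }
}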
I was looking through the OpenJDK class file parser source and I came across something I've never heard of - Constant Pool Patching. What is this? I've read the JVM specification before but it didn't mention anything like this, and searching on Google failed to turn anything up.
To put it simply, the patching procedure replaces constant pool entries at class parse time in order to support JSR 292, which introduced invokedynamic. It is used to rewrite UTF-8, class, and value (float, int, etc.) entries when loading anonymous classes.
For a primer on how invokedynamic is implemented, see http://blog.headius.com/2008/09/first-taste-of-invokedynamic.html
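If you want to see where this surfaces in the API: on HotSpot/Java 8 the entry point is sun.misc.Unsafe.defineAnonymousClass(), whose third argument is the constant pool patch array. A hedged sketch (non-portable, removed in later JDKs; passing null patches simply loads the class unpatched):

import java.io.ByteArrayOutputStream;
import java.io.InputStream;
import java.lang.reflect.Field;

import sun.misc.Unsafe;

public class CpPatchSketch {
    public static void main(String[] args) throws Exception {
        // Grab the Unsafe singleton reflectively (it is not public API).
        Field f = Unsafe.class.getDeclaredField("theUnsafe");
        f.setAccessible(true);
        Unsafe unsafe = (Unsafe) f.get(null);

        // Read this class's own bytecode to use as the template class file.
        byte[] bytes;
        try (InputStream in = CpPatchSketch.class
                .getResourceAsStream("/CpPatchSketch.class")) {
            ByteArrayOutputStream buf = new ByteArrayOutputStream();
            int b;
            while ((b = in.read()) != -1) {
                buf.write(b);
            }
            bytes = buf.toByteArray();
        }

        // A non-null Object[] here would replace constant pool entries at the
        // matching indices; null loads the anonymous class unpatched.
        Class<?> anon = unsafe.defineAnonymousClass(CpPatchSketch.class, bytes, null);
        System.out.println("Loaded: " + anon.getName());
    }
}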
When using Scala RemoteActors I was getting a ClassNotFoundException that referred to scala.actors.remote.NetKernel. I copied someone else's example and added RemoteActor.classLoader = getClass.getClassLoader to my Actor and now everything works. Why is this necessary?
Remote actors use Java serialization to send messages back and forth. Inside the actors library you'll find a custom object input stream (https://lampsvn.epfl.ch/trac/scala/browser/scala/trunk/src/actors/scala/actors/remote/JavaSerializer.scala) that is used to serialize objects to/from a socket. There's also some routing code and other magic.
In any case, the ClassLoader used for remoting is rather important. I'd recommend reading up on Java RMI if you're unfamiliar with it. The ClassLoader that Scala picks when serializing/deserializing actors is the one located on RemoteActor, which defaults to null.
This means that, by default, you will be unhappy unless you specify a ClassLoader ;).
If you were in an environment that controls classloaders, such as OSGi, you'd want to make sure you set this value to a classloader that has access to all classes used by all serialized actors.
Hope that helps!
Are there any compatibility issues to take care of when serializing an object in .NET and then deserializing it in Java?
I am facing problems deserializing an object in Java that was serialized in .NET.
Here is the detailed problem statement:
On the .NET platform I have a cookie:
1. The cookie is serialized
2. Then it is encrypted using the Triple DES algorithm
3. It is sent across to the Java application
On the Java platform:
1. Decrypt the cookie using Triple DES, which yields some bytes
2. Deserialize the bytes using something like
new ObjectInputStream(new ByteArrayInputStream(decryptedCookie)).readObject();
The exception stack trace I get is:
java.io.StreamCorruptedException: invalid stream header: 2F774555
at java.io.ObjectInputStream.readStreamHeader(Unknown Source)
at java.io.ObjectInputStream.<init>(Unknown Source)
The WOX serializer provides interoperable serialization for .Net and Java.
If you serialize to XML then you shouldn't face any problems deserializing in Java, since at worst you have to write your own bit of code to reconstruct the objects.
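For instance, a hedged sketch of the Java side using JAXB - the element and field names here are hypothetical and have to match whatever the .NET XmlSerializer actually emits:

import java.io.StringReader;

import javax.xml.bind.JAXBContext;
import javax.xml.bind.annotation.XmlRootElement;

public class CookieXmlExample {

    // Hypothetical shape of the cookie; adjust to the real .NET XML output.
    @XmlRootElement(name = "Cookie")
    public static class Cookie {
        public String name;
        public String value;
    }

    public static void main(String[] args) throws Exception {
        String xml = "<Cookie><name>session</name><value>abc123</value></Cookie>";
        Cookie cookie = (Cookie) JAXBContext.newInstance(Cookie.class)
                .createUnmarshaller()
                .unmarshal(new StringReader(xml));
        System.out.println(cookie.name + " = " + cookie.value);
    }
}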
The way Java and .NET serialize to binary differs. Java's ObjectInputStream, for instance, expects the stream to start with the magic number 0xACED, and that check is exactly what throws the StreamCorruptedException above.
How would one platform know about the other's object model, e.g. that .NET has Dictionaries where Java has Maps? (Plus, the binary representation of a string might differ.)
You have to use some data format that both sides understand, plus code to do the object mapping. Hence the answers above mentioning XML and WOX. I have worked with internal company products as well.