ImmutableCollection declarations for GWT-RPC serialization - serialization

My understanding is that DTOs to be serialized for GWT RPC ought to declare their fields with the most concrete implementation type possible, for performance reasons. For example, one should favor ArrayList over List or Collection, in defiance of the advice we normally receive to the contrary (e.g., Effective Java, Item 52).
With the JDK collections, this is no problem—most of the time, a Map is a HashMap, a Set is a HashSet and a List is an ArrayList. However, I am using Guava's Immutable* collections (e.g., ImmutableList), where I really don't know which implementation I'll end up getting. Do I need to just suck it up and let GWT emulate all of them, or is there any way to do damage control here?

Right. Just use the most specific type that is part of the API.
Subtypes that are annotated with @GwtCompatible(serializable = true) are serializable over GWT RPC unless otherwise specified (by another @GwtCompatible(serializable = false)). You can safely use the Immutable* types in GWT RPC interfaces.
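For illustration, here is a minimal sketch of such a DTO (the class and field names are made up, and it assumes Guava's GWT-compatible artifact is on the classpath and the corresponding GWT module is inherited):

import com.google.common.collect.ImmutableList;
import java.io.Serializable;

// Hypothetical DTO: declaring the field as ImmutableList (the public API type)
// is fine for GWT RPC, since ImmutableList is @GwtCompatible(serializable = true).
public class PersonDto implements Serializable {

    private ImmutableList<String> nicknames;

    // GWT RPC needs a no-argument constructor for deserialization.
    PersonDto() {
        this(ImmutableList.<String>of());
    }

    public PersonDto(ImmutableList<String> nicknames) {
        this.nicknames = nicknames;
    }

    public ImmutableList<String> getNicknames() {
        return nicknames;
    }
}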

Related

Why do we use only [List, Map, Set] collections in Kotlin?

I've been learning Kotlin and I've just come across its Collections API. Before Kotlin I had been learning Java, and I know that in Java there are a lot of different implementation types in the Collections API. For example, instead of the general List, Map, Queue, Set we use ArrayList, HashMap, LinkedList, LinkedHashMap, etc. In Kotlin, though, we mostly use the general types like Map, List, Set, but we can also use HashMap and the like. So, what's going on there? Can you help me figure it out?
While Kotlin's original and primary target is the JVM, there is a huge push by JetBrains to make it multiplatform, and support JS and Native as well.
If you're using Kotlin on the JVM, the implementations of any collections you're using will still be the original JDK classes, e.g. java.util.ArrayList or java.util.HashSet. These are not reimplemented by the Kotlin standard library, which has some great benefits:
These are well-tested implementations, which are maintained anyway.
Using the exact same classes makes interop with Java a breeze, as you can pass them back and forth without having to perform conversions or mapping of any kind.
What Kotlin does do is introduce its own collection semantics over these existing implementations, in the form of the standard library interfaces such as List, Map, MutableList, MutableMap and so on. A small bit of compiler magic makes it so that these interfaces are implemented by the existing JDK classes as well.
If you don't need a specific implementation of a certain type of collection, you can use your collections via these interfaces plus the respective factory methods of the standard library (listOf, mapOf, mutableListOf, mutableMapOf, etc.). This keeps your code more generic, and independent of the concrete underlying implementations. You don't know what specific class the standard library mutableListOf function will create for you, only that it will be an object that satisfies the contract of the MutableList interface.
You should basically use these interfaces by default in your code, especially in public API:
In the case of function parameters, this lets clients provide the function with whatever implementation of the collection they wish to give you. If your function can operate on anything that's a List, you should ask for just that interface - no reason to require an ArrayList or LinkedList specifically.
If this is a return type, using these interfaces lets you change the specific implementation that you create internally in the future, without breaking client code. You can promise to just return a MutableList of things, and what implementation backs that list is not exposed to your clients.
If you look at all the collection handling functions of the Kotlin standard library, you'll see that on the surface, they almost exclusively operate on these interfaces. If you dig down deep enough, you'll find ArrayList instances being created, but this is not exposed to the client code, as it doesn't have to care about the concrete implementation most of the time.
Going back to the multiplatform point once more, if you write your code in a way such that it only relies on Kotlin standard library defined types, that code will be easily usable for non-JVM targets. If you reference kotlin.MutableList in your imports, that can immediately compile to JS code, because there's a Kotlin standard library implementation of that interface on each platform. Whether that maps to an existing class directly, wraps an existing class somehow, or is implemented for Kotlin from scratch, again, doesn't have to concern you. But if you refer to java.util.TreeSet in your code, that won't fly for the JS target, as the Java platform classes are not available there.
Can you still use classes such as java.util.ArrayList directly? Of course.
If you don't see your code going multiplatform at some point, using Java collections directly is perfectly okay.
If you need a specific implementation for a List or a Set for performance reasons, sometimes you'll have to use the Java classes directly.
Interestingly, in recent releases of Kotlin, these specific types of implementations (such as an array based list) are wrapped under standard library typealiases too, so that they're platform independent by default: see kotlin.collections.ArrayList or kotlin.collections.HashSet for examples of this. These Kotlin-defined types will usually show up first in IntelliJ completion, so you'll find yourself being pushed towards using them wherever possible. Same thing goes for most exceptions, e.g. IllegalArgumentException.
TL;DR: You can use either Kotlin collection types or Java types in Kotlin, but you should probably prefer the former whenever you can.

Why did they integrate the Stream API into the collection framework in Java 8?

When learning about design patterns I heard that delegation is better than inheritance in most cases.
Thus I wonder why the Java 8 team made the decision to integrate the Stream API into the existing Collections framework instead of using delegation (constructing a Stream based on the given Collection)?
Especially since, by doing so, they had to introduce the new concept of default method implementations in interfaces, which in turn blurs the semantics of interfaces vs. abstract classes?
delegation is better than inheritance
I think you have written something slightly wrong in your question. Actually, the usual form of that principle is "composition over inheritance".
Indeed, Collection#stream and Collection#spliterator are designed as factory methods (the Factory Method pattern), which means subclasses can provide their own Stream/Spliterator instances to enable extra features and improve performance in Java.
If there were no such factory methods in Collection, you would have to fall back to procedural code and check the actual type at runtime to create the appropriate Stream.
You are only looking at the default methods declared on Collection; have you looked at the overriding methods on the subclasses? For example:
Collections#nCopies uses a CopiesList, which creates its Stream<E> via IntStream, as below, to improve performance:
public Stream<E> stream() {
    return IntStream.range(0, n).mapToObj(i -> element);
}
ArrayList#spliterator uses an ArrayListSpliterator, as below, to create a fail-fast spliterator:
public Spliterator<E> spliterator() {
    return new ArrayListSpliterator<>(this, 0, -1, 0);
}
First of all, the support for default and static methods in interfaces was not added only to support the stream() method in the Collection interface.
It is a natural desire to provide utility methods and useful defaults when defining an interface, leading to the pattern of having two different classes to host them: the interface and an associated utility class. In the case of useful defaults, implementors were still required to write delegation methods to use them. Of course, this would become even worse with functional interfaces restricted to a single method. See also this answer.
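As a rough illustration of that pattern (the names here are made up): before default methods, shared behaviour had to live in a separate utility class, while with default methods it can sit on the interface itself and still be overridable.

// Pre-Java-8 style: the interface only declares the contract...
interface Shape {
    double area();
}

// ...and a separate utility class hosts the shared helper logic.
final class Shapes {
    private Shapes() {}

    static boolean isLargerThan(Shape shape, double threshold) {
        return shape.area() > threshold;
    }
}

// With Java 8 default methods, the helper can live on the interface itself,
// and implementations may still override it if they have a cheaper way.
interface ShapeWithDefaults {
    double area();

    default boolean isLargerThan(double threshold) {
        return area() > threshold;
    }
}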
So when you consider the language feature of default methods as already existing, the choice of using it in the Collection API for convenience is not so big. It still is delegation, just with a convenient entry point. It’s not different to, e.g. String.format(…) delegating to the java.util.Formatter facility or String.replaceAll delegating to java.util.regex.
But the Stream API can't run on its own without any active support from the Collection API side. The minimum it would require is an Iterator from the Collection. The Iterator has been superseded by Spliterator for the Stream API, but whether a Collection provides an Iterator or a Spliterator is not a fundamental design change. It's something that lies in the collection's responsibility. There is a default method creating a Spliterator from an Iterator to allow every existing collection to work with the new framework out of the box, but each Collection implementation has the opportunity to override that method, providing a better suited, potentially more efficient Spliterator implementation. Most of the JRE's standard collection implementations and nowadays a lot of the 3rd-party collections use that opportunity.
Being overridable is a property of the default methods that you can’t emulate with static methods in another class or package. So it actually works the other way round. E.g. you can still invoke the old Collections.sort methods, but they will delegate to the new List.sort default method, which can be overridden with a more efficient implementation.
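As a small, hypothetical sketch of that point (the class name is made up): a List implementation can override the sort default method, and even legacy callers going through Collections.sort pick up the optimized version, because Collections.sort(list) simply delegates to list.sort(null).

import java.util.ArrayList;
import java.util.Comparator;

// Hypothetical list that remembers whether it is already sorted, so sort()
// can skip the work when nothing has changed (simplified: a real
// implementation would also have to track which comparator was used).
class MostlySortedList<E> extends ArrayList<E> {
    private boolean knownSorted = false;

    @Override
    public void sort(Comparator<? super E> c) {
        if (knownSorted) {
            return;              // already in order, nothing to do
        }
        super.sort(c);           // fall back to ArrayList's implementation
        knownSorted = true;
    }

    @Override
    public boolean add(E e) {
        knownSorted = false;     // adding may break the ordering
        return super.add(e);
    }
}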
Just as you normally don't deal with an Iterator manually when using for(Variable v: collection) …, you don't deal with a Spliterator manually when using collection.stream(). … .
Providing a framework that is intended to be used by millions of users is always about balancing ease of use and following strict guidelines.
And the very first point to understand: "favor composition over inheritance" is a good practice. But that doesn't mean you should always prefer composition.
The new streams architecture is intended as a new core building block of the Java APIs. In that sense: it is a natural choice to allow turning collections into streams via a member function.
Beyond that, it seems that the people behind the Java language favor fluent interfaces. In that sense they probably prefer
list.stream(). stream operations
over
Streams.stream(list). stream operations
And of course: when you think about the one suggestion I put up here for how to alternatively implement a conversion from list to stream - that would only work for the known collection classes. So another concept would be required, like
YourSpecialStreamCreator.stream(yourSpecialCollection)
whereas the interface-based stream() method allows you to simply @Override the default implementation when your special collection implementation has a need to do so. And you can still use all the other stream-related things in that interface!
Beyond that: the idea of default methods in interfaces actually helps with many problems. Yes, they were needed to add this new functionality to interfaces - but heck: adding new methods to interfaces is something that most people would like to do at some point. So having a clear, defined way baked into the core language is just extremely helpful.
My personal two cents here: adding V2, V3, ... interfaces like:
interface MyFunctionality { ... }

interface MyFunctionalityV2 extends MyFunctionality {
    void someNewThing();
}
is just ugly and painful.
Why did they integrate the Stream API into the collection framework in Java 8?
Because before Java 8, the Java collections lagged well behind those of competing languages such as C#, Scala, Ruby, etc., which provide out of the box, and most of the time in a more concise way, a rich set of functional methods for collections as well as pipeline processing of collections.
Thus I wonder why the Java 8 team made the decision to integrate the Stream
API into the existing Collections framework instead of using
delegation (constructing a Stream based on the given Collection)?
It would have made the Stream API less fluent, with more boilerplate code, and as a consequence it would not have driven the evolution of the Java collections themselves.
Imagine writing this code with a wrapper each time:
List<String> strings = new ArrayList<>();
...
Streams.stream(strings).filter(....);
instead of :
List<String> strings = new ArrayList<>();
...
strings.stream().filter(....);
Imagine it with a chained stream:
List<Integer> listOne = new ArrayList<>();
List<Integer> listTwo = new ArrayList<>();
...
List<int[]> values = Streams.stream(listOne)
        .flatMap(i -> Streams.stream(listTwo)
                .map(j -> new int[] { i, j }))
        .collect(Collectors.toList());
instead of
List<Integer> listOne = new ArrayList<>();
List<Integer> listTwo = new ArrayList<>();
...
List<int[]> values = listOne.stream()
        .flatMap(i -> listTwo.stream()
                .map(j -> new int[] { i, j }))
        .collect(Collectors.toList());
Especially since, by doing so, they had to introduce the new concept of
default method implementations in interfaces, which in turn blurs the
semantics of interfaces vs. abstract classes?
It may be disconcerting at first, but ultimately it provides a powerful way to evolve an API without breaking client code that uses the old API.
Default methods should not be considered a way to create abstract classes with common processing for all subclasses, but a way to evolve an API without breaking compatibility with clients of older versions of the API.
Extract from the documentation on default methods:
Default methods enable you to add new functionality to the interfaces
of your libraries and ensure binary compatibility with code written
for older versions of those interfaces.
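For example, a hedged sketch (the Repository interface here is hypothetical, not a real library API) of how a default method lets an interface grow without breaking existing implementations:

interface Repository<T> {
    void save(T entity);

    // Added in a later version of the API: classes that implemented the
    // original interface still compile and run, because they inherit this
    // default behaviour instead of being forced to implement it themselves.
    default void saveAll(Iterable<? extends T> entities) {
        for (T entity : entities) {
            save(entity);
        }
    }
}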
You could view it this way: if lambda expressions had been introduced back in JDK 1.2, the Collection API would have been designed and implemented much like the Stream library; in other words, streams are an enhanced view of collections.
Most existing code uses the Collection APIs as its data source. Therefore nobody would use streams if a stream could not be created from the Collection APIs. For that reason they had to modify the existing interfaces. 'default' methods avoided breaking all the existing implementations and allowed the interfaces to be enhanced without any issues.
Moreover, 'default methods' also provide a way to enhance your interfaces in the future. There are still big differences between default methods in interfaces and abstract classes.
There are some methods in some interfaces whose implementation will be the same in most of the implementing classes. In that case you can implement them as default methods where possible. Once you start using default methods you will love them :)
Last but not least: "A language should always evolve; it should not stay stuck on what you designed 20 years ago." :)

Why are the message classes generated by the protocol buffer compiler all immutable?

The message classes generated by the protocol buffer compiler are immutable. The message classes contain appropriate getter methods but no setter methods. This constraint does not apply to other serialization technologies like Java binary serialization, XML, JSON, etc.
As per my understanding, immutability is of use while doing concurrent programming. Immutability could be of help in achieving thread-safety. But, I assume, that is not the reason in case of protocol buffer.
What could be the reason of making message classes immutable?
After reading the protocol buffer documentation, it seems the above only applies to Java (at least) and not to C++ and the other supported platforms/languages.
Note: This question is only to satisfy my curiosity.
Thanks.
The Google implementation does indeed use a builder pattern - i.e. a mutable builder (not really usable as an entity in its own right) which creates an immutable object instance. This is not a requirement - indeed, there are alternative implementations for several platforms that do not use this design pattern. But frankly, it simply isn't an issue, because if there is any friction (and what you describe is friction) then you should simply avoid using your DTO types (i.e. the objects used for serialization) as your primary domain entity types. As soon as you do that, it becomes a non-issue: you write your own domain entity types with whatever pattern you like (including any domain logic etc.), and then map to/from the DTO types as and when you need to; the choice of design pattern used by the DTO tier is then a mere uninteresting implementation detail.
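To make the pattern concrete, here is a simplified, hand-written sketch of the shape the generated Java code follows (PersonMessage and its fields are made up; real generated classes contain much more machinery): a mutable nested Builder produces an immutable message exposing getters only.

public final class PersonMessage {
    private final String name;
    private final int id;

    private PersonMessage(Builder builder) {
        this.name = builder.name;
        this.id = builder.id;
    }

    // Only getters on the message itself; no setters, so instances are immutable.
    public String getName() { return name; }
    public int getId() { return id; }

    public static Builder newBuilder() { return new Builder(); }

    // The builder is the mutable half of the pattern.
    public static final class Builder {
        private String name = "";
        private int id;

        public Builder setName(String name) { this.name = name; return this; }
        public Builder setId(int id) { this.id = id; return this; }

        public PersonMessage build() { return new PersonMessage(this); }
    }
}

// Usage: PersonMessage p = PersonMessage.newBuilder().setName("Ada").setId(42).build();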
But again: for your chosen platform, take a look to see if any alternative implementations might suit your requirements more closely.

Which types cannot be used for WCF?

I know for a fact that Type cannot be used when passing in to a WCF service. Does anyone have a complete list?
I'm not sure anyone bothered compiling a list, and I'm not sure there is any use in compiling one. Instead, there are requirements that a type must meet in order to be used in WCF contracts. Mainly, it has to be serializable.
I think it is the programmer's responsibility to verify that the types used in contracts are all serializable, and to make sure all custom types are serializing and deserializing properly.
Anything that you want to use in a WCF service needs to be serializable first, and secondly, it needs to be able to be expressed using XML schema. Also, WCF is interoperable by nature, so anything that's too specific to .NET (like exceptions, the .NET Type and so forth) should be avoided.
Anything non-serializable is out from the get-go, and anything that cannot be expressed in XML schema can't be used either. This rules out interfaces - you can only use concrete classes - and it also excludes generic types, since XML schema doesn't know how to handle generic types.
You're quite okay as long as you stick to the basic types (int, string, datetime etc.) and anything that is directly composed from those types.
Anything not marked Serializable, for starters.

Serialization of Objects

How does serialization of objects work? How does an object get deserialized, and an instance created from serialized data, without a call to any constructor?
I've kept this answer language agnostic since a language wasn't given.
When the object is serialized, all the information required to rebuild it is encoded in a way that can be retrieved later. This typically includes the type of the object, as well as the values of all the instance variables.
When the object is deserialized, an area in memory of the correct size is allocated and is populated using the serialized information such that the new object is identical to the serialized one.
The running program can then refer to this new object in memory without having to actually call the constructor.
There are lots of little details which this doesn't explain, but this is the general idea of serialization/deserialization.
Are you talking about Java? If so, serialization is an extralingual object creation mechanism. It's a backdoor that uses native code to create the object without calling any constructors. Therefore, when designing a class for serializability, you need to make sure that a class created through deserialization maintains the same invariants (key fields being initialized) as you would through the constructor path. A third way to create objects in Java is through cloning, and similar issues apply.
Cloning and serialization don't interact well with the use of final fields if you need to set the value of that field to something different than what is returned by clone or the deserialization process.
Josh Bloch's "Effective Java" has some chapters that explain these issues in more depth.
(this answer may apply to other languages too, but I've only used serialization in Java)
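Since this answer is about Java, here is a small runnable sketch (the Counter class is made up) showing that deserialization restores field values without invoking the class's constructor:

import java.io.*;

class Counter implements Serializable {
    private static final long serialVersionUID = 1L;
    int value;

    Counter(int value) {
        this.value = value;
        System.out.println("constructor called");
    }
}

public class SerializationDemo {
    public static void main(String[] args) throws Exception {
        // Serialize: "constructor called" is printed exactly once, here.
        Counter original = new Counter(42);

        ByteArrayOutputStream bytes = new ByteArrayOutputStream();
        try (ObjectOutputStream out = new ObjectOutputStream(bytes)) {
            out.writeObject(original);
        }

        // Deserialize: the value is restored, but nothing is printed, because
        // Counter's constructor is bypassed (only the no-arg constructor of
        // the first non-serializable superclass, Object, runs).
        try (ObjectInputStream in = new ObjectInputStream(
                new ByteArrayInputStream(bytes.toByteArray()))) {
            Counter copy = (Counter) in.readObject();
            System.out.println("restored value = " + copy.value);
        }
    }
}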
Regarding .NET: this isn't a definitive or textbook answer, and I might be all-out wrong...
.NET serialization needs to be separated out into binary vs. the others (typically XML or an XML derivative). Binary serialization is mostly a black box to me, but it allows objects to be serialized and restored in their current state. XML serialization typically only serializes the public fields/properties of an object, unless overridden by adding a custom ISerializable implementation.
In the case of XML serialization I believe .NET uses Reflection to determine which fields and properties get converted to their equivalent Elements. Adding an [XMLSerializable] attribute will implement a default behavior which can be adjusted by applying other attributes at the field level (such as [XMLAttribute]).
The metadata (which Reflection depends on) stores all the object members as well as their attributes and addresses, which allows the serializer to determine how it should build the output.