Why does collection+json use anonymous objects instead of key value pairs - schema

I'm trying to find a data schema that can be used for different scenarios, and a promising format I have found so far is Collection+JSON (http://amundsen.com/media-types/collection/).
So far it has a lot of the functionality I need and is very flexible. However, I don't get why it uses anonymous objects (example: {"name" : "full-name", "value" : "J. Doe", "prompt" : "Full Name"}) instead of simple key-value pairs (example: "full-name": "J. Doe").
I see how you can transfer more information this way, such as the prompt, but parsing is much slower, and it is harder to write a client, since it has to find fields by searching through an array. When binding the data to a specific view, it has to be known which fields exist, so the anonymous objects have to be converted back into a key-value map anyway.
So is there a real advantage to using these anonymous objects instead of a key-value map?

I think the main reason is that a consumer client does not need to know the format of the data in advance.
As it is proposed now in Collection+JSON, you know that in the data array you will find field objects simply by iterating over it: 'name' is always the identifying name of the field, 'value' is its value, and so on. Your client can be agnostic about how many fields there are and what they are called:
{
  "name" : "full-name",
  "value" : "J. Doe",
  "prompt" : "Full Name"
},
{
  "name" : "age",
  "value" : "42",
  "prompt" : "Age"
}
if you had instead
{
  "full-name" : "J. Doe",
  "age" : "42"
}
the client needs prior knowledge of your representation: it has to expect and understand 'full-name', 'age', and every other application-specific field.
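To illustrate, here is a minimal sketch of such an agnostic client in TypeScript (the interface and helper names are my own; only the data shape comes from the media type):
interface CjData {
    name: string;
    value?: string;
    prompt?: string;
}

// Render any item without knowing its concrete field names.
function renderItem(data: CjData[]): string {
    return data.map(d => `${d.prompt ?? d.name}: ${d.value ?? ""}`).join("\n");
}

// Convert back to a key-value map when a view needs direct field access.
function toKeyValueMap(data: CjData[]): Record<string, string> {
    const map: Record<string, string> = {};
    for (const d of data) {
        map[d.name] = d.value ?? "";
    }
    return map;
}
renderItem works for any item the server sends, while toKeyValueMap is only needed at the edge where a concrete view binds to concrete fields.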

I wrote this question and then forgot about it, but I found the answer I was looking for here:
https://groups.google.com/forum/#!searchin/collectionjson/key/collectionjson/_xaXs2Q7N_0/GQkg2mvPjqMJ
From Mike Amundsen, the creator of Collection+JSON:
I understand your desire to add object serialization patterns to CJ.
However, one of the primary goals of my CJ design is to not support
object serialization. I know that serialization is an often-requested
feature for Web messages. It's a great way to optimize both the code
and the programmer experience. But it's just not what I am aiming for
in this particular design.
I think the extension Kevin ref'd is a good option. Not sure if anyone
is really using it, tho. If you'd like to design one (based on your
"body" idea), that's cool. If you'd like to start w/ a gist instead of
doing the whole "pull" thing, that's cool. Post it and let's talk
about it.
On another note, I've written a very basic client-side parser for CJ
(it's in the basic folder of the repo) to show how to hide the
"costly" parts of navigating CJ's state model cleanly. I actually
have a work item to create a client-side lib that can convert the
state representation into an arbitrary local object model, but haven't
had time to do the work yet. Maybe this is something you'd like help
me with?
At a deeper (and possibly more boring) level, this state-model
approach is part of a bigger pattern I have in mind to treat messages
as "type-less" containers and to allow clients and servers to utilize
whatever local object models they prefer - without the need for
cross-web agreement on that object model. This is an "opinionated
design model" that I am working toward.
Finally, as you rightly point out at the top of the thread, HAL and
Siren are much better able to support object serialization style
messages. I think that's very cool and I encourage folks (including my
clients) to use these other formats when object serialization is the
preferred pattern and to use CJ when state transfer is the preferred
pattern.

Are extensible records useless in Elm 0.19?

Extensible records were one of Elm's most amazing features, but since v0.16 adding and removing fields is no longer possible, and this puts me in an awkward position.
Consider an example. I want to give a name to a random thing t, and extensible records provide the perfect tool for this:
type alias Named t = { t | name: String }
"Okay," says the compiler. Now I need a constructor, i.e. a function that equips a thing with a specified name:
equip : String -> t -> Named t
equip name thing = { thing | name = name } -- Oops! Type mismatch
Compilation fails, because the { thing | name = ... } syntax assumes thing to be a record that already has a name field, but the type system can't assure this. In fact, with Named t I was trying to express the opposite: t should be a record type without its own name field, and the function adds this field to the record. Either way, field addition is necessary to implement the equip function.
So it seems impossible to write equip in a polymorphic manner, but that's probably not such a big deal. After all, any time I'm going to give a name to some concrete thing, I can do it by hand. Much worse, the inverse function extract : Named t -> t (which erases the name of a named thing) requires a field-removal mechanism and thus is not implementable either:
extract : Named t -> t
extract thing = thing -- Error: No implicit upcast
This would be an extremely important function, because I have tons of routines that accept old-fashioned unnamed things, and I need a way to use them with named things. Of course, massive refactoring of those functions is not an acceptable solution.
At last, after this long introduction, let me state my questions:
1. Does modern Elm provide some substitute for the old, deprecated field addition/removal syntax?
2. If not, is there some built-in function like equip and extract above? For every custom extensible record type, I would like to have a polymorphic analyzer (a function that extracts its base part) and a polymorphic constructor (a function that combines the base part with the additional fields and produces the record).
Negative answers for both (1) and (2) would force me to implement Named t in a more traditional way:
type Named t = Named String t
In this case, I can't see the purpose of extensible records. Is there a positive use case, a scenario in which extensible records play a critical role?
Type { t | name : String } means a record that has a name field. It does not extend the t type but, rather, extends the compiler’s knowledge about t itself.
So in fact the type of equip is String -> { t | name : String } -> { t | name : String }.
What is more, as you noticed, Elm no longer supports adding fields to records, so even if the type system allowed what you want, you still could not do it. The { thing | name = name } syntax only supports updating records of type { t | name : String }.
Similarly, there is no support for deleting fields from a record.
If you really need types from which you can add or remove fields, you can use Dict, as sketched below. The other options are either writing the transformers manually, or creating and using a code generator (this was the recommended solution for JSON decoding boilerplate for a while).
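For example, a minimal sketch of the Dict approach (note the trade-off: every value in the Dict must share a single type):
import Dict exposing (Dict)

type alias Thing =
    Dict String String

-- Fields can be added at runtime...
equip : String -> Thing -> Thing
equip name thing =
    Dict.insert "name" name thing

-- ...and removed again.
extract : Thing -> Thing
extract thing =
    Dict.remove "name" thing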
And regarding extensible records: Elm does not really support the "extensible" part much any more – the only remaining part is the { t | name : u } -> u projection, so perhaps they should be called just scoped records. The Elm docs themselves acknowledge that the extensibility is not very useful at the moment.
You could just wrap the t type together with a name, but it wouldn't make a big difference compared to the approach with a custom type:
type alias Named t = { val: t, name: String }
equip : String -> t -> Named t
equip name thing = { val = thing, name = name }
extract : Named t -> t
extract thing = thing.val
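For example, equip "Bob" { age = 42 } evaluates to { val = { age = 42 }, name = "Bob" }, and extract simply returns the inner record unchanged.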
Is there a positive use case, a scenario in which extensible records play critical role?
Yes, they are useful when your application Model grows too large and you face the question of how to scale out your application. Extensible records let you slice up the model in arbitrary ways, without committing to particular slices long term. If you sliced it up by splitting it into several smaller nested records, you would be committed to that particular arrangement - which might tend to lead to nested TEA and the 'out message' pattern; usually a bad design choice.
Instead, use extensible records to describe slices of the model, and group functions that operate over particular slices into their own modules. If you later need to work across different areas of the model, you can create a new extensible record for that, as in the sketch below.
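A small sketch of the idea (the model and function names are invented for illustration):
type alias Model =
    { name : String
    , age : Int
    , cart : List String
    }

-- This function only needs the "name slice" of the model, so it
-- works on Model and on any other record that has a name field.
setName : String -> { m | name : String } -> { m | name : String }
setName newName model =
    { model | name = newName }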
It's described by Richard Feldman in his Scaling Elm Apps talk:
https://www.youtube.com/watch?v=DoA4Txr4GUs&ab_channel=ElmEurope
I agree that extensible records can seem a bit useless in Elm, but it is a very good thing they are there, because they solve this scaling issue very well.

Method and parameter naming conforming to Swift API Design Guideline

It's not obvious to me from reading the current API Design Guidelines which of the following versions is better.
class MediaLoader {}
class MediaRequest {}
let mediaLoader = MediaLoader()
let mediaRequest = MediaRequest()
// Option 1
mediaLoader.add(request: mediaRequest)
// Option 2
mediaLoader.add(mediaRequest: mediaRequest)
// Option 3
mediaLoader.addRequest(mediaRequest)
// Option 4
mediaLoader.add(mediaRequest)
Which of the above conforms to the current API design guideline the best?
The answer really depends on the purpose and semantics of MediaLoader. If MediaLoader is only a collection of mediaRequests, then .add(mediaRequest) is the way to go because it would flow grammatically and be meaningful in context.
On the other hand, if a mediaRequest is merely one of many different things contributing to its purpose, then .add() alone would not convey enough context to properly read the statement. For example, if you could also add display channels or filters, then merely saying .add(something) would not be clear enough. This is when you would use an extended name that describes the relationship, e.g. .addRequest(), .addChannel(), .addFilter().
But not .add(request:...), because using a name for the first parameter is not the ideal way to distinguish between relationships. It should instead be used to clarify how the addition will be performed or how the request will be accessed, leaving the "nameless" variant for the most frequent and straightforward use case, e.g. .add(fromTemplate: webRequestTemplate) or .addRequest(fromTemplate: webTemplate).
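For illustration, a sketch of that layout (MediaRequestTemplate and the template-based construction are invented for the example):
class MediaRequestTemplate {}

class MediaLoader {
    private var requests: [MediaRequest] = []

    // The "nameless" variant covers the most frequent, straightforward case.
    func add(_ request: MediaRequest) {
        requests.append(request)
    }

    // The named first parameter describes how the request is produced,
    // not which kind of thing is being added.
    func add(fromTemplate template: MediaRequestTemplate) {
        let request = MediaRequest() // build the request from the template here
        requests.append(request)
    }
}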

RavenDb GenerateDocumentKey

I need to generate an id for a child object of my document. What is the current syntax for generating a document key?
session.Advanced.Conventions.GenerateDocumentKey(document) is not there anymore. I've found the _documentSession.Advanced.DocumentStore.Conventions.GenerateDocumentKey method, but its signature is weird: I am okay with the default key-generation algorithm, I just want to pass an object and receive an id.
The default implementation of GenerateDocumentKey is to get the "dynamic tag name" for the class, and append a slash. For example, class Foo would turn into Foos/ which then goes through the HiLoKeyGenerator so that ids can be assigned on the client-side without having to consult the server each time.
If you really want this behavior, you could try to use the HiLoKeyGenerator on your own, but have you considered something simpler? I don't know what your model is, but if the child thing is fully owned by the containing document (which it should be, to be in the same document), you have several much easier options:
Just use the index within the collection
Keep an int NextChildThingId property on the document and increment it every time you add a ChildThing (see the sketch after this list)
Just use a Guid, although those are no fun to read, type, look at, compare, or speak to someone over the phone.
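A minimal sketch of that counter approach (plain C#; the class and property names are invented, and no RavenDB API is involved):
using System.Collections.Generic;

public class Order
{
    public string Id { get; set; }             // assigned client-side via HiLo as usual
    public int NextChildThingId { get; set; }  // per-document counter
    public List<ChildThing> Children { get; set; } = new List<ChildThing>();

    public void Add(ChildThing child)
    {
        child.Id = NextChildThingId++;         // unique within this document only
        Children.Add(child);
    }
}

public class ChildThing
{
    public int Id { get; set; }
}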

Overextending object design by adding many trivial fields?

I have to add a bunch of trivial or seldom-used attributes to an object in my business model.
So, imagine class Foo which has a bunch of standard information such as Price, Color, Weight, Length. Now, I need to add a bunch of attributes to Foo that are rarely deviating from the norm and rarely used (in the scope of the entire domain). So, Foo.DisplayWhenConditionIsX is true for 95% of instances; likewise, Foo.ShowPriceWhenConditionIsY is almost always true, and Foo.PriceWhenViewedByZ has the same value as Foo.Price most of the time.
It just smells wrong to me to add a dozen fields like this to both my class and database table. However, I don't know that wrapping these new fields into their own FooDisplayAttributes class makes sense. That feels like adding complexity to my DAL and BLL for little gain other than a smaller object. Any recommendations?
Try setting up a separate storage class/struct for the rarely used fields and hold an instance of it in a single field, say rarelyUsedFields (for example, it will be a pointer in C++ and a reference in Java - you don't mention your language).
Have setters/getters for these fields on your class. Setters will check whether the value differs from the default, lazily initialize rarelyUsedFields, and then set the respective field value (say, rarelyUsedFields.DisplayWhenConditionIsX = false). Getters will read rarelyUsedFields and return the default values (true for DisplayWhenConditionIsX and so on) if it is NULL; otherwise they return rarelyUsedFields.DisplayWhenConditionIsX.
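A minimal sketch of that pattern in Java (the field name comes from the question; the holder class is invented for illustration):
public class Foo {
    // Rarely used fields live in a lazily created holder.
    private static class RarelyUsedFields {
        boolean displayWhenConditionIsX = true;
    }

    private RarelyUsedFields rarelyUsedFields; // null while everything is at its default

    public boolean isDisplayWhenConditionIsX() {
        return rarelyUsedFields == null || rarelyUsedFields.displayWhenConditionIsX;
    }

    public void setDisplayWhenConditionIsX(boolean value) {
        if (rarelyUsedFields == null) {
            if (value) {
                return; // still the default, nothing to store
            }
            rarelyUsedFields = new RarelyUsedFields();
        }
        rarelyUsedFields.displayWhenConditionIsX = value;
    }
}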
This approach is used quite often, see WebKit's Node.h as an example (and its focused() method.)
Abstraction makes your question a bit hard to understand, but I would suggest using custom getters such as Foo.getPrice() and Foo.getSpecialPrice().
The first one would simply return the attribute, while the second would perform operations on it first.
This is only possible if there is a way to calculate the "seldom used version" from the original attribute value, but in most common cases this would be possible, providing you can access data from another object storing parameters, such as FooShop.getCurrentDiscount().
The problem I see is more about the Foo object having side effects.
In your example, I see two features: display and price.
I would build one or more Displayers (which know how to display) and make the price a component object with a list of internal price modifiers.
Note all this is relevant only if your Foo objects are called by numerous clients.

DoJo get/set overriding possible

I don't know much about Dojo, but is the following possible:
I assume it has a getter/setter for access to its datastore; is it possible to override this code?
For example:
In the Dojo store I have 'Name: #Joe'.
Is it possible to hook into the get, along the lines of:
get()
if name.firstChar == '#' then just return 'Joe'
and:
set(var)
if name.firstChar == '#' then set to '#' + var
Is this sort of thing possible, or will I need a wrapper API?
You can find the best documentation at http://docs.dojocampus.org/dojo/data/api/Read
First, for getting the data from a store you have to use
getValue(item, "key")
I believe you can solve the problem the following way. It assumes you are using an ItemFileReadStore; if you use another one, just replace it.
dojo.require("dojo.data.ItemFileReadStore");
dojo.declare("MyStore", dojo.data.ItemFileReadStore, {
    getValue: function(item, key){
        // Delegate to the original getValue, then strip a leading "#".
        var ret = this.inherited(arguments);
        return (typeof ret == "string" && ret.charAt(0) == "#") ? ret.substr(1) : ret;
    }
});
And then just use "MyStore" instead of ItemFileReadStore (or whatever store you are using).
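For example (assuming a store file data.json that contains an item whose Name is "#Joe"):
var store = new MyStore({ url: "data.json" });
store.fetch({
    onItem: function(item){
        console.log(store.getValue(item, "Name")); // prints "Joe"
    }
});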
I just sketched the code above and didn't try it, but it should show the solution well.
Good luck
Yes, I believe so. I think what you'll want to do is read the design document linked below and determine whether it will work.
The following statement leads me to believe the answer is yes:
...
By requiring access to go through store functions, the store can hide the internal structure of the item. This allows the item to remain in a format that is most efficient for representing the datatype for a particular situation. For example, the items could be XML DOM elements and, in that case, the store would access the values using DOM APIs when store.getValue() is called.
As a second example, the item might be a simple JavaScript structure and the store can then access the values through normal JavaScript accessor notation. From the end-user's perspective, the access is exactly the same: store.getValue(item, "attribute"). This provides a consistent look and feel to accessing a variety of data types. This also provides efficiency in accessing items by reducing item load times by avoiding conversion to a defined internal format that all stores would have to use.
...
Going through store accessor functions provides the possibility of lazy-loading of values as well as lazy reference resolution.
http://www.dojotoolkit.org/book/dojo-book-0-9/part-3-programmatic-dijit-and-dojo/what-dojo-data/dojo-data-design
I'd love to give you an example but I think it's going to take a lot more investigation.