Ember: global mapping from typekey to model name coming and going - ember-data

The server API changed from using a type key of "foo" to "bar". My Ember app uses foo everywhere, and I don't want to change all of those references to bar. I see I can use typeForRoot to map root-level hash keys to a model name, but that appears to apply only to loading data. When I save my Foo model, I want the PUT to be generated with a hash key of bar as well. Is there any simple way to completely map type keys to models, both coming and going?

The answer I posted on another Stack Overflow question might help you.
Basically, the function you need to look at is the store's _normalizeTypeKey.
You can override the store's _normalizeTypeKey and alter the camelCase behaviour to whatever you want (e.g. dasherized, or just fix this one case).
You can also override the serializer's typeForRoot when going the other way - this lets you tell Ember what the model key is (e.g. tellMeAStory) for a particular key in your data (e.g. tell_me_a_story).
It appears there is work underway to make everything work the way the container does (which is dasherized).
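As a rough sketch, assuming the ember-data 1.0 beta serializer API the answer refers to (typeForRoot is named in the answer; using serializeIntoHash for the outgoing root key is my assumption and may differ in your version):

App.FooSerializer = DS.RESTSerializer.extend({
  // Incoming payloads: the server's "bar"/"bars" root key should load the "foo" model.
  typeForRoot: function(root) {
    if (root === 'bar' || root === 'bars') { return 'foo'; }
    return this._super(root);
  },
  // Outgoing payloads: when saving a Foo, emit the record under the "bar" root key.
  serializeIntoHash: function(hash, type, record, options) {
    hash['bar'] = this.serialize(record, options);
  }
});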

FactoryImpl to set atts via props for bound inputs

First, thanks for any advice. I am new to all of this and apologize for any obvious blunders.
Second, the question:
In an interface for entering clients that often hold a number of roles, it seemed efficient to create a set of inputs whose visual characteristics and associated data binding are derived simply from the input's name.
For example, inquirerfirstname would be any caller or emailer who contacted our company.
The name would dictate the label, the placeholder, and the location in Firebase where the data would be stored.
The single name could be used--I thought--with a relational table (a state machine or a series of nested ifs) to define the properties of the input and change its outward appearance and inner bindings through property manipulation.
I created a set of nested ifs and console-logged the property changes in the inputs, but their representation in the host element (a collection of inputs that generated messages to clients as well as messages to sales staff) remained unaffected.
I attempted using the ready callback. I forced the state change with a button.
I was unable to use var name = new MyInput(name). I believe this method would be the most effective, but I am unsure how to "stamp" the JavaScript into a heavyweight stamped parent element.
An example of a more complicated and dynamic use of a constructor and a factory implementation that can read database (JSON) objects and respond by generating HTML elements would be awesome.
In vanilla JavaScript a forEach would seem to do the trick, but the definitions, structure, and binding would not be organic--read: it might be easier just to stamp the inputs by hand in Polymer HTML.
I would be really grateful for any help. I have looked for a week and failed to find one example that takes data binding, physical appearance, attribute swapping, property binding, and object reading into account.
I guess it's a lot, but I think I understand each piece independently (save the use of the constructor).
Thanks again.
Jason
P.S.: I am aware that the stamping of the element seems to preclude dynamic property, attribute, and binding assignments. I was hoping a computed attribute mixed with a factoryImpl would be an option (with a nice example).
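For what it's worth, here is a minimal sketch of the constructor-based approach the question describes, assuming Polymer 1.x; the element name, field names, and lookup table are purely hypothetical, and the dom-module template that binds label, placeholder, and value is omitted:

var MyInput = Polymer({
  is: 'my-input',
  properties: {
    fieldName:   String,
    label:       String,
    placeholder: String,
    value:       { type: String, notify: true } // two-way bindable
  },
  // Called when the element is created via `new MyInput('inquirerfirstname')`.
  factoryImpl: function(fieldName) {
    var config = this._lookup(fieldName); // the "relational table" of input definitions
    this.fieldName = fieldName;
    this.label = config.label;
    this.placeholder = config.placeholder;
  },
  _lookup: function(name) {
    var table = {
      inquirerfirstname: { label: 'First name', placeholder: 'Caller first name' }
    };
    return table[name] || { label: name, placeholder: '' };
  }
});

var input = new MyInput('inquirerfirstname');
document.body.appendChild(input);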

RESTful way to preallocate an ID

For various reasons, my application needs an API that flows like this:
The client calls the server to get an ID for a new resource.
Then the user spends a while filling out the forms for the resource.
Then the user clicks save (or not...), and when they do, the client saves the resource by writing to /myresource/{id}.
What is the RESTful way to design this?
What should the first call look like? On the server side, it's a matter of generating an ID and returning it. It has side effects (it increments the sequence and thus "reserves space"), but it doesn't explicitly create a resource.
If I understand correctly, the 3rd call should be a PUT because it creates something with a known URI.
One way you could do it is:
client POSTs empty body to /myresource/
server answers with status code 302 (Found) with a Location response header set to /myresource/newresourceid (to indicate the ID / URI that should be used to create the resource)
client PUTs the new resource to /myresource/newresourceid once the user is done filling the form.
Seems RESTful enough. ;)
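The server side of that flow might look something like this hypothetical Express sketch; the framework, route names, and in-memory storage are illustrative, not something the answer prescribes:

var express = require('express');
var app = express();
app.use(express.json());

var nextId = 1;
var resources = {};

// Step 1: an empty POST reserves an ID (side effect: the sequence advances).
app.post('/myresource/', function (req, res) {
  var id = nextId++;
  res.redirect(302, '/myresource/' + id); // Location header points at the reserved URI
});

// Step 3: the client PUTs the finished resource to the reserved URI.
app.put('/myresource/:id', function (req, res) {
  var existed = resources.hasOwnProperty(req.params.id);
  resources[req.params.id] = req.body;
  res.sendStatus(existed ? 200 : 201);
});

app.listen(3000);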
I'm interested to see the other answers to this question as I imagine there's a lot of ways to do this.
If possible, I would let the auto-incrementing ID in the database serve as your surrogate key and assign another field to be your business identifier. It could be something like a product code or a GUID.
With this in mind, the client can now create the ID themselves, which removes the need for step 1 entirely. They would do a PUT to a URL such as /myresource/MLN5001 or /myresource/3F2504E0-4F89-11D3-9A0C-0305E82C3301 to create the resource. If the ID is already in use, return a 409 Conflict with the conflict described in the response body. Otherwise, return a 201 Created and include the URI of the resource in the response body and Location header.
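A hedged client-side sketch of that approach (fetch and crypto.randomUUID are my choices for illustration, not part of the answer):

async function createResource(resource) {
  var id = crypto.randomUUID(); // client-generated business identifier (a GUID)
  var res = await fetch('/myresource/' + id, {
    method: 'PUT',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify(resource)
  });
  if (res.status === 409) { throw new Error('ID already in use'); }  // conflict
  if (res.status === 201) { return res.headers.get('Location'); }    // created
  throw new Error('Unexpected status ' + res.status);
}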
I would go with
GET /myresource/new-id
POST /myresource/{id}
Your walkthrough is pretty clear on the verb:
"to GET [an] ID for a new resource"
You could rename new-id to whatever you think makes it clearest. If you have multiple resources you need to do this for, it would probably be better to split the generator out into its own resource, such as
GET /id-generator/myresource
GET /id-generator/my-other-resource
If there are multiple cases, users will quickly learn that they need to hit id-generator to get their IDs. If it's only one case, it's annoying to make them use a separate resource for something they need so infrequently.
I guess you could also do
GET /myresource-id-generator/next
which looks a little clearer, but then if you ever need another ID to be generated you have to make another resource to do it.
ID allocation is non-idempotent (two invocations of the allocation operation will get different IDs), so that call should always be a POST. From that point on, the resource should conceptually exist. However, what I'd do at that point is fill it out with reasonable default values (whether that involves POSTs or PUTs is rather immaterial to the RESTfulness of the overall design), so the user can then take their time altering the parts they want to change.
The question then becomes whether there should be some kind of "release this; I'm done with altering it" operation at the end. Strict RESTfulness says there shouldn't be: if you know the resource identifier (the URL), then you can talk about it. On the other hand, that doesn't mean the hosting server has to tell anyone else about the resource until the creating user is happy with it. General HATEOAS principles say nothing about when others can discover that a resource exists, or about whether knowing the name lets you read the thing, and it's entirely reasonable to deny to third parties that a resource exists until the owner turns on the "make this public" flag.

Rails - where do the model instance variables come from?

I come from an ASP.NET MVC background and am currently going through the following Rails tutorial: http://guides.rubyonrails.org/getting_started.html
I have created a "Post" model which contains some instance variables, but they do not seem to have been defined in the model. They must come from somewhere else. Where are they defined?
I googled "activerecord model" and this was in the top result:
Active Record objects don’t specify their attributes directly, but rather infer them from the table definition with which they’re linked. Adding, removing, and changing attributes and their type is done directly in the database. Any change is instantly reflected in the Active Record objects. The mapping that binds a given Active Record class to a certain database table will happen automatically in most common cases, but can be overwritten for the uncommon ones.
You can have virtual attributes that don't correspond to fields in the table/model. A common example is the 'password' and 'password_confirmation' variables used in authentication. They exist temporarily until you encrypt the password and save it to another field such as 'encrypted_password'.
You can declare them, but that's not required. You don't have to define or declare them anywhere... just start using them. Of course, they're not persistent, so they won't be saved.
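As a small sketch of both points, assuming the tutorial's posts table has title and text columns (the draft_note accessor is purely hypothetical):

class Post < ActiveRecord::Base
  # title and text become attributes automatically, inferred from the posts table;
  # nothing has to be declared here.

  # A virtual attribute: usable on the object, but never saved to the database.
  attr_accessor :draft_note
end

post = Post.new(title: "Hello", text: "World")
post.draft_note = "only lives in memory"
post.save # persists title and text, not draft_note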

In Symfony2, should I use an Entity or a custom Repository

I am creating a new web app and would like some help on design plans.
I have "store" objects, and each one has a number of "message" objects. I want to show a store page that shows this store's messages. Using Doctrine, I have mapped OneToMany using http://symfony.com/doc/current/book/doctrine.html
However, I want to show the messages in reverse chronological order, so I added:
@ORM\OrderBy({"whenCreated" = "DESC"})
Still, I am fetching the "store" object and then calling
$store->getMessages();
Now I want to show only messages that have been "verified". At this point I am unsure how to do this using @ORM annotations, so I was thinking I need a custom repository layer.
My question is twofold:
First, can I do this using @ORM annotations on the entity?
And second, which is the correct way to wrap this database query?
I know I eventually want the SQL SELECT * FROM message WHERE verified=1 AND store_id=? ORDER BY myTime DESC, but how do I do this the "Symfony2 way"?
For part 1 of your question... technically I think you could do this, but I don't think you'd be able to do it in an efficient way, or a way that doesn't go against good practices (i.e. injecting the entity manager into your entity).
Your question is an interesting one, because at first glance, I would also think of using $store->getMessages(). But because of your custom criteria, I think you're better off using a custom repository class for Messages. You might then have methods like
$messageRepo->getForStoreOrderedBy($storeId, $orderBy)
and
$messageRepo->getForStoreWhereVerified($storeId).
Now, you could do this from the Store entity with methods like $store->getMessagesWhereVerified(), but I think you would be polluting the Store entity, especially if you need more and more of these custom methods. By keeping them in a Message repository you're separating your concerns more cleanly. Also, with the Message repository you might save yourself a query by not needing to fetch your Store object first, since you would only need to query the Message table and use its store_id in your WHERE clause.
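A rough sketch of what such a repository method could look like; the namespace, class, and method names here are illustrative, and the fields match the ones mentioned above:

<?php
namespace Acme\StoreBundle\Repository;

use Doctrine\ORM\EntityRepository;

class MessageRepository extends EntityRepository
{
    public function findVerifiedForStore($storeId)
    {
        return $this->createQueryBuilder('m')
            ->where('m.store = :storeId')
            ->andWhere('m.verified = :verified')
            ->setParameter('storeId', $storeId)
            ->setParameter('verified', true)
            ->orderBy('m.whenCreated', 'DESC')
            ->getQuery()
            ->getResult();
    }
}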
Hope this helps.

Fetched Properties, cross store relationships

I've got a store that is synchronized externally and a store that is unique to the application instance, so in order to cleanly differentiate the two I want to have some join entities between them and then resolve through to the entities in between using Fetched Properties, as "discussed" in the Core Data Programming Guide:
developer.apple.com/documentation/Cocoa/Conceptual/CoreData/Articles/cdRelationships.html#//apple_ref/doc/uid/TP40001857-SW5
I think I just don't really "get" how Fetched Properties are supposed to be used - and I've spent a fair number of hours looking for examples with no real luck.
The way I think of it is this: I have the following entities, each in a different store:
Foo with attribute relatedBarName in store A
Bar with attribute barName in store B
I need to create a fetched property on Foo named findRelatedBar that relates Foo to Bar loosely through barName = relatedBarName.
However, since Foo and Bar are in different stores, I don't understand how to declare a relationship of any sort, whether through the fetched property or not, from Foo to Bar.
The predicate builder in Xcode seems to want a destination entity. If they are in different schemas, how can you declare the destination? And if you don't declare a destination, how do you indicate at runtime that findRelatedBar on Foo is describing Bar?
Otherwise, do they need to be in the same schema but just stored in different stores?
In crafting this question, I thought of these questions and answered them myself by more focused examination of the documentation. I assume if I found it confusing, others might as well, so I'll inline them with this post to make it easier to find related answers to fetched properties / core data stores.
Q) If a store coordinator has more than one store associated with it, of the same schema, how do insertions know which store to insert to?
A) You use the assignObject:toPersistentStore: method on the managed object context.
Q) What does FETCH_SOURCE refer to, specifically?
A) It's simply the managed object that has the fetched property associated with it. Sort of like "self".
Q) What does FETCHED_PROPERTY refer to, specifically?
A) It is a reference to the fetched property description instance you are using to query with - you can use this to inject per-query variable substitutions. By setting a property on the userInfo of the property description instance you're using (as in the Core Data Programming example), you can inject that value into the expression.
Thanks!!!!
The answer is:
Yes, you need to do a cross-store fetched property with shared schemas. If you do this, you need to make sure you attribute the inserts with the assignObject:toPersistentStore: method as described in the question. However, due to the limitations of the SQLite persistent store, natural things like IN $FETCH_SOURCE.attribute do not work.
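For reference, a hedged sketch of how the findRelatedBar fetched property from the question might be built in code; it assumes a model variable holding the managed object model before it is handed to a store coordinator (normally you would set this up in Xcode's model editor instead):

NSEntityDescription *fooEntity = model.entitiesByName[@"Foo"];
NSEntityDescription *barEntity = model.entitiesByName[@"Bar"];

// Fetch Bars whose barName matches the source Foo's relatedBarName.
NSFetchRequest *request = [[NSFetchRequest alloc] init];
request.entity = barEntity;
request.predicate =
    [NSPredicate predicateWithFormat:@"barName == $FETCH_SOURCE.relatedBarName"];

NSFetchedPropertyDescription *findRelatedBar = [[NSFetchedPropertyDescription alloc] init];
findRelatedBar.name = @"findRelatedBar";
findRelatedBar.fetchRequest = request;

// Attach the fetched property to Foo.
fooEntity.properties = [fooEntity.properties arrayByAddingObject:findRelatedBar];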
Q) If a store coordinator has more than one store associated with it, of the same schema, how do insertions know which store to insert to?
This is what configurations are for. You create a configuration for each store and then assign entities to that configuration. You then create the store with the proper configuration. When you save the context, each entity will automatically go to the correct store.
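A minimal sketch of that setup, assuming the model defines configurations named "External" and "Local" with entities assigned to each; the model and store URL variables are assumed to exist:

NSError *error = nil;
NSPersistentStoreCoordinator *coordinator =
    [[NSPersistentStoreCoordinator alloc] initWithManagedObjectModel:model];

// One store per configuration; on save, each entity goes to the store
// whose configuration it belongs to.
[coordinator addPersistentStoreWithType:NSSQLiteStoreType
                          configuration:@"External"
                                    URL:externalStoreURL
                                options:nil
                                  error:&error];

[coordinator addPersistentStoreWithType:NSSQLiteStoreType
                          configuration:@"Local"
                                    URL:localStoreURL
                                options:nil
                                  error:&error];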