Only a node can be linked! Not "undefined"! - gun

I want to put nodes into a Gun set.
const Gun = require('gun');
const _ = require('lodash');
require( "gun/lib/path" );
const gun = new Gun({peers:['http://localhost:8080/gun']});
const watchers = [
{
_id: '123',
_type: 'skeleton',
_source: {
trigger: {
schedule: {
later: 'every 1 sec'
}
}
}
},
{
_id: '456',
_type: 'snowman',
_source: {
trigger: {
schedule: {
later: 'every 1 sec'
}
}
}
}
];
const tasks = gun.get('tasks');
_.forEach(watchers, function (watcher) {
let task = gun.get(`watcher/${watcher._id}`).put(watcher);
tasks.set(task);
});
In the end, I receive only the following message, and the script hangs in the terminal.
Only a node can be linked! Not "undefined"!
There is nothing on the listener side:
const tasks = gun.get('tasks');
tasks.map().val(function (task) {
console.log('task', task);
});
What is wrong?
The result is received on the listener side only if I change the watchers objects to simpler ones, for example:
_.forEach(watchers, function (watcher) {
let task = gun.get(`watcher/${watcher._id}`).put({id: '123'});
tasks.set(task);
});
Results:
task { _: { '#': 'watcher/123', '>': { id: 1506953120419 } },
id: '123' }
task { _: { '#': 'watcher/456', '>': { id: 1506953120437 } },
id: '123' }

@trex, you correctly reported this as a bug, and we got this fixed here: https://github.com/amark/gun/issues/427.
When a node is referenced, it should not act as if it is undefined. This was a bug.
However, in the future, some people may attempt to link non-node references. As such, I would like to answer the title of your subject (but note, your actual issue has been fixed, and your code should now work correctly in v0.8.8+).
Why do I get "Only a node can be linked!" error?
Say you have a reference to a thing in gun:
var thing = gun.get('alice').get('age');
You may want to add it to a set (otherwise called a table, or list, or collection) like so:
gun.get('list').set(thing);
You will get an "Only a node can be linked!" error. This is annoying! But here is why:
Because age (or any other example data) is a primitive value, adding it to a table would cause it to lose its context. Instead, we can achieve the exact same end result using the following approach:
var person = gun.get('alice');
gun.get('list').set(person);
gun.get('list').map().get('age').on(callback);
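For completeness, the callback above could be as simple as the following sketch (GUN passes the data and its key to .on handlers):

var callback = function (age, key) {
  // fires now, and again whenever any item's age changes
  console.log(key, 'has age', age);
};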
Now we get back a list of ages, but those ages will always reflect their latest/current realtime context. Had we just added the age to the table, it would no longer have a realtime context.
This is why only nodes can be linked, because any of the data on that node that you were trying to link can just be linked to by traversing via the node. In this case, it was by doing .get('age') after the list. There are a couple really cool things about this:
Bandwidth is saved. GUN will only load the age property from each item in the list, it will not load the rest of the item. It syncs the data you ask about.
Everything is traversable. No matter where your data is in a graph, whether it is a document, a key/value pair, a table, relational data, or anything else, it will always be traversable from its node in the graph. This is possible because it is always the node that is linked, not the primitive data.
Note: What can be frustrating is that you may not know in advance whether a certain gun reference is a node or a primitive, as you could always allow your users to dynamically change the data on that reference. This would require you to handle the error gracefully and do whatever you can best guess the user intended. You can avoid this problem by enforcing a schema on the data in your app. If your app is deployed, we strongly recommend using a schema.
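For instance, a rough way to handle it gracefully at runtime (the 'maybe' key and this guard are only an illustration, not something GUN provides) is to read the reference first, then decide whether to link the node or store the raw value:

var ref = gun.get('alice').get('maybe');
ref.val(function (data) {
  // if the reference resolved to a node (an object), link it; otherwise store the primitive
  if (data && typeof data === 'object') {
    gun.get('list').set(ref);
  } else {
    gun.get('list').set(data);
  }
});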
But what if I want the raw data linked, not a realtime context?
Then all you have to do is pass the actual value of the data, not the reference to it. Like so:
gun.get('alice').get('age').val(function (age) { gun.get('list').set(age); });
As always, the chatroom is super friendly so come say hi. Please use StackOverflow for asking questions, but notify us in the chatroom. The chatroom is for quick help, and SO is for long standing questions that others would benefit from.
Thanks for asking this question! I hope this answered it, give us a shout if you have any further questions or concerns.

Related

AngularJS/Ionic How to correctly get data from database to display in view?

What is the correct/best approach to get data from a database for something like a calendar and its events?
I think there are two possible ways:
1) Should I get all the data from the database and save it in a $scope object, so that I access the SQLite db only once and work with the object later on?
2) Should I only get the data for the currently selected day of the calendar, and do another SQL query whenever I want to display another day?
EDIT: the database is local, so network performance is not a concern.
I think it depends on what is more important to you and how big the data is.
If you are working with a local database, keep in mind that you always have to do async operations to get data from it. So there is always a little delay (depending on the device performance) while the promises get resolved, even with a local db.
My Ionic application follows option 2, without using any cache. The trick to rendering correct data is to resolve the relevant data before entering the view.
My $stateProvider configuration looks like this:
.state('app.messages', {
url: "/messages",
views: {
'menuContent': {
templateUrl: "pathToTemplate.html",
controller: 'ExampleMessageListCtrl'
}
},
resolve: {
messages: function($q,DatabaseService) {
var deferred = $q.defer();
//All db functions are async operations
DatabaseService.getMessageList().then(function (messageList) {
deferred.resolve(messageList);
}, function (err) {
deferred.reject(err);
});
return deferred.promise;
}
},
//Disable cache to always get the values from db
cache: false
});
In the controller you can access the messages variable via injection:
.controller('ExampleMessageListCtrl', function ($scope,messages) {
var loadMessages = function() {
$scope.messages = messages;
};
...
});
The benefit of resolving the data before entering the state is that you always render the data that is actually in the db, instead of first rendering frames with the empty default data from the scope variables.
I think option 1 is the right choice if you want to display data very quickly. The challenge in that case is to keep your cached data synchronized with the database without losing much performance, which I think is not easy.
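If you go with option 1, a simple write-through cache is one way to keep the in-memory data and the database in sync. Here is a minimal sketch (EventCache, getAllEvents and insertEvent are made-up names for illustration, not part of Ionic or the code above):

.factory('EventCache', function ($q, DatabaseService) {
  var cache = null;
  return {
    // load everything once, then serve the in-memory copy
    getAll: function () {
      if (cache) { return $q.when(cache); }
      return DatabaseService.getAllEvents().then(function (events) {
        cache = events;
        return cache;
      });
    },
    // write through to the db first, then update the cache so both stay in sync
    add: function (event) {
      return DatabaseService.insertEvent(event).then(function (saved) {
        if (cache) { cache.push(saved); }
        return saved;
      });
    },
    invalidate: function () { cache = null; }
  };
})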

Asynchronous completion handling in a function with multiple closures/API requests in swift

I just started developing in Swift, so I'm totally new to closures. I'm also new to handling asynchronous API requests.
I have read a lot of similar questions, such as How to get data to return from NSURLSessionDataTask in Swift and How to use completionHandler Closure with return in Swift?. These helped me, but my problem is a little bit different.
In my function I want to first make an API request to get a JSON payload. With some data from this JSON payload I want to make multiple other API requests. For each of those requests I receive another JSON payload, from which I want to store some of the data in my own JSON data structure.
The problem is that for each of those inner API requests I can only return part of my own JSON data in its completion handler - as far as I understand, this is the only way to return data when making an API request using a closure.
So instead of getting multiple completion handlers when calling my function, I just want to receive a single one.
The thing is, I don't know how to handle completion across several closures in one function, in this case two closures.
I have posted my code below - I know it's quite long and maybe not that clean.
The point, however, is that when I add the offers to my storeDict, it is still empty, because the offers array is only appended to inside the second closure. This is shown at the bottom of the function.
func getOffersFromWishList(offerWishList: [String], latitude: Double, longitude: Double, radius: Int, completionHandler: ([NSDictionary] -> Void)) {
var master: [NSDictionary] = []
var nearby_params: NSDictionary = ["r_lat": latitude, "r_lng": longitude, "r_radius": radius]
//println(nearby_params)
var store_id_list: [String] = []
// Get all store_ids for store which are nearby (Radius determines how nearby)
singleton_eta.api("/v2/stores", type: ETARequestTypeGET, parameters: nearby_params, useCache: true, completion: { (response, error, fromCache) -> Void in
if error == nil {
let json = JSON(response)
let storeArray = json.arrayValue
//println(storeArray)
for store in storeArray {
var storeDict = [String: AnyObject]()
var metaData = [String: String]()
var offers: [NSDictionary] = []
let name = store["branding"]["name"].stringValue
let store_id = store["id"].stringValue
let street = store["street"].stringValue
let city = store["city"].stringValue
let zip_code = store["zip_code"].stringValue
let dealer_id = store["dealer_id"].stringValue
let logo = store["branding"]["logo"].stringValue
metaData = ["name": name, "store_id": store_id, "street": street, "city": city, "zip_code": zip_code, "dealer_id": dealer_id, "logo": logo]
store_id_list.append(store_id)
//println("Butiks ID: \(store_id)")
var offset = 0
let limit = 100
// Loop through the offers for the specific store id - only possible to request 100 offers each time
// A while loop would be more suitable, but I don't know when to stop, since the length of offerArray can't be counted - it can't be accessed outside of the closure.
for x in 1...2 {
var store_params: NSDictionary = ["store_ids:": store_id, "limit": limit, "offset": offset]
println(store_params)
// Get offers for a specific store_id
singleton_eta.api("/v2/offers", type: ETARequestTypeGET, parameters: store_params, useCache: true, completion: { (response, error, fromCache) -> Void in
if error == nil {
let offerArray = JSON(response).arrayValue
//println( "TypeName0 = \(_stdlib_getTypeName(offerArray))")
// Loop through the received offers
for of in offerArray {
let name = of["branding"]["name"].stringValue
let dealer_id = of["dealer_id"].stringValue
let heading = of["heading"].stringValue
let description = of["description"].stringValue
let price = of["pricing"]["price"].stringValue
let image = of["images"]["view"].stringValue
//println(heading)
// Loop through our offerWishList
for owl in offerWishList {
let headingContainsWish = (heading.lowercaseString as NSString).containsString(owl.lowercaseString)
// Check if offer match with our wish list
if(headingContainsWish) {
// Save necessary meta data about each offer to a dictionary array
var offer = Dictionary<String, String>()
offer = ["name": name, "dealer_id": dealer_id, "heading": heading, "description": description, "price": price, "image": image, "offerWishItem": owl]
offers.append(offer)
}
}
}
}
})
//println(storeDict)
offset = offset + limit + 1
}
storeDict.updateValue(metaData, forKey: "meta_data")
storeDict.updateValue(offers, forKey: "offers") // offers is empty due to its appending inside the closure
master.append(storeDict)
}
completionHandler(master)
}
else {
println(error)
}
})
}
Calling the above function
getOffersFromWishList(offerWishList, latitude, longitude, radius) { (master) -> Void in
println(master)
}
This is what the master will print when calling the function, where offers is empty.
{
"meta_data" = {
city = "Kongens Lyngby";
"dealer_id" = d8adog;
logo = "https://d3ikkoqs9ddhdl.cloudfront.net/img/logo/default/d8adog_3qvn3g8xp.png";
name = "d\U00f8gnNetto";
"store_id" = d2283Zm;
street = "Kollegiebakken 7";
"zip_code" = 2800;
};
offers = (
);
}
{
...
}
So my question is: what is the proper way to return data from the second closure to the first closure inside a function? Or am I doing this in a completely wrong way?
The thing is, I need all of this data for a table view and therefore need all the data at once.
A couple of thoughts:
If there's any possibility of returning all of this in a single request to the server, that might offer better performance. Often, the time required to perform the requests on the server is inconsequential in comparison to the network latency. If you can avoid needing to make one request, get a response, and then issue more requests, that would be ideal.
Or perhaps you request the locations within some distance in advance, cache that, and then the "show me deals for nearby locations" might not require these two sets of requests.
(I recognize that neither of these may work for you, but it's something to consider if you can. If you can eliminate consecutive requests and focus on largely concurrent requests, you'll have much better performance.)
Let's assume for a second that the above is not an option, and you're stuck with one request to get the nearby locations and another set to get the deals. Then you have a couple of options:
You can definitely go down the road that you're contemplating with a single callback. You can, for example, issue all of your requests, doing a dispatch_group_enter before you initiate each request, do a dispatch_group_leave upon the completion of each request, and then issue a dispatch_group_notify that will be called when each enter call has been offset by a corresponding leave call. So, build your response object as each request finishes, and only when they're done, call the completion closure.
Another approach would be to have a closure that behaves more like an enumeration closure, one that is called as each site's deals come in. That way, the UI can be updated as things come in, rather than waiting for everything. If you're on a slow network, updating the UI as data comes in may be far more tolerable. (E.g., consider ten requests, each of which takes 1 second to complete on a slow 3G cellular connection: watching them pop in one per second is far more tolerable than seeing nothing for ten seconds.)
Having said that, you may want to abandon closures completely. You could consider a delegate-protocol pattern, where you specify a delegate for your request, and then implement protocol methods for each of the responses you get from the server. That way you can update the UI as new responses come in, rather than holding everything up until the last one comes in. But we're recognizing that there are very different types of responses (one is a list of sites, another is the list of deals for a given site, a third would be the "I'm all done" and/or "there was an error"), so when it starts to get this complicated, it might be better to define a protocol for this interface, and handle it that way.

Meteor.loginWithPassword callback doesn't provide custom object in User accounts doc

Meteor's loginWithPassword() function doesn't provide me with the systemData object that I add to the user doc (not to the profile object) during registration. The strange thing is that if I look in the console after logging in, I can see the systemData object (so it's probably not a publish issue), but it is not there in the loginWithPassword() callback, where I need it (to dynamically redirect the user to the proper page). Is there a way to get this object without ugly hacks like timers?
Meteor.loginWithPassword(email, password, function(errorObject) {
if (errorObject) {
...
} else {
// returns true
if (Meteor.userId()) {
// returns false
if (Meteor.user().systemData) {
...
}
// user doc without systemData object
console.log(JSON.stringify(Meteor.user()));
}
}
});
I add the systemData object when creating the user:
Accounts.onCreateUser(function(options, user) {
if (options.profile) {
user.profile = options.profile;
}
...
user.systemData = systemDataRegularUser;
return user;
});
Are you sure you publish the data to the client?
I get the user info in the loginWithPassword callback function:
Meteor.loginWithPassword username,password,(error,result1)->
options =
username: username
password: password
email: result['data']['email']
profile:
name: result['data']['display-name']
roles: result.roles
console.log Meteor.user(), result1
I create the user with the following code (options contains systemData):
Accounts.createUser option
The first problem is that you want a custom field on a user document published to the client. This is a common question - see the answer here.
The next problem is that even after you add something like:
Meteor.publish("userData", function () {
return Meteor.users.find(this.userId, {fields: {systemData: 1}});
});
I think you still have a race condition. When you call loginWithPassword, the server will publish your user document, but it will also publish another version of the same document with the systemData field. You are hoping that both events have completed by the time Meteor.user() is called. In practice this may just work, but I'm not sure there is any guarantee that it always will. As you suggested, if you added a slight delay with a timer that would probably work but it's an ugly hack.
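If you want to avoid the timer, one option is to wait reactively for the field to arrive. A minimal sketch (not from the original answer), assuming the client subscribes to the userData publication above (use Deps.autorun on older Meteor versions):

Meteor.loginWithPassword(email, password, function (error) {
  if (error) { return; }
  Meteor.subscribe('userData');
  Tracker.autorun(function (computation) {
    var user = Meteor.user();
    if (user && user.systemData) {
      computation.stop(); // run the redirect only once
      // redirect based on user.systemData here
    }
  });
});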
Alternatively, can you just add systemData to the user's profile so it will always be published?
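That alternative could look roughly like this, reusing the onCreateUser hook from the question (systemDataRegularUser is the question's own variable):

Accounts.onCreateUser(function (options, user) {
  user.profile = options.profile || {};
  // profile is published to the owning user by default, so this field arrives with the login
  user.profile.systemData = systemDataRegularUser;
  return user;
});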
I didn't find an exact way to solve this, but I found an easy workaround.
To perform some action right after the user logs in (e.g. dynamically redirect the user to the proper page), you can hook into your home page with Iron Router (if you are using it):
this.route('UsersListingHome', {
path: '/',
template: 'UsersListing',
data: function() { ... },
before: function() {
if (isCurrentUserAdmin() && Session.get('adminJustLogged') !== 'loggedIn') {
Router.go('/page-to-redirect');
Session.set('adminJustLogged','loggedIn');
}
}
});
After clicking logout, of course, reset it: if (isCurrentUserAdmin()) { Session.set('adminJustLogged', null); }
I also thought about calling Meteor.call('someMethod') to fetch the userData object in the method callback, but for now I'm satisfied with this.
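For reference, that alternative could look roughly like this (the method name and exact code are mine, not from the original post):

// server
Meteor.methods({
  getSystemData: function () {
    var user = Meteor.users.findOne(this.userId, {fields: {systemData: 1}});
    return user && user.systemData;
  }
});

// client, e.g. inside the loginWithPassword callback
Meteor.call('getSystemData', function (error, systemData) {
  if (!error && systemData) {
    // redirect based on systemData here
  }
});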
PS: I know that it's not recommended to have plenty of session variables or other reactive data sources just to speed up the app, but I believe this one is not a tragedy :)
PS2: Anyway, thanks for your answers.

belongsTo only being set on first and last member of hasMany

My adapter uses findHasMany to load child records for a hasMany relationship.
My findHasMany adapter method is directly based on the test case for findHasMany. It retrieves the contents of the hasMany on demand, and eventually does the following two operations:
store.loadMany(type, hashes);
// ...
store.loadHasMany(record, relationship.key, ids);
(The full code for the findHasMany is below, in case the issue is there, but I don't think so.)
The really strange behavior is: it seems that somewhere within loadHasMany (or in some subsequent async process) only the first and last child records get their inverse belongsTo property set, even though all the child records are added to the hasMany side. I.e., if posts/1 has 10 comments, this is what I get, after everything has loaded:
var post = App.Posts.find('1');
post.get('comments').objectAt(0).get('post'); // <App.Post:ember123:1>
post.get('comments').objectAt(1).get('post'); // null
post.get('comments').objectAt(2).get('post'); // null
// ...
post.get('comments').objectAt(8).get('post'); // null
post.get('comments').objectAt(9).get('post'); // <App.Post:ember123:1>
My adapter is a subclass of DS.RESTAdapter, and I don't think I'm overloading anything in my adapter or serializer that would cause this behavior.
Has anybody seen something like this before? It's weird enough that I thought someone might know why it's happening.
Extra
Using findHasMany lets me load the contents of the hasMany only when the property is accessed (valuable in my case because calculating the array of IDs would be expensive). So, say I have the classic posts/comments example models; the server returns this for posts/1:
{
post: {
id: 1,
text: "Linkbait!"
comments: "/posts/1/comments"
}
}
Then my adapter can retrieve /posts/1/comments on demand, which looks like this:
{
comments: [
{
id: 201,
text: "Nuh uh"
},
{
id: 202,
text: "Yeah huh"
},
{
id: 203,
text: "Nazi Germany"
}
]
}
Here is the code for the findHasMany method in my adapter:
findHasMany: function(store, record, relationship, details) {
var type = relationship.type;
var root = this.rootForType(type);
var url = (typeof(details) == 'string' || details instanceof String) ? details : this.buildURL(root);
var query = relationship.options.query ? relationship.options.query(record) : {};
this.ajax(url, "GET", {
data: query,
success: function(json) {
var serializer = this.get('serializer');
var pluralRoot = serializer.pluralize(root);
var hashes = json[pluralRoot]; //FIXME: Should call some serializer method to get this?
store.loadMany(type, hashes);
// add ids to record...
var ids = [];
var len = hashes.length;
for(var i = 0; i < len; i++){
ids.push(serializer.extractId(type, hashes[i]));
}
store.loadHasMany(record, relationship.key, ids);
}
});
}
Solution
Override the DS.RelationshipChange.getByReference method by inserting the following code into your app:
DS.RelationshipChange.prototype.getByReference = function(reference) {
var store = this.store;
// return null or undefined if the original reference was null or undefined
if (!reference) { return reference; }
if (reference.record) {
return reference.record;
}
return store.materializeRecord(reference);
};
Yes, this is overriding a private, internal method in Ember Data. Yes, it may break at any time with any update. I'm pretty sure this is a bug in Ember Data, but I'm not 100% certain this is the right solution. But it does solve this problem, and possibly other relationship-related problems.
This fix is designed to be applied to Ember Data master as of 29 Apr 2013.
Reason
DS.Store.loadHasMany calls DS.Model.hasManyDidChange, which retrieves references for all the child records and then sets the hasMany's content to the array of references. This kicks off a chain of observers, eventually calling DS.ManyArray.arrayContentDidChange, in which the first line is this._super.apply(this, arguments);, calling the superclass method Ember.Array.arrayContentDidChange. That Ember.Array method includes an optimization that caches the first and last object in the array and calls objectAt on only those two array members. So there's the part that singles out the first and last record.
Next, since DS.RecordArray implements an objectAtContent method (from Ember.ArrayProxy), the objectAtContent implementation calls DS.Store.recordForReference, which in turn calls DS.Store.materializeRecord. This last function adds a record property to the reference that is passed in as a side effect.
Now we get to what I think is a bug. In DS.ManyArray.arrayContentDidChange, after calling the superclass method, it loops through all the new references and creates a DS.RelationshipChangeAdd instance that encapsulates the owner and child record references. But the first line inside the loop is:
var reference = get(this, 'content').objectAt(i);
Unlike what happens above to the first and last record, this calls objectAt directly on the Ember.NativeArray and bypasses the ArrayProxy methods including the objectAtContent hook, which means that DS.Store.materializeRecord--which adds the record property on the reference object--may have never been called on some references.
Next, the relationship changes created in the loop are immediately afterward (in the same run loop) applied with this call tree: DS.RelationshipChangeAdd.sync -> DS.RelationshipChange.getFirstRecord -> DS.RelationshipChange.getByReference. This last method expects the reference object to have a record property. However, the record property is only set on the first and last reference objects, for reasons explained above. Therefore, for all but the first and last records, the relationship fails to be established because it doesn't have access to the child record object!
The above fix calls DS.Store.materializeRecord whenever the record property doesn't exist on the reference. The last line in the function is the only thing added. On the one hand, it looks like this was the original intention: that var store = this.store; line in the original declares a variable that isn't otherwise used in the function, so what's it there for? Also, without the added line, the function doesn't always return a value, which is a little unusual for a function which is expected to do so. On the other hand, this could lead to mass materialization in some cases where that would be undesirable (but, the relationships just won't work without it in some cases, it seems).
Possibly related
The "chain of observers" I mentioned takes a bit of an odd path. The initiating event was setting the content property on a DS.ManyArray, which extends Ember.ArrayProxy--therefore the content property has a dependent property arrangedContent. Importantly, the observers on arrangedContent are executed before observers on content are executed (see Ember.propertyDidChange). However, the default implementation of Ember.ArrayProxy.arrangedContentArrayDidChange simply calls Ember.Array.arrayContentDidChange, which DS.ManyArray implements! The point being, this looks like a recipe for some code to execute in an unintended order. That is, I think Ember.ManyArray.arrayContentDidChange may getting executed earlier than expected. If this is the case, the above mentioned code that expects the record property to already exist on all references may have been expecting this reasonably, as one of the observers directly on the content property may call DS.Store.materializeRecord on each reference. But I haven't dug deep enough to find out if this is true.

Sproutcore datasources and model relationships

I currently have a Sproutcore app setup with the following relationships on my models:
App.Client = SC.Record.extend({
name: SC.Record.attr(String),
brands: SC.Record.toMany('App.Brand', {isMaster: YES, inverse: 'client'})
});
App.Brand = SC.Record.extend({
name: SC.Record.attr(String),
client: SC.Record.toOne('App.Client', {isMaster: NO, inverse: 'brands'})
});
When I was working with fixtures my fixture for a client looked like this:
{
guid: 1,
name: 'My client',
brands: [1, 2]
}
And my fixture for a brand looked like this:
{
guid: 1,
name: 'My brand',
client: 1
}
Which all worked fine for me getting a clients brands and getting a brands client.
My question is about how datasources then fit into this and how the server response should be formatted.
Should the data returned from the server mirror exactly the format of the fixtures file? So clients should always contain a brands property containing an array of brand ids? And vice versa.
If I have a source list view which displays Clients with their brands grouped below them, how would I go about loading that data for the source view with my datasource? Should I make a call to the server to get all the Clients and then follow that up with a call to fetch all the brands?
Thanks
Mark
The json you return will mostly mirror the fixtures. I recently had pretty much the same question as you, so I built a backend in Grails and a front end in SC, just to explore the store and datasources. My models are:
Scds.Project = SC.Record.extend(
/** #scope Scds.Project.prototype */ {
primaryKey: 'id',
name: SC.Record.attr(String),
tasks: SC.Record.toMany("Scds.Task", {
isMaster: YES,
inverse: 'project'
})
});
Scds.Task = SC.Record.extend(
/** #scope Scds.Task.prototype */ {
name: SC.Record.attr(String),
project: SC.Record.toOne("Scds.Project", {
isMaster: NO
})
});
The json returned for Projects is
[{"id":1,"name":"Project 1","tasks":[1,2,3,4,5]},{"id":2,"name":"Project 2","tasks":[6,7,8]}]
and the json returned for tasks, when I select a Project, is
{"id":1,"name":"task 1"}
Obviously, this is the JSON for one task only. If you look at the projects JSON, you see that I put a "tasks" array with ids in it -- that's how the internals know which tasks to get. So to answer your first question, you don't need the id from child to parent; you need the parent to load with all of its children ids, so the JSON does not match the fixtures exactly.
Now, it gets a bit tricky. When I load the app, I do a query to get all the Projects. The store calls the fetch method on the datasource. Here is my implementation.
Scds.PROJECTS_QUERY = SC.Query.local(Scds.Project);
var projects = Scds.store.find(Scds.PROJECTS_QUERY);
...
fetch: function(store, query) {
console.log('fetch called');
if (query === Scds.PROJECTS_QUERY) {
console.log('fetch projects');
SC.Request.getUrl('scds/project/list').json().
notify(this, '_projectsLoaded', store, query).
send();
} else if (query === Scds.TASKS_QUERY) {
console.log('tasks query');
}
return YES; // return YES if you handled the query
},
_projectsLoaded: function(response, store, query) {
console.log('projects loaded....');
if (SC.ok(response)) {
var recordType = query.get('recordType'),
records = response.get('body');
store.loadRecords(recordType, records);
store.dataSourceDidFetchQuery(query);
Scds.Statechart.sendEvent('projectsLoaded')
} else {
console.log('oops...error loading projects');
// Tell the store that your server returned an error
store.dataSourceDidErrorQuery(query, response);
}
}
This will get the Projects, but not the tasks. Sproutcore knows that as soon as I access the tasks array on a Project, it needs to get them. What it does is call retrieveRecords in the datasource. That method in turn calls retrieveRecord for every id in the tasks array. My retrieveRecord method looks like
retrieveRecord: function(store, storeKey) {
var id = Scds.store.idFor(storeKey);
console.log('retrieveRecord called with [storeKey, id] [%#, %#]'.fmt(storeKey, id));
SC.Request.getUrl('scds/task/get/%#'.fmt(id)).json().
notify(this, "_didRetrieveRecord", store, storeKey).
send();
return YES;
},
_didRetrieveRecord: function(response, store, storeKey) {
if (SC.ok(response)) {
console.log('succesfully loaded task %#'.fmt(response.get('body')));
var dataHash = response.get('body');
store.dataSourceDidComplete(storeKey, dataHash);
} ...
},
Note that you should use sc-gen to generate your datasource, because it provides a fairly well fleshed-out stub that guides you towards the methods you need to implement. It does not provide a retrieveRecords implementation, but you can provide your own if you don't want to do a single request for each child record you are loading.
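For example, a rough sketch of such a bulk implementation could look like the following (the endpoint and the assumption that the server returns one hash per id, in request order, are mine, not part of the original answer):

retrieveRecords: function(store, storeKeys, ids) {
  ids = ids || storeKeys.map(function(storeKey) { return store.idFor(storeKey); });
  SC.Request.getUrl('scds/task/list?ids=' + ids.join(',')).json().
    notify(this, '_didRetrieveRecords', store, storeKeys).
    send();
  return YES;
},

_didRetrieveRecords: function(response, store, storeKeys) {
  if (SC.ok(response)) {
    var hashes = response.get('body');
    // assumes one hash per requested id, in the same order as the request
    storeKeys.forEach(function(storeKey, index) {
      store.dataSourceDidComplete(storeKey, hashes[index]);
    });
  } else {
    storeKeys.forEach(function(storeKey) {
      store.dataSourceDidError(storeKey, response);
    });
  }
}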
Note that you always have options. If I wanted to, I could have created a Tasks query and loaded all the tasks data up front; that way I wouldn't need to go to my server when I clicked a project. So in answer to your second question, it depends. You can either load the brands when you click on the client, or you can load all the data up front, which is probably a good idea if there isn't that much data.