I have a User entity that has a subscriptions property. This is an array of IDs.
When I perform a fetch, the API will populate those subscriptions, and return something like this:
{
  subscriptions: [1, 2, 3],
  __subscriptions: [
    {
      id: 1,
      name: 'Example'
    },
    {
      id: 2,
      name: 'Example'
    },
    {
      id: 3,
      name: 'Example'
    }
  ]
}
I have done this so that I can still perform actions on the original subscriptions and then save them back to the API. Any changes I make to __subscriptions will not be persisted as the API doesn't recognise this field – it is simply the populated data.
In the parse function of my User, I create the nested collection:
parse: function (response) {
  this.subscriptions = new Subscriptions(response.__subscriptions)
}
However, if I want to remove a subscription, I have to splice it from the subscriptions field of the User entity, and then I also have to remove it from the subscriptions collection that is nested as a property on the User:
// Clone the subscriptions property, delete the model with a matching ID, and then set it again.
var value = _.clone(this.get('subscriptions'))
// Use splice instead of delete so that we don't leave an undefined value
// in the array
value.splice(value.indexOf(model.id), 1)
// Also remove the same model from the nested collection
var removedSubscription = this.subscriptions.get(model)
this.subscriptions.remove(removedSubscription)
this.set('subscriptions', value)
this.save()
This is sort of annoying. Ideally, removing an ID from the subscriptions property should automatically update the collection.
Does this seem like a good way to deal with nested models and collections? I've heard bad things about Backbone.Relational so I was interested in a simpler solution.
I would listen to events on the Subscriptions collection and update the subscriptions attribute accordingly.
var User = Backbone.Model.extend({
  initialize: function () {
    this.subscriptions = new Subscriptions();
    this.subscriptions.on('add remove', this.updateSubscriptions, this);
  },
  updateSubscriptions: function () {
    this.set('subscriptions', this.subscriptions.pluck('id'));
  },
  parse: function (response) {
    this.subscriptions.reset(response.__subscriptions);
    return Backbone.Model.prototype.parse.call(this, response);
  }
});
So then removing a subscription will update the subscriptions attribute of the user model:
user.subscriptions.remove(subscription)
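And if the change should also be pushed to the API right away, the save can simply follow the removal (a small usage sketch; whether you want one request per change is up to you):
user.subscriptions.remove(subscription)
user.save()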
I have an object which has many sub-collections; one of the sub-collections usually has more than 100 items, and each of those has multiple nested objects. So I want to get the deep data of the object but filter out just one sub-collection, so that the response time and the amount of data are minimised.
I want the deep data of an object, but I want to prevent Backand from going deep into one of the sub-collections.
{
  sub_A: [1, 2, 3],
  sub_B: [1, 2, 3],
  sub_C: [1, 2, 3],
  sub_D: [1, 2, 3]
}
Let's say, for the above object, is it possible to get everything except sub_D?
You cannot use a filter with deep, but you can create an on-demand action for that. Here is an example for a user with many items:
function backandCallback(userInput, dbRow, parameters, userProfile) {
  // get the user's main-level information
  var user = $http({
    method: "GET",
    url: CONSTS.apiUrl + "/1/objects/users/" + parameters.userId
  });
  // get the user's related items
  var userItems = $http({
    method: "GET",
    url: CONSTS.apiUrl + "/1/objects/items",
    params: {
      filter: [{
        fieldName: "user",
        operator: "in",
        value: user.id
      },
      {
        fieldName: "name",
        operator: "contains",
        value: parameters.namePart
      }]
    }
  });
  // attach the items to the user object
  user.items = userItems.data;
  return user;
}
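Once saved as an on-demand action on the users object, it can be called from the client. As far as I remember, on-demand actions are triggered through the /1/objects/action endpoint; the action name and parameter names below are placeholders, so check the Actions tab in your Backand dashboard for the exact URL and parameter encoding:
// Assumed invocation pattern for an on-demand action named "userWithItems";
// verify the exact endpoint and parameter format against your Backand dashboard.
$http({
  method: "GET",
  url: "https://api.backand.com/1/objects/action/users",
  params: {
    name: "userWithItems",                          // the name you gave the action (placeholder)
    parameters: { userId: 42, namePart: "phone" }   // passed through to the action as `parameters`
  }
});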
So I've been struggling for a few hours now with a one-to-many mapping update.
I've got a project which has certain tasks (for example).
I add and remove tasks through the frontend and send the revised object to my backend running Sequelize.
Then I tried to update the records as follows:
return models.Project
  .findOne({
    where: { id: projectToUpdate.id },
    include: [models.Task]
  })
  .then(function (ProjectFromDb) {
    return models.sequelize
      .transaction({
        isolationLevel: models.sequelize.Transaction.ISOLATION_LEVELS.READ_COMMITTED
      },
      function (t) {
        return ProjectFromDb
          .update(projectToUpdate, {
            include: [{ model: models.Task }]
          });
      });
  })
  .then(function (result) {
    return output.getSuccessResult(....
  })
  .catch(function (error) {
    return output.getErrorResult(....
  });
But this would only update the Project.
Next I tried to update them with an additional then call:
.then(function (updateResult) {
  return updateResult.setTasks(projectToUpdate.Tasks, { transaction: t });
})
But this would give me an error saying that it is trying to update the Task and set its ProjectId to NULL, which is not possible because the column is non-nullable.
I am currently adding and removing the tasks "manually", but this seems like a silly way of using the framework.
Can anyone tell me how to properly make this work with a one-to-many relationship, without me calling Tasks.bulkCreate and Tasks.destroy?
EDIT TO INCLUDE MODEL
JSON object looks like this:
{
  id: 1,
  projectName: 'nameOfTheProject',
  Tasks: [
    {
      projectId: 1,
      name: 'taskName'
    }
  ]
}
Please try changing the property name projectId to ProjectId on the Tasks objects that are nested in the projectToUpdate object.
Update
Looking at Sequelize's source, it seems that the Instance.$save() function (which is called by the Instance.$update() you're using) does not support nested model creation when updating an existing record: it checks whether the wasNewRecord flag is true before doing so.
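If the rename doesn't get you all the way there, the usual workaround is to do the sync explicitly inside one transaction. This is essentially the "manual" route you mention, just kept atomic; it is only a sketch and assumes the Tasks carry a ProjectId foreign key and that replacing the project's tasks wholesale is acceptable:
return models.sequelize.transaction(function (t) {
  return models.Project
    .findOne({ where: { id: projectToUpdate.id }, transaction: t })
    .then(function (project) {
      // update the project's own columns
      return project.update(projectToUpdate, { transaction: t });
    })
    .then(function (project) {
      // replace the existing tasks with the revised set
      return models.Task
        .destroy({ where: { ProjectId: project.id }, transaction: t })
        .then(function () {
          var tasks = projectToUpdate.Tasks.map(function (task) {
            task.ProjectId = project.id;
            return task;
          });
          return models.Task.bulkCreate(tasks, { transaction: t });
        });
    });
});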
I am creating a Memory store as follows:
var someData = [
  { id: 1, name: "One" },
  { id: 2, name: "Two" }
];
store = new Memory({
  data: someData,
  id: "userStore"
});
I was wondering if there is a way to query the Memory store to return the store instance by id. Like
var storePresent = Memory.getById("userStore")
something similar to
dijit.registry.byId();
that returns the instance of dijit specified by id
To my knowledge, there is not a store registry as you describe. You will need to code this yourself in your application's controller code.
A store is a simple Object.
You could:
Pass the store manually around your code.
Code a registry AMD module (caution, here be dragons).
The only exception to this rule is if you're already using dojox/app as your controller layer. That has some named store abilities. If not, I would not recommend refactoring to use it.
There's no built-in static repository of memory stores in the dojo/store/Memory module. If you need something like that, the easiest way is to write a custom factory of memory stores that holds static references to all the stores it creates:
define(["dojo/store/Memory"], function(Memory){
var repository = {}
return {
getStore: function(id) {
return repository[id]
},
createStore: function(id, params) {
var memory = new Memory(params)
repository[id] = memory
return memory
}
}
});
The usage:
require(["modules/MemoryRepository"], function(MemoryRepository) {
MemoryRepository.createStore("userStore", {data: someData})
...
var userStore = MemoryRepository.getStore("userStore")
})
If you are going to create a lot of stores on demand, you should also think about deregistering them (removing the references from the factory). Memory concerns are probably the reason something like this is not provided out of the box.
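If deregistration turns out to be needed, the factory above could be extended with something like the following (just a sketch, and the removeStore name is only a suggestion):
removeStore: function(id) {
  // drop the reference so the store can be garbage collected
  delete repository[id]
}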
As the other answerers already said, there's no specific repository or registry for stores. However, dijit/registry can be used to hold the reference as well, by using the dijit/registry::add() function, for example:
// Add to registry
registry.add(new Memory({
  id: "userStore",
  data: [{
    name: "Smith",
    firstname: "John"
  }, {
    name: "Doe",
    firstname: "John"
  }]
}));
Then you can retrieve it by using the dijit/registry::byId() function, for example:
// Query the store by using the registry
registry.byId("userStore").query({
  firstname: "John"
}).forEach(function(person) {
  console.log(person.firstname + " " + person.name);
});
A full example can be found on JSFiddle: http://jsfiddle.net/mn94f/
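And if stores are registered this way on demand, the reference can be dropped again with dijit/registry::remove(), which only removes the entry from the registry without destroying anything:
// Remove the store's entry from the registry when it is no longer needed
registry.remove("userStore");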
I have a route that should load a model (BatchDetail) and a number of related items (BatchItems). Since there are a great number of items I should be able to do pagination with the help of two request parameters, limit and offset.
Here is the route I set up:
App.BatchDetailRoute = Ember.Route.extend({
  model: function(params) {
    var store = this.get('store');
    var adapter = store.get('adapter');
    var id = params.batch_detail_id;
    var rejectionHandler = function(reason) {
      Ember.Logger.error(reason, reason.message);
      throw reason;
    };

    return adapter.ajax("/batch_details/" + id, "GET", {
      data: { limit: 50, offset: 100 }
    }).then(function(json) {
      adapter.didFindRecord(store, App.BatchDetail, json, id);
    }).then(null, rejectionHandler);
  },

  setupController: function(controller, model) {
    return this.controllerFor('batchItems').set('model', model.get('items'));
  }
});
This way, when I go to /batch_details/1, my REST adapter fetches the correct data, which I receive as json in the code above.
Now, the model hook should return a model object or a promise that can be resolved to a model object, and that's where the problem lies. In setupController (which runs after the model hook) model is set to undefined and so my code explodes.
That means that whatever adapter.ajax returns does not resolve correctly but instead returns undefined. I'm baffled, since the above mechanism is exactly how the different find methods in ember-data (findById, findByQuery, etc.) work and that's where I got my idea from.
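My understanding of promise chaining is that each then callback has to return a value (or another promise) for the next step to receive anything, so I would have expected to need something like this (a rough sketch; whether a store lookup is the right thing to return is exactly what I'm unsure about):
return adapter.ajax("/batch_details/" + id, "GET", {
  data: { limit: 50, offset: 100 }
}).then(function(json) {
  adapter.didFindRecord(store, App.BatchDetail, json, id);
  // presumably the loaded record has to be handed back here so the
  // model hook receives it instead of undefined
  return store.find(App.BatchDetail, id);
}).then(null, rejectionHandler);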
Can you shed some light on what I'm not getting?
Thank you.
I currently have a Sproutcore app setup with the following relationships on my models:
App.Client = SC.Record.extend({
  name: SC.Record.attr(String),
  brands: SC.Record.toMany('App.Brand', { isMaster: YES, inverse: 'client' })
});
App.Brand = SC.Record.extend({
  name: SC.Record.attr(String),
  client: SC.Record.toOne('App.Client', { isMaster: NO, inverse: 'brands' })
});
When I was working with fixtures my fixture for a client looked like this:
{
  guid: 1,
  name: 'My client',
  brands: [1, 2]
}
And my fixture for a brand looked like this:
{
  guid: 1,
  name: 'My brand',
  client: 1
}
Which all worked fine for me, both for getting a client's brands and for getting a brand's client.
My question is about how data sources then fit into this, and how the server response should be formatted.
Should the data returned from the server mirror exactly the format of the fixtures file? So clients should always contain a brands property containing an array of brand ids? And vice versa.
If I have a source list view which displays Clients with their Brands grouped below them, how would I go about loading that data for the source view with my data source? Should I make a call to the server to get all the Clients and then follow that up with a call to fetch all the Brands?
Thanks
Mark
The json you return will mostly mirror the fixtures. I recently had pretty much the same question as you, so I built a backend in Grails and a front end in SC, just to explore the store and datasources. My models are:
Scds.Project = SC.Record.extend(
  /** @scope Scds.Project.prototype */ {

  primaryKey: 'id',
  name: SC.Record.attr(String),
  tasks: SC.Record.toMany("Scds.Task", {
    isMaster: YES,
    inverse: 'project'
  })
});

Scds.Task = SC.Record.extend(
  /** @scope Scds.Task.prototype */ {

  name: SC.Record.attr(String),
  project: SC.Record.toOne("Scds.Project", {
    isMaster: NO
  })
});
The json returned for Projects is
[{"id":1,"name":"Project 1","tasks":[1,2,3,4,5]},{"id":2,"name":"Project 2","tasks":[6,7,8]}]
and the json returned for tasks, when I select a Project, is
{"id":1,"name":"task 1"}
Obviously, this is the json for one task only. If you look at the projects json, you see that I put a "tasks" array with ids in it; that's how the internals know which tasks to get. So, to answer your first question, you don't need the id from child to parent; you need the parent to load with all of its children's ids, so the json does not match the fixtures exactly.
Now, it gets a bit tricky. When I load the app, I do a query to get all the Projects. The store calls the fetch method on the datasource. Here is my implementation.
Scds.PROJECTS_QUERY = SC.Query.local(Scds.Project);
var projects = Scds.store.find(Scds.PROJECTS_QUERY);
...
fetch: function(store, query) {
  console.log('fetch called');
  if (query === Scds.PROJECTS_QUERY) {
    console.log('fetch projects');
    SC.Request.getUrl('scds/project/list').json().
      notify(this, '_projectsLoaded', store, query).
      send();
  } else if (query === Scds.TASKS_QUERY) {
    console.log('tasks query');
  }
  return YES; // return YES if you handled the query
},
_projectsLoaded: function(response, store, query) {
  console.log('projects loaded....');
  if (SC.ok(response)) {
    var recordType = query.get('recordType'),
        records = response.get('body');
    store.loadRecords(recordType, records);
    store.dataSourceDidFetchQuery(query);
    Scds.Statechart.sendEvent('projectsLoaded');
  } else {
    console.log('oops...error loading projects');
    // Tell the store that your server returned an error
    store.dataSourceDidErrorQuery(query, response);
  }
}
This will get the Projects, but not the tasks. Sproutcore knows that as soon as I access the tasks array on a Project, it needs to get them. What it does is call retrieveRecords in the datasource. That method in turn calls retrieveRecord for every id in the tasks array. My retrieveRecord method looks like this:
retrieveRecord: function(store, storeKey) {
  var id = Scds.store.idFor(storeKey);
  console.log('retrieveRecord called with [storeKey, id] [%@, %@]'.fmt(storeKey, id));
  SC.Request.getUrl('scds/task/get/%@'.fmt(id)).json().
    notify(this, "_didRetrieveRecord", store, storeKey).
    send();
  return YES;
},

_didRetrieveRecord: function(response, store, storeKey) {
  if (SC.ok(response)) {
    console.log('successfully loaded task %@'.fmt(response.get('body')));
    var dataHash = response.get('body');
    store.dataSourceDidComplete(storeKey, dataHash);
  } ...
},
Note that you should use sc-gen to generate your datasource, because it provides a fairly well fleshed-out stub that guides you towards the methods you need to implement. It does not provide a retrieveRecords implementation, but you can provide your own if you don't want to do a single request for each child record you are loading.
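For completeness, a batched version could override retrieveRecords instead. This is only a sketch: the URL, the ids request parameter, and the response shape (an array of task hashes) are assumptions about your backend, so adjust them to whatever your server actually exposes.
retrieveRecords: function(store, storeKeys, ids) {
  // collect the ids for all requested records and fetch them in one request
  var taskIds = storeKeys.map(function(storeKey) {
    return store.idFor(storeKey);
  });
  // assumed endpoint that returns an array of task hashes for the given ids
  SC.Request.getUrl('scds/task/list?ids=%@'.fmt(taskIds.join(','))).json().
    notify(this, '_didRetrieveTasks', store).
    send();
  return YES;
},

_didRetrieveTasks: function(response, store) {
  if (SC.ok(response)) {
    // assume the body is an array of task hashes, each carrying its id
    response.get('body').forEach(function(dataHash) {
      var storeKey = store.storeKeyFor(Scds.Task, dataHash.id);
      store.dataSourceDidComplete(storeKey, dataHash);
    });
  }
},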
Note that you always have options. If I wanted to, I could have created a Tasks query and loaded all the task data up front; that way I wouldn't need to go to my server when I clicked a project. So, in answer to your second question, it depends. You can either load the brands when you click on the client, or you can load all the data up front, which is probably a good idea if there isn't that much data.