In ST 1.x I had no problem syncing an online store to an offline store with the method below, but in STB2 the sync no longer seems to work. I can see the records being output to the console. Is anyone else having this issue? I believe it may be a bug...
var remoteStore = Ext.getStore('UpdateConfig');
var localStore = Ext.getStore('UpdateLocalConfig');

remoteStore.each(function (record) {
    localStore.add(record.data);
    console.log(record.data);
});
localStore.sync();
This was answered on the Sencha Touch 2 Forums by TommyMaintz, but I wanted to give the answer here as well.
"One thing I think I see which is wrong is that you are adding a record to the LocalStore using the record.data. In ST2 we now have a Model cache. This means that if you create two instances with the exact same model and id, the second time you create that instance it will just return the already existing instance. This means that if you sync your local store, it won't recognize that record as a 'phantom' record because it already has an id. What you would have to do in your case if you want to make a "copy" of your record by using all the data but removing the id. This will generate a new simple id for it and when you save it to your local storage it will generate a proper local id for it.
When I tried doing this I noticed the "copy" method on Model hasn't been updated to handle this. If you apply the following override you should be able to do localStore.add(record.copy()); localStore.sync()"
Ext.define('Ext.data.ModelCopyFix', {
    override: 'Ext.data.Model',

    /**
     * Creates a copy (clone) of this Model instance.
     *
     * @param {String} newId A new id. If you don't specify this, a new id will be generated for you.
     * To generate a phantom instance with a new id use:
     *
     *     var rec = record.copy(); // clone the record with a new id
     *
     * @return {Ext.data.Model}
     */
    copy: function(newId) {
        var me = this,
            idProperty = me.getIdProperty(),
            raw = Ext.apply({}, me.raw),
            data = Ext.apply({}, me.data);

        // Strip the id so the copy is treated as a phantom record.
        delete raw[idProperty];
        delete data[idProperty];

        return new me.self(null, newId, raw, data);
    }
});
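With the override in place, the sync from the original question becomes a copy-then-sync loop. A minimal sketch, using the same two stores as the question:

var remoteStore = Ext.getStore('UpdateConfig');
var localStore = Ext.getStore('UpdateLocalConfig');

remoteStore.each(function (record) {
    // copy() now strips the id, so the local store treats the record as a
    // phantom and persists it with a freshly generated local id on sync.
    localStore.add(record.copy());
});
localStore.sync();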
I'm new here. Thanks in advance for your advice.
I’m working on an app which will ask the user how many items they made.
The user will enter a number. My app should then create that many new records in a table called 'Items_Made'.
E.g. The app asks “How many items did you make?”, the user enters “19”, the app then creates 19 new records in the 'Items_Made' table.
I've managed to pull together some code (shown below) that creates ONE new record, but I would like it to create several. I probably need some kind of loop or 'while' statement, but I'm unsure how to write it.
var createDatasource = app.datasources.Items_Made.modes.create;
var newItem = createDatasource.item;
createDatasource.createItem();
This code successfully creates 1 record. I would like it to be able to create several.
Creating a lot of records via client script is not recommended, especially if you lose connection or the app gets closed by mistake. In my opinion, the best way to handle this is via server script, for two reasons: first, it's more reliable, and second, it's faster. As in the example from the official documentation, to create a record you need to do something like this:
// Assume a model called "Fruits" with a string field called "Name".
var newRecord = app.models.Fruits.newRecord();
newRecord.Name = "Kiwi"; // properties/fields can be read and written.
app.saveRecords([newRecord]); // save changes to database.
The example above shows how to create a single record. To create several records at once, you can use a for statement like this:
function createRecordsInBulk() {
    var newRecords = [];
    for (var i = 0; i < 19; i++) {
        var newRecord = app.models.Fruits.newRecord();
        newRecord.Name = "Kiwi " + i;
        newRecords.push(newRecord);
    }
    app.saveRecords(newRecords);
}
In the example above, you initialize newRecords, an empty array that will hold all the new records to be created at once. Then, using a for statement, you generate 19 new records and push them into newRecords. Finally, once the loop is finished, you save all the records at once by calling app.saveRecords with the newRecords array as an argument.
Now, all this is happening on the server side. Obviously you need a way to call this from the client side. For that, you need to use the google.script.run method. So from the client side you need to do the following:
google.script.run.withSuccessHandler(function(result) {
    app.datasources.Fruits.load();
}).createRecordsInBulk();
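In the original question the count comes from user input rather than being fixed at 19. A sketch of passing it through to the server function (the ItemsMadeCount widget name and the Items_Made model are assumptions based on the question):

// Server side: accept the count as a parameter.
function createRecordsInBulk(count) {
    var newRecords = [];
    for (var i = 0; i < count; i++) {
        newRecords.push(app.models.Items_Made.newRecord());
    }
    app.saveRecords(newRecords);
}

// Client side: read the user's number and pass it to the server.
var count = Number(widget.root.descendants.ItemsMadeCount.value);
google.script.run.withSuccessHandler(function(result) {
    app.datasources.Items_Made.load();
}).createRecordsInBulk(count);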
All this information is clearly documented on the official App Maker documentation site. I strongly suggest always checking there first, as I believe you can get a faster resolution by reading the documentation.
I'd suggest making a dropdown or textbox where the user can select/enter the number of items they want to create and then attach the following code to your 'Create' button:
var createDatasource = app.datasources.Items_Made.modes.create;
var userinput = Number(widget.root.descendants.YourTextboxOrDropdown.value);
// Use < rather than <= so exactly 'userinput' records are created.
for (var i = 0; i < userinput; i++) {
    var newItem = createDatasource.item;
    createDatasource.createItem();
}
A simple loop over your user input should get this accomplished.
I'm studying the SenseNet framework and have installed it successfully on my computer, and now I'm developing our website based on this framework. I've read the documents on the wiki and understood the relationship between Database <-> Properties <-> Fields <-> View (you can see the image at this link: http://wiki.sensenet.com/Field_-_for_Developers). Suppose I added a new table to SenseNet's database and want to show all the data inside this table on our page, but I don't know how to develop the flow through this model: Database <=> Property <=> Field <=> View. Can you show me the steps?
Please consider storing your data in the SenseNet Content Repository instead of keeping custom tables in the database. It is much easier to work with regular content items, and you will have all the features the repository offers - e.g. indexing, permissions, and of course an existing UI. To do this, you will have to take the following steps:
Define content types in SenseNet for every entity type you have in your existing db (in the example below this is the Car type).
Create a container in the Content Repository where you want to put your content (in this case this is a Cars custom list under the default site).
Create a command line tool using the SenseNet Client library to migrate your existing data to the Content Repository.
To see the example in detail, please check out this article:
How to migrate an existing database to the Content Repository
The core of the example is really the few lines of code that actually save content items into the Content Repository (through the REST API):
using (var conn = new SqlConnection(ConnectionString))
{
    await conn.OpenAsync();

    using (var command = new SqlCommand("SELECT * FROM Cars", conn))
    {
        using (var reader = await command.ExecuteReaderAsync())
        {
            while (await reader.ReadAsync())
            {
                var id = reader.GetInt32(0);
                var make = reader.GetString(1);
                var model = reader.GetString(2);
                var price = reader.GetInt32(3);

                // Build a new content in memory and fill custom metadata fields. No need to create
                // strongly typed objects here as the client Content is a dynamic type.
                // Parent path is a Content Repository path, e.g. "/Root/Sites/Default_Site/Cars"
                dynamic car = Content.CreateNew(ParentPath, "Car", "Car-" + id);
                car.Make = make;
                car.Model = model;
                car.Price = price;

                // save it through the HTTP REST API
                await car.SaveAsync();

                Console.WriteLine("Car-" + id + " saved.");
            }
        }
    }
}
I currently have a RavenDB database with a model that has a specific set of fields I have been working with. I realized there is a field or two I need to add, and I have successfully used RavenDB's patch request once to initialize those fields on all the pre-existing documents. I wanted to add another field, but I cannot get the patch code to run a second time to update my documents again. Is there any documentation, or a method, to check the database at deploy time to see whether the models are the same, patch the ones that are not while leaving the ones that are alone, and ensure that after an update the pre-existing models are not reset to whatever the patch is patching?
private void updateDb(IDocumentStore store)
{
    store.DatabaseCommands.UpdateByIndex("Interviews_ByCandidateInterviewAndDate",
        new IndexQuery
        {
            Query = "Candidate:"
        },
        new[]
        {
            new PatchRequest
            {
                Type = PatchCommandType.Set,
                Name = "IsArchived",
                Value = true
            },
            new PatchRequest
            {
                Type = PatchCommandType.Set,
                Name = "ArchiveDate",
                Value = null
            },
            new PatchRequest
            {
                Type = PatchCommandType.Set,
                Name = "TestingField",
                Value = 14
            }
        },
        new BulkOperationOptions
        {
            AllowStale = false
        });
}
The first two patch requests went through and show up in the database. One thing I cannot tell is: if I were to run this patch again to get that third field into the model, would it change the already-existing values of the first two fields back to true and null, or would it leave them the way they are? More importantly, I cannot get this code to run again.
Any pointers in the right direction would be greatly appreciated! Thanks.
Your query is wrong:
Query = "Candidate:"
This is an invalid query and returns no results.
Use:
Query = "Candidate:*"
Just getting started with dojo/JsonRest, but I'm having some problems sending updates back to my server. I've got two questions that I'm stuck on.
The code below produces a grid with one of the columns set to editable.
The primary key in my json data is the "jobName" attribute (hence idAttribute in the JsonRest store).
First question, about the URI in the PUT:
- When I call dataStore.save() the server gets a PUT, but the URI is /myrestservice/Jobs/0.9877865987 (the number changes each time, but it is always a float).
- I don't see where dojo is getting the float from; it's not my idAttribute value for that row. How can I get the PUT to respect the idAttribute in the JsonRest store?
- I did try setting idProperty in the Memory store to "jobName", but that changed the PUT into a POST and removed the float, and I still don't get a jobName in the URI, which is what my REST server needs.
Second question, about the content of the PUT:
- The PUT contains the whole row. I'd really just like the idAttribute and the data that changed - is that possible?
I've been through the examples and docs, but there aren't many examples of handling the PUT/POST part of JsonRest.
Thanks
var userMemoryStore = new dojo.store.Memory();
var userJsonRestStore = new dojo.store.JsonRest({ target: "/myrestservice/Jobs/", idAttribute: "jobName" });
var jsonStore = new dojo.store.Cache(userJsonRestStore, userMemoryStore);
var dataStore = new dojo.data.ObjectStore({ objectStore: jsonStore });

/* create a new grid */
var grid = new dojox.grid.DataGrid({
    id: 'grid',
    store: dataStore,
    structure: layout,
    rowSelector: '20px'
}, "gridDiv");

grid.startup();

dojo.query("#save").onclick(function() {
    dataStore.save();
});
I think you want idProperty, not idAttribute. It might also help to set idProperty in the Memory store being used as the cache; that may be what's generating the random float.
As for the second question, that'd probably require customization; I don't believe OOTB stores (or grids) generally expect to send partial items.
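A minimal sketch of that store setup, with idProperty on both stores (same target and key field as the question):

// idProperty (not idAttribute) tells the stores which field is the key, so
// saves PUT to /myrestservice/Jobs/<jobName> instead of a generated number.
var userMemoryStore = new dojo.store.Memory({ idProperty: "jobName" });
var userJsonRestStore = new dojo.store.JsonRest({
    target: "/myrestservice/Jobs/",
    idProperty: "jobName"
});
var jsonStore = new dojo.store.Cache(userJsonRestStore, userMemoryStore);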
I have a bunch of existing sagas in various states of a long running process.
Recently we decided to make one of the properties on our IContainSagaData implementation unique by using the Saga.UniqueAttribute (more on that here: http://docs.particular.net/nservicebus/nservicebus-sagas-and-concurrency).
After deploying the change, we realized that all our old saga instances were not being found, and after further digging (thanks Charlie!) discovered that by adding the unique attribute, we were required to data fix all our existing sagas in Raven.
Now, this is pretty poor (kind of like adding an index to a database column and then finding that all the table data is no longer selectable), but it is what it is, so we decided to create a tool for doing this.
So after creating and running this tool we've now patched up the old sagas so that they now resemble the new sagas (sagas created since we went live with the change).
However, despite all the data now looking right we're still not able to find old instances of the saga!
The tool we wrote does two things. For each existing saga, the tool:
Adds a new RavenJToken called "NServiceBus-UniqueValue" to the saga metadata, setting the value to the same value as our unique property for that saga, and
Creates a new document of type NServiceBus.Persistence.Raven.SagaPersister.SagaUniqueIdentity, setting the SagaId, SagaDocId, and UniqueValue fields accordingly.
My questions are:
Is it sufficient to simply make the data look correct or is there something else we need to do?
Another option we have is to revert the change which added the unique attribute. However in this scenario, would those new sagas which have been created since the change went in be OK with this?
Code for adding metadata token:
var policyKey = RavenJToken.FromObject(saga.PolicyKey); // This is the unique field
sagaDataMetadata.Add("NServiceBus-UniqueValue", policyKey);
Code for adding new doc:
var policyKeySagaUniqueId = new SagaUniqueIdentity
{
    Id = "Matlock.Renewals.RenewalSaga.RenewalSagaData/PolicyKey/" + Guid.NewGuid().ToString(),
    SagaId = saga.Id,
    UniqueValue = saga.PolicyKey,
    SagaDocId = "RenewalSaga/" + saga.Id.ToString()
};
session.Store(policyKeySagaUniqueId);
Any help much appreciated.
EDIT
Thanks to David's help on this, we have fixed our problem. The key difference was that we used SagaUniqueIdentity.FormatId() to generate our document IDs rather than a new Guid; this was trivial to do since we were already referencing the NServiceBus and NServiceBus.Core assemblies.
The short answer is that it is not enough to make the data resemble the new identity documents. Where you are using Guid.NewGuid().ToString(), that data is important! That's why your solution isn't working right now. I spoke about the concept of identity documents (specifically about the NServiceBus use case) during the last quarter of my talk at RavenConf 2014 - here are the slides and video.
So here is the long answer:
In RavenDB, the only ACID guarantees are on the Load/Store by Id operations. So if two threads are acting on the same Saga concurrently, and one stores the Saga data, the second thread can only expect to get back the correct saga data if it is also loading a document by its Id.
To guarantee this, the Raven saga persister uses an identity document like the one you showed. It contains the SagaId, the UniqueValue (mostly for human comprehension and debugging; the database doesn't technically need it), and the SagaDocId (which is a little duplication, as it's only {SagaTypeName}/{SagaId} and we already have the SagaId).
With the SagaDocId, we can use the Include feature of RavenDB to do a query like this (which is from memory, probably wrong, and should only serve to illustrate the concept as pseudocode)...
var identityDocId = // some value based on the incoming message

var idDoc = RavenSession
    // Look at the identity doc's SagaDocId and pull back that document too!
    .Include<SagaIdentity>(identityDoc => identityDoc.SagaDocId)
    .Load(identityDocId);

var sagaData = RavenSession
    .Load(idDoc.SagaDocId); // Already in-memory, no 2nd round-trip to database!
So the identityDocId is very important, because it encodes the uniqueness of the value coming from the message; not just any old Guid will do. So what we really need to know is how to calculate it.
For that, the NServiceBus saga persister code is instructive:
void StoreUniqueProperty(IContainSagaData saga)
{
    var uniqueProperty = UniqueAttribute.GetUniqueProperty(saga);
    if (!uniqueProperty.HasValue) return;

    var id = SagaUniqueIdentity.FormatId(saga.GetType(), uniqueProperty.Value);
    var sagaDocId = sessionFactory.Store.Conventions.FindFullDocumentKeyFromNonStringIdentifier(saga.Id, saga.GetType(), false);

    Session.Store(new SagaUniqueIdentity
    {
        Id = id,
        SagaId = saga.Id,
        UniqueValue = uniqueProperty.Value.Value,
        SagaDocId = sagaDocId
    });

    SetUniqueValueMetadata(saga, uniqueProperty.Value);
}
The important part is the SagaUniqueIdentity.FormatId method from the same file.
public static string FormatId(Type sagaType, KeyValuePair<string, object> uniqueProperty)
{
    if (uniqueProperty.Value == null)
    {
        throw new ArgumentNullException("uniqueProperty", string.Format("Property {0} is marked with the [Unique] attribute on {1} but contains a null value. Please make sure that all unique properties are set on your SagaData and/or that you have marked the correct properties as unique.", uniqueProperty.Key, sagaType.Name));
    }

    var value = Utils.DeterministicGuid.Create(uniqueProperty.Value.ToString());
    var id = string.Format("{0}/{1}/{2}", sagaType.FullName.Replace('+', '-'), uniqueProperty.Key, value);

    // raven has a size limit of 255 bytes == 127 unicode chars
    if (id.Length > 127)
    {
        // generate a guid from the hash:
        var key = Utils.DeterministicGuid.Create(sagaType.FullName, uniqueProperty.Key);
        id = string.Format("MoreThan127/{0}/{1}", key, value);
    }

    return id;
}
This relies on Utils.DeterministicGuid.Create(params object[] data) which creates a Guid out of an MD5 hash. (MD5 sucks for actual security but we are only looking for likely uniqueness.)
static class DeterministicGuid
{
    public static Guid Create(params object[] data)
    {
        // use MD5 hash to get a 16-byte hash of the string
        using (var provider = new MD5CryptoServiceProvider())
        {
            var inputBytes = Encoding.Default.GetBytes(String.Concat(data));
            var hashBytes = provider.ComputeHash(inputBytes);

            // generate a guid from the hash:
            return new Guid(hashBytes);
        }
    }
}
That's what you need to replicate to get your utility to work properly.
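Applied to the data-fix tool from the question, that means computing the Id with SagaUniqueIdentity.FormatId instead of a fresh Guid. A sketch, assuming the RenewalSagaData type and PolicyKey property from the question:

// Deterministic id derived from the saga type and the unique property,
// matching what the persister computes when it looks the saga up.
var id = SagaUniqueIdentity.FormatId(
    typeof(RenewalSagaData),
    new KeyValuePair<string, object>("PolicyKey", saga.PolicyKey));

var policyKeySagaUniqueId = new SagaUniqueIdentity
{
    Id = id,
    SagaId = saga.Id,
    UniqueValue = saga.PolicyKey,
    SagaDocId = "RenewalSaga/" + saga.Id.ToString()
};
session.Store(policyKeySagaUniqueId);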
What's really interesting is that this code made it all the way to production - I'm surprised you didn't run into trouble before this, with messages creating new saga instances when they really shouldn't because they couldn't find the existing Saga data.
I almost think it might be a good idea if NServiceBus would raise a warning any time you tried to find Saga Data by anything other than a [Unique] marked property, because it's an easy thing to forget to do. I filed this issue on GitHub and submitted this pull request to do just that.