Core Data + CloudKit Migration: Cannot create or modify field [...] in record [...] in production schema - objective-c

I use NSPersistentCloudKitContainer to sync Core Data with CloudKit. To prepare for a new migration, I created a new model version of the xcdatamodel and marked it as "current". I created a new entity and added a relationship to it from another entity. Nothing spectacular, and suitable for a lightweight migration, I thought.
Let's name this new entity: EntityNew
This is my code to initialize the NSPersistentCloudKitContainer:
lazy var persistentContainer: NSPersistentContainer = {
    let container = NSPersistentCloudKitContainer(name: "MyContainerName")
    container.loadPersistentStores(completionHandler: { _, error in
        guard let error = error as NSError? else { return }
        fatalError("###\(#function): Failed to load persistent stores:\(error)")
    })
    container.viewContext.automaticallyMergesChangesFromParent = true
    return container
}()
shouldMigrateStoreAutomatically and shouldInferMappingModelAutomatically are set to true by default.
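(For reference, both flags live on the store's NSPersistentStoreDescription; a minimal sketch of setting them explicitly, before loadPersistentStores runs, would be:)
if let description = container.persistentStoreDescriptions.first {
    // Both default to true; set explicitly here only to make the migration behavior visible.
    description.shouldMigrateStoreAutomatically = true
    description.shouldInferMappingModelAutomatically = true
}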
Everything worked fine locally. No errors occurred during the migration.
The problems started when I created a new instance of EntityNew:
let newItem = EntityNew(context: context)
newItem.someAttribute = "..." // "someAttribute" stands in for the real attribute name
saveContext()
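saveContext() is assumed to be the usual Xcode-generated helper, roughly:
func saveContext() {
    let context = persistentContainer.viewContext
    if context.hasChanges {
        do {
            try context.save()
        } catch {
            let nserror = error as NSError
            fatalError("Unresolved error \(nserror), \(nserror.userInfo)")
        }
    }
}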
newItem was created locally without any problems, but iCloud sync stopped working from that moment on. The following error appeared in the console:
"<CKRecordID: 0x283fb1460; recordName=2E2209A1-F9F6-4DF2-960D-2C31F764ED05, zoneID=com.apple.coredata.cloudkit.zone:__defaultOwner__>" = "<CKError 0x2830a5950: \"Batch Request Failed\" (22/2024); server message = \"Atomic failure\"; uuid = ADA626F4-160E-49FE-A0BD-2198E5FBD09A; container ID = \"iCloud.[MyContainerID]\">"
"<CKRecordID: 0x283fb1a00; recordName=3145C837-D80D-47E0-B944-DBC6576A9B0A, zoneID=com.apple.coredata.cloudkit.zone:__defaultOwner__>" = "<CKError 0x2830a4000: \"Invalid Arguments\" (12/2006); server message = \"Cannot create or modify field 'CD_[Fieldname in EntityNew]' in record 'CD_[OtherEntityName]' in production schema\"; uuid = ADA626F4-160E-49FE-A0BD-2198E5FBD09A; container ID = \"iCloud.[ContainerID]\">";
"Cannot create or modify field 'CD_[Fieldname in EntityNew]' in record 'CD_[OtherEntityName]' in production schema"
CloudKit tries to modify the field CD_[Fieldname in EntityNew] (which is correct) on the record CD_[OtherEntityName], which is not the entity I created above! So Core Data tries to modify the wrong entity! This does not happen for all fields (roughly 5 out of 10). I checked the local SQLite file on my iPhone, and the local tables seem correct. The phenomenon can be observed in both the Development and the Production iCloud container environments. If I start with an empty database (which already contains the new entity, so no migration is necessary), synchronization works.
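For context on the error wording: a CloudKit production schema cannot be modified by a client at runtime, so any new record type or field must first exist in the development schema and then be deployed to production from the CloudKit Dashboard. A minimal, hedged sketch of pushing the current Core Data model into the development schema (using NSPersistentCloudKitContainer's initializeCloudKitSchema API; this may or may not be related to the mix-up described above):
#if DEBUG
do {
    // `container` is the NSPersistentCloudKitContainer created above.
    // This uploads the record types and CD_ fields derived from the Core Data
    // model to the development schema only; promoting them to production is a
    // manual step in the CloudKit Dashboard.
    try container.initializeCloudKitSchema(options: [])
} catch {
    print("CloudKit schema initialization failed: \(error)")
}
#endif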
What did I miss? Any ideas?
Thank you!

Related

How to move a record from one model to another in Google App Maker

I have created 2 SQL models in Google App Maker. For simplicity's sake, let's say Model 1 has all of the information that can be added and edited for each of the records. Model 2 works as a storage model: once a record in Model 1 is removed, it moves over to Model 2. The idea is that the user can click a "removed" boolean, which opens a dialog page for adding comments about the removal; once complete, the record is moved to Model 2 for storage and is no longer visible in Model 1.
Is there any way to do this? If you need more information, let me know and I will try to provide it; I cannot post the existing app because the information is confidential.
Thanks for your help!
Updated answer: move to another model
If you want to force users to enter a message, you need to forbid them from deleting records through datasources:
// onBeforeDelete model event
throw new Error('You should provide message prior deleting a record');
Then you need to implement audit itself:
// server script
function archive(itemKey, message) {
  if (!message) {
    throw new Error('Message is required');
  }

  var record = app.models.MyModel.getRecord(itemKey);

  if (!record) {
    throw new Error('Record was not found');
  }

  var archive = app.models.Removed.newRecord();
  archive.Field1 = record.Field1;
  archive.Field2 = record.Field2;
  ...
  archive.Message = message;

  app.saveRecords([archive]);
  app.deleteRecords([record]);
}
// client script
google.script.run
  .withSuccessHandler(function() {
    // TODO
  })
  .withFailureHandler(function() {
    // TODO
  })
  .archive(itemKey, message);
If you need to implement auditing for multiple/all models, you can generalize the snippet by passing the model's name and using Model Metadata: function archive(modelName, itemKey, message) {}
Original answer: move to another DB
Normally I would recommend just adding a boolean field Deleted to the model and ensuring that records marked as deleted are not sent to the client. Moving data between databases can be tricky, since transactions are not supported across multiple databases.
If you desperately want to make your app more complex and less reliable, you can create a backup of the record in the onBeforeDelete model event using the JDBC Apps Script service (the External Database Sample could be your friend to start with):
// onBeforeDelete model event
var connection = Jdbc.getConnection(dbUrl, user, userPassword);
var statement = connection.prepareStatement('INSERT INTO ' + TABLE_NAME +
' (Field1, Field2, ...) values (?, ?, ...)');
statement.setString(1, record.Field1);
statement.setString(2, record.Field2);
...
statement.execute();
Why do you need JDBC? Because App Maker natively doesn't support models attached to different databases.
I was able to do what I needed using a query filter as a client script. This keeps the data on the back end when I export, and only shows the active user whatever is not removed.
var datasource1 = app.datasources.WatchList_Data;
datasource1.query.filters.Remove_from_WatchList._equals = 'No';
datasource1.load();

ArcGIS offline map layer changes synchronization

In my WPF application I'm trying to use offline map functionality. Right now my feature service is configured for data sync, and I'm able to create a data replica on the server and download a local copy of the geodatabase.
_gdbSyncTask = await GeodatabaseSyncTask.CreateAsync(_featureServiceUri);

Envelope extent = new Envelope(xmin, ymin, xmax, ymax, new SpatialReference(wkidStart));
GenerateGeodatabaseParameters generateParams = await _gdbSyncTask.CreateDefaultGenerateGeodatabaseParametersAsync(extent);

_generateGdbJob = _gdbSyncTask.GenerateGeodatabase(generateParams, _gdbPath);
_generateGdbJob.JobChanged += GenerateGdbJobChanged;
_generateGdbJob.ProgressChanged += ((object sender, EventArgs e) =>
{
    UpdateProgressBar();
});
_generateGdbJob.Start();
After the initial synchronization, I'm able to work with the map in offline mode without problems. This includes operations like adding new geometries or editing existing polygons inside the local DB.
However, when I try to synchronize changes back to the server, I get no results.
To perform data synchronization with the local database, I use the following code:
SyncGeodatabaseParameters parameters = new SyncGeodatabaseParameters()
{
    GeodatabaseSyncDirection = SyncDirection.Bidirectional,
    RollbackOnFailure = false
};

Geodatabase gdb = await Geodatabase.OpenAsync(this.GetGdbPath());

foreach (GeodatabaseFeatureTable table in gdb.GeodatabaseFeatureTables)
{
    long id = table.ServiceLayerId;
    SyncLayerOption option = new SyncLayerOption(id);
    option.SyncDirection = SyncDirection.Bidirectional;
    parameters.LayerOptions.Add(option);
}

_gdbSyncTask = await GeodatabaseSyncTask.CreateAsync(_featureServiceUri);
SyncGeodatabaseJob job = _gdbSyncTask.SyncGeodatabase(parameters, gdb);
job.JobChanged += SyncJob_JobChanged;
job.ProgressChanged += SyncJob_ProgressChanged;
job.Start();
Everything goes well, and the synchronization ends with the status “Succeeded”. The messages logged by the SyncGeodatabaseJob look normal.
However, when I open the edited feature layer from the server in the map web client, I cannot find any of my local changes. In the server database I can also see that no new records were created during synchronization.
Interestingly, when I open the “Replica” data in the web client, I can see the following information:
Replica Server Gen: 2
Creation Date: 2018/02/07 10:49:54 UTC
Last Sync Date: 2018/02/07 10:49:54 UTC
The “Last Sync Date” is equal to the replica “Creation Date”. However, in the replica log in ArcMap I can see further sync information.
Can anyone tell me how I should interpret the situation described above? Am I missing some steps in my code? Or maybe some configuration is missing on the server? It looks like data modifications are successfully pushed back to the replica on the server, but after that the replica is not synchronized with the server database (should that happen automatically?).
I'm new to ArcGIS development, so any help will be appreciated.
Thanks for all the answers. It turned out that versioning is enabled on the server database, and the offline, versioned changes were not reconciled to the server.
After running a reconcile/post script (http://desktop.arcgis.com/en/arcmap/10.3/manage-data/geodatabases/automate-reconcile-post-after-sync.htm), the offline changes became visible to other system users.
The code looks OK at a quick glance, so I would assume there is something going on in the setup.
What do you get back from the sync operation after it has completed? Note that you can just use await syncJob.GetResultsAsync to start the job and await the results.
How is the Feature Service set up on the server? Please refer to https://enterprise.arcgis.com/en/server/latest/publish-services/linux/prepare-data-for-offline-use.htm for the different ways to set these things up.

After adding a lot of info into Realm, one of my items does not work

I've been facing a very strange problem in Realm; it doesn't occur every time, but it's been really annoying.
I'm saving a lot of information offline using Realm; the information comes from a web service. I don't get any exceptions while saving all of this information.
Later I'm able to get all these objects by accessing them through MyObject.allObjects().
These objects contain a UUID property, which I use to download the file related to the object and save it locally. After I finish downloading the file, I update my Realm object with the path of the file, but sometimes when I do it:
let pred = NSPredicate(format: "UUID = %@", fileId)
print("This is the select \(pred.predicateFormat)")
let results = IssueFile.objects(with: pred) as! RLMResults<IssueFile>
The results variable is empty, but if I check the Realm database via Realm Browser, I can find the item I'm looking for.
So my question is: why can't Realm sometimes find the object I'm selecting? As I'm saving a lot of information, is there something I should change in the Realm configuration?
I've already set the UUID as primary key of the table:
+ (NSString *)primaryKey {
    return @"UUID";
}
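Since UUID is the primary key, one sanity check (just a sketch, using the Objective-C Realm API's objectForPrimaryKey: lookup as imported into Swift) is to fetch the object by key on the same thread that runs the predicate query:
// Sketch: primary-key lookup instead of a predicate. If this also returns nil,
// the Realm on this thread has most likely not yet advanced to the version that
// contains the write (writes from other threads become visible only after this
// Realm refreshes, e.g. on the next run-loop iteration or an explicit refresh).
if let file = IssueFile.object(forPrimaryKey: fileId) {
    print("Found file: \(file)")
} else {
    print("Object with UUID \(fileId) is not visible on this thread yet")
}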
Thanks,

Application name is not set. Call Builder#setApplicationName. error

Application: Connecting to BigQuery using BigQuery APIs for Java
Environment: Eclipse, Windows 7
My application was running fine until last night. I've made no changes (except for restarting my computer) and my code is suddenly giving me this error:
Application name is not set. Call Builder#setApplicationName.
Thankfully I had a tar'd version of my workspace from last night. I ran a folder compare and found the local_db.bin file was different. I deleted the existing local_db.bin file and tried to run the program again. And it worked fine!
Any idea why this might have happened?
Hopefully this will help anyone else who stumbles upon this issue.
Try this to set your application name
Drive service = new Drive.Builder(httpTransport, jsonFactory, null)
        .setHttpRequestInitializer(credential)
        .setApplicationName("Your app name")
        .build();
If you are working only with Firebase Dynamic Links, without an Android or iOS app, try this:
builder.setApplicationName(firebaseUtil.getApplicationName());
FirebaseUtil is a custom class; add the keys and the application name to this class.
FirebaseDynamicLinks.Builder builder = new FirebaseDynamicLinks.Builder(
        GoogleNetHttpTransport.newTrustedTransport(), JacksonFactory.getDefaultInstance(), null);

// initialize with API key
FirebaseDynamicLinksRequestInitializer firebaseDynamicLinksRequestInitializer =
        new FirebaseDynamicLinksRequestInitializer(firebaseUtil.getFirebaseApiKey());
builder.setFirebaseDynamicLinksRequestInitializer(firebaseDynamicLinksRequestInitializer);
builder.setApplicationName(firebaseUtil.getApplicationName());

// build dynamic links
FirebaseDynamicLinks firebasedynamiclinks = builder.build();

// create Firebase Dynamic Links request
CreateShortDynamicLinkRequest createShortLinkRequest = new CreateShortDynamicLinkRequest();
createShortLinkRequest.setLongDynamicLink(firebaseUtil.getFirebaseUrlPrefix() + "?link=" + urlToShorten);
Suffix suffix = new Suffix();
suffix.setOption(firebaseUtil.getShortSuffixOption());
createShortLinkRequest.setSuffix(suffix);

// request short url
FirebaseDynamicLinks.ShortLinks.Create request = firebasedynamiclinks.shortLinks()
        .create(createShortLinkRequest);
CreateShortDynamicLinkResponse createShortDynamicLinkResponse = request.execute();

repository created via RepositoryManager not behaving the same as repo created via workbench

I create a Sesame native Java store using the following code:
Create a native Java store:
// create a configuration for the SAIL stack
boolean persist = true;
String indexes = "spoc,posc,cspo";
SailImplConfig backendConfig = new NativeStoreConfig(indexes);
// stack an inferencer config on top of our backend-config
backendConfig = new ForwardChainingRDFSInferencerConfig(backendConfig);
// create a configuration for the repository implementation
RepositoryImplConfig repositoryTypeSpec = new SailRepositoryConfig(backendConfig);
RepositoryConfig repConfig = new RepositoryConfig(repositoryId, repositoryTypeSpec);
repConfig.setTitle(repositoryId);
manager.addRepositoryConfig(repConfig);
Repository repository = manager.getRepository(repositoryId);
Create an in-memory store:
// create a configuration for the SAIL stack
boolean persist = true;
SailImplConfig backendConfig = new MemoryStoreConfig(persist);
// stack an inferencer config on top of our backend-config
backendConfig = new ForwardChainingRDFSInferencerConfig(backendConfig);
// create a configuration for the repository implementation
RepositoryImplConfig repositoryTypeSpec = new SailRepositoryConfig(backendConfig);
RepositoryConfig repConfig = new RepositoryConfig(repositoryId, repositoryTypeSpec);
repConfig.setTitle(repositoryId);
manager.addRepositoryConfig(repConfig);
Repository repository = manager.getRepository(repositoryId);
When I store data in this repo and query it back, the results are not the same as the results returned from a repository created using the Workbench. I get duplicate/multiple entries in my result set.
The same behavior occurs for the in-memory store.
I also observed that my triples belong to a blank context, which is not the case in the repository created via the Workbench.
What is wrong with my code above?
There is nothing wrong with your code, as far as I can see. If the store created from the Workbench behaves differently, this most likely means that it's configured with a different SAIL stack.
The most likely candidate for the difference is this bit:
// stack an inferencer config on top of our backend-config
backendConfig = new ForwardChainingRDFSInferencerConfig(backendConfig);
You have configured your repository with a reasoner on top here. If the repository created via the workbench does not use a reasoner, you will get different results on identical queries (including, sometimes, apparent duplicate results).
If you consider this a problem, you can fix this in two ways. One is (of course) to simply not create your repository with a reasoner on top. The other is to disable reasoning for specific queries. In the Workbench, you can do this by disabling the "Include inferred statements" checkbox in the query screen. Programmatically, you can do this by using Query.setIncludeInferred(false) on your prepared query object. See the javadoc for details.