How to move a record from one model to another in Google App Maker

I have created 2 SQL models in Google App Maker. For simplicity's sake, let's say Model 1 has all of the information that can be added and edited for each of the records. Model 2 works as a storage model: once a record in Model 1 is removed, it moves over to Model 2. The idea is that the user can click a "Removed" boolean, which opens a dialog page for adding comments about the removal; once complete, the record is moved to Model 2 for storage and is no longer visible in Model 1.
Is there any way to do this? If you need more information, let me know and I will try to provide it; I cannot post the existing app because the information is confidential.
Thanks for your help!

Updated answer: move to another model
If you want to force users to enter a message, you need to forbid deleting records directly through datasources:
// onBeforeDelete model event
throw new Error('You should provide a message prior to deleting a record');
Then you need to implement the audit itself:
// server script
function archive(itemKey, message) {
  if (!message) {
    throw new Error('Message is required');
  }

  var record = app.models.MyModel.getRecord(itemKey);

  if (!record) {
    throw new Error('Record was not found');
  }

  var archive = app.models.Removed.newRecord();
  archive.Field1 = record.Field1;
  archive.Field2 = record.Field2;
  // ... copy the remaining fields ...
  archive.Message = message;

  app.saveRecords([archive]);
  app.deleteRecords([record]);
}
// client script
google.script.run
  .withSuccessHandler(function() {
    // TODO
  })
  .withFailureHandler(function() {
    // TODO
  })
  .archive(itemKey, message);
If you need to implement auditing for multiple/all models, you can generalize the snippet by passing the model's name and using Model Metadata: function archive(modelName, itemKey, message) {}
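Here is a minimal sketch of that generalization. It assumes the Removed model mirrors the source model's fields and that the Model Metadata API lets you enumerate field names; adjust both to your actual schema:
// server script -- generalized archive (sketch)
function archive(modelName, itemKey, message) {
  if (!message) {
    throw new Error('Message is required');
  }

  var record = app.models[modelName].getRecord(itemKey);

  if (!record) {
    throw new Error('Record was not found');
  }

  var archive = app.models.Removed.newRecord();

  // Copy every field of the source model to the archive record
  // (assumes the model's fields can be enumerated via app.metadata).
  var fields = app.metadata.models[modelName].fields;
  for (var fieldName in fields) {
    archive[fieldName] = record[fieldName];
  }

  archive.Message = message;

  app.saveRecords([archive]);
  app.deleteRecords([record]);
}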
Original answer: move to another DB
Normally I would recommend just adding a boolean field Deleted to the model, setting it on removal, and ensuring that records marked as deleted are not sent to the client. Moving data between databases can be tricky, since transactions are not supported across multiple databases.
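A rough sketch of that approach, assuming a boolean Deleted field and a server-side query script on the datasource (the field name is illustrative):
// datasource query script (server side, sketch)
// Hide soft-deleted records from every client of this datasource.
query.filters.Deleted._equals = false;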
If you desperately want to make your app more complex and less reliable, you can create a backup of the record in the onBeforeDelete model event using the JDBC Apps Script service (the External Database Sample could be your friend to start with):
// onBeforeDelete model event
var connection = Jdbc.getConnection(dbUrl, user, userPassword);
var statement = connection.prepareStatement('INSERT INTO ' + TABLE_NAME +
    ' (Field1, Field2, ...) values (?, ?, ...)');
statement.setString(1, record.Field1);
statement.setString(2, record.Field2);
// ... set the remaining fields ...
statement.execute();
Why do you need JDBC? Because App Maker natively doesn't support models attached to different databases.

I was able to do what I needed using a query filter in a client script. This keeps the data on the back end when I export, and shows the active user only the records that have not been removed.
var datasource1 = app.datasources.WatchList_Data;
datasource1.query.filters.Remove_from_WatchList._equals = 'No';
datasource1.load();


Elastic APM show total number of SQL Queries executed on .Net Core API Endpoint

I currently have Elastic APM set up with app.UseAllElasticApm(Configuration), which is working correctly. I've just been trying to find a way to record exactly how many SQL queries are run via Entity Framework for each transaction.
Ideally, when viewing the APM data in Kibana, the metadata tab would just include an EntityFramework.ExecutedSqlQueriesCount.
Currently on .NET Core 2.2.3.
One thing you can use for this is the Filter API.
With that you have access to all transactions and spans before they are sent to the APM Server.
You can't run through all the spans of a given transaction from within a filter, so you need some tweaking; for this I use a Dictionary in my sample.
var numberOfSqlQueries = new Dictionary<string, int>();

Elastic.Apm.Agent.AddFilter((ITransaction transaction) =>
{
    if (numberOfSqlQueries.ContainsKey(transaction.Id))
    {
        // We make an assumption here: we assume that all SQL requests on a given transaction
        // end before the transaction ends. In practice this means that you don't do any
        // "fire and forget" type of query. If you do, you need to make sure that
        // numberOfSqlQueries does not leak.
        transaction.Labels["NumberOfSqlQueries"] = numberOfSqlQueries[transaction.Id].ToString();
        numberOfSqlQueries.Remove(transaction.Id);
    }

    return transaction;
});

Elastic.Apm.Agent.AddFilter((ISpan span) =>
{
    // You can't really tell whether the query was done by EF Core or another database library,
    // but you have all sorts of other info like db instance; span.Subtype and span.Action
    // could also be helpful to filter properly.
    if (span.Context.Db != null && span.Context.Db.Instance == "MyDbInstance")
    {
        if (numberOfSqlQueries.ContainsKey(span.TransactionId))
            numberOfSqlQueries[span.TransactionId]++;
        else
            numberOfSqlQueries[span.TransactionId] = 1;
    }

    return span;
});
A couple of things here:
I assume you don't do "fire and forget" type queries; if you do, you need to handle those separately.
The counting isn't really specific to EF Core queries, but you have info like the db name and database type (mssql, etc.); hopefully based on that you'll be able to filter the queries you want.
With transaction.Labels["NumberOfSqlQueries"] we add a label to the given transaction, and you'll be able to see this data on the transaction in Kibana.

Core Data + CloudKit Migration: Cannot create or modify field [...] in record [...] in production schema

I use NSPersistentCloudKitContainer to sync Core Data with CloudKit. To prepare for a new migration, I created a new model version of the xcdatamodel and marked it as "current". I created a new entity and added a relationship from another entity. Nothing spectacular, and suitable for a lightweight migration, I thought.
Let's name this new entity: EntityNew
This is my code to initialize the NSPersistentCloudKitContainer:
lazy var persistentContainer: NSPersistentContainer = {
    let container = NSPersistentCloudKitContainer(name: "MyContainerName")
    container.loadPersistentStores(completionHandler: { _, error in
        guard let error = error as NSError? else { return }
        fatalError("###\(#function): Failed to load persistent stores: \(error)")
    })
    container.viewContext.automaticallyMergesChangesFromParent = true
    return container
}()
shouldMigrateStoreAutomatically and shouldInferMappingModelAutomatically are set to true by default.
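For completeness, a sketch of setting them explicitly on the store description (these are NSPersistentStoreDescription properties and must be set before loadPersistentStores is called):
// configure before calling loadPersistentStores(...)
if let description = container.persistentStoreDescriptions.first {
    description.shouldMigrateStoreAutomatically = true
    description.shouldInferMappingModelAutomatically = true
}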
Everything worked fine locally. No errors occurred during the migration.
The problems started when I created a new instance of EntityNew:
let newItem = EntityNew(context: context)
newItem.someAttribute = "..." // attribute name elided in the original
saveContext()
newItem was created locally without any problems, but the iCloud Sync stopped working from this moment. The following error appeared in the console:
"<CKRecordID: 0x283fb1460; recordName=2E2209A1-F9F6-4DF2-960D-2C31F764ED05, zoneID=com.apple.coredata.cloudkit.zone:__defaultOwner__>" = "<CKError 0x2830a5950: \"Batch Request Failed\" (22/2024); server message = \"Atomic failure\"; uuid = ADA626F4-160E-49FE-A0BD-2198E5FBD09A; container ID = \"iCloud.[MyContainerID]\">"
"<CKRecordID: 0x283fb1a00; recordName=3145C837-D80D-47E0-B944-DBC6576A9B0A, zoneID=com.apple.coredata.cloudkit.zone:__defaultOwner__>" = "<CKError 0x2830a4000: \"Invalid Arguments\" (12/2006); server message = \"Cannot create or modify field 'CD_[Fieldname in EntityNew]' in record 'CD_[OtherEntityName]' in production schema\"; uuid = ADA626F4-160E-49FE-A0BD-2198E5FBD09A; container ID = \"iCloud.[ContainerID]\">";
"Cannot create or modify field 'CD_[Fieldname in EntityNew]' in record 'CD_[OtherEntityName]' in production schema"
CloudKit tries to modify the field CD_[Fieldname in EntityNew] (which is correct) on the record CD_[OtherEntityName], which is not the entity I created above! So Core Data tries to modify the wrong entity! This behavior does not happen for all fields (approx. 5 out of 10). I checked the local sqlite file on my iPhone, but the local tables seem correct. The phenomenon can be observed in both the Development and the Production iCloud container environments. If I start with an empty database (which already contains the new entity, so no migration is necessary), the synchronization works.
What did I miss? Any ideas?
Thank you!

WCF Data Services: ChangeInterceptor not firing for Update

I have a WCF Data Service (OData) that serves as the data repository for a larger system. I'm trying to fire off specific methods based on operations on Entities in the repository.
Specifically, if someone changes a Message record, I want to hook into the pipeline. I'm using ChangeInterceptors for this.
They work for Add and Delete. However, nothing fires when an entity is updated. I am concerned that the DbContext cannot resolve the fact that the entity has changed, since the request is stateless.
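For reference, the interceptor is declared along these lines (a sketch; the entity set name "Messages" and the method name are assumptions based on my setup):
// Fires for inserts, updates, and deletes against the Messages entity set.
[ChangeInterceptor("Messages")]
public void OnChangeMessages(Message message, UpdateOperations operations)
{
    if (operations == UpdateOperations.Change)
    {
        // React to updates here; this is the branch that never fires.
    }
}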
This does not trigger the handler:
var whatever = from m in Messages
               where m.MessageKey == 3
               select m;

whatever.First().UpdatedDate = DateTime.Now;
this.SaveChanges();
Has anyone else faced this problem?
So, I was trying to use AttachTo() to handle the fact that my record was detached. This flat out didn't work, and led to runtime exceptions like the following:
This operation requires the entity be of an Entity Type, and has at least one key property. Parameter name: entity
At any rate, just use the update method and the change will be intercepted (and actually applied):
var whatever = (from m in Messages
                where m.MessageKey == 1
                select m).Single();

whatever.UpdatedDate = DateTime.Now;
this.UpdateObject(whatever);
this.SaveChanges();

Updating Data Source Login Credentials for SSRS Report Server Tables

I have added a lot of reports with an invalid data source login to an SSRS report server, and I want to update the user name and password with a script so I don't have to update each report individually.
However, from what I can tell, the fields are stored as images and are encrypted. I can't find anything about how they are encrypted or how to update them. It appears that the user name and password are stored in the dbo.DataSource table. Any ideas? I want the script to run in SQL.
I would be very, very, VERY leery of hacking the Reporting Services tables. It may be that someone out there can offer a reliable way to do what you suggest, but it strikes me as a good way to clobber your entire installation.
My suggestion would be that you make use of the Reporting Services APIs and write a tiny app to do this for you. The APIs are very full-featured -- pretty much anything you can do from the Report Manager website, you can do with the APIs -- and fairly simple to use.
The following code does NOT do exactly what you want -- it points the reports to a shared data source -- but it should show you the basics of what you'd need to do.
public void ReassignDataSources()
{
    using (ReportingService2005 client = new ReportingService2005())
    {
        var reports = client.ListChildren(FolderName, true)
                            .Where(ci => ci.Type == ItemTypeEnum.Report);

        foreach (var report in reports)
        {
            SetServerDataSource(client, report.Path);
        }
    }
}

private void SetServerDataSource(ReportingService2005 client, string reportPath)
{
    var itemSources = client.GetItemDataSources(reportPath);

    if (itemSources.Any())
        client.SetItemDataSources(
            reportPath,
            new DataSource[] {
                new DataSource() {
                    Item = CreateServerDataSourceReference(),
                    Name = itemSources.First().Name
                }
            });
}

private DataSourceDefinitionOrReference CreateServerDataSourceReference()
{
    return new DataSourceReference() { Reference = _DataSourcePath };
}
I doubt this answers your question directly, but I hope it can offer some assistance.
MSDN: Specifying Credentials
MSDN also suggests using shared data sources for this very reason: see MSDN on shared data sources.

RavenDB, RavenHQ and Appharbor - document size error with very first document

I have a completely empty RavenHQ database that's linked to my Appharbor application. The amount of space the database is currently using is 1.1 MB out of an available 25 MB for my bronze account. The database previously had records in it, but I have deleted them using "delete collection" in the management studio.
The very first time I call session.Store(myobject), and BEFORE I call .SaveChanges(), I get the following error.
System.InvalidOperationException: Url: "/docs/Raven/Hilo/AccItems"
Raven.Database.Exceptions.OperationVetoedException: PUT vetoed by Raven.Bundles.Quotas.Triggers.DatabaseSizeQoutaForDocumetsPutTrigger because: Database size is 45,347 KB, which is over the allowed quota of 25,600 KB. No more documents are allowed in.
Now, the document is definitely not that big, so I don't know what this error can mean, especially as I don't think I've even hit the database at that point since I haven't closed the session by calling SaveChanges(). Any ideas? Here's the code itself.
XDocument doc = XDocument.Parse(rawXml);
var accItems = ExtractItemsFromFeed(doc);

using (IDocumentSession session = _store.OpenSession())
{
    var dbItems = session.Query<AccItem>().ToList();

    foreach (var item in accItems)
    {
        var existingRecord = dbItems.SingleOrDefault(x => x.SourceId == item.SourceId);

        if (existingRecord == null)
        {
            session.Store(item);
            _logger.Info("Saved new item {0}.", item.ShortName);
        }
        else
        {
            existingRecord.ShortName = item.ShortName;
            _logger.Info("Updated item {0}.", item.ShortName);
        }

        session.SaveChanges();
    }
}
Any other comments about the style of this code would be most welcome, as I was unsure of the best way to approach the "update existing item or create if it isn't there" scenario.
The answer here was as follows.
RavenHQ support found that the database was indeed oversized, but it seemed that the size reported in the Appharbor-branded RavenHQ control panel was incorrect. I had filled up the database way over the limit with a previous faulty version of the code posted above, so the error message I received was actually correct.
Fixing this problem without paying to upgrade the database wasn't straightforward, as it's not possible to shrink the database. As I also wasn't able to delete my single Appharbor/RavenHQ database or create another one, that left me with the choice of creating an entirely new Appharbor application or registering directly with RavenHQ for a new account. I chose the latter. The RavenHQ-branded control panel is slightly different from the Appharbor one, in that it has the ability to create and delete databases.
So to summarize: there doesn't seem to be any benefit to using RavenHQ as an add-on to Appharbor - you might as well go and get a proper free RavenHQ account.