Power BI API: clone report and dataset, changing the datasource

I am developing an application in .NET 6 using the Power BI Client for managing workspaces, reports, datasets, etc.
The idea is that the application will be able to create customer workspaces that inherit reports and datasets from a main workspace. In the main workspace there will be reports published from Power BI Desktop, so the respective datasets will also be there.
At the moment of cloning, the data source's database, user, and password should be changed accordingly to match the customer workspace context. Using the following code I can list the reports in the main workspace (workspace_from_id) and create them in the customer workspace (workspace_towa_id):
var reports_from = pbiClient.Reports.GetReports(workspace_from_id);
foreach (Report report_from in reports_from.Value)
{
    Guid report_from_id = report_from.Id;
    CloneReportRequest cloneReportRequest = new();
    cloneReportRequest.TargetWorkspaceId = workspace_towa_id;
    cloneReportRequest.TargetModelId = dataset_towa.Id;
    cloneReportRequest.Name = report_from.Name;
    Report report_towa = pbiClient.Reports.CloneReport(workspace_from_id, report_from_id, cloneReportRequest);
}
The problem with the above code is that the dataset is not cloned: the source dataset is used as a shared dataset for both workspaces. I already tried to copy the dataset details and create a new one with a different database using the following code:
CreateDatasetRequest createDatasetRequest = new();
createDatasetRequest.Name = dataset_from.Name;
createDatasetRequest.Datasources = new List<Datasource>();
createDatasetRequest.Tables = new List<Table>();
Datasources datasources_from = pbiClient.Datasets.GetDatasources(workspace_from_id, dataset_from_id);
foreach (Datasource datasource_from in datasources_from.Value)
{
    //FOREACH DATASOURCE IN DATASET
    Datasource datasource_towa = new();
    datasource_towa.Name = datasource_from.Name;
    datasource_towa.DatasourceType = datasource_from.DatasourceType;
    //CHANGE DATASOURCE CONNECTION DETAILS
    DatasourceConnectionDetails datasourceConnectionDetails = datasource_from.ConnectionDetails;
    datasourceConnectionDetails.Database = $"{Variables.reporting_db}_{group_towa.Name.ToLower()}";
    datasource_towa.ConnectionDetails = datasourceConnectionDetails;
    datasource_towa.ConnectionString = datasource_from.ConnectionString;
    datasource_towa.GatewayId = datasource_from.GatewayId;
    //ADD DATASOURCE INTO DATASET
    createDatasetRequest.Datasources.Add(datasource_towa);
}
Tables tables_from = pbiClient.Datasets.GetTables(workspace_from_id, dataset_from_id); //WORKS ONLY FOR PUSH DATASETS
foreach (Table table_from in tables_from.Value)
{
    //FOREACH TABLE IN DATASET
    Table table_towa = new();
    table_towa.Name = table_from.Name;
    table_towa.Source = table_from.Source;
    table_towa.Columns = table_from.Columns;
    table_towa.Rows = table_from.Rows;
    table_towa.Description = table_from.Description;
    //ADD TABLE INTO DATASET
    createDatasetRequest.Tables.Add(table_towa); //add the copied table (was table_from)
}
The problem with the above code is that the pbiClient.Datasets.GetTables method does not work for regular datasets; it is only for push datasets. So, without being able to get the tables, the following call fails:
var dataset_towa = pbiClient.Datasets.PostDataset(workspace_towa_id, createDatasetRequest);
I finally discovered that the pbiClient.Datasets.PostDataset method is also only for push datasets, as described here: https://learn.microsoft.com/en-us/rest/api/power-bi/push-datasets/datasets-post-dataset
=======UPDATE 13/01/2023=======
I have already tried a few other ways to clone the report and dataset, such as creating a datasource, but that requires a data gateway, even when the data already lives in the cloud (e.g. Azure Database for PostgreSQL). I also tried to create a virtual network gateway in order to create the datasource on it, but virtual gateways are not supported by the Power BI API and are only available on Premium capacities.
So it seems that I cannot clone a report together with its dataset and change the datasource.
Any ideas?

After many hours of research I managed to download reports from the main workspace and upload them into the customer workspaces, changing the datasource details along the way.
Steps to perform:
Export the report with the Power BI REST API /Export, as a stream into memory.
Import the report with the Power BI REST API /Imports, as a stream from memory (needs special treatment to be sure you read the whole stream from the HTTP content; using the SDK did not work for me).
Update the connection details with the Power BI Client SDK via pbiClient.Datasets.UpdateDatasourcesInGroup (needs special treatment to change only the server and database attributes without passing a new instance of the object).
Update the datasource of the dataset with the Power BI Client SDK via pbiClient.Gateways.UpdateDatasource (needs special treatment to supply the credentials as JSON).
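For reference, below is a rough sketch of those four steps with the Microsoft.PowerBI.Api .NET SDK. Treat it as an outline rather than working code: the exact method signatures (ExportReportInGroup, PostImportWithFileInGroup, GetGatewayDatasourcesInGroup, Gateways.UpdateDatasource) and all customer-specific values are assumptions based on my environment and the public REST API, and, as noted in the steps above, the import and credential calls needed special handling in practice.
// Assumed usings: Microsoft.PowerBI.Api, Microsoft.PowerBI.Api.Models, System.Linq

// 1. Export the source report (.pbix) from the main workspace as a stream.
Stream pbixStream = pbiClient.Reports.ExportReportInGroup(workspace_from_id, report_from_id);

// 2. Import the stream into the customer workspace.
//    (A raw HTTP POST to /groups/{groupId}/imports may be needed to be sure the whole stream is sent.)
Import importResult = pbiClient.Imports.PostImportWithFileInGroup(
    workspace_towa_id,
    pbixStream,
    datasetDisplayName: report_from.Name,
    nameConflict: ImportConflictHandlerMode.CreateOrOverwrite);

// The import also creates the dataset in the customer workspace; look it up by name.
Dataset dataset_towa = pbiClient.Datasets.GetDatasetsInGroup(workspace_towa_id)
    .Value.First(d => d.Name == report_from.Name);

// 3. Re-point the dataset: selector identifies the existing datasource,
//    connection details carry the new server/database only.
var updateRequest = new UpdateDatasourcesRequest
{
    UpdateDetails = new List<UpdateDatasourceConnectionRequest>
    {
        new UpdateDatasourceConnectionRequest
        {
            DatasourceSelector = datasource_from, // the datasource read from the source dataset
            ConnectionDetails = new DatasourceConnectionDetails
            {
                Server = datasource_from.ConnectionDetails.Server,
                Database = $"{Variables.reporting_db}_{group_towa.Name.ToLower()}"
            }
        }
    }
};
pbiClient.Datasets.UpdateDatasourcesInGroup(workspace_towa_id, dataset_towa.Id, updateRequest);

// 4. Set the credentials on the gateway datasource(s) bound to the new dataset;
//    the credentials themselves are passed as a JSON string.
var gatewayDatasources = pbiClient.Datasets.GetGatewayDatasourcesInGroup(workspace_towa_id, dataset_towa.Id);
foreach (GatewayDatasource gds in gatewayDatasources.Value)
{
    var credentialDetails = new CredentialDetails
    {
        CredentialType = CredentialType.Basic,
        Credentials = "{\"credentialData\":[{\"name\":\"username\",\"value\":\"<user>\"},{\"name\":\"password\",\"value\":\"<password>\"}]}",
        EncryptedConnection = EncryptedConnection.Encrypted,
        EncryptionAlgorithm = EncryptionAlgorithm.None,
        PrivacyLevel = PrivacyLevel.Private
    };
    pbiClient.Gateways.UpdateDatasource(gds.GatewayId, gds.Id, new UpdateDatasourceRequest { CredentialDetails = credentialDetails });
}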


Core Data + CloudKit Migration: Cannot create or modify field [...] in record [...] in production schema

I use NSPersistentCloudKitContainer to sync Core Data with CloudKit. To prepare for a new migration, I created a new model version of the xcdatamodel and marked it as "current". I created a new entity and added a relationship from another entity. Nothing spectacular, and suitable for a lightweight migration, I thought.
Let's name this new entity: EntityNew
This is my code to initialize the NSPersistentCloudKitContainer:
lazy var persistentContainer: NSPersistentContainer = {
    let container = NSPersistentCloudKitContainer(name: "MyContainerName")
    container.loadPersistentStores(completionHandler: { _, error in
        guard let error = error as NSError? else { return }
        fatalError("###\(#function): Failed to load persistent stores:\(error)")
    })
    container.viewContext.automaticallyMergesChangesFromParent = true
    return container
}()
shouldMigrateStoreAutomatically and shouldInferMappingModelAutomatically are set to true by default.
Everything worked fine locally. No errors occurred during the migration.
The problems started when I created a new instance of EntityNew:
let newItem = EntityNew(context: context)
newItem.someAttribute = "..." // attribute name is just a placeholder
saveContext()
newItem was created locally without any problems, but iCloud sync stopped working from that moment. The following error appeared in the console:
"<CKRecordID: 0x283fb1460; recordName=2E2209A1-F9F6-4DF2-960D-2C31F764ED05, zoneID=com.apple.coredata.cloudkit.zone:__defaultOwner__>" = "<CKError 0x2830a5950: \"Batch Request Failed\" (22/2024); server message = \"Atomic failure\"; uuid = ADA626F4-160E-49FE-A0BD-2198E5FBD09A; container ID = \"iCloud.[MyContainerID]\">"
"<CKRecordID: 0x283fb1a00; recordName=3145C837-D80D-47E0-B944-DBC6576A9B0A, zoneID=com.apple.coredata.cloudkit.zone:__defaultOwner__>" = "<CKError 0x2830a4000: \"Invalid Arguments\" (12/2006); server message = \"Cannot create or modify field 'CD_[Fieldname in EntityNew]' in record 'CD_[OtherEntityName]' in production schema\"; uuid = ADA626F4-160E-49FE-A0BD-2198E5FBD09A; container ID = \"iCloud.[ContainerID]\">";
"Cannot create or modify field 'CD_[Fieldname in EntityNew]' in record 'CD_[OtherEntityName]' in production schema"
CloudKit tries to modify the field CD_[Fieldname in EntityNew] (which is correct) on the record CD_[OtherEntityName], which is not the entity I created above! So Core Data tries to modify the wrong entity! This behavior does not happen for all fields (approx. 5 out of 10). I checked the local SQLite file on my iPhone, but the local tables seem correct. The phenomenon can be observed in both the Development and the Production iCloud container environments. If I start with an empty database (which already contains the new entity, so no migration is necessary), the synchronization works.
What did I miss? Any ideas?
Thank you!

How to move a record from one model to another in Google App Maker

I have created 2 SQL models in Google App Maker. For simplicity's sake, let's say Model 1 has all of the information that can be added and edited for each of the records. Model 2 works as a storage model: once a record in Model 1 is removed, it moves over to Model 2. The idea is that the individual can click a "removed" boolean, which will open a dialog page to add comments for the removal; once complete, the record will be moved to Model 2 for storage and will no longer be visible in Model 1.
Is there any way to do this? If you need more information let me know and I will try to provide it, but the reason I cannot post the existing app is that the information is confidential.
Thanks for your help!
Updated answer: move to another model
If you want to force users to enter a message, you need to forbid them from deleting records through datasources:
// onBeforeDelete model event
throw new Error('You should provide a message prior to deleting a record');
Then you need to implement audit itself:
// server script
function archive(itemKey, message) {
  if (!message) {
    throw new Error('Message is required');
  }
  var record = app.models.MyModel.getRecord(itemKey);
  if (!record) {
    throw new Error('Record was not found');
  }
  var archive = app.models.Removed.newRecord();
  archive.Field1 = record.Field1;
  archive.Field2 = record.Field2;
  ...
  archive.Message = message;
  app.saveRecords([archive]);
  app.deleteRecords([record]);
}
// client script
google.script.run
  .withSuccessHandler(function() {
    // TODO
  })
  .withFailureHandler(function() {
    // TODO
  })
  .archive(itemKey, message);
If you need to implement auditing for multiple/all models, you can generalize the snippet by passing the model's name and using Model Metadata: function archive(modelName, itemKey, message) {}
Original answer: move to another DB
Normally I would recommend just adding a boolean field Deleted to the model and ensuring that records marked as deleted are not sent to the client. Moving data between databases can be tricky, since transactions are not supported across multiple databases.
If you desperately want to make your app more complex and less reliable, you can create a record's backup in the onBeforeDelete model event using the JDBC Apps Script service (the External Database Sample could be your friend to start with):
// onBeforeDelete model event
var connection = Jdbc.getConnection(dbUrl, user, userPassword);
var statement = connection.prepareStatement('INSERT INTO ' + TABLE_NAME +
' (Field1, Field2, ...) values (?, ?, ...)');
statement.setString(1, record.Field1);
statement.setString(2, record.Field2);
...
statement.execute();
Why do you need JDBC? Because App Maker natively doesn't support models attached to different databases.
I was able to do what I needed using a query filter in a client script. This keeps the data on the back end when I export and only shows the active user whatever is not removed.
var datasource1 = app.datasources.WatchList_Data;
datasource1.query.filters.Remove_from_WatchList._equals = 'No';
datasource1.load();

ArcGIS offline map layer changes synchronization

In my WPF application I'm trying to use the offline map functionality. Right now my feature service is configured for data sync, and I'm able to create a data replica on the server and download a local copy of the geodatabase.
_gdbSyncTask = await GeodatabaseSyncTask.CreateAsync(_featureServiceUri);
Envelope extent = new Envelope(xmin, ymin, xmax, ymax, new SpatialReference(wkidStart));
GenerateGeodatabaseParameters generateParams = await _gdbSyncTask.CreateDefaultGenerateGeodatabaseParametersAsync(extent);
_generateGdbJob = _gdbSyncTask.GenerateGeodatabase(generateParams, _gdbPath);
_generateGdbJob.JobChanged += GenerateGdbJobChanged;
_generateGdbJob.ProgressChanged += ((object sender, EventArgs e) =>
{
    UpdateProgressBar();
});
_generateGdbJob.Start();
After the initial synchronization, I'm able to successfully work with the map in offline mode. This includes operations like adding new geometries or editing existing polygons in the local DB.
However, when I try to synchronize the changes back to the server, I get no results.
To perform data synchronization with the local database I'm using the following code:
SyncGeodatabaseParameters parameters = new SyncGeodatabaseParameters()
{
    GeodatabaseSyncDirection = SyncDirection.Bidirectional,
    RollbackOnFailure = false
};
Geodatabase gdb = await Geodatabase.OpenAsync(this.GetGdbPath());
foreach (GeodatabaseFeatureTable table in gdb.GeodatabaseFeatureTables)
{
    long id = table.ServiceLayerId;
    SyncLayerOption option = new SyncLayerOption(id);
    option.SyncDirection = SyncDirection.Bidirectional;
    parameters.LayerOptions.Add(option);
}
_gdbSyncTask = await GeodatabaseSyncTask.CreateAsync(_featureServiceUri);
SyncGeodatabaseJob job = _gdbSyncTask.SyncGeodatabase(parameters, gdb);
job.JobChanged += SyncJob_JobChanged;
job.ProgressChanged += SyncJob_ProgressChanged;
job.Start();
Everything goes well. The synchronization ends with status “Succeeded”. The messages logged by the SyncGeodatabaseJob are like on the screen below:
However, when I open the edited feature layer from the server inside the map web client, I cannot find any of my local changes. In the server database I can also see that no new records were created during synchronization.
The interesting thing is that when I open the “Replica” data inside the web client I can see the following information:
Replica Server Gen: 2
Creation Date: 2018/02/07 10:49:54 UTC
Last Sync Date: 2018/02/07 10:49:54 UTC
The “Last Sync Date” is equal to the replica “Creation Date”. However, in the replica log in ArcMap I can see the following information:
Can anyone tell me how I should interpret the situation described above? Am I missing some steps in my code? Or maybe some configuration is missing on the server? It looks like the data modifications are successfully pushed back to the replica on the server, but after that the replica is not synchronized with the server database (should that work automatically?).
I’m a “fresh” person when it comes to ArcGIS development, so any help will be appreciated.
Thanks for all the answers. It turned out that versioning was enabled on the server database, and the offline, versioned changes were not reconciled to the server.
After running a reconcile/post script (http://desktop.arcgis.com/en/arcmap/10.3/manage-data/geodatabases/automate-reconcile-post-after-sync.htm), the offline changes became visible to other system users.
The code looks OK at a quick glance, so I would assume that there is something going on in the setup.
What do you get back from the sync operation after the sync has completed? Note that you can just use await syncJob.GetResultAsync() to start the job and await the results.
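For example, a minimal sketch along these lines (assuming the Job<T>.GetResultAsync() API of the ArcGIS Runtime .NET SDK and the _gdbSyncTask/parameters/gdb objects from the question) surfaces per-layer edit errors that the job status alone does not show:
// Assumed usings: Esri.ArcGISRuntime.Data, Esri.ArcGISRuntime.Tasks.Offline, System.Diagnostics

SyncGeodatabaseJob syncJob = _gdbSyncTask.SyncGeodatabase(parameters, gdb);

// GetResultAsync starts the job (if not already started) and awaits its result.
IReadOnlyList<SyncLayerResult> results = await syncJob.GetResultAsync();

foreach (SyncLayerResult layerResult in results)
{
    foreach (FeatureEditResult editResult in layerResult.EditResults)
    {
        if (editResult.CompletedWithErrors)
        {
            // An edit that could not be applied on the server side.
            Debug.WriteLine($"Table {layerResult.TableName}, edit {editResult.ObjectId}: {editResult.Error?.Message}");
        }
    }
}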
How is the feature service set up on the server? Please refer to https://enterprise.arcgis.com/en/server/latest/publish-services/linux/prepare-data-for-offline-use.htm for the different ways to set these things up.

Upload and download errors synchronizing SQL Server and SQL Azure - Sync Framework

I'm trying to synchronize a SQL Server database with a SQL Azure database (please be patient, because I don't fully understand Sync Framework). These are the requirements:
First: synchronize 1 table from SQL Azure to SQL Server.
Second: synchronize 13 other tables (including the table mentioned in the first step) from SQL Server to Azure.
I've created a console application, and this is the code:
1. I create one scope with the 13 tables:
DbSyncScopeDescription myScope = new DbSyncScopeDescription("alltablesyncgroup");
DbSyncTableDescription table = SqlSyncDescriptionBuilder.GetDescriptionForTable("tablename", sqlServerConn);
myScope.Tables.Add(table); //repeated 13 times.
2. I provision both databases:
SqlSyncScopeProvisioning sqlAzureProv = new SqlSyncScopeProvisioning(sqlAzureConn, myScope);
if (!sqlAzureProv.ScopeExists("alltablesyncgroup"))
{
    sqlAzureProv.Apply();
}
SqlSyncScopeProvisioning sqlServerProv = new SqlSyncScopeProvisioning(sqlServerConn, myScope);
if (!sqlServerProv.ScopeExists("alltablesyncgroup"))
{
    sqlServerProv.Apply();
}
3. I create the SyncOrchestrator with SyncDirectionOrder.Download to sync the first table:
SqlConnection sqlServerConn = new SqlConnection(sqllocalConnectionString);
SqlConnection sqlAzureConn = new SqlConnection(sqlazureConnectionString);
SyncOrchestrator orch = new SyncOrchestrator
{
    RemoteProvider = new SqlSyncProvider(scopeName, sqlAzureConn),
    LocalProvider = new SqlSyncProvider(scopeName, sqlServerConn),
    Direction = SyncDirectionOrder.Download
};
orch.Synchronize();
4. Later, I use the same function, only changing the direction to SyncDirectionOrder.Upload, to sync the 13 remaining tables:
SqlConnection sqlServerConn = new SqlConnection(sqllocalConnectionString);
SqlConnection sqlAzureConn = new SqlConnection(sqlazureConnectionString);
SyncOrchestrator orch = new SyncOrchestrator
{
    RemoteProvider = new SqlSyncProvider(scopeName, sqlAzureConn),
    LocalProvider = new SqlSyncProvider(scopeName, sqlServerConn),
    Direction = SyncDirectionOrder.Upload
};
orch.Synchronize();
Now, here is the thing: obviously I'm doing it wrong, because when I download, the syncStats show that a lot of changes have been applied, but I can't see them reflected in either database; and when I try to execute the upload sync it seems to go into a loop, because the upload process never stops.
Thanks!!!
First, you mentioned you only want to sync one table from Azure to your SQL Server, but you're provisioning 13 tables in the scope. If you only want one table, just provision a scope with that one table (e.g. one scope for the download with that table, and one scope for the upload with the rest of the tables).
To find out why rows are not syncing, you can subscribe to the ApplyChangeFailed event on both providers and check whether conflicts or errors are being encountered.
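A rough sketch of that (event and statistics member names taken from Sync Framework 2.1's RelationalSyncProvider and SyncOperationStatistics; adjust to your setup), wired up before calling orch.Synchronize():
// Subscribe on both providers before starting the sync.
((SqlSyncProvider)orch.RemoteProvider).ApplyChangeFailed += (sender, e) =>
{
    // e.Conflict describes the row conflict type; e.Error is set when applying the row threw an error.
    Console.WriteLine("Remote apply failed: {0} {1}", e.Conflict.Type, e.Error != null ? e.Error.Message : "");
};
((SqlSyncProvider)orch.LocalProvider).ApplyChangeFailed += (sender, e) =>
{
    Console.WriteLine("Local apply failed: {0} {1}", e.Conflict.Type, e.Error != null ? e.Error.Message : "");
};

SyncOperationStatistics stats = orch.Synchronize();
Console.WriteLine("Upload applied/failed: {0}/{1}", stats.UploadChangesApplied, stats.UploadChangesFailed);
Console.WriteLine("Download applied/failed: {0}/{1}", stats.DownloadChangesApplied, stats.DownloadChangesFailed);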
Or you can enable Sync Framework tracing in verbose mode so you can see what's happening underneath.

Repository created via RepositoryManager not behaving the same as a repository created via the Workbench

I create a Sesame native Java store using the following code:
Create a native Java store:
// create a configuration for the SAIL stack
boolean persist = true;
String indexes = "spoc,posc,cspo";
SailImplConfig backendConfig = new NativeStoreConfig(indexes);
// stack an inferencer config on top of our backend-config
backendConfig = new ForwardChainingRDFSInferencerConfig(backendConfig);
// create a configuration for the repository implementation
RepositoryImplConfig repositoryTypeSpec = new SailRepositoryConfig(backendConfig);
RepositoryConfig repConfig = new RepositoryConfig(repositoryId, repositoryTypeSpec);
repConfig.setTitle(repositoryId);
manager.addRepositoryConfig(repConfig);
Repository repository = manager.getRepository(repositoryId);
Create an in-memory store:
// create a configuration for the SAIL stack
boolean persist = true;
SailImplConfig backendConfig = new MemoryStoreConfig(persist);
// stack an inferencer config on top of our backend-config
backendConfig = new ForwardChainingRDFSInferencerConfig(backendConfig);
// create a configuration for the repository implementation
RepositoryImplConfig repositoryTypeSpec = new SailRepositoryConfig(backendConfig);
RepositoryConfig repConfig = new RepositoryConfig(repositoryId, repositoryTypeSpec);
repConfig.setTitle(repositoryId);
manager.addRepositoryConfig(repConfig);
Repository repository = manager.getRepository(repositoryId);
When I store data in this repository and query it back, the results are not the same as the results returned from a repository created using the Workbench. I get duplicate/multiple entries in my result set.
The behavior is the same for the in-memory store.
I also observed that my triples belong to a blank context, which is not the case for the repository created via the Workbench.
What is wrong with my code above?
There is nothing wrong with your code, as far as I can see. If the store created from the Workbench behaves differently, this most likely means that it is configured with a different SAIL stack.
The most likely candidate for the difference is this bit:
// stack an inferencer config on top of our backend-config
backendConfig = new ForwardChainingRDFSInferencerConfig(backendConfig);
You have configured your repository with a reasoner on top here. If the repository created via the workbench does not use a reasoner, you will get different results on identical queries (including, sometimes, apparent duplicate results).
If you consider this a problem, you can fix this in two ways. One is (of course) to simply not create your repository with a reasoner on top. The other is to disable reasoning for specific queries. In the Workbench, you can do this by disabling the "Include inferred statements" checkbox in the query screen. Programmatically, you can do this by using Query.setIncludeInferred(false) on your prepared query object. See the javadoc for details.