How to find the data structure of an app in PowerApps - SQL

I want to create a SQL connection and import data from an app (the Shoutouts template) into a SQL database. I created a SQL connection and tried to import the data there, but I got this error:
CreatedOnDateTime: The specified column is generated by the server and can't be specified
I do have the CreatedOnDateTime column created, but I guess its datatype is not the same, or something else is off.
Where can I look to see what fields and datatypes PowerApps sends to the SQL table via the SQL connection?
Thank you for your help!

Overall, there's no easy way to find out the structure of a data source in PowerApps (please create a new feature request in the PowerApps Ideas board for that). There is a convoluted way to find it out, however, which I'll go over here.
But for your specific problem, this is the schema of a SQL table that would match the schema of the data source in PowerApps:
CREATE TABLE PowerAppsTest.StackOverflow51847975 (
    PrimaryID BIGINT PRIMARY KEY,
    [Id] NVARCHAR(MAX),
    [Message] NVARCHAR(MAX),
    CreatedOnDateTime NVARCHAR(MAX),
    CreatorEmail NVARCHAR(MAX),
    CreatorName NVARCHAR(MAX),
    RecipientEmail NVARCHAR(MAX),
    RecipientName NVARCHAR(MAX),
    ShoutoutType NVARCHAR(MAX),
    [Image] IMAGE
)
Now for the generic case. You've been warned that this is convoluted, so proceed at your own risk :)
First, save the app locally to your computer.
The app will be saved with the .msapp extension, but it's basically a .zip file. If you're using Windows, you can rename it to change the extension to .zip and you'll be able to uncompress and extract the files that describe the app.
One of those files, Entities.json, contains, among other things, the definition of the schema of all data sources used in the app. The file is a huge JSON file with all of its whitespace removed, so you may want to use an online tool to format (or prettify) the JSON to make it easier to read. Once this is done, you can open the file in your favorite text editor (anything better than Notepad should be able to handle it).
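If you're comfortable with a bit of scripting, the extraction and pretty-printing can also be automated. A minimal Python sketch (the .msapp file name is a placeholder, and it assumes Entities.json sits at the root of the archive):

import json
import zipfile

# Path to the locally saved app; "MyApp.msapp" is a placeholder.
MSAPP_PATH = "MyApp.msapp"

# An .msapp file is just a zip archive, so it can be opened directly,
# no renaming required. Adjust the inner path if your app stores
# Entities.json in a subfolder.
with zipfile.ZipFile(MSAPP_PATH) as archive:
    with archive.open("Entities.json") as f:
        entities = json.load(f)

# Write a pretty-printed copy that any decent text editor can handle.
with open("Entities.pretty.json", "w") as out:
    json.dump(entities, out, indent=2)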
With the file opened, search for an entry in the JSON root with the property "Name" and the value equal to the name of the data source. For example, in the shoutouts app case, the data source is called "Shoutout", so search for
"Name": "Shoutout"
You'll have to remove the space if you didn't pretty-print the JSON file before opening it. This should be an object that describes the data source; it has one property called DataEntityMetadataJson that holds the data source schema, formatted as a JSON string. Again, in the Shoutouts example, this is the value:
"{\"name\":\"Shoutout\",\"title\":\"Shoutout\",\"x-ms-permission\":\"read-write\",\"schema\":{\"type\":\"array\",\"items\":{...
Notice that, again, it is not pretty-printed. You'll first need to decode that string, then pretty-print it again, and you'll end up with something like this:
{
    "name": "Shoutout",
    "title": "Shoutout",
    "x-ms-permission": "read-write",
    "schema": {
        "type": "array",
        "items": {
            "type": "object",
            "properties": {
                "PrimaryID": {
                    "type": "number",
                    "format": "double",
                    ...
                },
                "Message": {
                    "type": "string",
                    ...
                },
                "Image": {
                    "type": "string",
                    "format": "uri",
                    "x-ms-media-kind": "image",
                    ...
                },
                "Id": {
                    "type": "string",
                    ...
                },
                "CreatedOnDateTime": {
                    "type": "string",
                    ...
                },
                ...
And this is the schema for the data source. From that, I recreated the schema in SQL, removed the reference to the Shoutout data source from the app (which caused many errors), then added a reference to my SQL table; since it has a different name, I went looking for all the places in the app that had errors and fixed them.
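If you'd rather script the lookup and decoding steps above, here is a sketch that continues from the extracted Entities.json. It assumes the JSON root is a list of data source entries; adjust the lookup if your app nests them differently:

import json

# Load the Entities.json extracted from the .msapp earlier.
with open("Entities.pretty.json") as f:
    entities = json.load(f)

# Find the entry whose "Name" matches the data source, e.g. "Shoutout".
source = next(e for e in entities if e.get("Name") == "Shoutout")

# DataEntityMetadataJson is itself a JSON document stored as a string,
# so it needs a second decoding pass.
metadata = json.loads(source["DataEntityMetadataJson"])

# Pretty-print the data source schema.
print(json.dumps(metadata["schema"], indent=2))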
Hope this helps!

Related

Is there a way to add a default to a json schema array

I just want to understand if there is a way to add a default set of values to an array. (I don't think there is.)
So, ideally, I would like something like the following to work, i.e. the fileTypes element defaults to an array of ["jpg", "png"]:
"fileTypes": {
"description": "The accepted file types.",
"type": "array",
"minItems": 1,
"items": {
"type": "string",
"enum": ["jpg", "png", "pdf"]
},
"default": ["jpg", "png"]
},
All that being said, the above does actually seem to validate as JSON Schema. However, in VS Code, for example, this default value does not populate when creating documents the way other defaults (like those for strings) do.
It appears to be valid based on the spec.
9.2. "default"
There are no restrictions placed on the value of this keyword. When multiple occurrences of this keyword are applicable to a single sub-instance, implementations SHOULD remove duplicates.
This keyword can be used to supply a default JSON value associated with a particular schema. It is RECOMMENDED that a default value be valid against the associated schema.
See https://json-schema.org/draft/2020-12/json-schema-validation.html#rfc.section.9.2
It's up to the tooling to take advantage of that keyword in the JSON Schema, and it sounds like VS Code does not.
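If you do need the defaults applied, some validators can be extended to do it themselves. For example, the Python jsonschema library documents a pattern for wrapping a validator so it fills in missing properties that declare a "default"; a sketch using the fileTypes schema above:

from jsonschema import Draft7Validator, validators

def extend_with_default(validator_class):
    validate_properties = validator_class.VALIDATORS["properties"]

    def set_defaults(validator, properties, instance, schema):
        # Fill in any missing properties that declare a "default".
        for prop, subschema in properties.items():
            if isinstance(instance, dict) and "default" in subschema:
                instance.setdefault(prop, subschema["default"])
        yield from validate_properties(validator, properties, instance, schema)

    return validators.extend(validator_class, {"properties": set_defaults})

DefaultFillingValidator = extend_with_default(Draft7Validator)

schema = {
    "type": "object",
    "properties": {
        "fileTypes": {
            "type": "array",
            "minItems": 1,
            "items": {"type": "string", "enum": ["jpg", "png", "pdf"]},
            "default": ["jpg", "png"]
        }
    }
}

document = {}
DefaultFillingValidator(schema).validate(document)
print(document)  # {'fileTypes': ['jpg', 'png']}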

Value of key A equals value of key B in JSON Schema

For {keyA: valueA}, {keyB: valueB}, is it possible to define in the schema that valueB must equal valueA? In other words, copying valueA down to valueB?
I understand this causes duplication, but two different keys must be used to meet different standards.
For example, I want the value of name to also be used as sample name in the schema below.
Schema
{
    "$id": "sampleSchema",
    "$schema": "http://json-schema.org/draft-07/schema#",
    "type": "object",
    "properties": {
        "name": {
            "type": "string"
        },
        "sample name": {
            "type": "string"
        }
    }
}
The data will be like:
{
    "name": "example1",
    "sample name": "example1"
}
JSON Schema does not support operations like this.
We call this "data consistency validation" because it tests that data in one place is consistent with how it's defined in another location.
Supporting these types of operations would be very difficult. It would probably require a general purpose programming language to support most of the cases that people would like to see.
For more information, see Scope of JSON Schema Validation.
As an alternative, some validators allow you to implement custom keywords, or to hook into events fired while an instance is being validated against a schema with a particular ID. You can use this to implement the functionality you're looking for.
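As a sketch of that idea in Python: run standard validation first, then enforce the equality rule in ordinary code, since that rule is exactly the part JSON Schema cannot express (the function name here is mine):

import jsonschema

schema = {
    "type": "object",
    "properties": {
        "name": {"type": "string"},
        "sample name": {"type": "string"}
    }
}

def validate_sample(instance):
    # Standard JSON Schema validation first.
    jsonschema.validate(instance, schema)
    # Then the consistency rule the schema cannot express.
    if instance.get("name") != instance.get("sample name"):
        raise ValueError('"sample name" must equal "name"')

validate_sample({"name": "example1", "sample name": "example1"})  # passes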

Use of type : object and properties in JSON schema

I'm new to JSON.
I see in various examples of JSON Schema, like the following, that complex values are wrapped in "type": "object" with a "properties" object:
{
    "$schema": "http://json-schema.org/draft-06/schema#",
    "motor": {
        "type": "object",
        "properties": {
            "class": "string",
            "voltage": "number",
            "amperage": "number"
        }
    }
}
I have written JSON without type, object, and properties, like the following.
{
    "$schema": "http://json-schema.org/draft-06/schema#",
    "motor": {
        "class": "string",
        "voltage": "number",
        "amperage": "number"
    }
}
and submitted it to an online JSON Schema validator with no errors.
What is the purpose of type:object, properties { }? Is it optional?
Yes, it is optional; try removing it and running it through your validator:
{
    "$schema": "http://json-schema.org/draft-06/schema#",
    "foo": "bar"
}
You actually don't even need to use the $schema keyword; i.e., {} is valid JSON.
I would start by understanding what JSON is; https://www.json.org/ is the best place to start, but you may prefer something easier to read, like https://www.w3schools.com/js/js_json_intro.asp.
A schema is just a template (or definition) to make sure you're producing valid JSON for the consumer.
As an example, let's say you have an application that parses some JSON, looks for a key named test_score, and saves the value (the score) to a database in some table/column. For this example we'll call the table tests and the column score. Since a database column requires a type, we'll choose a numeric type, i.e. integer, for our score column.
A valid JSON example for this may look like:
{
    "test_score": 100
}
Following this example, the application would parse the key test_score and save the value 100 to the tests.score database table/column.
But let's say a score is absent, so you put in a string, i.e. "NA":
{
    "test_score": "NA"
}
When the application attempts to save NA to the database, it will error, because NA is a string, not the integer the database expects.
If you put each of those examples into any online JSON validator, both are valid JSON. However, while "NA" and 100 are both valid JSON values, "NA" is not valid for the actual application that needs to consume the JSON.
So now you may understand that the author of the JSON may wonder:
What are the different valid types I can use as values for my test score?
The responsibility then falls on the writers of the application to provide some sort of definition (i.e. a schema) that the clients (authors) can reference, so the author knows exactly how to structure the JSON and the application can process it accordingly. Having a schema also allows you to validate/test your JSON, so you know it can be processed by the application without actually having to send it through the application.
So, putting it all together, let's say in the schema you see:
"test_score": {
    "type": "integer",
    "format": "tinyint"
},
The writer of the JSON now knows that they must pass an integer, and that the range is 0 to 255 because it's a tinyint. They no longer have to trial-and-error different values to see which ones the application will process. This is a big benefit of having a schema.
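To see the difference in practice, here is a short sketch with the Python jsonschema library. With type and properties in place, 100 passes and "NA" is rejected; without them, the keys are just unknown keywords, so nothing is constrained:

import jsonschema

# With "type" and "properties", the schema actually constrains the data.
schema = {
    "type": "object",
    "properties": {
        "test_score": {"type": "integer"}
    }
}

jsonschema.validate({"test_score": 100}, schema)  # passes

try:
    jsonschema.validate({"test_score": "NA"}, schema)
except jsonschema.ValidationError as e:
    print(e.message)  # 'NA' is not of type 'integer'

# Without them, "test_score" is treated as an unknown keyword rather
# than a property definition, so even the bad document validates.
jsonschema.validate({"test_score": "NA"}, {"test_score": {"type": "integer"}})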

Azure Data Factory Source Dataset value from Parameter

I have a Dataset in Azure Data Factory backed by a CSV file. I added an additional column to the Dataset and want to pass its value from a Dataset parameter, but the value never gets copied to the column:
"type": "AzureBlob",
"structure":
[
{
"name": "MyField",
"type": "String"
}
]
I have defined a parameter as well:
"parameters": {
"MyParameter": {
"type": "String",
"defaultValue": "ABC"
}
}
How can I copy the parameter value to the column? I tried the following:
"type": "AzureBlob",
"structure":
[
{
"name": "MyField",
"type": "String",
"value": "#dataset().MyParameter"
}
]
But this does not work; I get NULL in the destination even though the parameter value is set.
Based on the document Expressions and functions in Azure Data Factory, #dataset().XXX is not supported in Azure Data Factory so far. So you can't use a parameter value as a custom column in the sink or source with the native copy activity directly.
However, you could adopt the workarounds below:
1. You could create a custom activity and write code to do whatever you need (a sketch follows below).
2. You could stage the CSV file in an Azure Data Lake, then execute a U-SQL script that reads the data from the file and appends the new column with the pipeline run ID. Then output it to a new area in the data lake so that the data can be picked up by the rest of your pipeline. To do this, you just need to pass a parameter to U-SQL from ADF. Please refer to the U-SQL Activity.
In this thread: use adf pipeline parameters as source to sink columns while mapping, the customer used the second approach.
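As an illustration of the first workaround, the code inside a custom activity can be plain file manipulation. A hypothetical Python sketch that appends a constant column to a staged CSV (the file names and parameter value stand in for whatever your pipeline actually passes):

import csv

# Hypothetical paths and parameter value; in a real custom activity
# these would come from the activity's configuration.
IN_PATH = "input.csv"
OUT_PATH = "output.csv"
PARAMETER_VALUE = "ABC"

with open(IN_PATH, newline="") as src, open(OUT_PATH, "w", newline="") as dst:
    reader = csv.reader(src)
    writer = csv.writer(dst)
    # Add the new column to the header row, then copy every data row
    # with the parameter value appended.
    header = next(reader)
    writer.writerow(header + ["MyField"])
    for row in reader:
        writer.writerow(row + [PARAMETER_VALUE])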

How to add field descriptions programmatically in BigQuery table

I want to add field descriptions to a BigQuery table programmatically; I know how to do it in the UI.
I have this requirement because a few tables in my dataset are refreshed on a daily basis and we use "writeMode": "WRITE_TRUNCATE". This also deletes the descriptions of all the fields in the table.
I have also added the description in my schema file for the table, like this:
{
    "name": "tax",
    "type": "FLOAT",
    "description": "Tax amount customer paid"
}
But I don't see the descriptions in my final table after running the scripts to load data.
The Tables API (https://cloud.google.com/bigquery/docs/reference/v2/tables) allows you to set descriptions for tables and schema fields.
You can set descriptions during
table creation - https://cloud.google.com/bigquery/docs/reference/v2/tables/insert
or after the table is created, using one of the APIs below:
Patch -
https://cloud.google.com/bigquery/docs/reference/v2/tables/patch
or Update - https://cloud.google.com/bigquery/docs/reference/v2/tables/update
I think that in your case the Patch API is more suitable.
The link below shows the table resources you can set with those APIs:
https://cloud.google.com/bigquery/docs/reference/v2/tables#resource
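For example, with the google-cloud-bigquery Python client (which patches the table under the hood), a sketch that sets a description on an existing field; the table ID and the field picked out are placeholders:

from google.cloud import bigquery

client = bigquery.Client()
table = client.get_table("my-project.my_dataset.my_table")  # placeholder ID

# SchemaField objects are immutable, so rebuild the schema with the
# desired description filled in.
new_schema = []
for field in table.schema:
    description = field.description
    if field.name == "tax":
        description = "Tax amount customer paid"
    new_schema.append(
        bigquery.SchemaField(
            field.name, field.field_type, mode=field.mode, description=description
        )
    )

table.schema = new_schema
client.update_table(table, ["schema"])  # sends a tables.patch request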
BigQuery load jobs accept a schema that includes "description" with each field.
https://cloud.google.com/bigquery/docs/reference/v2/jobs#configuration.load
If you specify the description along with each field you are creating during your WRITE_TRUNCATE operation, the descriptions should be applied to the destination table.
Here's a snippet from the above link that includes the schema you are specifying:
"load": {
"sourceUris": [
string
],
"schema": {
"fields": [
{
"name": string,
"type": string,
"mode": string,
"fields": [
(TableFieldSchema)
],
"description": string
}
]
},
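With the Python client, the equivalent is to put the description on each SchemaField in the load job's configuration; a sketch assuming a CSV source and the tax field from the question (bucket and table IDs are placeholders):

from google.cloud import bigquery

client = bigquery.Client()

job_config = bigquery.LoadJobConfig(
    source_format=bigquery.SourceFormat.CSV,
    skip_leading_rows=1,
    write_disposition=bigquery.WriteDisposition.WRITE_TRUNCATE,
    schema=[
        # The description travels with the schema, so it survives the
        # WRITE_TRUNCATE that recreates the table.
        bigquery.SchemaField("tax", "FLOAT", description="Tax amount customer paid")
    ],
)

load_job = client.load_table_from_uri(
    "gs://my-bucket/data.csv",
    "my-project.my_dataset.my_table",
    job_config=job_config,
)
load_job.result()  # wait for the load to complete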