Insert image with SQL - Directus

I want to insert images into the image field of each row.
First, I move the images to the upload folder of the Directus app.
Then I use a SQL query to insert the image name into each row.
I get this error:
invalid input syntax for type uuid
How can I solve it?
I use the latest version of Directus. My image filenames are slug-style. Is there a way to avoid renaming the images?
I get this error when I rename my images to UUIDs:
{
    "message": "Invalid foreign key in field \"logo\".",
    "extensions": {
        "code": "INVALID_FOREIGN_KEY",
        "collection": "items",
        "field": "logo",
        "invalid": "3c676907-ab7e-4c20-9e77-63df89d7b2e4"
    }
}
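For context on the error: in Directus, a file field such as logo is a foreign key into the directus_files table, so it must hold the UUID of an existing directus_files row; neither a filename nor a freshly invented UUID will pass the check. Below is a rough Python sketch of the idea. The items table, logo column, and WHERE clause are placeholders, the directus_files columns are assumptions based on Directus's default schema, and recent Directus versions also expect the file on disk to be named <id>.<ext>:

```python
import uuid

def build_file_insert(filename, storage="local"):
    """Return (file_id, sql_statements) that register a file in
    directus_files and point a relational file field at it.
    Column and table names are assumptions; adjust to your install."""
    file_id = str(uuid.uuid4())
    insert_file = (
        "INSERT INTO directus_files "
        "(id, storage, filename_disk, filename_download) VALUES "
        f"('{file_id}', '{storage}', '{filename}', '{filename}');"
    )
    # The "logo" field stores the directus_files UUID, not the filename.
    update_item = f"UPDATE items SET logo = '{file_id}' WHERE id = 1;"
    return file_id, [insert_file, update_item]
```

In real use you would run these as parameterized queries rather than string-built SQL; the string form is only to show which value goes where.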

Related

Is there a way to add a default to a json schema array

I just want to understand whether there is a way to add a default set of values to an array. (I don't think there is.)
Ideally, I would like something like the following to work, i.e. the fileTypes element defaults to an array of ["jpg", "png"]:
"fileTypes": {
    "description": "The accepted file types.",
    "type": "array",
    "minItems": 1,
    "items": {
        "type": "string",
        "enum": ["jpg", "png", "pdf"]
    },
    "default": ["jpg", "png"]
},
That said, the above does seem to validate as JSON Schema; however, in VS Code, for example, this default value does not populate when creating documents the way other defaults (such as those for strings) do.
It appears to be valid based on the spec.
9.2. "default"
There are no restrictions placed on the value of this keyword. When multiple occurrences of this keyword are applicable to a single sub-instance, implementations SHOULD remove duplicates.
This keyword can be used to supply a default JSON value associated with a particular schema. It is RECOMMENDED that a default value be valid against the associated schema.
See https://json-schema.org/draft/2020-12/json-schema-validation.html#rfc.section.9.2
It's up to the tooling to take advantage of that keyword in the JSON Schema, and it sounds like VS Code does not.
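Since the spec leaves acting on "default" to the tooling, one workaround is to inject the defaults into documents yourself before use. A rough stdlib-only sketch, handling only the simple nested-object case:

```python
import copy

def apply_defaults(schema, instance):
    """Fill in missing properties from their schema "default" values.
    Validators only check documents; injecting defaults is left to
    tooling, so a pre-processing step like this is one workaround."""
    for name, subschema in schema.get("properties", {}).items():
        if name not in instance and "default" in subschema:
            # Deep-copy so documents don't share one mutable list.
            instance[name] = copy.deepcopy(subschema["default"])
        elif isinstance(instance.get(name), dict) and isinstance(subschema, dict):
            apply_defaults(subschema, instance[name])
    return instance
```

For example, applying the fileTypes schema above to an empty document yields `{"fileTypes": ["jpg", "png"]}`, while a document that already sets fileTypes is left untouched.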

How to read BigQuery view using BigQuery REST API?

I have a BQ table, configuration, in the MAPPINGS dataset, and its view, config_vw, in the SHARED_VIEWS dataset.
Now I am trying to read both the table and the view via the REST API.
The table request GET https://bigquery.googleapis.com/bigquery/v2/projects/data-dev2/datasets/MAPPINGS/tables/configuration/data is responding correctly.
But when I do GET https://bigquery.googleapis.com/bigquery/v2/projects/data-dev2/datasets/SHARED_VIEWS/tables/config_vw/data for the view, it returns the error below.
{
    "error": {
        "code": 400,
        "message": "Cannot list a table of type VIEW.",
        "errors": [
            {
                "message": "Cannot list a table of type VIEW.",
                "domain": "global",
                "reason": "invalid"
            }
        ],
        "status": "INVALID_ARGUMENT"
    }
}
Please suggest how to access a BQ view using the REST API.
Regards,
San
It is expected that you cannot fetch data from a view using the tabledata.list REST API.
A VIEW is essentially a saved query; you need to run a query to materialize it into a table before you can use tabledata.list to fetch its data.
E.g., you can use the jobs.insert API to run a query like
CREATE TABLE SHARED_VIEWS.materialized_config_vw
AS SELECT * FROM SHARED_VIEWS.config_vw
Then you can read SHARED_VIEWS.materialized_config_vw using tabledata.list.
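As a sketch, this is roughly the request body you would POST to https://bigquery.googleapis.com/bigquery/v2/projects/data-dev2/jobs (names taken from the question; authentication omitted):

```python
def materialize_view_body(dataset, view, table):
    """Build a jobs.insert request body that materializes a view into
    a regular table, which tabledata.list can then read."""
    query = (
        f"CREATE TABLE {dataset}.{table} "
        f"AS SELECT * FROM {dataset}.{view}"
    )
    return {
        "configuration": {
            "query": {
                "query": query,
                "useLegacySql": False,  # CREATE TABLE ... AS needs standard SQL
            }
        }
    }
```

Once the job completes, tabledata.list on the new table behaves exactly like the working request against MAPPINGS.configuration.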

Use of type : object and properties in JSON schema

I'm new to JSON.
I see in various examples of JSON, like the following, where complex values are prefixed with "type": "object" and a properties { } block.
{
    "$schema": "http://json-schema.org/draft-06/schema#",
    "motor": {
        "type": "object",
        "properties": {
            "class": "string",
            "voltage": "number",
            "amperage": "number"
        }
    }
}
I have written JSON without type, object, and properties, like the following.
{
    "$schema": "http://json-schema.org/draft-06/schema#",
    "motor": {
        "class": "string",
        "voltage": "number",
        "amperage": "number"
    }
}
and submitted it to an online JSON Schema validator with no errors.
What is the purpose of "type": "object" and properties { }? Is it optional?
Yes, it is optional; try removing it and running the result through your validator.
{
    "$schema": "http://json-schema.org/draft-06/schema#",
    "foo": "bar"
}
You actually don't even need the $schema keyword; i.e., {} is valid JSON.
I would start by understanding what JSON is; https://www.json.org/ is the best place to start, but you may prefer something easier to read like https://www.w3schools.com/js/js_json_intro.asp.
A schema is just a template (or definition) to make sure you're producing valid JSON for the consumer.
As an example let's say you have an application that parses some json and looks for a key named test_score and saves the value (the score) in a database in some table/column. For this example we'll call the table tests and the column score. Since a database column requires a type we'll choose a numeric type, i.e. integer for our score column.
A valid json example for this may look like
{
    "test_score": 100
}
Following this example the application would parse the key test_score and save the value 100 to the tests.score database table/column.
But let's say a score is absent, so you put in a string, i.e. "NA":
{
    "test_score": "NA"
}
When the application attempts to save NA to the database, it will error because NA is a string, not the integer the database expects.
If you put each of those examples into any online JSON validator, both are valid JSON. However, while "NA" and 100 are both valid JSON values, "NA" is not valid for the actual application that needs to consume the JSON.
So now you may understand that the author of the JSON may wonder:
What are the different valid types I can use as values for my test score?
The responsibility then falls on the writers of the application to provide some sort of definition (i.e. a schema) that the clients (authors) can reference, so the author knows exactly how to structure the JSON for the application to process it accordingly. Having a schema also allows you to validate/test your JSON, so you know it can be processed by the application without actually having to send your JSON through the application.
So, putting it all together, let's say in the schema you see
"$test_score": {
    "type": "integer",
    "format": "tinyint"
},
The writer of the JSON now knows that they must pass an integer and that the range is 0 to 255 because it's a tinyint. They no longer have to trial-and-error different values to see which ones the application processes. This is a big benefit of having a schema.
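As a toy illustration of the kind of check a validator performs against that entry (the function and the handling of the tinyint range are illustrative; in practice you would use a validator library rather than hand-rolling this):

```python
def check_test_score(instance):
    """Toy version of the check a validator performs for a schema like
    {"type": "integer", "format": "tinyint"} on test_score."""
    score = instance.get("test_score")
    # bool is a subclass of int in Python, so exclude it explicitly.
    if isinstance(score, bool) or not isinstance(score, int):
        return False  # e.g. "NA" fails here, before reaching the database
    return 0 <= score <= 255  # tinyint range
```

So `{"test_score": 100}` passes, while `{"test_score": "NA"}` or an out-of-range value fails before the application ever touches the database.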

How to find data structure of an app in PowerApps

I want to create a SQL connection and import data from an app (the Shoutouts template) into a SQL database. I created a SQL connection and tried to import the data, but I got this error:
CreatedOnDateTime: The specified column is generated by the server and can't be specified
I do have the CreatedOnDateTime column created, but I guess its datatype is not the same, or something else is off.
Where can I see what fields and datatypes PowerApps imports into the SQL table via the SQL connection?
Thank you for your help!
Overall, there's no easy way to find out the structure of a data source in PowerApps (please create a new feature request in the PowerApps Ideas board for that). There is a convoluted way to find it out, however, which I'll go over here.
But for your specific problem, this is the schema of a SQL table that would match the schema of the data source in PowerApps:
CREATE TABLE PowerAppsTest.StackOverflow51847975 (
    PrimaryID BIGINT PRIMARY KEY,
    [Id] NVARCHAR(MAX),
    [Message] NVARCHAR(MAX),
    CreatedOnDateTime NVARCHAR(MAX),
    CreatorEmail NVARCHAR(MAX),
    CreatorName NVARCHAR(MAX),
    RecipientEmail NVARCHAR(MAX),
    RecipientName NVARCHAR(MAX),
    ShoutoutType NVARCHAR(MAX),
    [Image] IMAGE
)
Now for the generic case. You've been warned that this is convoluted, so proceed at your own risk :)
First, save the app locally to your computer:
The app will be saved with the .msapp extension, but it's basically a .zip file. If you're using Windows, you can rename it to change the extension to .zip, and you'll be able to uncompress and extract the files that describe the app.
One of those files, Entities.json, contains, among other things, the schema definitions of all data sources used in the app. It is a huge JSON file with all whitespace removed, so you may want to use an online tool to format (or prettify) the JSON to make it easier to read. Once this is done, you can open the file in your favorite text editor (anything better than Notepad should be able to handle it).
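The extract-and-decode steps can also be scripted. A sketch assuming Entities.json is a top-level array of data-source objects with "Name" and "DataEntityMetadataJson" properties, as described below (adjust the key names if your app version differs):

```python
import json
import zipfile

def read_datasource_schema(msapp_path, datasource_name):
    """Read Entities.json straight out of a .msapp archive (which is a
    zip file) and decode the embedded DataEntityMetadataJson string
    for one data source. Returns None if the name is not found."""
    with zipfile.ZipFile(msapp_path) as archive:
        entities = json.loads(archive.read("Entities.json"))
    for entry in entities:
        if entry.get("Name") == datasource_name:
            # The schema is itself JSON, stored as an escaped string,
            # so it needs a second json.loads pass.
            return json.loads(entry["DataEntityMetadataJson"])
    return None
```

This saves the rename-to-.zip and manual prettifying steps: the returned dict is the already-decoded schema object.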
With the file opened, search for an entry in the JSON root with the property "Name" and the value equal to the name of the data source. For example, in the shoutouts app case, the data source is called "Shoutout", so search for
"Name": "Shoutout"
You'll have to remove the space if you didn't pretty-print the JSON file prior to opening it. This should be an object that describes the data source, and it has one property called DataEntityMetadataJson that has the data source schema, formatted as a JSON string. Again in the Shoutouts example, this is the value:
"{\"name\":\"Shoutout\",\"title\":\"Shoutout\",\"x-ms-permission\":\"read-write\",\"schema\":{\"type\":\"array\",\"items\":{...
Notice that it again is not pretty-printed. You'll first need to decode that string, then pretty-print it again, and you'll end up with something like this:
{
    "name": "Shoutout",
    "title": "Shoutout",
    "x-ms-permission": "read-write",
    "schema": {
        "type": "array",
        "items": {
            "type": "object",
            "properties": {
                "PrimaryID": {
                    "type": "number",
                    "format": "double",
                    ...
                },
                "Message": {
                    "type": "string",
                    ...
                },
                "Image": {
                    "type": "string",
                    "format": "uri",
                    "x-ms-media-kind": "image",
                    ...
                },
                "Id": {
                    "type": "string",
                    ...
                },
                "CreatedOnDateTime": {
                    "type": "string",
                    ...
                },
                ...
And this is the schema for the data source. From that, I recreated the schema in SQL, removed the reference to the Shoutout data source from the app (which caused many errors), then added a reference to my SQL table; since it has a different name, I went looking for all the places in the app that had errors and fixed them.
Hope this helps!

How to add field descriptions programmatically in BigQuery table

I want to add field descriptions to a BQ table programmatically; I know how to do it in the UI.
I have this requirement because a few tables in my dataset are refreshed on a daily basis and we use "writeMode": "WRITE_TRUNCATE". This also deletes the descriptions of all the table's fields.
I have also added the description in my schema file for the table, like this:
{
    "name": "tax",
    "type": "FLOAT",
    "description": "Tax amount customer paid"
}
But I don't see the descriptions in my final table after running the scripts to load data.
The Tables APIs (https://cloud.google.com/bigquery/docs/reference/v2/tables) allow you to set descriptions for the table and for the fields of its schema.
You can set descriptions during
table creation - https://cloud.google.com/bigquery/docs/reference/v2/tables/insert
or after the table is created using one of the APIs below:
Patch - https://cloud.google.com/bigquery/docs/reference/v2/tables/patch
or Update - https://cloud.google.com/bigquery/docs/reference/v2/tables/update
I think in your case the Patch API is more suitable.
The link below shows the table resource properties you can set with those APIs:
https://cloud.google.com/bigquery/docs/reference/v2/tables#resource
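As a minimal sketch of what a tables.patch body for descriptions could look like (the helper name is illustrative; note the schema is replaced as a whole, so every field should be listed with its type, not just the ones being described):

```python
def description_patch_body(fields):
    """Build a tables.patch request body that (re)sets field
    descriptions. fields is a list of (name, type, description)
    tuples covering the full schema of the table."""
    return {
        "schema": {
            "fields": [
                {"name": n, "type": t, "description": d}
                for n, t, d in fields
            ]
        }
    }
```

You would send this with PATCH to .../projects/{project}/datasets/{dataset}/tables/{table}; patch only touches the properties you include, which is why it is gentler than update here.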
BigQuery load jobs accept a schema that includes "description" with each field.
https://cloud.google.com/bigquery/docs/reference/v2/jobs#configuration.load
If you specify the description along with each field you are creating during your WRITE_TRUNCATE operation, the descriptions should be applied to the destination table.
Here's a snippet from the above link that includes the schema you are specifying:
"load": {
    "sourceUris": [
        string
    ],
    "schema": {
        "fields": [
            {
                "name": string,
                "type": string,
                "mode": string,
                "fields": [
                    (TableFieldSchema)
                ],
                "description": string
            }
        ]
    },
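Putting that together, here is a sketch of a full load configuration carrying descriptions through a WRITE_TRUNCATE load (the bucket, project, dataset, and table names are placeholders):

```python
def load_config_with_descriptions(source_uri, project, dataset, table, fields):
    """Build a jobs.insert load configuration whose schema includes a
    "description" on each field, so the truncate-and-rewrite load
    recreates the destination table with its documentation intact."""
    return {
        "configuration": {
            "load": {
                "sourceUris": [source_uri],
                "destinationTable": {
                    "projectId": project,
                    "datasetId": dataset,
                    "tableId": table,
                },
                "writeDisposition": "WRITE_TRUNCATE",
                "schema": {"fields": fields},
            }
        }
    }
```

With the "tax" field from the question's schema file passed in the fields list, the description survives each daily refresh instead of being wiped.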