Creating a stricter meta-schema for JSON Schema itself

I want to validate a JSON Schema itself, to check that the schema document is valid. So I don't want to validate a standard JSON document against a schema.
I need a meta-schema that is a bit stricter than the current meta-schema.
I would like to have a meta-schema that
only allows appropriate properties on a type
e.g. no maxLength on an integer type
adds custom required fields on a type.
In this question some insights are given by @JasonDesrosiers.
I was wondering if there is any update on those improvements, and whether there is an example of how to extend the meta-schema.
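As an illustration of the general technique (a minimal draft-07 sketch, not necessarily Jason Desrosiers' exact approach): wrap the standard meta-schema in allOf, add your own constraints, and validate schema documents against the combined result. The rule below forbids maxLength whenever type is integer:

{
  "$schema": "http://json-schema.org/draft-07/schema#",
  "allOf": [
    { "$ref": "http://json-schema.org/draft-07/schema#" },
    {
      "if": {
        "properties": { "type": { "const": "integer" } },
        "required": ["type"]
      },
      "then": {
        "not": { "required": ["maxLength"] }
      }
    }
  ]
}

The "not": { "required": ["maxLength"] } construct is the standard trick for saying a property must be absent.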

Related

Does Elm have a JSON Type Provider?

It seems that the only way to handle JSON in Elm is to decode each JSON structure manually with Json.Decode.
Is there a nice alternative, something like F#'s Type Provider?
F# Data: JSON Type Provider
There is no official package doing this, but there are community alternatives like this one: https://github.com/eeue56/json-to-elm
Create Elm type aliases and decoders based on JSON input
This project allows you to automate the creation of:
type aliases from JSON data
decoders from type aliases and some union types
encoders from type aliases and some union types
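To give an idea of the output, this is roughly the kind of type alias and decoder such a tool generates for a document like {"label": "foo", "qty": 2} (the Article name is made up for this sketch):

import Json.Decode exposing (Decoder, field, int, map2, string)

type alias Article =
    { label : String
    , qty : Int
    }

articleDecoder : Decoder Article
articleDecoder =
    map2 Article
        (field "label" string)
        (field "qty" int)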

BigQuery load job of a JSON file - forcing a field to be STRING during schema auto-detect

If at the beginning the JSON contains
"label": "foo"
and later it is
"label": "123"
BigQuery returns
Invalid schema update. Field label has changed type from STRING to INTEGER
although the value is "123" and not 123.
The file is being loaded with
autodetect: true
Is there a way to force BigQuery to treat any field as a string when it applies its auto-detect, or is the only way to use CSV instead?
Auto-detection makes a best-effort attempt to recognize the data type by scanning up to 100 rows of data as a representative sample. There is no way to give it a hint about which type a field should be. You may consider specifying the schema manually for your use case.
UPDATE:
I have tested loading a file containing only {"label" : "123"}, and it is recognized as INTEGER. Therefore, auto-detection recognizes "123" as INTEGER no matter whether there are quotes or not. For your case, you may consider exporting the schema from the existing table as explained in the documentation:
Note: You can view the schema of an existing table in JSON format by entering the following command: bq show --format=prettyjson [DATASET].[TABLE].
and use it for further dynamic loads.
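A minimal sketch of that workflow, with placeholder names (mydataset.mytable, data.json):

bq show --format=prettyjson mydataset.mytable > table_def.json
# copy the "fields" array from the "schema" section into schema.json, then:
bq load --source_format=NEWLINE_DELIMITED_JSON mydataset.mytable data.json schema.json

Passing an explicit schema file as the last argument means auto-detection is not used, so "123" stays a STRING if the schema says so.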

Purpose of a JSON schema file while loading data into BigQuery from a CSV file

Can someone please help me by stating the purpose of providing a JSON schema file while loading a file into a BigQuery table using the bq command? What are the advantages?
Does this file help to maintain data integrity by avoiding any column swap?
Regards,
Sreekanth
Specifying a JSON schema, instead of relying on auto-detect, means that you are guaranteed to get the expected types for each column being loaded. If you have data that looks like this, for example:
1,'foo',true
2,'bar',false
3,'baz',true
Schema auto-detection would infer that the type of the first column is an INTEGER (a.k.a. INT64). Maybe you plan to load more data in the future, though, that looks like this:
3.14,'foo',true
1.59,'bar',false
-2.001,'baz',true
In that case, you probably want the first column to have type FLOAT (a.k.a. FLOAT64) instead. If you provide a schema when you load the first file, you can specify a type of FLOAT for that column explicitly.
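For example, with the bq tool you could pass the schema inline (the column names here are made up):

bq load --source_format=CSV mydataset.mytable data.csv measurement:FLOAT,name:STRING,flag:BOOLEAN

or put the equivalent field definitions in a JSON schema file and pass that file as the last argument instead.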

Using a TOXI solution in a database with JSON data

We want to design a new database schema for our company's application.
The program is developed in C#, with Nancy on the server side and React, Redux and GraphQL on the client side.
Our company often has to implement sudden changes to handle new business data, so we want to build a solid core for the fundamental data that is not subject to decay, e.g. Article (Code, description, Qty, Value, Price, categoryId).
But often we need to add a particular category to an article, or a special implementation for a limited period of time only. We are thinking of implementing a TOXI-like solution to handle those situations.
In our TOXI pattern implementation we want to add a third table that defines each tag's data type and definition.
Here is a simple explanatory image:
In the Metadata table we have two columns with JSON data: DataType and DefinedValue.
DataType defines how the program (eventually a function in the DB) must cast the varchar data in articoli_meta.value.
DefinedValue, when not null, defines that the type must take one of a series of predefined values, e.g. High, Medium, Low, etc.
Those two columns are varchar and contain JSON following a predefined standard defined by our programming team (eventually with an SQL function to validate those two columns).
I understand that this kind of approach is not a 'pure' relational approach, but we must consider that we often pass data to the client in JSON format, so the DefinedValue column can easily be queried as a string and passed to the interface as data for a dropdown list.
Any ideas, experience or design tips are appreciated.
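A rough sketch of the three tables described above, with illustrative names and types loosely following the question's naming (the JSON examples in the comments show one possible standard):

CREATE TABLE articoli (
    code        varchar(20) PRIMARY KEY,
    description varchar(200),
    qty         int,
    value       decimal(12,2),
    price       decimal(12,2),
    category_id int
);

CREATE TABLE metadata (
    id            int PRIMARY KEY,
    name          varchar(50),
    data_type     varchar(500), -- JSON, e.g. {"cast": "decimal(12,2)"}
    defined_value varchar(500)  -- JSON, e.g. ["High", "Medium", "Low"] or null
);

CREATE TABLE articoli_meta (
    article_code varchar(20) REFERENCES articoli (code),
    metadata_id  int REFERENCES metadata (id),
    value        varchar(500),
    PRIMARY KEY (article_code, metadata_id)
);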

Tables using LIKE may only reference flat structures

I want to have a table parameter in an RFC function module of type CGPL_TEXT1, which uses the domain TEXT40, a CHAR 40.
I tried to create it:
IT_PARTS_DESCRIPTION LIKE CGPL_TEXT1
But I keep getting this error
tables using like may only reference flat structures
I am also not able to use TYPE. If I do so, I get this error:
Flat types may only be referenced using LIKE for table parameters
Don't go there. For RFC-enabled function modules, always use a structure as the line type for your table. The RFC protocol itself also supports unstructured tables, but many adapters don't. So you should:
declare a data dictionary structure Z_MY_PARTS_DATA with a single field DESCRIPTION TYPE CGPL_TEXT2
declare a data dictionary table type Z_MY_PARTS_TABLE using this structure
use that table type in your function module.
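A minimal sketch of the resulting function module (the function name is made up; Z_MY_PARTS_DATA and Z_MY_PARTS_TABLE are the dictionary objects from the steps above, created in SE11):

FUNCTION z_get_parts_descriptions.
*"----------------------------------------------------------------------
*"*"Local Interface:
*"  TABLES
*"      IT_PARTS_DESCRIPTION TYPE Z_MY_PARTS_TABLE
*"----------------------------------------------------------------------
  " Each line is a flat structure with a single DESCRIPTION field.
  DATA ls_line TYPE z_my_parts_data.
  ls_line-description = 'Example description'.
  APPEND ls_line TO it_parts_description.
ENDFUNCTION.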
Look inside the dictionary for a table type which has only one column representing your text.
If you cannot find one, just go the proper way and define a Z structure and a Z table type based on that structure. This is the proper way, and I also prefer it (even sometimes when I would not really need it), because the structures and table types can be documented.