How to add custom keywords in JSON Schema? - jsonschema

I want to generate values using a JSON schema, but JSON Schema does not support a file type, so I added a custom keyword inheritType. Now when I validate this schema it throws an exception.
So how do I solve this problem, and how do I add a custom keyword to JSON Schema?
This is my JSON schema:
{
  "type" : "object",
  "properties" : {
    "file" : {
      "type" : "string",
      "inheritType" : "File"
    }
  }
}
This is the error report my Java code produces:
{
  "level" : "error",
  "schema" : {
    "loadingURI" : "#",
    "pointer" : "/properties/file/inheritType",
    "ignored" : ["inheritType"]
  }
}

It sounds like you might want to add the keyword to those allowed, based on this GitHub issue comment.
This is an implementation choice. The primary goal of these messages
is to, in fact, detect spelling mistakes (consider pattenrProperties
for instance).
As the report says, these warnings will be ignored; therefore you need
not worry about these.
Note that you have the option to either:
configure the log level so that those warnings do not appear in the
final log, or update the dictionary of keywords so that this keyword
is recognized.
I can't see how to do this in the javadocs though. Sorry.
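For what it's worth, the keyword dictionary can be extended programmatically. Below is only a rough sketch based on the custom-keyword example shipped with the com.github.fge json-schema-validator library; class names and method signatures may differ between versions, the $schema URI is a placeholder, and the do-nothing syntax checker is my own assumption for a keyword that only needs to be recognized rather than validated:

import java.util.Collection;

import com.github.fge.jackson.NodeType;
import com.github.fge.jackson.jsonpointer.JsonPointer;
import com.github.fge.jsonschema.cfg.ValidationConfiguration;
import com.github.fge.jsonschema.core.exceptions.ProcessingException;
import com.github.fge.jsonschema.core.keyword.syntax.checkers.AbstractSyntaxChecker;
import com.github.fge.jsonschema.core.report.ProcessingReport;
import com.github.fge.jsonschema.core.tree.SchemaTree;
import com.github.fge.jsonschema.library.DraftV4Library;
import com.github.fge.jsonschema.library.Keyword;
import com.github.fge.jsonschema.library.Library;
import com.github.fge.jsonschema.main.JsonSchemaFactory;
import com.github.fge.msgsimple.bundle.MessageBundle;

public final class InheritTypeSupport {

    // Accepts "inheritType" when its value is a string; there is nothing further to check.
    private static final class InheritTypeSyntaxChecker extends AbstractSyntaxChecker {
        InheritTypeSyntaxChecker() {
            super("inheritType", NodeType.STRING);
        }

        @Override
        protected void checkValue(final Collection<JsonPointer> pointers,
                final MessageBundle bundle, final ProcessingReport report,
                final SchemaTree tree) throws ProcessingException {
            // No extra constraints on the value.
        }
    }

    // Builds a factory whose default keyword library also knows about "inheritType".
    public static JsonSchemaFactory factory() {
        final Keyword inheritType = Keyword.newBuilder("inheritType")
                .withSyntaxChecker(new InheritTypeSyntaxChecker())
                .freeze();
        final Library library = DraftV4Library.get().thaw()
                .addKeyword(inheritType)
                .freeze();
        final ValidationConfiguration cfg = ValidationConfiguration.newBuilder()
                .setDefaultLibrary("http://example.com/inherit-type-schema#", library)
                .freeze();
        return JsonSchemaFactory.newBuilder()
                .setValidationConfiguration(cfg)
                .freeze();
    }
}

Schemas validated through a factory built this way should then treat inheritType as a known keyword instead of flagging it as unrecognized.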

Related

Can I compose two JSON Schemas in a third one?

I want to describe the JSON my API will return using JSON Schema, referencing the schemas in my OpenAPI configuration file.
I will need to have a different schema for each API method. Let’s say I support GET /people and GET /people/{id}. I know how to define the schema of a "person" once and reference it in both /people and /people/{id} using $ref.
[EDIT: See a (hopefully) clearer example at the end of the post]
What I don’t get is how to define and reuse the structure of my response, that is:
{
"success": true,
"result" : [results]
}
or
{
"success": false,
"message": [string]
}
Using anyOf (both for the success/error format check and for the results, referencing various schemas such as people-multi.json and people-single.json), I can define a "root schema" api-response.json, and I can check the general validity of the JSON response, but it doesn’t allow me to check that the /people call returns an array of people and not a single person, for instance.
How can I define an api-method-people.json that would include the general structure of the response (from an external schema of course, to keep it DRY) and inject another schema in result?
EDIT: A more concrete example (hopefully presented in a clearer way)
I have two JSON schemas describing the response format of my two API methods: method-1.json and method-2.json.
I could define them like this (not a schema here, I’m too lazy):
method-1.json :
{
success: (boolean),
result: { id: (integer), name: (string) }
}
method-2.json :
{
success: (boolean),
result: [ (integer), (integer), ... ]
}
But I don’t want to repeat the structure (first level of the JSON), so I want to extract it in a response-base.json that would be somehow (?) referenced in both method-1.json and method-2.json, instead of defining the success and result properties for every method.
In short, I guess I want some kind of composition or inheritance, as opposed to inclusion (permitted by $ref).
So JSON Schema doesn’t allow this kind of composition, at least not in a simple way or before draft 2019-09 (thanks @Relequestual!).
However, I managed to make it work in my case. I first separated the two main cases ("result" vs. "error") in two base schemas api-result.json and api-error.json. (If I want to return an error, I just point to the api-error.json schema.)
In the case of a proper API result, I define a schema for a given operation using allOf and $ref to extend the base result schema, and then redefine the result property:
{
  "$schema": "…",
  "$id": "…/api-result-get-people.json",
  "allOf": [{ "$ref": "api-result.json" }],
  "properties": {
    "result": {
      …
    }
  }
}
(Edit: I was previously using just $ref at the top level, but it doesn’t seem to work)
This way I can point to this api-result-get-people.json, and check the general structure (success key with a value of true, and a required result key) as well as the specific form of the result for this "get people" API method.
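For completeness, the api-result.json base schema referenced above could look something like the sketch below; the $id, draft version and use of const are my assumptions, only the success/result structure comes from the post:

{
  "$schema": "http://json-schema.org/draft-07/schema#",
  "$id": "https://example.com/schemas/api-result.json",
  "type": "object",
  "required": ["success", "result"],
  "properties": {
    "success": { "const": true },
    "result": {}
  }
}

The empty schema for result deliberately allows anything, so each method schema (like api-result-get-people.json) can tighten it via allOf as shown above.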

Use of type : object and properties in JSON schema

I'm new to JSON.
I see in various examples of JSON, like the following, that complex values are prefixed with "type": "object" and a properties { } block:
{
  "$schema": "http://json-schema.org/draft-06/schema#",
  "motor" : {
    "type" : "object",
    "properties" : {
      "class" : "string",
      "voltage" : "number",
      "amperage" : "number"
    }
  }
}
I have written JSON without "type": "object" and "properties", like the following:
{
  "$schema": "http://json-schema.org/draft-06/schema#",
  "motor" : {
    "class" : "string",
    "voltage" : "number",
    "amperage" : "number"
  }
}
and submitted it to an online JSON schema validator with no errors.
What is the purpose of type:object, properties { }? Is it optional?
Yes it is optional, try removing it and use your validator.
{
"$schema": "http://json-schema.org/draft-06/schema#",
"foo": "bar"
}
You actually don't even need to use the $schema keyword, i.e. {} is valid JSON.
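To tie that back to the motor example above: when type and properties are actually used as schema keywords, each property gets its own schema object rather than a bare string. A sketch of the presumed intent:

{
  "$schema": "http://json-schema.org/draft-06/schema#",
  "type": "object",
  "properties": {
    "motor": {
      "type": "object",
      "properties": {
        "class": { "type": "string" },
        "voltage": { "type": "number" },
        "amperage": { "type": "number" }
      }
    }
  }
}

Without type and properties, the earlier examples are still valid JSON, but there are no recognized schema keywords describing the values, so a validator has nothing to enforce.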
I would start by understanding what JSON is; https://www.json.org/ is the best place to start, but you may prefer something easier to read like https://www.w3schools.com/js/js_json_intro.asp.
A schema is just a template (or definition) to make sure you're producing valid JSON for the consumer.
As an example let's say you have an application that parses some json and looks for a key named test_score and saves the value (the score) in a database in some table/column. For this example we'll call the table tests and the column score. Since a database column requires a type we'll choose a numeric type, i.e. integer for our score column.
A valid json example for this may look like
{
"test_score": 100
}
Following this example the application would parse the key test_score and save the value 100 to the tests.score database table/column.
But let's say a score is absent, so you put in a string, i.e. "NA":
{
"test_score": "NA"
}
When the application attempts to save NA to the database it will error, because NA is a string, not the integer the database expects.
If you put each of those examples into any online JSON validator, they are both valid JSON. However, while it's valid JSON to use "NA" or 100, it is not valid for the actual application that needs to consume the JSON.
So now you may understand why the author of the JSON may wonder:
What are the different valid types I can use as values for my test score?
The responsibility then falls on the writers of the application to provide some sort of definition (i.e. a schema) that the clients (authors) can reference, so the author knows exactly how to structure the JSON so the application can process it accordingly. Having a schema also allows you to validate/test your JSON so you know it can be processed by the application, without actually having to send your JSON through the application.
So, putting it all together, let's say in the schema you see:
"$test_score": {
"type": "integer",
"format": "tinyint"
},
The writer of the JSON now knows that they must pass an integer and the range is 0 to 255 because it's a tinyint. They no longer have to use trial and error with different values to see which ones the application will process. This is a big benefit of having a schema.
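Note that "format": "tinyint" is not one of the formats defined by the JSON Schema spec, so validators will typically ignore it; if the goal is to enforce the 0 to 255 range, a sketch using standard keywords (my assumption of the intent) would be:

{
  "$schema": "http://json-schema.org/draft-06/schema#",
  "type": "object",
  "required": ["test_score"],
  "properties": {
    "test_score": {
      "type": "integer",
      "minimum": 0,
      "maximum": 255
    }
  }
}

With this schema, {"test_score": 100} validates and {"test_score": "NA"} is rejected before it ever reaches the application.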

"Cannot return null for non-nullable type: 'Person' within parent 'Messages' (/getMessages/sendBy)" in GraphQL SDL( aws appsync)

I am new to GraphQL. I am implementing a react-native app using AWS AppSync. Following is the code I have written in my schema:
type Messages {
  id: ID!
  createdAt: String!
  updateAt: String!
  text: String!
  sendBy: Person!
  #relation(name: "UserMessages")
}

type Person {
  id: ID!
  createdAt: String!
  updateAt: String!
  name: String!
  messages: [Messages!]!
  #relation(name: "UserMessages")
}
When I try to query the sendBy value it gives me an error:
query getMessages {
  getMessages(id: "a0546b5d-1faf-444c-b243-fab5e1f47d2d") {
    id
    text
    sendBy {
      name
    }
  }
}
{
  "data": {
    "getMessages": null
  },
  "errors": [
    {
      "path": [
        "getMessages",
        "sendBy"
      ],
      "locations": null,
      "message": "Cannot return null for non-nullable type: 'Person' within parent 'Messages' (/getMessages/sendBy)"
    }
  ]
}
I am not understanding that error. Please help me. Thanks in advance!
This might sound silly, but developers do make this kind of mistake, and so did I. In a subscription, the client can retrieve only those fields which are output in the mutation query. For example, if your mutation query looks like this:
mutation newMessage {
  addMessage(input: {
    field_1: "",
    field_2: "",
    field_n: ""
  }) {
    field_1,
    field_2
  }
}
In the above mutation, since we are outputting only field_1 and field_2, a client can retrieve only a subset of these fields.
So if, in the schema, you have defined field_3 as required (!) for a subscription, and you are not outputting field_3 in the above mutation, it will throw an error saying Cannot return null for non-nullable type: field_3.
Looks like the path [getMessages, sendBy] is resolving to a null value, and your schema definition (sendBy: Person!) says sendBy field cannot resolve to null. Please check if a resolver is attached to the field sendBy in type Messages.
If there is a resolver attached, please enable CloudWatch logs for this API (This can be done on the Settings page in Console, select ALL option). You should be able to check what the resolved Request/Response mapping was for the path [getMessages, 0, sendBy].
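If it turns out no resolver is attached, a unit resolver on Messages.sendBy along these lines is one way to wire it up. This is only a sketch: it assumes the Person objects live in a DynamoDB data source keyed by id, and that the underlying message item stores the sender's id in an attribute such as senderId (neither is shown in the question).

## Request mapping template for Messages.sendBy
{
  "version": "2017-02-28",
  "operation": "GetItem",
  "key": {
    "id": $util.dynamodb.toDynamoDBJson($ctx.source.senderId)
  }
}

## Response mapping template
$util.toJson($ctx.result)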
I encountered a similar issue while working on my setup with CloudFormation. In my particular situation I didn't configure the Projection correctly for the Global Secondary Indexes. Since the attributes weren't projected into the index, I was getting an ID in the response but null for all other values. Updating the ProjectionType to 'ALL' resolved my issue. Not to say that is the "correct" setting but for my particular implementation it was needed.
More on Global Secondary Index Projection for CloudFormation can be found here: https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-dynamodb-projectionobject.html
Attributes that are copied (projected) from the source table into the index. These attributes are additions to the primary key attributes and index key attributes, which are automatically projected.
I had a similar issue.
What happened to me was a problem with an update resolver. I was updating a field that was used as a GSI (Global Secondary Index), but I was not updating the GSI itself, so when querying by the GSI the index existed but the key for that attribute had changed.
If you are using DynamoDB, you can start debugging there. You can check the item and see if you have any reference to the primary key or the indexes.
I had a similar issue.
For me the problem lay with the return type in the schema. I was doing a query with the PK on a DynamoDB table, so it was returning a list of items, but in my schema I had the return type defined as a singular struct format.
The error was resolved when I made the return type in the schema a list of items,
like
type mySchema {
  ids: [ID]
}
instead of
type mySchema {
  id: ID!
  name: String!
  details: String!
}
This error is thrown for multiple reasons, so your cause could be something else, but I just wanted to post one of the scenarios.

How to update old data with new data in Firebase?

I'm developing a chat app. In my app there are 4 nodes called User, Recent, Message, Group. I'm using Objective-C. My message object looks like this:
{
  "createdAt" : 1.486618017521277E9,
  "groupId" : "-KcWKeXXQ9tjYsYfCknx",
  "objectId" : "-KcWKftK8GiMxxAnarL5",
  "senderId" : "828949592937598976",
  "senderImage" : "http://hairstyleonpoint.com/wp-content/uploads/2014/10/marcello-alvarez.png",
  "senderName" : "John Doee",
  "status" : "Seen",
  "text" : "Hi all",
  "type" : "text",
  "updatedAt" : 1.486622011467733E9
}
When I update a User, every message's senderName should be updated accordingly. Is there a way to do this via code, or do I need to write a rule? I'm a newbie to Firebase. Please suggest a way to do that, and if it's possible to do with rules, please guide me on this.
It's not possible to do this via rules, so you have to manually iterate over all your data and update the senderName.
Anyway, I think you would probably be better off saving {senderID: $someUserID} instead, like you would do in a relational database. The userID is static, so you can change the user without having to update all the instances where you use it.
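If you do keep the denormalized senderName, the usual approach is a multi-path ("fan-out") update: look up the keys of the affected messages, then write all the changed paths in one call (updateChildValues: on iOS). A sketch of what that update payload could look like, with the paths assumed from the node names in the question:

{
  "Message/-KcWKftK8GiMxxAnarL5/senderName": "New Name",
  "Message/<anotherMessageId>/senderName": "New Name"
}

Firebase applies all paths in a single atomic write, but you still have to query for the message keys yourself, which is why storing only the sender's id is usually the better design.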

Permissions on a rest API implementing HATEOAS

I'm trying to figure out the right way to handle permissions in a single page app that talks directly to several RESTful APIs, that implement HATEOAS.
As an example:
"As a user of my application I can view, start and pause jobs but not stop them."
The underlying rest API has the following resource:
/jobs/{id}
Which accepts GET and PUT. The GET returns a job model and the PUT accepts a job model as a request body in the form:
{
  "_links" : {
    "self" : "/jobs/12345678"
  },
  "id" : 12345678,
  "description" : "foo job",
  "state" : "STOPPED"
}
Accepted job states can be: dormant | running | paused | stopped.
The requirement says that on the UI I must have the buttons:
START, PAUSE, STOP
... and only display them based on the logged-in user's permissions.
From the API perspective everything works as the underlying logic on the server makes sure that the user cannot update the state to a STOPPED state when a request is made (a 401 is returned maybe).
What is the best way to inform the app / UI of the user's permissions, so it can hide any buttons that the user has no permission to action?
Should the API provide a list of permissions, maybe something like :
{
  "_links" : {
    "self" : "/permissions",
    "jobs" : "/jobs"
  },
  "permissions" : {
    "job" : ["UPDATE", "DELETE"],
    "job-updates" : ["START", "PAUSE"]
  }
}
Or should the API change so that the permissions are reflected in the HATEOAS links, maybe something like:
{
  "_links" : {
    "self" : "/jobs/12345678",
    "start" : "/jobs/12345678/state?to=RUNNING",
    "pause" : "/jobs/12345678/state?to=PAUSED"
  },
  "id" : 12345678,
  "description" : "foo job",
  "state" : "DORMANT"
}
Or should it be done in a completely different way?
UPDATE
I've found the following article which suggests an answer:
https://softwareengineering.stackexchange.com/questions/215975/how-to-handle-fine-grained-field-based-acl-permissions-in-a-restful-service
I would go with the latter: Imply permissions based on which links are present.
If the link isn't there, the user can't access the resource/perform the action. If it is, they can. That's what I'd do, because it's simple and clean and leaves little to the discretion of the front-end code. Decoupling, yo.
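For example, the job representation returned to a user who may pause a running job but not stop it would simply omit the links they cannot follow (a sketch based on the representations above):

{
  "_links" : {
    "self" : "/jobs/12345678",
    "pause" : "/jobs/12345678/state?to=PAUSED"
  },
  "id" : 12345678,
  "description" : "foo job",
  "state" : "RUNNING"
}

The UI then renders a button for each link it understands, instead of consulting a separate permissions document.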
Alternatively, if you do want to include all the links in each response but explicitly specify which are allowed and which aren't, and you use a format such as HAL to write your links, you could extend it with a flag on each link like so:
{
  "_links" : {
    "self" : {
      "href" : "/jobs/12345678",
      "allowed" : false
    },
    "start" : {
      "href" : "/jobs/12345678/state?to=RUNNING",
      "allowed" : false
    },
    "pause" : {
      "href" : "/jobs/12345678/state?to=PAUSED",
      "allowed" : false
    }
  },
  "id" : 12345678,
  "description" : "foo job",
  "state" : "DORMANT"
}
I would go with the latter. The reason I don't like the former is that you are creating extra work for the client by requiring it to figure out the mapping between permissions and the resources they permit access to. If you use HATEOAS and check for the presence of relation types, this mapping is done for you by the server. It also means the URIs can change without breaking the client.
I recently wrote a blog post on this area:
https://www.opencredo.com/2015/08/12/designing-rest-api-fine-grained-resources-hateoas-hal/
You should be using forms, not links, to provide state transition hypermedia.
If you cannot provide forms in your media type, provide links to URIs which use another media type that supports forms, such as XHTML.
IANA has link relations for create-form, edit-form and delete-form for this purpose.
Also, please do not use start and pause as real link relations. If you define them yourself, they must be URIs (preferably HTTP URLs, but any URI under your control will suffice). start has a completely different meaning to what you're using it for, and pause is not defined.
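A sketch of what that could look like in the job representation, using IANA's edit-form relation plus extension relations named by URIs (all of the URLs below are placeholders, not defined anywhere):

{
  "_links" : {
    "self" : "/jobs/12345678",
    "edit-form" : "/jobs/12345678/edit",
    "https://rels.example.com/start" : "/jobs/12345678/state?to=RUNNING",
    "https://rels.example.com/pause" : "/jobs/12345678/state?to=PAUSED"
  },
  "id" : 12345678,
  "description" : "foo job",
  "state" : "DORMANT"
}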