Assume we have a JSON document:
{
  "ips": ["192.168.1.1", "192.168.1.2", "192.168.1.3"],
  "active": "192.168.1.1"
}
Can I write a schema asserting that active must be one of the ips, so that the following document would fail validation?
{
  "ips": ["192.168.1.1", "192.168.1.2", "192.168.1.3"],
  "active": "192.168.1.4"
}
This is only possible with the pure JSON Schema standard if the set of ips is known in advance. In that case, you can define an enum of IP values and use it to validate the active property. See also: http://json-schema.org/understanding-json-schema/reference/generic.html#id4
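For example, a minimal sketch of that approach, assuming the fixed set of three IPs from the question:

{
  "type": "object",
  "properties": {
    "ips": {
      "type": "array",
      "items": { "enum": ["192.168.1.1", "192.168.1.2", "192.168.1.3"] }
    },
    "active": { "enum": ["192.168.1.1", "192.168.1.2", "192.168.1.3"] }
  }
}

Against this schema, the second document fails because "192.168.1.4" is not one of the enumerated values.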
If the set of IPs changes, you could use JSON Patch to update the schema, then apply the validation.
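For instance, a JSON Patch (RFC 6902) document that swaps in a new set of allowed values could look like this (the /properties/active/enum path assumes the schema layout sketched above):

[
  { "op": "replace", "path": "/properties/active/enum", "value": ["10.0.0.1", "10.0.0.2"] }
]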
Related
I want to describe the JSON my API will return using JSON Schema, referencing the schemas in my OpenAPI configuration file.
I will need to have a different schema for each API method. Let’s say I support GET /people and GET /people/{id}. I know how to define the schema of a "person" once and reference it in both /people and /people/{id} using $ref.
[EDIT: See a (hopefully) clearer example at the end of the post]
What I don’t get is how to define and reuse the structure of my response, that is:
{
  "success": true,
  "result": [results]
}
or
{
  "success": false,
  "message": [string]
}
Using anyOf (both for the success/error format check and for the results, referencing various schemas such as people-multi.json and people-single.json), I can define a "root schema" api-response.json and check the general validity of the JSON response. But this doesn't allow me to check that the /people call returns an array of people and not a single person, for instance.
How can I define an api-method-people.json that would include the general structure of the response (from an external schema of course, to keep it DRY) and inject another schema in result?
EDIT: A more concrete example (hopefully presented in a clearer way)
I have two JSON schemas describing the response format of my two API methods: method-1.json and method-2.json.
I could define them like this (not a schema here, I’m too lazy):
method-1.json:
{
  success: (boolean),
  result: { id: (integer), name: (string) }
}
method-2.json:
{
  success: (boolean),
  result: [ (integer), (integer), ... ]
}
But I don’t want to repeat the structure (first level of the JSON), so I want to extract it in a response-base.json that would be somehow (?) referenced in both method-1.json and method-2.json, instead of defining the success and result properties for every method.
In short, I guess I want some kind of composition or inheritance, as opposed to inclusion (permitted by $ref).
So JSON Schema doesn't allow this kind of composition, at least not in a simple way, and not before draft 2019-09 (thanks @Relequestual!).
However, I managed to make it work in my case. I first separated the two main cases ("result" vs. "error") into two base schemas, api-result.json and api-error.json. (If I want to return an error, I just point to the api-error.json schema.)
In the case of a proper API result, I define a schema for a given operation using allOf and $ref to extend the base result schema, and then redefine the result property:
{
  "$schema": "…",
  "$id": "…/api-result-get-people.json",
  "allOf": [{ "$ref": "api-result.json" }],
  "properties": {
    "result": {
      …
    }
  }
}
(Edit: I was previously using just $ref at the top level, but it doesn’t seem to work)
This way I can point to this api-result-get-people.json, and check the general structure (success key with a value of true, and a required result key) as well as the specific form of the result for this "get people" API method.
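For reference, a minimal sketch of what such an api-result.json base schema could look like (the exact keywords are my assumption; the post only describes it in prose):

{
  "$schema": "…",
  "$id": "…/api-result.json",
  "type": "object",
  "required": ["success", "result"],
  "properties": {
    "success": { "const": true }
  }
}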
I am experimenting with Cloud Firestore security rules. Is it possible to filter document fields?
For example if you have a document
{
  "name": "John Doe",
  "email": "doe@example.com"
}
then some users aren't allowed to get the document with the email address. Their application requests the document with
firebase.firestore().doc('users/doe-uid')
and gets this document
{
  "name": "John Doe"
}
If yes, how?
I think it should be possible because the Cloud Firestore Security Rules Reference says in the first sentence (emphasis is mine):
Cloud Firestore Security Rules are used to determine who has read and write access to collections and documents stored in Cloud Firestore, as well as how documents are structured and what fields and values they contain.
However I couldn't find anything in the reference telling me how to filter out fields.
Firestore rules are not filters; they are server-side validation of document queries, meaning that you access (or not) the whole document, never particular fields.
The piece of documentation you mentioned means that you can do data validation on fields.
Here is a basic example of rules validating data on a write query (via request.resource.data):
match /users/{userId} {
  allow write: if request.resource.data.age is int;
}
Here is another basic example that uses an existing field to validate a read query (via resource.data):
match /articles/{articleId} {
  allow read: if resource.data.isPublished == true;
}
To filter out fields, you have to do it client-side, after the query.
Now, if you want to secure access to certain fields, you have to create another collection (look into subcollections) with a different set of rules, and make another query that will match these rules.
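For example, a minimal sketch of that split, where a hypothetical private subcollection holds the email and is only readable by the user themselves:

match /users/{userId} {
  // public profile document: name only
  allow read: if true;
  // private subcollection: email and other sensitive fields
  match /private/{docId} {
    allow read: if request.auth != null && request.auth.uid == userId;
  }
}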
Trying to generate JSON schema (http://jsonschema.net) from the syncthing (https://docs.syncthing.net/rest/system-connections-get.html) JSON below.
The problem is that the connection objects are keyed by their ID (e.g. YZJBJFX-RDB...), which the generator interprets as a type name.
Is it the JSON from syncthing that isn't standard, or is it an issue with the schema generator?
Do you have any suggestions for getting around this if schema generation is a requirement (i.e. no writing schemas manually)?
{
  "total": {
    "paused": false,
    "clientVersion": "",
    "at": "2015-11-07T17:29:47.691637262+01:00",
    "connected": false,
    "inBytesTotal": 1479,
    "type": "",
    "outBytesTotal": 1318,
    "address": ""
  },
  "connections": {
    "YZJBJFX-RDBL7WY-6ZGKJ2D-4MJB4E7-ZATSDUY-LD6Y3L3-MLFUYWE-AEMXJAC": {
      "connected": true,
      "inBytesTotal": 556,
      "paused": false,
      "at": "2015-11-07T17:29:47.691548971+01:00",
      "clientVersion": "v0.12.1",
      "address": "127.0.0.1:22002",
      "type": "TCP (Client)",
      "outBytesTotal": 550
    },
    "DOVII4U-SQEEESM-VZ2CVTC-CJM4YN5-QNV7DCU-5U3ASRL-YVFG6TH-W5DV5AA": {
      "outBytesTotal": 0,
      "type": "",
      "address": "",
      "at": "0001-01-01T00:00:00Z",
      "clientVersion": "",
      "paused": false,
      "inBytesTotal": 0,
      "connected": false
    },
    "UYGDMA4-TPHOFO5-2VQYDCC-7CWX7XW-INZINQT-LE4B42N-4JUZTSM-IWCSXA4": {
      "address": "",
      "type": "",
      "outBytesTotal": 0,
      "connected": false,
      "inBytesTotal": 0,
      "paused": false,
      "at": "0001-01-01T00:00:00Z",
      "clientVersion": ""
    }
  }
}
Any input is appreciated.
Is it the JSON from syncthing that isn't standard, or is it an issue with the schema generator?
There is nothing non-standard about this JSON. Neither is there any issue with the schema generation.
Unfortunately, defining a schema for what is effectively dynamic content is difficult. This will always be the case because the job of schemas is to describe static data structures.
That said, it may be possible to do this using the patternProperties field in JSON schema. This post is effectively asking the same question as yours.
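For example, a hand-written sketch for the document above (the key pattern of eight dash-separated groups of seven characters is my assumption, based on the sample IDs):

{
  "type": "object",
  "properties": {
    "total": { "$ref": "#/definitions/connection" },
    "connections": {
      "type": "object",
      "patternProperties": {
        "^([A-Z0-9]{7}-){7}[A-Z0-9]{7}$": { "$ref": "#/definitions/connection" }
      },
      "additionalProperties": false
    }
  },
  "definitions": {
    "connection": {
      "type": "object",
      "properties": {
        "connected": { "type": "boolean" },
        "paused": { "type": "boolean" },
        "inBytesTotal": { "type": "integer" },
        "outBytesTotal": { "type": "integer" },
        "at": { "type": "string" },
        "clientVersion": { "type": "string" },
        "address": { "type": "string" },
        "type": { "type": "string" }
      }
    }
  }
}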
In a webapp, I use Hibernate's @SQLDelete annotation in order to "soft-delete" entities (i.e. set a status column to a value that denotes their "deleted" status instead of actually deleting them from the table).
The entity code looks like this:
@Entity
@SQLDelete(sql="update pizza set status = 2 where id = ?")
public class Pizza { ... }
Now, my problem is that the web application doesn't connect to the DB as the owner of the schema the tables belong to. E.g. the schema (in Oracle) is called pizza, and the DB user the webapp connects with is pizza_webapp. This is for security reasons: the pizza_webapp user only has select/update/delete rights and can't modify the structure of the DB itself. I don't have any choice here; it is a policy that I can't change.
I specify the name of the schema where the tables actually are with the hibernate.default_schema parameter in the Hibernate config:
<property name="hibernate.default_schema">pizza</property>
This works fine for everything that goes through mapped entities: Hibernate knows how to put the schema name in front of the table name in the SQL it generates. But not for raw SQL, and @SQLDelete contains raw SQL, which is executed as-is and results in a "table or view not found" error.
So far we worked around the issue by adding synonyms to the pizza_webapp schema, pointing to the pizza schema. It works, but it is not fun to maintain across multiple DBs when entities are added.
So, is it possible to make @SQLDelete take the hibernate.default_schema parameter into account?
(NB: obviously I don't want to hard-code the schema name in the SQL either...)
Yes, it is possible:
@SQLDelete(sql="update {h-schema}pizza set status = 2 where id = ?")
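Hibernate substitutes the configured hibernate.default_schema value (separator included) for the {h-schema} placeholder when it generates the statement, so nothing is hard-coded in the mapping.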
I could not find any Hibernate solution to this problem. However, I found a workaround based on an Oracle feature. I do this to my session before using it:
// Set the default schema at DB session level for raw SQL queries (see @SQLDelete).
// Work is org.hibernate.jdbc.Work; Connection and SQLException come from java.sql.
HibernateUtil.currentSession().doWork(new Work() {
    @Override
    public void execute(Connection connection) throws SQLException {
        connection.createStatement().execute(
            "ALTER SESSION SET CURRENT_SCHEMA=" + HibernateUtil.getDefaultSchema());
    }
});
It works fine, but unfortunately only on Oracle (which is fine for us, for now at least). Maybe there are different ways to achieve the same thing on other RDBMS as well?
Edit: the getDefaultSchema() method in my HibernateUtil class does this to get the default schema from Hibernate's config:
defaultSchema = config.getProperty("hibernate.default_schema");
where config is my org.hibernate.cfg.Configuration object.
I'm using NHibernate to connect to an ERP database on our DB2 server. We have a test schema and a production schema. Both schemas have the same table structure underneath. For testing, I would like to use the same mapping classes but point NHibernate to the test environment when needed and then back when in production. Please keep in mind that we have many production schemas and each production schema has an equivalent test schema.
I know that my XML mapping file has a schema property inside it, but since it's in XML, it's not like I can change it via a compiler directive or change the schema property based on a config file.
Any ideas?
Thank You.
No need to specify schema in the mappings: there's a SessionFactory-level setting called default_schema. However, you can't change it at runtime, as NHibernate pregenerates and/or caches SQL queries, including the schema part.
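For example, in hibernate.cfg.xml (a minimal sketch; the schema name is hypothetical):

<session-factory>
  ...
  <property name="default_schema">MY_TEST_SCHEMA</property>
  ...
</session-factory>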
To get what I wanted, I had to use NHibernate.Mapping.Attributes.
[NHibernate.Mapping.Attributes.Class(0, Table = "MyTable", Schema = MySchemaConfiguration.MySchema)]
In this way, I can create a class like MySchemaConfiguration and have a property inside of it like MySchema. I can either set the property's value via a compiler directive or get it through a configuration file. This way I only have to change the schema in one place and it will be reflected throughout all of the other mappings.
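A minimal sketch of that pattern (the class and property names follow the answer above; the DEBUG switch is just one way to flip it):

public static class MySchemaConfiguration
{
#if DEBUG
    // Used when the project is built in Debug configuration.
    public const string MySchema = "TEST_SCHEMA";
#else
    public const string MySchema = "PROD_SCHEMA";
#endif
}

Note that attribute arguments must be compile-time constants, which is why a const works here and a value read at runtime from a configuration file would not.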
I have found the following link that actually fixes the problem:
How to set database schema for namespace in nhibernate
The sample code could be:
cfg.ClassMappings.Where(cm => cm.Table.Schema == "SchemaName")
    .ToList()
    .ForEach(cm => cm.Table.Schema = "AnotherSchemaName");
This should happen before you initialize your own data service class.
@Brian, I tried NHibernate.Mapping.Attributes, but the attribute value you put inside must be a compile-time constant, so it cannot be updated at runtime. How could you set the property's value from a configuration file?
The code to fix the HBM XML resources:
// This is how you get all the hbm.xml resource names from an assembly.
private static IList<string> GetAllHbmXmlResourceNames(Assembly assembly)
{
    var result = new List<string>();
    foreach (var resource in assembly.GetManifestResourceNames())
    {
        if (resource.EndsWith(".hbm.xml"))
        {
            result.Add(resource);
        }
    }
    return result;
}

// This is how you get the stream for each resource.
var stream = assembly.GetManifestResourceStream(name);

// What you need to do next is to fix the schema name in that stream:
private Stream FixSchemaNameInStream(Stream stream)
{
    StreamReader strStream = new StreamReader(stream);
    string strCfg = strStream.ReadToEnd();
    strCfg = strCfg.Replace(
        string.Format("schema=\"{0}\"", originalSchemaName),
        string.Format("schema=\"{0}\"", newSchemaName));
    return new MemoryStream(Encoding.ASCII.GetBytes(strCfg));
}
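Putting the pieces together, a sketch of how the fixed streams could be fed back into NHibernate (AddInputStream is the standard NHibernate.Cfg.Configuration method; the helpers and the assembly variable are the ones from the snippets above):

var cfg = new NHibernate.Cfg.Configuration();
foreach (var name in GetAllHbmXmlResourceNames(assembly))
{
    using (var raw = assembly.GetManifestResourceStream(name))
    using (var fixedStream = FixSchemaNameInStream(raw))
    {
        // Feed the rewritten mapping to NHibernate instead of the embedded resource.
        cfg.AddInputStream(fixedStream);
    }
}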
Take a look at SchemaUpdate.
http://blogs.hibernatingrhinos.com/nhibernate/archive/2008/04/28/create-and-update-database-schema.aspx