Hi, I am learning JSON Schema.
I have a question about the format keyword. According to
http://json-schema.org/latest/json-schema-validation.html#rfc.section.7.3
there are some predefined format types, and we can also add custom formats. My question is: how do I add those custom formats and validate them using a JSON Schema validation tool?
Thanks,
Lakshmanan
The way custom formats are added to a schema validator is always implementation-specific. So if you use a PHP validator then you have to write a PHP class / function; if you use a Java validator then you implement a Java class. I can hardly imagine any way to write an implementation-independent custom validator. A schema validator may even decide not to support custom formats at all (although popular implementations do).
I personally don't recommend using custom formats. They are always implementation-dependent, so if you want to consume your schema from different languages then most probably you will have to write the format validator multiple times.
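To make this concrete: with the Java everit-org/json-schema library, registering a custom format looks roughly like the sketch below. The "even-length" format is invented for illustration, and you should check the library's documentation for the exact builder API.

import java.util.Optional;
import org.everit.json.schema.FormatValidator;
import org.everit.json.schema.Schema;
import org.everit.json.schema.loader.SchemaLoader;
import org.json.JSONObject;

public class CustomFormatExample {
    public static void main(String[] args) {
        // Custom validator for a made-up "even-length" format.
        FormatValidator evenLength = new FormatValidator() {
            @Override
            public Optional<String> validate(String subject) {
                return subject.length() % 2 == 0
                        ? Optional.empty()
                        : Optional.of("string length must be even");
            }

            @Override
            public String formatName() {
                return "even-length";
            }
        };

        JSONObject rawSchema = new JSONObject(
                "{\"type\":\"string\",\"format\":\"even-length\"}");
        Schema schema = SchemaLoader.builder()
                .schemaJson(rawSchema)
                .addFormatValidator(evenLength)
                .build()
                .load()
                .build();

        schema.validate("abcd"); // passes; "abc" would throw a ValidationException
    }
}

As noted above, a validator in another language would need its own equivalent of this registration.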
In my WebAPI, I have a change-password method. It takes a parameter that is a class containing two properties: "OldPassword" and "NewPassword".
Obviously, I do not want to pass those through query strings; I want them passed in the body.
As soon as I use the FromBody attribute, the Swagger UI no longer offers a nice form with two textboxes; it turns into a single JSON blob.
I'd like to mix those behaviors: provide a form, but format the input into JSON that would match the schema.
I tried playing around with OperationFilters but couldn't achieve anything similar.
I'm sure this is pretty common, but my Google and Stack Overflow searches haven't returned anything. Perhaps I'm not searching for the proper keywords. Unsure.
Decided to stick with the JSON blob in the body, as that is the decision that makes the most sense design-wise.
I got it working with FromHeader, but considering that this is not the way this API should be consumed, we preferred to stick with designing for the actual use case and not around Swagger's features and limitations.
Thanks for the answers!
Before Hibernate Search 5.2 there was no need to explicitly use a #Facet annotation. In 5.2 it became necessary in order to use Lucene’s native faceting API.
I'm using Hibernate Search on external classes that cannot be annotated. Is there a way to define this "facet" programmatically?
For the mapping configuration there is no issue, because the SearchMapping provides a complete programmatic alternative to the #Entity, #Indexed, and #Field annotations. But within this API, and in particular in the EntityMapping class, there is no way to declare that a field will be used in a facet query; there seems to be no alternative other than annotating the field with #Facet.
2018 update:
I've updated to Hibernate Search 5.6.4 and it is working with this kind of mapping (the surrounding SearchMapping boilerplate is sketched here for completeness; the entity class name is illustrative):

SearchMapping mapping = new SearchMapping();
mapping.entity(MyEntity.class).indexed()
    .property("businessProcess", ElementType.METHOD)
        .field()
            .analyze(Analyze.NO)
            .store(Store.YES)
            .facet()
                .name("businessProcess")
                .encoding(FacetEncodingType.STRING);
The workaround you referenced does not configure faceting in Hibernate Search at all (no #Facet annotation, nor its programmatic equivalent). In recent versions of Hibernate Search this will not work, because we had to require this metadata in order to fix other bugs.
Using custom facet formatting is very much uncharted territory, and admittedly much harder than it should be. The main reason is that facets were originally, for a reason I cannot fathom, designed to work directly on the entity property instead of the field value. Thus, facets ignore the field bridge. We're working on cleaning up the faceting support in Search 6, but this is one of many works in progress and will take some time.
In the meantime, your easiest option will probably be to just use the built-in formatting.
EDIT: Also, for dates you might want to use numeric facet formatting, so as to perform range faceting (from the 1st of May to the 30th of May). In that case the names of facets are defined at query time, so the built-in formatting should not matter.
And there's actually one easy solution to customize the formatting of your string-encoded facets, though I didn't mention it since you are using programmatic mapping and probably do not want to change your model: you could add read-only properties returning the exact value you want in your facet (getYear, getMonth, ...) and add fields with faceting on those properties, as in the sketch below.
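A minimal sketch of that last idea, assuming a date property on the entity (the class and property names are illustrative, not from the original question):

import java.lang.annotation.ElementType;
import java.util.Calendar;
import java.util.Date;

import org.hibernate.search.annotations.Analyze;
import org.hibernate.search.annotations.FacetEncodingType;
import org.hibernate.search.annotations.Store;
import org.hibernate.search.cfg.SearchMapping;

public class FacetSketch {

    public static class Order {
        private Date creationDate;

        // Read-only derived property, exposed only so it can be indexed and faceted.
        public String getCreationYear() {
            Calendar cal = Calendar.getInstance();
            cal.setTime(creationDate);
            return String.valueOf(cal.get(Calendar.YEAR));
        }
    }

    // Programmatic mapping that declares a facet on the derived property.
    public static SearchMapping buildMapping() {
        SearchMapping mapping = new SearchMapping();
        mapping.entity(Order.class).indexed()
            .property("creationYear", ElementType.METHOD)
                .field()
                    .analyze(Analyze.NO)
                    .store(Store.YES)
                    .facet()
                        .name("creationYear")
                        .encoding(FacetEncodingType.STRING);
        return mapping;
    }
}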
Is there any best practice on when to use DataMapper vs. Groovy transformers, etc., when performing transformations or mappings in Mule?
For example, I need to transform XML to JSON. I can do this nicely with Groovy using the XML builders and JSON builders, it's open source, etc. It requires me to write some code, though.
Whereas DataMapper is EE-only and seems a lot more opaque, being a visual drag-and-drop tool.
Are there any downsides to not using DataMapper?
As you said, DataMapper is a graphical mapping tool. As pros:
Easier to maintain
Users don't need any special programming skills
Support for DataSense in Studio
But as you said, there is nothing that you cannot do with either Groovy or a Java component.
Yes, DataMapper is for the Enterprise Edition only, but it has the following advantages:
1. Extraction and loading of flat and structured data formats
2. Filtering, extraction and transformation of input data using XPath and powerful scripting
3. Augmenting data with input parameters and lookups from other data sources
4. Live design-time previews of transformation results
5. High-performance, scalable data mapping operations
Full reference: https://developer.mulesoft.com/docs/display/current/Datamapper+User+Guide+and+Reference
The only issue I see with DataMapper is that you need to maintain the mapping files.
Community Edition users, however, need to find other options for transforming and mapping.
In that case, as you said, they might use custom Java classes, the Groovy component, the expression component, etc.; a small Java example follows.
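For instance, an XML-to-JSON transformer usable from a plain Java component could look like the sketch below. It leans on the org.json helper library, and the class name is made up for illustration:

import org.json.JSONObject;
import org.json.XML;

// A plain Java helper that a Mule Java component (or a Groovy script) could call.
public class XmlToJsonTransformer {

    public static String transform(String xmlPayload) {
        // org.json's XML helper converts an XML document into a JSONObject.
        JSONObject json = XML.toJSONObject(xmlPayload);
        return json.toString(2); // pretty-print with a 2-space indent
    }

    public static void main(String[] args) {
        String xml = "<user><name>foo</name><age>30</age></user>";
        System.out.println(transform(xml));
    }
}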
I am building a REST API that uses a filter parameter to control search results. E.g., one could search for a user by calling:
GET /users/?filter=name%3Dfoo
Now, my API should allow many different filter operators: numeric operators such as equals, greater than and less than; string operators like contains, begins with or ends with; and date operators such as year of or timediff. Moreover, AND and OR combinations should be possible.
Basically, I want to support a subset of the underlying MySQL database operators.
I found a lot of different implementations (two good examples are Google Analytics and LongJump) that seem to use custom syntax.
Looking at my requirements, I would probably design a custom syntax pretty similar to the MySQL operator syntax.
However, I was wondering if there are any best practices established that I should follow and whether I should consider anything else. Thanks!
Use an already existing query language; don't try to reinvent the wheel! With REST this is a complicated and not fully solved issue. There are some REST constraints your application must fulfill:
uniform interface / hypermedia as the engine of application state:
You have to send hypermedia responses to your clients, and they have to follow the hyperlinks given in those responses instead of building the requests on their own. This way you can decouple the clients from the structure of the URI.
uniform interface / self-descriptive messages:
You have to send messages annotated with semantics, so you can decouple the clients from the data structure. The best solution for this is RDF with, for example, linked open data vocabs. If you don't want to use RDF, the second best solution is to use a vendor-specific MIME type; your messages will then be self-descriptive, but the clients need to know how to parse your custom MIME type.
To describe simple search links, you can use URI templates; for example, GET /users/{?name} expects a name parameter in the query string. You can use the hydra:IRITemplateMapping from the Hydra vocab to add semantics to parameters like name.
Describing ad-hoc queries is a hard task. You have to describe somehow what your query can contain:
You can choose a URI query language and stick with URI templates and probably Hydra annotation. There are many already existing URI query languages, like HTSQL, the OData query syntax (people don't like that one), etc...
You can choose an existing query language and send it in a single URI param. This can be anything you want, for example SQL, SPARQL, etc... You have to teach your client to generate that param. You can create your own vocab to describe the constraints of the actual query. If you don't need complicated things, this should not be a problem. I don't know of any existing query-structure-describing vocabs, but I have never looked for them...
You can choose an existing query language and send it in the body of a SEARCH request. Afaik SEARCH is not cached or supported by recent HTTP clients; it was defined by WebDAV. You can describe your query with the proper MIME type, and you can use the same vocab as in the previous solution.
You can use an RDF query solution, for example a SPARQL endpoint, or triple pattern fragments, etc... Then your queries will contain the semantic metadata, not your link descriptions. With SPARQL you don't necessarily need a triple store; you can translate the queries on the server side to SQL, or whatever you use. You can probably use SPIN to describe query constraints and query templates, but that is new to me too. There might be other solutions to describe SPARQL query structures...
So to summarize: if you want a real REST solution, you have to describe to your clients how they can construct the queries and what parameters and logical operators they can use. Without query descriptions they won't be able to generate, for example, an HTML form for the user. If you don't want a REST solution, then pick a query language, write a builder on the client, write a parser on the server, and that's all.
The Open Data Protocol (OData)
You can check BreezeJS too, and see how this protocol is implemented for Node.js + MongoDB with the breeze-mongodb module, and for a .NET project using Web API and Entity Framework with the Breeze.ContextProvider DLL.
By embracing a set of common, accepted delimiters, equality comparison can be implemented in a straightforward fashion. Setting the value of the filter query-string parameter to a string using those delimiters creates a list of name/value pairs which can be parsed easily on the server side and used to enhance database queries as needed. You can use the delimiters of your choice, say "|" to separate individual filter phrases for OR, "&" to separate individual filter phrases for AND, and a double colon ("::") to separate names from values. This provides a unique-enough set of delimiters to support the majority of use cases and creates a user-readable query-string parameter.
A simple example will serve to clarify the technique. Suppose we want to request users with the name "Todd" who live in "Denver" and have the title of "Grand Poobah". The request URI, complete with query string, might look like this:
GET http://www.example.com/users?filter="name::todd&city::denver&title::grand poobah"
The double-colon delimiter ("::") separates the property name from the comparison value, enabling the comparison value to contain spaces, making it easier to parse the delimiter from the value on the server.
Note that the property names in the name/value pairs match the names of the properties that would be returned by the service in the payload.
Case sensitivity is certainly up for debate on a case-by-case basis, but in general filtering works best when case is ignored. You can also offer wildcards as needed, using the asterisk ("*") as the value portion of the name/value pair.
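A minimal server-side sketch of parsing that scheme, assuming "&" between AND phrases and "::" between names and values (the class and method names are made up for illustration):

import java.util.LinkedHashMap;
import java.util.Map;

public class FilterParser {

    // Parses e.g. "name::todd&city::denver&title::grand poobah"
    // into {name=todd, city=denver, title=grand poobah}.
    public static Map<String, String> parse(String filter) {
        Map<String, String> criteria = new LinkedHashMap<>();
        for (String phrase : filter.split("&")) {
            // Split on the first "::" only, so the value part stays intact.
            String[] pair = phrase.split("::", 2);
            if (pair.length == 2) {
                criteria.put(pair[0].trim(), pair[1].trim());
            }
        }
        return criteria;
    }

    public static void main(String[] args) {
        System.out.println(parse("name::todd&city::denver&title::grand poobah"));
    }
}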
For queries that require more than simple equality or wildcard comparisons, introducing operators becomes necessary. In this case the operators themselves should be part of the value and parsed on the server side, rather than part of the property name. When complex query-language-style functionality is needed, consider introducing the query concepts from the Open Data Protocol (OData) Filter System Query Option specification (http://www.odata.org/documentation/odata-version-4-0/).
There seem to be a lot of standards (like OData), but many are quite complicated in that they introduce new syntax.
For simple multi-filtering, the following format avoids polluting the parameter namespace while still standing on top of existing web technology:
GET /users?filter[name]=John&filter[title]=Manager
It's easily readable, and on the backend, languages like PHP will receive it as an array of filters to apply; a sketch of the same parsing in a language without that convenience follows.
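For example, a server without PHP's automatic array handling could collect the bracketed parameters roughly like this (pure JDK; the class name is made up, and URL decoding is omitted for brevity):

import java.util.LinkedHashMap;
import java.util.Map;
import java.util.regex.Matcher;
import java.util.regex.Pattern;

public class BracketFilterParser {

    private static final Pattern FILTER_PARAM =
            Pattern.compile("filter\\[(\\w+)\\]=([^&]*)");

    // Parses e.g. "filter[name]=John&filter[title]=Manager"
    // into {name=John, title=Manager}.
    public static Map<String, String> parse(String queryString) {
        Map<String, String> filters = new LinkedHashMap<>();
        Matcher m = FILTER_PARAM.matcher(queryString);
        while (m.find()) {
            filters.put(m.group(1), m.group(2));
        }
        return filters;
    }

    public static void main(String[] args) {
        System.out.println(parse("filter[name]=John&filter[title]=Manager"));
    }
}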
A possible standard would be SCIM, which is adopted by some commercial products, but it's not distinguished by brevity. For a pet project I used this:
= equal
! not equal
* like
< smaller
> greater
& bitwise and
| bitwise or
^ bitwise xor
~ in comma separated value list
Examples
So GET /user?name=*An* would get all users whose name starts with An (the leading * selects the like operator, and the remaining An* is the pattern), and GET /user?name=~Anna,Bertha would get those two users.
Not yet a standard but who knows...
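A minimal sketch of dispatching on such single-character operator prefixes when translating to SQL (a hypothetical helper, not part of any standard; real code should use parameterized queries instead of string concatenation):

public class OperatorParser {

    // Translates e.g. ("name", "*An*") into "name LIKE 'An%'"
    // and ("name", "~Anna,Bertha") into "name IN ('Anna','Bertha')".
    // The bitwise operators from the list above are omitted for brevity.
    public static String toSqlPredicate(String field, String rawValue) {
        char op = rawValue.charAt(0);
        String value = rawValue.substring(1);
        switch (op) {
            case '=': return field + " = '" + value + "'";
            case '!': return field + " <> '" + value + "'";
            case '*': return field + " LIKE '" + value.replace('*', '%') + "'";
            case '<': return field + " < '" + value + "'";
            case '>': return field + " > '" + value + "'";
            case '~': return field + " IN ('" + value.replace(",", "','") + "')";
            default:  return field + " = '" + rawValue + "'"; // no prefix: plain equality
        }
    }

    public static void main(String[] args) {
        System.out.println(toSqlPredicate("name", "*An*"));         // name LIKE 'An%'
        System.out.println(toSqlPredicate("name", "~Anna,Bertha")); // name IN ('Anna','Bertha')
    }
}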
I'm trying to build an interface for my tool to query a semantic/relational DB using C#.NET.
I now need a layer above the query layer to convert natural-language input to SQL/SPARQL. I have read through papers on NLIs; the process of building such a layer is too much of a load for my project, and besides, it's not the main target, it's an add-on.
I don't care whether the DLL supports guided input only or free text input with handling of mismatches; I just need a DLL to start from and add some code to.
Whether it should support both SQL and SPARQL doesn't really matter, because I can manage to convert one to the other in my project's domain (something local).
Any ideas on available DLLs?
You could try my Natural Language Engine for .NET. A sample project on Bitbucket and NuGet packages are available.
Using TokenPhrase in your rules can match any unmatched strings in the input, or quoted strings.
In the next revision that I'll be releasing soon, it also supports 'production rules' and operator precedence, which make it even easier to define your grammar.
Uniquely, it delivers strongly-typed .NET objects and executes your rules in a manner similar to ASP.NET MVC, with controllers, dependency injection and action methods. All rules are defined in code simply by writing a method that accepts the tokens you want to match. It includes tokens for common things like numbers, distances, times, weights and temporal expressions, including finite and infinite temporal expressions.
I use it in various applications to build SQL queries so it shouldn't be too hard to use it to create SPARQL queries.
Check out Kueri.me
It's not a DLL but rather a server exposing an API, so currently it doesn't have a wrapper specifically for C#. There's an API exposed via XML-RPC that you can integrate with any language.
It converts English to SQL and gives Google-style suggestions if you want to implement a search box (it supports several DB providers, like MySQL, MSSQL, etc.).