I am trying to use a NOT IN operator in OData.
So, does OData support it or not?
If yes, could you please give some examples?
I was going through the post below and trying, but could not succeed.
https://github.com/OData/odata.net/issues/1478
The comments in the linked GitHub issue https://github.com/OData/odata.net/issues/1478 actually demonstrate that there are variations of support for NOT IN, even though there is not a single operator for it.
The .NET implementation of OData v4 does support the IN operator for arrays of discretely defined values. It does not support a NOT-IN style operator, but it does have support for both NOT and IN.
NOT simply negates the expression result, but we can also replicate the NOT functionality by equating the expression result with false.
I can't find a publicly available sample to share, but my local projects do not have any trouble with the following queries:
IN
Although redundant, the second query demonstrates that the result of IN is a boolean expression and that you can chain that expression with further conditions.
~/OData/Facilities?$filter=Entity/Title in ('Abbotsford','test')
~/OData/Facilities?$filter=Entity/Title in ('Abbotsford','test') eq true
returns:
{
"#odata.context": "http://localhost:16486/odata/$metadata#Facilities(FacilityType,Id,Entity(Title))",
"value": [
{
"FacilityType": "None",
"Id": 1081,
"Entity": {
"Title": "Abbotsford"
}
},
{
"FacilityType": "None",
"Id": 2009,
"Entity": {
"Title": "test"
}
},
{
"FacilityType": "None",
"Id": 8046,
"Entity": {
"Title": "test"
}
}
]
}
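Because the result of in is a boolean expression, it can also be combined with additional conditions; for example (a hypothetical query reusing the FacilityType field from the sample response above):
~/OData/Facilities?$filter=Entity/Title in ('Abbotsford','test') and FacilityType eq 'None'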
NOT IN
~/OData/Facilities?$filter=not(Entity/Title in ('Abbotsford','test'))
~/OData/Facilities?$filter=Entity/Title in ('Abbotsford','test') eq false
{
"#odata.context": "http://localhost:16486/odata/$metadata#Facilities(FacilityType,Id,Entity(Title))",
"value": [
{
"FacilityType": "None",
"Id": 1065,
"Entity": {
"Title": "Sleepy Hollow Templestowe Lower"
}
},
{
"FacilityType": "None",
"Id": 13076,
"Entity": {
"Title": "Peach Meadows"
}
}
]
}
If your model has an addressable collection or array
I'm trying to write a TextMate grammar for a VS Code language extension. Take the following example:
(lang=css attribute2=something-else)
"""
.css-class {
background: gray;
}
"""
The (...) part is an "attributes" section, and the """ ... """ is a code section. I'm trying to highlight everything in the code section according to the lang attribute.
The problem is, they are two distinct sections where one might be present without the other in other parts of the file. For example, you can have attributes without the code block.
In the grammars section of package.json I have:
"embeddedLanguages": {
"meta.embedded.block.css": "css",
"meta.embedded.block.javascript": "javascript"
}
In the tmLanguage.json file I have both patterns in the repository property.
"attributes": {
"begin": "\\(",
"end": "\\)",
"captures": {
"0": {
"name": "punctuation.definition.annotation punctuation.section.group punctuation.section.parens"
}
},
"patterns": [
{
"begin": "[a-zA-Z_][a-zA-Z0-9_\\.-]*",
"beginCaptures": {
"0": {
"name": "entity.other.attribute-name"
}
},
"end": "(?=\\s*+[^=\\s])",
"patterns": [
{
"begin": "=",
"beginCaptures": {
"0": {
"name": "punctuation.separator.key-value"
}
},
"end": "(?<=[^\\s=])(?!\\s*=)|(?=/?>)",
"patterns": [
{
"match": "([^0-9-.\\s='\"][^\\s='\")]*)",
"name": "string.unquoted.html"
},
{
"match": "=",
"name": "invalid.illegal.unexpected-equals-sign"
},
{
"include": "#strings"
},
{
"include": "#number"
}
]
}
]
}
]
},
"fenced-code": {
"begin": "\\(.*lang=(css|javascript).*\\)\\s*(\"\"\")",
"beginCaptures": {
"1": {
"name": "string.quoted.triple"
}
},
"end": "\"\"\"",
"endCaptures": {
"0": {
"name": "string.quoted.triple"
}
},
"contentName": "meta.embedded.block.$1",
"patterns": [
{
"include": "source.css"
}
]
}
I have a third pattern, not shown, where I'm using these together by including them in its patterns array. They seem to be mutually exclusive, though. I can have the attributes, and I can have a code block if I start the pattern at """, but if I start the code pattern with \\(.*lang=(css|javascript).*\\)\\s*(\"\"\") to capture the lang attribute, the attributes stop getting highlighted.
Is this even possible? I've never worked with TextMate grammars outside of a VS Code theme and VS Code doesn't seem to have deep documentation on the more "advanced" (I guess) things like this.
I tried using VS Code's HTML grammar as a reference for how it embeds code, but I don't think the HTML grammar needs to swap syntax based on something on a previous line.
Update
The fenced-code pattern I have below allows the attributes pattern highlighting while also giving the code block a dynamic contentName property, i.e. "meta.embedded.block.$2" becomes meta.embedded.block.css when "css" is found in the attributes.
"fenced-code": {
"begin": "(\\(.*lang=(css|javascript).*\\))\\s*(\"\"\")",
"beginCaptures": {
"1": {
"patterns": [
{
"include": "#attributes"
}
]
},
"3": {
"name": "string.quoted.triple"
}
},
"end": "\"\"\"",
"endCaptures": {
"0": {
"name": "string.quoted.triple"
}
},
"contentName": "meta.embedded.block.$2",
"patterns": [
{
"include": "source.css"
},
{
"include": "source.js"
}
]
}
However, there are two things wrong so far.
It only works when the opening """ is on the same line as the attributes section.
fenced-code's patterns array doesn't seem to allow a dynamic include, i.e. "patterns": [{"include": "source.$2"}]. I'm not sure if including the different languages as I did above will work (see the sketch below).
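If dynamic includes really aren't supported (as the behavior above suggests), one possible workaround is to duplicate the fenced-code rule once per language, so each copy can hard-code both its contentName and its include. A sketch (the rule names are made up, and this is untested against the full grammar):
"fenced-code-css": {
  "begin": "(\\(.*lang=css.*\\))\\s*(\"\"\")",
  "beginCaptures": {
    "1": { "patterns": [{ "include": "#attributes" }] },
    "2": { "name": "string.quoted.triple" }
  },
  "end": "\"\"\"",
  "endCaptures": { "0": { "name": "string.quoted.triple" } },
  "contentName": "meta.embedded.block.css",
  "patterns": [{ "include": "source.css" }]
},
"fenced-code-javascript": {
  "begin": "(\\(.*lang=javascript.*\\))\\s*(\"\"\")",
  "beginCaptures": {
    "1": { "patterns": [{ "include": "#attributes" }] },
    "2": { "name": "string.quoted.triple" }
  },
  "end": "\"\"\"",
  "endCaptures": { "0": { "name": "string.quoted.triple" } },
  "contentName": "meta.embedded.block.javascript",
  "patterns": [{ "include": "source.js" }]
}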
I have been working on my own validator for JSON Schema and FINALLY understand most of how unevaluatedProperties is supposed to work... I think. That's one tricky piece! However, I really just want to confirm one thing. Given the following schema and JSON, what is the expected outcome? I have tried it with https://www.jsonschemavalidator.net and gotten an answer, but I was hoping for a more definitive one.
The focus is that the faz property is in fact being evaluated, but the keyword that disallows unevaluated properties comes from a deeply nested schema.
Thoughts?
Here is the schema...
{
"type": "object",
"properties": {
"foo": {
"type": "object",
"properties": {
"bar": {
"type": "string"
}
},
"unevaluatedProperties": false
}
},
"anyOf": [
{
"properties": {
"foo": {
"properties": {
"faz": {
"type": "string"
}
}
}
}
}
]
}
Here is the JSON...
{
"foo": {
"bar": "test",
"faz": "test"
}
}
That schema will successfully evaluate against the provided data. The unevaluatedProperties keyword will be aware of properties evaluated in subschemas of adjacent keywords, and it is evaluated after all other applicator keywords, so it will also see the annotation produced from within the anyOf subschema.
Evaluating this keyword is easy if you follow the specification literally -- it uses annotations to decide what to do. You just need to make sure that all keywords either produce annotations correctly or propagate annotations correctly that were produced by other keywords, and then all the information is available to generate the correct result.
The result produced by my implementation is:
{
"annotations" : [
{
"annotation" : [
"faz"
],
"instanceLocation" : "/foo",
"keywordLocation" : "/anyOf/0/properties/foo/properties"
},
{
"annotation" : [
"foo"
],
"instanceLocation" : "",
"keywordLocation" : "/anyOf/0/properties"
},
{
"annotation" : [
"bar"
],
"instanceLocation" : "/foo",
"keywordLocation" : "/properties/foo/properties"
},
{
"annotation" : [],
"instanceLocation" : "/foo",
"keywordLocation" : "/properties/foo/unevaluatedProperties"
},
{
"annotation" : [
"foo"
],
"instanceLocation" : "",
"keywordLocation" : "/properties"
}
],
"valid" : true
}
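For contrast (a hypothetical counter-example): adding a property that neither properties subschema evaluates makes the same schema fail, because no annotation covers it and unevaluatedProperties is false:
{
  "foo": {
    "bar": "test",
    "faz": "test",
    "qux": "test"
  }
}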
This is not an answer but a follow-up example which I feel is in the same vein, and which I feel guides us to the answer.
Here we have a single object being validated, but the unevaluated keyword resides in two different schemas, each part of a different set of "adjacent keyword subschemas" (from the core spec http://json-schema.org/draft/2020-12/json-schema-core.html#rfc.section.11).
How should this be resolved? If all annotations must be evaluated, then in what order do I evaluate: the oneOf first or the anyOf? According to the spec, an unevaluated keyword (properties or items) generates annotation results, which means that its result would affect any other unevaluated keyword.
http://json-schema.org/draft/2020-12/json-schema-core.html#unevaluatedProperties
"The annotation result of this keyword is the set of instance property names validated by this keyword's subschema."
This is as far as my understanding of the spec goes.
According to the two validators I am using, this fails.
Schema
{
"$schema": "https://json-schema.org/draft/2019-09/schema",
"type": "object",
"properties": {
"foo": {
"type": "string"
}
},
"oneOf": [
{
"properties": {
"faz": {
"type": "string"
}
},
"unevaluatedProperties": true
}
],
"anyOf": [
{
"properties": {
"bar": {
"type": "string"
}
},
"unevaluatedProperties": false
}
]
}
Data
{
"bar": "test",
"faz": "test",
}
I am trying to validate JSON based on the following input:
{
"elements":[
{
"..."
"isSelected": true
},
{
"..."
"isSelected": false
},
{
"..."
"isSelected": false
}
]
}
The input is valid if and only if exactly one element has "isSelected" set to true (and all the rest set to false); we can't have "isSelected": true more than once.
I tried with the following:
{
"$schema": "http://json-schema.org/draft-07/schema#",
"type": "object",
"definitions": {
"element":{
"type": "object",
"properties": {
"isSelected": {
"type": "boolean"
}
}
}
},
"properties": {
"elements": {
"type": "array",
"items": {
"$ref": "#/definitions/element"
},
"oneOf": [
{
"isSelected": true
}
]
}
}
}
Unfortunately, I don't think this is possible with JSON Schema draft 7. The newest draft (2019-09) features the maxContains keyword, which would be able to validate this, but tooling for that draft is sparse so far. I don't know what tooling you're using, but if you are able to use 2019-09, the schema for 'elements' would look something like:
{
"type": "array",
"contains": {
"properties": {
"isSelected": {"const": true}
}
},
"maxContains": 1
}
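For illustration (hypothetical data in the shape from the question), this 'elements' array would be valid, since exactly one element matches the contains subschema:
[
  { "isSelected": true },
  { "isSelected": false },
  { "isSelected": false }
]
An array in which two elements have "isSelected": true would fail maxContains.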
oneOf isn't what you're looking for here - it checks that exactly one of a set of schemas validates against the instance, not whether one of a set of instances validates against a schema.
This is not currently supported, but you may be interested in this proposal which intends to add a keyword to support key-based item uniqueness. It's not exactly the same, but I think it's related.
I have an Elasticsearch index with the below configuration:
{
"my_ind": {
"settings": {
"index": {
"mapping": {
"total_fields": {
"limit": "10000000"
}
},
"number_of_shards": "3",
"provided_name": "my_ind",
"creation_date": "1539773409246",
"analysis": {
"analyzer": {
"default": {
"filter": [
"lowercase"
],
"type": "custom",
"tokenizer": "whitespace"
}
}
},
"number_of_replicas": "1",
"uuid": "3wC7i-E_Q9mSDjnTN2gxrg",
"version": {
"created": "5061299"
}
}
}
}
}
I want to search for the below content with a plain search:
DL-1234170386456
This content is available in the below field:
DNumber
This field has the below mapping:
{
"DNumber": {
"type": "text",
"fields": {
"keyword": {
"type": "keyword",
"ignore_above": 256
}
}
}
}
I am trying to implement this in Java. I came across Elasticsearch analyzers and tokenizers, so I made use of the "whitespace" tokenizer.
I am trying to search with the below query:
{
"query": {
"multi_match": {
"query": "DL-1234170386456",
"fields": [
"_all"
],
"type": "best_fields",
"operator": "OR",
"analyzer": "default",
"slop": 0,
"prefix_length": 0,
"max_expansions": 50,
"lenient": false,
"zero_terms_query": "NONE",
"boost": 1
}
}
}
What am I doing wrong?
After doing a lot of research and trial & error, I found the answer!
Some basic but important points:
We need to specify analyzers and tokenizers when creating/indexing the index/data.
In the specified string, i.e. "DL-1234170386456", a special character ("-") is present, and Elasticsearch uses the Standard Analyzer by default.
The Standard Analyzer contains the Standard Tokenizer, which is based on the Unicode Text Segmentation algorithm.
Actual Problem:
Elasticsearch splits the string ("DL-1234170386456") into two different parts: "DL" and "1234170386456".
Solution:
We need to specify the Whitespace Analyzer, which contains the Whitespace Tokenizer.
It splits the text only where whitespace is encountered, so the string ("DL-1234170386456") is kept as-is by Elasticsearch and we are able to find it.
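A quick way to verify the analysis (a sketch using the _analyze API against the index name from the question):
POST my_ind/_analyze
{
  "analyzer": "default",
  "text": "DL-1234170386456"
}
With the whitespace-based default analyzer this returns the single token "dl-1234170386456", whereas the Standard Analyzer would produce the two tokens "dl" and "1234170386456".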
After reading the documentation, testing, and reading a lot of other questions here on Stack Overflow:
We have documents that have titles and descriptions in multiple languages. There are also tags that are translated to the same languages. There might be up to 30-40 different languages in the system, but probably only 3 or 4 translations for a single document.
This is the planned document structure:
{
"luck": {
"id": 10018,
"pub": 0,
"pr": 100002,
"loc": {
"lat": 42.7,
"lon": 84.2
},
"t": [
{
"lang": "en-analyzer",
"title": "Forest",
"desc": "A lot of trees.",
"tags": [
"Wood",
"Nature",
"Green Mouvement"
]
},
{
"lang": "fr-analyzer",
"title": "ForĂȘt",
"desc": "A grand nombre d'arbre.",
"tags": [
"Bois",
"Nature",
"Mouvement Vert"
]
}
],
"dates": [
"2014-01-01T20:00",
"2014-06-06T20:00",
"2014-08-08T20:00"
]
}
}
Possible queries are "arbre", "wood", "forest", or "nature", combined with a date and a geo_distance filter; furthermore, there will be some facets over the tags array (which obviously include counting).
We can produce any document structure that fits best for Elasticsearch (or for Lucene). It's crucial that each language is analyzed specifically, so we use "_analyzer" in order to distinguish the languages.
{
"luck": {
"properties": {
"id": {
"type": "long"
},
"pub": {
"type": "long"
},
"pr": {
"type": "long"
},
"loc": {
"type": "geo_point"
},
"t": {
"_analyzer": {
"path": "t.lang"
},
"properties": {
"lang": {
"type": "string"
},
"properties": {
"title": {
"type": "string"
},
"desc": {
"type": "string"
},
"tags": {
"type": "string"
}
}
}
}
}
}
}
A) Apparently, this idea does not work: after PUTting the mapping, we retrieve the same mapping (GET) and it seems to ignore the specific analyzers (a test with a top-level "_analyzer" worked fine). Does "_analyzer" work for sub-documents, and if yes, how should we refer to it? We also tested declaring the sub-document as "object" or "nested". How is multi-language document indexing supposed to work?
B) One possibility would be to put each language in its own document: in that case, how do we manage the id? In the end, both documents should refer to the same id. For example, if the user searches for "nature" (and we don't know whether the user intends to find "nature" in English or French), this document would appear twice in the result set, and the counting and paging would be very wrong (facet counting, too).
Any ideas?