Can a container inside a grouping be augmented in YANG?

The first module contains this:
module a {
  namespace "my:namespace";
  prefix a;

  grouping mygrouping {
    container firstcontainer {
      list modules {
        key "firstmodule";
        leaf firstmodule {
          type string;
        }
        leaf secondmodule {
          type string;
        }
      }
    }
  }
}
Now I want to augment it in the second module as follows:
module b {
  namespace "my:other:namespace";
  prefix b;

  import a {
    prefix a;
  }

  augment "/a:mygrouping/a:firstcontainer/a:modules" {
    leaf thirdmodule {
      type string;
    }
  }
}
This does not work, apparently because a grouping cannot be augmented. But there must be a way, since what I really want to extend is the list, not the grouping itself.
Is there another way to achieve the intended result?

Augmenting a grouping is only possible when 'using' that grouping.
For example:
uses a:mygrouping {
  augment "firstcontainer/modules" {
    leaf thirdmodule { ... }
  }
}
So this would need to be done every time the grouping is 'used'.
YANG doesn't provide a way to augment a grouping abstractly, you can only do it on a per 'uses' basis.

https://datatracker.ietf.org/doc/html/rfc7950#section-7.13
7.13. The "uses" Statement
The "uses" statement is used to reference a "grouping" definition.
It takes one argument, which is the name of the grouping.
The effect of a "uses" reference to a grouping is that the nodes
defined by the grouping are copied into the current schema tree and
are then updated according to the "refine" and "augment" statements.
The identifiers defined in the grouping are not bound to a namespace
until the contents of the grouping are added to the schema tree via a
"uses" statement that does not appear inside a "grouping" statement,
at which point they are bound to the namespace of the current module.

Related

Kotlin: get all components/members of a class (object) in their original order

I'm trying to get all memberProperties of a class. I'm using the code below:
import kotlin.reflect.full.memberProperties

for (props in SomeClass::class.memberProperties) {
    // do something with the props
}
I can get all the members, but the sequence ignores the original property order; instead it is automatically sorted alphabetically (a-z).
For example, if I declare my class as
class Foo(val b: String, val a: String, val c: String)
then when I print all of its properties, I get a, b, c rather than b, a, c.
Is there any way to iterate over all members of a class while maintaining their original declaration order?

Kotlin: get members of a data class by reflection in the order they have been defined

Assume the following simple example data class:
data class SomeDataClass(
var id: String,
var name: String,
var deleted: String
)
With the following code it is possible to get the properties (and set or get their values):
import kotlin.reflect.full.memberProperties
val properties = SomeDataClass::class.memberProperties
print(properties.map { it.name }) // prints: [deleted, id, name]
The map within the print statement returns a List with the names of the properties in alphabetical order. I need the list in the order they have been defined in the source code, in this case: [id, name, deleted].
It doesn't seem achievable purely through reflection. The only solution I could come up with is to use a helper class defining the order:
val SomeDataClass_Order = listOf("id", "name", "deleted")
This wouldn't be a problem for one or two classes, but it is for hundreds of data classes, with the largest having almost one hundred properties.
Any idea would be welcome. I do not need detailed code, rather hints (like parsing the source code, annotations, etc).
If all the properties are declared in the primary constructor, you could "cheat":
import kotlin.reflect.full.memberProperties
import kotlin.reflect.full.primaryConstructor

val propertyNames = SomeDataClass::class.primaryConstructor!!.parameters.map { it.name }
If you want the KPropertys:
val properties = propertyNames.map { name ->
    SomeDataClass::class.memberProperties.find { it.name == name }
}
This unfortunately doesn't find the properties that are declared in the class body.
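Not from the original answer, but combining the two snippets above into one helper is straightforward. A minimal sketch (the function name orderedProperties is my own) that returns the constructor-declared properties in declaration order and appends anything declared in the class body at the end:
import kotlin.reflect.KClass
import kotlin.reflect.KProperty1
import kotlin.reflect.full.memberProperties
import kotlin.reflect.full.primaryConstructor

// Constructor parameters come back in declaration order; properties declared
// in the class body are appended, since their order is not recoverable.
fun <T : Any> orderedProperties(klass: KClass<T>): List<KProperty1<T, *>> {
    val ctorNames = klass.primaryConstructor?.parameters?.mapNotNull { it.name }.orEmpty()
    val byName = klass.memberProperties.associateBy { it.name }
    return ctorNames.mapNotNull { byName[it] } + klass.memberProperties.filter { it.name !in ctorNames }
}

fun main() {
    println(orderedProperties(SomeDataClass::class).map { it.name }) // [id, name, deleted]
}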
I don't know about other platforms, but on Kotlin/JVM the order in which the backing fields for the properties are generated in the class file is not specified. A quick experiment finds that (at least for the version of kotlinc I'm using right now) the order is the same as the declaration order. So in theory you could read the class file of the data class and find the fields; see this related answer for getting the methods in order. Alternatively, you can use Java reflection, which also doesn't guarantee any order for the returned fields, but "just so happens" to return them in declaration order:
// not guaranteed, might break in the future
val fields = SomeDataClass::class.java.declaredFields.toList()
If you do want to get the properties declared inside the class body in order too, I would suggest that you don't depend on the order at all.
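The question also floats annotations as a possible hint. Here is a sketch of that route, assuming you are willing to annotate every property by hand; the Order annotation below is invented for illustration:
import kotlin.reflect.full.findAnnotation
import kotlin.reflect.full.memberProperties

// Hypothetical annotation carrying an explicit position; the index values
// must be maintained by hand for every property.
@Target(AnnotationTarget.PROPERTY)
annotation class Order(val index: Int)

data class SomeDataClass(
    @property:Order(0) var id: String,
    @property:Order(1) var name: String,
    @property:Order(2) var deleted: String
)

fun main() {
    val ordered = SomeDataClass::class.memberProperties
        .sortedBy { it.findAnnotation<Order>()?.index ?: Int.MAX_VALUE }
    println(ordered.map { it.name }) // prints: [id, name, deleted]
}
For hundreds of data classes this is still a lot of bookkeeping, so the primary-constructor trick above is usually the more practical option.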

Queries on schema and JSON data conversion

We already have the FlatBuffers library embedded in our software for simple schemas with JSON output generation.
Update: we generate the header files by running the flatc compiler against the schema and integrate these files into our code, along with the FlatBuffers library, for serialization/deserialization.
Now we also need to support the following schema tree:
namespace SampleNS;

/// user defined key value pairs to add custom metadata
/// key namespacing is the responsibility of the user
table KeyValue {
  key:string (key, required);
  value:string (required);
}

enum SchemaVersion:byte {
  V1,
  V2
}

table Sometable {
  value1:ubyte;
  value2:ushort (key);
}

table ComponentData {
  inputs: [Sometable];
  outputs: [Sometable];
}

table Node {
  name:string (key);
  /// IO definition
  data:ComponentData;
  /// nested child
  child:[Components];
}

table Components {
  type:ubyte;
  index:ubyte;
  nodes:[Node];
}

table GroupMasterData {
  schemaversion:SchemaVersion = V1;
  metainfo:[KeyValue];
  /// List of expected components in the system
  components:[Components];
}

root_type GroupMasterData;
As shown above, table Components is nested recursively. The intention is that components may have children with the same fields.
I have a few queries:
1. flatc didn't give me any error during schema compilation for such recursive nested tables. But is this supported when accessing the fields of such tables?
2. I tried to generate a sample JSON data file based on the above schema, but I could not see the field for schemaversion. I learned that FlatBuffers doesn't serialize default values, so I removed the default value I had assigned in the schema. But it still doesn't get written into the JSON data file. I also learned that defaults can be written out using the force_defaults option, but I don't know where this is to be put: in an attribute or elsewhere?
3. Can I create a struct of an enum field?
4. Is there any API to set FlatBuffers options that we otherwise pass as compiler arguments? If not, maybe we have to tinker with the FlatBuffers library code. Please advise.
Method 1:
In our serialization method, we do this:
flatbuffers::Parser* parser = new flatbuffers::Parser();
parser->opts.output_default_scalars_in_json = true;
Is this the right method or should I use any other API?
Yes, trees (and even DAG) structures are fully supported. The type definition is recursive, but the data will eventually have leaf nodes with an empty vector of children, presumably.
The integer value for V1 is 0, and that is also the default value for all fields with no explicit default assigned. Use --defaults-json to see this field when converting. Note that explicit versions in a schema are an anti-pattern, since schemas are naturally evolvable without breaking backwards compatibility.
You can put enum fields in structs, yes. Is that what you mean?

Can Raku's introspection list all the multi candidates across different files/Modules?

When a proto and multis are defined in the same module, Type.^lookup('method').candidates returns a list of all multi candidates. However, this appears not to work when the proto lives in a different file/module from the multis.
say Setty.^lookup('grab').candidates; # OUTPUT: ()
Is there any way to find the full list of multi candidates through Raku's introspection? Or is there no alternative to grepping through the source code? (I ask because having a full list of multi candidates applicable to a given proto would be helpful for documentation purposes.)
So far as multi methods go, it's not really to do with being in the same module or file at all. Consider these classes:
class Base {
proto method m(|) { * }
multi method m() { 1 }
}
class Derived is Base {
multi method m() { 2 }
}
Whenever we compose a class that contains multi methods, we need to attach them to a controlling proto. In the case of Base, this was explicitly written, so there's nothing to do other than to add the multi candidate to its candidate list. Had we not written a proto explicitly in Base, however, then one with an empty candidate list would have been generated for us, with the same end result.
The process I just described is a bit of a simplification of what really happens, however. The steps are:
1. See if this class has a proto already; if so, add the multi to it.
2. Otherwise, see if any base class has a proto; if so, clone it (in turn cloning the candidate list) and add the multi to that.
3. Otherwise, generate a new proto.
And step 2 is really the answer to your question. If we do:
say "Base:";
.raku.say for Base.^lookup('m').candidates;
say "Derived:";
.raku.say for Derived.^lookup('m').candidates;
Then the output is:
Base:
multi method m (Base: *%_) { #`(Method|82762064) ... }
Derived:
multi method m (Base: ) { #`(Method|82762064) ... }
multi method m (Derived: ) { #`(Method|82762208) ... }
That is, the candidate list in Base has one entry, and the candidate list in Derived has the entry cloned from Base as well as a new one.
Pretty much everything follows this principle: derived classes reference their base class (and the roles they do), but base classes (and roles) don't know about their descendants.

How to design generic filtering operators in the query string of an API?

I'm building a generic API with content and a schema that can be user-defined. I want to add filtering logic to API responses, so that users can query for specific objects they've stored in the API. For example, if a user is storing event objects, they could do things like filter on:
Array contains: Whether properties.categories contains Engineering
Greater than: Whether properties.created_at is older than 2016-10-02
Not equal: Whether properties.address.city is not Washington
Equal: Whether properties.name is Meetup
etc.
I'm trying to design filtering into the query string of API responses, and coming up with a few options, but I'm not sure which syntax for it is best...
1. Operator as Nested Key
/events?properties.name=Harry&properties.address.city.neq=Washington
This example uses just a nested key to specify the operators (like neq, as shown). This is nice in that it is very simple and easy to read.
But in cases where the properties of an event can be defined by the user, it runs into a potential clash between a property named address.city.neq used with a normal equals operator, and a property named address.city used with a not-equals operator.
Example: Stripe's API
2. Operator as Key Suffix
/events?properties.name=Harry&properties.address.city+neq=Washington
This example is similar to the first one, except it uses a + delimiter (which is equivalent to a space) for operations, instead of . so that there is no confusion, since keys in my domain can't contain spaces.
One downside is that it is slightly harder to read, although that's arguable since it might be construed as more clear. Another might be that it is slightly harder to parse, but not that much.
3. Operator as Value Prefix
/events?properties.name=Harry&properties.address.city=neq:Washington
This example is very similar to the previous one, except that it moves the operator syntax into the value of the parameter instead of the key. This has the benefit of eliminating a bit of the complexity in parsing the query string.
But this comes at the cost of no longer being able to differentiate between an equal operator checking for the literal string neq:Washington and a not equal operator checking for the string Washington.
Example: Sparkpay's API
4. Custom Filter Parameter
/events?filter=properties.name==Harry;properties.address.city!=Washington
This example uses a single top-level query parameter, filter, to namespace all of the filtering logic under. This is nice in that you never have to worry about the top-level namespace colliding. (Although in my case everything custom is nested under properties, so this isn't an issue in the first place.)
But this comes at a cost of having a harder query string to type out when you want to do basic equality filtering, which will probably result in having to check the documentation most of the time. And relying on symbols for the operators might lead to confusion for non-obvious operations like "near" or "within" or "contains".
Example: Google Analytics's API
5. Custom Verbose Filter Parameter
/events?filter=properties.name eq Harry; properties.address.city neq Washington
This example uses a similar top-level filter parameter as the previous one, but it spells out the operators with words instead of defining them with symbols, and has spaces between them. This might be slightly more readable.
But this comes at the cost of a longer URL, with a lot of spaces that will need to be encoded.
Example: OData's API
6. Object Filter Parameter
/events?filter[1][key]=properties.name&filter[1][eq]=Harry&filter[2][key]=properties.address.city&filter[2][neq]=Washington
This example also uses a top-level filter parameter, but instead of creating a completely custom syntax for it that mimics programming, it builds up an object definition of filters using a more standard query string syntax. This has the benefit of being slightly more "standard".
But it comes at the cost of being very verbose to type and hard to parse.
Example: Magento's API
Given all of those examples, or a different approach, which syntax is best? Ideally it would be easy to construct the query parameter, so that playing around in the URL bar is doable, but also not pose problems for future interoperability.
I'm leaning towards #2 since it seems like it is legible, but also doesn't have some of the downsides of other schemes.
I might not answer the "which one is best" question, but I can at least give you some insights and other examples to consider.
First, you are talking about a "generic API with content and a schema that can be user-defined".
That sounds a lot like Solr / Elasticsearch, which are both high-level wrappers over Apache Lucene, which basically indexes and aggregates documents.
Those two took totally different approaches to their REST APIs; I happened to work with both of them.
Elasticsearch:
They made an entire JSON-based query DSL, which currently looks like this:
GET /_search
{
  "query": {
    "bool": {
      "must": [
        { "match": { "title": "Search" }},
        { "match": { "content": "Elasticsearch" }}
      ],
      "filter": [
        { "term": { "status": "published" }},
        { "range": { "publish_date": { "gte": "2015-01-01" }}}
      ]
    }
  }
}
Taken from their current docs. I was surprised that you can actually put data in a GET...
It actually looks better now; in earlier versions it was much more hierarchical.
From my personal experience, this DSL was powerful but rather hard to learn and use fluently (especially in older versions). And to actually get some results you need more than just playing with the URL, starting with the fact that many clients don't even support a body in a GET request.
Solr:
They put everything into query params, which basically looks like this (taken from the docs):
q=*:*&fq={!cache=false cost=5}inStock:true&fq={!frange l=1 u=4 cache=false cost=50}sqrt(popularity)
Working with that was more straightforward. But that's just my personal taste.
Now about my experiences. We were implementing another layer above those two, and we took approach #4. Actually, I think #4 and #5 should be supported at the same time. Why? Because whatever you pick, people will complain, and since you will have your own "micro-DSL" anyway, you might as well support a few more aliases for your keywords.
Why not #2? Having a single filter param with the query inside gives you total control over the DSL. Half a year after we made our resource, we got a "simple" feature request: logical OR and parentheses. Query parameters are basically a list of AND operations, and a logical OR like city=London OR age>25 doesn't really fit there. Parentheses also introduce nesting into the DSL structure, which would be a problem in a flat query string.
Well, those were the problems we stumbled upon; your case might be different. But it is still worth considering what the future expectations for this API will be.
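To make that concrete, here is a hedged sketch (in Kotlin, all names invented for illustration) of the kind of expression tree a filter micro-DSL ends up needing once OR and parentheses arrive; flat key=value pairs have no natural place for this nesting:
// A tiny AST for filter expressions; parentheses in the textual DSL simply
// map to how the nodes are nested.
sealed interface Expr
data class Cmp(val field: String, val op: String, val value: String) : Expr
data class And(val left: Expr, val right: Expr) : Expr
data class Or(val left: Expr, val right: Expr) : Expr

// city=London OR (age>25 AND city=Paris)
val example: Expr = Or(
    Cmp("city", "eq", "London"),
    And(Cmp("age", "gt", "25"), Cmp("city", "eq", "Paris"))
)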
Matomo Analytics has another approach to segment filters, and its syntax seems more readable and intuitive, e.g.:
developer.matomo.org/api-reference/reporting-api-segmentation
Operator | Behavior | Example
== | Equals | &segment=countryCode==IN returns results where the country is India
!= | Not equals | &segment=actions!=1 returns results where the number of actions (page views, downloads, etc.) is not 1
<= | Less than or equal to | &segment=actions<=4 returns results where the number of actions is 4 or less
< | Less than | &segment=visitServerHour<12 returns results where the server time (hour) is before midday
=# | Contains | &segment=referrerName=#piwik returns results where the referrer name (website domain or search engine name) contains the word "piwik"
!# | Does not contain | &segment=referrerKeyword!#yourBrand returns results where the keyword used to access the website does not contain the word "yourBrand"
=^ | Starts with | &segment=referrerKeyword=^yourBrand returns results where the keyword used to access the website starts with "yourBrand" (requires at least Matomo 2.15.1)
=$ | Ends with | &segment=referrerKeyword=$yourBrand returns results where the keyword used to access the website ends with "yourBrand" (requires at least Matomo 2.15.1)
You can have a close look at how they parse the segment filter here: https://github.com/matomo-org/matomo/blob/4.x-dev/core/Segment/SegmentExpression.php
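For a sense of how simple such a parser can be, here is a minimal sketch in Kotlin (not Matomo's actual code) that splits a single segment condition using the operator table above; longer operators must be tried before their one-character prefixes:
// Try "<=" before "<" so the two-character operator is not misread.
val operators = listOf("==", "!=", "<=", "<", "=#", "!#", "=^", "=$")

fun parseCondition(segment: String): Triple<String, String, String>? {
    for (op in operators) {
        val i = segment.indexOf(op)
        if (i > 0) return Triple(segment.substring(0, i), op, segment.substring(i + op.length))
    }
    return null // no known operator found
}

fun main() {
    println(parseCondition("countryCode==IN"))    // (countryCode, ==, IN)
    println(parseCondition("visitServerHour<12")) // (visitServerHour, <, 12)
}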
#4
I like how the Google Analytics filter API looks: it is easy to use and easy to understand from a client's point of view.
They use a URL-encoded form, for example:
Equals: %3D%3D, as in filters=ga:timeOnPage%3D%3D10
Not equals: !%3D, as in filters=ga:timeOnPage!%3D10
Although you need to check the documentation, it still has its advantages. If you think the users can get accustomed to this, then go for it.
#2
Using operators as key suffixes also seems like a good idea (given your requirements).
However, I would recommend encoding the + sign so that it isn't parsed as a space. It might also be slightly harder to parse, as mentioned, but I think you can write a custom parser for this one; see the sketch below. I stumbled across this gist by jlong some time back; perhaps you'll find it useful when writing your parser.
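As a starting point, a parser for option #2 can be quite small. A hedged sketch in Kotlin (the Filter shape and the eq default are my own assumptions), relying on the fact that an unencoded + in a query string decodes to a space:
import java.net.URLDecoder

data class Filter(val key: String, val op: String, val value: String)

fun parseFilters(query: String): List<Filter> =
    query.split("&").filter { it.contains("=") }.map { pair ->
        val (rawKey, rawValue) = pair.split("=", limit = 2)
        val key = URLDecoder.decode(rawKey, "UTF-8")   // "+" decodes to " "
        val value = URLDecoder.decode(rawValue, "UTF-8")
        val parts = key.split(" ", limit = 2)
        Filter(parts[0], parts.getOrElse(1) { "eq" }, value) // default operator: eq
    }

fun main() {
    parseFilters("properties.name=Harry&properties.address.city+neq=Washington")
        .forEach(::println)
    // Filter(key=properties.name, op=eq, value=Harry)
    // Filter(key=properties.address.city, op=neq, value=Washington)
}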
You could also try Spring Expression Language (SpEL)
All you need to do is stick to the format described in the documentation, and the SpEL engine will take care of parsing the query and executing it on a given object. Similar to your requirement of filtering a list of objects, you could write the query as:
properties.address.city == 'Washington' and properties.name == 'Harry'
It supports all kind of relational and logical operators that you would need. The rest api could just take this query as the filter string and pass it to SpEL engine to run on an object.
Benefits: it's readable, easy to write, and execution is well taken care of.
So, the URL would look like:
/events?filter="properties.address.city == 'Washington' and properties.name == 'Harry'"
Sample code using org.springframework:spring-core:4.3.4.RELEASE:
The main function of interest:
/**
 * Filter the list of objects based on the given query
 *
 * @param query
 * @param objects
 * @return
 */
private static <T> List<T> filter(String query, List<T> objects) {
    ExpressionParser parser = new SpelExpressionParser();
    Expression exp = parser.parseExpression(query);
    return objects.stream()
            .filter(obj -> exp.getValue(obj, Boolean.class))
            .collect(Collectors.toList());
}
Complete example with helper classes and other non-interesting code:
import java.util.Arrays;
import java.util.List;
import java.util.stream.Collectors;

import org.springframework.expression.Expression;
import org.springframework.expression.ExpressionParser;
import org.springframework.expression.spel.standard.SpelExpressionParser;

public class SpELTest {

    public static void main(String[] args) {
        String query = "address.city == 'Washington' and name == 'Harry'";
        Event event1 = new Event(new Address("Washington"), "Harry");
        Event event2 = new Event(new Address("XYZ"), "Harry");
        List<Event> events = Arrays.asList(event1, event2);
        List<Event> filteredEvents = filter(query, events);
        System.out.println(filteredEvents.size()); // 1
    }

    /**
     * Filter the list of objects based on the query
     *
     * @param query
     * @param objects
     * @return
     */
    private static <T> List<T> filter(String query, List<T> objects) {
        ExpressionParser parser = new SpelExpressionParser();
        Expression exp = parser.parseExpression(query);
        return objects.stream()
                .filter(obj -> exp.getValue(obj, Boolean.class))
                .collect(Collectors.toList());
    }

    public static class Event {
        private Address address;
        private String name;

        public Event(Address address, String name) {
            this.address = address;
            this.name = name;
        }

        public Address getAddress() {
            return address;
        }

        public void setAddress(Address address) {
            this.address = address;
        }

        public String getName() {
            return name;
        }

        public void setName(String name) {
            this.name = name;
        }
    }

    public static class Address {
        private String city;

        public Address(String city) {
            this.city = city;
        }

        public String getCity() {
            return city;
        }

        public void setCity(String city) {
            this.city = city;
        }
    }
}
I decided to compare approaches #1/#2 (1) and #3 (2) and concluded that (1) is preferable (at least for a Java server side).
Assume some parameter a must equal 10 or 20. Our URL query must then look like ?a.eq=10&a.eq=20 for (1) and ?a=eq:10&a=eq:20 for (2). In Java, HttpServletRequest#getParameterMap() will return the following values: { a.eq: [10, 20] } for (1) and { a: [eq:10, eq:20] } for (2). Later we must convert the returned maps, for example, to a SQL where clause. And we should get where a = 10 or a = 20 for both (1) and (2). Briefly, it looks something like this:
1) ?a.eq=10&a.eq=20 -> { a.eq: [10, 20] } -> where a = 10 or a = 20
2) ?a=eq:10&a=eq:20 -> { a: [eq:10, eq:20] } -> where a = 10 or a = 20
So we get the following rule: when we pass two parameters with the same name through the URL query, we must use the OR operator in SQL.
But let's assume another case: parameter a must be greater than 10 and less than 20. Applying the rule above, we get the following conversion:
1) ?a.gt=10&a.lt=20 -> { a.gt: [10], a.lt: [20] } -> where a > 10 and a < 20
2) ?a=gt:10&a=lt:20 -> { a: [gt:10, lt:20] } -> where a > 10 or(?!) a < 20
As you can see, in (1) we have two parameters with different names: a.gt and a.lt. This means our SQL query will have the AND operator. But for (2) we still have the same name, and by the rule it would be converted to SQL with the OR operator!
This means that for (2), instead of using #getParameterMap(), we must directly parse the URL query and analyze repeated parameter names.
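A sketch of that conversion for style (1), in Kotlin, assuming a parameter map shaped like the one HttpServletRequest#getParameterMap() returns (the operator names and the use of string concatenation are for illustration only; real code would use bind parameters to avoid SQL injection):
// Repeated values under the same parameter name become OR alternatives;
// distinct parameter names are combined with AND.
val ops = mapOf("eq" to "=", "gt" to ">", "lt" to "<")

fun toWhereClause(params: Map<String, List<String>>): String =
    params.entries.joinToString(" and ") { (key, values) ->
        val field = key.substringBeforeLast(".")
        val op = ops[key.substringAfterLast(".")] ?: "="
        values.joinToString(" or ", prefix = "(", postfix = ")") { "$field $op $it" }
    }

fun main() {
    println(toWhereClause(mapOf("a.eq" to listOf("10", "20"))))
    // (a = 10 or a = 20)
    println(toWhereClause(mapOf("a.gt" to listOf("10"), "a.lt" to listOf("20"))))
    // (a > 10) and (a < 20)
}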
I know this is old school, but how about a sort of operator overloading?
It would make the query parsing a lot harder (and not standard CGI), but would resemble the contents of an SQL WHERE clause.
/events?properties.name=Harry&properties.address.city+neq=Washington
would become
/events?properties.name=='Harry'&&properties.address.city!='Washington'||properties.name=='Jack'&&properties.address.city!=('Paris','New Orleans')
Parentheses would start a list. Keeping strings in quotes would simplify parsing.
So the above query would be for events for Harrys not in Washington, or for Jacks not in Paris or New Orleans.
It would be a ton of work to implement... and the database optimization to run those queries would be a nightmare, but if you're looking for a simple and powerful query language, just imitate SQL :)
-k