How to augment an enumeration in YANG - schema

Is there a way to augment an enumeration from another module in YANG? In my case, there is no way to put all the values in the first module, where the enumeration was defined.
The enumeration is inside a grouping, as follows:
grouping mygrouping {
  ...
  container mycontainer {
    ...
    list mylist {
      leaf type {
        type enumeration {
          enum type1;
          enum type2;
          ...
          enum typen;
        }
      }
    }
  }
}
The grouping is used in the new module, but I couldn't augment the leaf to add new values to the enumeration.

In YANG, enumerations are meant for a well-known, static set of options.
For extensible options, you can use identityrefs.
Identities can be defined across multiple modules, and a leaf with an identityref type can then take any of the defined identities as its value.
Think of it as a decentralized enumeration. It is not really 'augmenting', but it does let other modules introduce new options for a value without changing the original module.
Of course, this assumes that you can actually change the original leaf that has the enumeration.
Definition of identities in YANG RFC: https://www.rfc-editor.org/rfc/rfc6020#section-7.16
Some reference on enumeration versus identities: https://www.rfc-editor.org/rfc/rfc8407#section-4.11.1
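As a rough illustration (module and identity names below are made up, mirroring the structure from the question), a base identity and its initial "values" can live in one module, and another module can add more without touching it:
module base-module {
  namespace "urn:example:base";
  prefix base;

  // The extensible "enumeration": a base identity plus its initial values.
  identity object-type {
    description "Base identity for the possible types.";
  }
  identity type1 {
    base object-type;
  }
  identity type2 {
    base object-type;
  }

  grouping mygrouping {
    container mycontainer {
      list mylist {
        key "type";
        leaf type {
          type identityref {
            base object-type;
          }
        }
      }
    }
  }
}

module other-module {
  namespace "urn:example:other";
  prefix other;

  import base-module {
    prefix base;
  }

  // A new "enum value", added without changing base-module.
  identity typen {
    base base:object-type;
  }
}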
Update: one option that is 'sort of' augmenting enums is to define the original enum in a typedef, and then extend it via a union:
typedef myenum {
  type enumeration {
    enum val1 { value 1; }
    enum val2 { value 2; }
    enum val3 { value 3; }
  }
}
...
leaf myleaf {
  type union {
    type myenum;
    type enumeration {
      enum val4 { value 4; }
      enum val5 { value 5; }
    }
  }
}
So in this case myleaf can take the values val1, val2, val3, val4, and val5, which means the original enum was indeed 'augmented'.
Of course this means it is not really a single enum, but a union of two enums, arranged so that their values don't intersect (something that unions do allow). Whether this is a simplification on the client and server side depends on the implementation.


OOP: Inheriting from immutable objects

Background
Suppose I have some set of fields which are related to each other, so I make a class to gather them. Let us call this class Base. There are also certain methods, common to all derived classes, which operate on these fields. Additionally, let us suppose we want Base and all its derived classes to be immutable.
In different contexts, these fields support additional operations, so I have different derived classes which inherit the fields and provide additional methods, depending on their context. Let us call these Derived1, Derived2, etc.
In certain scenarios, the program needs instances of a derived class, but the state of the fields must satisfy some condition. So I made a class RestrictedDerived1 which makes sure that the condition is satisfied (or changes the parameters to conform if it can) in the constructor before calling its base constructor, or throws an error otherwise.
Further, there are situations where I need even more conditions to be met, so I have SuperRestrictedDerived1. (Side note: given that some conditions are met, this class can more efficiently compute certain things, so it overrides some methods of Derived1.)
Problem
So far so good. The problem is that most of the methods of all these classes involve making another instance of some class in this hierarchy (not always the same as the one that the method was called on, but usually the same one) based on itself, but with some modifications which may involve somewhat complex computation (i.e. not just changing one field). For example one of the methods of Derived1 might look like:
public Derived1 foo(Base b) {
TypeA fieldA = // calculations using this and b
TypeB fieldB = // more calculations
// ... calculate all fields in this way
return new Derived1(fieldA, fieldB, /* ... */);
}
But then down the hierarchy RestrictedDerived1 needs this same function to return an instance of itself (obviously throwing an error if it can't be instantiated), so I'd need to override it like so:
@Override
public RestrictedDerived1 foo(Base b) {
return new RestrictedDerived1(super.foo(b));
}
This requires a copy constructor and unnecessarily allocates an intermediate object which is immediately discarded.
Possible solution
An alternative solution I thought of was to pass a function to each of these methods which constructs some type of Base, and then the functions would look like this:
// In Derived1
public Derived1 foo(Base b, BaseCreator creator) {
TypeA fieldA = // calculations using this and b
TypeB fieldB = // more calculations
// ... calculate all fields in this way
return creator.create(fieldA, fieldB, /* ... */);
}
public Derived1 foo(Base b) {
return foo(b, Derived1::create);
}
public static Derived1 create(TypeA fieldA, TypeB fieldB, /* ... */) {
return new Derived1(fieldA, fieldB, /* ... */);
}
// In RestrictedDerived1
@Override
public RestrictedDerived1 foo(Base b) {
return (RestrictedDerived1) foo(b, RestrictedDerived1::create);
}
public static RestrictedDerived1 create(TypeA fieldA, TypeB fieldB, /* ... */) {
return new RestrictedDerived1(fieldA, fieldB, /* ... */);
}
My question
This works, however it feels "clunky" to me. My question is, is there some design pattern or concept or alternative design that would facilitate my situation?
I tried to use generics, but that got messy quickly and didn't work well for more than one level of inheritance.
By the way, the actual classes these refer to are 3D points and vectors. I have a base class called Triple with doubles x, y, and z (and some functions which take a lambda, apply it to each coordinate, and construct a new Triple with the result). Then I have a derived class Point with some point-related functions, and another derived class Vector with its own functions. Then I have NonZeroVector (extends Vector), which is a vector that cannot be the zero vector (since other objects that need a vector sometimes need a guarantee that it's not the zero vector, and I don't want to have to check that everywhere). Further, I have NormalizedVector (extends NonZeroVector), which is guaranteed to have a length of 1 and will normalize itself upon construction.
MyType
This can be solved using a concept variously known as MyType, this type, or self type. The basic idea is that the MyType is the most-derived type at runtime. You can think of it as the dynamic type of this, but referred to statically (at "compile time").
Unfortunately, not many mainstream programming languages have MyTypes, but e.g. TypeScript does, and I was told Raku does as well.
In TypeScript, you could solve your problem by making the return type of foo the MyType (spelled this in TypeScript). It would look something like this:
class Base {
constructor(public readonly fieldA: number, public readonly fieldB: string) {}
foo(b: Base): this {
// this.constructor is typed as Function, so a cast is needed here
return new (this.constructor as any)(this.fieldA + b.fieldA, this.fieldB + b.fieldB);
}
}
class Derived1 extends Base {
constructor(fieldA: number, fieldB: string, protected readonly repeat: number) {
super(fieldA * repeat, fieldB.repeat(repeat));
}
override foo(b: Base): this {
return new (this.constructor as any)(
this.fieldA + b.fieldA, this.fieldB + b.fieldB, this.repeat
);
}
}
class RestrictedDerived1 extends Derived1 {
constructor(fieldA: number, fieldB: string, repeat: number) {
super(fieldA * repeat, fieldB.repeat(repeat), repeat);
if (repeat >= 3) {
throw new RangeError(`repeat must be less than 3 but is ${repeat}`)
}
}
}
const a = new RestrictedDerived1(23, 'Hello', 2);
const b = new Base(42, ' World');
const restrictedDerived = a.foo(b); // Inferred type is RestrictedDerived1
Slightly b0rken Playground link
Implicit factories
In a language with type classes or implicits (like Scala), you could solve your problem with implicit Factory objects. This would be similar to your second example with the Creators, but without the need to explicitly pass the creators around everywhere. Instead, they would be implicitly summoned by the language.
In fact, your requirement is very similar to one of the core requirements of the Scala Collections Framework, namely that you want operations like map, filter, and reduce to only be implemented once, but still preserve the type of the collection.
Most other Collections Frameworks are only able to achieve one of those goals: Java, C#, and Ruby, for example, only have one implementation for each operation, but they always return the same, most-generic type (Stream in Java, IEnumerable in C#, Array in Ruby). Smalltalk's Collections Framework is type-preserving, but has duplicated implementations for every operation. A non-duplicated, type-preserving Collections Framework is one of the holy grails of abstraction designers / language designers. (It's no coincidence that so many papers presenting novel approaches to OO use a refactoring of the Smalltalk Collection Framework as their working example.)
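For a rough flavor of the idea (this is not the actual Scala Collections machinery; the Factory trait and all names below are made up), an implicit factory per type could look something like this:
object FactoryDemo extends App {
  trait Factory[T] { def create(fieldA: Int, fieldB: String): T }

  class Base(val fieldA: Int, val fieldB: String)
  object Base {
    implicit val factory: Factory[Base] =
      new Factory[Base] { def create(a: Int, b: String) = new Base(a, b) }
  }

  class Derived1(fieldA: Int, fieldB: String) extends Base(fieldA, fieldB)
  object Derived1 {
    implicit val factory: Factory[Derived1] =
      new Factory[Derived1] { def create(a: Int, b: String) = new Derived1(a, b) }
  }

  // foo is written once; the compiler summons the matching factory for T,
  // so the result keeps the static type of the argument.
  def foo[T <: Base](a: T, b: Base)(implicit f: Factory[T]): T =
    f.create(a.fieldA + b.fieldA, a.fieldB + b.fieldB)

  val d: Derived1 = foo(new Derived1(1, "x"), new Base(2, "y"))
  println(d.fieldA + " " + d.fieldB)  // prints: 3 xy
}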
F-bounded Polymorphism
If you have neither MyType nor implicit builders available, you can use F-bounded Polymorphism.
The classic example is how Java's clone method should have been designed:
interface Cloneable<T extends Cloneable<T>> {
public T clone();
}
class Foo implements Cloneable<Foo> {
@Override
public Foo clone() {
return new Foo();
}
}
JDoodle example
However, this gets tedious very quickly for deeply-nested inheritance hierarchies. I tried to model it in Scala, but I gave up.

How to effectively map between Enum in Kotlin

I have two Enums,
enum class EnumKey
enum class EnumValue
and I already have a mapping from EnumKey to EnumValue.
fun EnumKey.toEnumValue(): EnumValue =
    when (this) {
        EnumKey.KEY1 -> EnumValue.VALUE1
        EnumKey.KEY2 -> EnumValue.VALUE2
        ...
        ...
        EnumKey.KEY1000 -> EnumValue.VALUE1000
    }
Now I need another mapping from EnumValue to EnumKey.
Is using a Map and its reversed map created by associateBy the best way to do it? Or are there any better ways?
Thanks!
If the enum values are somehow connected by name and the enums are as large as in your example, then I would advise using something like EnumValue.values().filter { it.name.contains(...) } or using a regex.
If they aren't and the connection needs to be stated explicitly then I would use an object (so it's a singleton like the enums themselves) and have this mapping hidden there:
object EnumsMapping {
private val mapping = mapOf(
EnumKey.A to EnumValue.X,
EnumKey.B to EnumValue.Y,
EnumKey.C to EnumValue.Z,
)
....
and next, have the associated values available by functions in this object like:
fun getEnumValue(enumKey: EnumKey) = mapping[enumKey]
and
fun getEnumKey(enumValue: EnumValue) = mapping.filterValues { it == enumValue }.keys.single()
If it's often used or the enums are huge, and you're troubled by the performance of filtering the values every time, then you can create the association in the second way, just like you've proposed:
private val mapping2 = mapping.toList()
.associate { it.second to it.first }
and then have the second function just access this new mapping.
Writing the extension functions like you've provided, but using this object, will result in cleaner code and having the raw association still in one place.
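Putting it together, a sketch of the whole object (reusing the placeholder entries A/B/C and X/Y/Z from above) could look like this:
object EnumsMapping {
    private val mapping = mapOf(
        EnumKey.A to EnumValue.X,
        EnumKey.B to EnumValue.Y,
        EnumKey.C to EnumValue.Z,
    )
    // Reverse map built once, so lookups in both directions are O(1).
    private val reverseMapping = mapping.entries.associate { (key, value) -> value to key }

    fun getEnumValue(enumKey: EnumKey): EnumValue = mapping.getValue(enumKey)
    fun getEnumKey(enumValue: EnumValue): EnumKey = reverseMapping.getValue(enumValue)
}

fun EnumKey.toEnumValue(): EnumValue = EnumsMapping.getEnumValue(this)
fun EnumValue.toEnumKey(): EnumKey = EnumsMapping.getEnumKey(this)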

Understanding polymorphism in Go

I guess I got stuck thinking about a polymorphism solution to the following problem:
Let's say I have a BaseTX struct with fields for a transaction. Now I have two special types of transactions: RewardTX struct and AllowanceTX struct.
RewardTX struct has at this moment only the composition of BaseTX struct.
AllowanceTX struct has a composition of BaseTX struct and an AddField.
I also have a function logicAndSaveTX(), which has some logic on fields from BaseTX but at the end serializes the whole object using json.Marshal() and saves the []byte somewhere.
type TXapi interface {
logicAndSaveTX()
}
type BaseTX struct {
Field1 string
Field2 string
}
type RewardTX struct {
BaseTX
}
type AllowanceTX struct {
BaseTX
AddField string
}
func (tx BaseTX) logicAndSaveTX() {
// logic on BaseTX fields; simplified:
tx.Field1 = "overwritten"
tx.Field2 = "logic done"
// here would be marshal to json and save; simplified to print object:
fmt.Printf("saved this object: %+v \n", tx)
}
func SaveTX(tx TXapi) {
tx.logicAndSaveTX()
}
func main() {
rewardTX := RewardTX{BaseTX : BaseTX{Field1: "Base info1", Field2: "Base info2"}}
SaveTX(rewardTX) // should print rewardTX with fields from BaseTX
allowanceTX := AllowanceTX{BaseTX : BaseTX{Field1: "Base info1", Field2: "Base info2"}, AddField: "additional field"}
SaveTX(allowanceTX) // would like to print allowanceTX with fields from BaseTX + AdditionalField >>> instead only printing fields from BaseTX
}
https://play.golang.org/p/0Vu_YXktRIk
I am trying to figure out how to implement the structures and the function so that it operates on both kinds of transactions but still serializes both structures properly. My problem is that the AddField is not being seen in my current implementation.
Maybe I have some brain failure here; I would really like to implement this the "proper Go way". :)
Go is not object-oriented. The only form of polymorphism in Go is interfaces.
Coming from other, object-oriented languages can be difficult, because you have to get rid of a lot of ideas you might try to carry over - things like, for example, "base" classes/types. Just remove "base" from your design thinking; you're trying to turn composition into inheritance, and that's only going to get you into trouble.
In this case, maybe you have a legitimate case for composition here; you have some common shared fields used by multiple types, but it's not a "base" type. It's maybe "metadata" or something - I can't say what to call it given that your example is pretty abstract, but you get the idea.
So maybe you have:
type TXapi interface {
logicAndSaveTX()
}
type Metadata struct {
Field1 string
Field2 string
}
type RewardTX struct {
Metadata
}
func (tx RewardTX) logicAndSaveTX() {
// logic on BaseTX fields; simplified:
tx.Field1 = "overwritten"
tx.Field2 = "logic done"
// here would be marshal to json and save; simplified to print object:
fmt.Printf("saved this object: %+v \n", tx)
}
type AllowanceTX struct {
Metadata
AddField string
}
func (tx AllowanceTX) logicAndSaveTX() {
// logic on BaseTX fields; simplified:
tx.Field1 = "overwritten"
tx.Field2 = "logic done"
tx.AddField = "more stuff"
// here would be marshal to json and save; simplified to print object:
fmt.Printf("saved this object: %+v \n", tx)
}
If the handling of the metadata (or whatever) fields is identical in all uses, maybe you give that type its own logicTX method to fill those fields, which can be called by the logicAndSaveTX of the structs that embed it.
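A minimal sketch of that suggestion (the logicTX name and the overwritten field values are illustrative, following the example above):
package main

import "fmt"

type Metadata struct {
	Field1 string
	Field2 string
}

// The shared logic lives on the embedded type itself.
func (m *Metadata) logicTX() {
	m.Field1 = "overwritten"
	m.Field2 = "logic done"
}

type AllowanceTX struct {
	Metadata
	AddField string
}

func (tx AllowanceTX) logicAndSaveTX() {
	tx.logicTX() // fills the shared fields via the embedded type
	tx.AddField = "more stuff"
	fmt.Printf("saved this object: %+v \n", tx)
}

func main() {
	tx := AllowanceTX{Metadata{"Base info1", "Base info2"}, "additional field"}
	tx.logicAndSaveTX()
}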
The key here is to think of the behavior (methods) on a type to be scoped to that type, instead of thinking of it as somehow being able to operate on "child types". Child types don't exist, and there is no way for a type that is embedded in another type to operate on its container.
Also, note that Go only supports runtime polymorphism, through interfaces. Compile-time polymorphism is not possible in Go.
Source: https://golangbyexample.com/oop-polymorphism-in-go-complete-guide/

Ensuring embedded structs implement interface without introducing ambiguity

I'm trying to clean up my code base by doing a better job defining interfaces and using embedded structs to reuse functionality. In my case I have many entity types that can be linked to various objects. I want to define interfaces that capture the requirements and structs that implement the interfaces which can then be embedded into the entities.
// All entities implement this interface
type Entity interface {
Identifier() string
Type() string
}
// Interface for entities that can link Foos
type FooLinker interface {
LinkFoo()
}
type FooLinkerEntity struct {
Foo []*Foo
}
func (f *FooLinkerEntity) LinkFoo() {
// Issue: Need to access Identifier() and Type() here
// but FooLinkerEntity doesn't implement Entity
}
// Interface for entities that can link Bars
type BarLinker interface {
LinkBar()
}
type BarLinkerEntity struct {
Bar []*Bar
}
func (b *BarLinkerEntity) LinkBar() {
// Issues: Need to access Identifier() and Type() here
// but BarLinkerEntity doesn't implement Entity
}
So my first thought was to have FooLinkerEntity and BarLinkerEntity just implement the Entity interface.
// Implementation of Entity interface
type EntityModel struct {
Id string
Object string
}
func (e *EntityModel) Identifier() string { return e.Id }
func (e *EntityModel) Type() string { return e.Object }
type FooLinkerEntity struct {
EntityModel
Foo []*Foo
}
type BarLinkerEntity struct {
EntityModel
Bar []*Bar
}
However, this ends up with an ambiguity error for any types that can link both Foos and Bars.
// Baz.Identifier() is ambiguous between EntityModel, FooLinkerEntity,
// and BarLinkerEntity.
type Baz struct {
EntityModel
FooLinkerEntity
BarLinkerEntity
}
What's the correct Go way to structure this type of code? Do I just do a type assertion in LinkFoo() and LinkBar() to get to Identifier() and Type()? Is there any way to get this check at compile time instead of runtime?
Go is not (quite) an object oriented language: it does not have classes and it does not have type inheritance; but it supports a similar construct called embedding both on struct level and on interface level, and it does have methods.
So you should stop thinking in OOP and start thinking in composition. Since you said in your comments that FooLinkerEntity will never be used on its own, that helps us achieve what you want in a clean way.
I will use new names and less functionality to concentrate on the problem and solution, which results in shorter code and which is also easier to understand.
The full code can be viewed and tested on the Go Playground.
Entity
The simple Entity and its implementation will look like this:
type Entity interface {
Id() int
}
type EntityImpl struct{ id int }
func (e *EntityImpl) Id() int { return e.id }
Foo and Bar
In your example FooLinkerEntity and BarLinkerEntity are just decorators, so they don't need to embed (extend, in OOP terms) Entity, and their implementations don't need to embed EntityImpl. However, since we want to use the Entity.Id() method, we need an Entity value, which may or may not be an EntityImpl; let's not restrict their implementation. Also, we may choose to embed it or make it a "regular" struct field; it doesn't matter (both work):
type Foo interface {
SayFoo()
}
type FooImpl struct {
Entity
}
func (f *FooImpl) SayFoo() { fmt.Println("Foo", f.Id()) }
type Bar interface {
SayBar()
}
type BarImpl struct {
Entity
}
func (b *BarImpl) SayBar() { fmt.Println("Bar", b.Id()) }
Using Foo and Bar:
f := FooImpl{&EntityImpl{1}}
f.SayFoo()
b := BarImpl{&EntityImpl{2}}
b.SayBar()
Output:
Foo 1
Bar 2
FooBarEntity
Now let's see a "real" entity which is an Entity (implements Entity) and has both the features provided by Foo and Bar:
type FooBarEntity interface {
Entity
Foo
Bar
SayFooBar()
}
type FooBarEntityImpl struct {
*EntityImpl
FooImpl
BarImpl
}
func (x *FooBarEntityImpl) SayFooBar() {
fmt.Println("FooBar", x.Id(), x.FooImpl.Id(), x.BarImpl.Id())
}
Using FooBarEntity:
e := &EntityImpl{3}
x := FooBarEntityImpl{e, FooImpl{e}, BarImpl{e}}
x.SayFoo()
x.SayBar()
x.SayFooBar()
Output:
Foo 3
Bar 3
FooBar 3 3 3
FooBarEntity round #2
If FooBarEntityImpl does not need to know about (does not use) the internals of the Entity, Foo and Bar implementations (EntityImpl, FooImpl and BarImpl in our case), we may choose to embed only the interfaces and not the implementations. In that case we can't call x.FooImpl.Id(), because Foo does not implement Entity; but that is an implementation detail, and our starting assumption was that we don't need or use it:
type FooBarEntityImpl struct {
Entity
Foo
Bar
}
func (x *FooBarEntityImpl) SayFooBar() { fmt.Println("FooBar", x.Id()) }
Its usage is the same:
e := &EntityImpl{3}
x := FooBarEntityImpl{e, &FooImpl{e}, &BarImpl{e}}
x.SayFoo()
x.SayBar()
x.SayFooBar()
Its output:
Foo 3
Bar 3
FooBar 3
Try this variant on the Go Playground.
FooBarEntity creation
Note that when creating FooBarEntityImpl, a single Entity value is used in multiple composite literals. Since we created only one Entity (EntityImpl) and used it in all places, there is only one id shared by the different implementation structs; only a "reference" is passed to each struct, not a duplicate / copy. This is also the intended / required usage.
Since FooBarEntityImpl creation is non-trivial and error-prone, it is recommended to create a constructor-like function:
func NewFooBarEntity(id int) FooBarEntity {
e := &EntityImpl{id}
return &FooBarEntityImpl{e, &FooImpl{e}, &BarImpl{e}}
}
Note that the factory function NewFooBarEntity() returns a value of interface type and not the implementation type (good practice to be followed).
It is also a good practice to make the implementation types un-exported, and only export the interfaces, so implementation names would be entityImpl, fooImpl, barImpl, fooBarEntityImpl.
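For example, a small sketch of that convention (reusing the Entity example from above):
package entity

// Only the interface and the constructor are exported; the
// implementation type stays unexported.
type Entity interface {
	Id() int
}

type entityImpl struct{ id int }

func (e *entityImpl) Id() int { return e.id }

// NewEntity is the only way for other packages to obtain an Entity.
func NewEntity(id int) Entity {
	return &entityImpl{id: id}
}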
Some related questions worth checking out
What is the idiomatic way in Go to create a complex hierarchy of structs?
is it possible to call overridden method from parent struct in golang?
Can embedded struct method have knowledge of parent/child?
Go embedded struct call child method instead parent method
It seems to me that having three IDs in one structure, with methods relying on them, is semantically incorrect anyway. To avoid the ambiguity you have to write a bit more code, to my mind. For example, something like this:
type Baz struct {
EntityModel
Foo []*Foo
Bar []*Bar
}
func (b Baz) LinkFoo() {
(&FooLinkerEntity{b.EntityModel, b.Foo}).LinkFoo()
}
func (b Baz) LinkBar() {
(&BarLinkerEntity{b.EntityModel, b.Bar}).LinkBar()
}

Best design for lookup-and-possibly-change method

I am designing a class that stores (caches) a set of data. I want to look up a value; if the class contains the value, then use it and modify a property of the class. I am concerned about the design of the public interface.
Here is how the class is going to be used:
ClassItem *pClassItem = myClass.Lookup(value);
if (pClassItem)
{ // item is found in class so modify and use it
pClassItem->SetAttribute(something);
... // use myClass
}
else
{ // value doesn't exist in the class so add it
myClass.Add(value, something);
}
However I don't want to have to expose ClassItem to this client (ClassItem is an implementation detail of MyClass).
To get round that the following could be considered:
bool found = myClass.Lookup(value);
if (found)
{ // item is found in class so modify and use it
myClass.ModifyAttribute(value, something);
... // use myClass
}
else
{ // value doesn't exist in the class so add it
myClass.Add(value, something);
}
However this is inefficient as Modify will have to do the lookup again. This would suggest a lookupAndModify type of method:
bool found = myClass.LookupAndModify(value, something);
if (found)
{ // item is found in class
... // use myClass
}
else
{ // value doesn't exist in the class so add it
myClass.Add(value, something);
}
But rolling LookupAndModify into one method seems like very poor design. It also only modifies if value is found and so the name is not only cumbersome but misleading as well.
Is there another better design that gets round this issue? Any design patterns for this (I couldn't find anything through google)?
Actually, std::set<>::insert() does precisely this: it returns a pair of an iterator and a bool. If the value already exists, the iterator points to the existing item and the bool is false; otherwise the element is inserted and the iterator points to it.
It is likely that you are using a similar data structure for fast lookups anyway, so a clean public interface (calling site) will be:
myClass.SetAttribute(value, something)
which always does the right thing. MyClass handles the internal plumbing and clients don't worry about whether the value exists.
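A minimal sketch of that interface, assuming MyClass caches items in a std::map keyed by the looked-up value (the member and type names here are made up):
#include <map>
#include <string>

class MyClass {
public:
    // Insert-or-modify with a single lookup: try_emplace returns an iterator
    // to the element (existing or newly inserted) plus a bool.
    void SetAttribute(const std::string& value, int something) {
        auto [it, inserted] = items_.try_emplace(value, something);
        if (!inserted) {
            it->second = something;  // value already cached: just modify it
        }
    }

private:
    std::map<std::string, int> items_;
};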
Two things.
The first solution is close.
Don't, however, return ClassItem *. Return an "opaque object": an integer index or other hash code that's opaque (meaningless) to the client, but usable by the myClass instance.
Then lookup returns an index, which modify can subsequently use.
void *index = myClass.lookup( value );
if( index ) {
myClass.modify( index, value );
}
else {
myClass.add( value );
}
After writing the "primitive" Lookup, Modify and Add, then write your own composite operations built around these primitives.
Write a LookupAndModify, TryModify, AddIfNotExists and other methods built from your lower-level pieces.
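A sketch of how those composites could be layered on the primitives; Value and Attribute are hypothetical placeholder types, and the internal std::map is just one possible representation:
#include <map>
#include <string>

// Hypothetical types, just for the sketch.
using Value = std::string;
using Attribute = int;

class MyClass {
public:
    // Primitives.
    void* Lookup(const Value& value) {
        auto it = items_.find(value);
        return it == items_.end() ? nullptr : &it->second;
    }
    void Modify(void* index, const Attribute& something) {
        *static_cast<Attribute*>(index) = something;
    }
    void Add(const Value& value, const Attribute& something) {
        items_.emplace(value, something);
    }

    // Composites built from the primitives.
    bool TryModify(const Value& value, const Attribute& something) {
        void* index = Lookup(value);
        if (index == nullptr) {
            return false;          // not present, nothing modified
        }
        Modify(index, something);
        return true;
    }

    // Returns true if the value was added; if it already existed,
    // its attribute is updated instead and false is returned.
    bool AddIfNotExists(const Value& value, const Attribute& something) {
        if (TryModify(value, something)) {
            return false;
        }
        Add(value, something);
        return true;
    }

private:
    std::map<Value, Attribute> items_;
};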
This assumes that you're setting value to the same "something" in both the Modify and Add cases:
if (!myClass.AddIfNotExists(value, something)) {
// use myClass
}
Otherwise:
if (myClass.TryModify(value, something)) {
// use myClass
} else {
myClass.Add(value, otherSomething);
}