I'm working on an implementation of the Repository pattern in Rust.
I need to have two (or more) files:
entity.rs — data descriptions
repository.rs — data access methods
...
Problem:
One file implies one mod. This means that for a function in repository.rs to access a struct's fields from entity.rs, those fields must be pub. Is there some way to avoid this?
In Rust, modules are self-contained. Unlike C++ or Java, there is no cheating via friend declarations or reflection.
As such, if you (arbitrarily) attempt to separate the definition of the struct from the methods in charge of maintaining its encapsulation, you will fight against the language.
Solution 1: Prefer non-member non-friend functions
Define the methods that absolutely require access to the fields in entity.rs; if you follow the "Prefer non-member non-friend functions" guideline from C++, you should see that most methods actually do NOT need to access the fields directly. For example, empty can be defined in terms of len:
fn empty(c: &Container) -> bool { c.len() == 0 }
Then, repository.rs can add many other methods if it needs to, but has to go through the "minimal" interface exported by entity.rs to achieve its needs. Since you are in control of both modules, you can tweak the methods of entity.rs at will anyway so it should not be an issue.
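For instance, here is a minimal sketch of the split, assuming a hypothetical Account entity (the names and methods are illustrative, not prescribed):

// entity.rs -- fields stay private; only a minimal interface is public
pub struct Account {
    balance: i64,
}

impl Account {
    pub fn new(balance: i64) -> Self {
        Account { balance }
    }

    pub fn balance(&self) -> i64 {
        self.balance
    }
}

// repository.rs -- goes through the minimal interface, no field access needed
use crate::entity::Account;

pub fn is_overdrawn(account: &Account) -> bool {
    account.balance() < 0
}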
I would point out that encapsulation-wise, this is the sensible decision: reducing the number of methods that may access the internals of an object reduces the number of methods that may put this object in an invalid state.
This solution is advantageous because you are not fighting the language.
Solution 2: Total split
Another solution is to duplicate your entities:
have the internal entity, entirely public
have the external entity, opaque
This is achieved by:
pub struct SomeEntImpl {
    pub field0: i32,
}

pub struct SomeEnt {
    inner: SomeEntImpl,
}
The authorized modules will be given references to a SomeEntImpl, while others will have to use the restricted interface available through SomeEnt. The control over who sees what will be achieved by careful exports.
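A hedged sketch of what those careful exports could look like; the pub(crate) accessor here is just one possible mechanism for granting access to the "authorized" modules, not prescribed by the answer:

// entity.rs
pub struct SomeEntImpl {
    pub field0: i32,
}

pub struct SomeEnt {
    inner: SomeEntImpl,
}

impl SomeEnt {
    // restricted interface, safe to expose to any consumer
    pub fn field0(&self) -> i32 {
        self.inner.field0
    }

    // crate-only escape hatch for the authorized modules
    pub(crate) fn inner_mut(&mut self) -> &mut SomeEntImpl {
        &mut self.inner
    }
}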
This solution will probably drive you insane.
Consider a SAM (single abstract method) interface defined in Java:
public interface Transform {
    public String apply(String str);
}
This interface automatically supports lambda-to-type (SAM) conversion in Kotlin:
fun run(transform: Transform) {
    println(transform.apply("world"))
}

run { x -> "Hello $x!!" } // runs fine without any issues
But now consider an equivalent interface defined in Kotlin:

interface Transform2 {
    fun apply(str: String): String
}
Now the only way to invoke the run function would be by creating an anonymous instance of Transform2:

run(object : Transform2 {
    override fun apply(str: String): String = "hello $str!!"
})
But if we make the Transform2 interface a functional interface, then the following is possible:
run { str -> "hello $str!!" }
Why can't the Kotlin compiler automatically convert lambdas to matching interfaces (just as it does with Java interfaces), without needing to explicitly mark said interfaces as functional interfaces?
I've found some kind of rationale in a comment on KT-7770:
... treating all the applicable interfaces as SAM might be too
unexpected/implicit: one having a SAM-applicable interface may not
assume that it will be used for SAM-conversions. Thus, adding another
method to the interface becomes more painful since it might require
changing syntax on the call sites (e.g. transforming callable
reference to object literal).
Because of it, current vision is adding some kind of modifier for
interfaces that when being applied:
Adds a check that the interface is a valid SAM
Allows SAM-conversions on call sites for it
Something like this:
fun interface MyRunnable {
    fun run()
}
Basically, he is saying that if the SAM conversion were done implicitly by default, and you then added a new method to the interface, the SAM conversions would no longer be performed, and every place that used the conversion would need to be changed. The word fun is there to tell the compiler to check that the interface indeed has only one abstract method, and also to tell call sites that this is indeed a SAM interface, so they can expect the author not to suddenly add new abstract methods and break their code.
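To make the breakage concrete, here is a small sketch (Validator and check are invented names, not from the thread):

fun interface Validator {
    fun validate(s: String): Boolean
}

fun check(v: Validator) = println(v.validate("input"))

fun main() {
    check { it.isNotEmpty() } // fine while Validator has exactly one abstract method
}

// If the author later adds a second abstract method, e.g. fun describe(): String,
// the lambda above no longer satisfies the interface, and every such call site
// must be rewritten as an object literal.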
The thread goes on to discuss why the same argument can't be applied to Java, and the reason essentially boils down to "Java is not Kotlin".
This is speculation, but I strongly suspect one reason is to avoid encouraging the use of functional interfaces over Kotlin's more natural approach.
Functional interfaces are Java's solution to the problem of adding lambdas to the Java language in a way that involved the least change and risk, and the greatest compatibility with what had been best practice in the nearly 20 years that Java had existed without them: the use of anonymous classes implementing named interfaces. It needs umpteen different named interfaces such as Supplier, BiFunction, DoublePredicate… each with their own method and parameter names, each incompatible with all the others — and with all the other interfaces people have developed over the years. (For example, Java has a whole host of interfaces that are effectively one-parameter functions — Function, UnaryOperator, Consumer, Predicate, ActionListener, AWTEventListener… — but are all unrelated and incompatible.) And all this is to make up for the fact that Java doesn't have first-class functions.
Kotlin has first-class functions, which are a much more general, more elegant, and more powerful approach. (For example, you can write a lambda (or function, or function literal) taking a single parameter, and use it anywhere that you need a function taking a single parameter, without worrying about its exact interface. You don't have to choose between similar-looking interfaces, or write your own if there isn't one. And there are none of the hidden gotchas that occur when Java can't infer the correct interface type.) All the standard library uses function types, as does most other Kotlin code people write. And because they're so widely-used, they're widely supported: as part of the Kotlin ecosystem, everyone benefits.
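For example, a hypothetical sketch of the earlier run function rewritten against a function type instead of a named interface (runTransform is an invented name):

fun runTransform(transform: (String) -> String) {
    println(transform("world"))
}

fun main() {
    runTransform { x -> "Hello $x!" }  // any (String) -> String lambda fits
    runTransform(String::uppercase)    // so does any matching function reference
}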
So Kotlin supports functional interfaces mainly for compatibility with Java. Compared to first-class functions, they're basically a hack. A very ingenious and elegant hack, and arguably a necessary one given how important backward compatibility is to the Java platform — but a hack nonetheless. And so I suspect that JetBrains want to encourage people to use function types in preference to them where possible.
In Kotlin, you have to explicitly request features which improve Java compatibility but can lead to worse Kotlin code (such as @JvmStatic for static methods, or casting to java.lang.Object in order to call wait()/notify()). So it fits the same pattern that you also have to explicitly request a functional interface (by using fun interface).
(See also my previous answer on the subject.)
How do I structure Raku code so that certain symbols are public within the library I am writing, but not public to users of the library? (I'm saying "library" to avoid the terms "distribution" and "module", which the docs sometimes use in overlapping ways. But if there's a more precise term that I should be using, please let me know.)
I understand how to control privacy within a single file. For example, I might have a file Foo.rakumod with the following contents:
unit module Foo;
sub private($priv) { #`[do internal stuff] }
our sub public($input) is export { #`[ code that calls &private ] }
With this setup, &public is part of my library's public API, but &private isn't – I can call it within Foo, but my users cannot.
How do I maintain this separation if &private gets large enough that I want to split it off into its own file? If I move &private into Bar.rakumod, then I will need to give it our (i.e., package) scope and export it from the Bar module in order to be able to use it from Foo. But doing so in the same way I exported &public from Foo would result in users of my library being able to use Foo and call &private – exactly the outcome I am trying to avoid. How do I maintain &private's privacy?
(I looked into enforcing privacy by listing Foo as a module that my distribution provides in my META6.json file. But from the documentation, my understanding is that provides controls what modules package managers like zef install by default but does not actually control the privacy of the code. Is that correct?)
[EDIT: The first few responses I've gotten make me wonder whether I am running into something of an XY problem. I thought I was asking about something in the "easy things should be easy" category. I'm coming at the issue of enforcing API boundaries from a Rust background, where the common practice is to make modules public within a crate (or just to their parent module) – so that was the X I asked about. But if there's a better/different way to enforce API boundaries in Raku, I'd also be interested in that solution (since that's the Y I really care about)]
I will need to give it our (i.e., package) scope and export it from the Bar module
The first step is not necessary. The export mechanism works just as well on lexically scoped subs too, and means they are only available to modules that import them. Since there is no implicit re-export, the module user would have to explicitly use the module containing the implementation details to have them in reach. (As an aside, personally, I pretty much never use our scope for subs in my modules, and rely entirely on exporting. However, I see why one might decide to make them available under a fully qualified name too.)
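A minimal sketch of that arrangement, reusing the names from the question (subs are lexically scoped by default, so no our is needed):

# Bar.rakumod
unit module Bar;
sub private($priv) is export { #`[do internal stuff] }

# Foo.rakumod
use Bar;          # &private is now lexically in scope in this file only
unit module Foo;
our sub public($input) is export { private($input) }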
It's also possible to use export tags for the internal things (is export(:INTERNAL), and then use My::Module::Internals :INTERNAL) to provide an even stronger hint to the module user that they're voiding the warranty. At the end of the day, no matter what the language offers, somebody sufficiently determined to re-use internals will find a way (even if it's copy-paste from your module). Raku is, generally, designed with more of a focus on making it easy for folks to do the right thing than to make it impossible to "wrong" things if they really want to, because sometimes that wrong thing is still less wrong than the alternatives.
Off the bat, there's very little you can't do, as long as you're in control of the meta-object protocol. Anything that's syntactically possible you could, in principle, do by declaring a specific kind of method or class through the MOP. For instance, you could have a private class which would be visible only to members of the same namespace (down to whatever level you design). There's Metamodel::Trusting, which defines, for a particular entity, whom it trusts (please bear in mind that this is part of the implementation, not the spec, and thus subject to change).
A less scalable way would be to use trusts. The new, private modules would need to be classes and issue a trusts X for every class that should access them. That could include classes belonging to the same distribution... or not; that's up to you to decide. It's the Metamodel class above that supplies this trait, so using it directly might give you a greater level of control (with a lower level of programming).
There is no way to enforce this 100%, as others have said. Raku simply provides the user with too much flexibility for you to be able to perfectly hide implementation details externally while still sharing them between files internally.
However, you can get pretty close with a structure like the following:
# in Foo.rakumod
use Bar;
unit module Foo;
sub public($input) is export { #`[ code that calls &private ] }
# In Bar.rakumod
unit module Bar;
sub private($priv) is export is implementation-detail {
    unless callframe(1).code.?package.^name eq 'Foo' {
        die '&private is a private function. Please use the public API in Foo.';
    }
    #`[do internal stuff]
}
This function will work normally when called from a function declared in the mainline of Foo, but will throw an exception if called from elsewhere. (Of course, the user can catch the exception; if you want to prevent that, you could exit instead – but then a determined user could overwrite the &*EXIT handler! As I said, Raku gives users a lot of flexibility).
Unfortunately, the code above has a runtime cost and is fairly verbose. And, if you want to call &private from more locations, it would get even more verbose. So it is likely better to keep private functions in the same file the majority of the time – but this option exists for when the need arises.
I see some usages of Extension functions in Kotlin I don't personally think that makes sense, but it seems that there are some guidelines that "apparently" support it (a matter of interpretation).
Specifically: defining an extension function outside a class (but in the same file):
data class AddressDTO(
    val state: State,
    val zipCode: String,
    val city: String,
    val streetAddress: String
)
fun AddressDTO.asXyzFormat() = "${streetAddress}\n${city}\n${state.name} $zipCode"
Here asXyzFormat() is widely used and cannot be declared private/internal (but the question also covers the cases where it can be).
In my common sense, if you own the code (AddressDTO) and the usage is not local to some class/module (hence being private/internal), there is no reason to define an extension function - just define it as a member function of that class.
Edge case: if you want to avoid serialization of a function whose name starts with get, annotate it to get the desired behavior (e.g. @JsonIgnore on the function). This IMHO still doesn't justify an extension function.
The counter-response I got to this is that the approach of having an extension function of this fashion is supported by the Official Kotlin Coding Conventions. Specifically:
Use extension functions liberally. Every time you have a function that works primarily on an object, consider making it an extension function accepting that object as a receiver.
Source
And:
In particular, when defining extension functions for a class which are relevant for all clients of this class, put them in the same file where the class itself is defined. When defining extension functions that make sense only for a specific client, put them next to the code of that client. Do not create files just to hold "all extensions of Foo".
Source
I'll appreciate any commonly accepted source/reference explaining why it makes more sense to move the function to be a member of the class, and/or pragmatic arguments supporting this separation.
That quote about using extension functions liberally, I'm pretty sure, means use them liberally as opposed to top-level non-extension functions (not as opposed to making them member functions). It's saying that if a top-level function conceptually works on a target object, prefer the extension-function form.
I've searched before for the answer to why you might choose to make a function an extension function instead of a member function when working on a class you own the source code for, and have never found a canonical answer from JetBrains. Here are some reasons I think you might, but some are highly subject to opinion.
Sometimes you want a function that operates on a class with a specific generic type. Think of List<Int>.sum(), which is only available to a subset of Lists, not a subtype of List. (A short sketch of this point follows this list of reasons.)
Interfaces can be thought of as contracts. Functions that do something to an interface may make more sense conceptually since they are not part of the contract. I think this is the rationale for most of the standard library extension functions for Iterable and Sequence. A similar rationale might apply to a data class, if you think of a data class almost like a passive struct.
Extension functions afford the possibility of allowing users to pseudo-override them, but force them to do it in an independent way. Suppose your asXyzFormat() were an open member function. In some other module, you receive AddressDTO instances and want to get the XYZ format of them, exactly in the format you expect. But the AddressDTO you receive might have overridden asXyzFormat() and provide you something unexpected, so now you can't trust the function. If you use an extension function, then you allow users to replace asXyzFormat() in their own packages with something applicable to that space, but you can always trust the function asXyzFormat() in the source package.
Similarly for interfaces, a member function with default implementation invites users to override it. As the author of the interface, you may want a reliable function you can use on that interface with expected behavior. Although the end-user can hide your extension in their own module by overloading it, that will have no effect on your own uses of the function.
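Here is a small sketch of the first point, an extension pinned to a specific generic type (sumOfSquares is an invented name):

fun List<Int>.sumOfSquares(): Int = sumOf { it * it }

fun main() {
    println(listOf(1, 2, 3).sumOfSquares()) // 14
    // listOf("a", "b").sumOfSquares()      // does not compile: receiver is List<String>
}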
For what it's worth, I think it would be very rare to choose to make an extension function for a class (not an interface) when you own the source code for it. And I can't think of any examples of that in the standard library. Which leads me to believe that the Coding Conventions document is using the word "class" in a liberal sense that includes interfaces.
Here's a reverse argument…
One of the main reasons for adding extension functions to the language is being able to add functionality to classes from the standard library, and from third-party libraries and other dependencies where you don't control the code and can't add member functions (AKA methods). I suspect it's mainly those cases that that section of the coding conventions is talking about.
In Java, the only option in these cases is utility methods: static methods, usually gathered into a utility class with lots of such methods, each taking the relevant object as its first parameter:
public static String[] splitOnChar(String str, char separator)
public static boolean isAllDigits(String str)
…and so on, interminably.
The main problem there is that such methods are hard to find (no help from the IDE unless you already know about all the various utility classes). Also, calling them is long-winded (though it improved a bit once static imports were available).
Kotlin's extension methods are implemented exactly the same way down at the bytecode level, but their syntax is much simpler and exactly like member functions: they're written the same way (with this &c), calling them looks just like calling a member function, and your IDE will suggest them.
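As an illustration, the Java utility method above could be written as an extension (a sketch, not a standard-library function):

fun String.isAllDigits(): Boolean = isNotEmpty() && all { it.isDigit() }

fun main() {
    println("12345".isAllDigits()) // true; reads like a member call
    println("12a45".isAllDigits()) // false
}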
(Of course, they have drawbacks, too: no dynamic dispatch, no inheritance or overriding, scoping/import issues, name clashes, references to them are awkward, accessing them from Java or reflection is awkward, and so on.)
So: if the main purpose of extension functions is to substitute for member functions when member functions aren't possible, why would you use them when member functions are possible?!
(To be fair, there are a few reasons why you might want them. For example, you can make the receiver nullable, which isn't possible with member functions. But in most cases, they're greatly outweighed by the benefits of a proper member function.)
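A quick sketch of the nullable-receiver case (orDash is an invented name):

fun String?.orDash(): String = this ?: "-"

fun main() {
    val s: String? = null
    println(s.orDash()) // prints "-"; no null check needed at the call site
}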
This means that the vast majority of extension functions are likely to be written for classes that you don't control the source code for, and so you don't have the option of putting them next to the class.
Is there a language which has a feature that can prevent a class accessing any other class, unless an instance or reference is contained?
isolated class Example {
    public Integer i;

    public void doSomething()
    {
        i = 5; // This is ok because i belongs to this class

        /*
         * This is forbidden because this class can only
         * access anything contained within, nothing outside
         */
        System.out.println("This does not work.");
    }
}
Edit: An example use case might be a plugin system. I could define a plugin object with references to certain objects that the class can manipulate, but nothing else would be permissible. It could potentially make security concerns much easier to handle.
I'm not aware of any class-based access modifiers with such intent, but I believe access modifiers to be misguided anyway.
Capability-based security or, more specifically, the object-capability model seems to be what you want.
http://en.wikipedia.org/wiki/Object-capability_model
The basic idea is that in order to do anything with an object, you need to hold a reference to it. Withhold the reference and no access is possible.
Global things (such as System.out.println) and a few other things are problematic features of a language, because anyone can access them without a reference.
Languages such as E, or tools like google caja (for Javascript) allow proper object-capability models. Here an example in JS:
function Example(someObj) {
    this.someObj = someObj;
    this.doStuff = function() {
        this.someObj.foo(); // allowed, we have been given a reference to it
        alert("foobar");    // caja may deny/proxy access to the global "alert"
    };
}
Any language where you must include headers would probably count: Just don't include any headers.
However, I would wager that there's no language that explicitly forbids external access. What's the point? You can't do anything if you can't access the outside world. And, why would the reference to Integer be okay, but System.out.println not be?
If you clarify the potential use-case, we can probably help you better...
Edit for your Edit:
I thought you might be going there.
If this is for security, it's flawed from the start. Let's examine:
class EvilCode {
    void DoNiceThings() {
        HardDrive.Format();
    }
}
What incentive do I have to voluntarily place such a keyword on my class? I'm certainly not going to do it out of niceness, since I'm not nice!
One thing to consider is that any time you're loading native code that's not your own (native, in this case, means not scripted), you're potentially allowing a bad guy to run his code. No language features are going to protect you from that.
The proper answer depends on your target language. Java has Security descriptors, .NET lets you create AppDomains with restricted permissions, etc. Unfortunately, I'm not an expert in these fields.
Specifically, when you create an interface/implementor pair, and there is no overriding organizational concern (such as the interface needing to go in a different assembly, e.g. as recommended by the s# architecture), do you have a default way of organizing them in your namespace/naming scheme?
This is obviously a more opinion based question but I think some people have thought about this more and we can all benefit from their conclusions.
The answer depends on your intentions.
If you intend the consumer of your namespaces to use the interfaces over the concrete implementations, I would recommend having your interfaces in the top-level namespace with the implementations in a child namespace.
If the consumer is to use both, have them in the same namespace.
If the interface is for predominantly specialized use, like creating new implementations, consider having them in a child namespace such as Design or ComponentModel.
I'm sure there are other options as well, but as with most namespace issues, it comes down to the use-cases of the project, and the classes and interfaces it contains.
I usually keep the interface in the same namespace as the concrete types.
But, that's just my opinion, and namespace layout is highly subjective.
Animals
|
| - IAnimal
| - Dog
| - Cat
Plants
|
| - IPlant
| - Cactus
You don't really gain anything by moving one or two types out of the main namespace, but you do add the requirement for one extra using statement.
What I generally do is to create an Interfaces namespace at a high level in my hierarchy and put all interfaces in there (I do not bother to nest other namespaces in there as I would then end up with many namespaces containing only one interface).
Interfaces
|--IAnimal
|--IVegetable
|--IMineral

Organisms
|--AnimalImplementor
|--VegetableImplementor
|--MineralImplementor
This is just the way that I have done it in the past and I have not had many problems with it, though admittedly it might be confusing to others sitting down with my projects. I am very curious to see what other people do.
I prefer to keep my interfaces and implementation classes in the same namespace. When possible, I give the implementation classes internal visibility and provide a factory (usually in the form of a static factory method that delegates to a worker class, with an internal method that allows unit tests in a friend assembly to substitute a different worker that produces stubs). Of course, if the concrete class needs to be public (for instance, if it's an abstract base class), that's fine; I don't see any reason to put an ABC in its own namespace.
On a side note, I strongly dislike the .NET convention of prefacing interface names with the letter 'I.' The thing the (I)Foo interface models is not an ifoo, it's simply a foo. So why can't I just call it Foo? I then name the implementation classes specifically, for example, AbstractFoo, MemoryOptimizedFoo, SimpleFoo, StubFoo etc.
(.Net) I tend to keep interfaces in a separate "common" assembly so I can use that interface in several applications and, more often, in the server components of my apps.
Regarding namespaces, I keep them in BusinessCommon.Interfaces.
I do this to ensure that neither I nor my developers are tempted to reference the implementations directly.
Separate the interfaces in some way (projects in Eclipse, etc) so that it's easy to deploy only the interfaces. This allows you to provide your external API without providing implementations. This allows dependent projects to build with a bare minimum of externals. Obviously this applies more to larger projects, but the concept is good in all cases.
I usually separate them into two separate assemblies. One of the usual reasons for an interface is to have a series of objects look the same to some subsystem of your software. For example, I have all my reports implementing the IReport interface. IReport is used not only in printing but also for previewing and for selecting individual options for each report. Finally, I have a collection of IReport to use in a dialog where the user selects which reports they want to print (and configures each report's options).
The reports reside in a separate assembly, while IReport, the preview engine, the print engine, and the report-selection code reside in their respective core and/or UI assemblies.
If you use a factory class to return a list of available reports from the report assembly, then updating the software with a new report becomes merely a matter of copying the new report assembly over the original. You can even use the Reflection API to scan the list of assemblies for any report factories and build your list of reports that way.
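A hedged sketch of that reflection scan in C# (IReport and IReportFactory are stand-ins for the interfaces described above, not an actual API):

using System;
using System.Collections.Generic;

public interface IReport { string Name { get; } }
public interface IReportFactory { IEnumerable<IReport> CreateReports(); }

public static class ReportLoader
{
    // Scan every loaded assembly for report factories and collect their reports.
    public static List<IReport> LoadReports()
    {
        var reports = new List<IReport>();
        foreach (var assembly in AppDomain.CurrentDomain.GetAssemblies())
        {
            foreach (var type in assembly.GetTypes())
            {
                if (typeof(IReportFactory).IsAssignableFrom(type)
                    && type.IsClass && !type.IsAbstract)
                {
                    var factory = (IReportFactory)Activator.CreateInstance(type);
                    reports.AddRange(factory.CreateReports());
                }
            }
        }
        return reports;
    }
}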
You can apply this technique to files as well. My own software runs a metal-cutting machine, so we use this idea for the shape and fitting libraries we sell alongside our software.
Again the classes implementing a core interface should reside in a separate assembly so you can update that separately from the rest of the software.
I'll offer my own experience, which goes against the other answers.
I tend to put all my interfaces in the package they belong to. That way, if I move a package to another project, I have everything the package needs in order to run, without any changes.
For me, any helper functions and operator functions that are part of the functionality of a class should go into the same namespace as that of the class, because they form part of the public API of that namespace.
If you have common implementations that share the same interface in different packages you probably need to refactor your project.
Sometimes I see that there are plenty of interfaces in a project that could be converted into an abstract implementation rather than an interface.
So, ask yourself if you are really modeling a type or a structure.
A good example might be looking at what Microsoft does.
Assembly: System.Runtime.dll
System.Collections.Generic.IEnumerable<T>
Where are the concrete types?
Assembly: System.Collections.dll
System.Collections.Generic.List<T>
System.Collections.Generic.Queue<T>
System.Collections.Generic.Stack<T>
// etc
Assembly: EntityFramework.dll
System.Data.Entity.IDbSet<T>
Concrete Type?
Assembly: EntityFramework.dll
System.Data.Entity.DbSet<T>
Further examples
Microsoft.Extensions.Logging.ILogger<T>
- Microsoft.Extensions.Logging.Logger<T>
Microsoft.Extensions.Options.IOptions<T>
- Microsoft.Extensions.Options.OptionsManager<T>
- Microsoft.Extensions.Options.OptionsWrapper<T>
- Microsoft.Extensions.Caching.Memory.MemoryCacheOptions
- Microsoft.Extensions.Caching.SqlServer.SqlServerCacheOptions
- Microsoft.Extensions.Caching.Redis.RedisCacheOptions
Some very interesting tells here. When the namespace changes to support the implementation, the new namespace segment (Caching, Redis) is also prefixed to the derived type's name, as in RedisCacheOptions. Additionally, the derived types sit in an additional namespace named for the implementation.
Memory -> MemoryCacheOptions
SqlServer -> SqlServerCacheOptions
Redis -> RedisCacheOptions
This seems like a fairly easy pattern to follow most of the time. As an example (since none was given), the following pattern might emerge:
CarDealership.Entities.Dll
CarDealership.Entities.IPerson
CarDealership.Entities.IVehicle
CarDealership.Entities.Person
CarDealership.Entities.Vehicle
Maybe a technology like Entity Framework prevents you from using the predefined classes. Thus we make our own.
CarDealership.Entities.EntityFramework.Dll
CarDealership.Entities.EntityFramework.Person
CarDealership.Entities.EntityFramework.Vehicle
CarDealership.Entities.EntityFramework.SalesPerson
CarDealership.Entities.EntityFramework.FinancePerson
CarDealership.Entities.EntityFramework.LotVehicle
CarDealership.Entities.EntityFramework.ShuttleVehicle
CarDealership.Entities.EntityFramework.BorrowVehicle
Not that it happens often, but maybe there's a decision to switch technologies for whatever reason, and now we have...
CarDealership.Entities.Dapper.Dll
CarDealership.Entities.Dapper.Person
CarDealership.Entities.Dapper.Vehicle
//etc
As long as we're programming to the interfaces we've defined in the root Entities namespace (following the Liskov Substitution Principle), downstream code doesn't care where or how the interface was implemented.
More importantly, in my opinion, creating derived types also means you don't have to constantly include a different namespace, because the parent namespace contains the interfaces. I'm not sure I've ever seen a Microsoft example of interfaces stored in child namespaces that are then implemented in the parent namespace (almost an anti-pattern if you ask me).
I definitely don't recommend segregating your code by type, eg:
MyNamespace.Interfaces
MyNamespace.Enums
MyNameSpace.Classes
MyNamespace.Structs
This doesn't add any descriptive value. And it's akin to using System Hungarian notation, which is mostly, if not now exclusively, frowned upon.
I HATE when I find interfaces and implementations in the same namespace/assembly. Please don't do that, if the project evolves, it's a pain in the ass to refactor.
When I reference an interface, I want to implement it, not to get all its implementations.
What might be admissible is to put the interface with its dependent class (the class that references the interface).
EDIT: @Josh, I just read my last sentence, and it's confusing! Of course, both the dependent class and the one that implements the interface reference it. In order to make myself clear, I'll give examples:

Acceptable:

Interface + implementation:
namespace A;

interface IMyInterface
{
    void MyMethod();
}

namespace A;

class MyDependentClass
{
    private IMyInterface inject;

    public MyDependentClass(IMyInterface inject)
    {
        this.inject = inject;
    }

    public void DoJob()
    {
        // Bla bla
        inject.MyMethod();
    }
}
Implementing class:
namespace B;

class MyImplementing : IMyInterface
{
    public void MyMethod()
    {
        Console.WriteLine("hello world");
    }
}
NOT ACCEPTABLE:
namespace A;

interface IMyInterface
{
    void MyMethod();
}

namespace A;

class MyImplementing : IMyInterface
{
    public void MyMethod()
    {
        Console.WriteLine("hello world");
    }
}
And please DON'T CREATE a garbage project just for your interfaces! Example: ShittyProject.Interfaces. You've missed the point!
Imagine you created a DLL reserved for your interfaces (200 MB). If you had to add a single interface with two lines of code, your users would have to update 200 MB just for two dumb signatures!