How do we create plugins/extensions in object-oriented programming (OOP)?

How does software allow developers to build plugins/extensions on top of its core? How is that related to object-oriented programming? Perhaps it involves inheritance or interfaces? What kind of design pattern should one use?
For example, Firefox extensions that enhance Firefox, WordPress plugins, and so on. Those systems somehow "recognize" plugins once they are installed and work well with them, and in some cases they also perform safety checks, dependency resolution, and the like.
Anyone care to shed light on this?

Plugin models in real applications like Firefox may be more complex, but the general idea is simple: you define some interface that a plugin should implement, and implement it in your plugins. That's it.
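A minimal sketch of that idea in Java (all names here are invented for illustration):

```java
// The contract the host application defines and publishes to plugin authors.
interface Plugin {
    String name();
    void execute();
}

// A third-party plugin simply implements the contract.
class SpellCheckerPlugin implements Plugin {
    public String name() { return "spell-checker"; }
    public void execute() { System.out.println("Checking spelling..."); }
}

// The host works only against the interface, never the concrete type;
// a real host would discover and load plugins dynamically.
class Host {
    public static void main(String[] args) {
        Plugin plugin = new SpellCheckerPlugin();
        System.out.println("Loaded plugin: " + plugin.name());
        plugin.execute();
    }
}
```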
Modern frameworks also contain plugin development facilities, like MEF in .NET, Mojo in Java, etc.

Java supports a basic plug-in mechanism through its SPI (Service Provider Interface). The main mechanism revolves around discovery and binding of a new provider. The two articles below will get you started:
Replaceable Components and the Service Provider Interface
Creating Extensible Java Applications
There are open-source frameworks that are more powerful than what the JDK provides:
JPF
JSPF
But OSGi is the standard and mother of all plugin frameworks in my opinion.
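As a minimal sketch of the JDK's SPI mechanism (the ExportFilter interface is invented for illustration; the discovery itself is done by the standard java.util.ServiceLoader):

```java
import java.util.ServiceLoader;

// The service contract the host application publishes.
interface ExportFilter {
    String format();
    byte[] export(String document);
}

// Host-side discovery: ServiceLoader instantiates every implementation a
// provider jar names in META-INF/services/ExportFilter (one class per line).
class FilterRegistry {
    public static void main(String[] args) {
        for (ExportFilter filter : ServiceLoader.load(ExportFilter.class)) {
            System.out.println("Found provider for format: " + filter.format());
        }
    }
}
```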

Continued Ninject support in ASP.NET Core MVC?

I have been very happily using Ninject for a long time now, and I really like it, but I am faced with a difficult choice since the release of ASP.NET Core and MVC Core.
Basically, out of the gate, Microsoft has revealed its own dependency injection system, one that to my knowledge has drawn a lot of criticism. But my bigger problem lies with how it affects other libraries.
From another question I asked and other resources online, it seems that Ninject does not work out of the box with MVC Core, though there is a "solution" in the form of the library Microsoft.Framework.DependencyInjection.Ninject used together with Ninject. This is even trickier because that library requires adding https://www.myget.org/F/aspnetmaster/ to your list of NuGet feeds.
I have done some digging and found where this library is hosted. It looks fine, and it seems to work okay from what I can tell, but there are a few things that trouble me:
The library does not really appear to be headed by the Ninject creators
The library is buried pretty deep in an obscure repository
The actual Ninject resources online never mention it
So basically, I am very concerned that this is some kind of band-aid, and that support for Ninject (and even other container libraries) is dying out. Is there some hidden information that I'm just not discovering?
There is a discussion going on among the maintainers of the existing DI libraries about whether or not to build, maintain, and support an adapter for the new ASP.NET built-in DI system. The Autofac maintainers have confirmed that they will create and support an adapter, while the Ninject team has been silent, and other teams, such as the Simple Injector team (which includes me), have explained that they won't support an adapter.
Personally, I think that the ASP.NET Core built-in DI library is a nice and clean DI library, but it is limited to simple applications. As I explained here, many features that are required for developing maintainable applications built around the SOLID principles are not supported. However, just like the Unity DI library did a couple of years ago, I think that this built-in container might actually trigger developers to start using dependency injection, which is a win for our industry.
These limitations make the built-in container especially suited to configure and extend the ASP.NET system itself. To build large maintainable applications, you will need to use a different DI library. This of course is fine; you will have to pick the right tools for the job.
Unfortunately, up until now, the ASP.NET team has communicated publicly that using a different DI library means you will have to write/use an adapter. This unfortunately is the wrong message IMO, because most DI libraries are incompatible with the API presented by the built-in container (as I explained here and here in detail). Only Autofac seems reasonably in sync, which explains why the Autofac team chose to maintain an adapter. But do note that even Autofac has proven to be incompatible with the abstraction that Microsoft defined, and they (just like StructureMap) had to make big changes to their product to even be able to comply with the abstraction. And the Autofac maintainers are severely frustrated about the whole process and the abstraction in general. And as I explained here, even the ASP.NET-provided adapter implementation for Ninject is broken.
This message by the ASP.NET team to use an adapter is IMO a big error, because it stifles innovation (while the DI library itself doesn't; it's just another DI library). The ASP.NET team is promoting a model where both your application components and the ASP.NET system (and all other subsystems that will plug in in the future) are registered in your custom container. It is much more reasonable and practical to keep your application configuration separate from the configuration of the ASP.NET system (as explained here).
Because of this, I find the use of an adapter for any container rather useless. As I showed here, it is really easy to plug in your own DI container while keeping it completely separate from ASP.NET's registrations. This means that you don't need adapter support for Ninject to be able to use Ninject effectively in an ASP.NET Core project. The only thing the Ninject team needs to do is create a version that is compatible with .NET Core (in case your product needs to run on that new platform).
UPDATE: "Ninject 3.3.0 was released September 26th 2017 and now targets .NET Standard 2.0 and thus also runs on .NET Core 2.0." source
So in a nutshell, I'm not sure that support "is dying out", although some DI maintainers (such as the Simple Injector team, and probably Castle Windsor and Ninject as well) have chosen not to build, maintain, and support an adapter implementation for ASP.NET Core, because it is not needed and would only be in the way.
UPDATE November 2016
I've been discussing some improvements to ASP.NET Core with Microsoft to make it easier to plug in a container that doesn't have an adapter (take a look at my example repository and especially at the Startup.cs of the Ninject sample project), but until now Microsoft seems to be stalling progress because (as Fowler himself states) their "bias towards conforming containers [is] clouding [their] vision".
The library does not really appear to be headed by the Ninject creators
That library, and it would seem these also, look to be Microsoft-created samples of dependency injection providers that were removed in beta7. Note that the link to DI in MVC 6 referenced by your original question says the following:
These DI container adaptors are temporary and are there for reference; we expect that they will eventually be removed and replaced by the respective container owners.
As they should be. Microsoft should not be responsible for maintaining 3rd party providers.
The library is buried pretty deep in an obscure repository
If you are not aware, ASP.NET 5 is still in development. Beta 7 is available on NuGet as a pre-release, but there are other sources as well, including:
https://github.com/aspnet/ (source code)
https://www.myget.org/gallery/aspnetvnext (nuget dev branch builds of the above)
https://www.myget.org/gallery/aspnetmaster (nuget master branch builds of the above [same as your question])
These sources are maintained by Microsoft.
The actual Ninject resources online never mention it
As with any new development, 3rd party library providers must themselves determine when (if at all) they will provide implementations of their products that support the new codebase. For some, it will be seen as most efficient to wait until the new framework is officially released, as API-breaking changes are still highly likely to occur until that point. Whether support will be implemented at all depends, of course, on the provider's resources and/or, in the case of open source, the community.

What are the advantages of using OSGi at target side in a Remote Software Provisioning System?

I am developing a Remote Software Provisioning system that should be able to handle all deployment, installation, un-installation and upgrades of software components. Software can be in any language (Java, .NET, C/C++, etc.) and targets can be PCs, embedded systems, and smartphones.
I have found Apache ACE to be a good candidate for developing this system.
I want to know if there is any advantage in, or necessity of, using OSGi at the target side, as Apache ACE can do software provisioning to non-OSGi targets as well.
Having a modular framework like OSGi at the client side is a huge advantage when doing remote management, because it gives you much insight into what's happening inside: installed bundles, dependencies, bundle states, available services, etc. This helps a lot when you have to solve a problem remotely. Another advantage is that OSGi basically forces programmers to develop proper modular and dynamic systems, which makes (remote) updating much easier.
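As a rough illustration of that introspection (the class name is invented; the calls are the standard org.osgi.framework API), a trivial bundle can enumerate every installed bundle and its lifecycle state:

```java
import org.osgi.framework.Bundle;
import org.osgi.framework.BundleActivator;
import org.osgi.framework.BundleContext;

// A diagnostic bundle activator: dumps the symbolic name, version and
// lifecycle state of every installed bundle, the kind of information a
// remote management agent can report back to the provisioning server.
public class DiagnosticsActivator implements BundleActivator {
    public void start(BundleContext context) {
        for (Bundle bundle : context.getBundles()) {
            System.out.printf("%s %s state=%d%n",
                    bundle.getSymbolicName(), bundle.getVersion(), bundle.getState());
        }
    }

    public void stop(BundleContext context) {
        // Nothing to clean up in this sketch.
    }
}
```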
So, if you have to decide now what language and framework to use for the client side, I strongly recommend OSGi for the embedded and mobile clients. For the PCs (I guess you mean desktop PCs?) it is probably not the best choice; it depends a lot on what you want to achieve there. If you want to install MS Office remotely, OSGi won't get you far ;)
However, if you already have existing programs at the client side and are discussing whether to convert them to OSGi, I would recommend investing some time first to see whether they can be converted easily. Some software packages could give you a lot of trouble converting to OSGi, not because OSGi is complex, but because the program itself is not modular and makes a lot of assumptions about the static nature of the environment (e.g. nothing ever disappears, parts of the system never get updated, etc.). The irony in the matter is that these are exactly the programs which will give you the most trouble later anyway, no matter which remote provisioning system you choose.
If you have OSGi on some of the targets, be sure to use a remote provisioning system which gives you access to the full OSGi functionality and not only the most basic install and update functions. I haven't yet used Apache ACE, but I have experience with another provisioning system, mPower Remote Manager. Here are some snapshots from the documentation which can give you a feeling for what is possible with OSGi as a base; you can draw your own conclusions as to whether it will be useful for your case or not.
I've given some examples in the other question you asked:
What are the non-osgi targets with which Apache ACE can work
You can write your own management agent that talks to the ACE server and installs artifacts. There actually are a couple of places where you could hook in your own code and protocol. Is there a concrete language/environment you're thinking of using, or are you just exploring the possibilities right now?
Well, the advantages of OSGi haven't changed, so for that I can refer you to the standard page.
To be a bit more constructive, I'll read the question as 'Should I bother converting my application to OSGi, as it is not necessary for ACE?'
I think that depends on what 'kind' of updating mechanism you're after. If you have a monolithic application (at least from the provisioning perspective) which you deploy and update only as a whole (like an iOS app), then there isn't much to gain for provisioning purposes by using OSGi.
For the rest, I can tell you the same as I tell anybody else: converting an application to OSGi isn't hard, but modularizing code can be a nightmare, and it's something you'll need to face at some point, OSGi or not. If your code is modularized already, using OSGi should be a piece of cake.

Add-ons/extensions: how do I program them?

I want to know how to enable developers to create add-ons for my application, like Chrome, Firefox, Blender and VS do.
I'm asking here about the concept: how did they do it, programmatically? What do I need to provide in my application to make this work?
Any references that might help me?
There are a number of options.
You can embed a scripting language (or an entire VM, like .NET or the JVM) into your application, providing a decent API for all the internal functionality. If your application is already built on top of such a VM, chances are you don't need to do anything specific to enable extensibility; just make sure your API is available and documented. Popular embedded scripting choices are Lua, Python, Guile and Tcl.
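For instance, on the JVM the JSR-223 javax.script API gives you this almost for free. A sketch (the HostApi class is invented; it assumes a JavaScript engine such as Nashorn or GraalJS is on the classpath):

```java
import javax.script.ScriptEngine;
import javax.script.ScriptEngineManager;
import javax.script.ScriptException;

public class ScriptedExtensions {
    // A deliberately small host API surface that user scripts may call into.
    public static class HostApi {
        public void log(String message) { System.out.println("[ext] " + message); }
    }

    public static void main(String[] args) throws ScriptException {
        // Returns null if no JavaScript engine is available on the classpath.
        ScriptEngine engine = new ScriptEngineManager().getEngineByName("javascript");
        engine.put("app", new HostApi());       // expose the host API to scripts
        engine.eval("app.log('hello from a user extension')"); // a one-line "plugin"
    }
}
```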
Alternatively, for purely native code, you can provide your API as a separate dynamic-link library and allow third-party modules (linked against that library) to be loaded.
You can also make your application modular (split into separate processes), with the components talking to each other over a simple, text-based protocol via pipes or sockets. A very elaborate and powerful infrastructure is available for this integration option, known as the "Unix way". In this case, users will be able to choose any way of integrating their extensions with your core functionality.
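A toy version of that third option (the command names are invented): the core listens on a socket and extensions exchange simple text commands with it, one per line:

```java
import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.io.PrintWriter;
import java.net.ServerSocket;
import java.net.Socket;

public class ExtensionPort {
    public static void main(String[] args) throws Exception {
        try (ServerSocket server = new ServerSocket(7777);
             Socket ext = server.accept();      // one extension, for brevity
             BufferedReader in = new BufferedReader(
                     new InputStreamReader(ext.getInputStream()));
             PrintWriter out = new PrintWriter(ext.getOutputStream(), true)) {
            String line;
            while ((line = in.readLine()) != null) {
                // A one-command "protocol"; a real core would dispatch on a verb.
                out.println("VERSION".equals(line) ? "core 1.0" : "ERR unknown command");
            }
        }
    }
}
```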
Choose any of these, depending on the nature of your application.

Should I use CORBA, MessagePack RPC or Thrift, or something else entirely?

I'm writing software for a new hardware device which I want any kind of new third-party application to be able to access if they want to.
The software will be a native process (C++) that should be pollable by 3rd party games and applications that want to support the hardware device. Those 3rd party apps should also be able to receive events from the native process, on a subscribe basis. So aside from the native process, I'll also supply "connector" libraries to the 3rd party developers, for all platforms/languages that they might choose (Java, C++, Python etc.) to embed in their apps so they can easily connect to the device with hardly any extra code needing to be written by them. I want to target all desktop/laptop OS platforms, and have a pretty good idea of what functions I want to expose, but ideally I don't want to be too stuck (i.e. I want it to be elegantly scalable from both client and server perspectives).
I'm looking for reliability going forward, performance, maintainability going forward, and cross-platform/language flexibility going forward, and ease of development, in that order.
What should I use?
CORBA, MessagePack-RPC, Thrift, or something else entirely?
(I've omitted ICE because of its licensing.)
Thrift or MessagePack is the best option going forward. Both are sleek and lightweight, and neither adds much latency to your process. They support most of the common languages and are in active development. At the current stage I would personally prefer Thrift, but MessagePack does seem to promise a lot of features.
Though Thrift might not be as Windows-friendly as we would like, people are using it on Windows.
Here is a starter guide for Thrift on Windows:
http://wiki.apache.org/thrift/ThriftInstallationWin32
Only installing and building the Thrift compiler can be troublesome on Windows. How you use the generated files depends on the language you choose, and a lot of the languages have good support for running them by importing the Thrift libraries. (In Java it is very easy: there is a Maven artifact.)
There is a discussion of the available RPC frameworks at RPC frameworks available?
CORBA, in my opinion, is old, cumbersome and very heavyweight.
If ancient and heavyweight don't put you off, obsolete definitely should. Regardless, I can tell you that we've been using Google Protocol Buffers at work recently, and they're pretty easy to use.
From the developer's perspective, all you need is a build of GPB (which really isn't that difficult), and then it will generate source files for you. The end result is a cross-platform binary message-passing interface (think XML plus limited RMI, not MPI-like functionality).
We use it on Windows to talk to an ARM-based Linux system (TS-7200s from Embedded ARM) running the same software. To my knowledge, it is compatible with many languages.
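To give a feel for the workflow, here is a hedged Java sketch. SensorReading stands in for a class protoc would generate from a .proto message definition; the class name and its fields are invented for illustration, but the builder/serialize/parse pattern is how generated protobuf code is used:

```java
// Hypothetical class generated by protoc from something like:
//   message SensorReading { string device_id = 1; double value = 2; }
import com.example.proto.SensorReading;

public class GpbDemo {
    public static void main(String[] args) throws Exception {
        SensorReading reading = SensorReading.newBuilder()
                .setDeviceId("ts-7200")
                .setValue(21.5)
                .build();
        byte[] wire = reading.toByteArray();        // compact, language-neutral bytes
        SensorReading parsed = SensorReading.parseFrom(wire);
        System.out.println(parsed.getDeviceId() + " = " + parsed.getValue());
    }
}
```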
CORBA is the only free "RPC" option that would work for my system right now, even though it scales very badly. Thrift isn't Windows-friendly yet. Neither is MessagePack-RPC available in all languages and OSs yet, even though it's still in development. If CORBA were elegantly scalable, it probably wouldn't have become obsolete at all.
Protocol Buffers and messaging would work, though I'd have to develop both a client and a service implementation for every platform/language. It would also be very scalable. I've decided on this.
I'm currently using Apache Thrift for a Hospital Manager project. It is better than CORBA in many areas, not to mention it is lightweight and much easier to implement and understand. The learning curve for Thrift is definitely gentler than CORBA's, but the documentation is Thrift's weakest point.
I'm using a Ruby Thrift server to which Obj-C and Java clients connect. The Thrift parser or "compiler" does a pretty good job of generating source files for the languages you want, although it is far too verbose. I would definitely look into implementing Thrift, or Google Protocol Buffers, if I were starting a new project, since CORBA is really outdated and might not adopt new technologies in the future. Not to mention that there are many vulnerabilities and exploits targeting CORBA that will not get patched since it is no longer in development, presenting some serious security holes in your new project.
Thrift supports many programming languages: C++, Java, Python, PHP, Ruby, Erlang, Perl, Haskell, C#, Objective-C, JavaScript, Node.js, Smalltalk, OCaml and Delphi as of this writing. Supporting multiple languages is key, I think, for the purpose of your project.
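For a sense of the client-side code, here is a hedged Java sketch. HospitalService stands in for a service the Thrift compiler would generate from a .thrift IDL file, and ping() for an RPC defined there; only the transport and protocol classes are real org.apache.thrift types:

```java
import org.apache.thrift.protocol.TBinaryProtocol;
import org.apache.thrift.transport.TSocket;
import org.apache.thrift.transport.TTransport;

public class ThriftClientDemo {
    public static void main(String[] args) throws Exception {
        // Connect to a running Thrift server (e.g. the Ruby one mentioned above).
        TTransport transport = new TSocket("localhost", 9090);
        transport.open();
        // HospitalService.Client is a hypothetical stub the Thrift compiler generates.
        HospitalService.Client client =
                new HospitalService.Client(new TBinaryProtocol(transport));
        System.out.println(client.ping());
        transport.close();
    }
}
```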

What is the difference between building a WSDL in Eclipse and using WCF?

I'm somewhat familiar with WCF in that I can build Web Services in VS.Net ... I understand some of the concepts...
But the other day I came across this option in Eclipse (I also code in Java) to create a WSDL. Playing around with it, it looks great, since it offers a GUI method of building the WSDL.
I guess I just wanna know what the difference is.
1) Are they different technologies, like WSDL vs. WCF? Or is it that WCF uses WSDLs?
2) I read that WSDLs are a top-down approach... so what about WCF: is that top-down or bottom-up?
3) Will this WSDL in Eclipse actually be able to generate C# code for my server and client efficiently, or will it require a lot of fixing?
Windows Communication Foundation (WCF) and other service frameworks use standards like the Web Services Description Language (WSDL) to communicate specifications.
WSDL is neither inherently top-down nor bottom-up. You can do it either way; that is, you can design your interface using WSDL and then code your service to the WSDL, or you can design your application and use a tool like those built into Visual Studio and Eclipse to automatically generate the WSDL. There are pros and cons (and proponents and opponents) to both approaches.
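To make the bottom-up flavor concrete on the Java side, here is a minimal JAX-WS sketch (the service name and operation are invented; JAX-WS ships with JDK 8 and earlier, and needs extra artifacts on newer JDKs). Publishing the endpoint makes the runtime-generated WSDL available at the address with ?wsdl appended:

```java
import javax.jws.WebMethod;
import javax.jws.WebService;
import javax.xml.ws.Endpoint;

// Bottom-up: write the class first, let the runtime derive the WSDL from it.
@WebService
public class HelloService {
    @WebMethod
    public String greet(String name) {
        return "Hello, " + name;
    }

    public static void main(String[] args) {
        // The WSDL is then served at http://localhost:8080/hello?wsdl
        Endpoint.publish("http://localhost:8080/hello", new HelloService());
    }
}
```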
IDEs like Visual Studio and Eclipse usually do a good job (probably better than humans) of generating WSDL. I haven't used the Eclipse plugin for C# (I'm assuming there is one and that's what you're using if you want to generate C# in Eclipse), so I can't speak for its functionality.
EDIT: I answered question 3 backwards, but the answer still applies. The WSDL-to-code generators also generally do a good job just like the code-to-WSDL generators.