Switchable enhancements one by one. Possible? - abap

In my current project I'm building a highly configurable app, with configurability along multiple dimensions: country, region, and business case.
While considering ways to implement this, I evaluated the Switch Framework, which is a foundational part of how SAP delivers its industry solutions and is also used in the Enhancement Framework.
Although it is heavily used in standard solutions, I have rarely seen real use cases of the Switch Framework with custom developments and custom enhancements. Most of the blog posts about it date from 2002-2009, and I found little information about using the Switch Framework now.
https://blogs.sap.com/2009/01/15/how-to-get-the-most-from-the-enhancement-and-switch-framework-as-a-customer-or-partner-tips-from-the-experts
https://blogs.sap.com/2008/01/07/the-three-use-cases-of-the-enhancement-and-switch-framework-part-2
https://blogs.sap.com/2008/01/14/the-three-use-cases-of-the-enhancement-and-switch-framework-part-3
Neither the official Switch Framework documentation nor the Enhancement Framework documentation helps.
In my hands-on tests I found that, with the Switch Framework, I can switch enhancements on and off only if they lie in different packages, by assigning the package to the switch and the switch to the corresponding business function.
This finding is confirmed by the diagram from the blog posts above.
I need more fine-grained switching for enhancements: the ability to enable or disable them independently, without putting each enhancement into a separate package.
The question: is it possible to apply a switch to a single enhancement rather than to a package?
If that is not possible via the Switch Framework, is there any other technique I can use?


Choosing between dnx451 and dnxcore50 for Azure Web App in terms of functionality, performance, etc.

I am creating a new project that will run in an Azure Web App on the new ASP.NET 5. We are not planning to run it on Linux or anything like that, at least for now. So the question is: should I try to keep both frameworks if possible, just in case, or should I prefer one of them? For example, there are far fewer dependencies that I can use with dnxcore50, which is not so nice. So the main question is: are there any benefits to using dnxcore50 over dnx451 when running in an Azure Web App, such as performance, stability, etc.?
I should start by saying that I'm still a beginner in ASP.NET 5 (like most others), which is why I didn't post my answer earlier; please ignore my reputation, because it comes from other subjects that I know better.
I think that everybody who switches to ASP.NET 5 asks the same question: does it make sense to keep both frameworks in their projects? Below are my personal thoughts on the subject.
My personal choice, and my short recommendation to you: keep both frameworks until you find some really important reason to drop one of them.
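For reference, keeping both targets is mostly just a matter of listing them in project.json. A rough minimal sketch (your real file will of course contain dependencies and other settings too):

```json
{
  "dependencies": { },
  "frameworks": {
    "dnx451": { },
    "dnxcore50": { }
  }
}
```

Framework-specific dependencies can be nested under each entry, which also makes it easy to see which parts of your project rely on full-framework-only packages.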
ASP.NET 5 is still not final. The strategy is not fully fixed and could change again on short notice. Just some examples: previous beta versions supported "Helios" as an option for hosting ASP.NET 5 applications on IIS, and that option was later dropped (see the statement). Even the name dnxcore50 has now been renamed to dotnet5.4, at least in all internal Microsoft components (see the announcement). One can expect that other things will change in the future as well. So I think that putting all your eggs in one basket would be too dangerous right now: keeping both frameworks reduces the risk.
The next thing I found was the following: dnxcore50 (dotnet5.4, or CoreFX, the .NET Core foundational libraries) doesn't support many features that the full .NET Framework does. One important example for me was the missing XSD schema validation (see here and here). I use XML only in combination with XSD schema validation; in most other cases I prefer JSON. Keeping both frameworks in your project can help you locate the parts of your code that are not yet implemented in CoreFX. It can also help you move that code into a separate component or change the implementation.
About the performance: one should distinguish the potential of both frameworks from their current implementations. In general, CoreFX was redesigned and decomposed. Many parts of the one monolithic mscorlib were separated out or removed (remoting, AppDomains, and so on). That means the performance of CoreFX should eventually be better: a factored API can, in theory, provide better performance, and individual parts of CoreFX can more easily be improved and published as new versions. Having many modules instead of one monolith gives us a new way to improve performance and fix bugs. On the other hand, replacing dependencies with new versions can be the origin of new compatibility problems, which increases risk and can decrease stability. By keeping both frameworks we can test whether a new problem also exists in the alternative framework. That lets us tell whether recent changes in our dependencies, rather than recent changes in our own code, are the origin of a new problem.
I could continue with the pros and cons of each framework, but nobody likes to read a long text, and all my arguments lead me to the same practical decision: keep both frameworks in my projects by default until I find a real requirement to drop one of them.
No major advantages really so far.
This might change in the future, which is why I'm planning to target both (CoreCLR and .NET 4.6). A lot of investment is going into CoreCLR, but also into Docker and Service Fabric.
Just my 2 cents.

Can you auto-update branches from the main trunk?

Here is the scenario.
We are developing a product where we have a base product and regional variations of the product. We have all the common code checked into the main trunk, and we have created two branches (branch_us, branch_uk) for the variations off of the main trunk. There is common code that is constantly being checked into the main trunk, and the code that is being checked into branch_uk/branch_us depends on the code in the main trunk. This is being done because we expect more regions to be added in future releases, and as a result we want maximum reuse as well as a thin regional-variations layer.
Based on the current strategy, the developer has to develop locally and then manually check in the common files into main_trunk and the regional variations into branch_uk & branch_us. Then, every time code is checked into main_trunk, we have to perform a merge from main_trunk->branch_uk and main_trunk->branch_us before we can build branch_uk & branch_us (two separate deployments), because the new code in branch_uk/branch_us depends on the new common code in main_trunk. This model seems extremely painful to think about and unproductive.
I'm by no means an expert on TFS. Here is what I am seeking opinions on:
Is there a way TFS can dynamically pull changes into branch_uk/branch_us from the main_trunk without doing a manual merge after every check-in (in the main_trunk)?
Do you guys have any other recommendations on the code management process that might be more effective/productive than the current one?
Any thoughts and feedback will be much appreciated!
This seems like a weird architecture to me, but of course I'm coming at it from a position of almost total ignorance, so there might be a compelling reason to approach it that way.
That being said: it sounds to me like you don't have a single application with two regional variations; you have two separate applications that share a common ancestor. The short answer to your question is "No". A slightly longer answer is "No, but you could write code to automate it."
A more thoughtful question-answer is "Are you sure centralized version control is the right tool for the job?" It might be more intuitive to use Git for this. What you have are, in effect, a base repository and two forks of that repository. Developers can work against whatever fork makes sense, and if something represents a change that should apply to all localizations, open a pull request to have the change merged into the base repository. This would require more discipline on the part of the developers, since they would have to ensure that their commits are isolated such that they can open a pull request that contains just commits that apply to the core platform. Git has powerful but difficult history-rewriting tools that can assist. Or, of course, they could just switch back and forth between working on the core platform, then pulling changes from the core platform back up to the separate repositories. This puts you back to where you started, but Git merges are very fast and shouldn't be a big issue.
Either way, thinking of the localizations as a single application is your mistake.
A non-source-control answer might involve changing the application's architecture so that all localizations run off of the same codebase, with locale-specific functionality expressed through a combination of configuration flags and runtime-discoverable MEF plugins; or making a "core" application platform that runs as an isolated service, with separately developed locale-specific services that express only the deviations from the core platform.
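To make that last idea concrete, here is a language-agnostic sketch (TypeScript with made-up names, not MEF): locale-specific behaviour sits behind a small interface, and the core application picks an implementation at runtime based on configuration instead of keeping it in a separate branch.

```typescript
// Hypothetical plugin contract, implemented once per locale.
interface LocalePlugin {
  locale: string;
  calculateTax(netAmount: number): number;
}

const ukPlugin: LocalePlugin = { locale: "uk", calculateTax: n => n * 1.20 };
const usPlugin: LocalePlugin = { locale: "us", calculateTax: n => n * 1.07 };

// In a real system these would be discovered at runtime (MEF catalogs,
// a plugin folder, a DI container); here they are simply registered.
const plugins = new Map<string, LocalePlugin>();
for (const p of [ukPlugin, usPlugin]) {
  plugins.set(p.locale, p);
}

// A configuration flag selects the active locale for this deployment.
const config = { locale: "uk" };
const active = plugins.get(config.locale) ?? usPlugin;
console.log(active.calculateTax(100)); // UK deployment uses the UK plugin
```

The point is that both deployments build from the same codebase; only the configuration and the set of discovered plugins differ.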

Etherpad style synchronisation in Meteor?

I'm looking into Meteor to create a collaborative document-editing app, because it's great that Meteor synchronizes data between multiple clients by default.
But when using a text-area, like in Sameer Kalburgi’s example
http://www.skalb.com/2012/04/16/creating-a-document-sharing-site-with-meteor-js/
http://docshare-tutorial.meteor.com/
the experience is sub-optimal.
I tried typing at the same time as a colleague, and my changes would be overwritten when she typed, and vice versa. So there is no merge algorithm in the conflict resolution yet, I think?
Is this planned for the future? Are there ways to implement this currently? Etherpad seems to handle this problem rather well. Having this in Meteor would make creating collaborative document-editing apps much more accessible.
So I looked into it some more; the algorithm used in Etherpad is known as Operational Transformation:
The solution is Operational Transformation (OT). If you haven’t heard of it, OT is a class of algorithms that do multi-site realtime concurrency. OT is like realtime git. It works with any amount of lag (from zero to an extended holiday). It lets users make live, concurrent edits with low bandwidth. OT gives you eventual consistency between multiple users without retries, without errors and without any data being overwritten.
Unfortunately, implementing OT sucks. There's a million algorithms with different tradeoffs, mostly trapped in academic papers. The algorithms are really hard and time consuming to implement correctly. We need some good libraries, so any project can just plug in OT if they need it.
That's from the site of ShareJS, a Node.js-based OT server and client that you can hook into your existing client.
OT is also implemented in the Racer model-synchronization engine for Node.js, which forms the underpinnings of Derby. At the moment, derby.js doesn't transparently provide it yet, but they plan to. From the Derby docs:
Currently, Racer defaults to applying all transactions in the order received, i.e. last-writer-wins. (…) Racer [also] supports conflict resolution via a combination of Software Transactional Memory (STM), Operational Transformation (OT), and Diff-match-patch techniques.
These features are not fully implemented yet, but the Racer demos show preliminary examples of STM and OT.
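To give a feel for what the transform step actually does, here is a toy sketch (TypeScript, concurrent inserts only; this is the general idea, not ShareJS's or Etherpad's real algorithm):

```typescript
// An operation that inserts `text` at index `pos`; `site` breaks ties
// when two users insert at exactly the same position.
interface InsertOp {
  pos: number;
  text: string;
  site: number;
}

function apply(doc: string, op: InsertOp): string {
  return doc.slice(0, op.pos) + op.text + doc.slice(op.pos);
}

// Rewrite `a` so it can be applied *after* the concurrent op `b`.
function transform(a: InsertOp, b: InsertOp): InsertOp {
  const bComesFirst = b.pos < a.pos || (b.pos === a.pos && b.site < a.site);
  return bComesFirst ? { ...a, pos: a.pos + b.text.length } : a;
}

// Two users edit "Hello" at the same time.
const base = "Hello";
const opA: InsertOp = { pos: 5, text: " world", site: 1 };
const opB: InsertOp = { pos: 0, text: "!", site: 2 };

// Each site applies its own op first, then the other op transformed against it.
const siteA = apply(apply(base, opA), transform(opB, opA));
const siteB = apply(apply(base, opB), transform(opA, opB));
console.log(siteA, siteB); // both "!Hello world" -> the sites converge
```

Both sites end up with the same text even though they applied the two edits in different orders; real OT implementations do the equivalent for deletes, cursor positions, rich-text attributes, and so on.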
Coincidentally, both the sharejs and derbyjs teams have an ex-Google Waver on board. Meteor has an ex-Etherpad/Google Wave person on their core team. Since Etherpad is one of the best-known implementations of OT, I was imagining Meteor would surely want to support it at some point as well…
I've created a Meteor smart package that integrates ShareJS:
https://github.com/mizzao/meteor-sharejs
It's quite preliminary right now, but you can import it into your app, drop in textareas, and it "just works". Please try it out and submit some new features via pull requests :)
Check out a demo here:
http://documents.meteor.com
What you describe seems outside of Meteor's scope to me. It's not a tool for setting up collaboration features!
What it provides is a way to work transparently against a subset of a server's database. But implementing use-case-specific merging functionality is the job of the application, not the framework.

OpenSwing Framework

Is OpenSwing a good framework for developing professional desktop applications?
I have recently been using the OpenSwing Framework. I can say only the best things about the functionality the framework provides. It is a multi-tier concept with excellent data-binding capabilities. My app uses a small Derby DB in the background, and I'm managing it with Hibernate.
I'm sure you will be able to advance very fast and produce a working prototype very quickly. I would advise you to read the available documentation first and to run the provided examples (http://oswing.sourceforge.net/).
However, it has another side which you should be aware of, and which you will probably notice yourself if you run the examples. The GridFrame, GridFrameControler, DetailFrame, DetailFrameControler, etc. classes are not really generic. There are a lot of dependencies built in, and you will have to customize them again and again for every single implementation (as can be seen in the demos).
I took another approach: I first invested some time in building my own generic classes that use the unchanged OpenSwing classes in the background. Now I only set up a properties file where all the details are pre-defined. The rest is generic, and I don't have to re-code the same things for every single frame.
I hope this will help.
Regards
I used OpenSwing in a team for more than two years.
It's a pretty nice Swing framework for enterprise development, used for internal applications.
It provides great components based on the MVP pattern, such as grid, document, and so on.
If you try it, there is a good article about Model-View-Presenter worth reading.
Also try the demo in the source; it's quite good.
JAllInOne is also a good demo of the framework, also made by mcarniel; it's a personal project developed solely by mcarniel. Thanks to mcarniel for the great work.

Staying open with DI/IoC containers

I am involved with several open-source projects which, taken together, provide an application development framework. The question I have is: what mechanism(s) should I provide for integrating them with each other?
On the conceptual level the answer is clear: DI/IoC. The "only" problem is deciding which one. In several installations we used StructureMap, but then a user came along who wanted only one of the components and wanted NInject.
So, to qualify the question: how should I go about building my components so that they can be integrated with each other (and with third-party components) using a variety of DI/IoC containers?
The best I could come up with was to separate out all integration code into separate projects and then have a project per supported IoC container, but this sounds suspiciously like IoC squared.
Any bright ideas? Or am I just thinking too hard?
P.S. for the curious: NDjango; Bistro; Workflow Server
As long as you develop reusable components, you can implement them in a DI-friendly way without ever referencing any particular DI Container.
It's only when you need to compose an actual, running application that you need the DI Container. But as I understand it, you are developing a framework, and it's best to keep it DI-neutral.
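For example, a component can ask for its dependencies through its constructor and leave the wiring to whoever composes the application. A minimal sketch (TypeScript, made-up names, not taken from NDjango or Bistro):

```typescript
// The reusable component depends only on an abstraction it defines...
export interface TemplateLoader {
  load(name: string): string;
}

// ...and receives it via constructor injection. No container types anywhere.
export class TemplateEngine {
  constructor(private readonly loader: TemplateLoader) {}

  render(name: string): string {
    return this.loader.load(name); // real rendering logic elided
  }
}

// Composition root of one particular application. Wired by hand here,
// but a StructureMap or NInject registration would be just as short.
class InMemoryLoader implements TemplateLoader {
  load(name: string): string {
    return `<h1>${name}</h1>`;
  }
}

const engine = new TemplateEngine(new InMemoryLoader());
console.log(engine.render("index"));
```

Each container-specific integration project then shrinks to a handful of registrations, which users who prefer a different container can easily replicate themselves.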
See this very related question (almost a duplicate).
For inspiration about integrating several projects while keeping them independent, see the Castle Project.