I have designed a multi-version API architecture.
Please give me feedback and suggestions about what is good and bad, or whether there is a better way to achieve this.
A version server (MVC4) routes each request to the correct API interface (WebApi).
The version is sent to the server in a header and defaults to 1.0 if no header is found.
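Roughly, the routing logic looks like this (sketched here in TypeScript/Express purely for illustration; the real implementation is MVC4/WebApi, and the header name X-Api-Version and the routes below are placeholders, not the actual code):

```typescript
import express, { Request, Response, NextFunction, Router } from "express";

const app = express();

// Hypothetical per-version route sets; in the real setup these are the
// separate WebApi interfaces that sit behind the version server.
const v1 = Router();
v1.get("/users", (_req: Request, res: Response) => {
  res.json({ api: "1.0", users: [] });
});

const v2 = Router();
v2.get("/users", (_req: Request, res: Response) => {
  res.json({ api: "2.0", users: [], total: 0 });
});

const versions: Record<string, Router> = { "1.0": v1, "2.0": v2 };

// Version-routing middleware: read the version header, default to 1.0,
// then delegate to the matching version's routes.
app.use((req: Request, res: Response, next: NextFunction) => {
  const requested = (req.header("X-Api-Version") ?? "1.0").trim();
  const router = versions[requested];
  if (!router) {
    res.status(400).json({ error: `Unsupported API version '${requested}'` });
    return;
  }
  router(req, res, next);
});

app.listen(3000);
```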
This is the first time I have designed a multi-version API architecture, and Google doesn't have much information on this.
Any suggestions, criticism, or feedback are welcome.
Cheers,
It depends on how many versions you will maintain and what the NFRs (non-functional requirements) are for older versions.
Your current setup is fine for only a handful of versions. Pros and cons:
+ The same NFRs for all versions are achievable
+ Quick to achieve at first
- Changes to common resources (like the DB) have an impact on all supported versions, so they all have to be re-released, re-tested, and so on, which can become quite expensive
Another option is to build a chain of adapters, from version 2.2 to 1.0 and from 3.7 to 2.2:
+ Easier to maintain a large set of versions
+ Changes only require one new or updated adapter in the chain
- Harder to set up at first
- Performance drops for each adapter that is used
For the chain of adapters there are several possible scenarios: have them all in one process, or have a separate service for each. Again, both have pros and cons.
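As a rough sketch of the adapter idea (the request/response shapes and version numbers below are made up for illustration), each adapter only translates between two adjacent versions, and the newest implementation never has to know about old formats:

```typescript
// Hypothetical request/response shapes for two adjacent versions.
interface V1Request { name: string }
interface V1Response { id: number; name: string }
interface V2Request { firstName: string; lastName: string }
interface V2Response { id: number; firstName: string; lastName: string }

// Each adapter only knows about the two versions it sits between.
const v1ToV2 = {
  adaptRequest(req: V1Request): V2Request {
    const [firstName, ...rest] = req.name.split(" ");
    return { firstName, lastName: rest.join(" ") };
  },
  adaptResponse(res: V2Response): V1Response {
    return { id: res.id, name: `${res.firstName} ${res.lastName}` };
  },
};

// The current implementation only handles the latest version.
function handleV2(req: V2Request): V2Response {
  return { id: 42, firstName: req.firstName, lastName: req.lastName };
}

// A v1.0 call is translated up the chain, and the result translated back down.
function handleV1(req: V1Request): V1Response {
  return v1ToV2.adaptResponse(handleV2(v1ToV2.adaptRequest(req)));
}

console.log(handleV1({ name: "Ada Lovelace" })); // { id: 42, name: "Ada Lovelace" }
```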
As usual, it all depends on your situation.
We are 3 teams:
Website front-end (React)
Website back-end (Node.js)
Native app (React Native, Node.js)
We want to share logic (e.g. validations).
So far I have found articles describing 3 ways to do so:
An NPM package we create for our own needs
A microservice with endpoints that carry the relevant logic
Serverless functions that carry the relevant logic
Any other real-life, production suggestions?
Kind of - in no specific order:
You could specify the rules in a language/technology-agnostic way, and then have your app load them at runtime (or have them compiled in during the build). The rules could then live in a config file, or even be fetched from a remote location (a variation on your options 2 & 3).
Of course, designing a language-agnostic rules engine/approach is non-trivial and depends on what you need the rules to do (how complex they are, etc.). You might find a pre-built open source solution that does that.
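As a very small sketch of what "rules as data" could look like (the rule vocabulary, field names, and evaluator below are invented for illustration), the rules themselves are plain data that any of your three codebases could load and evaluate:

```typescript
// Rules expressed as data, so any codebase (or a service) can load the
// same definitions. The rule vocabulary here is made up.
type Rule =
  | { field: string; check: "required" }
  | { field: string; check: "maxLength"; value: number }
  | { field: string; check: "pattern"; value: string };

const userRules: Rule[] = [
  { field: "email", check: "required" },
  { field: "email", check: "pattern", value: "^[^@\\s]+@[^@\\s]+$" },
  { field: "name", check: "maxLength", value: 100 },
];

function validate(input: Record<string, string>, rules: Rule[]): string[] {
  const errors: string[] = [];
  for (const rule of rules) {
    const value = input[rule.field] ?? "";
    if (rule.check === "required" && value === "") {
      errors.push(`${rule.field} is required`);
    } else if (rule.check === "maxLength" && value.length > rule.value) {
      errors.push(`${rule.field} is longer than ${rule.value}`);
    } else if (rule.check === "pattern" && value !== "" && !new RegExp(rule.value).test(value)) {
      errors.push(`${rule.field} has an invalid format`);
    }
  }
  return errors;
}

console.log(validate({ email: "not-an-email", name: "Ada" }, userRules));
```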
I have seen people try this approach, but the projects never succeeded (for unrelated reasons). One team specified the rules in an Excel sheet.
But there are trade-offs:
Performance hit - how do you take language-agnostic rules and execute them? That will probably require some translation; native code is almost always going to be faster and more efficient.
Higher development effort.
Added complexity - harder to debug (even if you compensate by building extra mechanisms to help you do that, which is yet more development effort).
Regarding Your Options
For what it's worth, code / design-time sharing is an obvious approach, which I guess is sufficiently covered by NPM. I don't know enough about React and Node to know if they have any better ways of doing that. Normally, if I have logic I want to share, I'll use a purpose-built component (as lean as possible, minimal dependencies, intended to be re-used across multiple projects) and ingest it (in C# / .NET) at compile/design time.
As an alternative to NPM you could look at dependency injection. This would allow you to do things like update the logic even after the app has been deployed, as long as the app can access wherever the newer set of rules lives. So it's a bit like your option 1 (NPM, code-level loading) but done at runtime, and a bit like your options 2 & 3 (fetched remotely at runtime), the difference being that you're ingesting the logic rather than firing off questions and receiving answers (less chatty).
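A minimal sketch of that injection idea (the Validator interface and registry below are hypothetical, not an existing library): call sites depend only on an interface, and the concrete implementation is supplied, and can be swapped, at runtime:

```typescript
// The shared contract all apps depend on.
interface Validator {
  validate(input: Record<string, string>): string[];
}

// A trivial container: the concrete implementation is injected at startup
// (or replaced later), instead of being hard-wired at compile time.
class ValidatorRegistry {
  private current: Validator = { validate: () => [] }; // permissive default

  set(impl: Validator): void {
    this.current = impl;
  }

  get(): Validator {
    return this.current;
  }
}

const registry = new ValidatorRegistry();

// At startup (or after fetching newer rules from a remote location),
// inject the implementation the rest of the code should use.
registry.set({
  validate: (input) => (input.email ? [] : ["email is required"]),
});

// Call sites only ever see the interface.
console.log(registry.get().validate({ name: "Ada" })); // ["email is required"]
```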
Service-based rules are good in that they are totally separated, but the obvious trade-offs are availability and performance at runtime.
I don't see a difference between your options 2 & 3 from the standpoint of creating, managing and sharing logic. The only material impact is on whoever implements and supports that service system.
I am creating a new project that will run in an Azure Web App on the new ASP.NET 5. We are not planning to run it on Linux or anything like that, at least for now. So the question is: should I try to keep both frameworks if possible, just in case, or should I prefer one of them? For example, there are far fewer dependencies I can use with dnxcore50, which is not so nice. So the main question is: are there any benefits of using dnxcore50 over dnx451 when running in an Azure Web App, such as performance, stability, etc.?
I have to start by saying that I'm still a beginner in ASP.NET 5 (like most others), so I hadn't posted my answer before, and you should ignore my reputation, because it comes from other subjects that I know better.
I think that everybody who switches to ASP.NET 5 asks the same question: does it make sense to keep both frameworks in their projects? Below are my personal thoughts on the subject.
My personal choice, and my short recommendation to you: keep both frameworks until you find some really important reason to drop one of them.
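In project.json terms, that simply means leaving both targets in place; roughly like this (the exact layout depends on which beta/RC tooling you are on):

```json
{
  "frameworks": {
    "dnx451": { },
    "dnxcore50": { }
  }
}
```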
ASP.NET 5 is still not final. The strategy is not fully fixed and could change on short notice. Just some examples: previous beta versions supported "Helios" as an option for hosting ASP.NET 5 applications on IIS, and that option was dropped later (see the statement). Even the name dnxcore50 has now been renamed to dotnet5.4, at least in all internal Microsoft components (see the announcement). One can assume that other things could change in the future as well. Thus I think that putting all your eggs in one basket would be too dangerous right now: keeping both frameworks reduces the risk.
The next thing I found was the following: dnxcore50 (dotnet5.4, or CoreFX, the .NET Core foundational libraries) doesn't support many features of the full .NET Framework. One important example for me was the missing XSD schema validation (see here and here). I use XML only in combination with XSD schema validation; I prefer JSON in most other cases. Keeping both frameworks in your project helps you locate the parts of your code that are not yet implemented in CoreFX. That can help you move such code into a separate component or change the implementation.
About performance: one should distinguish the potential of both frameworks from the current implementations. In general, CoreFX was redesigned and decomposed. Many parts of the old mscorlib were separated out or removed (remoting, AppDomains and so on). This means the performance of CoreFX should be better. Theoretically, the factored API can provide better performance, and individual parts of CoreFX can be improved and published as new versions more easily. Having more modules instead of one monolith gives us a new way to improve performance and fix bugs. On the other hand, replacing dependencies with newer versions can be the origin of new compatibility problems, which increases risk and can decrease stability. By keeping both frameworks we can test whether a new problem also exists in the alternative framework. That lets us work out whether the latest dependency changes, rather than the latest changes to our own code, are the origin of the new problems.
I could continue with the pros and cons of each framework, but nobody likes reading long text, and all my arguments lead me to the same practical decision: keep both frameworks in my projects by default until I find a real requirement to drop one of them.
No major advantages really so far.
This might change in the future, which is why I'm planning to target both (CoreCLR and .NET 4.6). A lot of investment is going into CoreCLR, but also into Docker and Service Fabric.
Just my 2 cents.
I've evaluated a number of versioning schemes for REST APIs (header, URL, …). So far, the most reliable approach seems to be the URL option: it works with proxies, and does not rely on obscure schemes such as dates for versioning.
Now, when I look around, everybody who uses the URL-based approach seems to use versions such as v1, v2, and so on. Nobody uses minor versions, or even a scheme such as semantic versioning.
This raises some questions:
When do you increase the version number of a REST API (surely you update it more often than once every five years)?
If you just have a bug fix, you probably do not increase the version number, but then how do you distinguish the two versions?
If you use a very fine-grained approach, you end up with lots of versions you need to host in parallel. How do you handle that?
In other words: how does a company such as GitHub manage to be only at v3 today (2015), when they have been in business for 7 years now? Does that mean that they have actually only changed their API twice? I can hardly believe that.
Any hints?
The major version number is all you need for a web service, because your consumers are only concerned with backward-incompatible changes, and (if you're following semantic versioning correctly) those will only be introduced when a new major version is released.
All other changes (including new features, bugfixes, patches etc.) should be 'safe' for your consumers. Those new features don't have to be used by your consumers, and you probably don't want to continue to run that unpatched version that contains bug X or Y any longer than necessary.
Using the complete version number in your URLs (or whatever method you use for API versioning) would actually mean that your consumers have to update the URL of your API every time you make an update or bugfix to your API, and that they would keep using an unpatched version until they do so.
This doesn't mean that you can't use semantic versioning internally, of course! Just use the first part (major version) as the version number for your API. It just doesn't make sense to include the full version number in your URL / header / other versioning system.
So to answer your question: you update your API every time you make a new release, but you only release a new API version when you have a new major version. This way you only have to host a couple of different versions (and you can of course deprecate old versions over time).
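As a small illustration of that approach (sketched in TypeScript/Express with made-up routes), only the major version appears in the URL, while minor and patch releases just replace the implementation behind it:

```typescript
import express from "express";

const app = express();

// One router per *major* version; minor versions and patches replace the
// implementation behind these routes without changing the URL.
const v1 = express.Router();
v1.get("/users/:id", (req, res) => {
  res.json({ id: req.params.id, name: "Ada Lovelace" });
});

const v2 = express.Router();
v2.get("/users/:id", (req, res) => {
  // Backward-incompatible change: the name field was split in two.
  res.json({ id: req.params.id, firstName: "Ada", lastName: "Lovelace" });
});

app.use("/v1", v1);
app.use("/v2", v2);

app.listen(3000);
```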
I am going to implement Lucene search in my project and I want to make the best start.
So I am choosing between 3 versions of Lucene (Java / C#.NET / C++). Which is the best version based on these criteria:
1. Performance
2. Ease of implementation
3. Plenty of documentation?
Assume the system is Windows Server, and that this is for long-term use.
Thanks
I would say Java. Lucene was initially developed in Java, and I would think there is a bigger community, more documentation, and there are bigger deployments using Java.
Granted, Windows is not usually considered a primary platform for deploying Java services, but it would still work with flying colors. Many people use Windows for Java development and even deployment, so I don't expect any major issues.
Unless you've got a specific feature you need, I would define "best" as:
a) Whatever platform you are developing the program in -- there are lots of advantages to not having to switch tools/contexts/platforms to muck around with the search internals.
b) Whatever platform your ops people want to deal with -- I know lots of Windows ops people hate dealing with Java, for example, because it is a strange foreign language to them.
c) All of the above being equal, Java is the real flagship Lucene project that everyone else is keeping up with, and it has the most tools & resources. It is the way to go if you don't have any reason not to use Java. Solr is another advantage here -- you can pretty easily use a pre-wrapped, fully functional Lucene HTTP server.
In any case, keep in mind that, at least theoretically, any Lucene index written on one platform is readable by the others, so you don't necessarily have to fully commit to a single platform.
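To illustrate the Solr point from (c): because Solr exposes Lucene over HTTP, any stack can consume it without committing to a particular Lucene port. A minimal sketch in TypeScript (the core name "products" and the field "name" are made up; this assumes a default local Solr install on port 8983 and Node 18+ for the global fetch):

```typescript
// Query a (hypothetical) Solr core named "products" over plain HTTP.
async function search(term: string): Promise<void> {
  const url =
    "http://localhost:8983/solr/products/select?" +
    new URLSearchParams({ q: `name:${term}`, wt: "json", rows: "10" });

  const response = await fetch(url);
  const body: any = await response.json();

  // Solr's JSON response puts matching documents under response.docs.
  for (const doc of body.response.docs) {
    console.log(doc);
  }
}

search("laptop").catch(console.error);
```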
Is it true:
If the application does not require new features from newer APIs (i.e. higher API levels), it is better to target lower API levels.
The main reasoning is that lower API levels mean better compatibility, and thus a larger market.
Is there anything else I have to keep in mind when I make such decisions?
I came up with this question when coming across some questions about Android API levels, but I think it can be a general question, not only for Android.
In general, yes, but only if the older API is still supported by the newer implementations, of course. (For example, Lucene Java changes its API in incompatible ways on major updates, so you do not have this option there.)
There could also be cases where the host platform looks at what API version you require and then behaves differently, in a way that you may not want (I cannot think of a good real-world example right now).
For Android, at the moment, I'd say yes: declare the lowest API level you need.