I'm currently working as a web developer in a small company, and I'm in charge of creating a new web application to manage our business.
We cannot hire new developers yet, and we must deliver a first version as soon as possible.
In this context I'm considering a microservices architecture, but I don't know whether we should spend the time and resources to start our project that way.
Does anybody have experience with this?
Thanks,
We're a small team (<10 people) using a microservices architecture, and we're getting a lot of benefit from it. But to be successful with a microservices approach you need to meet a number of prerequisites (see http://martinfowler.com/bliki/MicroservicePrerequisites.html). So if you need to deliver fast and you're not yet doing continuous delivery and DevOps, I would stay away from it.
My 2c
I think the way you are framing the microservices decision is misleading, though I also understand your apprehension about microservices.
The decision to choose a microservices strategy should not depend directly on the size of your developer base. It depends far more on the current and future business needs of your organization. If you do not anticipate major growth in your IT services or in the complexity of the systems around them, you can stick with a monolithic pattern.
Regardless of whether you are a small or a large enterprise, one key driver for a microservices strategy is a growing number of services.
Related
Why doesn't CDCT work in most real-life cases? The concept and tools have been sold by architects for quite a few years, especially for microservice architectures and other multi-module complex systems, where integration testing is a well-known pain. So why isn't CDCT implemented everywhere?
I first heard about CDCT (consumer-driven contract testing) and its tools about three years ago. I did some research into applying it in our enterprise software (one of the most complex SaaS products in the world, 15 years old, developed by more than a thousand engineers) and discussed it with our chief architect about two years ago. It looked promising: surely we could find a real case to implement it, via a proper tool like Pact, between two teams who shared a pain point and therefore the motivation. Why not? The concept absolutely makes sense, and the problem it aims to solve is very common (who hasn't had an integration broken by another team?). Everything looked perfect, and I even added it to my yearly goals.
I failed. I was young and naive; it didn't work out.
Today I heard about the same failure from another team, and unsurprisingly for the same reason, which is why I think it's worth writing down as a reminder and as (probably) useful knowledge to share.
The reason is the high adoption cost, including a change of mindset. CDCT is not a tool (you can use a tool like Pact to implement it better), and it's not just a methodology either; it's a new mindset that tells people how to work together.
Yes, it aims to solve the integration problem between multiple systems/modules, but more than that it establishes a new mindset that both groups of people need to accept: first, that a contract is needed at all (versus no contract), and second, that the consumer drives the contract (versus the provider driving the integration).
Here is the tricky part. From the consumer's perspective, here is what needs to be done for each integration point:
Before CDCT: 1. Find an API and use it. 2. When it breaks, blame the provider.
After CDCT: 1. Find an API. 2. Drive: find the provider, meet with them, negotiate, come up with a contract, iterate while there are gaps, then sign off on the contract and save it. 3. Write tests, ask the provider to review them and put them into their pipeline, and figure out how to make sure the provider always keeps your tests passing (rather than commenting them out) before they release a new version of the service. A minimal sketch of such a consumer test is shown below.
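For the curious, here is roughly what the consumer side of step 3 can look like. This is a minimal sketch using the pact-python library; the service names, endpoint, and payload are invented placeholders, not taken from any real system.

```python
# Minimal consumer-driven contract test sketch (pact-python).
# Requires: pip install pact-python requests
import atexit
import unittest

import requests
from pact import Consumer, Provider

# The consumer declares the contract it expects the provider to honor.
pact = Consumer("BillingUI").has_pact_with(Provider("AccountService"))
pact.start_service()               # spins up a local mock of the provider
atexit.register(pact.stop_service)

MOCK_URL = "http://localhost:1234"  # pact's default mock service address


class AccountContract(unittest.TestCase):
    def test_get_account(self):
        expected = {"id": 42, "status": "active"}

        (pact
         .given("account 42 exists")                   # provider state
         .upon_receiving("a request for account 42")
         .with_request("GET", "/accounts/42")
         .will_respond_with(200, body=expected))

        with pact:
            # Consumer code runs against the mock; passing this test
            # writes a pact file for the provider to verify later.
            result = requests.get(MOCK_URL + "/accounts/42").json()

        self.assertEqual(result, expected)


if __name__ == "__main__":
    unittest.main()
```

The pact file this produces is the signed-off contract; the provider replays it against their real service in their own pipeline, which is exactly the "put your testing into their pipeline" step that makes adoption expensive.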
I can understand why a consumer may not really want all of this, or why they want the result but hesitate to pay the cost up front.
So when will a CDCT implementation be successful? I think there are two possible conditions:
The consumer's business is too important to be broken (say, accounting), so they have no choice but to do everything they can to safeguard the dependency. However, in such a case the better idea is to remove the dependency, or to add duplication and a fail-over mechanism; testing is still the last resort.
The provider and consumer already work very closely together, so the mindset and setup costs are minimal. Unfortunately, contract testing might not be needed in this case, precisely because the teams work so closely.
Regards,
Emil
While not a code-based question, I feel this is relevant to the developer community in pursuit of a deeper understanding of APIs and their role in business and the IoT at large.
Can someone please expand on the statement below? Other than in-house dev time, how exactly do APIs save businesses money and foster agility?
"...APIs save businesses money and provide new levels of business agility through reusability and consistency."
Additionally, while we all know that APIs are cool and can be used to build amazing things, I'm seeking to understand this from the perspective of risk vs. reward for a business.
APIs benefit larger or distributed organizations with separate business or functional units. In that scenario they allow the different functional units to deploy independently, assuming you do API versioning. This has a very substantial work-queuing benefit in a larger organization.
In a small organization their benefits are questionable; APIs should probably be extracted from existing systems as duplication arises or as new problems turn out to benefit from old solutions. Having gone through this transition, I can say it's unwise to build APIs without existing applications.
In the context of IoT, APIs make a lot of sense because you have largely dumb devices (supercomputers by 1980s standards) that connect back to smart infrastructure. If that is done in a bespoke or ad hoc way, it's going to be an enormous headache to change things as you release new devices. With versioned APIs separating the devices from the smart infrastructure, you have a much better chance of introducing change without disabling your customers' legacy devices. A small sketch of that versioning follows.
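To make the versioning point concrete, here is a minimal sketch of two API versions coexisting so that legacy devices keep working. It uses Flask purely for illustration; the routes and payload shapes are invented for the example, not taken from any product mentioned here.

```python
# Two API versions served side by side (Flask, illustration only).
# Legacy devices keep calling /v1 while new firmware moves to /v2.
from flask import Flask, jsonify, request

app = Flask(__name__)


@app.route("/v1/readings", methods=["POST"])
def post_reading_v1():
    # Original contract: a single flat value per request.
    data = request.get_json()
    store(device=data["device_id"], values=[data["value"]])
    return jsonify(status="ok")


@app.route("/v2/readings", methods=["POST"])
def post_reading_v2():
    # New contract: batched readings with units; /v1 stays untouched.
    data = request.get_json()
    store(device=data["device_id"],
          values=[r["value"] for r in data["readings"]])
    return jsonify(status="ok")


def store(device, values):
    """Stand-in for the real persistence layer."""
    print(f"{device}: {values}")


if __name__ == "__main__":
    app.run()
```

The design choice worth noting is that v2 is purely additive: nothing under /v1 changes, so devices already in the field never need to be touched.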
In the IoT space, APIs offer the following benefits:
New device types (e.g. from different vendors) can easily be added to the IoT platform. This saves money because you, as the business owner, can select from multiple devices and choose the best one for your purpose, i.e. the most cost-efficient one. (This relates to the API between the device and the platform.)
New applications or features can be added easily: if you need an additional feature, it can be added on top. Even better, you can ask your internal IT or an external system integrator to do the work, again giving you the choice to select the best offer. (This relates to the API between the platform and other applications.)
From a risk viewpoint, APIs need to be protected like any other endpoint you expose to the Internet (or intranet). As a minimum, you need authentication (username/password or other means), authorization (access to a subset of the data), and encryption (i.e. TLS). Depending on your scope, you might need additional governance and API protection (e.g. throttling). A sketch of the first two follows.
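As an illustration of the authentication and authorization points (a sketch only, with an invented token table and scope names; a real system would sit behind TLS and use a proper identity provider):

```python
# Minimal authentication + authorization sketch (Flask, illustration only).
from functools import wraps

from flask import Flask, abort, jsonify, request

app = Flask(__name__)

# Invented token table purely for the example.
TOKENS = {
    "device-token-1": {"scopes": {"readings:write"}},
    "dashboard-token": {"scopes": {"readings:read"}},
}


def require_scope(scope):
    def decorator(view):
        @wraps(view)
        def wrapper(*args, **kwargs):
            header = request.headers.get("Authorization", "")
            client = TOKENS.get(header.removeprefix("Bearer "))
            if client is None:
                abort(401)                 # authentication failed
            if scope not in client["scopes"]:
                abort(403)                 # authorization failed
            return view(*args, **kwargs)
        return wrapper
    return decorator


@app.route("/readings")
@require_scope("readings:read")
def list_readings():
    # Only clients holding the readings:read scope get this far;
    # TLS termination (the encryption part) happens in front of the app.
    return jsonify(readings=[])
```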
I wonder if anyone could share their thoughts on my question regarding web-based APIs (we use Microsoft stacks).
We are currently in the process of building an infrastructure to host web APIs across our business.
As an organisation we have separate business areas that provide services to our customers. These individual areas of the business generally have their own best-of-breed IT system. Offering APIs is something we've long thought about, and we have now started the design process.
The APIs we aim to offer will be web-based (.NET/Web API/WCF etc.) and will largely (99%) be consumed within our organisation, but some may be exposed externally in the future should the requirement arise (e.g. a new mobile app may need to use the services).
I'd love to hear your thoughts and experiences around how you architected your farms. I understand it's quite an open question without knowing the crux of our requirements, but it's general advice and experiences I'd like to hear.
In particular, we are trying to decide whether we should design the infrastructure by:
1) Providing each area of the business with its own API server, where we deploy each web API as a new application inside IIS.
or
2) Setting up a load-balanced web API farm, with say two or three IIS web servers, all built the same and hosting the same web APIs, so the business areas effectively share the same servers. Each area would have a segregated site within IIS, and new APIs would be set up as new applications inside their respective web sites.
I don't foresee us having thousands of APIs, but some will be business-critical, so I'm certainly bearing resilience in mind. That's why, as much as I like the idea of each business area having its own API server, I'm being swayed towards a load-balanced farm that the whole business shares.
Anyone have any thoughts, experiences etc.?
Thanks!
That's a very interesting question, and I'd love to hear what others think. I'm no big expert, but here are my two cents.
It seems to me that the answer lies somewhere between the two options you specified. Specifically, each critical business area should get its own resilient, load-balanced farm, while less critical services can use single-machine deployments. A critical business area may not mean just one API; it can be a group of APIs with high cohesion among themselves.
Taking option 1 to its full extent can be hard to maintain, while taking option 2 to its full extent can be inefficient in terms of redeployment if (or rather, when) business logic changes. Furthermore, I think greedy APIs could hog resources at peak traffic, making other services temporarily less performant (unless you have some sort of dynamic scaling mechanism).
Background:
I'm thinking about how to organise a web application. I will separate the front end (the web site for the browser) from the back end (the API): two apps, two repositories, two hosts. The front end will call the API for almost everything.
So if my API covers two separate domain services (for example, a learning context and a booking context) with no direct link between them, should I build two APIs (with two repositories, two build processes, etc.)? Is it good practice to build n APIs for n needs, or one "big" API? I'm talking about a substantial web app with real traffic.
(I hope this question won't be closed as not constructive... I think it's a real question about a concrete case; sorry if not. This question and some others about architecture were not closed, so there is hope for mine.)
It all depends on the application you are working on, its business needs, the priorities you have, and so on. Generally you have several options:
Stay with one monolithic application
Stay with one monolithic application but decouple the domain model into separate modules/bundles/libraries
Create a distributed architecture (such as Service-Oriented Architecture (SOA) or Event-Driven Architecture (EDA))
One monolithic application
This is the easiest and cheapest way to develop an application in its early stages. You don't have to worry about a complex architecture or complex deployment and development processes. It also works better when there aren't many developers around.
As the application grows, this model becomes problematic. You can't deploy modules separately, and the app is more exposed to anti-patterns and spaghetti code/design (especially with a lot of people working on it). The QA process takes more and more time, which may make it unworkable on a CI basis. Introducing approaches like Continuous Integration/Delivery/Deployment is also much, much harder.
Within this approach you have one repository and one build process for all your APIs.
One monolithic application with a decoupled domain model
Within this approach you still have one big platform, but you connect logically separate modules as if they were third-party dependencies. For example, you may extract one module and create a library from it.
Thanks to that, you are able to introduce separate processes (QA, dev) for different libraries, but you still have to deploy the whole application at once. It also helps you avoid anti-patterns, though it may be hard to keep backward compatibility across libraries over the application's lifespan.
Regarding your question: in this way you have a separate API, dev process, and repository for each "type of action", as long as you move its domain logic into a separate library. A minimal sketch of such a boundary is shown below.
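To illustrate what such a boundary can look like (the "learning" name is borrowed from the question above; the class and methods are invented for illustration), each extracted library exposes a small public surface and the monolith depends only on that:

```python
# learning/service.py -- the extracted "learning" library's public surface.
# The rest of the monolith imports only this module, never the internals,
# so the library can be versioned and QA'd on its own.
from dataclasses import dataclass


@dataclass
class Enrollment:
    student_id: int
    course_id: int


class LearningService:
    """Public API of the learning domain."""

    def __init__(self):
        self._enrollments: list[Enrollment] = []

    def enroll(self, student_id: int, course_id: int) -> Enrollment:
        enrollment = Enrollment(student_id, course_id)
        self._enrollments.append(enrollment)
        return enrollment


# app.py -- the monolith wires the libraries together at deploy time.
# A hypothetical booking library would be imported the same way.
learning = LearningService()
learning.enroll(student_id=1, course_id=101)
```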
Distributed architecture (SOA / EDA)
SOA has a lot of benefits. You can introduce completely different processes for each service: dev, QA, deployment. You can deploy just one service at a time. You can also use different technologies for different purposes. The QA process becomes more reliable because it involves smaller projects. You can version the communication (the API) between services, which makes them even more independent. Moreover, you gain a better ability to scale horizontally.
On the other hand, the complexity of the high-level architecture grows. You have many more components to take care of: authentication/authorisation between services, security, service discovery, distributed transactions, etc. If your application is data-driven (a separate frontend that uses APIs to consume data) and particular services don't need to communicate with each other, it may not be as complicated (but that assumption is IMO quite risky; sooner or later you will need them to communicate).
In this approach you have a separate API, with a separate repository and separate processes, for each "type of action" (which I understand as a separate domain model/service).
As I wrote at the beginning, the way you choose depends on the application and its needs. Anyway, back to your original question: my suggestion is to keep APIs as separate as you can. Even if you have one monolithic application, you should be able to version the APIs separately and keep their domain logic separate. Whether to separate repositories and/or processes depends on the approach you choose (e.g. among those I mentioned above).
If I missed your point, please describe in more detail what answer you expect.
Best!
My company runs a couple of B2B apps (written in Rails) dealing with parts and inventory, and we've been trying to figure out the best way to integrate with some of our bigger users. We already offer the REST-style API that comes with Rails, but that, of course, requires an IT department on their end deciding to integrate with it, so we'd like to lower that barrier if possible.
From what we've found, most of them are on SAP systems. Now, pretty much all I know about SAP is that it's 1) expensive, 2) huge, and 3) does anything and everything you could ever need to run your gigantic business. Naturally, this is all a bit imposing, and the resources on the site are a cross between impenetrable buzzword-laden sales material and impenetrable jargon-laden advanced technical material, with little for the new but technically competent user to sink their teeth into.
So what I'm wondering is: as a third party that's not running an SAP installation, is there a way for us to offer access to our site's data through a web service or other API? Is it just a matter of providing or implementing a certain WSDL (and what would that be)? Is this feasible for someone without in-depth SAP experience, or is it a complete non-starter?
I'd say it's not possible without someone who knows the SAP system. You probably won't need to hire someone with in-depth SAP knowledge, but at least for the initial implementation you'll need both the knowledge and a working system you can develop against. Technically speaking, it's not really that hard, but considering that SAP systems are designed to handle multiple organizations, countries, legal systems, and localizations, with several thousand users simultaneously, things are bound to be a bit more complex than almost any other software around. Most of the time it's not even bloated; it's just easy to get lost in that kind of flexibility.
My recommendation would be to find a customer (or prospective customer) who has someone in their IT department with the necessary technical and process knowledge and who is interested in conducting a development project. This way, you'd get access to a real system (a test system, of course) and someone who can explain the basics of the system to you. But, as I said, be prepared for complexity.
vwegert makes some excellent points.
As to this part of your question:
"So what I'm wondering is: as a third party that's not running an SAP installation, is there a way for us to offer access to our site's data through a web service or other API? Is it just a matter of providing or implementing a certain WSDL (and what would that be)?"
Technically it is possible to expose any of your system's services as web services to a client's SAP system. To do this you do not need any prior knowledge of SAP. (SAP should be able to import a WSDL, although there may be some limitations in earlier, pre-ECC5 systems.)
For example, a service that provides meter reads, airport departure schedules, industry trends, etc. does not depend on what is in the user's system or how they have set it up. However, as soon as you need to initiate updates to the client system's data, you need access to more specialised SAP knowledge. A sketch of exposing such a read-only service follows.
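As a rough sketch of the "just provide a WSDL" route, here is what a read-only meter-reads service could look like using the spyne library on the Python side (the service name, method, and data are invented for the example; spyne generates the WSDL that the client's SAP system would import):

```python
# Minimal SOAP service sketch (spyne); the WSDL is served automatically
# at http://localhost:8000/?wsdl for the client's SAP system to import.
# Requires: pip install spyne lxml
from wsgiref.simple_server import make_server

from spyne import Application, Iterable, ServiceBase, Unicode, rpc
from spyne.protocol.soap import Soap11
from spyne.server.wsgi import WsgiApplication


class MeterReadService(ServiceBase):
    @rpc(Unicode, _returns=Iterable(Unicode))
    def get_reads(ctx, meter_id):
        # Stand-in for a real lookup; read-only, so nothing here
        # depends on how the client's SAP system is configured.
        yield f"2024-01-01: 1042 kWh for meter {meter_id}"


application = Application(
    [MeterReadService],
    tns="example.meterreads",
    in_protocol=Soap11(validator="lxml"),
    out_protocol=Soap11(),
)

if __name__ == "__main__":
    server = make_server("0.0.0.0", 8000, WsgiApplication(application))
    server.serve_forever()
```

Pointing the SAP system at the ?wsdl URL is then, in principle, all the client needs to consume the service.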
Note that many SAP functions can also be exposed as web services, but generally you do need someone with SAP (ABAP) knowledge to do this.
The ABAP language itself is actually fairly simple, but there is a huge learning curve to understanding the data model and the myriad of configurable options in SAP.