I'm currently starting a new project where we are hoping to develop a new system using reusable components and services.
We currently have 30+ systems that all share common elements, but at the moment we develop each system in isolation, so we often end up duplicating code, and of course we then have 30+ separate code bases to maintain and support.
What we would like to do is create a generic platform of shared components to enable quick development of new systems, reusing code and automated tests and reducing the code base that needs to be maintained.
Our thoughts so far are that we would have a common code base for specific modules, for example User Management and Secure System Access. Each of these modules could consist of its own generic web module, API and context, creating a generic package of code.
We could then deploy these components/packages to build up a new system rather than coding the same modules over and over again: if the new system needed to manage users, you could pull in the User Management package and it would do what you need. However, because we have 30+ systems, we will deploy the components multiple times, once per system. We also appreciate that some systems will need unique functionality, so there would be the potential either to add extensions to the generic modules for system-specific needs, or to skip one of the generic modules and create a new one while still using the rest of the generic components.
For example, if we have four generic components, A, B, C and D, these could be deployed to create the following system set-ups:
System 1 - A, B, C and D (Happy with all generic components)
System 2 - Aa, B, C and D (extended component A to include specific functionality)
System 3 - A, E, C and F (Can't reuse components B and D so create specific ones, but still reuse components A and C)
This raises a few issues for me, as I need to be able to test this platform and each system to ensure they work, and this is the first time I've come across having to test a set-up like this.
I've done some reading around microservices and how to test them, but these articles usually approach the problem for just one system built from microservices, whereas we are looking at multiple systems with different configurations.
My thoughts so far are that, for the generic components utilised by the different systems, I can create automated tests at the base-code level. Those tests will confirm the generic functionality, so it should not be necessary to retest those functions for each deployment, other than perhaps a manual sense check after deployment. Then, at each system level, additional automated tests can be added to check any system-specific functionality.
Ideally I'd like some sort of testing platform set up so that if a change is made to a core component such as User Management, it would be possible to trigger all the automated tests at the core level, and then all of the system-specific tests for every system that shares the component, to ensure that changes don't break core functionality or have a knock-on effect on individual systems. Then only a quick manual check would be required. I'm keen to avoid a massive manual overhead of testing 30+ systems each time a shared component is changed.
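One common way to get the two-level split described above is for each generic component to ship its own "contract" test suite, which every system then re-runs against its own (possibly extended) implementation. Here is a minimal sketch in TypeScript; all the names and interfaces are invented for illustration, not taken from any real package:

```typescript
// Hypothetical sketch: a reusable contract-test suite shipped with the
// generic User Management component. Each system runs the same checks
// against its own implementation, including extended ones (the "Aa" case).

interface UserManagement {
  createUser(name: string): { id: number; name: string };
  findUser(id: number): { id: number; name: string } | undefined;
}

// Generic implementation shared by all systems.
class GenericUserManagement implements UserManagement {
  private users = new Map<number, { id: number; name: string }>();
  private nextId = 1;
  createUser(name: string) {
    const user = { id: this.nextId++, name };
    this.users.set(user.id, user);
    return user;
  }
  findUser(id: number) {
    return this.users.get(id);
  }
}

// Contract tests live alongside the component and accept ANY implementation,
// so a system that extends the component still proves the core behaviour.
function runUserManagementContract(impl: UserManagement): string[] {
  const failures: string[] = [];
  const created = impl.createUser("alice");
  if (created.name !== "alice") {
    failures.push("createUser should keep the supplied name");
  }
  if (impl.findUser(created.id)?.id !== created.id) {
    failures.push("findUser should return the created user");
  }
  return failures;
}

const failures = runUserManagementContract(new GenericUserManagement());
console.log(failures.length === 0 ? "contract passed" : failures.join("; "));
```

In a CI set-up like the one described, each system's integration build would import and invoke the contract function, so a change to the core component automatically re-exercises it everywhere it is deployed.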
We work in an agile way, and for our current projects we have a strong continuous-integration process set up: when a developer checks in some code (Visual Studio), this triggers a CI build (TeamCity / Octopus) that runs all of the unit tests. Provided these pass, an integration build is triggered that runs my QA automated tests, which are a mixture of API-level tests and web tests using SpecFlow with PhantomJS or Selenium WebDriver. We would like to keep this sort of framework in place to preserve the quick feedback loops.
It all sounds great in theory, but where I'm struggling is putting it into practice and creating a sound testing strategy to cover this kind of system set-up.
So really, I'm hoping that someone out there has encountered something similar in the past and has ideas on the best ways to tackle it that have been proven to work.
I'm keen to get a better understanding of how I could set up a testing platform / rig to aid the continuous integration for all systems considering that each system could potentially look different, yet have shared code.
Any thoughts or links to blogs/whitepapers etc. that you think might help would be much appreciated!
Your approach is quite good, and since I'll soon have to face the same issues as you, I can share my ideas so far. I'm pretty sure that how to
create a sound testing strategy to cover this kind of system set up
can't be squeezed into one post. The big picture looks like this (to me): you're in the middle of an enterprise application integration process, and the fundamental area to cover with tests will be the data migration. You may also want to consider the concept of service-oriented architecture for your
generic platform using shared components
since it will enable you to provide application functionality as services to other applications. An indirect benefit here is that SOA involves dramatically simplified testing: services are autonomous, stateless, with fully documented interfaces, and separate from the cross-cutting concerns of the implementation. There are a lot of resources on this, such as articles on E2E testing or on efficiently testing SOA.
I'm trying to develop a microservice system in .NET Core.
I'm planning to implement a project structure like this:
Frontend
Services
  - Product
    - Product.Api
    - Product.Application
    - Product.Domain
    - Product.Infrastructure
  - Basket
    - Basket.Api
    - Basket.Application
    - Basket.Domain
    - Basket.Infrastructure
  - Order
    - Order.Api
    - Order.Application
    - Order.Domain
    - Order.Infrastructure
In the above structure there are currently three modules under the Services folder (Product, Basket and Order), and many more will be added later.
Each module has four projects: Api, Application, Domain and Infrastructure. Every added module therefore increases the number of class libraries and web projects, which slows down Visual Studio's loading, compile and run times because my hardware is not powerful enough.
Can you recommend another pattern that reduces the number of projects in the microservice solution?
If the number of class libraries is the determining factor in your architecture's performance, maybe it is time to merge some of the modules into a single module.
If it is absolutely necessary to continue using the microservice architecture and the high number of modules, you should consider investing in more powerful hardware.
Developing software often requires a lot of RAM to house all the processes running the stack locally.
Another approach would be to try to develop on a cloud platform such as Azure, and use the corresponding tools to debug against a cloud instance or even in a GitHub Codespace.
If Product, Basket and Order are different microservices, then they should be in different Visual Studio solutions. Each solution will be small and independent and they'll all load and work fast regardless of how many microservices you have.
If Product, Basket and Order are part of the same microservice and you are planning to add many more modules, your microservice design is probably wrong, as a single microservice appears to have far too many responsibilities. In this case, the solution is to limit the responsibilities of each microservice so that they don't grow to enormous sizes.
If what you are building is a modular monolith (a single deployable unit, but with the code organised in modules), then the solutions are a bit different. If it's a single developer application, you probably don't need to split the modules in separate projects. For example, the whole API can be a single project and each module be in a different folder. If there'll be many developers and teams working on the source code, then you might want to create a separate solution for each module, so each team can work on their own code.
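To make the modular-monolith option above concrete, here is a minimal sketch (in TypeScript rather than C#, purely for illustration; all names are invented) of the idea of one deployable unit with one folder per module, where each module exposes a single registration entry point and only the composition root knows about all of them:

```typescript
// Hypothetical sketch of a modular monolith: a single API project, one
// module per folder, each exposing a register() entry point. The route
// table stands in for whatever web framework is actually used.

type Handler = () => string;

// Stand-in for the framework's route table.
const routes = new Map<string, Handler>();

function register(path: string, handler: Handler) {
  routes.set(path, handler);
}

// Each "module" would live in its own folder and export one entry point.
const productModule = {
  register() {
    register("/products", () => "product list");
  },
};

const basketModule = {
  register() {
    register("/basket", () => "basket contents");
  },
};

// Single composition root: the only place that references every module.
[productModule, basketModule].forEach((m) => m.register());

console.log(routes.get("/products")!()); // "product list"
```

The point of the shape is that adding a module adds a folder and one line in the composition root, not four new projects in the solution.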
Like #abo said: if the number of assemblies is impacting the performance of your application, consider one assembly per module.
If your driver for having multiple assemblies per module was governance of dependencies, then consider using an additional tool like https://github.com/realvizu/NsDepCop which allows you to enforce architectural/dependency rules without the help of the compiler.
Well, I have some notes on the architecture you are making. If you do it right, one of the payoffs will be less impact on the hardware.
Note #1: make a kernel module which holds all of the abstractions of the common functionality (that the other microservices can take a reference from), like the base repository, the message-queue base handler, and the command/command-handler types. If you want to disconnect a module from the kernel, you can do that and add your own abstractions in that module alone (simplicity is the ultimate sophistication).
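A minimal sketch of that kernel idea (in TypeScript as an illustrative analogue of the .NET layout; every name here is invented) might look like this: the shared abstraction lives in one package, and a module either extends it or swaps in its own:

```typescript
// Hypothetical sketch of a "kernel" package holding shared abstractions,
// with one module reusing them. All names are invented for illustration.

// --- Kernel package: referenced by every module that wants it ---
interface Entity {
  id: number;
}

abstract class BaseRepository<T extends Entity> {
  protected items = new Map<number, T>();
  add(item: T): void {
    this.items.set(item.id, item);
  }
  getById(id: number): T | undefined {
    return this.items.get(id);
  }
}

// --- Order module: reuses the kernel abstraction, adds its own logic ---
interface Order extends Entity {
  total: number;
}

class OrderRepository extends BaseRepository<Order> {
  totalRevenue(): number {
    let sum = 0;
    for (const o of this.items.values()) sum += o.total;
    return sum;
  }
}

const repo = new OrderRepository();
repo.add({ id: 1, total: 10 });
repo.add({ id: 2, total: 5 });
console.log(repo.totalRevenue()); // 15
```

A module that outgrows the kernel simply stops referencing it and defines its own repository abstraction locally, as the note suggests.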
Note #2: not every module needs to have the same projects or layers as you are proposing. For example, generally speaking, Basket doesn't need an Infrastructure project; all it does is tell the Order module whether there is any order ongoing or pending for this user.
Note #3: microservices are notorious for needing high-end servers to handle the large number of nodes/applications.
Finally, I have a blueprint for you to study and follow which happens to cover the same case you're after; here is the link.
I want to open a discussion about testing approaches.
Context
I'm creating a new project, and my main focus has been on efficiency and clean structure (not necessarily the most STANDARDISED code, but the easiest to read, consistent, and quick to understand). I'm building my server-side code and have mapped it from a loose outline of website UX designs. I want to create some integration tests in preparation for building the FE.
I experimented with using Newman/Postman to automate some of these integration tests. Pros for this:
Not reinventing the wheel: the Postman requests would exist anyway, so it's not a whole new suite I'm having to build and maintain.
Consistency with project state: during the manual testing phase these Postman requests would be updated, so by the time integration testing happens it's automatically up to date with the project state.
But I had some issues running it. Then I had an idea:
Why not build out the FE API library which connects to my server, and then use that as my testing suite?
Easier to enforce contract testing, as it can be coupled with unit testing on a real consumer.
Not a whole new suite I'm having to build and maintain; it makes de-coupling near impossible, and therefore the reliability of the testing increases.
Efficient use of time, as this will be built anyway; the only additional coding would be scripts that schedule the test calls.
I'm considering integrating the two repos (FE + BE) into a monorepo using a management tool like NX (which can check for differences and only do builds/deploys when affected areas are touched). That way types can be consistent across touch points, as they're both running on TypeScript.
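The core of the idea can be sketched quickly: the FE API client takes its transport as a parameter, so the exact same client code the front end ships with can be pointed at a real dev server in integration tests, or at a fake. Everything here (endpoint shape, types, names) is invented for illustration:

```typescript
// Hypothetical sketch: the FE API client doubles as the integration-test
// driver. Endpoint, types and transport are invented for illustration.

interface User {
  id: number;
  name: string;
}

// The transport is injected so tests can supply a real fetch or a fake.
type Transport = (path: string) => Promise<unknown>;

// This client is the SAME code the front end would ship with, so any
// contract violation it detects is a real FE-facing breakage.
function makeApiClient(transport: Transport) {
  return {
    async getUser(id: number): Promise<User> {
      const body = (await transport(`/users/${id}`)) as User;
      if (typeof body.id !== "number" || typeof body.name !== "string") {
        throw new Error("contract violation: unexpected user shape");
      }
      return body;
    },
  };
}

// In a real test run the transport would hit the dev server; a fake
// stands in here so the sketch is self-contained.
const fakeServer: Transport = async () => ({ id: 7, name: "test" });

makeApiClient(fakeServer)
  .getUser(7)
  .then((u) => console.log(u.name)); // "test"
```

Because the client validates the response shape, the unit tests of the consumer and the integration tests of the server exercise one shared contract, which is the coupling benefit described above.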
Ideal conversation anchors:
Experimented with something similar? How did it go?
Considerations I haven't made which are likely to cause problems down the line?
Anything easier to achieve the goal that I haven't considered?
I have a project with login and other functionality tests in Cucumber. There are different projects which use the same login function. I would like to reuse the Cucumber login steps from one project in another project.
Eg:
Project1->TestLogin1
Project2->TestLogin1
In general, don't try to do this. Cucumber scenarios should describe the behaviour of your system, and their implementation should be specific to each particular system. People have been trying to do this in the Cukes community for years, generally with little success.
Sure, with something as simple as login you could share ... until one application starts allowing you to register via Facebook whilst the other requires you to confirm via email.
In practice, the (very small) amount you save by sharing is offset by what you lose in being able to make your scenarios specific to the world of your application.
You could definitely benefit from sharing step definitions between projects, because there is likely to be a lot of overlap between certain parts of an app, such as admin tasks.
If you use an IDE for feature editing, you may then be able to benefit from leveraging those step defs through autocomplete.
It should be possible to package step defs into repos that are then included by module. You might be able to leverage tags or hooks to aid in setup so the context is correct.
Whether it’s worth the effort of coordinating across many projects will likely depend on your use case.
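To illustrate the packaging idea, here is a minimal sketch in plain TypeScript. The registry below stands in for Cucumber's Given/When/Then registration (in a real cucumber-js setup you would call those from `@cucumber/cucumber` instead); all names and the step phrasing are invented:

```typescript
// Hypothetical sketch of a shared step-definition package. The Map stands
// in for Cucumber's step registry; all names are invented for illustration.

type StepFn = (...args: string[]) => void;
const steps = new Map<string, StepFn>();

// --- shared-login-steps package: published once, imported by each project.
// It writes into a "world" object supplied by the consuming project, so the
// steps stay decoupled from any one project's state.
function registerLoginSteps(world: { loggedInAs?: string }) {
  steps.set("I log in as {user}", (user) => {
    world.loggedInAs = user;
  });
  steps.set("I should be logged in", () => {
    if (!world.loggedInAs) throw new Error("not logged in");
  });
}

// --- consuming project (e.g. Project2 reusing TestLogin1) ---
const world: { loggedInAs?: string } = {};
registerLoginSteps(world);

// Simulate a scenario run driving the registered steps.
steps.get("I log in as {user}")!("alice");
steps.get("I should be logged in")!();
console.log(world.loggedInAs); // "alice"
```

Passing the world in explicitly is what makes the trade-off visible: the shared steps only work for projects whose login flow still fits that shared shape, which is exactly the limitation the answer above warns about.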
We are 3 teams:
Website front-end (React)
Website back-end (Node.js)
Native app (React Native, Node.js)
We want to share logic (e.g. Validations).
As of now I found articles on 3 ways to do so:
An NPM package we will create for our own needs
A microservice with endpoints that carry the relevant logic
Serverless functions that carry the relevant logic
Any other real-life, production suggestions?
Any other real-life, production suggestions?
Kind of - in no specific order:
You could specify the rules in a language/technology agnostic way, and then have your app load them at runtime (or be compiled in during build). The rules could then exist as a config file, or even be fetched from a remote location (a variation on your options 2 & 3).
Of course, designing a language agnostic rules engine / approach is non-trivial, and depends on what you need the rules to do (how complex, etc). You might find a pre-built open source solution that does that.
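As a toy illustration of the rules-as-data idea (rule shapes, field names and messages all invented here), the rules can be plain data that could live in a JSON file shared by web, server and app, with only a small evaluator implemented per platform:

```typescript
// Hypothetical sketch of technology-agnostic validation rules: the rules
// are plain data (could be a shared JSON file fetched at runtime); the
// evaluator is the only per-platform code. All names are invented.

type Rule =
  | { field: string; check: "required" }
  | { field: string; check: "maxLength"; limit: number };

// This part is pure data and could be served to every client unchanged.
const rules: Rule[] = [
  { field: "email", check: "required" },
  { field: "name", check: "maxLength", limit: 10 },
];

function validate(input: Record<string, string>, rules: Rule[]): string[] {
  const errors: string[] = [];
  for (const rule of rules) {
    const value = input[rule.field] ?? "";
    if (rule.check === "required" && value === "") {
      errors.push(`${rule.field} is required`);
    }
    if (rule.check === "maxLength" && value.length > rule.limit) {
      errors.push(`${rule.field} is too long`);
    }
  }
  return errors;
}

console.log(validate({ email: "", name: "a-very-long-name" }, rules));
// ["email is required", "name is too long"]
```

The trade-offs listed below (translation cost, debugging difficulty) come from keeping that evaluator in sync across every platform that consumes the rules.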
I have seen people try this, but the projects never succeeded (for unrelated reasons). One team specified the rules in an Excel sheet.
But there are trade-offs:
Performance hit - how to take language agnostic rules and be able to execute them? This will probably take some translation. Native code is almost always going to be faster and more efficient.
Higher development effort.
Added complexity - harder to debug (even if you compensate by developing more mechanisms to assist you do that - which is more development effort).
Regarding Your Options
For what it's worth, code / design-time sharing is an obvious approach, which I guess is sufficiently covered by NPM. I don't know enough about React and Node to know if they have any better ways of doing that. Normally if I have logic I want to share I'll use a component which is purpose built (lean as possible, minimal dependencies, intended to be re-used across multiple projects), and ingested in (C# / .Net) at compile/design time.
As an alternative to NPM you could look at dependency injection. This would allow you to do things like update the logic even after the app is deployed, as long as it can access wherever the newer set of rules lives. So it's a bit like your option 1 (NPM, code-level loading) but at runtime, and just once; and like your options 2 & 3 it is fetched remotely at runtime, the difference being that you're ingesting the logic rather than firing off questions and receiving answers (less chatty).
Service base rules are good in that they are totally separated, but the obvious trade-offs are availability and performance at runtime.
I don't see a difference between your options 2 & 3 from the standpoint of creating, managing and sharing logic. The only material impact is on whoever implements and supports that service system.
I have been exploring STAF/STAX this past week.
It is used for test-automation execution and is somewhat similar to Hudson.
I would like to know which types of tests it can be used for, i.e. functional tests, load tests, etc. Our functional automation tests basically depend on our own framework, i.e. how they run and how their pass/fail status is reported all go through that framework. How can we integrate such tests with a test-automation framework like STAF?
I've been using STAF/STAX for over 4 years.
PROs:
Open Source
Cross-platform
Concurrent execution
Extensible (i.e. you can write your own services)
Decent support from IBM through the STAF website
CONs:
Sometimes buggy
Difficult to diagnose problems
Programming STAX scripts is awkward and ugly (i.e. scripting via XML tags and embedded jython)
I've found that STAF/STAX is useful for systems test. It enables you, for example, to launch a server on one system and a client on another, then test their interaction. It's also helpful if you need to test cross-platform, or for multiple language bindings. I also like the fact that it can be used both in large, networked systems, as well as on an individual's desktop.
On the other hand, I would probably avoid using it for unit testing, or tests that are relatively simple and can be run on a single system. I'd probably use a language specific unit framework for that.
STAF is not comparable to Hudson.
When I look at something like Hudson/Jenkins and Buildbot, I see GUI with emphasis on scheduling, viewing what's going on, what was done, and how it went.
STAF, on the other hand, is more like the plumbing for a QA framework over a distributed environment. It helps with launching processes, collecting logs, locking resources, etc.