Pros/cons of using the FE API library as an integration test suite

Wanting to open a discussion about testing approaches.
Context
I'm creating a new project, and my main focus has been on efficiency and clean structure (not necessarily the most STANDARDISED code, but code that is easy to read, consistent, and quick to understand). I'm building the server-side code first, mapped from a loose outline of the website UX designs, and I want to create some integration tests in preparation for building the FE.
I experimented with using Newman/Postman to automate some of these integration tests. Pros of this approach:
Not reinventing the wheel: the Postman requests would exist anyway, so it's not a whole new suite I have to build and maintain.
Consistency with project state: the Postman requests get updated during the manual testing phase, so by the time the integration tests run they automatically reflect the current state of the project.
But I had some issues getting it running, which led to the idea...
Why not build out the FE API library which connects to my server, and then use that as my testing suite?
Easier to enforce contract testing, as it can be coupled with unit tests on a real consumer.
Not a whole new suite to build and maintain, and it makes the tests near impossible to decouple from the real client, which increases their reliability.
Efficient use of time, since the library will be built anyway; the only additional coding would be the scripts that schedule the test calls (see the sketch after the next paragraph).
I'm also considering merging the two repos (FE + BE) into a monorepo managed with a tool like NX (which can detect what has changed and only build/deploy the affected projects). That way types stay consistent across touch points, since both sides run on TypeScript.
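To make the idea concrete, here is a minimal sketch of what reusing the FE API client as the integration test suite could look like, assuming a Jest/Vitest-style runner and hypothetical `@myapp/api-client` and `@myapp/shared-types` packages (none of these names come from the post above):

```typescript
// Hypothetical package names; the point is that the FE client library itself drives the tests.
import { ApiClient } from '@myapp/api-client';     // the FE API library under discussion
import type { User } from '@myapp/shared-types';   // types shared across FE/BE in the monorepo

const client = new ApiClient({
  baseUrl: process.env.API_URL ?? 'http://localhost:3000', // points at a locally running server
});

describe('users endpoint (integration via the FE client)', () => {
  it('creates a user and reads it back through the real server', async () => {
    const created: User = await client.users.create({ name: 'Test User' });
    const fetched: User = await client.users.get(created.id);
    expect(fetched.name).toBe('Test User');
  });
});
```

In an NX workspace, the `affected` command (e.g. `nx affected --target=test`) can then restrict a run to the projects touched by a change.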
Ideal conversation anchors:
Have you experimented with something similar? How did it go?
Are there considerations I haven't made that are likely to cause problems down the line?
Is there an easier way to achieve this goal that I haven't considered?

Related

Can you write unit tests if you don't actually write the code?

For instance, I assume this is what SDETs do?
They don't actually write the functional code, but they're able to write integration/unit tests, am I correct?
But can someone learn to read code and then start writing tests?
This is a good question. I was in the same place when I worked only on manual testing; here is how I experienced things when I transitioned to automation. To answer your question: yes, someone can read code and start writing tests against it, but you need to understand the code you are going to test.
There are different types of testing methodologies used when testing an application, and these tests are done in layers so that the application is properly covered. Here is what the layering looks like:
1) Unit testing: These are usually written by developers, because they wrote the code, know what it does, and find these tests the easiest to write. I am an SDET and I have written unit tests; the one opportunity that presented itself was when we were refactoring our code and there was a lot of room to add them. In a unit test you exercise a function in isolation, giving it some values and verifying an expected result (a short example appears at the end of this answer). This is not usually something an SDET does, but an SDET should be able to do it if given the chance.
2) Integration testing: This is also usually written by developers, though the definition of integration testing is a bit vague: it means testing multiple modules together, in isolation from the rest of the system. These can be backend modules or frontend modules, but not both together. The frameworks that help here are the code-level integration test tools for whichever technology you use; for an Angular application, for example, there are deep integration tests that exercise a component's HTML and CSS, and shallow integration tests that only exercise two components' logic together. This can be written by an SDET but is usually written by the developer.
3) API testing (contract-based testing): Pact helps us achieve this, and other tools like REST Assured, Postman and JMeter help with testing API endpoints. Pact tests the integration of APIs from the frontend and verifies that contract against the backend, which makes it very popular with microservices. This can be written either by the developer or by the SDET.
4) End-to-end testing: This is the sole responsibility of the SDET. It covers user flows based on user stories and tests the entire stack together, backend and frontend, allowing SDETs to automate how a user would actually use the application. This is also called black-box testing. Frameworks that help here include Selenium, Protractor, Cypress, Detox, etc.
5) Load testing: This is again something an SDET does, using tools like hey, JMeter, LoadRunner, etc. These tests let the SDET put heavy load on the system and find its breaking points.
6) Performance testing: Testing the performance of the web page for the end user, based on page load time, SEO optimisation, and the weight of the page's elements. Google's Lighthouse is an excellent tool for this; I am not aware of anything else that gives as much data you can use to improve your website. This is a primary job of an SDET.
7) CI/CD: Continuous integration and continuous deployment requires architectural knowledge of the system, and is something you typically take on as an SDET3 or lead QA engineer. On platforms like AWS and GCP, using CI tools such as Jenkins or CircleCI, you can set up a pipeline that runs all of the above tests whenever a branch is merged into master or a pull request is created. Building the pipeline requires knowledge of Docker, Kubernetes, Jenkins and your test frameworks: first you dockerise your tests, then you build the image and push it to a registry in the cloud, then you use the image to create a Kubernetes job that runs every time your code changes.
These are the levels of work an SDET does. It takes time and hard work to understand all of these testing frameworks and how everything fits together. An SDET should have knowledge of the server, HTTP protocols, the frontend, the backend, browsers, caching, pipeline management and test orchestration.
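To make item 1 concrete, here is a minimal unit test in the style described there, using Jest-style assertions; the function and values are made up for illustration:

```typescript
// A function tested in isolation: give it some values and verify the expected result.
export function applyDiscount(total: number, percent: number): number {
  if (percent < 0 || percent > 100) {
    throw new Error('percent must be between 0 and 100');
  }
  return total - total * (percent / 100);
}

describe('applyDiscount', () => {
  it('reduces the total by the given percentage', () => {
    expect(applyDiscount(200, 25)).toBe(150);
  });

  it('rejects an out-of-range percentage', () => {
    expect(() => applyDiscount(200, 150)).toThrow();
  });
});
```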
Yes, sure. You can write unit tests that increase the test coverage of the codebase. That is highly skilled software test engineering work, since you need to understand what is going on in the code. It's definitely a great skill to have!
I'd advise you to take a look at so-called "mutation coverage", which is a better metric than simple unit test coverage. Mutation testing changes logical operators in different parts of the codebase (creating so-called "mutants") and then runs the unit tests to see how many of them fail, which shows how effective they are: if the results are the same after the mutants are injected as before, the unit tests are of low quality and won't catch new issues introduced into the codebase.
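As a small hand-made illustration of the idea (not tied to any particular mutation tool; Stryker is one tool that automates this for JavaScript/TypeScript): a mutation framework might flip the operator below from `>` to `>=`, and if no unit test fails after that change, the mutant "survives" and the suite is too weak.

```typescript
// Original code under test.
export function isAdult(age: number): boolean {
  return age > 17;
}

// A mutation tool would generate a "mutant" like this behind the scenes:
//   return age >= 17;   // boundary operator flipped
//
// This test kills that mutant, because isAdult(17) would wrongly return true for the mutant:
describe('isAdult', () => {
  it('treats 17 as not adult and 18 as adult', () => {
    expect(isAdult(17)).toBe(false);
    expect(isAdult(18)).toBe(true);
  });
});
// A weaker test that only checked isAdult(30) === true would let the mutant survive,
// signalling low-quality tests even though line coverage looks complete.
```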

Microservice testing: in-process component (individual microservice) testing

I am exploring individual microservice testing and came across a good guide: https://martinfowler.com/articles/microservice-testing/#testing-component-in-process-diagram. This post suggests there is a way to test individual microservices "in-process".
That means the test and the service execute in the same process, and it also points to a library that can be used. However, I am not able to understand the following:
1) How exactly does "in-process" testing work in a microservice architecture?
2) Any pointers on how to use the "inproctester" library?
Thanks
This can be accomplished by hosting your API in process and interacting with it, also in process, while using test doubles at the edges of your API, or (put another way) using test doubles for external collaborators.
This allows an API (and its core behaviour) to be tested (call it a unit or an integration test, depending on your opinion) without ever hitting a network, database or filesystem.
This testing provides coverage, but obviously not all the coverage you need.
In .NET Core you can use TestServer, included in the Microsoft.AspNetCore.TestHost package.
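The same idea exists outside .NET; for example, in a Node/TypeScript service you could get a comparable in-process setup with Fastify's `inject()`. This is my analogy, not something from the original answer, and the route and stubbed repository are hypothetical:

```typescript
import fastify from 'fastify';

// Hypothetical external collaborator replaced with a test double at the edge of the API.
const fakeUserRepository = {
  findById: async (id: string) => ({ id, name: 'Stubbed User' }),
};

function buildApp() {
  const app = fastify();
  app.get('/users/:id', async (request) => {
    const { id } = request.params as { id: string };
    return fakeUserRepository.findById(id); // no real database involved
  });
  return app;
}

describe('GET /users/:id (in-process)', () => {
  it('returns the user without going over the network', async () => {
    const app = buildApp();
    // inject() dispatches the request directly to the router, in the same process.
    const response = await app.inject({ method: 'GET', url: '/users/42' });
    expect(response.statusCode).toBe(200);
    expect(response.json()).toEqual({ id: '42', name: 'Stubbed User' });
  });
});
```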
If your API is simple enough you can perhaps reduce your types of testing (N.B. I am not suggesting reducing coverage).
This approach is alluded to in the slide deck you mention, on page 8/25.
Paraphrasing: if the logic in the API is small, component tests may be preferable to isolated and sociable unit tests.
This approach is also recommended in Spotify's "Honeycomb" style of testing. They call component testing of an API "integration testing", which coincidentally is also what Microsoft calls it when using TestServer in their tests (see the second link below for concrete examples):
https://labs.spotify.com/2018/01/11/testing-of-microservices/
https://learn.microsoft.com/en-us/aspnet/core/testing/integration-testing
I think the thrust of the approach (regardless of its specific name) is that in-process testing is fast and external collaborators are replaced with test doubles.
One thing worth noting is that "out of process" component tests are basically end-to-end tests, but with all external collaborators swapped out for test doubles.
These may be a bit slower than in-process component tests, but they exercise a bit more of the stack. If you are going to end-to-end test anyway, I'd try to do in-process component testing as much as possible.

Testing reusable components / services across multiple systems

I'm currently starting a new project where we are hoping to develop a new system using reusable components and services.
We currently have 30+ systems that all share common elements, but at the moment we develop each system in isolation, so we are often duplicating code and, of course, we have 30+ separate code bases to maintain and support.
What we would like to do is create a generic platform of shared components to enable quick development of new collections, reuse code and automated tests, and reduce the code base that needs to be maintained.
Our thinking so far is that we would have a common code base for specific modules, for example User Management and Secure System Access; each of these modules could consist of its own generic web module, API and context, creating a generic package of code.
We could then deploy these components/packages to build up a new system without coding the same modules over and over again: if the new system needs to manage users, you pull in the User Management package and it does what you need. Because we have 30+ systems, we will deploy the components multiple times, once per collection. We also appreciate that some systems will need unique functionality, so there would be the option to add extensions to the generic modules for system-specific needs, or to skip one of the generic modules and build a new one while still reusing the rest of the generic components.
For example, if we have four generic components A, B, C and D that make up a system, they could be deployed to create the following set-ups (a sketch of the extension idea follows this list):
System 1 - A, B, C and D (happy with all generic components)
System 2 - Aa, B, C and D (component A extended with system-specific functionality)
System 3 - A, E, C and F (components B and D can't be reused, so system-specific ones are created, but components A and C are still reused)
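A minimal TypeScript sketch of the extension idea, with all names hypothetical: the generic User Management component exposes a class that System 2 extends for its specific behaviour, while System 1 uses it as-is.

```typescript
// Generic package, shared by all systems (hypothetical names).
export interface UserStore {
  deactivate(userId: string): Promise<void>;
}

export class UserManagement {
  constructor(protected readonly store: UserStore) {}

  async deactivateUser(userId: string): Promise<void> {
    await this.store.deactivate(userId);
  }
}

// System 2: the "Aa" case above, extending the generic component with specific behaviour.
export class AuditedUserManagement extends UserManagement {
  constructor(store: UserStore, private readonly audit: (msg: string) => void) {
    super(store);
  }

  override async deactivateUser(userId: string): Promise<void> {
    await super.deactivateUser(userId);
    this.audit(`user ${userId} deactivated`); // system-specific extension
  }
}
```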
This raises a few issues for me, as I need to be able to test both the platform and each system to make sure everything works, and it's the first time I've come across having to test a set-up like this.
I've done some reading around microservices and how to test them, but these articles usually approach the problem for a single system built from microservices, whereas we are looking at multiple systems with different configurations.
My thoughts so far are that, for the generic components utilised by the different collections, I can create automated tests at the base-code level; those tests confirm the generic functionality, so it should not be necessary to retest those functions for each component, other than perhaps a manual sense check after deployment. Then, at each system level, additional automated tests can be added to check any system-specific functionality.
Ideally I'd like some sort of testing platform set up so that, if a change is made to a core component such as User Management, it would be possible to trigger all the automated tests at the core level and then all of the system-specific tests for every system that shares the component, to make sure that changes don't break core functionality or have a knock-on effect on individual systems; a quick manual check would then be all that's needed. I'm keen to avoid a massive manual test overhead of checking 30+ systems every time a shared component changes.
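One pattern that fits this (my suggestion, not something from the question): publish the core component's behavioural tests as a reusable suite, so each system runs the same tests against its own configuration and then adds its own on top. A rough Jest-style sketch with hypothetical names:

```typescript
// Hypothetical shared component interface (mirrors the generic package).
export interface UserManagementLike {
  deactivateUser(userId: string): Promise<void>;
}

// Shipped alongside the generic User Management package: every system imports and runs this
// against its own (possibly extended) implementation.
export function runUserManagementSharedTests(createComponent: () => UserManagementLike) {
  describe('User Management: shared behaviour', () => {
    it('deactivates an existing user without throwing', async () => {
      const component = createComponent();
      await expect(component.deactivateUser('user-1')).resolves.toBeUndefined();
    });
    // ...more shared behavioural tests covering the generic functionality...
  });
}

// In an individual system's test project: reuse the shared suite, then add specific tests.
runUserManagementSharedTests(() => ({
  deactivateUser: async () => {
    /* this system's configured component would be constructed here */
  },
}));

describe('System-specific behaviour', () => {
  it('covers functionality unique to this system', () => {
    // system-specific assertions go here
  });
});
```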
We work in an agile way, and for our current projects we have a strong continuous integration process: when a developer checks in code (Visual Studio), it triggers a CI build (TeamCity / Octopus) that runs all of the unit tests; provided they pass, an integration build runs my automated QA tests, which are a mixture of API-level tests and web tests using SpecFlow with PhantomJS or Selenium WebDriver. We would like to keep this sort of framework in place to preserve the quick feedback loops.
It all sounds great in theory, but where I'm struggling is trying to put something into practice and create a sound testing strategy to cover this kind of system set up.
So really what I'm hoping is that someone out there has encountered something similar in the past and has thoughts on the best way to tackle it, ideally approaches that have been proven to work.
I'm keen to get a better understanding of how I could set up a testing platform / rig to aid the continuous integration for all systems considering that each system could potentially look different, yet have shared code.
Any thoughts or links to blogs / whitepapers etc. that you think might help would be much appreciated!!
Your approach is quite good, and since I'll soon have to face the same issues as you, I can share my ideas so far. I'm pretty sure that how to
create a sound testing strategy to cover this kind of system set up
can't be squeezed into one post. The big picture, as I see it, is this: you're in the middle of an enterprise application integration effort, and the fundamental thing to cover with tests will be the data migration. You may also want to consider the concept of service-oriented architecture for your
generic platform using shared components
since it will let you provide application functionality as services to other applications. An indirect benefit is that SOA dramatically simplifies testing: services are autonomous, stateless, with fully documented interfaces, and separate from the cross-cutting concerns of the implementation. There are plenty of resources on this, for example on E2E testing of SOA and on testing SOA efficiently.

Need for integration testing

We have an Eclipse UI on the frontend and a non-Java backend.
We generally write unit tests separately for both the frontend and the backend.
We also write PDE tests, which run the Eclipse UI against a dummy backend.
My question is: do we need integration tests that test end to end?
One reason I can see these integration tests being useful is that when I upgrade my frontend or backend, I can run the end-to-end tests and find defects.
I know this kind of question depends on the particular scenario,
but I would like to know the general best practice followed here.
cheers,
Saurav
As you say, the best approach depends on the application. In general, though, it is a good idea to have a suite of integration tests that can exercise your application end-to-end, to pick up any issues that occur when you upgrade one layer of the application without accounting for those changes in another layer. This sounds like it would definitely be worthwhile in your case, given that you have system components written in different languages, which naturally creates more opportunity for issues to arise from the added complexity around the component interfaces.
One thing to be aware of when writing end-to-end integration tests (which some would call system tests) is that they tend to be quite fragile compared to unit tests, for a combination of reasons, including:
They require multiple components to be available for the tests, and for the communication between these components to be configured correctly.
They exercise more code than a unit test, and therefore there are more things that can go wrong that can cause them to fail.
They often involve asynchronous communication, which is more difficult to write tests for than synchronous communication.
They often require complex backend data setup before you can drive tests through the entire application.
Because of this fragility, I would advise writing as few tests as possible that go through the whole stack; the focus should be on covering as much functionality as possible in the fewest tests, with a bias towards your most important functional use-cases. A good strategy to get started would be:
Pick one key use-case (ideally one that touches as many components of the application as possible) and work on getting an end-to-end test for it. Focus on making this test as realistic as possible (i.e. use a production-like deployment), as reliable as possible, and as automated as possible; ideally it should run as part of continuous integration. Even just having this single test brings a lot of value. A sketch of such a test follows this list.
Build out tests for other use-cases one test at a time, again starting with your most important use-cases.
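For example, a single key use-case test might look like the following sketch, which assumes Playwright as the browser driver and a hypothetical staging URL and selectors (the answer above doesn't prescribe a particular tool):

```typescript
import { test, expect } from '@playwright/test';

// One key use-case, run against a production-like deployment as part of CI.
test('a registered user can sign in and reach the dashboard', async ({ page }) => {
  await page.goto(process.env.E2E_BASE_URL ?? 'https://staging.example.com'); // hypothetical URL

  await page.fill('input[name="email"]', 'qa-user@example.com');              // hypothetical selectors
  await page.fill('input[name="password"]', process.env.E2E_PASSWORD ?? '');
  await page.click('button[type="submit"]');

  // The assertion covers the outcome the business cares about, not implementation details.
  await expect(page.locator('h1')).toHaveText('Dashboard');
});
```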
This approach will help ensure that your end-to-end tests are of high quality, which is vital for their long-term health and usefulness. Too many times I have seen people try to introduce a comprehensive suite of such tests to an application, only to fail because the tests are fragile and unreliable: people lose faith in them, stop running or maintaining them, and eventually forget the tests ever existed.
Good luck and have fun!

Usage of STAF/STAX in Automation

I have been exploring STAF/STAX over the past week.
It is used for test automation execution and is somewhat similar to Hudson.
I would like to know which types of tests it can be used for, i.e. functional tests, load tests, etc. Functional automation tests basically depend on their framework, i.e. how they run and how their pass/fail status is reported through the framework. How can we integrate such tests with a test automation framework like STAF?
I've been using STAF/STAX for over 4 years.
PROs:
Open Source
Cross-platform
Concurrent execution
Extensible (i.e. you can write your own services)
Decent support from IBM through the STAF website
CONs:
Sometimes buggy
Difficult to diagnose problems
Programming STAX scripts is awkward and ugly (i.e. scripting via XML tags and embedded jython)
I've found that STAF/STAX is useful for systems test. It enables you, for example, to launch a server on one system and a client on another, then test their interaction. It's also helpful if you need to test cross-platform, or for multiple language bindings. I also like the fact that it can be used both in large, networked systems, as well as on an individual's desktop.
On the other hand, I would probably avoid using it for unit testing, or tests that are relatively simple and can be run on a single system. I'd probably use a language specific unit framework for that.
STAF is not comparable to Hudson.
When I look at something like Hudson/Jenkins or Buildbot, I see a GUI with an emphasis on scheduling, viewing what's going on, what was done, and how it went.
STAF, on the other hand, is more like the plumbing for a QA framework over a distributed environment. It helps with launching processes, collecting logs, locking resources, etc.