To follow the 12-factor app in my (Java) microservice development, I want to construct my testing infrastructure so that it both conforms to the 12-factor app itself and tests/pushes the system under test towards the 12-factor app.
What are good test harnesses (tools and approaches) for this?
Details:
To have effective microservices, I want to follow the 12-factor app. To find as many bugs as possible, I want a test harness that is effective for 12-factor microservices -- mine is not.
For instance, when developing microservices I often introduce bugs that my unit tests do not catch -- cannot catch, since they occur not in local logic but in the plumbing, i.e. only when many pieces (Java code, JavaScript code, Helm chart, Dockerfile, environment variables) are put together. I therefore want to strengthen my coarse-grained tests (integration, component, and acceptance/end-to-end tests), which leads me to several questions related to the 12-factor app, e.g.
Should I have more coarse-grained tests that run directly in the production environment?
Which parts can I mock in my integration and component tests without breaking dev/prod parity?
How can I test that my system under test follows the 12-factor app?
I have found a lot of tools and material for testing (Java) microservices, e.g.
The book Testing Java Microservices
https://martinfowler.com/articles/microservice-testing/
How do I write useful unit tests for a mostly service-oriented app?
http://arquillian.org/
https://github.com/SpectoLabs/hoverfly-java
https://github.com/rest-assured/rest-assured
https://github.com/DiUS/pact-jvm
https://www.testcontainers.org/
Unfortunately, none of them gives advice on the 12-factor app -- it isn't even mentioned. Hence any advice on testing approaches that support the 12-factor app is appreciated!
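For concreteness, here is a minimal sketch of the kind of coarse-grained test I have in mind, using Testcontainers from the list above. It assumes the service is packaged as a Docker image named my-service:latest (a hypothetical name) and configured purely through environment variables, as the 12-factor app demands, so the test exercises the Dockerfile and config plumbing as well as the code:

```java
import org.junit.jupiter.api.Test;
import org.testcontainers.containers.GenericContainer;
import org.testcontainers.utility.DockerImageName;

import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

import static org.junit.jupiter.api.Assertions.assertEquals;

class TwelveFactorSmokeTest {

    @Test
    void serviceStartsWithProductionLikeEnvironment() throws Exception {
        // Start the real image, configured only through env vars (factor III),
        // so the same artifact and config mechanism are used as in production.
        try (GenericContainer<?> service =
                 new GenericContainer<>(DockerImageName.parse("my-service:latest")) // hypothetical image
                     .withEnv("DATABASE_URL", "jdbc:postgresql://db:5432/app") // placeholder value
                     .withExposedPorts(8080)) {
            service.start();

            String url = "http://" + service.getHost() + ":"
                    + service.getMappedPort(8080) + "/health"; // hypothetical endpoint
            HttpResponse<String> response = HttpClient.newHttpClient().send(
                    HttpRequest.newBuilder(URI.create(url)).build(),
                    HttpResponse.BodyHandlers.ofString());

            assertEquals(200, response.statusCode());
        }
    }
}
```

Because the test runs the same image with the same environment-variable mechanism as production, it preserves dev/prod parity far better than wiring the Java classes together by hand.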
In my experience, the problem is the "shitty" difference between unit tests and integration tests.
There are in-betweens. IMHO you should design your tests to check the complete service functionality as defined by its contract, simulating/mocking the services it consumes and calling it at the interfaces it provides. To do that you need a framework like springunit, cdi-unit, ioc-unit, or Arquillian.
Testing only single (or a few) classes of the service and mocking the rest exposes too many internals of the code to the test code, which prevents refactoring and often leads to tests that merely copy the business logic, so they don't really test the functionality.
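To make that concrete, here is a minimal sketch of such a contract-level test using two tools from the question's list: hoverfly-java to simulate the consumed downstream service and REST Assured to call the provided interface. The host names, port, and endpoints are hypothetical, and the sketch assumes the service under test is running locally and routes its outbound HTTP calls through the Hoverfly proxy:

```java
import io.specto.hoverfly.junit.rule.HoverflyRule;
import org.junit.ClassRule;
import org.junit.Test;

import static io.restassured.RestAssured.given;
import static io.specto.hoverfly.junit.core.SimulationSource.dsl;
import static io.specto.hoverfly.junit.dsl.HoverflyDsl.service;
import static io.specto.hoverfly.junit.dsl.ResponseCreators.success;
import static org.hamcrest.Matchers.equalTo;

public class QuoteServiceComponentTest {

    // Simulate the downstream service that the microservice consumes.
    @ClassRule
    public static HoverflyRule hoverfly = HoverflyRule.inSimulationMode(dsl(
            service("rates.internal") // hypothetical downstream host
                    .get("/api/rates/EUR")
                    .willReturn(success("{\"rate\": 1.1}", "application/json"))));

    @Test
    public void returnsQuoteBasedOnSimulatedRate() {
        // Call the service under test only at the interface it provides.
        given()
            .when().get("http://localhost:8080/quotes/EUR") // hypothetical endpoint
            .then().statusCode(200)
                   .body("rate", equalTo(1.1f));
    }
}
```

Nothing in this test touches the service's internal classes, so the implementation can be refactored freely as long as the contract holds.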
I want to test a GraphQL API.
For now, I'm using GraphiQL, but I'm looking for any automated tool...
It seems that SoapUI does not support GraphQL testing.
Any ideas?
Thanks.
You basically have a few options that I've seen:
Apollo's GraphQL Platform. It gives you full-blown telemetry on your individual resolvers, and can integrate with VS Code to let your developers know how expensive their query is in real time. You'll pay for it though.
An observability tool such as Honeycomb or Datadog, also paid.
Write your own. For a simple enough use-case it may make sense, but if you're looking for a rich feature set it probably makes more sense to buy rather than to build.
I am using SoapUI 5.4.0 (community edition) and have no trouble testing GraphQL requests.
Treat them as a REST request: add a header, e.g. Content-Type: application/graphql, and put the raw query in the request body.
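The same trick works from any plain HTTP client. Here is a sketch with Java 11's java.net.http; the endpoint URL and query are made up, and note that not every GraphQL server accepts application/graphql (many expect a JSON envelope instead):

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class GraphQLSmokeTest {
    public static void main(String[] args) throws Exception {
        // POST the raw query; the Content-Type header tells the server
        // to parse the body as a bare GraphQL document.
        HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create("https://example.com/graphql")) // hypothetical endpoint
                .header("Content-Type", "application/graphql")
                .POST(HttpRequest.BodyPublishers.ofString(
                        "{ user(id: 1) { name } }")) // hypothetical query
                .build();

        HttpResponse<String> response = HttpClient.newHttpClient()
                .send(request, HttpResponse.BodyHandlers.ofString());
        System.out.println(response.statusCode() + " " + response.body());
    }
}
```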
What, specifically, do you want to test?
We have a number of automated sanity-check tests that we run on every build:
Is the schema valid (according to graphql-js)? This can be surprisingly easy to mess up if your implementation allows for e.g. multiple definitions of the same type name, or any number of other subtle bugs.
Is this a breaking schema change? If so, break the build unless there's a specific git commit message acknowledging and accepting it. With graphql-js this is fairly easy: run the introspection query against current production, run it against the current build, and use the built-in findBreakingChanges function.
Note that the graphql-js tests don't mean your server has to be written in JS - ours is written in ReasonML using ocaml-graphql-server, and then on build we use a node test suite to hit it as any other client would.
Finally, beyond that, we have some tests that run queries/mutations for an end-to-end API server test. Overall, this has been quite robust against regressions so far.
And keep in mind that you can simply hit your GraphQL server with any HTTP client; there doesn't have to be GraphQL awareness in your test suite. I'd recommend this route on top of the sanity checks I mentioned above.
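As an illustration, here is a sketch of such a GraphQL-unaware end-to-end check in Java with REST Assured: send the standard {"query": ...} JSON envelope and assert that no errors come back. The server URL is hypothetical:

```java
import org.junit.jupiter.api.Test;

import static io.restassured.RestAssured.given;
import static org.hamcrest.Matchers.equalTo;
import static org.hamcrest.Matchers.nullValue;

class GraphQLEndToEndTest {

    @Test
    void introspectionQuerySucceedsWithoutErrors() {
        given()
            .baseUri("https://example.com") // hypothetical server
            .contentType("application/json")
            .body("{\"query\": \"{ __schema { queryType { name } } }\"}")
        .when()
            .post("/graphql")
        .then()
            .statusCode(200)
            // Per the GraphQL spec, failures appear in an "errors" array.
            .body("errors", nullValue())
            .body("data.__schema.queryType.name", equalTo("Query"));
    }
}
```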
For automated testing there is https://github.com/ohler55/graphql-test-tool/gtt. It's written in Go, but as a standalone application it can be used with any GraphQL server. We use it for unit testing and CI.
Karate is the only open-source tool to combine API test automation, mocks, and performance testing into a single, unified framework.
https://github.com/intuit/karate
I have a custom API included in a website which creates a new UID on each unique user visit, like Google Analytics, and sends the UID data to the backend server (NodeJS) for computation.
I need to check concurrent users and the maximum number of users that can be created/handled with the current cloud config.
Also, I need to check whether there is any limit on the API creating and sending user data. The API is behind a CDN (Fastly).
Please suggest some testing tools to check the above scenarios.
SoapUI is a kind of standard for web service functional testing; it also has certain load testing capabilities.
Web services are basically JSON or SOAP over HTTP, so any tool which supports the HTTP protocol will suit. Here you can find a list of free and open source load testing tools. Narrowed down to the most powerful ones, it looks like:
Grinder
Gatling
Apache JMeter
Tsung
Check out Open Source Load Testing Tools: Which One Should You Use? article for the main features comparison, sample scripts and reports.
I agree with Dmitry that those four (Grinder/Gatling/Tsung/JMeter) are good tools with a lot of functionality, but they are also fairly complex, require dependencies, and can be somewhat painful to get started with. Which tool is best for you depends entirely on your requirements.
It sounds to me like you want to test one or two REST API endpoints powered by NodeJS. If you want a tool that is simple to get started with and can be scripted, there are some good command-line tools available:
Wrk - very fast, scriptable in Lua
Artillery - NodeJS-based, scriptable in JS
k6 - our own newly released tool, currently the fastest tool scriptable in JS
There is also Locust which is scriptable in Python, but very low-performing.
I like these tools because they offer simple command-line usage and can be scripted in a real language, as opposed to JMeter and Tsung, where you'll have to resort to XML if you want to do something slightly out of the ordinary. Gatling is a bit better, offering a DSL based on Scala classes where you can do most things, but it is still not "real" Scala. The Grinder is the only one of those other tools that offers true scripting (in Jython), but again, it is not a simple one-line command to get started.
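If the requirement really is just "how many concurrent UID-creating requests survive", even a hand-rolled harness can give a first answer before you commit to a tool. A rough Java sketch follows; the endpoint is hypothetical, and a real load testing tool will give you far better statistics (latency percentiles, ramp-up, pacing):

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicInteger;

public class ConcurrencySmokeTest {
    public static void main(String[] args) throws Exception {
        int users = 500; // simulated concurrent visitors
        HttpClient client = HttpClient.newHttpClient();
        HttpRequest request = HttpRequest.newBuilder(
                URI.create("https://example.com/api/uid")) // hypothetical endpoint
                .POST(HttpRequest.BodyPublishers.noBody())
                .build();

        AtomicInteger ok = new AtomicInteger();
        ExecutorService pool = Executors.newFixedThreadPool(users);
        for (int i = 0; i < users; i++) {
            pool.submit(() -> {
                try {
                    HttpResponse<Void> r = client.send(
                            request, HttpResponse.BodyHandlers.discarding());
                    if (r.statusCode() == 200) ok.incrementAndGet();
                } catch (Exception ignored) {
                    // a failed request simply counts against the success total
                }
            });
        }
        pool.shutdown();
        pool.awaitTermination(2, TimeUnit.MINUTES);
        System.out.println(ok.get() + "/" + users + " requests succeeded");
    }
}
```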
I work in a small development team consisting of 5 programmers, none of whom have any overall testing experience. The product we develop is a complex VMS, basically consisting of a (separate) video server and a client for viewing live and recorded video. Since video processing requires a lot of hardware power, the software is typically deployed across multiple servers.
We use a slimmed down version of feature driven development. Over the past few months a lot of features were implemented, leaving almost no time for the luxury of QA.
I'm currently researching a way for us to test our software as (time-)efficiently as possible. I'm aware of software methodologies built around testing, such as TDD. However, since many features are built around the distributed architecture, it is hard to write individual tests for individual features: testing them properly means replicating some of the endless scenarios in which the product can be deployed.
For example, we recently developed a failover feature, in which one or more idle servers monitor other servers and take their place in case of failure. Likely scenarios include failover servers in a remote location or a different subnet, or multiple servers failing at a time.
Manually setting up these scenarios takes a lot of valuable time. Even though I'm aware that manual initialization will always be required in this case, I cannot seem to find a way to automate these kinds of tests (preferably defining them before implementing the feature) without having to invest an equal or greater amount of time in actually creating the automated tests.
Does anyone have any experience in a similar environment, or can tell me more about (automated) testing methodologies or techniques which are fit for such an environment? We are willing to overthrow our current development process if it enhances testing in a significant way.
Thanks in advance for any input. And excuse my grammar, as English is not my first language :)
I approach test strategy by thinking of layers in a pyramid.
The first layer of the pyramid is your unit tests. I define unit tests as tests that exercise a single method of a class. Each and every class in your system should have a suite of tests associated with it, and each and every method should have a set of tests included in that suite. These tests can and should exist in a mocked environment.
This is the foundation of testing and quality strategy. If you have solid test coverage here, a lot of issues will be nipped in the bud. These are the cheapest and easiest of all the tests you will be creating. You can get a tremendous bang for your buck here.
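A sketch of what a test at this layer looks like, using JUnit 5 and Mockito; the PriceService and DiscountRepository types are made-up examples, included here so the snippet is self-contained:

```java
import org.junit.jupiter.api.Test;

import static org.junit.jupiter.api.Assertions.assertEquals;
import static org.mockito.Mockito.mock;
import static org.mockito.Mockito.when;

// Hypothetical production types, shown inline for completeness.
interface DiscountRepository {
    double discountFor(String tier);
}

class PriceService {
    private final DiscountRepository repository;
    PriceService(DiscountRepository repository) { this.repository = repository; }
    double priceFor(String tier, double base) {
        return base * (1 - repository.discountFor(tier));
    }
}

class PriceServiceTest {

    @Test
    void appliesDiscountForCustomerTier() {
        // Mock the environment: the only real code exercised is one method.
        DiscountRepository repository = mock(DiscountRepository.class);
        when(repository.discountFor("gold")).thenReturn(0.10);

        PriceService service = new PriceService(repository);

        assertEquals(90.0, service.priceFor("gold", 100.0), 0.001);
    }
}
```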
The next layer in the pyramid is your functional tests. I define functional tests as tests that exercise the classes in a module. This is where you are testing how various classes interact with one another. These tests can and should exist in a mocked environment.
The next layer up is your integration tests. I define integration tests as tests that exercise the interaction between modules. This is where you are testing how various modules interact with one another. These tests can and should exist in a mocked environment.
The next layer up is what I call behavioral or workflow tests. These are tests which exercise the system as would a customer. These are the most expensive and hardest tests to build and maintain, but they are critical. They confirm that the system works as a customer would expect it to work.
The top of your pyramid is exploratory testing. This is by definition a manual activity. This is where you have someone who knows how to use the system take it through its paces and work to identify issues. This is to a degree an art and requires a special personality. But it is invaluable to your overall success.
What I have described above, is just a part of what you will need to do. The next piece is setting up a Continuous Integration/Continuous Deployment (CI/CD) pipeline.
Whenever code is committed to one of your repos (and I do hope that you have a project as big as this broken up into separate repos), that component should undergo static analysis (i.e. be linted), be built, have tests executed against it, and have code coverage data gathered.
Just the act of building each component of your system regularly, will help to flush out issues. Combine that with running unit/functional/integration tests against it and you are going to be identifying a lot of issues.
Once you have built a component, you should deploy it into a test or staging environment. This process must be automated and able to run unattended. I highly recommend you consider using Chef from Opscode for this process.
Once you have it deployed in a staging or test environment, you can start hitting it with workflow and behavioral tests.
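A workflow-level smoke test can be as small as this sketch (REST Assured, with the staging URL injected by the pipeline through an environment variable; the STAGING_URL variable, endpoint, and payload are hypothetical):

```java
import org.junit.jupiter.api.Test;

import static io.restassured.RestAssured.given;
import static org.hamcrest.Matchers.notNullValue;

class CheckoutWorkflowTest {

    @Test
    void customerCanPlaceAnOrder() {
        // The CI/CD pipeline deploys to staging, then exports STAGING_URL
        // before running this suite against the live deployment.
        String baseUri = System.getenv("STAGING_URL");

        given()
            .baseUri(baseUri)
            .contentType("application/json")
            .body("{\"sku\": \"ABC-123\", \"quantity\": 1}") // hypothetical payload
        .when()
            .post("/orders")
        .then()
            .statusCode(201)
            .body("orderId", notNullValue());
    }
}
```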
I approach testing first by:
choosing P0/P1 test cases for functional and automated testing
choosing which framework I will use and why
getting the tools and framework set up while doing testing manually for releases
building an MVP, at least automating the high-priority test cases
then building a suite of regression test cases that run on a daily basis.
The main thing is that you have to start with an MVP.
Closed. This question does not meet Stack Overflow guidelines. It is not currently accepting answers.
This question does not appear to be about programming within the scope defined in the help center.
Closed 4 years ago.
Improve this question
What exactly is PAT, and when do we do pre-acceptance testing?
I don't think it's a widely-used term or part of a standard. Therefore, what exactly it means is organization-specific and should be defined in a glossary somewhere. More likely though you'll just have to ask people what it means.
Any testing done before acceptance testing.
This would include:
Unit tests
Stress tests
Integration tests
Performance tests
There's no standardised meaning for the term; often it depends on your process, be it Agile or Extreme Programming etc.
Generally, however, there are a number of tests done by developers or testers in a developer test environment. These can be unit tests, developer tests, sanity regression tests, performance tests, i.e. tests that the QA team wants done before they'll even look at it. At a bare minimum, it might be just testing that the software builds (although it's frightening how often I've seen a developer fail to even check this).
Well, I would like to share something which not everyone may agree with, but this is what I feel pre-acceptance testing is:
the testing done to verify that the system under test functions as per the designed requirements covering the customer's business areas, before entering the User Acceptance Test phase, where users from the customer's side are invited to perform the testing at the vendor's location, with development team assistance available whenever a flaw occurs in the expected business flow. This is also called an alpha test. Please feel free to correct me if I have said something wrong.
Acceptance testing is a testing technique performed to determine whether or not the software system has met the requirement specifications. In the SDLC, acceptance testing fits in as follows (each development phase pairs with a test level):
Requirement Analysis - Acceptance Testing
High Level Design - System Testing
Low Level Design - Integration Testing
Coding - Unit Testing
Simply put, PAT is any testing done before acceptance testing. There are various forms of acceptance testing, such as user acceptance testing, business acceptance testing, alpha testing, and beta testing.
I believe there has been some uptake of model-driven development/engineering (aka OMG's model-driven architecture) in the real-time and embedded software development sectors. What tools and tool vendors have people had experience with?
Google gives me lots of academic papers and a vendor or two (IBM's Rational Rose Technical Developer and VisSim).
Additionally, any information on model format (UML?), target languages/environments for the platform specific models (C? RTOSs?), and testing (logic-based?) would be greatly appreciated.
We have used Enterprise Architect and IBM Rational Rhapsody, with both the built-in code generation engine and our own code generation engine that generates code suitable for DO-178B qualification. With Rational Rhapsody we targeted VxWorks as well as our own OS. These tools use UML models. Since they generate code, you can do unit testing with whatever you are used to; several test tools provide integration with them.
SCADE is also an option if you write safety-critical software. Some of the other divisions in our company have used it successfully. It is very logic-oriented, so it cannot do everything, but it can generate up to 70% of the code for some projects. Using a qualified tool eliminates most of the testing: it has a model verification tool, and if the model is correct then the code is correct. It integrates with requirements and configuration management tools.
For non-safety-critical development by experienced developers, it is difficult to say whether model-driven development will provide you with any saving. It is worth trying; as the technology matures and more developers get used to model-based development, we will see a lot more of it in the embedded environment.
I have used MS Visio for drawings only, no code generation. We are just starting to look at Enterprise Architect, and it is looking promising.
Others in our company have used Simulink/Stateflow for design modelling in an automotive environment. Not for auto code generation I think, but for running the model on the PC.
NI LabVIEW is another possibility. We've only used it in a PC-based automated testing system, but it can also be used for model-based design.
Both these systems can generate code, but we don't have much experience with that so far. Even without using code generation, model-based design has several advantages to help the high-level and mid-level design process and design documentation. Code generation is something we could consider in future.
If you want to model a state machine, you could do worse than try visualSTATE from IAR Systems (the embedded compiler company).
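To give a feel for what such tools produce, here is a hand-written Java sketch of the switch-based state machine code a modeling tool typically generates from a statechart; the states and events are made up, and real generated code would usually be table-driven C in the embedded world:

```java
public class DoorStateMachine {
    enum State { CLOSED, OPEN, LOCKED }
    enum Event { OPEN_DOOR, CLOSE_DOOR, LOCK, UNLOCK }

    private State state = State.CLOSED;

    // One mechanical transition routine per statechart; generated code
    // keeps this uniform, which is what makes it easy to verify.
    public void handle(Event event) {
        switch (state) {
            case CLOSED:
                if (event == Event.OPEN_DOOR) state = State.OPEN;
                else if (event == Event.LOCK) state = State.LOCKED;
                break;
            case OPEN:
                if (event == Event.CLOSE_DOOR) state = State.CLOSED;
                break;
            case LOCKED:
                if (event == Event.UNLOCK) state = State.CLOSED;
                break;
        }
    }

    public State state() { return state; }
}
```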