I am trying to create JUnit tests for an application that uses Spring Cloud Stream to receive events from RabbitMQ. In other applications I generally import spring-cloud-stream-test-support and spring-rabbit-test, annotate the test class with @SpringBootTest, and we're good to go. However, @SpringBootTest loads the entire application context, which is not ideal here because the application is quite large and would require mocking too many beans that are irrelevant to the test. Therefore, I tried limiting the context by specifying the classes I want loaded as follows:
@SpringBootTest(classes = {MessageProcessor.class, Consumer.class}). It seems this is not enough, as I'm getting a "Dispatcher has no subscribers for channel" error.
So my question is: what is the minimum set of classes that needs to be included in the context to test a Spring Cloud Stream/RabbitMQ consumer?
After a lot of trial and error it seems that the bare minimum for the tests to work is the following:
@SpringBootTest(classes = {
    MessageProcessor.class,
    Consumer.class,
    org.springframework.cloud.stream.config.BindingServiceConfiguration.class,
    org.springframework.cloud.stream.test.binder.TestSupportBinderConfiguration.class
})
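For illustration, a test against that trimmed context can then look roughly like the sketch below. It assumes Consumer carries @EnableBinding(Sink.class) and that MessageProcessor reacts to messages arriving on the input channel; apart from the two class names taken from the question, all names and the payload are purely illustrative.

import org.junit.Test;
import org.junit.runner.RunWith;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.boot.test.context.SpringBootTest;
import org.springframework.cloud.stream.config.BindingServiceConfiguration;
import org.springframework.cloud.stream.messaging.Sink;
import org.springframework.cloud.stream.test.binder.TestSupportBinderConfiguration;
import org.springframework.messaging.support.MessageBuilder;
import org.springframework.test.context.junit4.SpringRunner;

@RunWith(SpringRunner.class)
@SpringBootTest(classes = {MessageProcessor.class, Consumer.class,
        BindingServiceConfiguration.class, TestSupportBinderConfiguration.class})
public class ConsumerTest {

    @Autowired
    private Sink sink;

    @Test
    public void processesIncomingEvent() {
        // The test-support binder replaces the RabbitMQ binder, so no broker is needed.
        sink.input().send(MessageBuilder.withPayload("{\"id\": 1}").build());
        // ...assert on whatever side effect MessageProcessor is expected to produce
    }
}

With the test-support binder on the classpath and those two configuration classes imported, messages sent to the input channel are dispatched in memory instead of going through RabbitMQ.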
I am trying to set up an integration/API test suite with Karate and am considering using Karate Netty for mocking required services. For the test setup, the system under test A (a Spring Boot app) is started up completely. The Karate tests are then executed by a Maven test run against this instance.
Service A depends on multiple other services; these need to be mocked away for the tests. To do so, my idea was to configure a running Karate Netty standalone instance as an HTTP proxy (done via JVM args of service A).
Now my idea was to create one test feature file: xyz-test.feature
And the required mocks for this file are defined in an associated mock feature file: xyz-mock.feature
(The test scenarios are rather complex and the responses of the external services could vary)
This means for a full test run I need to load up a couple of mock feature files. So:
What is the matching strategy for multiple mock feature files? Which scenario wins, so to speak?
Is there any way to ensure that the right mock file is used for the associated test file?
(Clearly I could reconfigure the running standalone instance and advise it to use xyz-mock.feature next, but this would stop me from using parallel execution for my API tests, right?)
I already thought about reusing the Correlation-Id which I can send in for each test and then match against this in the mock file (it is also sent to all called services). But:
Is there a way to define a global matcher per mock file?
It sounds like you need only one mock file. You could boot 2 on different ports if you wanted, but there is no way to "merge" them into one port - if that is what you were looking for.
In my experience, you will be able to have a single mock take care of all your edge cases. This is because Karate's approach is unconventional: you pretty much write a stateful server. But by keeping variables in memory and some clever JSON-path, you can simulate CRUD with very few lines of code: https://github.com/intuit/karate/tree/master/karate-netty#background
You can use only one at a time, by design
Given the above limitation, here's an interesting idea: add something like an extra pathMatches('/__test/reset') scenario that cleans up your state and sets the Background variables to things like * def cats = []. Now in each feature, just call the special "reset" URL at the start. The good thing is Karate is thread-safe. Another idea, as you said, is that you can maintain two or three different variables and use some logic to "route" based on a header, again very easy IMO. Use a map of maps, e.g.:
* def data = { cats1: {}, cats2: {}, cats3: {} }
And you can get the header, e.g. if it is mode: cats1
* def mode = karate.get('requestHeaders.mode[0]')
* def cats = data[mode]
Not sure if this answers your question, but if the last Scenario has an "empty" description, it is a "catch all" and can in theory delegate to another server (or mock): https://github.com/intuit/karate/tree/develop/karate-netty#proxy-mode
Your question is a little confusing, so you may have to edit and re-word it if I haven't understood.
EDIT: using multiple mock files should be possible in 1.1.0 onwards: https://github.com/intuit/karate/issues/1566
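If it helps, starting the mock from Java (for example before a JUnit suite) rather than reconfiguring a running standalone instance can look roughly like the sketch below. It assumes the Karate 1.x MockServer builder API; the feature path and port are placeholders.

import com.intuit.karate.core.MockServer;

public class XyzMockLauncher {

    static MockServer server;

    // call once before the test run, e.g. from a JUnit @BeforeAll method
    static void start() {
        server = MockServer
                .feature("classpath:mocks/xyz-mock.feature") // placeholder path
                .http(8080)                                  // placeholder port
                .build();
    }

    // call once after the test run
    static void stop() {
        server.stop();
    }
}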
I have a few NiFi process groups which I want to run integration tests on before promoting to production. The issue is that I can't seem to find any documentation on how to do so.
Data Provenance seems like a promising tool to accomplish what I want; however, over the course of the flowfile's lifecycle, data is published to/from Kafka or the file system. As a result, the flowfile UUID changes, so I cannot query for it using the nifi-api.
Additionally, I know that NiFi offers a TestRunner library to run tests; however, this seems to be only for processors/processor groups generated via code and not the UI.
Does anyone know of a tool, framework, or pattern for integration and unit testing NiFi process groups? Ideally this would be a solution where you can programmatically compare the input/output of the processor/processor group without modifying the existing workflow.
With the introduction of the Apache NiFi Registry, we have seen users promote flows from a development/sandbox environment to a test/QE environment where there are existing "test harness" flows surrounding the "flow under test" so that they can send repeatable and deterministic (or an anonymized sample of real production data) through the flow and compare the results to an expected value.
As you point out, there is a TestRunner class and a whole testing framework provided for unit tests. While it can be difficult to manually translate a UI-constructed flow to the programmatic construction, you could also create something like a translator to accept a flow template or flow.xml.gz file and convert it into something processable by the test framework.
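For reference, a unit test for a single processor with that framework looks roughly like the sketch below; MyCsvParserProcessor, the "success" relationship name, and the asserted attribute are placeholders for whatever your own flow uses.

import java.nio.charset.StandardCharsets;
import java.util.List;

import org.apache.nifi.util.MockFlowFile;
import org.apache.nifi.util.TestRunner;
import org.apache.nifi.util.TestRunners;
import org.junit.Test;

public class CsvParserProcessorTest {

    @Test
    public void parsesOneRow() {
        // MyCsvParserProcessor is a placeholder for your own processor class
        TestRunner runner = TestRunners.newTestRunner(new MyCsvParserProcessor());

        // enqueue a flowfile with known content and run the processor once
        runner.enqueue("csv row,12,another value,true".getBytes(StandardCharsets.UTF_8));
        runner.run(1);

        // assert everything went to the expected relationship and inspect the output
        runner.assertAllFlowFilesTransferred("success", 1);
        List<MockFlowFile> results = runner.getFlowFilesForRelationship("success");
        results.get(0).assertAttributeExists("csv.column.count"); // placeholder attribute
    }
}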
Maybe plumber will help you with flow testing.
We also wanted to test whole NiFi flows, not just single processor, so we created this library and decided to open-source it.
Simple example in Scala:
// read flow previously exported from NiFi
val template = TemplateDeserializer.deserialize(this.getClass.getClassLoader.getResourceAsStream("exported-flow.xml"))
val flow = NifiTemplateFlowFactory(template).create()
// enqueue some data to any processor
flow.enqueueByName("csv row,12,another value,true", "CsvParserProcessor")
// run entire flow once
flow.run(1)
// get the results from any processor
val records = flow.resultsFromProcessorRelation("LastProcessorInFlow", "successRelation")
records should have size 1
This library is still under development so improvements and ideas are welcomed! :)
I've been searching for a while for how I should test my data access layer, without a lot of success. Let me list my concerns:
Unit tests
This guy (here: Using InMemoryConnection to test ElasticSearch) says that:
Although asserting the serialized form of a request matches your expectations in the SUT may be sufficient.
Is it really worth asserting the serialized form of requests? Do these kinds of tests have any value? It seems unlikely that I would change a function in a way that should not also change the serialized request.
If it is worth it, what is the correct way to assert these requests?
Unit tests once again
Another guy (here: ElasticSearch 2.0 Nest Unit Testing with MOQ) shows a good looking example:
void Main()
{
    var people = new List<Person>
    {
        new Person { Id = 1 },
        new Person { Id = 2 },
    };

    var mockSearchResponse = new Mock<ISearchResponse<Person>>();
    mockSearchResponse.Setup(x => x.Documents).Returns(people);

    var mockElasticClient = new Mock<IElasticClient>();
    mockElasticClient.Setup(x => x
        .Search(It.IsAny<Func<SearchDescriptor<Person>, ISearchRequest>>()))
        .Returns(mockSearchResponse.Object);

    var result = mockElasticClient.Object.Search<Person>(s => s);

    Assert.AreEqual(2, result.Documents.Count());
}

public class Person
{
    public int Id { get; set; }
}
Probably I'm missing something, but I can't see the point of this code snippet. First he mocks an ISearchResponse to always return the people list. Then he mocks an IElasticClient to return this previous search response for any search request he makes.
Well it doesn't really surprise me the assertion is true after that. What is the point of these kind of tests?
Integration tests
Integration tests make more sense to me for testing a data access layer. After a little searching I found the elasticsearch-inside package (https://www.nuget.org/packages/elasticsearch-inside/). If I'm not mistaken, this is just an embedded JVM plus an Elasticsearch instance. Is it good practice to use it? Shouldn't I use my already running instance?
If anyone has good experience with testing approaches that I didn't include, I would be happy to hear about those as well.
Each of the approaches that you have listed may be a reasonable one to take, depending on exactly what it is you are trying to achieve with your tests. You haven't specified this in your question :)
Let's go over the options that you have listed
Asserting the serialized form of the request to Elasticsearch may be a sufficient approach if you build a request to Elasticsearch based on a varying number of inputs. You may have tests that provide different input instances and assert the form of the query that will be sent to Elasticsearch for each. These kinds of tests are fast to execute, but they assume that the generated query whose form you are asserting will actually return the results that you expect.
This is another form of unit test that stubs out the interaction with the Elasticsearch client. The system under test (SUT) in this example is not the client but another component that internally uses the client, so the interaction with the client is controlled through the stub object to return an expected response. The example is contrived in that in a real test, you wouldn't assert on the results of the client call as you point out but rather on the output of the SUT.
Integration/Behavioural tests against a known data set within an Elasticsearch cluster may provide the most value and go beyond points 1 and 2, as they will not only incidentally test the generated queries sent to Elasticsearch for a given input, but will also test the interaction and produce an expected result. No doubt these types of test are harder to set up than 1 and 2, but the investment in setup may be outweighed by their benefit for your project.
So, you need to ask yourself what kinds of tests are sufficient to achieve the level of assurance that you require to assert that your system is doing what you expect it to do; it may be a combination of all three different approaches for different elements of the system.
You may want to check out how the .NET client itself is tested; there are components within the Tests project that spin up an Elasticsearch cluster with different plugins installed, seed it with known generated data, and make assertions on the results. The source is open and licensed under the Apache 2.0 license, so feel free to reuse elements within your project :)
I'm designing a web service running on Google App Engine that scrapes a number of websites and presents their data via a RESTful interface. Based on some background reading, I think I'd like to attempt Test Driven Development (TDD) and develop my tests before I write any business code.
My problem is caused by the fact that my list of scraped elements includes timetables and other records that change quite frequently. The limit of my knowledge on TDD is that you write tests that examine the results of code execution and compare these results to a hardcoded result set. Seeing as the data set changes frequently, this method seems impossible. Assuming that this is true, what would be the best approach to test such an API? How would a large-scale web API be tested (Twitter, Google, Netflix etc.)?
You have to choose the type of test:
Unit tests just test the proper operation of your modules (units). You provide input data and test that the code outputs proper results. If there are system-dependent classes, you try to mock them, or in the case of GAE services, you use the Google-provided local services. Unit tests can be run locally on your machine or on CI servers. There are two popular unit test libraries for Java: JUnit & TestNG.
Integration tests check that various modules (internal & external) work together - they basically check that APIs between modules are working. They are usually run on real servers and call real external services. They are technology specific and are harder to run.
In your case, I'd go with unit tests and provide sets of different input data which your logic should parse and act upon. Since your flow is pretty simple (load data from a fixed URL, parse it), you could also embed loading of real data into unit tests (we do this when we parse external sources).
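To make the fixed-input approach concrete, a JUnit sketch could look like the following; TimetableParser, Timetable, and the fixture file are hypothetical stand-ins for your own parsing logic and a saved copy of a scraped page.

import static org.junit.Assert.assertEquals;

import java.io.InputStream;

import org.junit.Test;

public class TimetableParserTest {

    @Test
    public void parsesKnownTimetableSnapshot() throws Exception {
        // a snapshot of a scraped page checked into src/test/resources (hypothetical fixture)
        InputStream html = getClass().getResourceAsStream("/fixtures/timetable-snapshot.html");

        // TimetableParser is a placeholder for whatever turns raw HTML into domain objects
        TimetableParser parser = new TimetableParser();
        Timetable timetable = parser.parse(html);

        // expected values are hardcoded because the input snapshot never changes
        assertEquals(42, timetable.getEntries().size());
        assertEquals("Platform 3", timetable.getEntries().get(0).getPlatform());
    }
}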
From what you are describing you could easily find yourself writing integration tests. If your aim is to test the logic for processing what is returned from the scraped data (e.g. you know that you are going to get a timetable in a specific format coming in and you now have logic to process that data) you will need to create a SEAM between your web services logic and your processing logic. Once you have done this you should be able to mock the data that is returned from the web service call to always return the same table data and then you can write consistent unit tests against it.
public class ScrapingService : IScrapingService
{
    public string Scrape(string url)
    {
        // scraping logic
    }
}

public interface IScrapingService
{
    string Scrape(string url);
}

public class ScrapingProcessor
{
    private IScrapingService _scrapingService;

    // inject the dependency
    public ScrapingProcessor(IScrapingService scrapingService)
    {
        _scrapingService = scrapingService;
    }

    public void Process(string url)
    {
        var scrapedData = _scrapingService.Scrape(url);
        // now process the scrapedData
    }
}
To test, you can now create a FakeScrapingService that implements the IScrapingService interface and then return whatever data you like from the Scrape method. There are some very good mocking frameworks out there that make this type of thing easy. My personal favorite is NSubstitute.
I hope this explanation helps.
I see several other questions about load testing web services, but as far as I can tell those are all synchronous load testing tools (meaning they send a ton of requests, but they go one at a time).
I am looking for a tool where I can say, "I want 100 requests to be launched at the exact same time".
Now, I am new to the whole load testing thing, so it is possible that those tools are asynchronous and I am just missing it.
Anyway, in short, my question is: is there a good tool for load testing WCF web services asynchronously (i.e. with lots of threads)?
In general, I recommend you look at soapUI, for anything to do with testing web services. They do have load testing features in the Professional edition (I haven't used these yet).
In addition, they've just entered beta with a loadUI product. If it's anywhere near as good as the parent product, then it's worth a hard look.
You can use the Visual Studio load testing agent components to run on multiple client machines, which allows you to generate load as asynchronously as you have machines available to apply it.
There is a licence requirement for using this feature.
There are no tools that will allow you to apply a load at exactly the same instant (i.e. within milliseconds), but this is not necessary to load test an application correctly.
For most needs, a single load test server running Visual Studio Ultimate edition will be more than enough to get an understanding of how your web service performs under load.
Visual Studio and most other tools I imagine will apply load in an asynchronous manner, but I think in your view you want to apply a set load all at once.
This is not really necessary as in practice load is not applied to a service in this manner.
The best bet for services expecting high load is to load your service until a given number of "requests per second" is reached. Finding what level your application should expect is a bit trickier, but it involves figuring out roughly how many users you expect and how much they will use the service over a given period; for example, 10,000 users each making 30 requests during a one-hour peak works out to (10,000 × 30) / 3,600 ≈ 83 requests per second.
The other test to do is to setup a load test harness and run the load up until either the webservice starts to perform badly or the test harness runs out of "oomph" and cannot create any more load.
At development time you can use NLoad (http://nload.github.io) to run load tests on your development machine or in a testing environment.
For example
public class MyTest : ITest
{
    public void Initialize()
    {
        // Initialize your test, e.g., create a WCF client, load files, etc.
    }

    public void Execute()
    {
        // Send http request, invoke a WCF service or whatever you want to load test.
    }
}
Then create, configure and run a load test:
var loadTest = NLoad.Test<MyTest>()
    .WithNumberOfThreads(100)
    .WithDurationOf(TimeSpan.FromMinutes(5))
    .WithDeleyBetweenThreadStart(TimeSpan.Zero)
    .OnHeartbeat((s, e) => Console.WriteLine(e.Throughput))
    .Build();

var result = loadTest.Run();