Testing integration between two systems

I have two systems:
A REST web application that returns data in XML.
A Windows service that fetches data daily from the first web app and synchronizes it with its own database.
Question: how do I do integration testing for these applications (i.e. check whether the data is synchronized correctly)? Is it possible to automate such testing?

If I were you, I would have system 2 issue its request to system 1 and then validate the database data at system 2. This forms a whole end-to-end (E2E) journey, interacting with as many of the systems involved as possible. You may also need to consider different scenarios/paths so that as much of the interaction as possible is covered.
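To automate this, one option is an end-to-end test that reads the XML from system 1, forces a sync run in system 2, and then compares the two sides. Below is a minimal C# sketch of that idea; the URL, connection string, XML element names, table name, and the `SyncJobRunner` trigger are all placeholders for whatever your real systems expose, not actual APIs of either system.

```csharp
// Hedged sketch of an automated end-to-end sync check.
// All names (URL, connection string, table, XML elements) are hypothetical placeholders.
using System.Collections.Generic;
using System.Data.SqlClient;
using System.Linq;
using System.Net.Http;
using System.Threading.Tasks;
using System.Xml.Linq;
using Xunit;

public class SyncIntegrationTests
{
    private const string ApiUrl = "http://webapp.example/api/customers";                     // system 1 (placeholder)
    private const string DbConn = "Server=svc-host;Database=SyncDb;Integrated Security=true"; // system 2 DB (placeholder)

    [Fact]
    public async Task Sync_copies_every_record_from_the_api_into_the_service_database()
    {
        // 1. Read the source data straight from the REST application (system 1).
        using var http = new HttpClient();
        var xml = XDocument.Parse(await http.GetStringAsync(ApiUrl));
        var sourceIds = xml.Descendants("Customer")
                           .Select(c => (string)c.Element("Id"))
                           .OrderBy(id => id)
                           .ToList();

        // 2. Trigger the Windows service's sync job instead of waiting for the daily run.
        //    How this is done depends on the service (command-line switch, admin endpoint,
        //    or calling the sync class directly); SyncJobRunner is a placeholder for that.
        await SyncJobRunner.RunOnceAsync();

        // 3. Read what actually landed in system 2's database and compare.
        var targetIds = new List<string>();
        using (var conn = new SqlConnection(DbConn))
        using (var cmd = new SqlCommand("SELECT Id FROM Customers ORDER BY Id", conn))
        {
            conn.Open();
            using var reader = cmd.ExecuteReader();
            while (reader.Read()) targetIds.Add(reader.GetString(0));
        }

        Assert.Equal(sourceIds, targetIds);
    }
}

// Placeholder for however the sync is triggered in your environment.
static class SyncJobRunner
{
    public static Task RunOnceAsync() => Task.CompletedTask; // replace with a real trigger
}
```

The same test can be extended to the different scenarios/paths mentioned above (empty source, updated records, deletions) by varying what the source returns before the sync runs.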

Related

How to architect scheduled API to API integration

My organization moves data for customers between systems; these integrations are built in BizTalk and are mostly file-based, sometimes to/from APIs. More and more customers are switching to APIs, so we are facing more and more API-to-API integrations.
I'm mostly a backend developer, but I have been tasked with finding a more generic pattern or system for building these integrations; we are talking about close to a thousand integrations.
But not thousands of different APIs; many customers use the same sorts of systems.
What I want is a solution that:
Fetches data from the source API
Transforms the data to the format of the target API
Sends the data to the target API
Another requirement is that it should be possible to set a schedule for when these jobs should run.
This is easily done in BizTalk, but as mentioned there will be thousands of integrations, and if we need to change something in one of the steps it will be a lot of work.
My vision is something that holds interfaces to all the APIs we communicate with and also contains the scheduled jobs we want to run between them. Preferably with logging/tracking.
There must be something out there that does this?
Suggestions?
NOTE: No cloud-based solutions since they are not allowed in our organization.
You can easily implement this using the temporal.io open source project. You can code your integrations in a general-purpose programming language. Temporal ensures that the integration runs to completion in the presence of all sorts of intermittent failures. Scheduling is also supported out of the box.
Disclaimer: I'm a founder of the Temporal project.
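For illustration, here is a minimal sketch of the kind of abstraction the question describes and that an answer like the one above would have you code by hand: one interface per source and target API, a transformation step, and a job object for a scheduler to invoke. None of these types come from BizTalk or Temporal; all names are hypothetical.

```csharp
// Illustrative sketch of a generic fetch -> transform -> send integration.
// These interfaces are invented to show the shape of the abstraction only.
using System.Threading;
using System.Threading.Tasks;

public interface ISourceApi<TSource>
{
    Task<TSource> FetchAsync(CancellationToken ct);          // pull data from the source API
}

public interface ITargetApi<TTarget>
{
    Task SendAsync(TTarget payload, CancellationToken ct);   // push data to the target API
}

public interface ITransformer<TSource, TTarget>
{
    TTarget Transform(TSource input);                        // map source format to target format
}

// One scheduled job = one (source, transformer, target) triple.
public sealed class IntegrationJob<TSource, TTarget>
{
    private readonly ISourceApi<TSource> _source;
    private readonly ITransformer<TSource, TTarget> _transformer;
    private readonly ITargetApi<TTarget> _target;

    public IntegrationJob(ISourceApi<TSource> source,
                          ITransformer<TSource, TTarget> transformer,
                          ITargetApi<TTarget> target)
    {
        _source = source;
        _transformer = transformer;
        _target = target;
    }

    // The scheduler (a Temporal workflow, Quartz.NET trigger, Windows service timer, ...)
    // calls this on the configured schedule and wraps it with retries and logging.
    public async Task RunAsync(CancellationToken ct)
    {
        var raw = await _source.FetchAsync(ct);
        var mapped = _transformer.Transform(raw);
        await _target.SendAsync(mapped, ct);
    }
}
```

Since Temporal (and schedulers such as Quartz.NET) can be self-hosted, an approach along these lines can also satisfy the "no cloud-based solutions" constraint.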

Testing multiple services in Integration Tests?

I know integration tests are supposed to test multiple components working together, but is it fine to test the behavior of one component (which has side effects) using an unrelated component?
So, I have one microservice (Service A) which fetches data, does some processing, and puts it in another data store. (Basically, its task is to load the data into the DB.)
There is another microservice (Service B) written to perform transactional queries on the data store.
Now while writing integration tests for Service A, is it fine that I use the read operations of Service B to verify that the data has loaded correctly?
By the way, Service A does not use Service B to load the data into the data store.
I think that it will increase the coupling between the two services, but at the same time directly querying the database has its own challenges (integrating the test environment with the database).
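To make the trade-off concrete, here is a hedged sketch of Service A's integration test showing both verification styles side by side; the endpoints, connection string, and table name are invented placeholders.

```csharp
// Hypothetical integration test for Service A; URLs, connection string and
// table are placeholders, not real services.
using System.Data.SqlClient;
using System.Net.Http;
using System.Threading.Tasks;
using Xunit;

public class ServiceALoadTests
{
    [Fact]
    public async Task Load_job_writes_the_expected_rows()
    {
        // Act: kick off Service A's load (placeholder trigger endpoint).
        using var http = new HttpClient();
        await http.PostAsync("http://service-a.test/jobs/load", content: null);

        // Option 1: verify straight against the data store - no extra coupling,
        // but the test environment needs direct database access.
        using var conn = new SqlConnection("Server=test-db;Database=Store;Integrated Security=true");
        conn.Open();
        using var cmd = new SqlCommand("SELECT COUNT(*) FROM Orders", conn);
        Assert.True((int)cmd.ExecuteScalar() > 0);

        // Option 2: verify through Service B's read API - easier to run anywhere,
        // but a Service B bug or outage now fails Service A's test suite.
        var viaB = await http.GetAsync("http://service-b.test/orders?limit=1");
        Assert.True(viaB.IsSuccessStatusCode);
    }
}
```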

Testing an n-tier web application - should my test project have its own database?

In an n-tier web-app, should I be running integration tests against a different database, one dedicated to testing the code? Is it standard practice to test against the production database as well?
You should never run untested code on production. After all, you don't want to discover that it has a bug that wipes out all data. That's what tests are supposed to find. And you should not have test/staging data in the production system. It is good practice to dump the data out of production and load it into another environment for periodic testing with real-world data.
You should have a test database (not shared with production). It's a good idea to wipe out the data before every test.
You can have smoke tests that run in production. They will pretend to be a user (agent) and visit many pages, maybe even create things (with a special tag so you can find them again and delete them).
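As a rough illustration of the "dedicated test database, wiped before every test" idea, assuming xUnit and a placeholder connection string and schema:

```csharp
// Sketch of a per-test reset against a dedicated test database.
// The connection string and table names are placeholders.
using System;
using System.Data.SqlClient;
using Xunit;

public class DatabaseFixtureTests : IDisposable
{
    private const string TestDbConn = "Server=localhost;Database=AppTests;Integrated Security=true";

    public DatabaseFixtureTests()
    {
        // xUnit creates a new instance per test, so the constructor acts as "before each":
        // wipe the tables the tests touch so every test starts from a known state.
        Execute("DELETE FROM OrderLines; DELETE FROM Orders; DELETE FROM Customers;");
    }

    public void Dispose() { /* optional: remove artifacts created by the test */ }

    [Fact]
    public void Creating_a_customer_persists_it()
    {
        // ...call the application code under test here; this INSERT is a stand-in...
        Execute("INSERT INTO Customers (Id, Name) VALUES (1, 'test')");
        Assert.Equal(1, ExecuteScalar("SELECT COUNT(*) FROM Customers"));
    }

    private static void Execute(string sql)
    {
        using var conn = new SqlConnection(TestDbConn);
        conn.Open();
        using var cmd = new SqlCommand(sql, conn);
        cmd.ExecuteNonQuery();
    }

    private static int ExecuteScalar(string sql)
    {
        using var conn = new SqlConnection(TestDbConn);
        conn.Open();
        using var cmd = new SqlCommand(sql, conn);
        return (int)cmd.ExecuteScalar();
    }
}
```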
I'd rather think of a different database user with its own data set. The database schema should be the same. I'd never run tests on the production database with the same database user. Test logic shouldn't even be delivered to the client, as it may lead to severe security issues.
In my opinion you'd need a full production-like data set for testing purposes, to be able to test every single feature of your application. You would also need an empty database (without any business data) for application clients to have as an initial point on delivery. Such a data set doesn't need testing, as it contains no data for exercising the business logic.

How to perform a stress test against a SharePoint site using threads

I want to analyze the performance (and hence the weak points) of a SharePoint site by doing stress testing. What needs to be done is to call some methods, exposed via a web service, that do the following things inside the SharePoint site:
-create a new group
-add a content to the group
-add an attachment to the content
-delete the content
-delete the previously created group
What is required is a simulation of a situation where there are 4500 users trying to do these operations concurrently (at the same time or, more realistically, within a short timespan, for example within 5 seconds).
We also want to record the execution time of each operation (web method), for example of "create new group". I thought I could simulate these operations via a console application using threads and stopwatches. Has anyone encountered a similar problem who can share existing solutions or hints for doing it "the right way"? For example, how can I make all the threads start at the same instant? Thanks in advance.
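As a sketch of the console-application approach from the question, one way to release all simulated users at (nearly) the same instant and time each call is a shared signal (here `ManualResetEventSlim`) plus a `Stopwatch` per operation. The web-service call itself is a placeholder.

```csharp
// Sketch: N concurrent simulated users released at the same instant,
// each timing its own "create new group" call.
using System;
using System.Collections.Concurrent;
using System.Diagnostics;
using System.Linq;
using System.Threading;
using System.Threading.Tasks;

class StressRunner
{
    static void Main()
    {
        const int users = 100;                       // scale up towards 4500 on a real rig
        var timings = new ConcurrentBag<long>();
        using var startSignal = new ManualResetEventSlim(false);

        var workers = Enumerable.Range(0, users).Select(i => Task.Run(() =>
        {
            startSignal.Wait();                      // every worker blocks here...
            var sw = Stopwatch.StartNew();
            CallCreateGroupWebMethod(i);             // placeholder for the SharePoint web method
            sw.Stop();
            timings.Add(sw.ElapsedMilliseconds);
        })).ToArray();

        Thread.Sleep(1000);                          // give all tasks time to reach the gate
        startSignal.Set();                           // ...and they are all released together
        Task.WaitAll(workers);

        Console.WriteLine($"avg {timings.Average()} ms, max {timings.Max()} ms");
    }

    static void CallCreateGroupWebMethod(int userIndex)
    {
        // Call the real web service here (e.g. a generated SOAP/WCF client).
        Thread.Sleep(50); // simulated work so the sketch runs stand-alone
    }
}
```

Even with all the workers released together, a single machine cannot realistically simulate 4500 truly concurrent users, which is exactly the point about load test rigs made in the answer below.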
I have been a user of Visual Studio Load Testing for 2 years, and I find it very powerful and easy to use. You can run integration tests, navigate a web site, simulate database load... in fact, everything. Because it is an MS application, it is also fully compatible with all MS products like SharePoint: it's easier to call a WCF service from a unit test than from another technology (how would you test netTcpBinding otherwise?). You can also use the Visual Studio Profiler to instrument your code (and see which lines of code are expensive, or even the ADO.NET interactions). You can also easily extend the load testing through its many extensibility points.
One important thing is that VS load testing is "intrusive". It will not only collect response times, request lengths, and so on, but also all the performance counters, database queries, etc. All these metrics are saved in a dedicated database such as SQL Express for reporting. There is an add-on for Excel.
Just one important note (applicable to all load testing solutions!):
You can run load tests from a developer machine or even a single dedicated machine, but you usually can't generate enough traffic to really see how the application responds (your machine cannot simulate 500 concurrent users because of limited CPU/memory/network). In order to simulate a lot of users, you'll set up what is known as a load test rig.
A test rig is made up of a test controller machine and one or more test agent machines. The controller manages and coordinates the agent machines, and the agents generate load against the application. The test controller is also responsible for collecting performance monitor data from the servers under test and, optionally, from the test rig machines.
Here are some links:
MSDN
Dave's introduction
I'm not saying Visual Studio Load Testing is not a great tool, but there are tools, like Tsung and Eventlet (and many others), that can support many thousands of concurrent users.
Good luck.

Should test data be used in production?

We are deploying an update to our main application in production. The update has been tested in QA and it looks good to go. Our client wants to do a test in production. For that, we will run the application using "test data" in production, and once the test has finished, we will delete the "test data".
A couple of server admins are against this because "test data doesn't belong to production". I think it's OK since the QA server and the production server have different hardware and the databases house different applications (QA has more databases, production is dedicated). Besides that, are there other facts that I can use to back my opinion?
EDIT: adding context
The application is a tool that automates the reception and validation of data. We receive the files via email, and this tool automatically validates them and imports them into the database. We have a BI system that creates reports using this information (Excel files are received by email, then validated, then reports/views come out, all of it automated).
The "test data" would be old files (good and bad files from previous efforts) that represent real data (actually it is real data, but with problems or just too old).
Yes! But manual use of test data in production does not sound like a good idea to me, as it cannot be controlled or monitored. My answer below assumes the test data is used for automated testing.
Test data in production is "today's" need. It was not a requirement back when automated testing was not a requirement (or did not exist), so in general it will be frowned upon. Security is the main reason; its impact in skewing site analytics is another. These are genuine and good reasons.
One cannot decide one day to simply put test data in production, especially towards the end of a project. This needs to be made a requirement from the time development starts, so that the test data is there in production from the very first deployment onwards and its impact is studied and documented. The organization as a whole needs to understand its benefits and impact.
Test data needs to be divided based on its type, need, or context, e.g. retrievable test data and editable test data. The first step would be to have retrievable (read-only, never changing) test data available. Perhaps this is the farthest we can go in many cases, and it would still provide good results. The creation of this read-only test data needs to be automated and preferably documented.
The benefits of having test data in production are huge. An automated test of an application is more precious than the application itself. If management realizes that, then at least the initial "frown" changes. I feel test data in production should be considered a requirement/user story, and all the problems raised against it should be mitigated. New patterns of development need to evolve in this area.
This discussion is also related to integration testing, and this article focuses on its benefits over unit testing.
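If the read-only/tagged route is taken, the mechanics can be as simple as stamping every test record with a marker that a cleanup job filters on. A rough sketch, with an invented schema, connection string, and tag value:

```csharp
// Sketch: seed tagged test records and remove them afterwards.
// Table, columns, connection string and tag are illustrative only.
using System.Data.SqlClient;

public static class ProductionTestData
{
    private const string Conn = "Server=prod-db;Database=App;Integrated Security=true";
    private const string Tag = "E2E-TEST";    // marker value used only by test rows

    public static void Seed()
    {
        Run(@"INSERT INTO IncomingFiles (FileName, Source, TestTag)
              VALUES ('old_known_good.xlsx', 'regression', @tag),
                     ('old_known_bad.xlsx',  'regression', @tag)");
    }

    public static void Cleanup()
    {
        // Delete everything the test created, and nothing else.
        Run("DELETE FROM IncomingFiles WHERE TestTag = @tag");
    }

    private static void Run(string sql)
    {
        using var conn = new SqlConnection(Conn);
        conn.Open();
        using var cmd = new SqlCommand(sql, conn);
        cmd.Parameters.AddWithValue("@tag", Tag);
        cmd.ExecuteNonQuery();
    }
}
```

As the next answer points out, tagging does not remove the risks: tagged rows still flow through the same business logic, reports, and integrations as real data.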
Your admins are right. Having test data in production will expose you to risks (security holes):
Test data in production can be used to do damage to your company (intentionally or unintentionally).
For example, if you have non-existing identities in production, you can make payments to them. If they are linked to real bank accounts, you lose money without the ability to detect it.
Test data can change your management reports. Fake activity can influence reports and have an impact on the decisions made. This will be very hard to track and even harder to correct.
Test data can interact with production data. If someone makes a mistake and creates a wrong relation, production data can be changed based on test data.
There is no good way of detecting that data is test data even if you mark it, since any data can be marked as test data. And if you handle test data differently in your business layer, it would not be a real test of your production environment.
Nowadays it is good practice to have a staging environment with the same infrastructure configuration as production, so you can execute pen tests, load tests, and whatever else you want in order to ensure that production will behave as you expect.