Transaction processing scenario simulation in a web application - testing

I am looking into transaction processing and I cannot find a way to simulate multiple queries (HTTP methods) to a resource (script) that acts on shared data,
e.g. an HTTP GET representing access to a resource from user1 with param1, and another HTTP GET for access from user2 with param2.
For example, two users trying to book a limited resource "at the same time", or accessing a URL that triggers actions which should have all ACID properties.
Is there a way to test such scenarios in a web application?
Should I stick to a "programmable" scenario (a scenario I will code) that can be run using a stress-test tool?
What method(s) do you use in such cases?

You can use Apache JMeter to set up test scripts that run multiple simulated users, vary test content, and so on. You can even run slaves on more than one physical test client if you need to increase the load. The requests can be created from templates that include user-specific data, picked at random from prepared requests, or generated by scripts for each request.
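If you do go the "programmable" route, the key is to start all requests before awaiting any of them, so they genuinely overlap. A minimal sketch in TypeScript (Node.js 18+ for the built-in fetch); the /book endpoint, its parameters, and the 200-vs-409 status convention are hypothetical stand-ins for your resource:

// Sketch: fire two "simultaneous" requests at a shared resource and check
// that only one booking wins. The endpoint and status codes are placeholders.
async function book(user: string): Promise<number> {
    const res = await fetch(`http://localhost:8080/book?user=${user}&item=42`);
    return res.status; // e.g. 200 on success, 409 if already booked
}

async function main() {
    // Start both requests before awaiting either, so they overlap in flight.
    const statuses = await Promise.all([book("user1"), book("user2")]);
    const wins = statuses.filter((s) => s === 200).length;
    console.log(wins === 1
        ? "PASS: exactly one booking succeeded"
        : `FAIL: ${wins} bookings succeeded`);
}

main();

Since race conditions are timing-dependent, run such a scenario many times (or scale up the user count) before trusting a pass.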


Difference between functional test and end-to-end test

What is the difference between a functional test and an end-to-end test?
Techopedia says that an end-to-end test is
a methodology used to test whether the flow of an application is performing as designed from start to finish. The purpose of carrying out end-to-end tests is to identify system dependencies and to ensure that the right information is passed between various system components and systems.
Techopedia also says the following about functional testing:
Functional testing is a software testing process used within software development in which software is tested to ensure that it conforms with all requirements. Functional testing is a way of checking software to ensure that it has all the required functionality that's specified within its functional requirements.
After reading the above two paragraphs, I'm still confused about the difference between them.
I have a node.js application which accepts requests, parses them, and sends the parsed data to a database.
         requests                parse requests and send data to the database
Client ------------> node.js app ---------------------------------------------> Database
How can I write an end-to-end test and a functional test for this node.js app?
I think in both types of tests I should treat the node.js app as a black box, send requests to it, and then check whether its output is correct.
It seems that in my case there is no difference between a functional test and an end-to-end test.
As I understand it, the biggest difference between the two is that an end-to-end test requires the test to set up the system components as they are in production: real database, services, queues, etc. The reason for this is to see that your system is wired correctly (database connections, configuration and such).
A functional test can set up the system with in-memory implementations of your application ports, which makes the test run faster and perhaps allows tests to run in parallel (in some cases). The only thing the test cares about is that a feature works as expected. This can reduce the overhead of setting up certain tests, since preparing third-party systems with data can be difficult or time consuming.
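To make the "ports" idea concrete, here is a minimal sketch in TypeScript; OrderStore, InMemoryOrderStore, and handleRequest are illustrative names, not anything from the question:

// The application's port: the only storage contract the app logic sees.
interface OrderStore {
    save(order: string): Promise<void>;
    count(): Promise<number>;
}

// Functional test double: no real database needed.
class InMemoryOrderStore implements OrderStore {
    private orders: string[] = [];
    async save(order: string) { this.orders.push(order); }
    async count() { return this.orders.length; }
}

// The application logic depends only on the port, so tests can swap stores.
async function handleRequest(store: OrderStore, body: string) {
    await store.save(body); // parse + persist, simplified
}

// Functional test: fast, in-memory, no external setup.
async function functionalTest() {
    const store = new InMemoryOrderStore();
    await handleRequest(store, "order-1");
    console.assert((await store.count()) === 1, "order should be stored");
}

functionalTest();

An end-to-end test would instead wire handleRequest to a store backed by the real database, send a real HTTP request to the running app, and assert on the actual database contents.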
I think the definitions of functional and end-to-end testing can vary based on the context of your project. I have seen different people use these terms to describe different things. That being said, this is usually what the two terms mean:
Functional testing - This refers to testing the functionality of the system based on its requirements. It usually focuses on the individual requirements of the system and ensures each works properly. For example, logging into an application could be one requirement, and a person could test this functionality manually or in an automated way. Similarly, adding a product to the cart could be one functionality, and being able to make a payment to purchase a product could be another.
End-to-end testing - This refers to testing the system based on end-to-end user flows, instead of testing the system as separate components as in unit testing or story-level testing. For example, logging into the application, adding a product to the shopping cart, going to the checkout screen, placing an order, and logging out of the application could be one user flow.
What we follow is slightly different; the difference, of course, lies in how your team treats each of them. For further clarity:
Functional test: Tests a single feature, say login; verifies against the database that the login data is correct; verifies that the intended event was received or sent to a message bus, or any other external activity, in a production-like environment such as staging. You test a particular functionality in a real environment.
End-to-end test: Tests a complete flow, like logging in to the app, viewing a product on its page, selecting the product, checking out, and paying. This could span multiple microservices, or maybe multiple teams. If this flow breaks, we can pinpoint which of the functional tests failed.
Integration test: Tests the integration between multiple components, across a wide spectrum from multiple classes up to multiple systems. For example: can the UI connect to an external login service, can the backend connect to the database? If a functional test breaks, we can check which integration test failed, and so on down to the unit tests.

Access control of objects in Julia Web Platform

We are creating an online platform and exposing a Julia API via an embedded code editor. The user can access the API and run some analysis on our web app. I have a question related to controlling access to the API and its objects.
The API right now contains a database handle and other objects that are exposed to the user and could be used to hack the internal system.
Below is the current architecture:
UserProgram.jl

function doanalysis()
    data = getdata()
    # some analysis on data
end

InternalProgram.jl

const client = MongoClient()
const collection = MongoCollection(client, "dbname", "collectionName")

function getdata()
    data = # some function to get data from collection
    return data
end

# after parsing the user program
doanalysis()
To run the user's analysis, we pass the user program as a command-line argument (using the ArgParse module) and run the internal program as follows:
$ julia InternalProgram.jl --file UserProgram.jl
With this architecture, the user potentially gets access to "client" and "collection" and can modify the internal databases.
Is there a better way to solve this problem without exposing the objects?
I hope someone has an answer to this.
You will be exposing yourself to multiple types of vulnerabilities - as a general rule, executing user-supplied code is a VERY BAD IDEA.
1/ Like you said, you'll potentially allow users to execute arbitrary code against your database.
2/ Your users will have access to all the power of Julia to do things on your server (download files they can later execute, for example, or access other servers and services running on the machine [MySQL, email, etc.]). Depending on the level of access of the Julia process, think unauthorized access to your file system, installing key loggers, running spam servers, etc.
3/ They will be able to use Julia packages and get you into a lot of trouble - for example, add the Requests.jl package and execute DoS attacks on other servers.
If you really want to go this way, I recommend that you:
A/ set proper (minimal) permissions for the MongoDB user configured in the app - a sketch follows this list (ex: http://blog.mlab.com/2016/07/mongodb-tips-tricks-collection-level-access-control/)
B/ execute each user's code into a separate sandbox / container that only exposes the minimum necessary software
C/ have your containers running on a managed platform where tooling exists (firewalls) to monitor incoming and outgoing traffic (for example to block spam or DoS attacks)
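As a sketch of A/, collection-level access control in the mongo shell - the database, collection, role, and user names are placeholders taken from the question's example:

use dbname
db.createRole({
    role: "readAnalysisCollection",
    privileges: [{
        resource: { db: "dbname", collection: "collectionName" },
        actions: ["find"]    // query only - no insert/update/remove
    }],
    roles: []
})
db.createUser({
    user: "analysis_ro",
    pwd: "<choose a strong password>",
    roles: [{ role: "readAnalysisCollection", db: "dbname" }]
})

The internal program would then connect as analysis_ro, so even a hostile user program holding the handle can only read that one collection.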
In order to achieve B/ and C/, my recommendation is to use JuliaBox. I haven't used it myself, but it seems to be exactly what you need: https://github.com/JuliaCloud/JuliaBox
Once you have that running, you can also use https://github.com/JuliaWeb/JuliaWebAPI.jl

Run automated tests on a schedule to serve as a health check

I was tasked with creating a health check for our production site. It is a .NET MVC web application. There are a lot of dependencies, and therefore points of failure, e.g. a document repository, Java web services, a SiteMinder policy server, etc.
Management wants us to be the first to know if any point ever fails. Currently we are playing catch-up when a problem arises, because it is the client that informs us. I have written a suite of simple Selenium WebDriver based integration tests that exercise the sign-in and a few light operations, e.g. retrieving documents via the document API. I am happy with the result but need to be able to run the tests on a loop and notify IT when any of them fails.
We have a TFS build server, but I'm not sure it is the right tool for the job. I don't want to continuously build the tests, just run them. Also, it looks like I can't define a build schedule more frequent than daily.
I would appreciate any ideas on how best to achieve this. Thanks in advance.
What you want is called a suite of "smoke tests". Smoke tests are basically very short and sweet, independent tests that exercise various pieces of the app to make sure it's production ready, just as you say.
I am unfamiliar with TFS, but I'm sure the information I can provide will be useful and transferable.
When you say "I don't want to build the tests, just run them": any CI that you use NEEDS to build them TO run them. Basically, "building" equates to "compiling"; in order for your CI to actually run the tests, it needs to compile them.
As far as running them goes, if the TFS build system has any use whatsoever, it will have a periodic build option. In Jenkins, I can specify a cron schedule. For example:
0 0 * * *
means "run at 00:00 every day (midnight)"
or,
30 5 * * 1-5
which means "run at 5:30 every weekday" (the fields are: minute, hour, day of month, month, day of week)
Since you are making smoke tests, it's important to remember to keep them short and sweet. Smoke tests should test one thing at a time. For example:
testLogin()
testLogout()
testAddSomething()
testRemoveSomething()
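If TFS really cannot schedule more often than daily, one workaround is a small stand-alone watchdog process. A minimal sketch in TypeScript (Node.js 18+); the test command and the alerting webhook URL are placeholders for whatever your team actually uses:

import { exec } from "node:child_process";

const INTERVAL_MS = 15 * 60 * 1000; // run the suite every 15 minutes

function runSuite() {
    // Placeholder command - substitute however you invoke the Selenium suite.
    exec("dotnet test SmokeTests.csproj", (err, stdout, stderr) => {
        if (err) notify(`Smoke tests FAILED:\n${stdout}\n${stderr}`);
    });
}

function notify(message: string) {
    // Placeholder alerting hook - e.g. post to a chat webhook or mail gateway.
    fetch("https://alerts.example.com/notify", {
        method: "POST",
        headers: { "Content-Type": "application/json" },
        body: JSON.stringify({ message }),
    }).catch(console.error);
}

runSuite();
setInterval(runSuite, INTERVAL_MS);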
A web application health check is a very important feature. Smoke tests can be very useful in working out whether your website is running, and they can be automated to run at intervals to notify you that something is wrong with your site, preferably before the customer notices.
However, where smoke tests fall short is that they only tell you that the website does not work; they do not tell you why. That is because you are making external calls just as a client would, so you cannot see the internals of the application - i.e. is the database down, is it a network issue, disk space, or a remote endpoint that is not functioning correctly?
Now, some of these things should be identifiable from other monitoring, and you should definitely have an error log, but sometimes you want to hear it from the horse's mouth, and the best thing that can tell you how your application is behaving is the application itself. That is why a number of applications have a baked-in health check that can be called on demand.
Health Check as a Service
The health check services I have implemented in the past are all very similar and they do the following:
Expose an endpoint that can be called on demand, e.g. /api/healthcheck. Normally this is private and not accessible externally.
It returns a JSON response containing:
the overall state
the host that returned the result (if behind a load balancer)
the application version
a set of sub-system states (these indicate which component is not performing)
The service should be resilient; any exception thrown while checking should still end with a health check result being returned.
Some sort of aggregate that can present a number of health check endpoints in one view.
A sketch of such a health check endpoint follows.
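As an illustrative sketch of that shape - in TypeScript with Express rather than .NET, and with hypothetical checkDatabase / checkDocumentApi probes standing in for real sub-system pings:

import express from "express";

const app = express();

// Hypothetical sub-system probes - replace with real pings of your
// database, document repository, policy server, etc.
async function checkDatabase(): Promise<boolean> { return true; }
async function checkDocumentApi(): Promise<boolean> { return true; }

app.get("/api/healthcheck", async (_req, res) => {
    const checks: Record<string, "ok" | "fail"> = {};
    const probes = { database: checkDatabase, documentApi: checkDocumentApi };
    // Resilient: a throwing probe marks its sub-system as failed instead of
    // crashing the endpoint, so a result is always returned.
    for (const [name, probe] of Object.entries(probes)) {
        try {
            checks[name] = (await probe()) ? "ok" : "fail";
        } catch {
            checks[name] = "fail";
        }
    }
    const healthy = !Object.values(checks).includes("fail");
    res.status(healthy ? 200 : 503).json({
        status: healthy ? "ok" : "fail",
        host: process.env.HOSTNAME ?? "unknown", // which node answered
        version: "1.2.3",                        // your build version
        checks,                                  // which component failed
    });
});

app.listen(3000);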
Here is one I made earlier
After doing this a number of times, I have started a library that factors out the main wiring of the health check and exposes it as a service. Feel free to use it as an example, or use the NuGet packages:
https://github.com/bronumski/HealthNet
https://www.nuget.org/packages/HealthNet.WebApi
https://www.nuget.org/packages/HealthNet.Owin
https://www.nuget.org/packages/HealthNet.Nancy

Handling dynamic HTTP requests instead of hardcoded HTTP requests in JMeter

I'm creating a 50-user load test on a JSF web application.
I recorded a scenario using the JMeter proxy for one user who logs in, does some DB operations, and logs out. The recorded test therefore contains HTTP requests and data that belong specifically to the user used during recording.
When running the test for 50 unique virtual users, the recorded test sends the HTTP requests and data that were captured in the recorded scenario. But in our application, the HTTP requests and data vary depending on the user. How do I handle such situations in JMeter, when the methods called depend on the existence or non-existence of data for a user after logging in?
To be precise, how would I change my test plan to manage dynamic URLs and dynamic data for each virtual user?
Recent versions of JMeter allow you to write the whole parameter body (raw data) from scratch, so you can use variables in this field.
To achieve dynamic URLs, use a Regular Expression Extractor (a post-processor) on a prior request that defines what request will be sent, and use the resulting variable in the HTTP Request's path field.
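For example, a hypothetical extractor setup (the reference name, regex, and path depend entirely on your application's responses):

Regular Expression Extractor (attached to the login request):
    Reference Name:     accountId
    Regular Expression: "accountId":"(\d+)"
    Template:           $1$
    Match No.:          1

HTTP Request path field using the extracted value:
    /account/${accountId}/orders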
If you know what requests each type of user will send, you can use If Controllers that test a thread variable, created by a previous Regular Expression Extractor, and inside each controller add the specific request (an example condition follows below).
If the subsequent request for each user is defined by the server via redirection, just check the "Follow Redirects" option.
See the JMeter Wiki for more examples of how to do this.
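For the If Controller case, the condition (with "Interpret Condition as Variable Expression?" checked) can test a variable set by a prior extractor - here a hypothetical userType variable:

${__jexl3("${userType}" == "new")}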

Enable debug logging when inserting ApexTestQueueItem records via the SOAP API

I'm inserting several ApexTestQueueItem records into an org via the Partner API to queue the corresponding Apex classes for asynchronous testing. The only field I'm populating is ApexClassId. (Steps as per Running Tests Using the API.)
After the tests have run and I retrieve the corresponding ApexTestResult record(s) the ApexLogId field is always null.
For the ApexLogId field, the help documentation gives this description:
Points to the ApexLog for this test method execution if debug logging is enabled; otherwise, null
How do I enable debug logging for asynchronous test cases?
I've used the DebuggingHeader in the past with the runTests() method but it doesn't seem to be applicable in this case.
Update:
I've found that if I add the user who owns the Salesforce session under Administration Setup > Monitoring > Debug Logs as a Monitored User, the ApexLogId will be populated. I'm not sure how to do this via the Partner API, or whether it is the correct way to enable logging for asynchronous test cases.
You've got it right. That's the intended way to get a log.