JMeter and end-to-end testing - scripting

I have a jmx script that is being used to perform functional and load testing.
The script tests, using 1 user and then multiple concurrent threads, a simple order management system that does the following four things:
Load the System
Login
Order Placement (select a product, add to cart, check out, submit order till Order Confirmation Page)
Logout
These become the steps in the jmx script.
When the script is executed, I see no major issues. JMeter does not report any errors as it's gathering performance metrics and processing times.
However, post-testing, when we check the database (and the system itself, outside of JMeter), the orders that should have been created during the JMeter run are not there.
I assumed that when JMeter logs in as a dummy user and performs transactions through the UI, those transactions make their way through to the database, i.e. there is a transaction that goes end-to-end. But it appears that this is not the case here.
Any ideas as to what might be causing this?
Does JMeter actually push the UI actions all the way through to the back-end?
Any help would be appreciated.

First, JMeter is not a browser; it only reproduces the HTTP traffic exchanged with the server.
Second, are you adding assertions to check that the responses are OK and contain what they should? (A sketch follows below.)
Third, you say you use 1 user and N threads. If by this you mean you have only 1 user account that you run across multiple threads, then your test is wrong, as it will provoke caching, transaction contention, etc.
I suggest you first check your script with one user and a View Results Tree listener. Then check your users by running them all with a low number of threads.
Finally, run the real load test.
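
On the assertions point: a Response Assertion is usually enough, but if you need more logic a JSR223 Assertion works too. A minimal Groovy sketch, where the 'Order Confirmation' marker text is an assumption about what your confirmation page actually returns:

```groovy
// JSR223 Assertion (Groovy): fail the sampler if the response does not
// contain the expected confirmation marker.
// 'Order Confirmation' is an assumption - use whatever your app returns.
def body = prev.getResponseDataAsString()   // 'prev' is the previous SampleResult
if (!body.contains('Order Confirmation')) {
    AssertionResult.setFailure(true)
    AssertionResult.setFailureMessage('Expected order confirmation, got: ' + body.take(200))
}
```

If this assertion starts failing, you will see exactly which request never reached the order-creation step, which is usually the fastest way to find out why the orders never hit the database.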

Related

How to use JMeter with a timer

I am having a problem with JMeter: using it with a Timer causes JMeter to crash.
The case is: I want to create a load of requests to be executed every half hour.
Is that something you can do with JMeter?
Every time I try it, JMeter keeps loading, hangs, and requires a shutdown.
If you want to leave JMeter up and running forever, make sure to follow JMeter Best Practices, as certain test elements might cause memory leaks.
If you need to create "spikes" of load every 30 minutes, it might be a better idea to use your operating system's scheduling mechanism to execute "short" tests every half an hour (a sample cron entry follows the list), like:
Windows Task Scheduler
Unix cron
MacOS launchd
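
For example, a crontab entry along these lines would fire a non-GUI JMeter run twice an hour (the paths here are assumptions, adjust them to your installation):

```
# Run a short JMeter test at minute 0 and 30 of every hour, in non-GUI mode.
# /opt/jmeter and the .jmx/.jtl paths are assumptions.
0,30 * * * * /opt/jmeter/bin/jmeter -n -t /home/user/spike-test.jmx -l /home/user/results/spike.jtl
```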
Or, even better, go for a Continuous Integration server like Jenkins: it has a powerful trigger mechanism that allows defining flexible criteria for when to start the job, and you can also benefit from the Performance Plugin, which automatically marks a build as unstable or failed depending on test metrics and builds performance trend charts.
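
A minimal declarative-pipeline sketch of that idea, assuming the Performance Plugin is installed and jmeter is available on the agent's PATH:

```groovy
// Jenkinsfile sketch: run a short JMeter test roughly every 30 minutes
// and publish the results via the Performance Plugin's perfReport step.
pipeline {
    agent any
    triggers { cron('H/30 * * * *') }           // roughly every half hour
    stages {
        stage('Load test') {
            steps {
                // assumes jmeter is on PATH and spike-test.jmx is in the workspace
                sh 'jmeter -n -t spike-test.jmx -l results.jtl'
            }
        }
    }
    post {
        always {
            perfReport sourceDataFiles: 'results.jtl'
        }
    }
}
```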

Pentaho Logging specify Job or Trans for each line

I am running Pentaho Kettle 6.1 through a java application. All of the Pentaho logs are directed through the java app and logged out into the same log file at the java level.
When a job starts or finishes, the logs indicate which job is starting or finishing, but while a job is in the middle of running, the log output only indicates the specific step it is on, without any indication of which job or transformation is executing.
This causes confusion and is difficult to follow when there is more than one job running simultaneously. Does anyone know of a way to prepend the name of the job or trans to each log entry?
Not that I know of, and I doubt there is, for the simple reason that the same transformation/job may be split to run on more than one machine, by more than one user, and/or launched in parallel in different job hierarchies of callers.
The general answer is to log to a database (right-click anywhere, Parameters, Logging, define the logging table and what you want to log). All the logging will be copied to a database table together with a channel_id. This is a unique number attributed to each "run" that links together all the logging information coming from all the dependent jobs/transformations. You can then view this info with a SELECT...WHERE channel_id=...
However, your case seems to be simpler. Use the database logging with a log interval of, say, 2 seconds, and run SELECT TRANSNAME/JOBNAME, LOG_FIELD FROM LOG_TABLE continuously on your terminal.
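A sketch of that watch query, reusing the placeholder LOG_TABLE name from above; the column names follow Kettle's default log-table layout, so check them against what you actually configured:

```sql
-- LOG_TABLE is a placeholder; TRANSNAME, LOG_FIELD, CHANNEL_ID and ID_BATCH
-- follow Kettle's default log-table layout and may differ in your setup.
SELECT TRANSNAME, LOG_FIELD
FROM LOG_TABLE
WHERE CHANNEL_ID = '...'      -- or drop the filter to watch every run
ORDER BY ID_BATCH;
```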
You can also follow a specific job/transformation by logging it to a dedicated table, but this means you know in advance which job/transformation to debug.

JMeter Performance Testing - Difference in result count from API and from MongoDB using JSR223 Sampler

I have created a performance test suite using JMeter 4.0, and I have multiple test cases divided into 2 fragments that I am calling from a single thread. Following are the types of test cases in the 2 fragments.
Test Fragment 1: CRUD operations on User
Test Fragment 2: Getting user counts from MongoDB and the API and comparing them
Test cases from Test Fragment 1 run first, multiple times based on the thread count, and then the test cases from the second fragment run.
Test Fragment 2 contains these two test cases:
TC1: Fetching the user count from MongoDB (using a JSR223 Sampler)
TC2: Fetching the user count using the API
When the 2nd Test Fragment runs, the test case that fetches the user count from MongoDB gives a different count than the test case that fetches the count directly via the API. The API takes time to update data in MongoDB, as there may be some layers that delay writes to the database (I am not sure which layers exist or why they take time, exactly). The script works fine when I run it for a single user, so I doubt anything is wrong with the script itself.
Could someone please suggest what approach we can use here to get the same count?
a. Is it a good approach to add timers/delays, or can something else be used?
b. If we use timers/delays, do they affect the performance test report as well? Are those delays going to add up in our performance test reports?
It might be the case that you're facing a race condition, i.e. while you're performing the read operation from the database with one thread, the number has already been updated by another thread.
The options are in:
Amend your queries so your DB checks would be user-specific.
Use a Critical Section Controller to ensure that your DB check is executed by only 1 thread at a time
Use the Inter-Thread Communication plugin to implement synchronisation across threads based on certain conditions; the latter can be installed using the JMeter Plugins Manager. A polling-wait sketch follows below.
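
Regarding the timers/delays question specifically: if the root cause is simply delayed persistence, a bounded polling wait is usually safer than a fixed timer, and you can filter that sampler out of your report so it doesn't skew the metrics. A minimal JSR223 Sampler sketch in Groovy, assuming the MongoDB Java driver is on JMeter's classpath; the connection string, database/collection names and the apiUserCount variable are all hypothetical:

```groovy
// JSR223 Sampler (Groovy): poll MongoDB until the count matches the count
// the API test case stored earlier, or a timeout expires.
// Connection string, database and collection names are assumptions.
import com.mongodb.client.MongoClients

def expected = vars.get('apiUserCount') as long        // stored by the API test case
def deadline = System.currentTimeMillis() + 30000      // poll for at most 30 s
def client = MongoClients.create('mongodb://localhost:27017')
try {
    def users = client.getDatabase('appdb').getCollection('users')
    long actual = users.countDocuments()
    while (actual != expected && System.currentTimeMillis() < deadline) {
        sleep(500)                                     // small poll interval
        actual = users.countDocuments()
    }
    if (actual != expected) {
        SampleResult.setSuccessful(false)
        SampleResult.setResponseMessage("DB count ${actual} never matched API count ${expected}")
    }
} finally {
    client.close()
}
```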

How can JMeter identify what to optimize in a website?

I'm new to the website performance testing field and will be using JMeter. After playing with it, I am still having trouble identifying what to optimize in a website's load time.
I'm currently still learning about load testing - who should I give the performance report to? Developers/programmers, or the network department? Examples of errors I usually get are 502 errors or timeouts.
Thanks in advance.
JMeter cannot identify anything; all it does is execute HTTP requests and measure response times. Ideally it should be you who takes the raw JMeter results, performs the analysis, and creates the final report highlighting the current problems and bottlenecks (and, ideally, what needs to be done to fix them).
Consider the following checklist:
Your load test needs to be realistic; a test which doesn't represent real-life application usage does not make sense. So make sure your JMeter test carefully represents real users in terms of cookies, headers, cache, downloading of images, styles and scripts, virtual user group distribution, etc.
Increase and decrease the load gradually. This way you will be able to correlate metrics such as transactions per second and response time with the increasing/decreasing number of users, so make sure you apply reasonable ramp-up and ramp-down settings (a sketch of parametrizing these follows the list).
Monitor the health of the application under test. The reason for an error may be as simple as a lack of hardware resources (CPU, RAM, disk, etc.). This can be done using e.g. the PerfMon JMeter Plugin.
Do the same for the JMeter instance(s). JMeter measures response time from "just before sending the request" until "the last response byte arrives", so if JMeter is not able to send requests fast enough, you will see high response times without any other visible reason.
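
For the ramp-up point above, it is common to parametrize the Thread Group via JMeter properties so the same plan can be run at different load levels without editing it; a sketch, assuming the Thread Group fields read ${__P(users)} and ${__P(rampup)}:

```
# A sketch: 50 users ramped up over 300 s, non-GUI mode;
# -e -o additionally generates the HTML dashboard report.
jmeter -n -t plan.jmx -Jusers=50 -Jrampup=300 -l results.jtl -e -o report/
```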
Website load time is a combination of many factors, including browser rendering time, script execution time, resource download time, etc. You can't use JMeter to validate the front-end time; you can achieve that using Chrome developer tools and similar tools available for each browser. Refer to https://developers.google.com/web/fundamentals/performance/
JMeter is primarily used for measuring protocol-level performance, to ensure that your server can process heavy workloads when it is subjected to real-time stress conditions from several customers. It won't compute JavaScript execution time or HTML parsing time. Your JMeter script should be written in such a way that it emulates the logic of your JavaScript execution and other presentation logic to form the request inputs and the subsequent requests.
Your question is way too open-ended, and you might want to start with a mentor who can help you with the whole process and train you.
Also, the mindsets for functional testing and performance testing are totally different. A lot of key players in the performance area have suggested measuring load time as part of the functional testing effort, while the majority of the server-side performance is validated by the performance team.

Is it a good practice to check the state of a database in acceptance tests?

Consider that one is supposed to write automated acceptance tests for an e-commerce system. For example, you want to check that when a customer completes the checkout operation, a new order linked to his account is registered in the system. One thing you can check, of course, is that some UI message like 'Order completed successfully' is displayed. However, that does not guarantee that the order is in fact persisted in the DB. My question is whether it is OK to additionally verify, by querying the DB, that the order was indeed saved. Or should I verify that implicitly in another acceptance test, e.g. by checking the list of orders (which should not be empty after completing the checkout operation successfully) before performing the checkout operation?
you can check that there are some UI messages displayed like 'Order completed successfully'. However that does not guarantee that the order is in fact persisted in the DB.
Actually, it depends. If we're talking Selenium, they do suggest such database validation:
Another common type of testing is to compare data in the UI against the data actually stored in the AUT’s database. Since you can also do database queries from a programming language, assuming you have database support functions, you can use them to retrieve data and then use the data to verify what’s displayed by the AUT is correct.
However, testing the acceptance criteria should be done only after they are clear enough. If there is no such specific E2E requirement, you shouldn't include this DB check in these tests. You could put it at your functional and integration levels, where the SUT architecture allows such a black-box approach. If we consider the typical N-tier architecture (UI-Backend-DB), your black box will be the middleware: input from the UI, [skip everything in between], output is in the DB.
This, of course, will introduce a bit more complexity, and your tests will become more brittle (which is especially true for the UI ones). Furthermore, you should think about expensive objects and keeping/disposing of them properly (e.g. a DB connection per suite run); a sketch of such a check follows.
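A minimal sketch of such a check in Groovy, assuming plain JDBC access to the application's database; the table, columns, ids and connection details are all assumptions:

```groovy
// After the UI reports 'Order completed successfully', verify that the
// order row was actually persisted. All names here are assumptions.
import groovy.sql.Sql

def customerId = 42   // id of the test customer (assumption)
def db = Sql.newInstance('jdbc:postgresql://localhost:5432/shop',
                         'test_user', 'secret', 'org.postgresql.Driver')
try {
    def row = db.firstRow(
        'SELECT status FROM orders WHERE customer_id = ? ORDER BY created_at DESC',
        [customerId])
    assert row != null : 'no order was persisted for this customer'
    assert row.status == 'COMPLETED'
} finally {
    db.close()
}
```

Keeping one such connection per suite run (opened in a setup hook, closed in teardown) keeps the brittleness and cost mentioned above under control.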
IMHO you should have all this covered in your auto tests:
it is OK to additionally verify by querying the DB that the order was indeed saved. Or should I verify that implicitly in another acceptance test, e.g. by checking the list of orders (which should not be empty after completing the checkout operation successfully) before performing the checkout operation
And the only question would be where to put them.