Using JMeter, how to find how many simultaneous users a web server will handle - testing

I have downloaded JMeter and played around with it. It is working fine, but I have one quick question:
Using JMeter, how can I determine that a given web server efficiently handles n users in a given second or minute?

The answer to that question is the very reason we do performance testing: we primarily want to find out how application response time grows as we increase the number of parallel users.
To find out, you can start with the JMeter Plugins Ultimate Thread Group to gradually add users during a test.
To visualize the results, use the Response Times vs Threads graph, which also comes with JMeter Plugins.
That graph only shows the average response time for a specific number of users, though. To add the time component, use the Composite Graph, in which you can plot both the number of threads (users) and the response time, and watch in real time how response time changes as users are added.
That's where I'd start.
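To make that concrete, you can also compute the same threads-vs-response-time view yourself from the JTL results file after a run. A minimal sketch in Java, assuming a CSV-format JTL with the default `elapsed` and `allThreads` columns (the file name is a placeholder, and the naive comma split assumes sample labels contain no commas):

```java
import java.io.IOException;
import java.nio.file.*;
import java.util.*;

// Averages response time per concurrent-user level from a JMeter JTL (CSV) file.
// Assumes the default CSV header row with "elapsed" and "allThreads" columns.
public class ThreadsVsResponseTime {
    public static void main(String[] args) throws IOException {
        List<String> lines = Files.readAllLines(Path.of("results.jtl")); // placeholder path
        List<String> header = Arrays.asList(lines.get(0).split(","));
        int elapsedCol = header.indexOf("elapsed");     // response time in ms
        int threadsCol = header.indexOf("allThreads");  // active threads at sample time

        Map<Integer, long[]> stats = new TreeMap<>();   // threads -> {sum of ms, sample count}
        for (String line : lines.subList(1, lines.size())) {
            String[] cols = line.split(",");            // naive split; fine for default output
            int threads = Integer.parseInt(cols[threadsCol]);
            long elapsed = Long.parseLong(cols[elapsedCol]);
            long[] s = stats.computeIfAbsent(threads, k -> new long[2]);
            s[0] += elapsed;
            s[1]++;
        }
        stats.forEach((threads, s) ->
            System.out.printf("%d users -> avg %d ms (%d samples)%n",
                              threads, s[0] / s[1], s[1]));
    }
}
```

Plotted or printed this way, the knee in the curve (where average response time starts climbing steeply) is a reasonable first estimate of how many simultaneous users the server handles efficiently.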

Try increasing the number of requests (and the threads that fire them) and see how the response time behaves. Part of the answer is deciding which response time is acceptable to you. Also note that this will not be equivalent to real users, since you are firing requests from a single machine and in a manner not comparable to real users requesting real pages.

Read this comparison; it may help you.

Related

How to calculate response time between elements in a web page using Robot Framework and Selenium

A customer has asked me to automate a test case that measures the response time between clicking an element and the element finishing loading.
Is there a keyword available for this?
The best solution is to measure with JMeter. Timings taken from UI code will vary from machine to machine, but on the API side there is no such difference, so JMeter is the better tool here. That will probably answer your question.
Measuring response time is a widely used technique, so a simple Google search will turn up plenty of other questions and tutorials about it.

What is a good strategy for polling Microsoft Graph API so that I don't get throttled?

My goal is to maintain "real-time" (or as close as possible) information about the email messages sent by a group of users.
My ideal solution would be to periodically query the API for messages by all users in the group. This feature is not (yet?) implemented.
My second choice would be to create subscriptions (https://graph.microsoft.io/en-us/docs/api-reference/v1.0/api/subscription_post_subscriptions) for every member in the group and then request message information after I become aware of an event. The problem is, in practice, I am only allowed to create 20-30 simultaneous subscriptions (Issues to use Webhook for Microsoft Graph API), which might not be enough.
So I'm stuck with polling all the users in a cycle. The main problem with this approach is that I can't find any information on how many requests are "too many", i.e. when I get throttled. I want to maximize the request rate to minimize the time it takes to get through one cycle.
A solution that comes to mind is an adaptive program that slowly decreases the time between requests until it gets throttled, then abruptly adds some time back until a nice balance is found and maintained (see the sketch after this question). That seems like a lot of work, though. Right now I'm working on the assumption that 1 request/second is about the highest I can safely go (0.5 seconds on average per round trip, then a cool-down of another 0.5 seconds).
What is the best way to deal with an unknown throttling limit in general, and Microsoft Graph in particular?
Edit:
While I think the accepted answer is a good response, it might not be suitable for all cases. For instance, if you don't want to use the Office 365 API and you don't mind using beta features, you might check out delta queries (delta tokens), which seem to be designed for real-time syncing with the data.
The only potential downside of the accepted answer is that you still need a subscription for each user you want to track (I think), and there are limits on those. I'm very curious how other people have tackled this problem.
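For what it's worth, the adaptive scheme sketched in the question is essentially AIMD (additive increase, multiplicative decrease), the same idea TCP congestion control uses: speed up gently while requests succeed, back off sharply on a 429. A minimal, hypothetical sketch follows; the user ID, token handling, and tuning constants are placeholders, not values documented by Graph:

```java
import java.net.URI;
import java.net.http.*;

// AIMD pacing: additively increase the request rate while requests succeed,
// multiplicatively back off on HTTP 429. All constants are illustrative placeholders.
public class AdaptivePoller {
    private static final long MIN_DELAY_MS = 100;
    private static final long MAX_DELAY_MS = 60_000;

    public static void main(String[] args) throws Exception {
        HttpClient client = HttpClient.newHttpClient();
        String userId = "some-user-id";  // placeholder
        long delayMs = 1_000;            // start near the 1 req/sec assumption above

        while (true) {
            HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create("https://graph.microsoft.com/v1.0/users/" + userId + "/messages"))
                .header("Authorization", "Bearer <token>") // placeholder
                .build();
            HttpResponse<String> response =
                client.send(request, HttpResponse.BodyHandlers.ofString());

            if (response.statusCode() == 429) {
                // Throttled: multiplicative backoff, honoring Retry-After when the server sends it.
                long serverHintMs = response.headers().firstValue("Retry-After")
                    .map(s -> Long.parseLong(s) * 1000).orElse(0L);
                delayMs = Math.min(MAX_DELAY_MS, Math.max(delayMs * 2, serverHintMs));
            } else {
                // Success: additively shrink the delay, i.e. gently increase the rate.
                delayMs = Math.max(MIN_DELAY_MS, delayMs - 50);
            }
            Thread.sleep(delayMs);
        }
    }
}
```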
Though they are still in Preview, you may want to take a look at Outlook Streaming Notifications. These APIs provide a more robust notification model than simple webhooks. You would still need to establish multiple subscriptions, but I expect you'll see far less latency.

"reasonable" use of web APIs to sync data

My goal is to synchronize a web application with an internal database. The web application has a public API, but in order to fully synchronize the two sources I would need to make around 2,000 separate API calls every time. My instinct tells me that this is excessive and possibly irresponsible, but I lack the experience to know for sure.
In this particular case the web application is Asana, but I've encountered similar situations with other services. Is there any way to know whether you're abusing a service through excessive API calls? I know I'm not going to DoS a company like Asana, but I can't shake the feeling that there must be a better way than making ~150k requests per day.
The only other option I can think of is to update the web service only when I know there's been a change in the database, but I'd lose a lot of capability that way.
I apologize for the subjectivity of this question, but I'm really hoping that someone can explain if there's any kind of etiquette that's expected when using public APIs.
(I work at Asana)
This is an excellent question, or rather set of questions.
You are designing a system that will repeatedly make requests for every object. What will happen as the number of objects grows? Even if your initial request rate were reasonable, this would suffer problems with scalability. A more scalable solution is one that scales with the number of changes in the system. This will also grow over time, but much more slowly - the number of changes a single user can make per day is relatively constant, but the total number of objects they've created over time grows and grows. So my first piece of advice would be to avoid doing things this way, and instead find a way to detect changes and just act on those. It would be interesting to know why you feel you'll lose capability by taking this approach.
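To illustrate the shape of a change-driven sync (as opposed to re-fetching every object): keep a watermark and ask only for things modified since the last pass. The endpoint and the `modified_since` parameter here are hypothetical, and, as noted below, Asana's API does not currently expose a friendly change-detection mechanism, so treat this purely as the pattern to aim for:

```java
import java.net.URI;
import java.net.http.*;
import java.time.Instant;

// Sync only what changed since the last pass, instead of re-fetching every object.
// The endpoint and the "modified_since" parameter are illustrative assumptions,
// not a documented contract of any particular API.
public class IncrementalSync {
    private final HttpClient client = HttpClient.newHttpClient();
    private Instant lastSync = Instant.EPOCH; // first pass fetches everything

    public void syncOnce() throws Exception {
        Instant now = Instant.now();
        HttpRequest request = HttpRequest.newBuilder()
            .uri(URI.create("https://app.example.com/api/tasks?modified_since="
                            + lastSync)) // hypothetical filter parameter
            .build();
        HttpResponse<String> response =
            client.send(request, HttpResponse.BodyHandlers.ofString());
        applyChanges(response.body()); // update the internal database
        lastSync = now;                // next pass only sees newer changes
    }

    private void applyChanges(String json) { /* parse and upsert locally */ }
}
```

The key property is that each pass costs work proportional to the number of changes, not to the total number of objects, which is exactly the scalability argument above.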
Now, I happen to know that the Asana API does not currently provide you with any friendly mechanism to just detect changes in the system. This is a commonly requested feature and we are looking into it, though I unfortunately cannot promise a delivery date. So you might be left with no choice but to poll our system for now.
As for being polite to the API, many service providers set limits on their API usage to prevent accidental or malicious use of the API from impacting the service to their other customers -- Asana is no exception. Sometimes these limits are published, other times not, and there is no standard limit: it all depends on the service. But it is very thoughtful of you to be curious about service limitations.
That said, 150k requests per day is, for the Asana API, kind of a lot. If all of our API users sent us that much traffic, we might be serving more requests per day than Google Web Search, and we're not quite that scalable yet. :) Technically, we might sometimes be able to handle that volume from a single user.
If you must poll, try to poll at intervals of, say, 15 minutes. But please do not poll your entire workspace that often; it's likely to be too much traffic/data. We're working on providing you with a better solution.
If you do happen to make too many requests of the Asana API, you will get back HTTP status code 429 instead of your desired response; you can read more about that here (https://asana.com/developers/documentation/getting-started/errors).
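As a practical note on handling that 429: a client should treat it as a signal to slow down rather than an error to surface. Here is a minimal sketch of a bounded retry with exponential backoff; the URL is a placeholder, and whether this particular API sends a Retry-After header is an assumption, so the code falls back to plain exponential backoff without it:

```java
import java.net.URI;
import java.net.http.*;

// Bounded retry with exponential backoff on HTTP 429.
// The Retry-After header is an assumption; without it we just double the wait.
public class BackoffClient {
    private static final int MAX_ATTEMPTS = 5;

    public static String getWithBackoff(String url) throws Exception {
        HttpClient client = HttpClient.newHttpClient();
        long backoffMs = 1_000;

        for (int attempt = 1; attempt <= MAX_ATTEMPTS; attempt++) {
            HttpResponse<String> response = client.send(
                HttpRequest.newBuilder().uri(URI.create(url)).build(),
                HttpResponse.BodyHandlers.ofString());

            if (response.statusCode() != 429) return response.body();

            // Prefer the server's hint if present, otherwise back off exponentially.
            long waitMs = response.headers().firstValue("Retry-After")
                .map(s -> Long.parseLong(s) * 1000)
                .orElse(backoffMs);
            Thread.sleep(waitMs);
            backoffMs *= 2;
        }
        throw new IllegalStateException("still throttled after " + MAX_ATTEMPTS + " attempts");
    }
}
```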

Stress test our store's checkout process

I want to simulate our whole checkout process under load. This essentially involves running a number of POSTs in sequence, where the client is storing a unique cookie for each sequence that allows the session to be preserved. Can anyone recommend a software or service that meets these conditions?
This sort of thing can be accomplished very easily, effectively, and freely using Apache JMeter. You can either record the journey using JMeter's proxy or simply add the requests manually.
To simulate cookies, add a Cookie Manager to the test plan. For any other tokens or session IDs that need to be correlated, you can use a Regular Expression Extractor.
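For context on what the Cookie Manager is doing for you: each JMeter thread keeps its own cookie jar, so the server's session cookie persists across the POSTs in the journey. The same flow sketched in plain Java looks like this (the URLs and form bodies are placeholders for a real checkout):

```java
import java.net.CookieManager;
import java.net.URI;
import java.net.http.*;

// A sequence of POSTs sharing one cookie jar, so the server-side session survives
// across steps -- the same thing JMeter's HTTP Cookie Manager does per thread.
// All URLs and form bodies below are placeholders for a real checkout flow.
public class CheckoutJourney {
    public static void main(String[] args) throws Exception {
        HttpClient client = HttpClient.newBuilder()
            .cookieHandler(new CookieManager()) // stores and replays the session cookie
            .build();

        post(client, "https://shop.example.com/cart/add", "sku=123&qty=1");
        post(client, "https://shop.example.com/checkout/address", "zip=90210");
        post(client, "https://shop.example.com/checkout/pay", "method=card");
    }

    private static void post(HttpClient client, String url, String form) throws Exception {
        HttpRequest request = HttpRequest.newBuilder()
            .uri(URI.create(url))
            .header("Content-Type", "application/x-www-form-urlencoded")
            .POST(HttpRequest.BodyPublishers.ofString(form))
            .build();
        HttpResponse<String> response =
            client.send(request, HttpResponse.BodyHandlers.ofString());
        System.out.println(url + " -> " + response.statusCode());
    }
}
```

To turn this into a load test in JMeter, each thread in the thread group runs its own copy of the journey, giving each simulated customer an independent session.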
There are lots of options for this kind of test. Free/open-source tools will require a bit more work on your part but are otherwise free. Tools like ours (Load Tester 5) will get the job done much more quickly, but there is a cost for the software. If your organization does not have much experience with load testing and is on a tight schedule, you might want to bring in outside help to meet your deadline and learn the process (we offer services as well!).

Bloomberg Session timeout?

Following the examples in the Bloomberg API v3, I need to start a Bloomberg session to open a service, and then use the service to create a request.
My question is: if my program sends a request, gets the answer, and then after a while needs to send another request, how do I determine whether the session/service is still good to use, or do I need to start another session?
Does it cost much to start a session?
Does it bother Bloomberg's server if I start and stop a session quite often?
Also, when I'm retrieving historical data, what's the proper amount of data to ask for in a single request?
There are many questions here, and the following answers are just my opinion; your best bet is to ask Bloomberg themselves via "Help Help" in your terminal session. Tell the person at the other end that you want your question to go to the API team.
Q: How do I determine if the session is still good?
A: I don't know of any way other than using it and seeing whether an exception occurs. However, I have had sessions stay open for many hours perfectly happily.
Q: Does it cost much to start a session?
A: Bloomberg doesn't give any guidance on this, but compared with the overhead of fetching data it doesn't seem like much.
Q: What is the proper size of data to ask for?
A: I believe that if you ask for a lot, Bloomberg will break the request up for optimal transport, so you should ask for as much as you can in one request; it will be more efficient. Beware of stepping over your data limits, though.
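To illustrate reusing one session across requests, here is a sketch along the lines of the standard blpapi Java examples. Error handling is simplified, and the liveness check is exactly the pragmatic one described above: try to use the session, and restart it if that fails. The host/port and request fields follow the common HistoricalDataRequest pattern and may need adjusting for your setup:

```java
import com.bloomberglp.blpapi.*;

// Reuse one Session/Service for multiple requests; treat a failed send or an
// exception as the signal to tear the session down and start a fresh one.
public class ReusedSession {
    public static void main(String[] args) throws Exception {
        SessionOptions options = new SessionOptions();
        options.setServerHost("localhost"); // typical Desktop API defaults
        options.setServerPort(8194);

        Session session = new Session(options);
        if (!session.start() || !session.openService("//blp/refdata")) {
            throw new IllegalStateException("could not start session or open service");
        }
        Service refData = session.getService("//blp/refdata");

        sendHistoricalRequest(session, refData, "IBM US Equity");
        // ... time passes; try to reuse the same session ...
        try {
            sendHistoricalRequest(session, refData, "MSFT US Equity");
        } catch (Exception e) {
            // The session went stale: the pragmatic fix is to stop and restart it.
            session.stop();
            // re-create the session as above and retry the request
        }
    }

    private static void sendHistoricalRequest(Session session, Service refData,
                                              String security) throws Exception {
        Request request = refData.createRequest("HistoricalDataRequest");
        request.getElement("securities").appendValue(security);
        request.getElement("fields").appendValue("PX_LAST");
        request.set("startDate", "20240101");
        request.set("endDate", "20240201");
        session.sendRequest(request, new CorrelationID());

        // Drain events until the final RESPONSE for this request arrives.
        while (true) {
            Event event = session.nextEvent();
            MessageIterator it = event.messageIterator();
            while (it.hasNext()) System.out.println(it.next());
            if (event.eventType() == Event.EventType.RESPONSE) break;
        }
    }
}
```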