The server should randomly generate a stop time per round and per room
The interface should allow for creating new rooms
The interface should allow users in a room to record a time simultaneously by pushing a button, then show all users' times, sorted by who was closest to the round's stop time
When a round has ended, the timer pauses for 5 seconds and then the server randomly generates a new stop time
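For concreteness, here is a minimal sketch of one way these requirements could fit together on the server side; everything in it (the Room shape, the constants, the logging step) is an illustrative assumption, not part of the original requirements:

```typescript
// Illustrative sketch only: a per-room round loop with a random stop time
// and a 5-second pause between rounds. All names and values are hypothetical.
interface Room {
  id: string;
  stopTimeMs: number;             // the secret stop time for the current round
  guesses: Map<string, number>;   // userId -> time recorded by pressing the button
}

const ROUND_PAUSE_MS = 5_000;     // pause between rounds
const MAX_STOP_TIME_MS = 60_000;  // upper bound for the random stop time

const sleep = (ms: number) => new Promise((r) => setTimeout(r, ms));

async function runRound(room: Room): Promise<void> {
  // A fresh random stop time per round and per room.
  room.stopTimeMs = Math.floor(Math.random() * MAX_STOP_TIME_MS);
  room.guesses.clear();

  // While this sleeps, a button-press handler would fill room.guesses.
  await sleep(room.stopTimeMs);

  // Show everyone's recorded time, sorted by closeness to the stop time.
  const ranking = [...room.guesses.entries()].sort(
    ([, a], [, b]) => Math.abs(a - room.stopTimeMs) - Math.abs(b - room.stopTimeMs),
  );
  console.log(`room ${room.id} results:`, ranking);

  // Pause 5 seconds; the next call to runRound picks a new stop time.
  await sleep(ROUND_PAUSE_MS);
}
```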
I am new to JMeter; I have been working with it for the last month. The problem I am facing is with the graph that shows the active threads over time. What I want to achieve is a linear graph showing that every 2 seconds a new thread enters the application and does whatever it needs to do. My setup is as follows:
I cannot set the loop count to infinite, as each user executes different tasks that can be executed only once. The data cannot be reused to hit the services/tasks again with the same user.
The process is:
Login
Get Requests
Post requests
If I execute my scenario, I get the following graph:
What do I need to do in order to get something like the one below?
You're dealing with a Listener, which means it will plot the first data point only when the first sampler reports its metrics.
If your first request takes 10 seconds, you will see the first dot in the Active Threads Over Time chart at 10 seconds, when 3 users are already online.
So if you want to see a "smooth" arrival of virtual users, you need to add a "synthetic" sampler with a response time of a couple of milliseconds before your other samplers (for example, the Dummy Sampler would be a perfect match); this way the listeners will take it as the starting point.
Demo:
I have built a React Native app/game where a user has 30 minutes to finish a task. When they start the task, the 30-minute countdown begins and it is registered in the DB (Firebase) that the user is "in play". When they complete the task (or the 30 minutes run out), the DB is updated again to "not in play".
The countdown function runs on the phone, not on the server.
The problem is that if the user exits the app, the counter on the phone stops (the user is no longer "in play") but the DB does not know about it. There appears to be no "user has exited the app" event/handler that I can use to let the DB know that the user has quit.
I was thinking maybe the countdown logic should run on the backend, but I can't think how. Any ideas?
Currently there is no way to handle app termination in React Native, so I think your best shot is to implement it on the backend.
How about this: when the user starts, you save the start time, and if the difference between now and when the countdown was started exceeds 30 minutes, the user is no longer "in play".
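A minimal sketch of that idea, assuming the modular Firebase JS SDK and a hypothetical players/{uid}/startedAt path (both are illustrative, not from the original question):

```typescript
// Sketch only: store the start time, then derive "in play" from it.
// Paths and field names are hypothetical.
import { getDatabase, ref, set, get, serverTimestamp } from "firebase/database";

const PLAY_WINDOW_MS = 30 * 60 * 1000; // 30 minutes

async function startTask(uid: string): Promise<void> {
  // Record when the countdown started, using the server's clock.
  await set(ref(getDatabase(), `players/${uid}/startedAt`), serverTimestamp());
}

async function isInPlay(uid: string): Promise<boolean> {
  const snap = await get(ref(getDatabase(), `players/${uid}/startedAt`));
  const startedAt = snap.val();
  if (typeof startedAt !== "number") return false;
  // "In play" simply means the 30-minute window hasn't elapsed yet,
  // regardless of whether the app is still open on the phone.
  return Date.now() - startedAt < PLAY_WINDOW_MS;
}
```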
One way to detect that the user has left the game would be with Firebase's onDisconnect handler. With this call you register a write operation on the database that is executed when the server detects that the client is gone.
The server can detect this in two ways:
If the client disconnects cleanly, it sends a message to the server that it is disconnecting and the server runs the disconnect handlers for that client straight away.
If the client disconnects in another way, the server will detect that the client is gone when the socket times out, which may take a few minutes.
So in your case you could use an onDisconnect handler to either remove the player from the game, or otherwise mark them as "gone".
The only problem with this approach is that dirty disconnects may take a few minutes, which might be too long for your scenario.
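A minimal sketch of registering such a handler with the modular Firebase JS SDK; the games/{gameId}/players/{uid} path and the status values are just examples:

```typescript
// Sketch only: mark the player as "gone" (or remove them) when the
// server notices the client has disconnected. Paths are hypothetical.
import { getDatabase, ref, onDisconnect, set } from "firebase/database";

async function joinGame(gameId: string, uid: string): Promise<void> {
  const playerRef = ref(getDatabase(), `games/${gameId}/players/${uid}`);

  // Queue a write that the *server* performs once it detects the disconnect.
  await onDisconnect(playerRef).update({ status: "gone" });
  // Alternatively: await onDisconnect(playerRef).remove();

  // Mark the player as present for as long as the connection lives.
  await set(playerRef, { status: "in play" });
}
```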
An alternative would be to have the client write a message into the database periodically to signify that it's still here, e.g. a lastUpdated timestamp.
You can then, in any code that reads the data, use that timestamp to detect whether the player was still recently playing, and consider them "gone" after a period that works well for your game. This code can then remove the player from the database.
This code can run in a server-side component if you want, but in the past I've also run this type of code in the client and then used (server-side) security rules to ensure it can only remove users that are "gone".
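A minimal sketch of the heartbeat variant, again with a hypothetical players/{uid} path; the interval and staleness threshold are placeholders to be tuned for the game:

```typescript
// Sketch only: the client periodically refreshes a lastUpdated timestamp,
// and readers treat players whose timestamp is too old as "gone".
import { getDatabase, ref, update, serverTimestamp } from "firebase/database";

const HEARTBEAT_MS = 30_000;   // how often the client checks in
const STALE_AFTER_MS = 90_000; // how old a heartbeat may be before "gone"

function startHeartbeat(uid: string): () => void {
  const playerRef = ref(getDatabase(), `players/${uid}`);
  const id = setInterval(
    () => update(playerRef, { lastUpdated: serverTimestamp() }),
    HEARTBEAT_MS,
  );
  return () => clearInterval(id); // call this when the task completes
}

// Reader side (client or server): decide whether the player is still playing.
function isStillPlaying(lastUpdated: number): boolean {
  return Date.now() - lastUpdated < STALE_AFTER_MS;
}
```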
Here is my Test Scenario:
First, I am generating a load which includes only login requests (say, for 1000 users). I am using the "Ultimate Thread Group" and a "Constant Throughput Timer" in my script, with a Constant Throughput Timer value of 120/sec. I want to run this test for five or ten minutes, and the load will be held for 3 minutes.
During this test (while the load is held), I am sending another set of login requests from a different machine (say, for 100 users). I want to measure the response time of these 100 users' logins generated from the different machine.
But my requirement is: while I'm sending the 100 users' login requests, the sessions from my previous 1000 users' login requests should still be alive on the server. I've also checked "Use KeepAlive" in my login sampler.
So, how can I achieve this? How can I be sure that all my previous login request sessions are held on the server?
Test Script:
Image 1: Screenshot for Ultimate Thread Group
Image 2: Screenshot for Constant throughput timer
You need to consider the following fields of the Ultimate Thread Group:
Shutdown Time for the 1000 threads (ramp-down time)
Hold Load For, for the 1000 threads
Initial Delay for the 100 threads (the time between starting the script and the first server hit)
Startup Time for the 100 threads: make sure the 100 threads ramp up before the 1000 threads shut down. Each thread starts (Startup Time / Start Threads Count) seconds after the previous one.
You need to configure these values in such a way that the first 1000 threads are not shut down until all of the next 100 users are active.
You can also use the Active Threads Over Time graph, provided within the Ultimate Thread Group, to see how many threads will be active.
P.S. Don't confuse the number of threads with the number of requests; each thread will create multiple requests during the seconds set in the "Hold Load For" field.
On a penny auction site, there are a few fundamental requests that happen over time, namely:
Bidding request (when someone places a bid)
Timer updates
Leading bidder updates
I am trying to understand long polling a bit better and I'm stuck on this. As far as I know, long polling is there to reduce Ajax requests, i.e. by only having ONE for visual updates and ONE for actions. So, therefore:
The bidding request (to place bids) will remain as is, but all the visual update requests will be combined into one "long poll" request, right?
If the user connects to the site for the first time, he will request the current state of the page, also passing in what he was last told the state of the page was. The server will compare that with what the state should be, and if they are different, it will pass the new state back to the user, correct?
When passing the state back, the LONG POLL will effectively stop, the screen will be updated, and a new LONG POLL will be started, correct?
Is this understanding correct so far?
If that is so, how will this in any way decrease the number of requests to the backend if the server still has to compare the state?
How will this help if the page is opened in 50 different windows by one user?
Long polling is used to simulate a connection in which the server pushes data to the client (rather than what is actually happening - which is the client requesting the information from the server). Basically the client requests data from the server, but rather than returning data to the client immediately the server 'holds' the request open - it can then return data to the client at a later time point - which can be used to simulate the server updating the client in 'real time'.
So in your example of an auction site, the client might long-poll the server for an item's bid amount; the server would hold this request open, and when the bid value on that item changes it can return the updated amount to the client. This can be used to give the impression of the server updating the client as the bid amount changes.
As far as requests to the server go, this very much depends on how it is implemented. Obviously, using long polling will reduce the number of requests made to the server compared with, say, getting the client to issue a new 'standard' request every second to check for updates. Multiple instances of the client will still result in multiple requests to the server, and moreover the server still has to deal with the overhead of holding the long-polling requests open and responding to them when appropriate. Apparently different servers, and server architectures, deal with this more effectively than others.
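As a concrete illustration, here is a minimal TypeScript sketch of the client side of such a loop, assuming a hypothetical /poll endpoint that holds the request open until the auction state changes (or its own timeout elapses, returning 204); the endpoint, payload shape, and version scheme are all assumptions:

```typescript
// Sketch only: one long-poll loop covering all "visual" updates
// (timer, leading bidder, price). Endpoint and payload are hypothetical.
interface AuctionState {
  version: number;       // lets the server tell whether the client is stale
  timeLeft: number;
  leadingBidder: string;
  price: number;
}

async function longPollLoop(render: (s: AuctionState) => void): Promise<void> {
  let version = 0;
  while (true) {
    try {
      // The server holds this request open until its state differs from
      // `version`, or until its own timeout fires (returning 204).
      const res = await fetch(`/poll?version=${version}`);
      if (res.status === 200) {
        const state: AuctionState = await res.json();
        version = state.version;
        render(state); // update the screen, then immediately poll again
      }
    } catch {
      // Network hiccup: back off briefly before reconnecting.
      await new Promise((r) => setTimeout(r, 1_000));
    }
  }
}
```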
Hello. A Windows Phone application needs to connect to a server and get messages from it. This is done using WCF and long polling on the server, with a 3-minute timeout defined on the server. The call from Windows Phone is made using HttpWebRequest.
The problem is that Windows Phone devices have a 60-second timeout for GET requests (the emulator has a different value, greater than 3 minutes).
Currently I can't decrease the server timeout, and making a new GET request after the 60 seconds doesn't receive any more messages.
Does anyone have an idea?
Thanks
I don't think leaving a connection open is a good idea on mobile devices. I'm assuming that's what you're doing. In my app, I would just poll whenever needed by creating a new HttpWebRequest. But it made sense to do this in my app, because I would be updating train arrival status every 40 seconds.
If you're trying to pull data on a given schedule, put a timer in and just call the webserver every 3 minutes or whatever the requirement is.
If you want to be able to check things (when the app is closed) or if there's rarely fresh data on the server, then you'd need to implement a Push mechanism.
Update: Here's a good article on dealing with the timeout issue - http://blog.xyzzer.me/2011/03/10/real-time-client-server-communication-on-windows-phone-with-long-polling/
Update 2: What if you arranged it so that you have cascading connections? What I mean is: since you can't go beyond 60 seconds per connection, you could write a class that houses two connections, and once one of them is about to time out (say, several seconds before), you start opening the other connection. You can pick the timing so that there's at most 5 seconds of overlap between them. This way you could have your always-open connection; see the sketch below.
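A rough sketch of that cascading idea (the original would use HttpWebRequest in C#; this TypeScript version only illustrates the timing, and the URL, lifetime, and overlap values are placeholders):

```typescript
// Sketch only: open a new long-poll connection ~5 seconds before the
// current one is due to hit the 60-second client timeout, so there is
// always at least one connection open from the server's point of view.
const CONNECTION_LIFETIME_MS = 60_000;
const OVERLAP_MS = 5_000;

async function holdOne(pollUrl: string, onMessage: (m: string) => void): Promise<void> {
  const res = await fetch(pollUrl); // server holds this open, under 60 s
  if (res.ok) onMessage(await res.text());
}

function startCascading(pollUrl: string, onMessage: (m: string) => void): void {
  // Open a connection, and schedule its replacement shortly before timeout.
  void holdOne(pollUrl, onMessage);
  setTimeout(() => startCascading(pollUrl, onMessage), CONNECTION_LIFETIME_MS - OVERLAP_MS);
}
```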
Also see what these guys have done with the GChat app; they have their source code available at this link. This may provide a more proper design.