I am establishing the SLA for my web application. To do this I want a rollup of the speed of the entire page load -- a single value reported in milliseconds (ms).
I need to be able to obtain this value programmatically (from a command line or script, not embedded in a UI)
Then I will push this single value into an existing metrics engine for reporting / graphing (statsd or collectd for those metrics people)
I would like the value to be representative of how a client (Chrome, Firefox) would perform. I am less interested in the rendering time of the client and more interested in the total response time: from the initial call through the nested app calls -- but a value including rendering time would be fine.
I believe modern clients have APIs which will return an aggregated value -- but I cannot find an API which will provide a rollup value for the speed of the entire page load.
Background: When a client browser performs an HTTP GET, the server responds with a page framework that references other resources, which the client browser must then issue further GETs for.
The response time value I am looking for covers the span from the initial client browser request through all the nested and embedded GETs.
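To make this concrete, the kind of thing I have in mind looks roughly like the sketch below (assuming Selenium driving headless Chrome, the W3C Navigation Timing API, and a Python StatsD client; the URL and metric name are placeholders):

    # Hedged sketch: measure full page-load time with headless Chrome via Selenium
    # and push the single rollup value (ms) to StatsD.
    from selenium import webdriver
    from selenium.webdriver.chrome.options import Options
    from statsd import StatsClient  # pip install statsd

    options = Options()
    options.add_argument("--headless")
    driver = webdriver.Chrome(options=options)
    try:
        driver.get("https://example.com/")  # placeholder URL
        # Navigation Timing: total time from navigation start to the load event,
        # which includes all nested/embedded GETs (and some rendering time).
        load_ms = driver.execute_script(
            "return window.performance.timing.loadEventEnd"
            " - window.performance.timing.navigationStart;"
        )
        print(load_ms)  # single value in milliseconds
        StatsClient("localhost", 8125).timing("page.load_time", load_ms)
    finally:
        driver.quit()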
Cheers!
Overview
I’m currently building a prototype to track and control a fleet of drones.
The prototype consists of a service and a web app. In the web app, the location of each drone is displayed in real-time on a map and the user can issue basic commands to each of these drones.
The service is automated and can also issue commands to each of the drones at random times when certain conditions occur.
I am using HiveMQ (an MQTT broker) to facilitate communication between drones, the web app and the service. The web app and the service are both subscribed to the 'telemetry' topic to receive real-time data about the network of drones. The broker will store the telemetry data for each drone directly into a database through the use of HiveMQ's extension functionality.
Specific commands can only be executed if certain criteria are met.
For example: To issue an 'execute mission' command to a drone the service or the web app will make a call to an API. The API will:
Check the drone is not currently on a mission (drone status value must be idle)
Check weather conditions are acceptable in the area the mission is to occur
(Note by 'mission' I mean a drone flies to a series of set locations autonomously).
If conditions aren't met a response indicating this will be returned to the requester (web app or service). If conditions are met the API will issue the command to the appropriate drone via the MQTT broker and send a response to the requester.
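For illustration, the command path might look something like the sketch below (a minimal example using paho-mqtt; the broker address, topic layout, and payload format are my assumptions, not part of the actual prototype):

    # Sketch of the API issuing a command to a drone over MQTT (paho-mqtt).
    # Broker address, topic names, and payload schema are placeholders.
    import json
    import paho.mqtt.client as mqtt

    def issue_execute_mission(drone_id: str, mission: dict) -> None:
        client = mqtt.Client()
        client.connect("broker.local", 1883)   # placeholder for the HiveMQ broker
        client.loop_start()                    # background loop so QoS 1 publish completes
        payload = json.dumps({"command": "execute_mission", "mission": mission})
        # Assumes each drone subscribes to its own command topic, e.g. drones/<id>/commands
        info = client.publish(f"drones/{drone_id}/commands", payload, qos=1)
        info.wait_for_publish()
        client.loop_stop()
        client.disconnect()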
Requirements
I need a storage mechanism that meets the following criteria:
I need to ensure that a race condition does not occur between the web app and the service. That is, if the web app is in the middle of issuing a command to a drone, a request made by the service during that time should be automatically rejected.
Drone status is not kept in sync between the service and the web app; as a result, they need a synchronized point at which to check a drone's status.
Drones will update their status every second, and API calls to issue commands will be made every 10 - 30 seconds. There will be 5 drones in this prototype, but I would like a solution that can scale to 50 drones.
Considered Solution
My considered solution is a relational database, using a separate table with a 'request_lock' field protected by a row-level lock.
When an API call is made, it checks this field: if it is true, the request is rejected. If it is false, the API sets the field to true, performs the necessary condition checks, and then sets 'request_lock' back to false once the command has reached the drone.
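As a sketch of that idea (assuming PostgreSQL and psycopg2; the table and column names are made up), the check-and-set can be done as a single atomic UPDATE so the web app and the service can never both acquire the lock:

    # Hedged sketch: acquire/release a per-drone request lock with one atomic UPDATE.
    # Table and column names are illustrative only.
    import psycopg2

    def try_acquire_lock(conn, drone_id: int) -> bool:
        with conn.cursor() as cur:
            cur.execute(
                "UPDATE drone_locks SET request_lock = TRUE "
                "WHERE drone_id = %s AND request_lock = FALSE",
                (drone_id,),
            )
            conn.commit()
            return cur.rowcount == 1  # True only for the single caller that won the lock

    def release_lock(conn, drone_id: int) -> None:
        with conn.cursor() as cur:
            cur.execute(
                "UPDATE drone_locks SET request_lock = FALSE WHERE drone_id = %s",
                (drone_id,),
            )
            conn.commit()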
I am concerned the status update frequency from each drone does not fit a relational database model and won't scale well. Am I on the right track, or should I be looking to include a NoSQL database in some way to handle status updates?
Thank you to anyone who takes the time to answer.
There are a lot of questions here, so I'll try to pick what seems to be most important:
I am concerned the status update frequency from each drone does not fit a relational database model ..
Should I use a relational or non-relational database?
First, let's calculate the maximum number of drone status updates, per second.
Drones will update their status every second, and API calls to issue commands will be made every 10 - 30 seconds. There will be 5 drones in this prototype, but I would like a solution that can scale to 50 drones.
50 drones * 1 drone-update per second = 50 drone-updates per second
50 drones * (1 command per 10 seconds, worst case) = 5 drone-commands per second
So, can a relational database handle ~55 queries per second?
Yes. Assuming reasonable query complexity, this is within the ability of a traditional relational database. I would not expect the database to need extraordinary system resources, either.
If you'd like to confirm this level of performance with a benchmark, I'd recommend a tool like pgbench.
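If you prefer to test against your actual schema rather than pgbench's, a very rough sanity check is also easy to script (a sketch assuming PostgreSQL and psycopg2; the drone_status table and DSN are hypothetical):

    # Rough sanity check: how many simple status updates per second can one connection do?
    # Assumes PostgreSQL 9.5+ and psycopg2; the table and DSN are placeholders.
    import time
    import psycopg2

    conn = psycopg2.connect("dbname=drones")  # placeholder DSN
    with conn, conn.cursor() as cur:
        cur.execute(
            "CREATE TABLE IF NOT EXISTS drone_status "
            "(drone_id int PRIMARY KEY, status text, updated_at timestamptz)"
        )

    start, writes = time.time(), 0
    while time.time() - start < 10:  # run for ten seconds
        with conn, conn.cursor() as cur:  # one committed transaction per update
            cur.execute(
                "INSERT INTO drone_status VALUES (%s, 'idle', now()) "
                "ON CONFLICT (drone_id) DO UPDATE SET status = EXCLUDED.status, "
                "updated_at = EXCLUDED.updated_at",
                (writes % 50,),
            )
        writes += 1

    print(f"{writes / 10:.0f} status writes per second")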
I have a REST backend API, and the front end will call the API to get data.
I was wondering how a REST API handles continuous data updates. For example,
in Jenkins, if we execute a build job, we can see the continuous log output on the page until the job finishes. How does REST accomplish that?
Jenkins will just continue to send data. That's it. It simply carries on sending (at least that's what I'd presume it does). Normally the response contains a header field indicating how much data the response contains (Content-Length). But this field is not necessary. The server can omit it. In such a case the response body ends when the server closes the connection. See RFC 7230:
Otherwise, this is a response message without a declared message body length, so the message body length is determined by the number of octets received prior to the server closing the connection.
Another possibility would be to use the chunked transfer encoding. The server then sends the body as a series of chunks, each prefixed with its own size, and terminates the response by sending a zero-length last chunk.
WebSockets would be a third possibility.
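As a small illustration of the streaming idea (a sketch using Flask, which is my assumption for demonstration only; Jenkins itself does not use it): a generator response is sent to the client without a Content-Length, so on HTTP/1.1 it goes out with chunked transfer encoding.

    # Minimal sketch: stream a "log" to the client without a Content-Length.
    # Flask is used purely for illustration here, not what Jenkins does internally.
    import time
    from flask import Flask, Response

    app = Flask(__name__)

    @app.route("/log")
    def log():
        def generate():
            for i in range(10):
                yield f"line {i}\n"   # each yield is sent to the client as it is produced
                time.sleep(1)
        return Response(generate(), mimetype="text/plain")

    # A client can consume it incrementally, e.g. with the requests library:
    #   for line in requests.get("http://localhost:5000/log", stream=True).iter_lines():
    #       print(line)

    if __name__ == "__main__":
        app.run()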
I was searching for an answer myself and then the obvious solution struck me: to see what type of communication a service is using, you can simply view it from the browser side using Developer Tools.
In Google Chrome it will be F12 -> Network.
In the case of Jenkins, the front end sends AJAX requests to the backend for data:
every 5 seconds on the Dashboards page
every second during a Pipeline run (the Console Output page that you mentioned).
I have also checked the approach in AWS. When checking the status of instances (example: Initializing... , Booting...), it queries the backend every second. It seems to be a standard interval for its services.
Additional note:
When running an AWS Remote Console, though, it first sends requests for the remote console instance status (the backend answers with { status: "BOOTING" }, etc.). After the backend returns the status as "RUNNING", it starts a WebSocket session between your browser and the AWS backend (you can notice it by applying the WS filter in Developer Tools).
At that point it is no longer a REST API but WebSockets, which is a different (stateful) protocol.
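For reference, the polling half of that pattern is easy to reproduce (a sketch using the requests library; the URL and the JSON shape are placeholders based on what the browser shows, not a documented AWS API):

    # Sketch of the polling pattern seen in the browser: ask the backend for status
    # every second until it reports RUNNING. URL and JSON shape are placeholders.
    import time
    import requests

    def wait_until_running(status_url: str, timeout_s: int = 120) -> None:
        deadline = time.time() + timeout_s
        while time.time() < deadline:
            status = requests.get(status_url).json().get("status")
            if status == "RUNNING":
                return  # at this point the page switches over to a WebSocket session
            time.sleep(1)
        raise TimeoutError("backend never reported RUNNING")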
I'm doing website optimisations using Google's Pagespeed Insights to test improvements. Among the high-priority fix suggestions, is this:
Reduce server response time
In our test, your server responded in 2.1 seconds.
I read the 'helpful' doc linked in this section, and now I'm really confused.
Is the server response time the DNS response, the time to first-byte, or a combination? Is it purely a server-side thing, or could this be affected by, for example, a slow JavaScript resource or ready events in the DOM?
My first guess would have been that it's the time taken from the moment the request was issued to the first byte received from the server; however, Google's definition is not quite that:
(from this page https://developers.google.com/speed/docs/insights/Server)
Server response time measures how long it takes to load the necessary HTML to begin rendering the page from your server, subtracting out the network latency between Google and your server. There may be variance from one run to the next, but the differences should not be too large. In fact, highly variable server response time may indicate an underlying performance issue.
To take 2.1 seconds would suggest to me that your application/web server is buffering its output, so all your server-side processing is happening before it sends any content. If you don't buffer, the HTML can start being sent to the browser sooner, which may help; however, you lose the ability to do things like change response headers late in your logic.
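To illustrate the difference (a sketch in Python/Flask, which is my assumption for demonstration; the same idea applies to any stack): stream the top of the HTML immediately and do the slow work afterwards, instead of buffering the whole response.

    # Sketch: send the top of the page immediately, then do the slow server-side work,
    # instead of buffering everything until processing finishes. Flask is illustrative only.
    import time
    from flask import Flask, Response

    app = Flask(__name__)

    @app.route("/")
    def page():
        def generate():
            yield "<!doctype html><html><head><title>Report</title></head><body>"
            data = slow_query()          # the expensive part happens after the first bytes go out
            yield f"<p>{data}</p></body></html>"
        return Response(generate(), mimetype="text/html")
        # Trade-off: once the first chunk is sent, headers can no longer be changed.

    def slow_query() -> str:
        time.sleep(2)                    # stand-in for real server-side processing
        return "report body"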
Does LoadRunner support JavaScript execution once a response is received, unlike JMeter?
In JMeter, when we receive the response page, any JavaScript or AJAX calls it contains are not processed. Is this supported by LoadRunner or not?
Yes: the TruClient Virtual User type, versions 11.x and later.
Unless your code is truly asynchronous, where separate threads are kicking off JavaScript and the server requests arrive in a substantially different sequence every time, you really don't need JavaScript processing. Most of the AJAX clients out there are less 'A' and more 'S'ynchronous in their behavior when you look at the sequence of calls for a given business process across multiple recording sessions. Of the remainder that are truly 'A'synchronous in behavior, a substantial majority of the 'A' calls are to third-party components that would not be included in your performance test anyway (can you imagine trying to coordinate your performance test with people at Google because your app includes Google Maps!).
So, back to your core question. Yes, LoadRunner does include a Virtual User type which supports JavaScript processing: the TruClient Virtual User. You could also use a GUI Virtual User or a Citrix|RDP Virtual User if you wanted to run full browsers. To your larger question: do you really need a virtual user which processes JavaScript? Look carefully at your request sequences across multiple recording sessions to understand whether your business process is truly asynchronous in nature (with your servers and your code) or synchronous in behavior with your application.
I have a VB.NET 2.0 WinForms project that is full of all kinds of business reports (generated with Excel interop calls) that can be run on demand. Some of these reports filter through lots of data and take a long time to run, especially on our older machines around the office.
I'd like to have a system where a report request can be made from the client machines, some listener sees it, locates a server with low-load, runs the report on that server, and emails the result to the user that requested it.
How can I design such a change? All our reports take different parameters, and I can't seem to figure out how to deal with this. Does each generator need to inherit from a "RemoteReport" class that does this work? Do I need to use a service on one of our servers to listen for these requests?
One approach you could take is to create a database that the clients can connect to, and have the client add a record that represents a report request, including the necessary parameters, which could be passed in an XML field.
You can then have a service that periodically checks this database for new requests and, depending on how many other requests are currently processing, submits the request to the least busy server.
The server would then be able to run the report and email the file to the user.
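A sketch of the dispatcher side of that idea (Python here purely for illustration, since the real project is VB.NET; the table names, load query, and send_to_server helper are all hypothetical):

    # Sketch of the dispatcher service: poll for new report requests and hand each
    # to the least-busy server. Table names and helpers are hypothetical.
    import time
    import sqlite3  # stand-in for whatever database the clients write requests to

    def pick_least_busy_server(conn) -> str:
        row = conn.execute(
            "SELECT server FROM server_load ORDER BY running_reports ASC LIMIT 1"
        ).fetchone()
        return row[0]

    def dispatch_pending_requests(conn) -> None:
        for req_id, report_type, params_xml, email in conn.execute(
            "SELECT id, report_type, params_xml, email FROM report_requests "
            "WHERE status = 'pending'"
        ).fetchall():
            server = pick_least_busy_server(conn)
            send_to_server(server, report_type, params_xml, email)  # hypothetical call
            conn.execute("UPDATE report_requests SET status = 'dispatched' WHERE id = ?",
                         (req_id,))
            conn.commit()

    def send_to_server(server, report_type, params_xml, email):
        ...  # e.g. a web-service call; the report server emails the result itself

    while True:
        with sqlite3.connect("requests.db") as conn:
            dispatch_pending_requests(conn)
        time.sleep(30)  # poll interval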
This is by no means a quick solution and will likely take some time to design the various elements and get them to work together, but it's not impossible, especially considering that it has the possibility to scale rather well (adding more available/more powerful servers).
I developed a similar system where a user can submit a request for data from a web interface, that would get picked up by a request manager service that would delegate the request to the appropriate server based on the type of request, while providing progress indication to the client.
How about writing a web service that accepts reporting requests? On completion, the reports could be emailed to the users. The web service can provide a Status method that allows your WinForms app to interrogate the current status of the report requests.