I have an Apache server with 16 GB of RAM. The script cap.php returns a very small chunk of data (about 500 B). It opens a MySQL connection and runs a simple query.
However, the server's response time is, in my opinion, too long.
I have attached a screenshot of the Developer Tools panel in Chrome.
Besides SSL and the TTFB, there is a strange 300 ms delay (Stalled).
If I try curl from the web server:
curl -w '\nLookup time:\t%{time_namelookup}\nConnect time:\t%{time_connect}\nPreXfer time:\t%{time_pretransfer}\nStartXfer time:\t%{time_starttransfer}\n\nTotal time:\t%{time_total}\n' -k -H 'miyazaki' https://127.0.0.1/ui/cap.php
Lookup time: 0.000
Connect time: 0.000
PreXfer time: 0.182
StartXfer time: 0.266
Total time: 0.266
Does anyone know what that is?
Eventually, I found that when you use SSL it really does matter to switch on the KeepAlive directive in Apache. See the picture below.
According to the Chrome documentation:
Stalled/Blocking
Time the request spent waiting before it could be sent. This time is inclusive of any time spent in proxy negotiation.
Additionally, this time will include when the browser is waiting for
an already established connection to become available for re-use,
obeying Chrome's maximum six TCP connection per origin rule.
So this appears to be a client-side issue with Chrome talking to the network rather than a server config issue. As you are only making one request, I think we can rule out the six-connections-per-origin limit (unless you have lots of other tabs using up these connections), so I would guess either limitations on your PC (network card, RAM, CPU) or infrastructure issues (e.g. you connect via a proxy and it takes time to set up that connection).
Your curl request doesn't seem to show this delay, as it has just a 0.182 wait time to send the request (which is easily explained by the HTTPS negotiation) and then a 0.266 total time to download (including that 0.182). This compares with 0.700 seconds when using Chrome, so I don't understand why you say "total time is similar" when to me it clearly is not.
Finally, I do not understand your follow-up answer. It looks to me like you have made a request shortly after another recent request, as it has skipped the whole network connection stage (including any grey stalling, blue DNS lookup, orange initial connection and purple HTTPS negotiation). So of course this is quicker. But it's not comparing like for like with the first screenshot in your question, and it does not address your question.
But yes, you absolutely should be using keep-alives (they are on by default in most web servers, so it usually takes extra effort to turn them off) and HTTPS session resumption (not on by default unless you explicitly add it to your HTTPS config) to benefit any additional requests sent shortly after the first. But these will not benefit the first connection of the session.
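For reference, here is a minimal sketch of the Apache directives involved, assuming mod_ssl with the shmcb session cache; the values and cache path are illustrative examples, not tuned recommendations:

# Keep-alive (usually already on by default)
KeepAlive On
MaxKeepAliveRequests 100
KeepAliveTimeout 5

# TLS session resumption via a shared session cache (needs mod_ssl and mod_socache_shmcb)
SSLSessionCache        shmcb:/var/run/apache2/ssl_scache(512000)
SSLSessionCacheTimeout 300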
I have been using jQuery to try to pull data from an API. However, I am getting a 504 error. This happens even when I test the request with Postman. Can anyone suggest what I need to do to get around this?
I recently experienced this issue on one of my apps that was making an ambitious call to its Firebase database: it was grabbing a very large record, which took over 60 seconds (the default timeout) to retrieve.
For those experiencing this error who have access to their app or site's hosting environment, and who are proxying through NGINX, the issue can be fixed by extending the timeout for API requests.
In your /etc/nginx/sites-available/default or /etc/nginx/nginx.conf, add the following directives:
proxy_connect_timeout 240;
proxy_send_timeout 240;
proxy_read_timeout 240;
send_timeout 240;
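For context, these directives can live at the http, server or location level; here is a rough sketch of a typical reverse-proxy setup (the location path and upstream address are placeholders, not values from the question):

server {
    listen 80;

    location /api/ {
        # placeholder upstream; point this at your actual backend
        proxy_pass http://127.0.0.1:3000;

        proxy_connect_timeout 240;
        proxy_send_timeout    240;
        proxy_read_timeout    240;
        send_timeout          240;
    }
}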
Run sudo nginx -t to check the syntax, and then sudo service nginx restart.
This should effectively quadruple the time before NGINX times out your API requests (the default being 60 seconds, our new timeout being 240 seconds).
Hope this helps!
There is nothing you can do.
You are sending a request to a server. This particular request fails, because the server sends a request to a proxy, and gets a timeout error. Your server reports this back to you as status 504.
The only way to fix it is to fix the proxy (make it respond in a timely manner) or to change the server so that it does not rely on that proxy. Both are outside your control.
You cannot prevent such errors. What you can do is find out what the user experience should be when such a problem happens, and implement it. By the way, if you get 504 errors, you should also expect plain timeout errors. Say you make a request to your server with a 60-second timeout, and your server makes a request to the proxy with a 60-second timeout. Because both timeouts are the same, sometimes your server will receive the proxy timeout first and send it back to you (status 504), but sometimes your request to the server will time out just before that, and you will get a timeout error instead.
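As an illustration, when probing such an endpoint from the command line you can make the client-side timeout explicit, so a 504 coming back from the server is distinguishable from the client giving up first (a sketch; the URL is a placeholder):

# Client timeout deliberately set above the usual 60 s proxy timeout (URL is a placeholder)
curl -sS --max-time 90 -o /dev/null -w 'HTTP %{http_code} in %{time_total}s\n' \
     https://api.example.com/slow-endpoint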
One way to fix this is changing the proxy settings to "No Proxy" in the browser.
This works in Firefox; I don't know about other browsers.
Follow these steps:
1. Go to Preferences (the three-line menu in the top right corner, then pick it from the drop-down).
2. Search for "proxy" (Ctrl+F).
3. Open the proxy settings.
4. Select "No Proxy" under "Configure Proxy Access to the Internet".
5. Refresh the web page.
No matter what changes we've made, TTFB stays high!
Surprisingly, the server-side guy insists that everything is configured correctly and the server runs fast enough, but the WebPageTest report didn't change at all!
After so much optimization, I can't believe it doesn't change, and I've started to suspect TLS, gzip and redirects...
Did I miss something?
It appears that the page itself takes about a second to generate. Here's the curl output (using this to get timings); notice the value of time_appconnect:
$ curl -w "@curl_format.txt" -so /dev/null https://jimmydance.com/
time_namelookup: 0.004
time_connect: 0.217
time_appconnect: 0.921
time_pretransfer: 0.921
time_redirect: 0.000
time_starttransfer: 1.348
----------
time_total: 1.352
This would suggest that the bottleneck is the application generating the page. Looking at the page itself, it shouldn't take so long; I would look at the framework you're using or the resources allocated to the server.
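The format file isn't shown above, but a reconstruction that matches those labels would look something like this (an assumption, not necessarily the exact file used):

# Recreate a curl_format.txt matching the labels in the output above (reconstructed)
cat > curl_format.txt <<'EOF'
    time_namelookup:  %{time_namelookup}\n
       time_connect:  %{time_connect}\n
    time_appconnect:  %{time_appconnect}\n
   time_pretransfer:  %{time_pretransfer}\n
      time_redirect:  %{time_redirect}\n
 time_starttransfer:  %{time_starttransfer}\n
                    ----------\n
         time_total:  %{time_total}\n
EOF
curl -w "@curl_format.txt" -so /dev/null https://jimmydance.com/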
Using #frederik-deweerdt's answer and a breakdown of the curl timing figures:
Your server is spending nearly a second before the TLS handshake completes (time_appconnect: 0.921, of which roughly 0.7 s is TLS negotiation on top of the 0.217 s TCP connect).
TTFB is time_starttransfer - time_appconnect (1.348 - 0.921 = 0.427).
The time the server spent generating the HTML is TTFB - (time_connect - time_namelookup) (0.427 - (0.217 - 0.004) = 0.214).
So I'd say the bottleneck isn't the app; instead it's the server's slow TLS negotiation and some latency in the network.
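If you want to redo that arithmetic against your own curl output, here is the same breakdown as a small shell snippet (the values below are the ones quoted above):

# Breakdown of the curl timings quoted above, using bc for the arithmetic
dns=0.004; tcp=0.217; tls_done=0.921; first_byte=1.348
echo "TLS negotiation: $(echo "$tls_done - $tcp" | bc) s"                          # ~0.704 s
echo "TTFB:            $(echo "$first_byte - $tls_done" | bc) s"                   # ~0.427 s
echo "Server time:     $(echo "$first_byte - $tls_done - ($tcp - $dns)" | bc) s"   # ~0.214 s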
Have a look here for a good explanation of these curl timing figures: https://blog.cloudflare.com/a-question-of-timing/
The diagram below shows what each of those timings refers to for a typical HTTP-over-TLS 1.2 connection (TLS 1.3 setup needs one less round trip):
time_namelookup in this example takes a long time. To exclude DNS resolver performance from the figures, you can resolve the IP for cURL: --resolve www.zasag.mn:443:218.100.84.167. It may also be worth looking for a faster resolver :).
time_connect is the TCP three-way handshake from the client’s perspective. It ends just after the client sends the ACK - it doesn't include the time taken for that ACK to reach the server. It should be close to the round-trip time (RTT) to the server. In this example, RTT looks to be about 200 ms.
time_appconnect here is the TLS setup. The client is then ready to send its HTTP GET request.
time_starttransfer is taken just before cURL reads the first byte from the network (it hasn't actually read it yet). time_starttransfer - time_appconnect is practically the same as Time To First Byte (TTFB) from this client - 250 ms in this example. This includes the round trip over the network, so you can get a better estimate of how long the server spent on the request by calculating TTFB - (time_connect - time_namelookup); in this case the server spent only a few milliseconds responding, and the rest of the time was the network.
time_total is taken just after the client has sent the FIN to tear down the connection.
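Putting those pieces together, a single invocation that pins the DNS result (so resolver latency drops out of the figures) might look like this; the hostname and IP are the ones quoted above, the path is assumed:

# Time the request with DNS pinned via --resolve, printing the key figures
curl -so /dev/null \
     --resolve www.zasag.mn:443:218.100.84.167 \
     -w 'dns=%{time_namelookup} tcp=%{time_connect} tls=%{time_appconnect} ttfb=%{time_starttransfer} total=%{time_total}\n' \
     https://www.zasag.mn/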
I'm trying to capture the desktop and stream it live through an Apache server using DashCast. It captures and plays correctly when I do it on demand; however, when I do it live and then play it with MP4Client, it shows only a black screen, without even getting any error message while capturing. The commands I'm using are:
DashCast -vf x11grab -vres 1280x720 -v :0.0 -npts -live -out /public_html/
And then I play with:
MP4Client http://localhost/vitor/dashcast.mpd
Which results in the following output:
MP4Client http://localhost/vitor/dashcast/dashcast.mpd
Using config file in /home/vitor directory
System info: 11948 MB RAM - 8 cores
Modules Found : 36
Loading GPAC Terminal
[Thread MediaManager] Couldn't set priority(2) for thread ID 0x9b55a700
[Thread MediaManager] Couldn't set priority(2) for thread ID 0x9b55a700
Terminal Loaded in 35 ms
Opening URL localhost/vitor/dashcast/dashcast.mpd
[DASH] Slight drift in UTC clock at time 2015-08-05T12:38:45Z: diff AST - now 3563501 ms
Service Connected
So what am I doing wrong? The client apparently connects correctly to the server and opens the player, but then it doesn't show anything on screen. I'm using Ubuntu 14.04 with GPAC version 0.5.0.
[DASH] Slight drift in UTC clock at time 2015-08-05T12:38:45Z: diff AST - now 3563501 ms
This message indicates that there is a difference ('slight' is hardly the right word given the actual difference!) between the UTC time indicated in the MPD's availabilityStartTime attribute and the current time that MP4Client uses to compute which segment to fetch. This is only relevant for live streaming, because for on-demand content all segments are assumed to be available all the time.
MP4Client uses different strategies to determine the 'current' time. The system time on the client may be different from the system time on the server, if they are using different NTP servers for instance. System time is not reliable. So MP4Client tries to get the time from the server. It first tries to use a specific HTTP "Server-UTC" header that the server may set. See for example this code. If this header is not set, it looks at the HTTP "Date" Header, even if it's not very precise. In your case, your HTTP server probably has a time configuration that does not match the system time. You can tell MP4Client to stop using the server information and to rely on its system time. Since you are using client and server on the same machine, that should work. The documentation of that option is here. For that, use:
MP4Client http://localhost/file.mpd -opt DASH:UseServerUTC=no
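To check whether the web server's clock really disagrees with the system clock, you can compare the HTTP Date header with local UTC time (a sketch using the URL from the question):

# Both should print (roughly) the same UTC time; a large gap would explain the AST drift message
date -u
curl -sI http://localhost/vitor/dashcast/dashcast.mpd | grep -i '^date:'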
Alternatively, you can try to play the MPD locally without going through the web server.
MP4Client file.mpd
If that does not work, open an issue on GPAC's GitHub, providing as much information as possible, in particular the output of MP4Box -version.
I tested my web site with 100 users over HTTP and HTTPS. The response time obtained with HTTPS is much higher than with HTTP: nearly four times greater. Can anyone explain why the response time is higher with HTTPS, or do I need to change any SSL property in JMeter's system.properties? Thanks in advance!
The SSL handshake involves several extra exchanges to establish a connection, so the first request can be expected to take something like four times longer than over plain HTTP. See the SSL handshake diagram for more info.
However, if you see a 4x performance degradation for all requests, that doesn't sound right.
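Before touching JMeter settings, it can be worth confirming the raw handshake cost outside JMeter, for example with curl (a sketch; substitute your own URL):

# Rough view of where the time goes on a single HTTPS request (URL is a placeholder)
curl -so /dev/null \
     -w 'tcp=%{time_connect} tls_done=%{time_appconnect} total=%{time_total}\n' \
     https://your-site.example/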
The following JMeter properties control SSL behaviour:
https.sessioncontext.shared - controls whether SSL session contexts are created per thread (if it's set to false) or shared (if it's set to true)
https.use.cached.ssl.context - controls if cached SSL context is being reused between iterations
These properties live in the jmeter.properties file under the /bin folder of your JMeter installation. It's also possible to override them on the command line using the -J key as follows:
jmeter -Jhttps.sessioncontext.shared=true -Jhttps.use.cached.ssl.context=true
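Equivalently, if you don't want to pass the flags on every run, the same settings can go into user.properties, which JMeter reads automatically from its /bin folder (a sketch):

# user.properties (or jmeter.properties) - same effect as the -J flags above
https.sessioncontext.shared=true
https.use.cached.ssl.context=true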
See Apache JMeter Properties Customization Guide for more details.
If the above settings don't help, you'll need to review your test plan and perhaps profile the application to see where the extra time is being spent.
Any ideas why my Apache/2.0.49 server always waits 20 seconds between receiving a request for a cgi-bin script and actually starting to run that script?
The server responds immediately to normal HTTP requests that only use static files, but always takes 20 seconds to respond to cgi-bin requests. I've used tcpdump to time the arrival of the request and printed the time at the beginning of the script to determine that the delay is between those two events.
I can't see anything in the configuration that relates to 20 seconds. The server runs Red Hat Linux 3.3.3-7, and I'm pretty sure it used to respond to CGI scripts immediately, but I'm unsure when it started being slow or what might have changed to cause that.
Thanks in advance for any help/suggestions you may be able to provide.
We had a similar problem with our Apache server running PHP, but only with a specific client. In the end it turned out that the PHP code was trying to fetch a file from a remote server but didn't have connectivity; after 20 seconds the PHP "gave up" and the flow completed. You should search your code for something similar.
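If you suspect something like that but can't find it in the code, one way to see what the worker is doing during those 20 seconds is to watch its network-related system calls while reproducing the delay. A rough sketch, assuming strace is available and the Apache processes are named httpd:

# Attach to all running httpd processes and watch network syscalls with timestamps;
# a connect()/recvfrom() to an external host that stalls for ~20 s is the smoking gun
sudo strace -tt -e trace=network $(pgrep -x httpd | sed 's/^/-p /') 2>&1 | tee cgi-trace.log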