I ran a test with Pingdom Tools to check the loading time of my website. The result is that I have a lot of files that, despite being very small (5 kB), take a long time (1 second or more) to load, because there is a big delay between the start of the connection and the start of the data download (in Pingdom Tools, this shows up as a very long green bar).
Have a look at this for example: http://tools.pingdom.com/default.asp?url=http%3a%2f%2fwww.giochigratis-online.net%2f&id=5691308
How can I lower the "green bar" time? Is this an Apache problem (e.g. the maximum number of parallel connections, or something similar), or a hardware problem? CPU-limited, bandwidth-limited, or something else?
I see that many other websites have very short green bars... how do they reduce the delay between the connection and the actual sending of data?
Thanks!
PS: the site is built with Drupal. Homepage generation takes about 700 ms.
PPS: I tested 3 other websites on the same server: same problem.
I think it could be a problem with the maximum number of parallel connections, as you mentioned - either on the server or the client side. For instance, Firefox has a default of network.http.max-connections-per-server = 15 (see here), while you have more than 70 files to be downloaded from your domain and another 40 from Facebook.
You can reduce the number of loaded images by generating sprites, i.e. a single image composed of multiple small images, and then using CSS to display the right part of it in each place you want. This is widely used, e.g. by Google - see http://www.google.com/images/nav_logo83.png
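As a minimal illustration of the technique (the file name, icon sizes and offsets here are made up for the example):

    /* sprite.png is assumed to hold two 16x16 icons stacked vertically */
    .icon {
        width: 16px;
        height: 16px;
        background: url("sprite.png") no-repeat;
    }
    .icon-home { background-position: 0 0; }     /* top icon */
    .icon-mail { background-position: 0 -16px; } /* icon below it */

Each element then shows only the 16x16 window of the sprite selected by its background-position, so the browser makes one image request instead of one per icon.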
So I have two applications that process PDF files:
The first one combines two files, then captures a digital signature and fingerprint, and returns a single file with the images of the captures.
The second one is a cloud signer that inserts QR codes on every page of the PDFs and then digitally signs the document, also placing some bar codes and a legend. This application will eventually receive requests with 100 documents.
Both applications currently run on t2.micro instances (some operations take a while, e.g. signing 200 docs takes around 4 minutes), but I would like to know the optimal instance type to run smoothly without trouble. Would M or C instances make a significant difference?
Which one do you recommend: M, C, or another type?
Thanks,
Felipe
t-type instances are "burstable performance" instances, which means you cannot really utilize them at 100% over a long period of time.
I worked on a project where I extracted text from PDFs, and it seemed to be a pretty CPU-bound task, so c-type instances may be a good choice. However, what you really should take into account (especially if the workload is uneven, say, over the course of a day) is elasticity.
It may take some effort, but consider using either Lambda functions or an auto-scaling group so you can start more servers when you need them and stop them later. It may be significantly more work, but it can save a lot of money if you will be using this workload for a long time.
Also, I would invest some time in making it simple to deploy with various parameters. For example, if you create a CloudFormation template and you only occasionally need to sign 200 documents fast, you can deploy 24 t2.micro instances, run them over the period when you need them (say 1 hour), and then shut them down. At 24 instance-hours, it will cost you about the same as running 1 t2.micro for a full day.
One service which may help here is SQS: you can put the documents into a queue and then just let the machines pick up a document, sign it, and deliver it to its destination.
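A minimal sketch of such a worker with boto3 - the queue name, message format and process_document() are all assumptions for illustration, not part of your setup:

    import boto3

    def process_document(key):
        # Placeholder for the actual signing logic:
        # e.g. download the PDF from S3, sign it, upload the result.
        print(f"signing {key}")

    sqs = boto3.resource("sqs")
    queue = sqs.get_queue_by_name(QueueName="pdf-signing-jobs")  # assumed name

    while True:
        # Long-poll so idle workers don't burn API calls
        for message in queue.receive_messages(WaitTimeSeconds=20,
                                              MaxNumberOfMessages=1):
            process_document(message.body)  # body assumed to be an S3 key
            message.delete()                # remove the job only after success

Because each worker just drains the queue, you can add or remove instances at any time without coordinating them.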
I used a load of 100 users with the Ultimate Thread Group, executing in non-GUI mode.
The execution ran for only around 5 minutes; after that, my test environment shut down. I am not able to drill down into the issue. What could be the reason for the server going down? My environment supports 500 users.
How do you know your environment supports 500 users?
100 threads don't necessarily map to 100 real users; you need to consider a lot of things while designing your test, in particular:
Real users don't hammer the server non-stop; they need some time to "think" between operations. So make sure you add Timers between requests and configure them to represent reasonable think times.
Real users use real browsers, and real browsers download embedded resources (images, scripts, styles, fonts, etc.), but they do it only once; on subsequent requests the resources are returned from the cache and no actual request is made. Make sure to add an HTTP Cache Manager to your Test Plan.
You need to add the load gradually; this way you will be able to state at what number of threads (virtual users) the response time starts exceeding acceptable values or errors start occurring. Generate an HTML Reporting Dashboard (see the command sketch after this list), look into the metrics, and correlate them with the increasing load.
Make sure that your application under test has enough headroom to operate in terms of CPU, RAM, disk space, etc. You can monitor these counters using the JMeter PerfMon Plugin.
Check your application logs; most probably they will have some clue as to the root cause of the failure. If you're familiar with the programming language your application is written in, using a profiler tool during the load test can tell you the full story about what's going on and which functions and objects consume the most resources.
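For reference, a typical non-GUI run that also produces the HTML dashboard looks like this (the file and folder names are placeholders):

    jmeter -n -t testplan.jmx -l results.jtl -e -o report/

-n runs JMeter without the GUI, -t points at your test plan, -l writes the raw results, and -e/-o generate the HTML Reporting Dashboard into the given (empty) folder.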
I have a VPS on Hetzner. The server is located in Germany.
It has 256 GB RAM and 6 CPUs (12 threads).
I have a file which, since yesterday, has been requested about 30 times per second. The file runs 2 SELECT, 2 UPDATE and 2 INSERT queries, so I assumed (not sure how this works) that this file alone generates about 180 queries per second (30 requests × 6 queries). Right after these requests started, all the websites on the server started loading poorly. I changed the file to run just one SELECT query and then die. This didn't help. In WHM, the load is about 0.02.
I've checked the error logs and there is no max_user_connections or any other error there.
I have enabled the slow query log and checked the log file; there is nothing (I verified that logging works with SELECT SLEEP(10), and that query was logged).
These are the visit statistics; please pay attention to May 30th:
Bandwidth stats for the last 24 hours:
There are many errors like this in the ssl_log (different IPs, of course):
188.121.206.150 - - [30/May/2018:19:50:03 +0200] "-" 408 - "-" "-"
I've been searching the web a lot and couldn't find any solution. Could anyone at least tell me what I should monitor, and where? I have full access to everything on the server. Any help is appreciated.
UPDATE 1
I have a subdomain, banners.analyticson.com (access allowed for now), where I have all the images and HTML5 files that are requested.
Take one image, for example: https://banners.analyticson.com/img/suy8G1S6RU.jpg
It takes too much time to load. As far as I can tell, this subdomain has some issue.
The script that I mentioned earlier (with 6 queries) just tries to serve one of those banners to the user, so the result of that script is to return one banner from banners.analyticson.com.
UPDATE 2
I've checked my script; it is fine. It takes less than 1 second to complete.
I've also checked the top command, and here is the result. I'm not sure if the %MEM value is fine.
You're going to have to narrow the problem down...
There are multiple potential issues.
First thing to eliminate would be the performance of your new script on a development laptop - I assume you're using PHP, so use the profiling tools to work out what is going on. If it's a database query, you'll see which one by looking at the profiler.
If your PHP script and database queries are fine, the next thing to look at: it sounds like you've hit some bottleneck resource on your infrastructure. In these cases, scripts that run fine as a single request start queueing for the bottleneck resource, and every new request adds to the queue until the whole server starts to crawl. This can be a bit of a puzzle - start with top and keep digging.
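A few standard places to start that digging (all stock Linux tools except iostat, which comes with the sysstat package; the intervals are just examples):

    top          # per-process CPU and memory, load average
    vmstat 5     # run queue, swapping and CPU usage in 5-second intervals
    iostat -x 5  # per-device disk utilization and wait times
    ss -s        # socket summary, useful for spotting connection pile-ups

Whichever counter is pinned while the sites crawl is usually the bottleneck resource.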
Next, I'd look at the configuration of Apache to make sure everything is squeaky clean - Apache used to default to doing a reverse DNS lookup for every request (the HostnameLookups directive), which slows the server down rather impressively in production. You may also want to look at your SSL configuration - the error you report is linked to a load balancer issue.
If it's not as simple as memory, CPU etc., you're into more esoteric issues. You may need to ramp up a load testing rig so you can experiment without affecting the live site - typically, I do this on a machine as similar to live as possible, using Apache JMeter to generate load, and find the "inflection point". Typically, you see response times increase linearly with the number of concurrent requests, until you hit the bottleneck resource, at which point the response time increases rapidly. As a simple example, if you have 10 database connections available, response time should increase linearly up to 10 concurrent connections, and then become much larger from 11 up.
Knowing where the inflection point is and being able to recreate it allows you to use PHP profiling tools under load. This is a lot of work.
UPDATE
You're using php-cgi; this is easily the most inefficient way of running PHP scripts. Your server is barely breaking a sweat - CPU and memory are basically idle. Here's a comparison of ways to run PHP; consider changing to mod_php.
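For reference, enabling mod_php in Apache looks roughly like this - a sketch only, since the module name and path depend on your distribution and PHP version:

    # Load the PHP interpreter into every Apache worker (PHP 7.x example)
    LoadModule php7_module modules/libphp7.so

    # Hand .php files to the embedded interpreter instead of spawning php-cgi
    <FilesMatch "\.php$">
        SetHandler application/x-httpd-php
    </FilesMatch>

This avoids forking a new PHP process for every request, which is where php-cgi loses most of its time.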
I am creating a photo-gallery site. I want each photo to have 3 or 4 instances with different sizes (including the original photo).
Is it better to resize a photo on the client side (using Flash or HTML5) and upload all the instances of the photo to the server separately? Or is it better to upload a photo to the server only once, but resize it using server resources (for example, GD)?
What would be your suggestions?
It would also be interesting to know how big sites do this. For example, 500px.com (that site creates 4 instances of each photo, and everything works fast enough) or Facebook.
There are several schools of thought on this topic; it really comes down to how many images you have and how likely it is that the images will be viewed more than once. It is most common for all of the image sizes to be created using a tool like Adobe Photoshop, GIMP, Sizzlepig or GD (locally or on a server, not necessarily the web server), and then to upload all the assets to the server.
Resizing before you host the image takes some of the strain off the end user's web browser and, more importantly, reduces the amount of bandwidth required to host the site (especially useful when you are running a large site and paying per GB transferred).
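If you do the resizing on the server, a minimal sketch with Python's Pillow library (the file name and target sizes are made up; the same approach works with GD):

    from PIL import Image

    SIZES = [(1024, 1024), (512, 512), (128, 128)]  # example bounding boxes

    original = Image.open("photo.jpg")
    for width, height in SIZES:
        copy = original.copy()
        copy.thumbnail((width, height))  # shrink in place, keeping aspect ratio
        copy.save(f"photo_{width}x{height}.jpg", quality=85)

Running this once at upload time means every later page view serves a pre-built file, with no resizing cost per request.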
To answer your part about really big sites, some do image scaling ahead of time, others do it on the fly, but typically it's done server side.
Is there a way I can force caching of files at the OS level and/or web server level (IIS)?
The problem I am facing is that there are many static files (XSLTs, for example) that need to be loaded again and again, and I want to load all these files into memory so that no time is wasted on hard disk I/O.
(1) I want to cache at the OS level, so that every program that runs on my OS and tries to read a file reads it from memory. I want no changes to program source code - it must happen transparently. For example, read("c:\abc.txt") must not cause disk I/O; it must be served from memory.
(2) Achieving a similar thing in IIS. I've read a few things about output caching for database queries - but how do I achieve it for files?
All suggestions are welcome!
Thanks
You should look into some tricks used by SO itself. One was that they moved all their static content off to another domain for efficiency.
The problem with default setups for Apache (at a minimum) is that the web server will pass all requests through to an app server to see if the content is meant to be dynamic. That's a huge waste for content that you know to be static.
Far better to set up a separate domain for static content without an app server. That way, the static requests are not sent unnecessarily to another layer and the web server can run much faster.
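As a rough sketch of what such a static-only Apache virtual host might look like (the domain and paths are hypothetical, and mod_expires must be enabled):

    <VirtualHost *:80>
        ServerName static.example.com      # separate, cookie-free domain
        DocumentRoot /var/www/static

        # Let browsers and intermediate caches keep assets for a month
        ExpiresActive On
        ExpiresDefault "access plus 1 month"
    </VirtualHost>

No handler or proxy directives appear here, so requests never touch an app server.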
Even in a setup where there isn't another layer invoked every time, there are other reasons for a separate domain, as you'll see from that link (specifically, removing cookies, which both reduces traffic and improves the chances of the Internet caching your data).