imageresizer.net poor performance with simultaneous requests

I am trying out imageresizing.net, but found its performance to be really poor: 5 simultaneous requests to resize 5 different large JPEGs on an 8-core machine run at about 5 seconds per image,
while a single request to resize the same image takes 1.1 seconds.
This is on a clean Windows Server 2008 R2 machine, on a separate new ASP.NET site running .NET 4 in integrated mode, with the imageresizing library running as an HTTP module.
I can't reproduce the performance and scale claimed on their website.
Can someone share their experience with imageresizing.net? Is the kind of performance I measured normal? It seems that the library cannot resize multiple images at the same time, and relies on the disk cache to gain performance on subsequent requests for the same image. In my scenario, resizing of images will most likely not be repeated, i.e. there won't be cache hits, hence raw performance is important.

Related

JMeter Load Test Server down issues

I ran a load of 100 users using the Ultimate Thread Group, executing in non-GUI mode.
The execution lasted only around 5 minutes; after that my test environment shut down. I am not able to drill down into the issue. What could be the reason for the server going down? My environment is supposed to support 500 users.
How do you know your environment supports 500 users?
100 threads don't necessarily map to 100 real users. You need to consider a lot of things while designing your test, in particular:
Real users don't hammer the server non-stop; they need some time to "think" between operations. So make sure you add Timers between requests and configure them to represent reasonable think times.
Real users use real browsers. Real browsers download embedded resources (images, scripts, styles, fonts, etc.), but they do it only once; on subsequent requests the resources are returned from cache and no actual request is made. Make sure to add an HTTP Cache Manager to your Test Plan.
You need to add the load gradually; this way you will be able to state at what number of threads (virtual users) response times start exceeding acceptable values or errors start occurring. Generate an HTML Reporting Dashboard, look into the metrics, and correlate them with the increasing load.
Make sure that your application under test has enough headroom to operate in terms of CPU, RAM, disk space, etc. You can monitor these counters using the JMeter PerfMon Plugin.
Check your application logs; most probably they will hold some clue to the root cause of the failure. If you're familiar with the programming language your application is written in, using a profiler tool during the load test can tell you the full story of what's going on, including which functions and objects consume the most resources.

Server load is minimal but website responds poorly

I have a VPS at Hetzner. The server is located in Germany.
It has 256 GB RAM and 6 CPUs (12 threads).
Since yesterday, I have a file that is requested about 30 times per second. The file runs 2 SELECT, 2 UPDATE, and 2 INSERT queries, so I assumed (not sure how this works) that this file alone gives the server about 180 queries per second. Right after these requests started, all the websites on the server started loading poorly. I made this file run just one SELECT query and then die; this didn't help. In WHM the load is about 0.02.
I've checked the error logs and there is no max_user_connections or any other error there.
I have enabled the slow query log and checked the log file; there is nothing (I've tested it with SELECT SLEEP(10), and that query was logged).
These are the visit statistics; please pay attention to May 30th:
Bandwidth stats for last 24 hours:
There are many errors like this in the ssl_log (from different IPs, of course):
188.121.206.150 - - [30/May/2018:19:50:03 +0200] "-" 408 - "-" "-"
I've been searching the web a lot and couldn't find any solution. Could anyone at least tell me what I should monitor, or where? I have full access to everything inside the server. Any help is appreciated.
UPDATE 1
I have a subdomain, banners.analyticson.com (access is allowed for now), which hosts all the images and HTML5 files that are requested.
Take one image for example: https://banners.analyticson.com/img/suy8G1S6RU.jpg
It takes too long to load. As far as I can tell, this subdomain has some issue.
The script I mentioned earlier (with 6 queries) just tries to return one of those banners to the user, so the result of that script is one banner from banners.analyticson.com.
UPDATE 2
I've checked my script, and it is fine. It takes less than 1 second to complete.
I've also checked the top command, and here is the result. I'm not sure if the %MEM value is fine.
You're going to have to narrow the problem down...
There are multiple potential issues.
The first thing to eliminate would be the performance of your new script on a development laptop. I assume you're using PHP, so use its profiling tools to work out what is going on. If it's a database query, you'll see which one by looking at the profiler.
If your PHP script and database queries are fine, the next thing to look at is your infrastructure: it sounds like you've hit some bottleneck resource. In these cases, scripts that run fine as a single request start queueing for the bottleneck resource, and every new request adds to the queue until the whole server starts to crawl. This can be a bit of a puzzle; start with top and keep digging.
Next, I'd look at the Apache configuration to make sure everything is squeaky clean. Apache used to have a default of doing a reverse DNS lookup for every request, which slows the server down rather impressively in production. You may also want to look at your SSL configuration; the error you report is linked to a load balancer issue.
If it's not as simple as memory, CPU, etc., you're into more esoteric issues. You may need to set up a load-testing rig so you can experiment without affecting the live site. Typically, I do this on a machine as similar to live as possible, using Apache JMeter to generate load, and find the "inflection point": in general, response times increase linearly with the number of concurrent requests until you hit the bottleneck resource, at which point they increase rapidly. As a simple example, if you have 10 database connections available, response time should increase linearly up to 10 concurrent connections, and then become much larger from 11 up.
Knowing where the inflection point is and being able to recreate it allows you to use PHP profiling tools under load. This is a lot of work.
UPDATE
You're using php-cgi; this is easily the most inefficient way of running PHP scripts. Your server is barely breaking a sweat, with CPU and memory basically idle. Here's a comparison of ways to run PHP; consider changing to mod_php.

Performance issue for Rails: How to send gzip assets

I am using the Rails asset pipeline in a production environment. I have configured nginx to send files in gzip format, and the files are coming through properly gzipped. I guess the browser is automatically decoding them, hence I am not able to tell whether the JS files are coming in gzip format or not.
I ran the following command and I am getting "Content-Encoding: gzip" in the response.
curl -v -H 'Accept-Encoding: gzip' -o /dev/null http://www.example.com/assets/style.css 2>&1 | grep -i Content-Encoding
Here is the nginx configuration I use to serve files in gzip format:
location ~ ^/(assets)/ {
    root /home/webuser/app/woa/public;
    gzip_static on;
    expires max;
    add_header Cache-Control public;
    # access_log /dev/null;
}
How will I know whether the files are coming in gzip format or not?
Also, please suggest any other options that could help improve the performance of the site.
Not a direct answer to your first question (if you solved it, please do explain), but for improving site performance remember the Performance Golden Rule:
80-90% of the end-user response time is spent on the front-end. Start there.
Below is a non-exhaustive list of areas of improvement for increasing performance in a Rails app:
Diagnosing the Problem:
YSlow / Google Page Speed
Useful diagnosis tools for identifying performance issues are YSlow and Google Page Speed. They are browser extensions that diagnose and identify common issues slowing down your app (particularly on the front end).
Back-end Tools
For your Rails back-end I recommend incorporating tools such as Bullet & NewRelic directly into your development processes, so that while you're developing you can spot bad queries immediately while they are still easy to fix.
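For example, a typical Bullet setup in development looks something like the sketch below (which notification channels you enable is up to you):

# config/environments/development.rb
config.after_initialize do
  Bullet.enable = true        # turn Bullet on
  Bullet.alert = true         # pop up a JavaScript alert in the browser when an N+1 is detected
  Bullet.bullet_logger = true # also write findings to log/bullet.log
end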
Check Server Console Logs
Checking your server logs is an effective method for diagnosing which components of your Rails app are taking the longest. E.g. below are sample logs from two unrelated production Rails apps running in my local development environment:
# app1: slowish
Rendered shared/_header.html.erb (125.9ms)
Rendered clients/details.html.erb within layouts/application (804.6ms)
Completed 200 OK in 3655ms (Views: 566.9ms | ActiveRecord: 1236.9ms)
# app2: debilitatingly slow
Rendered search/_livesearch_division_content.js.erb (5390.0ms)
Rendered search/livesearch.js.haml (34156.6ms)
Completed 200 OK in 34173ms (Views: 31229.5ms | ActiveRecord: 2933.4ms)
App1 & App2 both suffer from performance issues, but App2's performance issues are clearly debilitating (34 seconds!). With these server logs, I know that for App1 I should look into clients/details.html.erb, and that for App2 I absolutely need to investigate search/livesearch.js.haml.
Improving Front-end Performance
Budget your page size strictly
To maintain fast load times you need to reduce the amount/size of your page assets (JS/CSS/images). So think about your page size like a budget. For example, Hootsuite recently declared that their home page now has a strict page-size budget of 1 MB. No exceptions. Now check out their page. Pretty fast, isn't it?
Easy wins for reducing your page size include stripping out unused JS or CSS files, including them only where needed, and changing static images into much smaller vectors.
Serve smaller image resolutions based on screen width
Image loading is a large cause of slow page load times. A large 5 MB image used in the background of your splash page can easily be brought down to 200-400 KB in size and still be of high enough quality to be hardly distinguishable from the higher-resolution original. The difference in page load times will be dramatic.
You should make the same improvements to user-uploaded images as well. E.g. if your website's user avatars are 50px by 50px in size, but a user uploads a 5 MB image for their avatar, then it's essential that you serve the image at lower file sizes and resolutions to fit exactly how it will be shown on your site.
Carrierwave, Fog, and rmagick are popular gems used with Amazon S3 to achieve better image loading. With that collection of packages you can dynamically serve smaller image resolutions based upon the screen size of each user. You can then use media queries so that mobile devices get served smaller resolutions than your users with Retina screens.
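For illustration, a minimal CarrierWave uploader along these lines might look like the sketch below (the version names and dimensions are placeholders):

# app/uploaders/avatar_uploader.rb (sketch)
class AvatarUploader < CarrierWave::Uploader::Base
  include CarrierWave::RMagick  # image processing via rmagick

  storage :fog  # store uploads on S3 through the fog gem

  # Create smaller renditions alongside the original upload.
  version :thumb do
    process resize_to_fill: [50, 50]      # exact avatar size used on the site
  end

  version :medium do
    process resize_to_limit: [400, 400]   # capped size for listing pages
  end
end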
Use a Content Delivery Network to speed up asset loading
Adding on to the last point, you can speed up asset/image loading times by using a Content Delivery Network (CDN's) such as Cloudfront. CDN's distribute assets across many servers, then serve assets to your users via servers that are located the closest to the user making the request.
Fingerprint Static Assets
When static assets are fingerprinted, a user's browser caches a copy of them on the first visit, meaning that they no longer need to be reloaded for subsequent requests.
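In Rails 4 this is controlled by the digest setting, which generated apps enable in production; shown here as a sketch for completeness:

# config/environments/production.rb
config.assets.digest = true  # append content hashes (fingerprints) to compiled asset filenames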
Move Javascript files to the bottom of the page
Javascript files placed at the bottom of the page will load after the rest of the page has loaded. If javascript assets are placed at the top of the page, the page will remain blank while the user's browser attempts to load your javascript files. Fortunately, Rails will automatically place javascript files at the bottom of your page if you use the asset pipeline or specify javascript files using the javascript_include_tag.
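For reference, in a Rails layout this is just the helper call (a trivial sketch):

<%# app/views/layouts/application.html.erb %>
<%= javascript_include_tag "application" %>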
EDIT: Most modern browsers now optimize Javascript loading automatically so you can mostly ignore this advice.
Improving Back-end Performance
Cache, Cache, Cache!
Of all back-end performance optimizations, caching is among the most effective for producing dramatic performance gains. A well-implemented caching regime can greatly minimize the damage of inefficient queries within your back-end during periods of heavy load. Content that is accessed frequently, yet changes relatively infrequently, benefits the most from caching.
Caching is so powerful that it brought down the page load times of App2 mentioned above from 34 seconds to less than a second in production. There is simply no other performance enhancement on the back-end that can come even close to what we got from caching.
Overall, when doing performance optimization with caching, start high then go low. The gains you will get will be greater for less effort.
From high to low, some types of caching available to you are:
HTTP caching (caches nothing on your server; a user's browser caches content locally based on the HTTP headers it reads)
Page caching (memcache)
Action Caching (memcache)
Fragment caching (memcache) or Russian doll caching, a favoured technique for caching with fragments (see the sketch after the link below)
Model caching (memcache)
To learn more about caching, a good place to start is here: http://guides.rubyonrails.org/caching_with_rails.html
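As a hedged sketch of Russian doll caching in a view (the Post and Comment models here are hypothetical), nested cache blocks let an inner fragment expire without rebuilding the outer one:

<% cache @post do %>
  <h1><%= @post.title %></h1>
  <% @post.comments.each do |comment| %>
    <% cache comment do %>
      <p><%= comment.body %></p>
    <% end %>
  <% end %>
<% end %>

For the outer fragment to refresh when a comment changes, the Comment model would typically declare belongs_to :post, touch: true.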
Index Everything
If you are using SQL for your database layer, make sure that you specify indexes on foreign keys and join tables for faster lookups on large, frequently used associations. You must add them explicitly during migrations, since indexing is not included by default in Rails.
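A minimal migration along these lines (the table and column names are hypothetical) might be:

class AddIndexesToComments < ActiveRecord::Migration
  def change
    add_index :comments, :post_id              # speeds up post.comments lookups
    add_index :comments, [:post_id, :user_id]  # composite index for queries filtering on both columns
  end
end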
N+1 queries
A major performance killer for Rails apps using relational (SQL) databases is N+1 queries. If you see in your logs that your app is making many database reads/writes for a single request, it's often a sign you have N+1 queries. N+1 queries are easy to miss during development but can rapidly cripple your app as your database grows (I once dealt with an app that had twelve N+1 queries. After accumulating only ~1000 rows of production data, some pages began taking over a minute to load).
Bullet is a great gem for catching N+1 queries early as you develop your app. A simple method for resolving N+1 queries in your Rails app is to eager load the associated Model where necessary. E.g. Post.all changes to Post.includes(:comments).all if you are loading all the comments of each post on the page.
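A minimal sketch of the difference (the Post and Comment models are hypothetical):

# N+1: one query for the posts, then one extra query per post
posts = Post.all
posts.each { |post| puts post.comments.size }

# Eager loading: two queries total, regardless of the number of posts
posts = Post.includes(:comments).all
posts.each { |post| puts post.comments.size }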
Upgrade to Rails 4 and/or Ruby 2.1.x or higher
The newer version of Rails contains numerous performance improvements that can speed up your app (such as Turbolinks.)
Ruby 2.1.x+ contains much better garbage collection than older versions of Ruby. So far, people who have upgraded report notable performance increases.
I am missing many improvements here, but these are a few performance improvements that I can recommend. I will add more when I have time.

Photo resize. Client-side or server-side?

I am creating a photo-gallery site. I want each photo to have 3 or 4 instances at different sizes (including the original photo).
Is it better to resize a photo on the client side (using Flash or HTML5) and upload all the instances of the photo to the server separately? Or is it better to upload a photo to the server only once, and resize it using server resources (for example GD)?
What would be your suggestions?
It would also be interesting to know how big sites do this. For example 500px.com (this site creates 4 instances for each photo, and everything works fast enough) or Facebook.
There are several schools of thought on this topic; it really comes down to how many images you have and how likely it is that the images will be viewed more than once. It is most common for all of the image sizes to be created using a tool like Adobe Photoshop, GIMP, Sizzlepig or GD (locally or on a server, not necessarily the web server), and then to upload all the assets to the server.
Resizing before you host the image takes some of the strain off the end-user's web browser and, more importantly, reduces the amount of bandwidth required to host the site (especially useful when you are running a large site and paying per GB transferred).
To answer your part about really big sites, some do image scaling ahead of time, others do it on the fly, but typically it's done server side.
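As an illustration of the server-side approach, here is a hedged Ruby sketch using the mini_magick gem (which shells out to ImageMagick; the file names and sizes are placeholders) that generates the instances at upload time:

require "mini_magick"

# Generate several renditions from one uploaded original.
SIZES = { thumb: "150x150", medium: "640x640", large: "1280x1280" }

SIZES.each do |name, dimensions|
  image = MiniMagick::Image.open("original.jpg")
  image.resize dimensions           # fits within the bounds, preserving aspect ratio
  image.write "photo_#{name}.jpg"   # e.g. photo_thumb.jpg
end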

debugging a slow site -> long delay between connection and data sending

I ran a test from Pingdom Tools to check the loading time of my website. The result is that I have a lot of files that, in spite of being very small (5 kB), take a long time (1 second or more) to load, because there is a big delay between the beginning of the connection and the beginning of the data download (in Pingdom Tools, this shows up as a very large green bar).
Have a look at this for example: http://tools.pingdom.com/default.asp?url=http%3a%2f%2fwww.giochigratis-online.net%2f&id=5691308
How can I lower the "green bar" time? Is this an Apache problem (like, I don't know, the max number of parallel connections, or something similar), or a hardware problem? CPU-limited, bandwidth-limited, or what else?
I see that many other websites have very short green bars... how do they reduce the delay between the connection and the actual data sending?
Thanks!
P.S.: the site is made with Drupal. Homepage generation takes about 700 ms.
P.P.S.: I tested 3 other websites on the same server: same problem.
I think it could be a problem with the max number of parallel connections, as you mentioned, either on the server or the client side. For instance, Firefox has a default of network.http.max-connections-per-server = 15 (see here), while you have >70 files to be downloaded from your domain and another 40 from Facebook.
You can reduce the number of loaded images by generating sprites, i.e. a single image consisting of multiple small images, and then using CSS to display the right parts in the places you want. This is widely used, e.g. by Google: see http://www.google.com/images/nav_logo83.png