Website optimization

How can I speed up the loading of images? Especially when I open the website for the first time, it takes some time for the images to load.
Is there anything I can do (HTML, CSS) to improve this?
Thanks to all for your answers.

Crop http://www.ursic-ei.si/datoteke/d4.jpg! It's 900 pixels wide, and roughly half of it is empty and white. Make the image smaller, then use background-position and background-color to compensate for anything you trimmed off the edges.
You have a lot of extra newlines in your HTML source. Not hugely significant, but - since in HTML there's no practical difference between one newline and two - you might want to remove some.

For images, you should consider a content delivery network (CDN), which will cache your images and other files and serve them faster than your web server.
This is a must for any high-traffic website.

On the client, you can enable pipelined downloads; e.g. in Firefox there are settings under network.http.pipelining that can help speed up downloads.
On the server, there's little you can do beyond gzipping text-based files; the client just has to know how to cache.

Since your question only asks about the images, I guess you already know that the cost of PHP processing and/or JavaScript is minor. If you want to speed up the images, you can reduce their dimensions, increase the compression rate, and also try different formats - JPG is not always the best one.
Try GIF and/or PNG; with these you can also reduce the number of colors. These formats are usually much better than JPG when you have simple pictures with few colors.
Also consider whether some of your images are a simple pattern that can be repeated. For example, if you have a background image with a side banner, you only need a one-pixel-wide strip and can repeat it with CSS.

Related

Watermarking Plugin Performance - Is FastScaling an Option?

I want to use ImageResizer to serve thumbnails that are scaled and watermarked on the fly on a high-traffic website.
My testing has shown that the Watermarking plugin results in a significant decrease in throughput compared to just scaling them with FastScaling.
Scaled: 150+ images per second
Scaled & Watermarked: 35 images per second
I dug through the Watermark Plugin code and saw that it's using GDI+ for its image manipulations. Is it possible to make it use the more performant FastScaling plugin instead?
This is something we would like to improve. Currently, if either Watermarking (or the DRM red dot) are in use, performance reverts to GDI+ levels.
I would be happy to assist on a pull request for this, or discuss other options.

How to avoid image resizing if width and height are the same as the original?

Is there a way (a parameter) to stop ImageResizer from processing an image if its height and width are the same as the original's?
If not, where and how do I cancel the scaling process in a plugin?
The ImageResizer does not scale images that are already the appropriate size. It does, however, decode them, strip metadata, and re-encode them into a web-compatible and web-efficient format (usually jpg or png).
If you're wanting the ImageResizer to serve the original file, skipping the entire process, that's a different question, which I'll try to answer below.
Here's the primary challenge with that goal: To discover the width and height of the source image file, you must decode it - at least partially.
This optimization is only useful (or possible) in limited circumstances:
The format of the source file allows you to parse the width and height without loading the entire file into memory. JPG/PNG: yes. TIFF: no. The 30+ formats supported by FreeImageDecoder: no.
The source file is on local, low-latency disk storage and is accessible to IIS - this rules out UNC paths and plugins like S3Reader, SqlReader, AzureReader, RemoteReader, MongoReader, etc.
No URL rewriting rules are in place.
No custom plugins are being used.
The image is already in a web-optimized format with proper compression settings, with metadata removed.
No other URL commands are being used.
No watermarking rules are in place.
You do not need to control cache headers.
You are 100% sure the image is not malicious (without re-encoding, you can't ensure the file isn't both a script and a bitmap).
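To illustrate the first point above: for formats like PNG, the dimensions can be read from the file header without decoding the image. A minimal sketch (JavaScript rather than the plugin's C#, and not part of ImageResizer itself):

```javascript
// Read a PNG's width and height from its header without decoding the image.
// PNG layout: 8-byte signature, then the IHDR chunk: 4-byte length, 4-byte
// type ("IHDR"), then width and height as 4-byte big-endian integers.
function readPngSize(buf) {
  const signature = Buffer.from([0x89, 0x50, 0x4e, 0x47, 0x0d, 0x0a, 0x1a, 0x0a]);
  if (buf.length < 24 || !buf.subarray(0, 8).equals(signature)) {
    throw new Error('Not a PNG file');
  }
  return { width: buf.readUInt32BE(16), height: buf.readUInt32BE(20) };
}

// Demo with a hand-built minimal PNG header (not a complete valid PNG file):
const demo = Buffer.alloc(24);
Buffer.from([0x89, 0x50, 0x4e, 0x47, 0x0d, 0x0a, 0x1a, 0x0a]).copy(demo, 0);
demo.writeUInt32BE(13, 8);       // IHDR chunk length
demo.write('IHDR', 12, 'ascii'); // IHDR chunk type
demo.writeUInt32BE(640, 16);     // width
demo.writeUInt32BE(480, 20);     // height
console.log(readPngSize(demo));  // { width: 640, height: 480 }
```

JPG is similar in spirit but messier: you have to scan markers until you hit a start-of-frame segment, which is why not every format makes this cheap.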
In addition, unless you cached the result, this 'optimization' wouldn't, in fact, improve response time or server-side performance. Since the dimension data would need to be decoded separately, it would add uniform, significant overhead to all requests whether or not they happened to have a dimension match.
The only situation in which I see this being useful is if you spend a lot of time optimizing compression in Photoshop and don't want the ImageResizer to touch it unless needed. If you're that concerned, just don't apply any resizing URL commands in that scenario. Or, set process=no to keep the original bytes as-is.
It's definitely possible to make a plugin to do this; but it's not something that many people would want to use, and I can't envision a usage scenario where it would be a net gain.
If you want to plunge ahead, handle the Config.Current.Pipeline.PreHandleImage event and replace e.ResizeImageToStream with code that parses the stream returned by e.GetSourceImage(), applies your dimension logic (comparing to Config.Current.GetImageBuilder().GetFinalSize()), then resets the stream and copies it verbatim if desired, like this:
using (Stream source = e.GetSourceImage())
{
    StreamExtensions.CopyToStream(source, stream); // 4 KiB buffer
}
That might not handle certain scenarios - for example, if the image actually needs to be resized 1px smaller but you're adding a 1px border - but it's close. If you're picky, look at the source code for GetFinalSize and return the image bounds instead of the canvas bounds.

Ajax request to SQL returns hundreds of images to JSON -> CSS Sprites? Base64?

We are writing a Web-based events calendar with thousands of theatre shows, festivals, and concerts in a SQL database.
The user goes to the Website, performs a search, the SQL server returns JSON, and jQuery code on the client side displays the 200+ events.
Our problem is that each event has an image. If I return URLs, the browser has to make 200+ HTTP GET requests for these tiny (2-4 KB) images.
It is tempting to pack all the images into one CSS sprite, but since the user searches can return any combination of images, this would have to be done dynamically and we'd lose control over caching. Every search would have to return every image in a brand new CSS sprite. Also, we want to avoid any solution that requires dynamically reading or writing from a filesystem: that would be way too slow.
I'd rather return binary images as part of the JSON, and keep track of which images we already have cached and which we really need. But I can't figure out how. It seems to require converting to Base64? Is there any example code on this?
Thanks for any help you can give! :)
The web site you are using (Stack Overflow) has to provide 50 avatars for the 50 questions shown on the questions page. How does it do it? The browser makes 50 requests.
I would say you had better implement pagination so that the list does not get too big. You can also load images in the background, or when the user scrolls down and reaches them.
Keeping track of downloaded images is not our job; it is the browser's. Our job is to make sure the URL is unique and consistent and that the response headers allow caching.
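The "unique, consistent URL plus cacheable headers" advice above boils down to something like this on the server. A hypothetical Node.js sketch (the helper name and header values are illustrative, not from the original posts):

```javascript
// Build response headers that let the browser cache an image long-term.
// Pair this with URLs that change when the content changes
// (e.g. /img/123-v2.jpg), so the URL alone identifies the image bytes
// and the browser never needs to re-download or revalidate.
function imageCacheHeaders(maxAgeSeconds) {
  return {
    'Content-Type': 'image/jpeg',
    'Cache-Control': 'public, max-age=' + maxAgeSeconds,
  };
}

const headers = imageCacheHeaders(31536000); // cache for one year
console.log(headers['Cache-Control']);
```

With headers like these, the second search a user runs costs no image requests at all for images they have already seen.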
Also, Base64 will make the stream at least 33% longer, and processing it on the client side is not trivial - I have never seen an implementation, but probably someone has written some JavaScript for it.
I believe all you need is pagination.
It looks like the original poster has proceeded with essentially this solution on their own. However, based on their comment about the 33% space overhead, I don't think they observed an unexpected phenomenon you get when base64-ing a bunch of images into one response entity and then gzipping it: believe it or not, this can easily produce an overall transfer size smaller than the total of the unencoded image files, even if each unencoded image was also separately gzipped.
I've covered this JSON technique for accelerated image presentation in depth here, as an alternative to CSS sprites, complete with a couple of live samples. It aims to show that it is often a superior technique to CSS sprites.
Data is never random. For example, you could name your sprites
date_genre_location.jpg
or however you organise your searches. This might be worth it - maybe.
You'll need to do the math.
Here is what we decided.
Creating sprites has some downsides:
To reduce image loss, we need to store the originals as PNGs instead of JPEGs on the server. This adds to the slowness, and there's already quite a bit of slowness in creating dynamic CSS sprites with ImageMagick.
To reduce image size, the final CSS sprite must be a JPG, but JPGs are lossy, and with a grid of images things get weird at the edges as JPG tries to blend and compress. We can fix that by putting a blank border around all the images, but even so, this adds complexity.
With CSS sprites, it becomes harder to do intelligent caching; we have to pack up all the images regardless of what we've sent in the past (or we have to keep all the old CSS sprites regardless of whether they contain many or few images we still need).
I'm afraid we have too many combinations of date-category-location to precompute CSS sprites, although surely we could handle part of the data this way.
Thus our solution is to Base64-encode each image and send them one by one. Despite the 33% overhead, it is far less complex to code and manage, and when you take caching issues into account it may even mean less data transfer.
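On the client, each Base64 string from the JSON payload can be displayed by turning it into a data URI. A minimal sketch (the variable and function names are made up for illustration):

```javascript
// Turn a Base64 string (as delivered in the JSON payload) into a data URI
// that can be assigned directly to an <img> element's src attribute.
function toDataUri(base64String, mimeType) {
  return 'data:' + mimeType + ';base64,' + base64String;
}

// In the browser you would then do something like:
//   img.src = toDataUri(event.imageBase64, 'image/jpeg');
const uri = toDataUri(Buffer.from('fake-jpeg-bytes').toString('base64'), 'image/jpeg');
console.log(uri.slice(0, 23)); // data:image/jpeg;base64,
```

No manual decoding is needed on the client; the browser handles the Base64 itself once the string is wrapped in a data URI.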
Thank you all for your advice on this!

BIG header: one jpg or several png?

I've read some of the posts here about PNG/JPG/GIF, but I'm still quite confused.
I've got a big header on my website:
width: 850px, height: 380px, file size: 108 KB
And it's a JPG: a woman + a gradient + some layers on top of and behind her.
Do you think 108 KB is too much? I was thinking about cutting it into PNG pieces. Would that be a bad idea? What are your suggestions? Thanks for the help!
It depends on the nature of the image. If it's a photograph, JPEG gives the highest quality/compression ratio; if it's pixel art such as writing or clipart, or has some transparency, then choose GIF or PNG (GIF and PNG8 offer single-level transparency, while PNG24 offers per-pixel alpha transparency).
When I'm confused, I usually save the picture in all three formats and decide which gives the best quality/size trade-off within the size limits I need. I also try lowering the JPEG quality to the level where the image still looks good (because that varies from one image to another).
Also, if it's a photograph with some writing on it, split it into a JPEG photograph with a transparent GIF text overlay (because text edges look distorted in JPEG).
So, when you are confused, give all three a try and decide; with time you'll gain experience about which format suits which content best.
Actually, 108 KB for an image of that size isn't abnormal. It's even better to keep it as one image: your browser only needs to perform one GET request. If you break it up into multiple images, the user has to wait longer for the site to load.
PNG probably wouldn't help you, since JPG is usually much more efficient at handling gradients. PNG would be better if you had big single-colored areas in your image.

Possibilities to compress an image (size)

I am implementing an application in which I need to find a way to compress images (their file size), because it will help a lot in saving space in the database (server). Please help me with this.
Thanks in advance,
Sekhar Behalam.
Your options are to reduce the dimensions of the images and/or to reduce the quality by increasing the compression. Are the images photographic in nature (JPG is best) or simple solid-colour graphics (use PNG)?
If the images are JPG (lossy compression) you can simply load and re-save them with a higher compression setting. This can result in a large space saving.
The image quality will of course decline, but you can get away with quite a lot of compression in JPG before it becomes noticeable. What is acceptable is of course determined by how the images are used (which you have not stated).
Hope this helps.
Also consider pngcrush, a utility included with the SDK. In the Project Settings in Xcode, there's an option to "Compress PNG Images"; make sure to check that. Note that this only works for resource images (as far as I know) - but you haven't stated whether these images will be user-created or brought into the app bundle directly.