Preventing images from being cached in the browser

I have a "Browse Pictures" feature with thumbnails that expand when a user clicks them.
The two image sizes are stored in separate virtual directories, the larger being 200*200 px.
Still, when I click a thumbnail to enlarge it, it only shows the smaller image instead of the 200*200 one.

You can add a random URL parameter to the image's src, so that the rendered HTML looks like
<img src="http://static.example.com/some/large/image.jpg?234234652346"/>
instead of
<img src="http://static.example.com/some/large/image.jpg"/>
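If the page builds the large <img> element client-side, a minimal sketch of this (the URL and the Date.now() cache-buster are just illustrative) could be:

var img = document.createElement('img');
// Date.now() yields a new query string on every page view, so the browser
// never reuses a cached copy of the large image
img.src = 'http://static.example.com/some/large/image.jpg?' + Date.now();
document.body.appendChild(img);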

It sounds like you don't want to prevent them from being cached as such, but rather want to give them different URLs.
If they do have different URLs, then this is not a caching problem.
To prevent caching, send a Cache-Control: no-cache HTTP response header when serving the images. (Are you using Apache?)
But if you really prevent caching, your data transfer will be higher than it needs to be: every time users visit your gallery, they will re-fetch all of your images.
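If you are serving the images from Apache with mod_headers enabled, a minimal .htaccess sketch (the extension list is just an example) would be:

<FilesMatch "\.(jpe?g|png|gif)$">
  Header set Cache-Control "no-cache"
</FilesMatch>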

Related

Will deferring images cause a drop in Google rankings?

I'm working on a large project with lots of images. To increase its page speed, Chrome's "Lighthouse" recommends that I defer images. But my company gives priority to the ranking of the page, and I'm not sure how this affects Google's crawlers.
As you know, after deferring the images there is no real image URL under the "src" attribute. So how can Google understand and index my images? Can someone provide me a reliable resource to understand the problem?
In a deferred image tag like the one below, the src attribute doesn't contain the actual image; the actual image sits in the data-src attribute and is loaded into the page using JavaScript.
I just want to know: how does this affect our SEO/page ranking?
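A representative deferred image tag (attribute names vary by lazy-loading library; this one is just illustrative) looks something like:

<img class="lazyload" src="placeholder.gif" data-src="actual-image.jpg" alt="description">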
I thought I had read somewhere that lazy loading is fine for SEO, but to be sure I did some googling and found the following. Spoiler alert: Googlebot renders the full page, so all images will end up with populated src="" attributes.
https://yoast.com/ask-yoast-lazy-load/

Why is there a time span between different network requests?

I'm optimizing the loading times in a web app and I can't figure out what the problem is. Firebug's Net panel is showing time gaps between requests.
Can someone explain this chart to me?
The gap between the requests can have two reasons:
Time needed to parse the requested page
When you request a URL, the browser needs to parse the returned contents to check whether they contain URLs to other resources like JavaScript files, CSS files, images, etc. Subsequently requested resources need to be parsed, too; CSS files, for example, can contain references to images, so the contents of the CSS file must first be parsed to get those URLs.
Dynamically requested resources
Using JavaScript, resources can be requested asynchronously. These requests can be triggered e.g. through AJAX or by dynamically inserting DOM nodes like <img src="xyz.png" alt=""> into the page.
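A minimal sketch of the second case (the 2-second delay is arbitrary): the request for xyz.png only starts when the script inserts the node, which shows up in the Net panel as a gap after the initial page load.

setTimeout(function () {
  // The browser only requests xyz.png at the moment this node is inserted
  var img = document.createElement('img');
  img.src = 'xyz.png';
  img.alt = '';
  document.body.appendChild(img);
}, 2000);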

Improve loading speed without losing search ranking

I have a webpage with many areas whose visibility can be toggled by the user.
The default visibility state for those areas is hidden (CSS display: none).
I don't have control over what's going to be put inside, but it could be a lot of images.
I saw with Firefox's network monitor that all images were loaded with the page. This is quite a waste of bandwidth, since the user might choose not to display every area.
I came up with a workaround: I put all that content inside a <script type="late-rendering"></script> and, to avoid any potential conflict (e.g. a "</script>" inside the content), I replace all "<" with "8691jQfdtxm" (a randomly picked string). Then, when the user wants to make an area visible, I just fill the area with that content after replacing "8691jQfdtxm" back with "<".
It works fine, but I think proceeding like this will make crawlers (e.g. Google) think my webpage is pure garbage. How can I avoid that?
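For concreteness, a minimal sketch of what I'm doing (the IDs and filenames are made up):

<script type="late-rendering" id="area-1-stash">
8691jQfdtxmimg src="gallery/photo1.jpg" alt="">
</script>
<div id="area-1" style="display: none;"></div>
<script>
function revealArea(id) {
  // Restore the escaped markup and inject it; the images only load now
  var stash = document.getElementById(id + '-stash');
  var area = document.getElementById(id);
  area.innerHTML = stash.textContent.replace(/8691jQfdtxm/g, '<');
  area.style.display = 'block';
}
</script>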
Unless search engines are heavily relying on the alt tags of your images, or on their filenames, there is little risk you will lose search rankings. If your site loads more quickly as a result, it will provide a better user experience, which will probably be detected by Google, and this influences rankings positively.
Google executes a lot of JavaScript these days, and your trick of breaking the HTML with a random string seems hokey to me.
I would preload all the textual content (e.g. have it all in there on first load, with the div closed via display: none). This content will not count as much as visible content, but it does count.
Then I'd do delayed loading of the images, e.g. make all your images something like:
<img src="blank.jpg" loadlater="realimage.jpg">
blank.jpg can be a tiny image. When the div opens, you can use JavaScript/jQuery to rewrite each src with the loadlater value, as in the sketch below.
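A minimal sketch of that rewrite in plain JavaScript (loadlater is the ad-hoc attribute from the markup above; a data-* attribute would be the standards-friendly spelling):

function openDiv(div) {
  div.style.display = 'block';
  // Swap in the real image URLs only once the area becomes visible
  div.querySelectorAll('img[loadlater]').forEach(function (img) {
    img.src = img.getAttribute('loadlater');
  });
}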

Hotlinking: How do I differentiate between an image in a <img> tag versus an image link?

I want to allow a specific site (Reddit) to be able to show an image from my website on its page, but if a user clicks a link to the image from that site, they should go to an image-viewing page rather than directly to the image itself.
For example:
This other website should be able to have this
<img src="http://mySite.com/myImage.jpg"/>
on their page, and it should show the image. However, if they have this:
<a href="http://mySite.com/myImage.jpg">Link Text</a>
A user who clicks on that link should get redirected to an image viewing page that contains some html, including the image, rather than directly to the image.
I'm trying to achieve this via mod_rewrite. However, those two cases have the same HTTP_REFERER. Is there any way for my server to differentiate between them?
There is no reliable way to do such a thing based on the referrer alone! But if the second site is yours, you can put an optional query string at the end of the URL to distinguish between them, as sketched below.
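A minimal mod_rewrite sketch of that idea, assuming your own embeds append ?embed=1 and that /view.php is a hypothetical image-viewing page:

RewriteEngine On
# Direct requests for the bare image URL (no embed marker) get sent to the viewer page
RewriteCond %{QUERY_STRING} !(^|&)embed=1(&|$)
RewriteRule ^myImage\.jpg$ /view.php?img=myImage.jpg [R=302,L]
# (the viewer page itself embeds myImage.jpg?embed=1, so it still receives the real file)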

How to tell image search which image matters?

Google Image Search seems to do a poor job, on a site I run, of identifying which image on a page should be indexed. In addition, it doesn't seem to link that image with much of the associated data.
Are there any ways of focusing spiders' attention on particular images and their associated data? Do they need to be within the same tags, or adjacent on the page?
A few tips:
Use a descriptive filename, e.g. "tabby-cat.jpg" instead of "img02396.jpg".
Use alt tags on images.
Use descriptive text on the page and around the image (a combined example follows this list).
Make sure the images are in the generated source, i.e. if you click "View source" in your browser, you see <img> tags.
It's also useful to validate your site at http://validator.w3.org in case there are major errors, like missing brackets, that could prevent a spider from parsing the page. (Note: I wouldn't worry about making everything 100% valid, since Google is fine with invalid code.)
Images in CSS (i.e. backgrounds) are not indexed AFAIK. However I'd suggest using CSS backgrounds for "design" images (a subtle way of getting Google to ignore site headers, custom borders, shadows, etc).
Nor are any images generated from Javascript.
Make sure you're not blocking images through robots.txt. I know that Joomla does this by default.
Sign up at Google Webmaster Tools, add your site, and then allow it to be used in Google's "Image Labeler" game, which should help tag images.
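Putting the filename, alt-text, and surrounding-text tips together, hypothetical markup might look like:

<p>Our resident tabby cat enjoying the afternoon sun.</p>
<img src="/images/tabby-cat.jpg" alt="Tabby cat sleeping on a windowsill">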
All images on a page should be indexed. If they aren't, then improve your alt tags and possibly rename the image files. There really isn't much more you can do, since search engines do not read any other context for the image itself except its size. If Google thinks the image is a duplicate, it won't index it either.
Of course, if images really do inherit context from the surrounding page, then you could just use fewer images or move them into CSS.
I think search robots cannot read images the way we do, so the simplest and most important thing you should do for your images is use descriptive names, so that the spider knows what each image is about. The second is using alt tags on images, with keywords relating to the images.
Those things are what I do.