I'm working on a large project with a lot of images. To increase its page speed, Chrome's Lighthouse recommends that I defer (lazy-load) the images. But my company gives priority to the ranking of the page, and I'm not sure how this affects Google's crawlers.
As you know, after deferring the images there is no real image URL in the src attribute. So how can Google understand and index my images? Can someone point me to a reliable resource to understand the problem?
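A representative tag (the file names here are placeholders for illustration):

<img src="placeholder.gif" data-src="actual-image.jpg" class="lazyload" alt="example">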
Above is a sample deferred image tag. As you can see, the src attribute doesn't contain the actual image; the actual image URL is in the data-src attribute, from which it will be loaded into the page using JavaScript.
I just want to know: how does this affect our SEO / page ranking?
I thought I had read somewhere that lazy-loading is fine for SEO, but to be sure I did some googling and found the following. Spoiler alert: Googlebot will render the full page, and thus all images will have populated src="" attributes.
https://yoast.com/ask-yoast-lazy-load/
Related
I have a large website and it would take days to add the loading="lazy" attribute to all img tags on my site. Is it useful to run something like $('img').attr('loading', 'lazy') on the site (does this even work?), or will it just make the site slower?
It doesn't necessarily have the expected effect: if you're adding the attributes via JavaScript, the page itself has already been parsed by the browser (and by its preload scanner as well), and all of those images will already have been put into the download queue, as if the attribute had never existed on them.
So I would strongly recommend adding those attributes in the source markup itself.
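For example, instead of patching it in later with jQuery, the attribute would sit directly in the markup (a minimal sketch; the file name and alt text are illustrative):

<img src="photo.jpg" loading="lazy" alt="A photo" width="640" height="480">

That way the browser's preload scanner sees the attribute before the image is queued for download.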
I am trying to share a post from my website (a blog) on Google+, but it isn't showing the featured image of the article; it just shows the title and link. I have microdata and also og tags on the page, and when tested with Google's Structured Data Testing Tool everything looks fine. If I share the home page an image shows up, but if I share any individual post, no image appears. Let me know if you need any more info; I'd be happy to provide it.
One of the posts from the website:
The og:image meta tag is used by Google+ rather than the image property within your http://schema.org/BlogPosting. As @abraham pointed out, this is a broken link: it should point to http://top10grocerysecrets.com/Top-10-foods-for-releiving-inflammation.jpg, but currently it includes /wp-content/uploads/sites/17/2015/07/, which isn't part of the image's path.
In the structured data it is valid, but not correct: the BlogPosting has an image set, but without a full path, which may be why it gets ignored. The source should begin with http://. This is also needed if you want the image to appear in the Google search results preview.
The WebPage element does not have an image set: only the BlogPosting does. Consider setting the same image property using a meta tag inside the WebPage element if fixing the BlogPosting image's path does not resolve the structured data issue, e.g.
<meta itemprop="image" content="http://top10grocerysecrets.com/Top-10-foods-for-releiving-inflammation.jpg" />
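Likewise, the image inside the BlogPosting itself should carry the absolute URL, roughly like this (surrounding markup simplified):

<div itemscope itemtype="http://schema.org/BlogPosting">
  <img itemprop="image" src="http://top10grocerysecrets.com/Top-10-foods-for-releiving-inflammation.jpg" alt="Print" />
</div>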
In the structured data there are two unrelated mistakes:
The BlogPosting has author set to a link with a fixed IP address, http://162.244.66.231/top10grocerysecrets/author/cyoung; this will reduce the chance of it connecting the blog with C Young's profile on the website.
The file name http://top10grocerysecrets.com/Top-10-foods-for-releiving-inflammation.jpg has 'releiving' in it, which is not the spelling used in the text on the image itself. This doesn't matter a great deal.
I have a webpage with many areas whose visibility can be toggled by the user.
The default visibility state for those areas is hidden (CSS display: none).
I don't have control over what's going to be put inside them, but it could be a lot of images.
I saw with Firefox's network monitor that all images were loaded with the page. This is quite a waste of bandwidth, since the user might choose not to display every area.
I came up with a workaround: I put all that content inside a <script type="late-rendering"></script> and, to avoid any potential conflict (e.g. a literal "</script>" inside the content), I replace every "<" with "8691jQfdtxm" (a randomly picked string). Then, when the user wants to make an area visible, I just fill the area with that content after replacing "8691jQfdtxm" with "<".
It works fine, but I think proceeding like this will make crawlers (e.g. Google) think my webpage is pure garbage. How can I avoid that?
Unless search engines were heavily relying on the alt attributes of your images, or their filenames, there is little risk you will lose search rankings. If your site loads more quickly as a result, it will provide a better user experience, which will probably be detected by Google, and this influences rankings positively.
Google executes a lot of JavaScript these days, and your trick of breaking the HTML with a random string seems hokey to me.
I would preload all the textual content (e.g. have it all in there on first load, with the div closed via display: none). This content will not count as much as visible content, but it does count.
Then I'd do delayed loading of the images, e.g. make all your images something like:
<img src="blank.jpg" loadlater="realimage.jpg">
blank.jpg can be a tiny image. When the div opens, you can use JavaScript/jQuery to rewrite each src from the data-loadlater value.
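A minimal sketch of that swap, assuming jQuery and the data-loadlater attribute above (the selectors and IDs are hypothetical):

function revealArea($area) {
  // Swap the real URL into src for every deferred image in this area.
  $area.find('img[data-loadlater]').each(function () {
    $(this).attr('src', $(this).data('loadlater'));
  });
  $area.show();
}

// Usage: fetch the images only when the user opens the area.
$('#show-gallery').on('click', function () {
  revealArea($('#gallery'));
});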
I have a "Browse Pictures" feature with thumbnails that expand when a user clicks them.
Both of these images are stored in separate virtual directories at different sizes, the larger being 200×200 px.
Still, it only shows the smaller image when I click to enlarge, instead of the 200×200 image.
You can add a random URL parameter to the image's src, so that the rendered HTML looks like
<img src="http://static.example.com/some/large/image.jpg?234234652346"/>
instead of
<img src="http://static.example.com/some/large/image.jpg"/>
It sounds like you don't want to prevent them from being cached as such, but you want to give them different URLs.
If they do have different URLs, then this is not a caching problem.
To prevent caching, use a Cache-Control: no-cache HTTP response header when serving the images. (Are you using Apache?)
But if you really do prevent caching, your data transfer will be higher than it needs to be: every time users visit your gallery, they will fetch your images all over again.
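For reference, with Apache that header could be set via mod_headers, e.g. (a sketch, assuming JPEG images):

<FilesMatch "\.jpe?g$">
  Header set Cache-Control "no-cache"
</FilesMatch>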
Google image search seems to do a poor job on a site I run in identifying which image on a page should be indexed. In addition it doesn't seem to link that image with lots of the associated data.
Are there any ways of focusing a spider's attention on particular images and their associated data? Do they need to be within the same tags, or adjacent on the page?
A few tips:
Use a descriptive file name, e.g. "tabby-cat.jpg" instead of "img02396.jpg".
Use alt attributes on images.
Use descriptive text on the page and around the image.
Make sure the images are in the generated source, i.e. if you click "View source" in your browser, you see <img> tags.
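Putting the first few tips together, the markup might look something like this (the file name, alt text, and caption are made up):

<figure>
  <img src="/images/tabby-cat.jpg" alt="A tabby cat sleeping on a windowsill">
  <figcaption>A tabby cat napping in the afternoon sun.</figcaption>
</figure>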
It's also useful to validate your site at http://validator.w3.org in case there are major errors, like missing brackets, that could prevent a spider from parsing the page. (Note: I wouldn't worry about making everything 100% valid, since Google is fine with invalid code.)
Images in CSS (i.e. backgrounds) are not indexed AFAIK. However I'd suggest using CSS backgrounds for "design" images (a subtle way of getting Google to ignore site headers, custom borders, shadows, etc).
Nor are any images generated from JavaScript.
Make sure you're not blocking images through robots.txt. I know that Joomla does this by default.
Sign up at Google Webmaster Tools, add your site, then allow it to be used in Google's "Image Labeler" game, which should help tag your images.
All images on a page should be indexed. If they aren't, improve your alt attributes and possibly rename the image files. There really isn't anything more you can do, since search engines do not read any other context for the image itself except its size. If Google thinks the image is a duplicate, it won't index it either.
Of course, if images really do inherit context from the surrounding page, then you could just use fewer images or move them into CSS.
I think search robots cannot read images the way we do, so the simplest and most important thing you can do for your images is to use descriptive names, so the spider can tell what each image is about. The second is to use alt attributes on the images, with keywords relating to them.
Those are the things I do.