Avoiding Duplicate Content on my website (for blog posts and testimonials) - seo

It is often said that "duplicate content is a bad thing for search engines".
On my website (http://www.prezzio.net), I display a selection of 6 blog posts at the bottom of my home page. Clicking on each of them allows the visitor to reach the full blog post.
I think this is a duplicate content problem, isn't it? These 6 excerpts are displayed on the home page but also in the blog section. What can I do to avoid being penalized by search engines?
I have the same problem with my testimonials: on my home page I display 2 of my testimonials, and on another page I display all of my testimonials. So I have duplicate content for those 2 testimonials.
Thanks for your suggestions.

Related

Google plus one counts lost after adding 'www' to website URL

After I changed my website URL on my Google+ page (from http://unimojo.ir to http://www.unimojo.ir, to get better SEO results), these things are happening:
My home page's +1 count got reset.
Before the change, clicking the +1 button on any post on the page added the click to my home page's +1 badge, but after the change it no longer does.
Does anyone know what I can do about it?

Human-readable URL change

The question is the following: we have a site with videos, where the URL contains the video title, which can change at any time. For example, a user uploads a video and names it "nice video", then renames it to "nice video in London". In this case the URL also changes from "http://example.com/video123/nice-video" to "http://example.com/video123/nice-video-in-london".
From my research I found that Dailymotion uses a canonical link pointing to the page without any keywords in the URL (example.com/video123). So the question is: which URL will appear in the SERP?
How should we take care of this? Thank you so much in advance for any suggestions.
Regards,
Constantine
Answer: You put the URL of the page that you intend to give the credit to inside the canonical link. The page that is going to show in the SERP is the one whose URL is INSIDE the canonical link tag, not the one that HAS the tag.
Why:
Page0 = http://example.com/video123/nice-video-in-london
Page1 = http://example.com/video123/nice-video
The canonical link is used so you can make clear to the crawlers that a page is duplicated content and that the original is the URL in the canonical link. So in your example the search engine looks at Page0, which is "http://example.com/video123/nice-video-in-london", and finds a canonical tag. The search engine understands that this is duplicated content, looks at the URL in the canonical tag (canonical = the original, Page1: "http://example.com/video123/nice-video"), and realises that all the credit Page0 earns should be added to Page1. For that reason Page0 (/video123/nice-video-in-london) gets no credit of its own, while Page1 (/video123/nice-video) is credited for both pages, AND that is the page that will show in the SERP, for what I think are obvious reasons.
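For illustration, a minimal sketch of what that tag could look like in the <head> of Page0, assuming the canonical target is Page1 as above:

<!-- Placed on the duplicate page (Page0,
     http://example.com/video123/nice-video-in-london).
     Crawlers treat the href below as the original,
     and that is the URL that will show in the SERP. -->
<link rel="canonical" href="http://example.com/video123/nice-video" />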
Let me know if you have more questions on that, or if you need more details on how or why it works that way.

How can I make search engines find the blog on my website?

I'm trying to improve the SEO of a website. I'm using the tool WooRank to improve some aspects of the site, and I created a blog with Blogger. I have put an image on my homepage with a link to my blog on Blogger, but WooRank keeps warning me that I don't have a blog. My blog has links to my website in my profile and in the entries. Some SEO tools can find my blog from a landing page, linking to the blogspot URL from an image in the header and in the footer, but some cannot. Any ideas on how I can solve this issue?
Try adding a text link using the anchor text 'blog', or, if you haven't already, include the word 'blog' in the alt attribute of the image that links out to the blog.
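For example, hypothetical markup along these lines (the blogspot URL and image path are placeholders, not the asker's real ones):

<!-- A plain text link with the anchor text "blog" -->
<a href="http://example.blogspot.com">Blog</a>

<!-- Or keep the image link and describe it in the alt attribute -->
<a href="http://example.blogspot.com">
  <img src="/images/blog-banner.png" alt="Visit our blog">
</a>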
Sam
Try submitting the link through Google's URL submission tool so it gets picked up for indexing. Also, make sure your robots.txt is not blocking the page or the folder from being indexed. If your website does not have a robots.txt file, add a blank one.
Focus on Google, once they index the page, the other engines will follow.

How to avoid the same content and keywords across multiple pages and focus on the master page only

Hello, good morning all,
Hope you are all doing well. I am in a tricky situation with my multiple detail pages, and I want some ideas on how I can keep Google from crawling my detail pages and have it crawl their container page instead, since they share about 90% of the same keywords, meta data and URL structure. For example, I have a master page for multiple categories here: http://www.estatemarker.com/ahmedabad/industrial-properties.html. It contains multiple categories, and opening a category leads to another page, http://www.estatemarker.com/ahmedabad/industrial-warehouse.html, which is a subcategory. This subcategory has area-wise listings across multiple pages. Those pages are the same as the subcategory page except that they contain the listings for one area only, posted by brokers. The problem is that I want Google to focus on this area-wise page only, but it contains 50 or more listings, each of which opens a detail page, and every detail page contains 90% of the same keywords and other SEO elements as the area-wise page. I need guidance on how to keep Google from crawling these detail pages and have it crawl the area-wise page instead.
Any help will be appreciated.
Thanks in advance
You can add a canonical tag, or you can use robots.txt rules like:
User-agent: *
Disallow: /*?dir=
Disallow: /*&order=
Disallow: /*?price=
to deal with such problems.
http://www.goinflow.com/duplicate-content-ecommerce-seo/
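As a sketch of the canonical option: a detail page can point at the page you want credited. The detail-page URL below is hypothetical (only the category URLs from the question are real), and in practice the href would be whichever area-wise page you want to rank:

<!-- Placed in the <head> of a near-duplicate detail page
     (hypothetical URL such as /ahmedabad/industrial-warehouse/listing-123.html).
     Search engines are told to credit the target page instead. -->
<link rel="canonical" href="http://www.estatemarker.com/ahmedabad/industrial-warehouse.html" />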

How do I set up a robots.txt which allows all pages EXCEPT the main page?

If I have a site called http://example.com, and under it I have articles, such as:
http://example.com/articles/norwegian-statoil-ceo-resigns
Basically, I don't want the text from the front page to show in Google results, so that when you search for "statoil ceo", you ONLY get the article itself, and not the front page, which contains this text but is not the article itself.
If you did that, Google could still display your home page in the results with a note under the link saying it couldn't crawl the page. This is because robots.txt doesn't stop a page from being indexed. You could noindex the home page instead, though personally I wouldn't recommend it.
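If you did go the noindex route, a minimal sketch of what it would look like; note that the home page must stay crawlable (not blocked in robots.txt), or Google will never see the directive:

<!-- Placed in the <head> of the home page only -->
<meta name="robots" content="noindex, follow">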