Is it a good idea to use name in this situation? [closed]

I have a network of about 200 blogs (WordPress Multisite), and all of them show links to all the other ones in a sidebar on the right-hand side (basically 200+ links on the right-hand side of every single page). I have them set to rel="nofollow" now, but I was wondering if changing that to rel="noindex, nofollow" would be a good idea?
Thank you for any input.

nofollow
nofollow only means that a bot should not follow this link. If you are concerned only about Google (as your tag suggests) this will probably be of help:
How does Google handle nofollowed links?
In general, we don't follow them. This means that Google does not transfer PageRank or anchor text across these links. Essentially, using nofollow causes us to drop the target links from our overall graph of the web. However, the target pages may still appear in our index if other sites link to them without using nofollow, or if the URLs are submitted to Google in a Sitemap. Also, it's important to note that other search engines may handle nofollow in slightly different ways.
[Source]
However, adding this attribute is in no way a hard restriction; there is no binding standard, and some bots may ignore it altogether. Also, search engines may still flag the page as a link-building site depending on the content/link ratio.
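For reference, a sidebar link carrying the attribute looks like this (the blog URL and anchor text are just placeholders):
<a href="http://blog42.example.com/" rel="nofollow">Blog 42</a>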
noindex
noindex is not used in links by Google (I do not know about others). It is meant for the robots <meta> tag in the HTML <head> and applies to the whole page, so it is most likely of no use to you. Example:
<meta name="robots" content="noindex"/>
linkbuilding
However, 200 links are not very user-friendly either. You should seriously consider reducing the number of links by, for example, only selecting those blogs that have a similar topic.
As you read this, look to the right: here on Stack Overflow there is a box titled Related. This is how you do it. Imagine them putting every single topic ever created in there... not very useful.
Also, if you do this with some logic like I suggested above, and not just by randomly selecting N links from the list, you can probably remove the nofollow, since the links will become useful and Google likes useful links.
You could then also add a "spotlight" for low-traffic sites (those would probably still need the nofollow, though).

What impact does having multiple domain names for a site, have on SEO rankings? [closed]

I have searched but not really found anything clear on the matter from what I have read so far. What impact does having your domain name across multiple TLDs (e.g. mycompany.com, mycompany.fr and mycompany.es) have on your rankings? I'm being told that having them point to the same content is likely to get the site shot down by Google.
Google doesn't have a parked domain detector according to Matt Cutts, so if the domain names simply all point to one location it won't hurt you.
However, if you have duplicate content that's another story. In your example it sounds like you might have multiple sites that all have the same content, but are different domain names.
Matt Cutts, the head of Google's Webspam team, claims that duplicate content will not hurt your ranking. You can watch that video here.
He gives the disclaimer that it can hurt if it's "spammy", without going into specific detail about what that actually means. In my experience (I've had about 5-6 clients that did this), Google would typically look at one of their domains and ignore the duplicates, but not hurt their main site. The only exception is if one of the sites that isn't your main one starts getting more backlinks or traffic; then Google sees it as more relevant and ignores your main site's content... Google is going to favor the duplicate that appears the most relevant.
I'm pretty cautious about duplicate content, though, because it has the possibility of hurting your site if Google thinks it's "spammy", and they change their algorithm so frequently now that it's hard to keep up.
My recommendation is to set up the other domain names as parked domains instead of duplicating the site. As you build up backlinks, focus on linking to just one domain name, too.
Yes, if these domains serve the same content, it will sooner or later trigger a content issue or some kind of manual penalty. If Google finds out you own all those domain names (or that they belong to a small network of owners), then they will take action for sure. The penalty will sink you in the SERPs.
It is not natural to have many domain names sharing the same content. It does not happen by accident, and there is no good reason one would need to do this.
I would never recommend using different ccTLDs for the same content in the same languages.
However, if the websites are localized, you can use hreflang to "connect" each version of a page with the appropriate language. Check this link: https://support.google.com/webmasters/answer/189077?hl=en
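As a rough sketch, using the domains from the question (the page path is just a placeholder), each localized page would carry <link> annotations like these in its <head>:
<link rel="alternate" hreflang="en" href="http://mycompany.com/page.html" />
<link rel="alternate" hreflang="fr" href="http://mycompany.fr/page.html" />
<link rel="alternate" hreflang="es" href="http://mycompany.es/page.html" />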

hide text or div from crawlers [closed]

Let's say I have this text:
<span class="hide">for real</span><h2 id='show'>Obama is rocking the house</h2>
<span class="hide">not real</span><h2 id='show'>Bill gates is buying stackoverflow</h2>
I need the crawler to just read the
<h2 id='show'>Obama is rocking the house</h2>
<h2 id='show'>Bill gates is buying stackoverflow</h2>
Can we do that?
I'm a bit confused: this question says that a hidden div is read by Google:
Does google index pages with hidden divs?
But when I googled for a second, I found out that Google doesn't read hidden divs. So which is right?
http://www.seroundtable.com/archives/002971.html
What I have in mind is to obfuscate it, for example by using CSS instead.
I could also put my text in an image and output it using an image generator or something.
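(For reference, the hide class above is assumed to be nothing more than a plain CSS rule along the lines of .hide { display: none; }; the exact rule is not shown in the question.)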
FYI, serving different content to users than to search engines is a violation of Google's terms of service and will get you banned if you're caught. Content that is hidden but can be accessed through some kind of trigger (a navigation menu link is hovered over, the user clicks an icon to expand a content area, etc.) is acceptable. But in your example you are showing different content to search engines specifically for their benefit, and that is definitely what you don't want to do.
The best way to suggest that a web crawler not access content on your site is to create a robots.txt file. See http://robotstxt.org. There is no way to tell a robot not to access one part of a page:
http://code.google.com/web/controlcrawlindex/docs/faq.html#h22
If you are going to use CSS, remember that robots can still read CSS files! You could, however, list the CSS file in robots.txt to exclude it.
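As a sketch (the stylesheet path is a hypothetical example), the robots.txt entry excluding the stylesheet could look like this:
User-agent: *
Disallow: /css/hide.css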
If you really must have indexed and non-indexed content on the same page, maybe you should use frames and have the non-indexed frame listed in the robots.txt file so that it is not crawled.
Well-behaved crawlers (e.g. Google) will follow the robots.txt guidance, but naughty ones will not, so there is no guarantee.
I can confirm that Google does read hidden divs, even though the hidden content does not show up in the search results.
The reason I know: I admin a website that has backlinks on a highly respected non-profit. As the non-profit doesn't want to show up in search results for a company website, they hide the links.
However, if I check Google's webmaster tools, I can see the backlinks from this non-profit.

Hiding products behind a form [closed]

I'm building a webshop that sells tires. I think it would be most user-friendly to hide my products behind a search form, where you can select tire dimension, price range, etc.
I've been told that Google will never submit a form when crawling a site, so if I "hide" the products behind a form, will Google ever index my products?
If not, how do I best work around this? I've been thinking about building a regular menu with category submenus (by brand, price range, speed limit, etc.), so that Google can crawl my links, and then replacing the menu with a form using JavaScript. Then Google will crawl the links and the user will browse by form. But if I have 3000 products, could that cause duplicate content, a flag for link spam (if there is such a thing), etc.?
If the only way to find your products is to complete and submit a form then, no, neither Google nor any other search engine will be able to find and index that content. To get around this you have a few options:
Have an HTML sitemap on your site that also links to your products. Besides being a good way to generate internal links with good anchor text, it also allows search engines an alternative means to find that content.
Submit an XML sitemap. This is similar to an HTML sitemap except it is in XML and not publicly visible (see the sketch after this list).
Use progressive enhancement and have a menu available to users who don't have JavaScript turned on. Then, using JavaScript, recreate your form functionality (assuming this increases usability).
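A minimal XML sitemap sketch (the URL and date below are just illustrative placeholders):
<?xml version="1.0" encoding="UTF-8"?>
<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">
  <url>
    <loc>http://mytirestore.com/firestone/r78-13</loc>
    <lastmod>2012-01-01</lastmod>
  </url>
</urlset>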
You shouldn't run into any duplicate content issues unless you can get to the same product using more than one URL. None of the above should cause that to happen. But if the way you implemented your products can cause this, just use canonical URLs to identify the main URL. Then, if the search engines see multiple pages with the same content, they know which one is the main page to include in their search results.
To avoid any on-site duplicate content issues, you can use the canonical tag to indicate the primary content page. This works quite well for ecommerce sites where there are often multiple ways to reach a product listing.
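A minimal sketch of the canonical tag that would go in the <head> of each duplicate listing page (the product URL is just the example path used in the next paragraph):
<link rel="canonical" href="http://mytirestore.com/firestone/r78-13" />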
Another way a separate page for each product helps SEO is that it gives your visitors a good link to share via social networking, forums, blogs and so forth. So instead of sharing something like mytirestore.com?q=24892489892489248&p=824824 they can share something like mytirestore.com/firestone/r78-13. This kind of keyword-targeted external link will also work wonders for your SEO for product-specific keywords.

SEO and Internal Links: Which Is Better, if It Matters? Absolute or Relative [closed]

I am in charge of my company's SEO, which I really hate. I believe a website with decent design and semantic code (structure), spiced up with attractive content, is the best thing we can do. Yet we are still far from there, in my case especially. So I usually take a very close look at other sites: their design, code, etc. And I suspect I have become paranoid about this.
Today I found a highly respected site which is using absolute internal links, while we are using relative links. As far as I know it does not matter, but I cannot help asking you guys to make sure about this.
If this is a ridiculous question, then I am sorry. As I said, I have become a bit paranoid.
Taken from the Search Engine Optimisation FAQ at the SitePoint Forums:
Should I use relative links or absolute links?
Absolute links. It is recommended by Google as it is possible for crawlers to miss some relative links.
If I can find the link where Google states this, I'll update this post.
EDIT: This might be what the post is referring to, but I've stated my reasons as to why this might be correct in the comments.
http://www.google.com/support/webmasters/bin/answer.py?hl=en&answer=35156
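To make the distinction concrete, here is the same internal link in both forms (the domain and path are purely hypothetical):
<a href="/products/widgets.html">Widgets</a> (relative)
<a href="http://www.example.com/products/widgets.html">Widgets</a> (absolute)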
I have never heard or seen anything that indicates it matters. All you're likely to do is complicate your development. The "highly respected" site is getting good rankings because it's popular, that's all.
It's pretty well a given that search engines store the full path at some point; it's unlikely they wouldn't perform this conversion during the crawl process to remove duplicates.
I don't really follow your logic anyway. You know good structure, relevant content and popularity are the key to ranking so what makes you think you'll gain anything by spending even a minute on random optimisations like this?
I highly doubt Google will be missing any relative links. Apparently the latest version of their crawler will even execute some JavaScript. Don't bother with absolute links; instead, create a good sitemap and submit it to Google through Webmaster Tools. Yahoo and Microsoft also allow you to submit your sitemap, so it might be worthwhile to look into that too - google it.
I don't think there is a definitive answer to this question, but I will weigh in anyway.
Personally, I think using absolute URLs is best. The web is full of crappy content scrapers. Many of the people who wrote these scrapers forget to change the original URLs (in absolute links) before they post the content onto their own page. So, in that regard, absolute URLs can turn into a really dodgy way to get a couple of extra links.
If I follow that, it seems logical that absolute links would also be a great indicator of duplicate content caused by content scrapers.
A couple of years ago, I did some research into what happens to a page's search rankings when you dramatically change content/navigation (i.e. in the case of a dramatic redesign). At that point, I found that having absolute URLs seemed to spook Google a little less. But there were some problems with my research:
a) The 'absolute URL bonus' was barely quantifiable (an average of less than two positions of difference)
b) The 'absolute URL bonus' only lasted a few weeks before Google settled down and started treating both pages the same
c) The research is two years old and the Google algorithm has changed dramatically in that time
When I add a and b together, I'm left with a very unsettled feeling. Google gets a little weird from time to time, so the bonus may have been a fluke that I attributed to absolute URLs. Good old experimental bias... Either way, though, the difference was so slight and lasted for such a short time that I don't think it is worth spending a whole lot of extra time making links absolute!
Best of luck with your site.
Greg

Getting Good Google PageRank [closed]

In SEO people talk a lot about Google PageRank. It's kind of a catch-22: until your site is actually big, and you don't really need search engines as much, it's unlikely that big sites will link to you and increase your PageRank!
I've been told that it's easiest to simply get a couple of high-quality links pointing to a site to raise its PageRank. I've also been told that there are certain open directories like dmoz.org that Google pays special attention to (since they're human-managed links). Can anyone speak to the validity of this or suggest another site/technique to increase a site's PageRank?
Have great content
Nothing helps your Google rank more than having content or offering a service people are interested in. If your website is better than the competition and solves a real need, you will naturally generate more traffic and inbound links.
Keep your content fresh
Use friendly URLs that contain keywords
Good: http://cars.com/products/cars/ford/focus/
Bad: http://cars.com/p?id=1232
Make sure the page title is relevant and well constructed
For example: Buy A House In France :. Property Purchasing in France
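In markup, the title is simply the <title> element in the page's <head>, for example:
<title>Buy A House In France - Property Purchasing in France</title>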
Use a domain name that describes your site
Good: http://cars.com/
Bad: http://somerandomunrelateddomainname.com/
Example
Type car into Google: 4 of the top 5 links have car in the domain: http://www.google.co.uk/search?q=car
Make it accessible
Make sure people can read your content. This includes a variety of different audiences:
People with disabilities: sight, motor, cognitive disabilities, etc.
Search bots
In particular, make sure search bots can read every single relevant page on your site. Quite often search bots get blocked by the use of JavaScript to link between pages or by the use of frames/Flash/Silverlight. One easy way to handle this is to have a site map page that gives access to the whole site, dividing it into categories/subcategories, etc.
Down level browsers
Submit your site map automatically
Most search engines allow you to submit a list of pages on your site including when they were last updated.
Google: https://www.google.com/webmasters/tools/docs/en/about.html
Inbound links
Generate as much buzz about your website as possible to increase the likelihood of people linking to you. Blog or podcast about your website if appropriate. List it in online directories (if appropriate).
References
Google Search Engine Ranking Factors, by an SEO company
Creating a Google-friendly site: Best practices
Wikipedia - Search engine optimization
Good content.
Update it often.
Read and digest everything at Creating a Google-friendly site: Best practices.
Be active on the web. Comment on blogs and correspond genuinely with people in email, IM, Twitter.
I'm not too sure about the domain name. Wikipedia? What does that mean? Mozilla? What word is that? Google? It was a typo. Yahoo? Sounds like that chocolate drink Yoohoo.
Trying to keyword the domain name shoehorns you in anyway. And it can be construed as an SEO technique in the future (if it isn't already!).
Answer all email. Answer blog comments. Be nice and helpful.
Go watch garyvee's Better Than Zero. That'll motivate you.
If it's appropriate, having a blog is a good way of keeping content fresh, especially if you post often. A CMS would be handy too, as it reduces the friction of updating. The best option would be user-generated content, as other people make your site bigger and keep it updated, and they may well link to their content from their other sites.
Google doesn't want you to have to engineer your site specifically to get a good PageRank. Having popular content and a well designed website should naturally get you the results you want.
An easy trick is to use Google Webmaster Tools: https://www.google.com/webmasters/tools
You can generate a sitemap using http://www.xml-sitemaps.com/
Then don't forget to use www.google.com/analytics/
And be careful: most SEO guides are not correct, and playing fair is not always the best approach. For example, everyone says that spamming .edu sites is bad and ineffective, but it is effective.