Hiding products behind a form [closed]

I'm building a webshop that sells tires. I think the most user-friendly approach would be to hide my products behind a search form, where customers can select tire dimensions, price range, etc.
I've been told that Google never submits a form when crawling a site, so if I "hide" the products behind a form, will Google ever index my products?
If not, how do I best work around this? I've been thinking about building a regular menu with category submenus (by brand, price range, speed rating, etc.) so that Google can crawl my links, and then replacing the menu with a form using JavaScript. Google would crawl the links and users would browse via the form. But with 3000 products, could this cause duplicate content, get flagged as link spam (if there is such a thing), etc.?

If the only way to find your products is to complete and submit a form, then no, neither Google nor any other search engine will be able to find and index that content. To get around this you have a few options:
Have an HTML sitemap on your site that also links to your products. Besides being a good way to generate internal links with good anchor text, it also gives search engines an alternative means of finding that content.
Submit an XML sitemap. This is similar to an HTML sitemap, except it is in XML and not publicly visible; a minimal example follows this list.
Use progressive enhancement: keep a plain HTML menu available to users who don't have JavaScript turned on, then use JavaScript to recreate your form functionality on top of it (assuming this increases usability); see the sketch after this list.
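For the XML sitemap option, a minimal sketch (the product URL and date are placeholders, not from the question):
<?xml version="1.0" encoding="UTF-8"?>
<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">
  <url>
    <loc>http://mytirestore.com/firestone/r78-13</loc>
    <lastmod>2012-01-01</lastmod>
  </url>
</urlset>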
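And a rough sketch of the progressive-enhancement option: the crawlable menu lives in the HTML, and JavaScript swaps it for the search form when it runs. The element IDs are hypothetical, and this assumes the form starts out hidden in the HTML:
// crawlers that don't run JavaScript keep seeing the plain link menu
document.addEventListener('DOMContentLoaded', function () {
  var menu = document.getElementById('category-menu'); // plain crawlable links
  var form = document.getElementById('search-form');   // the tire search form
  if (menu && form) {
    menu.style.display = 'none';   // hide the fallback menu
    form.style.display = 'block';  // reveal the form for human visitors
  }
});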
You shouldn't run into any duplicate content issues unless you can reach the same product through more than one URL. None of the above should cause that to happen. But if the way you implemented your products can cause this, just use canonical URLs to identify the main URL. Then if the search engines see multiple pages with the same content, they know which one is the main page to include in their search results.

To avoid any on-site duplicate content issues, you can use the canonical tag to indicate the primary content page. This works quite well for ecommerce sites where there are often multiple ways to reach a product listing.
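For example, every URL variant of a product page would point at one primary URL from its <head> (the URL here is a placeholder):
<link rel="canonical" href="http://mytirestore.com/firestone/r78-13"/>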
Having a separate page for each product also helps SEO because it gives your visitors a clean link to share via social networks, forums, blogs and so forth. So instead of sharing something like mytirestore.com?q=24892489892489248&p=824824, they can share something like mytirestore.com/firestone/r78-13. This kind of keyword-targeted external link will also work wonders for your SEO for product-specific keywords.


How do spy tools like Ali Hunter and PPSPY get data from Shopify stores [closed]

How do spy tools like Ali Hunter and PPSPY get data from Shopify stores? Normally, to get this data you'd need to use a webhook, but that only applies to your own store and stores that have installed your application.
PPSPY does the following:
1. It reads your Shopify sitemap to find the products in the store: .../sitemap_products_1.xml
2. As a fallback, it parses .../collections/all?sort_by=best-selling and tries to find the products there.
3. Next, it uses the JSON URL from Shopify, where it again tries to find all products. An example URL: .../products.json?page=1&limit=250 - most store owners don't even know this exists.
4. After that, it calls the JSON URL for each product. You can get this URL in your online store by simply opening a product page and appending ".json" to the URL. Example URL: .../products/your-productname.json
5. In this JSON there is a field "updated_at", which is updated every time a change is made, including when an order takes place (the stock changes). With this, it is possible to track sales (approximately).
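A minimal sketch of steps 3 and 5 in Node.js (the store domain is a placeholder, and this assumes a Node version with the built-in fetch API):
// enumerate a store's catalogue via the public JSON endpoint (step 3),
// then watch each product's updated_at field (step 5)
const base = 'https://some-store.example.com';

async function listProducts() {
  const res = await fetch(base + '/products.json?page=1&limit=250');
  const data = await res.json();
  for (const p of data.products) {
    // updated_at changes on any edit, including stock changes caused by orders
    console.log(p.handle, p.updated_at);
  }
}

listProducts().catch(console.error);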
These are called web scrapers or crawlers. They go to your product pages (following all the links in your ecommerce site) and parse the content of each page, extracting the product name and price. They do this every X hours or X days and collect the information, so they don't need any webhook.
In theory you could make your page complicated enough that it is not easy to crawl; for example, you could render the price with JavaScript (crawlers typically don't execute JavaScript). But that would make your website less accessible, especially to Google, which is in fact just another crawler.

Is it a good idea to use name in this situation? [closed]

I have a network of about 200 blogs (WordPress Multisite), and all of them show links to all the others in a sidebar on the right-hand side (basically 200+ links on the right-hand side of every single page). I have them set to rel="nofollow" now, but I was wondering if changing it to rel="noindex, nofollow" would be a good idea?
Thank you for any input.
nofollow
nofollow only means that a bot should not follow this link. If you are concerned only about Google (as your tag suggests), this will probably be of help:
How does Google handle nofollowed links?
In general, we don't follow them. This means that Google does not transfer PageRank or anchor text across these links. Essentially, using nofollow causes us to drop the target links from our overall graph of the web. However, the target pages may still appear in our index if other sites link to them without using nofollow, or if the URLs are submitted to Google in a Sitemap. Also, it's important to note that other search engines may handle nofollow in slightly different ways.
[Source]
However, adding this attribute is in no way a hard restriction: there is no formal standard, and some bots may ignore it altogether. Also, search engines may still flag the page as a link-building site depending on the content-to-link ratio.
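For reference, nofollow is applied per link, e.g. (placeholder URL):
<a href="http://example.com/some-blog/" rel="nofollow">Some blog</a>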
noindex
noindex is not used in links by Google (I do not know about others). It is meant for the robots <meta> tag in the HTML <head> and applies to the whole page, so it is most likely of no use to you. Example:
<meta name="robots" content="noindex"/>
linkbuilding
200 links are, however, not very user-friendly either. You should seriously consider reducing the number of links by, for example, selecting those that cover a similar topic.
As you read this, look to the right: yes, here on Stack Overflow there is a box titled "Related". This is how you do it. Imagine them putting every single topic ever created in there... not very useful.
Also, if you do this with some logic like I suggested above, rather than just randomly selecting N links from the list, you can probably remove the nofollow, since the links will become useful and Google likes useful links.
You could then also add a "spotlight" for low-traffic sites (those would probably need the nofollow, though).

hide text or div from crawlers [closed]

Let's say I have this text:
<span class="hide">for real</span><h2 id='show'>Obama is rocking the house</h2>
<span class="hide">not real</span><h2 id='show'>Bill gates is buying stackoverflow</h2>
I need the crawler to read just:
<h2 id='show'>Obama is rocking the house</h2>
<h2 id='show'>Bill gates is buying stackoverflow</h2>
Can we do that?
I'm a bit confused here. This says that a hidden div is read by Google:
Does google index pages with hidden divs?
But when I googled for a bit, I found that Google doesn't read hidden divs. So which is right?
http://www.seroundtable.com/archives/002971.html
What I have in mind is to obfuscate it, like using CSS instead. I could also put my text in an image and output it using an image generator or something.
FYI, serving different content to users than to search engines is a violation of Google's terms of service and will get you banned if you're caught. Content that is hidden but can be accessed through some kind of trigger (a navigation menu link is hovered over, the user clicks an icon to expand a content area, etc.) is acceptable. But in your example you are showing different content to search engines specifically for their benefit, and that is definitely what you don't want to do.
The best way to ask a web crawler not to access content on your site is to create a robots.txt file. See http://robotstxt.org. There is no way to tell a robot not to access one part of a page:
http://code.google.com/web/controlcrawlindex/docs/faq.html#h22
If you are going to use CSS, remember that robots can still read CSS files! You could, however, list the CSS file in robots.txt to exclude it.
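A minimal robots.txt sketch along those lines (the CSS path is hypothetical):
User-agent: *
Disallow: /css/hide.css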
If you really must have indexed and non-indexed content on the same page, maybe you should use frames and list the non-indexed frame's URL in the robots.txt file so it is not crawled.
Well-behaved crawlers, e.g. Google's, will follow the robots.txt guidance, but naughty ones will not. So there is no guarantee.
I can confirm that Google does read hidden divs, although they do not show up in the search results.
The reason I know: I administer a website that has backlinks on a highly respected non-profit's site. As the non-profit doesn't want to show up in search results for a company website, they hide the links.
However, if I check Google's Webmaster Tools, I can see the backlinks from this non-profit.

Getting Good Google PageRank [closed]

In SEO, people talk a lot about Google PageRank. It's kind of a catch-22: until your site is actually big (at which point you don't really need search engines as much), it's unlikely that big sites will link to you and increase your PageRank!
I've been told that it's easiest to simply get a couple of high-quality links pointing to a site to raise its PageRank. I've also been told that there are certain open directories, like dmoz.org, that Google pays special attention to (since they are human-curated link collections). Can anyone speak to the validity of this or suggest another site/technique to increase a site's PageRank?
Have great content
Nothing helps your Google rank more than having content or offering a service people are interested in. If your website is better than the competition's and solves a real need, you will naturally generate more traffic and inbound links.
Keep your content fresh
Use friendly URLs that contain keywords
Good: http://cars.com/products/cars/ford/focus/
Bad: http://cars.com/p?id=1232
Make sure the page title is relevant and well constructed
For example: Buy A House In France :. Property Purchasing in France
Use a domain name that describes your site
Good: http://cars.com/
Bad: http://somerandomunrelateddomainname.com/
Example
Type car into Google; four of the top 5 links have car in the domain: http://www.google.co.uk/search?q=car
Make it accessible
Make sure people can read your content. This includes a variety of different audiences:
People with disabilities: sight, motor, cognitive disabilities, etc.
Search bots
In particular, make sure search bots can read every single relevant page on your site. Quite often search bots get blocked by the use of JavaScript links between pages or by frames/Flash/Silverlight. One easy way to handle this is to have a site map page that gives access to the whole site, divided into categories/subcategories, etc.
Down-level browsers
Submit your site map automatically
Most search engines allow you to submit a list of pages on your site including when they were last updated.
Google: https://www.google.com/webmasters/tools/docs/en/about.html
Inbound links
Generate as much buzz about your website as possible to increase the likelihood of people linking to you. Blog/podcast about your website if appropriate. List it in online directories (if appropriate).
References
Google Search Engine Ranking Factors, by an SEO company
Creating a Google-friendly site: Best practices
Wikipedia - Search engine optimization
Good content.
Update it often.
Read and digest everything at Creating a Google-friendly site: Best practices.
Be active on the web. Comment on blogs, and correspond genuinely with people by email, IM, and Twitter.
I'm not too sure about the domain name advice. Wikipedia? What does that mean? Mozilla? What word is that? Google? It was a typo. Yahoo? Sounds like that chocolate drink Yoo-hoo.
Trying to cram keywords into the domain name shoehorns you in anyway, and it can be construed as an SEO trick in the future (if it isn't already!).
Answer all email. Answer blog comments. Be nice and helpful.
Go watch garyvee's Better Than Zero. That'll motivate you.
If it's appropriate, having a blog is a good way of keeping content fresh, especially if you post often. A CMS would be handy too, as it reduces the friction of updating. The best option is user-generated content, as other people make your site bigger and keep it updated, and they may well link to their contributions from their other sites.
Google doesn't want you to have to engineer your site specifically to get a good PageRank. Having popular content and a well designed website should naturally get you the results you want.
An easy trick is to use Google Webmaster Tools: https://www.google.com/webmasters/tools
You can generate a sitemap using http://www.xml-sitemaps.com/
Then don't forget to use www.google.com/analytics/
And be careful: most SEO guides are not correct, and playing fair is not always the good approach. For example, everyone says that spamming .edu sites is bad and ineffective, but it is effective.

how to get the googlebot to get the correct GEOIPed content? [closed]

OK, this problem is doing my head in, and I don't know if there even IS a definitive answer.
We have a website; let's call it mycompany.com. It's a UK-based site with UK-based content. Google knows about it, and we have done a load of SEO on it. All is well.
Except we are about to relaunch the company as a GLOBAL brand, so we now need mycompany.com/uk, mycompany.com/us, and mycompany.com/au for each country's local content. We are using GeoIP, so if someone from the US loads mycompany.com, they get redirected to mycompany.com/us, etc.
If someone isn't in one of those three countries (US, Australia, or UK) they get the UK site.
This is all well and good, but we don't want to lose the rather large amount of Google juice we have on mycompany.com! And worse, the Googlebot appears to be 100% based in the US, so the US site (which is pretty much our LEAST important of the three) will appear to be the main one.
We have thought about detecting the bot and serving it UK content, but it appears Google may smack us for that.
Has anyone else come across this situation, and have a solution?
As long as Google can find mycompany.com/uk and mycompany.com/au, it'll index all three versions of the site. Your domain's Google juice should apply to all three URLs just fine if they're on the same domain.
Have you thought about including links for different sites on the homepage? Google could follow those and index their content as well - in turn indexing the UK content.
If you instead use uk.mycompany.com, us.mycompany.com, etc., then you can register them with Google Webmaster Tools and specifically tell Google which country each one targets. This might still work with folders rather than subdomains, but I haven't tried it.
One way to get around that, thinking about it, would be to 301 redirect uk.mycompany.com to mycompany.com/uk; then you'd be telling Google which country each section targets while keeping your existing structure.
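A minimal sketch of that 301 redirect in an Express app (the framework choice is an assumption; the hostnames come from the question):
// send country subdomains to the matching folder, keeping the requested path
const express = require('express');
const app = express();

const map = {
  'uk.mycompany.com': '/uk',
  'us.mycompany.com': '/us',
  'au.mycompany.com': '/au'
};

app.use(function (req, res, next) {
  const prefix = map[req.hostname];
  if (prefix) {
    // 301 tells Google the folder URL is the permanent home of this content
    return res.redirect(301, 'https://mycompany.com' + prefix + req.url);
  }
  next();
});

app.listen(80);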
@ross: yes, we have links between the sites. It's just the home page, and the question of which one comes up when someone searches for "my company" in Google.
Thanks!
Google Alerts just brought me to this thread.
The domain name that was previously used in your question is my blog, and it is not for sale. Are you just using it as an example domain, for the purpose of this discussion only? The convention is to use example.com, as it is reserved for this exact purpose. Some clarification would be appreciated.