I have a website: www.mysite.com
I am trying to find out the best way to structure subpages so that they get a high Google rank.
In this website I have categories: www.mysite.com/category
Each category will have subpages with photos added, so there will be extra links like www.mysite.com/category?page=2, www.mysite.com/category?page=3, ...
My question: is it better for SEO/ranking to have all the newly created pages like this instead?
www.mysite.com/category/page=2, www.mysite.com/category/page=3, ...
Thank you
Andu
Some advice for better SEO ranking that I hope can help you:
Rewrite URLs in a more SEO-friendly way. For example, www.mysite.com/category?CategoryName=Software can be rewritten as www.mysite.com/Software/. To do this, use the URL rewriting tool on your web server.
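On Apache this can be done with mod_rewrite in a .htaccess file. A minimal sketch -- the script path and parameter name are taken from the URL above, and IIS has an equivalent URL Rewrite module:
RewriteEngine On
# Don't rewrite requests for real files or directories
RewriteCond %{REQUEST_FILENAME} !-f
RewriteCond %{REQUEST_FILENAME} !-d
# Serve /Software/ from the existing query-string script
RewriteRule ^([^/]+)/?$ category?CategoryName=$1 [L,QSA]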
In your case I would design the URLs this way:
instead of
www.mysite.com/category/page=1
www.mysite.com/category/page=2
I would use
www.mysite.com/NameCategory/01/
www.mysite.com/NameCategory/02/
This way you will gain keyword-rich URLs that are also more human-readable.
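To wire those URLs up you could extend the same kind of rule with a page segment (again just a sketch; the script and parameter names are assumptions):
RewriteEngine On
# Map /NameCategory/02/ onto the underlying paginated script
RewriteRule ^([^/]+)/([0-9]+)/?$ category?CategoryName=$1&page=$2 [L,QSA]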
I recommend that you use the following structure:
domain.com/category1/post1.html
domain.com/category2/post2.html
As you add more posts, the categories will gain more weight if you have a good group of keywords.
Regards.
I am working on the SEO optimization of a multistore. For example, I have two sites in the same language and the same currency:
www.test.com
subdomain.test.com
Why like that? Because the main site is for wholesale customers and the subdomain is for retail customers.
We have too many products, so it's impossible to write different text for the shared products.
So we had to publish the same products in both stores, and the duplication is almost 100% (of course the menu and some of the information around the product differ a little, but the product is the same). For us, and also for Google, the main site is www.test.com.
What is the best approach in this case to get the best treatment from Google? I'm wondering whether our main website is being pushed down a little because of the duplication on the subdomain.
I was thinking about setting the subdomain to noindex,nofollow and letting Google index only the main website.
Or, if this isn't a problem for Google, I can leave it as it is now, but I'm not sure.
You can use the canonical tag on the subdomain pages. The canonical tag tells search engines which URL is the master copy of a page; using it prevents the problems caused by identical or "duplicate" content appearing at multiple URLs. Place it in the <head> of each subdomain page and point it at the matching page on the main store, not just the homepage, for example (path hypothetical):
<link rel="canonical" href="https://www.test.com/the-matching-product-page"/>
Is it possible to help search engines by giving them a list of URLs to crawl? It can be hard to make a site SEO-friendly when it relies on heavy AJAX logic. Let's say that the user chooses a category, then a sub-category, and then a product. It seems unnecessary to give categories and sub-categories URLs, but giving each product a URL makes sense: when the application sees a product URL, it can navigate straight to that product. So, is it possible to use robots.txt or some other method to direct search engines to the URLs I designate?
I am open to other suggestions if this somehow does not make sense.
Yes. What you're describing is called a sitemap -- it's a list of pages on your site which search engines can use to help them crawl your web site.
There are a couple of ways to format a sitemap, but by far the easiest is to just list all the URLs in a text file available on your web site -- one per line -- and reference it in robots.txt like so:
Sitemap: http://example.com/sitemap.txt
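The sitemap.txt itself is just the URLs you want crawled, one per line -- in your case, the product URLs (these are made-up examples):
http://example.com/products/1234
http://example.com/products/5678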
Here's Google's documentation on the topic: https://support.google.com/webmasters/answer/183668?hl=en
I'm rebuilding a site from the ground up, but the site I'll be replacing already ranks pretty well for SEO.
I have a number of pages in the format of the following:
http://URL/SECTION/ANOTHERSECTION/send-me-information-on-PRODUCTNAME.php
"send-me-information-on-" is consistent across all products.
I can write redirects on a per product basis, but I've got more than 200 products so it would be great to handle this using a rewrite rule.
What I need to achieve is the following new URL:
http://URL/SECTION/ANOTHERSECTION/product-information-request.php?product=PRODUCTNAME
Now, I understand that for SEO purposes this probably isn't the best approach, but I'd like to maintain a single information-request page.
I figured the best approach would be to use a Regex to match the string, and set an environment variable which I'd use in the resulting URL. I'm not too familiar with .htaccess rules though.
Can anyone help me achieve this?
# In a .htaccess file the tested path has no leading slash, and the dot before "php" must be escaped:
RewriteRule ^([^/]+)/([^/]+)/send-me-information-on-([^.]+)\.php$ /$1/$2/product-information-request.php?product=$3 [QSA,L]
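Note that as written this is an internal rewrite, so visitors and crawlers still see the old URL. Since the old pages already rank, you probably want a visible 301 redirect to the new URL instead, which is just a matter of adding the redirect flag:
RewriteRule ^([^/]+)/([^/]+)/send-me-information-on-([^.]+)\.php$ /$1/$2/product-information-request.php?product=$3 [QSA,L,R=301]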
I have recently started using Google Webmaster Tools.
I was quite surprised to see just how many links Google is trying to index.
http://www.example.com/?c=123
http://www.example.com/?c=82
http://www.example.com/?c=234
http://www.example.com/?c=991
These are all campaigns that exist as links from partner sites.
For right now they're all being denied by my robots.txt file until the site is complete - as is EVERY page on the site.
I'm wondering what the best approach to dealing with links like this is - before I make my robots.txt file less restrictive.
I'm concerned that they will be treated as different URLs and start appearing in Google's search results. They all correspond to the same page - give or take. I don't want people finding them as they are and clicking on them.
My best idea so far is to render any page that contains a query string as follows:
// DO NOT TRY THIS AT HOME. See edit below
<% if (Request.QueryString.Count > 0) { %>
<META NAME="ROBOTS" CONTENT="NOINDEX, NOFOLLOW">
<% } %>
Do I need to do this? Is this the best approach?
Edit: This turns out NOT TO BE A GOOD APPROACH. It turns out that Google sees NOINDEX on a page that has the same content as another page that does not have NOINDEX. Apparently it figures they're the same thing, and the NOINDEX takes precedence. My site completely disappeared from Google as a result. Caveat: it could have been something else I did at the same time, but I wouldn't risk this approach.
This is the sort of thing that rel="canonical" was designed for. Google posted a blog article about it.
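For the URLs above that means each campaign variant declares the clean page as the master copy in its <head>:
<link rel="canonical" href="http://www.example.com/" />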
Yes, Google would interpret them as different URLs.
Depending on your web server you could use a rewrite filter to remove the parameter for search engines, e.g. the URL rewrite filter for Tomcat, or mod_rewrite for Apache.
Personally I'd just redirect to the same page with the tracking parameter removed.
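As a rough mod_rewrite sketch (assuming Apache, and that "c" is the only tracking parameter you need to strip -- note this drops the whole query string):
RewriteEngine On
# 301-redirect any request carrying a ?c= campaign parameter to the
# same path with the query string removed
RewriteCond %{QUERY_STRING} (^|&)c=
RewriteRule ^(.*)$ /$1? [R=301,L]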
That seems like the best approach, unless the page exists in its own folder, in which case you can modify the robots.txt file just to ignore that folder.
For resources that should not be indexed I prefer to do a simple return in the page load:
if (IsBot(Request.UserAgent))
    return;
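IsBot isn't a framework method, so here's a minimal sketch of one, assuming a simple substring match against a few well-known crawler User-Agent fragments:
// Common crawler User-Agent fragments (extend as needed)
private static readonly string[] BotSignatures =
    { "googlebot", "bingbot", "slurp", "baiduspider" };

// True when the User-Agent contains a known crawler signature
private static bool IsBot(string userAgent)
{
    if (string.IsNullOrEmpty(userAgent))
        return false;
    foreach (string sig in BotSignatures)
        if (userAgent.IndexOf(sig, StringComparison.OrdinalIgnoreCase) >= 0)
            return true;
    return false;
}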
I have an existing journal website with the following url structure
http://example.com/dbtable_id/
(eg. http://example.com/89348/)
where 89348 is the primary key id of the journal article.
I want to add the title of the article to the url for SEO purposes like
http://example.com/dbtable_id/article-title
(eg. http://example.com/89348/hello-world)
I like this approach because I don't need to change the PHP code, since it will still look up the article by dbtable_id. All I have to do is append URL-friendly titles to the relevant links in the template files and add one more rule to the .htaccess file.
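Roughly, the extra rule I have in mind looks like this (assuming Apache mod_rewrite and a front controller named article.php, which is a placeholder):
RewriteEngine On
# Look the article up by its numeric id and ignore any trailing title slug
RewriteRule ^([0-9]+)(/[^/]*)?/?$ article.php?id=$1 [L,QSA]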
Is there anything I should be concerned about? Am I following best practices? Will the possibility of a mismatch between "dbtable_id" and "article-title" affect SEO?
Some argue that shallow paths are better than deeper paths, but I don't put too much stock in this. A semantic page with a screwed-up URL will always do better than an unsemantic page with a "perfect" URL.
So I say, go for it. As long as it doesn't have any query-string parameters, you should be fine.
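One safeguard worth considering for the id/title mismatch you mention: since any slug will resolve as long as the id matches, have each article emit a canonical tag pointing at its one true URL so stray variants consolidate (URL taken from your example):
<link rel="canonical" href="http://example.com/89348/hello-world"/>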