SEO Question, and about Server.Transfer (ASP.NET)

So, we're trying to move our application up in the search engine rankings, and one way our SEO guy told us to do that was to register similar domains... for example, we have something like
http://www.myapplication.com/parks.html
so... we acquired the domain parks.com (again, just an example).
Now when people go to http://www.parks.com we want it to display the content of http://www.myapplication.com/parks.html.
I could just put a forwarding page there, but from what I've been told that makes us look bad because it's technically a permanent redirect... and we're trying to get higher in the search engine rankings, not lower.
Is this a situation where we would use the Server.Transfer method of ASP.net?
How are situations like this handled? Because I've definitely seen this done by many websites.
We also don't want to cheat the system; we are showing relevant content, not spam, and not tricking customers in any way, so the proper way to achieve what I'm looking for would be great.
Thanks

Use your "similar" domain names to host individual and targeted landing pages that will point to your master content.
It's easier to manage and you will get a higher conversion rate.
Having to create individual pages will force you to write relevant content and will increase the popularity of each page.
I also suggest that you build not only landing pages, but mini sites (of a few pages).
SEO is a very demanding task.
Regarding technical aspects: Server.Transfer is what you should use. Never use Response.Redirect; Google and other search engines will drop your ranking.
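As a rough sketch only (it assumes parks.com is pointed at the same IIS site/ASP.NET application as myapplication.com, and that the content lives in an .aspx page, since Server.Transfer cannot cross application or domain boundaries), the default page that parks.com resolves to could hand the request off server-side:

    // Default.aspx.cs -- hypothetical default page for the parks.com host.
    // Server.Transfer only works within the same ASP.NET application, so this
    // assumes parks.com and myapplication.com are served by the same site.
    using System;
    using System.Web.UI;

    public partial class _Default : Page
    {
        protected void Page_Load(object sender, EventArgs e)
        {
            // Requests that arrived via the parks.com host header are served
            // the parks content without changing the URL in the browser.
            if (Request.Url.Host.EndsWith("parks.com", StringComparison.OrdinalIgnoreCase))
            {
                Server.Transfer("~/parks.aspx");
            }
        }
    }

The visitor's address bar keeps showing http://www.parks.com while the parks content is rendered.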
I used permanent URL rewriting in the past. I changed my website, and since lots of traffic was coming from other websites linking to mine, I wanted a permanent solution.
Read more about URL rewriting: http://msdn.microsoft.com/en-us/library/ms972974.aspx

Related

SEO: how can dynamic URL with query strings be searched by search engine bots?

I’m developing an ecommerce web site in ASP.NET using SQL server 2008 database.
Most of my pages are database driven and all the content is gathered from a SQL Server.
Every product page is created dynamically from data coming from the database, hence every product’s page URL has a unique query string, containing a “product_id” variable.
Example: http://www.myecommence.com/products.aspx?product_id=1
I'd like to improve my Search Engine Optimization.
Dealing with a small number of products could be fine, but what if I had more than 1000 products? How could every product be crawled?
How does the Google spider/bot know that a product_id with a hypothetical number of 767 exists?
I've been googling this, but I still can't understand how pages that have absolutely no reference on the site or on external sites can be crawled. If this were possible, the spider would have to know how to read the website's database tables, but I guess that is not the case.
At this point, since most of the pages and links are dynamic, how could they be indexed? The same thing applies to "user detail" pages that are accessed via a query string using a "user id=n".
Probably what I'm asking has already been discussed, but some points are still not clear to me.
I would advise using Mod Rewrite rules to make your URLs search engine friendly.
This is very important for Google.
As is a good category structure.
Eg:
domain.com/t-shirts/girls/star-wars-t-shirt/
is far better than
domain.com/products.aspx?product_id=1
Here is some info:
http://msdn.microsoft.com/en-us/library/ms972974.aspx
http://www.wrox.com/WileyCDA/Section/id-305997.html
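As a hedged sketch of the rewriting idea in ASP.NET (the slug-to-ID dictionary, the sample slug, and the products.aspx target are placeholders; a real site would load the mapping from the database):

    // Global.asax.cs -- hypothetical sketch of serving friendly URLs such as
    // /t-shirts/girls/star-wars-t-shirt/ from the existing products.aspx page.
    using System;
    using System.Collections.Generic;
    using System.Web;

    public class Global : HttpApplication
    {
        // In a real site this mapping would come from the database,
        // e.g. a "slug" column on the products table.
        private static readonly Dictionary<string, int> ProductIdBySlug =
            new Dictionary<string, int>(StringComparer.OrdinalIgnoreCase)
            {
                { "/t-shirts/girls/star-wars-t-shirt/", 1 }
            };

        protected void Application_BeginRequest(object sender, EventArgs e)
        {
            int productId;
            if (ProductIdBySlug.TryGetValue(Request.Url.AbsolutePath, out productId))
            {
                // The visitor (and Google) sees the friendly URL, while the
                // existing dynamic page actually renders the content.
                Context.RewritePath("~/products.aspx", string.Empty, "product_id=" + productId);
            }
        }
    }
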
To answer your questions:
Dealing with a small number of products could be fine but what if I had more than 1000 products, how could every product be crawled?
If you have a good sitemap / menu structure etc, it is likely that Google will crawl all your pages.
How does the google spider/bot know that a product_id with a hypothetical number of 767 exists?
Via crawling your site, via your sitemap, via the menu system on the site, etc. However, always remember: Google is not psychic - it cannot find a page unless you tell it how to find it / link to it.
I've been googling this, still I can't understand how pages that have absolutely no reference in the site or external sites can be crawled? If this is possible the spider should know how to read the website's database tables, but I guess that this is not the case.
If you have no reference - you are doing something wrong. Improve your site structure.
At this point since most of the pages and links are dynamic how could they be indexed, the same thing applies to “user detail” pages that are accessed via query string using a “user id=n”?
Nothing wrong with a dynamic URL per se - but again I would recommend implementing search-engine-friendly URLs via Mod Rewrite or similar - see the above resources.
Good luck,
Colin
Modern systems optimize for SEO by allowing either custom or automatically generated URLs that remap to your ID-based URL pattern. This URL style allows a fully custom, word-for-word product title or keyword/description, which carries more weight than a random ID number in a URL.
To ensure all individual pages are indexed, you generally benefit most from submitting or making available a sitemap XML. More info from Google on generating one here:
https://code.google.com/p/googlesitemapgenerator/
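If you end up generating the sitemap yourself, a minimal sketch in C# might look like this (SitemapBuilder and the way product IDs are obtained are made up for the example; only the sitemap XML format itself is standard):

    // SitemapBuilder.cs -- minimal sketch of writing a sitemap.xml listing
    // every product page, so crawlers can find IDs that are never linked anywhere.
    using System.Collections.Generic;
    using System.Xml.Linq;

    public static class SitemapBuilder
    {
        public static void Write(string outputPath, IEnumerable<int> productIds)
        {
            XNamespace ns = "http://www.sitemaps.org/schemas/sitemap/0.9";
            var urlset = new XElement(ns + "urlset");

            foreach (int id in productIds)
            {
                urlset.Add(new XElement(ns + "url",
                    new XElement(ns + "loc",
                        "http://www.myecommence.com/products.aspx?product_id=" + id)));
            }

            new XDocument(urlset).Save(outputPath);
        }
    }

    // Usage (productIds would come from your database):
    // SitemapBuilder.Write(Server.MapPath("~/sitemap.xml"), productIds);
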
Hope that gets you going in the right direction!

What's the best practice for using subdomains, achieving SEO, keeping the system scalable, and isolating the applications?

We are developing a website quite similar to ebay.com, and in order to upgrade/maintain it without much effort we decided to split/isolate different parts of the website like eBay does (e.g. the item page/application will be served from cgi.domain.com, the sign-in application from signin.domain.com, the shopping cart application from offer.domain.com, search features from search.domain.com, etc.). Each major application/function of the site will be deployed on a different server. Another reason for isolating the applications is security.
I also need to mention that one application is deployed on Google App Engine.
However, we received some "warnings" that this will affect SEO dramatically, so I have 2 questions :)
Is it true? Do subdomains decrease the PageRank of the website?
If it's true, how can we sort this out? Should we use a separate server acting as a routing/proxy layer that does a kind of rewrite (e.g. search.domain.com => domain.com/search, etc.)?
What's the best practice to achieve simplicity/isolation of the applications + SEO + security + scalability in a website?
Thank you in advance!
Search engines no longer see sub-domains as separate sites. This changed around September 2011. Whether your link juice carries over is another thing, and it's not really explained (as of yet). Here is a reference: http://searchengineland.com/new-google-classifies-subdomains-as-internal-links-within-webmaster-tools-91401
No, multiple subdomains will not decrease the page rank of the main website. However, they don't contribute to page rank either (because the search engines see them as separate sites).
That said, for the sort of site that you're working on, that looks like it would be OK. For example, the only thing you really want indexed is product listings anyway - you don't need it to index login, search results and stuff like that. Also, since external websites aren't going to link to your login pages or search results either (I assume they'll only link to product pages as well), you don't really care about those other sites contributing to your page rank.
Personally, I think people put too much focus on making sites "SEO" friendly. As long as the site is user-friendly then SEO-friendly will follow as well.

Is a deep directory structure a bad thing for SEO?

A friend of mine told me that the company he works at is redoing the SEO for their large website. Large == both the number of pages and the traffic they get a day.
Currently they have a (quote) deeply nested site, which I'm assuming means /x/y/z/a/b/c.. or something. I also know it's very unRESTful from some of the pages I've also seen -> e.g. foo.blah?a=1&b=2&c=3......z=24 (yep, lots of crap in the URL).
So updating their SEO sounds like a much-needed thing.
But they are going flat. I mean -> totally flat. E.g. /foo-bar-pew-pew-abc-article1
This scares the bollocks out of me.
From what he said (if I understood him right), each - character doesn't mean a new hierarchical level.
So /foo-bar-pew-pew-abc-article1 does not mean /foo/bar/pew/pew/abc/article1
A space could be replaced by a -. A + represents a space, but only if the two words are supposed to be one word (whatever that means). I.e. Jean-Luke will be jean+luke, but if I had a subject like 'hello world', that would be listed as hello-world.
Excuse me while I blow my head up.
Is this just me, or is it totally silly to go completely flat? To me, I was under the impression that when SEO people say keep it as flat as possible, they are trying to say keep it to 1 or 2 levels; 4 is the utter max.
Is it just me, or is a flat hierarchy a 'really really good thing' for SEO ... for MEDIUM and LARGE sites (lots of resources, not necessarily lots of hits/page views)?
Well, let's take a step back and look at what SEO is supposed to accomplish; it's meant to help a search engine identify quality, relevant content for users based on key phrases and terms.
Take, for example, the following blog URLs:
* http://blog.example.com/articles/2010/01/20/how-to-improve-seo/
* http://blog.example.com/how-to-improve-seo/
Yes, one is deep and the other is flat; but the URL structure is important for two reasons:
* URL terms and phrases are high-value targets for determining the relevance of a page by a search engine
* A confusing URL may immediately force a user to skip your link in the search results
Let's face it: Google and other search engines can associate even the worst URLs with relevant content.
Take, for example, a search for "sears kenmore white refrigerator" in Google: http://www.google.com/search?q=sears+kenmore+white+refrigerator&ie=utf-8&oe=utf-8&aq=t&rls=org.mozilla:en-US:official&client=firefox-a.
Notice the top hit? The URL is http://www.sears.com/shc/s/p_10153_12605_04665802000P, and yet Google replaces the lousy URL with www.sears.com › Refrigerators › Top Freezers. (Granted, 2 results down is the true URL.)
If your goal for SEO is optimized organic relevance, then I would wholeheartedly recommend generating either key/value pairs in the URL, like www.sears.com/category/refrigerators/company/kenmore (meh), or phrase-like URLs like www.sears.com/kenmore/refrigerators/modelNumber. You want to align your URLs with the user's search terms and phrases to maximize your effort.
In the end, if you offer valuable content and you structure your content and site properly, the search engines will accurately gather it. You just need to help them realize how specific and authoritative your content is. :)
Generally, the less navigation needed to reach content, the better. But with a logical breadcrumb strategy and well-thought-out deep linking, the excess directory depth can be managed so that it doesn't hurt SEO and visibility in search.
Remember that Google is trying to return the most relevant link and the best user experience, so if your site has 3 URLs coming up for the same search term and it takes 2 or 3 exits to find the appropriate content, Google will read that as bad and start lowering all of your URLs in the SERPs.
You have to consider how visitors will find your content - not just how they navigate it. Think content discovery, not just navigation.
HTH
Flat or deeply nested really shouldn't affect the SEO. The key part is how those individual pages are linked to; that will determine how they get ranked. I did write some basic stuff on this years ago (see here), but essentially, as long as pages are not buried deeply within a site, i.e. it doesn't take several clicks (or links, from Google's perspective) to reach them, they should rank fairly much the same in either case. Google used to put a lot more weight on keywords in URLs, but this has been scaled back in more recent algorithm changes. It helps to have keywords there, but it's no longer the be-all and end-all.
What you/they will need to consider are the following two important points:
1) How will the URL structure be perceived by the users of the site? Will they be able to easily navigate the site and not have to rely on the URL structure in the address bar?
2) In making navigational changes such as this, it's vitally important to set up redirects from old URLs. Google hates 404s, and they should either return 410 (Gone) HTTP responses for pages that are no longer valid or 301 HTTP responses for permanent redirects (with the new URL); a rough sketch of both follows below.
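A minimal illustration of those two responses in ASP.NET (the old and retired paths are invented for the example; the real mapping would come from your own list of moved and removed pages):

    // Global.asax.cs -- hypothetical sketch of answering old URLs with
    // 301 (moved) or 410 (gone) after a restructure.
    using System;
    using System.Web;

    public class Global : HttpApplication
    {
        protected void Application_BeginRequest(object sender, EventArgs e)
        {
            string path = Request.Url.AbsolutePath.ToLowerInvariant();

            if (path == "/old-page.html")
            {
                // Permanent redirect: tells crawlers the content has a new home.
                Response.StatusCode = 301;
                Response.RedirectLocation = "/new/structured/page";
                Response.End();
            }
            else if (path == "/discontinued-page.html")
            {
                // Gone: tells crawlers the page was removed on purpose.
                Response.StatusCode = 410;
                Response.End();
            }
        }
    }
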
In making any large changes such as this you can save loads of time getting the site indexed successfully by utilising XML sitemaps and Google's webmaster console.

Does a "blog" sub-domain help the pagerank of your main site?

I have my main application site https://drchrono.com, and I have a blog sub-domain under http://blog.drchrono.com. I was told by some bloggers that the blog sub-domain of your site helps the pagerank of your main site. Does traffic to your blog sub-domain help the Google Pagerank of your site and count as traffic to your main site?
I don't think Google gives any special treatment to sub domains named "blog". If they did, that would be a wide open door for abuse, and they're smart enough to realize that.
At one time, I think there were advantages to putting your blog on a separate subdomain though. Links from your blog to your main site could help with your main site's page rank if your blog has a decent page rank.
However, it seems like that has changed. Here's an interesting post about setting up blog subdomains vs. folders. It seems like they are actually treated the same by Google now, although nobody but Google really knows for sure how they treat them.
With regard to traffic, your Google ranking is only incidentally related to the amount of traffic your site gets. Google rankings are based primarily on content and number & quality of incoming links, not on how much traffic you get. Which makes sense since Google really has no way of knowing how much traffic you get to your site other than perhaps the traffic they send there via Google searches.
Not directly, but...
I do not know if "blog" specifically helps the pagerank of your site in some special way - Google guards its PageRank secrets fairly well. If you really wanted to find out, you would create two sites with roughly the same content, but one with "blog" in the domain name and one without. Index them and see if the PageRank results are different. My gut instinct is - no.
It is known that Google indexes the name of the site, and it improves your chances of getting listed in the search results if the site name corresponds to the search terms. So, it would be reasonable to assume (unless Google specifically removed indexing of the word "blog") that when someone searches for a main search term and "blog", the chances of your site showing up would be slightly higher.
For example, it should help searches for: drchrono blog.
By the way, google changes its algorithms all the time, so this is just speculation.
According to an article on hubspot.com:
The search engines are treating subdomains more and more as just portions of the main website, so the SEO value of your blog is going to add to your main website domain. If you want your blog to be seen as part of your company, you should do it this way (or the next way).
However, they go on to say there isn't a big difference between blog.domain.com and domain.com/blog.
You can read the full article here: hubspot article on blog domains
One thing using a sub-domain will help is your site's Alexa rank.
Alexa gives rank to all pages using your main domain. If you use the Alexa Toolbar you'll see all subdomains have the same rank as your main page. So hits to your subs will count toward your site's Alexa rank.
I don't think the subdomain will add anything to the PageRank; however, it might make content easier to find than a folder would.
Let's say you search for something on Google; from your page, I could search for
site:blog.drchrono.com someTopic or articleImLookingFor
Since it is a subdomain, I would guess it counts as traffic to the main site.
Personally, if I was to setup a blog, I would go for the subdomain and would probably set up a redirect from
http://drchrono.com/blog to
http://blog.drchrono.com
blog.domain.tld and www.domain.tld are not treated as unrelated sites, assuming they're handled by the same final NS authority. It has never been clear to me whether pages are ranked entirely independently or whether the reputation of a domain, and hence its subdomains, figures into it beyond just being linked to.
But if I read your question differently, I'd say there's no difference in doing either:
I've tried setting up pages at both photos.domain.tld/stuffAboutPhotos and www.domain.tld/photos/stuffAboutPhotos for a month at a time. I found no noticeable difference between the search engine referral rates.
But then it's actually hard to do this independently of other factors.
Therefore I conclude that, despite the human logic indicating that the domain is more important, there is no advantage to putting a keyword in the domain as opposed to the rest of the URL, except to be sure it's clearly delimited (use a slash, dash, or underscore in the rest of the URL).
If Google has a shortlist of keywords that do rank better in a domain name than in the rest of the url, they're definitely not sharing it with anyone not wearing a Google campus dampened exploding collar.
Google treats a subdomain as a domain. If this weren't true, then all those Blogspot blogs would have had higher SERPs.
With subdomains it is a bit easier, as Google "knows" it is a "separate" site. With sub-directories it is tricky, though with sub-domains it is the same. Google would rank these anything between PR0 and PR3 in the past year; currently:
PR1: of-cour.se
Cheers!
Not really. Blogs do some nice things for your site's SEO, but if the blog is inside the site it doesn't work the same way.
A better option is have a completely separate domain that contains the blog (something like drchronoblog.com), and have lots of links from the blog site to the main site.
That way search engines see the links but do not make the connection between the blog and the main site, and thus it makes your page rank better.
It won't give your site higher priority just because you have a blog subdomain.
But I'm sure more people will find your site if they search for blogs...
And therefore more traffic, more visits through the search engines, and so on...
So I'd say yes :)
Since PageRank deals with rank in the search engine, let's make a little test:
https://www.google.com/search?q=blog
you may see that
example.com/blog
ranks higher than
blog.example.com
This is roughly the same for whatever domains you try.
However, if it were possible, I would fight more to get blog.wordpress.com, as it is treated by any search engine as my own profile, rather than a folder named wordpress.com/blog that for sure still belongs to wordpress.com.
The only way a blog can help you as far as SEO goes depends on the content of your blog. Just having a blog isn't enough.

Google Page Rank - New Domain / Link Structure Migration

I've been tasked with re-organizing a pure HTML site into a CMS. If all goes well, the new site will eventually become the main URL, and the old domain will be phased out. The old domain has a decent enough page rank, and the company wishes to mitigate any loss of page rank for that. In looking over the options available, I've discovered a few things:
It's better to use a 301 redirect when you're ready to make the switch (source).
The current site does not have a sitemap, so adding one and submitting it may help their future page rank.
I'll need to suggest to them that they contact people currently linking to them to update their links.
The process for regaining an old page rank takes a while, so plan on rebuilding links while we see if the new site is flexible enough to warrant switching over completely.
My question is: as a result of a move to a CMS-driven site, the links to various pages will change to accommodate the new structure. Will this be an issue for trying to maintain (or improve) the current page rank? What sort of methods are available to mitigate the issue of changing individual page URLs? Is there a preferable method beyond mapping individual pages to their new locations with 301 redirects? (The site has literally hundreds of pages, ugh...)
ex.
http://domain.com/Messy_HTML_page_with_little_categorization.html ->
http://newdomain.com/nice/structured/pages.php
I realize this isn't strictly a programming question; however, I felt the information could be useful to developers who are tasked with handling this sort of thing in addition to development of the site.
If you really, truly want to ensure that page rank is not lost, you will want to replace the old content with something that performs a proper 301 redirect to the new location. With a 301 redirect the search spiders will know that the content has moved, and the page rank typically carries over. It also helps external links.
However, the downside is that after a certain period of time you just have to get rid of the old domains.
You can make a handler for HTML files and map the old pages to the new structure with a 301 redirect.
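A rough sketch of that handler idea, using the example URLs from the question above (the class name and mapping table are made up; in practice the old-to-new map would be loaded from configuration or a database):

    // LegacyHtmlRedirectHandler.cs -- hypothetical handler for legacy .html
    // addresses, answering each with a 301 to its new CMS location.
    using System;
    using System.Collections.Generic;
    using System.Web;

    public class LegacyHtmlRedirectHandler : IHttpHandler
    {
        // Invented mapping for the example; load the real one from config or a database.
        private static readonly Dictionary<string, string> NewLocationByOldPath =
            new Dictionary<string, string>(StringComparer.OrdinalIgnoreCase)
            {
                { "/Messy_HTML_page_with_little_categorization.html",
                  "http://newdomain.com/nice/structured/pages.php" }
            };

        public bool IsReusable { get { return true; } }

        public void ProcessRequest(HttpContext context)
        {
            string target;
            if (NewLocationByOldPath.TryGetValue(context.Request.Url.AbsolutePath, out target))
            {
                context.Response.StatusCode = 301;          // Moved Permanently
                context.Response.RedirectLocation = target; // new URL for crawlers and browsers
            }
            else
            {
                context.Response.StatusCode = 404;          // unknown legacy page
            }
            context.Response.End();
        }
    }

The handler would then be registered for *.html in web.config, and IIS configured to route .html requests through ASP.NET so the handler actually sees them.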