domain.com/blog/How-To-Code/3 (page 3)
domain.com/user/alicejohnson/comments
OR
domain.com/How-To-Code/3
domain.com/alicejohnson/comments
Facebook and Quora do it the 2nd way (http://www.quora.com/Swimming/Can-one-swim-from-New-Zealand-to-Australia): they eliminate the noun and go straight to the object.
Stack Overflow does it the first way: What is the correct way to do REST URLs?
Which should I do?
Most importantly, how does this affect SEO?
Also, if I do the 2nd version, how do I go about writing the "router" for that?
Perhaps think of 'blog' and 'user' above as namespaces. If there are multiple uses for How-To-Code, you might have to put something behind it to disambiguate.
Some people tend to be pedantic about "proper REST", but I try to avoid that. I would design your URL scheme so that it fits your needs, works well with your tools, and allows you to simply paste URLs into a browser to test your code.
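To answer the router question directly: here is a minimal sketch of how the prefixless scheme could be routed, assuming Flask as the framework (my choice, not anything from the question; USERS and POSTS are made-up stand-ins for whatever lookups your data layer provides):

    # Routing the 2nd (prefixless) scheme: every catch-all route must
    # disambiguate by looking the captured name up.
    from flask import Flask, abort

    app = Flask(__name__)

    USERS = {"alicejohnson"}     # hypothetical registered user names
    POSTS = {"How-To-Code"}      # hypothetical blog post slugs

    @app.route("/<name>/comments")
    def comments(name):
        # /alicejohnson/comments -- only valid if the name is a real user
        if name not in USERS:
            abort(404)
        return f"comments by {name}"

    @app.route("/<slug>/<int:page>")
    def post(slug, page):
        # /How-To-Code/3 -- only valid if the slug is a real post
        if slug not in POSTS:
            abort(404)
        return f"page {page} of {slug}"

The price of dropping the noun is the namespace problem mentioned above: as soon as two resource types share the same URL shape (say, a bare /<name> page for both users and posts), one route has to check both tables and pick a winner.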
Related
I just had a quick question about URL structures. Out of these structures, which is more commonly used, and which should be used as best practice? If you have any other ways that are even better, that would be appreciated.
Create all "add"-type pages with a prefix of add- followed by the rest:
http://example.com/add-account
or create a folder for all adding functionality:
http://example.com/add/account
From an SEO point of view, I would think that add-account would be nice. But as Joël said, if you are focusing on a sane URL structure, /account/add would suffice, and I do not think it would be any worse for SEO.
It's not that important for an account page anyway. If it were a clothes shop, I would recommend example.com/products/women/dresses/red-dress-with-flowers as the URL. Not women-dresses-red-dress-with-flowers :)
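For what it's worth, the folder-style structure maps directly onto nested path segments in most routers; a minimal sketch, again assuming Flask (the route and handler here are invented for illustration):

    # Folder-style URLs map naturally onto nested path segments.
    from flask import Flask

    app = Flask(__name__)

    @app.route("/products/<category>/<subcategory>/<slug>")
    def product(category, subcategory, slug):
        # e.g. /products/women/dresses/red-dress-with-flowers
        return f"{slug} ({category}/{subcategory})"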
Can REST-like URLs be used in a Seaside application while maintaining all references to continuations? That is, all the good things of Seaside, but with pure indexable URLs.
I am aware of the WARestfulComponentFilter in Seaside-REST, but if I start there, will I still be able to use continuations, call, answer, etc.? Will it be worthwhile to give it a try? I just want to hear opinions.
This depends on what you want to do. If you do not want to see the Seaside session and continuation parameters in the browser's location bar, then this is difficult to achieve completely. But it certainly is possible to build applications that produce indexable urls in Seaside. Perhaps the best place to look at an example is the source code of the Pier CMS.
Whether it is possible to keep "all good things of Seaside" but use "pure indexable urls" depends on your app and what is in your session state. The session and continuation parameters of Seaside reference the actual session state on the server. If you want a URL that references exactly the same thing (but is clean and indexable), you will need to communicate the entire state in the URL. However, in most cases, you will not want to encode the user's session state in there. So this is something you need to do yourself, and Seaside provides the right entry points for that.
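Stripped of everything Seaside-specific, the point is just this (a Python sketch of the principle only, not Seaside code; the paths and fields are invented):

    # Session-style URL: opaque; state lives on the server, keyed by _s/_k.
    #   /app?_s=Ab3dE&_k=xYz9   ->  sessions["Ab3dE"] == {"album": 7, "page": 3}
    #
    # Indexable URL: the URL itself carries the whole view state.
    #   /albums/7/page/3        ->  {"album": 7, "page": 3}

    def parse_indexable(path: str) -> dict:
        """Rebuild the view state purely from the URL, with no server session."""
        _, album, _, page = path.strip("/").split("/")
        return {"album": int(album), "page": int(page)}

    assert parse_indexable("/albums/7/page/3") == {"album": 7, "page": 3}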
Lukas Renggli's presentation explaining RESTful URLs in Seaside will help you understand how to create indexable URLs. Getting rid of the _s (session parameter) can be done in various ways in Seaside 3.1: you can customize the WASessionTrackingStrategy or use one of the predefined ones (for example, one that uses a cookie). Getting rid of the continuation parameter in your app is more difficult: see the email thread on REST URLs in a Seaside app.
Finally, as mentioned in another answer, take a look at Seaside-REST.
In summary: generating indexable URLs is possible (as shown in the Pier CMS), and removing the session parameter from the URL bar is easy as well, but removing the continuation parameter from the browser's URL bar requires manual hacking.
Did you look at the Seaside-REST framework? It should help you. Sadly, I note that it was last updated in 2011, so it is probably no longer maintained.
We all know that showing nonexistent stuff to Google bots is not allowed and will hurt your search positioning, but what about the other way around: showing stuff to visitors that is not displayed to Google bots?
I need to do this because I have photo pages, each with a short title and the photo, along with a textarea containing the embed HTML code. Googlebot is taking the embed code and putting it in the page description on its search results, which is very ugly.
Please advise.
When you start playing with tricks like that, you need to consider several things.
... showing stuff to visitors that is not displayed to Google bots.
That approach is a bit tricky.
You can certainly check User-Agent headers to see whether a visitor is Googlebot, but Google can add any number of new spiders with different User-Agents, which will end up indexing your images anyway. You will have to monitor that constantly (a sketch of what such a check involves follows this list of caveats).
Every code release of your website will have to be tested against the "images and Googlebot" scenario. That will extend the testing phase and the testing cost.
It can also affect future development: all changes will have to be made with the "images and Googlebot" scenario in mind, which can introduce additional constraints into your system.
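To make the first point concrete, this is roughly what such a check amounts to (a Python sketch; the agent list is deliberately incomplete, which is exactly the maintenance problem):

    import socket

    # A naive User-Agent check -- the fragile approach described above.
    # Any new or renamed Google spider slips straight past this list.
    KNOWN_GOOGLE_AGENTS = ("Googlebot", "Googlebot-Image")

    def looks_like_googlebot(user_agent: str) -> bool:
        return any(name in user_agent for name in KNOWN_GOOGLE_AGENTS)

    # The header is also trivially spoofed; Google itself suggests verifying
    # crawlers by reverse DNS (plus a confirming forward lookup) instead.
    def verify_googlebot(ip: str) -> bool:
        try:
            host = socket.gethostbyaddr(ip)[0]
        except socket.herror:
            return False
        return host.endswith((".googlebot.com", ".google.com"))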
Personally, I would choose a slightly different approach:
First of all, review whether you can use any of the methods recommended by Google. Google provides a few helpful pages describing this problem, e.g. Blocking Google or Block or remove pages using a robots.txt file.
If that is not enough, maybe restructuring your HTML would help. Consider using JavaScript to build some of the customer-facing interfaces.
And whatever you do, try to keep it as simple as possible; very complex solutions can turn around and bite you.
It is very difficult to give you good advice without knowledge of your system, constraints, and strategy, but I hope my answer helps you choose a good architecture/solution for your system.
Buddy, you are overthinking this.
Google does not judge you a cheater on the basis of one signal alone; it reviews the site as a whole. As long as your purpose is the user experience and you avoid the common cheating tactics, Google will not consider it cheating.
Just block these pages with robots.txt and you'll be fine. It is not cheating; that's why they came up with that solution in the first place.
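For reference, blocking them takes only a couple of lines; the /photos/ path below is a made-up example, not from the question:

    # robots.txt (the /photos/ path is hypothetical)
    User-agent: *
    Disallow: /photos/

Note that this keeps the pages out of the index entirely; if you still want them indexed, the gentler fix is to give each page a proper meta description, so Googlebot has something better than the embed code to use as the snippet.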
I have found that one of the keywords I would like to be found for in the search engines has a domain available that I can register. It is not a good name for the overall project, and so not for the whole site, but it is a good definition or explanation of one part of it. Is it a good idea to register something like this just to point it at a section of the site? I mean, is this effective from an SEO point of view? But most importantly, is it good practice?
Interesting question. In terms of SEO, this is NOT good practice, and Google can penalize your website (so I would not recommend it), but...
...if this word is really easy to remember and you think users will type it to reach your site without needing to search for it, it may be "acceptable", because you won't lose visits.
Anyway, you should avoid black-hat techniques.
Later versions of the Google updates "Panda" and "Penguin" discourage this type of technique: naming your domain to match a specific search, like www.healthcaremedicine.com, so that if someone searches for health care medicine products, your website is shown at the top.
I think that is what you mean.
In past years, people named their websites to match search queries, but now it is not recommended. It may take your site to the top for some time, but that will not last long. Your site has to provide what it promises its visitors, so that they at least spend some time on it.
An Exact Match Domain (EMD) is what you are referring to. As answered above, it is not advisable, but if you find it useful you can register it and go ahead.
How to keep yourself clear of penalties:
Do not stuff keywords (KWs) into your titles and descriptions, as your domain already contains the KW.
When doing off-page SEO, do not use your main KW as anchor text; instead, use the URL itself as anchor text, plus some generic anchor text like "click here", "more info", or "read more".
These can save you from a penalty. I still see a lot of EMDs ranking, just by being careful with the usage of KWs and anchor text.
What is considered to be best practice for URL structuring these days?
For some reason I thought that you only include an extension at the end of a URL once you get down to the 'lowest' part of your hierarchy, e.g.
/category/sub-category/product.html
then all category URLs would be:
/category/sub-category/
rather than including an extension at the end because there is still further to go down the structure.
Looking forward to your thoughts.
Andy.
EDIT
Just for clarification purposes: I'm looking at this from an ecommerce perspective.
Your question is not very clear, but I'll reply as I understand it.
As to whether or not to use file extensions: according to Google's Matt Cutts, Google crawls .html, .php, or .asp URLs, but you should keep away from .exe, .dll, and .bin. Those signify largely binary data, so they may be ignored by Googlebot.
Still, when designing SEO-friendly URLs, keep in mind that they should be short and descriptive, so you can use your keywords to rank higher. So, if you have good keywords in your category names, why not let them be visible in the URL?
Make sure you're using static instead of dynamic URLs: they are easier to remember, and they don't change.
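If the pages are really generated by a dynamic script, the usual way to get those static-looking, extensionless URLs is a rewrite rule; here is a minimal Apache mod_rewrite sketch (show.php and its parameters are hypothetical):

    # .htaccess -- map clean URLs onto a hypothetical dynamic script
    RewriteEngine On

    # /category/sub-category/product.html -> a product page
    RewriteRule ^([^/]+)/([^/]+)/([^/]+)\.html$ show.php?cat=$1&sub=$2&product=$3 [L,QSA]

    # /category/sub-category/ -> a category listing (no extension; more levels below)
    RewriteRule ^([^/]+)/([^/]+)/$ show.php?cat=$1&sub=$2 [L,QSA]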