Can I have SEO-friendly web pages with SharePoint 2010 for Internet Sites?

I'm just starting out with SharePoint 2010 for Internet Sites, but from what I've seen both in the app and in example sites, the format of the URL looks pretty fixed.
For example, most reference sites that I've seen have their home page at '/pages/default.aspx'. I'd like to have something like '/home'. Is that possible? If so, is it fairly simple to do, or is it pretty involved? The reference sites I saw are very good-looking sites; I'm surprised they have such ugly URLs.
Thanks in advance for any help!

Check this post: http://blog.mastykarz.nl/friendly-urls-sharepoint-site-4-steps-iis7-url-rewrite-module/
There is no OOTB feature to do this. We also write custom HTTP modules for friendly URLs.
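For readers who go the custom-module route, here is a minimal sketch of what such an HTTP module can look like. It assumes the IIS7 integrated pipeline, and the '/home' vanity path and '/Pages/default.aspx' target are placeholders for your own site structure, not anything SharePoint prescribes.

```csharp
using System;
using System.Web;

// Minimal sketch: rewrite a friendly vanity URL to the underlying
// SharePoint publishing page before the request is processed further.
// Register the module in web.config under system.webServer/modules.
public class FriendlyUrlModule : IHttpModule
{
    public void Init(HttpApplication application)
    {
        application.BeginRequest += OnBeginRequest;
    }

    private static void OnBeginRequest(object sender, EventArgs e)
    {
        HttpContext context = ((HttpApplication)sender).Context;

        // Placeholder mapping: '/home' -> the real publishing page.
        if (string.Equals(context.Request.Path, "/home",
                          StringComparison.OrdinalIgnoreCase))
        {
            // Server-side rewrite: the browser keeps showing '/home'.
            context.RewritePath("/Pages/default.aspx");
        }
    }

    public void Dispose() { }
}
```

In a real deployment you would drive the mappings from configuration or a SharePoint list rather than hard-coding them.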

SharePoint is generally pretty good with URLs, and 2010 has gotten a little better still. Create your top-level sites, e.g. Home | Products | Pricing. Creating a site under any of these will then give you a URL like yourdomain.com/Products/Pages/whatever-my-page-is-called.aspx

Related

How can I replace the URL on search engines with the business name?

I'm building a site and I've noticed that when I search my site's name on Google, unlike other sites, my website's URL is displayed instead of the business name.
Can somebody please help me figure this out?
I've built my site using WordPress, and my SEO tool is Yoast.
I think one of the quickest ways would be to update and verify your business profile on Google.
You can find out more here:
https://support.google.com/business/answer/3039617?hl=en#zippy=%2Cedit-your-own-business-profile%2Cbusiness-name

Track how often a link was clicked

I currently run a website where I promote different coffees from pubs in my city. On my website I have links to the different coffees.
I have recently seen some of these links being shared on Facebook and other social networks.
So I was wondering: is it somehow possible to track how often one of these links is clicked?
I have tried using redirects through my site, but then Facebook uses my pictures in the previews, which I don't want because it is misleading.
I have seen that this works with Bitly, so it must be possible somehow?
There are of course various services providing this, but it would be nice if it worked without relying on any third-party services.
So basically I am looking for a solution that will tell me how often a link originating from my site was clicked on Facebook, Google+, or any other forum.
There definitely is. Try looking into Google Analytics; it will show you so much data from your personal websites and links that it can blow your mind! Here is the link:
Google Analytics helps you analyze visitor traffic and paint a complete picture of your audience and their needs. Track the routes people take to reach you and the devices they use to get there with reporting tools like Traffic Sources. Learn what people are looking for and what they like with In-Page Analytics. Then tailor your marketing and site content for maximum impact.
You can even get a free package to use!
Hope this helps!
Yes, you have plenty of analytics options.
Something as straightforward as Google Analytics, for example.
If you are using cPanel on your host's server, you even have options such as AWStats, which will also provide this information.
If all else fails, you can even mine the data stored in your Apache/Nginx access logs, as sketched below.
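To give a flavor of the log-mining route, here is a small sketch that tallies clicks per external referrer from an access log in Apache's "combined" format (Nginx's default format is close enough for the same pattern to work). The access.log path is an assumption.

```csharp
using System;
using System.Collections.Generic;
using System.IO;
using System.Linq;
using System.Text.RegularExpressions;

// Sketch: count how many hits each external referrer sent, based on
// access log lines in the common "combined" format, e.g.
// 1.2.3.4 - - [10/Oct/2023:13:55:36 +0000] "GET /coffee1 HTTP/1.1" 200 512 "http://facebook.com/..." "Mozilla/..."
class ReferrerCounter
{
    static void Main()
    {
        // Captures the request path, then skips the status and size
        // fields to reach the quoted referrer field.
        var pattern = new Regex(@"""[A-Z]+ (?<path>\S+)[^""]*"" \d+ [\d-]+ ""(?<referrer>[^""]*)""");
        var counts = new Dictionary<string, int>();

        foreach (string entry in File.ReadLines("access.log"))
        {
            Match m = pattern.Match(entry);
            if (!m.Success) continue;

            string referrer = m.Groups["referrer"].Value;
            if (referrer.Length == 0 || referrer == "-") continue; // direct hit

            counts[referrer] = counts.ContainsKey(referrer) ? counts[referrer] + 1 : 1;
        }

        foreach (var pair in counts.OrderByDescending(p => p.Value))
            Console.WriteLine("{0,6}  {1}", pair.Value, pair.Key);
    }
}
```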
Since you have amended your question, you might want to check out this tool. It is not Google. :)
It is called ClickMeter; it performs link tracking, provides click reports, etc.
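And if you want it to run entirely without outside services, as the question asks, a tiny self-hosted redirect endpoint is the usual do-it-yourself answer. Below is a minimal ASP.NET sketch; the /go.ashx handler name, the url query parameter, and the clicks.log path are all illustrative assumptions. Each promoted link points at the handler, which logs the click and then redirects the visitor.

```csharp
using System;
using System.IO;
using System.Web;

// Minimal self-hosted click tracker (sketch). A tracked link looks like:
//   http://yourdomain.com/go.ashx?url=http%3A%2F%2Fpub.example%2Fcoffee1
// The handler records the click, then sends the visitor on.
public class ClickTrackerHandler : IHttpHandler
{
    public void ProcessRequest(HttpContext context)
    {
        string target = context.Request.QueryString["url"];
        if (string.IsNullOrEmpty(target))
        {
            context.Response.StatusCode = 400;
            return;
        }

        // In production, validate 'target' against your own list of
        // coffee URLs so this doesn't become an open redirect.
        string logPath = context.Server.MapPath("~/App_Data/clicks.log");
        string line = string.Format("{0:o}\t{1}\t{2}",
            DateTime.UtcNow, target, context.Request.UrlReferrer);
        File.AppendAllText(logPath, line + Environment.NewLine);

        // A plain 302; Facebook's crawler should follow it to the target,
        // so previews ought to use the target page's pictures, not yours.
        context.Response.Redirect(target, false);
    }

    public bool IsReusable { get { return true; } }
}
```

Counting the lines in clicks.log per target URL then gives you the per-link totals.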

How do I make a VB.NET application that can rip websites from Google?

I'm trying to make a VB.NET application that can take some text in (via a TextBox), search for that text on Google, and then take the resulting URLs from Google.
So my questions are:
How do I make it search Google?
How do I make it take the found URLs from Google?
Any ideas expressed through code would be appreciated. I know I somehow have to rip the URLs from Google, but how?
It sounds like you want to use the Google search engine but keep everything inside your application. In that case, Google provides an API that will let you do this without resorting to web scraping or other hacks. You can read more about it and see examples (in .NET, usually C#) here:
http://code.google.com/p/google-api-for-dotnet/
Here is an example of how to do this in your code:
http://www.c-sharpcorner.com/UploadFile/mem_1910/1st08162006033511AM/1st.aspx
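If those links have gone stale (the old Google Web Search API behind that wrapper has since been retired), the same idea works against Google's Custom Search JSON API. Here is a minimal C# sketch; YOUR_API_KEY and YOUR_CX are placeholders for the API key and Programmable Search Engine ID you create yourself, and the query text is just an example.

```csharp
using System;
using System.Net.Http;
using System.Text.Json;
using System.Threading.Tasks;

// Sketch: ask Google's Custom Search JSON API for results and print the
// URLs. YOUR_API_KEY and YOUR_CX are placeholders; create them in the
// Google Cloud console / Programmable Search Engine control panel.
class GoogleSearchSketch
{
    static async Task Main()
    {
        string query = Uri.EscapeDataString("sharepoint friendly urls");
        string url = "https://www.googleapis.com/customsearch/v1"
                   + "?key=YOUR_API_KEY&cx=YOUR_CX&q=" + query;

        using var client = new HttpClient();
        string json = await client.GetStringAsync(url);

        using JsonDocument doc = JsonDocument.Parse(json);
        // Results live in the "items" array; each item's URL is "link".
        if (doc.RootElement.TryGetProperty("items", out JsonElement items))
        {
            foreach (JsonElement item in items.EnumerateArray())
                Console.WriteLine(item.GetProperty("link").GetString());
        }
    }
}
```

The same calls work from VB.NET, since HttpClient and JsonDocument are ordinary .NET base-class types; only the surface syntax differs.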

Is this a blackhat SEO technique?

I have a site that has been developed completely in Flash. Now the site owners do not want to shift to a more text/HTML-based site, so I am planning to create an alternative HTML/text-based site that the Googlebot will get redirected to (by checking the user agent). My question is: is this officially allowed by Google?
If not, then how come there are many subscription-based sites that display a different set of data to Google compared to their users? Is that allowed?
Thank you very much.
I've dealt with this exact scenario for a large ecommerce site and Google essentially ignored the site. Google considers it cloaking and addresses it directly here and says:
Cloaking refers to the practice of presenting different content or URLs to users and search engines. Serving up different results based on user agent may cause your site to be perceived as deceptive and removed from the Google index.
Instead, create an ADA-compliant version of the website so that users with screen readers and vision aids can use your web site. As long as there is a link from your home page to your ADA-compliant pages, Google will index them.
The official advice seems to be: offer a visible link to a non-Flash version of the site. Fooling the Googlebot is a surefire way to get in trouble. And remember, Google results will link to the matching page, so don't create useless results.
Google already indexes Flash content, so my suggestion would be to check how your site is being indexed. Maybe you don't have to do anything.
I don't think showing an alternate version of the site is good from a Google perspective.
If you serve up your page at the exact same address, then you're probably fine. For example, if you show 'http://www.somesite.com/' but direct the Googlebot to 'http://www.somesite.com/alt.htm', then Google might send search users to alt.htm. You don't want that, right?
This is called cloaking. I'm not sure exactly what the penalties are, but it is certainly not whitehat. I'm pretty sure Google is working on a way to crawl Flash now, so it might not even be a concern.
I'm assuming you're not really doing a redirect but rather a PHP include or something similar, so it shows up as the same page. If you're actually redirecting, then Google will just index the other page like normal.
Some sites offer a different level of content: they LIMIT the content rather than offering alternative or additional content. This is generally done so that unrelated things don't get indexed.

How to find inbound links to a given URL on the fly?

Technorati's got their Cosmos API, which works fairly well but limits you to noncommercial use and no more than 500 queries a day.
Yahoo's got a Site Explorer InLink Data API, but it defines the task very literally, returning links from sidebar widgets in blogs rather than just links from inside blog content.
Is there any other alternative for tracking who's linking to a given URL (think of the discussion links that run below stories on Techmeme.com)? Or will I have to roll my own?
Well, it's not an API, but if you google (for example): "link:nytimes.com", the search results that come back show inbound links to that site.
I haven't tried to implement what you want yet, but the Google search API almost certainly has that functionality built in.
Is this for links to URLs under your control?
If so, you could whip up something quick that logs entries from the Referer HTTP header.
If you wanted to do this for an entire web site without altering application code, you could implement it as an ISAPI filter or the equivalent for your web server of choice.
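For an IIS/ASP.NET site, the managed equivalent of that ISAPI filter is an HTTP module, and the referrer-logging idea fits in a few lines. A minimal sketch, with the log path and module name as illustrative assumptions:

```csharp
using System;
using System.IO;
using System.Web;

// Sketch: log the Referer header of incoming requests so you can later
// count which external pages link (and send traffic) to your URLs.
// Registered in web.config; no application code needs to change.
public class ReferrerLogModule : IHttpModule
{
    public void Init(HttpApplication application)
    {
        application.BeginRequest += delegate(object sender, EventArgs e)
        {
            HttpContext context = ((HttpApplication)sender).Context;
            Uri referrer = context.Request.UrlReferrer;

            // Only record referrals coming from other hosts.
            if (referrer != null && referrer.Host != context.Request.Url.Host)
            {
                string logPath = context.Server.MapPath("~/App_Data/referrers.log");
                File.AppendAllText(logPath, string.Format("{0:o}\t{1}\t{2}{3}",
                    DateTime.UtcNow, context.Request.RawUrl, referrer,
                    Environment.NewLine));
            }
        };
    }

    public void Dispose() { }
}
```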
Information available publicly from web crawlers is always going to be incomplete and unreliable (not that my solution isn't...).