Yesterday, Site on Page 1; Today, Site Gone - seo

I understand how SEO works in general. My site had been holding steadily on page 2 for months. I found time to add contextual links, and the site went straight to page 1 yesterday. Now the site and its URL can't be found at all. I'd have thought that, if anything, it would drop back to page 2.
Any thoughts would be appreciated.
Thanks a lot,
Byron

You haven't given enough detail for anyone to answer your question properly.
Anyway, assuming that you are searching Google for your website, I have a few suggestions.
Check your site's health in Webmaster Tools, and read the basic SEO optimization guides. Enhance your pages with HTML5 markup and schema.org annotations. And have patience.

Related

Could you explain how Googlebot's visit frequency is determined?

Dear fellow developers,
I operate a website for my small research projects.
I've noticed something odd about it recently and would like to ask a question.
Although the site is rarely updated (maybe once or twice a month), Googlebot visits it roughly 10-20 times every day.
Even though there is almost nothing new to crawl and index, the bots keep coming back.
Can anyone tell me why?
How is the frequency of the bots' visits determined?
I'm well aware that Google doesn't publish the technical details of its crawlers, but any educated guess is welcome.
Thanks in advance.

Getting a forum to be indexed under Google's "Discussions" tab

I'm building forum software in node.js called NodeBB. I've been able to get Google to index the pages well enough, but I'm having trouble getting those results to show up under the Discussions tab the way established forum software does.
I found a response to another SO question suggesting rich snippets. I've experimented with them, but with no luck.
It may be interesting to note that another next-gen forum, Discourse, seems to be having a similar problem (e.g. discourse, howtogeek); I'm not sure if they have a solution for it either.
While I am unsure of exactly what Google looks for when it classifies discussions, following the guidelines established by schema.org may help you tailor your site to what Google expects (see the sketch below).
Similar sites like Xenforo use these rich snippets/schema tags, and they are scraped properly... although other forums don't seem to do anything at all and yet get scraped properly anyway.
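For what it's worth, here is a rough sketch of what schema.org markup on a thread page might look like, using the DiscussionForumPosting and Comment types. The property names come from schema.org's published vocabulary, but whether Google's Discussions tab actually keys on them is undocumented, so treat this as an experiment rather than a recipe:

    <article itemscope itemtype="http://schema.org/DiscussionForumPosting">
      <h1 itemprop="headline">Thread title goes here</h1>
      <span itemprop="author" itemscope itemtype="http://schema.org/Person">
        <span itemprop="name">alice</span>
      </span>
      <time itemprop="datePublished" datetime="2014-02-01T10:00:00Z">Feb 1, 2014</time>
      <div itemprop="text">Body of the opening post...</div>
      <!-- each reply marked up as a Comment -->
      <div itemprop="comment" itemscope itemtype="http://schema.org/Comment">
        <span itemprop="author" itemscope itemtype="http://schema.org/Person">
          <span itemprop="name">bob</span>
        </span>
        <div itemprop="text">Body of the reply...</div>
      </div>
    </article>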

Is there a more efficient way than a sitemap to add/force recrawl/remove your website's index entries in Google?

That's pretty much the whole question: is there a more efficient way than the standard sitemap.xml to [add/force recrawl/remove] - i.e. manage - your website's index entries in Google?
I remember reading an article a few years ago by a blogger who said that whenever he published news on his website, the URL would appear in Google's search results almost immediately. I think he was referring to something special - some automatic re-crawling system offered by Google itself? - but I don't remember exactly what, and I'm not sure about it. So I'm asking: am I fooling myself, and is there NO OTHER way to manage index content besides sitemap.xml? I just need to be sure about this.
Thank you.
I don't think you will find the magical "silver bullet" answer you're looking for, but here is some additional information, and some tips, that may help:
Depth of crawl and rate of crawl are directly influenced by PageRank (one of the few things it does influence), so increasing the count and quality of backlinks to your homepage and internal pages will help.
QDF - this Google algorithm factor, "Query Deserves Freshness", has a real impact, and it is one of the core reasons behind the Google Caffeine infrastructure project, which allows much faster discovery of fresh content. This is one of the main reasons that blogs and sites like SE do well: the content is "fresh" and matches the query.
XML sitemaps do help with indexation, but they won't result in better rankings. Use them to help search bots find content that sits deep in your architecture; a minimal example follows.
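For reference, a minimal sitemap.xml following the sitemaps.org protocol looks like the snippet below. The URL is a placeholder, and the lastmod, changefreq and priority elements are optional hints to the crawler:

    <?xml version="1.0" encoding="UTF-8"?>
    <urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">
      <url>
        <loc>http://www.example.com/news/article-123</loc>
        <lastmod>2014-02-01</lastmod>
        <changefreq>daily</changefreq>
        <priority>0.8</priority>
      </url>
    </urlset>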
Pinging services that monitor site changes, such as Ping-o-Matic, can really help push notification of your new content (blogs especially benefit from this) and ensures the search engines become aware of it immediately; a sketch of the mechanism follows.
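Under the hood, a ping is just an XML-RPC call using the de-facto weblogUpdates.ping method, and most blog software already sends it for you. Here is a rough PHP sketch against Ping-o-Matic's public endpoint - the site name and URL are placeholders, and PHP's xmlrpc extension must be installed:

    <?php
    // Encode a weblogUpdates.ping(blogName, blogUrl) request.
    $request = xmlrpc_encode_request('weblogUpdates.ping',
        array('My Site Name', 'http://www.example.com/'));

    // POST it to Ping-o-Matic, which relays it to the services it covers.
    $context = stream_context_create(array('http' => array(
        'method'  => 'POST',
        'header'  => 'Content-Type: text/xml',
        'content' => $request,
    )));
    echo file_get_contents('http://rpc.pingomatic.com/', false, $context);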
Crawl budget - be mindful of wasting a search engine's time on parts of your site that don't change or don't deserve a place in the index. Using robots.txt and the robots meta tag, you can herd the search bots toward the right parts of your site (use these with caution so as not to remove high-value content); examples follow.
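For example, a robots.txt that keeps bots out of areas that don't belong in the index (the paths here are hypothetical):

    User-agent: *
    Disallow: /search/
    Disallow: /printable/

and, on individual pages, the robots meta tag:

    <meta name="robots" content="noindex, follow">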
Many of these topics are covered online, but there are other intrinsic things like navigational structure, internal linking, site architecture etc that also contribute just as much as any "trick" or "device".
Getting many links from good sites to your website will make the Google "spiders" reach your site faster.
Links from social sites like Twitter can also help the crawlers find your site (although Twitter links do not pass "link juice", the spiders still follow them).
One last thing: update your content regularly - think of content as "Google spider food". If the spiders come to your site and find no new food, they won't come back soon; if there is new food every time they come, they will come often. Article directories, for example, get indexed several times a day.

How to Develop a Successful Sitemap

I have been browsing around on the internet researching effective sitemap web pages. I have encountered these two sitemaps and am questioning their effectiveness.
http://www.webanswers.com/sitemap/
http://www.answerbag.com/sitemap/
Are these sitemaps effective?
Jeff Atwood (one of the guys who made this site) wrote a great article on the importance of sitemaps:
"I'm a little aggravated that we have to set up this special file for the Googlebot to do its job properly; it seems to me that web crawlers should be able to spider down our simple paging URL scheme without me giving them an explicit assist.
The good news is that since we set up our sitemaps.xml, every question on Stack Overflow is eminently findable. But when 50% of your traffic comes from one source, perhaps it's best not to ask these kinds of questions."
So yeah - effective for people, or effective for Google?
I would have thought an HTML sitemap should be useful to a human, whereas these two aren't. If you're trying to target a search engine, then a sitemap.xml file that conforms to sitemaps.org would be a better approach. While the HTML approach would work, it's easier to generate an XML file and have your robots.txt point at it.
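Pointing robots.txt at the sitemap is a one-line Sitemap directive, part of the sitemaps.org autodiscovery convention (the URL is a placeholder):

    User-agent: *
    Disallow:

    Sitemap: http://www.example.com/sitemap.xml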

Implementing CAPTCHA after 50% of Article

We are planning to put a large number of business research reports and articles from our intranet onto the Internet. However, we don't want others to copy the content and host it on their own sites.
I read about protection by CAPTCHA and was wondering if this is possible. Readers should be able to read 50% of an article for FREE, after which a CAPTCHA must be solved to read the rest of the article [this way we make life a little harder for the copycats].
Any pointers on how to implement this? The content is in HTML, and we have programming experience in Perl and PHP. We can hire others if required.
Additionally, search engines will only crawl half of each article; will that penalize the site, given that the crawler can't solve the CAPTCHA to reach the rest?
Thanks.
There's a really good CAPTCHA service provided by reCAPTCHA - http://recaptcha.net/
There is a PHP class that you can use to do all the hard work.
It's important to bear in mind that search engines aren't able to solve a CAPTCHA, so they will only index the first half of each report. As long as that half contains most of the relevant keywords, it shouldn't cause a massive problem. Don't make the mistake of "detecting" a search engine and showing it different content than a normal user sees - the major search engines treat that as spam (cloaking).
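For reference, a sketch of how the classic reCAPTCHA PHP library is typically wired up - the keys are placeholders you receive when you sign up, and the surrounding form handling is simplified:

    <?php
    require_once('recaptchalib.php');
    $publickey  = 'your-public-key';   // placeholder
    $privatekey = 'your-private-key';  // placeholder

    if ($_POST) {
        // Verify the user's answer against the reCAPTCHA servers.
        $resp = recaptcha_check_answer($privatekey,
                                       $_SERVER['REMOTE_ADDR'],
                                       $_POST['recaptcha_challenge_field'],
                                       $_POST['recaptcha_response_field']);
        if ($resp->is_valid) {
            echo 'Correct - output the rest of the article here.';
            exit;
        }
    }

    // Render the CAPTCHA widget inside a form.
    echo '<form method="post" action="">';
    echo recaptcha_get_html($publickey);
    echo '<input type="submit" value="Continue reading" /></form>';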
An alternative solution would be to use a service like Copyscape (http://www.copyscape.com/) to protect your content.
I know this is not what you're asking, but please take into account that CAPTCHAs are universally broken and will not protect your content. You said the first half is free - does that mean you intend to charge for the other half? A CAPTCHA won't help you there at all...
But even if you're just trying to prevent automated scraping, CAPTCHA still won't do the trick. Check out my answer to another captcha question... Or you can go straight to the ppt I presented at OWASP last year.
Readers should be able to read 50% of an article for FREE, after which a CAPTCHA must be solved to read the rest of the article
Have your PHP programmer output 50% of the article and add a CAPTCHA at the bottom. If the user solves the CAPTCHA, output 100% of the article.
Any pointers on how to implement this? The content is in HTML, and we have programming experience in Perl and PHP. We can hire others if required.
As a PHP programmer, I use http://www.phpcaptcha.org to implement CAPTCHAs; a rough sketch of the whole flow is below.
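To make that concrete, here is a minimal sketch of the 50% gate using Securimage (the phpcaptcha.org library). The article source, the naive halfway split, and the session flag are all assumptions for illustration; a real implementation would split on a paragraph boundary rather than a raw byte count:

    <?php
    // Sketch: show half the article, unlock the rest after a solved CAPTCHA.
    session_start();
    require_once('securimage.php');  // shipped with phpcaptcha.org

    $article = file_get_contents('article.html');  // hypothetical source
    $half    = (int) (strlen($article) / 2);       // naive split, for illustration

    $securimage = new Securimage();
    if (isset($_POST['captcha_code']) && $securimage->check($_POST['captcha_code'])) {
        $_SESSION['solved'] = true;  // remember the solve across page loads
    }

    if (!empty($_SESSION['solved'])) {
        echo $article;  // full article
    } else {
        echo substr($article, 0, $half);  // first half only
        // CAPTCHA image plus a form to unlock the rest.
        echo '<form method="post" action="">'
           . '<img src="securimage_show.php" alt="CAPTCHA" />'
           . '<input type="text" name="captcha_code" />'
           . '<input type="submit" value="Read the rest" /></form>';
    }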
Additionally, search engines will only crawl half of each article; will that penalize the site, given that the crawler can't solve the CAPTCHA to reach the rest?
No, it won't penalize you, but the half behind the CAPTCHA will not show up in search results.
As already mentioned, reCAPTCHA is a good way to go.
Have a look at Captcha::reCAPTCHA on CPAN, which according to the CPAN reviews "works out of the box".
If you want a CAPTCHA, there are plenty of modules for this on CPAN ;-)
Hope that helps.