Negative Captchas - help me understand spam bots better - captcha

I have to decide on a technique to prevent spam bots from registering on my site. In this question I am mainly asking about negative captchas.
I have come to know about many weaknesses of bots, but I want to know more. I read somewhere that the majority of bots do not render/support JavaScript. Why is that? How do I test whether a visiting program can evaluate JavaScript?
I started with this question: Need suggestions/ideas for easy-to-use but secure captchas
Please answer that question if you have some good captcha ideas.
Then I got ideas about negative captchas here:
http://damienkatz.net/2007/01/negative_captch.html
But Damien has written that though this technique likely won't work on big community sites (for long), it will work just fine for most smaller sites.
So, what are the chances of somebody making site-specific bots? I assume my site will be a very popular one. How safe will this technique be, considering that?
Negative captchas using more complex honeypot implementations are described here:
http://nedbatchelder.com/text/stopbots.html
Does anybody know how easily it can be implemented? Are there plugins available?
Thanks,
Sandeepan

I read somewhere that the majority of bots do not render/support JavaScript. Why is that?
Simplicity of implementation: you can read a web page's source and post forms with just a dozen lines of code in a high-level language. I've seen bots that are ridiculously bad, e.g. parsing HTML with regular expressions and getting ../ in URLs wrong, but apparently it works well enough.
However, running a JavaScript engine and implementing a DOM library is a much more complex task. You have to deal with scripts that do while(1);, that depend on timers, external resources and CSS, that sniff browsers and do lots of crazy stuff. The amount of work required quickly starts to look like writing a full browser engine.
It's also computationally much more expensive, so it's probably not as profitable for spammers: they can have a dumb bot that silently spams 100 pages/second, or a fully-featured one that spams 2 pages/second and hogs the victim's computer the way a typical web browser would.
There's a middle ground: implementing just a simple site-specific hack, like filling in a certain form field if a known script pattern is noticed in the page.
So, what are the chances of somebody making site-specific bots? I assume my site will be a very popular one. How safe will this technique be, considering that?
It's a cost/benefit trade-off. If you have high PageRank, lots of visitors, something of monetary value, or something useful for spamming, then some spammer might notice you and decide a workaround is worth his time. On the other hand, if you just have a personal blog or a small forum, there are a million other unprotected ones waiting to be spammed.

How do I test whether a visiting program can evaluate JavaScript?
Create a hidden field with some fixed value, then write a bit of JavaScript that increments or changes it, and check the submitted value on the server.
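A minimal sketch of that check, assuming a Flask handler and a hidden field named js_check (the framework, route and field name are my own illustrative choices, not anything prescribed above): the form ships with a fixed value, a tiny script rewrites it, and the server treats a submission that still carries the original value as coming from a client that never ran JavaScript.

# Hypothetical sketch: reject form posts from clients that never executed the page's JavaScript.
from flask import Flask, request

app = Flask(__name__)

FORM_PAGE = """
<form method="post" action="/register">
  <input type="text" name="username">
  <!-- Fixed value; the script below overwrites it in JS-capable browsers -->
  <input type="hidden" name="js_check" id="js_check" value="0">
  <script>document.getElementById('js_check').value = '1';</script>
  <input type="submit" value="Register">
</form>
"""

@app.route("/")
def show_form():
    return FORM_PAGE

@app.route("/register", methods=["POST"])
def register():
    # A bot that only parses the HTML and replays the form will still send "0".
    if request.form.get("js_check") != "1":
        return "Rejected: JavaScript was not executed.", 403
    return "OK: the client appears to have run the script."

Note that a constant target value is trivial to replay once a spammer looks at the page, so in practice you would have the script compute something per request (e.g. transform a server-issued token) rather than write a fixed '1'.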

Related

Hiding a page part from Google, does it hurt SEO?

We all know that showing nonexistent stuff to Google bots is not allowed and will hurt your search positioning, but what about the other way around: showing stuff to visitors that is not displayed to Google bots?
I need to do this because I have photo pages, each with a short title and the photo, along with a textarea containing the embed HTML code. Googlebot is taking the embed code and using it as the page description in its search results, which is very ugly.
Please advise.
When you start playing with tricks like that, you need to consider several things.
... showing stuff to visitors that are not displayed for Google bots.
That approach is a bit tricky.
You can certainly check User-agents to see if a visitor is Googlebot, but Google can add any number of new spiders with different User-agents, which will end up indexing your images anyway. You will have to monitor that constantly.
Testing of each code release of your website will have to cover the "images and Googlebot" scenario. That will extend the testing phase and increase testing cost.
It can also affect future development - all changes will have to be made with the "images and Googlebot" scenario in mind, which can introduce additional constraints into your system.
Personally, I would choose a slightly different approach:
First of all, review whether you can use any of the methods recommended by Google. Google provides a few helpful pages describing this problem, e.g. Blocking Google or Block or remove pages using a robots.txt file.
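For illustration only: if the embed code were served from its own URLs (the /photos/embed/ path below is purely hypothetical), a robots.txt rule of the kind those pages describe could keep Googlebot away from it:

User-agent: Googlebot
Disallow: /photos/embed/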
If that is not enough, maybe restructuring your HTML would help. Consider using JavaScript to build some customer-facing interfaces.
And whatever you do, try to keep it as simple as possible; overly complex solutions can turn around and bite you.
It is very difficult to give good advice without knowing your system, constraints and strategy, but I hope my answer helps you choose a good architecture/solution for your system.
You're overthinking this. Google does not judge you as cheating just because of something like this; it reviews such cases. As long as your purpose is the user experience rather than one of the common cheating tactics, Google will not consider it cheating.
Just block these pages with robots.txt and you'll be fine; it is not cheating - that's why they came up with a solution like that in the first place.

Captchas + Different Possibilities

I wanted to run some captcha possibilities past people to see if they are easily bypassed by bots etc.
What if colors were used - e.g. there is a string of 10 characters and you ask people to type the red characters, of which there are 5? Easy to bypass?
I've noticed a captcha on plentyoffish that involves typing in the characters under the circles. This seems a touch more complex - would this be more challenging for bots?
The other idea I was considering was putting the requirement in an image as well, as in no. 1 above - you could put "type the red characters" in an image, and this could change with different colors. Any value here?
Interested in what people think.
cheers
Colours are easy to bypass. A bot just takes the red channel and gets the answer. It is even easier than choosing among many possible solutions. The same applies to any noise in a colour different from the letters the user needs to find.
Symbols that don't touch the letters are very easy to ignore. Why would a bot even look at those circles that probably always stay at the same position? (valid but wasn't asked here)
Identifying circles or other symbols is easier than identifying letters; if one can do the latter, a simple symbol is no challenge.
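To make the red-channel point concrete, here is a rough sketch (using Pillow; the file name and the colour thresholds are arbitrary guesses) of how a bot could strip everything that isn't red before handing the image to an OCR step:

# Rough sketch: keep only the "red" characters of a colour-coded CAPTCHA.
from PIL import Image

img = Image.open("captcha.png").convert("RGB")   # hypothetical input file
width, height = img.size
out = Image.new("L", (width, height), 255)       # white grayscale canvas

for x in range(width):
    for y in range(height):
        r, g, b = img.getpixel((x, y))
        # Crude "is this pixel red?" test; thresholds are made up for the example.
        if r > 150 and g < 100 and b < 100:
            out.putpixel((x, y), 0)              # keep red pixels as black ink

out.save("red_only.png")                         # a much simpler image for OCR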
I think captchas are used too frequently in places where they aren't the best tool. For instance, are you trying to prevent registration spam? Why use a captcha rather than email validation?
What are your intentions and have you considered alternatives to the (relatively ineffective) captcha technology?
As a side note, if you have to use them, I prefer KittyAuth myself :) http://thepcspy.com/kittenauth/#5
Color-blind people will have trouble separating red from green letters. People who have trouble reading and understanding descriptions, or who have other disabilities, may have trouble reading the captchas too.
In some of these, the texts are so mangled that almost everyone has a hard time reading them.
I think captchas, if used at all, should be quite easy to read. The one with the dots and triangles is doable, although it's only a matter of time before someone writes an algorithm to crack them; this kind is very easy for computers to read too.
The best way to deal with this is to increase moderation. Design your site so that spamming it isn't rewarding at all. Don't make it your users' problem.
Also, if you're going to use captchas, it may be better to build something yourself than to use a common library. I've found that those are hacked more easily, probably because it is more rewarding to write a captcha solver for something that is used by thousands of sites.
No matter which CAPTCHA you construct, spammers will find a way to work around it, given enough incentive. Large CAPTCHA services like reCAPTCHA, for instance, get bypassed by outsourcing their solving to cheap labor in India (source).
If you run a small site, your best bet is to make your own mini-CAPTCHA, which asks a simple question. If it isn't a standard question, isn't a standard CAPTCHA module and isn't a large site, it isn't worth it for the spammers to automate bypassing it.
I've been working on a community site for an organization at my university, and we've had trouble with spammers registering, despite us using every CAPTCHA module in the book. As soon as we made our own simple one-question CAPTCHA, all spam stopped. The key to preventing this sort of spam often lies in uniqueness.
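A sketch of what such a one-question check might look like (the questions, answers and function names are all invented for this example; the point is only that the logic is site-specific and trivial to write):

# Hypothetical one-question mini-CAPTCHA for a small site.
import random

QUESTIONS = {
    "q1": ("What colour is the sky on a clear day?", "blue"),
    "q2": ("What is 5 + 2?", "7"),
    "q3": ("Is ice hot or cold?", "cold"),
}

def pick_question():
    """Choose a question to embed in the registration form (store the key in the session)."""
    key = random.choice(list(QUESTIONS))
    return key, QUESTIONS[key][0]

def is_human(question_key, submitted_answer):
    """Return True if the submitted answer matches the expected one."""
    expected = QUESTIONS.get(question_key, ("", None))[1]
    return (submitted_answer or "").strip().lower() == expected

# Usage: key, text = pick_question() when rendering the form;
# later, is_human(key, posted_value) when the form comes back.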

How does Google Know you are Cloaking?

I can't seem to find any information on how google determines if you are cloaking your content. How, from a technical standpoint, do you think they are determining this? Are they sending in things other than the googlebot and comparing it to the googlebot results? Do they have a team of human beings comparing? Or can they somehow tell that you have checked the user agent and executed a different code path because you saw "googlebot" in the name?
It's in relation to this question on legitimate url cloaking for seo. If textual content is exactly the same, but the rendering is different (1995-style html vs. ajax vs. flash), is there really a problem with cloaking?
Thanks for your input on this one.
As far as I know, how Google prepares search engine results is secret and constantly changing. Spoofing different user-agents is easy, so they might do that. They also might, in the case of Javascript, actually render partial or entire pages. "Do they have a team of human beings comparing?" This is doubtful. A lot has been written on Google's crawling strategies including this, but if humans are involved, they're only called in for specific cases. I even doubt this: any person-power spent is probably spent by tweaking the crawling engine.
Google looks at your site while presenting user-agents other than googlebot.
See page 11 of the Google Chrome comic book, which describes (in better than layman's terms) how a Google tool can take a schematic of a web page. They could be using this or similar technology for Google search indexing and cloak detection - at least that would be another good use for it.
Google does hire contractors (indirectly, through an outside agency, for very low pay) to manually review documents returned as search results and judge their relevance to the search terms, quality of translations, etc. I highly doubt that this is their only tool for detecting cloaking, but it is one of them.
In reality, many of Google's algos are trivially reversed and are far from rocket science. In the case of so-called "cloaking detection", all of the previous guesses are on the money (apart from, somewhat ironically, John K lol). If you don't believe me, set up some test sites (inputs) and some 'cloaking test cases' (further inputs), submit your sites to uncle Google (processing) and test your non-assumptions via pseudo-advanced human-based cognitive correlationary quantum perceptions (<-- btw, i made that up for entertainment value (and now i'm nesting parentheses to really mess with your mind :)) AKA "checking google results to see if you are banned yet" (outputs). Loop until enlightenment == True (noob!) lol
A very simple test would be to compare the file size of a webpage as Googlebot saw it against the file size of the same page fetched with a Google-controlled user agent that looks like a normal user.
This would flag the most suspect candidates for closer examination.
They fetch your page using tools like curl and construct a hash of the page retrieved without the googlebot user agent, then construct another hash of the page retrieved with the googlebot user agent. The two hashes must be similar; they have algorithms to compare the hashes and determine whether it's cloaking or not.
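A toy version of that comparison (the Googlebot string is the real one Google publishes; the normalisation, the size tolerance and the fallback to a length check are assumptions of mine):

# Rough sketch: fetch the same URL as "Googlebot" and as a regular browser, then compare.
import hashlib
import urllib.request

GOOGLEBOT_UA = "Mozilla/5.0 (compatible; Googlebot/2.1; +http://www.google.com/bot.html)"
BROWSER_UA = "Mozilla/5.0 (Windows NT 10.0; Win64; x64)"

def fetch(url, user_agent):
    req = urllib.request.Request(url, headers={"User-Agent": user_agent})
    with urllib.request.urlopen(req) as resp:
        return resp.read()

def looks_like_cloaking(url, size_tolerance=0.10):
    bot_body = fetch(url, GOOGLEBOT_UA)
    usr_body = fetch(url, BROWSER_UA)
    if hashlib.sha256(bot_body).hexdigest() == hashlib.sha256(usr_body).hexdigest():
        return False   # byte-identical responses: no cloaking by user agent
    # Dynamic pages rarely hash identically, so fall back to the crude size check above.
    return abs(len(bot_body) - len(usr_body)) > size_tolerance * max(len(bot_body), len(usr_body))

print(looks_like_cloaking("http://example.com/"))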

Another answer to the CAPTCHA problem? [closed]

Most sites at least employ server access log checking and banning along with some kind of bot prevention measure like a CAPTCHA (those messed-up text images).
The problem with CAPTCHAs is that they pose a threat to the user experience. Luckily they now come with user-friendly features like refresh and audio versions.
Anyway, as with Linux vs. Windows, it isn't worth a spammer's time to customize and/or build a script to handle a custom CAPTCHA that only pertains to one site. Therefore, I was wondering whether there might be better ways to handle the whole CAPTCHA thing.
In A Better CAPTCHA, Peter Bromberg mentions that one way would be to convert the image to HTML and display it embedded in the page. On http://shiflett.org/ Chris simply asks users to type his name into an input. Examples like these are ways of simplifying the CAPTCHA experience while decreasing its value for spammers. Does anyone know of more good examples I could use, or see any problem with the embedded image idea?
An image presented as an HTML table is just a technical speed bump. There's no difficulty in extracting pixels from such a document.
IMHO, CAPTCHA puts the focus on the wrong thing: you're not interested in whether there's a human on the other side. You wouldn't like a human to spam you either. So take a step back and focus on the spam:
Analyze text (look for spammy keywords, use bayesian filtering)
Analyze links (blacklist spammy domains – SURBL, LinkSleeve)
Look at traffic patterns and block floods
There's no single perfectly accurate method, but you can use a few of them and weight the results to get pretty close (a rough sketch follows below).
Have a look at the source code of Sblam! (a completely transparent server-side comment spam filter).
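A crude sketch of that "combine and weight several signals" idea (the keyword list, blacklist, weights and threshold below are placeholders I made up, not anything taken from Sblam!):

# Hypothetical weighted spam score built from several weak signals.
import re

SPAMMY_WORDS = {"viagra", "casino", "cheap pills"}   # placeholder keyword list
BLACKLISTED_DOMAINS = {"spam-example.test"}          # placeholder domain blacklist

def keyword_score(text):
    t = text.lower()
    return sum(1.0 for w in SPAMMY_WORDS if w in t)

def link_score(text):
    hosts = re.findall(r"https?://([^/\s]+)", text)
    blacklisted = sum(1 for h in hosts if h.lower() in BLACKLISTED_DOMAINS)
    return 2.0 * blacklisted + 0.5 * max(0, len(hosts) - 2)   # lots of links is suspicious

def flood_score(posts_from_ip_last_minute):
    return 1.0 if posts_from_ip_last_minute > 3 else 0.0

def is_spam(text, posts_from_ip_last_minute, threshold=2.0):
    total = keyword_score(text) + link_score(text) + flood_score(posts_from_ip_last_minute)
    return total >= threshold

print(is_spam("Buy cheap pills at http://spam-example.test/now", posts_from_ip_last_minute=5))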
Alternatives to captchas require considering the problem from other angles. The reason is that captchas are built around the idea that a human and a computer actor can be distinguished. As artificial intelligence progresses, this will become an increasingly difficult problem as the gap between computer and human users shrinks.
The technique used here on slashdot is for other users of the site to act as gatekeepers, marking abuse and removing offending posts before they become noticeable to a wide audience.
Another technique is to detect spam-like posts directly, using the same technology used to filter spam from email. Obviously it isn't 100% effective for email, and won't be for other uses either, but if you can filter out 75% of the spam with very few false positives, then other techniques only have to deal with the remaining 25%.
Keep a log of spam-related activity, so that you can track trends in offending IP addresses, post content, claimed user agents, and so forth, and block abusive users at the routing level.
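One way to act on such a log (the tab-separated log format and the "block after five incidents" cut-off are assumptions of mine, purely for illustration) is to periodically roll it up into a deny list for whatever sits at the routing level:

# Hypothetical roll-up of a spam-incident log into per-IP deny rules.
from collections import Counter

BLOCK_AFTER = 5   # arbitrary cut-off: block an IP after this many logged incidents

def build_deny_list(log_path):
    """Read 'ip<TAB>user_agent<TAB>summary' lines and return IPs over the cut-off."""
    counts = Counter()
    with open(log_path) as log:
        for line in log:
            fields = line.rstrip("\n").split("\t")
            if fields and fields[0]:
                counts[fields[0]] += 1
    return [ip for ip, n in counts.items() if n >= BLOCK_AFTER]

if __name__ == "__main__":
    for ip in build_deny_list("spam_incidents.log"):
        # Emit rules for the web server or firewall, e.g. nginx-style "deny" lines.
        print(f"deny {ip};")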
In nearly all cases, your users would rather put up with the slight inconvenience of abuse prevention, than the huge inconvenience of a major spam problem.
Ultimately, the arms race between you and spammers is one of cost versus benefit. Initially, it will cost spammers close to nothing to spam your site, but you can change that to make it very difficult. Even if they continue to spam your site, the benefit they receive will never grow beyond a few innocent users falling for their schemes. Once the cost of spamming rises sharply above the benefit, the spammers will go away.
Another way to benefit from that is to allow advertising on your site. Make it inexpensive (but not free, of course) and easy for legitimate advertisers to post responsible marketing material for your users to see. Would-be spammers may find that it is a better deal to just pay you a few dollars and get their offering seen than to pursue clandestine methods.
Obviously most spammers won't fit into this category, since spamming is often more about getting your users to fall victim to malware exploits. You can do your part there by encouraging users to run modern, up-to-date browsers and plugins so that they are less vulnerable to those exploits.
This article describes a technique based on hashed field names (changing with each page view) with some of them being honeypot fields (i.e. the request is rejected if they're filled) that are hidden from human users via various techniques.
Basically, it relies on spam scripts not being sophisticated enough to determine which form fields are actually visible. In a way, that is a CAPTCHA, since in order to solve it reliably, not only would they have to implement HTML, CSS and JavaScript fully, they'd also have to recognize when a field is too small to see, colored the same as the background, hidden behind another field, placed outside the browser's viewport, etc.
It's the same basic problem that makes Web Standards a farce: there is no algorithm to determine whether a webpage "looks right" - only a human can decide that.
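A bare-bones sketch of that hashed-field-name scheme (the secret, the HMAC construction, the field set and the CSS trick are all invented here for illustration; the linked article's actual implementation may differ):

# Hypothetical per-page-view hashed field names with a hidden honeypot field.
import hashlib
import hmac

SECRET = b"change-me"   # placeholder server-side secret

def field_name(real_name, nonce):
    """Derive an opaque, per-page-view name for a form field."""
    msg = f"{nonce}:{real_name}".encode()
    return hmac.new(SECRET, msg, hashlib.sha256).hexdigest()[:16]

def render_form(nonce):
    user = field_name("username", nonce)
    trap = field_name("honeypot", nonce)
    # The honeypot input is pushed off-screen so humans never see or fill it.
    return (
        f'<form method="post">'
        f'<input type="hidden" name="nonce" value="{nonce}">'
        f'<input type="text" name="{user}">'
        f'<input type="text" name="{trap}" style="position:absolute;left:-5000px">'
        f'</form>'
    )

def is_bot(posted_fields, nonce):
    """Reject the request if the honeypot field came back non-empty."""
    return bool(posted_fields.get(field_name("honeypot", nonce), ""))

# Usage: render_form("a1b2c3") on GET; is_bot(request_fields, posted_nonce) on POST.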
Seen this?
It's a system with cute pictures instead of a captcha ;)
But I still think honeypots are a better solution - they're so cheap, easy and invisible.
I really think that Dinah hit the nail on the head. The fact seems to be that the beauty of the whole CAPTCHA setup is that there is no standard. Standardizing would only help the market to be more profitable.
Therefore it seems that the best way to handle the CAPTCHA problem is to come up with a system that is fairly hard for bots to crack and that is NOT used by anyone else on the planet. It could be a question system, a very custom image creator, or even a mix of JS calls that only browsers respect.
By the time your site is big enough for spammers to care, you should have the budget to rethink your CAPTCHA setup and optimize it much more. In the meantime, we should be monitoring our server logs and banning bad agents, referrers, and IPs.
In my case I created a CAPTCHA image that I believe is very different from any other CAPTCHA I have seen. This should do fine for now, alongside my Apache logs + htaccess banning and Akismet checking. Maybe I should spend time on a reporting feature as well.
Although not a true image captcha, a good Turing test is asking users a random question - common options are: Is ice hot or cold? 5+2=? etc.

Most effective form of CAPTCHA?

Of all the forms of CAPTCHA available, which one is the "least crackable" while remaining fairly human readable?
I believe that CAPTCHA is dying. If someone really wants to break it, it will be broken. I read (somewhere, I don't remember where) about a site that gave people free porn in exchange for solving CAPTCHAs, so bots can have them solved for them and render them obsolete. So, why bother?
Anyone who really wants to break this padlock can use a pair of bolt cutters, so why bother with the lock?
Anyone who really wants to steal this car can drive up with a tow truck, so why bother locking my car?
Anyone who really wants to open this safe can cut it open with an oxyacetylene torch, so why bother putting things in the safe?
Because using the padlock, locking your car, putting valuables in a safe, and using a CAPTCHA weeds out a large spectrum of relatively unsophisticated or unmotivated attackers. The fact that it doesn't stop sophisticated, highly motivated attackers doesn't mean that it doesn't work at all. Using a CAPTCHA isn't going to stop all spammers, but it's going to tremendously reduce the amount that requires filtering or manual intervention.
Heck, look at the lame CAPTCHA that Jeff uses on his blog. Even a wimpy barrier like that still provides a lot of protection.
I agree with Thomas. Captcha is on its way out. But if you must use it, reCAPTCHA is a pretty good provider with a simple API.
I believe that CAPTCHA is dying. If someone really wants to break it, it will be broken. I read (somewhere, I don't remember where) about a site that gave people free porn in exchange for solving CAPTCHAs, so bots can have them solved for them and render them obsolete. So, why bother?
If you're a small enough site, no one would bother.
If you're still looking for a CAPTCHA, I like tEABAG_3D by the OCR Research Team. It's complicated to break and uses your 3D vision. Plus, it is developed by people who break CAPTCHAs for fun.
If you're just looking for a captcha to prevent spammers from bombing your blog, the best option is something simple but unique. For example, ask the user to write the word "Cat" into a box. The advantage of this is that no targeted captcha-breaker has been developed for this solution, and your small blog isn't important enough for someone to actually develop one. I've used such a captcha on my blog with some success for a couple of years now.
This information is hard to really know because I believe a CAPTCHA gets broken long before anybody knows about it. There is economic incentive for those that break them to keep it quiet.
I used to work with a guy whose job revolved mostly around breaking CAPTCHA's and I can tell you the one giving them fits currently is reCAPTCHA.
Now, does that mean it will give them fits forever? Call me skeptical.
I wonder whether a CAPTCHA mechanism that uses a collage made of pictures and asks the human to identify what he sees in the collage image would be much more crack-proof than the text-and-number image kind. Imagine that the mechanism stitches pictures of a cat, a cup and a car into a collage image and expects the human visitor to tick checkboxes for cat, cup, and car. How long do you think it will take hackers and crackers to come up with an algorithm to crack the mechanism (i.e. extract the image elements from the collage and recognize the object depicted by each picture)...
If you wanted you could try out the Microsoft Research project Asirra: http://research.microsoft.com/asirra/
CAPTCHAs, I believe, should be considered carefully when designing the UX. They're slow, cumbersome, and a very poor user experience. They are useful, don't get me wrong, but perhaps you should look into designing a honeypot instead.
A honeypot is created by adding a hidden field at the bottom of the form. Because spam bots will blindly fill in all the fields on the page, you can do a check:
If honeypotField <> "" Then
    ' The hidden honeypot field was filled in, so this is almost certainly a bot
    ' Reject the submission ("No Spam TY")
Else
    ' The field is empty; proceed with processing the form
End If
This works until there is a spambot designed specifically for your site that can choose to fill out only selected input fields.
For more information: http://haacked.com/archive/2007/09/11/honeypot-captcha.aspx/
As far as I know, Google's is the best there is. It hasn't been broken by computer programs yet. What I know crackers have been doing is copying the image and sending it to phishing websites, where humans solve it in order to enter those websites.
It doesn't matter if captchas are broken or not now -- there are Indian firms that do nothing but process captchas. I'm with the rest of the group in saying that Captchas are on their way out.
Here is a cool link for creating a CAPTCHA: http://www.codeproject.com/aspnet/CaptchaImage.asp
Just... don't. There are several reasons why use of captchas is not advised.
http://www.interfacegeek.com/dont-ever-use-captchas/
I use uniqpin.com - it's easy to use and not annoying for users. Bots can recognise text, but they can't recognise an image.
Death by Captcha can solve any regular CAPTCHA (including reCAPTCHA), but not the Speedcoin Cryptocurrency Captcha.
Death by Captcha - http://deathbycaptcha.com
Speedcoin Captcha - http://speedcoin.co/info/captcha/Speedcoin_Captcha.html