OpenX targeting channel setup ("OR" functionality)

I have a site that uses OpenX for banner delivery. I want a banner that shows up when people are in a sports section of my site, so only when people are on
site.com/football
site.com/baseball
I set up a targeting channel for that which only shows banners when the site's Page URL contains "football baseball". I use the iframe version of the invocation code. When the limitation contains just "football" or just "baseball", this works, but when I enter "football baseball" or "football,baseball" it doesn't.
A workaround could be to have a new limitation rule for each of the keywords, but somehow there must be an "or" functionality in there, or am I mistaken?

I think there are a couple of ways you should be able to handle this.
Create multiple delivery limitations (as you mentioned). In the Targeting Channel, or directly in the Delivery tab for the banner, when you "Add" multiple limitations there should be a drop-down between each limitation allowing you to specify an AND or OR operator. The benefit of setting this up as a Targeting Channel is that you can create complex groupings of limitations that aren't possible at the banner level alone.
The other option you could try is "Regex match". When you select the Delivery Limitation of "Site - Page URL", instead of selecting "Contains" you can select "Regex match". This should allow you to specify a value like football|baseball, or something more complex.
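This isn't OpenX-specific, but a quick way to sanity-check what a "Regex match" value like football|baseball would accept (runnable with Node or in a browser console):

var pattern = /football|baseball/;
[
  'http://site.com/football/article-1',  // matches
  'http://site.com/baseball/scores',     // matches
  'http://site.com/basketball'           // no match
].forEach(function (url) {
  console.log(url, '=>', pattern.test(url));
});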

Generate a URL to Rally objects using FormattedID?

I would like to write web pages that have links to Rally issues (Test Cases, Defects, etc). I would like to be able to generate a URL with the FormattedID. Is this possible? Or do I really need the objectID? For example:
http://rally1.rallydev.com/363953481d/detail/testcase/TC1665
(or something like that, instead of the cryptic object id)
The following allows users to go directly to the detail page of a work product without having to know the Object ID:
https://rally1.rallydev.com/#/search?keywords=US1234
This relies on a feature of Rally's search functionality and isn't officially supported - so the above URL isn't guaranteed to work forever. However, it's a decent way to use FormattedIDs as a workaround.
Searching by just keyword will give you the item with that ID and related items (e.g. other items that mention the desired ID in their name). Sometimes this is fine, sometimes not.
To search for DE75700 and DE72760 only, and no others, use
https://rally1.rallydev.com/#/search?keywords=FormattedID%3ADE75700%20FormattedID%3ADE72760
This is equivalent to manually typing
FormattedID:DE75700 FormattedID:DE72760
in the Rally search box.
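As a small sketch (plain string building, not an official Rally API), you could generate that search URL from a list of FormattedIDs like this:

function rallySearchUrl(formattedIds) {
  // Joins "FormattedID:<id>" terms with spaces and URL-encodes them, as in the example above.
  var keywords = formattedIds.map(function (id) { return 'FormattedID:' + id; }).join(' ');
  return 'https://rally1.rallydev.com/#/search?keywords=' + encodeURIComponent(keywords);
}

console.log(rallySearchUrl(['DE75700', 'DE72760']));
// -> https://rally1.rallydev.com/#/search?keywords=FormattedID%3ADE75700%20FormattedID%3ADE72760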
As a corollary to the main answer I have defined a Chrome browser bookmark which will take me right to any Rally item by its ID.
The URL for this bookmark in full is:
javascript:(function(){window.location = "https://rally1.rallydev.com/#/search?keywords=" + prompt("Enter ID:");})();
When this bookmark is activated, it prompts you to enter an ID.
I find this to be a huge time saver.
Thanks to "Displaying a prompt from javascript Chrome bookmark".

Stop spam without captcha

I want to stop spammers from using my site. But I find CAPTCHA very annoying. I am not just talking about the "type the text" type, but anything that requires the user to waste his time to prove himself human.
What can I do here?
Requiring Javascript to post data blocks a fair amount of spam bots while not interfering with most users.
You can also use a nifty trick:
<input type="text" id="not_human" name="name" />
<input type="text" name="actual_name" />
<style>
#not_human { display: none }
</style>
Most bots will populate the first field, so you can block them.
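A minimal sketch of the matching server-side check, assuming the form body has already been parsed into an object (the function name and the Express-style usage in the comment are mine, not from any library):

// The hidden #not_human field is invisible to real visitors, so it should arrive empty.
function looksLikeBot(fields) {
  return typeof fields.not_human === 'string' && fields.not_human.trim() !== '';
}

// e.g. in an Express-style handler:
// if (looksLikeBot(req.body)) return res.status(400).send('Blocked');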
I combine a few methods that seem quite successful so far (a rough sketch of these checks follows below):
Provide an input field with the name email and hide it with CSS display: none. When the form is submitted, check whether this field is empty. Bots tend to fill it with a bogus email address.
Provide another hidden input field which contains the time the page was loaded. Check whether the time between loading and submitting the page is longer than the minimum time it takes to fill in the form. I use between 5 and 10 seconds.
Then check whether the number of GET parameters is what you would expect. If your form's action is POST and the underlying URL of your submission page is index.php?p=guestbook&sub=submit, then you expect 2 GET parameters. Bots try to add GET parameters, so this check would fail.
And finally, check that HTTP_USER_AGENT is set, which bots sometimes don't set, and that HTTP_REFERER is the URL of the page containing your form. Bots sometimes just POST to the submission page, causing HTTP_REFERER to be something else.
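Here is that rough sketch of the four checks, using Node.js with Express (an assumption on my part; the original setup is PHP-flavoured). Field names such as email and form_time and the 5-second threshold are illustrative only:

const express = require('express');
const app = express();
app.use(express.urlencoded({ extended: false }));

// Assumes the form page embedded Date.now() in a hidden "form_time" field when it was rendered.
app.post('/index.php', (req, res) => {
  const problems = [];

  // 1. The hidden "email" honeypot must stay empty.
  if (req.body.email) problems.push('honeypot filled');

  // 2. Submitting faster than a human plausibly could is suspicious.
  const elapsed = Date.now() - Number(req.body.form_time || 0);
  if (elapsed < 5000) problems.push('submitted too quickly');

  // 3. Expect exactly the GET parameters in the action URL (?p=guestbook&sub=submit -> 2).
  if (Object.keys(req.query).length !== 2) problems.push('unexpected query parameters');

  // 4. User-Agent must be present; Referer should point back at the form page.
  if (!req.get('User-Agent')) problems.push('missing User-Agent');
  if (!(req.get('Referer') || '').includes('p=guestbook')) problems.push('suspicious Referer');

  if (problems.length) return res.status(400).send('Rejected: ' + problems.join(', '));
  res.send('Thanks for your entry!');
});

app.listen(3000);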
I got most of my information from http://www.braemoor.co.uk/software/antispam.shtml and http://www.nogbspam.com/.
Integrate the Akismet API to automatically filter your users' posts.
If you're looking for a .NET solution, the Ajax Control Toolkit has a control named NoBot.
NoBot is a control that attempts to provide CAPTCHA-like bot/spam prevention without requiring any user interaction. NoBot has the benefit of being completely invisible. NoBot is probably most relevant for low-traffic sites where blog/comment spam is a problem and 100% effectiveness is not required.
NoBot employs a few different anti-bot techniques:
Forcing the client's browser to perform a configurable JavaScript calculation and verifying the result as part of the postback. (Ex: the calculation may be a simple numeric one, or may also involve the DOM for added assurance that a browser is involved)
Enforcing a configurable delay between when a form is requested and when it can be posted back. (Ex: a human is unlikely to complete a form in less than two seconds)
Enforcing a configurable limit to the number of acceptable requests per IP address per unit of time. (Ex: a human is unlikely to submit the same form more than five times in one minute)
More discussion and demonstration at this blogpost by Jacques-Louis Chereau on NoBot.
<ajaxToolkit:NoBot
ID="NoBot2"
runat="server"
OnGenerateChallengeAndResponse="CustomChallengeResponse"
ResponseMinimumDelaySeconds="2"
CutoffWindowSeconds="60"
CutoffMaximumInstances="5" />
I would be careful using CSS or Javascript tricks to ensure a user is a genuine real life human, as you could be introducing accessibility issues, cross browser issues, etc. Not to mention spam bots can be fairly sophisticated, so employing cute little CSS display tricks may not even work anyway.
I would look into Akismet.
Also, you can be creative in the way you validate user data. For example, let's say you have a registration form that requires a user email and address. You can be fairly hardcore in how you validate the email address, even going so far as to ensure the domain is actually set up to receive mail, and that there is a mailbox on that domain that matches what was provided. You could also use Google Maps API to try and geolocate an address and ensure it's valid.
To take this even further, you could implement "hard" and "soft" validation errors. If the email address doesn't match a regex validation string, then that's a hard fail. Not being able to check the DNS records of the domain to ensure it accepts mail, or that the mailbox exists, is a "soft" fail. When you encounter a soft fail, you could then ask for CAPTCHA validation. This would hopefully reduce the number of times you'd have to push for CAPTCHA verification, because if you're getting enough activity on the site, valid people should be entering valid data at least some of the time!
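A minimal sketch of that hard/soft split using Node's built-in dns module; the regex is deliberately simplistic, and the further step of verifying that the mailbox itself exists is left out:

const dns = require('dns').promises;

async function validateEmail(address) {
  // Hard fail: the address doesn't even look like an email address.
  if (!/^[^@\s]+@[^@\s]+\.[^@\s]+$/.test(address)) return 'hard-fail';

  // Soft fail: the domain has no MX records (or the lookup failed) -> fall back to CAPTCHA.
  const domain = address.split('@')[1];
  try {
    const mx = await dns.resolveMx(domain);
    return mx.length ? 'ok' : 'soft-fail';
  } catch (err) {
    return 'soft-fail';
  }
}

validateEmail('someone@example.com').then(console.log);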
I realize this is a rather old post; however, I came across an interesting solution called the "honey-pot captcha" that is easy to implement and doesn't require JavaScript:
Provide a hidden text box!
Most spambots will gladly complete the hidden text box allowing you to politely ignore them.
Most of your users will never even know the difference.
To prevent a user with a screen reader from falling into your trap, simply label the text box "If you are human, leave blank" or something to that effect.
Tada! Non-intrusive spam-blocking! Here is the article:
http://www.campaignmonitor.com/blog/post/3817/stopping-spambots-with-two-simple-captcha-alternatives
Since it is extremely hard to avoid spam 100%, I recommend reading this IBM article posted two years ago, titled 'Real Web 2.0: Battling Web spam', where visitor behavior and control workflow are analyzed well and concisely:
Web spam comes in many forms, including:
Spam articles and vandalized articles on wikis
Comment spam on Weblogs
Spam postings on forums, issue trackers, and other discussion sites
Referrer spam (when spam sites pretend to refer users to a target site that lists referrers)
False user entries on social networks
Dealing with Web spam is very difficult, but a Web developer neglects spam prevention at his or her peril. In this article, and in a second part to come later, I present techniques, technologies, and services to combat the many sorts of Web spam.
The article also links to a very interesting "...hashcash technique for minimizing spam on Wikis and such, in addition to e-mail."
How about a human-readable question that asks the user to enter, say, the first letter of the value they put in the first-name field and the last letter of the last-name field, or something like that?
Or show some hidden fields which are filled by JavaScript with values such as the referrer, and so on. Then check those fields for equality with the values you stored in the session beforehand.
If the values are empty, the user has no JavaScript, so it is presumably not spam. A bot, however, will fill in at least some of them.
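A rough sketch of that idea, with the client and server halves shown together for brevity; the field name js_referer and the session comparison are assumptions on my part, not from the original post:

// Client side: JavaScript fills a hidden field after the page loads.
//   <input type="hidden" name="js_referer" id="js_referer" value="">
document.getElementById('js_referer').value = document.referrer;

// Server side (sketch): expectedReferer was stored in the session when the form page was served.
// An empty value means JavaScript is off, which the answer above treats as "probably not a bot";
// a filled-in but wrong value looks like a bot.
function classify(submittedValue, expectedReferer) {
  if (!submittedValue) return 'no JavaScript - treat as human';
  return submittedValue === expectedReferer ? 'ok' : 'likely bot';
}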
You should certainly pick one of these: a honeypot or BOTCHA.

How does Google track search result clicks? Is this the best way?

As the question states, I'm trying to figure out how google tracks clicks on search results. When you view the source, you find the following:
<a href="http://www.yahoo.com/" onmousedown="return rwt(this, ...)"><em>Yahoo</em>!</a>
The rwt function, which is pretty messy, looks like this:
window.rwt = function (b, d, e, g, h, f, i, j) {
  var a = encodeURIComponent || escape, c = b.href.split("#");
  // Rewrites the link's href to Google's /url?... redirect, preserving any #fragment.
  b.href = ["/url?sa=t\x26source\x3dweb", d ? "&oi=" + a(d) : "", e ? "&cad=" + a(e) : "",
    "&ct=", a(g), "&cd=", a(h), "&url=", a(c[0]).replace(/\+/g, "%2B"),
    "&ei=7_C2SbqXBMW0-AbU4OWnCw", f ? "&usg=" + f : "", i, c[1] ? "#" + c[1] : ""].join("");
  b.onmousedown = "";
  return true;
};
So it looks like Google is changing the href of the a tag to /url?... which I'm assuming is where their tracking is. From LiveHeaders in Firefox, it looks like this page is redirecting the browser to the original href of the a tag.
Is this correct and is this the best method of tracking clicks on links on your site, such as ads?
It's actually changing the href of the link rather than the window location. It's setting b.href, and b refers to the link itself. This runs in onmousedown, so when you release the mouse and the click is handled you magically get sent to that new href.
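A simplified version of the same trick, not Google's actual code: rewrite the href at mousedown so the status bar keeps showing the real destination until the user commits to the click (the /click endpoint and the data-track attribute are assumptions):

document.querySelectorAll('a[data-track]').forEach(function (link) {
  link.addEventListener('mousedown', function () {
    // Only rewrite once, otherwise repeated mousedowns would wrap the URL again.
    if (link.dataset.tracked) return;
    link.dataset.tracked = '1';
    link.href = '/click?url=' + encodeURIComponent(link.href);
  });
});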
Any click tracking pretty much comes down to sending the user to some equivalent of Google's /url?... script, counting the click, and performing a 302 redirect to the real destination.
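And a sketch of the counting endpoint itself, using Node/Express (an assumption): record the click, then 302 to the real destination:

const express = require('express');
const app = express();
const clicks = {}; // in-memory counter; use a real store in production

app.get('/click', (req, res) => {
  const target = req.query.url;
  // In production, validate `target` against your own ad URLs to avoid acting as an open redirect.
  if (!target) return res.status(400).send('missing url');
  clicks[target] = (clicks[target] || 0) + 1;
  res.redirect(302, target);
});

app.listen(3000);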
This javascript href replacement has the advantage of automatically filtering out any robots that don't run scripts. The downside is that it also filters out any real people that have javascript disabled. If, like Google, you just care which link is most popular with your real human users, this works out quite well. The clicks that you do record should be representative of real human traffic, and you can safely ignore the clicks from non-javascript users because they probably have the same preferences anyway.
Most adverts just link straight to the counting URL with no javascript replacement. This means that you definitely count every real click on the link, but you need to worry about filtering out requests from robots, since they'll now see your counting URL too.
Which you prefer really depends on why you want to track the clicks.
I think most people expect ads to click through via some sort of tracking system, so I shouldn't worry too much about following this particular javascript implementation. As much as anything, it's probably there to ensure that the user sees the correct link in the browser's status bar, that various other interesting bits of info (search terms, position in the result set at the time, who you are, etc.) are sent across (without you realising it), and that the links still work if JavaScript is disabled.
Generally, yes: directing the user through some tracking page with the ID of the ad they clicked on, and possibly some additional indication of where they came from, is sensible. That way you aren't relying on other mechanisms (such as JS event handlers) to track clicks on the links, and it's certainly the way most ad systems I've used work.

What's the best way to test a site which displays differently depending on the client location?

I am using an IP location lookup to display localised prices to customers depending on whether they are visiting from the UK, US or general EU and defaulting to the US price if the location can't be determined.
I could easily force the system to believe I'm from a specific country for testing but still there is no way of knowing for sure that it's displaying correctly when a visitor from abroad accesses my site. Is the use of some proxy the only viable way of testing a site like this? If so how would I go about tracking down one that I can use to test my site from various countries of origin?
You should be able to achieve that by using proxies. http://www.proxy4free.com/page1.html has a bunch. That site just came from a Google search; I've never used proxies like this before though, so there may be better sites out there.
This is not about how to test, but rather how you identify your visitors.
Instead of using IP-lookup to determine their geographical location, you should instead grab the information about the locale they use from the useragent string.
For instance, I'm Norwegian, and when I go to useragent.org I see that my browser sends "nb-NO" as the language my machine uses.
You can easily use that to customize currency, dates etc on your site.
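In practice the browser sends that locale in the Accept-Language request header; here is a minimal Node sketch of picking a price region from it (the region mapping is illustrative only):

const http = require('http');

http.createServer((req, res) => {
  const locale = (req.headers['accept-language'] || 'en-US').split(',')[0].trim(); // e.g. "nb-NO"
  const country = (locale.split('-')[1] || '').toUpperCase();
  const region = country === 'GB' ? 'UK prices'
    : ['DE', 'FR', 'NL', 'IT', 'ES'].indexOf(country) !== -1 ? 'EU prices'
    : 'US prices';
  res.end('Locale ' + locale + ' -> ' + region);
}).listen(3000);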
If the website is indexed in Google's cache, you can visit the country-specific Google domain (e.g. http://www.google.co.uk/) and check whether the cached copy displays properly.
#Frode:
Checking the system locale in the useragent string might be misleading.
Say I go to Canada and set my system locale to French. It might then show me EU prices instead of US prices. There are many such cases where the locale won't give accurate information about the end user's desired "price class" in this particular application.
=AD
If you want to use geo-ip location to detect a user's language, using a proxy probably is the best way to do so.
There are a lot of lists of open proxies on the web, mostly listed with the countries. Google has quite a lot of search results on this topic. Of the top results, I have used SamAir to test some stuff before.
Searching for a working open proxy with an acceptable speed in the correct country can be a tedious task. Also keep in mind that you should not use any of these proxy servers to submit sensitive data, because you never know who runs them. It could be a kinda trustworthy ISP (ie. not from GB ;D), a honeypot to collect data, or an illegal open proxy hosted by some trojan.

How do you access browser history?

Some e-Marketing tools claim to choose which web page to display based on where you were before. That is, if you've been browsing truck sites and then go to Ford.com, your first page would be of the Ford Explorer.
I know you can get the immediately preceding page with HTTP_REFERER, but how do you know where they were 6 sites ago?
JavaScript: this should get you started: http://www.dicabrio.com/javascript/steal-history.php
There are more nefarious means too: http://ha.ckers.org/blog/20070228/steal-browser-history-without-javascript/
Edit: I wanted to add that although this works, it is a sleazy marketing technique and an invasion of privacy.
Unrelated but relevant, if you only want to look one page back and you can't get to the headers of a page, then document.referrer gives you the place a visitor came from.
You can't access the values for the entries in browser history (neither client side nor server side). All you can do is to send the browser back or forward a number of steps. The entries of the history are otherwise hidden from programmatic access.
Also note that HTTP_REFERER won't be there if the user typed the address in the URL bar instead of following a link to your page.
The browser history can't be directly accessed, but you can compare a list of sites with the user's history. This can be done because the browser attributes a different CSS style to a link that hasn't been visited and one that has.
Using this style difference you can change the content of your pages using pure CSS, but in general JavaScript is used. There is a good article here about using this trick to improve the user experience by displaying only the RSS aggregator or social bookmarking links that the user actually uses: http://www.niallkennedy.com/blog/2008/02/browser-history-sniff.html
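For completeness, a rough sketch of the trick the article describes. Note that modern browsers deliberately restrict what :visited styles report, so this mostly only worked in older browsers, and the colour value to compare against is whatever your own stylesheet assigns:

// Assumes the page's CSS contains something like:  a:visited { color: rgb(255, 0, 0); }
function probablyVisited(url) {
  const a = document.createElement('a');
  a.href = url;
  document.body.appendChild(a);
  const visited = getComputedStyle(a).color === 'rgb(255, 0, 0)';
  document.body.removeChild(a);
  return visited;
}

console.log(probablyVisited('http://www.niallkennedy.com/'));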