Although we try to show our users all the information they need to know about a course, sometimes they still want to take a look at the course provider’s own website. That’s why we include a link to the provider’s webpage on our own training pages. We do, of course, track these clicks: we consider them conversions, and we get paid per click in addition to getting paid per lead. But how do you reliably track these clicks?
Solution: track clicks on the server
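The idea can be sketched roughly like this: instead of linking straight to the provider, outbound links point at an endpoint on our own server, which records the click and then redirects the visitor with an HTTP 302. This is a minimal illustration, not our actual implementation; the function name, the in-memory click list, and the course-to-URL mapping are all hypothetical stand-ins (a real setup would write to a database inside a web framework's request handler).

```python
# Hypothetical sketch of server-side click tracking: outbound links on
# our pages point at something like /out?course=<id> instead of the
# provider URL. The handler records the click, then redirects (302).

clicks = []  # illustrative; in production this would be a database table

PROVIDER_URLS = {  # hypothetical mapping from course id to provider page
    "python-101": "https://example-provider.com/python-101",
}

def handle_outbound_click(course_id, ip, user_agent):
    """Record the click server-side, then return (status, redirect_url)."""
    target = PROVIDER_URLS.get(course_id)
    if target is None:
        return (404, None)  # unknown course: nothing to track or redirect to
    clicks.append({"course": course_id, "ip": ip, "ua": user_agent})
    return (302, target)

status, location = handle_outbound_click(
    "python-101", "203.0.113.7", "Mozilla/5.0"
)
```

Because the click is recorded on the server before the redirect is issued, it cannot be missed by ad blockers or disabled JavaScript, which is the main appeal of this approach.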
A new problem: crawlers
It’s not difficult to filter these IPs, but they change faster than you can block them. Of course we do block them and correct our statistics, so that our clients don’t pay for fake clicks. But this is obviously neither an elegant nor a scalable solution.
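The filtering itself is the easy part, as a rough sketch shows. The user-agent fragments and the blocked network below are purely illustrative, and that is precisely the weakness: such lists go stale as quickly as you can maintain them.

```python
import ipaddress

# Hypothetical crawler filter: treat a click as bot traffic when the
# user agent looks like a known crawler, or when the IP falls inside a
# blocklisted range. Both lists here are illustrative examples; keeping
# real lists current is the maintenance problem described above.

BOT_UA_FRAGMENTS = ("bot", "crawler", "spider")
BLOCKED_NETWORKS = [ipaddress.ip_network("198.51.100.0/24")]

def is_probable_bot(ip, user_agent):
    """Return True when the click should be excluded from statistics."""
    ua = (user_agent or "").lower()
    if any(fragment in ua for fragment in BOT_UA_FRAGMENTS):
        return True
    address = ipaddress.ip_address(ip)
    return any(address in network for network in BLOCKED_NETWORKS)
```

Writing this function once is trivial; keeping `BLOCKED_NETWORKS` accurate week after week, as crawlers rotate through fresh IP ranges, is the part that doesn’t scale.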
Is there a silver bullet?
It seems like there are three main approaches to this problem:
- use a server-side approach and spend your days battling false positives like there’s no tomorrow.
Oh, and we’re not even considering one of those ugly “Thanks for clicking, please wait while we redirect you to the other website and show you some more advertising” redirect pages…
We’d like to hear other people’s experiences: how do you track external clicks and how do you cope with these problems? Is there a silver bullet for external click tracking?