Click popularity sorting algorithms are based on the premise that by tracking how many users click on a link, and how long they stay on a website, the search engine lets users, rather than the engine itself or web directory editors, organize the search results. This is done by measuring two things: click popularity and stickiness.
Click popularity algorithms are among the more effective ways of surfacing relevant search results. However, they are vulnerable to manipulation by click-bots, which attempt to artificially boost click-through rates.
- Click popularity is calculated by counting the number of clicks each website receives from a search engine's results page. The theory is that the more often a result is clicked, the more popular the website must be. The problem is that a website operator can write a misleading description that induces click-throughs to an irrelevant website.
- Stickiness measurement is sound in theory, but many environmental variables can skew the results. It works by recording the amount of time a user spends at each website they click through to from a search engine result. This time is calculated as the interval that elapses between the user's successive clicks on the results page. The premise is that a user will click the first result and either spend time reading a relevant web page, or click the back button and look at the next result. The longer a user spends on a page, the more relevant it presumably is. This measurement goes a long way toward fixing the "spoofing" problem with click popularity results. It does, of course, raise issues around search abandonment: how long until the click no longer counts? What about a click that happens just before the user's lunch break?
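The click popularity measurement described above amounts to a per-URL tally over a click log. As a minimal sketch (the function and log format here are illustrative assumptions, not any engine's actual API), it might look like:

```python
from collections import defaultdict

def click_popularity(click_log):
    """Tally clicks per result URL from a (query, url) click log.

    click_log: iterable of (query, url) pairs, one per recorded click.
    Returns a dict mapping url -> click count, usable as a ranking signal.
    """
    counts = defaultdict(int)
    for _query, url in click_log:
        counts[url] += 1
    return dict(counts)

log = [
    ("widgets", "a.example"),
    ("widgets", "b.example"),
    ("widgets", "a.example"),
]
print(click_popularity(log))  # a.example counted twice, b.example once
```

Note that this counter is exactly what a click-bot exploits: repeated automated clicks on the same URL inflate its count with no notion of whether the visit was genuine.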
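Stickiness, as described above, can be sketched the same way: dwell time on each result is estimated as the gap between successive clicks on the results page, with very long gaps discarded as abandonment. This is a hypothetical illustration (the names, timestamp format, and the cutoff value are assumptions), not a production implementation:

```python
def dwell_times(clicks, abandonment_cutoff=600):
    """Estimate per-URL dwell time from successive result-page clicks.

    clicks: list of (timestamp_seconds, url) pairs in click order.
    The dwell time for each URL is the gap until the next click; the
    last click has no following click, so it yields no measurement.
    Gaps longer than abandonment_cutoff are dropped as abandonment
    (e.g. the user left for lunch), which addresses the skew noted above.
    """
    times = {}
    for (t0, url), (t1, _next_url) in zip(clicks, clicks[1:]):
        gap = t1 - t0
        if gap <= abandonment_cutoff:
            times[url] = gap
    return times

session = [(0, "a.example"), (45, "b.example"), (50, "c.example")]
print(dwell_times(session))  # {'a.example': 45, 'b.example': 5}
```

The cutoff is the crux of the abandonment question raised above: any fixed threshold trades off discarding genuine long reads against counting walk-away time as relevance.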