A Test To Trick Search Engines Convinces A Credit Union To Buy More Domains

According to a story from the AP, a new study by security researcher Jim Stickley shows how even search engines can be tricked into ranking phishing sites.

“Stickley created a Web site purporting to belong to the Credit Union of Southern California, a real business that agreed to be part of the experiment.”

“His phony site got a No. 2 ranking on Yahoo and landed in the top slot on Bing, ahead of even the credit union’s real site.”

“The experiment convinced Credit Union of Southern California that it should protect itself by being more aggressive about buying domain names similar to its own.”

“Domains generally cost a few hundred dollars to a few thousand dollars each — a pittance compared with a financial institution’s potential liability or loss of goodwill if its customers are ripped off by a fake site.”

Interesting to note that Google didn’t fall for the fake site, never ranking it higher than the sixth results page.

The credit union just realized something we have been saying since we started blogging.

Companies need to be proactive and secure likely typos and variations while they can, for the cost of registration fees, rather than go through the much costlier process of a WIPO or UDRP proceeding, or pay the public-relations cost of having a scam site trade on their good name.
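As a rough illustration of what “typos and variations” means in practice, here is a minimal sketch that generates common typo variants of a brand name so they can be checked and registered defensively. The `cusocal` label is a hypothetical example, not the credit union’s actual domain, and real typosquatting tools cover many more mutation classes (keyboard-adjacency swaps, homoglyphs, alternate TLDs) than this sketch does.

```python
def typo_variants(name, tld="com"):
    """Return a set of common typo/variation domains for a brand name."""
    variants = set()
    # Character omissions: "cusocal" -> "usocal", "csocal", ...
    for i in range(len(name)):
        variants.add(name[:i] + name[i + 1:])
    # Adjacent-character transpositions: "cusocal" -> "ucsocal", ...
    for i in range(len(name) - 1):
        variants.add(name[:i] + name[i + 1] + name[i] + name[i + 2:])
    # Doubled characters: "cusocal" -> "ccusocal", "cuusocal", ...
    for i in range(len(name)):
        variants.add(name[:i] + name[i] + name[i:])
    # Hyphenated split points: "cusocal" -> "cu-socal", ...
    for i in range(1, len(name)):
        variants.add(name[:i] + "-" + name[i:])
    variants.discard(name)  # drop the original spelling itself
    return {f"{v}.{tld}" for v in variants}

# Example: list a few candidate domains for a hypothetical brand name.
print(sorted(typo_variants("cusocal"))[:5])
```

Even this crude pass yields a couple dozen candidates for a seven-letter name, which is why bulk defensive registration is cheap relative to a dispute proceeding.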

Comments

  1. says

    @Erica,
The test ran for a year and a half, and clearly Google would rank a 1.5-year-old domain on page one. My blog is around two years old and ranks all over the place in Google, including many #1 and page-one spots.

I often see domains/sites rank well on Yahoo or Bing and poorly on Google, and the reverse: many sites rank well on Google and poorly on Bing and Yahoo.

I personally put more of my focus on ranking well in Google due to its larger share of the search market.

  2. Ngoc says

    Google’s algorithm is pretty simple.
    Yahoo and Bing haven’t figured it out….

It’s called “human bots”: basically, the Google bot runs searches/queries every so often and compiles the data. The human bot goes in, takes a look at sites, sees things, writes comments, and tells the bots “if this happens, do that,” etc., which is a way to perfect the system. Every so often, the bot goes back to the human bot for “approval” when a situation comes up.

Which is why Google is so “precise,” unlike Yahoo and Bing, which haven’t figured that out. It’s called process refinement: combining humans and bots in various ways, repeated indefinitely, to get the best results.

Come on, Microsoft and Yahoo: with billions of dollars, you should actually hire teams of people to be human bots.

Join the Discussion