Competition in search marketing can be tough. No matter how many businesses, products, or services are relevant to a specific keyword, there is only one top position, and unless your site holds it, you miss out on the hefty share of search traffic that keyword generates. The lower a result is displayed, the less attention it gets.
Even if you are in the “business” of black hat SEO and can use whatever dirty tricks you like, you still can’t guarantee the top position for the most popular keywords, since many established reputable sites and other black hats are already competing for them. But if you can’t always get the top position, you can still try to make your results look more attractive than the rest and increase their click-through rate, right? Right! And this post is about one such trick.
This is the second (more techie) part in the series of posts about a new wave of the Google Image poisoning attack. This part will heavily refer to the detailed description of the attack that I made back in May. Most aspects of that description still hold, so here I will only cover what has changed. If you want the complete picture, I suggest that you read the original description first.
After May 18th, I noticed that doorway pages no longer redirected me anywhere when I clicked on poisoned search results: neither to bad sites nor to the home pages of compromised sites. Instead, they displayed only the spammy content generated for search engine crawlers.
That was strange. That could never happen if the old algorithm was still in use.
Then I checked the cache directories (./.log/compromiseddomain.com/) and found new maintenance files there: don.txt and xml.txt. The don.txt file contained the HTML template for spammy pages and replaced the shab100500.txt file used by the original algorithm. The xml.txt file contained the following string: bG92ZS1ibG9nY29tLm5ldA==, which base64-decodes to “love-blogcom.net”. It was clearly a more secure replacement for xmlrpc.txt, which stored the domain name of the remote malicious server in plain text.
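The obfuscation here is trivial; decoding the string from xml.txt takes one line. A minimal sketch in Python (the encoded value is the one quoted above):

```python
import base64

# The xml.txt maintenance file held a single base64-encoded string
# instead of the plain-text domain name used by the old xmlrpc.txt.
encoded = "bG92ZS1ibG9nY29tLm5ldA=="
domain = base64.b64decode(encoded).decode("ascii")
print(domain)  # love-blogcom.net
```

Base64 is encoding, not encryption, so this only hides the malicious domain from casual inspection and naive string-matching scanners, not from anyone who actually looks at the file.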
A few days later, the xml.txt file was replaced by xml.cgi, which was a clever step, since .cgi files produce server errors when you try to open them in directories that aren’t configured to execute CGI scripts.
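To illustrate why that works: on a typical Apache setup, a directory serves .cgi files as scripts only when CGI handling has been explicitly enabled for it, for example with something like the following (the path is illustrative, not taken from a compromised site):

```apache
# Without a section like this covering the directory, requesting
# xml.cgi typically yields a 500 Internal Server Error instead of
# revealing the file's contents.
<Directory "/var/www/example/somedir">
    Options +ExecCGI
    AddHandler cgi-script .cgi
</Directory>
```

Since the attackers’ cache directories have no such configuration, anyone who tries to fetch xml.cgi directly gets an error page rather than the encoded domain name.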
So I knew the doorway script had been updated, but I couldn’t understand why the doorways exhibited no malicious behavior when I clicked on hijacked image search results. That didn’t make much sense. What was the purpose of showing those unintelligible spammy pages without trying to monetize the traffic? The only plausible idea was that the crooks were playing a “long game”: give the new pages time to rank well without the risk of being flagged for cloaking or malicious content, and once many pages reach prominent positions in search results, start redirecting web searchers to bad sites. That was my working hypothesis until I got the source code of the new doorway script. In reality, crooks don’t play “long games” when they can monetize right away: the new doorway pages did redirect to bad sites, but my virtual environment wasn’t properly configured to trigger the redirects.
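The behavior described above, serving spam to crawlers while redirecting only visitors who arrive via image search, is a standard cloaking pattern. As a rough illustration (this is not the actual doorway code, which is dissected in part II; all names and the redirect target are hypothetical), the decision logic might look like this:

```python
# Illustrative sketch of user-agent/referrer cloaking, NOT the real
# doorway script. BAD_SITE is a placeholder destination.

BAD_SITE = "http://malicious.example/"

def handle_request(user_agent: str, referrer: str) -> dict:
    """Decide what a cloaking doorway page returns for a given visitor."""
    ua = user_agent.lower()
    if "googlebot" in ua:
        # Search engine crawler: serve the keyword-stuffed spam page
        # so the doorway keeps ranking.
        return {"action": "serve_spam_page"}
    if "images.google." in referrer or "/imgres" in referrer:
        # Human visitor arriving from Google Image results: monetize
        # the click with a redirect to the bad site.
        return {"action": "redirect", "location": BAD_SITE}
    # Anyone else (direct visits, researchers clicking around in a
    # bare test environment) just sees the spam page.
    return {"action": "serve_spam_page"}
```

Logic like this would also explain the puzzle above: a test environment that doesn’t send a Google Images referrer never triggers the redirect and sees only the spammy pages.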
In May, I wrote a big article about my investigation of a massive Google Image poisoning attack. A quick recap: cybercriminals created millions of doorway pages on tens of thousands of compromised websites. Those pages exploited a flaw in the Google Image search algorithm that made it possible for pages with hot-linked images to hijack the search results of the websites the images actually belonged to. The attack scheme was very efficient, and hundreds of thousands (if not millions) of people clicked on poisoned image search results every day.
Not only did I publish the results of my investigation on my blog, I also shared a great deal of the gathered information (lists of compromised sites, algorithms, etc.) with Google and antivirus vendors. I hope this made some difference, as I started observing changes literally the next day after the article was published.
In this two-part series of posts, I will talk about what has changed since then: specifically, how Google addressed the problem (part I) and how cybercriminals changed the attack scheme (part II).