A few days ago I investigated a hack where the following script was injected into web pages:
<sc ript src="hxxp://www .copytech .lu/js/java.js"></script>
The script appeared both at the very top of the HTML code and in the middle of the page. It was a WordPress site, so I suggested checking index.php and the theme files for the malicious code.
The topmost script was indeed in the theme’s index.php file. But the theme files didn’t contain the second script, the one I found in the middle of the pages’ HTML code.
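One quick way to locate such injections is to scan theme files for externally sourced script tags. Here is a minimal sketch (the file-extension list and regular expression are illustrative assumptions, and the results still need manual review, since legitimate themes also load external scripts):

```python
import os
import re

# Externally sourced <script> tags are a common sign of injection.
SCRIPT_SRC = re.compile(r'<script[^>]+src\s*=\s*["\']https?://', re.IGNORECASE)

def find_injected_scripts(root):
    """Walk a directory tree and report (path, line number, line)
    for every line containing an external script tag."""
    hits = []
    for dirpath, _dirnames, filenames in os.walk(root):
        for name in filenames:
            if not name.endswith((".php", ".html", ".js")):
                continue
            path = os.path.join(dirpath, name)
            try:
                with open(path, encoding="utf-8", errors="ignore") as f:
                    for lineno, line in enumerate(f, 1):
                        if SCRIPT_SRC.search(line):
                            hits.append((path, lineno, line.strip()))
            except OSError:
                pass  # unreadable file; skip
    return hits
```

Pointing this at wp-content/themes/ (and at the site root’s index.php) would have surfaced the injected tag in both locations.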
A few weeks ago I published an article about an attack that hosted malware on a fast flux network of infected PCs and used a clever algorithm based on Twitter trends to generate four new hard-to-predict domain names every day.
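The attackers’ actual algorithm isn’t reproduced here, but the general idea of a trend-seeded domain generation algorithm (DGA) can be sketched. Everything below (the hashing scheme, the domain label length, the TLD) is an illustrative assumption; only the concept — four hard-to-predict domains per day derived from public data that both the malware and its operators can see — comes from the post:

```python
import hashlib
from datetime import date

def generate_domains(trends, today=None, count=4, tld=".com"):
    """Illustrative trend-seeded DGA, NOT the attackers' real algorithm.

    Both the malware and its operators observe the same public trends
    and the same date, so both can compute the same daily domain list
    without any direct communication.
    """
    today = today or date.today()
    domains = []
    for i in range(count):
        seed = f"{today.isoformat()}|{trends[i % len(trends)]}|{i}"
        digest = hashlib.md5(seed.encode()).hexdigest()
        # Use a slice of the hash as a hard-to-predict domain label.
        domains.append(digest[:12] + tld)
    return domains
```

Because the seed includes the date and the day’s trends, defenders can only predict tomorrow’s domains after tomorrow’s trends exist, which is what makes such schemes hard to block in advance.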
Shortly after that I was contacted by foks, who shared some interesting information. He conducted his own investigation and found out how hackers injected those scripts into legitimate web pages. He also found a new (buggy) version of the malicious script.
After a series of posts about Google Image poisoning campaigns that used hot-linked images as the main trick to get top positions in search results, I’d like to describe a different Google Image poisoning attack that affects WordPress blogs and uses self-hosted images.
This is the second (more techie) part in the series of posts about a new wave of the Google Image poisoning attack. This part will heavily refer to the detailed description of the attack that I made back in May. Most aspects of that description still hold, so I will only cover the changes here. If you want the complete picture, I suggest that you read the original description first.
After May 18th, I noticed that the doorway pages no longer redirected me anywhere when I clicked on poisoned search results: neither to bad sites nor to the home pages of compromised sites. Instead, they displayed the spammy content generated for search engine crawlers only.
That was strange. That could never happen if the old algorithm was still in use.
Then I checked the cache directories (./.log/compromiseddomain.com/) and found new maintenance files there: don.txt and xml.txt. The don.txt file contained the HTML template for spammy pages and replaced the shab100500.txt file used by the original algorithm. The xml.txt file contained the following string: bG92ZS1ibG9nY29tLm5ldA==, which decoded (base64) to “love-blogcom.net”. It was clearly a more secure replacement for xmlrpc.txt, which had stored the domain name of the remote malicious server in plain text.
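The decoding step is trivial to reproduce; a quick check in Python:

```python
import base64

# Contents of the xml.txt maintenance file found in the cache directory.
encoded = "bG92ZS1ibG9nY29tLm5ldA=="
decoded = base64.b64decode(encoded).decode("ascii")
print(decoded)  # love-blogcom.net
```

Base64 is not encryption, of course; it merely keeps the server name from showing up in naive plain-text searches of the file system.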
A few days later, the xml.txt file was replaced by xml.cgi, which was a clever step: .cgi files produce server errors when you try to open them in directories that aren’t configured to execute CGI scripts, so the file’s contents can’t be casually viewed in a browser.
So I knew that the doorway script had been updated, but I couldn’t understand why the doorways exhibited no malicious behavior when I clicked on hijacked image search results. That didn’t make much sense. What was the purpose of showing those unintelligible spammy pages without trying to monetize the traffic? The only plausible idea was that the crooks were playing a “long game”: give the new pages time to rank well without the risk of being flagged for cloaking or malicious content, and once many pages reach prominent positions in search results, start redirecting web searchers to bad sites. That was my working hypothesis until I got the source code of the new doorway script. In reality, crooks don’t play “long games” when they can monetize right away: the new doorway pages did redirect to bad sites, but my virtual environment wasn’t properly configured to trigger the redirects.
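The behavior described above is classic cloaking: the doorway inspects each request and serves different responses to crawlers, searchers, and everyone else. A minimal sketch of such decision logic, assuming referrer-based targeting (the header values, thresholds, and return format are illustrative assumptions, not the attackers’ actual code):

```python
def handle_request(user_agent, referer, spam_page, redirect_url):
    """Illustrative doorway cloaking logic (NOT the real doorway script).

    - Search engine crawlers get the keyword-stuffed spam page, so it
      gets indexed and ranks.
    - Visitors arriving from search results get redirected to the bad
      site (monetization).
    - Everyone else gets a harmless page to avoid drawing attention.
    """
    ua = (user_agent or "").lower()
    ref = (referer or "").lower()

    if "googlebot" in ua or "bingbot" in ua:
        return ("200 OK", spam_page)        # feed the crawler
    if "google." in ref and ("imgres" in ref or "/search" in ref):
        return ("302 Found", redirect_url)  # monetize the searcher
    return ("200 OK", "<html><body>Nothing here.</body></html>")
```

This also shows why a test environment can silently fail to reproduce the attack: a browser that doesn’t send a realistic Referer header falls through to the harmless branch, exactly the kind of misconfiguration I ran into.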
Recently, I helped a company remediate security problems on their four websites. It was quite a typical iframe injection attack. FTP logs clearly showed how the attackers used FTP to infect legitimate files on the server. So the question was: how could the FTP credentials have been stolen?
Of course, I pointed them to my blog post where I described how malware stole passwords and all the login details saved in the 10 most popular FTP clients (e.g. FileZilla, CuteFTP, Total Commander, etc.). Indeed, a recent malware scan revealed two suspicious items on their computer, one of which was identified as “Spyware.Passwords”. The only problem was that the site owner said they didn’t use those FTP clients and kept all passwords in KeePass. Moreover, they managed 50 websites, and only four of them got infected.
The answer became quite clear when they found an old copy of SmartFTP on their computer. Five FTP accounts (including passwords) had been saved there, and four of them were the four hacked sites! So what about the fifth? No doubt all five sets of credentials had been stolen, but the fifth site wasn’t hacked because its password had been changed after the last use of SmartFTP, so the stolen password was no longer valid at the time of the attack. This also explains why the remaining 45 sites were not hacked: their passwords were never stolen.
Not only should you avoid saving passwords in your current FTP client, but you should also make sure they aren’t saved in old programs that may still reside on your computer.
A few days ago, I blogged about the hacker attack that used the BlackHole toolkit and injected “createRSS” and “defs_colors” malicious scripts into legitimate websites. I’ve worked with a few webmasters of infected sites since then and now have some important additional information that I want to share here.