Watch out for botnet-driven Google Dorks, the next wave of automated cyber attacks

Botnets have been taking down web sites for years by overwhelming them with traffic. But now the swarms of compromised computers are being unleashed for the first time on an old reconnaissance trick: Google Dorks.

Google Dorks have been around for a while as the name for a reconnaissance technique in which hackers craft targeted search queries to find unsecured pages, files, and links that can be used to break into a company’s web site. A report being released today by Imperva warns that the combination of highly automated botnets and Google Dorks gives hackers a new vector for breaking into companies on a massive scale.
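
To make that concrete, here is a rough sketch of what dork-style queries look like, built from Google’s public search operators such as inurl:, intitle:, filetype: and site:. The specific strings below are hypothetical examples of the genre, not queries taken from the Imperva report or any real attack.

```python
# Illustrative dork-style queries built from Google's public search operators
# (inurl:, intitle:, filetype:, site:). The strings below are hypothetical
# examples of the genre, not queries taken from the Imperva report.
EXAMPLE_DORKS = [
    'intitle:"index of" "backup"',     # exposed directory listings
    'inurl:"/admin/login.php"',        # forgotten admin login pages
    'filetype:sql "INSERT INTO"',      # database dumps left in a web root
    'filetype:log inurl:error_log',    # server error logs
]

def scope_to_site(dork: str, domain: str) -> str:
    """Restrict a dork to a single domain using the site: operator."""
    return f'site:{domain} {dork}'

if __name__ == "__main__":
    for dork in EXAMPLE_DORKS:
        print(scope_to_site(dork, "example.com"))
```

Scoping a dork with the site: operator is also how a defender can check what the search engines have already indexed about a single domain.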

Hackers sometimes run these searches manually, but that’s like looking for a needle in a haystack. Now they have figured out how to automate the scanning. They do it with botnets, farms of compromised computers hijacked without their owners’ knowledge, which churn through search queries tied to a target company’s web site. The botnets and Google Dorks uncover weaknesses, and the hackers then launch conventional attacks against them. The result can be contaminated web sites, data theft, data modification, or compromised company servers.

The hackers can efficiently use popular search engines as an attack platform for retrieving sensitive data. Botnets automate the process and evade the anti-automation defenses that search engine providers commonly deploy. By using bots distributed throughout the world, the hackers fool the search engines into thinking the searches come from real people, not a herd of machines under a single attacker’s control.

“This is what the hackers do to conduct cyber reconnaissance,” said Rob Rachwald, a senior security strategist at security firm Imperva, in an interview. “This used to be a manual process, but now it’s automated.”

With the automation, attackers can get a filtered list of potentially vulnerable web sites in a very short time. Mining search results can expose neglected sensitive files and folders, and unearth network logs and unprotected network-attached devices.

With botnets, the hackers can run 80,000 queries in a day, eluding detection and efficiently fishing for attack targets. Imperva’s Application Defense Center observed a particular botnet in action during May and June and witnessed its use against a well-known search engine provider. By tracking this botnet, Imperva saw how attackers lay the groundwork to simplify and automate the later stages of an attack campaign against web applications.

“We found out because we were observing,” Rachwald said.

Today, search engines detect automated search routines by tracking the searcher’s internet protocol, or IP, address. If the same address is used over and over again for slightly different searches, the search engines block it. But botnets consist of computers scattered around the world, all using different IP addresses. Hackers can hide their identities behind these botnets, which are available for rent on the underground market.
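
As a rough sketch of the per-IP throttling described above (a generic illustration, not any search engine’s actual implementation), the logic comes down to counting recent queries per source address; a botnet sidesteps it because every individual bot stays under the threshold.

```python
import time
from collections import defaultdict, deque

# Minimal sketch of per-IP query throttling, the detection style described
# above. Generic illustration only, not any search engine's actual code.
WINDOW_SECONDS = 60
MAX_QUERIES_PER_WINDOW = 20          # hypothetical threshold

recent_queries = defaultdict(deque)  # ip -> timestamps of recent queries

def should_block(ip: str) -> bool:
    """Return True once an IP exceeds the per-window query budget."""
    now = time.time()
    window = recent_queries[ip]
    while window and now - window[0] > WINDOW_SECONDS:
        window.popleft()
    window.append(now)
    return len(window) > MAX_QUERIES_PER_WINDOW

# One scraper hammering from a single IP trips the limit within minutes;
# a botnet spreading the same 80,000 daily queries across thousands of IPs
# stays under the threshold at every address and sails through.
```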

The botnets can be paired with a distributed search tool to find telltale resource names and specific error messages that say more than they should. Hackers trade dorks in forums, and lists of them are posted on various web sites. Dorks and exploits go hand in hand.

In the attack that Imperva observed, the attackers used dorks that matched vulnerable web applications, along with search operators tailored to a specific search engine. For each unique search query, the botnet examined hundreds of returned results. All told, the queries topped 550,000, including 81,000 in one day, all coming from a single botnet.

The attackers targeted e-commerce sites and content management systems, and the more success they had, the more they refined their search terms. Imperva saw 4,719 different variations of dorks used in the attacks.

Fortunately, there are steps Google, Bing and Yahoo can take to protect against these attacks. Search engines are in a unique position to identify botnets that abuse their services and, in the process, learn more about the attackers. They can flag unusual queries, such as those containing terms from publicly available dork databases or queries hunting for sensitive files, and use those flags to build up blacklists of IP addresses. Google can also force suspicious searchers to complete a CAPTCHA, typing out the distorted characters shown in an image, to prove they are human.
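
A minimal sketch of that kind of query-side screening might look like the following, assuming a hypothetical blocklist of dork patterns drawn from public dork databases; the pattern list, reputation flag, and decision thresholds here are invented for illustration, not taken from any search engine’s real system.

```python
import re

# Hypothetical patterns resembling entries in public dork databases;
# a real system would load a much larger, curated list.
DORK_PATTERNS = [
    re.compile(r'intitle:"index of"', re.I),
    re.compile(r'filetype:(sql|log|env|bak)\b', re.I),
    re.compile(r'inurl:["\']?/?(admin|backup|phpmyadmin)', re.I),
]

def looks_like_dork(query: str) -> bool:
    """Flag queries that match known dork-style patterns."""
    return any(p.search(query) for p in DORK_PATTERNS)

def handle_query(query: str, bad_ip_reputation: bool) -> str:
    """Decide whether to serve a search, challenge it, or block it."""
    if looks_like_dork(query) and bad_ip_reputation:
        return "block"     # known-bad source making dork-style queries
    if looks_like_dork(query):
        return "captcha"   # challenge before serving results
    return "serve"
```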

Rachwald said that web site owners should attack themselves using common dork search terms to find out whether they are vulnerable. They should also mask their links so they are harder to guess. Web application firewalls should be able to detect and block attempts at finding application vulnerabilities, and sites can use reputation controls to block traffic from known malicious sources.
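
In the same spirit as Rachwald’s advice, a site owner could start with a quick self-check like the sketch below, which probes a handful of commonly dorked paths on a domain you own; the path list is illustrative, and a fuller assessment would also run actual dork queries scoped to your site with the site: operator.

```python
import urllib.request

# Hypothetical sample of paths that commonly turn up in dork results.
# Run this only against a domain you own or are authorized to test.
COMMON_EXPOSED_PATHS = ["/.env", "/backup.sql", "/phpinfo.php", "/admin/", "/error_log"]

def check_exposure(domain: str) -> list:
    """Return the probed paths that respond with HTTP 200."""
    exposed = []
    for path in COMMON_EXPOSED_PATHS:
        url = f"https://{domain}{path}"
        try:
            with urllib.request.urlopen(url, timeout=5) as resp:
                if resp.status == 200:
                    exposed.append(path)
        except OSError:
            pass  # 404s, timeouts and refused connections are the good outcome
    return exposed

if __name__ == "__main__":
    for path in check_exposure("example.com"):
        print(f"Publicly reachable: {path}")
```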

