
The Rise Of Social Media Botnets

In the social Internet, building a legion of interconnected bots -- all accessible from a single computer -- is quicker and easier than ever before.

The Internet economy is a fascinating development of our time -- whatever you’re looking for, there’s sure to be an e-commerce marketplace gushing with buyers and sellers. The Internet has done to markets what social networks have done to global interactions: created an open, democratized venue with outrageously low barriers to entry. If you have an Internet connection, like nearly half of the earth’s population, you can purchase a ShamWow, pay someone to stand in line for you, download Adobe Photoshop, or even buy a social botnet.

Anatomy of a social botnet
Cyber criminals use social media botnets to disseminate malicious links, collect intelligence on high-profile targets, and spread influence. Unlike traditional botnets, each social bot represents an automated social media account rather than an infected computer. As a result, building a legion of interconnected bots -- all controlled from a single computer -- is quicker and easier than ever before.

The person commanding the botnet, also known as a bot herder, generally has two options for building it. The first is fairly ad hoc: simply register as many accounts as possible with a program that lets the herder post via the accounts as if logged in to each. The second approach is to create the botnet via a registered network application: the attacker makes a phony app, links a legion of accounts to it, and changes each account's settings to allow the app to post on its behalf. Via the app, the herder then has programmatic access to the full army of profiles. This is essentially how ISIS built its Dawn of Glad Tidings application, which acts as a centralized hub that posts en masse on behalf of all its users.
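The app-based approach can be sketched in a few lines. This is an illustrative mock only: the `SocialAppClient` class and its methods are hypothetical stand-ins for a real social network's application API, and no network calls are made.

```python
# Illustrative sketch of a bot herder's controller script. SocialAppClient is
# a hypothetical stand-in for a registered network application that accounts
# have authorized to post on their behalf; nothing here touches a real API.

class SocialAppClient:
    """Models a registered app with post-on-behalf permissions."""

    def __init__(self, app_token):
        self.app_token = app_token
        self.linked_accounts = []   # access tokens of accounts linked to the app

    def link_account(self, account_token):
        # Each linked account has granted the app permission to post for it.
        self.linked_accounts.append(account_token)

    def post_as_all(self, message):
        # A single call from the herder's machine fans out across every bot.
        return [(token, message) for token in self.linked_accounts]


client = SocialAppClient(app_token="app-123")       # hypothetical token
for i in range(5):
    client.link_account(f"bot-account-{i}")

posts = client.post_as_all("Breaking news! http://example.com/payload")
print(len(posts))  # one post per linked bot account
```

The key point the sketch captures is the asymmetry: one command from the herder produces one post per linked account, which is what makes a social botnet so cheap to operate.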

Types of social botnet attacks
With the rise of social media, a social botnet can be used to amplify the scope of an attack or automate the dissemination of malicious links. A few types of common attacks include:

Hashtag hijacking. Hashtag hijacking involves leveraging a hashtag to target a certain organization or group. By appropriating organization-specific hashtags, bots distribute spam or malicious links that subsequently appear in the organization's circles and news feeds, effectively focusing the attack on that group.
Trend-jacking/watering hole. Trend-jacking is similar to hashtag hijacking in that bots use the hashtags to direct their attack. Attackers pick the top trends of the day to disseminate the attack to as broad an audience as possible. In doing so, the attacker makes a “social watering hole” around the trend by planting the payload where the potential victims are interacting; think of a crocodile at the edge of a watering hole, letting the prey come to him. 
Spray and pray. Spray and pray involves posting as many links as possible, expecting to get only a click or two on each. These bots will often still intersperse odd or programmatically generated text-based posts, simply to fly under the social network’s Terms of Service radar. This tactic often leverages clickbait and is coupled with one of the above strategies. 
Retweet storm. Most social networks have an eye peeled for malicious activity. One clear indicator of malicious botnet activity is a post that is instantly reposted or retweeted by thousands of other bot accounts. The original posting account is generally flagged and banned, but the reposts and retweets remain. The parent account, known as the martyr bot, sacrifices itself to spread the attack.
Click/Like Farming. Bots are ideal for inflating follower counts, a seedy marketing tactic designed to make a page or conversation look more popular than it is.
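The trend-jacking and spray-and-pray tactics above can be combined in a simple post generator. This is a minimal sketch: the trending hashtags, payload URL, and filler phrases are all invented for the example.

```python
# Illustrative sketch of trend-jacking combined with spray and pray: attach a
# payload link to whatever hashtags are trending, varying the surrounding
# text to evade simple duplicate-content filters. All data here is made up.

import random

trending = ["#WorldCup", "#Oscars", "#BlackFriday"]   # hypothetical trends
payload = "http://example.com/malicious"              # hypothetical link
fillers = ["Can't believe this!", "You have to see this", "Wow..."]

def make_posts(trends, link, count_per_trend=3):
    posts = []
    for tag in trends:
        for _ in range(count_per_trend):
            # Interspersed filler text makes each post look distinct.
            posts.append(f"{random.choice(fillers)} {link} {tag}")
    return posts

storm = make_posts(trending, payload)
print(len(storm))  # 3 trends x 3 posts each = 9
```

Handing a list like `storm` to the fan-out mechanism described earlier is all it takes to plant the payload at a social watering hole.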

Monetizing a social botnet
Social botnets exist on a spectrum of maliciousness, but at their core all share one of a handful of motivations. On the more benign end of the spectrum is shady marketing: botnets are leveraged to inflate follower counts or disseminate links and ads. Paying a bot herder to repost or favorite an ad on social media can go a long way toward reaching a target audience.

Most botnets fall between the middle and top of the maliciousness spectrum. In the middle of the spectrum are the spam bots: fairly benign from a cyberattack standpoint but still a massive organizational risk if they hijack a company hashtag or target employees and customers. These bots post links to fake Viagra websites, pornography, or too-good-to-be-true diet pills, which can do serious damage to brand reputation if they go unchecked.

On the outright malicious top end of the spectrum are phishing and malware bot campaigns. Bot herders leverage botnets to distribute these links across social media. The lucrative part of the attack involves selling the phished information or the myriad ways malware is leveraged to extort money, be it data theft, ransomware, blackmail, or banking Trojans.

Unlike traditional botnets, social botnets are not as readily leveraged in DDoS attacks: bots can repost content, but they cannot flood an arbitrary IP address with requests. However, social botnets are leveraged as command-and-control channels that coordinate DDoS attacks by reposting instructions, including attack date/time, port numbers, domains, and target IPs.
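The command-channel idea can be sketched as a bot parsing an instruction post into an attack order. The `DDOS|` message format below is invented for the example; real botnets use obfuscated or encoded schemes.

```python
# Illustrative sketch of social media as a DDoS command channel: each bot
# polls a feed and parses instruction posts. The plain-text "DDOS|" format
# is a made-up example; real C&C traffic is obfuscated.

from datetime import datetime

def parse_command(post_text):
    """Parse a post like 'DDOS|2015-06-01T12:00|203.0.113.9|80' into a dict."""
    if not post_text.startswith("DDOS|"):
        return None                      # ordinary post, not a command
    _, when, target_ip, port = post_text.split("|")
    return {
        "start": datetime.fromisoformat(when),
        "target": target_ip,
        "port": int(port),
    }

cmd = parse_command("DDOS|2015-06-01T12:00|203.0.113.9|80")
print(cmd["target"], cmd["port"])
```

Because the instructions travel as ordinary-looking posts on a legitimate platform, they are hard to block at the network level, which is exactly what makes social channels attractive for coordination.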

Welcome to the botnet store. In cyber criminal marketplaces and hacker hubs, one of the most traded and highest-selling goods is credentials for a social botnet. Bot herders not only sell their social botnets outright, they also rent them. People will pay herders to access a botnet for a set amount of time or to control a certain number of bots. Consider a bot herder like the landlord of a massive apartment complex: the highest bidder gets access for a specified amount of time before the herder changes tenants.

An ancient Roman writer, Publilius Syrus, described the foundation of economics succinctly: “Everything is worth what the buyer will pay for it.” For the buyer, social botnets provide a tangible, lucrative value. For the bot herders, building and maintaining their botnets is a full time business.

Luckily for the herders, business is booming.

This article was originally published in Dark Reading.


A Match Made in Heaven: Fraud and Social Media

Since the days of Friendster and GeoCities, fraud has thrived on social media.

Social media is the fraudsters' playground: an unregulated, highly visible, easily exploitable platform that connects billions of people and serves a host of purposes in a hacker's repertoire. Many fraudulent accounts are mere satire or innocuous trolling, but others are created with far more devious intentions.

Even inexperienced cyber criminals can carry out low-tech attacks via social media by building convincing profiles and connecting to the right people. In a targeted attack, hackers connect with colleagues and friends of the target, a tactic called "gatekeeper friending," to appear more legitimate when they finally connect with the target itself.

In the unverified world of social media, fraudsters lay claim to whatever they like—that they work at the same organizations, have the same alma mater, or share all the same goals and interests. Never in the history of human communication has deceit been easier. With these elements in place, the hackers can request sensitive information or ask for money. If the target believes the account to be a coworker, relative, or love interest, these things are openly shared.

In an SEC Form 10 filing, Facebook estimates that nearly 15 million of its accounts are "undesirable." Even more are considered "false" accounts: nearly 100 million. According to Barracuda Labs, about 10% of Twitter accounts are similarly fraudulent. Expect these trends to grow. Fake accounts can be leveraged in more technical attacks as well, such as phishing or malware attacks; launching such a campaign from a well-connected, legitimate-looking profile increases its efficacy.

Imitating a brand is also particularly simple. A quick Google image search to get the company logo, and a hacker can set up a fake customer service representative account. Again, these can be low-tech, used to slander the company, or for more advanced ends, such as to spread malware links via targeted scams and attacks. These fraudulent accounts will often try to phish company employees into disclosing brand account credentials or sensitive company data. These attacks can be spread using company hashtags both to make the account seem more legitimate and to amplify the attack across the company’s social footprint.

Impersonations can also target the employees of an organization. These attacks often start with a senior executive impersonator account requesting sensitive information or account credentials from subordinates. Hackers can then use these credentials to gain access to the legitimate brand accounts and post anything they choose, from malicious links to slander and abuse.

Fake accounts have existed since the beginning of social media. A handful of examples from the past half-decade: In 2010, a Paramount Entertainment impersonator rattled off racist and inappropriate tweets. Last year, a Thai woman stole some $200,000 using a fake Furby Instagram account. Also in 2013, a fraudulent Southwest Airlines Facebook page boasted some 2,000 followers, and an Instagram scam promised VIP deals on American Airlines, JetBlue, Delta, United Airlines, and Emirates.

The app InstLike tricked over 100,000 users into letting it hijack their accounts and like random photos. In January, fake Twitter accounts disguised as market researchers connected to traders in the finance world and claimed several small companies were under investigation by the Department of Justice; the hackers rode the ensuing stock plunge.

One group historically prone to social media fraud is the military. Hackers launch "romance scams," in which fake profiles of servicemen abroad connect with loved ones at home, or even initiate online relationships. Once the targeted party believes they are communicating with a real person, the hacker will request money. One unnamed military official in particular has some 30 imposter Facebook accounts. More troubling are the nearly 100 fake Skype accounts in his name; Skype is the most popular means of communication between military personnel and loved ones at home, and thus the easiest vehicle for romance scams. Even the Russian social networking site VK has 75 different profiles under this same military official's name.

Most recently, the fake Jamie Dimon Twitter account took center stage in the news of fraudulent social media activity. It began benignly, posting tweets like, “We are excited to announce that our CEO James Dimon has joined Twitter. This account is managed by the Global Media Relations Department.” The account followed notable business leaders and tweeted several times throughout the day.

For organizations, the cost of social media fraud varies with the type and breadth of the attack. Customer scams have serious business implications further down the road, in the form of customer loyalty and support costs. Executive impersonations can result in brand reputation damage or stock manipulation. Businesses are beginning to understand the full scope of this problem: a third of users say they have been sent malware on social media, 24% of SMBs say they have been compromised via social media, and 72% of companies believe employees' use of social media poses a threat to their organization.

As long as social media exists, fraud will persist as a problem. It’s time for organizations to take the threat of social media very, very seriously.

This article was originally published in Security Week.