The social media giant Meta warned today that malware actors are increasingly spreading their attack infrastructure across multiple platforms, presumably to make it more difficult for individual tech companies to detect their malicious activity. The company added, though, that it views the shift in tactics as a sign that industry crackdowns are working, and it says it is launching additional resources and protections for business users with the goal of raising the bar for attackers even further.
On Facebook, Meta has added new controls for business accounts to manage, audit, and limit who can become an account administrator, who can add other administrators, and who can perform sensitive actions like accessing a line of credit. The goal is to make it more difficult for attackers to use some of their most common tactics. For example, bad actors may take over the account of an individual who is employed by or otherwise connected to a target company and then add that compromised account as an administrator on the company's business page.
Meta is also launching a step-by-step tool that walks businesses through flagging and removing malware from their devices, and it will even suggest third-party malware scanners. The company says it sees a recurring pattern in which users’ Facebook accounts are compromised, the owners regain control, and the accounts are then compromised again because the targets’ devices are still infected with malware or have been reinfected.
“This is an ecosystem challenge, and there’s a lot of adversary adaptation,” says Nathaniel Gleicher, Meta’s head of security policy. “What we’re seeing is adversaries working really hard, but defenders moving more systematically. We’re not just disrupting individual bad actors; there are a number of different ways that we are countering them and making it harder.”
The move to distribute malicious infrastructure across multiple platforms has advantages for attackers. They may run ads on a social network like Facebook that aren’t directly malicious but that link to a fake creator page or other niche profile. On that site, attackers post a special password along with a link to a file-sharing service like Dropbox or Mega. There, they host the malicious file, encrypted with the password from the previous page so that it is harder for companies to scan and flag. In this way, victims follow the breadcrumbs through a chain of legitimate-looking services, and no single site has a complete view of every step in the attack.
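To make the scanning problem concrete, here is a minimal Python sketch, not Meta’s or any hosting provider’s actual tooling, of how an automated scanner might handle an uploaded ZIP archive: it can see that an encrypted entry exists, but without the password posted on the attacker’s decoy page it cannot open the payload to inspect it. The file names and password below are hypothetical.

```python
import zipfile

def scan_archive(path: str, password: str | None = None) -> list[tuple[str, bool]]:
    """Inspect a ZIP upload the way an automated scanner might.

    Returns (filename, readable) pairs; entries protected by a password the
    scanner does not have come back as unreadable and cannot be analyzed.
    """
    results = []
    with zipfile.ZipFile(path) as archive:
        for info in archive.infolist():
            encrypted = bool(info.flag_bits & 0x1)  # bit 0 marks an encrypted entry
            if not encrypted:
                results.append((info.filename, True))
            elif password is None:
                # The hosting platform never sees the password posted on the
                # decoy page, so the payload stays opaque to its scanners.
                results.append((info.filename, False))
            else:
                try:
                    archive.read(info.filename, pwd=password.encode())
                    results.append((info.filename, True))
                except RuntimeError:  # missing or wrong password
                    results.append((info.filename, False))
    return results

# The platform, scanning blind:                    scan_archive("upload.zip")
# A victim with the password from the decoy page:  scan_archive("upload.zip", "s3cret")
```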
As public interest in generative AI chatbots like ChatGPT and Bard has ramped up in recent months, Meta also says it has seen attackers incorporating the topic into their malicious ads, claiming to offer access to these and other generative AI tools. Since March 2023, the company says, it has blocked more than 1,000 malicious links used in generative AI-themed lures so they can’t be shared on Facebook or other Meta platforms, and it has shared the URLs with other tech companies. It has also reported multiple browser extensions and mobile apps related to these malicious campaigns.