AI-generated YouTube videos are spreading info-stealing malware. Here's how
According to a report by cyber intelligence firm CloudSEK, YouTube has recently experienced a surge in videos that include harmful links to infostealers in their descriptions. Many of these videos use AI-generated personas to deceive viewers into trusting them. Since November 2022, there has been a 200-300% increase in content uploaded to the video hosting website that tricks viewers into installing well-known malware such as Vidar, RedLine, and Raccoon. The videos claim to be tutorials on how to download illicit copies of popular paid-for design software such as Adobe Photoshop, Autodesk 3ds Max, and AutoCAD.

The tutorial videos have grown increasingly sophisticated, evolving from simple screen recordings with audio walkthroughs to AI-generated footage of a realistic person guiding the viewer through the process. The goal is to appear more trustworthy and so deceive viewers into downloading malware. According to CloudSEK, AI-generated video is increasingly used for legitimate purposes such as education, recruitment, and promotion, but cybercriminals are also exploiting the technology for malicious ends.

Infostealers are a type of malware that infiltrates a user's system and steals personal and valuable information, including passwords and payment details. They are typically spread through malicious downloads and links, such as those found in video descriptions in this case. The stolen data is then uploaded to the attacker's server.

CloudSEK notes that YouTube, with its 2.5 billion monthly users, is a prime target for threat actors. To avoid detection by the platform's automated content review process, attackers employ various tactics to deceive the algorithm: using region-specific tags, adding fake comments to make videos appear legitimate, and flooding the platform with multiple videos to compensate for any that are removed or banned.
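As a rough illustration of how description links like these might be triaged, here is a minimal sketch in Python. The watch-list and the helper name are illustrative assumptions (the domains are drawn from the services the report says attackers favor), not a real detection product:

```python
import re
from urllib.parse import urlparse

# Hypothetical watch-list: link shorteners and file hosts that the
# report says attackers use to disguise infostealer download links.
SUSPICIOUS_DOMAINS = {"bit.ly", "mediafire.com"}

URL_PATTERN = re.compile(r"https?://\S+")

def flag_suspicious_links(description: str) -> list[str]:
    """Return URLs in a video description whose host is on the watch-list."""
    flagged = []
    for url in URL_PATTERN.findall(description):
        host = urlparse(url).netloc.lower()
        # Match the domain itself or any subdomain of it.
        if any(host == d or host.endswith("." + d) for d in SUSPICIOUS_DOMAINS):
            flagged.append(url)
    return flagged
```

For example, `flag_suspicious_links("Free Photoshop crack: https://bit.ly/abc123")` would return the bit.ly link for review, while ordinary links pass through untouched. A real pipeline would go further, for instance by expanding shortened links to their final destination before judging them.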
CloudSEK discovered that as many as 5-10 of these malicious videos are uploaded every hour. To optimize for search, attackers also use hidden links and random keywords in different languages to manipulate YouTube's recommendation algorithm. To conceal the malicious nature of the links, they frequently rely on link-shortening services like bit.ly and file hosting services such as MediaFire.

According to CloudSEK, relying solely on traditional string-based rules will not be enough to detect malware that uses dynamically generated or encrypted strings. Instead, the firm recommends that organizations adopt a more manual approach to threat detection, closely monitoring the tactics and techniques of threat actors to correctly identify potential threats.

CloudSEK also suggests running awareness campaigns that share simple advice, such as avoiding clicking on unknown links and securing accounts with multi-factor authentication, preferably via an authenticator app.
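To see why static string rules fall short, consider this toy sketch: a naive substring signature catches a plaintext marker, but misses the very same marker once the sample stores it XOR-encoded and only decodes it at runtime. All names and values here are illustrative, not taken from any real malware sample:

```python
# Toy illustration: a static string rule vs. a runtime-decoded string.
SIGNATURE = b"stealer.example/upload"  # hypothetical marker a rule scans for

def string_rule_matches(sample: bytes) -> bool:
    """Naive static detection: look for the signature in the raw bytes."""
    return SIGNATURE in sample

def xor_encode(data: bytes, key: int) -> bytes:
    """Single-byte XOR, a trivial obfuscation; applying it twice decodes."""
    return bytes(b ^ key for b in data)

plain_sample = b"...connect to stealer.example/upload..."
# The same logical string, but stored XOR-encoded on disk; the malware
# would decode it in memory just before use, so the rule never sees it.
obfuscated_sample = b"..." + xor_encode(SIGNATURE, 0x5A) + b"..."

print(string_rule_matches(plain_sample))       # True: plaintext is caught
print(string_rule_matches(obfuscated_sample))  # False: encoding evades the rule
```

This is the gap CloudSEK's recommendation addresses: rather than matching fixed byte patterns, defenders track the behaviors, tactics, and techniques of threat actors, which survive trivial string obfuscation.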