6 Feb 2024
Generative AI is the Pride of Cybercrime Services
Highlights:
- Generative AI as a Cybercrime Tool: Cybercriminals increasingly use generative AI for sophisticated cybercrimes, including social media impersonation, spam campaigns, AI-based deepfake services, and KYC verification fraud.
- AI-Powered Black-Hat Platforms: AI-driven platforms for creating and managing fake social media accounts are on the rise, offering services that automate content generation and account activity for illicit purposes.
- Evolution of Spam, Deepfake, and KYC Fraud Services: AI is now integrated into spam services to bypass security controls, into deepfake voice, face, and lip-sync services, and into KYC verification services that create fake identification documents, signifying a new level of sophistication in cybercrime.
Over the past year, generative AI and ChatGPT have continued to gain prominence in the ongoing struggle between attackers and defenders.
While many industries continue to explore the promise of AI to augment their capabilities, cybercriminals have also seen the powerful potential of AI in exploiting vulnerabilities and creating new attack vectors.
At the beginning of 2023, we exposed the first hints of cybercriminals' interest in using ChatGPT to create malware, encryption tools, and other attack vectors that leverage generative AI. In addition, Russian cybercriminals immediately began discussing how to bypass ChatGPT's restrictions in order to use it for illicit purposes.
One year after the launch of ChatGPT, we observe that the use of generative AI has become the new normal for many cybercrime services, especially in the area of impersonation and social engineering. Some of them have realized the potential of generative AI as a differentiator to increase the effectiveness of their services and are even bragging about it.
In this blog, we provide examples of four AI-powered services from the Russian underground that embed generative AI in their illicit tools and platforms:
Black-Hat Platform for Large-Scale Social Media Impersonation
Deepfakes Service
Malicious Spam Tool
KYC Verification Services
Case 1: AI-powered Black-Hat Platform for Social Media Impersonation
Fake social media accounts pose a significant cybersecurity threat due to their potential for malicious activities, brand impersonation, spreading disinformation, and much more.
In December 2023, an experienced threat actor with official seller status on a “reputable” Russian underground forum offered for sale a ready-to-go platform that uses AI as a core module to generate content for social media platforms such as Instagram, Facebook, Twitter, and Telegram. The platform can be used to almost fully automate the maintenance of fake social media accounts.
In one case study provided as an example, the threat actor shows generated content for female models. In another use case, the actor generated a series of fake profiles that mimic those of successful financial traders.
The threat actor explained that he only realized how powerful these tools are while working on his platform, and he now offers two business models:
Fake social media accounts management as a service – The threat actor and his team create all the necessary accounts on Instagram, Facebook, Telegram, and Twitter. They then automatically generate the content and promote the accounts to give them visibility and a sense of authenticity by joining relevant groups, “liking” similar topic accounts, etc.
As proof of concept, the actor shows an example of 20 accounts of female models that were created and run using AI.
The platform can simultaneously create content for over 200 accounts and generate daily posts, reels, etc. The AI-managed accounts attract followers and the traffic from all the accounts can be used for any malicious purposes.
This managed service costs $50 per month for a single Instagram account; a bundle of connected fake accounts across all four networks costs $70 per month. The minimum order is 10 accounts.
A stand-alone platform – The platform is sold “as is” and the buyer handles the management of all AI-driven accounts by himself. One of the core features of platform ownership is the ability to upload content and enrich it using AI.
The price of the platform is $5,000.
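To put the two business models in perspective, here is a back-of-the-envelope calculation using only the prices quoted above; the break-even comparison is our own illustration, not a figure from the seller:

```python
# Advertised prices from the Case 1 listing
managed_single_ig = 50    # $/month for a single managed Instagram account
managed_bundle = 70       # $/month for a bundle across all 4 networks
min_accounts = 10         # minimum order size
platform_price = 5_000    # one-time price of the stand-alone platform

# Minimum monthly spend for the managed service at each tier
min_monthly_single = managed_single_ig * min_accounts  # 500 ($/month)
min_monthly_bundle = managed_bundle * min_accounts     # 700 ($/month)

# Months of minimum bundle-tier spend that add up to the platform's one-time price
break_even_months = platform_price / min_monthly_bundle  # ~7.1 months

print(min_monthly_single, min_monthly_bundle, round(break_even_months, 1))
# -> 500 700 7.1
```

In other words, a buyer running at least 10 bundled accounts would spend roughly the platform's purchase price on the managed service in about seven months.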
Case 2: AI-Powered Deepfakes Service
On December 31st, New Year’s Eve, another impersonation service was introduced on a major Russian underground forum.
This service is focused on providing AI-based deepfake services in three areas:
1. Lip Sync – $100 per 30 seconds of content.
2. Deepfakes, including lip sync and face replacement – $150 per 30 seconds.
3. Voice Acting – $30 per minute.
Used separately or combined, the services described in cases 1 and 2 can have a significant impact in two areas:
Creating an army of fake social media profiles to promote certain political agendas or products.
Impersonating celebrities or corporate executives, which can lead to severe brand reputation damage or serve as the starting point for cyber-attacks.
Case 3: AI-Powered Malicious Spam Tools and Services
Malicious spam is one of the oldest illicit services found on underground cybercrime forums. Spam is the most common initial vector for various attack scenario objectives such as phishing and credential harvesting, malware distribution, scams/fraud, etc.
One spam service was launched in November 2023 by a reputable threat actor who claims over 15 years of criminal experience. After receiving positive feedback on his service, he made it AI-powered, specifically by integrating ChatGPT.
Using ChatGPT helped randomize the spam text, increasing the likelihood that spam emails would reach victims’ inboxes.
As one customer of this service said, the AI-driven spam service helped him bypass the anti-spam and anti-phishing controls of popular webmail services and achieve a 70% delivery rate to targeted email addresses.
Do you want to know how much the average hacker needs to invest to successfully deliver 70,000 malicious emails? The 100,000-email spam package costs $1,250 (payable in Bitcoin, Monero, or USDT).
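Combining the two advertised figures, a 70% delivery rate and $1,250 per 100,000 emails, gives a rough per-email economics picture; this calculation uses only the numbers quoted above:

```python
# Figures quoted in the listing and the customer review
package_size = 100_000   # emails per package
package_price = 1_250    # USD per package
delivery_rate = 0.70     # claimed inbox delivery rate

# Emails expected to land in inboxes per package
delivered = round(package_size * delivery_rate)   # 70,000

# Effective cost per delivered email
cost_per_delivered = package_price / delivered    # ~$0.018

print(delivered, round(cost_per_delivered, 4))
# -> 70000 0.0179
```

At those rates, each email that actually reaches a victim's inbox costs the attacker under two cents.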
On the demand side, cybercriminals shopping for new spam tools now require a ChatGPT-powered randomization function as part of the technical specifications. This function automatically creates unique text for each spam email, helping it bypass anti-spam filters.
Case 4: KYC Verification Services
Know Your Customer (KYC) procedures have become standard practice for companies providing financial services due to the need for enhanced security, risk mitigation, and regulatory compliance.
KYC also plays a crucial role in regaining access to an account when the legitimate owner cannot use traditional methods such as a password reset. The company usually requires the customer to undergo a KYC process to confirm their identity and ensure they are the legitimate account holder. This typically involves providing valid identification documents, such as a government-issued ID, passport, or driver’s license, along with additional verification steps such as a photo of the customer holding the document.
An entire underground market exists for shady services such as creating images of fake documents for verification. Previously, this kind of cybercrime work was done mostly by manually manipulating the relevant images.
Now, however, one dark-web KYC service vendor said that, with the advent of artificial intelligence, he had recently integrated AI technology that significantly sped up the creation of fake verification documents without sacrificing quality.
By Rabindra
11 Jan 2024
‘Deepfake’ is a big challenge for the entire world: Ram Nath Kovind
Former President of India spoke at the 55th convocation of IIMC
New Delhi, January 11: Delivering the 55th Convocation Address of the Indian Institute of Mass Communication (IIMC), former President of India Shri Ram Nath Kovind said that ‘Deepfake’, fake news, and misinformation pose a significant challenge for the entire world. Today, anyone can use digital means to intentionally spread misinformation. He said that journalists graduating from institutions such as IIMC must ensure they fight the spread of fake news and misinformation. IIMC Chairman Shri R. Jagannathan, Director General Dr. Anupama Bhatnagar and Additional Director General Dr. Nimish Rustagi were present on the occasion.
During the convocation held at Pragati Maidan’s Bharat Mandapam, postgraduate diploma certificates were awarded to students from the batches of 2021–22 and 2022–23 (IIMC Delhi and its Regional Centers at Dhenkanal, Aizawl, Amravati, Kottayam and Jammu). Also, 65 students from both batches were honoured with various awards.
Speaking as the Chief Guest, Shri Kovind highlighted IIMC’s recognition as a ‘Centre of Excellence’ in the field of education and training in mass communication. He emphasised the significant contribution of IIMC students to Indian journalism and mentioned the institute’s commitment to preparing students for careers in journalism.
Shri Ram Nath Kovind advised the students, who are entering the field of journalism and media, to be prepared to tackle the misuse of rapidly advancing technology. He urged them to ensure that citizens receive accurate information amidst the challenges of fake news, misinformation, and deepfakes that the nation and the world are currently facing.
Shri Kovind urged the graduating students to stay away from the trend of sensationalizing news to garner greater TRPs. He cautioned against employing such shortcuts and urged everyone to preserve the values of journalism. He asserted that the power to build a developed India by 2047 lies in the hands of the youth, and they should use this power wisely.
Director General of IIMC, Dr. Anupama Bhatnagar, expressed IIMC’s commitment to providing every student with opportunities for comprehensive development necessary for their overall growth. The convocation ceremony witnessed the participation of over 700 students, professors, officials, and others.