Is AI Safe? CERT-In’s New Advisory Unveils Hidden Cyber Threats

Titiksha Srivastav
By Titiksha Srivastav - Assistant Editor
8 Min Read

NEW DELHI: The Indian Computer Emergency Response Team (CERT-In) has released a comprehensive advisory, CIAD-2025-0013, addressing the security risks associated with the increasing adoption of generative artificial intelligence. The advisory highlights critical vulnerabilities in AI models and provides essential guidelines to mitigate threats.

With AI becoming an integral part of various industries, including healthcare, finance, cybersecurity, and media, its growing use has also given rise to sophisticated cyber threats. Attackers are exploiting weaknesses in AI training data, input mechanisms, and model architectures to manipulate outputs, steal sensitive information, and bypass security measures. CERT-In warns that such vulnerabilities could have serious implications for individuals, businesses, and government organizations.

Emerging Threats in AI Systems

The advisory outlines multiple attack vectors that malicious actors can exploit to compromise AI security.

One of the major concerns is data poisoning, where cybercriminals manipulate an AI model’s training data, leading to incorrect outputs, biased decisions, or compromised functionality. This can be achieved by inserting deceptive data or modifying existing data to mislead the model.

Empanelment for Speakers, Trainers, and Cyber Security Experts Opens at Future Crime Research Foundation

Another significant risk is adversarial attacks, where attackers subtly modify inputs in a way that appears unchanged to human observers but causes AI models to make incorrect predictions. These attacks can be used to bypass security checks, such as AI-driven fraud detection or content moderation systems.

CERT-In also highlights model inversion and model stealing as growing threats. Model inversion techniques allow attackers to extract sensitive information about the training data by analyzing an AI system’s responses. Meanwhile, model stealing involves reverse-engineering an AI model by continuously querying it, enabling attackers to create a near-identical version without investing in extensive training efforts. This could lead to intellectual property theft, particularly in industries that rely on proprietary AI-driven innovations.

A more direct form of attack is prompt injection, where malicious instructions or hidden commands are embedded into AI prompts to manipulate the system’s responses. This method can be used to bypass content filters, expose sensitive information, or generate harmful outputs that violate ethical guidelines.

Another concern raised in the advisory is hallucination exploitation, a phenomenon where AI models generate misleading or entirely false information. Cybercriminals can take advantage of this flaw to spread misinformation, conduct fraud, or deceive users through convincingly fabricated content. This is particularly dangerous in fields such as journalism, finance, and healthcare, where inaccurate AI-generated information could have far-reaching consequences.

The advisory also warns about backdoor attacks, where hidden triggers are implanted in an AI model during the training phase. These triggers can be later activated to make the model behave unexpectedly, potentially leading to security breaches and unauthorized access.

Best Practices for Safe AI Usage

To mitigate the risks associated with AI-powered applications, CERT-In has issued a set of best practices for individuals and organizations to follow.

The advisory emphasizes the importance of choosing AI applications carefully. Cybercriminals are increasingly developing fake AI tools to distribute malware and steal user data. It is essential to verify the authenticity of AI applications before installation, preferably using tools that have been reviewed and approved by trusted sources.

Now Open: Pan-India Registration for Fraud Investigators!

Another critical recommendation is to avoid sharing personal and sensitive information with AI services, especially those operating on cloud-based platforms. Many AI tools collect user data to improve their models, and unauthorized access to this information could lead to privacy violations or data breaches. Organizations handling confidential data should refrain from using publicly available AI systems for professional tasks.

Organizations are also advised to carefully configure AI tool access rights to prevent unauthorized access to business data, emails, document repositories, and other sensitive assets. It is crucial to regularly review and update these permissions to ensure they align with security policies.

The advisory further cautions against relying solely on AI for accuracy. AI models are only as effective as the quality and completeness of the data they are trained on. If training datasets are outdated, incomplete, or biased, AI-generated outputs can be unreliable. Users must cross-verify information with trusted sources before making critical decisions based on AI-generated insights.

Another key recommendation is to use AI tools strictly for their intended purpose. AI should be leveraged to assist with information processing, content generation, and task automation, but not as a replacement for human judgment in sensitive areas such as legal and medical decision-making.

CERT-In also advises users to strengthen their cybersecurity practices, including using strong passwords and enabling multi-factor authentication when accessing AI services. This reduces the risk of unauthorized account access, especially in professional environments where AI tools may be integrated with workplace systems.

To protect personal identity and prevent data breaches, users are encouraged to use anonymous accounts when signing up for AI services. If sensitive data must be input, it should be anonymized to minimize exposure.

Another crucial guideline is to stay vigilant against plagiarism risks. Many AI-generated texts can closely resemble existing content, increasing the risk of unintentional plagiarism. Businesses and content creators should carefully review AI-generated material to ensure originality before publishing or using it commercially.

Finally, the advisory warns against falling for AI-generated scams and deepfake content. Threat actors are using generative AI to create highly convincing phishing emails, fraudulent advertisements, and even deepfake videos for deceptive purposes. Users must remain cautious and verify the authenticity of digital content before trusting it.

The Growing Need for AI Security Awareness

As AI continues to evolve and integrate into everyday life, cybersecurity threats targeting AI systems are also becoming more advanced. The CERT-In advisory serves as a crucial reminder that while AI presents immense benefits, its vulnerabilities must not be overlooked.

The organization urges businesses, government agencies, and individual users to proactively adopt security measures, conduct regular risk assessments, and stay informed about emerging AI threats. Ensuring responsible AI usage will be key to maintaining trust, privacy, and security in an increasingly AI-driven world.

Follow The420.in on

 TelegramFacebookTwitterLinkedInInstagram and YouTube

Share This Article