
AI in 5 years: Preparing for Intelligent, Automated Cyber Attacks

Sundar Balasubramanian, Managing Director for India and SAARC at Check Point Software Technologies

A seasoned and highly accomplished business leader with diverse sales, business, and partner leadership experience in the IT industry, Sundar has sizeable experience working with industry leaders such as EMC, VMware, Microsoft, and IBM. With a strong grip on various business processes, he has always led from the front, driving direct sales, new business creation, partnerships, channel management, and service delivery. Sundar has also been credited with building high-performance businesses and teams, and with executing swift turnarounds for troubled businesses.

AI is not only transforming productivity but fundamentally rewriting the cyber risk landscape that enterprises and individuals must urgently address

Organisations around the world are rapidly adopting AI to drive efficiency and transform operations, pushing cybersecurity toward a turning point where AI fights AI. India in particular faces a significant challenge amid this rapid adoption: today's phishing scams and AI-fuelled deepfakes are only precursors to an era of autonomous, self-optimising AI threat actors capable of planning, executing, and refining attacks with little to no human involvement.

In September 2025, Check Point Research’s global threat intelligence found that 1 in every 54 GenAI prompts from enterprise networks posed a high risk of sensitive-data exposure, affecting 91 percent of organisations that use AI tools regularly.

These statistics show that AI is not just reshaping productivity; it is fundamentally rewriting the rules of cyber risk, a shift that enterprises and individuals must urgently address.

The Expanding Attack Surface of AI: Four Critical Threat Vectors Organisations Must Consider

As artificial intelligence (AI) and generative AI become deeply integrated into modern business operations, they are also transforming the cyber threat landscape. Attackers are no longer relying solely on traditional tools — they are embedding AI into their tactics, techniques, and procedures to launch more scalable, adaptive, and sophisticated campaigns. The following four vectors are already emerging from today’s AI ecosystem and represent some of the most pressing security considerations for organisations in 2025 and beyond.

1. Autonomous AI Attacks: Machine-Driven Threat Campaigns

Cybercriminals are increasingly experimenting with autonomous AI-driven attacks, where machine agents independently plan, coordinate, and execute multi-stage campaigns. These AI systems share intelligence, adapt in real time to defensive measures, and collaborate across thousands of endpoints — functioning like self-learning botnets without human oversight.

Recent examples, such as prototypes like ReaperAI, show how autonomous AI can chain reconnaissance, exploitation, and data exfiltration into a seamless operation. This machine-speed evolution presents a major challenge for security operations centres (SOCs), which risk being overwhelmed by swarms of adaptive, self-organising attacks that generate thousands of alerts, test policies and shift tactics in real time.

2. Adaptive Malware Fabrication: Self-Evolving Malicious Code

In 2024, multiple underground forums began advertising “AI malware generators” capable of writing, testing, and debugging malicious code automatically. These tools use feedback loops to learn which variants bypass detection — turning each failed attempt into fuel for the next success.

The use of AI-generated polymorphic malware is redefining how attackers build and deploy malicious software. Where traditional malware relied on minor code changes to bypass detection, modern generative models — from GPT-4o to open-source LLMs — can now produce unique, functional malware variants in seconds.

3. Synthetic Insider Threats: AI-Powered Impersonation and Social Engineering

The insider threat is rapidly evolving with the rise of synthetic identities and AI-generated personas. Built from stolen employee data, voice samples, and internal messages, these AI agents can convincingly mimic real users — sending authentic-looking emails, joining video calls with deepfaked voices, and infiltrating collaboration platforms with precise linguistic and behavioural patterns.

Recent “vibe hacking” cases showed how threat actors embedded social-engineering goals directly into AI configurations, allowing bots to negotiate, deceive, and persist autonomously. As AI voice cloning becomes indistinguishable from the real thing, verifying identity will shift from who is speaking to how behaviourally consistent their actions are, a fundamental change in digital trust models.

4. AI Supply Chain and Model Poisoning: Compromising the Core

The rapid adoption of third-party and open-source AI models has created a vast new attack surface: the AI supply chain. In 2025, several research labs demonstrated data-poisoning attacks where altering just 0.1% of a model’s training data could cause targeted misclassification — for example, instructing an AI vision system to misidentify a stop sign as a speed-limit sign. In cyber security contexts, this could mean an intrusion-detection model misclassifying a malicious payload as benign.
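To make the mechanism concrete, here is a toy, hedged illustration of label-flipping data poisoning on a synthetic dataset, using scikit-learn. The dataset size, flip rate, and model are invented for illustration; the 0.1% figure cited above refers to far larger models and curated training corpora.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# Build a clean binary-classification dataset and a baseline model.
X, y = make_classification(n_samples=20000, n_features=20, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
clean = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)

# Poison the training set: flip labels on ~1% of class-1 samples, the kind
# of small, targeted corruption described above (figures here are invented).
rng = np.random.default_rng(0)
y_poisoned = y_tr.copy()
class1 = np.flatnonzero(y_tr == 1)
flipped = rng.choice(class1, size=len(class1) // 100, replace=False)
y_poisoned[flipped] = 0
dirty = LogisticRegression(max_iter=1000).fit(X_tr, y_poisoned)

# Compare how often each model still recognises the targeted class.
mask = y_te == 1
print("clean model,    class-1 recall:", clean.score(X_te[mask], y_te[mask]))
print("poisoned model, class-1 recall:", dirty.score(X_te[mask], y_te[mask]))
```

The comparison shows how corrupted labels nudge a model's behaviour on the targeted class alone, which is exactly why training-data provenance and integrity checks matter for any model an organisation depends on.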

Why These AI Threats Are Different

AI-driven cyberattacks stand apart because they combine speed, autonomy, and intelligence at a scale human attackers can’t match.

Unlike traditional threats, machine-made attacks learn and adapt continuously. Every failed exploit becomes training data, creating a self-improving threat ecosystem that evolves faster than conventional defences. Check Point Research notes that the Hexstrike-AI framework, an AI-driven tool originally built for red-team testing, was weaponised within hours to exploit Citrix NetScaler zero-days.

These attacks also operate with unprecedented precision. Generative AI enables personalised phishing, multilingual deepfakes, and synthetic insider personas that mimic tone and behaviour so well they bypass human suspicion and automated filters alike. At the same time, AI execution removes the “human fingerprint” of typing errors, time-zone patterns, and linguistic traces, making attribution and detection increasingly difficult.

Finally, AI is democratising cybercrime. Tools that automate scanning, exploitation, and negotiation are lowering barriers for less-skilled attackers, expanding the threat landscape. By 2030, ransomware and data theft could be orchestrated almost entirely by autonomous AI systems capable of running 24/7 operations without human oversight.

What Organisations Must Do Now to Prepare

The risks associated with AI-driven threats are real — but they don’t mean businesses should abandon vibe coding, AI-assisted development, or generative AI tools. Much like the early days of cloud migration or the shift to hybrid work, the challenge now is learning how to embrace AI securely without creating new vulnerabilities.

To reduce risk and build long-term resilience, organisations should focus on these five core strategies:

1. Choose Security-Aware AI Tools and Guide Them Wisely

Select AI platforms that are built or configured with security-first principles. Shape how large language models (LLMs) operate by including prompts that reference validation, encryption, and safe defaults — so secure practices are integrated from the start. Always limit the data you expose to AI tools, keeping sensitive files, credentials, or production datasets out of reach. If testing is needed, use sanitised or synthetic data and review permissions to ensure AI only accesses what it needs.
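As a minimal sketch of what this looks like in practice, the snippet below wraps every request in a security-first system prompt and redacts obvious secrets before anything leaves the organisation. The send_to_llm function, prompt text, and redaction patterns are hypothetical placeholders, not any vendor's API.

```python
import re

# Hypothetical placeholder: wire this to whatever approved LLM client your
# organisation actually uses.
def send_to_llm(system_prompt: str, user_prompt: str) -> str:
    raise NotImplementedError("connect to your approved AI platform")

# Security-first framing included with every request, so validation,
# encryption, and safe defaults are asked for from the start.
SECURE_SYSTEM_PROMPT = (
    "You are a coding assistant. Generated code must validate all inputs, "
    "use parameterised queries, encrypt data in transit and at rest, and "
    "fail closed with safe defaults."
)

# Keep obvious secrets out of prompts; extend these patterns to your data.
REDACTIONS = [
    (re.compile(r"(?i)(api[_-]?key|token|password)\s*[:=]\s*\S+"), r"\1=[REDACTED]"),
    (re.compile(r"\b\d{16}\b"), "[REDACTED-CARD]"),  # bare 16-digit numbers
]

def safe_prompt(user_prompt: str) -> str:
    for pattern, replacement in REDACTIONS:
        user_prompt = pattern.sub(replacement, user_prompt)
    return send_to_llm(SECURE_SYSTEM_PROMPT, user_prompt)
```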

2. Implement Zero Trust for AI

Apply least-privilege access to AI systems. Authenticate every API call, enforce continuous verification, and monitor AI-to-AI interactions to prevent lateral movement. All code generated by AI should be peer-reviewed, tested, and scanned for vulnerabilities before deployment. Human oversight ensures logic, security, and compliance requirements are properly met, and helps catch red flags that AI may overlook.
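One hedged sketch of per-call verification between AI services: every request carries a signature and a scope, and is re-checked on every call rather than trusted after a single login. The key, agent names, and scopes below are illustrative only; production systems would typically use mTLS or per-workload OIDC tokens with keys held in a secret manager.

```python
import hashlib
import hmac
import time

SHARED_KEY = b"rotate-me-via-your-secret-manager"  # illustrative only
ALLOWED_SCOPES = {"summarise-logs", "read-threat-feed"}  # least privilege

def sign(agent_id: str, scope: str, timestamp: str) -> str:
    message = f"{agent_id}|{scope}|{timestamp}".encode()
    return hmac.new(SHARED_KEY, message, hashlib.sha256).hexdigest()

def verify_call(agent_id: str, scope: str, timestamp: str, signature: str) -> bool:
    if scope not in ALLOWED_SCOPES:               # deny anything off-remit
        return False
    if abs(time.time() - float(timestamp)) > 30:  # reject stale or replayed calls
        return False
    expected = sign(agent_id, scope, timestamp)
    return hmac.compare_digest(expected, signature)  # constant-time compare

# Every AI-to-AI request is re-verified and scope-checked; nothing is
# trusted just because an earlier call succeeded.
ts = str(time.time())
print(verify_call("agent-7", "summarise-logs", ts, sign("agent-7", "summarise-logs", ts)))   # True
print(verify_call("agent-7", "delete-database", ts, sign("agent-7", "delete-database", ts)))  # False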

3. Secure the Supply Chain and Dependencies

AI can accelerate development, but it also introduces new risks through third-party libraries, plugins, and code suggestions. Every new dependency should be treated as untrusted until verified. Validate sources, check reputation, and run security scans before integration. Robust supply chain security is essential, especially as AI-assisted coding tools can make dependency sprawl harder to track.
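A minimal sketch of "untrusted until verified" for dependencies: compare each downloaded artefact against a reviewed allowlist of SHA-256 digests before it enters the build. The file name and digest below are illustrative placeholders, not a real package.

```python
import hashlib
from pathlib import Path

# Reviewed allowlist of artefact digests (placeholders for illustration).
APPROVED_DIGESTS = {
    "example_pkg-1.2.0.tar.gz": "0" * 64,  # replace with the reviewed SHA-256
}

def verify_artifact(path: Path) -> bool:
    """Treat a downloaded dependency as untrusted until its hash is verified."""
    digest = hashlib.sha256(path.read_bytes()).hexdigest()
    expected = APPROVED_DIGESTS.get(path.name)
    if expected is None:
        print(f"BLOCKED {path.name}: not on the allowlist")
        return False
    if digest != expected:
        print(f"BLOCKED {path.name}: digest mismatch, possible tampering")
        return False
    return True
```

Package managers already offer equivalents worth turning on, such as pip's hash-checking mode (--require-hashes), which refuses to install anything whose digest is not pinned in the requirements file.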

4. Automate and Embed Security Throughout the Development Lifecycle

Make DevSecOps a standard part of your AI strategy. Automate security checks across your CI/CD pipeline to detect insecure code, exposed secrets, and misconfigurations before they reach production. Pair these automated guardrails with continuous training for developers, analysts, and business teams so they understand fundamentals like input validation, access control, and secure data handling.
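The sketch below shows the idea behind one such automated guardrail: a pre-commit-style scanner that fails the pipeline if it spots likely secrets in source files. Real pipelines typically use dedicated scanning tools; the patterns here are deliberately simple examples.

```python
import re
import sys
from pathlib import Path

# Simple patterns for likely secrets; production scanners use far richer rules.
SECRET_PATTERNS = [
    re.compile(r"AKIA[0-9A-Z]{16}"),                                  # AWS access key ID
    re.compile(r"(?i)(password|secret|token)\s*=\s*['\"][^'\"]+['\"]"),
    re.compile(r"-----BEGIN (RSA |EC )?PRIVATE KEY-----"),
]

def scan(root: str) -> int:
    """Walk the tree, flag lines that look like exposed secrets."""
    findings = 0
    for path in Path(root).rglob("*.py"):
        for lineno, line in enumerate(path.read_text(errors="ignore").splitlines(), 1):
            if any(p.search(line) for p in SECRET_PATTERNS):
                print(f"{path}:{lineno}: possible exposed secret")
                findings += 1
    return findings

if __name__ == "__main__":
    sys.exit(1 if scan(".") else 0)  # non-zero exit code fails the CI job
```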

5. Govern GenAI Usage Company-Wide

Unchecked AI tool use is a growing source of data leakage. CPR’s September 2025 cyberattack statistics research found that 15 percent of enterprise AI prompts contained potentially sensitive information such as customer records and proprietary code. Clear, company-wide usage policies, paired with monitoring of what actually flows into GenAI tools, are essential to close this gap.
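As a hedged sketch of that monitoring, the snippet below classifies outbound prompts against simple policy rules and tallies how many would have exposed sensitive data, echoing the kind of measurement cited above. The rules and sample prompts are illustrative placeholders.

```python
import re
from collections import Counter

# Illustrative policy rules; a real deployment would use proper DLP rules.
POLICY_RULES = {
    "customer-record": re.compile(r"(?i)\b(customer|account)\s*(id|no\.?|number)\b"),
    "source-code": re.compile(r"(?i)\b(def |class |import |proprietary)\b"),
    "credential": re.compile(r"(?i)\b(password|api[_-]?key|token)\b"),
}

def audit(prompts: list[str]) -> Counter:
    """Tally how many prompts match each sensitive-data rule."""
    tally = Counter()
    for prompt in prompts:
        for label, rule in POLICY_RULES.items():
            if rule.search(prompt):
                tally[label] += 1
    return tally

sample = [
    "Summarise this meeting transcript",
    "Debug this: def charge(customer_id, card): ...",
    "Rewrite the reply to the complaint about account number 8841",
]
print(audit(sample))  # shows how many prompts hit each policy rule
```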

Conclusion: Building Resilience for the AI Era

The AI arms race has begun and the rise of GenAI data leakage shows how quickly automation is redefining risk for enterprises worldwide. As we look ahead, the evolution of AI will not just accelerate — it will fundamentally redefine how organisations secure their digital future.

From synthetic insiders impersonating employees to adaptive malware that rewrites its own code, the threat landscape is transforming rapidly. To stay ahead, organisations must fight fire with fire: security must evolve from reactive tools into prevention-first, AI-powered, cloud-delivered platforms that integrate predictive analytics, behavioural intelligence, autonomous remediation, and continuous governance to predict and pre-empt attacks before they occur.

Check Point’s Infinity AI Threat Prevention Engine, powered by ThreatCloud AI, already analyses millions of indicators from over 150,000 networks to block zero-day attacks in real time, while Harmony SASE and Harmony Browse secure GenAI usage and browser interactions at the cloud edge.

The future of cyber security will belong to those who embrace this platform-centric mindset, consolidating visibility and control across the digital estate and embedding Zero Trust and secure-by-design principles at every layer. As Cybersecurity Awareness Month a few weeks ago reminded us, organisations should focus on raising literacy around both the benefits and risks of AI, preparing their people and infrastructure for a world where prevention, automation, and intelligence are the only sustainable strategies. Those who act now will turn AI from a risk into a decisive advantage and build true digital resilience for the decade ahead.
