
Gartner Predicts AI Agents Will Reduce the Time It Takes to Exploit Account Exposures by 50% by 2027

Announcement posted by Gartner 19 Mar 2025

AI Agents Will Increasingly Exploit Weak Authentication by Automating Credential Theft and Compromising Authentication Communication Channels

19 March 2025 — By 2027, AI agents will reduce the time it takes to exploit account exposures by 50%, according to Gartner, Inc.

"Account takeover remains a persistent attack vector because weak authentication credentials, such as passwords, are gathered by a variety of means including data breaches, phishing, social engineering and malware," said Jeremy D'Hoinne, VP Analyst at Gartner. "Attackers then leverage bots to automate a barrage of login attempts across a variety of services in the hope that the credentials have been reused on multiple platforms."

AI agents will enable automation for more steps in account takeover, from social engineering based on deepfake voices to end-to-end automation of user credential abuse. As a result, vendors will introduce products for web, app, API and voice channels to detect, monitor and classify interactions involving AI agents.

"In the face of this evolving threat, security leaders should expedite the move toward passwordless phishing resistant MFA," said Akif Khan, VP Analyst at Gartner. "For customer use cases when users may have a choice of authentication options, educate and incentivise users to migrate from passwords to multidevice passkeys where appropriate."

Defending Against Social Engineering Attacks
Technology-enabled social engineering will also pose a significant threat to corporate cybersecurity. Gartner predicts 40% of social engineering attacks will target executives as well as the broader workforce by 2028. Attackers are now combining social engineering tactics with counterfeit reality techniques, such as deepfake audio and video, to deceive employees during calls. 

Although only a few high-profile cases have been reported, these incidents have underscored the credibility of the threat and resulted in substantial financial losses for victim organisations. Deepfake detection is still in its early stages, particularly when applied to the diverse attack surfaces of real-time person-to-person voice and video communications across various platforms.

"Organisations will have to stay abreast of the market, and adapt procedures and workflows in an attempt to better resist attacks leveraging counterfeit reality techniques," said Manuel Acosta, Senior Director Analyst at Gartner. "Educating employees about the evolving threat landscape by using training specific to social engineering with deepfakes is a key step."

Gartner clients can learn more in "Predicts 2025: Navigating Imminent AI Turbulence for Cybersecurity."

Learn how to evaluate cybersecurity AI assistants in "How to Evaluate Cybersecurity AI Assistants."

About Gartner for Cybersecurity Leaders
Gartner for Cybersecurity Leaders equips security leaders with the tools to help reframe roles, align security strategy to business objectives and build programs to balance protection with the needs of the organisation. Additional information is available at https://www.gartner.com/en/cybersecurity.

Follow news and updates from Gartner for Cybersecurity Leaders on X and LinkedIn using #GartnerSEC. Visit the Gartner Newsroom for more information and insights.

About Gartner
Gartner, Inc. (NYSE: IT) delivers actionable, objective insight that drives smarter decisions and stronger performance on an organisation's mission-critical priorities. To learn more, visit gartner.com.

Media Contacts

Emma Keen

Director, Public Relations (APAC)
