Microsoft’s New Copilot Actions for Windows 11 Face Scrutiny Over Potential Security Implications

One expert compared Microsoft’s security warning to the CYA manoeuvre in law, which is to shield the party from potential liabilities.

Advertisement
Written by Shaurya Tomer, Edited by Ketan Pratap | Updated: 20 November 2025 17:58 IST
Highlights
  • Microsoft has cited risks like hallucinations and cross-prompt injection
  • Experts warn user habituation could weaken security boundaries
  • They compared Copilot Actions to dangerous macro-like automation

Copilot Actions are currently available to Windows 11 Insiders in preview

Photo Credit: Microsoft

Microsoft recently announced the gradual rollout of Copilot Actions on Windows 11. It is an experimental agentic capability available to Windows Insiders that automates tasks by assigning an artificial intelligence (AI) chatbot to everyday actions, such as organising files or sending emails. The company, however, cited novel security risks and recommended that this feature be utilised only if users understand its security implications. Experts have now warned about Copilot Actions, calling Microsoft's security boundaries “not really a boundary.”

Microsoft's Warning About Copilot Actions

Despite all of the capabilities of agentic AI, there are still functional limitations in terms of their behaviour and tendency to hallucinate and deliver unexpected outputs, Microsoft explained in a blog post. These are associated with the inherent defects in large language models (LLMs). Consequently, there may be instances in which the AI twists the facts and provides factually inaccurate and even bizarre responses.

Advertisement

Further, there are novel security risks associated with agentic AI applications. For example, cross-prompt injection (XPIA) can result in malicious content being embedded in documents and UI elements. This carries the risk of agent instructions being overridden, leading to data exfiltration, malware installation, or other unintended actions.

Microsoft said that it has built appropriate guardrails to mitigate such instances. To begin with, all of the automated actions carried out by Copilot Actions are observable and distinguishable from those manually taken by a user. Further, AI agents that collect and utilise users' protected data must meet or exceed the security and privacy standards of the data being consumed.

Advertisement

Lastly, all requests for user data utilisation, as well as actions to be taken, will be approved by the user. The company further clarified that IT administrators can turn off an agent workspace at both account and device levels for workspace accounts, using mobile device management (MDM) apps.

Despite the company's efforts, the new Copilot Actions feature has come under scrutiny from security experts.

Advertisement

What Experts Say

In a conversation with ArsTechnica, independent security researcher Kevin Beaumont compared Copilot Actions to macros in Microsoft Office.

Macros, notably, are sequences of instructions that automate repetitive tasks by running them as a single command. While their usefulness has been explored to save time and increase efficiency, the company has also periodically warned about the risks associated with macros, which can be used to run malicious code to steal data, transfer files, or install malware. Thus, Microsoft disables macros by default.

Advertisement

“Microsoft saying ‘don't enable macros, they're dangerous'… has never worked well. This is macros on Marvel superhero [drugs],” Beaumont warned.

Another concern raised by experts was the challenges faced by experienced users in detecting when the AI agents are being exploited by attackers. While Microsoft's security goals were commended, the reliance on users reading and approving the risk warnings appearing on dialogue windows was also reported to be a matter of concern, potentially diminishing the overall protection value.

“Sometimes those users don't fully understand what is going on, or they might just get habituated and click ‘yes' all the time. At which point, the security boundary is not really a boundary,” Earlence Fernandes, a professor specialising in AI security at the University of California, told the publication.

Meanwhile, one expert reportedly compared Microsoft's security warning to the CYA manoeuvre in law, which is to simply shield the party from potential liabilities. They claimed Microsoft does not have any contingency to deal with hallucinations and prompt injection, which makes Copilot Actions “fundamentally unfit for almost anything serious”.

 

Get your daily dose of tech news, reviews, and insights, in under 80 characters on Gadgets 360 Turbo. Connect with fellow tech lovers on our Forum. Follow us on X, Facebook, WhatsApp, Threads and Google News for instant updates. Catch all the action on our YouTube channel.

Advertisement

Related Stories

Popular Mobile Brands
  1. Ek Haseen Saazish Kasak OTT: Know When, Where to Watch the Romance Thriller
  1. Rocket Lab Sends Up Test Satellites for Europe’s Next-Gen Navigation System
  2. Zootopia 2 Is Now Streaming: Know Where to Watch the Disney Cop Comedy Sequel
  3. Ek Haseen Saazish Kasak OTT Release: Know When and Where to Watch the Romance Thriller
  4. Vadh 2 Streaming Now: Where to Watch Neena Gupta, Sanjay Mishra’s Crime Thriller
  5. Scientists Identify 45 Earth-Like Planets Beyond Our Solar System
  6. Euphoria Is Streaming Online: Know Where to Watch Sara Arjun's Social Thriller
  7. Valathu Vashathe Kallan Is Now Streaming: Know All About Jeethu Joseph's Crime Thriller
  8. Band Melam OTT Release: Know Where to Watch the Telugu Romantic Musical Film
  9. Microsoft Releases New AI Models That Can Generate Images, Audio and Transcribe Text
  10. Redmi K Pad 2, New Redmi Laptops Tipped to Launch Alongside Redmi K90 Ultra
Download Our Apps
Available in Hindi
© Copyright Red Pixels Ventures Limited 2026. All rights reserved.