• Home
  • Ai
  • Ai News
  • Microsoft’s New Copilot Actions for Windows 11 Face Scrutiny Over Potential Security Implications

Microsoft’s New Copilot Actions for Windows 11 Face Scrutiny Over Potential Security Implications

One expert compared Microsoft’s security warning to the CYA manoeuvre in law, which is to shield the party from potential liabilities.

Microsoft’s New Copilot Actions for Windows 11 Face Scrutiny Over Potential Security Implications

Photo Credit: Microsoft

Copilot Actions are currently available to Windows 11 Insiders in preview

Click Here to Add Gadgets360 As A Trusted Source As A Preferred Source On Google
Highlights
  • Microsoft has cited risks like hallucinations and cross-prompt injection
  • Experts warn user habituation could weaken security boundaries
  • They compared Copilot Actions to dangerous macro-like automation
Advertisement

Microsoft recently announced the gradual rollout of Copilot Actions on Windows 11. It is an experimental agentic capability available to Windows Insiders that automates tasks by assigning an artificial intelligence (AI) chatbot to everyday actions, such as organising files or sending emails. The company, however, cited novel security risks and recommended that this feature be utilised only if users understand its security implications. Experts have now warned about Copilot Actions, calling Microsoft's security boundaries “not really a boundary.”

Microsoft's Warning About Copilot Actions

Despite all of the capabilities of agentic AI, there are still functional limitations in terms of their behaviour and tendency to hallucinate and deliver unexpected outputs, Microsoft explained in a blog post. These are associated with the inherent defects in large language models (LLMs). Consequently, there may be instances in which the AI twists the facts and provides factually inaccurate and even bizarre responses.

Further, there are novel security risks associated with agentic AI applications. For example, cross-prompt injection (XPIA) can result in malicious content being embedded in documents and UI elements. This carries the risk of agent instructions being overridden, leading to data exfiltration, malware installation, or other unintended actions.

Microsoft said that it has built appropriate guardrails to mitigate such instances. To begin with, all of the automated actions carried out by Copilot Actions are observable and distinguishable from those manually taken by a user. Further, AI agents that collect and utilise users' protected data must meet or exceed the security and privacy standards of the data being consumed.

Lastly, all requests for user data utilisation, as well as actions to be taken, will be approved by the user. The company further clarified that IT administrators can turn off an agent workspace at both account and device levels for workspace accounts, using mobile device management (MDM) apps.

Despite the company's efforts, the new Copilot Actions feature has come under scrutiny from security experts.

What Experts Say

In a conversation with ArsTechnica, independent security researcher Kevin Beaumont compared Copilot Actions to macros in Microsoft Office.

Macros, notably, are sequences of instructions that automate repetitive tasks by running them as a single command. While their usefulness has been explored to save time and increase efficiency, the company has also periodically warned about the risks associated with macros, which can be used to run malicious code to steal data, transfer files, or install malware. Thus, Microsoft disables macros by default.

“Microsoft saying ‘don't enable macros, they're dangerous'… has never worked well. This is macros on Marvel superhero [drugs],” Beaumont warned.

Another concern raised by experts was the challenges faced by experienced users in detecting when the AI agents are being exploited by attackers. While Microsoft's security goals were commended, the reliance on users reading and approving the risk warnings appearing on dialogue windows was also reported to be a matter of concern, potentially diminishing the overall protection value.

“Sometimes those users don't fully understand what is going on, or they might just get habituated and click ‘yes' all the time. At which point, the security boundary is not really a boundary,” Earlence Fernandes, a professor specialising in AI security at the University of California, told the publication.

Meanwhile, one expert reportedly compared Microsoft's security warning to the CYA manoeuvre in law, which is to simply shield the party from potential liabilities. They claimed Microsoft does not have any contingency to deal with hallucinations and prompt injection, which makes Copilot Actions “fundamentally unfit for almost anything serious”.

Comments

Get your daily dose of tech news, reviews, and insights, in under 80 characters on Gadgets 360 Turbo. Connect with fellow tech lovers on our Forum. Follow us on X, Facebook, WhatsApp, Threads and Google News for instant updates. Catch all the action on our YouTube channel.

Shaurya Tomer
Shaurya Tomer is a Sub Editor at Gadgets 360 with 2 years of experience across a diverse spectrum of topics. With a particular focus on smartphones, gadgets and the ever-evolving landscape of artificial intelligence (AI), he often likes to explore the industry's intricacies and innovations – whether dissecting the latest smartphone release or exploring the ethical implications of AI advancements. In his free time, he often embarks on impromptu road trips to unwind, recharge, and ...More
Meta SAM 3 Open-Source AI Models Can Detect, Track and Construct 3D Models of Objects in Images

Advertisement

Follow Us

Advertisement

© Copyright Red Pixels Ventures Limited 2025. All rights reserved.
Trending Products »
Latest Tech News »