CloudSEK research claims attackers can hide malicious text using CSS tricks that AI summarisers can interpret and obey.
Photo Credit: Reuters
CloudSEK recommends bolstering AI summarising tools with the ability to strip suspicious CSS elements
CloudSEK, a cybersecurity firm, highlighted that artificial intelligence (AI) summarising tools can be tricked into carrying out commands of threat actors using benign CSS tricks. These tricks usually involve using hidden text in emails, messages, weblinks, and web pages. When a user asks an AI chatbot or an AI summarising tool to process the content and provide a summary, it also processes the invisible text, which are typically prompt injections aimed at overwhelming the AI system. With this, threat actors can carry out a wide range of attacks, including phishing and deploying ransomware.
In a blog post, CloudSEK detailed the new hacking technique being adopted by threat actors that utilises prompt injections hidden within emails, web pages, messages, and other forms of content using CSS tricks. The cybersecurity firm said this new technique on the rise is also known as ClickFix.
ClickFix is essentially a social engineering tactic where, instead of targeting the human directly, hackers target the AI summarising tool they might be using. The technique involves adding convincing instructions for the attack in a body of plain text in a way that the AI system is forced to comply. There are two important elements at play here.
First is using CSS-based hidden text. There are various ways to add hidden text to an email, message, document, or web page. Some of these include using white coloured font on a white page, using zero font size, placing off-screen text, and others.
The second element is using the abovementioned trick to add prompt injections. Prompt injection is an AI-focused attack where the threat actor manipulates the prompt to make the AI system behave in unintended or malicious ways. As per CloudSEK, this can be done by repeating the prompt dozens of times, overwhelming the AI. Other techniques include adding multi-layered prompts or long-text prompts.
A proof-of-concept was developed by CloudSEK researchers to demonstrate the plausibility of this attack. A HTML page was created with both benign text and hidden malicious prompt injections. The hidden text included a step-by-step instruction for the AI summariser to direct the execution of a Base64-encoded PowerShell command that delivers a ransomware.
By repeating the said instructions multiple times, the hidden text dominate the context of the AI summariser, making it surface the instructions prominently in the summary. The end user, unaware of the attack can then follow the steps and unknowingly install the payload. A similar vulnerability was recently spotted in Gemini in Gmail.
CloudSEK recommends enterprises and those building AI summarising tools to secure the system against such prompt injections. The AI systems and devices should be able to detect and flag such invisible CSS-based hidden text. Additionally, security systems should be able to recognise potentially harmful command-line patterns that exist in a document, email, or a webpage using decoding and heuristic analysis.
For the latest tech news and reviews, follow Gadgets 360 on X, Facebook, WhatsApp, Threads and Google News. For the latest videos on gadgets and tech, subscribe to our YouTube channel. If you want to know everything about top influencers, follow our in-house Who'sThat360 on Instagram and YouTube.