
Perplexity, Anthropic and Other Big AI Companies Might Have Exposed Secrets on GitHub

Researchers claimed that 65 percent of the 50 leading AI companies have leaked verified secrets on GitHub.


Photo Credit: GitHub

The researchers highlighted that AI companies should invest in secret scanning to protect their assets

Highlights
  • Researchers examined companies on the Forbes 2025 AI 50 list for exposed secrets
  • They were able to find API keys, tokens, and sensitive credentials
  • The report found leaked secrets of a company with 0 public repositories

Perplexity, Anthropic, and other leading artificial intelligence (AI) companies might have exposed sensitive data on GitHub, a cloud security firm has claimed. As per the firm's report, at least 65 percent of the leading AI companies have exposure risk around their proprietary AI models, datasets, and training processes. Some of the exposed data includes application programming interface (API) keys, tokens, and sensitive credentials, the report claimed. The researchers also highlighted the need for AI companies to use more advanced scanners that can alert them to such exposure.

GitHub Contains AI Secrets of Major AI Firms, Claims Research

According to the cloud security platform Wiz, 65 percent of the AI companies on Forbes' AI 50 list have their AI secrets exposed on GitHub. The list includes companies such as Anthropic, Mistral, Cohere, Midjourney, Perplexity, Suno, and World Labs. However, the researchers did not name which particular companies were found to have leaked secrets.

The sensitive data ends up on GitHub because these companies' developers use the platform to write code and create repositories. These repositories can inadvertently contain API keys, dataset details, and other material that can reveal critical information about proprietary AI models. The risk grows with a larger GitHub footprint, although the researchers found one instance where data was leaked by a company with no public repositories at all.
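For illustration, a very basic version of the kind of secret scanning the researchers recommend can be built from a handful of regular expressions that match well-known credential formats. The patterns and paths below are illustrative assumptions, written in Python, and not the detection rules used in the report:

    # Minimal sketch of regex-based secret scanning; the patterns here are
    # illustrative assumptions, not Wiz's actual detection rules.
    import re
    from pathlib import Path

    # Hypothetical patterns for a few widely documented credential formats.
    SECRET_PATTERNS = {
        "Hugging Face token": re.compile(r"hf_[A-Za-z0-9]{30,}"),
        "Google API key": re.compile(r"AIza[0-9A-Za-z_\-]{35}"),
        "Generic API key assignment": re.compile(
            r"(?i)(api[_-]?key|token|secret)\s*[:=]\s*['\"][^'\"]{16,}['\"]"
        ),
    }

    def scan_repo(root: str):
        """Walk a checked-out repository and report lines matching secret patterns."""
        findings = []
        for path in Path(root).rglob("*"):
            if not path.is_file():
                continue
            try:
                text = path.read_text(errors="ignore")
            except OSError:
                continue
            for lineno, line in enumerate(text.splitlines(), start=1):
                for name, pattern in SECRET_PATTERNS.items():
                    if pattern.search(line):
                        findings.append((str(path), lineno, name))
        return findings

    if __name__ == "__main__":
        for file, lineno, kind in scan_repo("."):
            print(f"{file}:{lineno}: possible {kind}")

Commercial secret scanners work along similar lines but add validation of whether a matched credential is still live, which is how "verified" secrets are distinguished from stale ones.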

To test whether these AI companies have any exposure risk, Wiz's team first identified each company's employees by scanning the organisation's followers on LinkedIn, GitHub accounts referencing the organisation's name in their metadata, and code contributors, and by correlating that information across Hugging Face and other platforms.
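As a rough idea of what that discovery step could look like in practice, the sketch below pulls an organisation's public members from GitHub and searches for accounts whose profile metadata mentions the organisation's name. The organisation slug is a placeholder, and this only approximates part of the correlation described in the report, since it does not cover LinkedIn or Hugging Face:

    # Sketch of the account-discovery step: public GitHub org members plus a
    # user search for the organisation name. "example-ai-org" is hypothetical.
    import requests

    GITHUB_API = "https://api.github.com"
    ORG = "example-ai-org"  # placeholder organisation slug

    def public_members(org: str) -> list[str]:
        """Members the organisation publicly lists on GitHub."""
        resp = requests.get(f"{GITHUB_API}/orgs/{org}/members", timeout=10)
        resp.raise_for_status()
        return [m["login"] for m in resp.json()]

    def users_mentioning(org: str) -> list[str]:
        """Users whose GitHub profile metadata matches the organisation name."""
        resp = requests.get(
            f"{GITHUB_API}/search/users",
            params={"q": org, "per_page": 50},
            timeout=10,
        )
        resp.raise_for_status()
        return [u["login"] for u in resp.json().get("items", [])]

    if __name__ == "__main__":
        candidates = set(public_members(ORG)) | set(users_mentioning(ORG))
        print(f"{len(candidates)} candidate accounts linked to {ORG}")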

After identifying the accounts, the researchers performed an extensive scan along three parameters: depth, coverage, and perimeter. The depth search, which looks beyond the obvious sources, let the researchers scan each account's full commit history, commit history on forks, deleted forks, workflow logs, and gists. The researchers also found that employees sometimes commit such sensitive data to their own public repositories and gists.
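The depth aspect can be pictured with a small sketch that walks a repository's entire commit history, so that secrets committed and later deleted still surface, and that fetches a user's public gists. This is an illustrative approximation, not the scanner Wiz used; the secret-matching helper from the earlier sketch is assumed:

    # Sketch of a "depth" scan: full git history plus public gists.
    import subprocess
    import requests

    def commit_history_text(repo_path: str) -> str:
        """Return every patch in the repository's history, so secrets that were
        committed and later removed still show up."""
        out = subprocess.run(
            ["git", "-C", repo_path, "log", "-p", "--all"],
            capture_output=True, text=True, check=True,
        )
        return out.stdout

    def public_gists(user: str) -> list[str]:
        """Fetch the raw contents of a user's public gists."""
        resp = requests.get(f"https://api.github.com/users/{user}/gists", timeout=10)
        resp.raise_for_status()
        texts = []
        for gist in resp.json():
            for f in gist.get("files", {}).values():
                raw = requests.get(f["raw_url"], timeout=10)
                texts.append(raw.text)
        return texts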

Some of the leaked data surfaced by the team includes model weights and biases, Google API keys, and credentials for Hugging Face and ElevenLabs, among others.



Akash Dutta
Akash Dutta is a Chief Sub Editor at Gadgets 360. He is particularly interested in the social impact of technological developments and loves reading about emerging fields such as AI, the metaverse, and the fediverse. In his free time, he can be seen supporting his favourite football club, Chelsea, watching movies and anime, and sharing passionate opinions on food.
