IndiaAI Mission received more than 400 proposals in the second round of EoI under the ‘Safe & Trusted AI’ pillar.
Multiple academic institutions, startups, and research firms were selected by IndiaAI
IndiaAI Mission announced the selection of five artificial intelligence (AI) projects on Monday that will develop tools to make the technology safer and free of bias. The mission, under the Ministry of Electronics and Information Technology (MeitY), launched the second round of its Expression of Interest (EoI) under the “Safe & Trusted AI” pillar in December 2024. After receiving more than 400 proposals from academic institutions, research firms, and startups, the agency has selected five projects that will be funded and guided to build specialised tools.
The selected projects will focus on areas such as deepfake detection, bias mitigation in AI models, and penetration testing tools, with a broader aim of making the country's AI ecosystem more secure, transparent, and inclusive, according to details shared by the IndiaAI Mission.
1. Saakshya: Multi-Agent Deepfake Detection Framework — Led by IIT Jodhpur and IIT Madras, this project is working on developing retrieval-augmented generation (RAG) techniques and multi-agent systems to detect manipulated media and improve governance around deepfake content.
2. AI Vishleshak: Audio-Visual and Signature Forgery Detection — Developed by IIT Mandi in collaboration with the Directorate of Forensic Services in Himachal Pradesh, this tool will detect forged audio, video, and handwritten signatures. It is designed to withstand adversarial attacks and support a wide range of forensic domains.
3. Real-Time Voice Deepfake Detection — This project, developed by IIT Kharagpur, targets real-time detection of voice spoofing and manipulation, with likely applications in countering telephone-based phishing scams, voice authentication forgery, and other impersonation crimes.
4. Evaluating Gender Bias in Agricultural AI Systems — Developed by Digital Futures Lab and Karya, this project aims to create tools that detect and mitigate gender bias arising from the underlying data in agriculture-focused LLMs, ensuring models do not favour one gender over another.
5. Anvil: Penetration Testing and Evaluation Tool for Generative AI — Led by Globals ITES and IIT Dharwad, this project aims to stress-test LLMs and AI systems to uncover vulnerabilities, weaknesses, and adversarial attack surfaces. It is also developing evaluation tools to accurately assess the capabilities of indigenous models.