In an era where AI is reshaping our digital landscape at an unprecedented pace, we face a critical threat to the very foundations of digital security and trust. The emergence of AI-generated content, particularly deepfakes and manipulated media, has opened a Pandora's box of potential misuse and deception. While venture capital floods into generative AI, the equally crucial field of defending against these threats remains woefully underfunded and overlooked.
The urgency of addressing this challenge cannot be overstated. Recent incidents paint an alarming picture:
Corporate Fraud: In 2019, a UK-based company fell victim to a sophisticated attack in which AI was used to mimic a CEO's voice, resulting in a fraudulent transfer of $243,000.
Financial Scams: Deepfake videos of high-profile figures like Elon Musk have been used to promote fraudulent cryptocurrency schemes, leading to significant financial losses for investors worldwide.
Political Manipulation: A deepfake video circulated showing a French diplomat making false, inflammatory statements, leading to international tensions that required substantial diplomatic efforts to resolve.
Misinformation Campaigns: Former U.S. President Donald Trump reposted AI-generated deepfakes and manipulated images of Taylor Swift to Truth Social, depicting the pop star and her "supporters" endorsing him in the upcoming election. This incident shows how deepfakes can be weaponized for political gain and voter manipulation.
Social Media Deception: We're witnessing a proliferation of deepfake-based influencer accounts on platforms like Instagram. For instance, an account with the handle @aayushiiisharma__ has amassed around 100,000 followers with hyper-realistic AI-generated content. Such accounts could potentially be used to scam followers or deceive brands hoping to advertise with real influencers.
Personal Security Threats: Voice-based scams are on the rise, with criminals using AI to emulate the voices of loved ones, often to request emergency financial aid. In a recent case in India, a woman lost Rs 1.4 lakh (approximately $1,700) to such a scam.
The scale of this threat is set to escalate dramatically. Gartner Research predicts that by 2025, 30% of deepfakes will be indistinguishable from authentic media without specialized detection tools. This forecast underscores the pressing need for advanced detection technologies to safeguard our digital interactions.
We're a team of researchers and engineers from IIT Delhi and IIT Roorkee who are developing a state-of-the-art deepfake detection tool for mobile and web, aiming to identify 98-99% of modified media.
Key Features:
Multi-Platform: Available on both mobile and web.
High Accuracy: Aiming for a 98-99% detection rate, significantly above current industry standards. Ambitious, but we believe it is achievable.
Real-Time: Detection runs while you scroll and while you talk.
Continuous Learning: Machine learning models that adapt to new deepfake techniques as they emerge.
Privacy-Focused: Strict adherence to data protection regulations, ensuring user privacy and data security.
That's me: an honorary veteran of the Air Force. Well, not really!
We will offer one year of free access to the tool to every Manifund contributor who gives $10 or more. Invites will be sent to the same email address used to make the contribution on Manifund.
50% - Research team stipends (currently paid out of our own pockets)
30% - Cloud compute costs (we currently have only minimal credits to survive on)
10% - Privacy certifications (US and EU)
10% - Operational expenses (coworking space, travel)
Utsav Singhal -> LinkedIn
Sarthak Gupta -> LinkedIn
4 research interns from IIT Delhi and IIT Roorkee
No part of the project would count as a failure: every step is incremental, and we will open-source the tool if we are unable to deliver the promised accuracy by the end of March 2025.
$7,715 grant from Entrepreneurship First (incubator). No other funding or commitments beyond this.
Feel free to ask questions or request clarifications. We are based at Urban Vault 65, HSR, Bangalore, India, and you are welcome to visit. More info: https://satya-img.github.io/