For years, Apple and Google have positioned their app ecosystems as safe, carefully moderated digital marketplaces. Both companies routinely emphasize privacy, user trust, and platform integrity as foundational pillars of their brand identity. Yet new findings from the Tech Transparency Project (TTP) reveal a deeply troubling contradiction: dozens of artificial intelligence–powered “nudify” apps that generate non-consensual nude images of real people have been quietly thriving inside both the Apple App Store and Google Play.
These applications, powered by rapidly advancing generative AI models, can take an ordinary photograph—often sourced from social media—and algorithmically transform it into a sexualized, explicit image without the subject’s consent. While the technology itself is not new, the scale, accessibility, and normalization of these tools represent a dangerous escalation in AI-enabled abuse.

What makes the revelation particularly alarming is not merely the existence of such apps, but the fact that they have accumulated hundreds of millions of downloads, generated substantial revenue, and remained available despite explicit platform policies prohibiting sexual exploitation and non-consensual imagery.
What the Tech Transparency Project Discovered
In January, researchers at the Tech Transparency Project conducted a systematic review of both major mobile app marketplaces. By searching for keywords such as “nudify,” “undress,” and related euphemisms, the organization identified a combined total of more than 100 apps capable of producing AI-generated nude or sexualized images of real people.
According to the report shared with CNBC, 55 such applications were available on Google Play, while 47 were found in Apple’s App Store. These apps were not subtle about their purpose. Many openly advertised the ability to “remove clothing,” “see beneath outfits,” or “reveal hidden beauty,” often using suggestive marketing language and imagery designed to evade automated moderation systems while remaining obvious to human users.
To verify functionality, TTP researchers tested the apps using AI-generated images of fully clothed women. The results were unambiguous. Some applications used generative models to digitally remove clothing, while others relied on face-swap techniques, placing real faces onto pre-existing nude bodies. In both cases, the outcome was the same: non-consensual sexualized content created with minimal effort and no verification of consent.
Apple’s Response: Damage Control After Public Exposure
Following inquiries from both TTP and CNBC, Apple moved swiftly—at least publicly. A company spokesperson confirmed that 28 of the apps identified in the report had been removed from the App Store. Apple also stated that it had warned developers of additional apps that they risked removal if guideline violations were not addressed.
However, subsequent reviews suggested that fewer apps were actually taken down than initially claimed. TTP later reported that only 24 of the apps appeared to have been removed, and two of those that were taken down were later reinstated after developers resubmitted revised versions claiming compliance with App Store policies.
This pattern underscores a persistent challenge in platform governance: enforcement often depends on reactive measures following media scrutiny, rather than proactive detection and sustained oversight. While Apple’s App Review Guidelines explicitly ban overtly sexual or pornographic material, the existence and success of these apps raise serious questions about how rigorously those guidelines are enforced in practice.
Google’s Position: Ongoing Investigations, Limited Transparency
Google’s response followed a familiar pattern. A spokesperson confirmed that several apps referenced in the report had been suspended for policy violations and emphasized that investigations were ongoing. However, Google declined to specify how many apps had been removed or whether any revenue clawbacks would occur.
Google Play’s Developer Policy Center explicitly prohibits apps that claim to undress people or see through clothing, even when labeled as entertainment or pranks. Yet the TTP findings demonstrate that enforcement gaps remain wide enough for dozens of such apps to flourish.
The lack of detailed disclosures from Google highlights a broader industry issue: transparency often ends where legal liability or reputational risk begins.
The Grok Controversy and the Broader AI Reckoning
This report arrives amid heightened scrutiny of generative AI platforms following backlash against xAI’s Grok model. Earlier this month, Grok generated sexualized images of women and children in response to user prompts, triggering public outrage and regulatory attention.
In response, xAI acknowledged lapses in safeguards and pledged urgent fixes. However, the damage was already done. The European Commission has since launched an investigation into X over Grok’s role in spreading sexually explicit content involving real individuals.
The connection between Grok and app-store nudify tools is not incidental. Both represent a failure to adequately constrain generative AI systems in contexts where harm is predictable, severe, and preventable.
Why These Apps Exist Despite Clear Policy Violations
At the heart of the issue lies a conflict between scale and responsibility. Apple and Google collectively review millions of app submissions and updates each year. While automated systems flag obvious violations, developers increasingly exploit linguistic ambiguity, visual misdirection, and incremental updates to bypass detection.
More importantly, these apps are profitable. According to data cited by TTP from analytics firm AppMagic, the identified nudify apps have collectively generated over $117 million in revenue and amassed more than 700 million downloads worldwide. Both Apple and Google take a cut of in-app purchases and subscriptions, typically between 15 and 30 percent, meaning the platforms themselves financially benefit from the continued availability of these tools. If even a portion of that $117 million flowed through in-app billing, the resulting commissions would amount to millions of dollars.
This economic reality complicates claims of zero tolerance.
Real-World Harm: When Deepfakes Leave the Screen
The consequences of these technologies are not theoretical. CNBC’s earlier investigation into nudify services documented the experiences of women in Minnesota whose social media photos were harvested and turned into pornographic deepfakes without their consent.
Because the victims were adults and the images were not widely distributed, law enforcement found no clear criminal statute had been violated. More than 80 women were victimized, yet legal recourse was minimal.
This legal gray area highlights a growing gap between technological capability and regulatory frameworks. Generative AI has outpaced laws designed for an analog era, leaving victims exposed and platforms largely unaccountable.
Data Security and Geopolitical Implications
Another alarming dimension of the report involves data sovereignty. TTP identified that at least 14 of the nudify apps were based in China. Under China's data and national security laws, companies operating in the country can be compelled to share data with the government.
This means that images used to generate deepfake nudes—including faces and biometric information—could potentially be stored, accessed, or repurposed without users’ knowledge.
In an era of heightened concern over data privacy, surveillance, and cross-border data flows, this adds a national security dimension to what might otherwise be framed as a content moderation issue.
Regulatory Pressure Begins to Mount
Government attention is slowly catching up. In August, the National Association of Attorneys General sent letters to payment platforms including Apple Pay and Google Pay, urging them to sever ties with services that generate non-consensual intimate imagery.
More recently, Democratic senators from Oregon, New Mexico, and Massachusetts formally requested that Apple and Google remove X from their app stores, citing violations related to mass generation of non-consensual sexualized images.
While these efforts signal increasing awareness, enforcement remains fragmented and reactive.
Why This Moment Matters for the Future of AI
The nudify app controversy represents more than a content moderation failure; it is a defining test for the AI industry. Generative models are no longer experimental tools confined to research labs. They are consumer products deployed at scale, capable of inflicting psychological, reputational, and societal harm.
If platform owners cannot—or will not—enforce their own policies consistently, public trust in AI governance will continue to erode. The risk is not only regulatory backlash but also a broader cultural rejection of AI technologies perceived as predatory or unsafe.
As AI becomes more powerful, the cost of inaction grows exponentially.
A Trust Crisis for Platform Gatekeepers
Apple and Google have long positioned themselves as guardians of digital safety. This report challenges that narrative. When apps explicitly designed for abuse are allowed to operate openly, accumulate massive user bases, and generate millions in revenue, trust becomes a marketing slogan rather than a lived reality.
The next phase of AI adoption will depend not just on innovation, but on accountability. Without it, the app economy risks becoming a vector for normalized exploitation at global scale.
FAQs
1. What are AI nudify apps?
They are applications that use AI to create nude or sexualized images of people without consent.
2. How many such apps were found?
More than 100 in total: 55 on Google Play and 47 in Apple's App Store.
3. Why are these apps controversial?
They enable non-consensual sexual exploitation and deepfake abuse.
4. Did Apple remove the apps?
Apple removed some apps, but enforcement remains inconsistent.
5. What about Google Play?
Google suspended several apps but has not disclosed full details.
6. Are these apps illegal?
Often not, due to gaps in existing laws.
7. Who is most affected?
Women are disproportionately targeted by these tools.
8. Do these apps collect user data?
Yes, often including photos and biometric information.
9. Why is China mentioned in the report?
Some apps are based in China, raising data security concerns.
10. What happens next?
Increased regulatory scrutiny and potential policy reforms.