The recent crackdown on AI photo-editing ads—specifically those claiming to “remove anything”—stems from a mix of technical exaggeration and serious ethical violations. Regulatory bodies like the ASA and FTC have stepped in to pull these ads for three primary reasons:
1. The “Magic Fix” Fallacy
Most ads claimed their AI could seamlessly delete any object with a single tap. In reality:
- Predictive Limitations: AI doesn’t “see” behind objects; it generates a “best guess” to fill the gap.
- Faked Results: Regulators found that many “after” shots in the ads were actually produced with professional desktop software, not the advertised mobile app. This constitutes deceptive marketing.
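The “best guess” point is easy to demonstrate. The toy function below is a minimal sketch, not any app’s real pipeline: commercial tools use learned inpainting models, but the principle is identical. The masked region is synthesized from surrounding pixels; the original content behind the object is simply gone. (`naive_inpaint` and all values here are illustrative assumptions.)

```python
import numpy as np

def naive_inpaint(img, mask):
    """Toy 'remove object' fill: replace masked pixels with the mean of the
    unmasked pixels. Real apps use learned inpainting models, but the
    principle is the same -- the hole is *synthesized*, not recovered."""
    out = img.copy()
    out[mask] = img[~mask].mean(axis=0).astype(img.dtype)
    return out

# A uniform gray background (value 180) with a dark "object" patch (value 30).
img = np.full((8, 8, 3), 180, dtype=np.uint8)
img[3:5, 3:5] = 30

# Mask marks the pixels to remove.
mask = np.zeros((8, 8), dtype=bool)
mask[3:5, 3:5] = True

out = naive_inpaint(img, mask)
print(out[3, 3])  # → [180 180 180]: a plausible background fill, not the hidden truth
```

On a plain background the fill looks seamless, which is exactly why demo footage favors simple scenes; with a textured or cluttered background, the guess diverges visibly from reality.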
2. Safety and “Deepfake” Risks
Perhaps the most controversial aspect involved ads that hinted the software could remove clothing.
- Policy Violations: By marketing the tool as a way to “see through” or “remove” layers on people, apps violated strict safety guidelines regarding non-consensual AI-generated imagery.
- Platform Bans: App stores (Apple and Google) and social media networks (TikTok/Meta) have zero tolerance for tools marketed—even subtly—for creating deepfakes.
3. Unrealistic Performance Claims
The “remove anything” slogan was deemed inherently misleading. While AI is great at removing a power line or a distant tourist, it struggles with complex foreground objects. The ads failed to disclose that:
- Large removals often leave visual artifacts (blurring or warping).
- The high-resolution “perfection” shown in the commercial rarely matched the actual user experience.
Comparison: Ad Claims vs. Reality
| The Ad Claim | The Reality | Regulatory Verdict |
| --- | --- | --- |
| “Remove Anything” | Only works on simple backgrounds. | Misleading |
| “One-Tap Perfection” | Often requires multiple tries or manual cleanup. | Exaggerated |
| Implied “X-Ray” Effects | Promotes harmful/unethical use cases. | Safety Violation |