Following a global backlash over Grok-generated images, particularly those involving non-consensual, sexually explicit, or misleading content, X swiftly restricted its image generation features to paid, verified users. The company positioned the change as a safety measure, arguing that paid access would reduce misuse, improve accountability, and facilitate the identification of bad actors.

While the intent appeared corrective, the decision leaned on an assumption X itself has struggled with in the past: that charging users creates better behavior. History on the platform suggests otherwise.

1. Old Twitter verification misuse shows the flaw

Even before AI tools were part of the platform, Twitter’s verification system had credibility issues. The blue tick was originally meant to confirm identity, but over time it became more of a status symbol than a trust indicator. Verified accounts were frequently involved in misinformation, harassment, and impersonation, while enforcement remained inconsistent.

When verification later became a paid feature, the problem became more visible. Fake brand accounts, misleading profiles, and coordinated impersonation campaigns surfaced rapidly, all carrying the same blue tick meant to reassure users. The lesson from this period is clear: verification alone does not regulate behavior, especially when deployed at scale. Applying the same logic to AI image generation repeats a familiar mistake.
