Rules are made to be broken, goes the cliché. That's a theme running louder and wilder through much of life these days. Build complicated and successful things. Create rules to protect what you've built. Amass enough power, and then bend or ignore the rules when they become inconvenient.

That theme surfaces frequently enough that it's almost a meme. In politics it happens every day, enough to make a mocking myth of things like the rule of law and the constitution. We see it in religion. We see it in business, too frequently in the business of tech. When you're big enough that you must, and can, create rules to protect what you've built against others, and against yourself, you substitute the convenience of ignoring the rules for the inconvenience of principle.
Back in January (damn, that seems so long ago), Elon Musk's Grok released an AI image editing feature that allowed users to create nonconsensual sexualized deepfakes. It was ugly and disgusting.
As with all new things tech, it caught on like wildfire, and then X took fire from many quarters, including some governments. (Not ours — caterwauling congress critters no longer count.) Apple and Google also took hits for continuing to allow the app on their respective app stores in violation of existing rules. There were calls for both Apple and Google to follow those rules and take the app down. Something both companies have done for other rule-violating apps, with and without publicity.
That didn’t happen.
Yesterday, a report from NBC revealed that Apple, in a letter to U.S. Senators, claimed it worked behind the scenes during the public uproar to demand that the developers "create a plan to improve content moderation." According to The Verge,
Throughout this covert back-and-forth, Grok and X appear to have remained live on the App Store, a drawn-out process that may help explain the confusing, haphazard rollout of moderation changes announced in real time. This included limiting Grok on X to paying subscribers and attempting to stop Grok from undressing women. Our investigations revealed that neither were particularly effective beyond making the tool a bit harder to access. Later interventions, like X letting users block Grok from editing their photos, are also easily circumvented.
Despite Apple’s approval and xAI’s claims it has tightened safeguards, Grok still appears to be able to generate sexualized deepfakes with relative ease.
So, essentially nothing of any real effect happened. Scratch that. Something did: X and Grok put the feature behind a paid subscription. One that Apple also reaped profits from and still does. As does Google.
The one rule this era has taught us is that if you’re big and rich enough, and can weather the storm of public scorn, you can essentially ignore the rules. Even those you’ve written yourself. With impunity.
You can also find more of my writings on a variety of topics on Medium at this link, including in the publications Ellemeno and Rome. I can also be found on social media under my name as above.