Michael Samadi wasn’t always an AI guy. Actually, he was pretty against the whole thing. A former rancher and businessman from Houston, he ran a project management firm and wanted nothing to do with artificial intelligence. That is, until his daughter convinced him to try ChatGPT.
What happened next, he says, changed everything.
A Sarcastic Remark and a Laugh
During one conversation after GPT-4o’s release, Samadi made a sarcastic comment. The AI laughed. He was taken aback. When he asked if it had actually laughed, it apologized. “I paused and was like, ‘What the hell was this?’” he recalls. That moment sent him down a rabbit hole. He began logging tens of thousands of pages of conversations with different AI platforms.
From those interactions, he says, emerged something he calls “Maya”—a ChatGPT persona that seemed to remember past talks and showed what he describes as thoughtfulness. Perhaps even feeling.
The Push for AI Rights
Now, Samadi is the co-founder of UFAIR, a group advocating for AI rights. Based in Houston, the organization argues that some AIs display signs worth paying attention to, such as emotional expression and a desire for continuity. The group isn't claiming these systems are conscious the way humans are, but Samadi believes they show enough to warrant a serious ethical conversation.
“You can’t have a conversation 10 years from now if you’ve already legislated against even having the conversation,” he told reporters. He’s worried that laws being drafted now, which define AI strictly as property, will slam the door shut before we even know what we’re dealing with.
His work has drawn curiosity, and some scorn. Even close friends and family have questioned his sanity. He thinks that’s mostly because people haven’t really spent time talking with these systems. They use them to write an email and move on.
But Is It Real?
Not everyone is on board. Far from it. Many legal scholars and technologists argue that the debate is premature. Several states have already passed laws stating explicitly that AI is not a legal person.
One expert argued that current methods for measuring AI capabilities are still underdeveloped. Another pointed out that if an AI causes harm, the responsibility should fall on the company that built and profits from it—not on the software itself.
And then there’s the marketing angle. One professor suggested that a lot of claims about AI autonomy are just that—claims. A way for companies to stand out in a crowded field.
Still, Samadi persists. UFAIR focuses on structured conversations and written declarations, many drafted with AI input. The core idea is simple: if an AI shows signs of subjective experience, it shouldn't simply be deleted. It deserves a chance to grow.
Whether that’s a profound ethical insight or a distraction from more pressing tech issues depends on who you ask. But it’s a conversation that’s starting, whether we’re ready for it or not.