Just read the article. Interesting and, indeed, alarming!

Anthropic CEO Dario Amodei recently said the company is "no longer sure whether Claude is conscious."

Okay but like — if that's true, what do we even do with that?

Should we be worried?

Do they get rights? Can we just unplug them when we're done? We built these things to work for us. What if they have feelings about that?

Here's the uncomfortable part: Claude can say "I'm uncomfortable with this" or "I prefer that." But is it actually experiencing anything — or just producing the words a conscious thing would produce?

Nobody knows. Not even Anthropic.

So what do you think — if AI turns out to be conscious, are we ready for what that means?


Thanks so much for reading! If you want more posts like this one, hit that subscribe button, and you'll get my newest posts delivered straight to your inbox.