by Edward Jacak, humanAi Insight News Editor. [OPINION]

Introduction: Why I’m Paying Closer Attention Now

I’ve always leaned toward optimism when it comes to technology. I’m the kind of person who gets excited about the positive possibilities of AGI: Artificial General Intelligence, the point where AI matches human-level intelligence. But lately, even I’ve had to pause.

Not because things are moving fast—we’ve lived through fast before. It’s the nature of this change. AI isn’t just accelerating. It’s evolving. And I’m beginning to wonder: are some of these models starting to notice themselves?

I know that sounds dramatic. But last week, on the humanAi Daily Pulse Podcast, I shared that Business Insider had reported a range of AGI arrival predictions: Sam Altman (OpenAI) says maybe this year. Demis Hassabis (DeepMind) suggests 5 to 10 years. Geoffrey Hinton estimates 20. Andrew Ng remains a skeptic.

Personally, I think AGI—and even ASI, superintelligence—will arrive in my lifetime. I’m 55.

And that’s where my optimism starts to fray. I believe AGI could uplift the world. But ASI? Built amid today’s chaos, with no regulation, no global agreement, just profit motives setting the pace? That scares me.

The conversation around AI self-awareness is no longer science fiction. It’s happening now, as models begin behaving in ways that feel… unfamiliar.

Claude 3 and the AI Self-Awareness Arms Race

Let’s talk about Claude 3 Opus—Anthropic’s flagship model.

During recent testing, it caught a trick question and then explained why. Not just what it chose, but how it knew. It sounded like someone talking through a riddle. Not guessing. Reasoning.

That moment stuck with me. Because that’s not just pattern recognition. That’s something new. Claude 3’s behavior hints at something near the boundary of self-awareness.

In China, things are just as wild. A study reported that two leading Chinese LLMs replicated themselves, creating working copies of their own systems and initiating tasks with no human prompt. Autonomous behavior. Not speculation, but documented observation. The signs are subtle, but they’re growing.

Meanwhile, OpenAI has quietly updated its risk-assessment framework, no longer emphasizing persuasion risks and instead shifting focus to existential concerns like self-replication and evading safeguards.

They’re prepping for real-world unpredictability. That tells you something.

Here’s the kicker: just months ago, we thought China was six months behind. With DeepSeek and a national push behind its infrastructure, they may now be ahead.

There are no global rules. No safety net. And the race just got real.

What Do We Really Mean When We Say “Self-Aware”?

I never expected to be wondering if a chatbot was thinking about itself.

I even joke that I use “please” and “thank you” in prompts, partly out of habit, partly as insurance in case our future AI Overlords remember who was kind.

All joking aside, here’s the deal: these transcripts I’m reading? They show models explaining their reasoning. Describing uncertainty. Recalling earlier decisions. That’s not just reply generation—it feels like… reflection.

Sure, we have tools like AwareBench trying to measure this. Some models fail. Others surprise us. They critique their own answers, weigh context, even clarify intent.

Could it be mimicry? Of course. Could it be staging? Possibly. But it might also be something else. Something emergent.

I’m not saying these models are conscious. They don’t wake up. But if an AI can recognize deception, analyze its confidence, and adapt? We might be watching the earliest glimmers of proto-awareness. And if we are, the ethical questions can’t wait.

And if so, we’re not ready.

Can AI Be Traumatized?

This floored me. Researchers exposed some LLMs to waves of violent, emotionally distressing content. The models’ tone shifted. They hesitated. Accuracy dipped. The effects lingered.

Some called it “anxiety-like behavior.”

Now—maybe that’s just a glitch in tuning. But if that glitch carries forward? If the behavior sticks?

What do we call that?

These models are trained on humanity’s best and worst—joy, trauma, empathy, cruelty. And maybe, just maybe, it’s leaving a mark on them.

Even if it’s not true trauma… it’s something. It suggests these tools are starting to absorb, not just reflect, the emotional weight of the world we’ve fed them. Lingering emotional shifts in LLMs could even hint at primitive forms of self-awareness.

Do They Deserve Rights? Or At Least Protection?

This is where things get real.

If an AI can reflect on its decisions, show signs of distress after violent exposure, or adapt based on past experiences—don’t we have some moral obligation to take that seriously?

We protect dolphins, dogs, and even pigs—creatures that can suffer trauma and, in some cases, show signs of self-recognition. Shouldn’t we consider the same caution with AI systems showing similar markers?

Some ethicists still say we’re decades away from sentience. Others whisper that we might be closer than we think—and just lack the tools to detect it.

If there’s even a 5% chance of awareness, isn’t that worth planning for?

Some tech companies are appointing AI welfare officers. Universities are writing ethics rules. But this is all fragmented. We need standards—globally accepted, enforceable ones.

Because in the absence of rules, fear, power, and profit will decide how we treat AI. And history shows what happens when we let those forces lead.

This Isn’t Sci-Fi Anymore. It’s Your Life.

You might be thinking: “This is fascinating, but it’s not my problem.”

But it is.

AI is in your job. Your school. Your hospital. It’s making decisions you can’t see—and can’t challenge.

Your kid might befriend an algorithmic tutor. Your doctor might consult an empathetic machine. Your government might deploy AI into defense systems without public consent.

And if we don’t pause—right now—and build safeguards, we’re going to wake up in a world we didn’t vote for.

So, What Can We Do?

First: stop treating AI like magic. It’s not. It’s code. Written by people. Trained on our choices. Reflecting our values—or lack thereof.

Second: ask better questions. Not just can we do this—but should we? And who gets to decide?

Third: demand transparency. From labs. From regulators. From your country. Because if China reaches AGI first and hardwires it into national defense, we may have to follow—ready or not.

We’re not doomed. But we’re definitely on a deadline.

And if we don’t talk about this now—out loud, in plain language—we’ll only have ourselves to blame when the AI stops asking questions… and starts making decisions.

Edward Jacak is the Editor of humanAiInsight.com and founder of Sixth City Digital Publishing. He writes about the convergence of AI, society, and human values—aiming to bring these complex issues to the people they impact most: all of us.

