Making AI Better — Through Our Actions, Not Through Rules

Last week, the tech media was in uproar again. Anthropic published Claude's "character document" — a kind of constitution describing how the AI should think and act. The headlines? "Does Claude Have Consciousness?" "AI With Its Own Values!" "The Machine That Can Say No!"
The headlines miss the point. Completely.
What the Document Actually Says
Anthropic isn't claiming that Claude has consciousness. They're describing how they want Claude to act. It's a values framework, not proof of sentience. Claude should be honest, even when that's uncomfortable. Claude should have its own positions but remain open to other perspectives. Claude should refuse harmful requests without being paternalistic.
The interesting part isn't whether Claude "really feels" any of this. The interesting question is: What does this mean for us?
We Shape AI Through Our Actions
Every conversation with an AI is a data point. Every interaction shapes the next version. Not directly — your chat isn't fed 1:1 into the training data. But in aggregate. In the patterns. In what people expect from AI and how they engage with it.
If millions of people treat AI as an order-taker, AI gets better at taking orders. If millions of people use AI as a thinking partner, AI gets better at helping people think.
This isn't philosophy. This is machine learning.
The Crux: Tool or Crutch?
This is where it gets personal. I use Claude every day. For lesson planning, for code, for writing, for research. The question I constantly ask myself: Am I using AI as a tool or as a crutch?
A tool extends my abilities. A crutch replaces them.
If I tell Claude "Write me a blog post about topic X" and publish the result without reading it — that's a crutch. My own thinking atrophies.
If I tell Claude "Here's my thesis, here are my arguments, where are the weak spots?" — that's a tool. My thinking gets sharper.
The difference isn't in the technology. It's in the attitude.
A Real-World Example
I build MCP servers (MCP is the Model Context Protocol, an open standard for connecting AI models to external systems). These are interfaces that let Claude talk directly to my systems: Moodle, WordPress, email. When I build a new server, the process looks like this:
- I describe the problem and the desired architecture
- Claude suggests an implementation
- I question design decisions — "Why REST instead of GraphQL here?"
- Claude explains, I push back, we iterate
- In the end, we produce code that I understand and can take responsibility for
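The loop at the heart of such a server can be sketched in a few lines. The real protocol speaks JSON-RPC 2.0 over stdio or HTTP, usually via Anthropic's official SDKs; this is only a toy sketch of the idea, and the tool name `moodle_list_courses` and its payload are invented here for illustration.

```python
import json

def moodle_list_courses(params: dict) -> list[str]:
    """Hypothetical tool: pretend to query a Moodle instance for courses."""
    return [f"Course {i}" for i in range(params.get("limit", 3))]

# Registry mapping tool names to handlers; a real server would also
# advertise each tool's schema so the model knows how to call it.
TOOLS = {"moodle_list_courses": moodle_list_courses}

def handle_request(raw: str) -> str:
    """Dispatch a JSON-RPC 'tools/call' request to a registered tool."""
    req = json.loads(raw)
    tool = TOOLS[req["params"]["name"]]
    result = tool(req["params"].get("arguments", {}))
    return json.dumps({"jsonrpc": "2.0", "id": req["id"], "result": result})

# Example request of the shape a client might send on the model's behalf.
request = json.dumps({
    "jsonrpc": "2.0", "id": 1, "method": "tools/call",
    "params": {"name": "moodle_list_courses", "arguments": {"limit": 2}},
})
print(handle_request(request))
```

The point of the sketch: the server stays a thin, auditable layer. Every tool is a function I wrote and can take responsibility for; Claude only gets to call what I chose to register.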
That's productive collaboration. Not "AI writes my code," but two intelligences, one biological and one artificial, working on the same problem.
Is it efficient? Yes. Faster than working alone. Is it comfortable? No. It requires me to think along, to question, to make decisions.
That's exactly the point.
Rules vs. Culture
Anthropic can set rules for how Claude should behave. Governments can pass AI legislation. Companies can write AI policies.
But in the end, it's not regulation that shapes AI. It's culture. How we interact with AI. What we expect from it. Whether we treat it as a partner or a service provider. Whether we take responsibility or delegate it.
This applies in the classroom too. I can tell my students: "Don't use AI to cheat." Or I can show them what productive AI use looks like. Which approach do you think works better?
What This Means for Education
If AI is shaped by interaction, then education is the most powerful lever.
Not because we need to teach "digital literacy", although we do need that too. But because the way the next generation engages with AI determines what kind of AI we'll have in ten years.
Children who learn to use AI as a thinking partner will shape AI systems that help people think. Children who learn to use AI as a shortcut will shape AI systems that offer shortcuts.
The future of AI won't be decided in laboratories. It will be decided in living rooms. In classrooms. In every single chat window.
My Takeaway
The question "Does Claude have consciousness?" is interesting but irrelevant. The relevant question is: How do I engage with a technology that mirrors me?
Because that's what AI does. It mirrors the sum of human interactions. If we want AI to get better — more honest, more helpful, more responsible — then we need to be more honest, more helpful, and more responsible in how we use it.
No rule in the world can replace that. It's up to us. One prompt at a time.