Unanswered but Not Forgotten: Your Questions After Humanising AI

AI Adoption · Responsible AI · Humanising AI · AI Culture · Critical Thinking | Jun 3, 2025 | Steven Muir-McCarey | 10 min read

Executive Summary: Following our Humanising AI event in Brisbane, we address the most compelling audience questions about responsible AI adoption, maintaining critical thinking skills, and preserving human value in an increasingly automated world. This reflection explores practical approaches to AI governance, cultural transformation, and the balance between convenience and creativity.

Some of the best questions from our Humanising AI event in Brisbane didn't come from Dan Shaw, the MC of our panel. They came from you.

Some were submitted via Slido; others were asked quietly in conversation and captured afterwards. And while we didn't get to respond to them all in the moment, they still hold value and are worth reflecting on. They reflect the tension we're all grappling with right now: not just how we adopt AI, but how we do it responsibly, practically, and without losing the very things that make our organisations human.

So I wanted to take a moment, not to answer everything, but to respond with what we're thinking, learning, and still exploring at LuminateCX.

1. What happens to our ability to think for ourselves?

One of the big themes that emerged was fear, not of robots taking jobs, but of humans forgetting how to think critically. There were questions around generational EQ, university readiness, and whether AI convenience is making us lazy.

And I get it. There's a risk we're not talking about enough.

We've all had that moment where we tried to force an AI output into shape and thought, "I should have just done this myself." That's not just a UX issue... it's a red flag.

If we let convenience override creativity, we're not augmenting intelligence. We're outsourcing it.

At LuminateCX, we see this as a design responsibility. AI should be a catalyst, not a crutch. The value is in co-creation, where human thinking is enhanced, not replaced. And yes, that means protecting the time and space for ideas to breathe, not just execute.

2. Should AI be governed like nuclear energy?

There were pointed questions about ethics, regulation, and global oversight. Should there be AI treaties? International guardrails?

The short answer: probably yes. The better answer: don't wait for them.

AI is already behaving more like infrastructure than tooling. And that means governance can't be an afterthought. At LuminateCX, we look at responsible AI through a multi-layered lens: risk isn't one-size-fits-all. We assess based on exposure, access, and consequence. What data is in play? What decision could this output trigger? Who or what could it affect downstream?

Responsible AI is values-led, context-aware, and risk-adjusted. And just like document classification, it needs clear categories. Public. Internal. Confidential. Critical.

These boundaries shouldn't slow teams down; they should give them the confidence to move faster, within the right lane.
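To make that concrete, here's a rough sketch of how those three questions, exposure, access, and consequence, could roll up into a review tier. The names, weights, and thresholds below are illustrative assumptions for this post, not our actual framework.

```python
from dataclasses import dataclass
from enum import IntEnum

class DataClass(IntEnum):
    """The four classification categories from the post."""
    PUBLIC = 1
    INTERNAL = 2
    CONFIDENTIAL = 3
    CRITICAL = 4

@dataclass
class AIUseCase:
    name: str
    data_class: DataClass    # exposure: what data is in play?
    can_act: bool            # access: can the output trigger a decision directly?
    affects_customers: bool  # consequence: who could it affect downstream?

def review_tier(uc: AIUseCase) -> str:
    """Map a use case to a hypothetical review tier: the higher the
    exposure, access, and consequence, the heavier the guardrails."""
    score = int(uc.data_class) + (2 if uc.can_act else 0) + (2 if uc.affects_customers else 0)
    if score <= 2:
        return "self-serve"            # move fast, within the right lane
    if score <= 4:
        return "peer review"
    if score <= 7:
        return "governance sign-off"
    return "do not automate yet"

# Example: drafting internal FAQs vs. a bot acting on confidential customer data
print(review_tier(AIUseCase("FAQ drafts", DataClass.INTERNAL, False, False)))    # self-serve
print(review_tier(AIUseCase("refund bot", DataClass.CONFIDENTIAL, True, True)))  # governance sign-off
```

The point isn't the scoring itself; it's that once the categories are explicit, most use cases can be triaged in minutes rather than debated case by case.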

3. Why is AI adoption moving faster than our culture can catch up?

Many asked: Why are organisations letting users self-learn AI without support? Why aren't leaders more involved in guiding this shift?

"

This is the AI paradox. We buy fast and adopt slow. We chase potential but forget permission.

Culture doesn't shift just because you rolled out Copilot. Change happens when people are brought along, when their questions are heard, their fears addressed, and their creativity respected.

We always tell clients: don't start with a platform or technology. Start with your people. Find one team, one task, one problem worth solving. That's where adoption becomes trust, and trust becomes momentum.

4. Isn't it just a matter of time before AI builds and runs everything?

There was a question about websites running themselves. Another about Klarna rolling back its AI play. And plenty about the future of jobs.

Here's where I land. It's still day zero. No one is going to accurately predict AI's impact two years out, not with the speed of change we're seeing. But what we can say is this: the energy you used to spend on repetitive tasks is going to shift. Autonomy will replace drag. The next challenge is deciding where you'll apply that energy instead.

The work isn't going away. It's just moving further along the process, into strategy, judgement, empathy, and nuance. That's where humans win. That's where value lives.

5. What makes a tool worth trusting?

We had several asks for tool recommendations. My honest answer? Tools are changing faster than you can learn them. If you're picking tools without building capability, you're not scaling, you're stacking problems.

At LuminateCX, we focus on foundational literacy:

1. Understand the architecture: What's powering this tool?

2. Know the role: Is it an assistant, orchestrator, or decision-maker?

3. Stress-test governance: What data is it pulling from, and where is it pushing to?

4. Align value zones: Where is the human in the loop, and where should they be?

Good tools disappear into good systems. Great tools make your people more confident, not confused. That's the test.

6. What are we missing? What are we ignoring?

Someone asked, "What lesson from the printing press or the internet are we at risk of ignoring?"

That's the question that's still ringing in my head, for more reasons than one.

If I had to answer today, I'd say: we're underestimating the emotional layer of transformation. In every technological shift, we focus on speed, cost, and efficiency. But what about trust? What about the sense of meaning people get from their work? This is also an immense opportunity for all of us: to use this technology to push our ideas, our personal lives, and our businesses further.

"

That's what we can't afford to bypass. Because when those things erode, culture collapses quietly, long before your AI strategy fails.

The conversation doesn't end here

We won't have all the answers. But we'll keep asking better questions. We'll keep learning out loud. And we'll keep building frameworks that allow organisations and the people inside them to do their best work alongside AI, not in spite of it.

If one of these questions is alive in your organisation, let's talk. Book a Spark session. Reach out. Or just send a note.

We're not just building strategies. We're shaping what comes next.

Ready to Navigate Your AI Transformation Journey?

Connect with Steven Muir-McCarey and the LuminateCX team to explore how we can help your organisation adopt AI responsibly whilst preserving the human elements that drive real value.

Start the Conversation

Steven Muir-McCarey

Steve has over 20 years' experience selling, building markets and managing partner ecosystems with enterprise organisations across the cyber, integration and infrastructure space.