Responsible AI from the Perspective of an Everyday User

“As a user” is a common way to open a ticket in Jira. Translation: “SPEAK PLAIN, I AM HOOMAN.”

Where I live in England, when not tripping over sheep, people understand and agree that technology is necessary. They use it every day: voice dictation, route optimisation through Google Maps or Apple Maps, even predictive text. They use it for spam filtering, capturing images to identify this or that, and a number of apps considered essential to daily life. Yet when the word “AI” enters the conversation, the tone shifts. It becomes abstract. Ominous. Something happening elsewhere. Something to fear, almost like a plague.

The Tony Blair Institute notes that “many UK adults view AI as a risk to society, the economy, and national security, with risk perceptions often outweighing opportunity perceptions.” That captures the uneasy, ambivalent attitude many everyday users hold.

What I notice locally is not resistance to technology, but a literacy gap. People are highly competent digital users. They manage online banking, run businesses using cloud tools, navigate complex apps, and consult digital co-pilot platforms with ease. But when asked what AI is doing underneath the surface, many struggle to explain it beyond “it learns” or “it just knows.”

To most people, AI is a black box painted by whoever holds the largest and brightest brush in the media cycle. What today’s world is owed is transparency. Explanation. Clarity about limits as much as capabilities.

I use AI to help edit crochet patterns. It helps me check stitch counts and restructure rows, but I remain the author. I even use it when baking. “Did I forget to add milk again?” “Will these muffins sink because I forgot to add milk… again?” It assists. It does not replace judgement. That boundary is clear. Responsible use, at least for me, is about keeping that boundary intact.

Globally, the conversation is similar. Headlines travel faster than nuance. But in the UK we are also, thankfully, having serious policy conversations. UNESCO’s Recommendation on the Ethics of Artificial Intelligence sets out principles of transparency, accountability, fairness, and meaningful human oversight. The Royal Society has highlighted that AI literacy frameworks in the UK are still emerging and fragmented.

That tells me something important. The principles exist. The tools exist. What is still catching up is shared understanding.

As a parent, this feels particularly relevant. Children encounter AI as an interface before they encounter it as policy; they gain fluency before they understand its limits. I grew up in the first generation of online chatrooms. A/S/L was the opening line. We were told to be cautious because not everyone was who they said they were. Even then, the suspension of disbelief was real.

What has changed is sophistication. Conversational systems now simulate continuity and tone so smoothly that the distinction between human and automated response can blur. That does not make them inherently harmful. It makes clarity more important.

The internet was not inherently bad. It amplified what humans brought to it. AI is similar. It is shaped by incentives, design decisions, and governance. It reflects the ecosystem that builds it.

I remain hopeful because we are not starting from nothing; nor are we rudderless. Ethical frameworks exist. Public conversations are happening. Parents are asking questions. Communities are talking. That is not fear. That is engagement.

That engagement is what will drive the paradigm shift toward using AI as a force for good rather than harm.

Headlines about AI hurting humanity come readily to mind. But there are also headlines, albeit less amplified algorithmically, about AI accelerating medical research, supporting accessibility, and improving efficiency in ways that genuinely improve lives. Those stories deserve equal oxygen.

Responsible AI, to me, is a partnership: a human and machine relationship that, fostered with intention, amplifies both. Tools that assist without replacing judgement. Systems that are powerful but clearly bounded. Innovation that moves forward without leaving understanding behind.

We have shaped technologies before. We can shape this one too.

Robyn Janz-Selly

I do what I do but with lipstick.

https://www.rmjsarts.com