From Data Integrity to Algorithmic Liability

“We shape our tools and thereafter our tools shape us.” (Marshall McLuhan)

Nearly 1 in 5 young people in the UK report problematic social media use, with links to anxiety and sleep disruption.

I’ve been thinking about responsibility this week.

A recent ruling found Meta liable for harm linked not just to content on its platforms, but to how those platforms were designed to behave. Recommendation systems, engagement loops, optimisation.

What happens when systems we build, deploy, or rely on start to have consequences that are difficult to ignore? That feels significant. For a long time, platforms were able to position themselves as neutral. Content lived on them, users interacted with it, but the systems themselves sat just outside the frame of responsibility.

This ruling reframes these platforms from neutral hosts to systems that actively shape outcomes. And once you start looking at systems that way, it becomes difficult to step back.

It connects to something I wrote previously about data integrity.

At the time, I was focused on inputs. Training data, provenance, the idea that small compromises can have outsized effects. That still holds.

But this feels like a different layer.

It’s not just about whether a system is accurate. It’s about whether it is safe in how it behaves over time. Whether optimisation itself can become a source of harm, even when nothing is technically “broken”.

Something closer to behavioural integrity.

There’s also the question of user understanding.

Most people don’t fully understand how these systems shape what they see or feel. And if that’s the case, consent becomes unclear.

You can’t opt out of something you don’t recognise.

Where this becomes more serious is outside social media.

In healthcare or training environments, these systems influence not just how people feel, but what they do.

The principle is the same. The consequences are not.

It feels like we’re at the point where something is being named properly.

That systems are not neutral. That optimisation is a design choice. That influence carries consequence.

And that responsibility is going to be harder to move around than we might like.

I’m left with a simple question:

What does it actually mean to build something responsibly when you can’t fully predict how it will behave over time?

Robyn Janz-Selly

I do what I do but with lipstick.

https://www.rmjsarts.com