What happens when you think AI is lying about you?
Imagine the scene: you're at home with your family and your phone starts pinging... people you know are warning you about something they've seen about you on social media.

It's not the best feeling.

In my case, it was a screenshot, apparently taken from Elon Musk's chatbot Grok (though I couldn't verify it), placing me on a list of the worst spreaders of disinformation on X (formerly Twitter), alongside some prominent US conspiracy theorists.

I had nothing in common with them, and as a journalist, this was not the sort of top 10 I wanted to feature in.

I don't have access to Grok in the UK so I asked both ChatGPT and Google's Bard to make the same list, using the same prompt. Both chatbots refused, with Bard responding that it would be "irresponsible" to do so.

I've done a lot of reporting about AI and regulation, and one of the big worries people have is how our laws keep up with this fast-changing and highly disruptive tech.

Experts in several countries agree that humans must always be able to challenge AI decisions, and AI tools are increasingly both generating content about us and making decisions that affect our lives.

The UK has no dedicated AI regulation yet; the government says concerns about AI should be folded into the work of existing regulators.

I decided to try to put things right.

My first port of call was X - which ignored me, as it does most media queries.

I then tried two UK regulators. The Information Commissioner's Office, the UK's data protection watchdog, suggested I go to Ofcom, which polices the Online Safety Act.