In the world of policy-making, almost everyone is agreed on the need for clear boundaries to keep companies like OpenAI in line before disaster strikes.
So far, the AI giants have largely played ball on paper. At the world’s first AI Safety Summit six months ago, a group of tech bosses signed a voluntary pledge to create responsible, safe products that would maximise the benefits of AI technology and minimise its risks.
Those risks they spoke of were the stuff of nightmares – this was Terminator, Doomsday, AI-goes-rogue-and-destroys-humanity territory.
Last week, a draft UK government report from a group of 30 independent experts concluded that there was “no evidence yet, external” that AI could generate a biological weapon or carry out a sophisticated cyber attack. The plausibility of humans losing control of AI was “highly contentious”, it said.
And when the summit reconvened earlier this week, the word “safety” had been removed entirely from the conference title.
Some people in the field have been saying for quite a while that the more immediate threat from AI tools was that they will replace jobs or cannot recognise skin colours. These are the real problems, says AI ethics expert Dr Rumman Chowdhury.
And there are further complications. That report claimed there was currently no reliable way of understanding exactly why AI tools generate the output that they do – even their developers aren’t sure. And the established safety testing practice known as red teaming, in which evaluators deliberately try to get an AI tool to misbehave, has no best-practice guidelines.
And at that follow-up summit this week, hosted jointly by the UK and South Korea in Seoul, tech firms committed to shelving a product if it didn’t meet certain safety thresholds – but these will not be set until the next gathering in 2025.
While the experts debate the nature of the threats posed by AI, the tech companies keep shipping products.
The past few days alone have seen the launch of ChatGPT-4O from OpenAI, Project Astra from Google, and CoPilot+ from Microsoft. The AI Safety Institute declined to say whether it had the opportunity to test these tools before their release.
OpenAI says it has a 10-point safety process, but one of its senior safety-focused engineers resigned earlier this week, saying his department had been “sailing against the wind” internally.
“Over the past years, safety culture and processes have taken a backseat to shiny products,” Jan Leike posted on X.
By
Source link



