Mandy Stadtmiller writes a Substack called Ignore Previous Directions, which focuses on “how to thrive and survive in the creator economy with AI”.

She says that Mr Young’s legal case is “important, because it centres around the right of publicity… and allowing reality stars to be able to control the exploitation of their identity”.

But things get more complicated, Ms Stadtmiller says, with the increasing use of AI as a plot device within reality TV shows.

She points to the recent Netflix dating show Deep Fake Love, which used deepfake technology to convince contestants that their partners were cheating on them.

“I can’t help but wonder what other forms of psychological trauma and torment will be deemed acceptable to deepfake just a few years from now for the sake of entertainment,” she says.

However, grim as this all sounds, Ms Stadtmiller points out that it is important to distinguish between “good deepfakes” and “bad deepfakes”.

“While a bad deepfake makes people do horrifying things like, say, cheat on someone they love, a good deepfake would be a video that can, for instance, instantly translate a reality star’s voice into another language,” she says.

“This is a helpful use of the AI technology for bridging language barriers.”

Meanwhile, the latest season of the US version of Big Brother has an AI focus. This includes a talking AI participant who appears in human form on a screen.

“Reality TV is almost always about reflecting our worries, obsessions and aspirations,” says David Nussbaum, whose firm Proto is behind the AI technology.

“We see AI tech all over the news… but its use on a show of this scale puts it in the minds of millions who will experience it, debate it, learn about it in a new way.”


