Since the middle of February, members of Congress have received more than 119,000 calls demanding that they pass stricter gun control laws. But these aren't typical calls from constituents voicing their concerns.
Through the use of artificial intelligence, these calls are seemingly being made from the grave — by children who were killed as a direct result of gun violence.
“In 2018, when I was 15 years old, I was killed by an unsecured gun at my friend’s house,” reads a message in the voice of Connecticut native Ethan Song. “My parents recreated my voice using AI so I can finally ask you to do something to protect kids.”
These messages are being sent through a new website called The Shotline. The platform was born out of a partnership between Change the Ref and March For Our Lives, two organizations formed in response to the 2018 school shooting at Marjory Stoneman Douglas High School in Parkland, Florida.
For the project, the groups have partnered with ElevenLabs, an AI company that creates synthetic voices from recorded speech. Their aim is to flood Congress with these messages and push members to take gun control more seriously. In a few clicks, users of the site can send the voice messages to their representatives and senators.
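For readers curious about the mechanics, the sketch below shows roughly how a platform like The Shotline might generate one of these audio messages. It is a minimal illustration only, assuming a voice has already been cloned from recorded speech and its voice ID is known; the endpoint shape follows ElevenLabs' public text-to-speech REST API, but the credential, voice ID, model name and message text here are placeholders, not details from the project itself.

```python
# Minimal sketch: synthesizing a spoken message with ElevenLabs'
# text-to-speech REST API. Assumes a voice was already cloned from
# recordings; the key, voice ID and text below are hypothetical.
import requests

API_KEY = "YOUR_ELEVENLABS_API_KEY"  # placeholder credential
VOICE_ID = "YOUR_CLONED_VOICE_ID"    # ID of the previously cloned voice

message = (
    "My parents recreated my voice using AI so I can finally ask you "
    "to do something to protect kids."
)

response = requests.post(
    f"https://api.elevenlabs.io/v1/text-to-speech/{VOICE_ID}",
    headers={"xi-api-key": API_KEY, "Content-Type": "application/json"},
    json={
        "text": message,
        "model_id": "eleven_monolingual_v1",  # assumed model name
        "voice_settings": {"stability": 0.5, "similarity_boost": 0.75},
    },
    timeout=30,
)
response.raise_for_status()

# The API returns raw MP3 audio, which a calling platform could then
# route into a phone call to a congressional office.
with open("message.mp3", "wb") as f:
    f.write(response.content)
```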
Rupal Patel, a Northeastern professor who has spent her career researching how technology can be used to help give people their voices back, believes it’s a creative use of the technology.
“It makes it real,” Patel says. “It’s not necessarily about the person. It’s about the story, and by adding a voice to it we can see what they sounded like, what their dreams were. We associate that with a voice, that closeness.”
But another question has arisen: Is it ethical to recreate the voices of the dead, even if it’s for a good cause?
Patel says that since the children's voices are being recreated by their own parents, there isn't much cause for concern.
“It’s all about consent,” she says. “What makes it ethical and admissible is that this was consented to by the parents of the child. This is them using an artificial voice as a way to convey a message. They have a right to do that because that child is below adult age. The parents are still the legal guardians. Any other person who uses it for a bad message doesn’t have the consent of the individual or the guardian.”
That said, she notes, there are currently no laws protecting people from having their voices recreated with AI.
“I don’t think it’s a black and white line,” Patel says. “We have to really think about these things in a much more nuanced world. If they were using it to commit fraud, that is one thing. If they are using it creatively to convey a message, it’s a different thing.”
She compares the technology to Photoshop, which people use all the time to alter images.
“This is a continuum of technology, and we have to treat it like that,” she says. “How do we use it and evaluate it case by case? That may not sound like a satisfactory answer for some people, but we can’t just wipe it out as unethical.”
James Alan Fox, a Northeastern professor of criminology, law and public policy, says he understands the sentiment behind the project and recognizes parents' pain and anger over the loss of a child. But he does not think the project will be particularly effective.
“Whether it be parents of Sandy Hook Promise, Parkland or other cases, I understand their attempts to convince Congress and folks in Washington to pass preventative measures that will avoid other parents from walking in their shoes, and parents should speak for their sons and daughters, and they have,” he says.
“But having a murdered child speak for themselves in a synthetic-engineered voice, I don’t think that will help,” he adds. “In fact, and maybe this is just me, but when I was listening to the audio, it made me focus more on the quality of the voice, more than the quality of the message. It’s contrived and it detracts from the message.”
With the rise of deepfakes, Fox says, it's important to be careful about how this technology is used.
“I’m OK with AI, for example, creating a new Beatles song, but that’s for entertainment purposes,” he says. “I think that’s where I draw the line.”
He says it would be more impactful for the parents of these victims to keep advocating in their own voices. He points to inroads parents have already made, including pushing for the first gun control bill Congress had passed in decades, in 2022.
“What the parents can say is very powerful and would have a greater impact than an AI-engineered voice from the grave,” he says.