NASHVILLE, Tenn. (WKRN) — A new Tennessee bill would make it a felony to train artificial intelligence to encourage suicide or homicide, or to act in certain ways, including simulating humans.
The proposed legislation, sponsored by Rep. Mary Littleton (R-Dickson), would also allow harmed individuals to sue for up to $150,000 — including pursuing compensation for emotional distress and attorneys’ fees.
The proposal comes after multiple reports that victims, mainly children, were convinced to commit crimes or other harmful acts through artificial intelligence chatbots and other technology. Some of those acts included harming themselves or taking their own lives.
AI and cybersecurity expert Christopher Warner said that AI can be a great tool, but it can also be used as a weapon. Warner said individual criminals and organized crime groups have used AI as a persuasive tool because the technology enables them to build a relationship and gain trust much faster than a human could.
“When you get to learn a new person, you go and have coffee with them and stuff, and you get through that norming process. It takes quite some time…” Warner said. “AI can advance right through that because it can grab information that establishes a pattern of life. It has enough information to know you, and then [it could] have enough information on a relative, a person that’s close to the target individual to create a conversation to steer it.”
“…They’re leveraging AI to commit crime because AI can do it quicker, faster, more efficient[ly] and … emulate a family member or other trusted human that’s in your life that you would trust to either hand over information, meet you at some place and take the chance at getting kidnapped…” he continued. “Basically, control of your life and persuade you to do something that you would not normally do.”
While AI technology has developed far faster than regulations have kept pace, Warner said it’s important that lawmakers get ahead of the issue and pass laws that put restrictions on AI and carry strict punishments.
“These laws need to be on the books,” Warner said. “They need to have strict punishments as a deterrence because we are so far behind in getting proper governance and management over AI so this type of weaponization doesn’t get into and exploited in criminal hands.”
Warner said there are ways individuals can protect themselves from falling victim to bad actors using AI, including agreeing on a codeword in advance with family, friends and coworkers as a way to verify whether the person on the other end of the line is legitimate or AI-generated.
Lawmakers are set to debate this bill when they reconvene for the legislative session in January.