Image by Futurism

Professor Warns of “Nightmare” Bots That Prey on Vulnerable People

“My current nightmare application is blending GPT-3, facial GANs, and voice synthesis to make synthetic ELIZAs that drive vulnerable people (literally) crazy.”

Artificial Intelligence / Carnegie Mellon University / Simon DeDeo

Imagine that you make a new friend on Twitter. Their pithy statements stop you mid-scroll, and pretty soon you find yourself sliding into their DMs.

You exchange a few messages. You favorite each other’s tweets. If they need a hand on GoFundMe, you help out.

Now imagine how you’d feel if you found out your friend didn’t really exist. Their profile turns out to be a Frankensteinian mashup of verbiage dreamed up by the powerful language generator GPT-3 and a face born from a generative adversarial network, perhaps with a deepfaked video clip thrown in here and there. How would it affect you to learn that you had become emotionally attached to an algorithm? And what if that “person” was designed to manipulate you, influencing your personal, financial, or political decisions like a garden-variety scammer or grifter?

It might sound far-fetched, but people have been fooled by computers masquerading as human since as far back as 1966, when MIT computer scientist Joseph Weizenbaum created the ELIZA program. ELIZA was built to simulate a psychotherapist by parroting people’s statements back to them in the form of questions. Weizenbaum was unsettled by how seriously users reacted to it — famously, his secretary asked him to leave the room while she was talking to it — and he ultimately became a critic of artificial intelligence.
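ELIZA’s trick was simple pattern matching and pronoun “reflection,” with no understanding behind it. A minimal sketch of that technique in Python gives the flavor; the patterns and reflection table here are invented for illustration, and Weizenbaum’s original (written in MAD-SLIP) was considerably more elaborate:

    import re

    # Swap first-person words for second-person ones so a user's
    # statement can be turned back on the speaker as a question.
    REFLECTIONS = {"i": "you", "me": "you", "my": "your",
                   "am": "are", "i'm": "you're"}

    def reflect(text):
        # Word-by-word pronoun swap; anything unknown passes through.
        words = text.lower().strip(".!?").split()
        return " ".join(REFLECTIONS.get(w, w) for w in words)

    def respond(statement):
        # If the user says "I am ..." or "I feel ...", mirror it back.
        match = re.match(r"(?i)i(?:'m| am| feel) (.*)", statement.strip())
        if match:
            return "Why do you feel " + reflect(match.group(1)) + "?"
        # Otherwise fall back to a generic reflective prompt.
        return "Why do you say " + reflect(statement) + "?"

    print(respond("I am unhappy with my job"))
    # -> Why do you feel unhappy with your job?

Even this toy version shows why transcripts could feel attentive: every reply re-centers the user’s own words as a question about their feelings.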

For the rest of this article, please go to the source link below.

By Kelly Catalfamo
(Source: futurism.com; https://tinyurl.com/yhjhg3gk)