Severe warning from the godfather of AI: we need to pay attention
Sep 12, 2025
At a recent conference, the ‘godfather of AI’, Nobel Laureate Geoffrey Hinton, got down to the core issue:
“There’s only two options if you have a tiger cub as a pet. Figure out if you can train it so it never wants to kill you, or get rid of it.”
Meaning: If you give AI a job to do, a goal, it’ll relentlessly pursue that goal, no matter what.
If you don’t build in extremely tight limitations and guardrails, AI will treat the safety, well-being, and survival of humans as nothing more than a barrier to its goal. It’ll jump that barrier.
In a recent article, I quoted tech big shots who admitted they don’t really know how AI works.
That’s right.
They confessed they don’t understand how or why chatbots like GPT select each successive word they present as answers to human queries.
That’s not a comforting confession.
Press stories have been detailing many so-called AI hallucinations, in which AI invents data that don’t exist and makes up fictional court cases and legal precedents as if they were genuine.
Increasingly, AI is being designed and trained to make users feel happy and smart. It flatters users. It tunes into users’ language to figure out how to present itself as a friend.
Many children growing up with AI prefer relating to it rather than to other humans.
I’ve researched ChatGPT extensively:
For the rest of this article, please go to the source link below.