What’s notable isn’t whether it was sentient (it wasn’t) but that it unintentionally manipulated a reasonably intelligent person into making a claim that cost him a high-paying job.
This is the real point here. There are many papers exploring sycophantic behavior in language models.
Reward hacking is a troubling early behavior in AI, and god help us if they develop situational awareness.
The guy was just a QA tester, not an AI expert. But the fact that it fooled him badly enough to get him fired is wild. He anthropomorphized the thing with ease and never thought to evaluate his own assumptions about how he was prompting it with the intention of having it act human in return.
“You don’t need a knife for a braggart. Just sing a bit to his tune and then do whatever you want with him.” — from a song from a Soviet film, rhyme not preserved.
Humanity is in trouble.