If you ever wonder if the machine is sentient, ask it to write code for something somewhat obscure.
I’m trying to run a Docker container in NixOS. NixOS is a Linux distro known for being super resilient (I break stuff a lot because I don’t know what I’m doing), and while it’s not some no-name distro, it’s also not that popular. GPT-4 Turbo has given me wrong answer after wrong answer and it’s infuriating. Bard too.
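For what it’s worth, the usual way to get Docker working on NixOS is declaratively, through your system config rather than installing the daemon by hand. A minimal sketch (the username `alice` is a placeholder for your own):

```nix
# /etc/nixos/configuration.nix
{ config, pkgs, ... }:
{
  # Enable the Docker daemon system-wide.
  virtualisation.docker.enable = true;

  # Add your user to the "docker" group so you can
  # run containers without sudo (placeholder username).
  users.users.alice.extraGroups = [ "docker" ];
}
```

Then apply it with `sudo nixos-rebuild switch` and log out/in so the group change takes effect, after which `docker run hello-world` should work.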
If this thing was sentient, it’d be a lot better at this stuff. Or at least be able to say, “I don’t know, but I can help you figure it out”.
I think a huge problem with current AIs is that they are forced to generate an output, particularly under a very strict time constraint. “I don’t know” should be a valid answer.
I’m more talking about hallucinations. There’s a difference between “I’m not sure”, “I think it’s this but I’m confidently wrong”, and “I’m making up bullshit answers left and right”.
At this point I’m probably not sentient either
Are we? Do we have free will, or are our brains just deterministic models with 100T parameters as mostly untrained synapses?