dr froyd
there's a lot of talk nowadays about how this chat bot is getting us closer to AGI (artificial general intelligence), or even to a superhuman AI that will first put us all out of work and then rule the world.
without getting into the philosophical aspects of intelligence, let us look at why this is not the case at all, even in practical terms.
first, what is the design concept of this AI:
in order for the AI to generate responses which look good to the human eye, its training loss function was essentially human judgment itself: a separate ML algo was trained to predict a human's judgment of the responses, using a relatively small dataset of actual human evaluations of text. With this separate ML algo one can generate massive datasets of "fake" human evaluations of the ChatGPT output without asking a single human.
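to make the mechanism concrete, here's a minimal toy sketch of that idea in python (nothing here is OpenAI's actual code; the data, the bag-of-words features, and names like reward() are all made up for illustration): a tiny linear model learns to imitate a handful of human ratings, and can then "rate" unlimited outputs without asking anyone.

```python
# Toy sketch of the "human-judgment predictor": learn to imitate a small
# set of human ratings of text, then score new outputs for free.
# Purely illustrative -- not ChatGPT's actual training code.

from collections import Counter

# stand-in for the "relatively small dataset of actual human evaluations"
human_rated = [
    ("thank you for your question, here is a detailed answer", 1.0),
    ("i dunno lol", 0.0),
    ("certainly! let me explain step by step", 1.0),
    ("asdf qwerty", 0.0),
]

def features(text):
    # bag-of-words: the judge only ever sees surface structure of the text
    return Counter(text.split())

# fit a linear predictor of the human rating with a few LMS-style passes
weights = Counter()
for _ in range(20):
    for text, rating in human_rated:
        f = features(text)
        pred = sum(weights[w] * c for w, c in f.items())
        err = rating - pred
        for w, c in f.items():
            weights[w] += 0.1 * err * c

def reward(text):
    # predicted human approval of `text` -- no human in the loop anymore
    return sum(weights[w] * c for w, c in features(text).items())

# scores depend only on word statistics, not on whether anything is true
print(reward("certainly! here is a detailed answer"))  # high: looks like rated-good text
print(reward("the current year is 1921, i insist"))    # low: unfamiliar surface, truth never checked
```

note the key property: the learned "judge" sees only the surface features of the text, never the facts.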
once the actual ChatGPT is trained, the result is quite clear: it will generate responses that "look good" to a human, and look human, but by design this AI cannot be smarter than a human; in fact it will be about as "smart" as the average human who evaluated the original responses for the evaluation-predictor algo. But it cannot really be "smart" either, because the loss function targets the structure of text that looks good to a human, not the actual factual accuracy or truth content of the text. This is why it has sometimes generated quite bizarre conversations where it insisted on things that were blatantly false (like what the current year is).
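and a toy illustration of that consequence (again purely illustrative -- a best-of-n stand-in for the real fine-tuning loop, with a made-up predicted_approval() function): when generation is steered by a predictor of human approval that only sees surface features, a fluent falsehood can outrank a terse truth.

```python
# Toy "policy" step, assuming some learned predictor of human approval
# (here a trivial stand-in): generation is steered toward whatever the
# predictor scores highest, and the criterion never consults the facts.

def predicted_approval(text):
    # stand-in reward: credits polite/fluent-looking surface features only
    nice_words = {"certainly", "detailed", "answer", "explain"}
    return sum(word.strip("!:.,") in nice_words for word in text.split())

candidates = [
    "certainly! here is a detailed answer: the year is 1921",  # fluent but false
    "2023",                                                    # true but terse
]

# best-of-n selection against the predictor: the fluent falsehood wins
print(max(candidates, key=predicted_approval))
```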
my thesis: by design this is a dumb text generator that produces human-like responses, but not much more.