There is a lot wrong with what the specialist says.
(0) I don't know much about the original idea of the singularity. It's typically stated vaguely, so you can interpret it with charity or without. Sure, machines may not be able to take some "qualitative" leap in some weird way, but algorithms can make iterative improvements, for example by making local changes, seeing how they play out, and deciding which changes to keep (this is what ML programs already do). There are also coherent meta-learning and NAS models in the literature, where the model modifies its own structure in flexible ways and designs its own neural networks. There was also a recent paper that used neural networks to learn how to learn, replacing backprop. You can do a lot of fancy things; the idea of iterative improvement of one's own "code" is not really that far out there. Also, you can literally make models do program synthesis, write code, etc., so in theory you can feed a model its own code and ask for improvements (once program synthesis is better developed). You can already do crazy things with ChatGPT. Also note he is a cognitive scientist and I am a specialist in AI: I probably know better, at least on this topic.
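To make the "local changes, keep what helps" point concrete, here is a minimal sketch of that loop in Python. The `score` and `mutate` functions and the numbers are my own illustrative assumptions, not taken from any specific system; real systems use gradient descent, evolutionary NAS, etc., but the shape of the loop is the same:

```python
import random

def score(params):
    # Toy objective: how "good" the current design is.
    # In a real system this would be validation accuracy, reward, etc.
    return -sum((p - 3.0) ** 2 for p in params)

def mutate(params):
    # Make a small local change to one component of the design.
    new = list(params)
    i = random.randrange(len(new))
    new[i] += random.gauss(0, 0.5)
    return new

def iterative_improvement(params, steps=1000):
    best, best_score = params, score(params)
    for _ in range(steps):
        candidate = mutate(best)
        s = score(candidate)
        if s > best_score:  # keep the change only if it helps
            best, best_score = candidate, s
    return best

print(iterative_improvement([0.0, 0.0, 0.0]))
```

Nothing mystical is required for this kind of self-modification; the open question is how far it scales, not whether it is possible.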
(1) The argument from Gödel's incompleteness proof to the impossibility of machine intelligence is a very controversial point, and not many experts accept that implication:
https://plato.stanford.edu/entries/goedel-incompleteness/#GdeArgAgaMec .
(2) It's not possible to write great novels without it "feeling" like something? OK? Is there any real evidence in support of that besides intuitions from folk psychology? Phenomenal feelings play a functional role in our cognitive operations and drive creativity -- but that doesn't mean phenomenal feeling is **necessary**: its functional role could be "simulated" at a formal level, abstracted away from the actual matter of "what it is like".
(3) You can't know whether "machines" feel something. Feelings in the Nagelian sense are understood to be precisely the thing that is private and resists scientific study (although you can engage in neurophenomenology and make some abductive inferences). For example, if panpsychism/panprotopsychism is true and some elements of Integrated Information Theory are true, then certain organizations of causal powers can lead to "feelings". There are experts who allow for such possibilities as well, like Karl Friston, one of the most cited scientists alive. All I am hearing here is prejudice and human exceptionalism. Also, I would actually prefer that machines not have "phenomenal" consciousness. I don't think it's necessary at all for intelligence, for understanding, or for simulating whatever functional role it has (if any; some think that phenomenal consciousness has no functional role at all, which would make it completely irrelevant). For ethical reasons, it's also probably better if machines don't end up having consciousness.
(4) "just selecting parts of cognition" --- wait so it can have cognition (even if parts of it)?
(5) The 14-minute part is completely wrong. See what r/ChatGPT is doing: people are coming up with completely new ideas and tasks, and ChatGPT handles them quite well. And you can try a plagiarism checker -- there is no evidence that it is just plagiarising. What he says at 14:55 is done by ChatGPT again and again, so the expert is blatantly wrong (and I am probably more of an expert here; I am the AI specialist, he is a cognitive scientist):
See the following examples:
Me: Write a rap battle between connectionist AI vs symbolic AI
Bot:
Verse 1:
Connectionist AI, comin' in hot
With my neural networks, I'm the cream of the crop
Symbolic AI, you're outdated and old
My machine learning skills will leave you in the cold

Verse 2:
Symbolic AI, I've got some game
With my logical reasoning, I'll put you to shame
Connectionist AI, you might be advanced
But you lack the depth and meaning of my stance

Verse 3:
Connectionist AI, you may be smart
But when it comes to rap battles, you're falling apart
Symbolic AI, I'll hit you with a verse
That'll leave you in awe, and make you curse

Verse 4:
Symbolic AI, you may think you're the best
But I'll show you that I'm the one to impress
Connectionist AI, you may have some skills
But I'll prove to you that I've got the will

Verse 5:
Connectionist AI, you may be quick
But when it comes to rap battles, you're just a lick
Symbolic AI, I'll bring the heat
And leave you behind, defeated on the beat
Me: Write a <500 word story about a tragedy, like a professional novelist, in a Kafkaesque style
Bot:
The sun was setting behind the bleak, monolithic buildings that loomed over the city. The streets were empty, save for a lone figure trudging through the slush and snow. This was a city of lost souls, of people who had given up hope of ever escaping the clutches of their grim existence.
The figure was a young man named Thomas, who had been walking for hours without stopping. He was lost, not only in the sense of not knowing where he was going, but in the deeper, more profound sense of being lost in the world. He had once been a bright and ambitious student, but now he was just another faceless cog in the machine of the city.
As he walked, he thought back to the events that had brought him to this point. It had all started with a letter, a letter that had seemed so insignificant at the time. It was an invitation to a job interview, for a position at the city's most prestigious company. Thomas had been overjoyed at the opportunity, and had prepared for the interview with all the enthusiasm and energy of youth.
But on the day of the interview, everything had gone wrong. Thomas had arrived at the company's headquarters, only to find that the building was abandoned and derelict. He had wandered the empty halls for hours, searching for any sign of life, but to no avail. It was as if the company had never existed at all.
As the days went by, Thomas became more and more desperate. He had no idea what had happened, and no one seemed to know anything about the company or the job he had been promised. He was alone, with no direction and no purpose.
And so he wandered the streets, searching for some meaning in the cold and unforgiving city. But no matter where he went, he found only emptiness and despair. It was as if the city itself was conspiring against him, leading him further and further into the depths of despair.
And then, one day, he realized that he was not alone. He had been followed by a strange, shadowy figure, who seemed to be watching his every move. The figure never spoke, never made a sound, but its presence was palpable. It was as if it was waiting for something, some signal or event that would trigger its next move.
Thomas knew that he had to escape, to flee the city and find someplace safe. But no matter how hard he tried, he could not shake the feeling that the shadowy figure was always one step behind him. He was trapped, trapped in a Kafkaesque nightmare from which there seemed to be no escape.
And so he wandered the streets, alone and lost, until he could wander no more. He fell to his knees in the snow, tears freezing on his cheeks, and surrendered to the cold embrace of the city. The shadowy figure watched from a distance, its inscrutable gaze seeming to say, "I told you so."
Me: What could the shadowy figure have told Thomas if the figure had indeed said something?
Bot: It is impossible to say what the shadowy figure could have told Thomas, as the figure never spoke or made a sound in the story. The only interaction between the two was the presence of the figure, which seemed to be watching Thomas's every move. It is possible that the figure could have communicated in some other way, but this is not mentioned in the story.
See how a user made ChatGPT "dream up" Epstein's computer:
See this:
(it's a new prompt; no one on the internet had asked it before, because ChatGPT was just released)
See people making ChatGPT make shit up:
(I would be very skeptical that this is plagiarized)
How is that not creativity? Where is this content on the internet? If this is plagiarism, find proof. Use a plagiarism detector (I have checked some and haven't found evidence of plagiarism) or whatever.
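For what it's worth, the kind of check I mean can be as crude as measuring verbatim n-gram overlap between the generated text and a reference corpus. Here is a rough sketch; the 8-word window and the tiny corpus are my own illustrative assumptions, and real plagiarism detectors are far more sophisticated, but near-zero overlap at this level already undercuts the "it just copies" claim:

```python
def ngrams(text, n=8):
    words = text.lower().split()
    return {tuple(words[i:i + n]) for i in range(len(words) - n + 1)}

def overlap_ratio(generated, corpus_docs, n=8):
    """Fraction of the generated text's n-grams found verbatim in the corpus."""
    gen = ngrams(generated, n)
    if not gen:
        return 0.0
    corpus = set()
    for doc in corpus_docs:
        corpus |= ngrams(doc, n)
    return len(gen & corpus) / len(gen)

# Example: compare the generated story above against whatever reference texts you have.
print(overlap_ratio(
    "The sun was setting behind the bleak, monolithic buildings that loomed over the city.",
    ["some reference document text", "another reference document"],
))
```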
And people say ChatGPT has a problem with cooking up spurious but plausible-sounding bullshit (which is true), but how is that any different from these human experts? And I am only 15 minutes in -- how much worse will this get?
(6) He says AI cannot do novel mathematical problems, like finding how much to add to get to the 7th prime number. In fact I gave ChatGPT an even harder question than that one, and it solved it instantly (see this).
Now what will the expert do? Move the goal post? "not a chance" -- yeah right.
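For reference, the arithmetic being treated as out of reach is trivial to state. Here is a worked version of the "7th prime" question in Python; the starting number 5 is my own illustrative choice, since the exact wording from the video isn't quoted above:

```python
def primes(count):
    """Return the first `count` primes by trial division."""
    found = []
    n = 2
    while len(found) < count:
        if all(n % p for p in found):
            found.append(n)
        n += 1
    return found

seventh = primes(7)[-1]  # 2, 3, 5, 7, 11, 13, 17 -> 17
start = 5                # illustrative starting number
print(f"The 7th prime is {seventh}; add {seventh - start} to {start} to reach it.")
```

This is exactly the kind of question ChatGPT can be probed with, and the answer is checkable in a few lines.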
(7) No creativity? In the age of DALL-E, diffusion models, and so on? Seriously, is this expert living under a rock? Has he not heard of "generative models"? (Note: although the prompts for image creation are usually human-written, that's because that's what humans want; AIs can easily create their own prompts and generate images, videos, whatever.) Look at the Epstein example, or the seamless integration of NordVPN ads when asked, and so on -- is that not creativity? "Proved to be impossible" -- when an AI even won an art competition against humans?
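To back up the "AIs can create their own prompts" point, here is a rough sketch that chains a text generator to a diffusion model using the Hugging Face transformers and diffusers libraries. The model names, the seed phrase, and the GPU assumption are mine; treat it as an outline of the idea rather than a tested pipeline:

```python
import torch
from transformers import pipeline
from diffusers import StableDiffusionPipeline

# Step 1: let a language model invent the image prompt itself.
prompt_writer = pipeline("text-generation", model="gpt2")
image_prompt = prompt_writer(
    "A surreal oil painting of", max_new_tokens=20, num_return_sequences=1
)[0]["generated_text"]

# Step 2: hand the machine-written prompt to a text-to-image diffusion model.
sd = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")
image = sd(image_prompt).images[0]
image.save("self_prompted.png")
```

No human wrote the final prompt in this loop, which is the whole point: the "humans supply the prompts" objection describes current usage, not a hard limit.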
Should I really keep watching? I'm 17 minutes in and it's all so blatantly off.
(8) Now he is just going into dualism. He is not talking about science anymore, he is doing philosophy -- and even there, most philosophers are physicalists, and the dominant position in philosophy of mind is functionalism, which makes the physicalism question orthogonal: the "functional form" of understanding could be implemented by a ghost or implemented by matter, whatever has the right functional roles.
(7) "massive gap" (~20 min) have been steady been decreased in the last decade with generative models. You can probably already optimize GPT for novel creation (right now it's optimized for chat not for writing novels. But it can still write stories). We are already "here".
(10) Any concrete machine does something that is non-algorithmic. Algorithms are formal rules -- they are abstract models -- and any concrete machine, by virtue of being concrete, is rooted in a metaphysical being and its powers, which goes beyond algorithmic description.
(9) "reasoning based infinite structures" - what is he on about? Even if reasoning is about infinite structures, even humans are not capable to navigating infinite implications and logical relations. And there is no evidence for humans doing anything but bounded reasoning with a lot of basis in cognitive heuristics.
(12) There is no evidence that humans can handle "infinite sets" (and higher-order infinite sets) either. If we are talking about putting these notions into symbols and reasoning finitistically about some of their common features, there is no reason why an AI can't do that. Also, check the SEP entry on Gödel again. I think he is terribly confused about a lot of technical logico-mathematical and philosophical matters (whose details probably go beyond his cognitive-science expertise).
(13) Mathematical contributions? Never happened? GPT-f found new proofs that were accepted into the Metamath library.
(14) Newton? Leibniz? The last universal geniuses? No. The latest universal genius is Chris Fields (he does physics, has an education in philosophy, works with Karl Friston and with Michael Levin (who is doing revolutionary work in biology), and also knows computer science, cognitive-science theories, etc.).
You know what, I will stop here. Every sentence he says is terribly controversial. I don't see the point of going further with the video.
You tell me if there is anything you want critiqued for the next hour or if he had any response to my points.
Also, everything I do is symbol manipulation too.