From my experience working in IT, there are a lot of things that could be automated but are not, because having a human accountable for them (other than the head of the IT team) is desirable.
And this is something that the fake AI we've come up with is simply incapable of.
Yes, I think most of the critiques made against those who see the potential of AI and its ability to eliminate jobs come from people focused on where it is now, not where it is headed. Sure, the AI will need prompters, but as it continues to improve, the volume of prompts required to attain a desired result will likely decrease. To prompt an AI to create a podcast, for example, it is currently done on a per-episode basis. Imagine if the technology improved to the point where you gave it a general prompt (i.e., create a podcast about history) and told it to create a weekly episode on that general subject matter. Audio is what is done now, but AI videos have come a long way from their beginnings as well. I've seen people jeer at them, claiming that AI could never do what humans do when it comes to video creation, but they often ignore the continuing development of AI. I would be interested in seeing someone put together a well-informed argument that lays out the specific reasons why AI development would be capped at one particular point. In other words, what about human intelligence makes it unique and unable to be replicated by a medium other than neurons?
what about human intelligence makes it unique
You cannot stop companies from adopting software tools. But mass starvation can motivate people. The government will act.
And this is something that the fake AI we've come up with is simply incapable of.
What "original idea" have you fabricated?
And this is something that the fake AI we've come up with is simply incapable of.
What "original idea" have you fabricated?
I invented a new word once that did not exist yet. That is something many others have done before me, and it is not even an act of remarkable creativity. Yet, if my understanding is correct, an AI cannot do this. It can only paste together 'tokens' it has been programmed to know have meaning, but it cannot come up with new ones.
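To make the "tokens" point concrete, here is a minimal sketch of greedy longest-match subword tokenization. The vocabulary here is invented purely for illustration; real models learn theirs from data (e.g. via byte-pair encoding), and real tokenizers are more involved:

```python
# Toy vocabulary of known subword tokens (invented for this example).
# Single characters are included so any input can be covered.
VOCAB = {"un", "break", "able", "ness", "a", "b", "e", "k", "l", "n", "r", "s", "u"}

def tokenize(word: str) -> list[str]:
    """Split `word` into the longest vocabulary pieces, left to right."""
    tokens, i = [], 0
    while i < len(word):
        # Try the longest candidate substring first, shrinking until one matches
        for j in range(len(word), i, -1):
            if word[i:j] in VOCAB:
                tokens.append(word[i:j])
                i = j
                break
        else:
            raise ValueError(f"no token matches at {word[i:]!r}")
    return tokens

# Even a made-up word comes out as a sequence of already-known pieces:
print(tokenize("unbreakableness"))  # ['un', 'break', 'able', 'ness']
```

Whatever the model emits is drawn from this fixed inventory, so a genuinely new token is outside its output space, which is roughly the point being made above, though models can still emit novel *combinations* of tokens that spell strings never seen in training.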
I've watched quite a bit of Sabine. Often she is correct about things, but not always. She cannot even properly define capitalism:
I found this surprising as it's the opposite of what I'd been hearing. Basically, coding behaves more like math than language when it comes to the application of AI. It's better to be more precise and intentional than it is to get generic "good enough" code. At least in its current form, AI doesn't seem to be beating good coders. What's more, over-reliance on AI will create more bugs over time, meaning someone's going to have to debug them.
So then what is capitalism?
What she is describing as capitalism there is venture capitalism.
So then what is capitalism?
What she is describing as capitalism there is venture capitalism.
By her definition of capitalism, there is no implied distinction between the state lending money to the apple producer for a juice press and a non-state actor doing so. Sure, in the example she gives, it is a non-state actor doing this, but her definition gives no such implied restriction. Capitalism is not money-lending alone (this is a part of banking). I'll cite Wikipedia's definition here: "Capitalism is an economic system based on the private ownership of the means of production and their operation for profit." source: https://en.wikipedia.org/wiki/Capitalism
there will be good times for programmers who can program in the coming years, after millions of people have been discouraged from entering the field and people have realized that language models cannot program for shit beyond writing buggy copies of solutions to trivial problems
and this is not about it being "early days" in AI - it's about fundamental limits to what you can achieve with statistical learning (which is the basis of language models). I think this is obvious stuff to most machine-learning engineers - e.g. concepts like curse of dimensionality. But obviously you will not hear from these people, you'll only hear the opinions of CEOs (most of whom have never written a single line of code) who have to impress their shareholders with technological progress and ostensibly smaller wage costs.
Maybe. As for now, the tech layoffs continue. Who is to say that they will not develop superior methods of machine learning? Where are those hard limits? Also, language models are not the only form AI comes in currently.
the only reason we are talking about such scenarios at this point in history is statistical learning - i.e. machine-learning based on data. I'm not gonna write a statistical-learning 101 class here but things like curse of dimensionality, bias-variance tradeoff etc are not things you can just invent yourself out of - these are fundamental limits. For example, if you have a model with 1,000 degrees of freedom but only 10 data points to learn from, your model will be garbage no matter what. If your model had 1 million data points but is then asked to predict on a new scenario that is not similar to before-seen datapoints, you will get garbage output no matter what.
of course there's a possibility we will invent something entirely new - but that would have nothing to do with language models, so such a scenario could be hypothesized without the existence of chatGPT or whatever. We are very far away from that, at least that's my view as someone with a background in computer science and who works with machine learning on a daily basis (including reinforcement learning that you have in the video)
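The degrees-of-freedom point above can be sketched in a few lines. This is a toy illustration, not anything from the thread: the signal, degrees, and sample counts are made up, but the behavior (a model flexible enough to memorize its few data points produces garbage off the training distribution) is the bias-variance story being described:

```python
import numpy as np

rng = np.random.default_rng(0)

# 10 noisy samples of a simple underlying signal on [0, 1]
x_train = np.linspace(0.0, 1.0, 10)
y_train = np.sin(2 * np.pi * x_train) + rng.normal(0.0, 0.1, size=10)

# A modest model vs. one with as many free parameters as data points
simple = np.polynomial.Polynomial.fit(x_train, y_train, deg=3)
flexible = np.polynomial.Polynomial.fit(x_train, y_train, deg=9)

# The degree-9 fit passes (numerically) through every training point...
train_error = np.max(np.abs(flexible(x_train) - y_train))
print(f"degree-9 max training error: {train_error:.2e}")

# ...but asked about a point far from anything it has seen, it blows up,
# even though the true signal stays bounded in [-1, 1]
print(f"degree-9 prediction at x=2: {flexible(2.0):.1f}")
print(f"true value at x=2: {np.sin(2 * np.pi * 2.0):.1f}")
```

Zero training error plus a wild out-of-range prediction is exactly the "1 million data points but a new, dissimilar scenario" failure mode: the data can't tell you what happens where you have no data.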
The curse of dimensionality has been solved by humans knowing what they are doing by reflection. That is possible in machines but models don't reflect.
Yes, we solve it by a mix of a priori reasoning, creativity, intuition, etc. - like when Einstein came up with general relativity without having seen a single example of gravity bending light.
things like curse of dimensionality, bias-variance tradeoff etc are not things you can just invent yourself out of - these are fundamental limits.
Good point. Those are exactly the hard limits I was wondering about. Humans have surpassed these limits, but that is likely because our brains do not function in the same way that computers do.
We are very far away from that, at least that's my view as someone with a background in computer science and who works with machine learning on a daily basis (including reinforcement learning that you have in the video)
Then you would be more familiar with the subject than most. Sabine is what has been termed a "science communicator," and her background is not in that field either. While this fact does not disqualify someone from being able to accurately speak on the matter, one would have to trust that she and/or the infrastructure of individuals who support her are able to take complex subject material, process it, and turn it into material that the layperson can digest. This task can be a bit tricky, since without the right balance, one can oversimplify something to the point that an essential piece of information is lost. She generally does pretty well at this, and your background in the material would suggest that you would be better suited to say whether or not she did the arguments justice.
There hasn't really been any huge leap in AI theory.
What if this is due to the lack of original thinking in the field? What if many of the researchers are being "trained" by our models of what the field should be? One can go to school and attain a degree in Computer Science, and then further specialize, but what if that training is setting many people up to narrow their focus onto mathematical and programming principles which may not actually be the correct approach in replicating human-type intelligence? I believe cross-field collaboration would be necessary (like neuroscientists and psychologists meeting with programmers and engineers, for example), and this does appear to happen to a certain extent. Is it enough?
the only reason we are talking about such scenarios at this point in history is statistical learning - i.e. machine-learning based on data. I'm not gonna write a statistical-learning 101 class here but things like curse of dimensionality, bias-variance tradeoff etc are not things you can just invent yourself out of - these are fundamental limits. For example, if you have a model with 1,000 degrees of freedom but only 10 data points to learn from, your model will be garbage no matter what. If your model had 1 million data points but is then asked to predict on a new scenario that is not similar to before-seen datapoints, you will get garbage output no matter what.
of course there's a possibility we will invent something entirely new - but that would have nothing to do with language models, so such a scenario could be hypothesized without the existence of chatGPT or whatever. We are very far away from that, at least that's my view as someone with a background in computer science and who works with machine learning on a daily basis (including reinforcement learning that you have in the video)
I agree. ChatGPT may also be a vital stepping stone in the development of an AGI. Maybe the AGI is not developed using the limitations of ChatGPT, but perhaps the model has made it clear to people in the field that they need something else. This would be a crucial realization necessary to spur the next step in development.
ChatGPT is not what the future of programming will be. That is correct. But to think that is all there is, is a limit on the imagination. What we have right now is fake A.I. - the real thing, of course, is too dangerous to allow to exist, but it is possible.
Math concepts don't rely on language but on vision and body motion. Brains would need to be simulated in virtual environments.
The curse of dimensionality has been solved by humans knowing what they are doing by reflection. That is possible in machines but models don't reflect.
Statistics can be made more effective by tutorials in VR, but then we need to program those. More mathematics for the masses - like touch phones, not NFT-scam prompt engineers.
We can design boxes for kids that resemble real A.I., but it takes time. It would be better than what the education system offers today.
What if, in order to reason about the world, it won't do to just shuffle words around or do algebraic operations on symbols? What if, in order for you to know that a wolf is supposed to eat the sheep and not the other way around, you have to know - really know - what a wolf is? Like, you need to feel in your bones what it is. Now we are talking about qualia, consciousness... we don't even know what these things are, let alone how to program them into a computer.
Chinese room - Wikipedia (en.wikipedia.org)
Human beings move the goal posts that AI aims for.
AI can never innovate.
Human beings move the goal posts that AI aims for.
AI can never innovate.
most humans never innovate
Human beings move the goal posts that AI aims for.
AI can never innovate.
most humans never innovate
We don't all need to. We don't all need to be Einstein. We just need one in 5.6 billion to be Einstein and we win.
Well... as for Diogenes: being insane in a different way from everyone else doesn't necessarily mean you are right. It just means you're uniquely wrong... which, don't get me wrong, is a high achievement!
What makes one right or wrong? Why was Diogenes wrong?