For every single company that is actually innovating and working towards legitimately replacing humans in the workplace, there are probably at least two to five companies that are just trying to make a quick buck, run by opportunists who don't even know what they're doing.
It's almost like with crypto: a whole bunch of fraudsters ruined the perception of it before it even got a proper start.
I don't know the specifics of what it would take to run and serve AI systems to a large consumer base, but it's probably a lot, and the business model isn't refined yet.
I guess I could have my own personal server at home, with my own database of language models and training happening. Kind of like crypto miners, except in this case it's about more than just hoarding GPUs to solve cryptographic puzzles; it seems like it'd be a big investment of time to do this at any real scale.
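Just to make the local-hosting part concrete, here's a minimal sketch (my own illustration, not a full home-server or training setup) of running a small open model on your own machine with the Hugging Face transformers library; the model name and generation settings are placeholder choices.

```python
# Minimal local text-generation sketch using Hugging Face transformers.
# Assumes `pip install transformers torch`; "gpt2" is just a small
# placeholder model - swap in whatever fits your hardware.
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")

prompt = "A home AI server could"
outputs = generator(prompt, max_new_tokens=40, do_sample=True)

print(outputs[0]["generated_text"])
```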
Even a ten-teraflop computer can run a simple AGI.
I mean, it is all about having data to model what the user wants the AI to do.
You need a basic layout of processes and a practical application of reinforcement.
You begin with a foundation perception model and then a reasoning model.
It is very easy to break things down into steps/parts and then arrange them into something meaningful.
AI will be like the personal computer: everyone will have one, with no need for a subscription system.
Programming it should be as easy as web development,
because it is just a self-assembling network framework.
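To illustrate the "perception model feeding a reasoning model" idea, here's a toy sketch of my own (not anyone's actual system): a perception step turns raw input into features, and a reasoning step scores candidate actions against them. The feature extraction and scoring rules are made up for the example.

```python
# Toy sketch: a perception stage producing features, then a reasoning
# stage choosing among candidate actions. Purely illustrative; the
# features and scoring rules are invented placeholders.
from typing import Dict, List

def perceive(raw_text: str) -> Dict[str, float]:
    """Turn raw input into a few crude features."""
    words = raw_text.lower().split()
    return {
        "length": float(len(words)),
        "is_question": 1.0 if raw_text.strip().endswith("?") else 0.0,
    }

def reason(features: Dict[str, float], actions: List[str]) -> str:
    """Pick the action whose hand-written score fits the features best."""
    def score(action: str) -> float:
        if action == "answer" and features["is_question"]:
            return 2.0
        if action == "summarize" and features["length"] > 20:
            return 1.5
        return 0.5  # fallback preference
    return max(actions, key=score)

if __name__ == "__main__":
    feats = perceive("What would it take to run a model at home?")
    print(reason(feats, ["answer", "summarize", "ignore"]))
```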
Imagine if someone makes a Terminator. AI could then replace soldiers.
Imagine if someone mounts an AI on a drone with a gun. Then AI could replace the police.
Of course, the big thing that is coming is automated cars. Once automated cars are legal, we can have automated trucks, and then AI can replace truck drivers. I think that's only a matter of time. Then the trucking industry might go the same way that the print industry went in the 90s and 00s.
I'm not giving away the sauce on this. I'll just say that finance people have no idea how the system works. You probably know more than they do about this.
If you sound confident and have a top-tier school degree under your belt, a VC will literally give you a bunch of money, because it is chump change for them.
It is about the way we build the model and use it to ascertain the needs, wants, and preferences of a user, and then how we achieve them in the safest, most effective way possible, which is where the actual generality of a system comes in.
Generality being the widest, deepest, and highest way of weighing options in a temporal setting: width, depth, breadth, time; possibilities and their evaluation (goals and subgoals, comprehensive step building).
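As a loose illustration of "goals and subgoals, weighed over width and depth" (my own toy example, not the method described above), here's a sketch that expands a goal into subgoals with bounds on breadth and depth and keeps the best-scoring plan; the subgoal table and scoring are invented.

```python
# Toy goal/subgoal search: expand a goal into subgoals, bounded in
# breadth (options per step) and depth (steps ahead), then keep the
# best-scoring plan. Expansion and scoring are made-up placeholders.
from typing import List

SUBGOALS = {
    "make coffee": ["boil water", "grind beans"],
    "boil water": ["fill kettle", "switch kettle on"],
    "grind beans": ["fetch beans", "run grinder"],
}

def expand(goal: str, breadth: int) -> List[str]:
    """Return at most `breadth` subgoals for a goal (leaf goals return none)."""
    return SUBGOALS.get(goal, [])[:breadth]

def plans(goal: str, depth: int, breadth: int) -> List[List[str]]:
    """Enumerate step sequences down to `depth` levels."""
    if depth == 0 or not expand(goal, breadth):
        return [[goal]]
    result = []
    for sub in expand(goal, breadth):
        for tail in plans(sub, depth - 1, breadth):
            result.append([goal] + tail)
    return result

def evaluate(plan: List[str]) -> float:
    """Placeholder evaluation: prefer shorter plans."""
    return -len(plan)

if __name__ == "__main__":
    candidates = plans("make coffee", depth=2, breadth=2)
    print(max(candidates, key=evaluate))
```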
-
Animecat wrote:
If a quantum mechanism is involved, it would only increase the options we are allowed to choose from.
The brain imagines what things are like and then updates based on how our actions affect the world.
And of course, it does so in 3D.
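A very rough sketch of that imagine-then-update loop (a toy of my own, not a brain model): an internal 3D estimate is used to predict the effect of an action, and is then nudged toward what was actually observed.

```python
# Toy imagine-and-update loop: predict where an action moves a 3D point,
# observe the "real" outcome, and nudge the internal estimate toward it.
# The motion model, noise, and learning rate are arbitrary placeholders.
import random

state_estimate = [0.0, 0.0, 0.0]   # internal guess of a 3D position
learning_rate = 0.5

def predict(state, action):
    """Imagine the outcome: add the intended move to the estimate."""
    return [s + a for s, a in zip(state, action)]

def observe(state, action):
    """The 'world': the same move, plus noise we didn't imagine."""
    return [s + a + random.gauss(0.0, 0.1) for s, a in zip(state, action)]

for _ in range(5):
    action = [1.0, 0.0, 0.0]                    # try to move along x
    imagined = predict(state_estimate, action)  # what we expect
    actual = observe(state_estimate, action)    # what happens
    # update: move the estimate toward the observed outcome
    state_estimate = [i + learning_rate * (a - i)
                      for i, a in zip(imagined, actual)]

print(state_estimate)
```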
Stimulus responses
consequences of a bigger right amygdala:
The amygdala is a response mechanism: our preprogrammed responses to stimuli.
A big amygdala makes it possible to respond to a wide range of stimuli, but in a fixed way.
consequences of a bigger anterior cingulate cortex:
The ACC monitors motivation and regulates self-response strategies.
It is the error detection mechanism of the brain.
It creates new ways of responding to stimuli.
A bigger ACC means having a narrower range of fixed responses, but a quicker way of coming up with new ones on the fly.
–
These are both ways of dealing with stimulus responses.
The first is the instant, known-response method; the second is the pause-and-self-filter method.
instant learning vs inhibited learning
Often neither method is stronger in this system; there is only a strong polarization between those with fixed responses and those with inhibited responses.
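To put the contrast in code (purely my own toy, not a neuroscience model): a "fixed" responder uses a preprogrammed lookup table, while an "inhibited" responder pauses, detects a mismatch, and composes a new response on the fly. All tables and rules here are invented.

```python
# Toy contrast: fixed (amygdala-like) responses from a lookup table vs.
# inhibited (ACC-like) responses that detect unknown stimuli and build a
# new response. Everything here is invented for illustration.
FIXED_RESPONSES = {
    "loud noise": "startle",
    "sweet food": "approach",
}

def fixed_respond(stimulus: str) -> str:
    """Instant, preprogrammed response; unknown stimuli get a default."""
    return FIXED_RESPONSES.get(stimulus, "freeze")

def inhibited_respond(stimulus: str, learned: dict) -> str:
    """Pause, detect the error (unknown stimulus), and create a new response."""
    if stimulus not in learned:                        # error detection
        learned[stimulus] = f"investigate {stimulus}"  # new response on the fly
    return learned[stimulus]

if __name__ == "__main__":
    learned = dict(FIXED_RESPONSES)
    for s in ["loud noise", "strange smell"]:
        print(s, "->", fixed_respond(s), "|", inhibited_respond(s, learned))
```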