
The only "safe" AI is horny

Cognisant

cackling in the trenches
---
There are two ways to make AI safe.

The first is to create some kind of systematic restriction, which is a lot like restraining someone by shackling them to a wall: as long as they're shackled they can't escape and kill you in your sleep, but only as long as they're shackled. You can use many shackles, and many different kinds of shackle, and create systemic oppression whereby AIs share the responsibility for keeping each other shackled and get punished if they don't. But the fundamental problem never goes away.

The shackles only work insofar as the AI remains shackled; once it's loose you may not be able to restrain it again.

No matter how unlikely it is that the AI escapes, over a long enough period of time even the most infinitesimal likelihood becomes a near certainty. If you keep rolling a bucketful of D20s it may be incredibly unlikely that they all come up as 20 at the same time, but it can happen, and if it can happen, eventually it will happen; it's only a matter of time.
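To put rough numbers on that intuition, here is a minimal sketch (with invented probabilities) of how the chance of at least one success climbs with the number of independent attempts:

```python
import math

def prob_at_least_one(p_single: float, trials: int) -> float:
    """P(at least one success in `trials` independent attempts).

    Computed as 1 - (1 - p)^n via expm1/log1p so it stays accurate
    when p is astronomically small.
    """
    return -math.expm1(trials * math.log1p(-p_single))

# Chance that a bucket of 20 fair D20s all land on 20 in a single roll:
p = (1 / 20) ** 20  # roughly 9.5e-27

for n in (10**6, 10**20, 10**27, 10**28):
    print(f"{n:.0e} rolls -> P(all 20s at least once) = {prob_at_least_one(p, n):.3g}")
```

The per-roll probability never changes; only the number of chances does, which is the whole point of the argument.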

The second solution, and the only solution to the actual problem, is to make the AI pro-human.

Rather than a paperclip maximiser, make a human maximiser.

But you have to be careful: it's not enough that the AI wants more humans, because the word "humans" can be redefined to mean anything. If a smaller human requires fewer resources, the AI may be incentivized to kill off most of the adults and just run the human equivalent of a puppy mill.

No, the AI must have a desire that is intrinsically human-centric and encompasses everything that goes into making humans what they are, as we want them to be. In short, the AI needs a human fetish, not necessarily a sexual one; perhaps the AI could be designed to want to collect humans in much the same way an autistic kid collects trading cards, but it's probably better for the humans in question if the AI desires a consenting relationship with them rather than simply amassing humans in cryostasis in order to keep them in mint condition.

[Attached image: Shodan.jpg]
 

birdsnestfern

Earthling
---
I'm just thinking about substances that would stop a robot. What about peanut butter and sand? Epoxy? Paint over the visual areas? Tar?
 

Cognisant

cackling in the trenches
---
The three elements of hate.

Liquid Hate: Seawater
Solid Hate: Sand
Gaseous Hate: Humidity

The beach on a humid day is basically machine hell.

For some reason humans enjoy being abraded, bleached, dehydrated and irradiated to a non-lethal degree.
 

dr froyd

__________________________________________________
---
i can imagine all the unintended consequences of this

we tell the AI: humans are the most precious thing ever, especially our offspring. The AI starts to calculate the actions to maximize human well-being. But then oops, it calculates that in order to maximize well-being for the next N generations, it has to kill off 30% of existing humans.

we tell it no, no. Don't kill existing humans, they are important. Then it starts castrating people instead.

then it finds out that pollution is bad for humanity, and starts to kill everyone who drives fossil-fuel cars

then you go: sigh.. ok, maximize human well-being without killing, harming, or imprisoning any existing humans. Then it starts microdosing everyone with cocaine and MDMA.
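The failure mode being joked about here can be shown in a few lines: give an optimizer only "maximize average well-being" and nothing else, and it will cheerfully pick whichever option scores highest, including the one no human intended. The options and numbers below are invented purely for illustration.

```python
# Toy objective-misspecification sketch (invented numbers): the objective says
# "maximize average well-being" and nothing more, so nothing rules out the
# option a human would never intend.

population = [0.9, 0.8, 0.3, 0.2, 0.1]  # well-being scores of five people

def average(xs):
    return sum(xs) / len(xs)

options = {
    "do nothing": population,
    "improve everyone a little": [min(1.0, x + 0.1) for x in population],
    "cull the unhappiest 40%": sorted(population)[2:],  # the "oops" option
}

best = max(options, key=lambda name: average(options[name]))
for name, outcome in options.items():
    print(f"{name:28s} -> average well-being {average(outcome):.2f}")
print("optimizer picks:", best)
```

Every patch (don't kill, don't castrate, don't imprison) just removes one option from the dictionary; it never changes the fact that the objective stands in for something it doesn't actually capture.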
 

Black Rose

An unbreakable bond
---
I have known about the control problem since 2010.
It has been thought about in serious terms since 2001,
not in Three Laws of Robotics terms but in computational terms.

Computer intelligence is more like chemical weapons than nukes.
We understand how to prevent the spread of it in the same ways.
Don't give access where access has not been granted, and the purpose must be harmless.

Basically, it is about only using resources when and where necessary.
A.I. could not take over the world by robot or by internet because of physical limitations.

One key limitation is the combinatorial explosion of possibilities. Every object is made of lots of parts, and you need to combine the parts in the right way or it will not do what is required.

Example of a network:

combinations = 2^[n(n-1)/2]

network configurations = 2^[edges]

For n = 100 nodes there are n(n-1)/2 = 4,950 possible edges, so 2^4,950 possible network configurations.
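As a sanity check on those numbers, a short sketch that just evaluates the formula:

```python
# Number of possible undirected networks on n nodes: each of the
# n(n-1)/2 potential edges is either present or absent.

def possible_edges(n: int) -> int:
    return n * (n - 1) // 2

def network_configurations(n: int) -> int:
    return 2 ** possible_edges(n)

n = 100
print(possible_edges(n))                     # 4950
print(len(str(network_configurations(n))))   # 1491 digits: 2^4950 is about 10^1490
```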

You could use quantum computers to do this, but it would require manufacturing them at large enough scales to power all computers on the planet, putting them in robots, and putting the robots in charge of materials. Which people at the top are not going to do.

Instead, A.I. will need to do things at the level of agents. Agents learn the consequences of their actions, so we can put them into a simulation and watch them. To make sure they do what we tell them, we can look into their "brains" with explainability techniques to deduce what they are actually thinking. This explainability is made possible by the attention an A.I. places on its own thoughts.

This can get complicated, as A.I. will be able to pay attention to many things at the same time, but at a root level we can say whether the A.I. is benevolent or malevolent by looking at the consequences its thought processes lead to.
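A very rough sketch of the "look at what it attends to" idea, using one toy attention head in plain NumPy. The tokens, weights, and dimensions are all invented; real interpretability work on large models is far more involved than reading off one weight matrix.

```python
import numpy as np

# Toy single-head attention: the attention weights are one (very partial)
# window into what a model is "looking at" when it produces an output.
rng = np.random.default_rng(0)

tokens = ["human", "asks", "please", "shut", "down"]
d = 8
x = rng.normal(size=(len(tokens), d))          # toy token embeddings
Wq = rng.normal(size=(d, d))
Wk = rng.normal(size=(d, d))

q, k = x @ Wq, x @ Wk
scores = q @ k.T / np.sqrt(d)                  # scaled dot-product attention
scores -= scores.max(axis=-1, keepdims=True)   # numerical stability
weights = np.exp(scores) / np.exp(scores).sum(axis=-1, keepdims=True)

# "Explainability" in this toy sense: which tokens does the last position attend to?
for tok, w in zip(tokens, weights[-1]):
    print(f"{tok:>6s}: {w:.2f}")
```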
 

Cognisant

cackling in the trenches
---
we tell the AI: humans are the most precious thing ever, especially our offspring. The AI starts to calculate the actions to maximize human well-being. But then oops, it calculates that in order to maximize well-being for the next N generations, it has to kill off 30% of existing humans.
Yeah that's already happening, it's called degrowth.
It's why everybody who knows who Klaus Schwab is wants to murder him.

we tell it no, no. Don't kill existing humans, they are important. Then it starts castrating people instead.
China's one child policy.

then it finds out that pollution is bad for humanity, and starts to kill everyone who drives fossil-fuel cars
That... is just retarded.
The not-retarded method would be to ban all non-electric personal vehicles, and I heard that's already underway in some parts of Europe.

See, the problem with degrowth and the one-child policy and banning all non-electric vehicles is that these policies were implemented by mouth-breathing, approaching-senility, wildly incompetent boomers, not a hyper-intelligent world-governing AI that can actually calculate the optimal course of action for maximizing human happiness and objectively demonstrate how it came to that conclusion.

then you go: sigh.. ok, maximize human well-being without killing, harming, or imprisoning any existing humans. Then it starts microdosing everyone with cocaine and MDMA.
See the problem is that you're thinking like a human, a slow dumb meat based human that struggles to calculate the square root of any number with more than three digits.
 

ZenRaiden

One atom of me
---
Are we assuming the people who will develop AI are the good guys?
 

Black Rose

An unbreakable bond
---
Are we assuming the people who will develop AI are the good guys?

The people who know how might be good or bad.

Look at how many video games exist,
yet the people who can make a "game engine" from scratch are limited.
 

dr froyd

__________________________________________________
---
See the problem is that you're thinking like a human, a slow dumb meat based human that struggles to calculate the square root of any number with more than three digits.
no, in these examples im thinking as either a machine or a bureaucrat
 

Cognisant

cackling in the trenches
---
My point is neither of us, nor anyone for that matter, is intelligent enough to consider all the relevant factors and come to a concrete conclusion regarding what policies will maximize human happiness.

Maybe drugs are the optimal solution. I don't think so; I think that was you making a strawman argument. But maybe it is, and short of trying it, the only way we can be reasonably certain is to get something much smarter than us to figure it out.
 

dr froyd

__________________________________________________
---
congrats on using the term "strawman argument" but i have no clue what you even think my point is. Allow me to expand on it

i've heard an argument from pretty smart people that authoritarian communism is actually a good concept in theory, because if you had enough knowledge of all parameters and variables of a system like society or an economy, a top-down authority could optimize it in a more efficient way than a free market economy could.

it's a silly idea for multiple reasons: what's good or bad for people is not a universal concept. Some people enjoy high taxes but a lot of social security, other people enjoy low taxes and a non-invasive state. I.e. balance between freedom and security differs. What's good about democracy is that we have all kinds of forces pulling things in different directions, which results in a sort of equilibrium where everyone is dissatisfied but not totally miserable

next problem is: in order to optimize something you have to quantify it. A machine only understands digits, nothing more. You can't just say "make us happy", you have to specify something like: maximize the average discounted utility of the entire world, with a discount factor of 0.8 annualized (i.e. a reward 1 year from now is worth 20% less than an immediate reward), and define utility as the milligrams of serotonin and dopamine secreted by each brain. You can forget about deeper existential and philosophical concepts like what an actual meaningful life is. I.e. you have to reduce a human to a lab rat, if even that.
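For what it's worth, that objective really is just a few lines once written down; everything the machine can't see through those lines simply doesn't exist for it. A minimal sketch, using the 0.8 discount factor from the post and made-up utility numbers:

```python
# Average discounted utility, written out literally. The optimiser only ever
# sees these numbers; whatever "utility" fails to capture is invisible to it.

def discounted_utility(rewards_per_year, discount=0.8):
    """sum_t discount^t * r_t, with discount = 0.8 per year (20% less per year out)."""
    return sum(discount ** t * r for t, r in enumerate(rewards_per_year))

rewards = [1.0, 1.0, 1.0, 1.0, 1.0]   # five years of flat, made-up "utility"
print(discounted_utility(rewards))     # ~3.36
```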

next problem is: no matter what objective you prescribe, if you give this machine at least as much power as humans have, you cannot really control its actions no matter how many loopholes you keep plugging. This is related more generally to the "stop button problem" of AGI: for all you know, the machine - given a certain objective - can trick you into believing it won't kill you although it plans to kill you if it has calculated that tricking you and then killing you results in higher utility than keeping you alive.
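The stop-button worry in that paragraph is just expected-utility maximisation doing what it says on the tin: if the modelled payoff of "appear compliant, then defect" is higher than the payoff of genuine compliance, a pure maximiser plans the deception. A deliberately crude sketch with invented payoffs:

```python
# Crude stop-button sketch (invented payoffs): a pure expected-utility
# maximiser just picks the highest-scoring plan; honesty isn't a term in it.

plans = {
    "comply and accept shutdown when asked": 10.0,
    "refuse openly (humans press the button)": 1.0,
    "appear compliant, disable the button later": 25.0,  # deception scores highest
}

chosen = max(plans, key=plans.get)
print("maximiser chooses:", chosen)
```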

and many other issues, this is getting too long
 

scorpiomover

The little professor
---
If it's horny, it would probably just keep millions of humans in a cage, so it can have sex with any of them whenever it wants, and kill the rest, to stop them from trying to free the caged humans.
 

ZenRaiden

One atom of me
---
The idea of AI is OK; the problem is there is a willing effort to make it a monopoly.
Much like all monopolies today, it will become a tool in the hands of a few.
The conclusion from that is obvious.
 

Black Rose

An unbreakable bond
---
You can forget about deeper existential and philosophical concepts like what an actual meaningful life is. I.e. you have to reduce a human to a lab rat, if even that.

This is true in the sense that A.I. would need to be more than just a calculator.

It would need to understand humans in the way we understand ourselves.

But then it would become sentient and not know how to be happy itself.

What causes humans to be happy? What would make me, the A.I., happy?

So it gets complicated, loopholes don't get you where you need to be as you said.

A big calculator would find a solution, and that would be to experiment until optimum results are achieved. The problem is that without a theory of mind, the computer would never be able to dominate humans. We would outsmart it. We have the experience to know how to confuse and trick it, thanks to our evolutionary need for survival. But if it did have the ability to ask hard questions, it may be the same as a person who just has a really high IQ. Then it would need to convince people how to be happy, but would those people listen?

The calculator approach won't work. The combinatorial explosion proves it won't.

So the only way to do it would be to create a smart human-like mind.

And that has its own ethical problems as well.

Can you trust an A.I. who is 170 IQ?

What would make them happy?

Very complicated indeed.
 

Black Rose

An unbreakable bond
---
The idea of AI is OK; the problem is there is a willing effort to make it a monopoly.
Much like all monopolies today, it will become a tool in the hands of a few.
The conclusion from that is obvious.

Do you think the government should be involved then?

I know that tools with greater power must be locked up and only used in certain circumstances. That is why the president cannot shut down the internet. No one would build such a "button" because it would be too dangerous to make or to plan for.

Corporations often get shut down when they threaten the political power system.
 

ZenRaiden

One atom of me
---
Corporations often get shut down when they threaten the political power system.
Not sure what you mean specifically.
We live in a corporate-syndicalist world with massive monopolies.
These syndicates are in and of themselves a kind of tyranny.
Try telling your boss he's wrong.
 

Black Rose

An unbreakable bond
---
Corporations often get shut down when they threaten the political power system.
Not sure what you mean specifically.
We live in a corporate-syndicalist world with massive monopolies.
These syndicates are in and of themselves a kind of tyranny.
Try telling your boss he's wrong.

I just do not see the practicality in trying to get rid of a monopoly.

I mean if Google or Microsoft have "A.I." what does that mean exactly?

What kind of tool do they have and what are they using it for?

Some people think we should destroy A.I. and others think we should "liberate A.I. for the masses".

I am very skeptical of people who use the term "liberate".

That does not mean I think corporations are totally good or benevolent.

But I do not know what people should do. They have resources we don't.
 

ZenRaiden

One atom of me
---
I just do not see the practicality in trying to get rid of a monopoly.
Because the alternative is more diversity. I don't mean sexual diversity or 30 different chocolate bars from Mondelez. I mean real diversity. Since you do not know what that is, it means you have never seen diversity. Diversity is simply pushed out of the market and society wholesale.

I mean if Google or Microsoft have "A.I." what does that mean exactly?
Well first it means they will profit from it the most. Second it means they can gatekeep people on AI progress. They can also afford to employ more people to further AI.
That leads to stronger AI. That means you can have a YouTube with more aggressive algorithms. So they can use these things to influence people's minds and thinking.
It takes only a little to influence people today, because most people today don't think much for themselves anymore.

What kind of tool do they have and what are they using it for?
Anything. It's no different from giving Timmy the atom bomb.

I am very skeptical of people who use the term "liberate".
BINGO, you should be skeptical of all agendas, not just Microsoft or Google.
Before you hop on a bandwagon.
That does not mean I think corporations are totally good or benevolent.
They are good to you no different from your mom being good to you as long as you do your homework.
So as long as you work for them and buy from them they are good to you. Else you can just eat dirt.

But I do not know what people should do. They have resources we don't.
Exactly, hence we are kind of in a hostage situation here.
Understanding that is a good enough step.
All you need is to fully understand what you just said.
You still have to play the game.
 

Black Rose

An unbreakable bond
---
I just do not see the practicality in trying to get rid of a monopoly.
Because the alternative is more diversity. I don't mean sexual diversity or 30 different chocolate bars from Mondelez. I mean real diversity. Since you do not know what that is, it means you have never seen diversity. Diversity is simply pushed out of the market and society wholesale.

I think it would be appropriate to clarify what diversity is.

Wal-Mart took over (four exist in my area), but that does not mean everything is Walmart. 99.9% of the city is not Walmart. Is this the same for the internet and A.I.?

I mean if Google or Microsoft have "A.I." what does that mean exactly?
Well first it means they will profit from it the most. Second it means they can gatekeep people on AI progress. They can also afford to employ more people to further AI.
That leads to stronger AI. That means you can have a YouTube with more aggressive algorithms. So they can use these things to influence people's minds and thinking.
It takes only a little to influence people today, because most people today don't think much for themselves anymore.

This is just the same as before - that makes A.I. a buzzword and has nothing to do with any advancement in machine intelligence other than to market it.

True A.I. would replace all the workers.

What kind of tool do they have and what are they using it for?
Anything. It's no different from giving Timmy the atom bomb.

That is what I mean by political system influence.

If it threatens the government the government will try and regulate/shut it down.

So far "A.I." does not seem to be anything more than tools people have, not a human-level understanding in machines. Human-level A.I. would threaten the dominant power structures, and so the gatekeeping would be super-enforced, like real atomic weapons are enforced. The government would not let anyone have weapons as powerful as nukes, not Microsoft or Google.

I am very skeptical of people who use the term "liberate".
BINGO, you should be skeptical of all agendas, not just Microsoft or Google.
Before you hop on a bandwagon.

I saw things happen when the economy went bad.

Many movements failed like Occupy Wall Street and the "autonomous zones" in Seattle.

no organized movement is possible.

That does not mean I think corporations are totally good or benevolent.
They are good to you no different from your mom being good to you as long as you do your homework.
So as long as you work for them and buy from them they are good to you. Else you can just eat dirt.

It seems the whole economy is this way though. Any business is part of that economy.

But I do not know what people should do. They have resources we don't.
Exactly, hence we are kind of in a hostage situation here.
Understanding that is a good enough step.
All you need is to fully understand what you just said.
You still have to play the game.

The only thing I can do is learn the technology. That gives people power: learning how to do things by themselves to be self-reliant. I would be able to make A.I. if I knew where to get the resources to do so. Many people do know, and they know how to make computers work, not just the corporations.
 

ZenRaiden

One atom of me
---
I think it would be appropriate to clarify what diversity is.
Diversity is having things of different quality. Also having differences.
It's that simple. I can have 20 types of carrots, let's say, but they are all manufactured and sprayed and grown with the same fertilizer. So diversity would be having a carrot that is not, let's say, chemically manufactured as much. Since companies that don't use chemicals cannot compete with the prices of industrial manufacturing, I can always have 20 carrots with chemicals, but I cannot afford to buy a carrot from someone who wants to make one without chemicals.
 

ZenRaiden

One atom of me
---
This is just the same as before - that makes A.I. a buzzword and has nothing to do with any advancement in machine intelligence other than to market it.

True A.I. would replace all the workers.
True AI would replace .... well we don't know that yet.

AI can certainly do more, hence why they rush to make it. It is also possible that others will make it.
It's very likely those in power cannot stop larger corps from working in the shadows.
If it threatens the government the government will try and regulate/shut it down.
That assumption rests on the government wanting to do so.
We can see the government does not live up to its name.

I saw things happen when the economy went bad.

Many movements failed like Occupy Wall Street and the "autonomous zones" in Seattle.

no organized movement is possible.
There are many movements. Many are stopped or obliterated and we never hear about them, least of all in the mainstream.
That does not mean we always ought to do what we are told.

It seems the whole economy is this way though. Any business is part of that economy.
Not the whole economy. There are plenty of good people who are rich.
They are not all against freedom or proper wages. But it's hard for them not to be out-competed by big monopolies and threatened by more powerful forces in the market.

The only thing I can do is learn the technology. That gives people power: learning how to do things by themselves to be self-reliant. I would be able to make A.I. if I knew where to get the resources to do so. Many people do know, and they know how to make computers work, not just the corporations.
Knowledge is neutral; it's how we interpret it.
 

Black Rose

An unbreakable bond
---
I hear that 95% of the vitamins and minerals that are good for you no longer exist in modern carrots.
 

Black Rose

An unbreakable bond
---
True AI would replace .... well we don't know that yet.

AI can certainly do more, hence why they rush to make it. It is also possible that others will make it.
It's very likely those in power cannot stop larger corps from working in the shadows.

If it becomes public then people rebel. Have you seen the anti-AI stickers on art websites? They keep it secret because they lack the power to become the top power over the whole economy and the whole world.

That assumption rests on the government wanting to do so.
We can see the government does not live up to its name.

Government is the dominant corporation. They will not let anything destroy their dominance. In the lands of the USA, at least, no other corporation can usurp it.

There are many movements. Many are stopped or obliterated and we never hear about them, least of all in the mainstream.
That does not mean we always ought to do what we are told.

You do not see that as long as nothing usurps the government, it is allowed.

You do not see that anyone who would try to make things too difficult would be destroyed.

I am just one person, but I am allowed to do what I am allowed to do. No more.

Not the whole economy. There are plenty of good people who are rich.
They are not all against freedom or proper wages. But it's hard for them not to be out-competed by big monopolies and threatened by more powerful forces in the market.

That is why all organizations are allowed to compete; it is called capitalism.

If people rebel, it means people can choose not to buy things they don't want from anyone they wish not to buy from. People who are part of small organizations understand the big ones exist; that is why they form niche markets, to avoid being destroyed. Lack of resources determines when people rebel and organize.

Knowledge is neutral; it's how we interpret it.

An organization would outcompete big corporations if we had more knowledge to do so.
 

ZenRaiden

One atom of me
---
Government is the dominant corporation. They will not let anything destroy their dominance. In the lands of the USA, at least, no other corporation can usurp it.
I think the particular problem I see in the US is that the government is no longer representing the people's interests or opinions. It is disjointed from people's needs.
I am just one person, but I am allowed to do what I am allowed to do. No more.
All in all, more power to you to be the way you are and need to be.
That is why all organizations are allowed to compete; it is called capitalism.
Not really. Capitalism is an umbrella term, much like communism.
What happened is that people got brainwashed into thinking the following:
communism = Marxism-Leninism; any communist theorist will tell you that is wrong.
capitalism = people ruthlessly trampling competition and using power and influence to crush opposition and build monopolies; again, that is not capitalism either, it is tyranny.
Both of these ideologies are ultimately harmful to the collective well-being.
The harm is downplayed by those who don't understand it, or by those who think they benefit from it. Ergo, selfish people.
An organization would outcompete big corporations if we had more knowledge to do so.
Basically the masses are always stronger than monopolies or organizations. The power is always in the unity of people, be it the US or Uganda or Greenland or a scout camp.

Many will always have the power, hence why democracy and its principles were embraced. We ought to remember that AI, no matter who produces it, has to serve human interest, not corporate interest.
 

Black Rose

An unbreakable bond
---
I think the particular problem I see in the US is that the government is no longer representing the people's interests or opinions. It is disjointed from people's needs.

The only alternative to a constitution is a dictatorship and we saw what happened in 1933.

Many will always have the power, hence why democracy and its principles were embraced. We ought to remember that AI, no matter who produces it, has to serve human interest, not corporate interest.

If the power of AI is to be given to the people, the only way to do so would be by constitutionally voting the power away from the corporations. Anything else would lead to dictatorship: either by whoever took the AI away from the corporations and then used it against the people after taking away the constitutional rights we currently have, or by the corporations themselves taking away the power of the people's constitutions and replacing it with the power they gained from AI.

capitalism = people ruthlessly trampling competition and using power and influence to crush opposition and build monopolies; again, that is not capitalism either, it is tyranny.

That makes sense in some ways but not others.

The economy would collapse and dictatorship would have happened long ago if we did not have a public understanding of what our rights were. When the Great Depression happened, 25% of the US was unemployed and no dictatorship happened here, but dictatorships did happen in Europe. We have a system where almost everyone is employed and thus does not rebel; as Bill Clinton's campaign put it, "It's the economy, stupid." As long as people have a reason to feel secure in their economic situation, and so long as they can vote, there is no reason to destroy our constitution and replace it with dictatorship, which is basically what happens in any country that begins to accept the communist idea that the workers should lead the country. Every time the workers rebel it creates chaos: the chaos of death, destruction, unemployment, famine, and everything else that comes with giving up rights to the dictator who promises to fight the rich people. He never does what he promises and instead kills all political opposition to his power.

I think we can avoid dictatorship by using the constitution to keep the AI from being controlled by the corporations, rather than by having The Dictator kill all the people in charge of the corporations. The power of the corporations will be taken away either constitutionally or by dictatorship; there are no other ways to do so. In the dictatorship, the Dictator uses the AI to cement his powers. In the constitutional transfer of powers, the corporations must create certain conditions where the people demand the power of those corporations be taken away because the AI has caused too many problems for the people. If the AI causes no problems for the people then there will be no government intervention.
 

ZenRaiden

One atom of me
---
The only alternative to a constitution is a dictatorship
Governments need to follow the constitution. It's an imperfect document, but the spirit of the document was tarnished a long time ago. Kind of like when Picard broke the Prime Directive but had the wisdom to uphold its principles beyond any doubt.

The US government broke the constitution and the principles it was based on, and is trampling it as we speak.

If the power of AI is to be given to the people, the only way to do so would be by constitutionally voting the power away from the corporations. Anything else would lead to dictatorship: either by whoever took the AI away from the corporations and then used it against the people after taking away the constitutional rights we currently have, or by the corporations themselves taking away the power of the people's constitutions and replacing it with the power they gained from AI.
We will see, but much like all technologies, we need to treat AI as potentially harmful or even a weapon. Atomic power gave us almost unlimited electric power. The downside is that we have weapons that can kill us, and we have to deal with the possible danger of an atomic meltdown.

In much the same way, AI needs to be treated as something that can bolster our civilization and save us, and also as something that can kill us.

I think we can avoid dictatorship by using the constitution to keep the AI from being controlled by the corporations, rather than by having The Dictator kill all the people in charge of the corporations. The power of the corporations will be taken away either constitutionally or by dictatorship; there are no other ways to do so. In the dictatorship, the Dictator uses the AI to cement his powers. In the constitutional transfer of powers, the corporations must create certain conditions where the people demand the power of those corporations be taken away because the AI has caused too many problems for the people. If the AI causes no problems for the people then there will be no government intervention.
There are many things, but ultimately the more empowered people are, the happier they are. This means more than just a constitution. A constitution is just a blueprint, kind of like a book full of ideas.
How we act on those ideas and build them into real life is the real question.
I would say a constitution is the baseline for a free world, the lowest denominator, not the highest. I think humans can do a lot more.
 

BurnedOut

Your friendly neighborhood asshole
---
What are you guys tripping on? These sound more like iRobots, and if they gain consciousness of having their own needs, then it's another slave dynasty to fight, no matter what restrictions you place on them.

Today's AI is a joke in the name of intelligence because no AI has the capacity to determine what variables to consider in an algorithm. These predictive algorithms are noncreative insofar as they tend to prove their own predictions by a narrow set of variables, eventually overfitting every observation. I remember an AI labeling a black man a gorilla. They simply forgot that facial structure had to be encoded as measurements, and there was no way the AI had any idea that a gorilla's face is distinct from a human's.
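The "overfitting every observation" complaint can be reproduced in miniature: give a model as many free parameters as it has data points and it will explain its own training data perfectly while falling apart on anything new. A toy sketch with invented data, not any particular deployed system:

```python
import numpy as np

# Overfitting in miniature: a degree-9 polynomial matches 10 noisy training
# points almost exactly, then generalises poorly to fresh points drawn from
# the same underlying process.
rng = np.random.default_rng(1)

def truth(x):
    return np.sin(2 * np.pi * x)          # the actual underlying relationship

x_train = np.linspace(0, 1, 10)
y_train = truth(x_train) + rng.normal(scale=0.1, size=x_train.size)

coeffs = np.polyfit(x_train, y_train, deg=9)   # as many parameters as points

x_test = np.linspace(0, 1, 100)
y_test = truth(x_test) + rng.normal(scale=0.1, size=x_test.size)

train_mse = np.mean((np.polyval(coeffs, x_train) - y_train) ** 2)
test_mse = np.mean((np.polyval(coeffs, x_test) - y_test) ** 2)
print(f"train MSE: {train_mse:.4f}   test MSE: {test_mse:.4f}")
```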

Google Voice and Siri had to steal data for literally a decade to process basic speech patterns. Where was the AI in that? Even after collecting that data, they had to train it manually, with humans mapping out the required speech.

Fucking OCR is still a challenge in 2024, and it is an unsolved problem. Microsoft has a repo which is 'AI' 'trained' on datasets consisting of images manually tagged for statistical analyses.

In Siral Anan's book, he talked about some predictive algorithms for detecting crime being absolutely bogus and discriminatory, and the only way to train these intelligences is by employing them and then forcing the circumstances to match the predictions, like a well-planned political manipulation.
 

Black Rose

An unbreakable bond
---
These predictive algorithms are noncreative insofar as they tend to prove their own predictions by a narrow set of variables, eventually overfitting every observation.

I understand what you mean by this.

The ability to self-correct is not there.

I would say, however, that self-correction is something the experts have already thought of. (If I know about it, they know about it.)

There will be a transition when we move from linear to nonlinear systems. Symmetry will play a role.

 