
To What Extent Can AI Derive/Interpret Meaning?

Lagomorph

Philosorabbit
Local time
Today, 16:45
Joined
Aug 20, 2016
Messages
551
Location
Down the hole with Alice
#1
How likely is an AI to correctly guess the meaning of a novel symbol? If that's possible, then what about symbols that have multiple and/or layered meanings? And what types of meaning? I suppose the act of producing meaning should be on the table as well. How could that be accomplished by AI?

I'm sure there's been work done on this, but I don't know where to start because it's not my field.

Pre-emptive thanks for any replies.
 

Cognisant

Condescending Bastard
Local time
Today, 09:45
Joined
Dec 12, 2009
Messages
7,849
#2
How likely is an AI to correctly guess the meaning of a novel symbol?
How likely are you?

Any kind of image recognition software needs to be trained before it's capable of doing anything, and we're no exception, except that human infants are born able to recognize faces (despite having no idea what a face is or why it's important) because this "training" has already occurred over many generations of infants who were more likely to survive if they established an emotional bond with their carers.

Indeed, it's a misnomer to say a newborn can "recognize" faces; it's not recollecting any kind of memory, it's biologically hardwired to smile at any face-like thing it sees. This kind of predetermined self-bootstrapping is incredibly powerful: it basically lets us take shortcuts in the learning process. Faces are important, and by being predisposed to focus on them a human infant learns how to read expressions MUCH faster than an AI with equivalent processing power.

And there's no reason to think this is limited to infants...
Isaac Arthur Machine Rebellion 9:02-9:58

Currently AI can do any one specific thing a human can do, but making an AI that can do everything a human can do is impractical, and even though an AI can learn anything a human can learn, it doesn't have that genetic heritage of instinctual shortcuts helping it.

Humans are terrifyingly sophisticated machines.
 

Animekitty

(ISFP)-(E)(N)(T)(P)
Local time
Today, 14:45
Joined
Apr 4, 2010
Messages
5,783
Location
subjective
#3
Humans have a connection from the vision center to the language center that other apes do not have. And since humans have symbols imposed on them from the outside, what is actually happening is what Noam Chomsky calls universal grammar: a hierarchical understanding of phrases (the vision-language connection). When talking to infants, language learning is all about acquisition. The hierarchy needs to be filled in, so any new symbol is understood from the context it is used in; it fills a slot in a hierarchy that has already been acquired.

Since it is only a matter of filling in a hierarchy with context, we could make an A.I. like this and it would learn new symbols in the way specified. Other cognitive problems are amenable to the same process.

(for intrinsic motivation A.I. needs a limbic system to mediate reinforcement learning)
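To make that concrete, here is a minimal toy sketch of learning a new symbol purely from the contexts it appears in. Everything in it (the sentences, the tiny "hierarchy" of known symbols, the overlap measure) is invented for illustration; it is not any particular acquisition model.

```python
# Toy sketch: guess what a novel symbol means from the contexts it occurs in.
# All data here is invented for illustration.
from collections import Counter

known_usage = {
    "apple":  ["ate the apple", "the apple is red", "picked an apple"],
    "hammer": ["swung the hammer", "the hammer is heavy", "grabbed a hammer"],
}

def context_profile(sentences, symbol):
    """Count the words that co-occur with a symbol across its sentences."""
    words = Counter()
    for s in sentences:
        words.update(w for w in s.split() if w != symbol)
    return words

def overlap(a, b):
    """Crude similarity: shared co-occurrence counts between two profiles."""
    return sum(min(a[w], b[w]) for w in a if w in b)

# A novel symbol "blorp" seen only in context:
novel = context_profile(["ate the blorp", "the blorp is red"], "blorp")

guesses = {sym: overlap(novel, context_profile(sents, sym))
           for sym, sents in known_usage.items()}
print(max(guesses, key=guesses.get))  # -> "apple": the closest slot in the "hierarchy"
```

A real acquisition system would use far richer structure than bag-of-words overlap, but the principle is the same: a novel symbol inherits meaning from the slot it fills.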
 

Lagomorph

Philosorabbit
Local time
Today, 16:45
Joined
Aug 20, 2016
Messages
551
Location
Down the hole with Alice
#4
I expect one approach would be for an AI to identify the patterns and structures in known languages and look for them in a novel presentation to identify meaning therein. I'd also expect such patterns to reflect how linguistic information is processed in the human brain, which would render meaningless anything introduced by the private language argument. Chomsky looks interesting enough.

So then, in order to find, say, a novel cryptographic technique, we'd need to examine modes of communication (or interpreting the world) that transcend or circumvent linguistics. Pareidolia and instinct seem like good examples to start with, but what are they, really? Within the Jungian paradigm, Ni? Other paradigms? Calling it genetic just seems like a massive overgeneralization.
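A crude sketch of that pattern-matching idea, for instance, is to compare the statistical fingerprint of an unknown text against fingerprints of known languages. The frequency figures below are rough illustrative values, and the whole thing is a toy, not a real language-identification or cryptanalysis tool:

```python
# Toy sketch: guess the language behind an unknown text by letter-frequency patterns.
# The reference frequencies are rough, illustrative figures only.
from collections import Counter

rough_profiles = {
    "english": {"e": 0.13, "t": 0.09, "a": 0.08, "o": 0.075, "n": 0.067},
    "spanish": {"e": 0.137, "a": 0.125, "o": 0.087, "s": 0.08, "n": 0.067},
}

def letter_profile(text):
    letters = [c for c in text.lower() if c.isalpha()]
    total = len(letters) or 1
    return {c: n / total for c, n in Counter(letters).items()}

def distance(sample, reference):
    """Sum of absolute differences over the letters the reference tracks."""
    return sum(abs(sample.get(c, 0.0) - freq) for c, freq in reference.items())

unknown = "meaning hides in the statistics of a text as much as in its symbols"
sample = letter_profile(unknown)
best = min(rough_profiles, key=lambda lang: distance(sample, rough_profiles[lang]))
print(best)  # leans toward "english" for this sample
```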
How likely are you?
More likely than most. But why? If you were to randomly draw something, ascribe a meaning to it, and dare me to guess it, I'd obviously fail, but some factors in its creation or design surely have correlates to its meaning, such as how the way one writes the letter G points to meaning in graphology.
 
Local time
Today, 22:45
Joined
Jan 1, 2009
Messages
3,735
#5
I'm not updated on AI at all, so ignore this unless you're extremely bored.

I think AIs will be able to do this easily, if they are not already able to. Human intelligence and understanding are... overrated. We can program and create AI with a larger library of knowledge, and that will have a significant impact. Most people would have no or only a limited understanding of a symbol anyway; if an AI has all the knowledge of or surrounding the topic... well....

I can very easily see AI being superior in a lot of fields, as humans usually need a lot of dedication to reach a competent level. AIs can easily use a vast database of knowledge and calculate. Humans have huge biases toward knowledge or perspectives they have recently experienced. They see things in terms of how they view the world.

I think we're also getting closer to uploading human brains, which will create a lot of new problems. https://en.wikipedia.org/wiki/Connectome
 

Blarraun

straightedgy
Local time
Today, 22:45
Joined
Nov 21, 2013
Messages
4,160
Location
someplace windswept
#6
What kind of symbols are we talking? Can you give examples?

We don't have general AI, but a neural network or an expert system can be trained to recognise any symbol in terms of already-known symbols, which isn't any less than what a human would be capable of when presented with a novel input.

Depending on its training a system can order graphical, sound or other input and assign it to categories. It may learn to store related inputs closer and give them more connections.
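A minimal sketch of that "store related inputs closer" idea might look like the following; the feature vectors are entirely made up for illustration, where a real system would learn its own representation:

```python
# Toy sketch: file a novel symbol next to its nearest known neighbours.
# The feature vectors are invented; a real system would learn its own.
import math

known = {
    # symbol: (curviness, stroke_count, enclosed_area)  -- made-up features
    "O": (0.9, 1.0, 0.8),
    "X": (0.1, 2.0, 0.0),
    "8": (0.9, 1.0, 0.9),
}

def dist(a, b):
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

novel = (0.85, 1.0, 0.75)  # an unseen glyph described with the same features
neighbours = sorted(known, key=lambda s: dist(known[s], novel))
print(neighbours)  # nearest known symbols first: ['O', '8', 'X']
```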

Current machine learning systems can be trained to find patterns in the training data. The limitation of contemporary systems is that they cannot reliably switch their mode of operation from probabilistic logic to two-valued logic. Any amount of training data that is similar but has a different result will tend to destroy the value network.

For example, a system can be taught to compose sentences and will get 99% of the words right, but the grammatical order will be nonsense; or it can make grammatically reasonable sentences that are too restricted in content or message. It can try to hold a conversation, but will tend to lose track of a topic, or will stick to one topic too much, etc. If it knows enough to have a good response to anything, it won't be specialised enough to consistently choose the specific response that is required, as the bias from its wide domain will cause it to diverge.
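One crude way to see the "right words, wrong structure" failure is a toy bigram generator. Nothing below is a real system, just the mechanism in miniature: each word is picked only from what has followed the previous word, so the output stays in-vocabulary but can drift into incoherence.

```python
# Toy bigram generator: each word is chosen only from what followed the
# previous word in training, so output is locally plausible at every step
# but has no global plan.
import random

training = ("the telescope shows the stars . "
            "the spaceship flies to the stars . "
            "the stars are far away .").split()

# Bigram table: word -> list of words observed right after it.
follows = {}
for prev, nxt in zip(training, training[1:]):
    follows.setdefault(prev, []).append(nxt)

word, output = "the", ["the"]
for _ in range(10):
    word = random.choice(follows.get(word, ["."]))
    output.append(word)
    if word == ".":
        break
print(" ".join(output))
# Possible outputs range from fine ("the spaceship flies to the stars .")
# to drifting chains like "the spaceship flies to the telescope shows the stars ."
```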

If you teach the system "a spaceship is great because you can use it to fly closer to (view) the stars" and "a telescope is great because you can use it to view the stars", then, if it didn't recognise that a spaceship is a vehicle and didn't already connect movement verbs to vehicles, after it learns "a spaceship can be used to fly around the moon" it will tend to make the mistake of saying "a telescope can be used to fly around the moon".

If it made a special node "vehicles" in its network, it can instead make the mistake of composing the sentence "a jet can be used to fly around the moon". It is actually quite unrealistic to expect a network to learn such a pure node as "vehicles" from random data. A far more likely error is that the network will make a node like "transport" or "transport and kittens", where it will store horses, cars, aircraft, spaceships and sci-fi teleporters, and will be too biased by My Little Pony, cat pictures or other random crap to refine the "transport" node into machine, space, air, animal, ground, fantasy and so on.
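The spaceship/telescope mistake can be shown with an equally toy association table; the data and the "plausibility" rule are invented purely to expose the mechanism:

```python
# Toy association table showing how shared contexts cause over-generalisation.
# The data and the plausibility rule are invented to expose the mechanism.

seen_with = {
    # noun -> verb phrases it has been observed with
    "spaceship": {"view the stars", "fly around the moon"},
    "telescope": {"view the stars"},
}

def plausible(noun, phrase):
    """Naive rule: a phrase is plausible for a noun if the noun was seen with it,
    or if some other noun sharing a context with it was seen with it."""
    if phrase in seen_with[noun]:
        return True
    for other, phrases in seen_with.items():
        if other != noun and seen_with[noun] & phrases and phrase in phrases:
            return True  # borrowed from a neighbour that shares a context
    return False

print(plausible("telescope", "fly around the moon"))  # True: exactly the mistake above
```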


Simply put, the learning algorithms create networks that are too environment-specific, and exposing them to more environments only makes them less accurate with respect to each specific area. This suggests that training networks is likely part, not the whole, of what is required to make more generalised solutions, though with greater supervision, data selection and value assignment there is good room for improvement in this technology.
 

QuickTwist

Alive - Born Anew
Local time
Today, 15:45
Joined
Jan 24, 2013
Messages
6,733
Location
...
#7
Currently AI can do any one specific thing a human can do, but making an AI that can do everything a human can do is impractical, and even though an AI can learn anything a human can learn, it doesn't have that genetic heritage of instinctual shortcuts helping it.
I'm throwing my hat in with this answer.

Humans are still way more complicated than any AI.
 

Cognisant

Condescending Bastard
Local time
Today, 09:45
Joined
Dec 12, 2009
Messages
7,849
#8
Absolutely, that's essentially the problem: it's not that we can't make thinking machines, but rather that the standards we're holding them to (the capabilities of the average human) are absurdly high.
 

Pizzabeak

Prolific Member
Local time
Today, 13:45
Joined
Jan 24, 2012
Messages
1,885
#9
Pretty likely, given the algorithm. It would just associate and branch off. That's why they say the biological organism is like a computer itself, with the motherboard being like a brain. You can't tell yourself the meaning you derive. You only know as much as you're allowed to know, given the tools you've received to use a system. Some A.I. would suck more than others until the ultimate one has been developed. That's like making a security bot and telling it not to detect objects on its event-based photon pixel axis grid board space. This question isn't deep, and A.I. can know what it means to be king at chess. So they could get so cognizant they'll try to overtake humanity and kill us all before we can enter the supercomputer itself.
 

Serac

A menacing post slithers
Local time
Today, 21:45
Joined
Jun 7, 2017
Messages
1,504
Location
Stockholm
#10
Depends on what we mean by "meaning" I guess. If you have a sentence like

"It felt like grass under my feet"

you can easily make an AI which can tell you roughly what this sentence is communicating: the subject "I" is referring to some feeling in their feet etc. But in terms of qualia, the machine itself will not have a clue what this sentence "means", since, in order to have that knowledge, you need an experience of having walked on grass – in particular the human experience of walking on grass.
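As a rough illustration of the "roughly what it is communicating" part, here is one way such a surface reading could be pulled out with an off-the-shelf parser. This is only a sketch and assumes spaCy plus its small English model are installed:

```python
# Rough structural reading of the sentence with an off-the-shelf parser.
# Assumes spaCy and its small English model are installed:
#   pip install spacy && python -m spacy download en_core_web_sm
import spacy

nlp = spacy.load("en_core_web_sm")
doc = nlp("It felt like grass under my feet")

for token in doc:
    # dependency label, the word, and the word it attaches to
    print(f"{token.dep_:>10}  {token.text:>6} -> {token.head.text}")

# This recovers the surface structure (a subject, the verb "felt", the
# comparison "like grass", the phrase "under my feet") but nothing of what
# walking on grass actually feels like.
```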

So far we don't know
1) whether you can make a machine have qualia
2) whether cognition corresponds to computation

both of which need to hold in order to claim that machines can interpret meaning the same way we do.
 
Local time
Today, 22:45
Joined
Oct 7, 2018
Messages
113
#11
Slim to none.
Without death as a marked end-point, I cannot see how an AI can derive meaning from anything.
Function, yes: the AI can define anything's function, but not meaning; that might require neurochemistry.

You know what, maybe if it looks like the MAGI system from NGE it'll be able to derive meaning, or like the "plugged in" humans from Neal Asher's novels.
 

Artsu Tharaz

Resident Resident
Local time
Tomorrow, 07:45
Joined
Dec 12, 2010
Messages
2,811
#12
I think that anything covered by Jungian type, which includes meaning, can be built functionally into an AI thingo.

It's what comes after that that would be rather... difficult.
 

Serac

A menacing post slithers
Local time
Today, 21:45
Joined
Jun 7, 2017
Messages
1,504
Location
Stockholm
#13
Just read a very good article on this

[...] it’s a very different task to recognize items humans have labeled with the letters f l o w e r compared to understanding what a human thinks of when they see a flower. To a human being, a flower is the often visually appealing part of a plant that is attempting to reproduce. Some people like being given flowers as a gesture of affection or care. An artificial intelligence whose entire existence consists of just a playing field of meaningless shapes and colours (that we call flowers) has never been given the chance to learn anything about humanity, the universe, or biology. Therefore it also can’t have any idea of what these concepts are.

Of course we can try to make it have an idea. We can train an artificial intelligence to recognize the words “humanity”, “the universe” and “biology” and produce sentences like “I know these things” when asked about them, but they are still essentially meaningless to the AI itself. It’s like teaching a parrot to repeat sentences that describe quantum physics; it doesn’t make the parrot a physicist. In many ways, parrots are way smarter than any AI today. In AI research, we really have no actual idea of how to teach an AI to understand complex human-world issues yet.
https://liljat.fi/2017/11/humanoid-robot-sophia-sad-hoax-harms-ai-research/
 