
Chat GPT

Black Rose

An unbreakable bond
Local time
Yesterday 9:24 PM
Joined
Apr 4, 2010
Messages
11,431
---
Location
with mama
I am under the impression that it has no metacognition; it is unable to correct its own mistakes by itself.
It can correct its own mistakes. You can ask it to recheck for errors and fix them if there are any. You can do that in a loop for double or triple checks.
It is pretrained, so it cannot be human-level in thinking
I don't see what pre-training has to do with it being human-level or not.
i.e. cannot self-reflect / ask its own questions of itself.
It can.
All this can be implemented to create true a.i., but it has not been done because of safety concerns.

You have it all wrong: chat-gpt cannot do what GPT-4 can.

chat-gpt is not GPT-4

you misrepresent things and I dislike that.
 

DoIMustHaveAnUsername?

Active Member
Local time
Today 4:24 AM
Joined
Feb 4, 2016
Messages
282
---
Better still, if you think it's so smart, then ask it for an investment strategy that is guaranteed to make you a billionaire in 5 years with only an initial investment of 10 dollars. If you're wrong and it's not so smart, you lose 10 dollars. If you're right and it's as smart as you think it is, you're a billionaire in 5 years and can afford to retire a very rich man.
That's not really how it works; it's more like a really good search engine. If I type "how to invest" into Google it will give me a list of pages with relevant content, but ChatGPT reads that info, distills it down to what is most relevant and presents it as an explanation in natural language.

If I asked ChatGPT whether a given company is a good investment, it can't give me an opinion on the matter, only describe past performance and explain that past performance doesn't guarantee future performance, which is simply the objective truth.
Basic ChatGPT doesn't have access to the internet. Some, like BingAI, may, but all of them are prone to bullshitting. They are trained to generate convincing text and plausible follow-ups. Whether they will be factual depends on the statistics of the training data - there is no clear-cut guarantee. It can make up random (but plausible) paper titles and say random (wrong but plausible) things about random authors, especially when you ask questions out of the ordinary. As such they can be dangerous in many ways, because they can be masters of bullshitting and you can easily lose track of when they are being factual and when they are not. It's trained on internet personas, so nothing is really stopping it from providing random reddit-tier subjective investment opinions either, except some filters at work at OpenAI or wherever.
 

DoIMustHaveAnUsername?

Active Member
Local time
Today 4:24 AM
Joined
Feb 4, 2016
Messages
282
---
I am under the impression that it has no metacognition; it is unable to correct its own mistakes by itself.
It can correct its own mistakes. You can ask it to recheck for errors and fix them if there are any. You can do that in a loop for double or triple checks.
It is pretrained, so it cannot be human-level in thinking
I don't see what pre-training has to do with it being human-level or not.
i.e. cannot self-reflect / ask its own questions of itself.
It can.
All this can be implemented to create true a.i., but it has not been done because of safety concerns.

You have it all wrong: chat-gpt cannot do what GPT-4 can.

chat-gpt is not GPT-4

you misrepresent things and I dislike that.
First, I apologize if my post offended you in any way. Second, there exists ChatGPT-4. Third, I am talking about GPT as a family of models. Some of the papers I linked are older and only use GPT-3/GPT-3.5 (example: https://arxiv.org/pdf/2210.02441.pdf). Moreover, I personally tested ChatGPT in the beginning and made it correct its mistakes. GPT-4 is better at it, but ChatGPT-3.5 isn't completely incapable. You can also run AutoGPT based on GPT-3/3.5. Also, I don't see why you are making an issue about GPT-4 vs GPT-3.5-based models. Both are pre-trained and your comment was aimed at pre-trained models in general.
 

Black Rose

An unbreakable bond
Local time
Yesterday 9:24 PM
Joined
Apr 4, 2010
Messages
11,431
---
Location
with mama
I don't see why you are making an issue about GPT-4 vs GPT-3.5-based models. Both are pre-trained and your comment was aimed at pre-trained models in general.

I see. But I think that long-term memory and reflection are important.

My point is that it has crystallized intelligence but not so much self-awareness.

I can ask it "how do you know such and such?" and it is no better than a 9-year-old.

GPT-4 is a better model; it is multimodal, but I have no money to access it.

chat-gpt cannot correct itself that well:

[screenshot: ChatGPT 3.5 answers 0.16 millimeters a year]

-

Earth's obliquity oscillates between 22.1 and 24.5 degrees on a 41,000-year cycle.

24.5 - 22.1 = 2.4 degrees

Earth's diameter: 12,742 km = 12,742,000,000 millimeters

12,742,000,000 mm / 360 degrees = 35,394,444.444 mm per degree

35,394,444.444 mm * 2.4 = 84,946,666.666 mm per cycle

84,946,666.666 mm / 41,000 years = 2,071.869 mm per year

2,071.869 / 365 = 5.676 mm per day

5.676 millimeters a day

still wrong:

[screenshot: ChatGPT 3.5 answers 2.07 millimeters a year]
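For reference, here is the same back-of-the-envelope arithmetic as a short Python sketch (same figures as above; the diameter-divided-by-360 step is the assumption from my calculation, not a claim about the exact geometry):

```python
# Same back-of-the-envelope numbers as above, just scripted.
diameter_km = 12_742                       # Earth's diameter
obliquity_range_deg = 24.5 - 22.1          # 2.4 degrees over one cycle
cycle_years = 41_000                       # length of the obliquity cycle

diameter_mm = diameter_km * 1_000_000      # 12,742,000,000 mm
mm_per_degree = diameter_mm / 360          # ~35,394,444 mm (the assumption above)
shift_per_cycle_mm = mm_per_degree * obliquity_range_deg   # ~84,946,667 mm
per_year_mm = shift_per_cycle_mm / cycle_years             # ~2,071.9 mm per year
per_day_mm = per_year_mm / 365                             # ~5.68 mm per day

print(f"{per_year_mm:,.3f} mm per year, {per_day_mm:.3f} mm per day")
```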
 

DoIMustHaveAnUsername?

Active Member
Local time
Today 4:24 AM
Joined
Feb 4, 2016
Messages
282
---
I don't see why you are making an issue about GPT-4 vs GPT-3.5-based models. Both are pre-trained and your comment was aimed at pre-trained models in general.

I see. But I think that long-term memory and reflection are important.

My point is that it has crystallized intelligence but not so much self-awareness.

I can ask it "how do you know such and such?" and it is no better than a 9-year-old.

GPT-4 is a better model; it is multimodal, but I have no money to access it.

chat-gpt cannot correct itself that well:

[screenshot: ChatGPT 3.5 answers 0.16 millimeters a year]

-

Earth's obliquity oscillates between 22.1 and 24.5 degrees on a 41,000-year cycle.

24.5 - 22.1 = 2.4 degrees

Earth's diameter: 12,742 km = 12,742,000,000 millimeters

12,742,000,000 mm / 360 degrees = 35,394,444.444 mm per degree

35,394,444.444 mm * 2.4 = 84,946,666.666 mm per cycle

84,946,666.666 mm / 41,000 years = 2,071.869 mm per year

2,071.869 / 365 = 5.676 mm per day

5.676 millimeters a day

still wrong:

[screenshot: ChatGPT 3.5 answers 2.07 millimeters a year]
I think self-awareness is a pretty vague idea, and it's not clear what people are expecting in terms of AI being self-aware. It can simulate the patterns of "self-speak" associated with its prompt priming it to be an AI assistant. Parroting patterns of words isn't necessarily self-awareness, even if it is highly nuanced. But it's not clear how deep one would go with self-awareness here.

Also, 9-year-olds are not to be underestimated. I don't think I was particularly self-unaware as a 9-year-old.

Regarding error-correction, of course it won't be able to correct any and all errors every time. The question is whether there is some class of problems where performance would improve on average if the model is made to self-loop by asking itself to re-check or correct itself. My suspicion is that this would increase performance on average - indicating at least some rudimentary self-correction.
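To be concrete, by "self-loop" I just mean something like the following; `ask(prompt, history)` is a hypothetical stand-in for whatever chat model sits behind it, so treat this as a sketch of the control flow rather than any particular API:

```python
def self_check_loop(ask, question, max_rounds=3):
    """Ask a model a question, then repeatedly ask it to re-check its own answer.

    `ask(prompt, history) -> reply` is a hypothetical wrapper around some chat model;
    the point here is the loop, not any particular API.
    """
    history = []
    answer = ask(question, history)
    history.append((question, answer))
    for _ in range(max_rounds):
        recheck = ("Re-check your previous answer step by step. "
                   "If you find an error, give a corrected answer; otherwise repeat it.")
        revised = ask(recheck, history)
        history.append((recheck, revised))
        if revised.strip() == answer.strip():   # answer stabilised, stop early
            break
        answer = revised
    return answer
```

Whether this actually helps is the empirical question; the claim is only that the loop itself is trivial to set up.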
 

Black Rose

An unbreakable bond
Local time
Yesterday 9:24 PM
Joined
Apr 4, 2010
Messages
11,431
---
Location
with mama
Regarding error-correction, of course it won't be able to correct any and all errors every time. The question is whether there is some class of problems where performance would improve on average if the model is made to self-loop by asking itself to re-check or correct itself. My suspicion is that this would increase performance on average - indicating at least some rudimentary self-correction.

self-loop is something called a Hopfield network:

[diagram: Hopfield network]


It has problems operating in real time, so no one ever developed it.
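For reference, the textbook binary Hopfield update is a very small recurrent rule; this is a generic sketch of that standard rule, not of the diagram above:

```python
import numpy as np

def hopfield_recall(patterns, probe, steps=10):
    """Textbook binary Hopfield network: Hebbian weights, iterated sign updates.

    patterns: list of +/-1 vectors to store; probe: a noisy +/-1 vector to clean up.
    A generic sketch of the standard update rule (the "self-loop"), nothing more.
    """
    P = np.asarray(patterns, dtype=float)
    n_units = P.shape[1]
    W = P.T @ P / n_units              # Hebbian outer-product weights
    np.fill_diagonal(W, 0.0)           # no self-connections
    s = np.asarray(probe, dtype=float)
    for _ in range(steps):
        s_new = np.sign(W @ s)         # feed the state back into itself
        s_new[s_new == 0] = 1.0
        if np.array_equal(s_new, s):   # converged to a stored attractor
            break
        s = s_new
    return s

# e.g. store one pattern and recover it from a corrupted copy:
stored = [[1, -1, 1, -1, 1, -1, 1, -1]]
noisy = [1, -1, 1, -1, -1, -1, 1, -1]   # one flipped bit
print(hopfield_recall(stored, noisy))
```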

Here is a transformer:

[diagram: Transformer architecture]


This is what I came up with; it operates on hypothesis testing in real time:

[diagram: my proposed architecture]
 

DoIMustHaveAnUsername?

Active Member
Local time
Today 4:24 AM
Joined
Feb 4, 2016
Messages
282
---
Regarding error-correction, of course it won't be able to correct any and all errors every time. The question is whether there is some class of problems where performance would improve on average if the model is made to self-loop by asking itself to re-check or correct itself. My suspicion is that this would increase performance on average - indicating at least some rudimentary self-correction.

self-loop is something called a Hopfield network:

[diagram: Hopfield network]


It has problems operating in real time, so no one ever developed it.

Here is a transformer:

[diagram: Transformer architecture]


This is what I came up with; it operates on hypothesis testing in real time:

[diagram: my proposed architecture]
I don't see what Hopfield networks have to do with self-loops specifically, but this paper draws a connection between Transformers and Hopfield networks: https://arxiv.org/abs/2008.02217

I don't see what exactly you are trying to do by combining Transformers + RNNs. While not your exact architecture, similar things have been done before.

Ultimately the community moved away from RNNs - they are harder to scale - although certain variants of linear RNNs are making a comeback: https://arxiv.org/abs/2303.06349

But you can loop Transformers as well, and you can also add long-term memory, for example by creating a vector database and such.
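By "vector database" I mean roughly the following; `embed` here is a stand-in for any text-embedding model, so this is a sketch of the idea rather than any particular library's API:

```python
import numpy as np

class VectorMemory:
    """Toy long-term memory: store (text, embedding) pairs, retrieve by cosine similarity.

    `embed(text) -> 1-D numpy array` is a hypothetical stand-in for an embedding model.
    Retrieved notes would then be pasted back into the model's prompt as context.
    """
    def __init__(self, embed):
        self.embed = embed
        self.texts, self.vectors = [], []

    def add(self, text):
        self.texts.append(text)
        self.vectors.append(self.embed(text))

    def search(self, query, k=3):
        q = self.embed(query)
        sims = [
            float(np.dot(q, v) / (np.linalg.norm(q) * np.linalg.norm(v) + 1e-9))
            for v in self.vectors
        ]
        top = np.argsort(sims)[::-1][:k]        # indices of the k most similar notes
        return [self.texts[i] for i in top]
```

Real systems use approximate nearest-neighbour indexes instead of a brute-force scan, but the principle is the same.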
 

Black Rose

An unbreakable bond
Local time
Yesterday 9:24 PM
Joined
Apr 4, 2010
Messages
11,431
---
Location
with mama
I don't see what exactly you are trying to do by combining Transformers + RNNs.

I am considering that a.i. needs to become an "Agent". Agents operate in 3D and in real time. They self-regulate.

My architecture is meant to scale "hypothesis testing".

GPT needs prompting. An agent would do stuff on its own.

 

DoIMustHaveAnUsername?

Active Member
Local time
Today 4:24 AM
Joined
Feb 4, 2016
Messages
282
---
I don't see what exactly you are trying to do by combining Transformers + RNNs.

I am considering that a.i. needs to become an "Agent". Agents operate in 3D and in real time. They self-regulate.

My architecture is meant to scale "hypothesis testing".

GPT needs prompting. An agent would do stuff on its own.

I would think humans depend on prompts too. For example, evolution sets up some initial priming, and then there is constant experiential input starting from within the mother's womb. AutoGPT is designed to be more agentic and autonomous: one just needs to provide high-level goals, and it then keeps doing its own thing indefinitely.

Scaling hypothesis testing would be something we should do; more generally we should think about biasing agents to be epistemically virtuous.
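The control flow of something like AutoGPT is roughly the loop below; `ask` and `run` are hypothetical stand-ins for the model call and whatever tool execution is allowed, so this is a sketch of the shape, not of AutoGPT's actual code:

```python
def agent_loop(ask, run, goal, max_steps=20):
    """Crude agentic loop: propose an action, execute it, observe, repeat.

    `ask(prompt) -> str` and `run(action) -> str` are hypothetical stand-ins for the
    language model and the tool executor; only the control flow is the point here.
    """
    scratchpad = f"Goal: {goal}\n"
    for step in range(max_steps):
        action = ask(scratchpad +
                     "\nWhat single action should be taken next? Reply DONE if the goal is met.")
        if action.strip().upper().startswith("DONE"):
            break
        observation = run(action)               # e.g. search, read a file, call an API
        scratchpad += f"\nStep {step}: {action}\nResult: {observation}\n"
    return scratchpad
```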
 

dr froyd

__________________________________________________
Local time
Today 4:24 AM
Joined
Jan 26, 2015
Messages
1,485
---
what if ChatGPT is like in Westworld, where it's actually used to collect data on people's brains?

just a thought
 

dr froyd

__________________________________________________
Local time
Today 4:24 AM
Joined
Jan 26, 2015
Messages
1,485
---
for someone with access to ChatGPT, can you ask it if it can predict the next 30 steps of this series:

0.2794155 0.3738767 0.4646022 0.5506855 0.6312666 0.7055403 0.7727645 0.8322674 0.8834547 0.9258147 0.9589243 0.9824526 0.9961646 0.9999233 0.993691 0.9775301 0.9516021 0.9161659 0.8715758 0.8182771 0.7568025 0.6877662 0.6118579 0.5298361 0.4425204 0.3507832 0.2555411 0.1577457 0.05837414 -0.04158066 -0.14112 -0.2392493 -0.3349882 -0.4273799 -0.5155014 -0.5984721 -0.6754632 -0.7457052 -0.8084964 -0.8632094 -0.9092974 -0.9463001 -0.9738476 -0.9916648 -0.9995736 -0.997495 -0.9854497 -0.9635582 -0.9320391 -0.8912074 -0.841471 -0.7833269 -0.7173561 -0.6442177 -0.5646425 -0.4794255 -0.3894183 -0.2955202 -0.1986693 -0.09983342 0 0.09983342 0.1986693 0.2955202 0.3894183 0.4794255 0.5646425 0.6442177 0.7173561 0.7833269 0.841471 0.8912074 0.9320391 0.9635582 0.9854497 0.997495 0.9995736 0.9916648 0.9738476 0.9463001 0.9092974 0.8632094 0.8084964 0.7457052 0.6754632 0.5984721 0.5155014 0.4273799 0.3349882 0.2392493 0.14112 0.04158066 -0.05837414 -0.1577457 -0.2555411 -0.3507832 -0.4425204 -0.5298361 -0.6118579 -0.6877662 -0.7568025 -0.8182771 -0.8715758 -0.9161659 -0.9516021 -0.9775301 -0.993691 -0.9999233 -0.9961646 -0.9824526 -0.9589243 -0.9258147 -0.8834547 -0.8322674 -0.7727645 -0.7055403 -0.6312666 -0.5506855 -0.4646022 -0.3738767 -0.2794155 -0.1821625 -0.0830894 0.0168139 0.1165492 0.21512 0.3115414 0.4048499 0.4941134 0.5784398 0.6569866
this is just a basic sine wave

would be interesting to see if it understands the structure of it
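For anyone checking the answer: the numbers appear to be sin(x) sampled every 0.1 from about x = -6.0 up to 7.0, so the expected continuation is easy to generate (a quick sketch, assuming that reading is right):

```python
import numpy as np

# The listed values match sin(x) for x = -6.0, -5.9, ..., 7.0 (step 0.1),
# e.g. sin(-6.0) = 0.2794155 and sin(-3.0) = -0.14112, as in the series above.
x = np.arange(-6.0, 7.0 + 1e-9, 0.1)
series = np.sin(x)                               # should reproduce the 131 numbers

# If that's right, the "next 30 steps" are simply:
next_30 = np.sin(np.arange(7.1, 10.0 + 1e-9, 0.1))
print(np.round(next_30, 7))
```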
 

Black Rose

An unbreakable bond
Local time
Yesterday 9:24 PM
Joined
Apr 4, 2010
Messages
11,431
---
Location
with mama

[embedded link: Mini GPT 4 AI - 3 Next Gen Vision Abilities]


 

birdsnestfern

Earthling
Local time
Yesterday 11:24 PM
Joined
Oct 7, 2021
Messages
1,897
---
Ok, interesting!
 