I am not sure what exactly your argument even is.
Hume's point was that there is no apparent necessary connection between cause and effect. For example, just by examining the cause a priori you can't figure out what the effect will be, no matter how intelligent you are (without prior knowledge of...
I believe Newton, Leibniz, and Descartes were Christians, or at least theists. Hobbes is a mixed case. Either way, I don't see why this is relevant.
Also, we are going off on tangents here. As I said, my issues here are deeper than Newtonian vs. not, so pre-Newtonian vs. post-Newtonian is an orthogonal side...
I don't have a problem with the view that "things have a nature that causes the effect" - that's basically the idea of dispositions - dispositional powers. It's not an unworkable thesis. I find it more plausible that laws are idealizations of the dispositional activities of things (agents) than...
I don't think it's quite correct that Hume's contemporaries didn't like him. IIRC, he did achieve fame for his philosophy later in life - although initially his Treatise was relatively ignored - and again, that doesn't show that he was disliked. Also, "methodological naturalism" was already...
First, there isn't any clear differentiation between the natural and the metaphysical. The metaphysical - insofar as we are talking about ontology - is just about what exists - about the nature of being. Nature is an expression of being, so not something ametaphysical. Perhaps you meant "supernatural"...
I always found the idea of things somehow having "explanations" strange. Also, yes, I don't buy PSR. I also don't think "natural causes" provide reasons in a strong sense in the first place (consider Hume, for example: there doesn't seem to be any logically necessary connection between causes and...
I would think humans depend on prompts too. For example, evolution set up some initial priming, and then there is constant experiential input starting from within the mother's womb. AutoGPT is designed to be more agentic and autonomous: one just needs to provide high-level goals, and it then keeps on doing its...
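Roughly, the loop behind such systems can be sketched like this. This is my own minimal sketch of the idea, not AutoGPT's actual code; `propose_next_task` and `execute` are hypothetical stand-ins:

```python
# Sketch of the agent-loop idea: give one high-level goal, and the system
# keeps proposing and executing its own subtasks, feeding results back.
# `propose_next_task` and `execute` are hypothetical stand-ins, not real APIs.

def agent_loop(goal, propose_next_task, execute, max_steps=10):
    history = []
    for _ in range(max_steps):
        task = propose_next_task(goal, history)   # model picks its own next step
        if task is None:                          # model decides the goal is done
            break
        history.append((task, execute(task)))     # act, then feed the result back
    return history

# Toy demo: "propose" three steps, then stop; "execute" just uppercases.
toy_propose = lambda goal, history: f"step {len(history)}" if len(history) < 3 else None
toy_execute = lambda task: task.upper()
result = agent_loop("demo goal", toy_propose, toy_execute)
print(result)
```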
"I don't know. Either there is no real explanation at all - brute fact (most plausible to me) - after all, why should there even be reasons at all? Or perhaps one can derive everything from logical laws (unlikely, but that's probably the only way you can fully avoid brute facts. Unless everything...
I don't see what Hopfield networks have to do with self-loops specifically, but this paper draws a connection between Transformers and Hopfield networks: https://arxiv.org/abs/2008.02217
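The paper's core observation, as I understand it, is that the update rule of a modern (continuous) Hopfield network has the same softmax form as Transformer attention. A minimal retrieval demo of that update rule (my own toy illustration, not code from the paper):

```python
import numpy as np

def hopfield_retrieve(X, query, beta=8.0):
    """One modern-Hopfield update step: X.T @ softmax(beta * X @ query).

    X holds the stored patterns as rows; the update pulls the query
    toward the nearest stored pattern - exactly the attention form.
    """
    scores = beta * (X @ query)
    w = np.exp(scores - scores.max())
    w /= w.sum()                      # softmax over stored patterns
    return X.T @ w

patterns = np.eye(3)                  # three orthogonal stored patterns
noisy = np.array([0.9, 0.1, 0.0])     # corrupted version of pattern 0
out = hopfield_retrieve(patterns, noisy)
print(np.argmax(out))                 # prints 0: the closest stored pattern wins
```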
I don't see what exactly you are trying to do by combining Transformers + RNNs. While it's not your exact architecture,...
I think self-awareness is a pretty vague idea, and it's not clear what people are expecting in terms of AI being self-aware. It can simulate the patterns of "self-speak" associated with its prompt priming it to be an AI assistant. Parroting patterns of words isn't necessarily self-awareness even...
First, I apologize if my post offended you in any way. Second, there is ChatGPT-4. Third, I am talking about GPT as a family of models. Some of the papers I linked are older - they only use GPT-3/GPT-3.5 (example: https://arxiv.org/pdf/2210.02441.pdf). Moreover, I have personally tested ChatGPT in...
Basic ChatGPT doesn't have access to the internet. Some, like Bing AI, may, but all of them are prone to bullshitting. They are trained to generate convincing text and plausible follow-ups. Whether they will be factual depends on the statistics of the data - there is no clear-cut guarantee. It can...
I would be careful with it. It has a tendency to hallucinate. While that's true for humans as well, with a particular human you can better estimate how much to trust them based on historical experience, and/or based on sources/fact-checking/peer review, etc. While GPT is a being of chaos...
It can correct its own mistakes. You can ask it to recheck for errors and fix them if there are any. You can do that in a loop for double or triple checks.
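That loop is trivial to script against any chat API. A sketch of the idea; `ask_model` is a hypothetical stand-in for a real API call, not an actual library function:

```python
# Sketch of the recheck-in-a-loop idea. `ask_model` is a hypothetical
# stand-in for a chat-API call; any callable prompt -> reply works.

def self_correct(ask_model, draft, rounds=3):
    """Repeatedly ask the model to find and fix errors in its own output."""
    text = draft
    for _ in range(rounds):  # double/triple checks
        text = ask_model(f"Recheck the following for errors and fix any:\n{text}")
    return text

# Toy "model" that just fixes one known typo, to show the loop converging.
toy = lambda prompt: prompt.split("\n", 1)[1].replace("teh", "the")
fixed = self_correct(toy, "teh cat sat")
print(fixed)  # prints "the cat sat"
```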
I don't see what pre-training has to do with it being human-level or not.
It can.
https://github.com/Torantulino/Auto-GPT...
Why exactly does this need an explanation? Positing an intelligent mind or anything else does not answer it - that's just saying "it all works because there is a working intelligence," which is no better than saying "it works because it works." That's the main problem I have with design arguments. I...
I do mostly pursue acts that harm me psychologically. I don't think the standard "pleasures" are necessarily particularly good. So I am not sure if there is any interesting sense of harmony at all.
I seriously doubt epiphenomenalism is particularly popular.
According to epiphenomenalism...
Do you remember your experiences in the womb? Do you remember having premade models back then?
Is what you are saying based on any evidence in developmental psychology?
Even basic things like object permanence take time to develop.
And even if some things are "premade" it doesn't really mean...
Probably not. AlphaFold is built for different things, working on protein sequences instead of standard visual stuff (but I don't know the nitty-gritty). Maybe we can do something with its MCTS algorithm.
The n x n connections in Transformers are based on attention, created by pairwise dot products of the vectors in a sequence. The attention function can adaptively assign low weights to spurious relations and learn a tree-like structure. In practice, GPT has a limited context window to manage...
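For concreteness, here's roughly what those pairwise dot products look like - a minimal single-head sketch that omits the learned query/key/value projections of a real Transformer:

```python
import numpy as np

def attention(X):
    """Self-attention over a sequence X of shape (n, d).

    The n x n weight matrix comes from pairwise dot products; the softmax
    lets the model push near-zero weight onto spurious pairs. Learned
    query/key/value projections are omitted for simplicity.
    """
    scores = X @ X.T / np.sqrt(X.shape[1])           # n x n pairwise dot products
    w = np.exp(scores - scores.max(axis=1, keepdims=True))
    w /= w.sum(axis=1, keepdims=True)                # softmax over each row
    return w @ X                                     # weighted mix of the sequence

X = np.random.default_rng(0).normal(size=(4, 8))     # toy sequence: 4 tokens, d=8
out = attention(X)
print(out.shape)  # (4, 8): one mixed vector per position
```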
Interestingly, his actual doctorate seems to be in philosophy (neither AI nor cognitive science). I wonder how he got into them: http://kryten.mm.rpi.edu/scb_vitae_092722p.pdf
His supervisor Chisholm is also not really a cognitive-science person (https://plato.stanford.edu/entries/chisholm/)...
Depends on what exactly we mean by a recurrent hierarchy.
During inference, GPT generates one token at a time, which you can think of as a recurrent process.
Moreover, it has multiple "layers," which can potentially model hierarchies. While generating any token, GPT attends to all past generated...
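The recurrence is just the sampling loop: each new token is appended and the whole sequence is fed back in. A toy sketch of mine, where the `model` callable is a stand-in for GPT's forward pass:

```python
def generate(model, prompt_tokens, n_new):
    """Autoregressive decoding: the growing sequence is fed back each step.

    `model` is any callable mapping a token sequence to the next token -
    a hypothetical stand-in for GPT's forward pass (layers/heads elided).
    """
    seq = list(prompt_tokens)
    for _ in range(n_new):
        nxt = model(seq)   # attends over all past tokens
        seq.append(nxt)    # the output is fed back in: a recurrence
    return seq

# Toy "model": next token is the sum of the last two, mod 10.
toy = lambda seq: (seq[-1] + seq[-2]) % 10
tokens = generate(toy, [1, 1], 5)
print(tokens)  # [1, 1, 2, 3, 5, 8, 3]
```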
No, I think there has been evidence to foresee it for about 4-5 years now. But people evaluate evidence differently based on their priors and prejudices (we all have our own different prejudices -- and based on that, some would optimistically focus on "what we are gaining/improving upon -- and...
I would think a Monte Carlo approximation of AIXI (there are some papers/theses about that; I haven't read them) would be something like that: you sample plausible actions/plausible hypothesis modifications etc., and then truncate when the tree grows out too much.
See Monte Carlo tree search...
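To give a feel for it, here's a bare-bones MCTS on a toy problem of my own invention (pick +1/+2 moves for 5 steps; reward 1 iff the final sum is exactly 8) - a sketch of the technique, not any AIXI approximation:

```python
import math, random

class Node:
    def __init__(self, state, depth):
        self.state, self.depth = state, depth
        self.children, self.visits, self.value = {}, 0, 0.0

def ucb(parent, child, c=1.4):
    """Upper confidence bound: exploit average reward, explore rare nodes."""
    return child.value / child.visits + c * math.sqrt(math.log(parent.visits) / child.visits)

def rollout(state, depth):
    """Simulation: finish the game with random moves, return the reward."""
    while depth < 5:
        state += random.choice((1, 2)); depth += 1
    return 1.0 if state == 8 else 0.0

def mcts(iterations=2000):
    random.seed(0)
    root = Node(0, 0)
    for _ in range(iterations):
        node, path = root, [root]
        while node.depth < 5:                      # selection / expansion
            if len(node.children) < 2:             # expand an untried action
                a = 1 if 1 not in node.children else 2
                node.children[a] = Node(node.state + a, node.depth + 1)
                node = node.children[a]; path.append(node); break
            node = max(node.children.values(), key=lambda ch: ucb(node, ch))
            path.append(node)
        reward = rollout(node.state, node.depth)   # simulation
        for n in path:                             # backpropagation
            n.visits += 1; n.value += reward
    # the most-visited first move is the recommendation
    return max(root.children.items(), key=lambda kv: kv[1].visits)[0]

best = mcts()
print(best)
```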
To an extent, whether we want to make an AI "stop" after a task is finished is an engineering choice. With enough resources, you could, for example, keep ChatGPT live forever and also make it continually improve from users' upvotes/downvotes on responses.
However, there are typically problems...
"AGI" is not really a consensually formalized notion. Different people have different ideas as to what even "intelligence" should mean. At this point, of course, there is still a lot of ground to cover, and obviously you can find many missing elements. But AIs do actually engineer things and...
He is an expert, yes. But AI is a broad and fast-moving field. He knows which models are prominent (like Transformers), but many fascinating things are often more niche, and he also seems a bit outdated - for example, regarding the existence of generative models for image generation, art, etc. ChatGPT...
I am a PhD student in CS with a specialization in NLP. But comparatively, I don't really know that much, especially regarding the more fundamental foundational stuff and deeper mathematical principles.
No, I don't equate knowledge with consciousness. Also both are confusing topics with little...
I don't know why we specifically need the algorithms of the human brain. For example, you can fly with a balloon; to do that, you don't need to look at how a bird works. The same skill can be instantiated in multiple ways. Moreover, CNNs are already partially inspired by the human brain. It may turn out...
Building world models is not necessarily "more than" calculator abilities. AIs (e.g. model-based RL/IL) have been used to build models of their environment and learn its dynamics; they have also been trained to be grounded in physics; and they have been used to associate multiple modalities of...
It can learn new dialogue by interacting with humans. AIs have been trained from human feedback using reinforcement learning since ancient times (for more recent versions see InstructGPT and ChatGPT). Tay isn't a counterexample: it learned from humans, all right - just not all the good things.
There is a lot wrong with what the specialist says.
(0) I don't know much about the original idea of the singularity. It's typically stated vaguely, so you can go both ways, interpreting it with charity or without. Sure, machines may not be able to take some "qualitative"...
I would distinguish understanding from sentience. I take a more functionalist approach to understanding, whereas by sentience I mean the presence of "phenomenal feel" -- the "what it is like"-stuff (Nagel et al.).
In that sense, I am not entirely sure that phenomenal feel is a matter...
I don't treat understanding as binary but as a matter of degree. For example, even learning, to a limited extent, how texts are used in interactive contexts corresponds to a limited degree of understanding.
If it gains capability to comprehend multi-modal associations and gain corresponding skills...
That's underestimating it a bit. Those are descriptions of classical frequency-based language models, e.g. ones using PCFGs.
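For reference, that kind of description does fit something like this - a minimal maximum-likelihood bigram model, the classical frequency-based kind (toy example of my own):

```python
from collections import Counter, defaultdict

def train_bigram(corpus):
    """Count bigram frequencies, with sentence boundary markers."""
    counts = defaultdict(Counter)
    for sentence in corpus:
        tokens = ["<s>"] + sentence.split() + ["</s>"]
        for prev, cur in zip(tokens, tokens[1:]):
            counts[prev][cur] += 1
    return counts

def prob(counts, prev, cur):
    """Maximum-likelihood estimate of P(cur | prev) from raw counts."""
    total = sum(counts[prev].values())
    return counts[prev][cur] / total if total else 0.0

corpus = ["the cat sat", "the cat ran", "a dog sat"]
counts = train_bigram(corpus)
print(prob(counts, "the", "cat"))  # 1.0: "the" is always followed by "cat"
print(prob(counts, "cat", "sat"))  # 0.5: "cat" is followed by "sat" half the time
```

Deep LMs are not built from tables of counts like this, which is part of why they are harder to interpret.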
Deep-learning LMs are in some sense more mysterious. Of course, from the outside we know what they are: just matrices and non-linear functions (nothing mysterious). But it's not...
I don't think anything much. As I said, I believe you can have multiple frameworks for free will, or freedom more generally, and multiple dimensions can be associated with it. I don't see anything wrong with giving some importance to epistemic decisions and calling that related to free...
Behaviors don't automatically demonstrate sentience. Even a perfectly intelligent system wouldn't necessarily be sentient. I don't think sentience is necessary at all for abstraction, reasoning, or anything else (though it can be an ingredient in particular kinds of implementations of abstraction/reasoning...
As meaningfulness comes from personal values, our being beings with purpose (purpose imposed by a divine order, by evolution, or by those pesky supernatural bureaucrats) is immaterial. Even with no purpose (or even with a meaningless purpose), if we can find meaning in some goal, we can define our...
"Do you think our culture mythologizes marriage and life in general?"
Yes.
"Do we keep on giving gifts to our children, telling them to give gifts as well, for a reason that cannot be rationally defined?"
I don't know. I am choosing not to (for now), personally. Also it doesn't align with my...