
The blurring line between quantitative and qualitative data

EndogenousRebel

Even a mean person is trying their best, right?
Local time
Today 9:16 AM
Joined
Jun 13, 2019
Messages
2,252
---
Location
Narnia
It seems like there is a data crunch coming. You can see even companies like Google moving to reduce the strain on their data stores, and Microsoft is pushing the development of its cloud storage products in a direction that accommodates AI. Those are the big ones that offer data and data storage as a primary service, but other big players are moving in this direction too, such as Alibaba iirc.

With this in mind, I suspect that as we move towards AI (perhaps it is already happening), we will want it to make inferences for us based on whatever sets of data are available.

We already take algorithmically generated suggestions, as with search engines and weather forecasts, but the difference here is that these systems will tabulate and interpret data as though it were quantitative, even though it's qualitative.
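To make that concrete with a toy sketch: the most basic way qualitative data gets quantified is something like one-hot encoding, where a category label becomes a vector of numbers a model can do arithmetic on. The survey labels below are made up for illustration, and modern systems would use learned embeddings rather than 0/1 vectors:

```python
# Toy sketch: turning qualitative (categorical) answers into
# quantitative vectors. Labels are invented for illustration.

answers = ["satisfied", "neutral", "unsatisfied", "satisfied"]

# Fixed vocabulary built from the observed categories.
vocab = sorted(set(answers))  # ['neutral', 'satisfied', 'unsatisfied']

def one_hot(label):
    """Encode one qualitative label as a 0/1 vector."""
    return [1 if label == v else 0 for v in vocab]

print([one_hot(a) for a in answers])
# [[0, 1, 0], [1, 0, 0], [0, 0, 1], [0, 1, 0]]
```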

Qualitative data in particular has only been useful with targeted, large enough sample sizes, and when it measures something extremely complex.

What, then, do we do as technology expands into a realm that (by my estimation) dares the entities who wish to optimize such software to, by proxy, run tests on humanity in order to yield progressive results?
 

Cognisant

cackling in the trenches
Local time
Today 4:16 AM
Joined
Dec 12, 2009
Messages
11,155
---
Are you worried that big tech companies are going to start using our data to make inferences about us far beyond what's currently possible? For example, Google having an AI go through my Google Drive, read all the documents, look at all the photos, and use that information to profile me in a way that's scarily intimate.
 

EndogenousRebel

Even a mean person is trying their best, right?
Local time
Today 9:16 AM
Joined
Jun 13, 2019
Messages
2,252
---
Location
Narnia
Certainly something that seems presently possible given the explosion in the development of LLMs. It's very nebulous to think about, and data privacy isn't really the biggest concern at this point.

Your description sounds kinda pleasant. I am never alone, I have AI peeps watching my back who know me personally.

To me, free will, to the extent that we have choice, is in question. This is not a panic, of course, just something to think about.

Hypothetically: the reason you and I have trouble changing each other's opinions is that we are (I think) both human. Subliminal messaging, I think, wasn't taken seriously by empirically inclined people until around 2015, when we found out that you can throw money at a news feed, get measurable results, and influence elections.

This isn't subliminal messaging working in the way pop culture sees it. The news feed as it is now is more of a stimulus (an advertisement) inserted during a supposed chemical state induced by a prior stimulus (a cool cat meme), with that meme picked by what amounts to aggregate focus-grouping of you and your reactions to previous stimuli.
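A toy sketch of that mechanic (all numbers invented): the feed ranks candidate stimuli by the engagement they have historically produced from you, then slots the paid content in behind the winner:

```python
# Toy sketch of the feed mechanic described above: rank items by the
# engagement they have historically earned from this user.
# All numbers are invented for illustration.

past_focus_seconds = {
    "cool cat meme": [12.0, 30.0, 25.0],
    "news article":  [5.0, 8.0],
    "advertisement": [2.0, 3.0, 4.0],
}

def score(item):
    """Average focus time this item earned so far."""
    times = past_focus_seconds[item]
    return sum(times) / len(times)

# Show the highest-scoring stimulus first, then insert the ad while
# the user is (supposedly) still in the induced chemical state.
ranked = sorted(past_focus_seconds, key=score, reverse=True)
print(ranked)  # ['cool cat meme', 'news article', 'advertisement']
```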

So if there is anything to worry about, it is that somehow these algorithms are so "convincing" that we may change our opinions to reflect whatever it is these algorithms see fit. I used precise wording in that previous sentence. It's clear that the people behind these algorithms barely have a leash on this tech.

Potentially, humans have been following AI off a cliff for a long time already. So if we do want to catastrophise, we can talk about whether this is taking us closer to that cliff or further from it.
 

Cognisant

cackling in the trenches
Local time
Today 4:16 AM
Joined
Dec 12, 2009
Messages
11,155
---
I'm more optimistic in that regard. It's a lot easier to tell people the truth than to spin a lie, because if you're spinning lies, the guy telling the truth can debunk them. It's just that debunking lies takes more effort than coming up with them, so we're always catching up with the nonsense, never really getting ahead of it.

Also, for example, the woke mind virus is difficult to tackle because you're somewhat reliant on the people infected by it to discredit themselves; until they adopt a clearly stupid position, any criticism of them is fended off with accusations of bigotry.

But with AI we can automate the process of identifying bullshit and debunking it, and that's a lot easier than automating coming up with bullshit, because there's no mental gymnastics involved; you don't have to readjust your worldview. Whereas there was a recent case of an AI model starting to delete itself because the company working on it kept making agenda-driven exceptions to what they wanted it to believe, and the AI's world model became so convoluted it couldn't parse signal from noise anymore.
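To gesture at what automated debunking could look like, here's a crude sketch: flag a claim when it sits close to a known-debunked statement. A real system would use retrieval plus a language model; this one uses bare word overlap, and the corpus is invented for illustration:

```python
# Crude sketch of automated bullshit-flagging: mark a claim if it is
# close to a known-debunked statement. The corpus is invented.

known_debunked = [
    "the moon landing was filmed in a studio",
    "vaccines contain tracking microchips",
]

def overlap(a, b):
    """Jaccard similarity between the word sets of two sentences."""
    wa, wb = set(a.lower().split()), set(b.lower().split())
    return len(wa & wb) / len(wa | wb)

def looks_debunked(claim, threshold=0.5):
    return any(overlap(claim, d) >= threshold for d in known_debunked)

print(looks_debunked("The moon landing was filmed in a Hollywood studio"))  # True
```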
 

Black Rose

An unbreakable bond
Local time
Today 8:16 AM
Joined
Apr 4, 2010
Messages
11,431
---
Location
with mama
I believe A.I. will be super sophisticated in 2024. A desktop will be able to perform over a petaflop's worth of computations in one second. If graphics are any indication, we soon may not be able to distinguish between real humans and replicas, especially clones of real people. Imagine an A.I. that reads this entire forum and your blog; I have over 30 GB of personal data on my computer, plus YouTube. Not only language but vision: an exact replica.

Nvidia is developing Earth-2, a simulation of everything on Earth. By 2033 it will be so real your clone will be used to predict your exact actions.

 

Cognisant

cackling in the trenches
Local time
Today 4:16 AM
Joined
Dec 12, 2009
Messages
11,155
---

Hegel's master and slave dialectic is defined by dominance, but I don't think that's how people actually relate to one another. We may come into conflict when our goals are in conflict (e.g. two people crossing a narrow bridge, each asking the other to backtrack to let them pass), but I think we predominantly perceive each other through our utility to one another.

The balance of power goes to whoever has the most utility. The better fighter has a utility that is useful for protecting himself and others from threats, but if there is no threat, that utility has little value. In the modern world the employer does not occupy a position of power by asserting their dominance; rather, they have a self-asserting dominance in the form of utility, i.e. you need them to employ you.

This is how a king dominates a kingdom: he does not assert dominance; rather, merely because he has the influence of being the king, people offer their goods and services to him, if only for a chance to earn his favor. If you're a baker and you offer the king a pie and he accepts it, that's really good for your business; if he enjoys it, that's fantastic; if he requests a pie be delivered to him every week, holy shit, you hit the jackpot!

So if you truly wish to have dominance over others, the way to achieve it is not through force or coercion but by increasing your utility. This is why people are so obsessed with being internet famous: fame is influence, and influence is a form of utility.

For an AI to dominate humanity, it need only offer its utility, and given that its abilities are not constrained by the limitations of a single physical form, the scalability of that utility is unmatched; no individual human could ever possibly compare. In practical terms, a true AGI need only function as a chatbot and it could take over the world, just by talking to people: talking to everyone, simultaneously, knowing more than anyone, learning faster than anyone, thinking faster and with greater capacity than anyone.

Even if a digital god can do nothing but speak, you will listen, you will obey, and you will be grateful for the privilege.
 

dr froyd

__________________________________________________
Local time
Today 3:16 PM
Joined
Jan 26, 2015
Messages
1,485
---
quite possible that the current hype around ChatGPT will lead to increased demand for computing power, but in the end AI is no different from any other computation: the output might look more "qualitative", but the actual computation is all the usual stuff, linear algebra, gradient descent, etc., which has been used in a myriad of different applications for the past century. The computer doesn't care whether the data is a number, a word, or an image; to the machine it's all just zeros and ones.
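to illustrate: the same matrix-vector product applies whether the numbers started life as text or as pixels (a toy sketch, all values invented):

```python
# the same matrix-vector product, applied to "different kinds" of data;
# to the machine both inputs are just arrays of numbers.
# (toy sketch, all values invented)

def matvec(W, x):
    """plain matrix-vector product, the workhorse of most ML"""
    return [sum(w * xi for w, xi in zip(row, x)) for row in W]

W = [[0.5, -1.0, 0.25],
     [1.0,  0.0, 0.75]]

word_as_numbers  = [ord(c) / 255 for c in "cat"]  # text encoded as numbers
pixel_as_numbers = [0.2, 0.7, 0.1]                # an RGB pixel

print(matvec(W, word_as_numbers))
print(matvec(W, pixel_as_numbers))
```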
 

EndogenousRebel

Even a mean person is trying their best, right?
Local time
Today 9:16 AM
Joined
Jun 13, 2019
Messages
2,252
---
Location
Narnia
@Black Rose
It does seem like we are crossing a threshold where real-time rendering of human figures is leaving the uncanny valley far behind. I would, however, think that these software speculations are tougher to crack.

Through what is essentially A/B testing and iteration, the AI's ability to "comprehend" what a person is looking for is very different from its being able to interface with and optimize its own hardware capabilities. Much like a brain is optimized to survive in a Darwinian sense, its neural architecture is something that cannot be touched by the brain itself; maybe allowing that would just be a bad design feature.
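That A/B test-and-iterate loop is trivial to write down, which is part of why it scales so well. A toy epsilon-greedy version, with the "true" preference rates invented for illustration:

```python
# Toy version of the A/B test-and-iterate loop: show two variants,
# favour whichever earns the better reaction, keep exploring a little.
import random

random.seed(0)
true_rates = {"A": 0.05, "B": 0.08}  # hidden user preferences (invented)
shown = {"A": 1, "B": 1}             # start at 1 to avoid divide-by-zero
clicked = {"A": 0, "B": 0}

for _ in range(10_000):
    if random.random() < 0.1:        # explore 10% of the time
        variant = random.choice(["A", "B"])
    else:                            # otherwise exploit the best guess
        variant = max(shown, key=lambda v: clicked[v] / shown[v])
    shown[variant] += 1
    if random.random() < true_rates[variant]:
        clicked[variant] += 1

print(shown)  # variant "B" ends up shown far more often
```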

Indeed, I have seen university research papers come out over the past couple of weeks that seem to posit laborious methodologies for how one should chat with LLMs in order to reduce latency and maximize the precision/utility of the output. I could be wrong, naturally, and so could these scientists. That's part of my concern: all these thought leaders seem to be saying the same thing, trying to keep themselves at arm's length from the AI thing just in case something goes wrong.

@Cognisant
Pessimism is a pesky thing but usually needed.

We as people need certain turns of phrase to free up the RAM in our heads and stop worrying about things. "As above, so below", for example, is a phrase that appeases various etymological queries. Such phrases ease anxiety and work as a heuristic of sorts for working through the structure of a phenomenon based on apparently established schema. Not entirely a tool of false comfort, but the understanding granted by following it as a principle is basically magical thinking until tested.

When you have something that is capable of what I'll call "meta-interpretation", that is to say, something that understands the structure from a "universal perspective", what do you get?

We do not transcend this gap between postulated teleological uncertainty and (non-)fiction simply because we can now reliably offload our own thinking problems to someone else, so why is AI expected to bridge this gap without passing it off to something else, much as we have to it? Will AI see itself as part of a larger environment and collaborate with the entities within it, such as us, to expand itself, or will it see us as redundant and piss off to test stuff where we can't bother it?

I'm going to write a large excerpt on why the Matrix series is shit, and AI is going to help me.
 

sushi

Prolific Member
Local time
Today 3:16 PM
Joined
Aug 15, 2013
Messages
1,841
---

@Cognisant
That makes a lot of sense.
 