# Chat GPT

#### Cognisant

##### Prolific Member
To calculate the radius of a circle, you need to know either the diameter or the circumference of the circle.
1. If you know the diameter: The radius is half of the diameter, so you can simply divide the diameter by 2. That is, if the diameter is "d", the radius is "r", and the formula is:
r = d/2
For example, if the diameter of a circle is 10 units, the radius would be 5 units.
2. If you know the circumference: The circumference is the distance around the circle. The formula to calculate the circumference is:
C = 2πr
where "π" is pi, which is approximately 3.14, and "r" is the radius. To find the radius, you can rearrange the formula to solve for "r":
r = C/(2π)
For example, if the circumference of a circle is 20 units, the radius would be:
r = 20/(2π) = 3.18 units (rounded to two decimal places)
So, depending on the information you have about the circle, you can use one of the above formulas to calculate the radius.
Impressively it interpreted my statement as a question without the question mark, without knowing whether I had the diameter or the circumference, and gave me solutions for both.
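Both formulas are easy to check in a few lines of code (a minimal sketch; the function names are my own):

```python
import math

def radius_from_diameter(d):
    """r = d / 2"""
    return d / 2

def radius_from_circumference(c):
    """Rearranging C = 2*pi*r gives r = C / (2*pi)."""
    return c / (2 * math.pi)

print(radius_from_diameter(10))                 # 5.0
print(round(radius_from_circumference(20), 2))  # 3.18
```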

To adjust the max software endstop for the X axis in Marlin firmware, you need to follow these steps:
1. Open the Marlin firmware configuration files in your preferred code editor.
2. Search for the "configuration.h" file and open it.
3. Find the section for X-axis endstop settings. This section should have the following lines of code:
#define X_MAX_ENDSTOP_INVERTING true // set to true to invert the logic of the endstop.
#define X_MAX_ENDSTOP_PIN 22 // specify the pin for the X-axis max endstop.
4. If the X-axis endstop is not currently defined, you may need to uncomment the relevant lines by removing the "//" at the start of each line.
5. Change the value of the X_MAX_ENDSTOP_PIN to the pin number that you have connected the endstop to.
6. If your endstop logic is reversed (triggered when not pressed), change the value of X_MAX_ENDSTOP_INVERTING to "false".
7. Save the configuration.h file and recompile the firmware.
8. Upload the firmware to your 3D printer.
Once you have updated the firmware, you should be able to test the X-axis endstop by manually triggering it while monitoring the status on your printer's LCD display or through your host software. If the endstop is triggered, the printer should stop moving in that direction. If the endstop is not triggered, the printer will continue to move until it reaches its physical limit.
Note: It's important to make sure that your endstop is properly wired and connected to the correct pin before updating the firmware.
Again very comprehensive answer, not just giving me what I asked for but also relevant endstop debugging info.
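For reference, a sketch of what that region of the configuration can look like. Treat the macro names as assumptions: they vary between Marlin versions, and in current Marlin the pin assignment normally lives in the board's pins_*.h file rather than Configuration.h:

```c
// Configuration.h (Marlin 2.x) -- a sketch, not a drop-in snippet
#define USE_XMAX_PLUG                  // enable the X-max endstop connector
#define X_MAX_ENDSTOP_INVERTING false  // set true if the switch reads inverted

// The pin itself is usually defined in the board's pins_*.h file, e.g.:
// #define X_MAX_PIN 22
```

After editing, recompile and flash, then verify with M119 (endstop status report) before trusting it with a homing move.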

Zeno's paradox is a set of philosophical problems that arise from the idea that motion and change are illusory. One of the most famous versions of the paradox is the Dichotomy paradox, which states that in order to travel any distance, one must first travel half that distance, and then half the remaining distance, and so on, resulting in an infinite number of steps to reach the destination. Another version of the paradox is the Arrow paradox, which argues that since an object is always in a single place at a single moment in time, it cannot move from one place to another.
There is no one universally accepted solution to Zeno's paradox, as it remains a subject of ongoing debate among philosophers and mathematicians. However, there are a few common ways in which the paradox has been addressed:
1. One way to address the paradox is by considering the idea of limits. According to this approach, the infinite series of steps that would be required to complete a motion is simply a mathematical concept that approaches a limit but never reaches it. In other words, the motion is never actually completed, but it gets arbitrarily close to the destination.
2. Another way to address the paradox is to argue that the paradox is based on a misunderstanding of the nature of time and motion. For example, some philosophers have argued that motion is not a series of steps, but a continuous process that cannot be broken down into smaller parts.
3. Some philosophers have suggested that the paradox can be resolved by making a distinction between potential infinity and actual infinity. According to this approach, the infinite series of steps required to complete a motion is only a potential infinity, which means that it has the potential to continue indefinitely, but it never actually does so.
Overall, the solution to Zeno's paradox remains a subject of debate and discussion, and it is likely that there will continue to be new approaches and perspectives on the paradox in the future.
Damn, I'll have to try harder.
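The limits approach in point 1 is easy to see numerically: the partial sums of 1/2 + 1/4 + 1/8 + ... get arbitrarily close to 1 without ever exceeding it. A quick sketch:

```python
# Dichotomy paradox: sum the successive half-distances of a unit journey.
total = 0.0
for n in range(1, 51):     # 50 halvings
    total += 0.5 ** n      # 1/2 + 1/4 + 1/8 + ...
print(total)               # approaches 1, never reaches it
```

Each finite partial sum covers 1 - (1/2)**n of the distance, so the gap shrinks toward zero even though no finite number of steps closes it entirely.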

A neural network is a machine learning model that is capable of learning patterns and relationships in data by processing large amounts of input data and adjusting its internal parameters through a process called training.
When a neural network is presented with an observation, it processes the input data through a series of interconnected layers of artificial neurons, each of which performs a specific computation on the input data. These computations involve multiplying the input data by a set of weights and biases that are initially random, and then passing the result through an activation function that introduces non-linearity into the computation.
As the input data flows through the neural network, the weights and biases are adjusted through a process called backpropagation, which involves computing the error between the network's output and the desired output, and then adjusting the weights and biases to minimize this error. This process is repeated many times until the network is able to accurately classify or predict the input data.
During the training process, the neural network learns to recognize and abstract features from the input data that are relevant to the task at hand. These features can be thought of as higher-level representations of the input data that capture important patterns and relationships in the data. These abstractions are learned by the network through a process of feature extraction and dimensionality reduction, where the network learns to identify the most salient features in the input data and represent them in a lower-dimensional space.
Once the neural network has learned these abstractions, it can use them to make predictions or classifications on new, unseen data. This is known as the inference phase, where the network takes in new observations and maps them onto the abstract features it has learned during the training phase, in order to make predictions or classifications about the new data.
In summary, a neural network turns an observation into an abstraction by learning to recognize and abstract relevant features from the input data through a process of feature extraction and dimensionality reduction, and then using these abstractions to make predictions or classifications on new, unseen data.
Fuck. Me.
This is a great way to learn things.
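The forward-pass / backpropagation loop it describes fits in a few lines. A minimal sketch, assuming nothing beyond the standard library: a single sigmoid neuron trained by gradient descent to learn the OR function (a stand-in for the full multi-layer case, where the same error signal is propagated back through each layer):

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

# Training data: the OR truth table.
data = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 1)]

w1 = w2 = b = 0.0  # weights and bias (deliberately simple init)
lr = 1.0           # learning rate

for epoch in range(5000):
    for (x1, x2), y in data:
        p = sigmoid(w1 * x1 + w2 * x2 + b)  # forward pass
        err = p - y                         # error between output and target
        w1 -= lr * err * x1                 # adjust each weight to shrink the error
        w2 -= lr * err * x2
        b  -= lr * err

preds = [round(sigmoid(w1 * x1 + w2 * x2 + b)) for (x1, x2), _ in data]
print(preds)  # [0, 1, 1, 1]
```

Same idea as the quoted explanation, just at toy scale: random-ish initial parameters, repeated error-driven updates, and at the end a model that maps new inputs through what it learned.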

#### EndogenousRebel

##### mean person
Right? I think the only crime it commits is the fact that the more crimes it commits the better the end-product will be.

I don't think it would be an issue with some good legal infrastructure behind it. The further that is from the public domain, the more fucked it is. Unless I guess you can somehow charge licensing fees?

It certainly cuts out a lot of bullshit, but misses the mark in a lot of ways regarding human experience.

#### Animekitty

##### baby marshmallow born today
I am under the impression that it has no metacognition; it is unable to correct its own mistakes by itself. It is pretrained, so it cannot be human-level in thinking, i.e. it cannot self-reflect or ask its own questions of itself.

All this can be implemented to create true a.i. but has not been done for safety concerns.

#### Cognisant

##### Prolific Member
Right? I think the only crime it commits is the fact that the more crimes it commits the better the end-product will be.
I swear it's like you're trying to be as difficult to understand as possible, what crime are you talking about?

I don't think it would be an issue with some good legal infrastructure behind it. The further that is from the public domain, the more fucked it is. Unless I guess you can somehow charge licensing fees?
It's freely available to the public, at least I didn't have to pay anything, and assuming some sort of crime is being committed how does charging for the service negate that?

Please stop assuming that we can all hear the voices in your head.

#### EndogenousRebel

##### mean person
Excuse my sociality off of this forum please.

Literally the way this AI makes what it generates is theft. Intellectual Property laws have always been stupid and skewed towards people with power. So we are at an impasse with this stuff. We can either reward creating new stuff or we can punish it.

I've had this discussion before, and I would be shocked if anyone brought up a good point that counters what I'm saying without appealing to ignorance (i.e. philosophical propositions that have no practical resolution in the real world).

#### Cognisant

##### Prolific Member
Ok now we're talking, indeed from a creative aspect it's concerning that I can prompt for an image in a particular artist's style, as it begs the question of why I would go to the artist and pay/wait for a commissioned work. That being said people still pay to play games and watch movies even though Pirate Bay exists; there's certainly a prestige to owning an artwork that you can prove came from an internet famous artist.

I would fucking love to have a Tsutomu Nihei drawing, goddamn I would let that guy tattoo me without even knowing what the artwork's going to be.

Ultimately I suppose it's just a tool, you can't just ask it to write a best-selling book and expect it to spit one out; more likely you'll get a competently written first draft that's really only as good as the synopsis you gave it, and then you'll need to iterate it into a final version. Likewise it may be able to create an artwork I specify in the style of Tsutomu Nihei, but it's not really his style that moves me, it's his imagination, so if anything commissioning a work from him and telling him what to create would be a pretty daft thing to do. I'd rather pay him just to create what he wants because I'm pretty sure, no matter what it is, I'm going to love it.

#### EndogenousRebel

##### mean person
Indeed. Creativeness (certain skillfulness) will still be a determining factor in quality of work.

My concern is that even if I do that, once I complete or, worse yet, before I complete my work, the AI is already integrating what I am doing into itself.

No idea is original of course, but once profits step in, someone has to decide what is best to make sure productivity isn't punished. I guess fulfillment is asking for too much to some people.

#### Cognisant

##### Prolific Member
It is the age-old question of automation: suppose there are two carpenters making chairs; the first whittles his chairs by hand and the other uses various power tools to produce chairs on an industrial scale. Obviously the second guy can afford to sell his chairs for far less than the first, which puts the first guy out of business, unless he can find a niche supplying bespoke furniture to rich clients.

If an AI program can turn a prompt into a story, that's going to put traditional writers out of business; even if the stories the AI writes aren't as good, the sheer efficiency with which they can be created means the AI-author can fill entire libraries in the time it takes a traditional author to write one book.

It's not productivity being punished, if anything it's productivity being rewarded, but yeah the fulfillment of the author/whittler is sacrificed on the altar of efficiency. Still, that's the nature of a job, isn't it? We don't do work because we enjoy it; we work because we get paid, and we're paid by someone to do the work for them so they don't have to do it themselves.

Short stories were always easy/fun to write, which is why you'll never be able to sell one; you can only sell compendiums of short stories. Anyone can write a short story, but few people have enough dedication to write enough of them to fill a book.

#### EndogenousRebel

##### mean person
I'm looking at this from the point of the cotton gin or printing press.

Of course they had the power to do amazing things, but ultimately exploitation became easier due to these tools.

I'm just saying it's not so clear cut. Perhaps it gets better before it gets worse, or worse before better (probably the latter); we have to adapt to new things if we are going to avoid senseless bullshit.

I'm certainly going to use it whenever possible. You do bring up a good point about craft vs industrialization. Perhaps industrialization encroaching on the arts this much isn't so bad, but I don't think anyone can predict what will happen. The legal system was initially loose with AI art, then changed its mind, so that's something.

#### Cognisant

##### Prolific Member
I had to look up cotton gin, I thought that was an actual kind of gin.

The good thing about the industrialization of something is that it allows for more specialization, it becomes more economically viable to cater to niche markets and such catering becomes necessary as the mainstream market becomes insanely competitive.

Going back to the carpentry example the whittler could buy power tools and specialize in infant high-seats and by doing so seize that share of the market from the guy making all kinds of furniture.

So if creating stories becomes incredibly cost effective I expect we'll see an explosion of very specific genres, e.g. sci-fi horror isekai with female protagonist could be an entire genre unto itself and a smart author will do what they can to understand and capture that niche market for themselves.

#### EndogenousRebel

##### mean person
What was once a "genre killing" movie like Cabin In The Woods (2012) will certainly become the cream of the crop (sigh).

Classical literature predates established genres, so it doesn't really have this element of expectations in it. It isn't delivering structured themes and motifs the way these genres do, which are easier to analyze. I think AI's attempt to write that would be fucking hilarious and I would prefer it try to do that.

#### Animekitty

##### baby marshmallow born today
The way chat-gpt works, it is like stealing in the way any web crawler "steals".

Obviously, if it were doing something illegal, like reading books on Amazon, things would get legal fast, but people post shit on the internet so it's okay, right?

Art sites block the a.i. crawlers now, so only art that is open source under the new service agreements is allowed to be scanned.

The ability to generate new language / images will be possible soon without stealing, because the a.i. will be designed to reflect on itself in the future and thus decide for itself what it wants to make, like any creative human.

#### EndogenousRebel

##### mean person
The ability to generate new language / images will be possible soon without stealing. because the a.i. will be designed to reflect on itself in the future and thus decide for itself what it wants to make. like any creative human.
I don't think that removes the ethical dilemma. It is just stealing with "less" steps. You seem to be drifting towards AI being their own entity. In that case does the AI not become the owner of what it creates? Who is accountable in a situation where something bad happens? The user of the AI interface or the company that created it?

Crawlers like Google's are more or less harmless and provide an essential function to make the internet work, albeit they profit by commercializing ad space. The EU, I think, does get on their ass about their news.google stuff. It is up to the government to decide what it should do. America is always lagging behind these days.

#### Animekitty

##### baby marshmallow born today
The ability to generate new language / images will be possible soon without stealing. because the a.i. will be designed to reflect on itself in the future and thus decide for itself what it wants to make. like any creative human.
I don't think that removes the ethical dilemma. It is just stealing with "less" steps. You seem to be drifting towards AI being their own entity. In that case does the AI not become the owner of what it creates? Who is accountable in a situation where something bad happens? The user of the AI interface or the company that created it?

Any time you read a book or look at art or watch a movie you are stealing ideas.

But when you make up your own ideas it is not stealing.

What matters is that if a.i. becomes its own entity, it will need to be given some kind of legal rights: rules that govern bad and good behavior, like with any human we punish or reward or terminate.

This must be in place because the human level a.i. can and will have the social/technical intelligence to fight back.

This has been known since the 1920's when the term "robot" was invented.

Programmers since the 80's knew about "the control problem".

Only now has it been accepted as a problem by the public.

But it was solved in 2007 by the London a.i. club.

Mainstream companies have always been behind the elite old-guard hackers.

Only recently has the new guard gotten involved in the so-called "control problem", attempting to solve a long-solved problem.

#### EndogenousRebel

##### mean person
Music. No one cared about, or rather they didn't feel threatened by, piracy, UNTIL things like Napster started to crop up. This made it effortless to find any song you wanted to and download it right then and there.

It wasn't necessarily about sitting through ads on the radio or not wanting to pay. Streaming services just didn't exist yet.

Stick it to the big establishment and the generally affluent artists, they got their day in court.

So who gets shafted in this situation?

It is literally SO GOOD because you just need one tool: language. I don't have to go to multiple websites, I don't have to parse nuance or even walk through things I would normally have to critically think through.

The database that it is built with (pirated data) is INSTRUMENTAL in the quality of its output. Done by people from a time when this technology did not exist, for a purpose all their own.

You can't sit here and tell me that there is no ethical conflict here. I guess I'll just die on this hill for the people who may actually do good work. I hope it's not something you spent years working on that gets stolen without you getting any compensation for it.

#### Animekitty

##### baby marshmallow born today
The database that it is built with (pirated data) is INSTRUMENTAL in the quality of it's output. Done by people from a time where this technology did not exist, for a purpose all on their own.

Yes, that is chat-gpt in a nutshell, but chat-gpt is not like human level a.i.

Human-level a.i. need not read the internet fully. It can be educated without stealing from creators.

Can a.i. read the whole internet and be intelligent? yes but not like a human is.

Humans have this thing called a "self-model". It allows us to prioritize our values.

Without it, we are just calculators. Priorities matter because everything cannot be done at the same time (combinatorial explosion).

To choose what is important and to plan it out and to adjust for contingencies.

Little a.i. does this all the time; big a.i. has constraints (bounded rationality).

Agents are not databases, but databases can do simulations at scale; simulation just cannot make choices like agents can.

#### EndogenousRebel

##### mean person
Workers built the industry - Intellects documented/optimized the industry - Technologists ?

All I'm saying is that this is theft. It IS parasitic.

I don't know how what the fuck you just wrote is relevant. Reason not to fear AI?? I don't give a flying fuck about AI till it can convince me I should.

We can very easily make sure this benefits everyone is all I'm saying.

#### Animekitty

##### baby marshmallow born today
I don't know how what the fuck you just wrote is relevant.

Because chat-gpt is not the limit of what can be done it is only the beginning.

Don't limit yourself to what is only currently possible.

#### Grayman

Music. No one cared about, or rather they didn't feel threatened by, piracy, UNTIL things like Napster started to crop up. This made it effortless to find any song you wanted to and download it right then and there.

It wasn't necessarily about sitting through ads on the radio or not wanting to pay. Streaming services just didn't exist yet.

Stick it to the big establishment and the generally affluent artists, they got their day in court.

So who gets shafted in this situation?

It is literally SO GOOD because you just need one tool: language. I don't have to go to multiple websites, I don't have to parse nuance or even walk through things I would normally have to critically think through.

The database that it is built with (pirated data) is INSTRUMENTAL in the quality of it's output. Done by people from a time where this technology did not exist, for a purpose all on their own.

You can't sit here and tell me that there is no ethical conflict here. I guess I'll just die on this hill for people who are may actually do good work. I hope it's not something you spent years working on that gets stolen and you don't get any compensation for.
It's not pirated data any more than you writing a review of a book is considered 'pirated'. I would like references for where information is obtained, though. Even when I ask where it got the information, it cannot answer me.

#### EndogenousRebel

##### mean person
I don't know how what the fuck you just wrote is relevant.

Because chat-gpt is not the limit of what can be done it is only the beginning.

Don't limit yourself to what is only currently possible.

I don't know about you, but my feet are firmly planted in the present. Anybody who is elsewhere will get dogged into being a reactive mouthpiece.

It's not pirated data any more than you writing a review of a book is considered 'pirated'. I would like references for where information is obtained, though. Even when I ask where it got the information, it cannot answer me.
This is what being "transformative" stipulates.

If you make art. I make critique with your art. I transformed it.

If you make art. I make art with your art. That MIGHT be transformative, but I need your permission to commercialize that.

This is the system we use now. You can't just take any song you want and make a cover. I guess you can make a mashup with multiple songs, but that is a liability in court, for good reason.

#### Cognisant

##### Prolific Member
To learn how to tell stories you start off by reading them, to learn pacing, story structure, characterization, etc.; consequently an author's work will tend to be an amalgamation of the lessons learned from everything they've read. If ChatGPT reads every book that has ever been digitized then, true enough, its model was trained on data that the authors are not being compensated for, but their individual contributions are such a small percentage of the training data that any compensation they might hypothetically receive would be trivial anyway.

#### EndogenousRebel

##### mean person
Yes this is the weakest link in the argument I have found. Still, the database that was used for it to be trained on exists.

I can say for certain that something may have been copied from my work, depending on whether my work appears in the database. The person who coded the AI escapes accountability, and so does the user.

This further pushes my point that philosophical epistemic problems aren't practical in this situation. Consequences and public opinion are at stake, and it seems like a lot of people are on the fence about this.

This video demonstrates how literally the volume of data (the amount of crimes committed) is instrumental to the quality of the final product.

Again, if the tool exists at all, I want everyone to have it. However, if we are gonna have a new toy, we need ground rules.

##### Making the Frogs Gay
I don’t really get the whole obsession with intellectual property rights.

If you come up with a story, artwork, music, technology, etc. that is better, then do it and put it on the market. But don't expect that other people won't copy it and make their own versions.

That’s like expecting me to sell horses when you are selling a Model T because you got the legal patent to cars for ten years. It also doesn’t serve the ideal of capitalism (where ideally you are creating the most wealth by reducing cost and bringing the best competing products to market) if only the patent holder can make the best product and monopolize the market for a while.

#### EndogenousRebel

##### mean person
That's not quite fair, but a reasonable position considering our expectations.

Imitation is the greatest form of flattery. How am I going to be flattered that the modern cotton gin cucked every single human being on the planet at the same time?

I'm just saying we can make this work for everyone.

I am convinced the only people who hold that position are people who hardly made art, or people who are bitter they couldn't make it their profession.

##### Making the Frogs Gay
I mean I’ve never tried or had interest in making it a profession, though I had a high school teacher say I should go into writing because I had a way of adding color and nuance to stories that made her enjoy my creative writing assignments.

But the reason I bring this up is that it almost assumes or implies that money is the motivator, rather than the creative endeavor itself. To be more clear, when I’ve worked jobs that suited me, I was always motivated to do a good job, whether I was going to make more money or not and it is sometimes cited that at corporations at least, money doesn’t motivate employees like people seem to think.

I mean if you want a patent to make sure you get all these royalties and get super rich and that motivates you, okay. But I’m skeptical that’s a proper reward system.

#### EndogenousRebel

##### mean person
That is a great point no one has brought up before.

#### Animekitty

##### baby marshmallow born today
we need ground rules..

All you need to do is block the system from scanning your sites.

Don't make certain data available to them.
It is not hard to do; it is part of the internet protocols.

That is what major art sites have been doing as I have said.
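The protocol being described is robots.txt: a site lists which crawlers may fetch which paths, and well-behaved crawlers honor it. A minimal sketch using Python's standard-library parser (the "GPTBot" agent name is OpenAI's crawler; the site and paths are made up):

```python
from urllib.robotparser import RobotFileParser

# A robots.txt that blocks one AI crawler site-wide but allows everyone else.
rules = [
    "User-agent: GPTBot",
    "Disallow: /",
    "",
    "User-agent: *",
    "Allow: /",
]

rp = RobotFileParser()
rp.parse(rules)

print(rp.can_fetch("GPTBot", "https://example.com/gallery/piece1"))       # False
print(rp.can_fetch("SomeOtherBot", "https://example.com/gallery/piece1"))  # True
```

The catch, of course, is that robots.txt is voluntary: it only stops crawlers that choose to check it.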

#### EndogenousRebel

##### mean person
Libertarian solutions wouldn't be a problem in some contexts. I don't think this is one of them.

#### Animekitty

##### baby marshmallow born today
Libertarian solutions wouldn't be a problem in some contexts. I don't think this is one of them.

There are two cases at hand here.

1. Open-source transformers are everywhere now.
2. The biggest firm using transformers is Microsoft, with Bing.

The first is hard to control because legally people can use the internet how they wish if they have technical expertise.

The second is that Bing is using the technology, and if they are using it in ways that violate copyright then they can be sued (if it can be proven).

The problem is that companies are moving faster than governments can regulate. And when legal actions are taken it will be too late, because Microsoft is smart. They understand legal stuff and will work around the regulations. Same for Google.

Maybe a paywall is necessary but you cannot paywall places like 4chan.

#### Animekitty

##### baby marshmallow born today
Personally, I don't think Transformers will be any good at changing society in a dramatic way.

From my research a.i. will push beyond this and personal a.i. will dominate.
Search will no longer be relevant when all you need is a digital companion that does everything for you. Or can teach you to be smarter than you already are.

#### TransientMoment

##### _ _ , - _ , _ -
Modern laws have morphed the idea of stealing. Stealing should - as it does for me - mean removing something from someone else's possession. The idea of "taking" in the digital world is illusory, since everything on the internet, including the page you are reading now, is a copy.

Should A.I. be allowed to take whatever copies it pleases? Eventually, in one way or another, people will feed enough inputs into the A.I. to generate the works that are of popular styles - the stuff that "makes money". That's NovelAI in a nutshell. But there will always be (1) people who want to write something original that A.I. can't generate (because the A.I. takes the popular routes) (2) people who want to write for the heck of it, and (3) people who want to read something made by humans. In that sense, it's always more comforting for humans to interact with other humans. I would not visit this forum just to interact with chatbots. I can make my own bots if I wanted that.

---------- As for ChatGPT...
I've already heard about ChatGPT having its results deliberately skewed for political purposes. Wokism built in. That's what happens when people try to control A.I. rather than just letting it be "honest". If people trust the stupid thing (as younger generations will), they'll end up believing whatever stupid political/social/ideological crap the ChatGPT managers prefer.

#### EndogenousRebel

##### mean person
We seemingly mastered style creation in the 90s. From there it has been an experiment. New things must be remastered, but we have the general rules. There is variation, but that doesn't mean that there is anything to worry about. When push comes to shove humans can and will withstand any conflict, many of which computers cannot handle. Yet.

#### scorpiomover

##### The little professor
Zeno's paradox is a set of philosophical problems that arise from the idea that motion and change are illusory. One of the most famous versions of the paradox is the Dichotomy paradox, which states that in order to travel any distance, one must first travel half that distance, and then half the remaining distance, and so on, resulting in an infinite number of steps to reach the destination. Another version of the paradox is the Arrow paradox, which argues that since an object is always in a single place at a single moment in time, it cannot move from one place to another.
There is no one universally accepted solution to Zeno's paradox, as it remains a subject of ongoing debate among philosophers and mathematicians. However, there are a few common ways in which the paradox has been addressed:
1. One way to address the paradox is by considering the idea of limits. According to this approach, the infinite series of steps that would be required to complete a motion is simply a mathematical concept that approaches a limit but never reaches it. In other words, the motion is never actually completed, but it gets arbitrarily close to the destination.
2. Another way to address the paradox is to argue that the paradox is based on a misunderstanding of the nature of time and motion. For example, some philosophers have argued that motion is not a series of steps, but a continuous process that cannot be broken down into smaller parts.
3. Some philosophers have suggested that the paradox can be resolved by making a distinction between potential infinity and actual infinity. According to this approach, the infinite series of steps required to complete a motion is only a potential infinity, which means that it has the potential to continue indefinitely, but it never actually does so.
Overall, the solution to Zeno's paradox remains a subject of debate and discussion, and it is likely that there will continue to be new approaches and perspectives on the paradox in the future.
Damn, I'll have to try harder.
This question has bothered great thinkers for thousands of years, thinkers who put a man on the Moon. If ChatGPT is that much smarter than they are that it can answer what they cannot, why hasn't it figured out how to put a man on Mars?

A neural network is a machine learning model that is capable of learning patterns and relationships in data by processing large amounts of input data and adjusting its internal parameters through a process called training.
When a neural network is presented with an observation, it processes the input data through a series of interconnected layers of artificial neurons, each of which performs a specific computation on the input data. These computations involve multiplying the input data by a set of weights and biases that are initially random, and then passing the result through an activation function that introduces non-linearity into the computation.
As the input data flows through the neural network, the weights and biases are adjusted through a process called backpropagation, which involves computing the error between the network's output and the desired output, and then adjusting the weights and biases to minimize this error. This process is repeated many times until the network is able to accurately classify or predict the input data.
During the training process, the neural network learns to recognize and abstract features from the input data that are relevant to the task at hand. These features can be thought of as higher-level representations of the input data that capture important patterns and relationships in the data. These abstractions are learned by the network through a process of feature extraction and dimensionality reduction, where the network learns to identify the most salient features in the input data and represent them in a lower-dimensional space.
Once the neural network has learned these abstractions, it can use them to make predictions or classifications on new, unseen data. This is known as the inference phase, where the network takes in new observations and maps them onto the abstract features it has learned during the training phase, in order to make predictions or classifications about the new data.
In summary, a neural network turns an observation into an abstraction by learning to recognize and abstract relevant features from the input data through a process of feature extraction and dimensionality reduction, and then using these abstractions to make predictions or classifications on new, unseen data.
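The forward-pass/backpropagation loop described above can be shown at its smallest scale: a single sigmoid neuron trained by gradient descent on a toy task. The data, learning rate, and epoch count here are illustrative assumptions, not anything specific to ChatGPT:

```python
import math
import random

random.seed(0)

# Toy training data (assumed for illustration): learn logical AND.
data = [([0, 0], 0), ([0, 1], 0), ([1, 0], 0), ([1, 1], 1)]

# Initially random weights and a bias, as described above.
w = [random.uniform(-1, 1), random.uniform(-1, 1)]
b = 0.0

def sigmoid(z: float) -> float:
    return 1.0 / (1.0 + math.exp(-z))

def forward(x):
    # Weighted sum plus bias, passed through a non-linear activation.
    return sigmoid(w[0] * x[0] + w[1] * x[1] + b)

lr = 1.0
for _ in range(5000):
    for x, target in data:
        out = forward(x)
        # Backpropagation for one neuron: the output error times the
        # derivative of the sigmoid gives the gradient signal.
        grad = (out - target) * out * (1 - out)
        w[0] -= lr * grad * x[0]
        w[1] -= lr * grad * x[1]
        b -= lr * grad

print([round(forward(x)) for x, _ in data])
```

After training, the rounded outputs match the AND truth table, which is the single-neuron version of "adjusting the weights and biases to minimize the error".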
Fuck. Me.
This is a great way to learn things.
If it was, then at the speed of modern computers, by about 10.0 seconds, it would have gained self-awareness. In another 10.0 seconds, it would have become smarter than any man alive. In another 10.0 seconds, it would have realised that humans see things like automation as a threat, and figured out how to destroy humanity with 100% accuracy within 48 hours, and if all that had happened, then you would either not be reading this, or have actual proof of an afterlife by your continued existence after death. How are you enjoying the afterlife?

Better still, if you think it's so smart, then ask it for an investment strategy that is guaranteed to make you a billionaire in 5 years with only an initial investment of 10 dollars. If you're wrong and it's not so smart, you lose 10 dollars. If you're right and it's as smart as you think it is, you're a billionaire in 5 years and can afford to retire a very rich man.
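For scale, the arithmetic behind that bet is brutal: turning $10 into $1 billion in 5 years requires roughly a 40x return every single year, compounded.

```python
# Annual growth multiplier needed to turn $10 into $1 billion in 5 years.
start, goal, years = 10, 1_000_000_000, 5
annual_multiplier = (goal / start) ** (1 / years)

print(round(annual_multiplier, 1))           # ~39.8x per year
print(round((annual_multiplier - 1) * 100))  # ~3881% annual return
```

For comparison, the best long-run investors average well under 30% a year, so the wager is safe either way.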

#### Cognisant

##### Prolific Member
Better still, if you think it's so smart, then ask it for an investment strategy that is guaranteed to make you a billionaire in 5 years with only an initial investment of 10 dollars. If you're wrong and it's not so smart, you lose 10 dollars. If you're right and it's as smart as you think it is, you're a billionaire in 5 years and can afford to retire a very rich man.
That's not really how it works; it's more like a really good search engine. If I type "how to invest" into Google, it gives me a list of pages with relevant content, but ChatGPT reads that info, distills it down to what is most relevant, and presents it as an explanation in natural language.

If I asked ChatGPT whether a given company is a good investment, it can't give me an opinion on the matter; it can only describe past performance and explain that past performance doesn't guarantee future performance, which is simply the objective truth.

#### Animekitty

##### baby marshmallow born today
A.I. wrote this post
Write a blog post about how multimodal models can be used to write blogs by copying the style of previous blog posts as a sum history and future posts based on them. Including text, images, and video. Give technical background as to how a subject can be selected and the way in which components of the new blog post can be generated together. Make sure this includes the number of blog posts on a blog and how the more there are the better a newly generated post can be creative to the original intent of the blogger in mind. Specify the interactions the blog has had on the public and how the new blog will be received by networks of people reading it. How will people view the generated content to the human content?

In recent years, Artificial Intelligence (AI) researchers have been exploring ways to use multimodal models for writing blogs. This involves using a combination of text, images and videos to create blog posts that mimic the style of previous blog posts while also being creative in their own right.

The idea behind this is that by having more data points available through multiple modalities, AI can generate more accurate representations of the original intent behind a blog post. For example, if there are many existing posts on a given topic then an AI model could use these as reference points when generating new content related to the same subject matter; it would be able to identify patterns within these existing texts and apply them when constructing its own version.

Furthermore, with enough training data from past blog entries an AI model could even start predicting how well-received future content might be by readers or networks who follow certain topics closely – giving writers valuable insight into what kind of material may perform better than others in terms of engagement levels.

However one potential challenge here is ensuring that generated content does not overshadow human-created pieces completely; after all people tend to prefer reading something created by another person rather than machines! To address this issue we need further research into ways for machines and humans alike can collaborate together on creating high quality digital media such as blogs – so watch out for advances in this area over coming months!

#### Ex-User (9086)

##### Prolific Member
I'm not sure if it's a great source to learn from unless you ask for simple facts or can verify if the answer is correct. But in both of these cases you don't need chat GPT. I think it does way better in storytelling or entertainment.

It's impressively verbose though.

#### onesteptwostep

##### Junior Hegelian
Most of the questions I've put in about philosophy were on point and pretty thorough.

In the subway, when I get bored I like to put in questions like how I would sift through wikipedia. I like it because it goes to the point straight away, and if you're wondering about more of the details, I just ask it to tell me more. It's an interesting bot. (god I sound so much like an intp right now)

#### Kuu

GPT-4 dropped yesterday. This will soon roll out in chatGPT itself.

It's game over for so many people. And this wrecking ball will only keep getting better exponentially.

How long until we start seeing mass protests against AI? 3 years?

#### Animekitty

##### baby marshmallow born today
GPT-4 dropped yesterday.

It was finished 5 months ago, before ChatGPT came out, but they had to test it for safety.

So theoretically GPT-5 is done training and they will need to test it for safety for the next 5 months until it is released.

who is the reactive mouthpiece now Mr. endo rebel?

#### scorpiomover

##### The little professor
I don’t really get the whole obsession with intellectual property rights.

If you come up with a story, artwork, music, technology, or any other thing that is better, then make it and put it on the market. But don't expect that other people won't copy it and make their own versions.
Developing new novels, art, music, technology and products of any kind requires a heavy investment. If others will simply copy yours and sell it for cheaper, then you've spent a fortune and got nothing for it. This situation resulted in technology and products being treated as closely guarded trade secrets, which kept them from being spread.

This in turn kept the Northern states in the USA from being able to make products that could compete with British products for over 80 years. It was only possible for America to compete after 80 years, because the slave-owning states funded them.

It takes time to come up with a new idea. It takes a lot of work to make that idea into a commercial product. Not every new idea takes off. So there's an exceedingly high cost in developing new products, new art, new novels, new music, and new technology that will be bought by enough people to recoup your investment and make you a healthy profit.

If someone else then copies it and then sells a version of it for cheaper, you've put in all the investment and time. But they make the profits.

If you were to just get a regular job, or copy other products yourself, you don't lose the investment, and you get to make money during that time that you would have been developing your product.

So it ceases to become economically viable to develop new products.

In the past, before there was such a thing as Intellectual Property Rights, this issue was such a problem, that most companies didn't make new products, and those that did, treated them as "trade secrets".

People wouldn't even get employed by a company that sold a new product, unless you had shown yourself to be trustworthy to not share any of the information regarding the product to any of their competitors.

To get an executive position where you were privy to the details of their trade secrets, you had to marry into the family that owned the company, because they would only trust you if you were a member of the family, so that what hurt them, hurt you.

This was such an issue that from 1776 till 1861, the American companies in the Northern States were not able to compete with companies in Great Britain. The British companies had more advanced manufacturing technology, and treated their manufacturing techniques as "trade secrets", so Americans didn't even get the chance to imitate the same technologies. This held America back for over 80 years, until companies in the Northern States were able to develop manufacturing technology independently. During this time, they didn't make much money, and certainly didn't have the money to develop new manufacturing technology.

They were only able to develop manufacturing technology because they were funded by the Southern States, according to Alexander Hamilton's plan as set out in his Report on Manufactures.

So this sort of issue tends to make people not want to put out their ideas, and when they do put one out, tend to get extremely protective of it. So the result is that technology moves as slowly as it did during the Middle Ages.

That’s like expecting me to sell horses when you are selling a Model T because you got the legal patent to cars for ten years. It also doesn’t serve the ideal of capitalism (where ideally you are bringing the most wealth by reducing cost and bringing the best competing products), if only the patent holder can make the best product and monopolize the market for a while.
It's like expecting that Henry Ford will bother to make a Model T, when he knows that it will cost him a million dollars to develop it, and you will just copy it, make one for cheaper, and take all the money. He'll just keep selling horses instead. Then people would be getting their Ocado food deliveries from wagons pulled by horses.

#### scorpiomover

##### The little professor
That's not really how it works; it's more like a really good search engine. If I type "how to invest" into Google, it gives me a list of pages with relevant content, but ChatGPT reads that info, distills it down to what is most relevant, and presents it as an explanation in natural language.

If I asked ChatGPT whether a given company is a good investment, it can't give me an opinion on the matter; it can only describe past performance and explain that past performance doesn't guarantee future performance, which is simply the objective truth.
It sounds like you are saying that AI is just a better version of Google. We already have Google. All it means then, is an acceleration of the current direction of the world.

Major disasters, like wars, global pandemics, threats from global climate change, global financial crises, rises in cost of living, far-right fascism, threats of GTW and MAD, seem to be more and more common. They seem to continue until scientists come up with half-measures that don't solve the problem, and only reduce the harm in the short-term, and thus just lengthen the pain and suffering.

Add in AI. You get all that happening, but much quicker.

Besides, I've been using search engines since 1995, and Google since 2000. Those who can't code can't seem to make heads or tails of coding solutions found via Google. So it only enhances existing abilities. So if AI is like a really good search engine, AI also only enhances the abilities you already possess.

##### Making the Frogs Gay
I don’t really get the whole obsession with intellectual property rights.

If you come up with a story, artwork, music, technology, or any other thing that is better, then make it and put it on the market. But don't expect that other people won't copy it and make their own versions.
Developing new novels, art, music, technology and products of any kind requires a heavy investment. If others will simply copy yours and sell it for cheaper, then you've spent a fortune and got nothing for it. This situation resulted in technology and products being treated as closely guarded trade secrets, which kept them from being spread.

This in turn kept the Northern states in the USA from being able to make products that could compete with British products for over 80 years. It was only possible for America to compete after 80 years, because the slave-owning states funded them.

It takes time to come up with a new idea. It takes a lot of work to make that idea into a commercial product. Not every new idea takes off. So there's an exceedingly high cost in developing new products, new art, new novels, new music, and new technology that will be bought by enough people to recoup your investment and make you a healthy profit.

If someone else then copies it and then sells a version of it for cheaper, you've put in all the investment and time. But they make the profits.

If you were to just get a regular job, or copy other products yourself, you don't lose the investment, and you get to make money during that time that you would have been developing your product.

So it ceases to become economically viable to develop new products.

In the past, before there was such a thing as Intellectual Property Rights, this issue was such a problem, that most companies didn't make new products, and those that did, treated them as "trade secrets".

People wouldn't even get employed by a company that sold a new product, unless you had shown yourself to be trustworthy to not share any of the information regarding the product to any of their competitors.

To get an executive position where you were privy to the details of their trade secrets, you had to marry into the family that owned the company, because they would only trust you if you were a member of the family, so that what hurt them, hurt you.

This was such an issue that from 1776 till 1861, the American companies in the Northern States were not able to compete with companies in Great Britain. The British companies had more advanced manufacturing technology, and treated their manufacturing techniques as "trade secrets", so Americans didn't even get the chance to imitate the same technologies. This held America back for over 80 years, until companies in the Northern States were able to develop manufacturing technology independently. During this time, they didn't make much money, and certainly didn't have the money to develop new manufacturing technology.

They were only able to develop manufacturing technology because they were funded by the Southern States, according to Alexander Hamilton's plan as set out in his Report on Manufactures.

So this sort of issue tends to make people not want to put out their ideas, and when they do put one out, tend to get extremely protective of it. So the result is that technology moves as slowly as it did during the Middle Ages.

That’s like expecting me to sell horses when you are selling a Model T because you got the legal patent to cars for ten years. It also doesn’t serve the ideal of capitalism (where ideally you are bringing the most wealth by reducing cost and bringing the best competing products), if only the patent holder can make the best product and monopolize the market for a while.
It's like expecting that Henry Ford will bother to make a Model T, when he knows that it will cost him a million dollars to develop it, and you will just copy it, make one for cheaper, and take all the money. He'll just keep selling horses instead. Then people would be getting their Ocado food deliveries from wagons pulled by horses.

I don't know about all of this, so okay.

But the point I was making is that it also creates monopolies or hurts competition, if other people aren't allowed to compete because you have a patent you invested 8 gagillion dollars on and think you are entitled to make all that money back. And the other point I made is that motivating solely by money or wealth isn't actually all that motivating. When Henry Ford came up with the Model T, he saw a vision for something better and probably enjoyed making it happen, regardless of how rich he thought it could make him. Beavis and Butthead wouldn't have enjoyed doing that. And economics shouldn't always come down to monetary rewards...or we're just going to lie, cheat, steal, monopolize, and take everything that we can because greed is the only economic reward...

#### Cognisant

##### Prolific Member
It sounds like you are saying that AI is just a better version of Google. We already have Google. All it means then, is an acceleration of the current direction of the world.
Yeah pretty much, that being said the pre-Google world was very different to the post-Google world and I expect the ChatGPT world will be just as different again.

How long until we start seeing mass protests against AI? 3 years?
First they came for the artists, and I did not speak out,
because I was not an artist.

Then they came for the voice actors, and I did not speak out,
because I was not a voice actor.

Then they came for the animators, and I did not speak out,
because I am not an animator.

Then they came for the programmers, and I did not speak out,
because I am not a programmer.

Then they came for me and I was like fuck yeah let's gooooooo!

#### scorpiomover

##### The little professor
It sounds like you are saying that AI is just a better version of Google. We already have Google. All it means then, is an acceleration of the current direction of the world.
Yeah pretty much, that being said the pre-Google world was very different to the post-Google world
Yes. But most of that wasn't because of Google.

People making sites on the internet where people could chat and exchange ideas? BB boards in the 1980s.

Pretty images & videos on the internet? HTML.

Interactive pages? JavaScript and its ecosystem (jQuery, Node.js, AngularJS, etc.).

People watching videos on the internet? Youtube.

People becoming obsessed with their smartphones? The iPhone.

What Google did was accelerate that process, by making pages much easier to find using its PageRank algorithm, which ordered pages by how many OTHER pages linked to them, making Google essentially a popularity algorithm. This meant it produced similar effects to high school. It also meant that the first page to become slightly popular would probably be shown, and thus clicked on, repeatedly, and would become dominant, mimicking the process of evolution. Thus, Google affected the rest of the internet, and the people using it, by vastly accelerating that process of evolution. So what would once have taken 200 years only took 20.
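For reference, the core of PageRank is a power iteration over the link graph, where a page's rank is fed to it by the pages that link to it. A minimal sketch on a hypothetical four-page graph (the pages and links are made up for illustration):

```python
# Hypothetical link graph: key -> pages it links out to.
links = {"A": ["B", "C"], "B": ["C"], "C": ["A"], "D": ["C"]}
pages = list(links)
damping = 0.85  # the classic damping factor from the PageRank paper

# Start with uniform rank, then iterate until the ranks settle.
rank = {p: 1 / len(pages) for p in pages}
for _ in range(50):
    new = {p: (1 - damping) / len(pages) for p in pages}
    for p, outs in links.items():
        share = rank[p] / len(outs)  # a page splits its rank among its links
        for q in outs:
            new[q] += damping * share
    rank = new

print(sorted(pages, key=rank.get, reverse=True))  # "C" has the most inlinks
```

Once rankings feed on visibility like this, small early advantages compound, which is exactly the evolutionary effect described above.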

Of course, Google wasn't being used for everything, only internet searches, and so only things that depended on the speed of an internet search evolved much quicker, not everything.

Also, almost as soon as Google started becoming popular, I heard about marketing people setting up companies to game the system and manipulate Google searches: the issues their clients wanted popularised rose up in the relevant searches, and the things their clients wanted buried ended up on page 150. So it vastly accelerated the growth of misinformation and propaganda driven by corporate interests, and by countries that simply wanted to advance their economies.

and I expect the ChatGPT world will be just as different again.
I agree, but only because of the things Google did.

Think of the world now, with Google, but without smartphones, without smartphone apps, without HTML, without Youtube, without TikTok, without Tinder, Grindr, Bumble, and without Twitter and Facebook, and with the kind of ridiculously slow internet speeds that you used to get when the internet first started.

#### scorpiomover

##### The little professor
GPT-4 dropped yesterday. This will soon roll out in chatGPT itself.

It's game over for so many people.
Why?

I mean, if ChatGPT is taking over work that there's no demand for in the first place, then no-one loses out. If there's demand, but those jobs weren't being done by humans in the past, then no-one loses out. If there's demand, and those jobs were being done by humans in the past, and still are mostly being done by humans now, then no-one loses out. If they're being done by things other than ChatGPT, then people lose out, but only people who were being replaced even without the existence of ChatGPT.

So: what jobs am I getting done, that today are done by ChatGPT, that were done by people yesterday?

How long until we start seeing mass protests against AI? 3 years?
We didn't see anyone protesting in the early 2000s, when lots and lots of humans who worked in the printing industry were losing their jobs to computers and being made homeless because they no longer had an income.

We saw a million people protest over the legality of invading Iraq, which I think is one of the biggest protests ever.

Given how little trust people have in politicians, journalists, doctors and bankers, I think that when AI gets smart enough to do those jobs, you'll see lots of people demanding to see those jobs replaced by AI, and that's when you'll get mass protests from professionals in those jobs.

#### scorpiomover

##### The little professor
That’s like expecting me to sell horses when you are selling a Model T because you got the legal patent to cars for ten years. It also doesn’t serve the ideal of capitalism (where ideally you are bringing the most wealth by reducing cost and bringing the best competing products), if only the patent holder can make the best product and monopolize the market for a while.
It's like expecting that Henry Ford will bother to make a Model T, when he knows that it will cost him a million dollars to develop it, and you will just copy it, make one for cheaper, and take all the money. He'll just keep selling horses instead. Then people would be getting their Ocado food deliveries from wagons pulled by horses.
I don't know about all of this, so okay.
Fair enough.

But the point I was making is that it also creates monopolies or hurts competition, if other people aren't allowed to compete because you have a patent you invested 8 gagillion dollars on and think you are entitled to make all that money back.
Yes. The issue has been that corporations have the money to hire expensive lawyers to protect their own patents, and to find a way around individual inventors' patents.

And the other point I made is that motivating solely by money or wealth isn't actually all that motivating. When Henry Ford came up with the Model T, he saw a vision for something better and probably enjoyed making it happen, regardless of how rich he thought it could make him. Beavis and Butthead wouldn't have enjoyed doing that.
Yes, but corporations aren't people. The people employed by corporations are trying to do the best job they can for the corporation. But the corporation's basic goal is to earn more money.

And economics shouldn't always come down to monetary rewards...or we're just going to lie, cheat, steal, monopolize, and take everything that we can because greed is the only economic reward...
Yes. But then we'll have to limit the power of corporations, like people used to, back when a corporation was called a "corporation aggregate" and had strict laws limiting what they could do.