
Jobs at risk of being replaced by AI, and what to do about it

sushi

Prolific Member
Local time
Today 9:23 AM
Joined
Aug 15, 2013
Messages
1,809
-->
Accountant
Translator
Assembly worker
Finance trader
 

birdsnestfern

Earthling
Local time
Today 5:23 AM
Joined
Oct 7, 2021
Messages
1,840
-->
The software used in most accounting only does part of the job, and only as well as the programmer could design it. Most of it is not integrated with the other systems it needs to be: procurement, new offerings, customers, codes, order entry, collecting payments, records of agreements, creating and sending invoices, assigning time stamps to the first, second, and third invoice and to the date it is paid, closing out funds and throwing excess estimates back into other fund pots, email correspondence, changes of address (and reprinting all of that on the front of an invoice), tracking itemized details, partial payments, balances due, updating what prints out each time a change is made, getting a system to automatically and correctly put data on an invoice, and running programs that correct or add information after batches are run. There is too much for it to work together, and it doesn't really work well enough to take over all aspects of the job. You need time stamps on everything: how something is tracked, who approved it, rules on what is bought; it's endless. So humans are going to have to be involved in much of it anyway. No matter how good the AI is, it probably cannot write a program that does everything.
 

Cognisant

Prolific Member
Local time
Yesterday 10:23 PM
Joined
Dec 12, 2009
Messages
10,965
-->
From experience working in IT, there are a lot of things that could be automated but aren't, because having a human (other than the head of the IT team) be accountable for them is desirable. Instead, I think we'll see more smart tools that enable people in these roles to be more productive.

The stuff that gets fully automated will be low-risk information processing, like writing inane articles to farm clicks; nobody cares if those articles are dumb, offensive, or inaccurate.
 

dr froyd

__________________________________________________
Local time
Today 9:23 AM
Joined
Jan 26, 2015
Messages
1,355
-->
most people still don't understand that ChatGPT is a language model, not a universal model of everything in the world. It can usually cook up some gibberish about anything, but you cannot really trust it. For example, to be an accountant you need a model of tax law, financial bookkeeping, know-how for the specific domain you're working in, etc.

the conclusion is that the jobs currently at risk are those that can be reduced to a language model – perhaps translators, although you wouldn't be able to trust one in high-stakes situations like diplomatic meetings, because once again that would require a model of politics, history, human psychology, culture, etc.
 

birdsnestfern

Earthling
Local time
Today 5:23 AM
Joined
Oct 7, 2021
Messages
1,840
-->
Private companies are already convincing governments to use AI in various defense situations, airport operations, schools (probably to watch for guns), and much more. It sounds OK from the outside, but it seems unnecessary. Can't we go a different direction and like our humanity as it is, without the AI? People need to speak out on it.
 

Myself

Member
Local time
Today 9:23 AM
Joined
Jul 30, 2011
Messages
37
-->
From experience working in IT, there are a lot of things that could be automated but aren't, because having a human (other than the head of the IT team) be accountable for them is desirable.

Exactly. ChatGPT is not afraid of giving a wrong answer that will cost the business money. When you close the browser window, ChatGPT has erased its history, and anything you've said is gone.

You cannot punish chatGPT for being wrong. You cannot fire it. You cannot sue it. Those things are important -- especially so for the business class which pretty much only has those tools to exercise power with.
 

fractalwalrus

Active Member
Local time
Today 3:23 AM
Joined
May 24, 2024
Messages
473
-->
We'll see what happens when the standard of living continues to decline, and workers eventually feel like they need to organize, and businesses don't like this. We'll see what happens as this tech continues to improve, and people are slowly displaced from more and more industries and have to hit the streets while the rest of us blame them for "their" failures. https://layoffs.fyi/

It has been said that "new jobs will appear!" OK, but enough of them? People to repair and maintain the machines? Why would they be needed if robots can do that? Even if you had some new sectors, there would still likely be harsh competition for those jobs. What will that do to wages? What happens when average pay continues to decline and demand is reduced? I'd say that whoever owns the capital (wealth, AI, and robots) comes out just fine. As for the rest of us, good luck. We do not seem to be at this tipping point yet, but we are on the path.

 

fractalwalrus

Active Member
Local time
Today 3:23 AM
Joined
May 24, 2024
Messages
473
-->
Basically the modern economy is a game of musical chairs where, instead of collectively opposing the taker of chairs, the chair-havers blame the slowest among them in each round for their failure to secure a seat.
 

ZenRaiden

One atom of me
Local time
Today 9:23 AM
Joined
Jul 27, 2013
Messages
5,153
-->
Location
Between concrete walls
Well, our view of labor is still medieval in many ways. Work of the hands and work of the mind are not the same, since both keep evolving, but labor sure as hell is not going to be seen the same way a decade from now. AI as such is going to evolve gradually in some ways.
So gradually, the profile of what is expected of people on the job market will transform too.
Certainly a lot of training is necessary, and the ability to retrain for the job market is what provides income.

You either are a job creator, or you run the business of providing labor.
AI will evolve in layers and small steps, but we already see how that has the potential to rock the boat.

ChatGPT may be a sucky model, but it's already undermining traditional models of schooling, where acquiring and dumping info is considered intelligent.
The same way we no longer want kids to spend months mastering arithmetic, we will no longer require students to learn dull notes from textbooks.

We need thinkers, and we need people who understand information.

The majority of people don't know how to think, or understand things like trends or higher-order concepts, nor do most people yet have computer literacy. Yet most of the first world is already starting a second industrial revolution with AI that is a far cry from, let's say, intelligent AI, but we can see progress in the way of automating tasks and petty jobs.
 

Myself

Member
Local time
Today 9:23 AM
Joined
Jul 30, 2011
Messages
37
-->
Mankind has always relied on ORIGINAL THINKERS to push the boundaries of what could be done. And this is something that the fake AI we've come up with is simply incapable of. We're not at the Singularity yet, boys...
 

fractalwalrus

Active Member
Local time
Today 3:23 AM
Joined
May 24, 2024
Messages
473
-->
Yes, I think most of the critiques of those who see AI's potential to eliminate jobs are made by people focused on where it is now, not where it is headed. Sure, the AI will need prompters, but as it continues to improve, the volume of prompts required to attain a desired result will likely decrease. To prompt an AI to create a podcast, say, is right now done on a per-episode basis. Imagine if the technology improved to the point where you gave it one general prompt (i.e. create a podcast about history) and told it to create a weekly episode on that subject. Audio is what is done now, but AI videos have come a long way from their beginnings as well. I've seen people jeer at them, claiming that AI could never do what humans do when it comes to video creation, but they often ignore AI's continuing development. I would be interested in seeing someone put together a well-informed argument that lays out the specific reasons why AI development would be capped at one particular point. In other words, what about human intelligence makes it unique and unable to be replicated by a medium other than neurons?
 

fluffy

Pony Influencer
Local time
Today 3:23 AM
Joined
Sep 21, 2024
Messages
246
-->
If people lose jobs they can vote against physical labor machines doing physical labor.

You cannot stop companies from adopting software tools. But mass starvation can motivate people. The government will act.
 

LOGICZOMBIE

welcome to thought club
Local time
Today 4:23 AM
Joined
Aug 6, 2021
Messages
2,728
-->
You cannot stop companies from adopting software tools. But mass starvation can motivate people. The government will act.


46 second clip
 

Myself

Member
Local time
Today 9:23 AM
Joined
Jul 30, 2011
Messages
37
-->
And this is something that the fake AI we've come up with is simply incapable of.

what "original idea" have you fabricated ?

I invented a new word once that did not exist yet. That is something that many others have done before me and it is not even an act of remarkable creativity. Yet if my understanding is correct an AI cannot do this. It can only paste together 'tokens' it's been programmed to know have meaning but cannot come up with new ones.
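For what it's worth, the "pasting tokens together" mechanism can be sketched concretely. This is a toy greedy subword tokenizer, not how any production model actually works, and the vocabulary below is invented purely for illustration:

```python
# Toy sketch of subword tokenization: a fixed vocabulary of known pieces,
# matched greedily longest-first. Invented vocabulary, illustrative only.
VOCAB = ["frumi", "ous", "band", "er", "snatch", "fru", "mi", "b", "a", "n", "d"]

def tokenize(word):
    """Greedily segment `word` into the longest known subword pieces."""
    tokens, i = [], 0
    while i < len(word):
        for piece in sorted(VOCAB, key=len, reverse=True):
            if word.startswith(piece, i):
                tokens.append(piece)
                i += len(piece)
                break
        else:
            raise ValueError(f"no known token covers position {i} of {word!r}")
    return tokens

print(tokenize("frumious"))      # ['frumi', 'ous']
print(tokenize("bandersnatch"))  # ['band', 'er', 'snatch']
```

Even a made-up word comes out as a recombination of pieces the tokenizer already knows, which is the "pasting" being described here.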
 

LOGICZOMBIE

welcome to thought club
Local time
Today 4:23 AM
Joined
Aug 6, 2021
Messages
2,728
-->
And this is something that the fake AI we've come up with is simply incapable of.

what "original idea" have you fabricated ?

I invented a new word once that did not exist yet. That is something that many others have done before me and it is not even an act of remarkable creativity. Yet if my understanding is correct an AI cannot do this. It can only paste together 'tokens' it's been programmed to know have meaning but cannot come up with new ones.

1728673939940.png
 

Hadoblado

think again losers
Local time
Today 6:53 PM
Joined
Mar 17, 2011
Messages
6,936
-->

I found this surprising as it's the opposite of what I'd been hearing. Basically, coding behaves more like math than language when it comes to the application of AI. It's better to be more precise and intentional than it is to get generic "good enough" code. At least in its current form, AI doesn't seem to be beating good coders. What's more, over-reliance on AI will create more bugs over time, meaning someone's going to have to debug them.
 

fractalwalrus

Active Member
Local time
Today 3:23 AM
Joined
May 24, 2024
Messages
473
-->

I found this surprising as it's the opposite of what I'd been hearing. Basically, coding behaves more like math than language when it comes to the application of AI. It's better to be more precise and intentional than it is to get generic "good enough" code. At least in its current form, AI doesn't seem to be beating good coders. What's more, over-reliance on AI will create more bugs over time, meaning someone's going to have to debug them.
I've watched quite a bit of Sabine. Often she is correct about things, but not always. She cannot even properly define capitalism:
What she is describing as capitalism there is venture capitalism. I can acknowledge that coding still needs programmers, and that AI-generated code is still sloppy and in its infancy. However, the technology has been continuously improving, and I am unaware of a hard cap on its ability to do so.
 

dr froyd

__________________________________________________
Local time
Today 9:23 AM
Joined
Jan 26, 2015
Messages
1,355
-->
there will be good times for programmers who can program in the coming years, after millions of people have been discouraged from entering the field and people have realized that language models cannot program for shit beyond writing buggy copies of solutions to trivial problems

and this is not about it being "early days" in AI - it's about fundamental limits to what you can achieve with statistical learning (which is the basis of language models). I think this is obvious stuff to most machine-learning engineers - e.g. concepts like curse of dimensionality. But obviously you will not hear from these people, you'll only hear the opinions of CEOs (most of whom have never written a single line of code) who have to impress their shareholders with technological progress and ostensibly smaller wage costs.
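These statistical limits are easy to demonstrate numerically. A minimal sketch (assuming NumPy is available; the dataset, noise level, and degrees are arbitrary choices for illustration): with only 10 noisy points, a high-degree polynomial drives training error to essentially zero while predicting the true function no better.

```python
import numpy as np

rng = np.random.default_rng(0)

# 10 noisy training points from y = sin(x); a dense noise-free test grid.
x_train = np.linspace(0, 3, 10)
y_train = np.sin(x_train) + rng.normal(0, 0.1, x_train.size)
x_test = np.linspace(0, 3, 200)
y_test = np.sin(x_test)

errs = {}
for degree in (1, 3, 9):
    coeffs = np.polyfit(x_train, y_train, degree)
    train_mse = float(np.mean((np.polyval(coeffs, x_train) - y_train) ** 2))
    test_mse = float(np.mean((np.polyval(coeffs, x_test) - y_test) ** 2))
    errs[degree] = (train_mse, test_mse)
    print(f"degree {degree}: train MSE {train_mse:.5f}, test MSE {test_mse:.5f}")
```

Degree 9 with 10 points interpolates the noise exactly: when the model has about as many parameters as data points, it memorizes noise instead of learning the signal, no matter how it is trained.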
 

fractalwalrus

Active Member
Local time
Today 3:23 AM
Joined
May 24, 2024
Messages
473
-->
What she is describing as capitalism there is venture capitalism.
So then what is capitalism?
By her definition of capitalism, there is no implied distinction between the state lending money to the apple producer for a juice press and a non-state actor doing so. Sure, in the example she gives it is a non-state actor, but her definition implies no such restriction. Capitalism is not money-lending alone (that is part of banking). I'll cite Wikipedia's definition here: "Capitalism is an economic system based on the private ownership of the means of production and their operation for profit." source: https://en.wikipedia.org/wiki/Capitalism

Now what does private ownership mean, exactly? Here is wikipedia again: "Private property is a legal designation for the ownership of property by non-governmental legal entities.[1] Private property is distinguishable from public property, which is owned by a state entity, and from collective or cooperative property, which is owned by one or more non-governmental entities." source: https://en.wikipedia.org/wiki/Private_property

Ok, so if we accept those definitions, I see a problem. We have the stock market, which could be said to be a collective "non-governmental entity." Maybe the stock market is selective socialism.

My major issue with her simplified example was its lack of precision.
 

fractalwalrus

Active Member
Local time
Today 3:23 AM
Joined
May 24, 2024
Messages
473
-->
there will be good times for programmers who can program in the coming years, after millions of people have been discouraged from entering the field and people have realized that language models cannot program for shit beyond writing buggy copies of solutions to trivial problems

and this is not about it being "early days" in AI - it's about fundamental limits to what you can achieve with statistical learning (which is the basis of language models). I think this is obvious stuff to most machine-learning engineers - e.g. concepts like curse of dimensionality. But obviously you will not hear from these people, you'll only hear the opinions of CEOs (most of whom have never written a single line of code) who have to impress their shareholders with technological progress and ostensibly smaller wage costs.
Maybe. For now, the tech layoffs continue. Who is to say that they will not develop superior methods of machine learning? Where are those hard limits? Also, language models are not the only form AI currently comes in.
 

dr froyd

__________________________________________________
Local time
Today 9:23 AM
Joined
Jan 26, 2015
Messages
1,355
-->
there will be good times for programmers who can program in the coming years, after millions of people have been discouraged from entering the field and people have realized that language models cannot program for shit beyond writing buggy copies of solutions to trivial problems

and this is not about it being "early days" in AI - it's about fundamental limits to what you can achieve with statistical learning (which is the basis of language models). I think this is obvious stuff to most machine-learning engineers - e.g. concepts like curse of dimensionality. But obviously you will not hear from these people, you'll only hear the opinions of CEOs (most of whom have never written a single line of code) who have to impress their shareholders with technological progress and ostensibly smaller wage costs.
Maybe. For now, the tech layoffs continue. Who is to say that they will not develop superior methods of machine learning? Where are those hard limits? Also, language models are not the only form AI currently comes in.

the only reason we are talking about such scenarios at this point in history is statistical learning - i.e. machine learning based on data. I'm not gonna write a statistical-learning 101 class here, but things like the curse of dimensionality, the bias-variance tradeoff etc. are not things you can just invent yourself out of - these are fundamental limits. For example, if you have a model with 1,000 degrees of freedom but only 10 data points to learn from, your model will be garbage no matter what. If your model has 1 million data points but is then asked to predict on a new scenario that is not similar to previously seen data points, you will get garbage output no matter what.
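That second failure mode - predicting on scenarios unlike anything seen in training - can be sketched just as simply (NumPy assumed; the function and the far-out point are arbitrary choices for illustration):

```python
import numpy as np

# Fit a cubic to y = sin(x) using only training data from x in [0, 3].
x_train = np.linspace(0, 3, 1000)
coeffs = np.polyfit(x_train, np.sin(x_train), 3)

in_range = float(np.polyval(coeffs, 1.5))   # inside the training range: close to sin(1.5)
far_out = float(np.polyval(coeffs, 30.0))   # far outside it: nothing like sin(30)

print(f"at x=1.5: {in_range:.3f} (true {np.sin(1.5):.3f})")
print(f"at x=30:  {far_out:.1f} (true {np.sin(30.0):.3f})")
```

Inside the training range the fit is fine; far outside it the polynomial blows up, even though the true function stays in [-1, 1]. Adding more data points on [0, 3] would not help at all.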

of course there's a possibility we will invent something entirely new - but that would have nothing to do with language models, so such a scenario could be hypothesized without the existence of chatGPT or whatever. We are very far away from that, at least that's my view as someone with a background in computer science and who works with machine learning on a daily basis (including reinforcement learning that you have in the video)
 

fluffy

Pony Influencer
Local time
Today 3:23 AM
Joined
Sep 21, 2024
Messages
246
-->
the only reason we are talking about such scenarios at this point in history is statistical learning - i.e. machine-learning based on data. I'm not gonna write a statistical-learning 101 class here but things like curse of dimensionality, bias-variance tradeoff etc are not things you can just invent yourself out of - these are fundamental limits. For example, if you have a model with 1,000 degrees of freedom but only 10 data points to learn from, your model will be garbage no matter what. If your model had 1 million data points but is then asked to predict on a new scenario that is not similar to before-seen datapoints, you will get garbage output no matter what.

of course there's a possibility we will invent something entirely new - but that would have nothing to do with language models, so such a scenario could be hypothesized without the existence of chatGPT or whatever. We are very far away from that, at least that's my view as someone with a background in computer science and who works with machine learning on a daily basis (including reinforcement learning that you have in the video)

ChatGPT is not what will be the future of programming. That is correct. But to think that is all there is, is a limit on the imagination. What we have right now is fake A.I. - the real thing, of course, is too dangerous to allow to exist, but it is possible.

Math concepts don't really rely on language but on vision and body motion. Brains would need to be simulated in virtual environments.

The curse of dimensionality has been solved by humans knowing what they are doing through reflection. That is possible in machines, but models don't reflect.

Statistics can be made more effective by tutorials in VR, but then we need to program those. More mathematics for the masses, like touch phones, not NFT-scam prompt engineers.

We can design boxes for kids that resemble real A.I., but it takes time. It will be better than what the education system is today.
 

dr froyd

__________________________________________________
Local time
Today 9:23 AM
Joined
Jan 26, 2015
Messages
1,355
-->
The curse of dimensionality has been solved by humans knowing what they are doing through reflection. That is possible in machines, but models don't reflect.
yes, we solve it by a mix of a-priori reasoning, creativity, intuition, etc. Like when Einstein came up with general relativity without having seen a single example of gravity bending light.

on another note, it should be kept in mind that stuff like ChatGPT uses algorithms that were invented in the 1970s (e.g. neural nets) - now applied with bigger computers, more data, and certain refinements to the methodology. There hasn't really been any huge leap in AI theory.
 

fractalwalrus

Active Member
Local time
Today 3:23 AM
Joined
May 24, 2024
Messages
473
-->
things like curse of dimensionality, bias-variance tradeoff etc are not things you can just invent yourself out of - these are fundamental limits.
Good point. Those are exactly the hard limits I was wondering about. Humans have surpassed these limits, but that is likely because our brains do not function in the same way that computers do.

We are very far away from that, at least that's my view as someone with a background in computer science and who works with machine learning on a daily basis (including reinforcement learning that you have in the video)
Then you would be more familiar with the subject than most. Sabine is what has been termed a "science communicator," and her background is not in that field either. While this fact does not disqualify someone from being able to speak accurately on the matter, one has to trust that she and/or the infrastructure of individuals supporting her can take complex subject material, process it, and turn it into something the layperson can digest. This can be tricky: without the right balance, one can oversimplify something to the point that an essential piece of information is lost. She generally does this pretty well, and your background in the material suggests you would be better suited to say whether or not she did the arguments justice.

There hasn't really been any huge leap in AI theory.
What if this is due to a lack of original thinking in the field? What if many of the researchers are being "trained" by our models of what the field should be? One can go to school and attain a degree in Computer Science, and then further specialize, but what if that training sets many people up to narrow their focus onto mathematical and programming principles which may not actually be the correct approach to replicating human-type intelligence? I believe cross-field collaboration would be necessary (neuroscientists and psychologists meeting with programmers and engineers, for example), and this does appear to happen to a certain extent. Is it enough?
 

fractalwalrus

Active Member
Local time
Today 3:23 AM
Joined
May 24, 2024
Messages
473
-->
the only reason we are talking about such scenarios at this point in history is statistical learning - i.e. machine-learning based on data. I'm not gonna write a statistical-learning 101 class here but things like curse of dimensionality, bias-variance tradeoff etc are not things you can just invent yourself out of - these are fundamental limits. For example, if you have a model with 1,000 degrees of freedom but only 10 data points to learn from, your model will be garbage no matter what. If your model had 1 million data points but is then asked to predict on a new scenario that is not similar to before-seen datapoints, you will get garbage output no matter what.

of course there's a possibility we will invent something entirely new - but that would have nothing to do with language models, so such a scenario could be hypothesized without the existence of chatGPT or whatever. We are very far away from that, at least that's my view as someone with a background in computer science and who works with machine learning on a daily basis (including reinforcement learning that you have in the video)

ChatGPT is not what will be the future of programming. That is correct. But to think that is all there is, is a limit on the imagination. What we have right now is fake A.I. - the real thing, of course, is too dangerous to allow to exist, but it is possible.

Math concepts don't really rely on language but on vision and body motion. Brains would need to be simulated in virtual environments.

The curse of dimensionality has been solved by humans knowing what they are doing through reflection. That is possible in machines, but models don't reflect.

Statistics can be made more effective by tutorials in VR, but then we need to program those. More mathematics for the masses, like touch phones, not NFT-scam prompt engineers.

We can design boxes for kids that resemble real A.I., but it takes time. It will be better than what the education system is today.
I agree. ChatGPT may also be a vital stepping stone in the development of an AGI. Maybe the AGI is not developed using the limitations of ChatGPT, but perhaps the model has made it clear to people in the field that they need something else. This would be a crucial realization necessary to spur the next step in development.
 

dr froyd

__________________________________________________
Local time
Today 9:23 AM
Joined
Jan 26, 2015
Messages
1,355
-->
what if, in order to reason about the world, it won't do to just shuffle words around or do algebraic operations on symbols? What if, in order for you to know that a wolf is supposed to eat the sheep and not the other way around, you have to know - really know - what a wolf is? Like, you need to feel in your bones what it is. Now we are talking about qualia, consciousness... we don't even know what these things are, let alone how to program them into a computer.

 

fluffy

Pony Influencer
Local time
Today 3:23 AM
Joined
Sep 21, 2024
Messages
246
-->
what if, in order to reason about the world, it won't do to just shuffle words around or do algebraic operations on symbols? What if, in order for you to know that a wolf is supposed to eat the sheep and not the other way around, you have to know - really know - what a wolf is? Like, you need to feel in your bones what it is. Now we are talking about qualia, consciousness... we don't even know what these things are, let alone how to program them into a computer.


You don't program all knowledge about all things. The Chinese room is an argument for why rules cannot be translated into knowledge at a fundamental level, at the language level. The case of an A.I. that finds out, i.e. discovers, the rules of the world is different, because learning happens. One such need, then, is an architecture of learning, such as the basal ganglia, which program the brain's working memory to plan tasks and test them. This must happen inside a virtual world, as the body cannot be made with current technologies, whereas in simulation the flexibility is vastly expanded.
 

fluffy

Pony Influencer
Local time
Today 3:23 AM
Joined
Sep 21, 2024
Messages
246
-->
Also, the brain is self-referential.

Software can do that with a network graph that is dynamic and can change.
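A minimal sketch of that idea (plain Python dicts, node names invented for illustration): a directed graph where a node can point back at itself, and where nodes and edges can be rewired while the structure is in use.

```python
# A directed graph as a dict of adjacency lists. "plan" points back at
# itself - a self-referential edge - and the structure can change at runtime.
graph = {
    "perceive": ["plan"],
    "plan": ["act", "plan"],
    "act": ["perceive"],
}

def successors(node):
    """Nodes reachable in one step from `node`."""
    return graph.get(node, [])

# Dynamic: grow the graph while it is in use.
graph["plan"].append("reflect")   # new edge out of an existing node
graph["reflect"] = ["plan"]       # new node feeding back into planning

print(successors("plan"))  # ['act', 'plan', 'reflect']
```

The self-loop and the runtime rewiring are the two properties being claimed above: the graph refers to itself and is free to change shape.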
 

Myself

Member
Local time
Today 9:23 AM
Joined
Jul 30, 2011
Messages
37
-->
And this is something that the fake AI we've come up with is simply incapable of.

what "original idea" have you fabricated ?

I invented a new word once that did not exist yet. That is something that many others have done before me and it is not even an act of remarkable creativity. Yet if my understanding is correct an AI cannot do this. It can only paste together 'tokens' it's been programmed to know have meaning but cannot come up with new ones.

View attachment 8354

Yes. So it can do that. I hate to use the "no true Scotsman" argument, but yes: when prompted and told what the answer needs to look like, it can produce it. But you will never see an AI use a new word without being specifically told that it has to come up with one. It won't come up with things because it enjoys doing so.

Human beings move the goal posts that AI aims for.
AI can never innovate.
 

fractalwalrus

Active Member
Local time
Today 3:23 AM
Joined
May 24, 2024
Messages
473
-->

Myself

Member
Local time
Today 9:23 AM
Joined
Jul 30, 2011
Messages
37
-->
Well... as for Diogenes: being insane in a different way from everyone else doesn't necessarily mean you are right. It just means you're uniquely wrong... which, don't get me wrong, is a high achievement!

Being uniquely wrong is what sets humans apart from AI. They're not capable of being wrong in a unique way. They don't move the goalpost. They especially don't suffer for being wrong like we do -- which is most likely the true filter.
 

fractalwalrus

Active Member
Local time
Today 3:23 AM
Joined
May 24, 2024
Messages
473
-->
Well... as for Diogenes: being insane in a different way from everyone else doesn't necessarily mean you are right. It just means you're uniquely wrong... which, don't get me wrong, is a high achievement!
What makes one right or wrong? Why was Diogenes wrong?
 