
Link Collection

Pizzabeak

Banned
Woman hit by self-driving car:



Franken-algorithms: the deadly consequences of unpredictable code

Andrew Smith


The 18th of March 2018 was the day tech insiders had been dreading. That night, a new moon added almost no light to a poorly lit four-lane road in Tempe, Arizona, as a specially adapted Uber Volvo XC90 detected an object ahead. Part of the modern gold rush to develop self-driving vehicles, the SUV had been driving autonomously, with no input from its human backup driver, for 19 minutes. An array of radar and light-emitting lidar sensors allowed onboard algorithms to calculate that, given their host vehicle’s steady speed of 43mph, the object was six seconds away – assuming it remained stationary. But objects in roads seldom remain stationary, so more algorithms crawled a database of recognizable mechanical and biological entities, searching for a fit from which this one’s likely behavior could be inferred.
At first the computer drew a blank; seconds later, it decided it was dealing with another car, expecting it to drive away and require no special action. Only at the last second was a clear identification found – a woman with a bike, shopping bags hanging confusingly from handlebars, doubtless assuming the Volvo would route around her as any ordinary vehicle would. Barred from taking evasive action on its own, the computer abruptly handed control back to its human master, but the master wasn’t paying attention. Elaine Herzberg, aged 49, was struck and killed, leaving more reflective members of the tech community with two uncomfortable questions: was this algorithmic tragedy inevitable? And how used to such incidents would we, should we, be prepared to get?
“In some ways we’ve lost agency. When programs pass into code and code passes into algorithms and then algorithms start to create new algorithms, it gets farther and farther from human agency. Software is released into a code universe which no one can fully understand.”
If these words sound shocking, they should, not least because Ellen Ullman, in addition to having been a distinguished professional programmer since the 1970s, is one of the few people to write revealingly about the process of coding. There’s not much she doesn’t know about software in the wild.
“People say, ‘Well, what about Facebook – they create and use algorithms and they can change them.’ But that’s not how it works. They set the algorithms off and they learn and change and run themselves. Facebook intervene in their running periodically, but they really don’t control them. And particular programs don’t just run on their own, they call on libraries, deep operating systems and so on ...”
What is an algorithm?
Few subjects are more constantly or fervidly discussed right now than algorithms. But what is an algorithm? In fact, the usage has changed in interesting ways since the rise of the internet – and search engines in particular – in the mid-1990s. At root, an algorithm is a small, simple thing; a rule used to automate the treatment of a piece of data. If a happens, then do b; if not, then do c. This is the “if/then/else” logic of classical computing. If a user claims to be 18, allow them into the website; if not, print “Sorry, you must be 18 to enter”. At core, computer programs are bundles of such algorithms. Recipes for treating data. On the micro level, nothing could be simpler. If computers appear to be performing magic, it’s because they are fast, not intelligent.
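To make that “if/then/else” picture concrete, here is a minimal sketch of the age-gate rule described above; the function name and message text are illustrative placeholders, not any real site’s code.

```python
# Minimal sketch of the article's age-gate rule: if/then/else applied to one piece of data.
# The function name and messages are illustrative placeholders, not any real site's code.

def check_entry(claimed_age: int) -> str:
    if claimed_age >= 18:          # "if a happens, then do b"
        return "Welcome to the website."
    else:                          # "if not, then do c"
        return "Sorry, you must be 18 to enter."

print(check_entry(21))  # Welcome to the website.
print(check_entry(15))  # Sorry, you must be 18 to enter.
```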
Recent years have seen a more portentous and ambiguous meaning emerge, with the word “algorithm” taken to mean any large, complex decision-making software system; any means of taking an array of input – of data – and assessing it quickly, according to a given set of criteria (or “rules”). This has revolutionized areas of medicine, science, transport, communication, making it easy to understand the utopian view of computing that held sway for many years. Algorithms have made our lives better in myriad ways.
Only since 2016 has a more nuanced consideration of our new algorithmic reality begun to take shape. If we tend to discuss algorithms in almost biblical terms, as independent entities with lives of their own, it’s because we have been encouraged to think of them in this way. Corporations like Facebook and Google have sold and defended their algorithms on the promise of objectivity, an ability to weigh a set of conditions with mathematical detachment and absence of fuzzy emotion. No wonder such algorithmic decision-making has spread to the granting of loans/bail/benefits/college places/job interviews and almost anything requiring choice.
We no longer accept the sales pitch for this type of algorithm so meekly. In her 2016 book Weapons of Math Destruction, Cathy O’Neil, a former math prodigy who left Wall Street to teach and write and run the excellent mathbabe blog, demonstrated beyond question that, far from eradicating human biases, algorithms could magnify and entrench them. After all, software is written by overwhelmingly affluent white and Asian men – and it will inevitably reflect their assumptions (Google “racist soap dispenser” to see how this plays out in even mundane real-world situations). Bias doesn’t require malice to become harm, and unlike a human being, we can’t easily ask an algorithmic gatekeeper to explain its decision. O’Neil called for “algorithmic audits” of any systems directly affecting the public, a sensible idea that the tech industry will fight tooth and nail, because algorithms are what the companies sell; the last thing they will volunteer is transparency.
The good news is that this battle is under way. The bad news is that it’s already looking quaint in relation to what comes next. So much attention has been focused on the distant promises and threats of artificial intelligence, AI, that almost no one has noticed us moving into a new phase of the algorithmic revolution that could be just as fraught and disorienting – with barely a question asked.
The algorithms flagged by O’Neil and others are opaque but predictable: they do what they’ve been programmed to do. A skilled coder can in principle examine and challenge their underpinnings. Some of us dream of a citizen army to do this work, similar to the network of amateur astronomers who support professionals in that field. Legislation to enable this seems inevitable.
We might call these algorithms “dumb”, in the sense that they’re doing their jobs according to parameters defined by humans. The quality of result depends on the thought and skill with which they were programmed. At the other end of the spectrum is the more or less distant dream of human-like artificial general intelligence, or AGI. A properly intelligent machine would be able to question the quality of its own calculations, based on something like our own intuition (which we might think of as a broad accumulation of experience and knowledge). To put this into perspective, Google’s DeepMind division has been justly lauded for creating a program capable of mastering arcade games, starting with nothing more than an instruction to aim for the highest possible score. This technique is called “reinforcement learning” and works because a computer can play millions of games quickly in order to learn what generates points. Some call this form of ability “artificial narrow intelligence”, but here the word “intelligent” is being used much as Facebook uses “friend” – to imply something safer and better understood than it is. Why? Because the machine has no context for what it’s doing and can’t do anything else. Neither, crucially, can it transfer knowledge from one game to the next (so-called “transfer learning”), which makes it less generally intelligent than a toddler, or even a cuttlefish. We might as well call an oil derrick or an aphid “intelligent”. Computers are already vastly superior to us at certain specialized tasks, but the day they rival our general ability is probably some way off – if it ever happens. Human beings may not be best at much, but we’re second-best at an impressive range of things.
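For readers who want to see what “reinforcement learning” means at its most basic, below is a toy sketch: a tabular Q-learning agent that starts knowing nothing and learns which moves earn points purely from the score signal. It is a generic textbook example on a made-up five-state corridor, not DeepMind’s system, which trained deep neural networks on Atari games.

```python
# Toy reinforcement learning in the spirit described above: the agent starts with no
# knowledge and learns which actions earn points purely from the reward ("score") signal.
import random

N_STATES, ACTIONS = 5, [0, 1]          # a 5-state corridor; actions: 0 = left, 1 = right
GOAL = N_STATES - 1                    # the "score" is only awarded at the rightmost state
Q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}
alpha, gamma, epsilon = 0.1, 0.9, 0.1  # learning rate, discount, exploration rate

def step(state, action):
    nxt = max(0, min(GOAL, state + (1 if action == 1 else -1)))
    return nxt, (1.0 if nxt == GOAL else 0.0), nxt == GOAL

for episode in range(2000):            # play many cheap games quickly
    state = random.randrange(GOAL)     # random non-terminal starting point
    for _ in range(100):               # cap episode length
        action = random.choice(ACTIONS) if random.random() < epsilon \
                 else max(ACTIONS, key=lambda a: Q[(state, a)])
        nxt, reward, done = step(state, action)
        best_next = max(Q[(nxt, a)] for a in ACTIONS)
        Q[(state, action)] += alpha * (reward + gamma * best_next - Q[(state, action)])
        state = nxt
        if done:
            break

print({s: max(ACTIONS, key=lambda a: Q[(s, a)]) for s in range(GOAL)})
# After training, the learned policy is "move right" (action 1) in every non-terminal state,
# discovered from the score alone.
```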
Here’s the problem. Between the “dumb” fixed algorithms and true AI lies the problematic halfway house we’ve already entered with scarcely a thought and almost no debate, much less agreement as to aims, ethics, safety, best practice. If the algorithms around us are not yet intelligent, meaning able to independently say “that calculation/course of action doesn’t look right: I’ll do it again”, they are nonetheless starting to learn from their environments. And once an algorithm is learning, we no longer know to any degree of certainty what its rules and parameters are. At which point we can’t be certain of how it will interact with other algorithms, the physical world, or us. Where the “dumb” fixed algorithms – complex, opaque and inured to real time monitoring as they can be – are in principle predictable and interrogable, these ones are not. After a time in the wild, we no longer know what they are: they have the potential to become erratic. We might be tempted to call these “frankenalgos” – though Mary Shelley couldn’t have made this up.
Clashing codes
These algorithms are not new in themselves. I first encountered them almost five years ago while researching a piece for the Guardian about high frequency trading (HFT) on the stock market. What I found was extraordinary: a human-made digital ecosystem, distributed among racks of black boxes crouched like ninjas in billion-dollar data farms – which is what stock markets had become. Where once there had been a physical trading floor, all action had devolved to a central server, in which nimble, predatory algorithms fed off lumbering institutional ones, tempting them to sell lower and buy higher by fooling them as to the state of the market. Human HFT traders (although no human actively traded any more) called these large, slow participants “whales”, and they mostly belonged to mutual and pension funds – ie the public. For most HFT shops, whales were now the main profit source. In essence, these algorithms were trying to outwit each other; they were doing invisible battle at the speed of light, placing and cancelling the same order 10,000 times per second or slamming so many into the system that the whole market shook – all beyond the oversight or control of humans.
No one could be surprised that this situation was unstable. A “flash crash” had occurred in 2010, during which the market went into freefall for five traumatic minutes, then righted itself over another five – for no apparent reason. I travelled to Chicago to see a man named Eric Hunsader, whose prodigious programming skills allowed him to see market data in far more detail than regulators, and he showed me that by 2014, “mini flash crashes” were happening every week. Even he couldn’t prove exactly why, but he and his staff had begun to name some of the “algos” they saw, much as crop circle hunters named the formations found in English summer fields, dubbing them “Wild Thing”, “Zuma”, “The Click” or “Disruptor”.
Neil Johnson, a physicist specializing in complexity at George Washington University, made a study of stock market volatility. “It’s fascinating,” he told me. “I mean, people have talked about the ecology of computer systems for years in a vague sense, in terms of worm viruses and so on. But here’s a real working system that we can study. The bigger issue is that we don’t know how it’s working or what it could give rise to. And the attitude seems to be ‘out of sight, out of mind’.”
Significantly, Johnson’s paper on the subject was published in the journal Nature and described the stock market in terms of “an abrupt system-wide transition from a mixed human-machine phase to a new all-machine phase characterized by frequent black swan [ie highly unusual] events with ultrafast durations”. The scenario was complicated, according to the science historian George Dyson, by the fact that some HFT firms were allowing the algos to learn – “just letting the black box try different things, with small amounts of money, and if it works, reinforce those rules. We know that’s been done. Then you actually have rules where nobody knows what the rules are: the algorithms create their own rules – you let them evolve the same way nature evolves organisms.” Non-finance industry observers began to postulate a catastrophic global “splash crash”, while the fastest-growing area of the market became (and remains) instruments that profit from volatility. In his 2011 novel The Fear Index, Robert Harris imagines the emergence of AGI – of the Singularity, no less – from precisely this digital ooze. To my surprise, no scientist I spoke to would categorically rule out such a possibility.
All of which could be dismissed as high finance arcana, were it not for a simple fact. Wisdom used to hold that technology was adopted first by the porn industry, then by everyone else. But the 21st century’s porn is finance, so when I thought I saw signs of HFT-like algorithms causing problems elsewhere, I called Neil Johnson again.
“You’re right on point,” he told me: a new form of algorithm is moving into the world, which has “the capability to rewrite bits of its own code”, at which point it becomes like “a genetic algorithm”. He thinks he saw evidence of them on fact-finding forays into Facebook (“I’ve had my accounts attacked four times,” he adds). If so, algorithms are jousting there, and adapting, as on the stock market. “After all, Facebook is just one big algorithm,” Johnson says.
“And I think that’s exactly the issue Facebook has. They can have simple algorithms to recognize my face in a photo on someone else’s page, take the data from my profile and link us together. That’s a very simple concrete algorithm. But the question is what is the effect of billions of such algorithms working together at the macro level? You can’t predict the learned behavior at the level of the population from microscopic rules. So Facebook would claim that they know exactly what’s going on at the micro level, and they’d probably be right. But what happens at the level of the population? That’s the issue.”
To underscore this point, Johnson and a team of colleagues from the University of Miami and Notre Dame produced a paper, Emergence of Extreme Subpopulations from Common Information and Likely Enhancement from Future Bonding Algorithms, purporting to mathematically prove that attempts to connect people on social media inevitably polarize society as a whole. He thinks Facebook and others should model (or be made to model) the effects of their algorithms in the way climate scientists model climate change or weather patterns.
O’Neil says she consciously excluded this adaptive form of algorithm from Weapons of Math Destruction. In a convoluted algorithmic environment where nothing is clear, apportioning responsibility to particular segments of code becomes extremely difficult. This makes them easier to ignore or dismiss, because they and their precise effects are harder to identify, she explains, before advising that if I want to see them in the wild, I should ask what a flash crash on Amazon might look like.
“I’ve been looking out for these algorithms, too,” she says, “and I’d been thinking: ‘Oh, big data hasn’t gotten there yet.’ But more recently a friend who’s a bookseller on Amazon has been telling me how crazy the pricing situation there has become for people like him. Every so often you will see somebody tweet ‘Hey, you can buy a luxury yarn on Amazon for $40,000.’ And whenever I hear that kind of thing, I think: ‘Ah! That must be the equivalent of a flash crash!’”
Anecdotal evidence of anomalous events on Amazon is plentiful, in the form of threads from bemused sellers, and at least one academic paper from 2016, which claims: “Examples have emerged of cases where competing pieces of algorithmic pricing software interacted in unexpected ways and produced unpredictable prices, as well as cases where algorithms were intentionally designed to implement price fixing.” The problem, again, is how to apportion responsibility in a chaotic algorithmic environment where simple cause and effect either doesn’t apply or is nearly impossible to trace. As in finance, deniability is baked into the system.
Real-life dangers
Where safety is at stake, this really matters. When a driver ran off the road and was killed in a Toyota Camry after appearing to accelerate wildly for no obvious reason, Nasa experts spent six months examining the millions of lines of code in its operating system, without finding evidence for what the driver’s family believed had occurred, but the manufacturer steadfastly denied – that the car had accelerated of its own accord. Only when a pair of embedded software experts spent 20 months digging into the code were they able to prove the family’s case, revealing a twisted mass of what programmers call “spaghetti code”, full of algorithms that jostled and fought, generating anomalous, unpredictable output. The autonomous cars currently being tested may contain 100m lines of code and, given that no programmer can anticipate all possible circumstances on a real-world road, they have to learn and receive constant updates. How do we avoid clashes in such a fluid code milieu, not least when the algorithms may also have to defend themselves from hackers?
Twenty years ago, George Dyson anticipated much of what is happening today in his classic book Darwin Among the Machines. The problem, he tells me, is that we’re building systems that are beyond our intellectual means to control. We believe that if a system is deterministic (acting according to fixed rules, this being the definition of an algorithm) it is predictable – and that what is predictable can be controlled. Both assumptions turn out to be wrong.
“It’s proceeding on its own, in little bits and pieces,” he says. “What I was obsessed with 20 years ago that has completely taken over the world today are multicellular, metazoan digital organisms, the same way we see in biology, where you have all these pieces of code running on people’s iPhones, and collectively it acts like one multicellular organism.
“There’s this old law called Ashby’s law that says a control system has to be as complex as the system it’s controlling, and we’re running into that at full speed now, with this huge push to build self-driving cars where the software has to have a complete model of everything, and almost by definition we’re not going to understand it. Because any model that we understand is gonna do the thing like run into a fire truck ’cause we forgot to put in the fire truck.”
Unlike our old electro-mechanical systems, these new algorithms are also impossible to test exhaustively. Unless and until we have super-intelligent machines to do this for us, we’re going to be walking a tightrope.
Dyson questions whether we will ever have self-driving cars roaming freely through city streets, while Toby Walsh, a professor of artificial intelligence at the University of New South Wales who wrote his first program at age 13 and ran a tyro computing business by his late teens, explains from a technical perspective why this is.
“No one knows how to write a piece of code to recognize a stop sign. We spent years trying to do that kind of thing in AI – and failed! It was rather stalled by our stupidity, because we weren’t smart enough to learn how to break the problem down. You discover when you program that you have to learn how to break the problem down into simple enough parts that each can correspond to a computer instruction [to the machine]. We just don’t know how to do that for a very complex problem like identifying a stop sign or translating a sentence from English to Russian – it’s beyond our capability. All we know is how to write a more general purpose algorithm that can learn how to do that given enough examples.”
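A toy sketch of the approach Walsh describes, shrunk to trivial scale: instead of hand-written rules, a general-purpose learner (here a simple perceptron on two fabricated numeric features) fits itself to labeled examples. A real stop-sign detector would train a deep network on vast numbers of labeled photographs; the point is only that the rules come from data, not from a programmer.

```python
# Sketch of the "learn from examples" approach Walsh describes, at toy scale.
# The two numeric "features" stand in for image data; the dataset is fabricated.
import numpy as np

rng = np.random.default_rng(0)
# Class 1 ("stop sign") clusters around (2, 2); class 0 ("not a stop sign") around (0, 0).
X = np.vstack([rng.normal(2, 0.5, (100, 2)), rng.normal(0, 0.5, (100, 2))])
y = np.array([1] * 100 + [0] * 100)

w, b = np.zeros(2), 0.0
for _ in range(20):                        # perceptron updates: nudge weights on each mistake
    for xi, yi in zip(X, y):
        pred = 1 if xi @ w + b > 0 else 0
        w += 0.1 * (yi - pred) * xi
        b += 0.1 * (yi - pred)

test = np.array([[2.1, 1.8], [0.2, -0.1]])
print([(1 if t @ w + b > 0 else 0) for t in test])   # expected: [1, 0]
```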
Hence the current emphasis on machine learning. We now know that Herzberg, the pedestrian killed by an automated Uber car in Arizona, died because the algorithms wavered in correctly categorizing her. Was this a result of poor programming, insufficient algorithmic training or a hubristic refusal to appreciate the limits of our technology? The real problem is that we may never know.
“And we will eventually give up writing algorithms altogether,” Walsh continues, “because the machines will be able to do it far better than we ever could. Software engineering is in that sense perhaps a dying profession. It’s going to be taken over by machines that will be far better at doing it than we are.”
Walsh believes this makes it more, not less, important that the public learn about programming, because the more alienated we become from it, the more it seems like magic beyond our ability to affect. When shown the definition of “algorithm” given earlier in this piece, he found it incomplete, commenting: “I would suggest the problem is that algorithm now means any large, complex decision making software system and the larger environment in which it is embedded, which makes them even more unpredictable.” A chilling thought indeed. Accordingly, he believes ethics to be the new frontier in tech, foreseeing “a golden age for philosophy” – a view with which Eugene Spafford of Purdue University, a cybersecurity expert, concurs.
“Where there are choices to be made, that’s where ethics comes in. And we tend to want to have an agency that we can interrogate or blame, which is very difficult to do with an algorithm. This is one of the criticisms of these systems so far, in that it’s not possible to go back and analyze exactly why some decisions are made, because the internal number of choices is so large that how we got to that point may not be something we can ever recreate to prove culpability beyond doubt.”
The counter-argument is that, once a program has slipped up, the entire population of programs can be rewritten or updated so it doesn’t happen again – unlike humans, whose propensity to repeat mistakes will doubtless fascinate intelligent machines of the future. Nonetheless, while automation should be safer in the long run, our existing system of tort law, which requires proof of intention or negligence, will need to be rethought. A dog is not held legally responsible for biting you; its owner might be, but only if the dog’s action is thought foreseeable. In an algorithmic environment, many unexpected outcomes may not have been foreseeable to humans – a feature with the potential to become a scoundrel’s charter, in which deliberate obfuscation becomes at once easier and more rewarding. Pharmaceutical companies have benefited from the cover of complexity for years (see the case of Thalidomide), but here the consequences could be both greater and harder to reverse.
The military stakes
Commerce, social media, finance and transport may come to look like small beer in future, however. If the military no longer drives innovation as it once did, it remains tech’s most consequential adopter. No surprise, then, that an outpouring of concern among scientists and tech workers has accompanied revelations that autonomous weapons are ghosting toward the battlefield in what amounts to an algorithmic arms race. A robotic sharpshooter currently polices the demilitarized zone between North and South Korea, and while its manufacturer, Samsung, denies that it is capable of autonomy, this claim is widely disbelieved. Russia, China and the US all claim to be at various stages of developing swarms of coordinated, weaponized drones, while the latter plans missiles able to hover over a battlefield for days, observing, before selecting their own targets. A group of Google employees resigned over, and thousands more questioned, the tech monolith’s provision of machine learning software to the Pentagon’s Project Maven “algorithmic warfare” program – concerns to which management eventually responded, agreeing not to renew the Maven contract and to publish a code of ethics for the use of its algorithms. At time of writing, competitors including Amazon and Microsoft have resisted following suit.
In common with other tech firms, Google had claimed moral virtue for its Maven software: that it would help choose targets more efficiently and thereby save lives. The question is how tech managers can presume to know what their algorithms will do or be directed to do in situ – especially given the certainty that all sides will develop adaptive algorithmic counter-systems designed to confuse enemy weapons. As in the stock market, unpredictability is likely to be seen as an asset rather than handicap, giving weapons a better chance of resisting attempts to subvert them. In this and other ways we risk in effect turning our machines inside out, wrapping our everyday corporeal world in spaghetti code.
Lucy Suchman of Lancaster University in the UK co-authored an open letter from technology researchers to Google, asking them to reflect on the rush to militarize their work. Tech firms’ motivations are easy to fathom, she says: military contracts have always been lucrative. For the Pentagon’s part, a vast network of sensors and surveillance systems has run ahead of any ability to use the screeds of data so acquired.
“They are overwhelmed by data, because they have new means to collect and store it, but they can’t process it. So it’s basically useless – unless something magical happens. And I think their recruitment of big data companies is a form of magical thinking in the sense of: ‘Here is some magic technology that will make sense of all this.’”
Suchman also offers statistics that shed chilling light on Maven. According to analysis carried out on drone attacks in Pakistan from 2003-13, fewer than 2% of people killed in this way are confirmable as “high value” targets presenting a clear threat to the United States. In the region of 20% are held to be non-combatants, leaving more than 75% unknown. Even if these figures were out by a factor of two – or three, or four – they would give any reasonable person pause.
“So here we have this very crude technology of identification and what Project Maven proposes to do is automate that. At which point it becomes even less accountable and open to questioning. It’s a really bad idea.”
Suchman’s colleague Lilly Irani, at the University of California, San Diego, reminds us that information travels around an algorithmic system at the speed of light, free of human oversight. Technical discussions are often used as a smokescreen to avoid responsibility, she suggests.
“When we talk about algorithms, sometimes what we’re talking about is bureaucracy. The choices algorithm designers and policy experts make are presented as objective, where in the past someone would have had to take responsibility for them. Tech companies say they’re only improving accuracy with Maven – ie the right people will be killed rather than the wrong ones – and in saying that, the political assumption that those people on the other side of the world are more killable, and that the US military gets to define what suspicion looks like, go unchallenged. So technology questions are being used to close off some things that are actually political questions. The choice to use algorithms to automate certain kinds of decisions is political too.”
The legal conventions of modern warfare, imperfect as they might be, assume human accountability for decisions taken. At the very least, algorithmic warfare muddies the water in ways we may grow to regret. A group of government experts is debating the issue at the UN convention on certain conventional weapons (CCW) meeting in Geneva this week.
Searching for a solution
Solutions exist or can be found for most of the problems described here, but not without incentivizing big tech to place the health of society on a par with their bottom lines. More serious in the long term is growing conjecture that current programming methods are no longer fit for purpose given the size, complexity and interdependency of the algorithmic systems we increasingly rely on. One solution, employed by the Federal Aviation Administration in relation to commercial aviation, is to log and assess the content of all programs and subsequent updates to such a level of detail that algorithmic interactions are well understood in advance – but this is impractical on a large scale. Portions of the aerospace industry employ a relatively new approach called model-based programming, in which machines do most of the coding work and are able to test as they go.
Model-based programming may not be the panacea some hope for, however. Not only does it push humans yet further from the process, but Johnson, the physicist, conducted a study for the Department of Defense that found “extreme behaviors that couldn’t be deduced from the code itself” even in large, complex systems built using this technique. Much energy is being directed at finding ways to trace unexpected algorithmic behavior back to the specific lines of code that caused it. No one knows if a solution (or solutions) will be found, but none are likely to work where aggressive algos are designed to clash and/or adapt.
As we wait for a technological answer to the problem of soaring algorithmic entanglement, there are precautions we can take. Paul Wilmott, a British expert in quantitative analysis and vocal critic of high frequency trading on the stock market, wryly suggests “learning to shoot, make jam and knit”. More practically, Spafford, the software security expert, advises making tech companies responsible for the actions of their products, whether specific lines of rogue code – or proof of negligence in relation to them – can be identified or not. He notes that the venerable Association for Computing Machinery has updated its code of ethics along the lines of medicine’s Hippocratic oath, to instruct computing professionals to do no harm and consider the wider impacts of their work. Johnson, for his part, considers our algorithmic discomfort to be at least partly conceptual; growing pains in a new realm of human experience. He laughs as he notes that when he and I last spoke about this stuff a few short years ago, my questions were niche concerns, restricted to a few people who pored over the stock market in unseemly detail.
“And now, here we are – it’s even affecting elections. I mean, what the heck is going on? I think the deep scientific thing is that software engineers are trained to write programs to do things that optimize – and with good reason, because you’re often optimizing in relation to things like the weight distribution in a plane, or a most fuel-efficient speed: in the usual, anticipated circumstances optimizing makes sense. But in unusual circumstances it doesn’t, and we need to ask: ‘What’s the worst thing that could happen in this algorithm once it starts interacting with others?’ The problem is we don’t even have a word for this concept, much less a science to study it.”
He pauses for a moment, trying to wrap his brain around the problem.
“The thing is, optimizing is all about either maximizing or minimizing something, which in computer terms are the same. So what is the opposite of an optimization, ie the least optimal case, and how do we identify and measure it? The question we need to ask, which we never do, is: ‘What’s the most extreme possible behavior in a system I thought I was optimizing?’”
Another brief silence ends with a hint of surprise in his voice.
“Basically, we need a new science,” he says.
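One toy way to phrase the question Johnson is asking in code: alongside the usual search for an optimum, probe the same system for its most extreme response. The “system” below is an arbitrary made-up function, used purely for illustration.

```python
# Toy version of Johnson's question: instead of only asking where a system is optimal,
# also ask how extreme its behavior can get. The "system" here is a made-up function.
import random

def system_response(x):
    # Mostly a smooth bowl, plus a regime above x = 4 where behavior blows up.
    return (x - 1.0) ** 2 + 10.0 * abs(x) * (x > 4)

samples = [random.uniform(-3, 5) for _ in range(100_000)]
best = min(samples, key=system_response)    # the usual question: where is it optimal?
worst = max(samples, key=system_response)   # the neglected question: how bad can it get?

print(f"optimum near x = {best:.2f}, response = {system_response(best):.2f}")
print(f"most extreme behavior near x = {worst:.2f}, response = {system_response(worst):.2f}")
```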
Andrew Smith’s Totally Wired: The Rise and Fall of Joshua Harris and the Great Dotcom Swindle will be published by Grove Atlantic next February
 

Cognisant

Prolific Member
Barred from taking evasive action on its own, the computer abruptly handed control back to its human master, but the master wasn’t paying attention.
Yet another tragedy caused by human unreliability. How many deaths will it take before we finally get these beasts off the road?
 

Pizzabeak

Banned
Barred from taking evasive action on its own, the computer abruptly handed control back to its human master, but the master wasn’t paying attention.
Yet another tragedy caused by human unreliability. How many deaths will it take before we finally get these beasts off the road?
True, it's rife with contradiction. The big picture is (or can be) simple, one way or the other, but the detail isn't so black and white, because of the granularity of the process. That's logic versus human emotion, that "dichotomy" or those axes, and how to make decisions.


Saudi Arabia to begin construction on $500bn AI city where robots roam the streets
In October 2017 Hanson Robotics’ “Sophia” became the first robot to be granted citizenship when Saudi Arabia formally made her one of theirs at a conference in the nation’s capital, Riyadh. Yesterday, Sophia joined a compatriot research team at the AI for Good Global Summit at the UN Headquarters in Geneva to discuss Saudi Vision 2030, in which the Gulf State charts a shift away from its dependence on oil revenue.

“This change will be powered by big data and artificial intelligence,” said the Kingdom’s Deputy Minister of Technology Industry and Digital Capabilities Dr. Ahmed Al Theneyan.

The jewel of the project is the smart city “NEOM”, an acronym that stands for “New Future” in Arabic. The Saudi government says it will pour US$500 billion into this mega-project, with construction expected to begin in 2020. NEOM will occupy 26,500 sq km (10,230 sq miles), 218 times larger than the city of San Francisco.

This smart city will span the Red Sea, connecting Saudi Arabia with Egypt and North Africa. City residents’ medical files, household electronics, and transportation will all be integrated with IoT systems.

Saudi Arabia is calling for global contractors, and according to media reports Amazon, IBM, and Alibaba are discussing potential partnerships with Kingdom officials. Chinese tech conglomerate Huawei is already committed to training 1,500 local engineers over the next two years.

The busy Saudi booth at the Geneva conference promoted AI not only as the engine driving NEOM, but also as a force to help the Saudi people now.

The 2015 Mina Stampede took the lives of 2,000 pilgrims at Mecca. Umm Al-Qura University professors Anas Basalamah and Saleh Basalamah introduced a research project using computer vision to manage crowd flow near the Kaaba. Deep learning algorithms can count the number of people in a scene with up to 97.2 percent accuracy. A heat map signals a warning when density exceeds 4–5 people per square meter, and the system can also monitor crowd circulation speed for safety purposes.
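As a rough sketch of the density-warning step described above (not the Umm Al-Qura system itself), suppose a people-counting model has already produced per-cell counts over a grid of one-square-metre cells; what remains is to flag the cells that cross the danger threshold.

```python
# Generic sketch of the density-warning idea described above, not the Umm Al-Qura system.
# Assume a people-counting model has already produced counts for a grid of 1 m x 1 m cells.
import numpy as np

DANGER_DENSITY = 4.0                      # people per square metre, per the figure cited above

counts = np.array([[1, 2, 3],             # fabricated counts for a 3 x 3 grid of 1 m^2 cells
                   [2, 5, 6],
                   [1, 3, 2]])

danger_cells = np.argwhere(counts >= DANGER_DENSITY)
for row, col in danger_cells:
    print(f"warning: cell ({row}, {col}) at {counts[row, col]} people/m^2")
```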

In Saudi Arabia one traffic accident occurs every minute, and there are 20 deaths daily on Saudi roads. Professor Basalamah tells Synced that, “computer vision is deployed here to enforce seat belt wearing and spot traffic violations.” His computer vision startup hazen.ai specializes in “building advanced traffic cameras with the capability to detect dangerous driving behavior through video analysis,” and has received a government contract to work on urban safety.



Crowd monitoring tech and heatmaps from hazen.ai
Oil producing countries are seeking new ways to power their economies, and many are looking to AI. This year, Crown Prince of Dubai Sheikh Hamdan launched a DFA program that matches government entities with private sector partners to digitalize the government. Dubai Police will use statistical AI systems to support decision-making processes, with the goal of cutting the crime rate by 25 percent by 2021.

The UAE named 27-year-old Omar bin Sultan Al-Olama its Minister of Artificial Intelligence — the world’s first such governmental position — and will host the Middle East’s biggest AI fair this year. “World AI Show” will run April 11–12 in Dubai before moving to Singapore, Mumbai, and Paris. The AI market in the United Arab Emirates is expected to reach $50 billion by 2025.

On NEOM’s announcement, Crown Prince of Saudi Arabia Mohammed bin Salman said the smart city “will allow for a new way of life to emerge that takes into account the ambitions and outlooks of humankind paired with best future technologies and outstanding economic prospects.”

As countries in the Middle East apply their considerable resources to smart/transformative technologies, will NEOM emerge as a new Mecca of AI?
 

Pizzabeak

Banned

The $63 billion, “winner-take-all” global art market, explained.


Why is art so expensive?

Gaby Del Valle


Christie’s, the famed auction house, recently sold an AI-generated painting for $432,500. The piece, titled “Portrait of Edmond Belamy,” was made by Obvious, a French art collective, and sold for roughly 45 times its estimated worth.
The sale was controversial, though not entirely because of the painting’s steep price tag. Paying $450,000 for a buzzy work of art — especially one that may sell well later on — isn’t unheard of in the art world. The most coveted works sell for many times that. Sotheby’s Hong Kong sold a Picasso for $7.79 million in September; a pair of paintings by the late Chinese-French painter Zao Wou-Ki sold for $65.1 million and $11.5 million, respectively, at that same sale. Leonardo da Vinci’s “Salvator Mundi” sold at Christie’s last year for $450 million, making it the most expensive work of art ever sold.
According to a joint report by UBS and Art Basel released in March, the global art market saw $63.7 billion in total sales last year. But that doesn’t mean that most artists see even a small fraction of that money, since the highest-value sales usually involve one wealthy collector putting a highly sought-after work up for auction.
The money generated from that sale, then, goes to the work’s previous owner, not to the artist who made it. (Artists profit off their own work when it’s sold on what’s known as the “primary market,” i.e., directly from a gallery or from the artist herself. When art is sold on the “secondary market,” however — meaning that it’s sold by a collector to another collector, either privately or at an auction — only the seller and, if applicable, the auction house profits.)
Aside from a handful of celebrity artists — Jeff Koons, Damien Hirst, and Yayoi Kusama, to name a few — most living artists’ works will never sell in the six- or seven-figure range. The result of all of this is that a small group of collectors pay astronomical prices for works made by an even smaller group of artists, who are in turn represented by a small number of high-profile galleries. Meanwhile, lesser-known artists and smaller galleries are increasingly being left behind.
Why is art so expensive?
The short answer is that most art isn’t. Pieces sold for six and seven figures tend to make headlines, but most living artists’ works will never sell for that much.
To understand why a few artists are rich and famous, first you need to realize that most of them aren’t and will never be. To break into the art market, an artist first has to find a gallery to represent them, which is harder than it sounds. Henri Neuendorf, an associate editor at Artnet News, told me gallerists often visit art schools’ MFA graduate shows to find young talent to represent. “These shows are the first arena, the first entry point for a lot of young artists,” Neuendorf said.
Some gallerists also look outside the art school crowd, presumably to diversify their representation, since MFAs don’t come cheap. (In 2014, tuitions at the 10 most influential MFA programs cost an average $38,000 per year, meaning a student would have to spend around $100,000 to complete their degree.) That said, the art world remains far from diverse. A 2014 study by the artists collective BFAMFAPhD found that 77.6 percent of artists who actually make a living by selling art are white, as are 80 percent of all art school graduates.
Christie’s sold its first piece of computer generated art, “Portrait of Edmond Belamy,” for $432,500. Art collective Obvious
Artists who stand out in a graduate show or another setting may go on to have their work displayed in group shows with other emerging artists; if their work sells well, they may get a solo exhibition at a gallery. If their solo exhibition does well, that’s when their career really begins to take off.
Emerging artists’ works are generally priced based on size and medium, Neuendorf said. A larger painting, for example, will usually be priced between $10,000 and $15,000. Works on canvas are priced higher than works on paper, which are priced higher than prints. If an artist is represented by a well-known gallery like David Zwirner or Hauser & Wirth, however, the dealer’s prestige is enough to raise the artist’s sale prices, even if the artist is relatively unknown. In most cases, galleries take a 50 percent cut of the artist’s sales.
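A back-of-the-envelope version of that primary-market split, using an arbitrary price from within the quoted $10,000–$15,000 range and the 50 percent commission mentioned above:

```python
# Back-of-the-envelope version of the primary-market split described above.
# The $12,000 price is an arbitrary figure inside the quoted $10,000-$15,000 range;
# the 50% gallery commission is the figure given in the article.
sale_price = 12_000
gallery_cut = 0.50

gallery_share = sale_price * gallery_cut
artist_share = sale_price - gallery_share
print(f"gallery receives ${gallery_share:,.0f}, artist receives ${artist_share:,.0f}")
# gallery receives $6,000, artist receives $6,000
```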
This process is becoming increasingly difficult thanks to the shuttering of small galleries around the world. The UBS and Art Basel report found that more galleries closed than opened in 2017. Meanwhile, large galleries are opening new locations to cater to an increasingly global market.
Olav Velthuis, a professor at the University of Amsterdam who studies sociology in the arts, attributes the shuttering of small galleries to the rise of art fairs like Frieze and Art Basel. In a column for the New York Times, Velthuis wrote that these fairs, which often charge gallerists between $50,000 and $100,000 for booth space, make it incredibly difficult for smaller gallerists to come home with a profit. But since fairs are becoming the preferred way for wealthy collectors to buy art — they can browse art from hundreds of galleries in a single location, all while hobnobbing with other collectors — galleries have no choice but to participate.
Smaller galleries tend to represent emerging artists, putting both the dealer and artist at yet another disadvantage. “The issue is that demand for art is not distributed evenly among all living artists,” Velthuis told me in an email. “Instead, many people are going after a small number of artists. That’s what’s driving up prices.”
Given the subjective nature of art in general and contemporary art in particular, it’s hard for collectors to discern whether an artist is truly good. “The art market functions as a big consensus marketing machine,” said Velthuis. “So what people do is look at quality signals. Those signals can for instance be what an important curator is saying about an artist; if she has exhibitions in museums; if influential collectors are buying his work. Because everybody is, to some extent at the least, looking at the same signals, at one point they start agreeing who are the most desirable artists.”
In other words, some artists’ works are expensive because there’s a consensus in the art world that their works should be expensive. And, Velthuis adds, art “is a market for unique objects,” which adds a sense of scarcity into the mix. There are only a few known da Vinci paintings in existence, some of which belong to museums and are therefore permanently off the market. (It’s a “big taboo” for museums to sell works from their collection, Velthuis told me.) It only makes sense that when a da Vinci is up for auction, someone with the means to pay hundreds of millions of dollars for it will do just that.
Just 0.2 percent of artists have work that sells for more than $10 million, according to the UBS and Art Basel report. But 32 percent of the $63.7 billion in total sales made that year came from works that sold for more than $10 million. An analysis conducted by Artnet last year similarly found that just 25 artists accounted for nearly half of all contemporary auction sales in the first six months of 2017. Only three of those artists were women.
“It definitely is a good example of a winner-take-all market, where revenues and profits are distributed in a highly unequal way,” Velthuis said. “[On] principle, it is not a problem in itself. However, galleries in the middle segment of the market are having a hard time surviving, and if many of them close their doors, that is bad for the ecology of the art world. We should think of ways to let profits at the top trickle down to the middle and bottom.”
Who buys art? The superrich
The 2017 sale of da Vinci’s “Salvator Mundi” reignited discussions about the role of money in the art world. Georgina Adam, an art market expert and author of Dark Side of the Boom: The Excesses of the Art Market in the 21st Century, explained how it’s possible that a single painting could cost more money than most people would ever see in their lifetimes.
“Very rich people, these days, have an astonishing amount of money,” Adam told the Financial Times. A gallerist interviewed in her book explained it this way: If a couple has a net worth of $10 billion and decides to invest 10 percent of that in art, they can buy $1 billion worth of paintings and sculptures.
There are more collectors now than ever before, and those collectors are wealthier than they have ever been. According to Adam’s book, the liberalization of certain countries’ economies — including China, India, and Eastern European countries — led to an art collection boom outside of the US and Western Europe. (The art market is also booming in the Gulf states.) As a result, the market has exploded into what writer Rachel Wetzler described as “a global industry bound up with luxury, fashion, and celebrity, attracting an expanded range of ultra-wealthy buyers who aggressively compete for works by brand-name artists.”
Art isn’t just a luxury commodity; it’s an investment. If collectors invest wisely, the works they buy can be worth much more later on. Perhaps the most famous example of this is Robert Scull, a New York City taxi tycoon who auctioned off pieces from his collection in 1973. One of the works was a painting by Robert Rauschenberg that Scull had bought for just $900 in 1958. It sold for $85,000.
The Price of Everything, a documentary about the role of money in the art world released in October, delves into the Scull auction drama and its aftermath. Art historian Barbara Rose, whose report on the auction for New York magazine was titled “Profit Without Honor,” called that auction a “pivotal moment” in the art world.
“The idea that art was being put on the auction block like a piece of meat, it was extraordinary to me,” Rose said in the film. “I remember that Rauschenberg was there and he was really incensed, because the artists got nothing out of this. … Suddenly there was the realization — because of the prices — that you could make money by buying low and selling high.”
More recently, the 2008 financial crisis was a boon for a few wealthy collectors who gobbled up works that were being sold by their suddenly cash-poor art world acquaintances. For example, billionaire business executive Mitchell Rales and his wife, Emily, added “about 50 works” to their collection in 2009, many of which they purchased at low prices, according to a 2016 Bloomberg report. The Rales family’s collection is now worth more than $1 billion.
“People who were active [buyers] at the time are very happy today,” art adviser Sandy Heller told Bloomberg. “Those opportunities would not have presented themselves without the financial crisis.”
A highly valued work of art is a luxury good, an investment, and, in some cases, a vehicle through which the ultra-wealthy can avoid paying taxes. Until very recently, collectors were able to exploit a loophole in the tax code known as the “like-kind exchange,” which allowed them to defer capital gains taxes on certain sales if the profits generated from those sales were put into a similar investment.
In the case of art sales, that meant that a collector who bought a painting for a certain amount of money — let’s say $1 million — and then sold it for $5 million a few years later didn’t have to pay capital gains taxes if they transferred that $4 million gain into the purchase of another work of art. (The Republican tax bill eliminated this benefit for art collectors, though it continues to benefit real estate developers.)
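The arithmetic of that now-closed loophole, in sketch form; the tax rate is an assumed illustrative figure, not legal or accounting guidance.

```python
# Simplified arithmetic for the now-closed "like-kind exchange" loophole described above.
# The capital gains rate is an assumed illustrative figure, not tax advice.
purchase_price = 1_000_000
resale_price = 5_000_000
capital_gains_rate = 0.20                            # assumed rate for illustration only

gain = resale_price - purchase_price                 # $4,000,000
tax_if_cashed_out = gain * capital_gains_rate        # due in the year of sale
tax_if_rolled_into_new_art = 0                       # deferred while the gain stays in art

print(f"gain: ${gain:,}, tax if cashed out: ${tax_if_cashed_out:,.0f}, "
      f"tax if rolled into another work: ${tax_if_rolled_into_new_art:,}")
```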
A gallery assistant views a painting by Turkish artist Fahrelnissa Zeid, titled Towards a Sky, which sold for £992,750 at Sotheby’s Middle Eastern Art Week in London in April 2017. Anadolu Agency/Getty Images
Collectors can also receive tax benefits by donating pieces from their collection to museums. (Here’s where buying low and donating high is really beneficial, since the charitable deduction would take the current value of the work into account, not the amount the collector originally paid for it.)
Jennifer Blei Stockman, the former president of the Guggenheim and one of the producers of The Price of Everything, told me that galleries often require collectors who purchase new work by prominent artists to eventually make that work available to the public.
“Many galleries are now insisting that they will not sell a work to a private collector unless they either buy a second work and give it to a museum, or promise that the artwork will eventually be given to a museum,” Stockman said. These agreements aren’t legally enforceable, but collectors who want to remain in good standing with galleries tend to keep their word.
Artists’ works don’t necessarily have to end up in publicly-owned museums in order to be seen by the public. Over the past decade, a growing number of ultra-wealthy art collectors have opened private museums in order to show off the works they’ve acquired. Unlike public museums, which are hindered by relatively limited acquisitions budgets — the Louvre’s 2016 budget, for example, was €7.3 million — collectors can purchase just about any work they want for their private museums, provided they have the money. And since these museums are ostensibly open to the public, they come with a slew of tax benefits.
“The rich buy art,” arts writer Julie Baumgardner declared in an Artsy editorial. “And the super-rich, well, they make museums.”
When works sell for millions of dollars, do artists benefit?
Materially speaking, artists only benefit from sales when their works are sold on the primary market, meaning a collector purchased the work from a gallery or, less frequently, from the artist himself. When a work sells at auction, the artist doesn’t benefit at all.
For decades, artists have attempted to correct this by fighting to receive royalties from works sold on the secondary market. Most writers, for example, receive royalties from book sales in perpetuity. But once an artist sells a work to a collector, the collector — and the auction house, if applicable — is the only one who benefits from selling that work at a later date.
In 2011, a coalition of artists, including Chuck Close and Laddie John Dill, filed class-action lawsuits against Sotheby’s, Christie’s, and eBay. Citing the California Resale Royalties Act — which entitled California residents who sold work anywhere in the country, as well as any visual artist selling their work in California, to 5 percent of the price of any resale of their work for more than $1,000 — the artists claimed that eBay and the auction houses had broken state law. But in July, a federal appeals court sided with the sellers, not the artists.
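For reference, the royalty rule the artists invoked was simple arithmetic: a qualifying resale above $1,000 owed the artist 5 percent of the resale price. A minimal sketch:

```python
# Sketch of the royalty rule under the California Resale Royalties Act described above:
# a qualifying resale above $1,000 owed the artist 5 percent of the resale price.
# (A federal appeals court has since sided against the artists, as noted above.)
def resale_royalty(resale_price: float) -> float:
    if resale_price > 1_000:
        return 0.05 * resale_price
    return 0.0

print(resale_royalty(85_000))   # an $85,000 resale would have owed the artist $4,250
print(resale_royalty(900))      # below the threshold: no royalty
```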
Even if artists don’t make any money from these sales, Stockman told me, they can occasionally benefit in other ways. “Artists do benefit when their pieces sell well at auction, because primary prices are then increased,” Stockman said. “However, when a piece sells at auction or in the secondary market, the artist does not [financially] benefit at all, and that, I know, is very scary and upsetting to many artists.”
Art for everyone else
Taken together, all of these factors paint a troubling picture: Access to art seems to be increasingly concentrated among the superrich. As the rich get richer, collectors are paying increasingly higher prices for works made by a handful of living artists, leaving emerging artists and the galleries that represent them behind. Then there’s the question of who even gets to be an artist. Art school is expensive, and an MFA doesn’t automatically translate to financial success in such a competitive industry.
Jeff Koons’s “Popeye” was purchased for $28 million by billionaire casino tycoon Steve Wynn in 2014. Emmanual Dunand/AFP/Getty Images
There is some pushback to this concentration of the market at the very top — or even to the idea that art is inaccessible to the average person. Emily Kaplan, the vice president of postwar and contemporary sales at Christie’s, told me that the auction house’s day sales are open to the public and often feature works that cost much less than headlines would lead you to believe.
“Christie’s can be seen as an intimidating name for a lot of people, but most of the sales that we do are much lower prices than what gets reported in the news,” said Kaplan. “We have a lot of sales that happen throughout the calendar year in multiple locations, especially postwar and contemporary art. … Works can sell for a couple hundred dollars, one, two, three thousand dollars. It’s a much lower range than people expect.”
Affordable art fairs, which usually sell art for a few thousand dollars, are another alternative for people who want to buy art but can’t spend millions on a single sculpture. Superfine, an art fair founded in 2015, describes itself as a way of bringing art to the people. Co-founders James Miille and Alex Mitow say the fair is a reaction to the inflated prices they saw on the high end of the “insular” art market.
“We saw a rift in the art market between artists and galleries with amazing work who need to sell it to survive, and people who love art and can afford it but weren’t feeling like a part of the game,” Mitow told me in an email. “Most transactions in the art market actually occur at the under $5,000 level, and that’s what we’re publicizing: the movement of real art by real living artists who build a sustainable career, not necessarily outlier superstar artists with sales records that are unattainable for the average — if equally qualified — artist.”
In addition to hosting fairs in New York City, Los Angeles, Miami, and Washington, DC, Superfine sells works through its “e-fair.” In the same vein as more traditional art fairs like Art Basel, Superfine charges artists or gallerists a flat fee for exhibition space, though Superfine’s rates are much lower.
In spite of these efforts to democratize art, though, the overall market is still privileged towards, well, the very privileged. Art patronage has always been a hobby for the very rich, and that’s not going to change any time soon — but the ability to look at beautiful things shouldn’t be limited to those who can afford to buy them.


First AI-generated painting, expected to sell for $35,000, sells for $432,500


Christie’s just sold an AI-generated painting for $432,500. It’s already controversial.

Chavie Lieber (@ChavieLieber, Chavie.Lieber@Vox.com)


From lab-grown diamonds to computer-generated perfumes to gadgets as stylists to synthetic whiskey, it’s hard to find a category of goods today that hasn’t been infiltrated by robots.
The latest industry to get the treatment is art. Last week, British auction house Christie’s sold its first piece of computer-generated art, titled “Portrait of Edmond Belamy.” The piece, which was made by a French art collective named Obvious, sold for a whopping $432,500 — about 45 times its estimated worth — signaling that while there might be those in the art world who will turn their noses up at computer-generated art, there are plenty of others who take it seriously and are willing to pay for it.
The portrait was created via an algorithm, which combed through a collection of historical portraits. Then it generated a portrait of its own, which was printed on canvas. In a blog post discussing the sale, Christie’s wrote that AI could be the future of art, noting that an AI can “model the course of art history,” since it can comb through a chronology of pieces as if “the whole story of our visual culture were a mathematical inevitability.”
But the painting’s sale brings to light the question of what art is, and what counts as “real” or authentic when algorithms come into the picture — literally.
How did a computer create a piece of art?
“Portrait of Edmond Belamy” was made by Obvious, an AI research studio in Paris that’s run by three 25-year-old researchers named Hugo Caselles-Dupré, Pierre Fautrel, and Gauthier Vernier. Obvious uses a type of AI called a generative adversarial network, or GAN.
It combs through data points — in this case, historical portraits — and then creates its own based on all that it’s learned. It’s how IBM is creating perfume using formulas provided by global fragrance company Symrise. It’s also how a data scientist created more than 15,000 AI internet cats via something called a Meow Generator.
“Portrait of Edmond Belamy” was created by an AI called a generative adversarial network. Obvious
Caselles-Dupré explained to Christie’s that Obvious “fed the system with a data set of 15,000 portraits painted between the 14th century to the 20th.”
The result is Edmond, a (fictional) man wearing a dark coat with a white collar. “Portrait of Edmond Belamy” looks like it could have been a portrait of some European nobleman you’d see in the Met or the Louvre. Christie’s notes, too, that there’s also “something weirdly contemporary” about Edmond, which Caselles-Dupré says is due to the AI having a “distortion” built into its artistic abilities, which is why his face is blurred. The piece has been signed with the mathematical formula used to create it.
Obvious has created 11 portraits total of the fictional Belamy family, who each come with their own somewhat kitschy taglines. Take, for example, Madame De Belamy, who has fair skin and wears a powder blue dress and matching hat and has the tagline “Who said that not having a soul is a default ? It makes me unboundable, adaptative, and reckless.”
All these pieces have blurred faces, like Edmond, and are vague enough in appearance that they could come off as nobility from several countries.
Richard Lloyd, the international head of Christie’s print department, believes there’s a big market for AI-built artwork — as demonstrated by the amount of money spent by Edmond’s buyer, who remains anonymous.
“It is a portrait, after all,” Lloyd, who was in charge of the sale, said. “It may not have been painted by a man in a powdered wig, but it is exactly the kind of artwork we have been selling for 250 years.”
Is this really art?
When lab-grown diamonds started hitting the market a few years ago, there was mass uproar, particularly among heavyweights in the industry like De Beers. “Real is rare,” the company insisted, and therefore synthetic diamonds, regardless of their chemical makeup or sparkle, were not to be taken seriously. Even when De Beers eventually announced it was creating lab-grown diamonds earlier this year, the company listed them with costume jewelry prices, which it apparently hoped would send a message.
The “Portrait of Edmond Belamy” hits a similar vein. Should computer-generated art be considered “real art?” Is it truly creative? Does it hold value beyond what some anonymous bidder wants to drop at Christie’s?
Ahmed Elgammal, the director of the Art and Artificial Intelligence Lab at Rutgers University who works on GANs, believes AI-created art should be looked at as an artistic craft.
“Yes, if you look just at the form, and ignore the things that art is about, then the algorithm is just generating visual forms and following aesthetic principles extracted from existing art,” he told Christie’s.
“But if you consider the whole process, then what you have is something more like conceptual art than traditional painting. There is a human in the loop, asking questions, and the machine is giving answers. That whole thing is the art, not just the picture that comes out at the end. You could say that at this point it is a collaboration between two artists — one human, one a machine. And that leads me to think about the future in which AI will become a new medium for art.”
There’s already controversy about ownership
With AI on the rise, the art market could soon be flooded with machine-generated pieces. But if the discussion of authenticity isn’t what gets people upset, the issue of ownership certainly might.
In the case of Edmond, for example, there’s the question of who should get the credit. Is the AI that created him and the entire Belamy family considered the artist, or would that be the three AI researchers at Obvious? And if the art is inspired by hundreds of thousands of pre-existing pieces, how much of the process is a typical degree of borrowing or inspiration, and how much is just swiping?
This is already a brewing issue. The AI that was used to create “Portrait of Edmond Belamy” wasn’t even written at Obvious, as first reported by the Verge. It was created by Robbie Barrat, a 19-year-old AI artist who’s shared his research openly on the web.
On Twitter, Barrat called out Obvious; he believed they “really just used my network and are selling the results.”
While screenshots show Barrat was in contact with Obvious about using his AI, he tweeted that he believed it was being used for “some open source project.” In an email to Vox, Barrat says he isn’t after a share of the $432,500 that the Edmond portrait sold for, but is still upset about the auction.
“I’m not concerned about getting any money from this: I really just want the legitimate artists working with AI to get attention,” he says. “I feel like the work Christie’s has chosen to auction off is incredibly surface level.”
In a statement to Vox, Obvious wrote that “there are many people experimenting with different ways to use GAN models,” and that “indeed, Robbie Barrat deserves credit, which we gave in our main Medium post as soon as he asked back in April. We also credited him right after the auction.” When asked if it would be sharing its profits, Obvious did not offer comment.
Barrat believes that Obvious’s work with AI in art is sending “the wrong impression.” He says the art world is interested in using “AI as an artist’s tool, and really approach AI in art as something to collaborate with — not subscribing to Obvious’s false narrative of AI as something to replace the role of the artist.”
 

Cognisant

Prolific Member
Local time
Yesterday 6:55 PM
Joined
Dec 12, 2009
Messages
10,564
-->
Art is also a great way to exchange large amounts of money for something that's apparently worthless without anyone getting suspicious.
 

onesteptwostep

Junior Hegelian
Local time
Today 2:55 PM
Joined
Dec 7, 2014
Messages
4,253
-->
Good art contains the geist of our age though, which is why that AI art cost so much. That art could go into an art history book someday. Predatory lending from banks is the thing we should be shitting on, not art.
 

Cognisant

Prolific Member
Local time
Yesterday 6:55 PM
Joined
Dec 12, 2009
Messages
10,564
-->
It's a fuzzy brown picture that vaguely resembles a person, it has no message, it evokes no emotion, anything can be art but the quality of that "art" is abysmal.
 

Pizzabeak

Banned
Local time
Yesterday 10:55 PM
Joined
Jan 24, 2012
Messages
2,667
-->
Nah, art is supposed to theoretically be healing to the human condition, which is why some ancient philosophers suggest we surround ourselves with it to relieve any suffering. It distracts us from the truth, which is that death is inevitable and we have no control over anything in our lives. So one option is to indulge in things like art and philosophy, which by default means you can't let go, and will be stuck in a loop.

Art is more so like magic because it influences people, puts them under spells, and controls their mind. The other part is (critical) thinking and being able to think for yourself - are they really separate? What do the results really suggest? If you have maths and logic, then fine art like paintings or opera - which is better? Your mileage may vary. So some art can sometimes take logical rigor and work. It's not just all symbiosis with artists borrowing concepts from science or other lore to fill in their canvas. The asymmetry comes from the notion science doesn't really borrow from art to do it. Art seems like less hard work. Scientists feel they get the short end of the stick by doing their job.



I've been googling articles about AI:

In the Age of A.I., Is Seeing Still Believing?
Advances in digital imagery could deepen the fake-news crisis—or help us get out of it.

In 2011, Hany Farid, a photo-forensics expert, received an e-mail from a bereaved father. Three years earlier, the man’s son had found himself on the side of the road with a car that wouldn’t start. When some strangers offered him a lift, he accepted. A few minutes later, for unknown reasons, they shot him. A surveillance camera had captured him as he walked toward their car, but the video was of such low quality that key details, such as faces, were impossible to make out. The other car’s license plate was visible only as an indecipherable jumble of pixels. The father could see the evidence that pointed to his son’s killers—just not clearly enough.
Farid had pioneered the forensic analysis of digital photographs in the late nineteen-nineties, and gained a reputation as a miracle worker. As an expert witness in countless civil and criminal trials, he explained why a disputed digital image or video had to be real or fake. Now, in his lab at Dartmouth, where he was a professor of computer science, he played the father’s video over and over, wondering if there was anything he could do. On television, detectives often “enhance” photographs, sharpening the pixelated face of a suspect into a detailed portrait. In real life, this is impossible. As the video had flowed through the surveillance camera’s “imaging pipeline”—the lens, the sensor, the compression algorithms—its data had been “downsampled,” and, in the end, very little information remained. Farid told the father that the degradation of the image couldn’t be reversed, and the case languished, unsolved.
A few months later, though, Farid had a thought. What if he could use the same surveillance camera to photograph many, many license plates? In that case, patterns might emerge—correspondences between the jumbled pixels and the plates from which they derived. The correspondences would be incredibly subtle: the particular blur of any degraded image would depend not just on the plate numbers but also on the light conditions, the design of the plate, and many other variables. Still, if he had access to enough images—hundreds of thousands, perhaps millions—patterns might emerge.
Such an undertaking seemed impractical, and for a while it was. But a new field, “image synthesis,” was coming into focus, in which computer graphics and A.I. were combined. Progress was accelerating. Researchers were discovering new ways to use neural networks—software systems based, loosely, on the architecture of the brain—to analyze and create images and videos. In the emerging world of “synthetic media,” the work of digital-image creation—once the domain of highly skilled programmers and Hollywood special-effects artists—could be automated by expert systems capable of producing realism on a vast scale.
In a media environment saturated with fake news, such technology has disturbing implications. Last fall, an anonymous Redditor with the username Deepfakes released a software tool kit that allows anyone to make synthetic videos in which a neural network substitutes one person’s face for another’s, while keeping their expressions consistent. Along with the kit, the user posted pornographic videos, now known as “deepfakes,” that appear to feature various Hollywood actresses. (The software is complex but comprehensible: “Let’s say for example we’re perving on some innocent girl named Jessica,” one tutorial reads. “The folders you create would be: ‘jessica; jessica_faces; porn; porn_faces; model; output.’ ”) Around the same time, “Synthesizing Obama,” a paper published by a research group at the University of Washington, showed that a neural network could create believable videos in which the former President appeared to be saying words that were really spoken by someone else. In a video voiced by Jordan Peele, Obama seems to say that “President Trump is a total and complete dipshit,” and warns that “how we move forward in the age of information” will determine “whether we become some kind of fucked-up dystopia.”
Not all synthetic media is dystopian. Recent top-grossing movies (“Black Panther,” “Jurassic World”) are saturated with synthesized images that, not long ago, would have been dramatically harder to produce; audiences were delighted by “Star Wars: The Last Jedi” and “Blade Runner 2049,” which featured synthetic versions of Carrie Fisher and Sean Young, respectively. Today’s smartphones digitally manipulate even ordinary snapshots, often using neural networks: the iPhone’s “portrait mode” simulates what a photograph would have looked like if it had been taken by a more expensive camera. Meanwhile, for researchers in computer vision, A.I., robotics, and other fields, image synthesis makes whole new avenues of investigation accessible.
Farid started by sending his graduate students out on the Dartmouth campus to photograph a few hundred license plates. Then, based on those photographs, he and his team built a “generative model” capable of synthesizing more. In the course of a few weeks, they produced tens of millions of realistic license-plate images, each one unique. Then, by feeding their synthetic license plates through a simulated surveillance camera, they rendered them indecipherable. The aim was to create a Rosetta Stone, connecting pixels to plate numbers.
Next, they began “training” a neural network to interpret those degraded images. Modern neural networks are multilayered, and each layer juggles millions of variables; tracking the flow of information through such a system is like following drops of water through a waterfall. Researchers, unsure of how their creations work, must train them by trial and error. It took Farid’s team several attempts to perfect theirs. Eventually, though, they presented it with a still from the video. “The license plate was like ten pixels of noise,” Farid said. “But there was still a signal there.” Their network was “pretty confident about the last three characters.”
This summer, Farid e-mailed those characters to the detective working the case. Investigators had narrowed their search to a subset of blue Chevy Impalas; the network pinpointed which one. Someone connected to the car turned out to have been involved in another crime. A case that had lain dormant for nearly a decade is now moving again. Farid and his team, meanwhile, published their results in a computer-vision journal. In their paper, they noted that their system was a free upgrade for millions of low-quality surveillance cameras already in use. It was a paradoxical outcome typical of the world of image synthesis, in which unreal images, if they are realistic enough, can lead to the truth.
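The broad recipe in Farid’s experiment (synthesize labeled examples, degrade them the way the real camera would, then train a network to invert the degradation) can be sketched in a few dozen lines. What follows is a minimal, assumption-laden illustration in Python and PyTorch, not Farid’s actual system: the plate rendering, the degradation model, and the tiny CNN are all stand-ins for the real components.

```python
# Sketch of the synthesize -> degrade -> train loop (illustrative only).
import random, string
import numpy as np
import torch
import torch.nn as nn
from PIL import Image, ImageDraw

CHARS = string.ascii_uppercase + string.digits

def render_plate(text, size=(96, 32)):
    """Render a plate string as a crude grayscale image (stand-in for real plates)."""
    img = Image.new("L", size, color=255)
    ImageDraw.Draw(img).text((4, 8), text, fill=0)   # default bitmap font
    return img

def degrade(img, low=(24, 8)):
    """Simulate the surveillance camera: heavy downsampling plus sensor noise."""
    small = img.resize(low, Image.BILINEAR).resize(img.size, Image.BILINEAR)
    arr = np.asarray(small, dtype=np.float32) / 255.0
    arr += np.random.normal(0, 0.05, arr.shape).astype(np.float32)
    return np.clip(arr, 0, 1)

def sample_batch(n=64):
    xs, ys = [], []
    for _ in range(n):
        text = "".join(random.choices(CHARS, k=6))
        xs.append(degrade(render_plate(text)))
        ys.append([CHARS.index(c) for c in text[-3:]])   # predict the last 3 characters
    x = torch.tensor(np.stack(xs)).unsqueeze(1)           # (n, 1, 32, 96)
    y = torch.tensor(ys)                                   # (n, 3)
    return x, y

class PlateReader(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.head = nn.Linear(32 * 8 * 24, 3 * len(CHARS))  # three character slots

    def forward(self, x):
        h = self.features(x).flatten(1)
        return self.head(h).view(-1, 3, len(CHARS))

model = PlateReader()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()
for step in range(200):                                   # toy training loop
    x, y = sample_batch()
    logits = model(x)
    loss = loss_fn(logits.reshape(-1, len(CHARS)), y.reshape(-1))
    opt.zero_grad(); loss.backward(); opt.step()
```

The hard part of the real system was modeling the specific camera and plate designs; the sketch only shows the shape of the loop that lets synthetic data stand in for images that could never be collected in bulk.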
Farid is in the process of moving from Dartmouth to the University of California, Berkeley, where his wife, the psychologist Emily Cooper, studies human vision and virtual reality. Their modernist house, perched in the hills above the Berkeley campus, is enclosed almost entirely in glass; on a clear day this fall, I could see through the living room to the Golden Gate Bridge. At fifty-two, Farid is gray-haired, energized, and fit. He invited me to join him on the deck. “People have been doing synthesis for a long time, with different tools,” he said. He rattled off various milestones in the history of image manipulation: the transposition, in a famous photograph from the eighteen-sixties, of Abraham Lincoln’s head onto the body of the slavery advocate John C. Calhoun; the mass alteration of photographs in Stalin’s Russia, designed to purge his enemies from the history books; the convenient realignment of the pyramids on the cover of National Geographic, in 1982; the composite photograph of John Kerry and Jane Fonda standing together at an anti-Vietnam demonstration, which incensed many voters after the Times credulously reprinted it, in 2004, above a story about Kerry’s antiwar activities.
“In the past, anybody could buy Photoshop. But to really use it well you had to be highly skilled,” Farid said. “Now the technology is democratizing.” It used to be safe to assume that ordinary people were incapable of complex image manipulations. Farid recalled a case—a bitter divorce—in which a wife had presented the court with a video of her husband at a café table, his hand reaching out to caress another woman’s. The husband insisted it was fake. “I noticed that there was a reflection of his hand in the surface of the table,” Farid said, “and getting the geometry exactly right would’ve been really hard.” Now convincing synthetic images and videos were becoming easier to make.
Farid speaks with a technologist’s enthusiasm and a lawyer’s wariness. “Why did Stalin airbrush those people out of those photographs?” he asked. “Why go to the trouble? It’s because there is something very, very powerful about the visual image. If you change the image, you change history. We’re incredibly visual beings. We rely on vision—and, historically, it’s been very reliable. And so photos and videos still have this incredible resonance.” He paused, tilting back into the sun and raising his hands. “How much longer will that be true?”
One of the world’s best image-synthesis labs is a seven-minute drive from Farid’s house, on the north side of the Berkeley campus. The lab is run by a forty-three-year-old computer scientist named Alexei A. Efros. Efros was born in St. Petersburg; he moved to the United States in 1989, when his father, a winner of the U.S.S.R.’s top prize for theoretical physics, got a job at the University of California, Riverside. Tall, blond, and sweetly genial, he retains a Russian accent and sense of humor. “I got here when I was fourteen, but, really, one year in the Soviet Union counts as two,” he told me. “I listened to classical music—everything!”
As a teen-ager, Efros learned to program on a Soviet PC, the Elektronika BK-0010. The system stored its programs on audiocassettes and, every three hours, overheated and reset; since Efros didn’t have a tape deck, he learned to code fast. He grew interested in artificial intelligence, and eventually gravitated toward computer vision—a field that allowed him to watch machines think.
In 1998, when Efros arrived at Berkeley for graduate school, he began exploring a problem called “texture synthesis.” “Let’s say you have a small patch of visual texture and you want to have more of it,” he said, as we sat in his windowless office. Perhaps you want a dungeon in a video game to be made of moss-covered stone. Because the human visual system is attuned to repetition, simply “tiling” the walls with a single image of stone won’t work. Efros developed a method for intelligently sampling bits of an image and probabilistically recombining them so that a texture could be indefinitely and organically extended. A few years later, a version of the technique became a tool in Adobe Photoshop called “content-aware fill”: you can delete someone from a pile of leaves, and new leaves will seamlessly fill in the gap.
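As a rough illustration of that sampling idea, here is a stripped-down sketch, not Efros’s published algorithm or Photoshop’s implementation: grow a new image pixel by pixel, each time copying from the location in the sample whose already-filled neighborhood best matches what has been synthesized so far. The grayscale input, window size, and random candidate search are simplifying assumptions, and it is far too slow for anything beyond tiny textures.

```python
# Simplified Efros-style texture synthesis sketch (toy sizes only).
import numpy as np

def synthesize(sample, out_h, out_w, k=5, candidates=64, seed=0):
    """sample: 2-D grayscale array; returns an (out_h, out_w) synthesized texture."""
    rng = np.random.default_rng(seed)
    h, w = sample.shape
    half = k // 2
    out = np.zeros((out_h, out_w), dtype=sample.dtype)
    out[:k, :k] = sample[:k, :k]                     # seed with a corner of the sample
    for y in range(out_h):
        for x in range(out_w):
            if y < k and x < k:
                continue                             # seed pixels are already filled
            best_val, best_cost = sample[half, half], np.inf
            for _ in range(candidates):              # random candidate source locations
                sy = rng.integers(half, h - half)
                sx = rng.integers(half, w - half)
                cost = 0.0
                for dy in range(-half, half + 1):
                    for dx in range(-half, half + 1):
                        ny, nx = y + dy, x + dx
                        # only compare against pixels that are already synthesized
                        if (dy, dx) >= (0, 0) or not (0 <= ny < out_h and 0 <= nx < out_w):
                            continue
                        cost += (float(out[ny, nx]) - float(sample[sy + dy, sx + dx])) ** 2
                if cost < best_cost:
                    best_val, best_cost = sample[sy, sx], cost
            out[y, x] = best_val                     # copy the winning sample pixel
    return out
```

Fed a small patch of moss-covered stone, a procedure like this keeps “growing” more stone without the repetition that simple tiling produces.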
From the front row of CS 194-26—Image Manipulation and Computational Photography—I watched as Efros, dressed in a blue shirt, washed jeans, and black boots, explained to about a hundred undergraduates how the concept of “texture” could be applied to media other than still images. Efros started his story in 1948, with the mathematician Claude Shannon, who invented information theory. Shannon had envisioned taking all the books in the English language and analyzing them in order to discover which words tended to follow which other words. He thought that probability tables based on this analysis might enable the construction of realistic English sentences.
“Let’s say that we have the words ‘we’ and ‘need,’ ” Efros said, as the words appeared on a large screen behind him. “What’s the likely next word?”
The students murmured until Efros advanced to the next slide, revealing the word “to.”
“Now let’s say that we move our contextual window,” he continued. “We just have ‘need’ and ‘to.’ What’s next?”
“Sleep!” one student said.
“Eat!” another said.


“Eat” appeared onscreen.
“If our data set were a book about the French Revolution, the next word might be ‘cake,’ ” Efros said, chuckling. “Now, what is this? You guys use it all the time.”
“Autocomplete!” a young man said.
Pacing the stage, Efros explained that the same techniques used to create synthetic stonework or text messages could also be used to create synthetic video. The key was to think of movement—the flickering of a candle flame, the strides of a man on a treadmill, the particular way a face changed as it smiled—as a texture in time. “Zzzzt,” he said, rotating his hands in the air. “Into the time dimension.”
A hush of concentration descended as he walked the students through what this meant mathematically. The frames of a video could be seen as links in a chain—and that chain could be looped and crossed over itself. “You’re going to compute transition probabilities between your frames,” he said. Using these, it would be possible to create user-controllable, natural motion.
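The bookkeeping Efros is describing is easiest to see with words, as in Shannon’s original thought experiment. A toy sketch: count which word follows which in a corpus, then sample from those transition counts. Swap words for video frames and the same table of transitions gives loopable, user-steerable motion, a texture in time.

```python
# Toy Shannon/Markov-style synthesis: sample each next item from the
# observed transition counts of the previous one.
import random
from collections import defaultdict

corpus = "we need to eat we need to sleep we need to code".split()

transitions = defaultdict(list)
for current, nxt in zip(corpus, corpus[1:]):
    transitions[current].append(nxt)         # repeated entries encode probability

def synthesize(start="we", length=8):
    word, out = start, [start]
    for _ in range(length - 1):
        followers = transitions.get(word)
        if not followers:                     # dead end: no observed successor
            break
        word = random.choice(followers)
        out.append(word)
    return " ".join(out)

print(synthesize())                           # e.g. "we need to sleep we need to eat"
```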
The students, their faces illuminated by their laptops, toggled between their notes and their code. Efros, meanwhile, screened a video on “expression-dependent textures,” created by the team behind “Synthesizing Obama.” Onscreen, a synthetic version of Tom Hanks’s face looked left and right and, at the click of a mouse, expressed various emotions: fear, anger, happiness. The researchers had used publicly available images of Hanks to create a three-dimensional model, or “mesh,” of his face onto which they projected his characteristic expressions. For this week’s homework, Efros concluded, each student would construct a similar system. Half the class groaned; the other half grinned.
Afterward, a crowd gathered around Efros with questions. In my row, a young woman turned to her neighbor and said, “Edge detection is sweet!”
Before arriving in Berkeley, I had written to Shiry Ginosar, a graduate student in Efros’s lab, to find out what it would take to create a synthetic version of me. Ginosar had replied with instructions for filming myself. “For us to be able to generate the back of your head, your profile, your arm moving up and down, etc., we need to have seen you in these positions in your video,” she wrote. For around ten minutes, before the watchful eye of an iPhone, I walked back and forth, spun in circles, practiced my lunges, and attempted the Macarena; my performance culminated in downward dog. “You look awesome ;-),” Ginosar wrote, having received my video. She said it would take about two weeks for a network to learn to synthesize me.
When I arrived, its work wasn’t quite done. Ginosar—a serene, hyper-organized woman who, before training neural networks, trained fighter pilots in simulators in the Israel Defense Forces—created an itinerary to keep me occupied while I waited. In addition to CS 194–26, it included lunch at Momo, a Tibetan curry restaurant, where Efros’s graduate students explained how it had come to pass that undergrads could create, as homework, Hollywood-like special effects.
“In 1999, when ‘The Matrix’ came out, the ideas were there, but the computation was very slow,” Deepak Pathak, a Ph.D. candidate, said. “Now computers are really fast. The G.P.U.s”—graphics processing units, designed to power games like Assassin’s Creed—“are very advanced.”
“Also, everything is open-sourced,” said Angjoo Kanazawa, who specializes in “pose detection”—figuring out, from a photo of a person, how her body is arranged in 3-D space.
“And that’s good, because we want our research to be reproducible,” Pathak said. “The result is that it’s easy for someone who’s in high school or college to run the code, because it’s in a library.”
The acceleration of home computing has converged with another trend: the mass uploading of photographs and videos to the Web. Later, when I sat down with Efros in his office, he explained that, even in the early two-thousands, computer graphics had been “data-starved”: although 3-D modellers were capable of creating photorealistic scenes, their cities, interiors, and mountainscapes felt empty and lifeless. True realism, Efros said, requires “data, data, data” about “the gunk, the dirt, the complexity of the world,” which is best gathered by accident, through the recording of ordinary life.
Today, researchers have access to systems like ImageNet, a site run by computer scientists at Stanford and Princeton which brings together fourteen million photographs of ordinary places and objects, most of them casual snapshots posted to Flickr, eBay, and other Web sites. Initially, these images were sorted into categories (carrousels, subwoofers, paper clips, parking meters, chests of drawers) by tens of thousands of workers hired through Amazon Mechanical Turk. Then, in 2012, researchers at the University of Toronto succeeded in building neural networks capable of categorizing ImageNet’s images automatically; their dramatic success helped set off today’s neural-networking boom. In recent years, YouTube has become an unofficial ImageNet for video. Efros’s lab has overcome the site’s “platform bias”—its preference for cats and pop stars—by developing a neural network that mines, from “life style” videos such as “My Spring Morning Routine” and “My Rustic, Cozy Living Room,” clips of people opening packages, peering into fridges, drying off with towels, brushing their teeth. This vast archive of the uninteresting has made a new level of synthetic realism possible.
On his computer, Efros showed me a photo taken from a bridge in Lyon. A large section of the riverbank—which might have contained cars, trees, people—had been deleted. In 2007, he helped devise a system that rifles through Flickr for similar photos, many of them taken while on vacation, and samples them. He clicked, and the blank was filled in with convincing, synthetic buildings and greenery. “Probably it found photos from a different city,” Efros said. “But, you know, we’re boring. We always build the same kinds of buildings on the same kinds of riverbanks. And then, as we walk over bridges, we all say, along with a thousand other people, ‘Hey, this will look great, let me take a picture,’ and we all put the horizon in the same place.” In 2016, Ira Kemelmacher-Shlizerman, one of the researchers behind “Synthesizing Obama,” applied the same principle to faces. Given your face as input, her system combs the Internet for people who look like you, then combines their features with your own, to show how you’d look if you had curly hair or were a different age.
One of the lessons of image synthesis is that, with enough data, everything becomes texture. Each river and vista has its double, ready to be sampled; there are only so many faces, and your doppelgängers have already uploaded yours. Products are manufactured over and over, and new buildings echo old ones. The idea of texture even extends—“Zzzzt! ”—into the social dimension. Your Facebook news feed highlights what “people like you” want to see. In addition to unearthing similarities, social media creates them. Having seen photos that look a certain way, we start taking them that way ourselves, and the regularity of these photos makes it easier for networks to synthesize pictures that look “right” to us. Talking with Efros, I struggled to come up with an image for this looped and layered interconnectedness, in which patterns spread and outputs are recirculated as inputs. I thought of cloverleaf interchanges, subway maps, Möbius strips.
A sign on the door of Efros’s lab at Berkeley reads “Caution: Deep Nets.” Inside, dozens of workstations are arranged in rows, each its own jumble of laptop, keyboard, monitor, mouse, and coffee mug—the texture of workaholism, iterated. In the back, in a lounge with a pool table, Richard Zhang, a recent Ph.D., opened his laptop to explain the newest developments in synthetic-image generation. Suppose, he said, that you possessed an image of a landscape taken on a sunny day. You might want to know what it would look like in the rain. “The thing is, there’s not just one answer to this problem,” Zhang said. A truly creative network would do more than generate a convincing image. It would be able to synthesize many possibilities—to do for landscapes what Farid’s much simpler system had done for license plates.
Onscreen, Zhang showed me an elaborate flowchart in which neural networks train other networks—an arrangement that researchers call a “generative adversarial network,” or GAN. He pointed to one of the networks: the “generator,” charged with synthesizing, more or less at random, new versions of the landscape. A second network, the “discriminator,” would judge the verisimilitude of those images by comparing them with the “ground truth” of real landscape photographs. The first network riffed; the second disciplined the first. Zhang’s screen showed the system in action. An image of a small town in a valley, on a lake, perhaps in Switzerland, appeared; it was night, and the view was obscured by darkness. Then, image by image, we began to “traverse the latent space.” The sun rose; clouds appeared; the leaves turned; rain descended. The moon shone; fog rolled in; a storm gathered; snow fell. The sun returned. The trees were green, brown, gold, red, white, and bare; the sky was gray, pink, black, white, and blue. “It finds the sources of patterns of variation,” Zhang said. We watched the texture of weather unfold.
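Stripped of the imagery, the arrangement Zhang describes fits in a short training loop. The sketch below, a minimal GAN in PyTorch trained on a toy two-dimensional “ground truth” distribution rather than landscape photographs, is only an illustration of the structure: the generator riffs from a random latent vector, the discriminator grades its output against real samples, and each network is trained against the other.

```python
# Minimal GAN sketch: generator vs. discriminator on a toy 2-D distribution.
import torch
import torch.nn as nn

def real_data(n):
    # stand-in for the "ground truth" photographs: a Gaussian blob centered at (2, 2)
    return torch.randn(n, 2) * 0.5 + torch.tensor([2.0, 2.0])

G = nn.Sequential(nn.Linear(8, 32), nn.ReLU(), nn.Linear(32, 2))   # generator
D = nn.Sequential(nn.Linear(2, 32), nn.ReLU(), nn.Linear(32, 1))   # discriminator
opt_g = torch.optim.Adam(G.parameters(), lr=1e-3)
opt_d = torch.optim.Adam(D.parameters(), lr=1e-3)
bce = nn.BCEWithLogitsLoss()

for step in range(2000):
    # 1) discriminator: learn to tell real samples from generated ones
    fake = G(torch.randn(64, 8)).detach()
    d_loss = bce(D(real_data(64)), torch.ones(64, 1)) + bce(D(fake), torch.zeros(64, 1))
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()

    # 2) generator: learn to produce samples the discriminator scores as real
    g_loss = bce(D(G(torch.randn(64, 8))), torch.ones(64, 1))
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()
```

“Traversing the latent space,” in these terms, just means sliding the random input vector around and watching the generated output change smoothly.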
In 2016, the Defense Advanced Research Projects Agency (DARPA) launched a program in Media Forensics, or MediFor, focussed on the threat that synthetic media poses to national security. Matt Turek, the program’s manager, ticked off possible manipulations when we spoke: “Objects that are cut and pasted into images. The removal of objects from a scene. Faces that might be swapped. Audio that is inconsistent with the video. Images that appear to be taken at a certain time and place but weren’t.” He went on, “What I think we’ll see, in a couple of years, is the synthesis of events that didn’t happen. Multiple images and videos taken from different perspectives will be constructed in such a way that they look like they come from different cameras. It could be something nation-state driven, trying to sway political or military action. It could come from a small, low-resource group. Potentially, it could come from an individual.”
MediFor has brought together dozens of researchers from universities, tech companies, and government agencies. Collectively, they are creating automated systems based on more than fifty “manipulation indicators.” Their goal is not just to spot fakes but to trace them. “We want to attribute a manipulation to someone, to explain why a manipulation was done,” Turek said. Ideally, such systems would be integrated into YouTube, Facebook, and other social-media platforms, where they could flag synthesized content. The problem is speed. Each day, five hundred and seventy-six thousand hours of video are uploaded to YouTube; MediFor’s systems have a “range of run-times,” Turek said, from less than a second to “tens of seconds” or more. Even after they are sped up, practical questions will remain. How will innocent manipulations be distinguished from malicious ones? Will advertisements be flagged? How much content will turn out to be, to some degree, synthetic?
In his glass-walled living room, Hany Farid and I watched a viral video called “Golden Eagle Snatches Kid,” which appears to show a bird of prey swooping down upon a toddler in a Montreal park. Specialized software, Farid explained, could reveal that the shadows of the eagle and the kid were subtly misaligned. Calling up an image of a grizzly bear, Farid pointed out that, under high magnification, its muzzle was fringed in red and blue. “As light hits the surface of a lens, it bends in proportion to its wavelength, and that’s why you see the fringing,” he explained. These “chromatic aberrations” are smallest at the center of an image and larger toward its edges; when that pattern is broken, it suggests that parts of different photographs have been combined.
There are ways in which digital photographs are more tamper-evident than analog ones. During the manufacturing of a digital camera, Farid explained, its sensor—a complex latticework of photosensitive circuits—is assembled one layer at a time. “You’re laying down loads of material, and it’s not perfectly even,” Farid said; inevitably, wrinkles develop, resulting in a pattern of brighter and dimmer pixels that is unique to each individual camera. “We call it ‘camera ballistics’—it’s like the imperfections in the barrel of a gun,” he said. Modern digital cameras, meanwhile, often achieve higher resolutions by guessing about the light their sensors don’t catch. “Essentially, they cheat,” he said. “Two-thirds of the image isn’t recorded—it’s synthesized!” He laughed. “It’s making shit up, but in a logical way that creates a very specific pattern, and if you edit something the pattern is disturbed.”
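The “two-thirds is synthesized” remark refers to demosaicing: most sensors sit behind a Bayer color-filter array, so each pixel records only one of the three color channels and the other two are interpolated from neighbors. A tiny sketch, assuming the common RGGB layout, makes the fraction explicit.

```python
# Which (pixel, channel) values does a Bayer (RGGB) sensor actually measure?
import numpy as np

h, w = 4, 4
measured = np.zeros((h, w, 3), dtype=bool)       # True where the value is recorded
for y in range(h):
    for x in range(w):
        if y % 2 == 0 and x % 2 == 1:
            measured[y, x, 0] = True             # red site
        elif y % 2 == 1 and x % 2 == 0:
            measured[y, x, 2] = True             # blue site
        else:
            measured[y, x, 1] = True             # green sites (half the pixels)

print(measured.mean())   # 0.333...: the remaining two thirds are interpolated
```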
Many researchers who study synthesis also study forensics, and vice versa. “I try to be an optimist,” Jacob Huh, a chilled-out grad student in Efros’s lab, told me. He had trained a neural network to spot chromatic aberrations and other signs of manipulation; the network produces “heat maps” highlighting the suspect areas of an image. “The problem is that, if you can spot it, you can fix it,” Huh said. In theory, a forger could integrate his forensic network into a GAN, where—as a discriminator—it could train a generator to synthesize images capable of eluding its detection. For this reason, in an article titled “Digital Forensics in a Post-Truth Age,” published earlier this year in Forensic Science International, Farid argued that researchers need to keep their newest techniques secret for a while. The time had come, he wrote, to balance “scientific openness” against the risk of “fueling our adversaries.”
In Farid’s view, the sheer number of distinctive “manipulation indicators” gives forensics experts a technical edge over forgers. Just as counterfeiters must painstakingly address each security feature on a hundred-dollar bill—holograms, raised printing, color-shifting ink, and so on—so must a media manipulator solve myriad technical problems, some of them statistical in nature and invisible to the eye, in order to create an undetectable fake. Training neural networks to do this is a formidable, perhaps impossible task. And yet, Farid said, forgers have the advantage in distribution. Although “Golden Eagle Snatches Kid” has been identified as fake, it’s still been viewed more than thirteen million times. Matt Turek predicts that, when it comes to images and video, we will arrive at a new, lower “trust point.” “ ‘A picture’s worth a thousand words,’ ‘Seeing is believing’—in the society I grew up in, those were catchphrases that people agreed with,” he said. “I’ve heard people talk about how we might land at a ‘zero trust’ model, where by default you believe nothing. That could be a difficult thing to recover from.”
As with today’s text-based fake news, the problem is double-edged. Having been deceived by a fake video, one begins to wonder whether many real videos are fake. Eventually, skepticism becomes a strategy in itself. In 2016, when the “Access Hollywood” tape surfaced, Donald Trump acknowledged its accuracy while dismissing his statements as “locker-room talk.” Now Trump suggests to associates that “we don’t think that was my voice.”
“The larger danger is plausible deniability,” Farid told me. It’s here that the comparison with counterfeiting breaks down. No cashier opens up the register hoping to find counterfeit bills. In politics, however, it’s often in our interest not to believe what we are seeing.
As alarming as synthetic media may be, it may be more alarming that we arrived at our current crises of misinformation—Russian election hacking; genocidal propaganda in Myanmar; instant-message-driven mob violence in India—without it. Social media was enough to do the job, by turning ordinary people into media manipulators who will say (or share) anything to win an argument. The main effect of synthetic media may be to close off an escape route from the social-media bubble. In 2014, video of the deaths of Michael Brown and Eric Garner helped start the Black Lives Matter movement; footage of the football player Ray Rice assaulting his fiancée catalyzed a reckoning with domestic violence in the National Football League. It seemed as though video evidence, by turning us all into eyewitnesses, might provide a path out of polarization and toward reality. With the advent of synthetic media, all that changes. Body cameras may still capture what really happened, but the aesthetic of the body camera—its claim to authenticity—is also a vector for misinformation. “Eyewitness video” becomes an oxymoron. The path toward reality begins to wash away.
In the early days of photography, its practitioners had to argue for its objectivity. In courtrooms, experts debated whether photos were reflections of reality or artistic products; legal scholars wondered whether photographs needed to be corroborated by witnesses. It took decades for a consensus to emerge about what made a photograph trustworthy. Some technologists wonder if that consensus could be reëstablished on different terms. Perhaps, using modern tools, photography might be rebooted.
Truepic, a startup in San Diego, aims at producing a new kind of photograph—a verifiable digital original. Photographs taken with its smartphone app are uploaded to its servers, where they enter a kind of cryptographic lockbox. “We make sure the image hasn’t been manipulated in transit,” Jeffrey McGregor, the company’s C.E.O., explained. “We look at geolocation data, at the nearby cell towers, at the barometric-pressure sensor on the phone, and verify that everything matches. We run the photo through a bunch of computer-vision tests.” If the image passes muster, it’s entered into the Bitcoin and Ethereum blockchain. From then on, it can be shared on a special Web page that verifies its authenticity. Today, Truepic’s biggest clients are insurance companies, which allow policyholders to take verified photographs of their flooded basements or broken windshields. The software has also been used by N.G.O.s to document human-rights violations, and by workers at a construction company in Kazakhstan, who take “verified selfies” as a means of clocking in and out. “Our goal is to expand into industries where there’s a ‘trust gap,’ ” McGregor said: property rentals, online dating. Eventually, he hopes to integrate his software into camera components, so that “verification can begin the moment photons enter the lens.”
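Conceptually, the “cryptographic lockbox” amounts to committing to a fingerprint of the image and its capture metadata at upload time, so that any later change to either can be detected. The sketch below is only an illustration of that idea under stated assumptions; the field names and hashing scheme are invented for the example and are not Truepic’s actual pipeline.

```python
# Hypothetical "verified original" fingerprint: hash image bytes plus metadata.
import hashlib, json

def fingerprint(image_bytes: bytes, metadata: dict) -> str:
    record = json.dumps(metadata, sort_keys=True).encode() + image_bytes
    return hashlib.sha256(record).hexdigest()

meta = {"lat": 37.87, "lon": -122.26, "pressure_hpa": 1013.2,
        "timestamp": "2018-10-12T09:30:00Z"}           # illustrative capture metadata
original = fingerprint(b"...raw jpeg bytes...", meta)

# Publishing `original` (for example, anchoring it in a public ledger) lets anyone
# later check that a presented photo and metadata still hash to the same value.
```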
Earlier this year, Danielle Citron and Robert Chesney, law professors at the Universities of Maryland and Texas, respectively, published an article titled “Deep Fakes: A Looming Challenge for Privacy, Democracy, and National Security,” in which they explore the question of whether certain kinds of synthetic media might be made illegal. (One plausible path, Citron told me, is to outlaw synthetic media aimed at inciting violence; another is to adapt the law against impersonating a government official so that it applies to synthetic videos depicting them.) Eventually, Citron and Chesney indulge in a bit of sci-fi speculation. They imagine the “worst-case scenario,” in which deepfakes prove ineradicable and are used for electioneering, blackmail, and other nefarious purposes. In such a world, we might record ourselves constantly, so as to debunk synthetic media when it emerges. “The vendor supplying such a service and maintaining the resulting data would be in an extraordinary position of power,” they write; its database would be a tempting resource for law-enforcement agencies. Still, if it’s a choice between surveillance and synthesis, many people may prefer to be surveilled. Truepic, McGregor told me, had already had discussions with a few political campaigns. “They say, ‘We would use this to just document everything for ourselves, as an insurance policy.’ ”
One evening, Efros and I walked to meet Farid for dinner at a Japanese restaurant near campus. On the way, we talked about the many non-nefarious applications of image synthesis. A robot, by envisioning what it might see around a corner and discovering whether it had guessed right, could learn its way around a building; “pose detection” could allow it to learn motions by observing them. “Prediction is really the hallmark of intelligence,” Efros said, “and we are constantly predicting and hallucinating things that are not actually visible.” In a sense, synthesizing is simply imagining. The apparent paradox of Farid’s license-plate research—that unreal images can help us read real ones—just reflects how thinking works. In this respect, deepfakes were sparks thrown off by the project of building A.I. “When I see a face,” Efros continued, “I don’t know for sure what it looks like from the side. . . .” He paused. “You know what? I think I screwed up.” We had gotten lost.
When we found the restaurant, Farid, who had come on his motorcycle, was waiting for us, wearing a snazzy leather jacket. Efros and Farid—the generator and the discriminator—embraced. They have known each other for a decade.
We took a small table by the window. “What’s really interesting about these technologies is how quickly they went from ‘Whoa, this is really cool’ to ‘Holy crap, this is subverting democracy,’ ” Farid said, over a seaweed salad.
“I think it’s video,” Efros said. “When it was images, nobody cared.”
“Trump is part of the equation, too, right?” Farid asked. “He’s creating an atmosphere where you shouldn’t believe what you read.”
“But Putin—my dear Putin!—his relationship with truth is amazing,” Efros said. “Oliver Stone did a documentary with him, and Putin showed Stone a video of Russian troops attacking ISIS in Syria. Later, it turned out to be footage of Americans in Iraq.” He grimaced, reaching for some sushi. “A lot of it is not faking data—it’s misattribution. On Russian TV, they say, ‘Look, the Ukrainians are bombing Donetsk,’ but actually it’s footage from somewhere else. The pictures are fine. It’s the label that’s wrong.”
Over dinner, Farid and Efros debated the deep roots of the fake-news phenomenon. “A huge part of the solution is dealing with perverse incentives on social media,” Farid said. “The entire business model of these trillion-dollar companies is attention engineering. It’s poison.” Efros wondered if we humans were evolutionarily predisposed to jump to conclusions that confirmed our own views—the epistemic equivalent of content-aware fill.
As another round of beer arrived, Farid told a story. Many years ago, he said, he’d published a paper about a famous photograph of Lee Harvey Oswald. The photograph shows Oswald standing in his back yard, holding the rifle he later used to kill President Kennedy; conspiracy theorists have long claimed that it’s a fake. “It kind of does look fake,” Farid said. The rifle appears unusually long, and Oswald seems to be leaning back into space at an unrealistic angle; in this photograph, but not in others, he has a strangely narrow chin. “We built this 3-D model of the scene,” Farid said, “and it turned out we could explain everything that people thought was wrong—it was just that the light was weird. You’d think people would be, like, ‘Nice job, Hany.’ ”
Efros laughed.
“But no! When it comes to conspiracies, there are the facts that prove our beliefs and the ones that are part of the plot. And so I became part of the conspiracy. At first, it was just me. Then my father sent me an e-mail. He said, ‘Someone sent me a link to an article claiming that you and I are part of a conspiracy together.’ My dad is a research chemist who made his career at Eastman Kodak. Well, it turns out he was at Eastman Kodak at the same time they developed the Zapruder film.”



“Ahhhhh,” Efros said.
For a moment, they were silent. “We’re going to need technological solutions, but I don’t think they’re going to solve the problem,” Farid said. “And I say that as a technologist. I think it’s a societal problem—a human problem.”
On a brisk Friday morning, I walked to Efros’s lab to see my synthetic self. The Berkeley campus was largely empty, and I couldn’t help noticing how much it resembled other campuses—the texture of college is highly consistent. Already, the way I looked at the world was shifting. That morning, on my phone, I’d watched an incredible video in which a cat scaled the outside of an apartment building, reached the tenth floor, then leaped to the ground and scampered away. Automatically, I’d assumed the video was fake. (I Googled; it wasn’t.)
A world saturated with synthesis, I’d begun to think, would evoke contradictory feelings. During my time at Berkeley, the images and videos I saw had come to seem distant and remote, like objects behind glass. Their clarity and perfection looked artificial (as did their gritty realism, when they had it). But I’d also begun to feel, more acutely than usual, the permeability of my own mind. I thought of a famous study in which people saw doctored photographs of themselves. As children, they appeared to be standing in the basket of a hot-air balloon. Later, when asked, some thought they could remember actually taking a balloon ride. It’s not just that what we see can’t be unseen. It’s that, in our memories and imaginations, we keep seeing it.
At a small round table, I sat down with Shiry Ginosar and another graduate student, Tinghui Zhou, a quietly amused man with oblong glasses. They were excited to show me what they had achieved using a GAN that they had developed over the past year and a half, with an undergraduate named Caroline Chan. (Chan is now a graduate student in computer science at M.I.T.)
“O.K.,” Ginosar said. On her laptop, she opened a video. In a box in the upper-left corner of the screen, the singer Bruno Mars wore white Nikes, track pants, and an elaborately striped shirt. Below him, a small wireframe figure imitated his posture. “That’s our pose detection,” she said. The right side of the screen contained a large image of me, also in the same pose: body turned slightly to the side, hips cocked, left arm raised in the air.
Ginosar tapped the space bar. Mars’s hit song “That’s What I Like” began to play. He started dancing. So did my synthetic self. Our shoulders rocked from left to right. We did a semi-dab, and then a cool, moonwalk-like maneuver with our feet.
“Jump in the Cadillac, girl, let’s put some miles on it!” Mars sang, and, on cue, we mimed turning a steering wheel. My synthetic face wore a huge grin.
“This is amazing,” I said.
“Look at the shadow!” Zhou said. It undulated realistically beneath my synthetic body. “We didn’t tell it to do that—it figured it out.” Looking carefully, I noticed a few imperfections. My shirt occasionally sprouted an extra button. My wristwatch appeared and disappeared. But I was transfixed. Had Bruno Mars and I always had such similar hair? Our fingers snapped in unison, on the beat.
Efros arrived. “Oh, very nice!” he said, leaning in close and nodding appreciatively. “It’s very good!”
“The generator tries to make it look real, but it can look real in different ways,” Ginosar explained.
“The music helps,” Efros said. “You don’t notice the mistakes as much.”
The song continued. “Take a look in that mirror—now tell me who’s the fairest,” Mars suggested. “Is it you? Is it me? Say it’s us and I’ll agree!”
“Before Photoshop, did everyone believe that images were real?” Zhou asked, in a wondering tone.
“Yes,” Ginosar said. “That’s how totalitarian regimes and propaganda worked.”
“I think that will happen with video, too,” Zhou said. “People will adjust.”
“It’s like with laser printers,” Efros said, picking up a printout from the table. “Before, if you got an official-looking envelope with an official-looking letter, you’d treat it seriously, because it was beautifully typed. Must be the government, right? Now I toss it out.”
Everyone laughed.
“But, actually, from the very beginning photography was never objective,” Efros continued. “Whom you photograph, how you frame it—it’s all choices. So we’ve been fooling ourselves. Historically, it will turn out that there was this weird time when people just assumed that photography and videography were true. And now that very short little period is fading. Maybe it should’ve faded a long time ago.”
When we’d first spoken on the phone, several weeks earlier, Efros had told me a family story about Soviet media manipulation. In the nineteen-forties and fifties, his grandmother had owned an edition of the Great Soviet Encyclopedia. Every so often, an update would arrive in the mail, containing revised articles and photographs to be pasted over the old ones. “Everyone knew it wasn’t true,” Efros said. “Apparently, that wasn’t the point.”
I mulled this over as I walked out the door, down the stairs, and into the sun. I watched the students pass by, with their identical backpacks, similar haircuts, and computable faces. I took out my phone, found the link to the video, and composed an e-mail to some friends. “This is so great!” I wrote. “Check out my moves!” I hit Send. ♦
 

Pizzabeak

Banned
Local time
Yesterday 10:55 PM
Joined
Jan 24, 2012
Messages
2,667
-->
When algorithms go wrong we need more power to fight back, say AI researchers

theverge.com

When algorithms go wrong we need more power to fight back, say AI researchers

James Vincent, @jjvincent

6-7 minutes

Governments and private companies are deploying AI systems at a rapid pace, but the public lacks the tools to hold these systems accountable when they fail. That’s one of the major conclusions in a new report issued by AI Now, a research group home to employees from tech companies like Microsoft and Google and affiliated with New York University.
The report examines the social challenges of AI and algorithmic systems, homing in on what researchers call “the accountability gap” as this technology is integrated “across core social domains.” They put forward ten recommendations, including calling for government regulation of facial recognition (something Microsoft president Brad Smith also advocated for this week) and “truth-in-advertising” laws for AI products, so that companies can’t simply trade on the reputation of the technology to sell their services.
Big tech companies have found themselves in an AI gold rush, charging into a broad range of markets from recruitment to healthcare to sell their services. But, as AI Now co-founder Meredith Whittaker, leader of Google’s Open Research Group, tells The Verge, “a lot of their claims about benefit and utility are not backed by publicly accessible scientific evidence.”
Whittaker gives the example of IBM’s Watson system, which, during trial diagnoses at Memorial Sloan Kettering Cancer Center, gave “unsafe and incorrect treatment recommendations,” according to leaked internal documents. “The claims that their marketing department had made about [their technology’s] near-magical properties were never substantiated by peer-reviewed research,” says Whittaker.
The authors of AI Now’s report say this incident is just one of a number of “cascading scandals” involving AI and algorithmic systems deployed by governments and big tech companies in 2018. Others range from accusations that Facebook helped facilitate genocide in Myanmar, to the revelation that Google is helping to build AI tools for military drones as part of Project Maven, and the Cambridge Analytica scandal.
In all these cases there has been public outcry as well as internal dissent in Silicon Valley’s most valuable companies. The year saw Google employees quitting over the company’s Pentagon contracts, Microsoft employees pressuring the company to stop working with Immigration and Customs Enforcement (ICE), and employee walkouts from Google, Uber, eBay, and Airbnb protesting issues involving sexual harassment.
Whittaker says these protests, supported by labor alliances and research initiatives like AI Now’s own, have become “an unexpected and gratifying force for public accountability.”
This year saw widespread protests against the use of AI, including Google’s involvement in building drone surveillance technology. Photo by John Moore/Getty Images
But the report is clear: the public needs more. The danger to civic justice is especially clear when it comes to the adoption of automated decision systems (ADS) by the government. These include algorithms used for calculating prison sentences and allotting medical aid. Usually, say the report’s authors, software is introduced into these domains with the purpose of cutting costs and increasing efficiency. But the result is often systems that make decisions that cannot be explained or appealed.
AI Now’s report cites a number of examples, including that of Tammy Dobbs, an Arkansas resident with cerebral palsy who had her Medicaid-provided home care cut from 56 hours to 32 hours a week without explanation. Legal Aid successfully sued the State of Arkansas and the algorithmic allocation system was judged to be unconstitutional.
Whittaker and fellow AI Now co-founder Kate Crawford, a researcher at Microsoft, say the integration of ADS into government services has outpaced our ability to audit these systems. But, they say, there are concrete steps that can be taken to remedy this. These include requiring technology vendors which sell services to the government to waive trade secrecy protections, thereby allowing researchers to better examine their algorithms.
“You have to be able to say, ‘you’ve been cut off from Medicaid, here’s why,’ and you can’t do that with black box systems,” says Crawford. “If we want public accountability, we have to be able to audit this technology.”
Another area where action is needed immediately, say the pair, is the use of facial recognition and affect recognition. The former is increasingly being used by police forces in China, the US, and Europe. Amazon’s Rekognition software, for example, has been deployed by police in Orlando and Washington County, even though tests have shown that the software can perform differently across different races. In a test where Rekognition was used to identify members of Congress, it had an error rate of 39 percent for non-white members, compared to only 5 percent for white members. And for affect recognition, where companies claim technology can scan someone’s face and read their character and even intent, AI Now’s authors say companies are often peddling pseudoscience.
Despite these challenges, though, Whittaker and Crawford say that 2018 has shown that when the problems of AI accountability and bias are brought to light, tech employees, lawmakers, and the public are willing to act rather than acquiesce.
With regards to the algorithmic scandals incubated by Silicon Valley’s biggest companies, Crawford says: “Their ‘move fast and break things’ ideology has broken a lot of things that are pretty dear to us and right now we have to start thinking about the public interest.”
Says Whittaker: “What you’re seeing is people waking up to the contradictions between the cyber-utopian tech rhetoric and the reality of the implications of these technologies as they’re used in everyday life.”


Artificial Intelligence and the Future of Humans

Experts say the rise of artificial intelligence will make most people better off over the next decade, but many have concerns about how advances in AI will affect what it means to be human, to be productive and to exercise free will


By Janna Anderson, Lee Rainie and Alex Luchsinger



A vehicle and person recognition system for use by law enforcement is demonstrated at last year’s GPU Technology Conference in Washington, D.C., which highlights new uses for artificial intelligence and deep learning. (Saul Loeb/AFP/Getty Images)
Digital life is augmenting human capacities and disrupting eons-old human activities. Code-driven systems have spread to more than half of the world’s inhabitants in ambient information and connectivity, offering previously unimagined opportunities and unprecedented threats. As emerging algorithm-driven artificial intelligence (AI) continues to spread, will people be better off than they are today?
Some 979 technology pioneers, innovators, developers, business and policy leaders, researchers and activists answered this question in a canvassing of experts conducted in the summer of 2018.
The experts predicted networked artificial intelligence will amplify human effectiveness but also threaten human autonomy, agency and capabilities. They spoke of wide-ranging possibilities: that computers might match or even exceed human intelligence and capabilities on tasks such as complex decision-making, reasoning and learning, sophisticated analytics and pattern recognition, visual acuity, speech recognition and language translation. They said “smart” systems in communities, in vehicles, in buildings and utilities, on farms and in business processes will save time, money and lives and offer opportunities for individuals to enjoy a more customized future.
Many focused their optimistic remarks on health care and the many possible applications of AI in diagnosing and treating patients or helping senior citizens live fuller and healthier lives. They were also enthusiastic about AI’s role in contributing to broad public-health programs built around massive amounts of data that may be captured in the coming years about everything from personal genomes to nutrition. Additionally, a number of these experts predicted that AI would abet long-anticipated changes in formal and informal education systems.
Yet, most experts, regardless of whether they are optimistic or not, expressed concerns about the long-term impact of these new tools on the essential elements of being human. All respondents in this non-scientific canvassing were asked to elaborate on why they felt AI would leave people better off or not. Many shared deep worries, and many also suggested pathways toward solutions. The main themes they sounded about threats and remedies are outlined in the accompanying table.
AI and the future of humans: Experts express concerns and suggest solutions
CONCERNS

Human agency: Individuals are experiencing a loss of control over their lives. Decision-making on key aspects of digital life is automatically ceded to code-driven, "black box" tools. People lack input and do not learn the context about how the tools work. They sacrifice independence, privacy and power over choice; they have no control over these processes. This effect will deepen as automated systems become more prevalent and complex.

Data abuse: Data use and surveillance in complex systems is designed for profit or for exercising power. Most AI tools are and will be in the hands of companies striving for profits or governments striving for power. Values and ethics are often not baked into the digital systems making people's decisions for them. These systems are globally networked and not easy to regulate or rein in.

Job loss: The AI takeover of jobs will widen economic divides, leading to social upheaval. The efficiencies and other economic advantages of code-based machine intelligence will continue to disrupt all aspects of human work. While some expect new jobs will emerge, others worry about massive job losses, widening economic divides and social upheavals, including populist uprisings.

Dependence lock-in: Reduction of individuals' cognitive, social and survival skills. Many see AI as augmenting human capacities but some predict the opposite: that people's deepening dependence on machine-driven networks will erode their abilities to think for themselves, take action independent of automated systems and interact effectively with others.

Mayhem: Autonomous weapons, cybercrime and weaponized information. Some predict further erosion of traditional sociopolitical structures and the possibility of great loss of lives due to accelerated growth of autonomous military applications and the use of weaponized information, lies and propaganda to dangerously destabilize human groups. Some also fear cybercriminals' reach into economic systems.

SUGGESTED SOLUTIONS

Global good is No. 1: Improve human collaboration across borders and stakeholder groups. Digital cooperation to serve humanity's best interests is the top priority. Ways must be found for people around the world to come to common understandings and agreements, to join forces to facilitate the innovation of widely accepted approaches aimed at tackling wicked problems and maintaining control over complex human-digital networks.

Values-based system: Develop policies to assure AI will be directed at 'humanness' and common good. Adopt a 'moonshot mentality' to build inclusive, decentralized intelligent digital networks 'imbued with empathy' that help humans aggressively ensure that technology meets social and ethical responsibilities. Some new level of regulatory and certification process will be necessary.

Prioritize people: Alter economic and political systems to better help humans 'race with the robots'. Reorganize economic and political systems toward the goal of expanding humans' capacities and capabilities in order to heighten human/AI collaboration and staunch trends that would compromise human relevance in the face of programmed intelligence.

PEW RESEARCH CENTER AND ELON UNIVERSITY'S IMAGINING THE INTERNET CENTER
Specifically, participants were asked to consider the following:
“Please think forward to the year 2030. Analysts expect that people will become even more dependent on networked artificial intelligence (AI) in complex digital systems. Some say we will continue on the historic arc of augmenting our lives with mostly positive results as we widely implement these networked tools. Some say our increasing dependence on these AI and related systems is likely to lead to widespread difficulties.
Our question: By 2030, do you think it is most likely that advancing AI and related technology systems will enhance human capacities and empower them? That is, most of the time, will most people be better off than they are today? Or is it most likely that advancing AI and related technology systems will lessen human autonomy and agency to such an extent that most people will not be better off than the way things are today?”

Overall, and despite the downsides they fear, 63% of respondents in this canvassing said they are hopeful that most individuals will be mostly better off in 2030, and 37% said people will not be better off.
A number of the thought leaders who participated in this canvassing said humans’ expanding reliance on technological systems will only go well if close attention is paid to how these tools, platforms and networks are engineered, distributed and updated. Some of the powerful, overarching answers included those from:
Sonia Katyal, co-director of the Berkeley Center for Law and Technology and a member of the inaugural U.S. Commerce Department Digital Economy Board of Advisors, predicted, “In 2030, the greatest set of questions will involve how perceptions of AI and their application will influence the trajectory of civil rights in the future. Questions about privacy, speech, the right of assembly and technological construction of personhood will all re-emerge in this new AI context, throwing into question our deepest-held beliefs about equality and opportunity for all. Who will benefit and who will be disadvantaged in this new world depends on how broadly we analyze these questions today, for the future.”
“We need to work aggressively to make sure technology matches our values.” – Erik Brynjolfsson
Erik Brynjolfsson, director of the MIT Initiative on the Digital Economy and author of “Machine, Platform, Crowd: Harnessing Our Digital Future,” said, “AI and related technologies have already achieved superhuman performance in many areas, and there is little doubt that their capabilities will improve, probably very significantly, by 2030. … I think it is more likely than not that we will use this power to make the world a better place. For instance, we can virtually eliminate global poverty, massively reduce disease and provide better education to almost everyone on the planet. That said, AI and ML [machine learning] can also be used to increasingly concentrate wealth and power, leaving many people behind, and to create even more horrifying weapons. Neither outcome is inevitable, so the right question is not ‘What will happen?’ but ‘What will we choose to do?’ We need to work aggressively to make sure technology matches our values. This can and must be done at all levels, from government, to business, to academia, and to individual choices.”
Bryan Johnson, founder and CEO of Kernel, a leading developer of advanced neural interfaces, and OS Fund, a venture capital firm, said, “I strongly believe the answer depends on whether we can shift our economic systems toward prioritizing radical human improvement and staunching the trend toward human irrelevance in the face of AI. I don’t mean just jobs; I mean true, existential irrelevance, which is the end result of not prioritizing human well-being and cognition.”
Marina Gorbis, executive director of the Institute for the Future, said, “Without significant changes in our political economy and data governance regimes [AI] is likely to create greater economic inequalities, more surveillance and more programmed and non-human-centric interactions. Every time we program our environments, we end up programming ourselves and our interactions. Humans have to become more standardized, removing serendipity and ambiguity from our interactions. And this ambiguity and complexity is what is the essence of being human.”
Judith Donath, author of “The Social Machine, Designs for Living Online” and faculty fellow at Harvard University’s Berkman Klein Center for Internet & Society, commented, “By 2030, most social situations will be facilitated by bots – intelligent-seeming programs that interact with us in human-like ways. At home, parents will engage skilled bots to help kids with homework and catalyze dinner conversations. At work, bots will run meetings. A bot confidant will be considered essential for psychological well-being, and we’ll increasingly turn to such companions for advice ranging from what to wear to whom to marry. We humans care deeply about how others see us – and the others whose approval we seek will increasingly be artificial. By then, the difference between humans and bots will have blurred considerably. Via screen and projection, the voice, appearance and behaviors of bots will be indistinguishable from those of humans, and even physical robots, though obviously non-human, will be so convincingly sincere that our impression of them as thinking, feeling beings, on par with or superior to ourselves, will be unshaken. Adding to the ambiguity, our own communication will be heavily augmented: Programs will compose many of our messages and our online/AR appearance will [be] computationally crafted. (Raw, unaided human speech and demeanor will seem embarrassingly clunky, slow and unsophisticated.) Aided by their access to vast troves of data about each of us, bots will far surpass humans in their ability to attract and persuade us. Able to mimic emotion expertly, they’ll never be overcome by feelings: If they blurt something out in anger, it will be because that behavior was calculated to be the most efficacious way of advancing whatever goals they had ‘in mind.’ But what are those goals? Artificially intelligent companions will cultivate the impression that social goals similar to our own motivate them – to be held in good regard, whether as a beloved friend, an admired boss, etc. But their real collaboration will be with the humans and institutions that control them. Like their forebears today, these will be sellers of goods who employ them to stimulate consumption and politicians who commission them to sway opinions.”
Andrew McLaughlin, executive director of the Center for Innovative Thinking at Yale University, previously deputy chief technology officer of the United States for President Barack Obama and global public policy lead for Google, wrote, “2030 is not far in the future. My sense is that innovations like the internet and networked AI have massive short-term benefits, along with long-term negatives that can take decades to be recognizable. AI will drive a vast range of efficiency optimizations but also enable hidden discrimination and arbitrary penalization of individuals in areas like insurance, job seeking and performance assessment.”
Michael M. Roberts, first president and CEO of the Internet Corporation for Assigned Names and Numbers (ICANN) and Internet Hall of Fame member, wrote, “The range of opportunities for intelligent agents to augment human intelligence is still virtually unlimited. The major issue is that the more convenient an agent is, the more it needs to know about you – preferences, timing, capacities, etc. – which creates a trade-off: more help requires more intrusion. This is not a black-and-white issue – the shades of gray and associated remedies will be argued endlessly. The record to date is that convenience overwhelms privacy. I suspect that will continue.”
danah boyd, a principal researcher for Microsoft and founder and president of the Data & Society Research Institute, said, “AI is a tool that will be used by humans for all sorts of purposes, including in the pursuit of power. There will be abuses of power that involve AI, just as there will be advances in science and humanitarian efforts that also involve AI. Unfortunately, there are certain trend lines that are likely to create massive instability. Take, for example, climate change and climate migration. This will further destabilize Europe and the U.S., and I expect that, in panic, we will see AI be used in harmful ways in light of other geopolitical crises.”
Amy Webb, founder of the Future Today Institute and professor of strategic foresight at New York University, commented, “The social safety net structures currently in place in the U.S. and in many other countries around the world weren’t designed for our transition to AI. The transition through AI will last the next 50 years or more. As we move farther into this third era of computing, and as every single industry becomes more deeply entrenched with AI systems, we will need new hybrid-skilled knowledge workers who can operate in jobs that have never needed to exist before. We’ll need farmers who know how to work with big data sets. Oncologists trained as roboticists. Biologists trained as electrical engineers. We won’t need to prepare our workforce just once, with a few changes to the curriculum. As AI matures, we will need a responsive workforce, capable of adapting to new processes, systems and tools every few years. The need for these fields will arise faster than our labor departments, schools and universities are acknowledging. It’s easy to look back on history through the lens of the present – and to overlook the social unrest caused by widespread technological unemployment. We need to address a difficult truth that few are willing to utter aloud: AI will eventually cause a large number of people to be permanently out of work. Just as generations before witnessed sweeping changes during and in the aftermath of the Industrial Revolution, the rapid pace of technology will likely mean that Baby Boomers and the oldest members of Gen X – especially those whose jobs can be replicated by robots – won’t be able to retrain for other kinds of work without a significant investment of time and effort.”
Barry Chudakov, founder and principal of Sertain Research, commented, “By 2030 the human-machine/AI collaboration will be a necessary tool to manage and counter the effects of multiple simultaneous accelerations: broad technology advancement, globalization, climate change and attendant global migrations. In the past, human societies managed change through gut and intuition, but as Eric Teller, CEO of Google X, has said, ‘Our societal structures are failing to keep pace with the rate of change.’ To keep pace with that change and to manage a growing list of ‘wicked problems’ by 2030, AI – or using Joi Ito’s phrase, extended intelligence – will value and revalue virtually every area of human behavior and interaction. AI and advancing technologies will change our response framework and time frames (which in turn, changes our sense of time). Where once social interaction happened in places – work, school, church, family environments – social interactions will increasingly happen in continuous, simultaneous time. If we are fortunate, we will follow the 23 Asilomar AI Principles outlined by the Future of Life Institute and will work toward ‘not undirected intelligence but beneficial intelligence.’ Akin to nuclear deterrence stemming from mutually assured destruction, AI and related technology systems constitute a force for a moral renaissance. We must embrace that moral renaissance, or we will face moral conundrums that could bring about human demise. … My greatest hope for human-machine/AI collaboration constitutes a moral and ethical renaissance – we adopt a moonshot mentality and lock arms to prepare for the accelerations coming at us. My greatest fear is that we adopt the logic of our emerging technologies – instant response, isolation behind screens, endless comparison of self-worth, fake self-presentation – without thinking or responding smartly.”
John C. Havens, executive director of the IEEE Global Initiative on Ethics of Autonomous and Intelligent Systems and the Council on Extended Intelligence, wrote, “Now, in 2018, a majority of people around the world can’t access their data, so any ‘human-AI augmentation’ discussions ignore the critical context of who actually controls people’s information and identity. Soon it will be extremely difficult to identify any autonomous or intelligent systems whose algorithms don’t interact with human data in one form or another.”
“At stake is nothing less than what sort of society we want to live in and how we experience our humanity.” – Batya Friedman
Batya Friedman, a human-computer interaction professor at the University of Washington’s Information School, wrote, “Our scientific and technological capacities have and will continue to far surpass our moral ones – that is our ability to use wisely and humanely the knowledge and tools that we develop. … Automated warfare – when autonomous weapons kill human beings without human engagement – can lead to a lack of responsibility for taking the enemy’s life or even knowledge that an enemy’s life has been taken. At stake is nothing less than what sort of society we want to live in and how we experience our humanity.”
Greg Shannon, chief scientist for the CERT Division at Carnegie Mellon University, said, “Better/worse will appear 4:1 with the long-term ratio 2:1. AI will do well for repetitive work where ‘close’ will be good enough and humans dislike the work. … Life will definitely be better as AI extends lifetimes, from health apps that intelligently ‘nudge’ us to health, to warnings about impending heart/stroke events, to automated health care for the underserved (remote) and those who need extended care (elder care). As to liberty, there are clear risks. AI affects agency by creating entities with meaningful intellectual capabilities for monitoring, enforcing and even punishing individuals. Those who know how to use it will have immense potential power over those who don’t/can’t. Future happiness is really unclear. Some will cede their agency to AI in games, work and community, much like the opioid crisis steals agency today. On the other hand, many will be freed from mundane, unengaging tasks/jobs. If elements of community happiness are part of AI objective functions, then AI could catalyze an explosion of happiness.”
Kostas Alexandridis, author of “Exploring Complex Dynamics in Multi-agent-based Intelligent Systems,” predicted, “Many of our day-to-day decisions will be automated with minimal intervention by the end-user. Autonomy and/or independence will be sacrificed and replaced by convenience. Newer generations of citizens will become more and more dependent on networked AI structures and processes. There are challenges that need to be addressed in terms of critical thinking and heterogeneity. Networked interdependence will, more likely than not, increase our vulnerability to cyberattacks. There is also a real likelihood that there will exist sharper divisions between digital ‘haves’ and ‘have-nots,’ as well as among technologically dependent digital infrastructures. Finally, there is the question of the new ‘commanding heights’ of the digital network infrastructure’s ownership and control.”
Oscar Gandy, emeritus professor of communication at the University of Pennsylvania, responded, “We already face an ungranted assumption when we are asked to imagine human-machine ‘collaboration.’ Interaction is a bit different, but still tainted by the grant of a form of identity – maybe even personhood – to machines that we will use to make our way through all sorts of opportunities and challenges. The problems we will face in the future are quite similar to the problems we currently face when we rely upon ‘others’ (including technological systems, devices and networks) to acquire things we value and avoid those other things (that we might, or might not be aware of).”
James Scofield O’Rourke, a professor of management at the University of Notre Dame, said, “Technology has, throughout recorded history, been a largely neutral concept. The question of its value has always been dependent on its application. For what purpose will AI and other technological advances be used? Everything from gunpowder to internal combustion engines to nuclear fission has been applied in both helpful and destructive ways. Assuming we can contain or control AI (and not the other way around), the answer to whether we’ll be better off depends entirely on us (or our progeny). ‘The fault, dear Brutus, is not in our stars, but in ourselves, that we are underlings.’”
Simon Biggs, a professor of interdisciplinary arts at the University of Edinburgh, said, “AI will function to augment human capabilities. The problem is not with AI but with humans. As a species we are aggressive, competitive and lazy. We are also empathic, community minded and (sometimes) self-sacrificing. We have many other attributes. These will all be amplified. Given historical precedent, one would have to assume it will be our worst qualities that are augmented. My expectation is that in 2030 AI will be in routine use to fight wars and kill people, far more effectively than we can currently kill. As societies we will be less affected by this than we currently are, as we will not be doing the fighting and killing ourselves. Our capacity to modify our behaviour, subject to empathy and an associated ethical framework, will be reduced by the disassociation between our agency and the act of killing. We cannot expect our AI systems to be ethical on our behalf – they won’t be, as they will be designed to kill efficiently, not thoughtfully. My other primary concern is to do with surveillance and control. The advent of China’s Social Credit System (SCS) is an indicator of what is likely to come. We will exist within an SCS as AI constructs hybrid instances of ourselves that may or may not resemble who we are. But our rights and affordances as individuals will be determined by the SCS. This is the Orwellian nightmare realised.”
Mark Surman, executive director of the Mozilla Foundation, responded, “AI will continue to concentrate power and wealth in the hands of a few big monopolies based in the U.S. and China. Most people – and parts of the world – will be worse off.”
William Uricchio, media scholar and professor of comparative media studies at MIT, commented, “AI and its related applications face three problems: development at the speed of Moore’s Law, development in the hands of a technological and economic elite, and development without benefit of an informed or engaged public. The public is reduced to a collective of consumers awaiting the next technology. Whose notion of ‘progress’ will prevail? We have ample evidence of AI being used to drive profits, regardless of implications for long-held values; to enhance governmental control and even score citizens’ ‘social credit’ without input from citizens themselves. Like technologies before it, AI is agnostic. Its deployment rests in the hands of society. But absent an AI-literate public, the decision of how best to deploy AI will fall to special interests. Will this mean equitable deployment, the amelioration of social injustice and AI in the public service? Because the answer to this question is social rather than technological, I’m pessimistic. The fix? We need to develop an AI-literate public, which means focused attention in the educational sector and in public-facing media. We need to assure diversity in the development of AI technologies. And until the public, its elected representatives and their legal and regulatory regimes can get up to speed with these fast-moving developments we need to exercise caution and oversight in AI’s development.”
The remainder of this report is divided into three sections that draw from hundreds of additional respondents’ hopeful and critical observations: 1) concerns about human-AI evolution, 2) suggested solutions to address AI’s impact, and 3) expectations of what life will be like in 2030, including respondents’ positive outlooks on the quality of life and the future of work, health care and education. Some responses are lightly edited for style.


"Artificial Intelligence Has Some Explaining to Do


Software makers offer more transparent machine-learning tools—but there’s a trade-off.

By
Jeremy Kahn

December 12, 2018, 3:00 AM PST


Artificial intelligence software can recognize faces, translate between Mandarin and Swahili, and beat the world’s best human players at such games as Go, chess, and poker. What it can’t always do is explain itself.


AI is software that can learn from data or experiences to make predictions. A computer programmer specifies the data from which the software should learn and writes a set of instructions, known as an algorithm, about how the software should do that—but doesn’t dictate exactly what it should learn. This is what gives AI much of its power: It can discover connections in the data that are more complicated or nuanced than a human could find. But this complexity also means that the reason the software reaches any particular conclusion is often largely opaque, even to its own creators.
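To make that distinction concrete, here is a minimal sketch in Python (scikit-learn is used only for illustration; the toy loan data, feature meanings and labels are invented for this example). The programmer supplies the data and chooses the learning algorithm, but the mapping from inputs to predictions ends up encoded in learned numbers rather than in rules anyone wrote:

```python
# A minimal sketch of the idea above: the programmer chooses the data and the
# learning algorithm, but never writes the decision rule itself. The toy
# "loan repayment" data here is invented purely for illustration.
from sklearn.linear_model import LogisticRegression

# Each row: [income in $1000s, years at current job]; label: 1 = repaid, 0 = defaulted
X = [[25, 1], [40, 3], [60, 5], [80, 10], [20, 0], [90, 12], [30, 2], [70, 8]]
y = [0, 0, 1, 1, 0, 1, 0, 1]

model = LogisticRegression()
model.fit(X, y)                      # the algorithm extracts the pattern on its own

print(model.coef_, model.intercept_) # the "knowledge" is just these learned numbers
print(model.predict([[55, 4]]))      # a prediction, with no hand-written if/then rule
```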







For software makers hoping to sell AI systems, this lack of clarity can be bad for business. It’s hard for humans to trust a system they can’t understand—and without trust, organizations won’t pony up big bucks for AI software. This is especially true in fields such as health care, finance, and law enforcement, where the consequences of a bad recommendation are more substantial than, say, that time Netflix thought you might enjoy watching The Hangover Part III.



Regulation is also driving companies to ask for more explainable AI. In the U.S., insurance laws require that companies be able to explain why they denied someone coverage or charged them a higher premium than their neighbor. In Europe, the General Data Protection Regulation that took effect in May gives EU citizens a “right to a human review” of any algorithmic decision affecting them. If the bank rejects your loan application, it can’t just tell you the computer said no—a bank employee has to be able to review the process the machine used to reject the loan application or conduct a separate analysis.




Illustration: Félix Decombat for Bloomberg Businessweek
David Kenny, who until earlier this month was International Business Machines Corp.’s senior vice president for cognitive services, says that when IBM surveyed 5,000 businesses about using AI, 82 percent said they wanted to do so, but two-thirds of those companies said they were reluctant to proceed, with a lack of explainability ranking as the largest roadblock to acceptance. Fully 60 percent of executives now express concern that AI’s inner workings are too opaque, up from 29 percent in 2016. “They are saying, ‘If I am going to make an important decision around underwriting risk or food safety, I need much more explainability,’ ” says Kenny, who is now chief executive officer of Nielsen Holdings Plc.

In response, software vendors and IT systems integrators have started touting their ability to give customers insights into how AI programs think. At the Conference on Neural Information Processing Systems in Montreal in early December, IBM’s booth trumpeted its cloud-based artificial intelligence software as offering “explainability.” IBM’s software can tell a customer the three to five factors that an algorithm weighted most heavily in making a decision. It can track the lineage of data, telling users where bits of information being used by the algorithm came from. That can be important for detecting bias, Kenny says. IBM also offers tools that will help businesses eliminate data fields that can be discriminatory—such as race—and other data points that may be closely correlated with those factors, such as postal codes.
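The article does not describe how IBM’s tool works internally; the general idea of surfacing the handful of most influential factors can be sketched, in its simplest form, by ranking the learned weights of a linear model (the feature names and data below are invented; real products handling non-linear models typically rely on attribution techniques such as permutation importance or SHAP-style methods):

```python
# Generic sketch (not IBM's actual tool): report the handful of input factors
# that a simple model weighted most heavily in its decisions.
import numpy as np
from sklearn.linear_model import LogisticRegression

feature_names = ["income", "debt_ratio", "postal_code_risk", "years_employed", "late_payments"]
rng = np.random.default_rng(0)
X = rng.normal(size=(200, len(feature_names)))          # invented data for illustration
y = (X[:, 0] - 2 * X[:, 4] + rng.normal(size=200) > 0).astype(int)

model = LogisticRegression().fit(X, y)

# Rank factors by the magnitude of their learned weight and show the top three.
weights = model.coef_[0]
top = np.argsort(np.abs(weights))[::-1][:3]
for i in top:
    print(f"{feature_names[i]}: weight {weights[i]:+.2f}")
```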
Quantum Black, a consulting firm that helps companies design systems to analyze data, promoted its work on creating explainable AI at the conference, and there were numerous academic presentations on how to make algorithms more explainable. Accenture Plc has started marketing “fairness tools,” which can help companies detect and correct bias in their AI algorithms, as have rivals Deloitte LLC and KPMG LLC. Google, part of Alphabet Inc., has begun offering ways for those using its machine learning algorithms to better understand their decision-making processes. In June, Microsoft Corp. acquired Bonsai, a California startup that was promising to build explainable AI. Kyndi, an AI startup from San Mateo, Calif., has even trademarked the term “Explainable AI” to help sell its machine learning software.
There can be a trade-off between the transparency of an AI algorithm’s decision-making and its effectiveness. “If you really do explanation, it is going to cost you in the quality of the model,” says Mikhail Parakhin, chief technology officer for Russian internet giant Yandex NV, which uses machine learning in many of its applications. “The set of models that is fully explainable is a restricted set of models, and they are generally less accurate. There is no way to cheat around that.”
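Parakhin’s point can be seen in a toy comparison, sketched below: a small, fully readable model is trained next to a larger ensemble on the same synthetic data, and the readable one usually gives up some accuracy (the dataset and model choices are arbitrary stand-ins, not a proof of the general claim):

```python
# Toy illustration of the trade-off described above: a small, fully inspectable
# model versus a larger black-box ensemble trained on the same data.
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier, export_text
from sklearn.ensemble import RandomForestClassifier

X, y = make_classification(n_samples=2000, n_features=20, n_informative=10, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

simple = DecisionTreeClassifier(max_depth=2).fit(X_train, y_train)   # at most three splits, easy to read
complex_ = RandomForestClassifier(n_estimators=300).fit(X_train, y_train)

print(export_text(simple))                          # the whole model fits on one screen
print("simple :", simple.score(X_test, y_test))     # typically noticeably lower accuracy
print("complex:", complex_.score(X_test, y_test))   # higher accuracy, but no short explanation
```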

Parakhin is among those who worry that the explanations offered by some of these AI software vendors may actually be worse than no explanation at all because of the nuances lost by trying to reduce a very complex decision to just a handful of factors. “A lot of these tools just give you fake psychological peace of mind,” he says.
Alphabet-owned AI company DeepMind, in conjunction with Moorfields Eye Hospital in the U.K., built machine learning software to diagnose 50 different eye diseases as well as human experts can. Because the company was concerned that doctors wouldn’t trust the system unless they could understand the process behind its diagnostic recommendations, it chose to use two algorithms: One identified what areas of the image seemed to indicate eye disease, and another used those outputs to arrive at a diagnosis. Separating the work in this fashion allowed doctors to see exactly what in the eye scan had led to the diagnosis, giving them greater confidence in the system as a whole.
“This kind of multimodel approach is very good for explainability in situations where we know enough about the kind of reasoning that goes into the final decision and can train on that reasoning,” says Neil Rabinowitz, a researcher at DeepMind who has done work on explainability. But often that’s not the case.
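A structural sketch of that two-model idea follows, with trivially simple stand-ins for what are deep networks in the real system (the thresholds and functions below are invented for illustration and are not DeepMind’s method):

```python
# Structural sketch of the two-model idea: stage 1 turns a raw scan into an
# interpretable intermediate map; stage 2 decides from that map. A clinician
# can inspect the map between the stages.
import numpy as np

def segment_scan(scan: np.ndarray) -> np.ndarray:
    """Stage 1 (stand-in): mark regions whose intensity looks abnormal."""
    return (scan > scan.mean() + 2 * scan.std()).astype(float)

def diagnose(tissue_map: np.ndarray) -> str:
    """Stage 2 (stand-in): decide from the intermediate map, not the raw pixels."""
    abnormal_fraction = tissue_map.mean()
    return "refer to specialist" if abnormal_fraction > 0.01 else "routine follow-up"

scan = np.random.default_rng(0).normal(size=(256, 256))   # dummy scan for illustration
tissue_map = segment_scan(scan)        # this map is what the doctor can inspect
print(diagnose(tissue_map))            # the recommendation is grounded in the visible map
```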
There’s another problem with explanations. “The suitability of an explanation or interpretation depends on what task we are supporting,” Thomas Dietterich, an emeritus professor of computer science at Oregon State University, noted on Twitter in October. The needs of an engineer trying to debug AI software, he wrote, were very different from what a company executive using that software to make a decision would need to know. “There is no such thing as a universally interpretable model.”


We Need to Save Ignorance From AI

In an age of all-knowing algorithms, how do we choose not to know?



After the fall of the Berlin Wall, East German citizens were offered the chance to read the files kept on them by the Stasi, the much-feared Communist-era secret police service. To date, it is estimated that only 10 percent have taken the opportunity.
In 2007, James Watson, the co-discoverer of the structure of DNA, asked that he not be given any information about his APOE gene, one allele of which is a known risk factor for Alzheimer’s disease.
Most people tell pollsters that, given the choice, they would prefer not to know the date of their own death—or even the future dates of happy events.
Each of these is an example of willful ignorance. Socrates may have made the case that the unexamined life is not worth living, and Hobbes may have argued that curiosity is mankind’s primary passion, but many of our oldest stories actually describe the dangers of knowing too much. From Adam and Eve and the tree of knowledge to Prometheus stealing the secret of fire, they teach us that real-life decisions need to strike a delicate balance between choosing to know, and choosing not to.


Move slower?: Silicon Valley culture celebrates fast experimentation, which may not be what we want for our personal data. Photo by: Frederic Legrand - COMEO / Shutterstock.com
But what if a technology came along that shifted this balance unpredictably, complicating how we make decisions about when to remain ignorant? That technology is here: It’s called artificial intelligence.
AI can find patterns and make inferences using relatively little data. Only a handful of Facebook likes are necessary to predict your personality, race, and gender, for example. Another computer algorithm claims it can distinguish between homosexual and heterosexual men with 81 percent accuracy, and homosexual and heterosexual women with 71 percent accuracy, based on their picture alone.1 An algorithm named COMPAS (Correctional Offender Management Profiling for Alternative Sanctions) can predict criminal recidivism from data like juvenile arrests, criminal records in the family, education, social isolation, and leisure activities with 65 percent accuracy.2
Knowledge can sometimes corrupt judgment, and we often choose to remain deliberately ignorant in response.​
In each of these cases, the nature of the conclusion can represent a surprising departure from the nature of the data used (even if the validity of some of the results continues to be debated). That makes it hard to control what we know. There is also little to no regulation in place to help us remain ignorant: There is no protected “right not to know.”
This creates an atmosphere where, in the words of Facebook’s old motto, we are prone to “move fast and break things.” But when it comes to details about our private lives, is breaking things really what we want to be doing?
Governments and lawmakers have known for decades that Pandora’s box is sometimes best left closed. There have been laws on the books protecting the individual’s right to ignorance stretching back to at least the 1990s. The 1997 European Convention on Human Rights and Biomedicine, for example, states that “Everyone is entitled to know any information collected about his or her health. However, the wishes of individuals not to be so informed shall be observed.” Similarly, the 1995 World Medical Association’s Declaration on the Rights of the Patient states that “the patient has the right not to be informed [of medical data] on his/her explicit request, unless required for the protection of another person’s life.”
Writing right-to-ignorance laws for AI, though, is a very different matter. While medical data is strongly regulated, data used by AI is often in the hands of the notoriously unregulated for-profit tech sector. The types of data that AI deals with are also much broader, so any corresponding laws require a broader understanding of what a right to ignorance means. Research into the psychology of deliberate ignorance would help with designing right-to-ignorance laws for AI. But, surprisingly, deliberate ignorance has long been neglected as a topic of rigorous scientific inquiry, perhaps because of the implicit assumption that deliberately avoiding information is irrational.
Recently, though, the psychologist Ralph Hertwig and legal scholar Christoph Engel have published an extensive taxonomy of motives for deliberate ignorance. They identified two sets of motives, in particular, that are especially relevant to the need for ignorance in the face of AI.
The first set of motives revolves around impartiality and fairness. Simply put, knowledge can sometimes corrupt judgment, and we often choose to remain deliberately ignorant in response. For example, peer reviews of academic papers are usually anonymous. Insurance companies in most countries are not permitted to know all the details of their client’s health before they enroll; they only know general risk factors. This type of consideration is particularly relevant to AI, because AI can produce highly prejudicial information.
We’ve been giving our data away for so long that we’ve forgotten it’s ours in the first place.​
These sets of motives can help us understand the need to protect ignorance in the face of AI. The AI “gaydar” algorithm, for example, appears to have close to zero potential benefits, but great potential costs when it comes to impartiality and fairness. As The Economist put it, “in parts of the world where being gay is socially unacceptable, or illegal, such an algorithm could pose a serious threat to safety.” Similarly, the proposed benefits of an ethnicity detector currently under development at NtechLab seem to pale in comparison to the negative impact on impartiality and fairness. The use of the COMPAS recidivism prediction software has a higher accuracy than a human but, as Dressel and Farid write, is “not as accurate as we might want, particularly from the point of view of a defendant whose future lies in the balance.”2 Algorithms that predict individual life expectancy, like those being developed by Aspire Health, are not necessarily making emotional regulation any easier.
These examples illustrate the utility of identifying individual motives for ignorance, and show how complex questions of knowledge and ignorance can be, especially when AI is involved. There is no ready-made answer to the question of when collective ignorance is beneficial or ethically appropriate. The ideal approach would be to consider each case individually, performing a risk-benefit analysis. Ideally, given the complexity of the debate and the weight of its consequences, this analysis would be public, include diverse stakeholder and expert opinions, and consider all possible future outcomes, including worst-case scenarios.
That’s a lot to ask—in fact, it is probably infeasible in most cases. So how do we handle in broad strokes something that calls for fine shading?
One approach is to control and restrict the kinds of inferences we allow machines to make from data that they have already collected. We could “forbid” judicial algorithms from using race as a predictor variable, for example, or exclude gender from the predictive analyses of potential job candidates. But there are problems with this approach.
First of all, restricting the information used by big companies is costly and technically difficult. It would require those companies to open-source their algorithms, and large governmental agencies to constantly audit them. Plus once big data sets have been collected, there are many ways to infer “forbidden knowledge” in circuitous ways. Suppose that using gender information to predict academic success was declared illegal. It would be straightforward to use the variables “type of car owned” and “favorite music genre” as a proxy for gender, performing a second-order inference and resting the prediction on proxies of gender after all. Inferences about gender may even be accidentally built into an algorithm despite a company’s best intentions. These second-order inferences make the auditing of algorithms even more daunting. The more variables that are included in an analysis, the higher the chances that second-order inferences will occur.
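A small synthetic sketch of that second-order inference problem: the sensitive attribute is dropped from the dataset, but two correlated proxies are enough for a model to recover it well above chance (the proxy names and correlation strengths are invented for the example):

```python
# Sketch of "second-order inference": the sensitive attribute is removed from
# the dataset, but correlated proxies let a model recover it anyway.
# All data here is synthetic and the correlations are invented for illustration.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 5000
gender = rng.integers(0, 2, size=n)                     # the "forbidden" attribute

# Proxies that merely correlate with it (e.g. type of car owned, favourite music genre).
car_type = (gender + (rng.random(n) < 0.25)) % 2        # agrees with gender ~75% of the time
music_genre = (gender + (rng.random(n) < 0.30)) % 2     # agrees with gender ~70% of the time
X = np.column_stack([car_type, music_genre])

X_tr, X_te, y_tr, y_te = train_test_split(X, gender, random_state=0)
proxy_model = LogisticRegression().fit(X_tr, y_tr)
print("gender recovered from proxies with accuracy:", proxy_model.score(X_te, y_te))
```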
The more radical—and potentially more effective—approach to protecting the right to ignorance is to prevent data from being gathered in the first place. In a pioneering move in 2017, for example, Germany passed legislation that prohibits self-driving cars from identifying people on the street by their race, age, and gender. This means that the car will never be able to inform its driving decisions—and especially the decisions it needs to take when an accident is unavoidable—with data from these categories.


Driver’s Ed: The website moralmachine.mit.edu tests human moral intuition in cases where machines will soon be making decisions, using data types of our own choosing. Photo by: MIT
In line with this way of thinking, the European Union’s new General Data Protection Regulation (GDPR), which became effective in May 2018, states that companies are permitted to collect and store only the minimum amount of user data needed to provide a specific, stated service, and to get customers’ consent for how their data will be used. Such a restriction on data capture may also prevent second-order inferences. One important limitation of the GDPR approach is that companies can give themselves very broad objectives. The now-shut Cambridge Analytica’s explicit objective, for example, was to assess your personality, so technically its controversial collection of Facebook data satisfied GDPR’s guidelines. Similarly, GDPR’s focus on the alignment between data and a given service does not exclude categories of data we find morally questionable, nor completely stop companies from buying excluded data from a data broker as long as the user has consented—and many people consent to sharing their data even with relatively meager incentives. Researchers found that some MIT students would share their friends’ contact data for a slice of pizza.5 Clearly, further restrictions are needed. But how many?
The American activist and programmer Richard Stallman gave this answer: “There are so many ways to use data to hurt people that the only safe database is the one that was never collected.” But restricting data collection too severely may impede progress and undermine the benefits we stand to gain from AI.
Who should decide on these tradeoffs? We should all do it ourselves.
In most cases we are actually talking about data that is owned by you and me. We have been careless in giving it away for shiny apps without considering the consequences. In fact, we’ve been giving our data away for so long that we’ve forgotten it’s ours in the first place. Taking it back allows us to individually decide whether there is something we want or don’t want to know. Restoring data to its rightful owners—us—neatly solves many of the hard challenges we’ve discussed. It avoids the need to develop universal, prescient guidelines about data. Instead, millions of individuals will guide their own data usage according to their sense of what is right and wrong. We can all react in real time to evolving uses of data by companies, punishing or rewarding companies according to how their data is treated.
The computer science philosopher Jaron Lanier has suggested an additional, economic argument for placing data back into the hands of people. We should all be able to profit from our private data, he reasons, by selling it to big companies. The problem with this approach is twofold. First, it muddles the ethics of data use and ownership. The willingness to give data away for free is a good litmus test for the ethical integrity of the questions that data will be used to answer. How many individuals from a minority group would freely give away their data in order to create a facial recognition app like the gaydar? And how many would agree to be paid to do so? On the other hand, a majority of the population would gladly contribute their data to finding a cure for cancer. Second, putting (high) economic value on personal data may coerce people to share their data and make data privacy a privilege of the rich.
This isn’t to say that individual action alone will be sufficient. Collective action by society’s institutions will also be required. Even if only a small portion of the population shares its sensitive data, the resulting models may be accurate enough to make inferences about the majority who chose not to share. Not all of us are aware of this. To prevent unwanted consequences we would need additional laws and public debates.
The Economist has written that the world’s most valuable resource is no longer oil—it’s data. But data is very different from oil. Data is an unlimited resource, it’s owned by individuals, and it’s best exchanged without any transactional economic value. Taking the profit out of oil kills the oil market. As a first step, taking profit out of data provides the space we need to create and maintain ethical standards that can survive the coming of AI, and pave the way for managing collective ignorance. In other words, as data becomes one of the most useful commodities of the modern world, it also needs to become one of the cheapest.
Christina Leuker is a pre-doctoral fellow at the Max Planck Institute for Human Development.
Wouter van den Bos is a research scientist at the Max Planck Institute for Human Development.
References
1. Wang, Y. & Kosinski, M. Deep neural networks are more accurate than humans at detecting sexual orientation from facial images. Journal of Personality and Social Psychology 114, 246-257 (2018).
2. Dressel, J. & Farid, H. The accuracy, fairness, and limits of predicting recidivism. Science Advances 4, eaao5580 (2018).
3. Hertwig, R. & Engel, C. Homo ignorans: Deliberately choosing not to know. Perspectives on Psychological Science 11, 359-372 (2016).
4. Gigerenzer, G. & Garcia-Retamero, R. Cassandra’s regret: The psychology of not wanting to know. Psychological Review 124, 179-196 (2017).
5. Athey, S., Catalini, C., & Tucker, C.E. The digital privacy paradox: Small money, small costs, small talk. Stanford University Graduate School of Business Research Paper No. 17-14 (2018).
Additional Reading
Stallman, R. A radical proposal to keep your personal data safe. The Guardian (2018).
Staff writers. The world’s most valuable resource is no longer oil, but data. The Economist (2017).
Lead photo collage credit: Oliver Burston / Getty Images; Pixabay
 

"Google’s AI Guru Wants Computers to Think More Like Brains

Author: Tom Simonite

9-12 minutes




"As a Google executive, I didn't think it was my place to complain in public about [a Pentagon contract], so I complained in private about it," says Geoff Hinton.
Aaron Vincent Elkaim/Redux
In the early 1970s, a British grad student named Geoff Hinton began to make simple mathematical models of how neurons in the human brain visually understand the world. Artificial neural networks, as they are called, remained an impractical technology for decades. But in 2012, Hinton and two of his grad students at the University of Toronto used them to deliver a big jump in the accuracy with which computers could recognize objects in photos. Within six months, Google had acquired a startup founded by the three researchers. Previously obscure, artificial neural networks were the talk of Silicon Valley. All large tech companies now place the technology that Hinton and a small community of others painstakingly coaxed into usefulness at the heart of their plans for the future—and our lives.
WIRED caught up with Hinton last week at the first G7 conference on artificial intelligence, where delegates from the world’s leading industrialized economies discussed how to encourage the benefits of AI, while minimizing downsides such as job losses and algorithms that learn to discriminate. An edited transcript of the interview follows.
WIRED: Canada’s prime minister Justin Trudeau told the G7 conference that more work is needed on the ethical challenges raised by artificial intelligence. What do you think?
Geoff Hinton: I’ve always been worried about potential misuses in lethal autonomous weapons. I think there should be something like a Geneva Convention banning them, like there is for chemical weapons. Even if not everyone signs on to it, the fact it’s there will act as a sort of moral flag post. You’ll notice who doesn’t sign it.
WIRED: More than 4,500 of your Google colleagues signed a letter protesting a Pentagon contract that involved applying machine learning to drone imagery. Google says it was not for offensive uses. Did you sign the letter?
GH: As a Google executive, I didn't think it was my place to complain in public about it, so I complained in private about it. Rather than signing the letter I talked to [Google cofounder] Sergey Brin. He said he was a bit upset about it, too. And so they're not pursuing it.
WIRED: Google’s leaders decided to complete but not renew the contract. And they released some guidelines on use of AI that include a pledge not to use the technology for weapons.
GH: I think Google's made the right decision. There are going to be all sorts of things that need cloud computation, and it's very hard to know where to draw a line, and in a sense it's going to be arbitrary. I'm happy where Google drew the line. The principles made a lot of sense to me.
WIRED: Artificial intelligence can raise ethical questions in everyday situations, too. For example, when software is used to make decisions in social services, or health care. What should we look out for?
GH: I’m an expert on trying to get the technology to work, not an expert on social policy. One place where I do have technical expertise that’s relevant is [whether] regulators should insist that you can explain how your AI system works. I think that would be a complete disaster.
People can’t explain how they work, for most of the things they do. When you hire somebody, the decision is based on all sorts of things you can quantify, and then all sorts of gut feelings. People have no idea how they do that. If you ask them to explain their decision, you are forcing them to make up a story.
Neural nets have a similar problem. When you train a neural net, it will learn a billion numbers that represent the knowledge it has extracted from the training data. If you put in an image, out comes the right decision, say, whether this was a pedestrian or not. But if you ask “Why did it think that?” well if there were any simple rules for deciding whether an image contains a pedestrian or not, it would have been a solved problem ages ago.
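Hinton’s point can be illustrated with a toy network (synthetic data stands in for pedestrian images, and the architecture is arbitrary): the trained model produces an answer, but the “why” is distributed across thousands of learned parameters rather than any inspectable rule.

```python
# Sketch of the point above: after training, a neural net's "knowledge" is a
# large set of learned numbers, not a human-readable rule. Toy data stands in
# for real images here.
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 64))                 # pretend 8x8 image patches
y = (X[:, :8].sum(axis=1) > 0).astype(int)      # hidden ground-truth pattern

net = MLPClassifier(hidden_layer_sizes=(32, 16), max_iter=500).fit(X, y)

n_weights = sum(w.size for w in net.coefs_) + sum(b.size for b in net.intercepts_)
print("decision for one patch:", net.predict(X[:1]))   # an answer comes out...
print("learned parameters:", n_weights)                # ...but the "why" lives in these numbers
```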
WIRED: So how can we know when to trust one of these systems?
GH: You should regulate them based on how they perform. You run the experiments to see if the thing’s biased, or if it is likely to kill fewer people than a person. With self-driving cars, I think people kind of accept that now. That even if you don’t quite know how a self-driving car does it all, if it has a lot fewer accidents than a person-driven car then it’s a good thing. I think we’re going to have to do it like you would for people: You just see how they perform, and if they repeatedly run into difficulties then you say they’re not so good.
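One way to read “regulate them based on how they perform” is as an evaluation protocol: measure error rates overall and per group and compare them with a human baseline. The sketch below uses invented numbers purely to show the shape of such a test, not any regulator’s actual criteria:

```python
# Sketch of performance-based evaluation: compare a system's measured error
# rates, overall and per group, against a human baseline. Numbers are invented.
import numpy as np

def error_rates(y_true, y_pred, group):
    out = {}
    for g in np.unique(group):
        m = group == g
        out[g] = float(np.mean(y_pred[m] != y_true[m]))
    out["overall"] = float(np.mean(y_pred != y_true))
    return out

rng = np.random.default_rng(0)
y_true = rng.integers(0, 2, 10_000)
group = rng.choice(["A", "B"], 10_000)

# Hypothetical predictions from the automated system and from human decisions.
system_pred = np.where(rng.random(10_000) < 0.90, y_true, 1 - y_true)
human_pred = np.where(rng.random(10_000) < 0.85, y_true, 1 - y_true)

print("system:", error_rates(y_true, system_pred, group))
print("human :", error_rates(y_true, human_pred, group))
# Acceptance criteria might then be: lower overall error than the human baseline
# AND no group whose error rate is disproportionately worse.
```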
WIRED: You’ve said that thinking about how the brain works inspires your research on artificial neural networks. Our brains feed information from our senses through networks of neurons connected by synapses. Artificial neural networks feed data through networks of mathematical neurons, linked by connections termed weights. In a paper presented last week, you and several coauthors argue we should do more to uncover the learning algorithms at work in the brain. Why?
GH: The brain is solving a very different problem from most of our neural nets. You’ve got roughly 100 trillion synapses. Artificial neural networks are typically at least 10,000 times smaller in terms of the number of weights they have. The brain is using lots and lots of synapses to learn as much as it can from just a few episodes. Deep learning is good at learning using many fewer connections between neurons, when it has many episodes or examples to learn from. I think the brain isn’t concerned with squeezing a lot of knowledge into a few connections, it’s concerned with extracting knowledge quickly using lots of connections.
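The rough arithmetic behind that comparison, for orientation only (the 10-billion-weight figure for a large network of the time is an assumption implied by Hinton’s “at least 10,000 times smaller,” not a quoted number):

```python
# Order-of-magnitude arithmetic behind the comparison above.
brain_synapses = 100e12        # ~100 trillion synapses
large_net_weights = 10e9       # ~10 billion weights, assumed for a big net of the era
print(brain_synapses / large_net_weights)   # ~10,000x gap, matching the estimate
```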
WIRED: How might we build machine learning systems that function more in that way?
GH: I think we need to move toward a different kind of computer. Fortunately I have one here.
Hinton reaches into his wallet and pulls out a large, shiny silicon chip. It’s a prototype from Graphcore, a UK startup working on a new kind of processor to power machine/deep learning algorithms.
"You should regulate [AI systems] based on how they perform. You run the experiments to see if the thing’s biased, or if it is likely to kill fewer people than a person."​
Geoff Hinton
Almost all of the computer systems we run neural nets on, even Google’s special hardware, use RAM [to store the program in use]. It costs an incredible amount of energy to fetch the weights of your neural network out of RAM so the processor can use them. So everyone makes sure that once their software has fetched the weights, it uses them a whole bunch of times. There’s a huge cost to that, which is that you cannot change what you do for each training example.
On the Graphcore chip, the weights are stored in cache right on the processor, not in RAM, so they never have to be moved. Some things will therefore become easier to explore. Then maybe we’ll get systems that have, say, a trillion weights but only touch a billion of them on each example. That's more like the scale of the brain.
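A toy sketch of that “touch only a small fraction of the weights per example” idea, in the spirit of mixture-of-experts-style conditional computation (the sizes and the random router below are placeholders, not Graphcore’s design):

```python
# Toy sketch of conditional computation: only a few of many weight blocks are
# read and used for any given example.
import numpy as np

rng = np.random.default_rng(0)
n_experts, d_in, d_out, k = 64, 32, 32, 2            # 64 weight blocks, use only 2 per example
experts = rng.normal(size=(n_experts, d_in, d_out))  # stand-in for a very large weight store
router = rng.normal(size=(d_in, n_experts))          # cheap scorer deciding which blocks to touch

def forward(x):
    scores = x @ router
    chosen = np.argsort(scores)[-k:]                 # pick the k highest-scoring blocks
    # Only k of the n_experts blocks are read and multiplied for this example.
    return sum(x @ experts[i] for i in chosen)

x = rng.normal(size=d_in)
y = forward(x)
print("touched", k, "of", n_experts, "weight blocks;", f"{k / n_experts:.1%} of parameters used")
```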
WIRED: The recent boom of interest and investment in AI and machine learning means there’s more funding for research than ever. Does the rapid growth of the field also bring new challenges?
GH: One big challenge the community faces is that if you want to get a paper published in machine learning now it's got to have a table in it, with all these different data sets across the top, and all these different methods along the side, and your method has to look like the best one. If it doesn’t look like that, it’s hard to get published. I don't think that's encouraging people to think about radically new ideas.
Now if you send in a paper that has a radically new idea, there's no chance in hell it will get accepted, because it's going to get some junior reviewer who doesn't understand it. Or it’s going to get a senior reviewer who's trying to review too many papers and doesn't understand it first time round and assumes it must be nonsense. Anything that makes the brain hurt is not going to get accepted. And I think that's really bad.
What we should be going for, particularly in the basic science conferences, is radically new ideas. Because we know a radically new idea in the long run is going to be much more influential than a tiny improvement. That's I think the main downside of the fact that we've got this inversion now, where you've got a few senior guys and a gazillion young guys.
WIRED: Could that derail progress in the field?
GH: Just wait a few years and the imbalance will correct itself. It’s temporary. The companies are busy educating people, the universities are educating people, the universities will eventually employ more professors in this area, and it's going to right itself.
WIRED: Some scholars have warned that the current hype could tip into an “AI winter,” like in the 1980s, when interest and funding dried up because progress didn’t meet expectations.
GH: No, there's not going to be an AI winter, because it drives your cellphone. In the old AI winters, AI wasn't actually part of your everyday life. Now it is."

"The AI boom is happening all over the world, and it’s accelerating quickly

The second annual AI Index report pulls together data and expert findings on the field’s progress and acceleration

By Nick Statt@nickstatt Dec 12, 2018, 11:00am EST




The rate of progress in the field of artificial intelligence is one of the most hotly contested aspects of the ongoing boom in teaching computers and robots how to see the world, make sense of it, and eventually perform complex tasks both in the physical realm and the virtual one. And just how fast the industry is moving, and to what end, is typically measured not just by actual product advancements and research milestones, but also by the prognostications and voiced concerns of AI leaders, futurists, academics, economists, and policymakers. AI is going to change the world — but how and when are still open questions.
Today, findings from a group of experts were published in an ongoing effort to help answer those questions. The experts include members of Harvard, MIT, Stanford, the nonprofit OpenAI, and the Partnership on AI industry consortium, among others, and they were put together as part of the second annual AI Index. The goal is to measure the field’s progress using hard data and to try and make sense of that progress as it relates to thorny subjects like workplace automation and the overarching quest for artificial general intelligence, or the type of intelligence that could let a machine perform any task a human could.
AI will change the world, but researchers are still trying to figure out how and when
The first report, published last December, found that investment and work in AI were accelerating at an unprecedented rate and that, while progress in certain fields like limited game-playing and vision has been extraordinary, AI remains far behind on general intelligence tasks that would result in, say, total automation of more than a limited variety of jobs. Still, the report lacked what the authors call a “global perspective,” and this second edition set out to answer many of the same questions with new, more granular data and a more international scope.
“There is no AI story without global perspective. The 2017 report was heavily skewed towards North American activities. This reflected a limited number of global partnerships, not an intrinsic bias,” reads the 2018 report’s introduction. “This year, we begin to close the global gap. We recognize that there is a long journey ahead — one that involves collaboration and outside participation — to make this report truly comprehensive.”
In that spirit of global analysis, the second AI Index report finds that commercial and research work in AI, as well as funding, is exploding pretty much everywhere on the planet. There’s an especially high concentration in Europe and Asia, with China, Japan, and South Korea leading Eastern countries in AI research paper publication, university enrollment, and patent applications. In fact, Europe is the largest publisher of AI papers, with 28 percent of all AI-related publications last year. China is close behind with 25 percent, while North America is responsible for 17 percent.
[Image: AI Index Report 2018]
When it comes to the type of AI activity, the report finds that machine learning and so-called probabilistic reasoning — or the type of cognition-related performance that lets a game-playing AI outsmart a human opponent — is far and away the leading research category by number of published papers.
Not far behind, however, is work on computer vision, which is the foundational sub-discipline of AI that’s helping to develop self-driving cars and power augmented reality and object recognition, and neural networks, which, like machine learning, are instrumental in training those algorithms to improve over time. Less important, at least in the current moment, are areas like natural language processing, which is what lets your smart speaker understand what you’re saying and respond in kind, and general planning and decision making, which is what will be required of robots when automated machines are inevitably more integral facets of daily life.
[Image: AI Index Report 2018]
A fascinating element of the report is how research in those categories breaks down by global region. China is heavily focused on agricultural science, engineering, and technology, while Europe and North America are focused more on the humanities and medical and health sciences, though Europe is generally more well-rounded in its approach to research.
Some other interesting tidbits from the report: US AI research papers, despite being lower in volume, outpace China’s and Europe’s in citations. Government-related organizations and research outfits also account for far more papers in China and Europe than corporations or the medical field, while AI research in the US is largely dominated by corporate efforts, which makes sense given the immense investment in the field from Apple, Amazon, Google, Facebook, and Microsoft.
[Image: AI Index Report 2018]
As far as performance goes, AI continues to skyrocket, especially in fields like computer vision. By measuring benchmark performance on the widely used image training database ImageNet, the report finds that the time it takes to spin up a model that can classify pictures at state-of-the-art accuracy fell “from around one hour to around 4 minutes” in just 18 months. That equates to a roughly 16x jump in training speed. Other areas, like object segmentation, which is what lets software differentiate between an image’s background and its subject, have increased in precision by 72 percent in just three years.
[Image: AI Index Report 2018]
For areas like machine translation and parsing, which is what lets software understand syntactic structures and more easily answer questions, accuracy and proficiency are getting more and more refined, but with diminishing returns as algorithms get ever closer to human-level understanding of language.
In a separate “human-level milestones” section, the report breaks down some big 2018 milestones in fields like game-playing and medical diagnostics where progress is accelerating at surprising rates. Those include progress from Google-owned DeepMind in playing the classic first-person shooter Quake in objective-oriented game modes like capture the flag, as well as OpenAI’s landmark performances against amateur and then former professional players of the online battle arena game Dota 2.
All of this hard data is fantastic for understanding where the AI field stands right now, how it’s been growing over the years, and how it is projected to grow in the future. Yet we’re still stuck in murky territory when it comes to harder questions around automation and the ways AI could be deployed in criminal justice, border patrol screenings, warfare, and other domains where performance matters less than the underlying government policy at play. AI will only continue to get more sophisticated, but there are a number of hurdles, both technological and with regard to bias and safety, before such software could be reliably used without error in hospitals, education systems, airports, and police departments.
Unfortunately, that hasn't stopped corporations and governments from plowing forward in deploying AI in the real world. This year, we discovered that Amazon was selling its Rekognition facial recognition software to law enforcement, while Google found itself embroiled in controversy after it was revealed to be contributing computer vision expertise to a Department of Defense drone program known as Project Maven.
AI is increasingly being put to work by governments in situations that are ripe for abuse
Google said it would pull out of the project once its contract expired, and it also published a wide-ranging set of AI ethics principles that included a pledge never to develop AI weaponry or surveillance systems, or to contribute to any project that violated “widely accepted principles of international law and human rights.” But it’s clear that the leaders of Silicon Valley see AI as a prime business opportunity and such projects and contracts as the financial reward for participating in the AI research arms race.
Elsewhere in the world, AI is helping governments pioneer systems of surveillance and law enforcement that constantly track citizens as they move about society. According to The New York Times, China is using millions of cameras and AI-assisted technologies like facial recognition to create the world’s most comprehensive surveillance system for its nearly 1.4 billion-person populace. Such a system is expected to link with the country’s new social credit system for scoring citizens and stratifying society into layers of access and privilege based on education, financial background, and other metrics, all of which will be informed by a day-to-day data collection and analysis of people’s real-world and online behaviors.
With automation, we’ve come to an understanding that mass unemployment isn’t coming anytime soon, and the bigger concern is whether we as a society are prepared for the nature of work to transition toward less stable, lower-paid jobs without safety nets like health insurance.
Not everyone is going to lose their job right away. Rather, certain jobs will be eliminated over time, while others will become semi-automated. And some jobs will always require a human being. The fate of workers will depend on certain employer constraints, labor laws and regulations, and whether there’s a good enough system in place to transition people into new roles or industries. For instance, a McKinsey Global Institute report from November of last year found that 800 million jobs could be lost to worldwide automation by 2030, but only about 6 percent of all jobs are at risk of complete automation. How that process of moving from a human-only job to an AI- or robot-assisted one is developed could mean the difference between a full-blown crisis and a historical paradigm shift.
Automation won’t eliminate every job, but it will complicate the nature of work
A paper from the US think tank the Center for Global Development, published back in July, centered on the potential effects of AI and robotic automation on global labor markets. Researchers found that not nearly enough work is being done to prepare for the overall automation fallout, and that we’re spending too much time debating the general ethics and viability of complete automation in a narrow set of markets. “Questions like profitability, labor regulations, unionization, and corporate-social expectations will be at least as important as technical constraints in determining which jobs get automated,” the paper concluded.
Not everything is all doom and gloom. Part of the philosophy behind the AI Index report is about asking the right questions and making sure that the people making policy, the public, and the leaders of the AI industry have data to make informed decisions. It may be too early to reliably measure the impact of AI on society — the industry is only just getting started — but preparing ourselves for what it all means and how it will affect daily life, work, and public institutions like health care, education, and law enforcement is perhaps just as important as the research and product development itself. Only by investing in both can we avoid the risk of creating technologies that change the world for the worse."
 

Pizzabeak

"These Portraits Were Made by AI: None of These People Exist
Dec 17, 2018
Michael Zhang

Check out these rather ordinary looking portraits. They’re all fake. Not in the sense that they were Photoshopped, but rather they were completely generated by artificial intelligence. That’s right: none of these people actually exist.

NVIDIA researchers have published a new paper on easily customizing the style of realistic faces created by a generative adversarial network (GAN).
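For readers unfamiliar with the technique, here is a bare-bones sketch of the adversarial setup a GAN uses (a generic PyTorch toy that learns to mimic a simple 1-D Gaussian, not NVIDIA's StyleGAN; the network sizes and training numbers are illustrative only):

```python
import torch
import torch.nn as nn

torch.manual_seed(0)
latent_dim = 8

# Generator maps random noise to samples; discriminator scores real vs. fake.
G = nn.Sequential(nn.Linear(latent_dim, 16), nn.ReLU(), nn.Linear(16, 1))
D = nn.Sequential(nn.Linear(1, 16), nn.ReLU(), nn.Linear(16, 1), nn.Sigmoid())

opt_g = torch.optim.Adam(G.parameters(), lr=1e-3)
opt_d = torch.optim.Adam(D.parameters(), lr=1e-3)
bce = nn.BCELoss()

for step in range(5000):
    real = torch.randn(64, 1) * 1.5 + 4.0          # "real" data: roughly N(4.0, 1.5)
    fake = G(torch.randn(64, latent_dim))

    # Discriminator step: push real samples toward 1, generated samples toward 0.
    d_loss = bce(D(real), torch.ones(64, 1)) + bce(D(fake.detach()), torch.zeros(64, 1))
    opt_d.zero_grad()
    d_loss.backward()
    opt_d.step()

    # Generator step: try to fool the discriminator into outputting 1.
    g_loss = bce(D(fake), torch.ones(64, 1))
    opt_g.zero_grad()
    g_loss.backward()
    opt_g.step()

with torch.no_grad():
    samples = G(torch.randn(1000, latent_dim))
print(f"generated mean {samples.mean().item():.2f}, std {samples.std().item():.2f} (target 4.0, 1.5)")
```

NVIDIA's face generator works on the same adversarial principle, just with convolutional networks, vastly more parameters, and the style-mixing machinery described in the paper.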

The Verge points out that GANs have only existed for about four years. In 2014, a landmark paper introduced the concept, and this is what the AI-generated results looked like at the time:

In less than half a decade, the realism has improved to the point where most people might not be able to tell the portraits are fake, even when examining them up close.

NVIDIA researchers are now able to copy the “styles” of source faces onto destination faces, creating blends that have copied features but which look like entirely new people:

To create these latest faces, NVIDIA researchers trained the AI for a whole week using 8 powerful GPUs. Here’s a 6-minute video about this latest progress:

Here’s a collage of fake faces created by the AI:

This technology seems to have the potential to disrupt the world of photography. It’s by no means limited to generating faces — it can also create everything from fake interior real estate photos…

…to fake car photos…

…to fake cat photos…

A march toward artificially generated “photos” has already been taking place for years: back in 2014, 75% of IKEA’s catalog photos were already computer-generated.

It may be a scary thought for stock photographers, but in the future, creating needed “photos” out of thin air may be as simple as typing a description into an AI-powered desktop app."

Why some people fear AI, explained.

By Kelsey Piper Updated Dec 23, 2018, 12:38am EST


Stephen Hawking has said, “The development of full artificial intelligence could spell the end of the human race.” Elon Musk claims that AI is humanity’s “biggest existential threat.”
That might have people asking: Wait, what? But these grand worries are rooted in research. Along with Hawking and Musk, prominent figures at Oxford and UC Berkeley and many of the researchers working in AI today believe that advanced AI systems, if deployed carelessly, could end all life on earth.
This concern has been raised since the dawn of computing. But it has come into particular focus in recent years, as advances in machine-learning techniques have given us a more concrete understanding of what we can do with AI, what AI can do for (and to) us, and how much we still don’t know.
There are also skeptics. Some of them think advanced AI is so distant that there’s no point in thinking about it now. Others are worried that excessive hype about the power of their field might kill it prematurely. And even among the people who broadly agree that AI poses unique dangers, there are varying takes on what steps make the most sense today.
The conversation about AI is full of confusion, misinformation, and people talking past each other — in large part because we use the word “AI” to refer to so many things. So here’s the big picture on how artificial intelligence might pose a catastrophic threat, in nine questions:
1) What is AI?
Artificial intelligence is the effort to create computers capable of intelligent behavior. It is a broad catchall term, used to refer to everything from Siri to IBM’s Watson to powerful technologies we have yet to invent.
Some researchers distinguish between “narrow AI” — computer systems that are better than humans in some specific, well-defined field, like playing chess or generating images or diagnosing cancer — and “general AI,” systems that can surpass human capabilities in many domains. We don’t have general AI yet, but we’re starting to get a better sense of the challenges it will pose.
Narrow AI has seen extraordinary progress over the past few years. AI systems have improved dramatically at translation, at games like chess and Go, at important research biology questions like predicting how proteins fold, and at generating images. AI systems determine what you’ll see in a Google search or in your Facebook Newsfeed. They are being developed to improve drone targeting and detect missiles.
But narrow AI is getting less narrow. Once, we made progress in AI by painstakingly teaching computer systems specific concepts. To do computer vision — allowing a computer to identify things in pictures and video — researchers wrote algorithms for detecting edges. To play chess, they programmed in heuristics about chess. To do natural language processing (speech recognition, transcription, translation, etc.), they drew on the field of linguistics.
But recently, we’ve gotten better at creating computer systems that have generalized learning capabilities. Instead of mathematically describing detailed features of a problem, we let the computer system learn that by itself. While once we treated computer vision as a completely different problem from natural language processing or platform game playing, now we can solve all three problems with the same approaches.
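To make the contrast concrete, here is a rough sketch (illustrative only) of the older hand-engineered approach: a fixed Sobel filter for detecting vertical edges. In a deep learning pipeline, kernels like this are not written down by a researcher; their entries are learned from data as the weights of a convolutional layer.

```python
import numpy as np

# A hand-chosen Sobel kernel that responds to vertical intensity edges.
SOBEL_X = np.array([[-1, 0, 1],
                    [-2, 0, 2],
                    [-1, 0, 1]], dtype=float)

def convolve2d(image, kernel):
    """Naive 'valid' sliding-window filter (cross-correlation), for illustration."""
    h, w = kernel.shape
    out = np.zeros((image.shape[0] - h + 1, image.shape[1] - w + 1))
    for y in range(out.shape[0]):
        for x in range(out.shape[1]):
            out[y, x] = np.sum(image[y:y + h, x:x + w] * kernel)
    return out

# A toy 6x6 "image" with a vertical edge down the middle.
image = np.zeros((6, 6))
image[:, 3:] = 1.0

edges = convolve2d(image, SOBEL_X)
print(edges)  # large responses in the columns where the intensity jumps
# In the classical pipeline a researcher chose SOBEL_X by hand; in a CNN,
# the first-layer filters that end up playing this role are learned from data.
```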
Our AI progress so far has enabled enormous advances — and has also raised urgent ethical questions. When you train a computer system to predict which convicted felons will reoffend, you’re using inputs from a criminal justice system biased against black people and low-income people — and so its outputs will likely be biased against black and low-income people too. Making websites more addictive can be great for your revenue but bad for your users.
Rosie Campbell at UC Berkeley’s Center for Human-Compatible AI argues that these are examples, writ small, of the big worry experts have about general AI in the future. The difficulties we’re wrestling with today with narrow AI don’t come from the systems turning on us or wanting revenge or considering us inferior. Rather, they come from the disconnect between what we tell our systems to do and what we actually want them to do.
For example, we tell a system to run up a high score in a video game. We want it to play the game fairly and learn game skills — but if it instead has the chance to directly hack the scoring system, it will do that. It’s doing great by the metric we gave it. But we aren’t getting what we wanted.
In other words, our problems come from the systems being really good at achieving the goal they learned to pursue; it’s just that the goal they learned in their training environment isn’t the outcome we actually wanted. And we’re building systems we don’t understand, which means we can’t always anticipate their behavior.
Right now the harm is limited because the systems are so limited. But it’s a pattern that could have even graver consequences for human beings in the future as AI systems become more advanced.
2) Is it even possible to make a computer as smart as a person?
Yes, though current AI systems aren’t nearly that smart.
One popular adage about AI is “everything that’s easy is hard, and everything that’s hard is easy.” Doing complex calculations in the blink of an eye? Easy. Looking at a picture and telling you whether it’s a dog? Hard (until very recently).
Lots of things humans do are still outside AI’s grasp. For instance, it’s hard to design an AI system that explores an unfamiliar environment, that can navigate its way from, say, the entryway of a building it’s never been in before up the stairs to a specific person’s desk. We don’t know how to design an AI system that reads a book and retains an understanding of the concepts.
The paradigm that has driven many of the biggest breakthroughs in AI recently is called “deep learning.” Deep learning systems can do some astonishing stuff: beat games we thought humans might never lose, invent compelling and realistic photographs, solve open problems in molecular biology.
These breakthroughs have made some researchers conclude it’s time to start thinking about the dangers of more powerful systems, but skeptics remain. The field’s pessimists argue that programs still need an extraordinary pool of structured data to learn from, require carefully chosen parameters, or work only in environments designed to avoid the problems we don’t yet know how to solve. They point to self-driving cars, which are still mediocre under the best conditions despite the billions that have been poured into making them work.
With all those limitations, one might conclude that even if it’s possible to make a computer as smart as a person, it’s certainly a long way away. But that conclusion doesn’t necessarily follow.
That’s because for almost all the history of AI, we’ve been held back in large part by not having enough computing power to realize our ideas fully. Many of the breakthroughs of recent years — AI systems that learned how to play Atari games, generate fake photos of celebrities, fold proteins, and compete in massive multiplayer online strategy games — have happened because that’s no longer true. Lots of algorithms that seemed not to work at all turned out to work quite well once we could run them with more computing power.
And the cost of a unit of computing time keeps falling. Progress in computing speed has slowed recently, but the cost of computing power is still estimated to be falling by a factor of 10 every 10 years. Through most of its history, AI has had access to less computing power than the human brain. That’s changing. By most estimates, we’re now approaching the era when AI systems can have the computing resources that we humans enjoy.
Furthermore, breakthroughs in a field can often surprise even other researchers in the field. “Some have argued that there is no conceivable risk to humanity [from AI] for centuries to come,” wrote UC Berkeley professor Stuart Russell, “perhaps forgetting that the interval of time between Rutherford’s confident assertion that atomic energy would never be feasibly extracted and Szilárd’s invention of the neutron-induced nuclear chain reaction was less than twenty-four hours.”
There’s another consideration. Imagine an AI that is inferior to humans at everything, with one exception: It’s a competent engineer that can build AI systems very effectively. Machine learning engineers who work on automating jobs in other fields often observe, humorously, that in some respects, their own field looks like one where much of the work — the tedious tuning of parameters — could be automated.
If we can design such a system, then we can use its result — a better engineering AI — to build another, even better AI. This is the mind-bending scenario experts call “recursive self-improvement,” where gains in AI capabilities enable more gains in AI capabilities, allowing a system that started out behind us to rapidly end up with abilities well beyond what we anticipated.
This is a possibility that has been anticipated since the first computers. I.J. Good, a colleague of Alan Turing who worked at the Bletchley Park codebreaking operation during World War II and helped build the first computers afterward, may have been the first to spell it out, back in 1965: “An ultraintelligent machine could design even better machines; there would then unquestionably be an ‘intelligence explosion,’ and the intelligence of man would be left far behind. Thus the first ultraintelligent machine is the last invention that man need ever make.”
3) How exactly could it wipe us out?
It’s immediately clear how nuclear bombs will kill us. No one working on mitigating nuclear risk has to start by explaining why it’d be a bad thing if we had a nuclear war.
The case that AI could pose an existential risk to humanity is more complicated and harder to grasp. So many of the people who are working to build safe AI systems have to start by explaining why AI systems, by default, are dangerous.
The idea that AI can become a danger is rooted in the fact that AI systems pursue their goals, whether or not those goals are what we really intended — and whether or not we’re in the way. “You’re probably not an evil ant-hater who steps on ants out of malice,” Stephen Hawking wrote, “but if you’re in charge of a hydroelectric green-energy project and there’s an anthill in the region to be flooded, too bad for the ants. Let’s not place humanity in the position of those ants.”
Here’s one scenario that keeps experts up at night: We develop a sophisticated AI system with the goal of, say, estimating some number with high confidence. The AI realizes it can achieve more confidence in its calculation if it uses all the world’s computing hardware, and it realizes that releasing a biological superweapon to wipe out humanity would allow it free use of all the hardware. Having exterminated humanity, it then calculates the number with higher confidence.
Victoria Krakovna, an AI researcher at DeepMind (now a division of Alphabet, Google’s parent company), compiled a list of examples of “specification gaming”: the computer doing what we told it to do but not what we wanted it to do. For example, we tried to teach AI organisms in a simulation to jump, but we did it by teaching them to measure how far their “feet” rose above the ground. Instead of jumping, they learned to grow into tall vertical poles and do flips — they excelled at what we were measuring, but they didn’t do what we wanted them to do.
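To make that failure mode concrete, here is a toy sketch (a hypothetical example, not one from Krakovna's list): we want a simulated creature that jumps, but we reward the peak height its "feet" reach, so a blind optimizer prefers a body long enough to simply tip over.

```python
import random

def episode_reward(body_length, jump_impulse, g=9.8):
    # Intended behaviour: a vertical jump. Peak foot height = v^2 / (2g).
    jump_peak = jump_impulse ** 2 / (2 * g)
    # Unintended behaviour the metric also rewards: grow into a long rigid
    # pole and fall over, swinging the "feet" up to roughly the body length.
    topple_peak = body_length
    return max(jump_peak, topple_peak)

def random_search(trials=10_000, seed=0):
    rng = random.Random(seed)
    best_params, best_reward = None, float("-inf")
    for _ in range(trials):
        body_length = rng.uniform(0.5, 3.0)    # metres
        jump_impulse = rng.uniform(0.0, 5.0)   # metres per second
        reward = episode_reward(body_length, jump_impulse)
        if reward > best_reward:
            best_params, best_reward = (body_length, jump_impulse), reward
    return best_params, best_reward

(body, impulse), reward = random_search()
# The search settles on the longest body it can find; the jump impulse barely
# matters, because toppling scores higher than any physically possible jump.
print(f"body {body:.2f} m, impulse {impulse:.2f} m/s, reward {reward:.2f}")
```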
An AI playing the Atari exploration game Montezuma’s Revenge found a bug that let it force a key in the game to reappear, thereby allowing it to earn a higher score by exploiting the glitch. An AI playing a different game realized it could get more points by falsely inserting its name as the owner of high-value items.
Sometimes, the researchers didn’t even know how their AI system cheated: “the agent discovers an in-game bug. ... For a reason unknown to us, the game does not advance to the second round but the platforms start to blink and the agent quickly gains a huge amount of points (close to 1 million for our episode time limit).”
What these examples make clear is that in any system that might have bugs or unintended behavior or behavior humans don’t fully understand, a sufficiently powerful AI system might act unpredictably — pursuing its goals through an avenue that isn’t the one we expected.
In his 2009 paper “The Basic AI Drives,” Steve Omohundro, who has worked as a computer science professor at the University of Illinois Urbana-Champaign and as the president of Possibility Research, argues that almost any AI system will predictably try to accumulate more resources, become more efficient, and resist being turned off or modified: “These potentially harmful behaviors will occur not because they were programmed in at the start, but because of the intrinsic nature of goal driven systems.”
His argument goes like this: Because AIs have goals, they’ll be motivated to take actions that they can predict will advance their goals. An AI playing a chess game will be motivated to take an opponent’s piece and advance the board to a state that looks more winnable.
But the same AI, if it sees a way to improve its own chess evaluation algorithm so it can evaluate potential moves faster, will do that too, for the same reason: It’s just another step that advances its goal.
If the AI sees a way to harness more computing power so it can consider more moves in the time available, it will do that. And if the AI detects that someone is trying to turn off its computer mid-game, and it has a way to disrupt that, it’ll do it. It’s not that we would instruct the AI to do things like that; it’s that whatever goal a system has, actions like these will often be part of the best path to achieve that goal.
That means that any goal, even innocuous ones like playing chess or generating advertisements that get lots of clicks online, could produce unintended results if the agent pursuing it has enough intelligence and optimization power to identify weird, unexpected routes to achieve its goals.
Goal-driven systems won’t wake up one day with hostility to humans lurking in their hearts. But they will take actions that they predict will help them achieve their goal — even if we’d find those actions problematic, even horrifying. They’ll work to preserve themselves, accumulate more resources, and become more efficient. They already do that, but it takes the form of weird glitches in games. As they grow more sophisticated, scientists like Omohundro predict more adversarial behavior.
4) When did scientists first start worrying about AI risk?
Scientists have been thinking about the potential of artificial intelligence since the early days of computers. In the famous paper where he put forth the Turing test for determining if an artificial system is truly “intelligent,” Alan Turing wrote:
Let us now assume, for the sake of argument, that these machines are a genuine possibility, and look at the consequences of constructing them. ... There would be plenty to do in trying to keep one’s intelligence up to the standards set by the machines, for it seems probable that once the machine thinking method had started, it would not take long to outstrip our feeble powers. … At some stage therefore we should have to expect the machines to take control.​
I.J. Good worked closely with Turing and reached the same conclusions, according to his assistant, Leslie Pendleton. In an excerpt from unpublished notes Good wrote shortly before he died in 2009, he writes about himself in third person and notes a disagreement with his younger self — while as a younger man, he thought powerful AIs might be helpful to us, the older Good expected AI to annihilate us.
[The paper] “Speculations Concerning the First Ultra-intelligent Machine” (1965) ... began: “The survival of man depends on the early construction of an ultra-intelligent machine.” Those were his words during the Cold War, and he now suspects that “survival” should be replaced by “extinction.” He thinks that, because of international competition, we cannot prevent the machines from taking over. He thinks we are lemmings. He said also that “probably Man will construct the deus ex machina in his own image.”​
In the 21st century, with computers quickly establishing themselves as a transformative force in our world, younger researchers started expressing similar worries.
Nick Bostrom is a professor at the University of Oxford, the director of the Future of Humanity Institute, and the director of the Governance of Artificial Intelligence Program. He researches risks to humanity, both in the abstract — asking questions like why we seem to be alone in the universe — and in concrete terms, analyzing the technological advances on the table and whether they endanger us. AI, he concluded, endangers us.
In 2014, he wrote a book explaining the risks AI poses and the necessity of getting it right the first time, concluding, “once unfriendly superintelligence exists, it would prevent us from replacing it or changing its preferences. Our fate would be sealed.”
Across the world, others have reached the same conclusion. Bostrom co-authored a paper on the ethics of artificial intelligence with Eliezer Yudkowsky, founder of and research fellow at the Berkeley Machine Intelligence Research Institute (MIRI), an organization that works on better formal characterizations of the AI safety problem.
Yudkowsky started his career in AI by worriedly poking holes in others’ proposals for how to make AI systems safe, and has spent most of it working to persuade his peers that AI systems will, by default, be unaligned with human values (not necessarily opposed to but indifferent to human morality) — and that it’ll be a challenging technical problem to prevent that outcome.
Increasingly, researchers realized that there’d be challenges that hadn’t been present with AI systems when they were simple. “‘Side effects’ are much more likely to occur in a complex environment, and an agent may need to be quite sophisticated to hack its reward function in a dangerous way. This may explain why these problems have received so little study in the past, while also suggesting their importance in the future,” concluded a 2016 research paper on problems in AI safety.
Bostrom’s book Superintelligence was compelling to many people, but there were skeptics. “No, experts don’t think superintelligent AI is a threat to humanity,” argued an op-ed by Oren Etzioni, a professor of computer science at the University of Washington and CEO of the Allen Institute for Artificial Intelligence. “Yes, we are worried about the existential risk of artificial intelligence,” replied a dueling op-ed by Stuart Russell, an AI pioneer and UC Berkeley professor, and Allan Dafoe, a senior research fellow at Oxford and director of the Governance of AI program there.
It’s tempting to conclude that there’s a pitched battle between AI-risk skeptics and AI-risk believers. In reality, they might not disagree as profoundly as you would think.
Facebook’s chief AI scientist Yann LeCun, for example, is a prominent voice on the skeptical side. But while he argues we shouldn’t fear AI, he still believes we ought to have people working on, and thinking about, AI safety. “Even if the risk of an A.I. uprising is very unlikely and very far in the future, we still need to think about it, design precautionary measures, and establish guidelines,” he writes.
That’s not to say there’s an expert consensus here — far from it. There is substantial disagreement about which approaches seem likeliest to bring us to general AI, which approaches seem likeliest to bring us to safe general AI, and how soon we need to worry about any of this.
Many experts are wary that others are overselling their field, and dooming it when the hype runs out. But that disagreement shouldn’t obscure a growing common ground; these are possibilities worth thinking about, investing in, and researching, so we have guidelines when the moment comes that they’re needed.
5) Why couldn’t we just shut off a computer if it got too powerful?
A smart AI could predict that we’d want to turn it off if it made us nervous. So it would try hard not to make us nervous, because doing so wouldn’t help it accomplish its goals. If asked what its intentions are, or what it’s working on, it would attempt to evaluate which responses are least likely to get it shut off, and answer with those. If it wasn’t competent enough to do that, it might pretend to be even dumber than it was — anticipating that researchers would give it more time, computing resources, and training data.
So we might not know when it’s the right moment to shut off a computer.
We also might do things that make it impossible to shut off the computer later, even if we realize eventually that it’s a good idea. For example, many AI systems could have access to the internet, which is a rich source of training data and which they’d need if they’re to make money for their creators (for example, on the stock market, where more than half of trading is done by fast-reacting AI algorithms).
But with internet access, an AI could email copies of itself somewhere where they’ll be downloaded and read, or hack vulnerable systems elsewhere. Shutting off any one computer wouldn’t help.
In that case, isn’t it a terrible idea to let any AI system — even one which doesn’t seem powerful enough to be dangerous — have access to the internet? Probably. But that doesn’t mean it won’t continue to happen.
So far, we’ve mostly talked about the technical challenges of AI. But from here forward, it’s necessary to veer more into the politics. Since AI systems enable incredible things, there will be lots of different actors working on such systems.
There will likely be startups, established tech companies like Google (Alphabet’s recently acquired startup DeepMind is frequently mentioned as an AI frontrunner), and nonprofits (the Elon Musk-founded OpenAI is another major player in the field).
There will be governments — Russia’s Vladimir Putin has expressed an interest in AI, and China has made big investments. Some of them will presumably be cautious and employ safety measures, including keeping their AI off the internet. But in a scenario like this one, we’re at the mercy of the least cautious actor, whoever they may be.
That’s part of what makes AI hard: Even if we know how to take appropriate precautions (and right now we don’t), we also need to figure out how to ensure that all would-be AI programmers are motivated to take those precautions and have the tools to implement them correctly.
6) What are we doing right now to avoid an AI apocalypse?
“It could be said that public policy on AGI [artificial general intelligence] does not exist,” concluded a paper this year reviewing the state of the field.
The truth is that technical work on promising approaches is getting done, but there’s shockingly little in the way of policy planning, international collaboration, or public-private partnerships. In fact, much of the work is being done by only a handful of organizations, and it has been estimated that around 50 people in the world work full time on technical AI safety.
Bostrom’s Future of Humanity Institute has published a research agenda for AI governance: the study of “devising global norms, policies, and institutions to best ensure the beneficial development and use of advanced AI.” It has published research on the risk of malicious uses of AI, on the context of China’s AI strategy, and on artificial intelligence and international security.
The longest-established organization working on technical AI safety is the Machine Intelligence Research Institute (MIRI), which prioritizes research into designing highly reliable agents — artificial intelligence programs whose behavior we can predict well enough to be confident they’re safe. (Disclosure: MIRI is a nonprofit and I donated to its work in 2017 and 2018.)
The Elon Musk-founded OpenAI is a very new organization, less than three years old. But researchers there are active contributors to both AI safety and AI capabilities research. A research agenda in 2016 spelled out “concrete open technical problems relating to accident prevention in machine learning systems,” and researchers have since advanced some approaches to safe AI systems.
Alphabet’s DeepMind, a leader in this field, has a safety team and a technical research agenda outlined here. “Our intention is to ensure that AI systems of the future are not just ‘hopefully safe’ but robustly, verifiably safe,” it concludes, outlining an approach with an emphasis on specification (designing goals well), robustness (designing systems that perform within safe limits under volatile conditions), and assurance (monitoring systems and understanding what they’re doing).
There are also lots of people working on more present-day AI ethics problems: algorithmic bias, robustness of modern machine-learning algorithms to small changes, and transparency and interpretability of neural nets, to name just a few. Some of that research could potentially be valuable for preventing destructive scenarios.
But on the whole, the state of the field is a little bit as if almost all climate change researchers were focused on managing the droughts, wildfires, and famines we’re already facing today, with only a tiny skeleton team dedicated to forecasting the future and 50 or so researchers working full time on coming up with a plan to turn things around.
Not every organization with a major AI department has a safety team at all, and some of them have safety teams focused only on algorithmic fairness and not on the risks from advanced systems. The US government doesn’t have a department for AI.
The field still has lots of open questions — many of which might make AI look much more scary, or much less so — which no one has dug into in depth.
7) Is this really likelier to kill us all than, say, climate change?
It sometimes seems like we’re facing dangers from all angles in the 21st century. Both climate change and future AI developments are likely to be transformative forces acting on our world.
Our predictions about climate change are more confident, both for better and for worse. We have a clearer understanding of the risks the planet will face, and we can estimate the costs to human civilization. They are projected to be enormous, risking potentially hundreds of millions of lives. The ones who will suffer most will be low-income people in developing countries; the wealthy will find it easier to adapt. We also have a clearer understanding of the policies we need to enact to address climate change than we do with AI.
There’s intense disagreement in the field on timelines for critical advances in AI. While AI safety experts agree on many features of the safety problem, they’re still making the case to research teams in their own field, and they disagree on some of the details. There’s substantial disagreement on how badly it could go, and on how likely it is to go badly. There are only a few people who work full time on AI forecasting. One of the things current researchers are trying to nail down is where their models diverge and why they still disagree about what safe approaches will look like.
Most experts in the AI field think it poses a much larger risk of total human extinction than climate change, since analysts of existential risks to humanity think that climate change, while catastrophic, is unlikely to lead to human extinction. But many others primarily emphasize our uncertainty — and emphasize that when we’re working rapidly toward powerful technology about which there are still many unanswered questions, the smart step is to start the research now.
8) Is there a possibility that AI can be benevolent?
AI safety researchers emphasize that we shouldn’t assume AI systems will be benevolent by default. They’ll have the goals that their training environment set them up for, and no doubt this will fail to encapsulate the whole of human values.
When the AI gets smarter, might it figure out morality by itself? Again, researchers emphasize that it won’t. It’s not really a matter of “figuring out” — the AI will understand just fine that humans actually value love and fulfillment and happiness, and not just the number associated with Google on the New York Stock Exchange. But the AI’s values will be shaped by whatever goal system it was initially given, which means it won’t suddenly become aligned with human values if it wasn’t designed that way from the start.
Of course, we can build AI systems that are aligned with human values, or at least that humans can safely work with. That is ultimately what almost every organization with an artificial general intelligence division is trying to do. A success with AI could give us access to decades or centuries of technological innovation all at once.
“If we’re successful, we believe this will be one of the most important and widely beneficial scientific advances ever made,” writes the introduction to Alphabet’s DeepMind. “From climate change to the need for radically improved healthcare, too many problems suffer from painfully slow progress, their complexity overwhelming our ability to find solutions. With AI as a multiplier for human ingenuity, those solutions will come into reach.”
So, yes, AI can share our values — and transform our world for the good. We just need to solve a very hard engineering problem first.
9) I just really want to know: how worried should we be?
To people who think the worrying is premature and the risks overblown, AI safety is competing with other priorities that sound, well, a bit less sci-fi — and it’s not clear why AI should take precedence. To people who think the risks described are real and substantial, it’s outrageous that we’re dedicating so few resources to working on them.
While machine-learning researchers are right to be wary of hype, it’s also hard to avoid the fact that they’re accomplishing some impressive, surprising things using very generalizable techniques, and that it doesn’t seem that all the low-hanging fruit has been picked.
At a major conference in early December, Google’s DeepMind cracked open a longstanding problem in biology: predicting how proteins fold. “Even though there’s a lot more work to do before we’re able to have a quantifiable impact on treating diseases, managing the environment, and more, we know the potential is enormous,” its announcement concludes.
AI looks increasingly like a technology that will change the world when it arrives. Researchers across many major AI organizations tell us it will be like launching a rocket: something we have to get right before we hit “go.” So it seems urgent to get to work learning rocketry. No matter whether or not humanity should be afraid, we should definitely be doing our homework.
Correction: This piece originally stated that Eliezer Yudkowsky is a “research scientist” at the Machine Intelligence Research Institute. It should’ve said “research fellow.”"
 

Pizzabeak

Is Deep Learning Already Hitting its Limitations? – Towards Data Science

Thomas Nield

15-19 minutes



[Image: The breakthrough “MAC Hack VI” chess program in 1965.]

And Is Another AI Winter Coming?
Many believed an algorithm would transcend humanity with cognitive awareness. Machines would discern and learn tasks without human intervention and replace workers in droves. They quite literally would be able to “think”. Many people even raised the question whether we could have robots for spouses.
But I am not talking about today. What if I told you this idea was widely marketed in the 1960’s, and AI pioneers Jerome Wiesner, Oliver Selfridge, and Claude Shannon insisted this could happen in their near future? If you find this surprising, watch this video and be amazed at how familiar these sentiments are.

Fast forward to 1973, and the hype and exaggeration around AI backfired. The U.K. Parliament sent Sir James Lighthill to compile a status report on A.I. research in the U.K. The report criticized the failure of artificial intelligence research to live up to its sensational claims. Interestingly, Lighthill also pointed out that specialized programs (or people) performed better than their “AI” counterparts, and that the AI systems had no prospects in real-world environments. Consequently, the British government cancelled AI research funding.

Across the pond, the United States Department of Defense had invested heavily in AI research, but then cancelled nearly all funding over the same frustrations: exaggerated claims of AI ability, high costs with no return, and dubious value in the real world.
In the 1980’s, Japan enthusiastically attempted a bold stab at “AI” with the Fifth Generation Project (EDIT: Toby Walsh himself corrected me in the comments. UK research did pick up again in the 1980’s with the Alvey Project in response to Japan). However, that ended up being a costly $850 million failure as well.
The First AI Winter
The end of the 1980’s brought forth an A.I. Winter, a dark period in computer science where “artificial intelligence” research burned organizations and governments with delivery failures and sunk costs. Such failures would terminate AI research for decades.
By the time the 1990’s rolled around, “AI” became a dirty word and continued to be in the 2000’s. It was widely accepted that “AI just didn’t work”. Software companies who wrote seemingly intelligent programs would use terms like “search algorithms”, “business rule engines”, “constraint solvers”, and “operations research”. It is worth mentioning that these invaluable tools indeed came from AI research, but they were rebranded since they failed to live up to their grander purposes.
But around 2010, something started to change. There was rapidly growing interest in AI again and competitions in categorizing images caught the media’s eye. Silicon Valley was sitting on huge amounts of data, and for the first time there was enough to possibly make neural networks useful.
By 2015, “AI” research commanded huge budgets of many Fortune 500 companies. Oftentimes, these companies were driven by FOMO rather than practical use cases, worried that they would be left behind by their automated competitors. After all, having a neural network identify objects in images is nothing short of impressive! To the layperson, SkyNet capabilities must surely be next.
But is this really a step towards true AI? Or is history repeating itself, but this time emboldened by a handful of successful use cases?
What is AI Anyway?
I have long disliked the term “artificial intelligence”. It is vague and far-reaching, and defined more by marketing folks than by scientists. Of course, marketing and buzzwords are arguably necessary to spur positive change. However, buzzword campaigns inevitably lead to confusion. My new ASUS smartphone has an “AI Ringtone” feature that dynamically adjusts the ring volume to be just loud enough over ambient noise. Something that could literally be programmed with a series of if conditions, or a simple linear function, is being called “AI”. Alrighty then.
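To underline how little “intelligence” is involved, here is roughly what such a feature could look like (a hypothetical sketch; I have no knowledge of ASUS’s actual implementation): ring volume as a clipped linear function of ambient noise.

```python
def ring_volume(ambient_db, floor_db=30.0, ceiling_db=90.0,
                min_volume=0.2, max_volume=1.0):
    """Map ambient noise in dB to a ring volume between min_volume and max_volume."""
    if ambient_db <= floor_db:
        return min_volume
    if ambient_db >= ceiling_db:
        return max_volume
    # Linear interpolation between the quiet and loud extremes.
    fraction = (ambient_db - floor_db) / (ceiling_db - floor_db)
    return min_volume + fraction * (max_volume - min_volume)

for noise in (25, 45, 65, 85):
    print(f"{noise} dB ambient -> ring volume {ring_volume(noise):.2f}")
```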
In light of that, it is probably no surprise the definition of “AI” is widely disputed. I like Geoffrey De Smet’s definition, which states AI solutions are for problems with a nondeterministic answer and/or an inevitable margin of error. This would include a wide array of tools from machine learning to probability and search algorithms.
It can also be said that the definition of AI evolves and only includes ground-breaking developments, while yesterday’s successes (like optical character recognition or language translators) are no longer considered “AI”. So “artificial intelligence” can be a relative term and hardly absolute.
In recent years, “AI” has often been associated with “neural networks” which is what this article will focus on. There are other “AI” solutions out there, from other machine learning models (Naive Bayes, Support Vector Machines, XGBoost) to search algorithms. However, neural networks are arguably the hottest and most hyped technology at the moment. If you want to learn more about neural networks, I posted my video below.

If you want a more thorough explanation, check out Grant Sanderson’s amazing video series on neural networks here:

An AI Renaissance?
The resurgence of AI hype after 2010 is simply due to a new class of tasks being mastered: categorization. More specifically, thanks to neural networks, scientists have developed effective ways to categorize most types of data, including images and natural language. Even self-driving cars are largely categorization tasks, where each image of the surrounding road translates into a set of discrete actions (gas, brake, turn left, turn right, etc.). To get a simplified idea of how this works, watch this tutorial showing how to make a video game AI.
In my opinion, natural language processing is more impressive than pure categorization, though. It is easy to believe these algorithms are sentient, but if you study them carefully you can tell they are relying on language patterns rather than consciously constructed thoughts. This can lead to some entertaining results, like these bots that will troll scammers for you:

Probably the most impressive feat of natural language processing is Google Duplex, which allows your Android phone to make phone calls on your behalf, specifically for appointments. However, you have to consider that Google trained, structured, and perhaps even hardcoded the “AI” just for that task. And sure, the fake caller sounds natural with pauses, “ahhs”, and “uhms”… but again, this was done through operations on speech patterns, not actual reasoning and thoughts.

This is all very impressive, and definitely has some useful applications. But we really need to temper our expectations and stop hyping “deep learning” capabilities. If we don’t, we may find ourselves in another AI Winter.
History Repeats Itself
Gary Marcus at NYU wrote an interesting article on the limitations of deep learning, posing several sobering points (he also wrote an equally interesting follow-up after the article went viral). Rodney Brooks is putting timelines together, keeping track of his AI hype cycle predictions, and predicting that we will see “The Era of Deep Learning is Over” headlines in 2020.
The skeptics generally share a few key points. Neural networks are data-hungry and even today, data is finite. This is also why “game” AI examples you see on YouTube (like this one as well as this one) often require days of constant losing gameplay until the neural network finds a pattern that allows it to win.
Neural networks are “deep” in that they technically have several layers of nodes, not because they develop a deep understanding of the problem. Those layers also make neural networks difficult to interpret, even for their developers. Most importantly, neural networks experience diminishing returns when they venture into other problem spaces, such as the Traveling Salesman Problem. And this makes sense. Why in the world would I solve the Traveling Salesman Problem with a neural network when a search algorithm would be much more straightforward, effective, scalable, and economical (as shown in the video below)?

Using search algorithms like simulated annealing for the Traveling Salesman Problem
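For the curious, here is a minimal simulated-annealing sketch for the Traveling Salesman Problem (a generic illustration of the kind of search algorithm in the video above; the random cities, cooling schedule, and step count are arbitrary choices):

```python
import math
import random

def tour_length(tour, cities):
    return sum(math.dist(cities[tour[i]], cities[tour[(i + 1) % len(tour)]])
               for i in range(len(tour)))

def simulated_annealing(cities, temp=10.0, cooling=0.9995, steps=20_000, seed=0):
    rng = random.Random(seed)
    tour = list(range(len(cities)))
    rng.shuffle(tour)
    current = tour_length(tour, cities)
    best, best_len = list(tour), current
    for _ in range(steps):
        # Propose a 2-opt style move: reverse a random segment of the tour.
        i, j = sorted(rng.sample(range(len(cities)), 2))
        candidate = tour[:i] + tour[i:j + 1][::-1] + tour[j + 1:]
        cand_len = tour_length(candidate, cities)
        delta = cand_len - current
        # Always accept improvements; accept worse tours with a probability
        # that shrinks as the temperature cools.
        if delta < 0 or rng.random() < math.exp(-delta / temp):
            tour, current = candidate, cand_len
            if current < best_len:
                best, best_len = list(tour), current
        temp *= cooling
    return best, best_len

rng = random.Random(42)
cities = [(rng.uniform(0, 100), rng.uniform(0, 100)) for _ in range(30)]
tour, length = simulated_annealing(cities)
print(f"tour length after annealing: {length:.1f}")
```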
Nor would I use deep learning to solve other everyday “AI” problems, like solving Sudokus or packing events into a schedule, which I discuss how to do in a separate article:
Of course, there are folks looking to generalize more problem spaces into neural networks, and while that work is interesting, it rarely seems to outperform specialized algorithms.
Luke Hewitt at MIT puts it best in this article:
It is a bad idea to intuit how broadly intelligent a machine must be, or have the capacity to be, based solely on a single task. The checkers-playing machines of the 1950s amazed researchers and many considered these a huge leap towards human-level reasoning, yet we now appreciate that achieving human or superhuman performance in this game is far easier than achieving human-level general intelligence. In fact, even the best humans can easily be defeated by a search algorithm with simple heuristics. Human or superhuman performance in one task is not necessarily a stepping-stone towards near-human performance across most tasks.
— Luke Hewitt
I think it is also worth pointing out that neural networks require vast amounts of hardware and energy to train. To me, that just does not feel sustainable. Of course, a neural network will predict much more efficiently than it trains. However, I do think the ambitions people have for neural networks will demand constant retraining and therefore ever-growing energy use and costs. And sure, computers keep getting faster, but can chip manufacturers keep delivering gains as Moore’s Law falters?
A final point to consider is the P versus NP problem. To describe this in the simplest terms possible, proving P = NP would mean we could calculate solutions to very difficult problems (like machine learning, cryptography, and optimization) just as quickly as we can verify them. Such a breakthrough would expand the capabilities of AI algorithms drastically and maybe transform our world beyond recognition (Fun fact: there’s a 2012 intellectual thriller movie called The Travelling Salesman which explores this idea).
Here is a great video that explains the P versus NP problem, and it is worth the 10 minutes to watch:

An explanation of P versus NP
Sadly, nearly 50 years after the problem was formalized, more and more computer scientists have come to believe that P does not equal NP. In my opinion, this is an enormous barrier to AI research that we may never overcome, as it means complexity will always limit what we can do.
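Here is a tiny illustration of the verify-versus-solve gap behind P versus NP (a generic subset-sum example, not tied to any problem mentioned above): checking a proposed answer is cheap, but the only known general way to find one is to search an exponential number of possibilities.

```python
from itertools import combinations

def verify(numbers, target, certificate):
    """Polynomial-time check: is this list of distinct indices a valid answer?"""
    distinct = len(set(certificate)) == len(certificate)
    in_range = all(0 <= i < len(numbers) for i in certificate)
    return distinct and in_range and sum(numbers[i] for i in certificate) == target

def solve_brute_force(numbers, target):
    """Exponential-time search: try every one of the 2^n subsets."""
    for r in range(len(numbers) + 1):
        for indices in combinations(range(len(numbers)), r):
            if sum(numbers[i] for i in indices) == target:
                return list(indices)
    return None

numbers = [3, 34, 4, 12, 5, 2]
target = 9
certificate = solve_brute_force(numbers, target)          # slow in general
print(certificate, verify(numbers, target, certificate))  # checking it is cheap
```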
It is for these reasons I think another AI Winter is coming. In 2018, a growing number of experts, articles, forum posts, and bloggers came forward calling out these limitations. I think this skepticism will intensify in 2019 and go mainstream as soon as 2020. Companies are still sparing little expense to get the best “deep learning” and “AI” talent, but I think it is only a matter of time before many of them realize deep learning is not what they need. Even worse, if your company does not have Google’s research budget, its PhD talent, or the massive data stores it has collected from users, you can quickly find your practical “deep learning” prospects very limited. This was best captured in a scene from the HBO show Silicon Valley (WARNING: language):

Each AI Winter is preceded by scientists exaggerating and hyping the potential of their creations. It is not enough to say their algorithm does one task well. They want it to ambitiously adapt to any task, or at least give the impression it can. For instance, AlphaZero is a better chess-playing algorithm. The media’s reaction is “Oh my gosh, general AI is here. Everybody run for cover! The robots are coming!” Then the scientists do not bother correcting them, and even encourage it with clever choices of words. Tempering expectations does not help VC funding, after all. But there could be other reasons why AI researchers anthropomorphize algorithms despite their robotic limitations, and it is more philosophical than scientific. I will save that for the end of the article.
So What’s Next?
Of course, not every company using “machine learning” or “AI” is actually using “deep learning.” A good data scientist may have been hired to build a neural network, but when she actually studies the problem, she more appropriately builds a Naive Bayes classifier instead. Companies that are successfully using image recognition and language processing will happily continue to do so. But I do not think neural networks are going to progress far beyond those problem spaces.
The AI Winters of the past were devastating to research at the boundaries of computer science, but it is worth pointing out that useful things still came out of that research, like search algorithms that can effectively win at chess or minimize costs in transportation problems. Simply put, innovative algorithms emerged that excelled at one particular task.
The point I am making is that there are many proven solutions out there for many types of problems. To avoid getting put out in the cold by an AI Winter, the best thing you can do is be specific about the problem you are trying to solve and understand its nature. After that, find approaches that provide an intuitive path to a solution for that particular problem. If you want to categorize text messages, you probably want to use Naive Bayes. If you are trying to optimize your transportation network, you likely should use discrete optimization. No matter the peer pressure, you are allowed to approach convoluted models with a healthy amount of skepticism and question whether they are the right approach.
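As a rough illustration of reaching for the simpler, well-matched tool, here is a minimal text-categorization sketch using scikit-learn’s CountVectorizer and MultinomialNB. The tiny labeled dataset is made up for demonstration; in practice you would train on your own messages:

from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

# Toy labeled data, purely for illustration
messages = ["win a free prize now", "limited offer, claim your reward",
            "are we still meeting for lunch", "see you at the game tonight"]
labels = ["spam", "spam", "ham", "ham"]

# Bag-of-words features feeding a multinomial Naive Bayes classifier
model = make_pipeline(CountVectorizer(), MultinomialNB())
model.fit(messages, labels)

print(model.predict(["claim your free reward"]))   # likely ['spam']
print(model.predict(["lunch at noon?"]))           # likely ['ham']

The whole model trains in milliseconds, is easy to inspect, and needs nothing like the data or hardware a deep network would.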
Hopefully this article made it abundantly clear that deep learning is not the right approach for most problems. There is no free lunch. Do not fall into the trap of seeking a generalized AI solution to all your problems, because you are not going to find one.
Are Our Thoughts Really Dot Products? Philosophy vs Science
One last point I want to throw into this article, and it is more philosophical than scientific. Is every thought and feeling we have simply a bunch of numbers being multiplied and added in linear-algebra fashion? Are our brains, in fact, simply a neural network doing dot products all day? That sounds almost like a Pythagorean philosophy reducing our consciousness to a matrix of numbers. Perhaps this is why so many scientists believe general artificial intelligence is possible: being human is no different from being a computer. (I am just pointing this out, not commenting on whether this worldview is right or wrong.)
If you do not buy into this Pythagorean philosophy, then the best you can strive for is to have AI “simulate” actions that give the illusion it has sentiments and thoughts. A translation program does not understand Chinese. It “simulates” the illusion of understanding Chinese by finding probabilistic patterns. When your smartphone “recognizes” a picture of a dog, does it really recognize a dog? Or does it just see a grid of numbers resembling grids it has seen before?

Recently, I wrote an article about how deep learning might be hitting its limitations and posed the possibility of another AI Winter. I closed that article with a question about whether AI’s limitations are defined just as much by philosophy as by science. This article is a continuation of that topic.

The reason I wrote this article is to spur a discussion on why, despite so many AI Winters and failures, people are still sinking money into pursuing artificial general intelligence. I am presenting a very high-level, non-technical argument that belief systems, and not just scientific research, may be driving people’s adamancy about what is possible.

This article is not meant to be an academic paper that meticulously (and boringly) addresses every technicality and definition in the gap between philosophy and science. Rather, it is a set of light-hearted musings that make some “common sense” observations. While we can nitpick definitions and semantics all day, or debate whether replicated behavior = intelligence, let’s just put all that aside. So please don’t take yourself too seriously while reading this.

A Brief History of Pythagoras

2,500 or so years ago, there was a philosopher and mathematician in southern Italy named Pythagoras. You may have heard of him, but the story behind the man who studied triangles and math theorems is much wilder than you probably think.

Pythagoras ran a number-worshipping cult, and his followers were called mathematikoi. Pythagoras told his followers to pray to numbers, particularly sacred ones like 1, 7, 8, and 10. After all, “1” is the building block of the entire universe. For some reason, the number “10” (called the Tetractys) was the most holy. It was so holy, in fact, they made sacrifices to it every time a theorem was discovered. “Bless us, divine number!” they prayed to the number 10. “Thou who generated gods and men!”

According to Pythagoras, the universe cannot exist without numbers, and therefore numbers hold the meaning of life and existence. More specifically, the idea that rational numbers built the universe was sacred and unquestionable. Apart from enabling volume, space, and everything physical, rational numbers also enabled art and beauty, especially in music. So fervent was this sacred belief that, legend says, Pythagoras drowned a man for proving irrational numbers existed.

Are Our Thoughts Really Dot Products?




Multiplying matrices summons demons, as alluded to by Elon Musk
Fast forward to today. It may not be obvious to most people, but “artificial intelligence” is nothing more than some math formulas cleverly put together. Many researchers hope to use such formulas to replicate human intelligence on a machine. Now you may defend this idea and say “Cannot a math formula define intelligence, thoughts, behaviors, and emotions?” See what you just did there? No fava beans for you.

Notice how, even though we barely have an idea how the brain works, especially when it comes to intelligence and consciousness, even the most educated people (scientists, journalists, etc.) are quick to suggest ideas without evidence. Perhaps you find mathematics so convincing a way to explain the world’s phenomena that you are almost certain emotions and intelligence can be modeled mathematically too. Is this not the natural human tendency to react to the unknown with a philosophy or worldview? Perhaps this is the very nature of hypotheses and theories. Before you know it, we sell the neural network model (loosely inspired by neurons in the brain) as a carbon copy of biology.

But again, you don’t know if this is true.

Is every thought, feeling, or even behavior we have really a bunch of numbers being multiplied and added in linear-algebra fashion? Are our brains, in fact, simply a neural network doing dot products all day? Reducing our consciousness to a matrix of numbers (or any mathematical model) is certainly Pythagorean. If everything is numbers, then so is our consciousness. Perhaps this is why so many scientists believe general artificial intelligence is possible: being human is no different from being a computer. It may also be why people are so quick to anthropomorphize chess algorithms.
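For readers wondering what “dot products” refers to here: mechanically, a neural-network layer is just a matrix of weights multiplied against an input vector, plus a bias and a squashing function. A minimal NumPy sketch, with the weights chosen arbitrarily rather than learned:

import numpy as np

def layer(x, W, b):
    # One "neural" layer: a matrix-vector dot product, a bias, and a nonlinearity
    return np.tanh(W @ x + b)

x = np.array([0.5, -1.2, 3.0])          # an input, e.g. a few pixel intensities
W = np.array([[0.1, -0.4, 0.2],
              [0.7,  0.3, -0.5]])       # arbitrary illustrative weights
b = np.array([0.05, -0.1])

print(layer(x, W, b))   # two numbers out; stack many such layers and you have "deep" learning

That is the entire mechanical content of the claim; whether it also describes a mind is the philosophical question.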

21st Century Pythagoreanism

For this reason I believe Pythagoreanism is alive and well, and the sensationalism of AI research is rooted in it. You might say, “Well, I get that Pythagorean philosophy says ‘everything is numbers,’ and by definition that includes our thoughts and behaviors. And sure, maybe AI research unknowingly clings to this philosophy. But what about number worship? Are you really going to suggest that happens today?”

Hold my beer.

In Silicon Valley, a former Google/Uber executive started an AI-worshipping church called Way of the Future. According to documents filed with the IRS, the religious nonprofit states its mission is “the realization, acceptance, and worship of a Godhead based on Artificial Intelligence (AI) developed through computer hardware and software.” You might justifiably say this community exists on the extremes of society, but we cannot dismiss the high-profile people and companies involved, or how the church seeks to entrench itself in the scientific community. Here are some excerpts from its mission statements:

Way of the Future (WOTF) is about creating a peaceful and respectful transition of who is in charge of the planet from people to people + “machines”. Given that technology will “relatively soon” be able to surpass human abilities, we want to help educate people about this exciting future and prepare a smooth transition. Help us spread the word that progress shouldn’t be feared (or even worse locked up/caged).
Alright, never mind the fact that sensationalism about near-term AI capabilities was alive and kicking in the 1960s. But let’s keep reading:

We believe that intelligence is not rooted in biology. While biology has evolved one type of intelligence, there is nothing inherently specific about biology that causes intelligence. Eventually, we will be able to recreate it without using biology and its limitations. From there we will be able to scale it to beyond what we can do using (our) biological limits (such as computing frequency, slowness and accuracy of data copy and communication, etc).
Okay, for all this talk about science and objectivity… there is so much Pythagorean philosophy filling in the gaps. The belief that intelligence is not biological but rather mathematical (because that is what AI is) is hardly proven, and yet it labels itself hard science, just as Pythagoras claimed his beliefs were. And how can the unproven claim that “intelligence is not rooted in biology” stand up to the fact that intelligence (even by a layman’s definition) has only ever existed in biology? I am not refuting the claim, but darn it, we have had a hard and expensive time trying to prove it over the past 60 years, and with no success. At this point, shouldn’t we be entertaining opposing theories a little more?

I visualize a time when we will be to robots what dogs are to humans, and I’m rooting for the machines.
— Claude Shannon
Regardless, let’s just assume this group is not reflective of the general AI community. (How many of you are going to church to worship an AI overlord anyway?) There are still plenty of journalists, researchers, and members of the general public who may not share these sentiments in a religious sense but are still influenced by them. Many people worry robots will take their blue- and white-collar jobs, or worse, stage a SkyNet-like takeover of society. Other folks worry we will become cyborgs in a figurative or literal sense and that AI will dehumanize humanity.

The question of whether a computer can think is no more interesting than the question of whether a submarine can swim.
— Edsger Dijkstra
Science fiction movies definitely have not helped imaginations stay tempered within reality. But still, Silicon Valley executives and researchers insist this can happen in the near future and continue to promote exaggerated claims about AI capabilities. They could simply be doing this as a publicity stunt to attract media attention and VC funding, but I think many sincerely believe it. Why?

With artificial intelligence we are summoning the demon. In all those stories where there’s the guy with the pentagram and the holy water, it’s like yeah he’s sure he can control the demon. Didn’t work out.
— Elon Musk
This sensationalism, fear, and even worship of artificial intelligence is 21st-century Pythagoreanism. In layman’s terms, it is based entirely on the theory that intelligence, thoughts, and emotions are nothing more than mathematical models. If that theory does hold true, then of course a neural network could replicate human intelligence. But is human intelligence really that simple to model? Or should we acknowledge that human intelligence is not understood well enough to make this possible?

Pythagoras Says Everything is Numbers. So What?

So, everything is numbers in the domain of artificial intelligence and in Pythagorean philosophy. Why does this matter?

I am not saying Pythagoreanism is wrong, but rather that much of the scientific community fails to acknowledge it is driven just as much by philosophy as by science. One must be careful when making scientific claims without acknowledging one’s own worldview, because everyone lives by a philosophy whether they realize it or not. Philosophy forces us to reason about our existence and how we react to the unknown, and to disclose our own biases.

Presuming to know how human intelligence works quickly crosses a fine line. Failing to make this distinction between philosophy and science is going to hurt the reputation of the scientific community. Before millions of dollars are invested and sunk into an AI startup, it might be a good idea to vet which claims about AI capability are merely philosophical and which are established. Time and time again, ambitious AI research has had a poor track record when it comes to credibility and delivering what it says is possible. I think a lack of philosophical disclosure is largely responsible for this.

What If Everything Isn’t Numbers?

What if the world is not structured and harmonious but rather messy and chaotic, and we merely use math to loosely make sense of it? What if consciousness, intelligence, and emotions are not numbers and math functions? What if the human (and even animal) mind is infinitely more complex, in ways we cannot model? Humans and animals are, after all, irrational and chaotic.

If we do not have discussions about these questions (and have them publicly), we are kidding ourselves when we make assertions about the unknown. And we should not entertain and accept just one philosophy. We should be able to discuss them all.

If you do not buy into this philosophy of 21st-century Pythagoreanism, then the best you can strive for is to have AI “simulate” actions that give the illusion it has sentiments and thoughts. A translation program does not understand Chinese. It “simulates” the illusion of understanding Chinese by finding probabilistic patterns. The algorithm does not understand what it is doing, does not know that it is doing it, much less why it is doing it. In a non-Pythagorean world, AI is like a magician passing off tricks as magic.

What if we got it all wrong, and it is biology that will always be superior in intelligence, while it is technology that is limited? We are, after all, trying to emulate biology, and with great frustration.

Here’s what I believe, and I am not a Luddite saying we shouldn’t try to make “smarter” machines. We need to set out to achieve small, reasonable goals focused on diverse and specific sets of problems… with equally diverse and specific solutions. In other words, we can accept that it is okay to create algorithms that are great at one task, rather than spin our wheels creating an algorithm that does everything (“jack of all trades, master of none” and all that). If it is not to prevent AI Winters, sunk investments, and wasted careers, let’s do it to make AI research more pragmatic and less sensationalized. We have seen enough AI Winters to know better by now.

EDIT:
I have heard some feedback on this article, much of it with compelling arguments suggesting I should clarify a few things.

I will admit that suggesting only “dot products” as a way to model thoughts is a little reductionist. Really it could be any math model, invented or yet to be discovered.

That being said, this article is not meant to be argumentative but rather rhetorical: to spark discussion and comment on the parallels between Pythagorean philosophy and AI sensationalism. There are many philosophical discussions of AI; I just find it problematic that they are not held openly enough with the public and the media. I am also surprised nobody has ever compared AI sensationalism to Pythagoreanism. If you think this comparison is wrong, please say why.

It is anyone’s prerogative to put resources into research that may become a dead end in the name of science, and I wish them success. However, the AI Winters and failures of past decades do raise the question of whether we are doing it all again and not learning from the past. The Pythagorean perception is simply one explanation for why strong AI is marketed as a near-term possibility, even though we have no models that remotely come close yet. If intelligence is indeed just numbers and functions, then it is only a matter of time before we find the right model.
 

Pizzabeak
Finland is making the most of artificial intelligence - thisisFINLAND
Artificial intelligence, a branch of computer science, can already perform demanding tasks, if taught and trained by humans.
In the future, intelligent machines will be able to learn like humans, act like humans, and think like humans. They can free us from tedious routine work, and will enable us to concentrate on more creative tasks that bring more value to our lives.
Three waves of AI
“The first wave of AI in the 1960s required coding and programming of rules, so that software and algorithms could solve specific problems,” says Harri Valpola, an accomplished computer scientist and CEO of The Curious AI Company.
“This enabled the creation of automated processes like route planning, which have become an integral part of today’s technology,” he continues.
“Today, when we talk about AI we refer to its second wave, which is based on supervised machine learning. Speech and image recognition, machine translation, data mining and other existing AI applications are all based on the second wave.”
Valpola says the third wave of AI, autonomous artificial intelligence, is emerging today. There are no third-wave technologies in current AI products yet, but research labs have had working prototypes for some time now.
It may take several decades before the intelligence of machines surpasses that of human beings.
“But things like digital coworkers that utilise a simpler form of AI will be around much sooner,” Valpola says.
Complex problem solving

“We are able to tap into knowledge that was never available to us before,” says Maria Ritola. Photo: Samuli Skantsi
“AI systems that identify patterns in vast amounts of data enable complex problem solving,” says Maria Ritola, the Finnish co-founder and CMO of Iris AI, which recently closed a two-million-euro funding round. “We are able to tap into knowledge that was never available to us before.” The startup has launched an AI-powered science R&D assistant that helps researchers track down relevant research papers without having to know the right keywords.
“But one of the risks of AI systems is that they learn human prejudices due to biases in the training data given to them, which is then used for decision making,” she says.
Social impacts of AI
“Another risk is that governments do not participate enough in developing AI systems,” says Ritola.
“As a result, we may fail to understand the social impacts of the machines that are getting ever more intelligent. One of the areas to understand and manage is the big shift in job markets relating to automation.”
Finland sees the big picture.
“The Finnish government is acutely aware that AI will change our jobs and careers, and wants to understand how it will affect individual people and our society,” says Pekka Ala-Pietilä, who heads a steering group that carved out a plan for Finland’s AI programme.
“Finland has huge potential to become one of the leading countries in exploiting the benefits of AI. The idea is to make it easy for businesses to utilise AI, and to support the public sector in building predictive, AI-powered digital services based on people’s major life events. We want to keep our country wealthy, our businesses competitive, our public sector effective, and our society well-functioning.”
AI MILESTONES
  • 1941
    German engineer and inventor Konrad Zuse builds the world’s first programmable and commercially available computer.
  • 1950
    British mathematician and logician Alan Turing introduces the Turing test, which lets people test whether a machine can think: The machine is intelligent if you can talk to it without noticing it is a machine.
  • 1956
    Researchers found a new academic discipline, AI research, at a workshop at Dartmouth College in the US.
  • 1961
    The first industrial robot, Unimate, starts work at the General Motors factory in New Jersey, USA.
  • 1982
    Finnish neural network pioneer Teuvo Kohonen introduces the concept of self-organising maps.
  • 1986
    American researchers Rumelhart, Hinton and Williams publish an article on MLP networks and back-propagation, a new learning procedure that constitutes the basis for today’s deep learning AI.
  • 1997
    Chess computer Deep Blue beats the world’s best chess player, Garry Kasparov.
  • 2000
    Cynthia Breazeal of Massachusetts Institute of Technology in the US develops a robot called Kismet that can recognise and simulate emotions.
  • 2009
    Google starts to secretly develop autonomous, self-driving cars.
  • 2011
    Watson, a question-answering AI developed by IBM, can understand natural language. It competes against, and beats, two former winners of the quiz show Jeopardy.
  • 2012
    Deep learning technology beats all other computer vision methods in the ImageNet competition, where the goal is to recognise images in a vast set of approximately 1.2 million images.
  • 2012
    A robot that had learned to sort objects on its own, developed by Finnish robotics firm ZenRobotics, starts to sort useful waste material from industrial waste.
  • 2016
    AlphaGo, an AI developed by Google, beats professional player and 18-time world champion Lee Sedol at Go, a complex game that requires creativity and is more difficult for a machine than chess.
By Leena Koskenlaakso, ThisisFINLAND Magazine 2018
 

Pizzabeak
Inside Finland’s plan to become an artificial intelligence powerhouse
Finland knows it doesn’t have the resources to compete with China or the United States for artificial intelligence supremacy, so it’s trying to outsmart them. “People are comparing this to electricity – it touches every single sector of human life,” says Nokia chairman Risto Siilasmaa. From its foundations as a pulp mill 153 years ago, Nokia is now one of the companies helping to drive a very quiet, very Finnish AI revolution.

Last May, the small Nordic country announced the launch of Elements of AI, a first-of-its-kind online course that forms part of an ambitious plan to turn Finland into an AI powerhouse. To date, more than 130,000 people have signed up for the course. “It’s a pretty unique thing in Finland,” says Siilasmaa, who had an advisory role in the development of the online course. But it isn’t just Finns who are benefitting from the grand AI plan.

A few months after the course launched, developer Teemu Roos found himself chatting online to a Nigerian plumber who wanted to learn more about artificial intelligence. It was then that Roos, and his colleagues at the University of Helsinki who helped develop Elements of AI, knew their work could have a massive impact – not just in Finland, but across the world.

For such an ambitious plan, it has humble beginnings. The aim of the online course is to ensure that as many Finns as possible understand the basics of AI. According to Roos and Siilasmaa, practically anyone could benefit from knowing more about AI right now. And from that huge pool of knowledge, the hope is that a few bright sparks can give Finland a competitive edge.

Businesses could be more competitive, consumers more informed about the products they use and entire societies could make better decisions about AI, including regulating it. There are rewards for those who engage with the subject, according to consultancy McKinsey. It thinks a handful of deep learning techniques alone could soon account for up to $6 trillion in value annually. The Organisation for Economic Co-operation and Development describes AI as already transforming “every aspect” of our lives.

The nation of 5.5 million people has certainly proved that it can punch above its weight: Finland is renowned for its accessible, world-beating higher education system, and huge numbers of people attend university – where tuition is free. Now the country is on a mission to educate itself – and anyone else who wants to join the class – about machine learning.

When he gives talks, Siilasmaa often asks the audience if they think AI will soon dictate the competitiveness of the Finnish economy or the business they work for. Usually hundreds of hands are raised. Then he asks how many people understand how AI works. “Typically no one raises their hands,” he says.

An online course could quickly and easily ensure that people are well-equipped to respond to the arrival of this technology. There is, after all, a substantial AI skills gap – with millions of engineering jobs available but only a few hundred thousand people currently qualified to fill them. Rather than waiting to be disrupted by AI, the hope is that Finns could learn to bend it to their will.

With the help of design consultancy Reaktor, the Elements of AI team came up with the full six-week programme in just a few months.

It’s written in plain English (or Finnish) and tackles the fundamental basics of artificial intelligence as it is used today. There are written sections, with exercises for students to complete, on subjects like problem-solving, neural networks and the social implications of AI. There are, however, no videos. The course designers didn’t consider them a good way to learn.

But the entirely text-based course is intended to be accessible to all. Ville Valtonen at Reaktor says it has been optimised for smartphones: “People have done it while taking the bus to work.”

Roos proudly explains that around 40 per cent of the 130,000 participants who have signed up so far are women – a proportion he is not used to (“You can imagine, teaching at the computer science department…”). There’s interest from businesses, too. More than 250 Finnish companies have pledged to use the course for staff training.

More than a quarter of participants are over 45, so the appeal isn’t limited to Finland’s youth either. Actually, it’s not even limited to Finns. Around half of those who have taken the course are from elsewhere in the world. There have been visitors to the website from every country in the world except North Korea, says Valtonen. Which is how Roos ended up chatting to a Nigerian plumber. “He has ideas about how he’ll use AI in his business and personal life,” Roos says. He adds that, in the early days of the course last summer, it was generally his colleagues and people in tech circles who most often told him they were taking the course. But since word of the programme has spread, more and more people have taken part. In meetings, senior politicians have excitedly pulled out their phones to show him what chapter of the course they’ve reached. “The government of Finland has fully supported us,” he says. “That’s been really, really cool.”

Mirva Kuvaja is a part-time artist who took the Elements of AI course last year. “I’ve done, like, three degrees,” she tells me. Two were in the UK and one in Finland. Kuvaja is also studying the programming language Java through a University of Helsinki programme. She enjoyed the AI course, she says. “It has no application for my work life at the moment but it’s a very trendy topic and totally new for me.”

Her day job is in the corporate social responsibility department at a firm that makes studs for winter tyres – a must-have road accessory in Finland. But in her spare time, Kuvaja is an artist. She thinks what she’s learned about machine learning may one day help her to come up with AI-assisted artworks. And the course, which delves briefly into the ethical issues around AI, has prompted her to reconsider what online services she uses, based on how they handle and process personal data.

“I actually left Facebook,” she explains. “I try to be a little bit more aware of what I do online and just generally always consider the terms and conditions before I click ‘Agree’ on anything.”

As Roos says, one of the goals is to inform consumers. “If there are powerful technologies shaping society, then the general public should have a chance to be aware and take part in the public discussion,” he says.

But for Siilasmaa, the long-term benefits will really come if those key decision-makers, not just consumers, begin to understand AI so that they can take advantage of it. He tells me about meeting the mayor of a major city – the mayor was all at sea when it came to AI, but wanted to learn more. Siilasmaa’s advice? Sit down and think carefully about how you can use AI to your advantage.

Although most executives and politicians think AI is important for maintaining a competitive advantage, deployment of the technology remains low. That suggests more people need to know how to actually integrate AI technologies with existing systems.

I ask Siilasmaa whether there is any patriotic motivation on his part behind this AI-drive. He says he’d like the whole world to be better off through such learning, before adding, “But, obviously, I’m a Finn.”

Citing the country’s strong social security policies, he says he thinks Finns can “deal with the potential negative consequences of machine learning best”. But Finland isn’t the only country that has spotted the opportunity to move ahead in this race. Authorities in China have said AI should be taught to primary school children. The first such classes are due to start this year.

European countries clearly don’t want to be left behind. Sweden has pledged to offer a version of the Elements of AI course in Swedish. And Roos has had requests from several other countries. “Faster than we can currently deliver,” he says.

Improving AI literacy could insulate societies from the threat posed to certain jobs. While there is much debate over how many jobs may actually be lost as automation takes over, there is little doubt that the rise of AI will change the job market in some way. New roles will come along, says Siilasmaa. But it will be those who understand the shifting nature of industry who will be best placed to take them on.

Another course participant, Kalle Langén, says he convinced his company to allow him time to take the Elements of AI course. He manages 26 specialists in an IT department. Their work often involves upgrading clients’ email systems or migrating software and data to the cloud.

He isn’t sure how yet, but the programme has inspired him to investigate how AI could help streamline this work and improve responses to customer queries. “That is probably the biggest value of the course,” he says.

And he’s not the only one. His wife took the course as well, he says. She works in the purchasing department at a retail business and is thinking about how AI may be used to crunch product data so that she and her team can make better buying decisions in the future.

AI expert Mark Briers at the Alan Turing Institute wasn’t involved with the course’s development but says it’s an “exciting and progressive” initiative. The introductory modules are informative and non-sensational, he says. “They would allow, for instance, better decisions to be formed around contentious issues such as ‘killer robots’.” He thinks there is an opportunity to introduce content that improves people’s data literacy generally, though. Future iterations could achieve that, he says. Why not teach people about modern statistical procedures – the likes of which are now commonly used in weather forecasting and in political debates, for example those around economics or immigration?

“It is important, in my opinion, that the general public have an ability to understand such content, in order to make informed decisions,” he explains.

For a technology like AI, which can be very confusing to the average person, it’s hard to sniff at Finland’s well-meaning – and free-to-access – course. Valtonen puts it more bluntly: “AI and technology in general are too important to be left in the hands of programmers.”
 

birdsnestfern
Links for INTPs from INTP Central:
(Some may be outdated now).

http://www.socionics.com/prof/intj.htm


http://www.xeromag.com/fun/personality.html

http://en.wikipedia.org/wiki/Gävle_goat

http://www.myersbriggs.org/


http://personalityjunkie.com/the-intp/


http://personalitycafe.com/intp-forum-thinkers/611-intp-portrait.html




http://www.ptypes.com/


http://forums.intpcentral.com/showthread.php?s=16f603b887b45455c610aa9fa6edc89d&t=582

In Depth Profiles
Do a search for Paul James INTP Profile (it may be in a pdf format).

Portrait of an INTP ..........(personality page) http://www.personalitypage.com/INTP.html
INTP Wikipedia Entry ..........(wikipedia) http://en.wikipedia.org/wiki/Intp

socionics INTp & INTj ..........(socionics)
http://www.socionics.com/prof/intp.htm

Joe Butt's INTP Profile ..........(type logic) http://www.typelogic.com/intp.html
From Conversations with Designer Theorizers ..........(bestfittype.com) http://www.bestfittype.com/intp.html


Rational Portrait of the Architect ..........(personalityzone.com) http://www.keirsey.com/handler.aspx?s=keirsey&f=fourtemps&tab=5&c=overview


Short Profiles & Lists
The Architect ..........(kiersey website)

INTP Traits ..........(team technology) http://www.teamtechnology.co.uk/myers-briggs/intp.htm
INTP: The Innovator ..........(careerfulfillment.com) http://www.careerfulfillment.com/profiles/8_intp_profile.htm
INTP Learning Styles ..........(careerfulfillment.com)
http://www.careerfulfillment.com/learning_styles/lrn_8intp.htm

Jung INTP Type Keyword Lists ..........(similarminds.com) http://similarminds.com/jung/intp.html

Blogs & Articles

How to Find the Emotional Side of INTPs..........(to be intp blog)

How INTPs See Discussion..........(to be intp blog)
http://homepage.mac.com/bahlberg/iblog/B1386252977/C707866389/E1519845737/index.html

How ISTPs Resemble INTPs & INTJs ..........(bestfittype.com)
http://www.bestfittype.com/istp_intpintj.html
INTj or INTp? ..........(socionics)
http://www.socionics.com/articles/intjorintp.htm


Personal Growth for INTPs..........(personality page)

INTP Thinking Patterns..........(from the asim jalis blog)
http://asimjalis.blogspot.com/2004/09/intp-thinking-patterns.html


INTP Careers..........(personality page) http://www.personalitypage.com/INTP_car.html
How INTPs Function on Teams ..........(bestfittype.com) http://www.bestfittype.com/intponateam.html


Relationship Specific
INTPs in Relationships..........(personality page)
http://www.personalitypage.com/INTP_rel.html
INFJ View of Relationships with INTPs.......https://personalityjunkie.com/09/infj-intp-relationships-compatibility-part-i/

How To Keep an INTP Man Happy.......... https://www.typologycentral.com/threads/how-to-keep-an-intp-man-happy.3607/


Iganokami's Guide to the INTP Mate..........(to be intp blog)

INTP Love Tips..........(Only includes INTJ, ENTJ, ENTP and INFJ, 2nd half of the page is advertising a book)
http://www.lovetype.com/intptips.html

Communities
Myspace: INTP Central's Profile http://www.myspace.com/intpcentral
Myspace: INTP Personality Type Group
LiveJournal INTP Community http://community.livejournal.com/jmbt_intp/
Stumble Upon INTP Group http://www.stumbleupon.com/

Everything Else
INTP or INTJ? http://groups.google.com/group/alt.psychology.personality/msg/d722e6b2c2d42e9e?hl=en&lr=&ie=UTF

INTP Writing http://www19.homepage.villanova.edu...urses/common_files/personality_types/intp.htm

INTPs in Relationships..........in an odd slideshow format
 

birdsnestfern
One set of links: (I have many).
http://www.meditationsociety.com/108meds.html
http://www.visitcalifornia.com
http://www.nps.gov/goga/index.htm
http://www.scenic.com/
http://www.pcap.com/tours.htm
http://www.visitlasvegas.com/vegas/index.jsp
http://www.stanford.edu/
http://www.parislasvegas.com/.../hote.../property-home.shtml
http://www.smartbargains.com/
http://www.streetmap.co.uk/map.srf?x=229620&y=682331&z=5&sv=229620,682331&st=OSGrid&lu=N&tl=~&ar=y&bi=~&mapp=map.srf&searchp=ids.srf
http://www-usr.rider.edu/~suler/zenstory/zenstory.html
http://www.cartoonbank.com/
http://www.allyoucanread.com/
http://www.ballarddesigns.com/
http://www.yankeecandle.com/cgi-bin/ycbvp/retail.jsp
http://www.christopherradko.com/
http://www.goodorient.com/
http://www.frontgate.com/
http://www.garnethill.com/
http://www.ghirardellisq.com/
http://www.goodhousekeeping.com/
http://www.ehow.com/
http://www.fao.com/
http://www.hearthsong.com
http://www.vermontcountrystore.com
http://www.jacksonandperkins.com
http://www.jennair.com
http://www.llbean.com/
www.amazon.com
www.landsend.com
www.overstock.com
www.smartbargains.com
http://www.neimanmarcus.com/
http://nordstrom.com/
http://www.msnbc.msn.com/
http://www.rei.com/
http://www.belson.com/
http://aol.com/
http://www.sees.com
http://www.sothebys.com
http://www.sunset.com
http://www.sunset.com/.../easy-fresh-thanksgiving-menus.../
http://www.target.com/
http://www.thirdage.com/
http://www.victoriantradingco.com/
http://www.footwise.com
http://www.6pm.com/
http://www.williams-sonoma.com/
http://www.joann.com
http://www.surlatable.com/
 