
Link dump

Pizzabeak · Prolific Member · Joined Jan 24, 2012 · Messages: 1,986 · #1

Pizzabeak · Prolific Member · Joined Jan 24, 2012 · Messages: 1,986 · #2
Woman hit by self-driving car:



Franken-algorithms: the deadly consequences of unpredictable code

Andrew Smith


The 18th of March 2018 was the day tech insiders had been dreading. That night, a new moon added almost no light to a poorly lit four-lane road in Tempe, Arizona, as a specially adapted Uber Volvo XC90 detected an object ahead. Part of the modern gold rush to develop self-driving vehicles, the SUV had been driving autonomously, with no input from its human backup driver, for 19 minutes. An array of radar and light-emitting lidar sensors allowed onboard algorithms to calculate that, given their host vehicle’s steady speed of 43mph, the object was six seconds away – assuming it remained stationary. But objects in roads seldom remain stationary, so more algorithms crawled a database of recognizable mechanical and biological entities, searching for a fit from which this one’s likely behavior could be inferred.
At first the computer drew a blank; seconds later, it decided it was dealing with another car, expecting it to drive away and require no special action. Only at the last second was a clear identification found – a woman with a bike, shopping bags hanging confusingly from handlebars, doubtless assuming the Volvo would route around her as any ordinary vehicle would. Barred from taking evasive action on its own, the computer abruptly handed control back to its human master, but the master wasn’t paying attention. Elaine Herzberg, aged 49, was struck and killed, leaving more reflective members of the tech community with two uncomfortable questions: was this algorithmic tragedy inevitable? And how used to such incidents would we, should we, be prepared to get?
“In some ways we’ve lost agency. When programs pass into code and code passes into algorithms and then algorithms start to create new algorithms, it gets farther and farther from human agency. Software is released into a code universe which no one can fully understand.”
If these words sound shocking, they should, not least because Ellen Ullman, in addition to having been a distinguished professional programmer since the 1970s, is one of the few people to write revealingly about the process of coding. There’s not much she doesn’t know about software in the wild.
“People say, ‘Well, what about Facebook – they create and use algorithms and they can change them.’ But that’s not how it works. They set the algorithms off and they learn and change and run themselves. Facebook intervene in their running periodically, but they really don’t control them. And particular programs don’t just run on their own, they call on libraries, deep operating systems and so on ...”
What is an algorithm?
Few subjects are more constantly or fervidly discussed right now than algorithms. But what is an algorithm? In fact, the usage has changed in interesting ways since the rise of the internet – and search engines in particular – in the mid-1990s. At root, an algorithm is a small, simple thing; a rule used to automate the treatment of a piece of data. If a happens, then do b; if not, then do c. This is the “if/then/else” logic of classical computing. If a user claims to be 18, allow them into the website; if not, print “Sorry, you must be 18 to enter”. At core, computer programs are bundles of such algorithms. Recipes for treating data. On the micro level, nothing could be simpler. If computers appear to be performing magic, it’s because they are fast, not intelligent.
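The “if/then/else” rule above maps directly onto a few lines of code. Here is a minimal sketch in Python; the age-check website example is the article’s own, and the function and variable names are just illustrative:

```python
def admit_user(claimed_age: int) -> str:
    # The article's rule: if a happens, then do b; if not, then do c.
    if claimed_age >= 18:
        return "Welcome to the website."
    else:
        return "Sorry, you must be 18 to enter"

print(admit_user(21))   # Welcome to the website.
print(admit_user(15))   # Sorry, you must be 18 to enter
```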
Recent years have seen a more portentous and ambiguous meaning emerge, with the word “algorithm” taken to mean any large, complex decision-making software system; any means of taking an array of input – of data – and assessing it quickly, according to a given set of criteria (or “rules”). This has revolutionized areas of medicine, science, transport, communication, making it easy to understand the utopian view of computing that held sway for many years. Algorithms have made our lives better in myriad ways.
Only since 2016 has a more nuanced consideration of our new algorithmic reality begun to take shape. If we tend to discuss algorithms in almost biblical terms, as independent entities with lives of their own, it’s because we have been encouraged to think of them in this way. Corporations like Facebook and Google have sold and defended their algorithms on the promise of objectivity, an ability to weigh a set of conditions with mathematical detachment and absence of fuzzy emotion. No wonder such algorithmic decision-making has spread to the granting of loans/bail/benefits/college places/job interviews and almost anything requiring choice.
We no longer accept the sales pitch for this type of algorithm so meekly. In her 2016 book Weapons of Math Destruction, Cathy O’Neil, a former math prodigy who left Wall Street to teach and write and run the excellent mathbabe blog, demonstrated beyond question that, far from eradicating human biases, algorithms could magnify and entrench them. After all, software is written by overwhelmingly affluent white and Asian men – and it will inevitably reflect their assumptions (Google “racist soap dispenser” to see how this plays out in even mundane real-world situations). Bias doesn’t require malice to become harm, and unlike a human being, we can’t easily ask an algorithmic gatekeeper to explain its decision. O’Neil called for “algorithmic audits” of any systems directly affecting the public, a sensible idea that the tech industry will fight tooth and nail, because algorithms are what the companies sell; the last thing they will volunteer is transparency.
The good news is that this battle is under way. The bad news is that it’s already looking quaint in relation to what comes next. So much attention has been focused on the distant promises and threats of artificial intelligence, AI, that almost no one has noticed us moving into a new phase of the algorithmic revolution that could be just as fraught and disorienting – with barely a question asked.
The algorithms flagged by O’Neil and others are opaque but predictable: they do what they’ve been programmed to do. A skilled coder can in principle examine and challenge their underpinnings. Some of us dream of a citizen army to do this work, similar to the network of amateur astronomers who support professionals in that field. Legislation to enable this seems inevitable.
We might call these algorithms “dumb”, in the sense that they’re doing their jobs according to parameters defined by humans. The quality of result depends on the thought and skill with which they were programmed. At the other end of the spectrum is the more or less distant dream of human-like artificial general intelligence, or AGI. A properly intelligent machine would be able to question the quality of its own calculations, based on something like our own intuition (which we might think of as a broad accumulation of experience and knowledge). To put this into perspective, Google’s DeepMind division has been justly lauded for creating a program capable of mastering arcade games, starting with nothing more than an instruction to aim for the highest possible score. This technique is called “reinforcement learning” and works because a computer can play millions of games quickly in order to learn what generates points. Some call this form of ability “artificial narrow intelligence”, but here the word “intelligent” is being used much as Facebook uses “friend” – to imply something safer and better understood than it is. Why? Because the machine has no context for what it’s doing and can’t do anything else. Neither, crucially, can it transfer knowledge from one game to the next (so-called “transfer learning”), which makes it less generally intelligent than a toddler, or even a cuttlefish. We might as well call an oil derrick or an aphid “intelligent”. Computers are already vastly superior to us at certain specialized tasks, but the day they rival our general ability is probably some way off – if it ever happens. Human beings may not be best at much, but we’re second-best at an impressive range of things.
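The “reinforcement learning” described above, a program that plays huge numbers of games and reinforces whatever earns points, can be sketched in a few lines. This is a generic tabular Q-learning toy on an invented five-state “game”, not DeepMind’s Atari system; the environment, rewards and parameters are all made up for illustration:

```python
import random

# Toy "game": states 0..4 on a line; action 0 = move left, 1 = move right.
# Reaching state 4 scores a point and ends the episode.
N_STATES, ACTIONS = 5, (0, 1)
Q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}
alpha, gamma, epsilon = 0.5, 0.9, 0.1   # learning rate, discount, exploration rate

def step(state, action):
    nxt = max(0, min(N_STATES - 1, state + (1 if action == 1 else -1)))
    reward = 1.0 if nxt == N_STATES - 1 else 0.0
    return nxt, reward, nxt == N_STATES - 1

for episode in range(500):                       # "millions of games", scaled down
    state, done = 0, False
    while not done:
        # epsilon-greedy: mostly repeat what scored points before, sometimes explore
        if random.random() < epsilon:
            action = random.choice(ACTIONS)
        else:
            action = max(ACTIONS, key=lambda a: Q[(state, a)])
        nxt, reward, done = step(state, action)
        best_next = max(Q[(nxt, a)] for a in ACTIONS)
        Q[(state, action)] += alpha * (reward + gamma * best_next - Q[(state, action)])
        state = nxt

print({s: max(ACTIONS, key=lambda a: Q[(s, a)]) for s in range(N_STATES)})
```

After a few hundred episodes the learned policy simply moves right in every state, because that is what scored points; it has no idea what a “game” is, which is exactly the narrowness the article is describing.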
Here’s the problem. Between the “dumb” fixed algorithms and true AI lies the problematic halfway house we’ve already entered with scarcely a thought and almost no debate, much less agreement as to aims, ethics, safety, best practice. If the algorithms around us are not yet intelligent, meaning able to independently say “that calculation/course of action doesn’t look right: I’ll do it again”, they are nonetheless starting to learn from their environments. And once an algorithm is learning, we no longer know to any degree of certainty what its rules and parameters are. At which point we can’t be certain of how it will interact with other algorithms, the physical world, or us. Where the “dumb” fixed algorithms – complex, opaque and inured to real time monitoring as they can be – are in principle predictable and interrogable, these ones are not. After a time in the wild, we no longer know what they are: they have the potential to become erratic. We might be tempted to call these “frankenalgos” – though Mary Shelley couldn’t have made this up.
Clashing codes
These algorithms are not new in themselves. I first encountered them almost five years ago while researching a piece for the Guardian about high frequency trading (HFT) on the stock market. What I found was extraordinary: a human-made digital ecosystem, distributed among racks of black boxes crouched like ninjas in billion-dollar data farms – which is what stock markets had become. Where once there had been a physical trading floor, all action had devolved to a central server, in which nimble, predatory algorithms fed off lumbering institutional ones, tempting them to sell lower and buy higher by fooling them as to the state of the market. Human HFT traders (although no human actively traded any more) called these large, slow participants “whales”, and they mostly belonged to mutual and pension funds – ie the public. For most HFT shops, whales were now the main profit source. In essence, these algorithms were trying to outwit each other; they were doing invisible battle at the speed of light, placing and cancelling the same order 10,000 times per second or slamming so many into the system that the whole market shook – all beyond the oversight or control of humans.
No one could be surprised that this situation was unstable. A “flash crash” had occurred in 2010, during which the market went into freefall for five traumatic minutes, then righted itself over another five – for no apparent reason. I travelled to Chicago to see a man named Eric Hunsader, whose prodigious programming skills allowed him to see market data in far more detail than regulators, and he showed me that by 2014, “mini flash crashes” were happening every week. Even he couldn’t prove exactly why, but he and his staff had begun to name some of the “algos” they saw, much as crop circle hunters named the formations found in English summer fields, dubbing them “Wild Thing”, “Zuma”, “The Click” or “Disruptor”.
Neil Johnson, a physicist specializing in complexity at George Washington University, made a study of stock market volatility. “It’s fascinating,” he told me. “I mean, people have talked about the ecology of computer systems for years in a vague sense, in terms of worm viruses and so on. But here’s a real working system that we can study. The bigger issue is that we don’t know how it’s working or what it could give rise to. And the attitude seems to be ‘out of sight, out of mind’.”
Significantly, Johnson’s paper on the subject was published in the journal Nature and described the stock market in terms of “an abrupt system-wide transition from a mixed human-machine phase to a new all-machine phase characterized by frequent black swan [ie highly unusual] events with ultrafast durations”. The scenario was complicated, according to the science historian George Dyson, by the fact that some HFT firms were allowing the algos to learn – “just letting the black box try different things, with small amounts of money, and if it works, reinforce those rules. We know that’s been done. Then you actually have rules where nobody knows what the rules are: the algorithms create their own rules – you let them evolve the same way nature evolves organisms.” Non-finance industry observers began to postulate a catastrophic global “splash crash”, while the fastest-growing area of the market became (and remains) instruments that profit from volatility. In his 2011 novel The Fear Index, Robert Harris imagines the emergence of AGI – of the Singularity, no less – from precisely this digital ooze. To my surprise, no scientist I spoke to would categorically rule out such a possibility.
All of which could be dismissed as high finance arcana, were it not for a simple fact. Wisdom used to hold that technology was adopted first by the porn industry, then by everyone else. But the 21st century’s porn is finance, so when I thought I saw signs of HFT-like algorithms causing problems elsewhere, I called Neil Johnson again.
“You’re right on point,” he told me: a new form of algorithm is moving into the world, which has “the capability to rewrite bits of its own code”, at which point it becomes like “a genetic algorithm”. He thinks he saw evidence of them on fact-finding forays into Facebook (“I’ve had my accounts attacked four times,” he adds). If so, algorithms are jousting there, and adapting, as on the stock market. “After all, Facebook is just one big algorithm,” Johnson says.
“And I think that’s exactly the issue Facebook has. They can have simple algorithms to recognize my face in a photo on someone else’s page, take the data from my profile and link us together. That’s a very simple concrete algorithm. But the question is what is the effect of billions of such algorithms working together at the macro level? You can’t predict the learned behavior at the level of the population from microscopic rules. So Facebook would claim that they know exactly what’s going on at the micro level, and they’d probably be right. But what happens at the level of the population? That’s the issue.”
To underscore this point, Johnson and a team of colleagues from the University of Miami and Notre Dame produced a paper, Emergence of Extreme Subpopulations from Common Information and Likely Enhancement from Future Bonding Algorithms, purporting to mathematically prove that attempts to connect people on social media inevitably polarize society as a whole. He thinks Facebook and others should model (or be made to model) the effects of their algorithms in the way climate scientists model climate change or weather patterns.
O’Neil says she consciously excluded this adaptive form of algorithm from Weapons of Math Destruction. In a convoluted algorithmic environment where nothing is clear, apportioning responsibility to particular segments of code becomes extremely difficult. This makes them easier to ignore or dismiss, because they and their precise effects are harder to identify, she explains, before advising that if I want to see them in the wild, I should ask what a flash crash on Amazon might look like.
“I’ve been looking out for these algorithms, too,” she says, “and I’d been thinking: ‘Oh, big data hasn’t gotten there yet.’ But more recently a friend who’s a bookseller on Amazon has been telling me how crazy the pricing situation there has become for people like him. Every so often you will see somebody tweet ‘Hey, you can buy a luxury yarn on Amazon for $40,000.’ And whenever I hear that kind of thing, I think: ‘Ah! That must be the equivalent of a flash crash!’”
Anecdotal evidence of anomalous events on Amazon is plentiful, in the form of threads from bemused sellers, and at least one academic paper from 2016, which claims: “Examples have emerged of cases where competing pieces of algorithmic pricing software interacted in unexpected ways and produced unpredictable prices, as well as cases where algorithms were intentionally designed to implement price fixing.” The problem, again, is how to apportion responsibility in a chaotic algorithmic environment where simple cause and effect either doesn’t apply or is nearly impossible to trace. As in finance, deniability is baked into the system.
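What “competing pieces of algorithmic pricing software interact[ing] in unexpected ways” can look like is easy to simulate. A hypothetical sketch: two repricing rules pointed at each other, each sensible on its own, neither aware of the loop they form (the factors and starting prices are invented):

```python
# Seller A always undercuts B slightly; seller B always prices well above A
# (say, because it would have to source the item from A). Neither rule is
# obviously wrong in isolation.
price_a, price_b = 30.00, 32.00
for day in range(25):
    price_a = round(price_b * 0.998, 2)   # undercut the competitor by 0.2%
    price_b = round(price_a * 1.27, 2)    # stay 27% above the competitor
print(price_a, price_b)
```

Within a few dozen iterations the two prices have spiralled far beyond anything a human would have set, which is one plausible mechanism behind a $40,000 skein of yarn.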
Real-life dangers
Where safety is at stake, this really matters. When a Toyota Camry appeared to accelerate wildly for no obvious reason, ran off the road and killed its driver, Nasa experts spent six months examining the millions of lines of code in its operating system, without finding evidence for what the driver’s family believed had occurred, but the manufacturer steadfastly denied – that the car had accelerated of its own accord. Only when a pair of embedded software experts spent 20 months digging into the code were they able to prove the family’s case, revealing a twisted mass of what programmers call “spaghetti code”, full of algorithms that jostled and fought, generating anomalous, unpredictable output. The autonomous cars currently being tested may contain 100m lines of code and, given that no programmer can anticipate all possible circumstances on a real-world road, they have to learn and receive constant updates. How do we avoid clashes in such a fluid code milieu, not least when the algorithms may also have to defend themselves from hackers?
Twenty years ago, George Dyson anticipated much of what is happening today in his classic book Darwin Among the Machines. The problem, he tells me, is that we’re building systems that are beyond our intellectual means to control. We believe that if a system is deterministic (acting according to fixed rules, this being the definition of an algorithm) it is predictable – and that what is predictable can be controlled. Both assumptions turn out to be wrong.
“It’s proceeding on its own, in little bits and pieces,” he says. “What I was obsessed with 20 years ago that has completely taken over the world today are multicellular, metazoan digital organisms, the same way we see in biology, where you have all these pieces of code running on people’s iPhones, and collectively it acts like one multicellular organism.
“There’s this old law called Ashby’s law that says a control system has to be as complex as the system it’s controlling, and we’re running into that at full speed now, with this huge push to build self-driving cars where the software has to have a complete model of everything, and almost by definition we’re not going to understand it. Because any model that we understand is gonna do the thing like run into a fire truck ’cause we forgot to put in the fire truck.”
Unlike our old electro-mechanical systems, these new algorithms are also impossible to test exhaustively. Unless and until we have super-intelligent machines to do this for us, we’re going to be walking a tightrope.
Dyson questions whether we will ever have self-driving cars roaming freely through city streets, while Toby Walsh, a professor of artificial intelligence at the University of New South Wales who wrote his first program at age 13 and ran a tyro computing business by his late teens, explains from a technical perspective why this is.
“No one knows how to write a piece of code to recognize a stop sign. We spent years trying to do that kind of thing in AI – and failed! It was rather stalled by our stupidity, because we weren’t smart enough to learn how to break the problem down. You discover when you program that you have to learn how to break the problem down into simple enough parts that each can correspond to a computer instruction [to the machine]. We just don’t know how to do that for a very complex problem like identifying a stop sign or translating a sentence from English to Russian – it’s beyond our capability. All we know is how to write a more general purpose algorithm that can learn how to do that given enough examples.”
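Walsh’s distinction, writing the rule by hand versus writing a general-purpose algorithm that learns the rule from examples, can be made concrete. The sketch below is a deliberately tiny nearest-neighbour classifier on made-up “sign” features; real systems learn from millions of labelled images, but the shape of the idea is the same:

```python
import math

# Hypothetical toy features for road signs: (redness 0-1, octagonality 0-1).
training_examples = [
    ((0.95, 0.90), "stop"), ((0.90, 0.85), "stop"),
    ((0.15, 0.05), "speed_limit"), ((0.10, 0.10), "speed_limit"),
    ((0.85, 0.10), "yield"), ((0.80, 0.05), "yield"),
]

def classify(features):
    """1-nearest-neighbour: label a new sign by its closest labelled example."""
    return min(training_examples,
               key=lambda ex: math.dist(ex[0], features))[1]

print(classify((0.92, 0.88)))   # -> "stop"
```

Nobody wrote an explicit stop-sign rule here; the behaviour falls out of the labelled examples, which is both the power and the opacity Walsh describes.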
Hence the current emphasis on machine learning. We now know that Herzberg, the pedestrian killed by an automated Uber car in Arizona, died because the algorithms wavered in correctly categorizing her. Was this a result of poor programming, insufficient algorithmic training or a hubristic refusal to appreciate the limits of our technology? The real problem is that we may never know.
“And we will eventually give up writing algorithms altogether,” Walsh continues, “because the machines will be able to do it far better than we ever could. Software engineering is in that sense perhaps a dying profession. It’s going to be taken over by machines that will be far better at doing it than we are.”
Walsh believes this makes it more, not less, important that the public learn about programming, because the more alienated we become from it, the more it seems like magic beyond our ability to affect. When shown the definition of “algorithm” given earlier in this piece, he found it incomplete, commenting: “I would suggest the problem is that algorithm now means any large, complex decision making software system and the larger environment in which it is embedded, which makes them even more unpredictable.” A chilling thought indeed. Accordingly, he believes ethics to be the new frontier in tech, foreseeing “a golden age for philosophy” – a view with which Eugene Spafford of Purdue University, a cybersecurity expert, concurs.
“Where there are choices to be made, that’s where ethics comes in. And we tend to want to have an agency that we can interrogate or blame, which is very difficult to do with an algorithm. This is one of the criticisms of these systems so far, in that it’s not possible to go back and analyze exactly why some decisions are made, because the internal number of choices is so large that how we got to that point may not be something we can ever recreate to prove culpability beyond doubt.”
The counter-argument is that, once a program has slipped up, the entire population of programs can be rewritten or updated so it doesn’t happen again – unlike humans, whose propensity to repeat mistakes will doubtless fascinate intelligent machines of the future. Nonetheless, while automation should be safer in the long run, our existing system of tort law, which requires proof of intention or negligence, will need to be rethought. A dog is not held legally responsible for biting you; its owner might be, but only if the dog’s action is thought foreseeable. In an algorithmic environment, many unexpected outcomes may not have been foreseeable to humans – a feature with the potential to become a scoundrel’s charter, in which deliberate obfuscation becomes at once easier and more rewarding. Pharmaceutical companies have benefited from the cover of complexity for years (see the case of Thalidomide), but here the consequences could be both greater and harder to reverse.
The military stakes
Commerce, social media, finance and transport may come to look like small beer in future, however. If the military no longer drives innovation as it once did, it remains tech’s most consequential adopter. No surprise, then, that an outpouring of concern among scientists and tech workers has accompanied revelations that autonomous weapons are ghosting toward the battlefield in what amounts to an algorithmic arms race. A robotic sharpshooter currently polices the demilitarized zone between North and South Korea, and while its manufacturer, Samsung, denies that it is capable of autonomy, the claim is widely disbelieved. Russia, China and the US all claim to be at various stages of developing swarms of coordinated, weaponized drones, while the latter plans missiles able to hover over a battlefield for days, observing, before selecting their own targets. A group of Google employees resigned over – and thousands more questioned – the tech monolith’s provision of machine learning software to the Pentagon’s Project Maven “algorithmic warfare” program, concerns to which management eventually responded, agreeing not to renew the Maven contract and to publish a code of ethics for the use of its algorithms. At time of writing, competitors including Amazon and Microsoft have resisted following suit.
In common with other tech firms, Google had claimed moral virtue for its Maven software: that it would help choose targets more efficiently and thereby save lives. The question is how tech managers can presume to know what their algorithms will do or be directed to do in situ – especially given the certainty that all sides will develop adaptive algorithmic counter-systems designed to confuse enemy weapons. As in the stock market, unpredictability is likely to be seen as an asset rather than handicap, giving weapons a better chance of resisting attempts to subvert them. In this and other ways we risk in effect turning our machines inside out, wrapping our everyday corporeal world in spaghetti code.
Lucy Suchman of Lancaster University in the UK co-authored an open letter from technology researchers to Google, asking them to reflect on the rush to militarize their work. Tech firms’ motivations are easy to fathom, she says: military contracts have always been lucrative. For the Pentagon’s part, a vast network of sensors and surveillance systems has run ahead of any ability to use the screeds of data so acquired.
“They are overwhelmed by data, because they have new means to collect and store it, but they can’t process it. So it’s basically useless – unless something magical happens. And I think their recruitment of big data companies is a form of magical thinking in the sense of: ‘Here is some magic technology that will make sense of all this.’”
Suchman also offers statistics that shed chilling light on Maven. According to analysis carried out on drone attacks in Pakistan from 2003-13, fewer than 2% of people killed in this way are confirmable as “high value” targets presenting a clear threat to the United States. In the region of 20% are held to be non-combatants, leaving more than 75% unknown. Even if these figures were out by a factor of two – or three, or four – they would give any reasonable person pause.
“So here we have this very crude technology of identification and what Project Maven proposes to do is automate that. At which point it becomes even less accountable and open to questioning. It’s a really bad idea.”
Suchman’s colleague Lilly Irani, at the University of California, San Diego, reminds us that information travels around an algorithmic system at the speed of light, free of human oversight. Technical discussions are often used as a smokescreen to avoid responsibility, she suggests.
“When we talk about algorithms, sometimes what we’re talking about is bureaucracy. The choices algorithm designers and policy experts make are presented as objective, where in the past someone would have had to take responsibility for them. Tech companies say they’re only improving accuracy with Maven – ie the right people will be killed rather than the wrong ones – and in saying that, the political assumption that those people on the other side of the world are more killable, and that the US military gets to define what suspicion looks like, go unchallenged. So technology questions are being used to close off some things that are actually political questions. The choice to use algorithms to automate certain kinds of decisions is political too.”
The legal conventions of modern warfare, imperfect as they might be, assume human accountability for decisions taken. At the very least, algorithmic warfare muddies the water in ways we may grow to regret. A group of government experts is debating the issue at the UN convention on certain conventional weapons (CCW) meeting in Geneva this week.
Searching for a solution
Solutions exist or can be found for most of the problems described here, but not without incentivizing big tech to place the health of society on a par with their bottom lines. More serious in the long term is growing conjecture that current programming methods are no longer fit for purpose given the size, complexity and interdependency of the algorithmic systems we increasingly rely on. One solution, employed by the Federal Aviation Administration in relation to commercial aviation, is to log and assess the content of all programs and subsequent updates to such a level of detail that algorithmic interactions are well understood in advance – but this is impractical on a large scale. Portions of the aerospace industry employ a relatively new approach called model-based programming, in which machines do most of the coding work and are able to test as they go.
Model-based programming may not be the panacea some hope for, however. Not only does it push humans yet further from the process, but Johnson, the physicist, conducted a study for the Department of Defense that found “extreme behaviors that couldn’t be deduced from the code itself” even in large, complex systems built using this technique. Much energy is being directed at finding ways to trace unexpected algorithmic behavior back to the specific lines of code that caused it. No one knows if a solution (or solutions) will be found, but none are likely to work where aggressive algos are designed to clash and/or adapt.
As we wait for a technological answer to the problem of soaring algorithmic entanglement, there are precautions we can take. Paul Wilmott, a British expert in quantitative analysis and vocal critic of high frequency trading on the stock market, wryly suggests “learning to shoot, make jam and knit”. More practically, Spafford, the software security expert, advises making tech companies responsible for the actions of their products, whether specific lines of rogue code – or proof of negligence in relation to them – can be identified or not. He notes that the venerable Association for Computing Machinery has updated its code of ethics along the lines of medicine’s Hippocratic oath, to instruct computing professionals to do no harm and consider the wider impacts of their work. Johnson, for his part, considers our algorithmic discomfort to be at least partly conceptual; growing pains in a new realm of human experience. He laughs in noting that when he and I last spoke about this stuff a few short years ago, my questions were niche concerns, restricted to a few people who pored over the stock market in unseemly detail.
“And now, here we are – it’s even affecting elections. I mean, what the heck is going on? I think the deep scientific thing is that software engineers are trained to write programs to do things that optimize – and with good reason, because you’re often optimizing in relation to things like the weight distribution in a plane, or a most fuel-efficient speed: in the usual, anticipated circumstances optimizing makes sense. But in unusual circumstances it doesn’t, and we need to ask: ‘What’s the worst thing that could happen in this algorithm once it starts interacting with others?’ The problem is we don’t even have a word for this concept, much less a science to study it.”
He pauses for a moment, trying to wrap his brain around the problem.
“The thing is, optimizing is all about either maximizing or minimizing something, which in computer terms are the same. So what is the opposite of an optimization, ie the least optimal case, and how do we identify and measure it? The question we need to ask, which we never do, is: ‘What’s the most extreme possible behavior in a system I thought I was optimizing?’”
Another brief silence ends with a hint of surprise in his voice.
“Basically, we need a new science,” he says.
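Johnson’s “least optimal case” question has a simple computational form: optimization searches for the input that minimizes (or maximizes) a quantity, and the worst case is just the same search run the other way, since maximizing f is the same as minimizing -f. A toy sketch with an invented model, purely to illustrate the habit he says we lack:

```python
def braking_distance(speed_mph, surface_grip):
    """Toy model only, not a real vehicle-dynamics equation."""
    return speed_mph ** 2 / (20 * surface_grip)

scenarios = [(s, g) for s in range(10, 80, 5) for g in (0.3, 0.6, 0.9)]

# The usual habit: find the most benign case we were optimizing for.
best = min(scenarios, key=lambda p: braking_distance(*p))
# Johnson's question: what is the most extreme case the same model allows?
worst = max(scenarios, key=lambda p: braking_distance(*p))

print("best case:", best, "worst case:", worst)
```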
Andrew Smith’s Totally Wired: The Rise and Fall of Joshua Harris and the Great Dotcom Swindle will be published by Grove Atlantic next February
 

Cognisant · Condescending Bastard · Joined Dec 12, 2009 · Messages: 7,966 · #3
Barred from taking evasive action on its own, the computer abruptly handed control back to its human master, but the master wasn’t paying attention.
Yet another tragedy caused by human unreliability, how many deaths will it take before we finally get these beasts off the road?
 

Pizzabeak · Prolific Member · Joined Jan 24, 2012 · Messages: 1,986 · #4
Barred from taking evasive action on its own, the computer abruptly handed control back to its human master, but the master wasn’t paying attention.
Yet another tragedy caused by human unreliability, how many deaths will it take before we finally get these beasts off the road?
True, it's rife with contradiction. The big picture is (or can be) simple, or one or the other, but the detail isn't so black and white due to the granularity of the process. That's logic and human emotion, that "dichotomy" or axes and how to make decisions.


Saudi Arabia to begin construction on $500bn AI city where robots roam the streets
In October 2017 Hanson Robotics’ “Sophia” became the first robot to be granted citizenship when Saudi Arabia formally made her one of theirs at a conference in the nation’s capital, Riyadh. Yesterday, Sophia joined a compatriot research team at the AI for Good Global Summit at the UN Headquarters in Geneva to discuss Saudi Vision 2030, in which the Gulf State charts a shift away from its dependence on oil revenue.

“This change will be powered by big data and artificial intelligence,” said the Kingdom’s Deputy Minister of Technology Industry and Digital Capabilities Dr. Ahmed Al Theneyan.

The jewel of the project is the smart city “NEOM”, an acronym that stands for “New Future” in Arabic. The Saudi government says it will pour US$500 billion into this mega-project, with construction expected to begin in 2020. NEOM will occupy 26,500 sq km (10,230 sq miles), 218 times larger than the city of San Francisco.

This smart city will span the Red Sea, connecting Saudi Arabia with Egypt and North Africa. City residents’ medical files, household electronics, and transportation will all be integrated with IoT systems.

Saudi Arabia is calling for global contractors, and according to media reports Amazon, IBM, and Alibaba are discussing potential partnerships with Kingdom officials. Chinese tech conglomerate Huawei is already committed to training 1,500 local engineers over the next two years.

The busy Saudi booth at the Geneva conference promoted AI not only as the engine driving NEOM, but also as a force to help the Saudi people now.

The 2015 Mina Stampede took the lives of 2,000 pilgrims at Mecca. Umm Al-Qura University professors Anas Basalamah and Saleh Basalamah introduced a research project using computer vision to manage crowd flow near the Kaaba. Deep learning algorithms can count the number of people in a scene with up to 97.2 percent accuracy. A heat map signals a warning when density exceeds 4–5 people per square meter, and the system can also monitor crowd circulation speed for safety purposes.
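The density warning described above is, at its core, simple arithmetic layered on top of the hard part (counting heads). A hypothetical sketch, with invented cell sizes and counts, using the 4 people-per-square-metre threshold quoted in the article:

```python
# Hypothetical head counts per 10 m x 10 m grid cell, as a crowd counter might report.
CELL_AREA_M2 = 100.0
DENSITY_WARNING = 4.0   # people per square metre, per the threshold above

head_counts = {
    "gate_a": 180,
    "courtyard": 520,   # 5.2 people/m2, over the threshold
    "gate_b": 390,
}

for cell, count in head_counts.items():
    density = count / CELL_AREA_M2
    if density >= DENSITY_WARNING:
        print(f"WARNING {cell}: {density:.1f} people per square metre")
```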

In Saudi Arabia one traffic accident occurs every minute, and there are 20 deaths daily on Saudi roads. Professor Basalamah tells Synced that, “computer vision is deployed here to enforce seat belt wearing and spot traffic violations.” His computer vision startup hazen.ai specializes in “building advanced traffic cameras with the capability to detect dangerous driving behavior through video analysis,” and has received a government contract to work on urban safety.



Crowd monitoring tech and heatmaps from hazen.ai
Oil-producing countries are seeking new ways to power their economies, and many are looking to AI. This year, Crown Prince of Dubai Sheikh Hamdan launched a Dubai Future Accelerators (DFA) program that matches government entities with private sector partners to digitalize the government. Dubai Police will use statistical AI systems to support decision-making processes, with the goal of cutting the crime rate by 25 percent by 2021.

The UAE named 27-year-old Omar bin Sultan Al-Olama its Minister of Artificial Intelligence — the world’s first such governmental position — and will host the Middle East’s biggest AI fair this year. “World AI Show” will run April 11–12 in Dubai before moving to Singapore, Mumbai, and Paris. The AI market in the United Arab Emirates is expected to reach $50 billion by 2025.

On NEOM’s announcement, Crown Prince of Saudi Arabia Mohammed bin Salman said the smart city “will allow for a new way of life to emerge that takes into account the ambitions and outlooks of humankind paired with best future technologies and outstanding economic prospects.”

As countries in the Middle East apply their considerable resources to smart/transformative technologies, will NEOM emerge as a new Mecca of AI?
 

Pizzabeak · Prolific Member · Joined Jan 24, 2012 · Messages: 1,986 · #5

The $63 billion, “winner-take-all” global art market, explained.


Why is art so expensive?

Gaby Del Valle


Christie’s, the famed auction house, recently sold an AI-generated painting for $432,500. The piece, titled “Portrait of Edmond Belamy,” was made by Obvious, a French art collective, and sold for roughly 45 times its estimated worth.
The sale was controversial, though not entirely because of the painting’s steep price tag. Paying $450,000 for a buzzy work of art — especially one that may sell well later on — isn’t unheard of in the art world. The most coveted works sell for many times that. Sotheby’s Hong Kong sold a Picasso for $7.79 million in September; a pair of paintings by the late Chinese-French painter Zao Wou-Ki sold for $65.1 million and $11.5 million, respectively, at that same sale. Leonardo da Vinci’s “Salvator Mundi” sold at Christie’s last year for $450 million, making it the most expensive work of art ever sold.
According to a joint report by UBS and Art Basel released in March, the global art market saw $63.7 billion in total sales last year. But that doesn’t mean that most artists see even a small fraction of that money, since the highest-value sales usually involve one wealthy collector putting a highly sought-after work up for auction.
The money generated from that sale, then, goes to the work’s previous owner, not to the artist who made it. (Artists profit off their own work when it’s sold on what’s known as the “primary market,” i.e., directly from a gallery or from the artist herself. When art is sold on the “secondary market,” however — meaning that it’s sold by a collector to another collector, either privately or at an auction — only the seller and, if applicable, the auction house profit.)
Aside from a handful of celebrity artists — Jeff Koons, Damien Hirst, and Yayoi Kusama, to name a few — most living artists’ works will never sell in the six- or seven-figure range. The result of all of this is that a small group of collectors pay astronomical prices for works made by an even smaller group of artists, who are in turn represented by a small number of high-profile galleries. Meanwhile, lesser-known artists and smaller galleries are increasingly being left behind.
Why is art so expensive?
The short answer is that most art isn’t. Pieces sold for six and seven figures tend to make headlines, but most living artists’ works will never sell for that much.
To understand why a few artists are rich and famous, first you need to realize that most of them aren’t and will never be. To break into the art market, an artist first has to find a gallery to represent them, which is harder than it sounds. Henri Neuendorf, an associate editor at Artnet News, told me gallerists often visit art schools’ MFA graduate shows to find young talent to represent. “These shows are the first arena, the first entry point for a lot of young artists,” Neuendorf said.
Some gallerists also look outside the art school crowd, presumably to diversify their representation, since MFAs don’t come cheap. (In 2014, tuitions at the 10 most influential MFA programs cost an average $38,000 per year, meaning a student would have to spend around $100,000 to complete their degree.) That said, the art world remains far from diverse. A 2014 study by the artists collective BFAMFAPhD found that 77.6 percent of artists who actually make a living by selling art are white, as are 80 percent of all art school graduates.
Christie’s sold its first piece of computer-generated art, “Portrait of Edmond Belamy,” for $432,500. Art collective Obvious
Artists who stand out in a graduate show or another setting may go on to have their work displayed in group shows with other emerging artists; if their work sells well, they may get a solo exhibition at a gallery. If their solo exhibition does well, that’s when their career really begins to take off.
Emerging artists’ works are generally priced based on size and medium, Neuendorf said. A larger painting, for example, will usually be priced between $10,000 and $15,000. Works on canvas are priced higher than works on paper, which are priced higher than prints. If an artist is represented by a well-known gallery like David Zwirner or Hauser & Wirth, however, the dealer’s prestige is enough to raise the artist’s sale prices, even if the artist is relatively unknown. In most cases, galleries take a 50 percent cut of the artist’s sales.
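Those figures imply a straightforward split on the primary market. A small worked example (the $12,000 sale price is simply a mid-range figure from the bracket quoted above):

```python
def primary_sale_split(sale_price: float, gallery_cut: float = 0.5):
    """Return (artist share, gallery share) for a primary-market sale."""
    gallery = sale_price * gallery_cut
    return sale_price - gallery, gallery

print(primary_sale_split(12_000))   # (6000.0, 6000.0): the artist nets half
```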
This process is becoming increasingly difficult thanks to the shuttering of small galleries around the world. The UBS and Art Basel report found that more galleries closed than opened in 2017. Meanwhile, large galleries are opening new locations to cater to an increasingly global market.
Olav Velthuis, a professor at the University of Amsterdam who studies sociology in the arts, attributes the shuttering of small galleries to the rise of art fairs like Frieze and Art Basel. In a column for the New York Times, Velthuis wrote that these fairs, which often charge gallerists between $50,000 and $100,000 for booth space, make it incredibly difficult for smaller gallerists to come home with a profit. But since fairs are becoming the preferred way for wealthy collectors to buy art — they can browse art from hundreds of galleries in a single location, all while hobnobbing with other collectors — galleries have no choice but to participate.
Smaller galleries tend to represent emerging artists, putting both the dealer and artist at yet another disadvantage. “The issue is that demand for art is not distributed evenly among all living artists,” Velthuis told me in an email. “Instead, many people are going after a small number of artists. That’s what’s driving up prices.”
Given the subjective nature of art in general and contemporary art in particular, it’s hard for collectors to discern whether an artist is truly good. “The art market functions as a big consensus marketing machine,” said Velthuis. “So what people do is look at quality signals. Those signals can for instance be what an important curator is saying about an artist; if she has exhibitions in museums; if influential collectors are buying his work. Because everybody is, to some extent at the least, looking at the same signals, at one point they start agreeing who are the most desirable artists.”
In other words, some artists’ works are expensive because there’s a consensus in the art world that their works should be expensive. And, Velthuis adds, art “is a market for unique objects,” which adds a sense of scarcity into the mix. There are only a few known da Vinci paintings in existence, some of which belong to museums and are therefore permanently off the market. (It’s a “big taboo” for museums to sell works from their collection, Velthuis told me.) It only makes sense that when a da Vinci is up for auction, someone with the means to pay hundreds of millions of dollars for it will do just that.
Just 0.2 percent of artists have work that sells for more than $10 million, according to the UBS and Art Basel report. But 32 percent of the $63.7 billion in total sales made that year came from works that sold for more than $10 million. An analysis conducted by Artnet last year similarly found that just 25 artists accounted for nearly half of all contemporary auction sales in the first six months of 2017. Only three of those artists were women.
“It definitely is a good example of a winner-take-all market, where revenues and profits are distributed in a highly unequal way,” Velthuis said. “[On] principle, it is not a problem in itself. However, galleries in the middle segment of the market are having a hard time surviving, and if many of them close their doors, that is bad for the ecology of the art world. We should think of ways to let profits at the top trickle down to the middle and bottom.”
Who buys art? The superrich
The 2017 sale of da Vinci’s “Salvator Mundi” reignited discussions about the role of money in the art world. Georgina Adam, an art market expert and author of Dark Side of the Boom: The Excesses of the Art Market in the 21st Century, explained how it’s possible that a single painting could cost more money than most people would ever see in their lifetimes.
“Very rich people, these days, have an astonishing amount of money,” Adam told the Financial Times. A gallerist interviewed in her book explained it this way: If a couple has a net worth of $10 billion and decides to invest 10 percent of that in art, they can buy $1 billion worth of paintings and sculptures.
There are more collectors now than ever before, and those collectors are wealthier than they have ever been. According to Adam’s book, the liberalization of certain countries’ economies — including China, India, and Eastern European countries — led to an art collection boom outside of the US and Western Europe. (The art market is also booming in the Gulf states.) As a result, the market has exploded into what writer Rachel Wetzler described as “a global industry bound up with luxury, fashion, and celebrity, attracting an expanded range of ultra-wealthy buyers who aggressively compete for works by brand-name artists.”
Art isn’t just a luxury commodity; it’s an investment. If collectors invest wisely, the works they buy can be worth much more later on. Perhaps the most famous example of this is Robert Scull, a New York City taxi tycoon who auctioned off pieces from his collection in 1973. One of the works was a painting by Robert Rauschenberg that Scull had bought for just $900 in 1958. It sold for $85,000.
The Price of Everything, a documentary about the role of money in the art world released in October, delves into the Scull auction drama and its aftermath. Art historian Barbara Rose, whose report on the auction for New York magazine was titled “Profit Without Honor,” called that auction a “pivotal moment” in the art world.
“The idea that art was being put on the auction block like a piece of meat, it was extraordinary to me,” Rose said in the film. “I remember that Rauschenberg was there and he was really incensed, because the artists got nothing out of this. … Suddenly there was the realization — because of the prices — that you could make money by buying low and selling high.”
More recently, the 2008 financial crisis was a boon for a few wealthy collectors who gobbled up works that were being sold by their suddenly cash-poor art world acquaintances. For example, billionaire business executive Mitchell Rales and his wife, Emily, added “about 50 works” to their collection in 2009, many of which they purchased at low prices, according to a 2016 Bloomberg report. The Rales family’s collection is now worth more than $1 billion.
“People who were active [buyers] at the time are very happy today,” art adviser Sandy Heller told Bloomberg. “Those opportunities would not have presented themselves without the financial crisis.”
A highly valued work of art is a luxury good, an investment, and, in some cases, a vehicle through which the ultra-wealthy can avoid paying taxes. Until very recently, collectors were able to exploit a loophole in the tax code known as the “like-kind exchange,” which allowed them to defer capital gains taxes on certain sales if the profits generated from those sales were put into a similar investment.
In the case of art sales, that meant that a collector who bought a painting for a certain amount of money — let’s say $1 million — and then sold it for $5 million a few years later didn’t have to pay capital gains taxes if they transferred that $4 million gain into the purchase of another work of art. (The Republican tax bill eliminated this benefit for art collectors, though it continues to benefit real estate developers.)
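The arithmetic of that loophole is simple. Here is a sketch using the article’s $1 million / $5 million example; the 28 percent figure is an assumption (the US long-term capital gains rate on collectibles), used only to show the rough size of the deferred bill:

```python
def like_kind_deferral(purchase_price: float, sale_price: float, tax_rate: float = 0.28):
    """Gain on the sale, and the tax deferred by rolling that gain into another artwork."""
    gain = sale_price - purchase_price
    return gain, gain * tax_rate

gain, deferred = like_kind_deferral(1_000_000, 5_000_000)
print(f"gain ${gain:,.0f}, tax deferred ${deferred:,.0f}")   # gain $4,000,000, tax deferred $1,120,000
```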
A gallery assistant views a painting by Turkish artist Fahrelnissa Zeid, titled Towards a Sky, which sold for £992,750 at Sotheby’s Middle Eastern Art Week in London in April 2017. Anadolu Agency/Getty Images
Collectors can also receive tax benefits by donating pieces from their collection to museums. (Here’s where buying low and donating high is really beneficial, since the charitable deduction would take the current value of the work into account, not the amount the collector originally paid for it.)
Jennifer Blei Stockman, the former president of the Guggenheim and one of the producers of The Price of Everything, told me that galleries often require collectors who purchase new work by prominent artists to eventually make that work available to the public.
“Many galleries are now insisting that they will not sell a work to a private collector unless they either buy a second work and give it to a museum, or promise that the artwork will eventually be given to a museum,” Stockman said. These agreements aren’t legally enforceable, but collectors who want to remain in good standing with galleries tend to keep their word.
Artists’ works don’t necessarily have to end up in publicly-owned museums in order to be seen by the public. Over the past decade, a growing number of ultra-wealthy art collectors have opened private museums in order to show off the works they’ve acquired. Unlike public museums, which are hindered by relatively limited acquisitions budgets — the Louvre’s 2016 budget, for example, was €7.3 million — collectors can purchase just about any work they want for their private museums, provided they have the money. And since these museums are ostensibly open to the public, they come with a slew of tax benefits.
“The rich buy art,” arts writer Julie Baumgardner declared in an Artsy editorial. “And the super-rich, well, they make museums.”
When works sell for millions of dollars, do artists benefit?
Materially speaking, artists only benefit from sales when their works are sold on the primary market, meaning a collector purchased the work from a gallery or, less frequently, from the artist himself. When a work sells at auction, the artist doesn’t benefit at all.
For decades, artists have attempted to correct this by fighting to receive royalties from works sold on the secondary market. Most writers, for example, receive royalties from book sales in perpetuity. But once an artist sells a work to a collector, the collector — and the auction house, if applicable — is the only one who benefits from selling that work at a later date.
In 2011, a coalition of artists, including Chuck Close and Laddie John Dill, filed class-action lawsuits against Sotheby’s, Christie’s, and eBay. Citing the California Resale Royalties Act — which entitled California residents who sold work anywhere in the country, as well as any visual artist selling their work in California, to 5 percent of the price of any resale of their work for more than $1,000 — the artists claimed that eBay and the auction houses had broken state law. But in July, a federal appeals court sided with the sellers, not the artists.
Even if artists don’t make any money from these sales, Stockman told me, they can occasionally benefit in other ways. “Artists do benefit when their pieces sell well at auction, because primary prices are then increased,” Stockman said. “However, when a piece sells at auction or in the secondary market, the artist does not [financially] benefit at all, and that, I know, is very scary and upsetting to many artists.”
Art for everyone else
Taken together, all of these factors paint a troubling picture: Access to art seems to be increasingly concentrated among the superrich. As the rich get richer, collectors are paying increasingly higher prices for works made by a handful of living artists, leaving emerging artists and the galleries that represent them behind. Then there’s the question of who even gets to be an artist. Art school is expensive, and an MFA doesn’t automatically translate to financial success in such a competitive industry.
Jeff Koons’s “Popeye” was purchased for $28 million by billionaire casino tycoon Steve Wynn in 2014. Emmanual Dunand/AFP/Getty Images
There is some pushback to this concentration of the market at the very top — or even to the idea that art is inaccessible to the average person. Emily Kaplan, the vice president of postwar and contemporary sales at Christie’s, told me that the auction house’s day sales are open to the public and often feature works that cost much less than headlines would lead you to believe.
“Christie’s can be seen as an intimidating name for a lot of people, but most of the sales that we do are much lower prices than what gets reported in the news,” said Kaplan. “We have a lot of sales that happen throughout the calendar year in multiple locations, especially postwar and contemporary art. … Works can sell for a couple hundred dollars, one, two, three thousand dollars. It’s a much lower range than people expect.”
Affordable art fairs, which usually sell art for a few thousand dollars, are another alternative for people who want to buy art but can’t spend millions on a single sculpture. Superfine, an art fair founded in 2015, describes itself as a way of bringing art to the people. Co-founders James Miille and Alex Mitow say the fair is a reaction to the inflated prices they saw on the high end of the “insular” art market.
“We saw a rift in the art market between artists and galleries with amazing work who need to sell it to survive, and people who love art and can afford it but weren’t feeling like a part of the game,” Mitow told me in an email. “Most transactions in the art market actually occur at the under $5,000 level, and that’s what we’re publicizing: the movement of real art by real living artists who build a sustainable career, not necessarily outlier superstar artists with sales records that are unattainable for the average — if equally qualified — artist.”
In addition to hosting fairs in New York City, Los Angeles, Miami, and Washington, DC, Superfine sells works through its “e-fair.” In the same vein as more traditional art fairs like Art Basel, Superfine charges artists or gallerists a flat fee for exhibition space, though Superfine’s rates are much lower.
In spite of these efforts to democratize art, though, the overall market is still privileged towards, well, the very privileged. Art patronage has always been a hobby for the very rich, and that’s not going to change any time soon — but the ability to look at beautiful things shouldn’t be limited to those who can afford to buy them.

First AI-generated painting, expected to sell for $35,000, sells for $432,500


Christie’s just sold an AI-generated painting for $432,500. It’s already controversial.

Chavie Lieber (@ChavieLieber, Chavie.Lieber@Vox.com)


From lab-grown diamonds to computer-generated perfumes to gadgets as stylists to synthetic whiskey, it’s hard to find a category of goods today that hasn’t been infiltrated by robots.
The latest industry to get the treatment is art. Last week, British auction house Christie’s sold its first piece of computer-generated art, titled “Portrait of Edmond Belamy.” The piece, which was made by a French art collective named Obvious, sold for a whopping $432,500 — about 45 times its estimated worth — signaling that while there might be those in the art world who will turn their noses up at computer-generated art, there are plenty of others who take it seriously and are willing to pay for it.
The portrait was created via an algorithm, which combed through a collection of historical portraits. Then it generated a portrait of its own, which was printed on canvas. In a blog post discussing the sale, Christie’s wrote that AI could be the future of art, noting that an AI can “model the course of art history,” since it can comb through a chronology of pieces, as if “the whole story of our visual culture were a mathematical inevitability.”
But the painting’s sale brings to light the question of what art is, and what counts as “real” or authentic when algorithms come into the picture — literally.
How did a computer create a piece of art?
“Portrait of Edmond Belamy” was made by Obvious, an AI research studio in Paris that’s run by three 25-year-old researchers named Hugo Caselles-Dupré, Pierre Fautrel, and Gauthier Vernier. Obvious uses a type of AI called a generative adversarial network, or GAN.
It combs through data points — in this case, historical portraits — and then creates new images of its own based on all that it has learned. It’s how IBM is creating perfume using formulas provided by global fragrance company Symrise. It’s also how a data scientist created more than 15,000 AI internet cats via something called a Meow Generator.
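To make the mechanism a little more concrete, here is a minimal sketch of the second half of that process: sampling a new image from an already-trained generator. It is illustrative only; the architecture, latent size, and weights file are hypothetical stand-ins, not Obvious’s actual code, and PyTorch is assumed.

```python
# Illustrative only: sampling one new image from an already-trained GAN generator.
# The architecture, latent size, and weights file below are hypothetical stand-ins.
import torch
import torch.nn as nn

class Generator(nn.Module):
    """Toy generator: maps a random latent vector to a 64x64 grayscale image."""
    def __init__(self, latent_dim=100):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(latent_dim, 256), nn.ReLU(),
            nn.Linear(256, 64 * 64), nn.Tanh(),   # pixel values in [-1, 1]
        )

    def forward(self, z):
        return self.net(z).view(-1, 1, 64, 64)

generator = Generator()
# In practice the weights would come from training on the portrait data set, e.g.:
# generator.load_state_dict(torch.load("portrait_generator.pt"))  # hypothetical file

z = torch.randn(1, 100)           # a random point in the latent space
with torch.no_grad():
    portrait = generator(z)       # one novel "portrait", as a tensor of pixels
print(portrait.shape)             # torch.Size([1, 1, 64, 64])
```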
“Portrait of Edmond Belamy” was created by an AI called a generative adversarial network. Obvious
Caselles-Dupré explained to Christie’s that Obvious “fed the system with a data set of 15,000 portraits painted between the 14th century to the 20th.”
The result is Edmond, a (fictional) man wearing a dark coat with a white collar. “Portrait of Edmond Belamy” looks like it could have been a portrait of some European nobleman you’d see in the Met or the Louvre. Christie’s notes, too, that there’s “something weirdly contemporary” about Edmond, which Caselles-Dupré says is due to the AI having a “distortion” built into its artistic abilities, which is why his face is blurred. The piece has been signed with the mathematical formula used to create it.
Obvious has created 11 portraits total of the fictional Belamy family, who each come with their own somewhat kitschy taglines. Take, for example, Madame De Belamy, who has fair skin and wears a powder blue dress and matching hat and has the tagline “Who said that not having a soul is a default ? It makes me unboundable, adaptative, and reckless.”
All these pieces have blurred faces, like Edmond’s, and are vague enough in appearance that they could come off as nobility from several countries.
Richard Lloyd, the international head of Christie’s print department, believes there’s a big market for AI-built artwork — as demonstrated by the amount of money spent by Edmond’s buyer, who remains anonymous.
“It is a portrait, after all,” Lloyd, who was in charge of the sale, said. “It may not have been painted by a man in a powdered wig, but it is exactly the kind of artwork we have been selling for 250 years.”
Is this really art?
When lab-grown diamonds started hitting the market a few years ago, there was mass uproar, particularly among heavyweights in the industry like De Beers. “Real is rare,” the company insisted, and therefore synthetic diamonds, regardless of their chemical makeup or sparkle, were not to be taken seriously. Even when De Beers eventually announced it was creating lab-grown diamonds earlier this year, the company listed them at costume-jewelry prices, which it apparently hoped would send a message.
The “Portrait of Edmond Belamy” hits a similar vein. Should computer-generated art be considered “real art?” Is it truly creative? Does it hold value beyond what some anonymous bidder wants to drop at Christie’s?
Ahmed Elgammal, the director of the Art and Artificial Intelligence Lab at Rutgers University, who works on GANs, believes AI-created art should be looked at as an artistic craft.
“Yes, if you look just at the form, and ignore the things that art is about, then the algorithm is just generating visual forms and following aesthetic principles extracted from existing art,” he told Christie’s.
“But if you consider the whole process, then what you have is something more like conceptual art than traditional painting. There is a human in the loop, asking questions, and the machine is giving answers. That whole thing is the art, not just the picture that comes out at the end. You could say that at this point it is a collaboration between two artists — one human, one a machine. And that leads me to think about the future in which AI will become a new medium for art.”
There’s already controversy about ownership
With AI on the rise, the art market could soon be flooded with machine-generated pieces. But if the discussion of authenticity isn’t what gets people upset, the issue of ownership certainly might.
In the case of Edmond, for example, there’s the question of who should get the credit. Is the AI that created him and the entire Belamy family considered the artist, or would that be the three AI researchers at Obvious? And if the art is inspired by hundreds of thousands of pre-existing pieces, how much is the process informed by a typical degree of borrowing or inspiration, and how much is just swiping?
This is already a brewing issue. The AI that was used to create “Portrait of Edmond Belamy” wasn’t even written at Obvious, as first reported by The Verge. It was created by Robbie Barrat, a 19-year-old AI artist who’s shared his research openly on the web.
On Twitter, Barrat called out Obvious, writing that he believed the collective “really just used my network and are selling the results.”
While screenshots show Barrat was in contact with Obvious about using his AI, he tweeted that he believed it was being used for “some open source project.” In an email to Vox, Barrat says he’s not coming after Obvious for a share of the $432,500 that the Edmond portrait sold for, but he is still upset about the auction.
“I’m not concerned about getting any money from this: I really just want the legitimate artists working with AI to get attention,” he says. “I feel like the work Christie’s has chosen to auction off is incredibly surface level.”
In a statement to Vox, Obvious wrote that “there are many people experimenting with different ways to use GAN models,” and that “indeed, Robbie Barrat deserves credit, which we gave in our main Medium post as soon as he asked back in April. We also credited him right after the auction.” When asked if it would be sharing its profits, Obvious did not offer comment.
Barrat believes that Obvious’s work with AI in art is sending “the wrong impression.” He says the art world is interested in using “AI as an artist’s tool, and really approach AI in art as something to collaborate with — not subscribing to Obvious’s false narrative of AI as something to replace the role of the artist.”
 

Cognisant

Condescending Bastard
Local time
Today, 01:40
Joined
Dec 12, 2009
Messages
7,966
#6
Art is also a great way to exchange large amounts of money for something that's apparently worthless without anyone getting suspicious.
 

onesteptwostep

Think.. Be... ..buzz buzz :)
Local time
Today, 21:40
Joined
Dec 7, 2014
Messages
2,954
#7
Good art contains the geist of our age though, which is why that AI art cost so much. That art could go into an art history book someday. Predatory lending from banks is the thing we should be shitting on, not art.
 

Cognisant

Condescending Bastard
Local time
Today, 01:40
Joined
Dec 12, 2009
Messages
7,966
#8
It's a fuzzy brown picture that vaguely resembles a person; it has no message and evokes no emotion. Anything can be art, but the quality of that "art" is abysmal.
 
Pizzabeak

Prolific Member
Local time
Today, 04:40
Joined
Jan 24, 2012
Messages
1,986
#9
Nah, art is supposed to theoretically be healing to the human condition, which is why some ancient philosophers suggest we surround ourselves with it to relieve any suffering. It distracts us from the truth, which is that death is inevitable and we have no control over anything in our lives. So one option is to indulge in things like art and philosophy, which by default means you can't let go, and will be stuck in a loop.

Art is more like magic because it influences people, puts them under spells, and controls their mind. The other part is (critical) thinking and being able to think for yourself - are they really separate? What do the results really suggest? If you have maths and logic, then fine art like paintings or opera - which is better? Your mileage may vary. So some art can sometimes take logical rigor and work. It's not all just symbiosis, with artists borrowing concepts from science or other lore to fill in their canvas. The asymmetry comes from the notion that science doesn't really borrow from art to do it. Art seems like less hard work. Scientists feel they get the short end of the stick by doing their job.



I've been googling articles about AI:

In the Age of A.I., Is Seeing Still Believing?
Advances in digital imagery could deepen the fake-news crisis—or help us get out of it.

In 2011, Hany Farid, a photo-forensics expert, received an e-mail from a bereaved father. Three years earlier, the man’s son had found himself on the side of the road with a car that wouldn’t start. When some strangers offered him a lift, he accepted. A few minutes later, for unknown reasons, they shot him. A surveillance camera had captured him as he walked toward their car, but the video was of such low quality that key details, such as faces, were impossible to make out. The other car’s license plate was visible only as an indecipherable jumble of pixels. The father could see the evidence that pointed to his son’s killers—just not clearly enough.
Farid had pioneered the forensic analysis of digital photographs in the late nineteen-nineties, and gained a reputation as a miracle worker. As an expert witness in countless civil and criminal trials, he explained why a disputed digital image or video had to be real or fake. Now, in his lab at Dartmouth, where he was a professor of computer science, he played the father’s video over and over, wondering if there was anything he could do. On television, detectives often “enhance” photographs, sharpening the pixelated face of a suspect into a detailed portrait. In real life, this is impossible. As the video had flowed through the surveillance camera’s “imaging pipeline”—the lens, the sensor, the compression algorithms—its data had been “downsampled,” and, in the end, very little information remained. Farid told the father that the degradation of the image couldn’t be reversed, and the case languished, unsolved.
A few months later, though, Farid had a thought. What if he could use the same surveillance camera to photograph many, many license plates? In that case, patterns might emerge—correspondences between the jumbled pixels and the plates from which they derived. The correspondences would be incredibly subtle: the particular blur of any degraded image would depend not just on the plate numbers but also on the light conditions, the design of the plate, and many other variables. Still, if he had access to enough images—hundreds of thousands, perhaps millions—patterns might emerge.
Such an undertaking seemed impractical, and for a while it was. But a new field, “image synthesis,” was coming into focus, in which computer graphics and A.I. were combined. Progress was accelerating. Researchers were discovering new ways to use neural networks—software systems based, loosely, on the architecture of the brain—to analyze and create images and videos. In the emerging world of “synthetic media,” the work of digital-image creation—once the domain of highly skilled programmers and Hollywood special-effects artists—could be automated by expert systems capable of producing realism on a vast scale.
In a media environment saturated with fake news, such technology has disturbing implications. Last fall, an anonymous Redditor with the username Deepfakes released a software tool kit that allows anyone to make synthetic videos in which a neural network substitutes one person’s face for another’s, while keeping their expressions consistent. Along with the kit, the user posted pornographic videos, now known as “deepfakes,” that appear to feature various Hollywood actresses. (The software is complex but comprehensible: “Let’s say for example we’re perving on some innocent girl named Jessica,” one tutorial reads. “The folders you create would be: ‘jessica; jessica_faces; porn; porn_faces; model; output.’ ”) Around the same time, “Synthesizing Obama,” a paper published by a research group at the University of Washington, showed that a neural network could create believable videos in which the former President appeared to be saying words that were really spoken by someone else. In a video voiced by Jordan Peele, Obama seems to say that “President Trump is a total and complete dipshit,” and warns that “how we move forward in the age of information” will determine “whether we become some kind of fucked-up dystopia.”
Not all synthetic media is dystopian. Recent top-grossing movies (“Black Panther,” “Jurassic World”) are saturated with synthesized images that, not long ago, would have been dramatically harder to produce; audiences were delighted by “Star Wars: The Last Jedi” and “Blade Runner 2049,” which featured synthetic versions of Carrie Fisher and Sean Young, respectively. Today’s smartphones digitally manipulate even ordinary snapshots, often using neural networks: the iPhone’s “portrait mode” simulates what a photograph would have looked like if it had been taken by a more expensive camera. Meanwhile, for researchers in computer vision, A.I., robotics, and other fields, image synthesis makes whole new avenues of investigation accessible.
Farid started by sending his graduate students out on the Dartmouth campus to photograph a few hundred license plates. Then, based on those photographs, he and his team built a “generative model” capable of synthesizing more. In the course of a few weeks, they produced tens of millions of realistic license-plate images, each one unique. Then, by feeding their synthetic license plates through a simulated surveillance camera, they rendered them indecipherable. The aim was to create a Rosetta Stone, connecting pixels to plate numbers.
Next, they began “training” a neural network to interpret those degraded images. Modern neural networks are multilayered, and each layer juggles millions of variables; tracking the flow of information through such a system is like following drops of water through a waterfall. Researchers, unsure of how their creations work, must train them by trial and error. It took Farid’s team several attempts to perfect theirs. Eventually, though, they presented it with a still from the video. “The license plate was like ten pixels of noise,” Farid said. “But there was still a signal there.” Their network was “pretty confident about the last three characters.”
This summer, Farid e-mailed those characters to the detective working the case. Investigators had narrowed their search to a subset of blue Chevy Impalas; the network pinpointed which one. Someone connected to the car turned out to have been involved in another crime. A case that had lain dormant for nearly a decade is now moving again. Farid and his team, meanwhile, published their results in a computer-vision journal. In their paper, they noted that their system was a free upgrade for millions of low-quality surveillance cameras already in use. It was a paradoxical outcome typical of the world of image synthesis, in which unreal images, if they are realistic enough, can lead to the truth.
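The recipe Farid describes (synthesize training data, degrade it the way the real camera would, then train a network to undo the degradation) can be sketched in a few dozen lines. The code below is a toy illustration of that general idea under stated assumptions: PyTorch, a stand-in plate renderer, and a tiny classifier. It is not Farid’s published system.

```python
# Toy illustration of the general recipe (not Farid's published system): render
# synthetic plates, degrade them like a cheap surveillance camera, then train a
# network to read characters from the degraded pixels. PyTorch assumed.
import random
import string
import torch
import torch.nn as nn
import torch.nn.functional as F
from torch.utils.data import DataLoader, Dataset

CHARS = string.ascii_uppercase + string.digits   # 36 possible characters
PLATE_LEN, H, W = 7, 16, 112

def render_plate(text):
    """Stand-in renderer: each character becomes a vertical strip whose intensity
    encodes the character. A real pipeline would rasterize fonts, plate designs,
    lighting, and perspective."""
    img = torch.zeros(1, H, W)
    cw = W // PLATE_LEN
    for i, c in enumerate(text):
        img[:, :, i * cw:(i + 1) * cw] = CHARS.index(c) / (len(CHARS) - 1)
    return img

def degrade(img):
    """Simulate the surveillance camera: heavy downsampling plus sensor noise."""
    small = F.avg_pool2d(img, kernel_size=8)          # 16x112 -> 2x14
    return small + 0.05 * torch.randn_like(small)

class SyntheticPlates(Dataset):
    def __init__(self, n=20_000):
        self.texts = ["".join(random.choices(CHARS, k=PLATE_LEN)) for _ in range(n)]
    def __len__(self):
        return len(self.texts)
    def __getitem__(self, i):
        text = self.texts[i]
        label = torch.tensor([CHARS.index(c) for c in text])
        return degrade(render_plate(text)), label

# A small classifier predicts one of 36 characters at each of the 7 positions.
model = nn.Sequential(nn.Flatten(),
                      nn.Linear((H // 8) * (W // 8), 512), nn.ReLU(),
                      nn.Linear(512, PLATE_LEN * len(CHARS)))
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

for degraded, labels in DataLoader(SyntheticPlates(), batch_size=64, shuffle=True):
    logits = model(degraded).view(-1, PLATE_LEN, len(CHARS))
    loss = loss_fn(logits.transpose(1, 2), labels)    # per-position classification
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```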
Farid is in the process of moving from Dartmouth to the University of California, Berkeley, where his wife, the psychologist Emily Cooper, studies human vision and virtual reality. Their modernist house, perched in the hills above the Berkeley campus, is enclosed almost entirely in glass; on a clear day this fall, I could see through the living room to the Golden Gate Bridge. At fifty-two, Farid is gray-haired, energized, and fit. He invited me to join him on the deck. “People have been doing synthesis for a long time, with different tools,” he said. He rattled off various milestones in the history of image manipulation: the transposition, in a famous photograph from the eighteen-sixties, of Abraham Lincoln’s head onto the body of the slavery advocate John C. Calhoun; the mass alteration of photographs in Stalin’s Russia, designed to purge his enemies from the history books; the convenient realignment of the pyramids on the cover of National Geographic, in 1982; the composite photograph of John Kerry and Jane Fonda standing together at an anti-Vietnam demonstration, which incensed many voters after the Times credulously reprinted it, in 2004, above a story about Kerry’s antiwar activities.
“In the past, anybody could buy Photoshop. But to really use it well you had to be highly skilled,” Farid said. “Now the technology is democratizing.” It used to be safe to assume that ordinary people were incapable of complex image manipulations. Farid recalled a case—a bitter divorce—in which a wife had presented the court with a video of her husband at a café table, his hand reaching out to caress another woman’s. The husband insisted it was fake. “I noticed that there was a reflection of his hand in the surface of the table,” Farid said, “and getting the geometry exactly right would’ve been really hard.” Now convincing synthetic images and videos were becoming easier to make.
Farid speaks with a technologist’s enthusiasm and a lawyer’s wariness. “Why did Stalin airbrush those people out of those photographs?” he asked. “Why go to the trouble? It’s because there is something very, very powerful about the visual image. If you change the image, you change history. We’re incredibly visual beings. We rely on vision—and, historically, it’s been very reliable. And so photos and videos still have this incredible resonance.” He paused, tilting back into the sun and raising his hands. “How much longer will that be true?”
One of the world’s best image-synthesis labs is a seven-minute drive from Farid’s house, on the north side of the Berkeley campus. The lab is run by a forty-three-year-old computer scientist named Alexei A. Efros. Efros was born in St. Petersburg; he moved to the United States in 1989, when his father, a winner of the U.S.S.R.’s top prize for theoretical physics, got a job at the University of California, Riverside. Tall, blond, and sweetly genial, he retains a Russian accent and sense of humor. “I got here when I was fourteen, but, really, one year in the Soviet Union counts as two,” he told me. “I listened to classical music—everything!”
As a teen-ager, Efros learned to program on a Soviet PC, the Elektronika BK-0010. The system stored its programs on audiocassettes and, every three hours, overheated and reset; since Efros didn’t have a tape deck, he learned to code fast. He grew interested in artificial intelligence, and eventually gravitated toward computer vision—a field that allowed him to watch machines think.
In 1998, when Efros arrived at Berkeley for graduate school, he began exploring a problem called “texture synthesis.” “Let’s say you have a small patch of visual texture and you want to have more of it,” he said, as we sat in his windowless office. Perhaps you want a dungeon in a video game to be made of moss-covered stone. Because the human visual system is attuned to repetition, simply “tiling” the walls with a single image of stone won’t work. Efros developed a method for intelligently sampling bits of an image and probabilistically recombining them so that a texture could be indefinitely and organically extended. A few years later, a version of the technique became a tool in Adobe Photoshop called “content-aware fill”: you can delete someone from a pile of leaves, and new leaves will seamlessly fill in the gap.
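A crude version of the idea, sampling patches of the source texture and recombining them so that each new patch agrees with what has already been placed, fits in a short function. The sketch below is far simpler than Efros’s published algorithms or Photoshop’s content-aware fill, but it shows the shape of the computation (NumPy assumed; a border narrower than one patch is left unfilled for brevity).

```python
# Much-simplified patch-based texture synthesis: repeatedly paste patches sampled
# from the source texture, preferring candidates that agree with the already-filled
# overlap region. Far cruder than the published algorithms, but the same spirit.
import numpy as np

def synthesize(texture, out_size=256, patch=32, overlap=8, candidates=200, seed=0):
    rng = np.random.default_rng(seed)
    h, w = texture.shape[:2]
    step = patch - overlap
    out = np.zeros((out_size, out_size) + texture.shape[2:], dtype=texture.dtype)

    def random_patch():
        y = rng.integers(0, h - patch)
        x = rng.integers(0, w - patch)
        return texture[y:y + patch, x:x + patch]

    for y in range(0, out_size - patch + 1, step):
        for x in range(0, out_size - patch + 1, step):
            if y == 0 and x == 0:
                out[:patch, :patch] = random_patch()   # seed the corner at random
                continue
            existing = out[y:y + patch, x:x + patch]
            best, best_err = None, np.inf
            for _ in range(candidates):
                cand = random_patch()
                err = 0.0
                if x > 0:   # left overlap strip is already filled
                    err += np.sum((cand[:, :overlap].astype(float)
                                   - existing[:, :overlap].astype(float)) ** 2)
                if y > 0:   # top overlap strip is already filled
                    err += np.sum((cand[:overlap, :].astype(float)
                                   - existing[:overlap, :].astype(float)) ** 2)
                if err < best_err:
                    best, best_err = cand, err
            out[y:y + patch, x:x + patch] = best
    return out   # a border narrower than one patch may remain unfilled
```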
From the front row of CS 194-26—Image Manipulation and Computational Photography—I watched as Efros, dressed in a blue shirt, washed jeans, and black boots, explained to about a hundred undergraduates how the concept of “texture” could be applied to media other than still images. Efros started his story in 1948, with the mathematician Claude Shannon, who invented information theory. Shannon had envisioned taking all the books in the English language and analyzing them in order to discover which words tended to follow which other words. He thought that probability tables based on this analysis might enable the construction of realistic English sentences.
“Let’s say that we have the words ‘we’ and ‘need,’ ” Efros said, as the words appeared on a large screen behind him. “What’s the likely next word?”
The students murmured until Efros advanced to the next slide, revealing the word “to.”
“Now let’s say that we move our contextual window,” he continued. “We just have ‘need’ and ‘to.’ What’s next?”
“Sleep!” one student said.
“Eat!” another said.


“Eat” appeared onscreen.
“If our data set were a book about the French Revolution, the next word might be ‘cake,’ ” Efros said, chuckling. “Now, what is this? You guys use it all the time.”
“Autocomplete!” a young man said.
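The Shannon-style trick behind autocomplete can be written out in a few lines: count which word follows each pair of words, then sample. The tiny corpus below is a stand-in, chosen to echo the classroom example.

```python
# Minimal Shannon-style text synthesis: count which word follows each pair of words,
# then sample. The corpus here is a tiny stand-in echoing the classroom example.
import random
from collections import defaultdict

corpus = ("we need to eat . we need to sleep . "
          "we need to talk about the revolution .").split()

following = defaultdict(list)
for a, b, c in zip(corpus, corpus[1:], corpus[2:]):
    following[(a, b)].append(c)        # every observed continuation of the pair (a, b)

def generate(first, second, length=12):
    words = [first, second]
    for _ in range(length):
        options = following.get((words[-2], words[-1]))
        if not options:                # dead end: this pair was never seen
            break
        words.append(random.choice(options))
    return " ".join(words)

print(generate("we", "need"))          # e.g. "we need to sleep . we need to eat ..."
```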
Pacing the stage, Efros explained that the same techniques used to create synthetic stonework or text messages could also be used to create synthetic video. The key was to think of movement—the flickering of a candle flame, the strides of a man on a treadmill, the particular way a face changed as it smiled—as a texture in time. “Zzzzt,” he said, rotating his hands in the air. “Into the time dimension.”
A hush of concentration descended as he walked the students through what this meant mathematically. The frames of a video could be seen as links in a chain—and that chain could be looped and crossed over itself. “You’re going to compute transition probabilities between your frames,” he said. Using these, it would be possible to create user-controllable, natural motion.
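In code, the step into the time dimension looks roughly like this: measure how similar each frame is to every other frame, turn those distances into transition probabilities, and random-walk through the frames. The sketch follows the spirit of classic video-texture work rather than the exact formulation taught in the class; the frames array here is synthetic.

```python
# A "texture in time": measure how similar each frame is to every other frame, turn
# the distances into transition probabilities, and random-walk through the frames.
import numpy as np

def video_texture(frames, sigma=None, steps=300, seed=0):
    """frames: array of shape (n_frames, height, width[, channels])."""
    rng = np.random.default_rng(seed)
    n = len(frames)
    flat = frames.reshape(n, -1).astype(float)

    # cost[i, j]: penalty for jumping from frame i to frame j, i.e. how different
    # frame j is from the frame that naturally follows i (frame i + 1).
    dist = np.linalg.norm(flat[:, None, :] - flat[None, :, :], axis=2)  # (n, n)
    cost = dist[1:, :]                        # rows are source frames 0 .. n-2
    sigma = sigma or cost.mean()
    P = np.exp(-cost / sigma)                 # low cost -> high probability
    P /= P.sum(axis=1, keepdims=True)

    sequence, i = [], 0
    for _ in range(steps):
        sequence.append(i)
        i = int(rng.choice(n, p=P[i])) if i < n - 1 else 0  # loop from the last frame
    return sequence                           # frame indices, playable indefinitely

# Synthetic demo frames (a flickering blob), just to show the call:
frames = np.abs(np.sin(np.linspace(0, 8, 40)))[:, None, None] * np.ones((40, 32, 32))
print(video_texture(frames)[:20])
```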
The students, their faces illuminated by their laptops, toggled between their notes and their code. Efros, meanwhile, screened a video on “expression-dependent textures,” created by the team behind “Synthesizing Obama.” Onscreen, a synthetic version of Tom Hanks’s face looked left and right and, at the click of a mouse, expressed various emotions: fear, anger, happiness. The researchers had used publicly available images of Hanks to create a three-dimensional model, or “mesh,” of his face onto which they projected his characteristic expressions. For this week’s homework, Efros concluded, each student would construct a similar system. Half the class groaned; the other half grinned.
Afterward, a crowd gathered around Efros with questions. In my row, a young woman turned to her neighbor and said, “Edge detection is sweet!”
Before arriving in Berkeley, I had written to Shiry Ginosar, a graduate student in Efros’s lab, to find out what it would take to create a synthetic version of me. Ginosar had replied with instructions for filming myself. “For us to be able to generate the back of your head, your profile, your arm moving up and down, etc., we need to have seen you in these positions in your video,” she wrote. For around ten minutes, before the watchful eye of an iPhone, I walked back and forth, spun in circles, practiced my lunges, and attempted the Macarena; my performance culminated in downward dog. “You look awesome ;-),” Ginosar wrote, having received my video. She said it would take about two weeks for a network to learn to synthesize me.
When I arrived, its work wasn’t quite done. Ginosar—a serene, hyper-organized woman who, before training neural networks, trained fighter pilots in simulators in the Israel Defense Forces—created an itinerary to keep me occupied while I waited. In addition to CS 194–26, it included lunch at Momo, a Tibetan curry restaurant, where Efros’s graduate students explained how it had come to pass that undergrads could create, as homework, Hollywood-like special effects.
“In 1999, when ‘The Matrix’ came out, the ideas were there, but the computation was very slow,” Deepak Pathak, a Ph.D. candidate, said. “Now computers are really fast. The G.P.U.s”—graphics processing units, designed to power games like Assassin’s Creed—“are very advanced.”
“Also, everything is open-sourced,” said Angjoo Kanazawa, who specializes in “pose detection”—figuring out, from a photo of a person, how her body is arranged in 3-D space.
“And that’s good, because we want our research to be reproducible,” Pathak said. “The result is that it’s easy for someone who’s in high school or college to run the code, because it’s in a library.”
The acceleration of home computing has converged with another trend: the mass uploading of photographs and videos to the Web. Later, when I sat down with Efros in his office, he explained that, even in the early two-thousands, computer graphics had been “data-starved”: although 3-D modellers were capable of creating photorealistic scenes, their cities, interiors, and mountainscapes felt empty and lifeless. True realism, Efros said, requires “data, data, data” about “the gunk, the dirt, the complexity of the world,” which is best gathered by accident, through the recording of ordinary life.
Today, researchers have access to systems like ImageNet, a site run by computer scientists at Stanford and Princeton which brings together fourteen million photographs of ordinary places and objects, most of them casual snapshots posted to Flickr, eBay, and other Web sites. Initially, these images were sorted into categories (carrousels, subwoofers, paper clips, parking meters, chests of drawers) by tens of thousands of workers hired through Amazon Mechanical Turk. Then, in 2012, researchers at the University of Toronto succeeded in building neural networks capable of categorizing ImageNet’s images automatically; their dramatic success helped set off today’s neural-networking boom. In recent years, YouTube has become an unofficial ImageNet for video. Efros’s lab has overcome the site’s “platform bias”—its preference for cats and pop stars—by developing a neural network that mines, from “life style” videos such as “My Spring Morning Routine” and “My Rustic, Cozy Living Room,” clips of people opening packages, peering into fridges, drying off with towels, brushing their teeth. This vast archive of the uninteresting has made a new level of synthetic realism possible.
On his computer, Efros showed me a photo taken from a bridge in Lyon. A large section of the riverbank—which might have contained cars, trees, people—had been deleted. In 2007, he helped devise a system that rifles through Flickr for similar photos, many of them taken while on vacation, and samples them. He clicked, and the blank was filled in with convincing, synthetic buildings and greenery. “Probably it found photos from a different city,” Efros said. “But, you know, we’re boring. We always build the same kinds of buildings on the same kinds of riverbanks. And then, as we walk over bridges, we all say, along with a thousand other people, ‘Hey, this will look great, let me take a picture,’ and we all put the horizon in the same place.” In 2016, Ira Kemelmacher-Shlizerman, one of the researchers behind “Synthesizing Obama,” applied the same principle to faces. Given your face as input, her system combs the Internet for people who look like you, then combines their features with your own, to show how you’d look if you had curly hair or were a different age.
One of the lessons of image synthesis is that, with enough data, everything becomes texture. Each river and vista has its double, ready to be sampled; there are only so many faces, and your doppelgängers have already uploaded yours. Products are manufactured over and over, and new buildings echo old ones. The idea of texture even extends—“Zzzzt! ”—into the social dimension. Your Facebook news feed highlights what “people like you” want to see. In addition to unearthing similarities, social media creates them. Having seen photos that look a certain way, we start taking them that way ourselves, and the regularity of these photos makes it easier for networks to synthesize pictures that look “right” to us. Talking with Efros, I struggled to come up with an image for this looped and layered interconnectedness, in which patterns spread and outputs are recirculated as inputs. I thought of cloverleaf interchanges, subway maps, Möbius strips.
A sign on the door of Efros’s lab at Berkeley reads “Caution: Deep Nets.” Inside, dozens of workstations are arranged in rows, each its own jumble of laptop, keyboard, monitor, mouse, and coffee mug—the texture of workaholism, iterated. In the back, in a lounge with a pool table, Richard Zhang, a recent Ph.D., opened his laptop to explain the newest developments in synthetic-image generation. Suppose, he said, that you possessed an image of a landscape taken on a sunny day. You might want to know what it would look like in the rain. “The thing is, there’s not just one answer to this problem,” Zhang said. A truly creative network would do more than generate a convincing image. It would be able to synthesize many possibilities—to do for landscapes what Farid’s much simpler system had done for license plates.
Onscreen, Zhang showed me an elaborate flowchart in which neural networks train other networks—an arrangement that researchers call a “generative adversarial network,” or GAN. He pointed to one of the networks: the “generator,” charged with synthesizing, more or less at random, new versions of the landscape. A second network, the “discriminator,” would judge the verisimilitude of those images by comparing them with the “ground truth” of real landscape photographs. The first network riffed; the second disciplined the first. Zhang’s screen showed the system in action. An image of a small town in a valley, on a lake, perhaps in Switzerland, appeared; it was night, and the view was obscured by darkness. Then, image by image, we began to “traverse the latent space.” The sun rose; clouds appeared; the leaves turned; rain descended. The moon shone; fog rolled in; a storm gathered; snow fell. The sun returned. The trees were green, brown, gold, red, white, and bare; the sky was gray, pink, black, white, and blue. “It finds the sources of patterns of variation,” Zhang said. We watched the texture of weather unfold.
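Stripped to its bones, the generator/discriminator game is a two-step training loop. The sketch below is a generic, fully connected GAN in PyTorch, a stand-in for the much larger models the lab actually uses, with random tensors standing in for the “ground truth” photographs.

```python
# Bare-bones generator/discriminator training loop (a generic fully-connected GAN in
# PyTorch, not the lab's actual model). Random tensors stand in for real photographs.
import torch
import torch.nn as nn
from torch.utils.data import DataLoader

latent_dim, image_dim = 100, 64 * 64

generator = nn.Sequential(                       # "riffs": noise -> candidate image
    nn.Linear(latent_dim, 256), nn.ReLU(),
    nn.Linear(256, image_dim), nn.Tanh())
discriminator = nn.Sequential(                   # "disciplines": image -> realness score
    nn.Linear(image_dim, 256), nn.LeakyReLU(0.2),
    nn.Linear(256, 1))

opt_g = torch.optim.Adam(generator.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(discriminator.parameters(), lr=2e-4)
bce = nn.BCEWithLogitsLoss()

# Placeholder "ground truth"; in reality, a data set of real landscape photographs.
real_loader = DataLoader(torch.randn(512, image_dim).clamp(-1, 1), batch_size=64)

for real in real_loader:
    batch = real.size(0)
    ones, zeros = torch.ones(batch, 1), torch.zeros(batch, 1)

    # 1. Teach the discriminator to tell real images from generated ones.
    fake = generator(torch.randn(batch, latent_dim)).detach()
    loss_d = bce(discriminator(real), ones) + bce(discriminator(fake), zeros)
    opt_d.zero_grad(); loss_d.backward(); opt_d.step()

    # 2. Teach the generator to produce images the discriminator accepts as real.
    fake = generator(torch.randn(batch, latent_dim))
    loss_g = bce(discriminator(fake), ones)
    opt_g.zero_grad(); loss_g.backward(); opt_g.step()
```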
In 2016, the Defense Advanced Research Projects Agency (DARPA) launched a program in Media Forensics, or MediFor, focussed on the threat that synthetic media poses to national security. Matt Turek, the program’s manager, ticked off possible manipulations when we spoke: “Objects that are cut and pasted into images. The removal of objects from a scene. Faces that might be swapped. Audio that is inconsistent with the video. Images that appear to be taken at a certain time and place but weren’t.” He went on, “What I think we’ll see, in a couple of years, is the synthesis of events that didn’t happen. Multiple images and videos taken from different perspectives will be constructed in such a way that they look like they come from different cameras. It could be something nation-state driven, trying to sway political or military action. It could come from a small, low-resource group. Potentially, it could come from an individual.”
MediFor has brought together dozens of researchers from universities, tech companies, and government agencies. Collectively, they are creating automated systems based on more than fifty “manipulation indicators.” Their goal is not just to spot fakes but to trace them. “We want to attribute a manipulation to someone, to explain why a manipulation was done,” Turek said. Ideally, such systems would be integrated into YouTube, Facebook, and other social-media platforms, where they could flag synthesized content. The problem is speed. Each day, five hundred and seventy-six thousand hours of video are uploaded to YouTube; MediFor’s systems have a “range of run-times,” Turek said, from less than a second to “tens of seconds” or more. Even after they are sped up, practical questions will remain. How will innocent manipulations be distinguished from malicious ones? Will advertisements be flagged? How much content will turn out to be, to some degree, synthetic?
In his glass-walled living room, Hany Farid and I watched a viral video called “Golden Eagle Snatches Kid,” which appears to show a bird of prey swooping down upon a toddler in a Montreal park. Specialized software, Farid explained, could reveal that the shadows of the eagle and the kid were subtly misaligned. Calling up an image of a grizzly bear, Farid pointed out that, under high magnification, its muzzle was fringed in red and blue. “As light hits the surface of a lens, it bends in proportion to its wavelength, and that’s why you see the fringing,” he explained. These “chromatic aberrations” are smallest at the center of an image and larger toward its edges; when that pattern is broken, it suggests that parts of different photographs have been combined.
There are ways in which digital photographs are more tamper-evident than analog ones. During the manufacturing of a digital camera, Farid explained, its sensor—a complex latticework of photosensitive circuits—is assembled one layer at a time. “You’re laying down loads of material, and it’s not perfectly even,” Farid said; inevitably, wrinkles develop, resulting in a pattern of brighter and dimmer pixels that is unique to each individual camera. “We call it ‘camera ballistics’—it’s like the imperfections in the barrel of a gun,” he said. Modern digital cameras, meanwhile, often achieve higher resolutions by guessing about the light their sensors don’t catch. “Essentially, they cheat,” he said. “Two-thirds of the image isn’t recorded—it’s synthesized!” He laughed. “It’s making shit up, but in a logical way that creates a very specific pattern, and if you edit something the pattern is disturbed.”
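The “camera ballistics” idea can be illustrated with a toy computation: subtract a smoothed copy of an image to isolate its noise residual, average residuals from one camera to estimate its fixed pattern, and correlate. Real PRNU-style forensics is far more careful than this; the sketch below (NumPy and SciPy assumed, with synthetic data) only shows the shape of the check.

```python
# Toy illustration of "camera ballistics": isolate each image's noise residual,
# average residuals from one camera to estimate its fixed sensor pattern, and
# correlate. Real PRNU forensics is far more sophisticated than this sketch.
import numpy as np
from scipy.ndimage import gaussian_filter

def noise_residual(image, sigma=2.0):
    """Whatever survives denoising is mostly sensor noise (plus fine texture)."""
    image = image.astype(float)
    return image - gaussian_filter(image, sigma)

def camera_reference(images):
    """Average many residuals from one camera to estimate its unique pattern."""
    return np.mean([noise_residual(im) for im in images], axis=0)

def correlation(a, b):
    a, b = a - a.mean(), b - b.mean()
    return float(np.sum(a * b) / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

# Synthetic demo: a "camera" that stamps the same hidden pattern on smooth scenes.
rng = np.random.default_rng(0)
pattern = rng.normal(0, 1, (64, 64))                         # the sensor's hidden pattern
def shoot():
    scene = gaussian_filter(rng.normal(128, 40, (64, 64)), 6)  # smooth scene content
    return scene + pattern

reference = camera_reference([shoot() for _ in range(10)])
print(correlation(noise_residual(shoot()), reference))        # high: same camera
print(correlation(noise_residual(rng.normal(128, 40, (64, 64))), reference))  # near zero
```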
Many researchers who study synthesis also study forensics, and vice versa. “I try to be an optimist,” Jacob Huh, a chilled-out grad student in Efros’s lab, told me. He had trained a neural network to spot chromatic aberrations and other signs of manipulation; the network produces “heat maps” highlighting the suspect areas of an image. “The problem is that, if you can spot it, you can fix it,” Huh said. In theory, a forger could integrate his forensic network into a GAN, where—as a discriminator—it could train a generator to synthesize images capable of eluding its detection. For this reason, in an article titled “Digital Forensics in a Post-Truth Age,” published earlier this year in Forensic Science International, Farid argued that researchers need to keep their newest techniques secret for a while. The time had come, he wrote, to balance “scientific openness” against the risk of “fueling our adversaries.”
In Farid’s view, the sheer number of distinctive “manipulation indicators” gives forensics experts a technical edge over forgers. Just as counterfeiters must painstakingly address each security feature on a hundred-dollar bill—holograms, raised printing, color-shifting ink, and so on—so must a media manipulator solve myriad technical problems, some of them statistical in nature and invisible to the eye, in order to create an undetectable fake. Training neural networks to do this is a formidable, perhaps impossible task. And yet, Farid said, forgers have the advantage in distribution. Although “Golden Eagle Snatches Kid” has been identified as fake, it’s still been viewed more than thirteen million times. Matt Turek predicts that, when it comes to images and video, we will arrive at a new, lower “trust point.” “ ‘A picture’s worth a thousand words,’ ‘Seeing is believing’—in the society I grew up in, those were catchphrases that people agreed with,” he said. “I’ve heard people talk about how we might land at a ‘zero trust’ model, where by default you believe nothing. That could be a difficult thing to recover from.”
As with today’s text-based fake news, the problem is double-edged. Having been deceived by a fake video, one begins to wonder whether many real videos are fake. Eventually, skepticism becomes a strategy in itself. In 2016, when the “Access Hollywood” tape surfaced, Donald Trump acknowledged its accuracy while dismissing his statements as “locker-room talk.” Now Trump suggests to associates that “we don’t think that was my voice.”
“The larger danger is plausible deniability,” Farid told me. It’s here that the comparison with counterfeiting breaks down. No cashier opens up the register hoping to find counterfeit bills. In politics, however, it’s often in our interest not to believe what we are seeing.
As alarming as synthetic media may be, it may be more alarming that we arrived at our current crises of misinformation—Russian election hacking; genocidal propaganda in Myanmar; instant-message-driven mob violence in India—without it. Social media was enough to do the job, by turning ordinary people into media manipulators who will say (or share) anything to win an argument. The main effect of synthetic media may be to close off an escape route from the social-media bubble. In 2014, video of the deaths of Michael Brown and Eric Garner helped start the Black Lives Matter movement; footage of the football player Ray Rice assaulting his fiancée catalyzed a reckoning with domestic violence in the National Football League. It seemed as though video evidence, by turning us all into eyewitnesses, might provide a path out of polarization and toward reality. With the advent of synthetic media, all that changes. Body cameras may still capture what really happened, but the aesthetic of the body camera—its claim to authenticity—is also a vector for misinformation. “Eyewitness video” becomes an oxymoron. The path toward reality begins to wash away.
In the early days of photography, its practitioners had to argue for its objectivity. In courtrooms, experts debated whether photos were reflections of reality or artistic products; legal scholars wondered whether photographs needed to be corroborated by witnesses. It took decades for a consensus to emerge about what made a photograph trustworthy. Some technologists wonder if that consensus could be reëstablished on different terms. Perhaps, using modern tools, photography might be rebooted.
Truepic, a startup in San Diego, aims at producing a new kind of photograph—a verifiable digital original. Photographs taken with its smartphone app are uploaded to its servers, where they enter a kind of cryptographic lockbox. “We make sure the image hasn’t been manipulated in transit,” Jeffrey McGregor, the company’s C.E.O., explained. “We look at geolocation data, at the nearby cell towers, at the barometric-pressure sensor on the phone, and verify that everything matches. We run the photo through a bunch of computer-vision tests.” If the image passes muster, it’s entered into the Bitcoin and Ethereum blockchain. From then on, it can be shared on a special Web page that verifies its authenticity. Today, Truepic’s biggest clients are insurance companies, which allow policyholders to take verified photographs of their flooded basements or broken windshields. The software has also been used by N.G.O.s to document human-rights violations, and by workers at a construction company in Kazakhstan, who take “verified selfies” as a means of clocking in and out. “Our goal is to expand into industries where there’s a ‘trust gap,’ ” McGregor said: property rentals, online dating. Eventually, he hopes to integrate his software into camera components, so that “verification can begin the moment photons enter the lens.”
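The core of a verifiable original can be illustrated with ordinary cryptographic hashing: bind the image bytes and the capture metadata into one fingerprint, record the fingerprint somewhere append-only, and recompute it later to prove nothing changed. The sketch below is a generic illustration of that idea, not Truepic’s actual system; the bytes and metadata are made up.

```python
# Generic sketch of a "verifiable original" (not Truepic's actual system): hash the
# image bytes together with capture metadata, record the hash in an append-only
# ledger, and recompute it later to check that nothing has been altered.
import hashlib
import json

def fingerprint(image_bytes, metadata):
    """Bind the pixels and the capture-time readings into one tamper-evident hash."""
    record = json.dumps(metadata, sort_keys=True).encode() + image_bytes
    return hashlib.sha256(record).hexdigest()

# At capture/upload time (placeholder bytes and made-up metadata for illustration):
image_bytes = b"\xff\xd8...raw JPEG bytes..."
meta = {"lat": 37.8716, "lon": -122.2727, "time": "2018-11-02T09:14:00Z",
        "pressure_hpa": 1012.6}
original = fingerprint(image_bytes, meta)
# `original` would then be written to an append-only ledger (e.g. a blockchain entry).

# At verification time, anyone can recompute the fingerprint from the shared file and
# metadata; changing even one pixel (or one metadata field) changes the hash.
assert fingerprint(image_bytes, meta) == original
assert fingerprint(image_bytes + b"tamper", meta) != original
```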
Earlier this year, Danielle Citron and Robert Chesney, law professors at the Universities of Maryland and Texas, respectively, published an article titled “Deep Fakes: A Looming Challenge for Privacy, Democracy, and National Security,” in which they explore the question of whether certain kinds of synthetic media might be made illegal. (One plausible path, Citron told me, is to outlaw synthetic media aimed at inciting violence; another is to adapt the law against impersonating a government official so that it applies to synthetic videos depicting them.) Eventually, Citron and Chesney indulge in a bit of sci-fi speculation. They imagine the “worst-case scenario,” in which deepfakes prove ineradicable and are used for electioneering, blackmail, and other nefarious purposes. In such a world, we might record ourselves constantly, so as to debunk synthetic media when it emerges. “The vendor supplying such a service and maintaining the resulting data would be in an extraordinary position of power,” they write; its database would be a tempting resource for law-enforcement agencies. Still, if it’s a choice between surveillance and synthesis, many people may prefer to be surveilled. Truepic, McGregor told me, had already had discussions with a few political campaigns. “They say, ‘We would use this to just document everything for ourselves, as an insurance policy.’ ”
One evening, Efros and I walked to meet Farid for dinner at a Japanese restaurant near campus. On the way, we talked about the many non-nefarious applications of image synthesis. A robot, by envisioning what it might see around a corner and discovering whether it had guessed right, could learn its way around a building; “pose detection” could allow it to learn motions by observing them. “Prediction is really the hallmark of intelligence,” Efros said, “and we are constantly predicting and hallucinating things that are not actually visible.” In a sense, synthesizing is simply imagining. The apparent paradox of Farid’s license-plate research—that unreal images can help us read real ones—just reflects how thinking works. In this respect, deepfakes were sparks thrown off by the project of building A.I. “When I see a face,” Efros continued, “I don’t know for sure what it looks like from the side. . . .” He paused. “You know what? I think I screwed up.” We had gotten lost.
When we found the restaurant, Farid, who had come on his motorcycle, was waiting for us, wearing a snazzy leather jacket. Efros and Farid—the generator and the discriminator—embraced. They have known each other for a decade.
We took a small table by the window. “What’s really interesting about these technologies is how quickly they went from ‘Whoa, this is really cool’ to ‘Holy crap, this is subverting democracy,’ ” Farid said, over a seaweed salad.
“I think it’s video,” Efros said. “When it was images, nobody cared.”
“Trump is part of the equation, too, right?” Farid asked. “He’s creating an atmosphere where you shouldn’t believe what you read.”
“But Putin—my dear Putin!—his relationship with truth is amazing,” Efros said. “Oliver Stone did a documentary with him, and Putin showed Stone a video of Russian troops attacking ISIS in Syria. Later, it turned out to be footage of Americans in Iraq.” He grimaced, reaching for some sushi. “A lot of it is not faking data—it’s misattribution. On Russian TV, they say, ‘Look, the Ukrainians are bombing Donetsk,’ but actually it’s footage from somewhere else. The pictures are fine. It’s the label that’s wrong.”
Over dinner, Farid and Efros debated the deep roots of the fake-news phenomenon. “A huge part of the solution is dealing with perverse incentives on social media,” Farid said. “The entire business model of these trillion-dollar companies is attention engineering. It’s poison.” Efros wondered if we humans were evolutionarily predisposed to jump to conclusions that confirmed our own views—the epistemic equivalent of content-aware fill.
As another round of beer arrived, Farid told a story. Many years ago, he said, he’d published a paper about a famous photograph of Lee Harvey Oswald. The photograph shows Oswald standing in his back yard, holding the rifle he later used to kill President Kennedy; conspiracy theorists have long claimed that it’s a fake. “It kind of does look fake,” Farid said. The rifle appears unusually long, and Oswald seems to be leaning back into space at an unrealistic angle; in this photograph, but not in others, he has a strangely narrow chin. “We built this 3-D model of the scene,” Farid said, “and it turned out we could explain everything that people thought was wrong—it was just that the light was weird. You’d think people would be, like, ‘Nice job, Hany.’ ”
Efros laughed.
“But no! When it comes to conspiracies, there are the facts that prove our beliefs and the ones that are part of the plot. And so I became part of the conspiracy. At first, it was just me. Then my father sent me an e-mail. He said, ‘Someone sent me a link to an article claiming that you and I are part of a conspiracy together.’ My dad is a research chemist who made his career at Eastman Kodak. Well, it turns out he was at Eastman Kodak at the same time they developed the Zapruder film.”



“Ahhhhh,” Efros said.
For a moment, they were silent. “We’re going to need technological solutions, but I don’t think they’re going to solve the problem,” Farid said. “And I say that as a technologist. I think it’s a societal problem—a human problem.”
On a brisk Friday morning, I walked to Efros’s lab to see my synthetic self. The Berkeley campus was largely empty, and I couldn’t help noticing how much it resembled other campuses—the texture of college is highly consistent. Already, the way I looked at the world was shifting. That morning, on my phone, I’d watched an incredible video in which a cat scaled the outside of an apartment building, reached the tenth floor, then leaped to the ground and scampered away. Automatically, I’d assumed the video was fake. (I Googled; it wasn’t.)
A world saturated with synthesis, I’d begun to think, would evoke contradictory feelings. During my time at Berkeley, the images and videos I saw had come to seem distant and remote, like objects behind glass. Their clarity and perfection looked artificial (as did their gritty realism, when they had it). But I’d also begun to feel, more acutely than usual, the permeability of my own mind. I thought of a famous study in which people saw doctored photographs of themselves. As children, they appeared to be standing in the basket of a hot-air balloon. Later, when asked, some thought they could remember actually taking a balloon ride. It’s not just that what we see can’t be unseen. It’s that, in our memories and imaginations, we keep seeing it.
At a small round table, I sat down with Shiry Ginosar and another graduate student, Tinghui Zhou, a quietly amused man with oblong glasses. They were excited to show me what they had achieved using a GAN that they had developed over the past year and a half, with an undergraduate named Caroline Chan. (Chan is now a graduate student in computer science at M.I.T.)
“O.K.,” Ginosar said. On her laptop, she opened a video. In a box in the upper-left corner of the screen, the singer Bruno Mars wore white Nikes, track pants, and an elaborately striped shirt. Below him, a small wireframe figure imitated his posture. “That’s our pose detection,” she said. The right side of the screen contained a large image of me, also in the same pose: body turned slightly to the side, hips cocked, left arm raised in the air.
Ginosar tapped the space bar. Mars’s hit song “That’s What I Like” began to play. He started dancing. So did my synthetic self. Our shoulders rocked from left to right. We did a semi-dab, and then a cool, moonwalk-like maneuver with our feet.
“Jump in the Cadillac, girl, let’s put some miles on it!” Mars sang, and, on cue, we mimed turning a steering wheel. My synthetic face wore a huge grin.
“This is amazing,” I said.
“Look at the shadow!” Zhou said. It undulated realistically beneath my synthetic body. “We didn’t tell it to do that—it figured it out.” Looking carefully, I noticed a few imperfections. My shirt occasionally sprouted an extra button. My wristwatch appeared and disappeared. But I was transfixed. Had Bruno Mars and I always had such similar hair? Our fingers snapped in unison, on the beat.
Efros arrived. “Oh, very nice!” he said, leaning in close and nodding appreciatively. “It’s very good!”
“The generator tries to make it look real, but it can look real in different ways,” Ginosar explained.
“The music helps,” Efros said. “You don’t notice the mistakes as much.”
The song continued. “Take a look in that mirror—now tell me who’s the fairest,” Mars suggested. “Is it you? Is it me? Say it’s us and I’ll agree!”
“Before Photoshop, did everyone believe that images were real?” Zhou asked, in a wondering tone.
“Yes,” Ginosar said. “That’s how totalitarian regimes and propaganda worked.”
“I think that will happen with video, too,” Zhou said. “People will adjust.”
“It’s like with laser printers,” Efros said, picking up a printout from the table. “Before, if you got an official-looking envelope with an official-looking letter, you’d treat it seriously, because it was beautifully typed. Must be the government, right? Now I toss it out.”
Everyone laughed.
“But, actually, from the very beginning photography was never objective,” Efros continued. “Whom you photograph, how you frame it—it’s all choices. So we’ve been fooling ourselves. Historically, it will turn out that there was this weird time when people just assumed that photography and videography were true. And now that very short little period is fading. Maybe it should’ve faded a long time ago.”
When we’d first spoken on the phone, several weeks earlier, Efros had told me a family story about Soviet media manipulation. In the nineteen-forties and fifties, his grandmother had owned an edition of the Great Soviet Encyclopedia. Every so often, an update would arrive in the mail, containing revised articles and photographs to be pasted over the old ones. “Everyone knew it wasn’t true,” Efros said. “Apparently, that wasn’t the point.”
I mulled this over as I walked out the door, down the stairs, and into the sun. I watched the students pass by, with their identical backpacks, similar haircuts, and computable faces. I took out my phone, found the link to the video, and composed an e-mail to some friends. “This is so great!” I wrote. “Check out my moves!” I hit Send. ♦
 

Pizzabeak

Prolific Member
Local time
Today, 04:40
Joined
Jan 24, 2012
Messages
1,986
#10

When algorithms go wrong we need more power to fight back, say AI researchers


theverge.com

When algorithms go wrong we need more power to fight back, say AI researchers

James Vincent @jjvincent

6-7 minutes

Governments and private companies are deploying AI systems at a rapid pace, but the public lacks the tools to hold these systems accountable when they fail. That’s one of the major conclusions in a new report issued by AI Now, a research group home to employees from tech companies like Microsoft and Google and affiliated with New York University.
The report examines the social challenges of AI and algorithmic systems, homing in on what researchers call “the accountability gap” as this technology is integrated “across core social domains.” They put forward ten recommendations, including calling for government regulation of facial recognition (something Microsoft president Brad Smith also advocated for this week) and “truth-in-advertising” laws for AI products, so that companies can’t simply trade on the reputation of the technology to sell their services.
Big tech companies have found themselves in an AI gold rush, charging into a broad range of markets from recruitment to healthcare to sell their services. But, as AI Now co-founder Meredith Whittaker, leader of Google’s Open Research Group, tells The Verge, “a lot of their claims about benefit and utility are not backed by publicly accessible scientific evidence.”
Whittaker gives the example of IBM’s Watson system, which, during trial diagnoses at Memorial Sloan Kettering Cancer Center, gave “unsafe and incorrect treatment recommendations,” according to leaked internal documents. “The claims that their marketing department had made about [their technology’s] near-magical properties were never substantiated by peer-reviewed research,” says Whittaker.
The authors of AI Now’s report say this incident is just one of a number of “cascading scandals” involving AI and algorithmic systems deployed by governments and big tech companies in 2018. Others range from accusations that Facebook helped facilitate genocide in Myanmar, to the revelation that Google is helping to build AI tools for military drones as part of Project Maven, to the Cambridge Analytica scandal.
In all these cases there has been public outcry as well as internal dissent in Silicon Valley’s most valuable companies. The year saw Google employees quitting over the company’s Pentagon contracts, Microsoft employees pressuring the company to stop working with Immigration and Customs Enforcement (ICE), and employee walkouts from Google, Uber, eBay, and Airbnb protesting issues involving sexual harassment.
Whittaker says these protests, supported by labor alliances and research initiatives like AI Now’s own, have become “an unexpected and gratifying force for public accountability.”
This year saw widespread protests against the use of AI, including Google’s involvement in building drone surveillance technology. Photo by John Moore/Getty Images
But the report is clear: the public needs more. The danger to civic justice is especially clear when it comes to the adoption of automated decision systems (ADS) by the government. These include algorithms used for calculating prison sentences and allotting medical aid. Usually, say the report’s authors, software is introduced into these domains with the purpose of cutting costs and increasing efficiency. But the result is often systems that make decisions which cannot be explained or appealed.
AI Now’s report cites a number of examples, including that of Tammy Dobbs, an Arkansas resident with cerebral palsy who had her Medicaid-provided home care cut from 56 hours to 32 hours a week without explanation. Legal Aid successfully sued the State of Arkansas and the algorithmic allocation system was judged to be unconstitutional.
Whittaker and fellow AI Now co-founder Kate Crawford, a researcher at Microsoft, say the integration of ADS into government services has outpaced our ability to audit these systems. But, they say, there are concrete steps that can be taken to remedy this. These include requiring technology vendors which sell services to the government to waive trade secrecy protections, thereby allowing researchers to better examine their algorithms.
“You have to be able to say, ‘you’ve been cut off from Medicaid, here’s why,’ and you can’t do that with black box systems,” says Crawford. “If we want public accountability we have to be able to audit this technology.”
Another area where action is needed immediately, say the pair, is the use of facial recognition and affect recognition. The former is increasingly being used by police forces in China, the US, and Europe. Amazon’s Rekognition software, for example, has been deployed by police in Orlando and Washington County, even though tests have shown that the software can perform differently across different races. In a test where Rekognition was used to identify members of Congress, it had an error rate of 39 percent for non-white members compared with only 5 percent for white members. And for affect recognition, where companies claim technology can scan someone’s face and read their character and even intent, AI Now’s authors say companies are often peddling pseudoscience.
Despite these challenges, though, Whittaker and Crawford say that 2018 has shown that when the problems of AI accountability and bias are brought to light, tech employees, lawmakers, and the public are willing to act rather than acquiesce.
With regards to the algorithmic scandals incubated by Silicon Valley’s biggest companies, Crawford says: “Their ‘move fast and break things’ ideology has broken a lot of things that are pretty dear to us and right now we have to start thinking about the public interest.”
Says Whittaker: “What you’re seeing is people waking up to the contradictions between the cyber-utopian tech rhetoric and the reality of the implications of these technologies as they’re used in everyday life.”
 