Amazon Imagines a Future of Infinite Computing Power

David Limp speaks with WIRED editor-in-chief Nicholas Thompson.
When David Limp thinks about the future of Alexa, the AI assistant he oversees at Amazon, he imagines a world not unlike Star Trek—a future in which you could be anywhere, asking anything, and an ambient computer would be there to fulfill your every need.
“Imagine a world in the not-so-distant future where you could have infinite computing power and infinite storage,” Limp said today at WIRED’s 2017 Business Conference in New York. “If you take off the constraints of servers and building up infrastructure, what could you do?”
Limp, who has worked at Amazon since 2010 as the senior vice president for devices, sees Alexa as a critical part of this future. Already, you can shout “Hey, Alexa,” and get the assistant to tell you the weather forecast, turn off the lights, hail an Uber, or do any of the thousands of other things that Amazon and developers have trained it to do. But Limp says there’s still plenty more work to be done before we live in the AI-assisted future he thinks about every day, and much of that effort has to do with training machines to better understand humans.
Since Alexa made its debut in 2014, the virtual assistant has taken up residence in dedicated devices like the Echo, Tap, Echo Dot, Echo Look, and Echo Show, as well as dozens of other supported devices. All that interaction with humans has given Alexa plenty of voice data to parse through—data that’s helped train the assistant to understand preferences, recognize different accents, even figure out the intent of a request without specific keywords. A year ago, if you’d told Alexa to “order a car,” it wouldn’t have understood what you meant. (What, like, order one from the Amazon Marketplace?) Now, through improved machine learning, Alexa knows what you mean and will prompt you to enable an Uber or Lyft skill so that it can summon your ride.
Of course, Alexa is far from perfect. Limp says one near-term goal is improving Alexa’s understanding of anaphora—so if you ask, “Who’s the president of the United States?” and then follow up by asking, “How old is he?” Alexa knows you’re still talking about Donald Trump. Amazon is also tinkering with Alexa’s short-term and long-term memory, so that the bot can recall context from yesterday’s conversation as well as the thing you asked it five seconds ago.
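To make the idea concrete, here is a minimal sketch of how a dialog system might resolve a follow-up pronoun against the previous turn. It is an illustration of the general technique, not Amazon’s implementation; the entity store and the pronoun list are assumptions.

```python
# Minimal anaphora-resolution sketch: remember the entity from the
# previous answer and substitute it for a pronoun in the follow-up.
PRONOUNS = {"he", "she", "it", "they"}

class DialogContext:
    def __init__(self):
        self.last_entity = None  # entity mentioned in the previous turn

    def remember(self, entity: str):
        self.last_entity = entity

    def resolve(self, utterance: str) -> str:
        words = utterance.lower().rstrip("?").split()
        resolved = [
            self.last_entity if w in PRONOUNS and self.last_entity else w
            for w in words
        ]
        return " ".join(resolved) + "?"

ctx = DialogContext()
ctx.remember("Donald Trump")          # answer to "Who's the president?"
print(ctx.resolve("How old is he?"))  # -> "how old is Donald Trump?"
```

A production assistant would track many candidate entities with salience scores and gender/number agreement, but the core move is the same: carry state across turns.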
Those changes involve a shift toward making devices that aren’t personal but can work for everyone. Think more like a wall clock in the kitchen, which everyone in a household can glance at to get the time, rather than a smartphone, which is designed for one person to use.
“As we design the interfaces for Alexa, whether voice or graphical, it’s about making it ambient and so that anybody can use it,” Limp said on stage. “If you ask for a timer and I ask for a timer, they’re both going to work.”
In a world where devices will surround people all the time, those gadgets will have to understand what humans mean, however they choose to say it. For anyone who uses Alexa, that education is already under way: Every time someone talks to their Echo, the world inches a little bit closer to that Starship Enterprise future Limp imagines.
Author: Klint Finley, Business
Date of Publication: 06.07.17, 2:19 pm
How Google Copes When Even It Can’t Afford Enough Gear
Urs Hölzle and Google’s AI chip, the TPU 2.0. Cole Wilson for WIRED
Urs Hölzle has a big job. As senior vice president of technical infrastructure at Google, he’s in charge of the hundreds of thousands of servers in data centers spread across the planet that power the company’s ever-widening range of services.
He’s also the person that the company’s engineers turn to when all that computing power turns out not to be enough.
Today at the 2017 WIRED Business Conference in New York, Hölzle explained that even with its enormous resources, Google has had to find ways to economize its operations in order to meet its ambitious goals. Most recently, he said, the company was forced to start building its own artificial intelligence chips because its existing infrastructure just wouldn’t cut it.
Around five years ago, Jeff Dean, who ran Google’s artificial intelligence group, realized that his team’s technique for speech recognition was getting really good. So good, in fact, that he thought it was ready to move from the lab to the real world by powering Android’s voice-control system.
But when Dean and Hölzle ran the numbers, they realized that if every Android user in the world used about three minutes of voice recognition time per day, Google would need twice as much computing power to handle it all. The world’s largest computing infrastructure, in other words, would have to double in size.
“Even for Google that is not something you can afford, because Android is free, Android speech recognition is free, and you want to keep it free, and you can’t double your infrastructure to do that,” Hölzle says.
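The back-of-envelope math behind that claim is easy to reproduce. The figures below are hypothetical stand-ins (Android’s user count and the compute cost of speech recognition were not disclosed); the point is how quickly a few minutes per user multiply into data-center scale.

```python
# Hypothetical capacity estimate. Every number here is an illustrative
# guess, not Google's actual figure.
android_users = 1_000_000_000       # assume ~1 billion active devices
minutes_per_user_per_day = 3        # Dean and Hoelzle's scenario
compute_sec_per_audio_sec = 1.0     # assume real-time DNN inference cost

audio_seconds_per_day = android_users * minutes_per_user_per_day * 60
cpu_core_seconds = audio_seconds_per_day * compute_sec_per_audio_sec

# Cores needed if the load were spread evenly across the day.
cores_needed = cpu_core_seconds / 86_400
print(f"~{cores_needed:,.0f} cores running flat out, around the clock")
# -> ~2,083,333 cores: data-center scale, for a single free feature.
```

Real traffic peaks rather than spreading evenly, so the true provisioning number would be higher still, which is the shape of the problem Hölzle describes.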
What Google decided to do instead, Hölzle said, was create a whole new type of chip specialized exclusively for machine learning. He likens traditional CPU chips to everyday cars—they have to do a lot of things relatively well to make sure you get where you’re going. An AI chip, on the other hand, has to do just one thing exceptionally well.
“What we built was the equivalent of a drag race car, it can only do one thing, go straight as fast as it can,” he says. “Everything else it is really, really bad at, but this one thing it is super good at.”
Google’s custom chips could handle AI tasks far more efficiently than traditional chips, which meant the company could support not just voice recognition but a broad range of other tasks as well, without breaking the bank.
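The trade-off Hölzle describes can be illustrated in a few lines: neural-network inference is dominated by matrix multiplication, and an accelerator that commits to low-precision matrix math can skip most of what a general-purpose CPU carries around. The sketch below mimics that specialization in NumPy with 8-bit integers; the quantization scheme is a simplification for illustration, not the TPU’s actual design.

```python
import numpy as np

# Inference is mostly matrix multiplies. A specialized chip commits to
# that one operation at reduced precision; we mimic it by quantizing
# float32 weights and activations to int8 before multiplying.
def quantize(x: np.ndarray, scale: float) -> np.ndarray:
    return np.clip(np.round(x / scale), -128, 127).astype(np.int8)

rng = np.random.default_rng(0)
weights = rng.normal(size=(256, 256)).astype(np.float32)
activations = rng.normal(size=(256,)).astype(np.float32)

w_scale, a_scale = 0.05, 0.05
w_q = quantize(weights, w_scale)
a_q = quantize(activations, a_scale)

# int8 x int8 accumulated in int32 (the "drag racer" operation),
# then rescaled back to floating point.
out_q = w_q.astype(np.int32) @ a_q.astype(np.int32)
out = out_q * (w_scale * a_scale)

reference = weights @ activations
print("max abs error vs. float32:", np.max(np.abs(out - reference)))
```

Accepting that small numerical error is the bargain: narrower arithmetic units mean far more of them per chip, which is where the efficiency comes from.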
Pattern Recognition
This pattern has repeated itself again and again during Hölzle’s time at Google. He says that when he started at the company in 1999 (he was somewhere between the seventh and 11th employee hired by Google, depending on how you count), Google only had around 50 servers and was straining to support the number of search queries it received each day. But even with $25 million in venture funding, the company couldn’t afford to buy enough ready-made servers to meet its growing demand.
“If we had done it with the machines, the servers, that people were using, professional servers, real servers, that would have blown our $25 million in an instant,” he says. “It really was not an option, so we were forced to look for other ways to do the same thing more cheaply.”
Hacking Online Hate Means Talking to the Humans Behind It
Yasmin Green, right, speaks with WIRED’s Issie Lapowsky. Cole Wilson for WIRED
Yasmin Green leads a team at Google’s parent company with an audacious goal: solving the thorniest geopolitical problems that emerge online. Jigsaw, where she is the head of research and development, is a think tank within Alphabet tasked with fighting the unintended, unsavory consequences of technological progress. Green’s radical strategy for tackling the dark side of the web? Talk directly to the humans behind it.
That means listening to fake news creators, jihadis, and cyber bullies so that she and her team can understand their motivations, processes, and goals. “We look at censorship, cybersecurity, cyberattacks, ISIS—everything the creators of the internet did not imagine the internet would be used for,” Green said today at WIRED’s 2017 Business Conference in New York.
Last week, Green traveled to Macedonia to meet with peddlers of fake news, those click-hungry opportunists who had such sway over the 2016 US presidential election. Her goal was to understand the business model of fake news dissemination so that she and her team can create algorithms to identify the process and disrupt it. She learned that these content farms utilize social media and online advertising—the same tools used by legit online publishers. “[The problem of fake news] starts off in a way that algorithms should be able to detect,” she said. Her team is now working on a tool that could be shared across Google as well as competing platforms like Facebook and Twitter to thwart that system.
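Green didn’t detail the detection approach, but the signal she describes, coordinated ad-driven distribution, lends itself to simple heuristics. Here is a hypothetical sketch: flag URLs that many distinct accounts share within a short burst, a pattern more typical of content farms than of organic readership. The thresholds and data shape are invented for illustration.

```python
from collections import defaultdict

# Hypothetical burst detector: flag URLs shared by many distinct
# accounts inside a short window. Thresholds are illustrative only.
WINDOW_SECONDS = 600
MIN_ACCOUNTS = 50

def flag_coordinated_shares(shares):
    """shares: iterable of (timestamp_sec, account_id, url), time-sorted."""
    window = defaultdict(list)  # url -> [(timestamp, account), ...]
    flagged = set()
    for ts, account, url in shares:
        window[url].append((ts, account))
        # Keep only events still inside the sliding window.
        window[url] = [(t, a) for t, a in window[url]
                       if ts - t <= WINDOW_SECONDS]
        if len({a for _, a in window[url]}) >= MIN_ACCOUNTS:
            flagged.add(url)
    return flagged
```

A deployed system would need to separate this from legitimate virality, which is exactly why Green’s team studies the farms’ actual workflows first.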
‘It’s mostly good people making bad decisions who join violent extremist groups.’
Along with fake news, Jigsaw is intensely focused on combating online pro-terror propaganda. Last year, Green and her team traveled to Iraq to speak directly to ex-ISIS recruits. The conversations led to a tool called the Redirect Method, which uses machine learning to detect extremist sympathies based on search patterns. Once detected, the Redirect Method serves these users videos that show the ugly side of ISIS—a counternarrative to the allure of the ideology. By the time someone is buying a ticket to join the caliphate, she said, it’s too late.
“It’s mostly good people making bad decisions who join violent extremist groups,” Green says. “So the job was: let’s respect that these people are not evil and they are buying into something, and let’s use the power of targeted advertising to reach them, the people who are sympathetic but not sold.”
Since its launch last year, 300,000 people have watched videos served up by the Redirect Method—a total of more than half a million minutes, Green said.
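At its core, the targeting step pairs at-risk search queries with counter-narrative video playlists. The sketch below reduces that idea to its simplest form: match an incoming query against a curated phrase list and return a redirect target. The phrases and playlist names are placeholders, not Jigsaw’s actual data, and the real system layers machine-learned models on top of curation.

```python
# Bare-bones sketch of the Redirect Method's targeting step: match
# sympathetic-but-not-sold queries to counter-narrative playlists.
# All phrases and playlist URLs below are invented placeholders.
REDIRECT_PLAYLISTS = {
    "join the caliphate": "playlist://defector-testimonies",
    "life under isis": "playlist://daily-life-documentaries",
}

def redirect_target(query: str):
    q = query.lower()
    for phrase, playlist in REDIRECT_PLAYLISTS.items():
        if phrase in q:
            return playlist  # serve a counternarrative alongside results
    return None  # no intervention for ordinary queries

print(redirect_target("what is life under ISIS really like"))
# -> playlist://daily-life-documentaries
```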
Beyond fake news and extremism, Green’s team has also created a tool to target toxic speech in comment sections on news organizations’ sites. They created Perspective, a machine-learning algorithm that uses context and sentiment training to detect potential online harassment and alert moderators to the problem. The beta version is being used by the likes of the New York Times. But as Green explained, it’s a constantly evolving tool. One potential worry is that it could itself be biased against certain words, ideas, even tones of speech. Another is misuse: Jigsaw decided not to open up the API to allow others to set the parameters themselves, fearing that an authoritarian regime might use the tool for full-on censorship.
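Because Perspective surfaces toxicity as a score, a newsroom’s integration reduces to thresholding. The sketch below assumes a hypothetical score_toxicity function standing in for the model call; the threshold and the queueing logic are the moderator’s editorial policy, not part of the tool itself.

```python
# Moderation-hook sketch: hold comments above a toxicity threshold for
# human review. score_toxicity() is a hypothetical stand-in for the
# model call; the 0.8 threshold is an invented editorial policy.
REVIEW_THRESHOLD = 0.8

def score_toxicity(text: str) -> float:
    # Placeholder heuristic; a real deployment would query the model.
    return 0.9 if "idiot" in text.lower() else 0.1

def triage(comment: str, review_queue: list) -> bool:
    """Returns True if the comment is published immediately."""
    if score_toxicity(comment) >= REVIEW_THRESHOLD:
        review_queue.append(comment)  # alert moderators, hold the comment
        return False
    return True

queue = []
print(triage("Great reporting, thanks!", queue))  # True: published
print(triage("You're an idiot.", queue))          # False: held for review
```

Keeping the threshold on the publisher’s side, rather than letting outsiders retune the model, is one of the guardrails Green describes.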
“We have to take measures to keep these tools from being misused,” she said. Just like the internet itself, which has been used in destructive ways its creators could never have imagined, Green is aware that the solutions her team creates could also be abused. That risk is always on her mind, she says. But it’s not a reason to stop trying.
