DeepMind’s New Way to Think About the Brain Could Improve How AI Makes Plans
DeepMind thinks that we imagine the future so well because part of our brain creates efficient summaries of how the future could play out.
For all of the recent advances in AI, machines still struggle to plan effectively in situations where even a few sequential steps cause an explosion in complexity. We’ve seen that in AI’s struggle to master, say, the computer game StarCraft. In contrast, humans are pretty good at it: chances are you can quickly imagine how to handle a whole set of different scenarios for getting dinner if, say, the bodega is closed on your journey home from work.
Now, in a paper published in Nature Neuroscience, a team of researchers from Google’s AI division draws parallels between reinforcement learning—the field of machine learning where an AI learns to perform a task through trial and error by being rewarded when it does it correctly—and the brain’s hippocampus, to understand why humans have that edge.
While the hippocampus is usually thought to deal with a human’s current situation, DeepMind proposes that it actually makes predictions about the future, too. From a blog post describing the new work:
We argue that the hippocampus represents every situation—or state—in terms of the future states which it predicts. For example, if you are leaving work (your current state) your hippocampus might represent this by predicting that you will likely soon be on your commute, picking up your kids from school or, more distantly, at home. By representing each current state in terms of its anticipated successor states, the hippocampus conveys a compact summary of future events. We suggest that this specific form of predictive map allows the brain to adapt rapidly in environments with changing rewards, but without having to run expensive simulations of the future.
Of course, it’s not clear that this is the case. Nor is it clear that this alone is what makes humans good at planning. But DeepMind plans to find out whether its new theory could help AIs plan more efficiently by applying a mathematical implementation of the idea—where each future state can be assigned its own reward in order to calculate an optimal decision—inside neural networks. And if it works, the machines may just get a little bit better at thinking ahead.
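The “predictive map” DeepMind describes corresponds to what reinforcement-learning researchers call the successor representation. A minimal sketch of the idea, using a hypothetical four-state commute (the states, transitions, and rewards here are illustrative, not taken from the paper):

```python
import numpy as np

# Successor representation (SR): M[s, s'] is the expected discounted
# number of future visits to state s' when starting from state s,
# under a fixed policy. States and transitions below are illustrative.
gamma = 0.9
states = ["work", "commute", "school", "home"]
# Deterministic transitions: work -> commute -> school -> home (absorbing).
T = np.array([
    [0.0, 1.0, 0.0, 0.0],
    [0.0, 0.0, 1.0, 0.0],
    [0.0, 0.0, 0.0, 1.0],
    [0.0, 0.0, 0.0, 1.0],
])

# Closed form: M = I + gamma*T + gamma^2*T^2 + ... = (I - gamma*T)^-1
M = np.linalg.inv(np.eye(4) - gamma * T)

# The payoff: when rewards change, values are recomputed with one
# matrix-vector product instead of a fresh simulation of the future.
R = np.array([0.0, 0.0, 0.0, 1.0])  # reward only at "home"
V = M @ R                           # V[s] = expected discounted reward from s
```

Because M is learned once for a given environment and policy, swapping in a new reward vector R (say, the bodega closes) updates every state’s value immediately—the rapid adaptation without expensive simulation that the authors describe.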
This Disaster Robot Would Climb Ladders in the Pouring Rain to Rescue You
If you have a problem, if no one else can help, maybe you can hire E2-DR. That’s the name of Honda’s latest prototype disaster relief robot, and as IEEE Spectrum reports, it’s an impressive piece of machinery. How impressive, exactly? Well, according to Honda it can, among other things: walk at 2.5 miles per hour, step over eight-inch obstacles, climb stairs, navigate rubble, squeeze through 20-inch gaps, and even climb a vertical ladder in inch-per-hour rain (presumably without frying its circuits). These are feats that many humanoid robots have attempted in the past—though rarely with guaranteed success.
AI Definitely Didn’t Stop Fake News about the Las Vegas Shooting
As Americans woke Monday to reports of the tragedy in Las Vegas, many were confronted not by accurate news accounts but by untruthful posts from questionable websites.
Ars Technica reports that Google promoted a 4chan post, which incorrectly identified the shooter, in its Top Stories. The item was posted to 4chan’s “pol” section, which is famously full of … provocative content. Outline reporter William Turton was told by a Google spokesperson that its Top Stories are chosen according to a combination of “authoritativeness” and “how fresh” an item is. Clearly unauthoritative, the 4chan post must have been considered very fresh—especially given that it’s hard to consider 4chan a conventional news source.
Elsewhere, Fast Company explains that factually inaccurate content also made its way onto Facebook’s Safety Check page for the Las Vegas shooting, in the form of a story from a blog called Alt-Right News. Other fake news swirled, too—Buzzfeed has a list of examples.
This is, of course, a troubling misstep at a time when tech giants are supposed to be redoubling their efforts to contain such content. Facebook and Google face an ongoing congressional investigation into the spread of Russia-linked propaganda during the 2016 presidential election. Facebook has been plagued by problems like anti-Semitic ad targeting and an inability to effectively police offensive content.
Both firms believe computation solves these kinds of problems. Mark Zuckerberg has repeatedly and emphatically argued that artificial intelligence should be able to weed out offensive content and fake news. Speaking to the Outline , a Google spokesperson pleaded that “within hours, the 4chan story was algorithmically replaced by relevant results.”
But currently, making decisions to censor fake content during breaking news is close to impossible for AIs: they need a large set of data to learn from, and that data can’t be rounded up and processed quickly enough. Meanwhile, relying on more conventional algorithms based on measures of “authoritativeness” and “freshness” takes hours to get things right—and hours isn’t fast enough.
The solution, for now at least, is probably not technological. Facebook has admitted as much, by increasing the number of people it uses to vet offensive content. But it, and Google, may need to swell those ranks far more if they’re to avoid repeating this kind of mistake over and over.
Technology Could Help You Build an Untraceable Gun. Should It?
Just hours before Sunday night's horrific mass shooting in Las Vegas, Texas-based Defense Distributed launched new software that allows people to mill metal handgun bodies at home. The two events are almost certainly unconnected, but it's hard not to consider them in a similar light, given the unfortunate timing.
For those unfamiliar, Defense Distributed has been publishing open-source designs for 3-D printed guns online since 2013 (though its first design was a plastic part). Its stated purpose is to use 3-D printing and other technologies to render any attempts to regulate or track the possession of firearms obsolete—a move its CEO and founder, Cody Wilson, sees as fundamental to the U.S. Constitution’s guarantee of a right to bear arms.
While a 3-D printed plastic gun may have worked for that purpose, Wilson tells Ars Technica that with the machined metal devices “you’re making the identical item that you would otherwise handle, purchase, and fire—so it feels identical.”
The newly released software allows people to use the firm’s $1,500 Ghost Gunner milling machine to create the body of an M1911 handgun—the design behind the classic Colt .45. Defense Distributed says “no prior CNC knowledge or experience is required to manufacture [the gun bodies] in the comfort and privacy of your home.”
Metal or plastic, the technology behind Wilson’s company probably isn’t making much of an impact on gun violence in the U.S.—mainly because guns are already so cheap and easy to come by. In fact, it’s far more likely that the Las Vegas shooter performed a technically legal modification on a weapon, which for a hundred bucks can turn an AR-15 (a gun that Defense Distributed has also worked to help anonymize) into something virtually indistinguishable from a machine gun. Building a gun from scratch at home is simply more effort than it’s worth, and will remain so for the foreseeable future.
That in and of itself would be enough to make Defense Distributed’s efforts look rather worthless, if not absurd. The company appears to be girding for some kind of apocalypse of gun regulation that shows no sign of materializing, despite a seemingly endless string of heart-wrenching mass shootings in recent years.
That its latest advance in its mission is now cast against the backdrop of the death of dozens more innocents in Las Vegas makes it all the more grotesque. Indeed, in a piece in Wired just yesterday, Wilson said, “there is going to be universal access to arms. Even if I’m the only one working on it, it’s going to happen.”
In the shadow of one of the deadliest mass shootings that America has ever witnessed, that seems like a poor use of technology indeed.
OK, Phone: How Are My Crops Looking?
Some cassava farmers may not be able to tell one plant’s debilitating brown streak from another’s troubling brown leaf spot—but a smartphone-friendly AI can.
Wired reports that researchers have developed a lightweight image-recognition AI that can identify diseases in the cassava plant based on pictures of its leaves. That could be useful, because cassava is one of the most commonly eaten tubers on the planet, but it is grown predominantly in developing countries, where access to the expertise needed to diagnose unusual crop problems may be limited.
In a paper published on the arXiv, the researchers behind the new AI explain how they used a technique known as transfer learning to retrain an existing image-recognition neural network on a small number of new images. With just 2,756 pictures of cassava leaves captured from plants in Tanzania, the team was able to build software based on Google’s TensorFlow AI library that could reliably identify three crop diseases and two types of pest damage. It could, for instance, discern brown leaf spot with 98 percent accuracy. The AI is also small enough to load and run on a smartphone and doesn’t need to send data to the cloud for processing, though it isn’t yet available for people to use.
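The core trick in transfer learning is to keep a pretrained feature extractor frozen and train only a small classifier head on the new images. A toy sketch of that idea, with NumPy standing in for TensorFlow (the shapes, synthetic data, and five-class split of three diseases plus two pest types are illustrative assumptions, not the paper’s actual setup):

```python
import numpy as np

# Transfer learning, minimally: reuse fixed "pretrained" features and
# fit only a small linear head on the new dataset. All data here is
# synthetic; a real system would reuse an ImageNet-scale network.
rng = np.random.default_rng(0)

W_pre = rng.normal(size=(64, 32))  # "pretrained" weights, frozen throughout

def frozen_features(x):
    """Pretrained layer: never updated during fine-tuning."""
    return np.maximum(0.0, x @ W_pre)  # ReLU features

# A small, separable stand-in dataset: 200 samples, 5 classes.
prototypes = rng.normal(size=(5, 64))
y = rng.integers(0, 5, size=200)
X = prototypes[y] + 0.3 * rng.normal(size=(200, 64))

F = frozen_features(X)  # computed once; the extractor never changes
W_head = np.zeros((32, 5))

# Softmax cross-entropy gradient descent on the head only.
for _ in range(300):
    logits = F @ W_head
    p = np.exp(logits - logits.max(axis=1, keepdims=True))
    p /= p.sum(axis=1, keepdims=True)
    p[np.arange(len(y)), y] -= 1.0        # dL/dlogits for cross-entropy
    W_head -= 0.01 * F.T @ p / len(y)     # update the head only

accuracy = (np.argmax(F @ W_head, axis=1) == y).mean()
```

Freezing the extractor is what makes a few thousand labeled images enough: only the head’s small weight matrix is learned, so the network can’t overfit its millions of pretrained parameters to the tiny new dataset.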
Automation increasingly appears to be heading for the field. Last month, tractor maker John Deere acquired a Silicon Valley AI firm—one that precision-targets weed killer using machine learning—for a cool $300 million. And a pilot project in the U.K. recently saw researchers tend a field full of barley using only robots.
Oracle’s New Database Uses AI to Patch Itself
If you can’t trust humans to update your software, teach it to do the job for itself. That’s the thinking at the enterprise software firm Oracle, anyway, which has just announced that its 18c database system now uses machine learning to “automatically upgrade, patch, and tune itself while running.”
VentureBeat reports that system administrators provide rules for the database but then leave it to its own devices. The software learns what “normal” looks like, then tries to stop anything that seems untoward—and does all that without the downtime that comes when humans peer under the hood. “If you eliminate all human labor, you eliminate human error,” Oracle CEO Larry Ellison (pictured above) explained as he announced the new software yesterday.
That raises the specter of technological unemployment , of course. In Oracle's case, the theory appears to be that the AI will free up network administrators to help users, rather than having them spend their time patching software. But if, like at Equifax , they weren’t patching much software in the first place, there is always the chance that their presence is no longer required.
And Oracle isn’t alone in using AI to improve the security of software. Data science platform Kaggle is currently running a contest in which researchers build offensive and defensive AIs to improve software security. In other words: sysadmins of the world, look busy.
China’s New Electric Car Rules Are Amazingly Aggressive
This is how you really get an industry to change its ways. Bloomberg reports that China’s government has announced that any automaker producing or importing more than 30,000 cars in China must ensure 10 percent of them are all-electric, plug-in hybrid, or hydrogen-powered by 2019. That number will rise to 12 percent in 2020.
In fact, the new regulations are actually more lenient than drafts of the rules had suggested: they scrap a planned 2018 start to give manufacturers more time to prepare, and they will excuse any failure to meet the quota in its first year. So, really, the 12 percent target in 2020 is the first enforceable number.
That still doesn’t make it very easy, as the Wall Street Journal notes (paywall). Domestic automakers already make plenty of electric cars (largely at the government's behest), which means that they should be able to meet the numbers, but Western firms will find it harder. In preparation, some have actually set up partnerships with Chinese companies to help them build electric vehicles in time.
Other nations have also been pushing to limit the sale of cars that run on fossil fuels. The U.K. and France have both recently decided to outlaw the sale of new internal-combustion cars by 2040. But as the world's largest car market, China’s push will have a more profound effect on the industry.
These kinds of big policy shifts are, as we’ve argued before, the only way to quickly make electric cars pervasive. A recent analysis from Bloomberg New Energy Finance suggested that electric vehicles could account for as many as half of all new cars sold by 2040. If the world follows China’s lead, that figure could yet turn out to be a cautious estimate.