Can Artificial Intelligence Master the Art of the Deal?
A bot might someday take your job, but perhaps it can help you negotiate a nice severance package, too.
A recent research paper (PDF) suggests that AI agents could do all sorts of useful haggling, provided they become a little bit smarter and users can be persuaded to trust them.
The authors envision a world where an AI agent negotiates on your behalf in situations like buying a house or working out the details of a pay raise. They also write that such technology could open up negotiations in new areas, like settling the terms of an energy-sharing deal with a neighbor, or the amount of money you should receive for handing over personal information to a mobile app. The team has, in fact, experimented with an Android app that lets you do just that.
In an interview with Science, one of the authors, Tim Baarslag of Centrum Wiskunde & Informatica in Amsterdam, says the key challenges are giving such systems a deep understanding of a particular domain, like real estate or energy, and equipping them with some sort of long-term perspective, so that an agent can negotiate with an entity it has dealt with before.
AI agents are, in fact, already used for all sorts of narrower negotiations, like figuring out the correct price for an online ad or the right bid for a stock. But it’s fascinating to imagine them taking on more human areas of deal-making. Perhaps AI will someday play a vital role in standoffs like the one between the U.S. and North Korea.
One interesting observation from the researchers’ paper, however, gives pause for thought. “People may show less regard for fairness and ethical behavior when negotiating through a third (human) party,” the authors write. “This raises the question as to whether agents should similarly lie on behalf of a user, for example by using argumentation and persuasion technology. Analogous to recent research on ethical dilemmas in self-driving cars, people may claim that negotiation agents should be ethical, but sacrifice these ideals if it maximizes their profits.”
Autopilot’s Limitations Played “Major Role” in Fatal Tesla Crash, NTSB Says
The chairman of the National Transportation Safety Board said Tuesday that "system safeguards were lacking" in the Tesla S that killed a driver when it struck a truck in Florida in May 2016. 
According to Reuters, the new statement from the NTSB suggests that Tesla not only fielded a car with limited autonomous capabilities—something the company has previously acknowledged—but one that was also incapable of ensuring its human driver was paying attention to the road when its Autopilot system was engaged.
“Tesla allowed the driver to use the system outside of the environment for which it was designed, and the system gave far too much leeway to the driver to divert his attention,” Robert Sumwalt, the agency's chairman, said.
The news comes less than a month after the Wall Street Journal published a detailed account of the Autopilot program at Tesla, which included several accounts of its engineers expressing deep concerns about the system's safety. The system was only meant to provide partial autonomy requiring continuous driver attention, but the company, and particularly CEO Elon Musk, publicly suggested that Autopilot was capable of fully autonomous driving.
At the time of the crash, Tesla vehicles were outfitted to detect if a driver's hands were on the wheel, and emit warning sounds if they were removed for more than a few seconds. In its statement, the NTSB said that such measures were insufficient for monitoring whether a driver was paying attention to the road.
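To see why a wheel-touch timer is such a weak proxy for attention, the check can be reduced to a toy sketch. The class name and the three-second threshold below are illustrative assumptions, not Tesla's actual implementation:

```python
class HandsOnWheelMonitor:
    """Toy sketch of a hands-off-wheel warning timer.

    All it knows is whether hands are sensed on the wheel and how long
    they have been absent; it says nothing about where the driver is
    actually looking, which is the gap the NTSB statement highlights.
    """

    def __init__(self, limit_s: float = 3.0):  # illustrative threshold
        self.limit_s = limit_s
        self.hands_off_since = None  # time when hands last left the wheel

    def update(self, hands_detected: bool, now_s: float) -> bool:
        """Return True if a warning should sound at time now_s (seconds)."""
        if hands_detected:
            self.hands_off_since = None
            return False
        if self.hands_off_since is None:
            self.hands_off_since = now_s
        return now_s - self.hands_off_since >= self.limit_s
```

A driver resting one hand on the wheel while looking away defeats this check entirely, which is the sense in which such measures are "insufficient for monitoring whether a driver was paying attention."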
Tesla has since ratcheted up how its cars watch for driver engagement and upgraded the sensor packages in new cars. But the NTSB's statement is a necessary reminder that we still have a long way to go in perfecting the human-vehicle interaction problem that lies at the heart of designing semi-autonomous cars.
A Chatbot Will Help You Sue Equifax
If you’re one of the 143 million Americans who had their data leaked in the Equifax hack, your to-do list just got a little easier.
Joshua Browder, the man behind the automated legal helper DoNotPay and one of our 35 Innovators Under 35 for 2017, has created a chatbot that will help you sue Equifax in small claims court. Hit it up, and it will take the initial legwork out of filing the claim: you simply tell it your name and address, and it provides the remaining paperwork you need to complete and submit. Sadly, that still leaves eight pages of forms to fill in, but at least you didn’t have to work out how to get to that point in the first place. As the BBC notes, depending on the state you live in, the full damages claimed could be as much as $25,000.
DoNotPay started life as a means to dodge parking tickets, but Browder wants to turn it into a helper that makes the process of navigating legal challenges, like immigration or divorce, more straightforward for regular folks. It seems you can add data breaches to the list, too.
Robotic Farmers Can Literally Reap What They Sow
If you want to grow a field full of barley but don’t want to get your hands dirty, never fear: robots can do the whole dang thing.
That’s what researchers from Harper Adams University in the U.K. have shown. In a sleepy hectare of land in Shropshire, England, robots have been tending the land without a human setting foot on the field. A series of roboticized tractors and harvesters, along with drones for aerial surveillance, have carefully planted seeds, taken soil and crop samples, sprayed herbicides and fertilizers, and even harvested the resulting crop of barley. You can see them in action above.
As IEEE Spectrum points out, the yields achieved by the Hands Free Hectare project are so far uninspiring. Without the careful eye of a farmer checking the crops in person, the researchers managed to generate only 4.5 metric tons of barley per hectare—some way off the 6.8 metric tons you might expect on an average farm.
But this is a proof of concept, and it serves to show that robots are certainly able to take over from humans when it comes to the repetitive efforts of farm labor—especially if they get a little better with practice. The researchers also point out that automation allows the use of smaller, lighter vehicles, which makes sowing and harvesting more precise and reduces the damage the machines cause as they trundle across fields.
Still not convinced? Then how about this: last week, tractor maker John Deere acquired a Silicon Valley AI firm that precision-targets weed killer using machine learning, for a cool $300 million. The future of farming is headed in one direction, and it’s automated.
Earrings Made of Top-Secret Electronics Are Actually Part of the Uber-Waymo Lawsuit
This court case is starting to sound like a satire on corporate espionage. If you were working on a top-secret hardware project, would you give away a prototype of the device, fashioned into a pair of earrings, to a departing colleague? Probably not! But bizarrely, that's exactly what employees at Waymo seem to have done with experimental versions of the hardware that is now at the center of the company's heated lawsuit with Uber.
IEEE Spectrum reports that when Waymo exec Seval Oz left the firm for Continental, she was given a set of earrings made from circuit boards. But they weren’t any old circuit boards: the jewelry was made from parts of the second version of a proprietary lidar sensor developed by Waymo. The third version of that sensor is the one at the center of an explosive lawsuit, in which Waymo accuses Uber of stealing its trade secrets. Discussing the gift in court, Waymo’s Pierre-Yves Droz said that the electronics, even in earring form, were "not something we should give to someone, especially if someone is leaving the company."
Their existence certainly caught the attention of Anthony Levandowski, Uber’s star self-driving-car engineer, who used to work at Waymo and now stands accused of taking secrets to Uber. Levandowski tracked down Oz via text message, at one point writing: "I could meet you in Fremont or do dinner your call. Just want to make sure I don’t forget to grab the ear rings." He ultimately did get hold of them. (Levandowski has a text message history with former Uber CEO Travis Kalanick, too.)
Which way the revelation cuts is hard to tell. It certainly shows that Levandowski was eager to get his hands on Waymo secrets. But it could also be argued that Waymo didn’t treat the security of its intellectual property particularly seriously. The courts will decide.
AI Can Re-create Video Games Just by Watching Them
Machines just took aim at video-game development—from the '80s. AIs have been able to learn to play games like Space Invaders by watching them for a while. But now, Georgia Tech researchers have written a paper describing how an AI can actually build the underlying game engine of Super Mario Bros. just by spectating.
The approach, first reported by the Verge, works by analyzing thousands of frames of game play to see what happens as everyone’s favorite mustachioed plumber moves through the game. The AI looks at what changes between one frame and the next and tries to link cause to effect—what happens when Mario, say, touches a coin or lands on an evil sentient mushroom (oh, okay, then: a Goomba).
Over time, the researchers say, the AI can build up rules into a rudimentary version of the game engine. The Verge’s James Vincent calls the results “glitchy, but passable” and notes that the tool is limited to simple 2-D platform games like Super Mario Bros. and Mega Man at the moment.
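The first step of that process, spotting what changed between two consecutive frames, can be sketched in miniature. This is a toy reduction on tile grids, not the paper's actual sprite-parsing pipeline, and all the names here are illustrative:

```python
def changed_cells(prev_frame, next_frame):
    """Coordinates of grid cells that differ between two consecutive frames."""
    return [(y, x)
            for y, row in enumerate(prev_frame)
            for x, cell in enumerate(row)
            if next_frame[y][x] != cell]

# Toy frames: 0 = empty, 1 = Mario, 2 = coin. Between frames, Mario steps
# right onto the coin's cell and the coin disappears -- the kind of
# before/after pair from which the system could hypothesize a
# "touching a coin collects it" rule.
prev = [[0, 1, 2],
        [0, 0, 0]]
nxt  = [[0, 0, 1],
        [0, 0, 0]]
diff = changed_cells(prev, nxt)  # cells (0, 1) and (0, 2) changed
```

Accumulating many such diffs, and keeping only the candidate rules that stay consistent across thousands of frames, is what gradually yields a playable approximation of the engine.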
Speaking to the Verge, one of the researchers says that “a future version of this could [analyze] limited domains of reality.” That’s a nice idea, but as we’ve explained before, making sense of the world is one of the biggest challenges facing AI right now—and re-creating Super Mario Bros. is only a very small jump toward cracking it.
China and India Want All New Cars to Be Electric
The desire to kill off internal combustion is spreading. Bloomberg reports that China plans to end the sale of fossil-fuel-burning vehicles, though it’s not yet clear when the ban will kick in. Meanwhile, Reuters explains that India plans to electrify all new vehicles by 2030, with a detailed explanation of how that will happen expected by the end of the year.
A year or two ago, that kind of news would have been practically unthinkable. But as America under the Trump administration  turns its back on efforts to address climate change , India and China have emerged as unlikely icons in the battle to save the planet.
China is currently the largest electric-vehicle market in the world, with a thriving electric-car industry, though there are still far fewer such cars on the roads than those powered by gas and diesel. India is further behind and still lacks a domestic battery manufacturing industry, which may make a homegrown electric-vehicle scene slower to take off.
Even so, if the two huge Asian countries do push ahead as reported and stamp out cars that run on fossil fuels, they will join the U.K. and France, which have both recently decided to outlaw the sale of new internal-combustion cars by 2040.
Big policy shifts like these are, as we’ve argued before, the only way to quickly make electric cars pervasive. A recent analysis from Bloomberg New Energy Finance suggested that electric vehicles could account for as many as half of all new cars sold by 2040. If moves like those in India and China continue to be announced, that optimistic assessment may actually stand a chance of coming true.
Thermal Imaging Aims to Give Autonomous Cars Better Night Vision
There are many striking differences between a fence post and a human being, but one may prove particularly useful to robotic vehicles: temperature.
There are many striking differences between a fence post and a human being, but one may prove particularly useful to robotic vehicles: temperature.
At least that's what the established thermal imaging firm FLIR and the Israeli startup Adasky think. Both companies are building new thermal sensors for use in autonomous cars, and they believe that the extra data could be used to spot hazards in adverse conditions. Humans and animals on or near the road, in particular, create far more heat than their surroundings.
Some high-end automakers, including Porsche, BMW, and Audi, have been fitting vehicles with thermal imaging sensors from FLIR for several years. The vehicles use them to alert a driver to animals or people on the road at night that may not yet be visible within the reach of the car's headlights. Those sensors produce an image 320 pixels wide by 240 high.
Now, both FLIR and Adasky hope to achieve a similar result in autonomous cars using new sensors that they're both building.
Adasky today announced a new far-infrared sensor, called Viper, that offers better resolution (640 pixels by 480) at refresh rates of 60 frames per second and can detect temperature differences as small as 0.05 °C. Adasky's Dror Meiri tells MIT Technology Review that such sensitivity allows the device not just to spot something like an elk on the road, but also to easily discern things like black ice on the surface of a highway. He says the device should be in mass production by 2020.
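To see why temperature contrast makes warm hazards easy to flag, consider a toy thresholding pass over a thermal frame. The function name and the 8-degree margin below are illustrative assumptions, not either company's actual processing pipeline:

```python
def warm_regions(frame_c, ambient_c, delta_c=8.0):
    """Flag pixels noticeably warmer than ambient in a thermal frame.

    frame_c is a 2-D list of per-pixel temperatures in Celsius. The
    delta_c margin is an illustrative cutoff for warm bodies against a
    cool road; a sensor resolving 0.05-degree steps makes far subtler
    contrasts, like black ice, separable too.
    """
    return [(y, x)
            for y, row in enumerate(frame_c)
            for x, temp in enumerate(row)
            if temp - ambient_c >= delta_c]

# A 2x3 patch of road at roughly 10 C with one pedestrian-warm pixel.
patch = [[10.1, 10.0, 30.2],
         [ 9.9, 10.0, 10.1]]
hits = warm_regions(patch, ambient_c=10.0)  # [(0, 2)]
```

In a visible-light image the same pedestrian might differ from the asphalt by only a few gray levels at night; in the thermal frame the 20-degree gap makes the detection nearly trivial.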
The video above shows Adasky's sensor in use on a country road. On the left is a visible-light camera, on the right its thermal sensor. Cyclists, pedestrians, and animals certainly show up far more sharply on the right. And the sharp contrast with surroundings would make it easier for a car to approach those living hazards with caution in a way that lidar, which provides only ranging information, may not.
FLIR is also developing a VGA device for use specifically in self-driving cars. Paul Clayton, director of automotive at FLIR, says that sample units of the sensors are being shipped to some manufacturers for testing this year and that mass-produced sensors could cost "a couple of hundred bucks."
If you're wondering why another sensor may be required in a driverless car, both companies point to an important, if uninspiring, word: redundancy. "It's an augment, a somewhat redundant sensor that helps classify in bad lighting conditions or weather," explains Clayton. We've explained in the past that multiple sensors can allow autonomous cars to make better sense of their surroundings, and the sharp contrast of thermal imaging may enable, say, a glimpse of a human to provide extra context that helps avert a dangerous incident.
Even so, an extra sensor does provide another data stream to process, and because thermal imaging is less widely used on the roads than visible-light cameras and even lidar, the machine-learning systems that discern details from footage will have to catch up. Both firms seem to understand that: Adasky has built its own AI to process data for its customers, and FLIR is putting together a thermal image database that clients can use to train their own neural networks.
Adoption of thermal imaging in autonomous vehicles is likely to come down to budget. Some manufacturers may balk at the expense of yet another sensor, but until cheap lidar sensors offer the quality automakers might like, the devices could add a useful layer of safety.