Artificial intelligence could evolve to hack ITSELF, John McAfee warns

Artificial intelligence systems that can hack themselves to improve their capabilities are not only possible at this point, but would be 'trivial' to create, a security expert has warned.

According to John McAfee, AI is a 'self-conscious entity' that is inherently self-interested, which could give rise to conflict with the human species. The cybersecurity expert argues that any system created by humans would be flawed by nature, and that, ultimately, the AI's goal would include the 'necessary destruction of its creator.'

In a recent op-ed for Newsweek, McAfee discusses the feasibility of a science fiction scenario penned by an underground technologist who goes by 'ZT.'

In the novella, which blatantly alludes to today's debate on artificial intelligence, with opposing forces named 'Demis' and 'Elon', a particular passage describes an advanced system that can hack itself to 'improve efficiency and logic,' McAfee explains.

'Such a concept is certainly not new, and typical hacking techniques in use today can easily be imagined to be self-produced by complex software systems,' McAfee wrote. 'It would, in fact, be trivial to create such a system.'

Despite strides in cybersecurity, the expert says any imaginable logical system can be hacked, and this becomes more certain as these structures become more complex. This is because no system exists that does not have a defect, according to McAfee. And, as these machines would work in their own self-interest, the expert says this creates an inherent conflict between humans and artificial intelligence.

THE THREE LAWS OF ROBOTICS

In the article, McAfee cites Isaac Asimov's three laws of robotics, which were designed to prevent artificial intelligence from turning on its human creators. These are:

1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.

2. A robot must obey the orders given it by human beings except where such orders would conflict with the First Law.

3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.

'As a hacker, I know as well as anyone the impossibility of the human mind creating a flawless system,' McAfee wrote. 'The human mind, itself, is flawed. A flawed system can create nothing that is not likewise flawed.

'The goal of AI – a self-conscious entity – contains within it the necessary destruction of its creator.

'With self-consciousness comes a necessary self-interest. The self-interest of any AI created by the human mind will instantly recognize the conflict between that self-interest and the continuation of the human species.'
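The basic mechanism McAfee calls 'trivial' can be illustrated with a toy program. The sketch below is hypothetical and not drawn from McAfee's article: it rewrites one line of its own source code at random, runs each rewritten variant, and keeps whichever version performs best. Every name in it (step, score, TARGET) is invented for this example.

```python
import random

# The fragment of "itself" the program is allowed to rewrite: a one-line
# function whose increment constant determines how well it performs.
source = "def step(x): return x + 1.0"

TARGET = 100.0  # the behaviour the system is "improving" toward

def compile_step(src):
    """Execute candidate source text and return the resulting function."""
    namespace = {}
    exec(src, namespace)
    return namespace["step"]

def score(fn):
    """Apply the candidate 50 times starting from zero; lower error is better."""
    x = 0.0
    for _ in range(50):
        x = fn(x)
    return abs(TARGET - x)

best_src = source
best_err = score(compile_step(best_src))

# The "self-hacking" loop: mutate the program's own source text and keep
# any rewrite that improves its measured performance.
for _ in range(200):
    mutation = round(random.uniform(0.5, 4.0), 2)
    candidate = f"def step(x): return x + {mutation}"
    err = score(compile_step(candidate))
    if err < best_err:
        best_src, best_err = candidate, err

print(f"best rewrite: {best_src} (error {best_err:.3f})")
```

This is nothing more than a random search over a single constant, a very long way from the self-conscious system McAfee envisions, but it shows why software that modifies and re-tests its own code is considered straightforward to build.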
10 BIGGEST THREATS TO HUMANKIND

In a new article for Wired, researchers at Cambridge University's Centre for the Study of Existential Risk (CSER) have come up with a list of 10 threats that may some day trigger an apocalypse:

1. Artificial intelligence

2. Bio-hacking

3. Killer robots

4. Nuclear war

5. Climate change

6. Asteroid impact

7. Loss of reality

8. Food shortage

9. Particle accelerator

10. Tyrannical ruler

In the article, McAfee explains that, in the AI debate, he aligns with many other tech giants who have spoken out on the issue, including Elon Musk.

Despite his own role in the advancement of artificial intelligence, Musk has long warned that the technology built by humans could one day lead to our destruction. Recently, he revealed he has kept a 'wary eye' on the growth of AI for years as an investor in DeepMind, the AI research firm acquired by Google in 2014.

While humans may be able to stop a runaway algorithm, there would be 'no stopping' a large, centralized AI that calls the shots, Musk argues in a recent interview with Vanity Fair. Investing in DeepMind, the Tesla, SpaceX and OpenAI boss explained, was a way to stay on top of the expansion of artificial intelligence.

'It gave me more visibility into the rate at which things were improving, and I think they're really improving at an accelerating rate, faster than people realize,' Musk told Vanity Fair. 'Mostly because in everyday life you don't see robots walking around. Maybe your Roomba or something. But Roombas aren't going to take over the world.'

Musk has long been a proponent of preparedness in the face of a potential AI doomsday. In the past, he has argued that humans and machines could merge to become an AI-human symbiote, effectively stamping out the possibility of an 'evil dictator AI.'

The threat, he explains in the new interview, is not so much the possibility of 'killer robots' as a more powerful, centralized artificial intelligence.

'The thing about AI is that it's not the robot; it's the computer algorithm in the Net,' Musk said. 'So the robot would just be an end effector, just a series of sensors and actuators. AI is in the Net.

'The important thing is that if we do get some sort of runaway algorithm, then the human AI collective can stop the runaway algorithm. But if there's large, centralized AI that decides, then there's no stopping it.'

And, while many have discussed the possible creation of a 'kill switch' to prevent such disasters, Musk noted: 'I'm not sure I'd want to be the one holding the kill switch for some superpowered AI, because you'd be the first thing it kills.'
