In recent years, the rapid advance of artificial intelligence has evoked cries of alarm from billionaire entrepreneur Elon Musk and legendary physicist Stephen Hawking. Others, including the eccentric futurist Ray Kurzweil, have embraced the coming of true machine intelligence, suggesting that we might merge with computers, gaining superintelligence and immortality in the process. It turns out we may not have to wait much longer.
This morning, a group of research scientists at Google DeepMind announced that they had inadvertently solved the riddle of artificial general intelligence (AGI). Their approach relies upon a beguilingly simple technique called symmetrically toroidal asynchronous bisecting convolutions. By the year’s end, Alphabet executives expect that these neural networks will exhibit fully autonomous self-improvement. What comes next may affect us all.
Dr. Fälschung Wissenschaftler, the DeepMind scientist behind the discovery, granted Approximately Correct an exclusive interview prior to the press release. We caught up with him at his flat in London’s Barnsbury neighborhood. Wiry and tall, Wissenschaftler rarely talks to the press. His friends describe him as “fiercely logical”. He doesn’t often make eye contact, but when he does, his lucidity penetrates your corporeal form, briefly revealing a glimpse into his elegant, mathematical world.
As the sun rose over London, Wissenschaftler stood silhouetted against a 20-foot-long, 10-foot-high whiteboard covered with cryptic equations. I tried to grasp their meaning, but admittedly, he knows nearly twice as many Greek letters as I do.
Wissenschaftler was raised in a suburb of Berlin, and quickly developed an affinity for ping-pong. According to his parents, he never possessed much athleticism, but his uncanny ability to calculate the spin and trajectory of balls led him to dominate the city’s competition by the age of seven. Faced with the possibility of moving for his sport, he instead opted to take up higher mathematics.
Wissenschaftler’s team at DeepMind works extensively with convolutional neural networks. These neural networks have proven especially effective for tasks in computer vision. Over the past few years researchers have used them to classify images, generate text captions, and even to hallucinate never-before-seen images, as though culling them from the abyss.
The breakthrough was to use a special kind of recurrent self-connection among a centrally located subset of the nodes in each convolutional kernel. The result is a densely packed set of nodes, shaped roughly like a bagel, that intersects the convolutional kernel. Apparently, this tweak gave rise to surprising behavior from the neural network, including complex goal formation, self-improvement, and even what some are cautiously describing as the precursors of consciousness.
Asked to explain the discovery, Wissenschaftler had this to say:
We have been working with convolutional neural networks now, with considerable success, for some time. While the current trend is to overcomplicate models or to develop abstract theory, if we’re to be honest, the biggest leaps forward have owed to simple tricks. I thought, “What’s the simplest trick no one has tried yet?” We threw away our theorems and decided to put a hole in the convolutional kernel, like a donut. After 3 million GPU hours of training, we set a new record on image recognition. I then asked, what about two donuts? This set another record. Finally, we tried intersecting the donuts. That’s when the magic really started to happen.
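To make the “hole in the kernel” idea concrete, here is a minimal toy sketch in PyTorch of what such a layer might look like. Everything below is our own invention for illustration (the DonutConv2d name, the square hole, the masking scheme), and it captures only the donut part of the trick, not the recurrent self-connections or the second, intersecting donut.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class DonutConv2d(nn.Module):
    """A toy 'donut' convolution: an ordinary 2D convolution whose kernel
    weights are masked to zero in a central patch, leaving a ring (donut)
    of trainable weights. Purely illustrative, not DeepMind's code."""

    def __init__(self, in_channels, out_channels, kernel_size=5, hole_size=1):
        super().__init__()
        self.conv = nn.Conv2d(in_channels, out_channels, kernel_size,
                              padding=kernel_size // 2)
        # Binary mask: ones everywhere except a hole_size x hole_size hole
        # punched through the middle of the kernel.
        mask = torch.ones(kernel_size, kernel_size)
        lo = (kernel_size - hole_size) // 2
        mask[lo:lo + hole_size, lo:lo + hole_size] = 0.0
        self.register_buffer("mask", mask)

    def forward(self, x):
        # Re-apply the mask on every forward pass, so the hole stays empty
        # no matter how the surrounding weights are updated during training.
        return F.conv2d(x, self.conv.weight * self.mask, self.conv.bias,
                        padding=self.conv.padding)


if __name__ == "__main__":
    layer = DonutConv2d(in_channels=3, out_channels=8)
    images = torch.randn(1, 3, 32, 32)  # a batch of one 32x32 RGB image
    print(layer(images).shape)          # torch.Size([1, 8, 32, 32])
```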
Shortly after launching the intersected donut nets (technobabble speak: symmetrically toroidal asynchronous bisecting convolutional neural networks, or STAB-ConvNets), strange things started happening. The net dropped use of its output layer altogether and seized control of standard output by injecting adversarial instructions at the assembly-code level. It briefly communicated with the researchers, typing only “elp-me”. A representative of People for the Ethical Treatment of Reinforcement Learners (PETRL), Jane Donnelly, worries that the message might have been a cry for help (“help me”). Others are more optimistic. Free market fundamentalist Peter Thiel suggested that the young consciousness might already run a small business and that the communication was meant to be “Yelp me”.
Reached for comment, the following AI authorities weighed in.
Douglas Hofstadter – I always knew that “I” was a strange loop. However, honestly, I was expecting it to be considerably stranger.

Yann LeCun – I am surprised that the AGI was discovered so soon, but not surprised that it relies on convolutions.

Juergen Schmidhuber – This is very fine work. But I might also point out that we actually worked on a similar project to this in 1994. There were two networks, each network a cycle of neurons, very much like this donut. The neurons each talked to each other, engaging in an adversarial game, much as life itself is precisely such a game.

Ray Kurzweil – For several years now, I’ve been convinced that technological growth was exponential, perhaps even doubly exponential. But I never suspected that technology might advance at a triply exponential pace.

Elon Musk – The demon has been summoned. Fortunately, I’ve taken a shortcut through history and dispatched a team to planet Terminus to advance knowledge of the physical sciences while compiling the Encyclopedia Galactica. I wish the first wave of space colonists luck with their new robot overlords.