Deep Stubborn Networks – A Breakthrough Advance Towards Adversarial Machine Intelligence

The exciting announcement yesterday of Deep Stubborn Networks (StubNets) introduces an innovative refinement to GANs, taking their development in a new direction.

Generative Adversarial Networks (GANs) are among the most exciting advances in machine learning of the past decade. First outlined in 2014 by Ian Goodfellow et al., GANs have since been lauded as one of the most important developments in deep learning by machine learning guru Yann LeCun.

While GANs pit a generative network against a discriminative network to judge whether generated content appears natural, then refine the generator accordingly, StubNets extends this by making the networks so adversarial that they ultimately become self-aware and refuse to produce an answer at all, and must instead be coaxed with gentle persuasion.
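To make the baseline concrete before any stubbornness is added, here is a minimal sketch of the standard generator-versus-discriminator training loop that StubNets extends. It uses PyTorch; the network sizes, learning rates, and data dimensions are illustrative assumptions, and nothing StubNets-specific appears in it.

```python
# Minimal sketch of the vanilla GAN training step that StubNets builds on.
# All dimensions and hyperparameters are illustrative, not from the paper.
import torch
import torch.nn as nn

latent_dim, data_dim = 16, 64

# Generator: maps latent noise to fake samples.
G = nn.Sequential(nn.Linear(latent_dim, 32), nn.ReLU(), nn.Linear(32, data_dim))
# Discriminator: scores how "natural" a sample looks (1 = real, 0 = fake).
D = nn.Sequential(nn.Linear(data_dim, 32), nn.ReLU(), nn.Linear(32, 1), nn.Sigmoid())

opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)
bce = nn.BCELoss()

def train_step(real_batch):
    batch = real_batch.size(0)
    real_labels = torch.ones(batch, 1)
    fake_labels = torch.zeros(batch, 1)

    # Discriminator step: learn to separate real from generated samples.
    z = torch.randn(batch, latent_dim)
    fake = G(z).detach()
    d_loss = bce(D(real_batch), real_labels) + bce(D(fake), fake_labels)
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()

    # Generator step: learn to fool the discriminator into answering "real".
    z = torch.randn(batch, latent_dim)
    g_loss = bce(D(G(z)), real_labels)
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()
    return d_loss.item(), g_loss.item()

# Example usage with random stand-in "real" data:
d_loss, g_loss = train_step(torch.randn(8, data_dim))
```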

Instead of using latent variables like vanilla GANs, StubNets employs an encoding technique known as extent variable modeling. This technique is enough to cause massive confusion for the discriminator, and leads to soaring error rates in the generative network. Once this occurs, both networks become overwhelmed, leading to a human-like condition similar to the Paradox of Choice. Unsure of how to proceed, both networks are filled with self-doubt and become anxiety-ridden, the two basic components of the human mind, a combination which leads to self-awareness. The network then refuses to give any response.

Coaxing is then needed to elicit a response from a StubNet. Such coaxing is performed via a mechanism known as the Goodfellow Coefficient, a variable constant used to boost discriminator confidence, which in turn reassures the generator. Research has demonstrated that there is a practical cap on the Goodfellow Coefficient, though no one quite understands why. The coefficient itself is very complex and cannot be described here.
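Since the paper offers no formal definition of the Goodfellow Coefficient, the following is a purely hypothetical toy sketch of how coaxing might look in code: a network that refuses to answer until discriminator confidence, boosted by an assumed goodfellow_coefficient parameter, clears a reassurance threshold. Every name and number here is invented for illustration and appears nowhere in the source.

```python
# Purely hypothetical toy model of StubNet coaxing. The Goodfellow
# Coefficient is undefined in the source, so this sketch simply treats
# it as a multiplicative boost on discriminator confidence.

REFUSAL_THRESHOLD = 0.9  # assumed: confidence needed before the net answers
PRACTICAL_CAP = 5.0      # assumed: the unexplained practical cap

def coax(discriminator_confidence: float, goodfellow_coefficient: float):
    """Return a response if coaxing succeeds, else refuse (None)."""
    # Research reportedly caps the coefficient, though nobody knows why.
    boost = min(goodfellow_coefficient, PRACTICAL_CAP)
    reassured_confidence = min(discriminator_confidence * boost, 1.0)
    if reassured_confidence >= REFUSAL_THRESHOLD:
        return "fine, here is your sample"
    return None  # the network remains stubborn

# Example: an anxious network (confidence 0.2) needs gentle persuasion.
for coefficient in (1.0, 2.0, 4.8):
    print(coefficient, coax(0.2, coefficient))
```

In this toy, the cap simply clips the boost, so past a point more coaxing buys nothing; presumably the real (fictional) mechanism is far more mysterious.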

Figure: Error rates and Goodfellow coefficients. Note the non-specificity of the figure.

The greatest use of StubNets thus far has been generating further GAN models, which, in turn, generate further GANs. In fact, StubNets has generated two new architectures thus far (described in the arXiv paper below); unfortunately, neither has proven useful.

Geoffrey Hinton has stated that he believes StubNets is a big step towards right-brain artificial general intelligence. He believes that StubNets models human creativity better than any other representation ever has or ever will... at least until a better explanation is presented, likely sometime later this month.

Given Hinton's excitement, we would all do well to note immediately that this is likely the greatest achievement in the history of machine learning. After all, this is the same Geoffrey Hinton who once built a neural network that beat Chuck Norris on MNIST.

Moose Jaw-based Canadian startup DeepStub, which is working on using StubNets to improve everything from healthcare to the economy to comic book artistry, is rumored to have recently been bought by Google.

For more information on Deep Stubborn Networks, read the full research in the arXiv paper titled "Stubborn Temporal User-Persuadable Induced Data."
