Artificial Intelligence Is Stuck. Here’s How to Move It Forward

rwnspace 1 hour ago
I sense the hand of an editor, particularly regarding the title.
Embodiment seems to be a branch with low-hanging fruit when it comes to advancing AGI. I think the economic structural problems are important, but it's possible to over-egg the details, and some lab may stumble on an experimental paradigm with features we didn't realise were implicated a priori. When it comes to other AIs, the idea that we are stuck for pragmatic/practical reasons is a little silly.
I'm no expert, just a person with an armchair (and too much time on my hands), but I suspect that idealising the feature-space we work with can hide as many things as it reveals - it may turn out that the computational problems are so large because we are mostly attempting to solve them ex nihilo. That is, embedding in an environment plays as much of a role in the process of intelligence as neuronal structure does; genes and evolution provide a mode for translating environmental computation into neuronal computation. The vast scope of what we don't know about the role of glial cells in cognition (and the little that we do know) makes me doubt that complex structures of binary mechanisms will be sufficient. But again, that's just my speculation, and perhaps a lack of education.
backpropaganda 1 hour ago
What a sigh of relief to read a refreshing take on the real progress of AI. Yes, it's stuck, and that's the real problem of AI: we haven't been able to do anything significant beyond perception. However, unlike the author, I don't think the solution is to nationalize AI research (we're not close enough for that), but to fund more non-deep-learning research for 5-10 years; then we might see some progress on non-perception tasks.
An agent can be intelligent without learning how to read human language. Look around: most organisms in our world communicate using an extremely simple binary language or don't communicate verbally at all. Yet they are intelligent enough to do very complicated tasks which current robots fail at. Intelligence is an easier problem than language, and thus should be solved before language.
the8472 51 minutes ago
> Even the trendy technique of “deep learning,” which uses artificial neural networks to discern complex statistical correlations in huge amounts of data, often comes up short.
That doesn't seem very surprising given the limited complexity compared to, say, a fly's brain. Artificial NNs manage to work because they are highly specialized for a specific task.
baalimago 40 minutes ago
We humans don't know how to learn. We don't know how learning works. We simply work, work, work until we know whatever we set out to know; we don't learn how we learned it, but are happy that we simply know it and leave it at that.
Therefore, teaching someone/something else how to learn will be almost inherently impossible, because we don't understand it ourselves (yet?).
And if we do learn how to learn, why would we need an AI to do it for us?
adamnemecek 1 hour ago
We are using the wrong computational paradigm. We have to abandon bits and go back to analog computing, in the form of analog photonic computing, which gives you fast calculus. This is painfully obvious in the case of neural networks, which run faster on an analog computer and are also easier to program.
dimatura 1 minute ago
How is an analog computer "easier to program" than a digital computer? Making neural networks do what you want is hard enough with the help of tons of libraries, decent scripting languages, the ability to dump the weights into a file and inspect them, etc. Programming with an analog computer, which I'm guessing would be something like programming with FPGAs, sounds like a nightmare in comparison.
sgt101 33 minutes ago
I think you are out of date. Rectification is pretty good at removing a lot of the vanishing-gradient issues that NNs used to face, and the overwhelming power of modern digital computers (50k cores is common) makes this all moot as far as I can see.
adamnemecek 27 minutes ago
Nope, they aren't even in the same category. E.g., notice that there is a CPU size limit because of heat. Do you know what doesn't overheat? A photonic computer. You can build a CPU the size of a house. Also, how does the number of cores constitute "overwhelming power"?
jsemrau 1 hour ago
We (humanity) have made huge progress in understanding images, in terms of both content and people's emotions. ImageNet is truly a gift to the world. However, that has brought us only a small but important step forward; clearly, expectations have to catch up to reality. Meanwhile, all these solutions are quickly becoming more accessible to laypeople, bringing another boost to operational efficiencies for companies worldwide.
asketak 1 hour ago
Most of the AI progress in recent years is just tuning pattern-recognition algorithms. We cannot expect these algorithms to produce results like humans do, because humans get a lot of their information not just from perceiving the world: their patterns of thinking also depend heavily on the underlying structure of the brain, which has developed over millions of years of evolution.
If there is a cliff, toddlers are scared of being nearby. They definitely don't have the ability to "imagine", i.e. simulate, the consequences of falling over the cliff; the fear is in the structure of the neurons of the brain.
If you feed a classifier images of black dogs and white swans and then ask it to classify a black swan, both classifying it as a dog (because of color) and as a swan (because of shape) are right. The difference is only in bias: which features you prefer.
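To make that last point concrete, here is a minimal sketch (mine, not the commenter's) of a toy 1-nearest-neighbour classifier; the feature encoding and weights are made up for illustration. Trained only on black dogs and white swans, its verdict on a black swan flips depending on how much weight each feature gets, and both verdicts are consistent with the training data:

    # Toy illustration: whether a black swan comes out as "dog" or "swan"
    # depends entirely on how the classifier weights color vs. shape.

    def classify(sample, training_data, weights):
        """1-nearest-neighbour with per-feature weights."""
        def dist(a, b):
            return sum(w * (x - y) ** 2 for w, x, y in zip(weights, a, b))
        return min(training_data, key=lambda item: dist(item[0], sample))[1]

    # Features: (color, shape); color: 0 = black, 1 = white;
    #           shape:          0 = dog-like, 1 = swan-like.
    training = [((0, 0), "dog"), ((1, 1), "swan")]
    black_swan = (0, 1)

    print(classify(black_swan, training, weights=(2, 1)))  # favors color -> "dog"
    print(classify(black_swan, training, weights=(1, 2)))  # favors shape -> "swan"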
Arwill 55 minutes ago
> If there is a cliff, toddlers are scared of being nearby.
I don't know of a study proving that; it might be true. But from my own experience, toddlers are not afraid of anything until (a) they hurt themselves, (b) they develop further and understand concepts like height, or (c) the parent repeats "no" to them and/or shows them what to do or not to do until they learn.
So there might be evolutionary pre-programming in the human brain, but a toddler's brain still needs to develop until those programs become active. I think there should be more research into how toddlers learn to crawl, stand and walk, how they learn to speak, etc.
> If there is a cliff, toddlers are scared of being nearby.
Not sure if you are a parent, but this isn't the case at all!
unityByFreedom 1 hour ago
Click-baity. AI tech isn't stuck. There are many forthcoming breakthroughs, particularly in medicine, that should really benefit humanity. In radiology, CNNs are poised to make radiologists a lot more efficient; we just need to build the labeled datasets.
If we invest heavily in any AI tech, let it be in producing huge medical datasets. The software and hardware are ready; we're only lacking sufficient data to make more diagnoses with super-human accuracy.
xiphias 51 minutes ago
I agree that AI isn't stuck (I flagged the article). I would say the research is ready, but there's still a lot of infrastructure work for integrating with data sets and for learning and serving cheaply at high scale. Still, lots of people are working hard to productionize the research results.
No systems can really understand what they read or translate yet. They are basically sophisticated pattern matching systems.
Check out Winograd Schema: https://en.wikipedia.org/wiki/Winograd_Schema_Challenge
Overview by an expert: http://www.cs.nyu.edu/faculty/davise/papers/WinogradSchemas/...
An example: The city councilmen refused the demonstrators a permit because they [feared/advocated] violence.
When you switch between "feared" and "advocated", the referent of "they" changes. There are many more examples like this.
The best performance in the first round of the 2016 challenge was 58% by a neural network based system. Random guessing would yield 44% (some questions had more than 2 choices). Human performance was 90.89% with a standard deviation of 7.6%.
Here are the challenge problems used in the first round: http://www.cs.nyu.edu/faculty/davise/papers/WinogradSchemas/...
Human Subject Test Performance: http://www.cs.nyu.edu/faculty/davise/papers/WinogradSchemas/...
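For readers who want to see the shape of the task, here is a minimal sketch of the councilmen schema represented as data (the structure is hypothetical, not any real challenge API); the point is that switching one word flips the correct referent while the surface form stays almost identical:

    # One Winograd schema as plain data: a sentence template, two candidate
    # referents, and the correct answer for each variant of the special word.
    schema = {
        "sentence": "The city councilmen refused the demonstrators a permit "
                    "because they {verb} violence.",
        "candidates": ["the councilmen", "the demonstrators"],
        "answers": {"feared": "the councilmen",
                    "advocated": "the demonstrators"},
    }

    for verb, referent in schema["answers"].items():
        print(schema["sentence"].format(verb=verb), "-> 'they' =", referent)

Resolving the pronoun correctly requires commonsense knowledge (who fears whose violence) rather than surface statistics, which is presumably why the neural-network system above scored only modestly over the random baseline.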
Kenji 1 hour ago
> Some of the best image-recognition systems, for example, can successfully distinguish dog breeds, yet remain capable of major blunders, like mistaking a simple pattern of yellow and black stripes for a school bus.
That's exactly the problem: robots lack sanity checks because they lack real understanding. If you cannot recognize an object that is far away, you are instantly aware of your inability to identify it. A computer just runs its code over it and outputs complete garbage, and this nonsense then enters the system and does who knows what damage.
Plausibility checks are incredibly complex! If you are in central Europe, and you are not in a zoo, and you see a leopard-fur pattern, it's probably not the living animal. And so on.
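A very crude stand-in for such a sanity check (a sketch of mine, not any real system; the scores, labels and threshold are made up) is to have the classifier abstain when its confidence is low, instead of always emitting its best guess:

    import math

    def softmax(scores):
        """Turn raw scores into probabilities that sum to 1."""
        exps = [math.exp(s) for s in scores]
        total = sum(exps)
        return [e / total for e in exps]

    def predict_with_sanity_check(scores, labels, threshold=0.9):
        """Return the top label only when the model is confident enough."""
        probs = softmax(scores)
        best = max(range(len(probs)), key=probs.__getitem__)
        if probs[best] < threshold:
            return "don't know"  # abstain instead of outputting garbage
        return labels[best]

    labels = ["school bus", "leopard", "zebra"]
    print(predict_with_sanity_check([2.0, 1.9, 1.8], labels))  # -> "don't know"
    print(predict_with_sanity_check([9.0, 1.0, 0.5], labels))  # -> "school bus"

Of course, a softmax threshold is nothing like real understanding: an adversarial stripe pattern can still yield a highly confident wrong answer, which is exactly the comment's point.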
