Joanna is an associate professor of AI at the University of Bath and an affiliate at Princeton University. She has worked on fields including natural intelligence, culture, and religion, and intensively on the design of intelligent systems. Her first experiences with AI date back to the mid-80s.
We were lucky to talk to Joanna for an hour to explore more of her in-depth experiences and viewpoints.
Could you tell us a little bit about your background? How did you get into AI?
My first degree was in Behavioral Science, which is a kind of non-clinical psychology. It was at the University of Chicago, a liberal arts degree joint between all the different faculties of the social sciences. That is what I really cared about: I wanted to understand animal intelligence, and as humans are pretty interesting animals too, I was looking at all those things. At the same time, I was a really good programmer. I think it was actually a conservative decision on my part to do what I knew I was extraordinarily good at and combine it with what I was really interested in.
People often humanize artificial intelligence, which is at odds with your point of view, as you stress that artificial intelligence is, and will always be, something entirely different from human intelligence…
Yes, there are two fundamental differences.
One is that artificial intelligence, by definition, is an artifact. It is something that we deliberately design, and even if we choose to design it by throwing some dice once in a while, that does not make it any less our responsibility.
The other fundamental difference is that we do not, so far, build it by cloning. Everything I say here does not apply to cloning, biological engineering, and things like that. I am just talking about when you build a system out of wires and silicon; then, phenomenologically, you are never going to get the same kind of thing. You will never get something as close to us as a cow or a rat or probably even a fruit fly, because those share much more similarity with us in architecture and in information processing, in how they deal with the world we are in, than anything we build out of wires and silicon.
We eat animals; we poison them. I was just reading about poisoning rats in Washington DC because apparently there is a resurgence. We do all kinds of things to animals, and some people think we should not, but the point is that when we cannot even make up our minds about the proper way to treat rats and mice, why would we now go in the other direction?
AI is something we have built, and as long as we are building something that we can maintain, something that we can be accountable for, we are never going to have something that experiences stress and pain and those kinds of things as systemically as humans do. It would be a really bad idea to do that.
You could build a robot now and put a bomb in it. You could give it a sensor and a clock and say, okay, if I have not seen any other person for five minutes, I blow myself up. You could do that, and you might say it is even worse than being a human, because when we go into solitary confinement it is torture for us, but we survive, while this robot blows up after five minutes. But the robot is not going to care. That is a separate thing; it is not the same experience of solitary confinement that a human has.
By over-anthropomorphizing, we are basically allowing ourselves to be exploited by people. For example, I hate the fact that people talk about smart speakers when they are really smart microphones.
What would you call them?
Do you know 1984? One of my friends, Jason Noble, said we should call them telescreens.
I would call them personal digital assistants, which is what we used to call something that was sort of a smartphone. The early ones were not networked; we would just write things down on them.
The point is that you have a personal digital assistant that is not personal, because you do not know how much it is being used by other people. I really hope that one of the things we are succeeding in doing (and I see some indication of this) is getting people over the idea that the internet of things is a good idea.
I was at a European Commission meeting in Sofia, the Digital Assembly, and the Eastern European politicians totally get this. Wiring the world together means that you are allowing the hacker next door to get into the world, so you need to be very careful about how you architect things. If you had a single program in charge of, say, the power grid, then that becomes a single point that can be compromised, either by hacking it directly or by understanding how it works and exploiting it by working around it. You would rather have a system that is heterogeneous.
Another thing that is really essential is that our entire culture and all of our concepts like justice, punishment, and responsibility are things that we have developed to keep our species running. They are not things that are just out there in the world; they are innovations that we use, and we keep updating them.
I cannot believe how many people think that fairness is something that is the native state of the world. Last year I had a paper come out that showed that you find sexism and racism in any artificial intelligence that’s been trained up on just ordinary language. Some people say, just add random noise.
We are different, and fairness is how we cope with those differences in the way that is best for society as a whole, and that is a constantly changing set of compromises that we make.
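For readers who want to see concretely what finding sexism and racism in language-trained models looks like, here is a minimal sketch in the spirit of the word-embedding association tests used in that line of work. It is not the paper's code; the word lists and the `emb` embedding lookup are hypothetical stand-ins.

```python
import numpy as np

def cosine(a, b):
    # Cosine similarity between two embedding vectors.
    return np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))

def association(word_vec, attribute_a, attribute_b):
    # Mean similarity to one attribute set minus mean similarity to the other.
    return (np.mean([cosine(word_vec, v) for v in attribute_a])
            - np.mean([cosine(word_vec, v) for v in attribute_b]))

def bias_score(targets_x, targets_y, attribute_a, attribute_b):
    # Positive score: X-words lean toward attribute A, Y-words toward attribute B.
    return (np.mean([association(v, attribute_a, attribute_b) for v in targets_x])
            - np.mean([association(v, attribute_a, attribute_b) for v in targets_y]))

# Hypothetical usage with a pre-trained embedding lookup `emb` (word -> vector):
# score = bias_score([emb[w] for w in ["programmer", "engineer"]],
#                    [emb[w] for w in ["nurse", "teacher"]],
#                    [emb[w] for w in ["he", "man"]],
#                    [emb[w] for w in ["she", "woman"]])
```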
What do we need to do in the next five to ten years in order to make sure that we develop safe AI that is augmenting our capabilities? AI is taking over many decisions, and we all have different values and stem from different cultures. How can we make sure to incorporate this in the design of AI?
It is not the AI that is taking over decisions. It is the people that are deciding to put AI in charge of decisions, and that is something that they are then in control of. They can alter the AI and alter the decisions.
It is not the AI; it is not another species or another country that we negotiate with. What is important is understanding that artificial intelligence is something we have had for a long time; we have had it for decades, using digital computers. It is one of the tools in our toolbox, and we need to get a strong concept of that. We need to start talking about it.
Again, this is one of the huge dangers of anthropomorphizing it: you literally get smart, well-meaning people who sit there trying to defend the rights of AI, or who really want to build a future where we are superseded by AI, and they do not realize, first of all, that it is incredibly technologically implausible, and second, that in the next thirty years or so it would almost certainly just lead to more people being killed and more destruction. People would be seen as unimportant by the governments, corporations, or rich individuals who do not need them as part of their plans.
AI is being used as a shield; it is effectively a shell company, allowing people to do what they really want while misdirecting attention towards some machines.
Artificial intelligence is just a technology that we can use badly or well. We need to answer the same kinds of questions as for other potentially harmful technologies: what is the right way to store chemical weapons? What is the right way to deploy nuclear power; should we not deploy it at all, or is it really important ecologically? Those are the same kinds of decisions we need to make when we talk about not only artificial intelligence but also data privacy and cybersecurity. This applies to ICT in general. AI is just one tool in the toolbox.
We are interacting with AI in more personal ways. What about emotional connections?
Even if you have a robot that really is your best friend, one that you are immensely emotionally attached to, you could demand that that robot can be backed up. Then you do not have to worry if, for example, there is a fire or something. You just leave it behind, buy another robot, and restore the backup.
You may choose to make an emotional bond with a machine instead of with the people who could actually help you in the long term. That is your personal decision, but you can demand that you do not have that kind of exposure, and I think this is a real issue.
When we had the flooding in New Orleans, it had to be decided how many resources to put toward making sure the dogs and cats were okay. Dogs and cats do things the way humans do; emotionally and physically they do very similar things, but the problem is we only have so many resources. How do we make sure no children are left behind before we start rescuing the dogs? Also, the families will not come without their dogs. The same thing happened with Sandy here on the east coast; animals were left behind and starving, and people were very concerned about that.
Why should we put ourselves in that situation with a technology that can be designed in a way that ensures we do not have those kinds of problems?
With that understanding of the dangers associated with AI, could you tell us about your Ph.D. topic in the early 90s and how it relates to AI safety?
One of the things people were talking about, even when I was an undergraduate, was modularity. For example, you are sort of a different person when you are driving than when you are a pedestrian, because you have such different attitudes. It seems like you have these discrete sections of your mind; you are not just one big thing.
Even philosophers were talking about that back then. When I was programming professionally, between my undergraduate degree and my Ph.D., people were starting to realize that maybe you did not need one big mainframe; maybe you could have a bunch of different computers and create a network. Now it is obviously called the cloud, but at the time it was this big thing where people said, "Oh yeah, client-server architecture… maybe we can have computing doing different specialized tasks based on the hardware that is there."
Then when I went to do my Master's degree, while I was at Edinburgh, I saw what Rodney Brooks was doing, and a bunch of other people there were getting very excited about his work. It was called behavior-based design, and the idea was that instead of trying to build something as complicated as a person, you build modules that are relatively simple, and each module perceives only what it needs to perceive in order to do what it needs to do.
Another thing I realized was that not only do you need memory in order to have intelligence, but it also makes sense for memory to be the core of a module. One of the things that determines when you need a module is what kind of information you need to do the perception and action. The other component, though, the one that went into the games industry, was how you arbitrate between these modules.
Once you have taken something apart into pieces, one of the problems is how to put it back together again; you need coherent behavior. There are certain kinds of things you can do just autonomously, like a reflex to avoid fire, and that covers quite a lot of what any organism has to do. But there are certain kinds of resources, like your physical location, that all the modules jointly determine. That was when I created what is now called behavior trees. We called it P.O.S.H. (Parallel-rooted, Ordered Slip-stack Hierarchical action selection).
The basic idea was that you want to use object-oriented design (standard software engineering), but the one thing you really need to do differently for AI is to say what the priorities in the system are, because an AI system is proactive. It is something that actually has to recognize the context in which it is going to act, and indeed it has to act for homeostatic reasons. It has to act because it knows it needs to do something; it has a purpose.
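As a rough illustration of the kind of priority-ordered, modular action selection described above (not the actual P.O.S.H. implementation), here is a minimal Python sketch; the module names, triggers, and percepts are invented for the example.

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class Module:
    name: str
    trigger: Callable[[dict], bool]   # perceives only what it needs to perceive
    act: Callable[[dict], None]       # acts when its context is recognized

def select_and_act(modules: list[Module], percepts: dict) -> str:
    # Modules are listed in priority order; the first whose trigger fires wins.
    for m in modules:
        if m.trigger(percepts):
            m.act(percepts)
            return m.name
    return "idle"

# Hypothetical modules, highest priority first: safety reflexes beat goal pursuit.
modules = [
    Module("flee_fire", lambda p: p.get("temperature", 20) > 60,
           lambda p: print("running away from fire")),
    Module("recharge", lambda p: p.get("battery", 1.0) < 0.2,
           lambda p: print("heading to charger")),
    Module("do_task", lambda p: True,
           lambda p: print("working on the current task")),
]

select_and_act(modules, {"temperature": 75, "battery": 0.9})  # -> "flee_fire"
```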
I was not really worried about evil AI, but I was worried about not being able to guarantee that for example, it would know how to run out of a fire or whatever.
So you had to have certain high-priority guarantees about the system that are relatively deterministic. Even if you use machine learning, that can still be used in a deterministic way, because you can set boundaries beyond which you do not allow the system to update its learning. If it looks like the system is not going to meet performance guidelines, you can build deterministic fences around the part that is trying to improve one aspect of what the system does.
The mistake that people like Nick Bostrom make is not seeing that if you actually build a system this way, there is zero chance that it turns the world into paper clips. He is worried about the alignment between the high-level priorities and what happens at the lower levels, but you do not give full autonomy to the lower levels. If the system is autonomous at all, it is autonomous at the level of the priorities that are guaranteed to be in it, and there are subparts that go off to learn the best way to do something. If the rest of the system, which for any AI normally includes humans, notices that there is an unexpected consumption of metal or something, then it shuts that part of the system down.
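Here is a minimal sketch of the idea of a deterministic fence around a learning subpart, with the rest of the system able to shut it down. The bounds, the resource budget, and the class names are illustrative assumptions, not code from any real system.

```python
# Minimal sketch (not Bryson's actual code): a learning subpart may only adjust a
# parameter inside fixed bounds, and a supervising check disables it if an
# externally monitored resource budget is exceeded.

SAFE_MIN, SAFE_MAX = 0.0, 1.0   # deterministic fence on what learning may change
RESOURCE_BUDGET = 100.0         # e.g. an upper bound on material/energy use

class BoundedLearner:
    def __init__(self, value: float = 0.5):
        self.value = value
        self.enabled = True

    def update(self, proposed_delta: float) -> None:
        if not self.enabled:
            return
        # Clamp the learned parameter into the guaranteed-safe range.
        self.value = max(SAFE_MIN, min(SAFE_MAX, self.value + proposed_delta))

def supervise(learner: BoundedLearner, resource_used: float) -> None:
    # The rest of the system (including humans) can shut the subpart down.
    if resource_used > RESOURCE_BUDGET:
        learner.enabled = False
```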
Can you give an example to illustrate this? You are saying that, instead of having one system that works autonomously, you design a modular system with different components that check and control each other, and thus the system as a whole…
One example is Bitcoin. Nobody had originally expected Bitcoin was going to consume as much power as it has, but now everyone is trying to figure out how to regulate that. That is a problem to some extent that is self-regulating because people cannot afford that power.
However, it seems that other regulatory forces may come in anyway and say: you are not paying your fair share for the environmental damage you are doing. Then they, and maybe not just them but all power consumers, may suddenly find themselves, as a consequence of the conversation there has been about Bitcoin, paying a more appropriate amount of money for the global infrastructure they are degrading.
The point is that you may fail to anticipate things, but because people are involved, the failures get noticed. It is called cognizant error detection: the awareness that things are not always going to go right. Consequently, a basic thing you should do when you structure a system is to make sure you build components that check other components. In fact, this is an easier way to build a powerful system, because you have simple components that detect when other components are going off the rails.
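A toy example of one component checking another, in the spirit of cognizant error detection; the odometry and GPS components and the tolerance value are invented for illustration.

```python
# Minimal sketch: an independent checker flags a primary component when its
# output drifts away from a second, independent source of the same quantity.

def estimate_speed(wheel_rpm: float, wheel_circumference_m: float) -> float:
    # Primary component: speed from wheel odometry, in m/s.
    return wheel_rpm / 60.0 * wheel_circumference_m

def check_speed(odometry_speed: float, gps_speed: float, tolerance: float = 2.0) -> bool:
    # Checker component: accept only if odometry agrees with GPS within tolerance.
    return abs(odometry_speed - gps_speed) <= tolerance

odo = estimate_speed(wheel_rpm=300, wheel_circumference_m=2.0)
if not check_speed(odo, gps_speed=4.0):
    print("odometry module going off the rails; falling back and alerting")
```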
I am not saying that everything necessarily works that well. There have been very good examples of corporations and various other entities that have done enormous damage to the world. It is not that the world is perfectly safe and nothing can happen, but there are processes we can follow that make it much more likely that we will catch those problems early and keep them from doing much damage.
One of the things I am working on a lot right now is trying to communicate to legislatures that you can hold people who use artificial intelligence accountable.
I’m also talking to companies; I was just talking to Microsoft and Google two weeks ago about this, that this is what I expect to be coming down the pipes.
You are no longer going to just link to any arbitrary software library that you have downloaded from the internet, where you do not know its provenance and you do not know who has hacked into it or whatever.
Rather you’re going to have to be able to show, what version of software system you have used and what version of what data library you use to train your machine learning, and what your procedures were. If you don’t have those kinds of records then you’re not going to be able to prove due diligence and you are going to be held responsible, liable for any damage that you do or other people do with your software.
This is not a unique approach. Look at fields like medicine or the automotive industry, where there is already good practice in the way they develop AI and document the process, and they have loads of AI. Everyone talks about driverless cars, but there is loads of AI in every car; manufacturers are trying to improve the driving experience and make the car safer, and they already document this. It is not impossible. It is not even that hard, and when you are careful with your software engineering, it does take a little longer to get to market, but your code is also much easier to maintain and extend.
I think we need to grow up. For example, in architecture it used to be that you could just build a building anywhere you wanted, and that would create various kinds of mayhem for traffic in cities, and sometimes buildings would just collapse on people and kill them. What you have now is that undergraduates getting architecture degrees learn how to talk to city planners, they learn how to become licensed, and buildings get inspected.
I think we could very well wind up in that kind of place, and it does not mean the end of IP. Medicine is heavily regulated and heavily inspected, but it has ten times as much IP as the software and technology industry. I think we are just going to define the processes by which we develop things. Things are changing, and that is good for our society.
We all want to live in a society that is at least stable enough that we can plan a company, or a family, or something else.