Jun 16, 2016

"Everybody's a lover, everybody's an innovator, and everything is AI": Rohan Murty

By Shrabonti Bagchi

There’s a virtual gabfest about Artificial Intelligence or AI out there these days. You may have heard AI is taking over the world. AI is the new app. It is the new cloud computing. It is the new big data. It is the latest jargon-y term that everyone uses with casual abandon.
We thought it would be a good idea to tell you something about AI that the so-called experts won’t tell you. Watch and read our interview with Rohan Murty, who has a keen interest in the subject. Despite being part of global academia (he is a Junior Fellow at the Society of Fellows at Harvard), his views on AI are unorthodox, refreshing and border on the philosophical. Murty has a PhD in computer science from Harvard University, with research interests in networked systems, embedded computing, and distributed computing systems. He studied computer science at Cornell University, and was a Computing Innovations Fellow at MIT.
Talking about how the term “AI” is flung about rather loosely these days in popular culture and media, Murty dissects the meanings of intelligence and artificial intelligence. Is creative human intelligence merely an aberration along the continuum of human thought and endeavour? We don’t know, and neither does he, but it’s important to ask ourselves if acts of intelligence are, in fact, random events. Why? Because once we understand intelligence, we may better understand artificial intelligence.
Edited excerpts from the interview:
How would you define Artificial Intelligence?
It would really depend on what context you’re asking the question in. AI, in the way computer scientists define it, is perhaps different from the way it is defined or thought of in popular culture and media. In the classical computer science sense, the whole promise of AI was supposed to be the promise of computer science; in fact, if you look at the computer science department at MIT, it is called the Computer Science and Artificial Intelligence Laboratory. If you look at what Alan Turing, the progenitor of modern computer science, [was working on] — his ruminations came from this idea of ‘How does the mind work? Can you get a machine to model the human mind?’ The construct of the Turing machine, which is not a real physical machine but a theoretical construct, was a very rudimentary, crude way of beginning to model the human mind. And so, from the beginning, the promise of computer science has been the promise of AI.
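(Editor’s note: Murty’s point that the Turing machine is a theoretical construct rather than a physical machine is easy to make concrete. The sketch below is our own illustration, not Murty’s: a minimal Turing machine simulator in Python, where the “machine” is nothing but a table of rules. The particular rule table, which increments a binary number, and every name in it are invented for this example.)

```python
# A minimal Turing machine simulator (illustrative sketch, not from the interview).
# The "machine" is just a transition table: (state, symbol) -> (write, move, new state).
# This particular table increments a binary number written on the tape.

def run_turing_machine(tape, transitions, state="start", blank="_"):
    tape = list(tape)
    head = 0
    while state != "halt":
        # Extend the tape with blanks if the head walks off either end.
        if head < 0:
            tape.insert(0, blank)
            head = 0
        if head >= len(tape):
            tape.append(blank)
        symbol = tape[head]
        write, move, state = transitions[(state, symbol)]
        tape[head] = write
        head += 1 if move == "R" else -1
    return "".join(tape).strip(blank)

# Transition table for binary increment: walk to the rightmost bit, then carry left.
INCREMENT = {
    ("start", "0"): ("0", "R", "start"),
    ("start", "1"): ("1", "R", "start"),
    ("start", "_"): ("_", "L", "carry"),
    ("carry", "1"): ("0", "L", "carry"),   # 1 plus carry -> 0, keep carrying
    ("carry", "0"): ("1", "L", "halt"),    # 0 plus carry -> 1, done
    ("carry", "_"): ("1", "L", "halt"),    # overflow: prepend a 1
}

print(run_turing_machine("1011", INCREMENT))  # -> "1100"
```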
But then, over a period of time, people started thinking about how to build an interface out of that — that’s what von Neumann did at the Institute for Advanced Study at Princeton. We began to wonder how we interface with these machines… how do we create something like an operating system? That’s how computer science today has branched out into so many different areas. And AI has remained, from the beginning, one of those areas.
This is the historical context in which people thought about AI, and depending on who you ask, people think of it in different ways: ‘Oh, can I get a machine to think like a human, or react like a human, or behave like a human, or reason like a human, or understand like a human?’ In my limited opinion, we are very far away from all of this, because the whole paradigm of computer science is still largely this notion of GIGO — Garbage In, Garbage Out. A machine pretty much only does what you tell it. There’s one variant of intelligence where you say, ‘Can you find patterns?’ Finding patterns is intelligence; that’s what IQ tests are supposed to measure, to a certain extent, and it’s what interviewers are probing when they ask candidates to find patterns. Fair enough: to that extent you can say machines can find certain kinds of patterns. But can a machine figure out whether that pattern is the right pattern, or whether it is valuable? No. We don’t yet know how to [teach a machine] to do that.
What is your definition of intelligence?
My notion or definition of intelligence is — can you solve a problem that you have previously never seen? A classical example of this kind of intelligence is mathematicians — they come up with theorems and proofs that they’ve never seen before. That whole end-to-end process is one kind of intelligence you can’t yet ascribe to machines. I don’t think we yet have a comprehensive paradigm for thinking about this in the context of machines. To give you another example — you can show a picture of a few cars to a child, different kinds of cars, and after that it’s very likely that the child will recognize every car. With machines, we can’t yet do that. The child understands context, understands how to apply this in different scenarios and principles, and what meaning to derive out of it. We don’t even have the ability to use these words in the context of a machine. What does a machine understand from the word ‘red’ versus the word ‘blue’? It evokes nothing in it. In my mind, at a philosophical level — I’m not saying this will never happen; who knows what the future holds — there’s a little too much excitement about ‘oh, machines are taking over the world!’ I don’t yet see it that way.
Then how do you see it?
The way that I see it is — most tasks that most of us do on most days are fairly deterministic tasks; by that I mean you can actually write down a bunch of rules on a piece of paper and describe most of the task, if not all parts of it. And over a period of time, machines have become more powerful, in the sense that processors and memory have become cheaper and so on, and that is letting us run more and more algorithms which can help us do more and more of these deterministic tasks cheaper and faster and better. And sometimes, perhaps, some people mistake this for ‘machines are taking over the world and are taking over from humans and you’ll never need humans’. No. I don’t believe that at all. Human endeavour and human creativity are fairly genuine things. And this is my take on the increased use of this word AI in popular culture and media. It’s a misuse of the term, and it’s a misnomer. I think people use it far too loosely, like they use the word ‘love’ or ‘innovation’. Everybody’s a lover, everybody’s an innovator and everything is AI.
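(Editor’s note: to make concrete Murty’s point that a deterministic task is one whose rules “you can actually write down on a piece of paper”, here is a hypothetical sketch of our own. The task, the policy, and the thresholds are all invented; the point is only that such a task reduces to conditionals a machine can execute.)

```python
# Hypothetical example: a deterministic task written down as explicit rules.
# The policy and thresholds are invented purely for illustration.

def approve_expense(amount, category, has_receipt):
    """Return True if the claim can be auto-approved, applying fixed rules."""
    if not has_receipt:
        return False                      # rule 1: no receipt, no approval
    if category == "travel" and amount <= 500:
        return True                       # rule 2: small travel claims are fine
    if category == "meals" and amount <= 50:
        return True                       # rule 3: modest meals are fine
    return False                          # everything else needs a human

print(approve_expense(420, "travel", has_receipt=True))   # True
print(approve_expense(420, "travel", has_receipt=False))  # False
```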
Today, you upload photos on social media sites and you tag photos, so you’ve shown [the algorithm] some thousand times what your face looks like, and yes, algorithms have become sophisticated enough that, because you have trained them, they can now find these patterns. But that, to me, has never been the definition of intelligence. My definition of intelligence is: can the machine do a task that it was never trained or programmed to do? I mean, that’s my definition of intelligence for a human being! Why should it be any less for a machine? For example, when I interview somebody, I don’t give them a problem they have solved hundreds of times and ask them to do it again. I give them a situation or a problem that I don’t think they are familiar with, and they have to reason from first principles. Look, there is value in all these things [machines finding patterns]; not for a moment am I saying there’s no value in it. These all represent advances, and the next couple of steps. But are machines going to replicate human intelligence? No.
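(Editor’s note: the tagging-and-training loop Murty describes is ordinary supervised learning: labelled examples in, a pattern-matcher out. The toy sketch below is our own; the “photos” are made-up two-number feature vectors. A nearest-neighbour classifier can label a new point that resembles its training data, which is exactly the pattern-finding he distinguishes from solving a genuinely unseen problem.)

```python
import math

# Toy supervised learning: each "photo" is a made-up 2-number feature vector,
# and each tag is a label supplied by the user. All numbers are invented.
tagged_photos = [
    ((0.9, 0.1), "alice"),
    ((0.8, 0.2), "alice"),
    ((0.1, 0.9), "bob"),
    ((0.2, 0.8), "bob"),
]

def predict(features):
    """Label a new photo with the tag of its nearest tagged example."""
    return min(tagged_photos,
               key=lambda example: math.dist(features, example[0]))[1]

print(predict((0.85, 0.15)))  # "alice" -- it has seen faces like this before
print(predict((0.15, 0.85)))  # "bob"
```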
What exactly do we mean when we say ‘intelligence’?
I have these discussions sometimes with my friends, who are all faculty or in academia — philosophical discussions, really, about intelligent creativity. And we sometimes wonder if, maybe, we are all operating on some sort of a probability distribution where acts of intelligence are random errors that we make, but in the positive direction. When we make random errors in the negative direction, we think ‘that person did something stupid or silly’; when we do something in the opposite direction, we think ‘that was a moment of genius or creative spark’.
So what is the role of AI in society, culture, in problem-solving?
The increasing trend is that we are implicitly discovering that more and more of the world runs on rules. Sure, rules have variations and a probabilistic sort of nature to them; they may be stochastic… but that’s why we are saying machines will do more and more of this. I don’t see that as the rise of AI. Rather, I see it as the explicit acknowledgement of the fact that the world is run [more often] by rules than we previously thought.
Mostly, nothing has changed. All that has changed is the power of computing; it has become cheaper. The day a machine can pass judgement in court, I’d say we are closer to AI. We know how to convey numbers to machines, and if only we could reduce morality to a set of numbers and rules… but that’s precisely the point, that’s precisely what we cannot do. Michael Sandel, in his ‘Justice’ lectures, talks about decision-making in this context.
https://www.youtube.com/watch?v=PFol2E7B5fQ


Shrabonti Bagchi is a writer at FactorDaily.