Alan Kay is one of the pioneers of personal computing. His role in developing the graphical user interface (GUI) at Xerox's Palo Alto Research Center (PARC), which set off the proliferation of hundreds of millions of computers, is well-storied. He is widely considered one of the fathers of object-oriented programming, or OOP, which at its simplest is a programming paradigm that bundles data together with the procedures that operate on it. OOP languages include Java, C++, PHP, Perl, Python, and Ruby, to name a few. Kay also conceived the Dynabook, a project close to his heart, which formed the bedrock of laptop, tablet and e-book computing as we know it today.
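[For readers new to the idea, here is a minimal editorial sketch in Python of what "bundling data with procedures" looks like; the `Account` class and its names are invented for illustration, not drawn from Kay's work.]

```python
# A tiny object: the data (owner, balance) lives together with
# the procedures (methods) that are allowed to operate on it.
class Account:
    def __init__(self, owner, balance=0):
        self.owner = owner        # data...
        self.balance = balance

    def deposit(self, amount):    # ...and a procedure bound to that data
        self.balance += amount
        return self.balance

acct = Account("Ada")
print(acct.deposit(100))  # prints 100
```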
Kay, 76, is now the President of the Viewpoints Research Institute, after four decades of computing work at PARC, Apple, Disney and HP. In an interview with FactorDaily over three emails — we believe it is the first he's given to an Indian publisher — Kay talks about today's computing challenges, artificial intelligence, lessons for modern research, and, of course, Apple. The interview, lightly edited:
[Editor's note: Spend some time on the interview to absorb the nuances of Kay's wisdom. Text within square brackets consists of editorial inserts to clarify what Kay is saying.]

Q: You are known for your vision of "code as biology", as mentioned in this Wired story. As computer scientists and companies scramble to mimic the human brain in solving real-world and business problems, what do you think artificial intelligence (AI) can and cannot do?
A: There’s the human brain, and there’s the human mind. Biology means variation, so we see a wide spread in depth of thinking that is partly genetic and partly cultural.
It would be a great breakthrough to make a dog’s or a cat’s mind ([we are] getting there), and then to make anything that works like a human mind.
However, the current aims of AI and its funders are mainly to create “mind models” that rank with superior human thinking along a variety of dimensions.
Most superior human thinking is the result of various kinds of inventions — including science — and the learning that is required. Some of this should be doable without recreating human-type minds. On the other hand, many important parts of the human-type mind have to be discovered and understood in order for any kind of inter-communication and understanding to take place (this is a part of the science that not very [many] are interested in). As with the dangers of playing around with the meta-systems of life, playing around with the meta-systems of thought "without enough thought" is likely to be quite dangerous in many respects.
Philosophically, the question of “life” was whether it was a special arrangement of simple elements, or whether something not physical was needed. The former seems to be the case. The philosophical question of intelligence also revolves around “architecture?” or “something special?”.
Again, I think the former will be the case, and this is likely much more wide open — and dangerous — as it is understood.

Q: AI has also become marketing jargon, used by some companies to position their low-end solutions as something disruptive. How can one separate real AI from all the noise?
A: See above.

Q: There are about three million engineers working in India's over-$150-billion software services industry. As one of the computing pioneers, who also came up with object-oriented programming, what would be your advice for the next generation of software engineers?
A: "Real objects" as I thought about them in the mid-60s have never made it into mainstream programming — just the label! I think this is because people want to be thought of as "current" but most don't really want change. So the "simulation idea" of objects — which can be used very powerfully — was mostly used to simulate old ideas (like data structures, etc.) — and the result was a more complex version of old ways of doing things that really don't scale well.
Part of the idea behind “real objects” was to act as “virtual machines” connected by “neutral messages” in the very same way as we were starting to design the Internet — and that the “virtual internet of objects” would map into the subsequent hardware network. The latter got made, and the former did not get understood or adopted.
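[An editorial sketch of the distinction Kay is drawing, in Python: each object behaves like a small virtual machine that interprets "neutral messages" for itself, rather than exposing its data structures to callers. The message names and the `receive` method are our invention for illustration, not Kay's own formulation or Smalltalk's actual API.]

```python
# The object alone decides how to interpret a message; senders never
# reach inside its state. Unknown messages get a graceful reply.
class BankAccount:
    def __init__(self):
        self._balance = 0

    def receive(self, message, *args):
        if message == "deposit":
            self._balance += args[0]
            return "ok"
        if message == "balance?":
            return self._balance
        return "does-not-understand"

acct = BankAccount()
acct.receive("deposit", 50)
print(acct.receive("balance?"))  # prints 50
print(acct.receive("format-disk"))  # prints does-not-understand
```

The point of the sketch: since callers only send messages, the internal representation can be replaced wholesale — the "virtual machine" boundary is what lets the design scale.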
So good advice for people in software is to really try to understand your field as though it were something like physics (where what Newton did needs to be understood).
Quite a few important ideas from the last 50 years have simply been ignored in the pretty weak culture of computing over the years. Then do things to make software practice qualitatively better, and convince your companies to make things better.

Q: Having been part of the legendary Xerox PARC, how do you view the research labs and efforts of some of the biggest technology companies today, including IBM, Google, Apple and Facebook? Are there lessons from your experience that can be applied to modern research?
A: My experience over the last 50 years has been that the quality of research results is mostly highly correlated with the quality of the funding and the administration of that funding. This "finds" the "best" and "right" people, who in turn "find" the best things to work on. This has not existed in the US in any comparable way since the 60s and 70s. But take a look at what Sam Altman is doing at YC Research. Vishal Sikka and Infosys are helping here. (Sikka, the CEO of Infosys, India's second largest software company, is an advisor to the OpenAI project, along with Kay.)
Q: You've inspired initiatives aimed at helping children get access to better computing and programming. What is your assessment of programs such as One Laptop Per Child? Have they achieved the mission? A lot of the criticism around these initiatives has been about the focus on building cheap devices.
A: The main criticism of OLPC is that there was not enough funding and lead time to put together a combination of great curriculum and a UI that could teach without requiring outside help. The project was a social success in most countries, and did lead other companies to try making similar devices (again, unfortunately, without the total work required for content and curriculum). Even with enough time and funding, doing this well is still an enormous task, and there are almost no examples (even in the small) of how this should be done in the large.

Q: Apple claims to have disrupted many industries with innovative products and ideas, including phones, music and even computers. What are your views on the Apple you see today, and how does it compare with the company that Steve Jobs co-founded and ran?
A: The early Apple under the ‘first Steve’ had a lot of idealism and romance about uplifting humans qualitatively — one of the slogans back then was “Wings for the mind!”. The first Steve got John Sculley from Pepsi by saying to him “Do you want to keep selling sugar water to children, or do you want to change the world?”
The irony is that the ‘second Steve’ of the later Apple made and sold the equivalent of mental sugar water to all via convenient appealing consumer gadgets.
Q: Do you feel that some of the smartest and brightest researchers and technology brains today are solving problems that don't matter? Contrast what we are seeing now with the kind of team you had at the legendary PARC.
A: Value judgements require a value system.
If the prevailing values have to do with simply making money then what is going on now “matters”. If we think that the human race in our age of great powers needs great wisdom that exceeds the powers, then most money-making goals will miss what is needed, and much of what is being done “doesn’t matter”.
Q: Is the technology industry thinking too short term in terms of next products and innovations? One of the complaints is that the companies are not thinking decades into the future. Does that worry you, as a pioneer?
A: Again, what are the values and what are the goals? If the values are primarily those of "hunter-gatherers", then nothing will be done with the ecologies to be exploited — the hunter-gatherers will move on, not replant, not renew the land, not invent better versions of agriculture or seeds, etc.

Q: What's your legacy? What would you like to be remembered for, as a pioneer and such a revered rockstar?
A: I'm not interested in being remembered — but I would like to have the ideas, visions, goals, and values of my whole research community not just remembered but understood and heeded.

Q: Some passionate and bright startup founding teams find it nearly impossible to convince VCs about their futuristic ideas; artificial intelligence, for instance. Unless, of course, it's solving a retail problem and so on. In a year when funding new ideas is tougher, what would be your advice to founding teams building futuristic solutions? Do they have hope?
A: Startups are not a good place to do research (and they rarely come up with real advances). The pacing is wrong, there is no time for problem "finding", there is a "customer" who is not expected to do much if any learning, and — by the way — "research means you can change your mind", whereas most of the action at startups (rightfully) is various kinds of engineering, packaging, and marketing.
Take a look at "The Dream Machine" by Mitchell Waldrop (the story of the ARPA computing research community) if you want a glimpse of the processes that make real progress. Or read a much shorter tribute I wrote about this community: pdf.

Also read: FactorDaily's interview of Rohan Murty, Junior Fellow at the Society of Fellows, Harvard University, and founder of the Murty Classical Library of India, on artificial intelligence. And another good interview of Kay by Time magazine.