
This is the second part of a piece on self-learning algorithms that absorb human prejudices from the people who create them, and on what effect this may have on us as Artificial Intelligence (AI) becomes more deeply integrated into our lives.
Read the first part here.
In this piece, we look at how we can weed out biases from bots, and whether it is possible, or indeed desirable, to do so.
Speaking on an a16z podcast, “When Humanity Meets AI”, Dr Fei-Fei Li, director of the Stanford AI Lab, identified ‘humanistic thinking’ as a key gap in developing technology, especially AI. Creator biases can be overcome, and a ‘humanistic’ narrative is likely to emerge, only if the pool of people working on AI becomes more diverse across sexes, races, ethnicities and cultures.
Jeffrey Dean, head of the Google Brain Team working on AI research, is concerned about the lack of diversity in the AI research community. “In my experience, whenever you bring people together with different kinds of expertise, different perspectives, etc., you end up achieving things that none of you could do individually, because no one person has the entire skills and perspective necessary,” he said during a recent Reddit AMA.
However, the Google Brain Team itself has token diversity at best: as of August 2016, it had just three women (and 40 men) to represent the thinking of nearly half the world.
Perhaps this is why Google, Facebook, Microsoft and Amazon are all open-sourcing their AI platforms. Open-sourcing widens the pool of people who can inspect and shape these systems; it doesn’t completely beat the ‘dudes working on software’ problem, and it may not resolve the deeper diversity issues, but it is a start, at least.
But are countries around the world ready to join in defining this future?
Late in December 2015, Elon Musk and a group of other partners formed OpenAI, a non-profit artificial intelligence research company. “We believe AI should be an extension of individual human wills and, in the spirit of liberty, as broadly and evenly distributed as possible,” says its mission statement.
Non-profit research is one part of the solution towards building transparent, unbiased AI platforms, although mass adoption and development are still likely to be controlled by the large AI despots that will emerge.
Today, AI is built in a near-total regulatory vacuum. It is notoriously difficult to regulate: there is no common framework for its development, a multiplicity of actors are working on it, the underlying algorithms are opaque, and the problems it may create are hard to foresee. Governments are still struggling even to define artificial intelligence legally.
A human child is raised in a protective bubble for years before being allowed to roam the open world and learn of its own volition. AI today is still in its infancy, and similarly child-like.
Books are the largest ongoing compendium of human civilisation’s record of itself, and one line of research suggests using the corpus of stories in our books to teach AI systems about human values before letting them roam free.
However, language is a significant determinant of the value-narrative that can be derived from books; would the cultural and social values of the large parts of the world outside the West still get ignored?
For, unlike human children, AI systems will accumulate and solidify their learning over time spans beyond a single lifetime, and early biases will tend to lurk for eternity.
Even more fundamental is the question: is our definition of right and wrong, of morality, so static that machines can learn it from books and gather a clear understanding of it?
Even more radical is this thought experiment: what if there existed an AI framework, a ‘Mother AI’ if you will, that provided the necessary moral, ethical and social checks and balances for all AI platforms being developed in the world? While this does not solve the primary problem of how to train the Mother AI itself, it consolidates the moral and ethical questions into one platform that can be monitored, worked on and constantly improved through collective effort. All AI platforms would, as a rule, need to hook into it, ensuring they operate under a standard, ongoing framework of ‘goodness’.