This morning I was fortunate enough to have the time to read an essay by Satya Nadella, the CEO of Microsoft and an intelligent, impassioned thinker about the new industrial revolution that artificial intelligence will bring.
While the CliffsNotes version is interesting, I strongly recommend that you take the time to read the entire essay, in which Nadella argues that robots should be viewed as humanity's assistants, not our future domineering overlords. To arrive at this state we must rely on the ethics and principles of the engineers who construct the artificial intelligence.
I'm sure many of you have some concerns about this on the 25th anniversary of the movie Terminator 2. Surely all the AI we build will one day combine into a super locus of intelligence that views us as cockroaches and mental weaklings?
Well, we have some time before that happens. Consider that the neurons of the human brain, taken together, exceed the entire capacity of silicon memory on this planet by several orders of magnitude, and that we would need to expand to other planets to gather the resources to build such a super brain. I think time is on our side.
Nadella's argument is that we should set aside these apocalyptic scenarios and focus instead on the core skills that must be developed in humans: empathy, education, creativity, judgement, and accountability. We will look to machines to help us enrich these skills, and to bring out the best in us. For if machines are truly to augment humans, these are the areas in which we most need augmentation.
The traditional medical catechism "first do no harm" carries an emotional component that remains beyond the capabilities of artificial intelligence; supplying it falls to the designer of the system. The designer must understand the system, know its limitations and its dangers, and above all know that he or she alone cannot represent the entire human population. We speak today of the need for care team members and patient advocates to be involved in the design of medical systems. How much more will this be true in the future?
As a computer scientist and mathematician working in medicine, I feel a deep weight of responsibility in Nadella's words. The systems that I conceive and build will be responsible for the care of other humans, and while they will fail from time to time, they must do so in a way that allows us to inspect the errors and arrive at accountable answers. Above all, they must seek to augment, not replace, the role of the clinical care team in medical decision-making.
I must bring the human emotional component to the machine so that the machine may understand us, and we collectively may understand it. For while it is possible for me to comprehend the emotional component of the Hippocratic oath, the types of artificial intelligence we see today have no such ability, and without supporting ethical systems they will certainly make emotional mistakes that are logically correct.