MIT makes breakthrough in morality-proofing artificial intelligence

Researchers at MIT are investigating ways of making artificial neural networks more transparent in their decision-making. As they stand now, artificial neural networks are a wonderful tool for discerning patterns and making predictions, but they have the drawback of not being terribly transparent. The beauty of an artificial neural network is its ability to sift through heaps of data and find structure within the noise, not unlike the way we might look up at clouds and see faces amid their patterns. And just as we might have trouble explaining to someone why a face jumped out at us from the wispy trails of a cirrus cloud formation, artificial neural networks are not explicitly designed to reveal which particular elements of the data led them to decide a certain pattern was at work and to make predictions based upon it.

To those endowed with an innate trust of technology, this might not seem like such a terrible problem, so long as the algorithm achieves a high level of accuracy. But we tend to want a little more explanation when human lives hang in the balance: for instance, when an artificial neural net has just diagnosed someone with a life-threatening form of cancer and recommended a dangerous procedure. At that point, we would likely want to know which features of the person's medical workup tipped the algorithm in favor of its diagnosis.

That’s where the latest research comes in. In a recent paper called “Rationalizing Neural Predictions,” MIT researchers Lei, Barzilay, and Jaakkola designed a neural network that would be forced to provide explanations for why it reached a certain conclusion. In one unpublished work, they used the technique to identify and extract explanatory phrases from several thousand breast biopsy reports. The MIT team’s method was limited to text-based analysis, and its rationales are therefore significantly more intuitive than those of, say, an image-based classification system. But it nonetheless provides a starting point for equipping neural networks with a higher degree of accountability for their decisions.
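
To make the idea concrete, here is a minimal sketch of the paper's generator/encoder setup: one network selects a subset of input words as the "rationale," and a second network must make its prediction from those words alone, so the selected phrases have to carry the evidence. This is an illustrative simplification, not the authors' code; it assumes PyTorch, uses a straight-through relaxation in place of the paper's REINFORCE-style sampling, and all class and variable names here are hypothetical.

```python
import torch
import torch.nn as nn

class Generator(nn.Module):
    """Scores each token; high scores mark words kept as the rationale."""
    def __init__(self, vocab_size, embed_dim=64, hidden_dim=64):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim)
        self.rnn = nn.GRU(embed_dim, hidden_dim, batch_first=True, bidirectional=True)
        self.score = nn.Linear(2 * hidden_dim, 1)

    def forward(self, tokens):
        h, _ = self.rnn(self.embed(tokens))                # (batch, seq, 2*hidden)
        probs = torch.sigmoid(self.score(h)).squeeze(-1)   # per-token keep probability
        # Hard 0/1 mask; the straight-through trick keeps gradients flowing
        # (the paper instead samples masks and trains with REINFORCE).
        hard = (probs > 0.5).float()
        mask = hard + probs - probs.detach()
        return mask, probs

class Encoder(nn.Module):
    """Predicts from the masked text only, so the rationale must suffice."""
    def __init__(self, vocab_size, embed_dim=64, hidden_dim=64, n_classes=2):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim)
        self.rnn = nn.GRU(embed_dim, hidden_dim, batch_first=True)
        self.out = nn.Linear(hidden_dim, n_classes)

    def forward(self, tokens, mask):
        x = self.embed(tokens) * mask.unsqueeze(-1)        # zero out unselected words
        _, h = self.rnn(x)
        return self.out(h.squeeze(0))

# One training step on random stand-in data, to show the joint objective:
vocab, seq_len, batch = 1000, 40, 8
gen, enc = Generator(vocab), Encoder(vocab)
opt = torch.optim.Adam(list(gen.parameters()) + list(enc.parameters()), lr=1e-3)

tokens = torch.randint(0, vocab, (batch, seq_len))
labels = torch.randint(0, 2, (batch,))

mask, probs = gen(tokens)
logits = enc(tokens, mask)
task_loss = nn.functional.cross_entropy(logits, labels)
sparsity = probs.mean()                                    # favor short rationales
coherence = (probs[:, 1:] - probs[:, :-1]).abs().mean()    # favor contiguous phrases
loss = task_loss + 0.1 * sparsity + 0.1 * coherence
opt.zero_grad(); loss.backward(); opt.step()
```

The two regularizers are the key design choice: the sparsity term pushes the generator to keep few words, and the coherence term pushes the kept words to form contiguous phrases, which is what lets the extracted rationales read as human-interpretable justifications rather than scattered keywords.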

Source: extremetech.com
