How Do You Regulate a Self-Improving Algorithm?

The Atlantic – At a large technology conference in Toronto this fall, Anna Goldenberg, a star in the field of computer science and genetics, described how artificial intelligence is revolutionizing medicine. Algorithms based on the AI principle of machine learning can now outperform dermatologists at recognizing skin cancers in photos of blemishes. They can beat cardiologists at detecting arrhythmias in EKGs. In Goldenberg’s own lab, algorithms are being used to identify hitherto obscure subcategories of adult-onset brain cancer, estimate the survival rates of breast-cancer patients, and reduce unnecessary thyroid surgeries.

It was a stunning taste of what’s to come. According to the McKinsey Global Institute, large tech companies poured as much as $30 billion into AI in 2016, with another $9 billion going into AI start-ups. Many people are already familiar with how machine learning—the process by which computers automatically refine an analytical model as new data comes in, teasing out new trends and linkages to optimize predictive power—allows Facebook to recognize the faces of friends and relatives, and Google to know where you want to eat lunch. These are useful features, but they pale in comparison to the ways in which machine learning will change health care in the coming years.

The science is unstoppable, and so is the flow of funding. But at least one roadblock stands in the way: a big, bureaucratic Cold War–era regulatory apparatus that could prove to be fundamentally incompatible with the very nature of artificial intelligence.

Read more at The Atlantic.