Daily updates of news, research and trends by UPCEA
Click on the URL at the end of each post to visit the relevant article or website.
Friday, May 31, 2019
How we might protect ourselves from malicious AI - Karen Hao, MIT Technology Review
We’ve touched previously on the concept of adversarial examples: the class of tiny input changes that, when fed into a deep-learning model, cause it to misbehave. In recent years, as deep-learning systems have become increasingly pervasive in our lives, researchers have demonstrated how adversarial examples can affect everything from simple image classifiers to cancer-diagnosis systems, with consequences ranging from the benign to the life-threatening. A new paper from MIT now points toward a possible path to overcoming this challenge. It could allow us to create far more robust deep-learning models that would be much harder to manipulate in malicious ways.
https://www.technologyreview.com/s/613555/how-we-might-protect-ourselves-from-malicious-ai/
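For readers curious what such an attack looks like in practice, below is a minimal sketch of the fast gradient sign method (FGSM), one classic way to craft adversarial examples. This illustrates the general technique the post describes, not the MIT paper's defense; the tiny untrained model, the random input, and the epsilon value are placeholder assumptions for demonstration only.

```python
# Minimal FGSM sketch: nudge each input pixel slightly in the direction
# that increases the model's loss. The model and data here are stand-ins.
import torch
import torch.nn as nn

torch.manual_seed(0)

# Stand-in classifier: a small linear network over flattened 28x28 "images".
model = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 10))
model.eval()

loss_fn = nn.CrossEntropyLoss()

x = torch.rand(1, 1, 28, 28)   # placeholder input image with pixels in [0, 1]
y = torch.tensor([3])          # placeholder true label
epsilon = 0.1                  # perturbation budget (max per-pixel change)

# Compute the gradient of the loss with respect to the input itself.
x.requires_grad_(True)
loss = loss_fn(model(x), y)
loss.backward()

# FGSM step: move every pixel by epsilon in the direction that increases
# the loss, then clamp back to the valid pixel range.
x_adv = (x + epsilon * x.grad.sign()).clamp(0, 1).detach()

print("clean prediction:      ", model(x).argmax(dim=1).item())
print("adversarial prediction:", model(x_adv).argmax(dim=1).item())
```

The epsilon budget caps how far any single pixel can move, which is why such perturbations can stay imperceptible to a human viewer while still flipping the model's prediction.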