Machine learning provides a powerful way to automate decision making, but the algorithms don’t always get it right. When things go wrong, it’s often the machine learning model that gets the blame. But more often than not, it’s the data itself that’s biased, not the algorithm or the model.
That’s been the experience of Cheryl Martin, Ph.D., who worked as an applied research scientist at the University of Texas at Austin and NASA for 14 years before joining the AI crowdsourcing outfit Alegion as its chief data scientist earlier this year. “You often hear that the algorithm is biased, or the machine learning is algorithmically biased,” Martin tells Datanami.
Author: Alex Woodie