Three Ways Biased Data Can Ruin Your ML Models

Machine learning provides a powerful way to automate decision making, but the algorithms don’t always get it right. When things go wrong, it’s often the machine learning model that gets the blame. But more often than not, it’s the data itself that’s biased, not the algorithm or the model.

That’s been the experience of Cheryl Martin, Ph.D., who worked as an applied research scientist at the University of Texas at Austin and NASA for 14 years before joining the AI crowdsourcing outfit Alegion as its chief data scientist earlier this year. “You often hear that the algorithm is biased, or the machine learning is algorithmically biased,” Martin tells Datanami.
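Martin’s point can be illustrated with a minimal sketch (this is a hypothetical toy example, not Alegion’s method): if the training sample under-represents one outcome, even a perfectly "neutral" learning rule reproduces that skew. Here a simple majority-class baseline, trained on a skewed label set, always predicts the over-represented outcome.

```python
from collections import Counter

# Hypothetical training labels from a skewed sample:
# 95 "approve" outcomes vs. 5 "deny" -- the sample itself,
# not any algorithm, under-represents one outcome.
labels = ["approve"] * 95 + ["deny"] * 5

# A neutral majority-class rule still absorbs the skew in the data.
majority_class = Counter(labels).most_common(1)[0][0]

def predict(applicant):
    # The model faithfully reproduces the bias baked into the data,
    # ignoring the applicant entirely.
    return majority_class

print(predict({"income": 40000}))  # prints "approve" for every input
```

The algorithm here does exactly what it was asked to do; the one-sided predictions come entirely from the one-sided sample it was given.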

Author: Alex Woodie
