How Artificial Intelligence Reflects Human Biases - And How It Can Improve
Whether you're searching for something on Google, getting quoted a mortgage rate, or applying for a job, much of our lives today is informed by artificial intelligence. Or, to use the less scary term, intelligent algorithms.
While AI helps systems operate quickly, it's not perfect. Like humans, these technologies are only as good as the information they're given.
"On Second Thought" host Virginia Prescott speaks with Dr. Ayanna Howard.
Dr. Ayanna Howard is chair of the School of Interactive Computing at Georgia Tech. She joined On Second Thought to talk about how technology often reflects our own biases, the dangers of employing and blindly believing imperfect AI systems, and how these algorithmic biases can be improved.
"Research, including my own, has shown that when an algorithm, an AI system, says something, people are more inclined to believe it, even if they aren't quite sure," Dr. Howard explained. "Like if a person says something, and we don't believe them, we'll go, 'Oh, let's Google it' or something. When an AI system says something, we are more inclined to be like, 'Well, it must know something I don't know.'"
Get in touch with us.
Twitter: @OSTTalk
Facebook: OnSecondThought
Email: OnSecondThought@gpb.org
Phone: 404-500-9457