YouTube has two billion monthly active users, and 500 hours of content are uploaded to it every minute. Twenty-five percent of U.S. adults get their news from YouTube, and 60% of regular users “use the platform to keep up with current events.” Since roughly 70% of all videos watched are served up by YouTube’s recommendation algorithm, it is fair to ask whether that algorithm is biased. A new study says it is, and that it leans left.
Over the last few years, it has become clear that using a patient’s race as a variable in some predictive models – like those involving kidney function – leads to poorer outcomes, while removing that variable leads to improved prediction of patient risk, more prompt treatment, and presumably better care. Yet a new study shows that taking the race variable “out of the equation” makes the predictive model fare worse. Should we consider race a determinant of health?
A new study shows how an artificial intelligence algorithm is biased against Black patients – specifically, by denying them care designed to improve their outcomes and quality of life. Why is there so little concern? And who is responsible for algorithmic healthcare?