
Search Engines, Algorithms and Bias

Glossary

Algorithmic Bias - a systematic error in predictive computation. In some contexts, the term describes statistical mistakes that predictive models make because of code bugs, poor model selection, inappropriate optimization metrics, or suppressed data. Most commonly, especially among non-data-scientists, algorithmic bias refers to computational discrimination in which unfair outcomes privilege one arbitrary group of people over another. In this sense, the focus is on the disparate impact technology may have in reinforcing social biases based on race, gender, sexuality, ethnicity, age, and disability. Source: Center for Critical Race + Digital Studies
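
A minimal sketch of how the disparate impact described above might be measured: compare the rate of favorable model outcomes across two groups. The classifier outputs, group labels "A" and "B", and all data below are hypothetical placeholders, not part of the source.

```python
# Illustrative sketch: measuring disparate impact between two groups.

def selection_rate(predictions, groups, group_label):
    """Fraction of members of `group_label` who received a favorable outcome (1)."""
    outcomes = [p for p, g in zip(predictions, groups) if g == group_label]
    return sum(outcomes) / len(outcomes)

def disparate_impact_ratio(predictions, groups, group_a, group_b):
    """Ratio of selection rates; values far below 1.0 suggest group_a is disadvantaged."""
    return selection_rate(predictions, groups, group_a) / selection_rate(predictions, groups, group_b)

# Toy data: 1 = favorable outcome (e.g., loan approved), 0 = unfavorable.
preds  = [1, 0, 0, 1, 1, 1, 0, 1]
groups = ["A", "A", "A", "A", "B", "B", "B", "B"]
print(disparate_impact_ratio(preds, groups, "A", "B"))  # 0.5 / 0.75 ≈ 0.67
```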

Search Engine Bias - the selective presentation of Web documents to a search engine user. This selectivity can be in terms of sources, content, viewpoints, and page ranks. The causes of these biases can be explicit, as with censorship that prevents users from accessing certain sources, content, or viewpoints, or implicit, as with algorithms that make certain content difficult to find and see. Source: Fons Wijnhoven & Jeanna van Haren
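
A minimal sketch of the distinction drawn above between explicit and implicit causes: an explicit filter blocks a source outright, while an implicit ranking penalty leaves pages indexed but hard to find. The documents, source names, and weights are purely illustrative assumptions.

```python
# Illustrative sketch: explicit vs. implicit search engine bias.

documents = [
    {"title": "Doc 1", "source": "outlet_a", "relevance": 0.9},
    {"title": "Doc 2", "source": "outlet_b", "relevance": 0.8},
    {"title": "Doc 3", "source": "outlet_b", "relevance": 0.7},
]

# Explicit bias: censorship removes a source entirely; users cannot access it.
def explicit_filter(docs, blocked_source):
    return [d for d in docs if d["source"] != blocked_source]

# Implicit bias: the ranking quietly down-weights a source, so its pages
# still exist but become hard to find and see.
def implicit_rank(docs, penalized_source, penalty=0.5):
    return sorted(
        docs,
        key=lambda d: d["relevance"] * (penalty if d["source"] == penalized_source else 1.0),
        reverse=True,
    )

print([d["title"] for d in explicit_filter(documents, "outlet_b")])  # ['Doc 1']
print([d["title"] for d in implicit_rank(documents, "outlet_a")])    # ['Doc 2', 'Doc 3', 'Doc 1']
```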

Search Neutrality - various policies aimed at correcting biases in search engine results; a proposed solution to the problem of search engine bias. Source: If Search Neutrality is the Answer, What's the Question?

Algorithm Fairness - algorithms are vulnerable to biases that render their decisions "unfair." A biased model may inadvertently encode human prejudice present in its training data: the algorithm may learn incorrect patterns, such as stereotypes, from the observed data and use them to make predictions that affect people's lives. The algorithm itself may also produce unfairness or discrimination, for example by sacrificing performance on marginalized groups to achieve higher accuracy on the overall sample, leaving those groups at a disadvantage. A well-known case is COMPAS, which estimates the risk that a defendant will reoffend and has been found to disproportionately assign high recidivism-risk scores to African-American defendants. Similar problems have been found in employment, insurance, and advertising. Source: Wang, Zhang, & Zhu
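
A minimal sketch of the kind of disparity reported for COMPAS-style risk scores: the false positive rate (people who did not reoffend but were flagged high risk) can differ sharply between groups even when overall accuracy looks acceptable. The group labels, outcomes, and predictions below are hypothetical, not actual COMPAS data.

```python
# Illustrative sketch: comparing false positive rates across groups.

def false_positive_rate(y_true, y_pred, groups, group_label):
    """Among members of `group_label` who did NOT reoffend (y_true == 0),
    the fraction the model flagged as high risk (y_pred == 1)."""
    negatives = [p for t, p, g in zip(y_true, y_pred, groups)
                 if g == group_label and t == 0]
    return sum(negatives) / len(negatives)

# Toy data: y_true = actually reoffended, y_pred = scored high risk.
y_true = [0, 0, 0, 1, 0, 0, 0, 1]
y_pred = [1, 1, 0, 1, 0, 0, 1, 1]
groups = ["A", "A", "A", "A", "B", "B", "B", "B"]

print(false_positive_rate(y_true, y_pred, groups, "A"))  # 2/3 ≈ 0.67
print(false_positive_rate(y_true, y_pred, groups, "B"))  # 1/3 ≈ 0.33
```

Unequal false positive rates like these mean one group bears more of the model's errors, which is the disadvantageous position the definition above describes.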