Frank Pasquale:

Over the past decade, algorithmic accountability has become an important concern for social scientists, computer scientists, journalists, and lawyers. Exposés have sparked vibrant debates about algorithmic sentencing. Researchers have exposed tech giants showing women ads for lower-paying jobs, discriminating against the aged, deploying deceptive dark patterns to trick consumers into buying things, and manipulating users toward rabbit holes of extremist content. Public-spirited regulators have begun to address algorithmic transparency and online fairness, building on the work of legal scholars who have called for technological due process, platform neutrality, and nondiscrimination principles.

This first wave of academic research (and activism) has raised broader public awareness, which is slowly beginning to translate into regulation. Partly because:

many members of the corporate and governmental establishment now acknowledge that data can be biased, inaccurate, or inappropriate. 

Pasquale explains that the first wave of research was only the beginning. Its focus was on improving existing systems. The second wave asks whether such systems should exist at all:

For example, when it comes to facial recognition, first-wave researchers have demonstrated that all too many of these systems cannot identify minorities’ faces well. These researchers have tended to focus on making facial recognition more inclusive, ensuring that it will have a success rate as high for minorities as it is for the majority population. However, several second-wave researchers and advocates have asked: if these systems are often used for oppression or social stratification, should inclusion really be the goal? Isn’t it better to ban them, or at least ensure they are only licensed for socially productive uses?

And those are exactly the right questions.