Machine learning algorithms are increasingly deployed in consequential decision-making domains, including judicial risk assessment, employment screening, and financial credit allocation. As these systems shape critical societal outcomes, they demand correspondingly close scrutiny. Although computational systems are often assumed to act as objective arbiters, a growing body of scholarship has shown that such algorithms can perpetuate, and even amplify, societal inequities embedded in their historical training data. This motivates a critical examination of the structures and assumptions underpinning these technologies.
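The mechanism by which historical inequities propagate can be made concrete with a toy sketch (entirely hypothetical; the groups, rates, and the naive per-cell "model" below are illustrative assumptions, not drawn from any real system). A predictor that simply learns historical hiring labels reproduces a past disparity between equally qualified candidates:

```python
import random

random.seed(0)

# Hypothetical historical dataset: each record is (group, qualified, hired).
# Candidates in group "b" were historically hired at a lower rate even when
# equally qualified -- the embedded bias the model will inherit.
def make_history(n=10_000):
    records = []
    for _ in range(n):
        group = random.choice(["a", "b"])
        qualified = random.random() < 0.5              # equal base rates
        if qualified:
            hire_prob = 0.9 if group == "a" else 0.6   # biased past decisions
        else:
            hire_prob = 0.1
        records.append((group, qualified, random.random() < hire_prob))
    return records

# A naive "model": score each (group, qualified) cell by its historical
# hire rate -- i.e., learn the labels exactly as recorded.
def fit_rates(records):
    counts, hires = {}, {}
    for group, qualified, hired in records:
        key = (group, qualified)
        counts[key] = counts.get(key, 0) + 1
        hires[key] = hires.get(key, 0) + int(hired)
    return {k: hires[k] / counts[k] for k in counts}

rates = fit_rates(make_history())

# Among equally qualified candidates, the learned scores differ by group,
# reproducing the historical disparity rather than correcting it.
print(f"qualified, group a: {rates[('a', True)]:.2f}")
print(f"qualified, group b: {rates[('b', True)]:.2f}")
```

Because the labels themselves encode the discriminatory decisions, minimizing error against them is indistinguishable from faithfully reproducing the discrimination.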