Intervention 1: Rule-Based Decision-Making Tool
Equity
The equity of rule-based systems is directly determined by the rules encoded within them. These explicit "if-then-else" rules reflect the values and potential biases of their creators.
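The point can be made concrete with a minimal sketch of such a system (all thresholds and fields here are hypothetical, not drawn from any deployed tool): each "if" is a value judgment made by the rule's authors, and a seemingly neutral cutoff can carry hidden bias.

```python
# Minimal sketch of a rule-based screening decision.
# Hypothetical thresholds and fields -- every "if" encodes a value
# judgment made by the rule's authors.

def screen_applicant(applicant: dict) -> str:
    # Rule 1: hard cutoff on years of experience (author-chosen threshold)
    if applicant["years_experience"] < 2:
        return "reject"
    # Rule 2: penalizes employment gaps -- a facially neutral rule that
    # disproportionately affects caregivers and people with disabilities
    if applicant["employment_gap_months"] > 6:
        return "reject"
    # Rule 3: everyone surviving the cutoffs goes to human review
    return "refer_to_human"

print(screen_applicant({"years_experience": 5, "employment_gap_months": 0}))
# refer_to_human
```

Because the logic is explicit, an auditor can read each rule and ask who it excludes; that is both the transparency advantage and the reason audits matter.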
Real-world example: The Netherlands' System Risk Indication (SyRI) used rule-based algorithms to detect welfare fraud. In 2020, the District Court of The Hague ruled that SyRI violated the right to private life under the European Convention on Human Rights, in part because it was deployed disproportionately in low-income and minority neighborhoods, concentrating fraud investigations on already vulnerable communities.
Steps for future action:
- Implement transparent auditing of rules to identify potential discrimination
- Develop rules with diverse teams representing multiple perspectives
- Include specific rules that protect vulnerable populations
Acceptability
Rule-based systems potentially offer greater transparency than AI systems, as their decision logic can be explicitly explained.
Real-world example: Australia's "Robodebt" scheme used rule-based income averaging to identify supposed welfare overpayments. The system's inability to handle fluctuating incomes led to wrongful debt notices for hundreds of thousands of citizens, provoking massive public backlash and making the scheme unacceptable to affected communities.
Steps for future action:
- Create simple explanations of each rule for applicants
- Establish mechanisms for human appeals of automated decisions
- Involve stakeholder representatives in rule development
Feasibility
Rule-based systems are technically straightforward but struggle with complex human situations that don't fit neatly into binary logic.
Real-world example: The UK's Universal Credit benefits system uses rule-based calculations that have repeatedly failed to handle complex life situations like variable working hours and changing family circumstances.
Steps for future action:
- Develop more nuanced rules that can handle exceptions
- Implement regular rule reviews to address emerging challenges
- Establish clear human oversight protocols
Ethics
The ethics of rule-based systems depend on both the values encoded in the rules and the transparency of their operation.
Real-world example: The Illinois DCFS used a rule-based system to flag potential child abuse cases that resulted in many false positives, creating significant ethical concerns about family privacy and government intervention.
Steps for future action:
- Develop ethical guidelines specific to hiring/admission contexts
- Create explainability requirements so applicants understand decisions
- Establish regular ethical audits of system outcomes
Intervention 2: AI-Based Decision-Making Tool
Equity
AI systems can either reduce or amplify inequities depending on their training data and design.
Real-world example: Amazon scrapped an AI recruiting tool in 2018 after discovering it discriminated against women because it was trained on historical hiring data that reflected male dominance in the tech industry. The system downgraded resumes containing terms like "women's" and graduates from women's colleges.
Steps for future action:
- Implement rigorous bias testing across different demographic groups
- Use balanced training datasets that represent diverse populations
- Add equity-focused constraints to the AI optimization process
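The first step above can be sketched as a simple selection-rate audit. This is a hedged illustration using the "four-fifths rule" heuristic on hypothetical data; a real audit needs larger samples, statistical testing, and legal review.

```python
# Sketch of a bias test: compare selection rates across demographic
# groups using the four-fifths-rule heuristic (hypothetical data).

from collections import defaultdict

def selection_rates(decisions):
    """decisions: list of (group, selected_bool) pairs."""
    counts = defaultdict(lambda: [0, 0])  # group -> [selected, total]
    for group, selected in decisions:
        counts[group][0] += int(selected)
        counts[group][1] += 1
    return {g: sel / total for g, (sel, total) in counts.items()}

def passes_four_fifths(rates):
    """Flag adverse impact if any group's rate is below 80% of the top rate."""
    top = max(rates.values())
    return all(rate >= 0.8 * top for rate in rates.values())

decisions = ([("A", True)] * 50 + [("A", False)] * 50
             + [("B", True)] * 30 + [("B", False)] * 70)
rates = selection_rates(decisions)  # {"A": 0.5, "B": 0.3}
print(passes_four_fifths(rates))    # False: 0.3 < 0.8 * 0.5
```

Running a check like this on every model version, before and after deployment, is what "rigorous bias testing" means in practice.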
Acceptability
AI systems often lack transparency, making their decisions difficult for affected individuals to understand and accept.
Real-world example: HireVue's AI-powered video interview analysis faced significant backlash from job seekers and privacy advocates for analyzing candidates' facial expressions and speech patterns with little transparency about how these factors influenced decisions.
Steps for future action:
- Develop explainable AI approaches that clarify decision factors
- Ensure human oversight of all rejection decisions
- Create appeals processes with human reviewers
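One explainable approach the steps above point toward is a transparent linear score, where each factor's contribution can be reported back to the applicant. The weights and feature names below are hypothetical, shown only to illustrate the idea.

```python
# Sketch of a transparent scoring model: each factor's contribution
# to the final score is computable and reportable to the applicant.
# Weights and features are hypothetical.

WEIGHTS = {"skills_match": 0.5, "experience": 0.3, "assessment": 0.2}

def score_with_explanation(features: dict):
    """Return the total score and a per-factor breakdown."""
    contributions = {k: WEIGHTS[k] * features[k] for k in WEIGHTS}
    return sum(contributions.values()), contributions

total, why = score_with_explanation(
    {"skills_match": 0.8, "experience": 0.6, "assessment": 0.9}
)
print(round(total, 2))  # 0.76
for factor, contrib in sorted(why.items(), key=lambda kv: -kv[1]):
    print(f"{factor}: {contrib:.2f}")
```

A breakdown like this gives a human reviewer and an appeals process something concrete to examine, which opaque end-to-end models do not.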
Cost
AI systems typically have high development costs but can scale efficiently once implemented.
Real-world example: The recruitment AI startup Pymetrics required over $50 million in venture funding to develop its AI assessment platform, showing the significant upfront investment needed for sophisticated AI tools.
Steps for future action:
- Conduct cost-benefit analysis comparing AI to human screening
- Consider open-source AI platforms to reduce development costs
- Implement gradual deployment to manage financial risks
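The cost-benefit step above can start with a back-of-envelope break-even calculation: all figures below are hypothetical placeholders, not estimates for any real product.

```python
# Break-even sketch: how many applicants before an AI tool's fixed
# cost pays off versus per-applicant human screening? (All figures
# hypothetical.)

def breakeven_applicants(ai_fixed_cost, ai_per_applicant, human_per_applicant):
    saving = human_per_applicant - ai_per_applicant
    if saving <= 0:
        return None  # AI never breaks even on a per-applicant basis
    return ai_fixed_cost / saving

print(breakeven_applicants(500_000, 1.0, 26.0))  # 20000.0
```

Framing the decision this way makes the "high fixed cost, low marginal cost" trade-off explicit: the tool only makes financial sense at sufficient applicant volume.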
Innovation
AI-based systems represent a more recent innovation in recruitment technology, offering potential advances in pattern recognition.
Real-world example: Unilever implemented an AI screening system that reportedly increased diversity in their hiring pool by 16% while reducing recruitment time by 75%.
Steps for future action:
- Combine AI with human decision-making for hybrid approaches
- Implement continuous learning mechanisms to improve system performance
- Create industry standards for responsible AI recruiting innovations
Ethics
AI screening tools raise profound ethical questions about automated judgment of human potential.
Real-world example: In 2019, AI Now Institute reported numerous cases where AI hiring systems disadvantaged applicants with disabilities by penalizing speech patterns, facial expressions, or interaction styles that deviated from neurotypical norms.
Steps for future action:
- Establish clear liability frameworks for discriminatory outcomes
- Implement mandatory ethics reviews before deployment
- Create mechanisms for ongoing ethical oversight with diverse stakeholders
- Ensure systems comply with emerging AI regulations like the EU AI Act
Both interventions require careful implementation with strong governance frameworks to ensure they promote rather than undermine diversity, access, and inclusion in selection processes.