Research article
DOI: 10.1145/3630106.3658938

The four-fifths rule is not disparate impact: A woeful tale of epistemic trespassing in algorithmic fairness

Published: 05 June 2024

Abstract

Computer scientists are trained in the art of creating abstractions that simplify and generalize. However, a premature abstraction that omits crucial contextual details creates the risk of epistemic trespassing, by falsely asserting its relevance into other contexts. We study how the field of responsible AI has created an imperfect synecdoche by abstracting the four-fifths rule (a.k.a. the 4/5 rule or 80% rule), a single part of disparate impact discrimination law, into the disparate impact metric. This metric incorrectly introduces a new deontic nuance and new potentials for ethical harms that were absent in the original 4/5 rule. We also survey how the field has amplified the potential for harm in codifying the 4/5 rule into popular AI fairness software toolkits. The harmful erasure of legal nuances is a wake-up call for computer scientists to self-critically re-evaluate the abstractions they create and use, particularly in the interdisciplinary field of AI ethics.
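
For readers unfamiliar with the metric at issue, the sketch below illustrates how the 4/5 rule is commonly operationalized in fairness toolkits as a "disparate impact" ratio: the selection rate of an unprivileged group divided by that of a privileged group, flagged when the ratio falls below 0.8. This is an illustrative assumption about typical toolkit behavior, not code from the paper or a reproduction of any particular library's API; the function names, data, and groupings are hypothetical.

```python
# A minimal, self-contained sketch of the "disparate impact" ratio as it is
# commonly operationalized in AI fairness toolkits: the ratio of selection
# rates between an unprivileged and a privileged group, flagged when it
# falls below the 4/5 (0.8) threshold. All names and data are illustrative;
# this is not the paper's code nor any specific toolkit's implementation.

def selection_rate(decisions):
    """Fraction of positive outcomes (e.g., 'selected' or 'hired') in a group."""
    return sum(decisions) / len(decisions)

def disparate_impact_ratio(unprivileged, privileged):
    """Selection-rate ratio of the unprivileged group to the privileged group."""
    return selection_rate(unprivileged) / selection_rate(privileged)

# Hypothetical hiring decisions: 1 = selected, 0 = not selected.
privileged_group   = [1, 0, 1, 1, 0, 1, 0, 1, 1, 1]  # selection rate 0.7
unprivileged_group = [1, 0, 0, 1, 0, 0, 1, 0, 0, 1]  # selection rate 0.4

ratio = disparate_impact_ratio(unprivileged_group, privileged_group)
print(f"Selection-rate ratio: {ratio:.2f}")           # 0.57
print(f"Below four-fifths threshold: {ratio < 0.8}")  # True
```

The paper's argument, per the abstract, is that treating this single ratio check as equivalent to disparate impact law erases the legal nuance surrounding the original 4/5 rule.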


      Published In

      FAccT '24: Proceedings of the 2024 ACM Conference on Fairness, Accountability, and Transparency
      June 2024
      2580 pages
      ISBN:9798400704505
      DOI:10.1145/3630106

      Publisher

      Association for Computing Machinery

      New York, NY, United States

      Author Tags

      1. AI ethics
      2. bias
      3. civil rights
      4. discrimination law
      5. disparate impact
      6. employment
      7. fairness
      8. metrics
      9. optimization

      Qualifiers

      • Research-article
      • Research
      • Refereed limited

      Conference

      FAccT '24
