Allison Koenecke

From Wikipedia, the free encyclopedia
Thesis: Fairness in algorithmic services (2021)
Doctoral advisor: Susan Athey

Allison Koenecke is an American computer scientist and an assistant professor in the Department of Information Science at Cornell University.[1] Her research considers computational social science and algorithmic fairness. In 2022, Koenecke was named one of the Forbes 30 Under 30 in Science.

Early life and education

As a high school student, Koenecke took part in a mathematics competition at the Massachusetts Institute of Technology.[2] She was in the first cohort of participants in the Math Prize for Girls, and has continued to support the program throughout her career. Koenecke was an undergraduate student at the Massachusetts Institute of Technology, where she majored in mathematics with a minor in economics.[3] She worked in economic consulting for several years before deciding she wanted to do work that benefited society.[3]

Koenecke was a doctoral researcher in the Institute for Computational and Mathematical Engineering at Stanford University, where she was advised by the economist Susan Athey; her doctoral research focused on fairness in algorithmic systems.[4][5][6][7][8] Prior to joining Cornell, Koenecke was a postdoctoral researcher at Microsoft Research New England, where she focused on machine learning and statistics.[3] Her research interests also include causal inference in public health.[3]

Research and career

Koenecke joined Cornell University as an assistant professor in 2022.[9] She studies algorithmic fairness,[10] including racial disparities in voice recognition systems. She noticed that voice recognition was becoming increasingly widespread in society, and was aware of the work of Joy Buolamwini and Timnit Gebru on bias in facial recognition.[11] Koenecke tested the voice recognition software developed by Amazon, IBM, Google, Microsoft and Apple.[12] She showed that these systems exhibited considerable racial disparities and were more likely to misinterpret Black speakers.[12][13][14] While she could not precisely identify the causes of these disparities, she proposed that they arose from acoustic differences (in patterns of stress and intonation) between white speakers and speakers of African-American Vernacular English.[3][11] She argued that such studies are critical to improving these systems, emphasizing that equity must be part of the design of future technologies.[15]

Koenecke was named one of the Forbes 30 Under 30 in Science in 2022.[16]

Awards and honors

  • 2020 Ben Rolfs Memorial Award[17]
  • 2020 Berkeley EECS Rising Stars[18]
  • 2021 Stanford School of Engineering Justice, Equity, Diversity, and Inclusion (JEDI) Appreciation
  • 2022 Forbes 30 Under 30[16]

Selected publications

  • Allison Koenecke; Andrew Nam; Emily Lake; et al. (23 March 2020). "Racial disparities in automated speech recognition". Proceedings of the National Academy of Sciences of the United States of America. 117 (14): 7684–7689. Bibcode:2020PNAS..117.7684K. doi:10.1073/PNAS.1915768117. ISSN 0027-8424. PMID 32205437. Wikidata Q89589357.
  • Maximilian F Konig; Michael A Powell; Verena Staedtke; et al. (30 April 2020). "Preventing cytokine storm syndrome in COVID-19 using α-1 adrenergic receptor antagonists". Journal of Clinical Investigation. doi:10.1172/JCI139642. ISSN 0021-9738. PMC 7324164. PMID 32352407. Wikidata Q94466649.
  • Michael Powell; Allison Koenecke; James Brian Byrd; et al. (28 July 2021). "Ten Rules for Conducting Retrospective Pharmacoepidemiological Analyses: Example COVID-19 Study". Frontiers in Pharmacology. 12: 700776. doi:10.3389/FPHAR.2021.700776. ISSN 1663-9812. PMC 8357144. PMID 34393782. Wikidata Q111857738.

References

  1. ^ Koenecke's homepage
  2. ^ "Math enthusiasts take aim at STEM glass ceiling". MIT News | Massachusetts Institute of Technology. Retrieved 2022-11-30.
  3. ^ a b c d e "Allison Koenecke". Women in Data Science (WiDS). Retrieved 2022-11-30.
  4. ^ "Former Students". Susan Athey.
  5. ^ "Susan Athey awarded CME Group-MSRI Prize for innovative work in tech economics". The Stanford Daily. 2020-12-16. Retrieved 2022-11-30.
  6. ^ "Some essential reading and research on race and technology". VentureBeat. 2 June 2020.
  7. ^ Thompson, Clive. "Sorry, but 'I Missed the Meeting' Is No Longer an Excuse". Wired.
  8. ^ Thomson Reuters Foundation. "US prisons explore use of AI to analyze inmate phone calls". news.trust.org. Reuters.
  9. ^ "Cornell Bowers CIS welcomes 13 faculty members". Cornell Chronicle. Retrieved 2022-11-30.
  10. ^ Metz, Cade (24 November 2020). "Meet GPT-3. It Has Learned to Code (and Blog and Argue)". The New York Times.
  11. ^ a b Ravindran, Sandeep (September 2020). "QnAs with Sharad Goel and Allison Koenecke". Proceedings of the National Academy of Sciences. 117 (35): 20986–20987. doi:10.1073/pnas.2015356117. ISSN 0027-8424. PMC 7474661. PMID 32778579.
  12. ^ a b University, Stanford (2020-03-23). "Automated speech recognition less accurate for blacks". Stanford News. Retrieved 2022-11-30.
  13. ^ Lloreda, Claudia Lopez. "Speech Recognition Tech Is Yet Another Example of Bias". Scientific American. Retrieved 2022-11-30.
  14. ^ Metz, Cade (23 March 2020). "There Is a Racial Divide in Speech-Recognition Systems, Researchers Say". The New York Times.
  15. ^ "Voicing Erasure". www.onassis.org. Retrieved 17 December 2022.
  16. ^ a b "Science – Inventing the future from the atom up". Forbes. 2022-11-30.
  17. ^ "ICME Awards | Stanford Institute for Computational & Mathematical Engineering". icme.stanford.edu. Retrieved 2022-11-30.
  18. ^ "Rising Star 2020 Allison Koenecke | EECS at UC Berkeley". www2.eecs.berkeley.edu. Retrieved 2022-11-30.