A Stanford HAI student affinity group, led by Stanford PhD student Zachary Robertson, examines the ethical challenges faced by data workers and the companies that employ them. https://lnkd.in/gD7bpF3N
Stanford Institute for Human-Centered Artificial Intelligence (HAI)’s Post
-
Today, I completed my summer AI Ethics course at the University of California, Berkeley, and it has completely transformed my way of thinking. While many of my peers are diving into traditional data science, I chose to explore the ethical dimensions of AI. This experience has deepened my understanding of the impact technology has on society, and I'm excited to apply these insights as I continue my journey in data science. Looking forward to what's next.
-
This short essay examines AI’s algorithmic bias and its ethical implications.
Algorithmic Bias and Ethics in AI
bearnetai.com
-
If you're concerned about the ethical practices of high-tech companies, especially regarding AI and its impact on our digital lives, this program is a must-see! Don't miss out on the opportunity to learn more about democratic regulation in the tech industry. #AIethics #techregulation
This week we tackle a topic of great interest to many people: how to understand and deal with the ethical challenges of high tech, particularly AI. Our speaker, Mehran Sahami, is the James and Ellenor Chesebrough Professor and Tencent Chair of the Computer Science Department at Stanford University. Before joining the Stanford faculty in 2007, he was a Senior Research Scientist at Google. Mehran's research interests include computer science education, machine learning, and ethics. Professor Sahami and his colleagues at Stanford are focused on building technical expertise in government so that regulatory decisions remain fair, ethical, and democratic. This is a do-not-miss program! #Rotary #RotaryClub #AI #AIethics https://lnkd.in/gFPEWsYG
Confronting Ethical Challenges in a High-Tech World — Rotary eClub of Silicon Valley
siliconvalleyrotary.com
-
"...or when developers get carried away with doing something “cool” with AI, and shifting away from the core purpose of problem-solving for end users" ...or when taxpayer-funded professors get carried away with doing something "cool" with AI and shift away from the core purpose of research and education.
Bringing together design anthropology and software engineering! Published today 💥 "Trust, artificial intelligence and software practitioners: an interdisciplinary agenda" 💥 - a new interdisciplinary article in AI & Society, co-authored with my excellent colleagues; it was so interesting to bring our approaches together to explore new questions about trust - authors: Sarah Pink, Emma Quilty, John Grundy, Rashina Hoda! Emerging Technologies Research Lab, Monash University HumaniSE Lab Monash Information Technology
Trust, artificial intelligence and software practitioners: an interdisciplinary agenda - AI & SOCIETY
link.springer.com
-
// The Only Possible Objects of Trust Are Either the Institutions or People Responsible for Technology //

An interesting design anthropological analysis of how trust and trustworthiness concepts are articulated and performed by AI software practitioners. I especially appreciated the literature review describing the differences in how trust is perceived across fields.

"while agendas for trustworthy AI superficially engage shared terminology, they in fact frequently represent diverse and contested understandings of what trust, trustworthiness and ethics entail, and different alignments with the relations of power and capital."

"these suggest that trust between humans and machines is impossible and therefore argue that the only possible objects of trust are either the institutions or people responsible for technology."

"instead, design anthropology turns around the relationship between trust and those other agents that might be related to it. It begins with the premise that if we understand trust as an underpinning element (and experiential possibility) of the circumstances of human life, we can then ask how AI and other technologies and things might become part of those circumstances of trust."

"we must connect the way that trustworthy AI development is taught to software engineers more closely to the ways that trustworthy AI is experienced, expected and anticipated by those who use it in everyday life."

Data & Society Research Institute Sareeta Amrute Cc. Amanda Andres Amy Ko
-
Long before all of this started, I had the thought that we could not compete with the "machines." Imagine a program or LLM that has access to all available information on certain topics, can compose work according to the rules of the best textbooks, knows all the statistical methods and rules, and has perfect command of language. Such a model could generate the most comprehensive reviews and analyses in seconds. How can my knowledge, or anyone's, compete with that? And this will happen, sooner or later. Given all the troubles that already exist in the scientific community, will this change be a death sentence for science, or a new beginning for a better science we are not yet aware of? https://lnkd.in/dq57dSKi
Has your paper been used to train an AI model? Almost certainly
nature.com
-
'I've always felt that law has a special role to play in making sure that we can use technology for good.' Discover the research of Sandra Wachter, Professor of Technology and Regulation at the Oxford Internet Institute, University of Oxford - including her key areas of focus around #AI and her predictions for the future of her field ⬇️
'Your scientists were so preoccupied with whether or not they could, they didn’t stop to think if they should' - this famous line from the '90s classic Jurassic Park aptly summarises the key question that drives the research of Prof Sandra Wachter. Sandra Wachter is Professor of Technology and Regulation at the Oxford Internet Institute, University of Oxford, where she leads the Governance of Emerging Technologies Research Group. Part of Prof Wachter's work has been to help develop ethical auditing methods for AI to combat bias and discrimination. Find out more about her career to date and her thoughts on how technology can be used as a force for good, without infringing on human rights ⬇️ https://lnkd.in/gsFm_fyn #OxfordAI
Prof Sandra Wachter #OxfordAI
-
At Data School, we do not want to merely study the digital society; we want to take part in shaping it. The societal impact of our work is an important result of our approach to research. But there is also an epistemic angle to it: when working with societal partners, we have the opportunity to study AI and data practices up close. We are where technological change manifests. [Iris Muis and I refused to have our pictures taken for the interview; this team photo is a compromise. It does not show former members, our university colleagues, or our external partners. What we do is #teamscience, it's inter- and even transdisciplinary, and it rocks!] https://lnkd.in/etBNuBgs
'We want to take part in shaping the digital society'
uu.nl
-
Excited to share my latest research paper! It is a condensed version of my M.A. thesis and the final research output of my work as a Fulbright and DAAD scholar in the U.S. and the U.K. It proposes a (geo)political risk taxonomy of AI, identifying 12 risks distributed across 4 categories: (1) Geopolitical Pressures, (2) Malicious Usage, (3) Environmental, Social, and Ethical Risks, and (4) Privacy and Trust Violations. On the regulatory side, the paper conducts a policy assessment of the EU AI Act. The landmark regulation can have a positive top-down impact on AI risk reduction, but it needs regulatory adjustments to mitigate risks more comprehensively. Regulatory exceptions for open-source models, excessively high parameters for classifying GPAI models as a systemic risk, and the exclusion of systems designed exclusively for military purposes from the regulation’s obligations leave room for future action.
Taxonomy to Regulation: A (Geo)Political Taxonomy for AI Risks and Regulatory Measures in the EU AI Act
arxiv.org
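For readers who want to work with such a taxonomy programmatically, the four category names from the post can be captured as a simple mapping. This is only a hypothetical sketch: the post does not enumerate the 12 individual risks, so the risk labels below are illustrative placeholders, not the paper's actual list.

```python
# Hypothetical sketch of a (geo)political AI risk taxonomy as a data structure.
# The four category names come from the post; the individual risk labels are
# placeholders standing in for the paper's 12 risks (3 per category here).
TAXONOMY = {
    "Geopolitical Pressures": [
        "placeholder_risk_1", "placeholder_risk_2", "placeholder_risk_3",
    ],
    "Malicious Usage": [
        "placeholder_risk_4", "placeholder_risk_5", "placeholder_risk_6",
    ],
    "Environmental, Social, and Ethical Risks": [
        "placeholder_risk_7", "placeholder_risk_8", "placeholder_risk_9",
    ],
    "Privacy and Trust Violations": [
        "placeholder_risk_10", "placeholder_risk_11", "placeholder_risk_12",
    ],
}

def risks_per_category(taxonomy: dict) -> dict:
    """Count how many risks fall under each category."""
    return {category: len(risks) for category, risks in taxonomy.items()}

counts = risks_per_category(TAXONOMY)
total = sum(counts.values())  # 12 risks across 4 categories
```

A structure like this makes it easy to check coverage (e.g., that every identified risk is assigned to exactly one category) when mapping risks against regulatory provisions.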