
Artificial intelligence (AI) and machine learning algorithms operate in societies shaped by profound forms of social, material, and political inequality. AI algorithms both exacerbate existing structural inequalities and create new ones. A range of concerns has become established in response, including bias, fairness, accountability, explainability, and responsibility. I am particularly interested in how AI affects marginalized communities, including people of color, precarious workers, women, and trans, non-binary, Indigenous, poor, and disabled people, all of whom are particularly at risk of being adversely affected by AI, whether through increased surveillance, invasions of privacy, misrecognition by facial recognition software, or exclusion from data sets.
I lead the ERC Starting Grant project PARTIALJUSTICE (2025-2030), which develops the novel approach of participatory algorithmic justice. Participatory algorithmic justice defines a concept and standards of practice for collaborative research into not only who and what AI harms, but also how those harms should be redressed. It draws on ethnography, participatory design, and graphic art to bring the voices, priorities, and concerns of those affected by AI to the forefront of debates over what kinds of AI people want to live with.
I also lead the German team on the ADJUST project (2025-2028), together with Jay Shaw (Canada) and Sharifa Sekalala (UK), which develops and applies the idea of health data justice. The project addresses urgent concerns about access to safe, high-quality health and social care for marginalized groups in the context of AI and other data-intensive health technologies.
I co-founded and co-lead the Munich Embedded Ethics and Social Sciences Hub (MESH) at the Technical University of Munich, where I helped develop the “embedded ethics” methodology for studying the social, ethical, and legal dimensions of AI development processes. This work has included partnerships with robotics developers, STS scholars, and bioethicists. I have a particular interest in the use of AI in mental healthcare.
FIRST- AND SENIOR-AUTHOR ARTICLES
- Climate Change and Health: The Next Challenge of Ethical AI
- Building Health Systems Capable of Leveraging AI: Applying Paul Farmer’s 5S Framework for Equitable Global Health
- Weighing the benefits and risks of collecting race and ethnicity data in clinical settings for medical artificial intelligence
- What the embedded ethics approach brings to AI-enhanced neuroscience
- Staying Curious with Conversational AI in Psychotherapy
- Responding to Uncertainty in the COVID-19 Pandemic: Perspectives from Bavaria, Germany
- Diversity in German-speaking medical ethics and humanities
- Value-creation in the health data domain: A typology of what health data help us do
- “Like I’m Talking to a Real Person”: Exploring the Meaning of Transference for the Use and Design of AI-based Applications in Psychotherapy
- Embedded ethics could help implement the pipeline model framework for machine learning healthcare applications
- The implications of embodied artificial intelligence in mental healthcare for digital wellbeing
- Your robot therapist will see you now: Ethical implications of embodied artificial intelligence in psychiatry, psychology, and psychotherapy
CO-AUTHORED ARTICLES
- Global Health in the Age of AI: Charting a Course for Ethical Implementation and Societal Benefit
- Diversity in the medical research ecosystem: a descriptive scientometric analysis of over 49,000 studies and 150,000 authors published in high-impact medical journals between 2007 and 2022
- Introducing Team Cards: Enhancing Governance for AI Models and Data in the Age of Complexity
- Cross-Care: Assessing the Healthcare Implications of Pre-training Data on Language Model Bias
- The Urgent Need for Health Data Justice in Precision Medicine
- Defining the Social License of Large Language Models in Healthcare
- Embedded Ethics in Practice: A Toolbox for Integrating the Analysis of Ethical and Social Issues into Healthcare AI Research
- Diversity and inclusion: A hidden additional benefit of Open Data
- Competing Interests: Digital Health and Indigenous Data Sovereignty
- Embedded Ethics and the “Soft Impacts” of Technology
- Peer review of GPT-4 Technical Report & Systems Card
- Toward best practices in embedded ethics: Suggestions for interdisciplinary technology development
- Embedded ethics: a proposal for integrating ethics into the development of medical AI
- Psychotherapie mit einer autonomen künstlichen Intelligenz. Ethische Potentiale und Herausforderungen [Psychotherapy with an autonomous artificial intelligence: Ethical potentials and challenges]
- AI ethics is not a panacea
- An embedded ethics approach for AI development
- Building a house without foundations? A 24-country qualitative interview study on artificial intelligence in intensive care medicine
- A scientometric analysis of fairness in health AI literature: Who does the fairness in health AI community represent?
Please get in touch if you would like a PDF copy of any article.