Research

Below are some of my current scholarly interests and projects. Versions of papers, datasets, pre-analysis plans, and reproducible code are made available when possible. For further requests, please contact me.

My research has been covered by outlets such as VentureBeat, the Montreal AI Ethics Institute, and Forbes. For additional examples of my work, see my CV.

Governance and Ethics of Autonomous and Intelligent Systems

More than 100 public sector, private sector, and non-governmental organizations have published normative AI ethics documents (e.g., codes, frameworks, guidelines, and policy strategies) in recent years. Our ongoing empirical study assesses these documents through coding and quantitative and qualitative analysis of 25 ethics topics and 17 policy sectors, resulting in an original open-source data set (The AI Ethics Global Document Collection) and an analysis of cross-sectoral differences in the prioritization and framing of AI ethics topics. Compared to documents from private entities, NGO and public sector documents reflect more ethical breadth in the number of topics covered, engage more with law and regulation, and are generated through more participatory processes. These findings may reveal differences in underlying beliefs about an organization’s responsibilities, the relative importance of relying on experts versus including representatives of the public, and the tension between prosocial and economic goals.
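For a rough illustration of the cross-sectoral comparison described above, the sketch below computes mean topic breadth per document by sector with pandas. The file name and column names (sector, document_id, topic) are illustrative assumptions, not the published schema of the document collection.

```python
# Hypothetical sketch of the cross-sectoral "ethical breadth" comparison.
# File and column names are assumptions, not the actual dataset schema.
import pandas as pd

# One row per (document, ethics topic) code assigned during content analysis
codes = pd.read_csv("ai_ethics_document_codes.csv")

# Ethical breadth: number of distinct topics covered per document, averaged by sector
breadth = (
    codes.groupby(["sector", "document_id"])["topic"]
    .nunique()                      # topics covered by each document
    .groupby("sector")
    .mean()                         # average breadth within each sector
    .rename("mean_topics_per_document")
)
print(breadth.sort_values(ascending=False))
```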

Schiff, D., Biddle, J., Borenstein, J., & Laas, K. (2020). What’s Next for AI Ethics, Policy, and Governance? A Global Overview. Proceedings of the AAAI/ACM Conference on AI, Ethics, and Society, 153–158. https://doi.org/10.1145/3375627.3375804.

Schiff, D., Borenstein, J., Laas, K., & Biddle, J. (2021). AI Ethics in the Public, Private, and NGO Sectors: A Review of a Global Document Collection. IEEE Transactions on Technology and Society, 2(1), 31-42. https://doi.org/10.1109/TTS.2021.3052127. (accepted version) (appendix)

Schiff, D., Laas, K., Biddle, J., & Borenstein, J. (forthcoming 2021). Global AI Ethics Documents: What They Reveal About Motivations, Practices, and Policies. In K. Laas, M. Davis, & E. Hildt (Eds.), Codes of Ethics and Ethical Guidelines, Emerging Technologies, Changing Fields. Springer.

This research has been featured by the Montreal AI Ethics Institute and in the European Parliament framework on ethical aspects of artificial intelligence, robotics and related technologies.


This article, which resulted from my service as Secretary for the IEEE 7010 standard, reviews the first international industry standard focused on the social and ethical implications of AI: the Institute of Electrical and Electronics Engineers’ (IEEE) Standard (Std) 7010-2020 Recommended Practice for Assessing the Impact of Autonomous and Intelligent Systems on Human Well-being. Incorporating well-being factors throughout the lifecycle of AI is both challenging and urgent, and IEEE 7010 aims to provide guidance for those who design, deploy, and procure these technologies.

Schiff, D., Ayesh, A., Musikanski, L., & Havens, J. C. (2020). IEEE 7010: A New Standard for Assessing the Well-being Implications of Artificial Intelligence. 2020 IEEE International Conference on Systems, Man, and Cybernetics (SMC), 2746–2753. https://doi.org/10.1109/SMC42975.2020.9283454 (pre-print version).


In this paper, we examine the gap between high-level principles for responsible uses of AI and the translation of those principles into effective practices. We review six potential explanations for the gap: tensions related to organizational incentives and values, the need to make sense of the complexity of AI’s impacts, disciplinary divides in understanding problems and solutions, the distribution of accountability and functional separation within organizations, the need for holistic management of knowledge processes, and a lack of clarity and guidance around tool usage. We argue that stakeholders interested in realizing AI’s potential for good should advance research on the principles-to-practices gap and attend to these issues when proposing solutions and best practices.

Schiff, D., Rakova, B., Ayesh, A., Fanti, A., & Lennon, M. (2021). Explaining the Principles to Practices Gap in AI. IEEE Technology and Society Magazine, 40(2), 81–94. https://doi.org/10.1109/MTS.2021.3056286 (conference version).

This research has been featured by the Montreal AI Ethics Institute and by VentureBeat.

Subcoalition Cluster Analysis

The theory that firms are coalitions of competing interests is a cornerstone of multiple streams of research in management theory, many of which are as old as the field itself. However, relatively few quantitative studies examine the dynamics of coalition-building, conflict, and compromise inside organizations. One important reason for this gap between theory development and empirical study is a lack of quantitative methods for identifying the fault lines that define intra-organizational conflict. In our paper, we propose Subcoalition Cluster Analysis (sCCA) as one such method. sCCA uses data from a set of agents with preferences over pairs of alternatives to reveal a partition of agents that defines meaningful, representative, and stable subcoalitions. The partition corresponds to an equilibrium in which each subcoalition’s preferences represent those of its members and each member is assigned to the subcoalition with whose preferences they best align. We then analyze three cases in which an organization or community faced the possibility of coalition-based conflict to show how sCCA can be used to study the social structure of conflict systems in a variety of empirical settings.
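As a rough illustration of the alternating logic described above (and not the paper’s actual estimator), the sketch below treats each agent as a vector of +1/-1 preferences over pairs of alternatives, summarizes each subcoalition by the sign of its members’ mean preferences, and reassigns agents to the subcoalition they agree with most often until the partition stabilizes.

```python
# Illustrative k-means-style sketch of a subcoalition partition; this is an
# assumption-laden toy, not the sCCA estimator from the paper.
import numpy as np

def subcoalition_partition(prefs, k, n_iter=100, seed=0):
    """prefs: (n_agents, n_pairs) array of +1/-1 pairwise preferences."""
    rng = np.random.default_rng(seed)
    n_agents, n_pairs = prefs.shape
    labels = rng.integers(k, size=n_agents)            # random initial assignment
    for _ in range(n_iter):
        centers = np.zeros((k, n_pairs))
        for j in range(k):
            members = prefs[labels == j]
            if len(members) > 0:
                # Subcoalition preference = majority sign of members' preferences
                centers[j] = np.sign(members.mean(axis=0))
        agreement = prefs @ centers.T                   # (n_agents, k) agreement scores
        new_labels = agreement.argmax(axis=1)           # reassign to best-matching group
        if np.array_equal(new_labels, labels):          # fixed point = stable partition
            break
        labels = new_labels
    return labels

# Example: six agents with two obvious factions over four pairwise comparisons
prefs = np.array([[ 1,  1,  1, -1],
                  [ 1,  1,  1, -1],
                  [ 1,  1, -1, -1],
                  [-1, -1, -1,  1],
                  [-1, -1, -1,  1],
                  [-1, -1,  1,  1]])
print(subcoalition_partition(prefs, k=2))
```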

Ganz, S. & Schiff, D. (Under revision). Subcoalition Cluster Analysis: A New Method for Measuring Political Conflict in Organizations. Working paper available at https://osf.io/preprints/socarxiv/5kufg/.

AI, Agenda-Setting, and Policy Process Theory

While scholars have increasingly recognized the role of persuasive narratives in agenda-setting, the provision of technical expertise is also known to be an important influence, especially in highly complex and technical policy domains. An unsettled question is which of these two strategies employed by policy entrepreneurs (providing narratives or providing expertise) is more influential in gaining the ear of policymakers. Based on a pre-registered analysis plan, this study uses a field experiment to compare the relative importance of these influence strategies in agenda-setting through the case of an emerging technical domain, artificial intelligence (AI) policy. In partnership with a leading AI organization, we contact more than 7,000 United States state-level policymakers to share one of several messages about AI policy. Legislators are block randomized into treatment groups and receive e-mail communications that emphasize one of the two influence strategies (either narrative- or expertise-focused messages), as well as one of two prominent issue frames about AI (emphasizing either economic and military competitiveness or ethical considerations). To measure policymaker engagement as an indicator of influence, we use novel measures of email engagement, including clicking on links to further resources, registering for a listserv, and registering for and attending a webinar. The results reveal the relative importance of reason (expertise) versus passion (narratives) in highly technical policy domains and advance efforts to bridge scholarship on narratives with work on policy entrepreneurship and its influence on agenda-setting.
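As an illustration of the assignment logic only, the sketch below block-randomizes legislators into the four cells of the 2x2 design (influence strategy by issue frame). The block variable and treatment labels are hypothetical placeholders rather than the study’s actual coding.

```python
# Hypothetical sketch of block-randomized assignment to a 2x2 factorial design.
# Field names and labels are assumptions for illustration only.
import random

STRATEGIES = ["narrative", "expertise"]
FRAMES = ["competitiveness", "ethics"]

def assign_treatments(legislators, block_key, seed=42):
    """legislators: list of dicts; block_key: field to block on (e.g., 'state')."""
    rng = random.Random(seed)
    arms = [(s, f) for s in STRATEGIES for f in FRAMES]   # four treatment cells
    blocks = {}
    for leg in legislators:
        blocks.setdefault(leg[block_key], []).append(leg)
    for members in blocks.values():
        rng.shuffle(members)
        for i, leg in enumerate(members):                  # balance arms within each block
            leg["strategy"], leg["frame"] = arms[i % len(arms)]
    return legislators

# Example usage with toy data
sample = [{"name": "A", "state": "GA"}, {"name": "B", "state": "GA"},
          {"name": "C", "state": "OH"}, {"name": "D", "state": "OH"}]
print(assign_treatments(sample, block_key="state"))
```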

Schiff, D., & Schiff, K. J. (Under development). Reason and Passion in Agenda-Setting: Influence Dynamics in AI Policy. Pre-analysis plan available at https://osf.io/cfb9u/.

Public and Elite Attitudes Towards AI in Government

In the context of rising delegation of administrative discretion to advanced technologies, this study aims to quantitatively assess key public values that may be at risk when governments employ automated decision systems (ADS). Drawing on the public value failure framework coupled with experimental methodology, we address the need to measure and compare the salience of three such values: fairness, transparency, and human responsiveness. Based on a preregistered design, we administer a survey experiment, inspired by prominent ADS applications in child welfare and criminal justice, to 1,460 American adults. The results provide clear causal evidence that certain public value failures associated with artificial intelligence have significant negative impacts on citizens’ evaluations of government. We find substantial negative citizen reactions when fairness and transparency are not realized in the implementation of ADS. These results transcend both policy context and political ideology and persist even when respondents are not themselves personally impacted.

Schiff, D. S., Schiff, K. J., & Pierson, P. (2021). Assessing public value failure in government adoption of artificial intelligence. Public Administration. https://doi.org/10.1111/padm.12742. Open access version available at https://onlinelibrary.wiley.com/share/author/2DPJCNNUBCXPKVS9HRVB?target=10.1111/padm.12742.


Designing effective and inclusive governance strategies for artificial intelligence (AI) and communicating about AI to the public requires understanding how and why stakeholders reason about its potential benefits and harms. We examine the underlying factors and mechanisms that drive attitudes towards the use and governance of AI across six policy-relevant applications, using structural equation modeling and a survey of the U.S. public (N=3,500) and computer science master’s students (N=425). We find that the cultural values of individualism, egalitarianism, general risk aversion, and techno-skepticism are important drivers of AI attitudes, though their effects differ across some applications of AI. Perceived self- and societal benefit from AI drives support for its use, but not its governance. Experts hold substantially more nuanced views and are more supportive of AI use but not its regulation. Our results suggest the importance of, and potential strategies for, communicating the broader impacts of AI to the public, and indicate that while voluntary standards and soft-law approaches are more likely to find support among experts and the public, ambivalence towards AI regulation may belie strong support for narrowly targeted regulatory action for certain applications.
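As a conceptual sketch only, the snippet below specifies a simplified latent-variable model in the spirit of this analysis, using lavaan-style syntax with the semopy package. The variable names, file name, and model structure are hypothetical and do not reproduce the study’s survey items or estimated model.

```python
# Simplified, hypothetical structural equation model: cultural values predict
# perceived benefit and support for AI use/governance. Not the study's model.
import pandas as pd
import semopy

MODEL_DESC = """
individualism =~ indiv1 + indiv2 + indiv3
egalitarianism =~ egal1 + egal2 + egal3
perceived_benefit ~ individualism + egalitarianism + risk_aversion
support_use ~ perceived_benefit + individualism + egalitarianism
support_governance ~ individualism + egalitarianism + risk_aversion
"""

df = pd.read_csv("survey_responses.csv")   # hypothetical survey file
model = semopy.Model(MODEL_DESC)
model.fit(df)
print(model.inspect())                     # parameter estimates
```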

O’Shaughnessy, M., Schiff, D. S., Varshney, L. R., Rozell, C. J., & Davenport, M. A. (Working paper). What governs public opinion on AI use and governance? Pre-analysis plan available at https://osf.io/fh5mv/.

Political Communication, Deepfakes, and Misinformation

Scholars have argued that concerns surrounding the impact of misinformation may be overstated. Nevertheless, some politicians’ actions suggest that they perceive a benefit from an informational environment saturated with misinformation (i.e., fake news and deepfakes). To explain this behavior, we argue that strategic and false allegations of misinformation benefit politicians by allowing them to maintain support in the face of information that could be damaging to their reputation, a phenomenon known as the “liar’s dividend.” We propose that the payoffs from the liar’s dividend work through two theoretical channels: by injecting informational uncertainty into the media environment, which upwardly biases evaluations of the politician’s type, or by providing rhetorical cover that supports motivated reasoning among core supporters. To evaluate these potential impacts, we use a survey experiment in which American citizens are randomly assigned vignette treatments detailing embarrassing or scandalous information about American politicians. Our study design, treatments, outcomes, covariates, estimands, and analysis strategy are described in more detail in our pre-registered analysis plan.

Schiff, K., Schiff, D., and Bueno, N. (Under review). The Liar’s Dividend: The Impact of Deepfakes and Fake News on Politician Support and Trust in Media. Pre-analysis plan available at http://egap.org/registration/6435.

The Impact of Automation on Worker Well-being

Discourse surrounding the future of work often treats technological substitution of workers as a cause for concern, but complementarity as a good. However, while automation and artificial intelligence may improve productivity or wages for those who remain employed, they may also have mixed or negative impacts on worker well-being. This study considers five hypothetical channels through which automation may impact worker well-being: influencing worker freedom, sense of meaning, cognitive load, external monitoring, and insecurity. We apply a measure of automation risk to a set of 402 occupations to assess whether automation predicts impacts on worker well-being along the dimensions of job satisfaction, stress, health, and insecurity. Findings based on a 2002-2018 dataset from the General Social Survey reveal that workers facing automation risk appear to experience less stress but also worse health, along with minimal or negative impacts on job satisfaction. These impacts are more concentrated among workers facing the highest levels of automation risk. This article encourages new research directions by revealing important heterogeneous effects of technological complementarity. We recommend that firms, policymakers, and researchers not conceive of technological complementarity as a uniform good and instead direct more attention to the mixed well-being impacts of automation and artificial intelligence on workers.
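As an illustration of the general approach, the sketch below merges a hypothetical occupation-level automation risk measure into worker-level survey data and regresses a well-being outcome on it. The file names, variable names, and controls are assumptions, not the replication code.

```python
# Hypothetical sketch: merge occupation-level automation risk into worker-level
# survey data and estimate an OLS model for one well-being outcome.
import pandas as pd
import statsmodels.formula.api as smf

gss = pd.read_csv("gss_2002_2018.csv")                     # worker-level survey data
risk = pd.read_csv("automation_risk_by_occupation.csv")    # one row per occupation

df = gss.merge(risk, on="occupation_code", how="inner")

# Does automation risk predict stress, net of survey year and basic controls?
model = smf.ols("stress ~ automation_risk + C(year) + age + education", data=df).fit()
print(model.summary())
```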

Nazareno, L., & Schiff, D. (2021, forthcoming). The Impact of Automation and Artificial Intelligence on Worker Well-being. Technology in Society. https://doi.org/10.1016/j.techsoc.2021.101679. Replication materials available at Harvard Dataverse.

This research has been featured by Forbes, Inc., ScienceDaily, Phys.org, Mirage, WorldHealth.net, and The Register.

Social Responsibility of Engineering and Computer Science Students

Developing social responsibility attitudes in future engineers and computer scientists is of critical and rising importance. Yet research shows that prosocial attitudes decline during undergraduate engineering education. We are studying a wide range of college and pre-college influences and inhibitors, guided by the Professional Social Responsibility Development Model. Our mixed-methods project, funded by NSF CCE STEM Grant No. 1158863, has resulted in a new survey instrument, the Generalized Professional Responsibility Assessment (GPRA), as well as other instruments, presentations, and publications.

Schiff, D. S., Logevall, E., Borenstein, J., Newstetter, W., Potts, C., & Zegura, E. (2020). Linking personal and professional social responsibility development to microethics and macroethics: Observations from early undergraduate education. Journal of Engineering Education, 110(1), 70-91. https://doi.org/10.1002/jee.20371.

Schiff, D. S., Lee, J., Borenstein, J., & Zegura, E. (Under review). The impact of community engagement on undergraduate social responsibility attitudes.

AI in Education & Healthcare

Like previous educational technologies, artificial intelligence in education (AIEd) threatens to disrupt the status quo, with proponents highlighting the potential for efficiency and democratization and skeptics warning of industrialization and alienation. However, unlike frequently discussed applications of AI in autonomous vehicles, military and cybersecurity settings, and healthcare, AI’s impacts on education policy and practice have not yet captured public attention. This paper therefore evaluates the status of AIEd, with special attention to intelligent tutoring systems and anthropomorphized artificial educational agents. I discuss AIEd’s purported capacities, including the abilities to simulate teachers, provide robust student differentiation, and even foster socioemotional engagement. Next, to situate developmental pathways for AIEd going forward, I contrast sociotechnical possibilities and risks through two idealized futures. Finally, I consider a recent proposal to use peer review as a gatekeeping strategy to prevent harmful research.

Schiff, D. (2021). Out of the Laboratory and Into the Classroom: The Future of Artificial Intelligence in Education. AI & Society, 36(1), 331–348. https://doi.org/10.1007/s00146-020-01033-8.


This article engages in thematic analysis of 24 national AI policy strategies, reviewing the role of education in global AI policy discourse. It finds that the use of AI in education (AIED) is largely absent from policy conversations, while the instrumental value of education in supporting an AI-ready workforce and training more AI experts is overwhelmingly prioritized. Further, the ethical implications of AIED receive scant attention despite the prominence of AI ethics discussion generally in these documents. This suggests that AIED and its broader policy and ethical implications—good or bad—have failed to reach mainstream awareness and the agendas of key decision-makers, a concern given that effective policy and careful consideration of ethics are inextricably linked, as this article argues. In light of these findings, the article applies a framework of five AI ethics principles to consider ways in which policymakers can better incorporate AIED’s implications. Finally, the article offers recommendations for AIED scholars on strategies for engagement with the policymaking process, and for performing ethics and policy-oriented AIED research to that end, in order to shape policy deliberations on behalf of the public good.

Schiff, D. (2021, forthcoming). Education for AI, not AI for Education: The Role of Education and Ethics in National AI Policy Strategies. International Journal of AI in Education. https://doi.org/10.1007/s40593-021-00270-2 (pre-print version). Open access version available at https://rdcu.be/cw5aY.


This commentary responds to a hypothetical case involving an assistive artificial intelligence (AI) surgical device and focuses on potential harms emerging from interactions between humans and AI systems. Informed consent and responsibility—specifically, how responsibility should be distributed among professionals, technology companies, and other stakeholders—for uses of AI in health care are discussed.

Schiff, D., & Borenstein, J. (2019). How Should Clinicians Communicate With Patients About the Roles of Artificially Intelligent Team Members? AMA Journal of Ethics, 21(2), 138–145. https://doi.org/10.1001/amajethics.2019.138.

This research has been featured by Becker’s Hospital Review.