Research

Below are some of my current research interests and projects.

For a more complete list of my work, see my CV.

AI Ethics, Policy, and Governance

Schiff, D., Borenstein, J., Laas, K., & Biddle, J. (Working paper). AI Ethics in the Public, Private, and NGO Sectors: A Review of a Global Document Collection.

More than 100 public sector, private sector, and non-governmental organizations have published normative AI ethics documents (i.e., codes, frameworks, guidelines, and policy strategies). Our ongoing empirical study addresses the ethics and policy issues in these emerging documents by coding approximately 25 ethics topics and 17 policy sectors, resulting in an original data set. This work was presented at the APPE 2020 conference and published at the AAAI/ACM Conference on AI, Ethics, and Society. We are developing a paper exploring sectoral differences as well as a book chapter.

Social Responsibility of Engineering Students

Schiff, D., Logevall, E., Borenstein, J., Newstetter, W., Potts, C., & Zegura, E. (Revise and resubmit). Linking personal and professional social responsibility development: Influences and inhibitors in early undergraduate education.

Developing social responsibility attitudes in future engineers and computer scientists is of critical and rising importance. Yet research shows that prosocial attitudes decline during undergraduate engineering education. We are studying a wide range of college and pre-college influences and inhibitors, guided by the Professional Social Responsibility Development Model. Our mixed-methods project has resulted in several presentations and one paper under review; another paper is in development.

AI in Government Services

Schiff, D., Schiff, K.J., & Pierson, P. (Working paper). Bringing Technology into Policy Design: How Bias, Transparency, and Lack of Human Agency Affect Public Support for Government Use of AI. Pre-analysis plan available at https://egap.org/registration/5819.

This study argues that technology is an indispensable component of the policy design toolkit. We explore the use of artificial intelligence (AI) in government services via automated decision systems (ADS) to discuss how technical characteristics of technology impart ethical features into policy design and impose burdens on target populations. Using a pre-registered survey experiment based on prominent ADS use cases in child welfare and criminal justice, we provide clear causal evidence that policy design burdens associated with AI have significant impacts on public opinion: feelings toward ADS, trust, and expectations of government service quality.

The Impact of Automation on Worker Well-being

Nazareno, L. & Schiff, D. (Working paper). The Impact of Automation and Artificial Intelligence on Worker Well-being.

Discourse surrounding the Fourth Industrial Revolution often treats technological substitution of workers as a cause for concern and complementarity as a benefit. However, while automation and artificial intelligence may improve efficiency or wages for those who remain employed, they may also have mixed or negative impacts on worker well-being. Increased uptake of automation in work environments may affect worker autonomy, cognitive load, socialization, job insecurity, and external monitoring, among other outcomes. This study considers several hypothetical channels through which automation may affect worker well-being. We combine two different automation risk measures with a set of occupation codes to assess whether automation risk predicts job satisfaction, stress, and health.

Subcoalition Cluster Analysis

Ganz, S. & Schiff, D. (Under review). Subcoalition Cluster Analysis: A New Method for Measuring Political Conflict in Organizations. Pre-print available at https://osf.io/preprints/socarxiv/5kufg/.

We introduce “subcoalition cluster analysis” (sCCA), a novel method for modeling politics in organizations that builds on the model of intra-organizational conflict in March (1962). The main contribution of sCCA is that it identifies subcoalitions with consistent preferences that are in conflict, without placing additional restrictions on the structure of individual preferences. In the paper, we first describe sCCA, emphasizing how it differs from prior clustering and preference aggregation routines. We then apply sCCA to two empirical contexts: Wikipedia and the Baseball Writers’ Association of America (BBWAA).
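As a loose intuition for what "subcoalitions with consistent preferences" means, the toy sketch below groups members whose preference profiles over a set of proposals agree exactly. This is only an illustration of the general notion; it is not the sCCA routine from the paper, and all names and preferences are invented.

```python
from collections import defaultdict

# Hypothetical preferences: each member's approval (+1) or opposition (-1)
# over three proposals. Invented data for illustration only.
preferences = {
    "alice": (+1, +1, -1),
    "bob":   (+1, +1, -1),
    "carol": (-1, -1, +1),
    "dan":   (-1, -1, +1),
    "erin":  (+1, -1, -1),
}

# Group members whose preference profiles agree exactly; the first two
# groups below have opposed profiles, i.e., subcoalitions in conflict.
subcoalitions = defaultdict(list)
for member, profile in preferences.items():
    subcoalitions[profile].append(member)

for profile, members in subcoalitions.items():
    print(profile, members)
```

The actual method is considerably more general, since it does not require exact agreement or impose structure on individual preferences.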

AI in Education

Schiff, D. (2020). Out of the Laboratory and Into the Classroom: The Future of AI in Education. AI & Society. https://doi.org/10.1007/s00146-020-01033-8.

Like previous educational technologies, artificial intelligence in education (AIEd) threatens to disrupt the status quo, with proponents highlighting the potential for efficiency and democratization, and skeptics warning of industrialization and alienation. However, unlike frequently discussed applications of AI in autonomous vehicles, military and cybersecurity settings, and healthcare, AI’s impacts on education policy and practice have not yet captured public attention. This paper therefore evaluates the status of AIEd, with special attention to intelligent tutoring systems and anthropomorphized artificial educational agents. I discuss AIEd’s purported capacities, including the abilities to simulate teachers, provide robust student differentiation, and even foster socioemotional engagement. Next, in order to situate developmental pathways for AIEd going forward, I contrast sociotechnical possibilities and risks through two idealized futures. Finally, I consider a recent proposal to use peer review as a gatekeeping strategy to prevent harmful research.

Deepfakes and Misinformation

Schiff, K., Schiff, D., & Bueno, N. (Under development). The Liar’s Dividend: The Impact of Deepfakes and Fake News on Politician Support and Trust in Media. Pre-analysis plan available at http://egap.org/registration/6435.

Scholars have argued that concerns surrounding the impact of misinformation may be overstated. Nevertheless, some politicians’ actions suggest that they perceive a benefit from an informational environment saturated with misinformation (i.e., fake news and deepfakes). To explain this behavior, we argue that strategic and false allegations of misinformation benefit politicians by allowing them to maintain support in the face of information that could damage their reputations, a phenomenon known as the “liar’s dividend.” We propose that the payoffs from the liar’s dividend work through two theoretical channels: by injecting informational uncertainty into the media environment that upwardly biases evaluations of the politician’s type, or by providing rhetorical cover that supports motivated reasoning by core supporters. To evaluate these potential impacts, we use a survey experiment that randomly assigns vignette treatments detailing embarrassing or scandalous information about American politicians to American citizens. Our study design, treatments, outcomes, covariates, estimands, and analysis strategy are described in more detail in our pre-registered analysis plan.
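The core of such a randomized vignette design is a difference-in-means estimate of the average treatment effect. The simulation below illustrates the logic under invented assumptions: the sample size, support scale, and effect sizes are all hypothetical and are not taken from the pre-analysis plan.

```python
import random
from statistics import mean

random.seed(0)

# Hypothetical respondents, each with a latent baseline level of support
# for the politician on a 0-100 scale (invented for illustration).
respondents = [random.gauss(50, 10) for _ in range(1000)]

# Random assignment: treatment sees the scandal vignette plus a "fake
# news" rebuttal from the politician; control sees the vignette alone.
treated, control = [], []
for baseline in respondents:
    if random.random() < 0.5:
        # Assumed effect: the misinformation claim partially shields support.
        treated.append(baseline - 5 + 3)   # scandal hit, partly offset
    else:
        control.append(baseline - 5)       # full scandal hit

# Difference-in-means estimate of the average treatment effect.
ate = mean(treated) - mean(control)
print(f"estimated liar's dividend: {ate:.2f} points")
```

Because assignment is random, the difference in means recovers the simulated +3-point shielding effect up to sampling noise; the real study estimates this quantity from respondents' reported support rather than simulated values.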