
2

Theoretical Foundations from Ethical and Social Science Frameworks

This chapter provides theoretical foundations for identifying the roots of ethical challenges and sources of problematic societal impacts in computing research, which are described in Chapter 3, and for the recommendations for addressing them, which are presented in Chapter 4. It describes core ethical concepts (Section 2.1) and fundamental ideas from the social and behavioral sciences (Section 2.2). These foundations enable identifying, understanding, and thus better addressing ethical and societal dilemmas that arise with computing research and in the technologies it engenders. The chapter aims to give computer scientists and engineers, most of whom are neither philosophers nor social scientists, a basic understanding of the major ideas in ethics and the social sciences that will assist them in carrying out responsible computing research. Necessarily, given its brevity, the chapter's presentation of each area of scholarship is not in-depth. Importantly, this report does not assume, nor expect, that computer scientists and engineers will become experts in these areas of scholarship. Rather, the goal, both of this section of the report and of its recommendations, is to enable them to participate in meaningful collaborations with scholars in these fields, so that their computing research may be better informed and more responsive to societal needs.

The social and behavioral sciences provide methods for identifying the morally relevant actors, environments, and interactions in a sociotechnical system; ethical reasoning provides a calculus for understanding how to resolve competing moral tensions involving those actors, environments, and interactions. The theoretical foundations presented in this chapter can thus support the computing research community in identifying and making informed decisions about ethical and societal impact challenges that arise in their research.

They provide a basis for determining ways to adapt, for responsible computing, the processes of design, development, deployment, evaluation, and monitoring of computing research, and they thus help guide responsible downstream use of computing research in building the many products that are reshaping daily life. In particular, scholars with expertise in these areas can assist computing researchers in designing research projects that adequately meet societal constraints, norms, and needs.

To address the ethical and societal impact challenges discussed in this chapter, computing researchers need to be able to envision alternative ethical values and trade-offs among them, as well as alternative sociotechnical contexts. Scholarship in the field of design provides methods and techniques for doing such envisioning effectively. These methods and techniques are frequently deployed in human–computer interaction research and are now typically included in courses on this topic in computer and information science curricula. The report discusses design in Sections 3.3.1 and 3.4.4.

2.1 THE VALUE AND SCOPE OF ETHICS

In recent years, sets of principles aimed at guiding those engaged in the development and deployment of artificial intelligence (AI) systems toward ethical and socially beneficial outcomes have proliferated.1 To advance responsible computing research in general, one might consider taking these principles as a baseline and expanding on them to encompass other areas of computing, such as cybersecurity or software engineering. Principles may be a natural starting place for developing recommendations for responsible computing research, and indeed, many such principles make meaningful contributions to society's continuing deliberation about the present and future of computing. Nonetheless, they are insufficient in themselves, both because they are often relatively divorced from practice and because they tend to be presented without sufficient explanation of their underlying assumptions or origin in ethical reasoning. For example, a principle that says that a system must be "governable" or an algorithm's results "interpretable" provides little, if any, guidance about ways to develop or test for these properties. Similarly, a principle that says that a research project or product should be "respectful of human dignity" is unlikely to make a practical difference in isolation. Unless light is shed on the core assumptions—the fundamental ethical concepts, social theories, and humanist and social scientific factors—that underlie these principles, researchers lack guidance on how to interpret, critique, and apply them in practice. This chapter focuses instead on presenting fundamental ethical concepts: the very concepts from which such principles arise, but, more importantly, concepts that support the practical reasoning responsible computing research requires.

___________________

1 AI Index Steering Committee, 2022, "AI Policy and Governance," Chapter 5 in The AI Index 2022 Annual Report, Stanford University Human-Centered AI Institute, Stanford, CA, https://aiindex.stanford.edu/report.


Ethics provides tools for the moral evaluation of behaviors, institutions, and social structures. This section focuses on evaluation of behaviors, and Section 2.2 examines the roles of institutions and social structures. The tools for evaluating behaviors provide building blocks for ethical evaluation of computing research, including a language and concepts in which to express a set of baseline commitments against which to assess research. When assessing behaviors, one must distinguish between the moral evaluation of acts and that of agents. The first concerns what people ought to do; it aims to identify the right act to choose. The second concerns practices of moral blame and praise, and aims to identify who is responsible, and to what degree, when a morally right or wrong act is performed. Both these questions are important: achieving responsible computing research requires not only determining whether an action (e.g., a design choice) was responsible, but also determining who or what is responsible for that action. Responsibility comes in degrees depending on the nature of the researcher’s contribution and the source of any harm that ensues from the system.

In addition to determining whether an act is wrong, ethical evaluation also requires determining why the act is wrong and how seriously wrong it is. This further evaluation requires examination of values that are put into play by the computing researchers’ decisions, and determination of the extent to which their decisions undermine (or serve) those values. Institutional and social structures can also promote or undermine relevant values; for example, through promotion and tenure evaluations, or review criteria at conferences. Ethical evaluations often require examination of many mechanisms that impact relevant values, as well as potential conflicts between different values.

Consider, for example, evaluations of decisions about how to deploy potentially privacy-invasive computing technologies to support public health responses during the COVID-19 pandemic. The use of smartphones to support contact tracing by creating Bluetooth "handshakes" with other devices within a given range for a given period might help advance public health goals, especially if such information is integrated into manual contact tracing. But sharing information about people's "social graph" with a centralized health authority (as well as storing it on a device) can raise real privacy concerns. Understanding the contributions each design choice makes to supporting values such as autonomy, well-being, and the legitimate exercise of power can help structure a well-reasoned evaluation of this potential trade-off, as the sketch below illustrates.
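To make the design space concrete, the following minimal Python sketch illustrates one decentralized approach of the kind described above: devices broadcast rotating random tokens, record the tokens they observe nearby, and check for exposure locally against tokens published by a health authority. This is an illustrative simplification, not the mechanics of any deployed protocol; all names, parameters, and thresholds are hypothetical.

```python
import os
import time
from collections import deque

TOKEN_LIFETIME = 15 * 60     # seconds before a broadcast token rotates
CONTACT_THRESHOLD = 10 * 60  # seconds in range before a contact "counts"

class DecentralizedTracer:
    """Hypothetical sketch: rotating random tokens are exchanged over
    Bluetooth and stored only on the device, so no central authority
    ever sees the social graph."""

    def __init__(self):
        self.my_tokens = deque()  # (token, created_at) pairs this device broadcast
        self.seen = {}            # observed token -> cumulative seconds in range

    def current_token(self) -> bytes:
        # Rotating random identifiers limit long-term tracking of a device.
        now = time.time()
        if not self.my_tokens or now - self.my_tokens[-1][1] > TOKEN_LIFETIME:
            self.my_tokens.append((os.urandom(16), now))
        return self.my_tokens[-1][0]

    def record_sighting(self, token: bytes, seconds_in_range: float) -> None:
        # Called by the Bluetooth layer for each nearby device's token.
        self.seen[token] = self.seen.get(token, 0.0) + seconds_in_range

    def check_exposure(self, diagnosed_tokens: set) -> bool:
        # The health authority publishes the tokens of consenting diagnosed
        # users; matching happens on the device, so checking reveals nothing.
        return any(self.seen.get(t, 0.0) >= CONTACT_THRESHOLD
                   for t in diagnosed_tokens)
```

A centralized variant would instead upload the `seen` log to a health authority, which aids manual contact tracing but concentrates the social graph in one place; choosing between the two designs is exactly the kind of values trade-off this section describes.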

More generally, ethical evaluation fundamentally requires weighing multiple values, and both the values and the weightings of them are domains of intense disagreement. This report neither adopts a particular value system nor provides a complete decision procedure for resolving the conflicts and tensions that inevitably arise. The concepts it articulates cannot be used to derive an "ethical checklist," either for computing artifacts or for computing researchers. Indeed, no simple checklist could suffice for determining whether research is responsible; there is no mechanical procedure that can spare researchers from having to think about the values they embed in their research and the trade-offs they make in doing so.

2.1.1 From Ethical Theories to Ethical Values

Philosophers have developed a number of distinct ethical theories, each of which may be mobilized to determine whether an act, in a context, is morally required, permissible (but not required), or impermissible. These moral theories differ less with respect to the practical verdicts they endorse and more with respect to how they explain those practical verdicts2—a topic beyond the scope of this report. This report’s engaged ethics approach aims at being useful for the purposes of guiding responsible computing research: instead of applying some canonical, abstract ethical theory, it starts from an engagement with responsible computing issues and aims to identify the ethical concepts and reasoning that can be used to approach resolving them, sometimes yielding new theory.3

This section thus focuses on the fundamental building blocks of moral theories, namely, ethical values.4 The authors expect most reasonable moral theories to agree that these values are important, even though the theories may not agree on the precise details of the values or the ways to deploy them in an ethical argument.5

The space of plausible values is vast. One useful contribution of moral theories is to provide a shorthand for thinking about those values and their structure. So, this report adopts a pragmatic distinction to facilitate ethical analysis of responsible computing research: the distinction between intrinsic and instrumental values.6Intrinsic values are things that matter in themselves. Instrumental values are things that matter because they help us to realize intrinsic values. Intrinsic values are typically more abstract, general, and

___________________

2 D.W. Portmore, 2009, “Consequentializing,” Philosophy Compass 4:329-347, https://doi.org/10.1111/j.1747-9991.2009.00198.x.

3 A. Cribb, 2010, “Translational Ethics?: The Theory–Practice Gap in Medical Ethics,” Journal of Medical Ethics 36(4):207-210; J. Johnstone, 2007, “Technology as Empowerment: A Capability Approach to Computer Ethics,” Ethics and Information Technology 9(1):73-87; D. Danks, 2021, “Digital Ethics as Translational Ethics,” Pp. 1-15 in Applied Ethics in a Digital World (I. Vasiliu-Feltes and J. Thomason, eds.), IGI Global, Hershey, PA.

4 J. Raz, 1999, Engaging Reason: On the Theory of Value and Action, Oxford University Press, Oxford, United Kingdom; M. Schroeder, 2021, "Value Theory," In The Stanford Encyclopedia of Philosophy, Fall 2021 ed. (E.N. Zalta, ed.), https://plato.stanford.edu/archives/fall2021/entries/value-theory.

5 In particular, different substantive ethical theories may disagree about the exact scope of particular values (e.g., does privacy apply to email communications?) or the relative weights of the values (e.g., should transparency or privacy be valued more when those come into conflict?). Historically, different cultures have often endorsed different substantive theories, particularly with regard to how different values are weighted or traded off against one another. Despite these differences, these values can provide valuable building blocks for substantive theories.

6 M.J. Zimmerman and B. Bradley, 2019, “Intrinsic vs. Extrinsic Value,” In The Stanford Encyclopedia of Philosophy, Spring 2019 ed. (E.N. Zalta, ed.), https://plato.stanford.edu/archives/spr2019/entries/value-intrinsic-extrinsic.


This division can help us navigate substantive moral disagreement. Ethics scholarship generally agrees that the concepts described here as intrinsic values matter, but often disagrees about how much each intrinsic value matters or how to balance them. Because the importance of each instrumental value depends on its connection with various intrinsic values, ethical debates can be conducted in terms of the smaller set of agreed-upon intrinsic values. For example, suppose one has apparently conflicting instrumental values, but both matter because they help realize the same intrinsic values. In that case, one can translate the debate into the intrinsic values to provide a manageable currency in which to express, and potentially resolve, this apparent conflict. More generally, this division enables one to reduce the potentially infinite list of possible instrumental values to the more tractable set of intrinsic ones.

The lists below focus on relatively canonical intrinsic values and on instrumental values that are relevant for computing research. One challenge is that different philosophical traditions may use different terms for some values, even when the core value (and related concepts) is shared. Moreover, there is some philosophical disagreement about the exact list of intrinsic values, though agreement about the general content of the list. Some also suggest using rights to establish ethical foundations, but this approach is problematic.7 And of course, beyond this philosophical disagreement, there is considerable substantive moral disagreement across cultures, places, and times. The presence of pervasive disagreement may be daunting to computing researchers who are disposed to seek determinate solutions to quantifiable problems, and may lead some to seek to avoid ethical considerations entirely. Although it is impossible to eliminate this pervasive moral disagreement, it is possible to provide concepts that enable computing researchers and others to better understand those disagreements. By identifying a set of widely (though not universally) shared intrinsic values and illustrating how they are served by instrumental values specific to computing, this report offers computing researchers concepts with which to structure and understand both their own moral intuitions and the inevitable moral disagreements that they will confront when assessing the sources of ethical and societal challenges discussed in Chapter 3 and the recommendations in Chapter 4.

___________________

7 One approach to addressing ethical questions in computing research would be to rely almost exclusively on ideas of rights, perhaps rooted in domestic or international law. At least in principle, for example, virtually all countries in the world accept the idea of human rights. Although this report does not dismiss the importance of rights as a rhetorical framing for public discussions and debates about the implications of computing research, it relies on a somewhat different approach for both theoretical and practical reasons. First, the notion that rights provide foundations is itself illusory, as rights must be grounded in values that are important enough in a context to generate a duty that someone owes to the holder of the right. Second, the appearance of agreement on universal human rights is superficial and depends on the articulation of those rights being vague and general. Efforts to make universal human rights more precise inevitably lead to the same disagreements as arise with any other ethical concept. Third, rights are not useful for injuries that are significant only in the aggregate. Although this report is not grounded in rights, there are contexts in which such language is valuable, such as giving a name to duties to one another that might otherwise go unrecognized.


2.1.2 Intrinsic Ethical Values

  • Autonomy and freedom—Individuals have beliefs, plans, and goals, and autonomy is the ability to act on those beliefs by formulating plans to achieve goals. Different people’s autonomy, as well as their perceptions of autonomy (or its absence), can obviously come into conflict, and so philosophers have advanced more and less substantive conceptions of freedom and autonomy, including explanations of when perceived autonomy is relevant. At one extreme, so-called negative freedom consists simply in being free from interference by others. At the other extreme, so-called positive freedom consists in being able to formulate authentic beliefs and goals, and actually realize those goals. Autonomy as positive freedom also presupposes having a sufficient range of good options to choose from.8
  • Well-being (material and non-material)—People's well-being is an intrinsic value, as all people have an interest in achieving suitable levels of physical and psychological functioning. Material well-being requires access to sufficient sustenance, water, shelter, and so forth. Non-material (psychological and social) well-being is similarly an intrinsic value, described by some philosophers as "the social bases of self-respect." This type of well-being can be negatively impacted by, for example, representational harms by algorithmic systems that reproduce racist tropes. This value naturally translates into a right of access to a basic level of psychological and social health, even if people autonomously choose not to use that access.
  • Relational and material equality—Relational equality refers to people standing in equal social relations to one another such that, for example, one person is not unilaterally vulnerable to the other, or where each is an equal partner in decision-making. Many regard this type of equality as an instance of mutual regard for human dignity. The ethical wrong of exploitation (of other people) is closely associated with the intrinsic value of relational equality, though exploitative acts and policies can impact other intrinsic values as well.

  • Justice and legitimate power—Equality is a fundamentally comparative value—it concerns how people stand in relation to each other, or how their material well-being compares with that of others. There is also a distinct value of justice that is noncomparative. Justice focuses on ensuring that people receive what they are due, which may require the use of legitimate power. In modern societies, people's ability to live according to their intrinsic values (and be protected from others' actions) can require an authority to exercise power and thereby resolve disputes, protect the vulnerable, and enact collective self-determination. It is intrinsically valuable that this power be exercised legitimately—in ways that are limited, impartial, and properly authorized by the community that the authority represents. Of course, in many contexts, one may also find significant instrumental value in justice and the legitimate exercise of power.
  • Collective self-determination—Just as some forms of individual autonomy may be intrinsically valuable, so the self-determination of groups and collectives can have intrinsic value. One’s life plans frequently depend on coordinated planned action with others, and so too there is value in one’s group being able to autonomously pursue legitimate plans. This value does not entail that collectives have values above those of their members, but only that individuals can find intrinsic value in the success of their groups.
  • A thriving natural environment—The natural world arguably has moral status on its own, independently of human interests. As such, there are arguments that a thriving natural environment is intrinsically valuable, analogous to human thriving being intrinsically valuable. Of course, the environment is also instrumentally valuable in the ways that it enables us (and other moral beings) to thrive.

___________________

8 There have been many efforts to develop viable universal, inclusive, and trans-cultural conceptions of the substantive bases for "real" autonomy, in contrast with the relatively easier task of evaluating comparisons in the extent of autonomy experienced by particular persons, or merely assessing subjective perceptions of autonomy among individuals or groups. For example, it is widely accepted in human rights law that women ought not have less autonomy than men, but that comparative claim does not establish criteria for determining whether persons in general objectively have "sufficient" autonomy, nor does it necessarily resolve conceptual questions about whether autonomy means the same thing in different contexts. These issues can be important for topics like surveillance that touch all members of a society equally. Computing researchers whose efforts implicate such questions are strongly encouraged to learn more about these complex issues.

2.1.3 Instrumental Ethical Values

Instrumental values are ethically important because they contribute to the realization of (or the capability to realize) intrinsic values. Instrumental values thus tend to be a more heterogeneous collection than intrinsic values, as their ability to contribute to people's intrinsic values will depend partly on the particular context, environment, and agents. Instrumental values are sometimes not actually valuable, as pursuit of them might not lead, for that agent in that context, to any intrinsic value; they must be assessed in context-sensitive ways. The instrumental values listed below are ones frequently raised in conjunction with computing research. Some expectations for responsible computing research are themselves an amalgamation of instrumental values. For example, such values as privacy, trust, and transparency are relevant to the value of non-exploitative participation or use—that is, to ensuring that an individual's participation, activity, or data is used in ways that the individual understands and agrees to.


  • Privacy—The ethical value of privacy arises in many different spheres, not only those to do with computing research and product deployment. In the computing context, analyses of privacy all focus on its importance for protecting some other important value, though conceptions of privacy differ across individuals, cultures, and contexts. Privacy can provide protection against manipulation or coercion (supporting autonomy); enable formation of intimate relationships that partly depend on the secrets shared with loved ones (supporting psychological well-being); protect against overreaching and illegitimate governments (supporting collective self-determination and legitimate power); or yield other support depending on the computing use and context.
  • Avoidance of unjust bias—Unjustly biased systems potentially undermine material well-being, non-material well-being, justice, autonomy, and collective self-determination for those against whom they are biased, while also often undermining relational equality. For example, university admissions systems that are biased against people with working-class backgrounds can result in the harm of denied opportunities, and biased medical resource allocation algorithms can divert resources for medical care away from needy but historically disadvantaged populations. In general, unjustly biased systems perform worse for members of historically disadvantaged groups than for members of historically advantaged groups, without an ethically defensible reason for this bias. As a result, these systems can also damage non-material well-being even when deployed in contexts with lower stakes than these.
  • Fairness—Although there are many conceptions of fairness, most imply that one should use the same kinds of ethically defensible reasons for all decisions of a particular type. For example, one might use only an applicant's income to determine whether to approve a loan, as income is clearly relevant to the ability to repay. The ethical value of fairness is not the same as the absence of bias. As this example shows, fair systems can sometimes be biased (depending on the broader context); conversely, unbiased systems can nonetheless be unfair (as when a monopolistic company unfairly charges higher prices to everyone). There have recently been a number of proposed statistical measures of fairness, and while those might provide signals or guidance about potential (un)fairness, it is important to recognize that they are not constitutive of it (see the sketch following this list).
  • Trust and trustworthiness—At a high level, trust involves someone becoming vulnerable in certain respects because of (justified) expectations about the person or tool being trusted; trustworthiness is the property of a person or tool that makes such trust reasonable. Trust thus enables people to do or realize much more when that trust is appropriately placed. For example, a trustworthy computing system could be valuable because it maintains data integrity, or learns from incorrect predictions, or otherwise supports intrinsic values such as material well-being or autonomy.
  • Verifiability—People understandably have an interest in knowing that a computational system will function correctly so that they can use or engage with it appropriately. That is, people instrumentally value being able to foresee a system’s behavior, because that knowledge enables them to use the system to advance other values. The importance of knowing that a system will behave correctly is closely aligned with computing research on (formal) verification.
  • Assurance—People also value having reasons for believing that a system, or another human, will behave in expected ways. Reason-giving is a ubiquitous feature of ethical debate and discussion, often serving to excuse perceived ethical lapses or errors; we do not just value knowing what others will do, but also why they will. This information is particularly (instrumentally) valuable because it supports successful interaction in new contexts. Computing research on system assurance similarly focuses on the value of providing reasons to expect that the system will behave appropriately, particularly in complex environments.
  • Explainability, interpretability, and intelligibility—These concepts are grouped together as they have all been proposed as ways to promote understanding of increasingly complex computational systems, and thus to support meaningful deliberation, oversight, or use of these systems. There are no widely shared definitions of these terms—for instance, one person’s explainability is another’s interpretability—but they have a shared conceptual core of increased understanding, whether about predictions, control, potential improvements, or transfer. Improved understanding is clearly ethically valuable in many cases, but always because of what it enables—increased autonomy, better outcomes, lower risks, greater security, accountability in the exercise of power, and so forth.
  • Safety—In computing contexts, safety primarily concerns the design and structure of the system, and whether it will behave appropriately such that it does not kill, injure, harm, or otherwise endanger the well-being of users or other individuals.9 This focus is closely related to the broader ethical value of being able to act as desired without active threat. It thereby supports the intrinsic values of well-being and autonomy, either directly or through changes to or preservation of the natural environment.
  • Security—The related concept of security has a specific meaning in computing research focusing on performance of real-world implementations (with potential adversaries), and this applied concept is rooted in a deeper ethical concept, according to which the intrinsic values described above should be enjoyed securely—that is, without worry that significant threats or harms might arise. Protection against novel or additional threats makes actual harm less likely, and so enables people to better realize their intrinsic values.
  • Transparency—In computing, this notion encompasses a broad range of goals, including algorithmic transparency; clarity and specificity about system capabilities; and information about how one's data are used. Techniques that support explainability, interpretability, and intelligibility are one way to increase transparency. These goals can be valuable for ensuring appropriate use or understanding of computing systems so that people can realize their intrinsic values.
  • Inclusiveness and diversity—Inclusion of a range of diverse perspectives is frequently emphasized as an important value in computing research contexts, including by this report (see Sections 1.6 and 3.1). There is growing recognition across many sectors—government, industry, academia—that diverse and inclusive teams make better decisions. Increasing diversity and inclusion—in research teams as well as among the stakeholders who are consulted in carrying out the research—also supports relational equality and collective self-determination, when everyone feels empowered to contribute to important decisions that affect their community. Non-material well-being—the social bases of self-respect—is also enhanced by removing barriers for underrepresented groups and maintaining environments that are conducive to their participation.
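As a concrete illustration of the fairness discussion above, the following Python sketch computes one commonly proposed statistical measure, the demographic parity gap; the function name and toy data are hypothetical. In keeping with the caveat above, such a number can signal potential unfairness, but it is not constitutive of fairness.

```python
from collections import defaultdict

def demographic_parity_gap(decisions, groups):
    """Difference between the highest and lowest favorable-outcome rates
    across groups. decisions: list of 0/1 outcomes; groups: parallel list
    of group labels."""
    favorable = defaultdict(int)
    total = defaultdict(int)
    for d, g in zip(decisions, groups):
        favorable[g] += d
        total[g] += 1
    rates = [favorable[g] / total[g] for g in total]
    return max(rates) - min(rates)

# Toy example: loan approvals for two hypothetical applicant groups.
decisions = [1, 0, 1, 1, 0, 1, 0, 0]
groups    = ["A", "A", "A", "A", "B", "B", "B", "B"]
print(demographic_parity_gap(decisions, groups))  # 0.75 - 0.25 = 0.5
```

Other proposed measures (e.g., equalized error rates across groups) formalize different intuitions and can conflict with one another, which is one reason no single statistic can settle whether a system is fair.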

Ethical challenges often involve conflicts between values. For example, decisions about surveillance systems, including facial recognition, almost always involve trade-offs between increased security and increased privacy, as when decision-makers must decide how many cameras to deploy, or indeed whether to adopt facial recognition algorithms at all. In this example, computing research can potentially help change this trade-off (e.g., through privacy techniques embedded in the cameras themselves), but not fully eliminate it. Or consider the trade-offs that arise when only some people benefit while others are potentially harmed: for example, software engineers are often pressured to rush products that have not been fully tested, thereby pitting the values of some end-users (e.g., their material well-being if the system fails) against those of other end-users (e.g., their autonomy to be free to use the product) and those of the employees (e.g., to advance the company in personally beneficial ways). The concepts, ideas, and framework articulated here provide the resources to engage in careful analysis and decision-making about cases like these, even if no simple checklist or decision procedure is possible. Section 3.1.1 further explores the challenges in reconciling conflicting values and goals of stakeholders.

___________________

9 M. Bishop, 2019, Computer Security, 2nd ed., Addison-Wesley, Boston, MA, p. 630.


2.2 THE POWER OF A SOCIOTECHNICAL PERSPECTIVE

The technological artifacts that computing research creates—from algorithms and other computational methods to networked systems—participate in an increasingly complex and highly interconnected ecosystem of people, institutions, laws, and societies. It is computing's participation in this social ecosystem and, consequently, its far-reaching societal impact that give rise to the ethical challenges, questions of responsibility, and calls for greater accountability increasingly posed to computing research. The sociotechnical approach10 explained in this section provides an important framework for the computing research community in its pursuit of understanding ways to identify and address these challenges. The term sociotechnical conveys more than the fact that computing systems have users. It highlights that people—individually, in families, at work, in communities, as members of society, and so on—are interacting with and affected by computational systems, tying humanity to the material, technical, and social worlds created by computing research.11

A sociotechnical approach enables identifying, designing for, and tracking the benefits and risks that arise from introducing novel technologies into social worlds. It draws on social theories, social scientific methodologies, and empirical observations that enable the development of hypotheses about the ways people interact with the world around them.

___________________

10 T. Pinch and W. Bijker, 1984, "The Social Construction of Facts and Artefacts: Or How the Sociology of Science and the Sociology of Technology Might Benefit Each Other," Social Studies of Science 14(3):399-441; S. Sismondo, 2011, An Introduction to Science and Technology Studies, John Wiley & Sons, Hoboken, NJ; E.J. Hackett, O. Amsterdamska, M. Lynch, and J. Wajcman, eds., 2008, The Handbook of Science and Technology Studies, MIT Press, Cambridge, MA. Other relevant resources include the ACM Computer-Supported Cooperative Work and Social Computing (CSCW) Conference, which has been hosting a robust conversation about sociotechnical systems since the 1990s, the European CSCW Conference and CSCW Journal, and the field of Science and Technology Studies' flagship journal Science, Technology, & Human Values, which has a wealth of important research about sociotechnical systems.

11 T.P. Hughes, 2004, Human-Built World: How to Think About Technology and Culture, University of Chicago Press, Chicago, IL.

These theories and methods thus enable investigations of the ways people interact with computing technologies in various contexts and circumstances of use, and of the complex roles that technologies play in those dynamic interactions. They are essential to revealing ways to make computing research responsible in this time of widely deployed and highly networked computing systems.

The analytical methods the social and behavioral sciences use to generate meaningful insights about the world include ethnographic observation methods, in-depth interviews, survey studies, historical analysis, and simulations and experimental studies in controlled lab settings.12 These methods have been developed and deployed by scholars in many disciplines—including anthropology, information science, education, ethnic studies, history, qualitative sociology, political science, public health, urban studies, and women and gender studies. Computing researchers cannot be expected to become experts in any one of these disciplines, let alone all of them. They can, however, collaborate with scholars in these disciplines, who can assist them in understanding and applying such nuanced concepts as society, gender, race, justice, and systemic oppression that are studied in the social sciences.

Of the many social science approaches, the discussion in this section draws mainly on insights from the interpretivist paradigm, an approach that focuses on understanding how people make sense of their everyday lived experiences and the ways those understandings shape what people do and value in the world. It provides effective methods for analyzing the relationships among science, technology, and society, enabling study of the ways new technologies may be adopted by people and the organizations in which they operate. In particular, this approach draws social considerations into focus for even the most seemingly purely technical systems, and thereby shows the advantage of framing the challenges of responsible computing research as sociotechnical problems. More socially accountable technologies that support people's manifold values require the expertise of both social scientists and computing researchers.13

2.2.1 Sociotechnical Systems Briefly Explained

Technologies and the social contexts in which they are used shape one another in an ongoing feedback loop. The social contexts that participate in this feedback loop include the many interpersonal, linguistic, cultural, professional, institutional, and historical experiences that shape individuals, as well as their personal experiences. Technologies take on their meaning and value in particular places and moments in time, among networks of people, and in their physical environments.

___________________

12 See, for example, J.S. Olson and W.A. Kellogg, eds., 2014, Ways of Knowing in HCI, Vol. 2, Springer, New York; M.Q. Patton, 2014, Qualitative Research and Evaluation Methods: Integrating Theory and Practice, Sage Publishing, Los Angeles, CA; U. Felt, R. Fouché, C.A. Miller, and L. Smith-Doerr, eds., 2016, The Handbook of Science and Technology Studies, MIT Press, Cambridge, MA; M.J. Salganik, 2019, Bit by Bit: Social Research in the Digital Age, Princeton University Press, NJ.

13 G. Ropohl, 1999, “Philosophy of Socio-Technical Systems,” Society for Philosophy and Technology Quarterly Electronic Journal 4(3):186-194, https://doi.org/10.5840/techne19994311.

Even the simple technology of scissors was designed initially with a notion of a universal person in mind—the right-handed person. As a result, seemingly purely technical systems are not just technical, and approaches to responsible computing must grapple with the myriad ways computing research interacts with people and the social contexts they inhabit. The facial recognition example presented below illuminates the sociotechnical nature of computing technologies.

Computing research not only shapes, but also is shaped by, a range of values, priorities, influences, and effects. Stakeholders of various types are embedded in the social contexts of computing research—including funding agencies, academic peer reviewers, and investors—who influence the deployment and use of computing research. Thus, the computing research enterprise itself participates in a sociotechnical system.

Another insight from social science scholarship is that the effects of computing research results and the products they enable are influenced by social phenomena at multiple scales.14 Macroscale social phenomena include national laws, economic conditions, and shared political ideologies; mesoscale social phenomena include organizational cultures and institutional rules; and microscale social phenomena include interpersonal relationships and shared identities. These different scales of social phenomena interact, particularly because they might embody different core values.15 For example, micro-level interactions (such as the treatments a patient is offered by a clinician) are shaped by meso-level cultural norms (such as whether there are patterns of systemic racism in the admissions practices of a regional hospital) and macro-level policy decisions (such as laws that restrict who can legally access health care). Individuals help shape the kinds of research questions that are asked, but so do the organizational cultures and incentives that influence whose questions are pursued and what approaches are taken.16 The interplay of these connections and social phenomena makes technical systems challenging to design, build, and deploy.

2.2.2 From Image Recognition to Facial Recognition Technologies

The importance of a sociotechnical perspective can be readily seen by considering the contrast between image recognition and facial recognition systems. From a purely technical computing perspective, it might seem that facial recognition technology is merely "image recognition applied to faces." However, there are stark differences between the technologies when one adopts a broader perspective.

___________________

14 Social scientists might characterize these scales with more nuance than the simple explanations given here for the sake of those in other fields.

15 E.L. Trist, 1981, The Evolution of Socio-Technical Systems, Vol. 2, Ontario Quality of Working Life Centre, Toronto.

16 M.S. Ackerman, 1998, “Augmenting Organizational Memory: A Field Study of Answer Garden,” ACM Transactions on Information Systems 16(3):203-224, https://doi.org/10.1145/290159.290160.


Facial recognition technologies were built on a body of image recognition research, and two developments were particularly important. The first was box-bounding, a simple, elegant core image recognition technique that provides a mechanism for digitally drawing (and using) discrete boundaries around objects. Box-bounding makes it possible for human annotators to mark the borders between the features of an image and thus helps computer vision methods develop the ability to distinguish among different objects, including faces. The second was the use of human work to label image recognition training data, harnessing human judgment to assist machine learning systems.
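As a brief illustration of the box-bounding idea, a bounding-box label and the standard overlap measure used to compare a model's prediction with a human annotation might look like the following Python sketch. The class, coordinate convention, and values are invented for this example, not drawn from any particular annotation tool.

```python
from dataclasses import dataclass

@dataclass
class Box:
    """An axis-aligned bounding box in pixel coordinates:
    (x1, y1) is the top-left corner, (x2, y2) the bottom-right."""
    x1: float
    y1: float
    x2: float
    y2: float
    label: str = ""

def iou(a: Box, b: Box) -> float:
    """Intersection over union: how well two boxes agree (0.0 to 1.0)."""
    ix = max(0.0, min(a.x2, b.x2) - max(a.x1, b.x1))  # overlap width
    iy = max(0.0, min(a.y2, b.y2) - max(a.y1, b.y1))  # overlap height
    inter = ix * iy
    union = ((a.x2 - a.x1) * (a.y2 - a.y1)
             + (b.x2 - b.x1) * (b.y2 - b.y1) - inter)
    return inter / union if union else 0.0

# A human annotator's label and a model's prediction for the same face.
annotated = Box(100, 60, 180, 160, label="face")
predicted = Box(110, 70, 190, 170, label="face")
print(round(iou(annotated, predicted), 3))  # about 0.649
```

Large collections of such human-drawn boxes are what allow classifiers to learn where one object ends and another begins.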

Four research advances enabled this second development: the capacity to collect and store images from vast troves of user-generated content scattered across the Internet (that could serve as training data); the development of platforms for effective labeling of images by people at a large scale; the increased computing power of graphics processing units; and the development of faster deep neural network algorithms.

ImageNet, a large collaborative project initiated by Fei-Fei Li at Stanford University, is one of the earliest and most well-known examples of the successful development of human-labeled training data for computer vision. The success of ImageNet validated a methodology for benchmarking ground truth in machine learning. With enough data, it became possible to build reliable classifiers for various types of images—including, importantly, faces.

Although some researchers at the time thought about the possible consequences of image recognition systems, there were few if any in-depth, sociotechnically informed investigations of the socially relevant consequences of automating computer vision of social worlds. However, the social consequences of image recognition change significantly when the images being recognized are human faces. Two components of this transition raise important ethical and societal impact issues. These issues arise as well for other objects that are non-neutral cultural artifacts.

First, researchers were able to amass a collection of faceprints that were purchased, donated, or surreptitiously scraped from image stashes available online through myriad sources—everything from local newspapers to Flickr and other photo hobbyist sites. This case provides a compelling illustration of the conflicts that can arise between different instrumental values (Section 2.1)—here, between low data acquisition costs for researchers and the autonomy of the subjects whose data are used. Hundreds of computing researchers, using these collections, advanced the ability of software to identify any single face, with a computational projection of the mathematical likelihood that an image taken in real time matched the face in front of it. Second, combining box-bounding with platforms for human labeling of images created a powerful mechanism for image classifiers—the capacity to rapidly aggregate human decisions to validate and structure large quantities of data—in particular for identifying faces in images.17


The collection and use specifically of facial data raise novel societal issues: What images might different people, with diverse languages, cultures, and laws, consider sensitive, profane, or private? What collection methods (beyond confirming the data) were secure? How was the privacy of the individuals supplying the images protected? What if people did not want to be part of a research project? These data also raise some issues familiar from other contexts about what rights those posting their images publicly on the Web have with respect to the various ways their information might be used.

This workflow also raises distinctive issues when applied to faces, particularly with regard to the training of the image classifiers and the potential uses of these classifiers: What if people whose faceprints are bucketed into the same demographic category for training data would not agree with where they are placed? What are appropriate and inappropriate uses of ImageNet and face recognition software? What kinds of governance structures need to be in place to ensure appropriate use of and access to ImageNet and other data sets, or to govern uses of facial recognition overall?

2.2.3 Characteristics of Sociotechnical Systems

Facial recognition technologies illustrate several key characteristics of sociotechnical systems: interactivity of social scales and technology design, divergent stakeholder values, challenges of achieving universality, the role of social-historical contexts, limited predictability of future uses, values implicit in design, continuous integration and evolution, and impacts beyond the individual level. Each of these is described briefly below.

Challenges of Achieving Universality

People are more than the sum of their personal experiences and individual attributes. They share group identities as well as linguistic and cultural traditions, and they navigate a world of laws and economic conditions. Models of individual human cognition and behavior on which computing research often relies tend to flatten or disregard such social phenomena. People who design a technical system may even start with themselves as the typical user, without explicitly thinking through how much or how little they resemble the breadth and depth of human experience. The people represented in data sets may be only those who use a specific system, and thus not representative of any larger span of humanity than that population. For instance, the images in a facial recognition system represent only people like those who have uploaded images. Analogous problems arise for systems designed with an imagined typical user (or, abstractly, "anyone") rather than with careful attention to engaging the full range of stakeholders.

___________________

17 M.L. Gray and S. Suri, 2019, Ghost Work: How to Stop Silicon Valley from Building a New Global Underclass, Houghton Mifflin Harcourt, Boston and New York.


Stakeholders’ Divergent Values and Power

Individuals, groups, and organizations may have both shared and divergent interests in how a system is designed and deployed. For example, among the many stakeholders in facial recognition technologies are the people who appear in the images, the different people now potentially classified as similar enough to members of a group in the images, the workers who labeled the images, the researchers (at varying stages of their careers and in different positions) who developed the different systems, and the researchers at universities or in corporations who adopted the box-bounding and faceprint innovations. The stakeholders also include the people who use the technology, people whose images or faces are recognized correctly or incorrectly, the marginalized populations that are not represented, and government agencies that make decisions based on recognized images and faces. Perhaps the largest, most nebulous stakeholder is society itself as it absorbs the many and varied applications of face recognition and computer vision technologies and the ways they are deployed by macro- and meso-scale organizations.18

The power to influence outcomes is not uniform among these stakeholders, and so they are not equally able to advance or advocate for their values.19 For instance, the political, cultural, and economic environments in which facial recognition development was embedded shaped decisions about what and whom to fund; in lab settings, Ph.D. students and professors have different degrees of power to set agendas or challenge the status quo; and marginalized groups who could address whether particular projects using image recognition are appropriate—using faces to identify sexuality or gender, for example—are likely not present and empowered to question such projects and convince researchers to drop such research.

More generally, the design of technologies reflects the values of the people and institutions that created them. Researchers' relationships with each other shape their collaborations and scholarly production. Publishing, intellectual property, and reputational norms impact knowledge sharing about computing research. Differences in power among these groups can thus impact the computing research and technology that result.

___________________

18 K. Levy, presentation to the committee on March 11, 2021, Cornell University; V. Eubanks, 2018, Automating Inequality: How High-Tech Tools Profile, Police, and Punish the Poor, St. Martin’s Press, New York.

19 S. Costanza-Chock, 2020, Design Justice: Community-Led Practices to Build the Worlds We Need, MIT Press, Cambridge, MA.

Values Implicit in Technology Design

An important argument from social studies of technologies is that all sociotechnical systems have particular values built into them that condition the possibilities of use.20 There is nothing intrinsically wrong—or intrinsically right—about technologies reflecting particular values or encouraging particular uses. For instance, the Internet Relay Chat (IRC) and XMPP distributed protocols reflect different sets of values. IRC's underlying distributed algorithms optimize for real-time interaction while sacrificing security, whereas the XMPP protocol is open and more secure. Because of these design choices, IRC is often used for applications serving geographically dispersed people who have poor network connectivity and wish to share files; this enables a very particular set of communication and social practices. XMPP might be better for organizations with good network connections that require security for private conversations.

Interactivity of the Social Scales and Technology Design

Sociotechnical systems must be considered from many vantage points to account for the interplay of people and the macro, meso, and micro levels of social phenomena that shape the meaning of technologies when they are put into use. Technology design cannot fully be decoupled from the phenomena of social contexts.

For example, the social phenomenon of gender identity—people's everyday experience of gender—is constantly evolving along with the macro level of legal requirements, the meso level of cultural norms, and the micro level of self-esteem. Yet many applications of facial recognition technologies rely on classifying gender as a stable, binary category. As a result of their need to classify and discriminate according to assumptions about the ground truth of gender,21 such technologies cannot inclusively reflect the social phenomena of gender identities. These applications thus raise potential problems of bias.

Similarly, attempts to mathematically define and measure bias cannot encapsulate people's full experience of discrimination, because those everyday social experiences are complex and unique to a specific moment and place in time. Social phenomena like xenophobia, ableism, racism, patriarchy, and cis-/heteronormativity have multiple interwoven, interacting, and overlapping effects. Cisnormativity, for example, helps explain how some expressions of gender can be marginalized by those considered "normal" in a social setting. However, what counts as normal or typical differs across cultures and points in history,22 and the meanings and impact of these forms of gender discrimination change by location and time.23

___________________

20 L. Winner, 1980, “Do Artifacts Have Politics?” Daedalus 109(1):121-136, http://www.jstor.org/stable/20024652.

21 V. Mays, presentation to the committee on May 6, 2021, University of California, Los Angeles.


Societal Impacts Are Not Aggregated Individual Impacts

The impacts of sociotechnical systems are not reducible to effects on individuals. The risks and benefits of computing research may have effects at all three levels of social phenomena. The meso level can be a point of significant impact. For instance, workplace guidelines may determine whether some workers must use facial recognition software to log into a secure work site. Furthermore, computing research and the systems that follow from it may have social consequences even when their impact occurs through absence; for example, through the lack of an accessible, secure, and affordable technical option in some community, or through an algorithmic model that fails to recognize one's skin tone or face shape.24 For example, opponents of facial recognition might use privacy, an individual value, to argue against its use. Privacy also has social effects, taking on very different meanings in societies that see a division between public and private life as important to self-development and social connection.25

Social-Historical Influences

Sociotechnical systems arise from, and remain historically bound to, the particular social and political contexts of their creators and users. Such systems conform to laws and regulations and rely on materials, norms, and social conventions that are historically configured.26 Even the most innovative new research projects build on existing technology and are affected by the decisions that shaped that technology.27 Social contexts create both opportunities (e.g., large swaths of images) and limitations (e.g., images bound to the context of the services that host them) for computing researchers.28 For example, when researchers went to social media to find images of people to develop faceprints, they inherited the histories of the designs of those systems. They also inherited the evolving demographics of user-generated content, which in turn depend on differences in access to computing technology across society, thus affecting the quality and biases of the resulting models. Data sets built from user-generated content posted to social media inherit the legacies of regulatory regimes (or lack of regulation) as well as the commercial goals of social media corporations.29 Having been shielded from certain liabilities for hosting user-generated content, social media companies adopted algorithms and content moderation policies that incentivized particular kinds of image sharing. These algorithms in turn shaped the content and user interactions among those engaging with those social media systems.

___________________

22 J. Butler, 1990, Gender Trouble: Feminism and the Subversion of Identity, Routledge, New York; S. Stryker, 2017, Transgender History: The Roots of Today’s Revolution, Seal Press, New York.

23 E. Newton, 2015, Cherry Grove, Fire Island: Sixty Years in America’s First Gay and Lesbian Town, Duke University Press, NC; M.L. Gray, 2009, Out in the Country: Youth, Media, and Queer Visibility in Rural America, New York University Press, New York.

24 J. Buolamwini and T. Gebru, 2018, “Gender Shades: Intersectional Accuracy Disparities in Commercial Gender Classification,” Proceedings of the 1st Conference on Fairness, Accountability and Transparency in Proceedings of Machine Learning Research 81:77-91, https://proceedings.mlr.press/v81/buolamwini18a.html.

25 S. Duguay, 2022, Personal Not Private: Queer Women, Sexuality, and Identity Modulation on Digital Platforms, Princeton University Press, NJ.

26 L. Winner, 1980, “Do Artifacts Have Politics?” Daedalus 109(1):121-136, http://www.jstor.org/stable/20024652.

27 F.W. Geels, 2005, “The Dynamics of Transitions in Socio-Technical Systems: A Multi-Level Analysis of the Transition Pathway from Horse-Drawn Carriages to Automobiles (1860-1930),” Technology Analysis and Strategic Management 17(4):445-476, https://doi.org/10.1080/09537320500357319.

28 D. Forsythe and D.J. Hess, 2001, Studying Those Who Study Us: An Anthropologist in the World of Artificial Intelligence, Stanford University Press, CA.


Time and shifts in power among stakeholders can change any sociotechnical system. For example, Illinois’s Biometric Information Privacy Act, passed in October 2008, barred private enterprises from collecting, using, or sharing biometric data from people in the state without their consent. A class action lawsuit filed in 2015 on behalf of more than 1 million Facebook users in Illinois resulted in a $650 million settlement over the company’s practice of using facial recognition to tag people in photos without their consent; in jurisdictions where regulators have not agreed to restrict the private collection of biometric data, people have no such recourse.

Continually Produced and Evolved Interactions

Sociotechnical systems exhibit a continuous feedback loop between the social causes and the social effects of technological change. This loop creates a challenge for responsible computing research: social phenomena and their relationships with technologies can seem stochastic and hard to interpret. From the viewpoint of any individual person and any specific technical system, it can seem impossible to predict, let alone prevent, what happens with technologies as they unfold. As a result, attention needs to be paid at the earliest stages of research to all three scales of social phenomena, including the values that they support or hinder. Such attention, along with a plan for ongoing monitoring and reevaluation by those deploying technologies or otherwise responsible for their governance, is needed as research insights make their way into deployed systems and as expectations and concerns shift over time. In the case of facial recognition, vendors have found additional uses, for example, in community-led neighborhood watches and other public safety activities. The concerns at the beginning of a technology’s developmental lifecycle are not the same as those that surface after wide-scale deployment. As laws, cultural norms, personal experiences, and other social phenomena react to and absorb technologies landing in people’s everyday lives, the uses of any given technology are likely to change, in a constantly shifting and evolving ecology that entwines technologies, people, and social contexts.30

___________________

29 T. Gillespie, 2018, Custodians of the Internet: Platforms, Content Moderation, and the Hidden Decisions That Shape Social Media, Yale University Press, New Haven, CT.


Predictability of Future Use

A technology’s design shapes its possible uses, but its interplay with social phenomena is a key factor in determining actual uses. For example, Wikipedia was enabled by wiki technology, but its success was not ensured by that alone. Micro phenomena, such as individuals’ interest in sharing facts about anime, and meso phenomena, such as a supply of educated people with leisure time and Wikipedia’s organized working structure, were critical.31 It is therefore not possible to predict the full set of future uses of any computing research result from the technical results alone, as social changes can open new possibilities. As a consequence, systems of accountability, equipped to track computing research as it iterates, are critical to addressing its social impact.

All research products have characteristics and capabilities that privilege certain uses by design. Particular social, legal, cultural, economic, and political conditions are required for technologies to work in the ways their designers envisioned, and technologies can also be used in ways that researchers, designers, and builders do not fully expect. A sociotechnical approach, together with the methods and analytic tools of the social sciences, enables hypothesizing ways that a technology might be used and identifying uses likely to align with value choices and with salient macro-, meso-, and micro-level social phenomena.

* * *

The concepts and methods of ethical and sociotechnical analysis presented in this chapter complement one another as essential constituents of responsible computing research in this era of widely deployed and highly networked computing systems. The sociotechnical perspective described in Section 2.2 and the ethical analyses of values and tradeoffs described in Section 2.1, combined with methods of ethical reasoning and such social science methods as ethnographic observation, in-depth interviews, survey studies, and historical analysis, can support computing researchers in identifying and resolving the ethical and societal impact challenges that arise from introducing novel technologies into social worlds. Chapter 3 illustrates their use in identifying the underlying roots of such challenges.

___________________

30 T. Hughes, 1989, “The Evolution of Large Technological Systems,” Pp. 51-82 in The Social Construction of Technological Systems: New Directions in the Sociology and History of Technology (W.E. Bijker, T.P. Hughes, and T.J. Pinch, eds.), MIT Press, Cambridge, MA; M. Finn, 2018, Documenting Aftermath: Information Infrastructures in the Wake of Disasters, MIT Press, Cambridge, MA.

31 B.M. Hill, 2013, “Essays on Volunteer Mobilization in Peer Production,” Ph.D. dissertation, Massachusetts Institute of Technology.


Together, these concepts and methods enable the development of pragmatic practices that can guide researchers in ways to carry out socially attuned computing research. It is important to note again that computer scientists cannot be expected to become expert ethicists and social scientists. Rather, responsible computing research requires that they collaborate with experts in other disciplines who can bring these important instruments to bear as computing research is designed and carried out.
