Computing Community Consortium Blog

Addressing harms through design

May 16th, 2024 / in CCC / by Haley Griffin

The following blog post was written by CCC’s Addressing the Unforeseen Deleterious Impacts of Technology (AUDIT) Task Force.

This article is the second of two related blog posts on proactively addressing the unforeseen harms of technology. In the previous post, we discussed the importance of addressing the negative consequences of technology and the difference between genuinely unforeseen risks and those that could have been foreseen but remained unacted upon. In this article, we will discuss the importance of proactively designing technology to reduce the potential for both foreseen and unforeseen negative consequences.

Mitigating risks in the design phase is a common tenet of software engineering. A 2002 report from the National Institute of Standards and Technology found that the cost of correcting an error once a product is released is 30X higher than the cost of making the correction in the design phase [Tassey, 2002]. The further downstream in the development cycle a problem surfaces, the costlier it is to fix. These lessons apply not only to errors; they apply to properties of a system such as security as well. As early as 1972, James P. Anderson wrote in an Air Force Technical Report that “merely saying a system is secure will not alter the fact that unless the security for a system is designed in at its inception, there are no simple measures to later make it secure” [Anderson, 1972].

Unfortunately, while these tenets have been known for decades, they have often not been practiced. Mark Zuckerberg famously declared in 2012 that Facebook had a saying: “Move fast and break things” [Zuckerberg, 2012]. Security has often been treated in a similarly cavalier fashion: under the design paradigm of “penetrate-and-patch,” vulnerabilities are fixed by patching the software only after a compromise is discovered, and this is treated as the expected mode of operation [McGraw, 1998]. For example, Microsoft has maintained “Patch Tuesday” releases of security fixes for its core software for years [Neumann, 2006]; this model continues on modern devices, from desktops and laptops to smartphones and IoT devices.

How can we ensure that systems are responsible by design? The answer can be challenging, as designers need to take the time to discuss and anticipate negative consequences. Even when foreseeable consequences are addressed, future cases, uses, or contexts may arise that the design did not account for. For example, the MS-DOS operating system was built around the requirements of the IBM PC in the early 1980s, when computers were single-user machines used in isolation. When computers became multi-user and, more importantly, connected to networks such as the Internet, the usage model changed, and with it the risks such a computer would face, in ways that made MS-DOS unsuitable.

Therefore, designers must be as forward-thinking as possible so that as few consequences as possible remain unforeseen. There are many lessons we can draw from the past to guide design and ensure that even the unforeseen can still be addressed. Consider privacy as an example. By requiring users to explicitly opt in to information collection, with the default being no collection, a system exposes less information, which limits the loss if a central data store is breached, even if the method of compromise is currently unforeseen. Such a design may conflict with the goals of other stakeholders in a solution, such as a business unit that seeks to monetize data collected from the system. However, being aware of the risks and metrics, as well as the externalities engendered by those risks, allows for an informed discussion that incentivizes security by design before a system is deployed [Bliss et al., 2020].
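
To make the opt-in default concrete, here is a minimal Python sketch of what “private by default” consent handling could look like in application code. It is an illustration only: the names (UserConsent, record_event) and the in-memory event store are hypothetical, not drawn from any particular system discussed above.

```python
# A minimal sketch of "private by default" consent handling.
# All names here (UserConsent, record_event) are hypothetical and for
# illustration only; they are not drawn from any specific product.

from dataclasses import dataclass
from datetime import datetime, timezone
from typing import Optional


@dataclass
class UserConsent:
    """Consent record for one user; data collection defaults to OFF."""
    user_id: str
    analytics_opt_in: bool = False            # safe default: opted out
    opt_in_timestamp: Optional[datetime] = None

    def grant(self) -> None:
        """Explicit, user-initiated opt-in; never called automatically."""
        self.analytics_opt_in = True
        self.opt_in_timestamp = datetime.now(timezone.utc)


def record_event(consent: UserConsent, event_name: str, store: list) -> None:
    """Record an analytics event only if the user has opted in, keeping
    only coarse, purpose-limited fields (no identifiers or raw payloads)."""
    if not consent.analytics_opt_in:
        return  # default path: nothing is collected or retained
    store.append({
        "event": event_name,
        "when": datetime.now(timezone.utc).isoformat(),
        # user_id, IP address, device details, etc. are deliberately omitted
    })


if __name__ == "__main__":
    events = []
    consent = UserConsent(user_id="u123")
    record_event(consent, "page_view", events)   # dropped: user has not opted in
    consent.grant()                              # explicit user action
    record_event(consent, "page_view", events)   # recorded, minimally
    print(events)                                # only the opted-in event appears
```

Because the default path stores nothing, a breach of the event store can expose only what users explicitly chose to share, regardless of how the compromise eventually occurs.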

We are facing significant questions about risk in the development and deployment of AI systems. For example, generative AI has the potential to enable entirely new fields and fundamentally change the face of industry, but these systems also change the threat landscape, providing both attackers and defenders with new capabilities [Barrett et al., 2023]. There has been some discussion of installing guardrails around AI systems while we come to understand the threat surface they present, but others have proposed charging forward, unwilling to give up the strategic value of being first past the post with technological advances that could change industry or even society as a whole. We must ask ourselves: do we want large AI systems to be fixed after the fact, or should we put in the time and effort to ensure that secure and private designs are in place to mitigate future concerns, especially given the potential for catastrophic risk from these systems [Hendrycks et al., 2023]? The calculus is not always easy, given the effort involved in designing a system for safety from the start. The key difference, of course, is that a system that merely fixes previous vulnerabilities will still be susceptible to new vulnerabilities in the future, while designing for safety can make a system resilient to threats that have not yet emerged.

The choices that we make now can have implications for every aspect of society in the future.

 

Citations

J. P. Anderson, “Computer Security Technology Planning Study.” Air Force Electronic Systems Division Technical Report ESD-TR-73-51, Volume 1, October 1972.

C. Barrett, B. Boyd, E. Bursztein, N. Carlini, B. Chen, J. Choi, A. R. Chowdhury, et al. “Identifying and Mitigating the Security Risks of Generative AI.” Foundations and Trends® in Privacy and Security 6 (1): 1–52, 2023. https://doi.org/10.1561/3300000041.

N. Bliss, L. Gordon, D. Lopresti, F. Schneider, and S. Venkatasubramanian. “A Research Ecosystem for Secure Computing.” 2020. https://cra.org/ccc/resources/ccc-led-whitepapers/#2020-quadrennial-papers

D. Hendrycks, M. Mazeika, and T. Woodside. “An Overview of Catastrophic AI Risks.” arXiv preprint arXiv:2306.12001, 2023.

G. McGraw, “Testing for Security during Development: Why We Should Scrap Penetrate-and-Patch.” IEEE Aerospace and Electronic Systems Magazine 13 (4): 13–15, April 1998. https://doi.org/10.1109/62.666831.

Neumann, “.NET Framework 1.1 Servicing Releases on Windows Updates for 64-bit Systems.” Microsoft TechNet Blogs, March 2006.

G. Tassey, “The Economic Impacts of Inadequate Infrastructure for Software Testing.” Technical Report, NIST, 2002.

M. Zuckerberg, “Founder’s Letter.” Facebook, 2012.
