Narrating the Material Turn


Jacob Gaboury is a digital historian and Assistant Professor of Film & Media Studies at the University of California, Berkeley. His published writings include articles on new media, digital materiality, and the early history of computer graphics, as well as a series on queer histories of computing for Rhizome. He received his PhD in Media, Culture, and Communication at NYU and was recently a Postdoctoral Fellow at the Max Planck Institute for the History of Science. 

 
 

Primary Materials: You describe your current book project Image Objects as a material history of early computer graphics in the United States. Can you tell us a bit about it? What does an attention to technical objects and material culture help us understand about the development of computing?

JG: Image Objects is about the origins of computer graphics as a cultural and technical practice. It tells a history that is much earlier than I think most people realize, reaching back to the immediate postwar period and formalizing over the course of the 1960s and early 1970s. While many of us may think of computer graphics as a relatively recent phenomenon, in fact it was a fundamental part of the development of computer science and the computing industry from the very beginning. In examining this pre-history of computer graphics the book argues that it is through graphics that the computer was transformed from a tool for mathematical calculation into an interactive medium as we know it today.

Methodologically, the book is deeply invested in the question of digital materiality. I often describe the project as object-oriented, in that it is structured around a series of technical objects that were developed in this early period but continue to structure and inform the way we produce computer graphics and digital images today. This focus on objects in turn reflects one of the broad theoretical claims the book makes: that graphics mark the moment at which computer science begins to make an ontological claim as to the nature of objects and their simulation. In examining these technical objects as material forms whose shape and history can be felt into the present, my goal is not to refuse or ignore the importance of social or cultural history by privileging a technologically deterministic methodology, but rather to use these objects as a means of reflecting on a broad set of concerns that have been historically inscribed into the medium of graphics itself: visibility, memory, simulation, textuality, etc. The goal of this approach, as you mention, is to offer a material history of this quintessentially immaterial object: the simulated image.

Of course this materialist approach is in direct contrast to the way computer graphics, and indeed computing as a whole, has been historically understood. This is due in large part to popular discourses surrounding virtuality and computer simulation beginning in the 1980s and 1990s, as well as a perceived distinction between the indexical quality of analog media such as photography or audio recording, and the supposed evacuation of that indexical trace brought about by digital technologies. Significantly, this rise in visibility for the field of computing - and for computer graphics as its most visible articulation - corresponded with the rise of postmodernism in the American academy and its corresponding refusal of materialism in favor of more explicitly cultural, relativist, and reception-based theories of media. Likewise, during this period the computer became wrapped up in a utopian discourse that saw digital technologies as a means for producing greater connectivity, communication, and expression. The past ten years have seen a dramatic turn away from these preoccupations, and a reassertion of the primacy of materiality across a wide range of disciplines. Yet digital imaging and computer graphics remain tied to this notion of immateriality, to the idea that graphics obscure the true functionality of the computer, and to critiques of what Nick Montfort has called “screen essentialism” – mistaking the visual appearance of a computer for its material function. This book asks that we begin to consider simulated images as material in their own right, as having their own material history, and as materially shaping the world we live in.

 

PM: How did you arrive at this subject? Was there existing work on material culture and the digital that inspired you? Have other conversations been useful to you?

JG: It’s a cliché, but I really arrived at this subject through the archive. I assumed, as I think many people would, that the history of computer graphics was well established. And indeed a number of wonderful histories have been written on early experiments in computer art, beginning in the immediate postwar era. But the technical history of computer graphics, that is, the history of graphics as a sub-discipline of computer science, is almost entirely unwritten. Image Objects focuses in large part on the computer science department at the University of Utah in Salt Lake City, which was the first and arguably most important research center for the development of computer graphics in the United States. In the roughly fifteen-year period from 1966 to 1980 the graphics program at Utah was responsible for developing almost all the fundamental concepts that structure contemporary graphics, and a large number of prominent researchers and entrepreneurs in the contemporary computing industry got their start as graduate students in Salt Lake City, including the founders of Adobe, Pixar, Netscape, Atari, WordPerfect, and Silicon Graphics. So when I learned that the history of this program was largely unknown, and that the archives of its founder, David C. Evans, had yet to be engaged by historians of computing, I dove in. After months in the Evans archive, as well as secondary archives in Silicon Valley and Washington D.C., the book began to emerge as a kind of prehistory of computer graphics that foregrounded the material objects that made these images possible.

As you can imagine, this materialist methodology was inspired by a great deal of existing scholarship across media studies and the history of science. Perhaps most prominent is a somewhat disparate group of scholars often gathered in the American academy under the term “German Media Theory.” This includes scholars from Claus Pias and Christina Vagt to Wolfgang Ernst and Bernhard Siegert, many of whom worked with and emerged from the school of media founded by the late Friedrich Kittler. Likewise in North America there has been a great deal of interest in media materiality over the past decade, from a renewed attention to the legacy of Marshall McLuhan to the work of scholars such as Matthew Kirschenbaum, Matthew Fuller, Lisa Gitelman, Lev Manovich, Ian Bogost, and Alexander Galloway. The challenge, as always, has been in finding a bridge between a rich understanding of the technical materiality of a given medium and the ways in which it both shapes and is shaped by the social and political conditions of its production.

 
 

PM: You recently gave a colloquium here at the Max Planck in which you narrated the history of early computer graphics through a unique object—a teapot. Can you tell us a bit about the Utah teapot?

JG: The Utah teapot is a fascinating object, as it offers a lens through which we can better understand how computer graphics articulates and standardizes the object world. It is by far the most famous graphical standard, sort of the lorem ipsum of the graphical world, meant to stand in as a good-enough approximation of any given object when researchers are testing a new algorithm for lighting, reflectance, texture, etc. For that reason it is likely the single most rendered object in the history of graphics. The original teapot on which the standard was modeled was purchased by Sandra Newell in 1974 at a Mormon department store in Salt Lake City, brought to the lab by her husband Martin Newell, and digitized for research later that same year. By tracing the teapot as it has moved and transformed over the past forty years I try to trace the spread and development of the field as a whole, its interests and concerns. The very fact that the teapot is, to this day, a readily available standard form included as a sort of readymade in the vast majority of graphical software demonstrates the incredible longevity of this early research. While every year graphics seems to inch closer and closer to a kind of simulated realism, many of the algorithms and equations that structure computer graphics remain – like the teapot – unchanged over the fifty-year history of the discipline.

 

PM: You track the digital teapot as it variously serves as model, icon, array, and standard. What do cases like the Utah teapot reveal about the status of the object in the digital age?

JG: The teapot is fascinating because the thing it is meant to standardize is precisely “objectness” itself. Many scholars have written about media standards in the past, perhaps most notably Matthew Fuller in his Media Ecologies and Jonathan Sterne in MP3: The Meaning of a Format. What makes the teapot unique is the way it offers us a glimpse into a much larger standardization that computer graphics enacts over the object world, in which objects become geometries onto which various properties – color, lighting, texture, shade, etc. – can be applied, and which can be imbued with various qualities that affect their interaction with the world around them. This moves us beyond the standardization of a particular medium to suit the tastes, preferences, perceptions, and context of its intended use, and to a kind of digital ontology in which all objects become alike in their standardized form. One of the principal arguments of Image Objects is that this computational ontology is developed in large part through graphics but becomes diffuse throughout computer science in the second half of the twentieth century, and has since leaked into our contemporary understanding of the object world more generally.

 
 

PM: It was interesting to see the separation of concerns of form from those of material qualities such as texture, color, pattern, etc. as the teapot is translated into a digital space. This brought to mind recent critiques of the privileging of human designs over the "secondary," "inert" qualities of matter by scholars like Tim Ingold and Webb Keane. Do you think digital objects reproduce our age-old ideas about the material? Or might they be a means of decentering our inherited epistemologies of the object?

JG: The historical privileging of the human in discussions around materiality and ontology is certainly a hotly debated topic across multiple disciplines. My use of the term “object oriented” often provokes questions around object-oriented ontology and the so-called realist branch of new materialism popularized by scholars such as Graham Harman, though many theorists have argued for a de-centering of the human in our consideration of the materiality of the world. What is so interesting about computer graphics, and indeed computational ontology as a whole, is its wholesale refusal of that expansive gesture. For computer graphics the world is entirely limited to that which may be made legible as an object for simulation, as simulation requires an epistemological claim over how an object works or is perceived to work. This final point, that of human perception, is crucial here. Computer graphics as a rule is not interested in the complete simulation of the world as it truly is – an impossible task, at any rate – but rather the simulation of the world as it is perceived and functions for us, for the human. In this sense the realism that computer graphics offers is deep, but very narrow, in that only those parts of the world that are readily legible and accessible to us are made subject to simulation. It offers, in this sense, an enframing of the world, a Gestell, to use Heidegger’s term. It is a deeply troubling ontology, and it is for this reason that my own materialist method foregrounds the cultural, social, and aesthetic dimension of its history, aligning itself less with Harman and other realist philosophers than with the tradition of feminist materialism, in which we understand the agency of nonhuman actors and systems in the deeply political realm of the human.

 
 

PM: Contemporary discourse is saturated with reflections on the gulf between the "real" and "digital" worlds. How do you think we should understand this divide? Is it a divide? Is materiality a useful thread for thinking through this distinction?

JG: Simply put, there is no distinction. Or perhaps it would be better to say that neither exists in isolation from the other. I am particularly drawn to contemporary scholarship in art history that suggests we are living in a post-digital moment, not in the sense that the digital is over, but in the way that we might speak of the post-colonial – a coming-after that is nonetheless shaped by the legacy, infrastructure, and politics of the digital. A post-digital moment is one that is saturated with the digital, in which it informs almost all aspects of modern life regardless of whether it explicitly engages with computers, the internet, or digital technology more broadly. The digital, as both a technical and conceptual framework, has come to influence us on every level, and if we hope to understand these changes, where they come from, and what kinds of politics they enact, we must look first to the history of these technologies.

Published: 9-25-2017

Preferred citation: "Interview with Jacob Gaboury," Primary Materials (2017), eds. T. Asmussen, M. Buning, R. Kett, and J. Remond, www.primarymaterials.org.

 