A Machine-Learning Approach to the Study of Technological Effects on Temporal Perspectives
Technofuturology is a research initiative concerned with employing recent computational technologies (natural language processing and machine learning) to render empirically tangible some much-debated speculations about the influence of technoscientific paradigms and media environments on temporal perspectives and attitudes.
The Anthropocene is more than a marker of humanity’s inscription into geological time. It has also been said to correspond to a crisis of the imagination. A defeatist, pessimistic attitude, characteristic of Western culture after what Fukuyama called the end of history, has infected politics from the inside. It is now commonplace for theorists to lament the age’s incapacity to imagine the future: popular critics like Fredric Jameson (2003), Slavoj Žižek (2010), and Mark Fisher (2009), for instance, have suggested that it is now easier to imagine human extinction than to imagine life beyond capitalism. The urgency of dealing with the looming threat of climate catastrophe, the realpolitik that grinds risk societies into gridlock, the rising prevalence of preemptive military action, the increasing speeds of information flows, and the adoption of new technologies that favor social disengagement have all potentially contributed to an increasing difficulty in imagining the future beyond the contemporary paradigm.
At the same time, the contemporary age has been characterized as being in a state of constant flux, conditioned by the “ever-increasing flow of technological innovations” (Felt 2015). Forty years ago, Paul Virilio (2006) went so far as to present the thesis that speed itself was the primary force behind the formation of society’s structures. Jeremy Rifkin (1987) defended slower traditional modes of production against the modern ideals of speed and efficiency, which he deemed unsustainable. Social scientists like Barbara Adam (2005) and Judy Wajcman (2014) critically assess the increasing frequency at which life events and technological innovations take place in modern societies, and Hartmut Rosa (2013) suggests that though acceleration was always a constitutive part of modernity, it has crossed a “critical threshold in ‘late modernity’ beyond which the demand for societal synchronization and social integration can no longer be met.”
Ulrike Felt’s (2018) account underscores the important entanglement between social and political participation with this modern imperative of “compressing time” (Rifkin 1987) and the epistemological logic of what Arjun Appadurai (2013) calls “trajectorism”. Trajectorism presupposes that time is causally closed and pre-determined, a logic of inevitability echoing Hegel’s and Marx’s respective views on history, de Tocqueville’s view of time as driven toward democratic organizations—a “providential fact” as he called it—and the social Darwinist’s self-congratulatory assumption that evolution necessarily implies progress to more refined and intelligent forms of life. Importantly, this trajectorist logic infects most contemporary accounts of technological progress. The self-styled “rational optimist” Matt Ridley (2010), for instance, argues that, on the whole, “life is getting better”; even though each generation offers dire assessments of the future, in fact populations keep growing, life expectancy keeps rising and general prosperity continues to spread across the globe as a result of accelerating innovation. As I discussed in “Techno-Optimism and Rational Superstition” (Wilson 2017), what is most interesting here is that both techno-optimists and techno-pessimists seem to converge in their embracing of this trajectorist logic: both sides endorse the implicit belief that technology is a unidirectional force, an arrow of time that we can either resist or embrace.
In my view, however, what is too often insufficiently addressed in diagnoses of late modern social temporalities is how our technological environments condition our experience of time. Therefore, in parallel to Felt’s (2015) compelling plea for considering the various “temporal choreographies” at play in society and their heterogeneous effects on technoscientific development, I believe the reverse influence also deserves a thorough assessment: how technoscientific environments modulate those very temporal choreographies, by enabling different modes of retention and anticipation.
Indeed, among the primary tenets of contemporary philosophy of technology is the notion that technological environments condition our most intimate temporal experiences. The thesis of originary technicity, with roots in the work of Martin Heidegger, André Leroi-Gourhan, Jacques Derrida, and greatly developed by the French philosopher of technics Bernard Stiegler (1998a, 1998b, 2001), implies that we should not understand technologies as mere instruments in the service of human wills and desires, but rather as part of the conditioning principles that constitute us as human beings, along with our social habitus, our physiological structures, and the modes of cognition they enable. We know, for instance, that our early ancestors lived in a technological environment that was relatively stable, changing little over two million years from the Oldowan to the Aurignacian, during which several distinct species of hominid evolved and went extinct without incurring any substantial change to the industrial production of stone tools. In a sense, it is not we who have created tools, but our tools that have created us; since the beginning, humans have evolved with and through technologies. The thesis of originary technicity hence suggests that technologies and technical environments continue to participate at every step in our constitution as cognitive and social agents, including our intimate temporal experience, our horizons of expectation and attitudes toward the future.
These questions also intersect with media studies, and in particular the tradition known as “media archeology”, which is concerned with exposing the sedimented layers of media that have successively participated in the constitution of various cultural zeitgeists. Theorists in this tradition, from Kittler (1999) and Zielinski (2011) to Ernst (2013) and Parikka (2012), have investigated how different media instantiate different forms of technical memory. One familiar case explored by Ernst (2013) is the primary distinction between analog and digital forms of archiving: analog media deteriorate with time and through each reproduction, while temporal phenomena encoded into the digital medium ironically obtain an essential timelessness, in their capacity to be copied indefinitely with perfect fidelity. Ernst furthermore contributes the useful concept of the “delayed present” (Ernst 2017). Echoing Aristotle’s claim that time is a “measure” of change, Ernst notes that what we today refer to as “real time”, insofar as it is always mediated through various forms of media (which, Kittler insisted, are always forms of measurement), is necessarily delayed: the present is in some sense asynchronous with itself, because by the time the measurement has taken place, the present has always already lapsed into the past. The increasing sampling frequencies and transmission speeds of contemporary algorithmic technologies can therefore be understood in terms of the information they increasingly compress into a correspondingly dilating present, conditioning, I would suggest, the intensifying contemporaneity of our “post-historical” (Flusser 2013) Western culture.
COMPUTATIONAL FUTUROLOGY: A GENERAL OVERVIEW
My hypothesis is that if the theses of “originary technicity” and the technologically “delayed present” are correct, and changes to technological and media environments do modulate human experiences, temporal attitudes, and modes of relating to the world, then such modulations should leave implicit traces in the historical archive. When shifting attitudes toward reality spread through populations as the result of the adoption of new technologies, there should at least be subtle correlative shifts in the semantic patterns that get recorded into the archives.
The advancement of digital tools for processing natural language in increasingly meaningful ways now allows us to turn the lens of technology toward itself. It is now becoming possible to go beyond merely anecdotal accounts of how technical and scientific paradigms modulate the human experience of history, time, and reality, and gain insight into the broader tectonics at play whenever new technologies, scientific paradigms, and cultural practices are implemented in society. The task becomes one of asking the right questions, of translating these questions into machine-interpretable metrics, conducting data-mining and topic modeling experiments on relevant text corpora, and comparing the statistical findings to records of the geo-historical adoption of various technologies and social practices. Thus my current research employs custom computational tools tailored to making some traditional areas of speculation in STS empirically tangible. The project enlists recent advancements in machine learning in the practice of “distant reading” (Moretti 2005) and the analysis of vast archives of text from a variety of sources (depending on the given experiment). Essentially, the method involves training algorithms to model, identify, track, and visualize relevant characteristics in natural language use.
I am particularly interested in language that indicates temporal attitudes: speculations about the future, doubts about the past, evaluations of possibilities, various forms of synchronization and routinization, suggestions of cyclical vs. historical or trajectorist views on time, evocations of the time-scales being considered, etc. The idea is thus to use computational techniques to produce an overall picture of how temporal attitudes are evolving in specific historical and geographical (cities, regions, or nation states) contexts, and to cross-reference this with language-use indicating the evolving influence of specific technologies.
To some extent, my proposed approach to these questions shares characteristics with what is called “sentiment analysis”, widely used in business and economics to evaluate how populations “feel” about various products, services, and brands. In many cases, sentiment analysis begins with training an algorithm on unstructured text that is accompanied by metadata and numerical values. For instance, one can train a model to identify positive, neutral, and negative sentiment based on movie reviews from IMDB.com, which are always accompanied by a grading of the film between 1 and 10 “stars”. This numerical value is used as a “ground truth” according to which the model is trained: by analyzing many text reviews and correlating them with their numerical values, the machine learning algorithm can compute a model of what kind of language is used in each case. These models can then be tested on further reviews, to see whether they can guess how many stars a reviewer may have given a movie, based only on the text provided.
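To make this procedure concrete, the supervised setup can be sketched in a few lines of Python. The miniature naive Bayes classifier and toy “reviews” below are purely illustrative stand-ins for a real corpus and a production model; only the logic (numerical star ratings serving as ground truth for training on unstructured text) reflects the method described above:

```python
import math
from collections import Counter, defaultdict

# Toy reviews paired with star ratings standing in for the IMDB-style
# ground truth (illustrative data; a real experiment needs thousands).
train = [
    ("a wonderful moving film with brilliant acting", 9),
    ("utterly boring and a waste of two hours", 2),
    ("great story great cast loved every minute", 10),
    ("terrible script dull characters awful pacing", 1),
]

def label(stars):
    # Binarize the numerical ground truth: 6 stars or more counts as positive.
    return "pos" if stars >= 6 else "neg"

# "Training": count word frequencies per sentiment class (naive Bayes).
word_counts = defaultdict(Counter)
class_counts = Counter()
for text, stars in train:
    c = label(stars)
    class_counts[c] += 1
    word_counts[c].update(text.split())

vocab = {w for counts in word_counts.values() for w in counts}

def predict(text):
    # Score each class by log prior + smoothed log likelihood of each word.
    scores = {}
    for c in class_counts:
        total = sum(word_counts[c].values())
        score = math.log(class_counts[c] / sum(class_counts.values()))
        for w in text.split():
            score += math.log((word_counts[c][w] + 1) / (total + len(vocab)))
        scores[c] = score
    return max(scores, key=scores.get)

print(predict("brilliant moving story"))  # -> pos
print(predict("boring awful waste"))      # -> neg
```

The same scheme scales to the research setting in principle: any text accompanied by a graded judgment can serve as training material for a model of the language associated with each grade.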
In the context of my research, however, no “ground truth” is available. There are no readily available archives of people expressing and assigning a numerical value to their attitudes toward the future. This is where unsupervised topic modeling can be of use. Unsupervised topic modeling involves letting the algorithm try to figure out what topics are discussed in a text on its own, without supervision. Several different schemes have been developed for doing this, the most conventional being latent semantic analysis (LSA) and latent Dirichlet allocation (LDA), which are widely used in text mining, computer vision, and computational biology applications. These methods represent the entities under consideration as vectors: LSA through the matrix factorization (singular value decomposition) of a term-document matrix, and LDA by inferring probabilistic topic distributions through hierarchical Bayesian analysis. In natural language processing applications, text entities (words, collocations, regular expressions) are evaluated based on their changing frequency and co-occurrence within a given corpus. By analyzing these evolving relations between text entities, different topics are defined as different distributions of probabilistically clustered entities: “bags of words” as they are sometimes called. Many of these topic-modeling tools have now been packaged as open source libraries which can be accessed by computational researchers and adapted to custom applications.
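As an illustration of how such a “bag of words” model is computed, the following is a minimal sketch of collapsed Gibbs sampling for LDA, written in plain Python for transparency. In practice one would use one of the open source libraries mentioned above (e.g. gensim or scikit-learn); the corpus, topic count, and hyperparameters here are toy values chosen only to make the mechanics visible:

```python
import random
from collections import defaultdict

random.seed(0)

# A toy corpus; each document is a list of tokens.
docs = [
    "the future of technology and machine learning".split(),
    "algorithms and data shape technology".split(),
    "hope and fear about the future".split(),
    "fear of what the future holds".split(),
]

K = 2                 # number of topics to infer
ALPHA, BETA = 0.1, 0.01  # Dirichlet priors (document-topic, topic-word)
vocab = sorted({w for d in docs for w in d})
V = len(vocab)

# z[d][i]: topic currently assigned to the i-th word of document d.
z = [[random.randrange(K) for _ in d] for d in docs]
doc_topic = [[0] * K for _ in docs]                 # n(d, k)
topic_word = [defaultdict(int) for _ in range(K)]   # n(k, w)
topic_total = [0] * K
for d, doc in enumerate(docs):
    for i, w in enumerate(doc):
        k = z[d][i]
        doc_topic[d][k] += 1
        topic_word[k][w] += 1
        topic_total[k] += 1

for _ in range(200):  # collapsed Gibbs sampling sweeps
    for d, doc in enumerate(docs):
        for i, w in enumerate(doc):
            k = z[d][i]
            # Remove the current assignment...
            doc_topic[d][k] -= 1; topic_word[k][w] -= 1; topic_total[k] -= 1
            # ...and resample the topic from its conditional posterior.
            weights = [
                (doc_topic[d][t] + ALPHA)
                * (topic_word[t][w] + BETA) / (topic_total[t] + V * BETA)
                for t in range(K)
            ]
            k = random.choices(range(K), weights=weights)[0]
            z[d][i] = k
            doc_topic[d][k] += 1; topic_word[k][w] += 1; topic_total[k] += 1

# Each topic is now a "bag of words": a distribution over the vocabulary.
for t in range(K):
    top = sorted(topic_word[t], key=topic_word[t].get, reverse=True)[:3]
    print(f"topic {t}: {top}")
```

On a corpus this small the resulting clusters are unstable, which is precisely why the qualitative evaluation step described below is indispensable.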
In my proposed methodology, after the program has modeled what it finds to be the topics of a corpus, the next step is for a human researcher to step in and assess the models. Once generated, I will qualitatively evaluate the models, keeping only those that correspond to the metrics I am concerned with (as described below). Once they are identified, I can fine-tune these (unsupervised) preliminary models by conducting further (supervised) iterations of the process. This involves using the models to search for and identify new text locations which correspond to them, and then qualitatively assessing the results (grading them) to further train the algorithms and further specify the models. This can be iterated several times until we have a robust model for each of the metrics.
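The iterative refinement loop just described can be schematized as follows. The sketch is deliberately reduced: the “model” is just a keyword set, and the human grader is simulated by a fixed function, so every name here is illustrative rather than a real API; only the search-grade-update cycle reflects the proposed methodology:

```python
def find_matches(keywords, corpus):
    """Use the current model to locate candidate text passages."""
    return [doc for doc in corpus if keywords & set(doc.split())]

def refine(keywords, corpus, grade, rounds=3):
    """Iterate: search, qualitatively grade, fold accepted vocabulary back in."""
    keywords = set(keywords)
    for _ in range(rounds):
        for doc in find_matches(keywords, corpus):
            if grade(doc):                    # human qualitative assessment
                keywords |= set(doc.split())  # supervised update of the model
    return keywords

corpus = [
    "the future will bring new machines",
    "tomorrow we will see change",
    "yesterday the weather was mild",
]
# Simulated grader: accept passages that talk about what is to come.
grade = lambda doc: "will" in doc.split()
model = refine({"future"}, corpus, grade)
```

After a few rounds the model has absorbed “tomorrow” (reached via graded intermediate matches) while the passage about yesterday’s weather, though eventually retrieved, is rejected by the grader and contributes nothing.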
My primary target with this research is to test the theoretical assumption according to which the West has increasingly become unable to “imagine” a future. Are we really warranted in making this claim? In order to be able to evaluate this question through empirical evidence, we must break it down into more manageable parts. We can start with a related question: can we really see a decline of future-talk in contemporary society, or are people actually increasingly expressing concern about what the future holds? This leads us to ask: what are the matters of concern? The models ultimately need to be trained to distinguish language that is about the future in general from language expressing concern over someone’s upcoming mortgage payment or what someone expects to have for dinner. We can characterize this as a measure of the subjectivity of future-talk. We can then ask: is the future-talk expressing concern with short-term issues or with the distant future? In other words, are the time-scales being discussed increasing, decreasing, or remaining stable? Furthermore, does the language used express belief that the future is “open” and filled with possibilities, or rather that the future is predetermined, inevitable, and causally “closed”? The variability between these two poles can loosely be interpreted as a measure of trajectorist thought.
Thus, in sum, I propose to turn these questions into metrics for the qualitative evaluation of unsupervised topic models. Once robust candidate models have been generated, selected, and fine-tuned through further training, they can then be used in the evaluation of changing futurological trends.
1. Frequency of future-talk
2. Subjectivity: future-talk about the future in general vs. personal, mundane concerns
3. Time scale: short-term vs. long-term projection
4. Possibilities spectrum: open future (many possibilities) vs. closed future (few possibilities, i.e., “trajectorism”)
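As a first approximation of how such metrics could be made machine-interpretable, one can imagine simple lexicon-based baselines of the following kind. The marker lists are hypothetical placeholders of my own devising; in the actual methodology these scores would come from trained and fine-tuned topic models, not hand-coded patterns:

```python
import re

# Hypothetical marker lexicons (illustrative only).
FUTURE_MARKERS = re.compile(r"\b(will|shall|going to|someday|in \d+ years?)\b", re.I)
TRAJECTORIST = re.compile(r"\b(inevitabl\w*|destin\w*|predetermin\w*)\b", re.I)
OPEN_FUTURE = re.compile(r"\b(might|could|perhaps|possibl\w*)\b", re.I)

def future_talk_frequency(text):
    """Metric 1: future markers per 100 words."""
    words = len(text.split())
    return 100 * len(FUTURE_MARKERS.findall(text)) / max(words, 1)

def trajectorism_score(text):
    """Metric 4, as a rough ratio of closed-future to open-future language."""
    closed = len(TRAJECTORIST.findall(text))
    open_ = len(OPEN_FUTURE.findall(text))
    total = closed + open_
    return closed / total if total else 0.5  # 1.0 = fully "closed"/trajectorist

sample = "Progress will inevitably continue; perhaps we could resist."
print(future_talk_frequency(sample))  # 12.5 markers per 100 words
print(trajectorism_score(sample))     # 1 closed cue against 2 open cues
```

Even this crude baseline shows how a single passage can register on several metrics at once, which is what allows the cross-referencing described below.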
EVALUATING TECHNOLOGICAL INFLUENCE
Once working metrics have been established for inferring futurological perspectives in text archives, we can begin to cross-reference our observations with the known geohistorical record of technoscientific paradigm change. But my goal is to go beyond the mere description of a given social habitus in terms of forms of future-thinking and their correlative modes of discourse: can we not also discover evolving correlations between futurological attitudes and functionally entrenched practices? Though establishing causal influence will of course be difficult, it may nevertheless be possible to provide a cartography of those correlations; in other words, to resolve a “map” of the co-evolving relations between temporal attitudes and technoscientific implementations.
To get closer to this goal, we can again enlist the same machine learning methods, and build topic models that help us identify language use indicating various relations to technological environments or scientific paradigms. For instance, we can devise a metric that allows us to ask: do the semantics of a text indicate that a given technology has been “adopted” or “embraced”, or do they rather express resistance to, suspicion of, or ignorance of the technology? Call this the adoption metric. Once this adoption metric has been established, we can potentially go beyond the mere cross-referencing of our future-talk models to known geohistorical records of technological adoption, and map the intrinsic relations within text corpora themselves between attitudes toward given technologies and the temporal attitudes suggested by future-talk analysis. If this works to any degree of precision, which remains to be seen, we might even succeed in finding real evidence of stable attractors between the degrees of adoption of a technology and some set of correlative futurological attitudes.
Technological Influence Metrics
1. Frequency of reference to the given technological entity (or scientific paradigm, depending on the experiment)
2. Subjectivity: How subjective/objective is the reference to the technology?
3. Adoption metric: Does the language-use express that the technology in question has been adopted or entrenched?
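A sketch of how the adoption metric might be cross-referenced with a future-talk metric follows. The lexicons are again hypothetical placeholders for trained models, and the per-decade series are invented numbers standing in for real corpus measurements; the point is only the shape of the analysis (score texts on both axes, then correlate the resulting series):

```python
from statistics import mean

# Toy cue lexicons standing in for a trained adoption-metric model (metric 3).
ADOPTED = {"use", "rely", "everyday", "indispensable", "adopted"}
RESISTED = {"refuse", "suspicious", "dangerous", "ban", "resist"}

def adoption_score(text):
    """+1 per adoption cue, -1 per resistance cue, normalized to [-1, 1]."""
    words = text.lower().split()
    raw = sum(w in ADOPTED for w in words) - sum(w in RESISTED for w in words)
    cues = sum(w in ADOPTED or w in RESISTED for w in words)
    return raw / cues if cues else 0.0

def pearson(xs, ys):
    """Pearson correlation between two equal-length series."""
    mx, my = mean(xs), mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs) ** 0.5
    vy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (vx * vy)

# Invented per-decade series: adoption of a technology vs. future-talk frequency.
adoption = [0.1, 0.3, 0.6, 0.9]
future_talk = [4.0, 3.1, 2.5, 1.8]
print(round(pearson(adoption, future_talk), 2))  # strongly negative here
```

A stable, strongly signed correlation of this kind, recurring across corpora and periods, is what the “attractor” language above would amount to empirically; correlation, of course, still falls well short of the causal story.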
MAPPING THE TECHNO-TRANSCENDENTAL: SOME PHILOSOPHICAL CONSIDERATIONS
Emerging digital methods may in fact allow us to map out what could be referred to as the techno-transcendental, that is, the relative a priori conditioning of cognition and subjectivity by technological environments, scientific and religious paradigms, and social practices as they evolve in history. However, it remains the case that, as Michel Bitbol (2010) argues, all the tools we use, including those with which we investigate the conditions of cognition, come with their own implicit a prioris. This means that such digital approaches cannot, even in principle, achieve absolute objectivity. They merely allow us to expand the scope and rigor of our reflections and broaden the horizon of critique. Because these observations are of central importance, it is crucial to maintain a self-reflective and critical engagement with the tools we are ourselves using.
My methodology therefore stands against the recent trend to read the implementation of such digital methods in the humanities and social sciences as inaugurating a “post-theoretical age” (Cohen, 2010). As Frabetti (2011) argues, the self-reflective or reflexive approach means recognizing the originary technicity of the human, taking into account the fact that humans have always evolved through technology and that there is therefore no way, through technological means, to “step out” of our biased relation to the world. This implies that the various digital methods we employ to gather data are themselves to be scrutinized self-reflexively within the research process. Theory, speculation, and critical assessment therefore simply cannot be rendered obsolete through the collection of identified patterns and heuristics in large data sets. These new horizons of empirical assessment only deepen the necessity for philosophical or critical contextualization of our methods. Thus at each step of the process, it is crucial to “render[…] our methodological decisions explicit and integrat[e] them into the overall epistemological outlook of our projects, rather than bury […] them under layers of impressive evidence” (Rieder and Röhle, in Berry 2012).
As human beings, we are immersed in experience. We are irreducibly perspectival, asymmetrical, and oriented within the broader structures of reality. In a tradition inaugurated with Kant’s critiques, philosophy has been concerned with describing the conditions of this subjective experience. There is a transcendental background to all thought, and each event of cognition is necessarily enabled by processes that are eclipsed by that very cognitive experience. As I explored at length in my first monograph, Aesthesis and Perceptronium: On the Entanglement of Organism and World (forthcoming, University of Minnesota Press), it has been understood since Solomon Maimon’s astute criticism of the Kantian project, followed by the work of Schelling and Hegel, that part of what was missing from the Kantian picture was an account of the genesis of the transcendental structures that constitute us: how is it that asymmetrical and incomplete perspectives upon the world emerge from the fluxes of the real? In an effort to naturalize the genesis of the transcendental subject, I have offered a speculative account of the nested processes and constraints that may participate in their constitution; from the physical and material constraints of cosmology, to the organic structures of biology and genetic evolution, all the way up to the socio-technical systems that propagate through human culture. There is perhaps an infinite regress of contextual frames that constrain the possibilities of contemporary human cognition. What we experience, how we cognize, is always conditioned by a deep structure of nested transcendental constraints, and these constraints evolve in time: they have a history. But until recently, it has been exceedingly difficult to gain any insight on these transcendental conditions of subjectivity, much less track their evolution.
Echoing Hegel, one way to understand the telos of science and philosophy in this setting is to see knowledge as a process of progressively pushing back the boundaries of our cognitive blind spots and horizons, to integrate more and more seemingly incompatible perspectives upon the transcendental complex, thereby increasing our degrees of freedom within it. The nested biological structures and evolutionary constraints that cooperate in the production of each waking human experience, themselves arise from the contingent integration of prior independent perspectival processes, each new emergent whole sublating and supervening on the previous ones. As we study any aspect of science—whether it be the evolutionarily forwarded psychological biases that skew our evaluations and intentions, or the strange behaviors of quantum physics that defy our common sense of objectivity—we are indeed shedding light on some of the transcendental conditions of our subjective experience.
Of course, as a matter of principle, cognition will never transcend its perspectival predicament, since it is bounded by definition (Kant). But this is no reason for renouncing altogether the project of expanding our degrees of freedom within the structures and influences conditioning our experiences. It is a matter of operating strategically piloted transcendental variations, and integrating these seemingly incompatible perspectives on the transcendental background of thought into a broader, more inclusive framework. As Gabriel Catren (2018) puts it:
The fact that human experience is necessarily framed by a system of transcendental structures (physiological, technological, conceptual, linguistic, cultural, and so on and so forth) does not imply that we cannot try to modify, deform, or perturb these structures. Rather than gaining access to a hypothetical ‘great outdoors,’ these transcendental variations […] simply allow us—nothing more and nothing less—to absolve experience of any form of fixed transcendental framing. (Catren 2018)
Essentially, Catren argues that even though we cannot step out of the transcendental frame, we must nevertheless not resign ourselves to remaining in a fixed transcendental frame. If we do, we indeed fall prey to the same transcendental error that was Kant’s: to assume that our place within these conditioning factors is fixed and permanent. Gilles Deleuze sometimes defined philosophy as the project of avoiding stupidity, and I believe this definition applies quite well in this context. To avoid stupidity, or to avoid remaining “all too human” (Nietzsche), we must pursue the expansion of our phase spaces, by perturbing our frames and sublating seemingly incompatible perspectives into our “total science” (Quine). This goes hand in hand with uncovering the processes at work in the co-constitution of subjectivities and their correlated objectivities.
These same questions must be raised in the context of media and technology studies. As Richard Beardsworth notes, “the question arises as to whether it is possible to think something [technology, media] that is nothing less than the condition of thought itself.” (Beardsworth, 1998) Indeed, if the thesis of originary technicity is correct, we should assume that the very digital methods we enlist in the service of the exploration of the technological conditions of contemporary subjectivity, themselves participate in our very subjective constitution, as researchers, as philosophers, and so forth. But again, the goal cannot be that of naively trying to supersede all of our cognitive conditions, and attempting to gain a “view from nowhere” (Nagel 1986), but simply to urge the transcendental structures that constitute us to reveal as much as possible about our circumstances, even while remaining within the limits of the bounded cognitive domain.
This applies to the humanities and social sciences as much as to the “pure sciences”. Just as physicists probe the microphysical world for evidence of hidden symmetries in nature, I believe that our reflection upon culture can also benefit from similar empirical and statistical probings. Computation offers the study of culture, society, science, and technology its own telescopes and super colliders. If the right questions are asked, the methods of the digital humanities can participate in the process of pushing back the horizons of our transcendental frames, and of pursuing the task of integrating diverse perspectives on the world. In other words, these methods can be applied to our archival background in the service of an expanded understanding of the conditions of contemporary human subjectivity and its correlated objectivity.
REFERENCES
Adam, B. (2005). Timescapes of Modernity: The Environment and Invisible Hazards.
Appadurai, A. (2013). The Future as Cultural Fact: Essays on the Global Condition. London; New York: Verso Books.
Bailey, G. (2007). Time perspectives, palimpsests and the archaeology of time. Journal of Anthropological Archaeology, 26(2), 198–223. https://doi.org/10.1016/j.jaa.2006.08.002
Barad, K. (2014). Diffracting Diffraction: Cutting Together-Apart. Parallax, 20(3), 168–187. https://doi.org/10.1080/13534645.2014.927623
Baudrillard, J. (1994). Simulacra and Simulation. (S. F. Glaser, Trans.) (14th Printing edition). Ann Arbor: University of Michigan Press.
Beardsworth, R. (1998). Thinking technicity. Cultural Values, 2(1), 70–86.
Berry, D. M. (Ed.). (2012). Understanding digital humanities. Houndmills, Basingstoke, Hampshire; New York: Palgrave Macmillan.
Bitbol, M. (2010). De l’intérieur du monde: pour une philosophie et une science des relations. Paris: Flammarion.
Boghossian, P. (2007). Fear of Knowledge: Against Relativism and Constructivism. Oxford; New York: Clarendon Press.
Borges, J. L. (2010). Everything and Nothing. (D. A. Yates, J. E. Irby, J. M. Fein, & E. Weinberger, Trans.) (Reprint edition). New York: New Directions.
Bostrom, N. (2003). Are We Living in a Computer Simulation? The Philosophical Quarterly, 53(211), 243–255. https://doi.org/10.1111/1467-9213.00309
Bybee, C. (1999). Can Democracy Survive in the Post-Factual Age?: A Return to the Lippmann-Dewey Debate about the Politics of News. Journalism & Communication Monographs, 1(1), 28–66. https://doi.org/10.1177/152263799900100103
Catren, G. (2018). A Plea for Narcissus: On the Transcendental Reflexion /\ Refractive Mediation Tandem. In F. Gironi (Ed.), The Legacy of Kant in Sellars and Meillassoux: Analytic and Continental Kantianism. Routledge.
Cohen, P. (2010). Digital keys for unlocking the humanities’ riches. New York Times, 16.
Crary, J. (2014). 24/7: Late Capitalism and the Ends of Sleep. London: Verso.
Derrida, J. (1996). Archive fever: a Freudian impression. Chicago: University of Chicago Press.
Felt, U. (2015). The temporal choreographies of participation: Thinking innovation and society from a time-sensitive perspective. https://doi.org/10.4324/9780203797693
Fisher, M. (2009). Capitalist Realism: Is There No Alternative? Winchester: O Books.
Friend, T. (2016, October 3). Sam Altman’s Manifest Destiny. The New Yorker. Retrieved from https://www.newyorker.com/magazine/2016/10/10/sam-altmans-manifest-destiny
Fukuyama, F. (2006). The End of History and the Last Man (Reissue edition). New York: Free Press.
Hacking, I. (1983). Representing and Intervening: Introductory Topics in the Philosophy of Natural Science. Cambridge; New York: Cambridge University Press.
Heidegger, M. (2002). Being and Time. New York: HarperSanFrancisco.
Jameson, F. (2003). The End of Temporality. Critical Inquiry, 29(4), 695–718. https://doi.org/10.1086/377726
Jameson, F. (2003). Future City. New Left Review, (21), 65–79.
Kahneman, D., & Tversky, A. (1979). Prospect Theory: An Analysis of Decision under Risk. Econometrica, 47(2), 263–291. https://doi.org/10.2307/1914185
Kaiser, M. (2015). Reactions to the Future: the Chronopolitics of Prevention and Preemption. NanoEthics, 9(2), 165–177. https://doi.org/10.1007/s11569-015-0231-4
Kittler, F. (1999). Gramophone, Film, Typewriter. (G. Winthrop-Young & M. Wutz, Trans.). Stanford, Calif: Stanford University Press.
Laruelle, F. (2011). The Concept of Non-Photography (1st edition). Falmouth, UK; New York: Urbanomic/Sequence Press.
Latour, B. (2004). Why Has Critique Run out of Steam? From Matters of Fact to Matters of Concern. Critical Inquiry, 30(2), 225–248. https://doi.org/10.1086/421123
Leibniz, G. W. (1898). The monadology and other philosophical writings. (R. Latta, Trans.). Clarendon Press. Retrieved from https://archive.org/details/monadologyandot01lattgoog
Li, Z., Baughman, A. W., Lei, V. J., Lai, K. H., Navathe, A. S., Chang, F., Sordo, M., et al. (2015). Identifying Patients with Depression Using Free-Text Clinical Documents. Studies in Health Technology and Informatics, 629–633. https://doi.org/10.3233/978-1-61499-564-7-629
Link, D. (2016). Archaeology of Algorithmic Artefacts. Minneapolis, MN: Univocal Publishing.
McLuhan, M. (1994). Understanding Media: The Extensions of Man (REV edition). Cambridge, Mass: The MIT Press.
Meillassoux, Q. (2006). Après la finitude: Essai sur la nécessité de la contingence. Paris: Seuil.
Tversky, A., & Kahneman, D. (1974). Judgment under Uncertainty: Heuristics and Biases. Science, 185(4157), 1124–1131. https://doi.org/10.2307/1738360
Nagel, T. (1986). The View From Nowhere. Oxford University Press.
Žižek, S. (2011). Living in the End Times (Rev Upd edition). London; New York: Verso.